\section{Introduction}
Many engineering optimization problems can be formulated as quadratic programming (QP) problems, for example, model predictive control (MPC), which has been widely used in many industrial processes \cite{Qin_2003}. However, solving a QP problem is often computationally demanding. In practice, many industrial processes require a fast solution, for example, control systems with high sampling rates \cite{Juan_2014}. Therefore, it is important to develop accelerated algorithms for solving QP problems.
To reduce the computational load of the controller, QP problems are solved using online optimization techniques. Popular QP solvers use an interior-point method \cite{Domahidi2012}, an active-set method \cite{Ferreau_2014} or a dual Newton method \cite{Frasch_2015}. However, these solvers require solving a linearized system of the Karush-Kuhn-Tucker (KKT) conditions at every iteration. For this reason, considerable attention has been given to first-order methods for online optimization \cite{Parys_2019,Giselsson_2013,Alberto_2021}. In recent years, proximal gradient-based accelerated algorithms have been widely used to solve MPC problems \cite{Giselsson2014}. Specifically, iterative algorithms are designed based on the proximal gradient method (PGM) to deal with the constraint on the Lagrange multiplier more easily \cite{Giselsson_2013,Parys_2019,Giselsson2014,Giselsson2015}. Moreover, the fast iterative shrinkage-thresholding algorithm (FISTA) \cite{Beck_2009,Nesterov2013} improves the convergence rate from $\small O(1/p)$ to $\small O(1/p^{2})$. The key idea of this improvement is that the positive real root of a specific quadratic polynomial equation is selected as the iterative parameter. Inspired by the work in \cite{Beck_2009} and \cite{Giselsson_2013}, an accelerated PGM algorithm for the fast solution of QP problems is proposed in this letter. We show that the FISTA in \cite{Beck_2009} is a special case of the proposed method and that the convergence rate can be improved from $\small O(1/p^{2})$ in \cite{Beck_2009} to $\small O(1/p^{\alpha})$ by selecting the positive real roots of a group of higher-order polynomial equations as the iterative parameters. To assess the performance of the proposed algorithm, a batch of randomly generated MPC problems is solved, and the resulting execution times are compared with those of state-of-the-art optimization software, in particular MOSEK \cite{Erling2003} and ECOS \cite{Domahidi2013ecos}.
The letter is organized as follows. In Section \ref{section2}, the QP problem is formulated in its dual form and the PGM is introduced. The accelerated PGM for the dual problem is proposed in Section \ref{section3}. In Section \ref{section4}, numerical experiments based on MPC are provided. Section \ref{section5} concludes the letter.
\section{Problem Formulation}
\label{section2}
\subsection{Primal and Dual Problems}
Consider the standard quadratic programming problem
\begin{small}
\begin{equation}\label{standard_QP}
\begin{split}
&\min\limits_{\xi}
\frac{1}{2}
\xi^{T}\mathcal{H}\xi
+
\mathcal{G}^{T}\xi
\\
&\ s.t.\ \mathcal{A}\xi\leq\mathcal{B}.
\end{split}
\end{equation}
\end{small}Assume that there exists $\xi$ such that $ \mathcal{A}\xi<\mathcal{B}$; then Slater's condition holds and there is no duality gap \cite{Boyd}. The dual problem of (\ref{standard_QP}) is formulated as
\begin{small}
\begin{equation}\label{dual_P}
\sup\limits_{\mu\geq0}
\inf\limits_{\xi}
\left[
\frac{1}{2}\xi^{T}\mathcal{H}\xi
+
\mathcal{G}^{T}\xi
+
\mu^{T}(\mathcal{A}\xi-\mathcal{B})
\right].
\end{equation}
\end{small}Taking the partial derivative with respect to $\small \xi$ and applying the first-order optimality condition, we have
\begin{small}
\begin{equation*}
\begin{split}
&\frac{\partial}{\partial \xi}
\left[
\frac{1}{2}\xi^{T}\mathcal{H}\xi+
(\mathcal{A}^{T}\mu+\mathcal{G})^{T}\xi-
\mu^{T}\mathcal{B}
\right]=0 \\
&\Rightarrow \xi=\mathcal{H}^{-1}(-\mathcal{A}^{T}\mu-\mathcal{G}).
\end{split}
\end{equation*}
\end{small}In this way, (\ref{dual_P}) is transformed into
\begin{small}
\begin{equation}\label{dual_problem}
\sup\limits_{\mu\geq0}
\left[
-\frac{1}{2}
(\mathcal{A}^{T}\mu+\mathcal{G})^{T}
\mathcal{H}^{-1}
(\mathcal{A}^{T}\mu+\mathcal{G})
-
\mathcal{B}^{T}\mu
\right].
\end{equation}
\end{small}Let $\small f(\mu)=\frac{1}{2}(\mathcal{A}^{T}\mu+\mathcal{G})^{T}
\mathcal{H}^{-1}(\mathcal{A}^{T}\mu+\mathcal{G})+\mathcal{B}^{T}\mu$ be the new objective; then (\ref{dual_problem}) is equivalent to minimizing $\small f(\mu)$ subject to $\small \mu\geq0$.
\subsection{Proximal Gradient Method}
In this subsection, the PGM is used to solve the dual problem. Specifically, the following nonsmooth function $\small g$ is introduced to encode the constraint $\small \mu\geq0$ of $\small f(\mu)$
\begin{small}
\begin{equation}
g(\mu)
=
\begin{cases}
0, & \mbox{if } \mu\geq0 \\
+\infty, & \mbox{otherwise}.
\end{cases}
\end{equation}
\end{small}In this way, the constrained optimization problem $\small \min\limits_{\mu\geq0}f(\mu)$ is equivalent to the unconstrained one, i.e., $\small \min\limits_{\mu}f(\mu)+g(\mu)$. Based on the work in \cite{Beck_2009}, let $\small \zeta^{p}=\mu^{p}+\frac{\tau_{p}-1}{\tau_{p+1}}(\mu^{p}-\mu^{p-1})$, where $\small \tau_{p}>0$ for $\small p=1,2,\cdots$ and $\small p$ is the iteration number. Then the above problem can be solved by
\begin{small}
\begin{equation}\label{proximal_step}
\mu^{p+1}
=
\mathrm{P}_{\mu}
\left(\zeta^{p}-\frac{1}{L}\nabla f(\zeta^{p})\right),
\end{equation}
\end{small}where $\small L$ is the Lipschitz constant of $\small \nabla f$ and $\small \mathrm{P}_{\mu}$ is the Euclidean projection onto $\small \{\mu|\mu\geq0\}$. According to the result in \cite{Giselsson_2013},
$\small \nabla f(\mu)=\mathcal{A}\mathcal{H}^{-1}(\mathcal{A}^{T}\mu+\mathcal{G})+\mathcal{B}$; hence we have
\begin{small}
\begin{equation}
\nabla f(\zeta^{p})
=
-
\mathcal{A}
\left[
\xi^{p}
+
\frac{\tau_{p}-1}{\tau_{p+1}}(\xi^{p}-\xi^{p-1})
\right]+\mathcal{B}.
\end{equation}
\end{small}Therefore, (\ref{proximal_step}) can be written as
\begin{small}
\begin{equation}
\mu_{l}^{p+1}
=
\max
\left\{
0,\
\zeta_{l}^{p}+\frac{1}{L}
\left[
\mathcal{A}_{l}
\left(
\xi^{p}
+
\frac{\tau_{p}-1}{\tau_{p+1}}(\xi^{p}-\xi^{p-1})
\right)
-\mathcal{B}_{l}
\right]
\right\},
\end{equation}
\end{small}where $\mu_{l}$ denotes the $l$-th component of the vector $\mu$, and $\mathcal{A}_{l}$ and $\mathcal{B}_{l}$ are the $\small l$-th rows of $\mathcal{A}$ and $\mathcal{B}$. The classical PGM for solving $\min\limits_{\mu}f(\mu)+g(\mu)$ is summarized as Algorithm \ref{algorithm1}, in which the iterative parameters $\tau_{p}$ and $\tau_{p+1}$ will be discussed in Section \ref{section3}.
\begin{algorithm}
\caption{Proximal Gradient Method.}
\label{algorithm1}
\begin{algorithmic}[1]
\REQUIRE ~~\\
Initial parameters $\small \zeta_{l}^{1}=\mu_{l}^{0}$, $\small \tau_{1}=1$ and $\small \bar{\xi}^{1}=\xi^{0}$.
\ENSURE ~~The optimal decision variable $\small \mu^{*}$. \\
\WHILE {$\small p\geq1$}
\STATE $\small \mu_{l}^{p}=\max\{0,\zeta_{l}^{p}+\frac{1}{L}(\mathcal{A}_{l}\bar{\xi}^{p}-\mathcal{B}_{l})\},\ \forall l$.\\
\STATE Look up the table for $\tau_{p}$ and $\tau_{p+1}$.\\
\STATE $\small \xi^{p}=\mathcal{H}^{-1}(-\mathcal{A}^{T}\mu^{p}-\mathcal{G})$.\\
\STATE $\small \zeta^{p+1}=\mu^{p}+\frac{\tau_{p}-1}{\tau_{p+1}}(\mu^{p}-\mu^{p-1})$.\\
\STATE $\small \bar{\xi}^{p+1}=\xi^{p}+\frac{\tau_{p}-1}{\tau_{p+1}}(\xi^{p}-\xi^{p-1})$.\\
\STATE $\small p=p+1.$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
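For concreteness, a minimal MATLAB sketch of Algorithm \ref{algorithm1} is given below. It assumes the problem data $\mathcal{H}$, $\mathcal{G}$, $\mathcal{A}$, $\mathcal{B}$ (as \texttt{H}, \texttt{G}, \texttt{A}, \texttt{B}), the Lipschitz constant \texttt{L} and a precomputed root table \texttt{T} (see Algorithm \ref{algorithm2}) are available; the variable names and the iteration cap are illustrative.
\begin{verbatim}
% Sketch of Algorithm 1 (accelerated dual PGM); H, G, A, B,
% the Lipschitz constant L and the root table T are assumed given.
mu = zeros(size(A,1), 1);                     % mu^0
xi = -H \ (A'*mu + G);                        % xi^0
zeta = mu; xibar = xi;                        % zeta^1, xibar^1
mu_prev = mu; xi_prev = xi;
for p = 1:1000
    mu_new = max(0, zeta + (A*xibar - B)/L);  % projected gradient step
    xi_new = -H \ (A'*mu_new + G);            % primal variable update
    w = (T(p) - 1) / T(p+1);                  % (tau_p - 1)/tau_{p+1}
    zeta  = mu_new + w*(mu_new - mu_prev);    % dual extrapolation
    xibar = xi_new + w*(xi_new - xi_prev);    % primal extrapolation
    if norm(xi_new - xi_prev) <= 1e-3         % stop criterion
        break
    end
    mu_prev = mu_new; xi_prev = xi_new;
end
\end{verbatim}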
\section{Accelerated MPC Iteration}
\label{section3}
\subsection{Accelerated Scheme and Convergence Analysis}
The traditional iterative parameters $\tau_{p}$ and $\tau_{p+1}$ are selected based on the positive real root of the following second-order polynomial equation
\begin{small}
\begin{equation}\label{2_polynomial}
\tau_{p+1}^{2}-\tau_{p+1}-\tau_{p}^{2}=0
\end{equation}
\end{small}with $\tau_{1}=1$. With the aid of (\ref{2_polynomial}), the convergence rate $O(1/p^{2})$ can be achieved \cite{Beck_2009}. In this work, we show that the convergence rate can be enhanced to $O(1/p^{\alpha})$ merely by selecting the iterative parameters appropriately. Specifically, for a given order $\alpha\in\{2,3,\cdots\}$, the iterative parameters are determined by the positive real root of the $\alpha$th-order equation
\begin{small}
\begin{equation}\label{alpha_polynomial}
\tau_{p+1}^{\alpha}-\tau_{p+1}^{\alpha-1}-\tau_{p}^{\alpha}=0
\end{equation}
\end{small}with the initial value $\tau_{1}=1$, instead of (\ref{2_polynomial}). This is the main difference between our method and the method in \cite{Beck_2009}.
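For example, for $\alpha=2$, (\ref{alpha_polynomial}) reduces to $\tau_{p+1}^{2}-\tau_{p+1}-\tau_{p}^{2}=0$, whose positive real root
\begin{small}
\begin{equation*}
\tau_{p+1}=\frac{1+\sqrt{1+4\tau_{p}^{2}}}{2}
\end{equation*}
\end{small}is exactly the classical FISTA update in \cite{Beck_2009}.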
\begin{lemma}\label{lemma1}
The $\alpha$th-order polynomial equation (\ref{alpha_polynomial}) has the following properties:
\begin{enumerate}
\item For $\alpha\in\{2,3,\cdots\}$, the polynomial equation (\ref{alpha_polynomial}) has a unique positive real root.
\item For $p\in\{1,2,\cdots\}$, the unique positive real root is bounded below as
\begin{small}
\begin{equation*}
\tau_{p}\geq\frac{p+\alpha-1}{\alpha}.
\end{equation*}
\end{small}
\end{enumerate}
\end{lemma}
\begin{proof}
For the first argument, we first show that $\tau_{p}>0$ for all integers $p\geq1$ with the aid of mathematical induction. Specifically, the base case $\tau_{1}>0$ holds by the given initial value $\tau_{1}=1$. Assume the induction hypothesis that $\tau_{p}>0$ holds. Then we have
\begin{small}
\begin{equation}\label{temp_1}
\tau_{p+1}^{\alpha}-\tau_{p+1}^{\alpha-1}-\tau_{p}^{\alpha}=0
\Rightarrow
\tau_{p+1}^{\alpha-1}(\tau_{p+1}-1)>0.
\end{equation}
\end{small}Since (\ref{temp_1}) holds for all $\alpha\in\{2,3,\cdots\}$, any positive real root $\tau_{p+1}$ must be greater than one; in particular, $\tau_{p+1}>0$. In this way, we conclude that $\tau_{p}>0$ for all $p\in\{1,2,\cdots\}$. For the uniqueness of the positive real root, let
\begin{small}
\begin{equation}
f_{1}(\tau_{p+1})\triangleq\tau_{p+1}^{\alpha}-\tau_{p+1}^{\alpha-1}-\tau_{p}^{\alpha},
\end{equation}
\end{small}whose derivative is
\begin{small}
\begin{equation}
f_{1}^{'}(\tau_{p+1})=\tau_{p+1}^{\alpha-2}(\alpha\tau_{p+1}-\alpha+1),
\end{equation}
\end{small}which has zero points $\tau_{p+1}=0$ and $\tau_{p+1}=\frac{\alpha-1}{\alpha}$. Therefore, on $(0,+\infty)$, $f_{1}(\tau_{p+1})$ monotonically decreases from $f_{1}(0)$ to $f_{1}(\frac{\alpha-1}{\alpha})$ and monotonically increases from $f_{1}(\frac{\alpha-1}{\alpha})$ to $f_{1}(+\infty)$. Since $f_{1}(0)=-\tau_{p}^{\alpha}<0$ and $\lim\limits_{\tau_{p+1}\rightarrow+\infty}f_{1}(\tau_{p+1})=+\infty$, the function $f_{1}(\tau_{p+1})$ has exactly one zero point on $(0,+\infty)$, which implies that the equation (\ref{alpha_polynomial}) has a unique positive real root.
For the second argument, we again use mathematical induction. The base case $\tau_{1}\geq1$ holds by the given initial value $\tau_{1}=1$. Assume the induction hypothesis that $\tau_{p}\geq\frac{p+\alpha-1}{\alpha}$ holds. To show $\tau_{p+1}\geq\frac{p+\alpha}{\alpha}$, we can equivalently prove that the inequality $f_{1}(\frac{p+\alpha}{\alpha})<0$ holds. Moreover, by the induction hypothesis $\tau_{p}\geq\frac{p+\alpha-1}{\alpha}>0$, it suffices to prove that the inequality
\begin{small}
\begin{equation}
\left(\frac{p+\alpha}{\alpha}\right)^{\alpha}
-
\left(\frac{p+\alpha}{\alpha}\right)^{\alpha-1}
-
\left(\frac{p+\alpha-1}{\alpha}\right)^{\alpha}
<0
\end{equation}
\end{small}holds and it is equivalent to show
\begin{small}
\begin{equation}
f_{2}(p)
\triangleq
(\alpha-1)\ln\left(\frac{p+\alpha}{\alpha}\right)
+
\ln\left(\frac{p}{\alpha}\right)
-
\alpha\ln\left(\frac{p+\alpha-1}{\alpha}\right)
<0
\end{equation}
\end{small}holds for all $p\in\{1,2,\cdots\}$. The derivative of $f_{2}(p)$ is
\begin{small}
\begin{equation}
f_{2}^{'}(p)
=
\frac{\alpha(\alpha-1)}{p(p+\alpha)(p+\alpha-1)},
\end{equation}
\end{small}which implies that the function $f_{2}(p)$ monotonically increases from $f_{2}(1)$ to $f_{2}(+\infty)$. Next, we show that $f_{2}(1)<0$. Notice that
\begin{small}
\begin{equation}\label{f_2_1}
f_{2}(1)
=
(\alpha-1)\ln\left(\frac{1+\alpha}{\alpha}\right)
+
\ln\left(\frac{1}{\alpha}\right),
\end{equation}
\end{small}which is a function of $\alpha$; hence, rewrite (\ref{f_2_1}) as
\begin{small}
\begin{equation}
f_{3}(\alpha)
\triangleq
(\alpha-1)\ln(\alpha+1)
-
\alpha\ln\alpha.
\end{equation}
\end{small}The first and second derivatives of $f_{3}(\alpha)$ are
\begin{small}
\begin{subequations}
\begin{align}
f_{3}^{'}(\alpha)
&=
\ln\left(\frac{\alpha+1}{\alpha}\right)
-
\frac{2}{\alpha+1},
\\
f_{3}^{''}(\alpha)
&=
\frac{\alpha-1}{\alpha(\alpha+1)^{2}},
\end{align}
\end{subequations}
\end{small}Since $\ln(1+\frac{1}{\alpha})\leq\frac{1}{\alpha}<\frac{2}{\alpha+1}$ for $\alpha>1$, we have $f_{3}^{'}(\alpha)<0$, i.e., $f_{3}(\alpha)$ monotonically decreases for $\small \alpha\in\{2,3,\cdots\}$. Hence $f_{3}(\alpha)\leq f_{3}(2)=\ln3-2\ln2<0$, and we conclude that $f_{2}(1)<0$. Finally, since $f_{2}(p)$ monotonically increases with $\lim\limits_{p\rightarrow+\infty}f_{2}(p)=0$, we have $f_{2}(p)<0$ for all $p\in\{1,2,\cdots\}$; hence the unique positive real root $\tau_{p}$ has the lower bound $\frac{p+\alpha-1}{\alpha}$.
\end{proof}
To save computing time, the $\alpha$th-order equations are solved offline and the roots are stored in a table. In this work, the look-up table is obtained by recursively solving the polynomial equation (\ref{alpha_polynomial}) in the MATLAB environment, as summarized in Algorithm \ref{algorithm2}. Notice that the MATLAB function $\text{roots}(\cdot)$ is used for the polynomial root seeking. The following theorem shows that the convergence rate can be improved to $O(1/p^{\alpha})$ by using (\ref{alpha_polynomial}).
\begin{theorem}\label{Theorem_2}
For $\alpha\in\{2,3,\cdots\}$, let $\xi^{*}$ and $\mu^{*}$ denote the optimizers of the problems (\ref{standard_QP}) and (\ref{dual_problem}), respectively. The convergence rate of the primal variable in Algorithm \ref{algorithm1} satisfies
\begin{small}
\begin{equation}\label{convergence_rate_general}
\|\xi^{p}-\xi^{*}\|_{2}^{2}
\leq
\frac{\alpha^{\alpha}L\|\mu^{0}-\mu^{*}\|_{2}^{2}}{\underline{\sigma}(\mathcal{H})(p+\alpha-1)^{\alpha}},\ p=1,2,\cdots
\end{equation}
\end{small}where $\small \underline{\sigma}(\cdot)$ denotes the minimum eigenvalue.
\end{theorem}
\begin{proof}
Let $\small \upsilon^{p}=f(\mu^{p})-f(\mu^{*})$. According to Lemma 2.3 in \cite{Beck_2009}, we have
\begin{small}
\begin{subequations}
\begin{align}
\begin{split}
\frac{2}{L}(\upsilon^{p}-\upsilon^{p+1})
&\geq\|\mu^{p+1}-\zeta^{p+1}\|_{2}^{2}\\
&+2\langle\mu^{p+1}-\zeta^{p+1},\zeta^{p+1}-\mu^{p}\rangle,
\end{split}\label{lemma_4_1_1} \\
\begin{split}
-\frac{2}{L}\upsilon^{p+1}
&\geq\|\mu^{p+1}-\zeta^{p+1}\|_{2}^{2}\\
&+2\langle\mu^{p+1}-\zeta^{p+1},\zeta^{p+1}-\mu^{*}\rangle.
\end{split}\label{lemma_4_1_2}
\end{align}
\end{subequations}
\end{small}Following the line of Lemma 4.1 in \cite{Beck_2009}, multiplying both sides of (\ref{lemma_4_1_1}) by $\small (\tau_{p+1}-1)$ and adding the result to (\ref{lemma_4_1_2}) leads to
\begin{small}
\begin{equation}\label{Most_Important_Ieq_1}
\begin{split}
\frac{2}{L}
\left[
(\tau_{p+1}-1)\upsilon^{p}-\tau_{p+1}\upsilon^{p+1}
\right]\geq
\tau_{p+1}\|\mu^{p+1}-\zeta^{p+1}\|_{2}^{2}\\
+2\langle\mu^{p+1}-\zeta^{p+1},\tau_{p+1}\zeta^{p+1}-(\tau_{p+1}-1)\mu^{p}-\mu^{*}\rangle.
\end{split}
\end{equation}
\end{small}Based on the second argument of Lemma \ref{lemma1}, we obtain $\small \tau_{p+1}\geq1,\ \forall p\geq1$. Then, multiplying the left- and right-hand sides of (\ref{Most_Important_Ieq_1}) by $\small \tau_{p+1}^{\alpha-1}$ and $\small \tau_{p+1}$, respectively, we have
\begin{small}
\begin{equation}\label{Most_Important_Ieq_2}
\begin{split}
\frac{2}{L}
\left[
\tau_{p+1}^{\alpha-1}(\tau_{p+1}-1)\upsilon^{p}-\tau_{p+1}^{\alpha}\upsilon^{p+1}
\right]\geq
\|\tau_{p+1}(\mu^{p+1}-\zeta^{p+1})\|_{2}^{2}\\
+2\tau_{p+1}\langle\mu^{p+1}-\zeta^{p+1},\tau_{p+1}\zeta^{p+1}-(\tau_{p+1}-1)\mu^{p}-\mu^{*}\rangle.
\end{split}
\end{equation}
\end{small}Letting $\small y_{1}=\tau_{p+1}\zeta^{p+1}$, $\small y_{2}=\tau_{p+1}\mu^{p+1}$ and $\small y_{3}=(\tau_{p+1}-1)\mu^{p}+\mu^{*}$, the right-hand side of (\ref{Most_Important_Ieq_2}) can be written as
\begin{small}
\begin{equation}
\|y_{2}-y_{1}\|_{2}^{2}+2\langle y_{2}-y_{1},y_{1}-y_{3}\rangle
=
\|y_{2}-y_{3}\|_{2}^{2}
-\|y_{1}-y_{3}\|_{2}^{2}.
\end{equation}
\end{small}Since $\small \tau_{p}^{\alpha}=\tau_{p+1}^{\alpha}-\tau_{p+1}^{\alpha-1}$, the inequality (\ref{Most_Important_Ieq_2}) is equivalent to
\begin{small}
\begin{equation}\label{Most_Important_Ieq_3}
\begin{split}
\frac{2}{L}
\left[
\tau_{p}^{\alpha}\upsilon^{p}-\tau_{p+1}^{\alpha}\upsilon^{p+1}
\right]&\geq
\|y_{2}-y_{3}\|_{2}^{2}
-\|y_{1}-y_{3}\|_{2}^{2} \\
&=
\|\tau_{p+1}\mu^{p+1}-(\tau_{p+1}-1)\mu^{p}-\mu^{*}\|_{2}^{2}\\
&-
\|\tau_{p+1}\zeta^{p+1}-(\tau_{p+1}-1)\mu^{p}-\mu^{*}\|_{2}^{2}.
\end{split}
\end{equation}
\end{small}Let $\small \kappa_{p}=\tau_{p}\mu^{p}-(\tau_{p}-1)\mu^{p-1}-\mu^{*}$. Combined with $\small \tau_{p+1}\zeta^{p+1}=\tau_{p+1}\mu^{p}+(\tau_{p}-1)(\mu^{p}-\mu^{p-1})$, the right-hand side of (\ref{Most_Important_Ieq_3}) is equal to $\small \|\kappa_{p+1}\|_{2}^{2}-\|\kappa_{p}\|_{2}^{2}$. Therefore, similar to Lemma 4.1 in \cite{Beck_2009}, we have the following conclusion
\begin{small}
\begin{equation}
\frac{2}{L}\tau_{p}^{\alpha}\upsilon^{p}
-\frac{2}{L}\tau_{p+1}^{\alpha}\upsilon^{p+1}\geq
\|\kappa_{p+1}\|_{2}^{2}-\|\kappa_{p}\|_{2}^{2}.
\end{equation}
\end{small}Following Lemma 4.2 in \cite{Beck_2009}, let $\small \bar{y}_{1}^{p}=\frac{2}{L}\tau_{p}^{\alpha}\upsilon^{p}$, $\small \bar{y}_{2}^{p}=\|\kappa_{p}\|_{2}^{2}$ and $\small \bar{y}_{3}=\|\mu^{0}-\mu^{*}\|_{2}^{2}$; then $\small \bar{y}_{1}^{p}+\bar{y}_{2}^{p}\geq\bar{y}_{1}^{p+1}+\bar{y}_{2}^{p+1}$. Assuming that $\small \bar{y}_{1}^{1}+\bar{y}_{2}^{1}\leq\bar{y}_{3}$ holds, we have $\small \bar{y}_{1}^{p}+\bar{y}_{2}^{p}\leq\bar{y}_{3}$, which leads to $\small \bar{y}_{1}^{p}\leq\bar{y}_{3}$. Moreover, according to the second argument of Lemma \ref{lemma1}, we have
\begin{small}
\begin{equation*}
\begin{split}
\frac{2}{L}\tau_{p}^{\alpha}\upsilon^{p}\leq\|\mu^{0}-\mu^{*}\|_{2}^{2}
\Rightarrow
f(\mu^{p})-f(\mu^{*})
\leq
\frac{\alpha^{\alpha}L\|\mu^{0}-\mu^{*}\|_{2}^{2}}{2(p+\alpha-1)^{\alpha}}.
\end{split}
\end{equation*}
\end{small}The assumption $\small \bar{y}_{1}^{1}+\bar{y}_{2}^{1}\leq\bar{y}_{3}$ can be verified as in Theorem 4.4 of \cite{Beck_2009}. Then, following the procedures in Theorem 3 of \cite{Giselsson_2013}, we conclude that
\begin{small}
\begin{equation}
\|\xi^{p}-\xi^{*}\|_{2}^{2}\leq
\frac{2}{\underline{\sigma}(\mathcal{H})}(f(\mu^{p})-f(\mu^{*}))
\leq
\frac{\alpha^{\alpha}L\|\mu^{0}-\mu^{*}\|_{2}^{2}}{\underline{\sigma}(\mathcal{H})(p+\alpha-1)^{\alpha}}.
\end{equation}
\end{small}In this way, the convergence rate (\ref{convergence_rate_general}) is obtained.
\end{proof}
\begin{algorithm}[t]
\caption{Look-up table generation for the $\alpha$th-order polynomial equation (\ref{alpha_polynomial}).}
\label{algorithm2}
\begin{algorithmic}[1]
\REQUIRE ~~\\
The order $\alpha\geq2$, the initial root $\tau_{1}=1$, the initial iteration index $p=1$ and the table length $\mathcal{P}$.
\ENSURE ~~Look-up table $\mathcal{T}_{\alpha}$. \\
\STATE Look-up table initialization: $\mathcal{T}_{\alpha}=[\tau_{1}]$.\\
\WHILE {$p\leq\mathcal{P}$}
\STATE Polynomial coefficients: $\boldsymbol{p_c}=[1,-1,\RIfM@\expandafter\text@\else\expandafter\mbox\fi{zeros}(1, \alpha-2),-\mathcal{T}_{\alpha}(\RIfM@\expandafter\text@\else\expandafter\mbox\fi{end})^{\alpha}]$.\\
\STATE Polynomial roots: $\boldsymbol{p_r}=\RIfM@\expandafter\text@\else\expandafter\mbox\fi{roots}(\boldsymbol{p_c})$.\\
\STATE Finding the positive real root $\tau_{p+1}$ in the vector $\boldsymbol{p_r}$.\\
\STATE Updating the look-up table: $\mathcal{T}_{\alpha} =[\mathcal{T}_{\alpha},\tau_{p+1}]$.\\
\STATE $\small p=p+1.$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
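A direct MATLAB realization of Algorithm \ref{algorithm2} may look as follows; the tolerance used to identify the real root is an illustrative choice.
\begin{verbatim}
% Offline look-up table of the positive real roots of the
% alpha-th order polynomial equation, following Algorithm 2.
alpha = 20; P = 500;                 % order and table length
T = zeros(1, P); T(1) = 1;           % tau_1 = 1
for p = 1:P-1
    % coefficients of tau^alpha - tau^(alpha-1) - tau_p^alpha
    pc = [1, -1, zeros(1, alpha-2), -T(p)^alpha];
    pr = roots(pc);                  % all (complex) roots
    % keep the unique positive real root (Lemma 1)
    idx = abs(imag(pr)) < 1e-8 & real(pr) > 0;
    T(p+1) = max(real(pr(idx)));
end
\end{verbatim}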
\begin{figure}[hbtp]
\centering
\includegraphics[scale=0.53]{upper_bound_rate-eps-converted-to.pdf}
\caption{Right-hand side of (\ref{convergence_rate_general}) with the variation of $\small \alpha$.}
\label{fig_upper_bound_rate}
\end{figure}
Theorem \ref{Theorem_2} shows that the FISTA in \cite{Beck_2009} is a special case of the proposed method and that the iteration performance is determined by (\ref{alpha_polynomial}). Specifically, a suitable selection of the iterative parameter $\small \tau_{p}$ can improve the convergence rate, i.e., from $\small O(1/p^{2})$ in \cite{Beck_2009} to $\small O(1/p^{\alpha})$. To show that the upper bound of the convergence rate can be reduced, denote the right-hand side of (\ref{convergence_rate_general}) as
\begin{small}
\begin{equation}\label{upper_bound}
U_{p}
=
\frac{\alpha^{\alpha}}{(p+\alpha-1)^{\alpha}}C,\ p=1,2,\cdots
\end{equation}
\end{small}where $C$ is the constant part of the right-hand side of (\ref{convergence_rate_general}). The variation of $U_{p}$ with $p\in\{1,\cdots,10\}$ is shown in Fig. \ref{fig_upper_bound_rate}, in which lines of different colors denote different $\alpha\in\{2,\cdots,20\}$. Fig. \ref{fig_upper_bound_rate} implies that $U_{p}$ decreases as $\alpha$ increases.
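The trend in Fig. \ref{fig_upper_bound_rate} can be reproduced, up to the constant $C$, by evaluating (\ref{upper_bound}) directly; the following MATLAB snippet is a minimal sketch with $C=1$.
\begin{verbatim}
% Bound U_p = (alpha/(p+alpha-1))^alpha (C = 1 for illustration):
% the bound tightens as alpha grows.
p = 1:10;
for alpha = 2:2:20
    U = (alpha ./ (p + alpha - 1)).^alpha;
    semilogy(p, U); hold on;
end
xlabel('p'); ylabel('U_p');
\end{verbatim}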
\subsection{Cholesky Decomposition of $\mathcal{H}$}
According to the QP in Section \ref{section2}, the quadratic objective term $\mathcal{H}$ in (\ref{standard_QP}) may be a dense matrix, in which case solving (\ref{standard_QP}) directly by Algorithm \ref{algorithm1} could consume more computation time than with a banded matrix. To cope with this difficulty, a matrix decomposition technique can be used. Since $\mathcal{H}$ is symmetric and positive definite, there exists a Cholesky decomposition $\mathcal{H}=\mathcal{Z}^{T}\mathcal{Z}$, based on which the quadratic programming problem (\ref{standard_QP}) can be formulated as
\begin{small}
\begin{equation}\label{compact_original_problem3}
\begin{split}
&\min\limits_{\psi}
\frac{1}{2}
\psi^{T}I\psi
+
\mathcal{G}^{T}\mathcal{Z}^{-1}\psi
\\
&\ s.t.\ \mathcal{A}\mathcal{Z}^{-1}\psi\leq\mathcal{B},
\end{split}
\end{equation}
\end{small}where $\psi=\mathcal{Z}\xi$. Since $\mathcal{Z}$ is an upper triangular matrix with real and positive diagonal components, (\ref{compact_original_problem3}) can be solved by Algorithm \ref{algorithm1} and the control input can be calculated by $\xi=\mathcal{Z}^{-1}\psi$. In this way, the quadratic objective term is transformed into the identity matrix, which reduces the computation time in step $4$ of Algorithm \ref{algorithm1}.
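A brief MATLAB sketch of this transformation, assuming $\mathcal{H}$ is symmetric positive definite, is given below.
\begin{verbatim}
% Cholesky-based transformation: H = Z'*Z with Z upper triangular,
% so the transformed QP has an identity quadratic term.
Z  = chol(H);        % H = Z'*Z
Gz = Z' \ G;         % linear term: (G'*inv(Z))' = inv(Z')*G
Az = A / Z;          % constraint matrix A*inv(Z)
% solve min 0.5*psi'*psi + Gz'*psi s.t. Az*psi <= B by Algorithm 1,
% then recover the original decision variable:
% xi = Z \ psi;
\end{verbatim}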
\begin{table*}[t]
\centering
\caption{Iteration performance with four methods.}
\begin{tabular}{lcccccccc}
\toprule
& \multicolumn{2}{c}{{\ul \ \ \ \ \ \ \ \ \ \ \ \ n=m=2\ \ \ \ \ \ \ \ \ \ \ }}
& \multicolumn{2}{c}{{\ul \ \ \ \ \ \ \ \ \ \ \ \ \ n=m=4\ \ \ \ \ \ \ \ \ \ \ }}
& \multicolumn{2}{c}{{\ul \ \ \ \ \ \ \ \ \ \ \ \ \ n=m=6\ \ \ \ \ \ \ \ \ \ \ }}
& \multicolumn{2}{c}{{\ul \ \ \ \ \ \ \ \ \ \ \ \ \ n=m=8\ \ \ \ \ \ \ \ \ \ }}
\\
& \multicolumn{2}{c}{{\ul \ \ \ \ \ \ vars/cons: 10/40\ \ \ \ \ \ }}
& \multicolumn{2}{c}{{\ul \ \ \ \ \ \ vars/cons: 20/80\ \ \ \ \ \ }}
& \multicolumn{2}{c}{{\ul \ \ \ \ \ \ vars/cons: 30/120\ \ \ \ \ \ }}
& \multicolumn{2}{c}{{\ul \ \ \ \ \ \ vars/cons: 40/160\ \ \ \ \ \ }}
\\
& \multicolumn{1}{l}{\ \ ave.iter}
& \multicolumn{1}{l}{\ \ ave.time (s)}
& \multicolumn{1}{l}{\ \ ave.iter}
& \multicolumn{1}{l}{\ \ ave.time (s)}
& \multicolumn{1}{l}{\ \ ave.iter}
& \multicolumn{1}{l}{\ \ ave.time (s)}
& \multicolumn{1}{l}{\ \ ave.iter}
& \multicolumn{1}{l}{\ \ ave.time (s)} \\
\midrule
MOSEK
& -- & 0.10149 & -- & 0.10226 & -- & 0.10873 & -- & 0.10887 \\
ECOS
& -- & 0.00452 & -- & 0.00659 & -- & 0.00849 & -- & 0.01287 \\
FISTA
& 29.37 & 0.00098 & 115.95 & 0.00398 & 159.76 & 0.00800 & 272.88 & 0.01777 \\
Algorithm \ref{algorithm1} ($\alpha=20$)
& 26.56 & 0.00078 & 78.15 & 0.00251 & 119.03 & 0.00484 & 176.00 & 0.00785 \\
\bottomrule
\end{tabular}\label{table2}
\end{table*}
\section{Performance Analysis based on MPC}
\label{section4}
\subsection{Formulation of Standard MPC}
Consider the discrete-time linear system
\begin{small}
\begin{equation}\label{state_equation}
x_{k+1}=Ax_{k}+Bu_{k},
\end{equation}
\end{small}where $A$ and $B$ are known time-invariant matrices. $x_{k}\in\mathcal{R}^{n}$ and $u_{k}\in\mathcal{R}^{m}$ are subject to the linear constraints $Fx_{k}\leq\boldsymbol{1}$ and $Gu_{k}\leq\boldsymbol{1}$, respectively, in which $F\in\mathcal{R}^{f\times n}$, $G\in\mathcal{R}^{g\times m}$ and $\boldsymbol{1}$ is a vector with each component equal to $1$. The standard MPC problem can be presented as
\begin{small}
\begin{equation}\label{original_MPC}
\min \limits_{\boldsymbol{u}_{k}}
J(x_{k},\ \boldsymbol{u}_{k}),\ \
\text{s.t.}\ \boldsymbol{u}_{k}\in \mathbb{U},
\end{equation}
\end{small}where $x_{k}$ is the current state, the decision variable is the nominal input trajectory $\boldsymbol{u}_{k}=(u_{0|k},\cdots,u_{N-1|k})\in\mathcal{R}^{Nm}$, and $N$ is the prediction horizon. The construction of $\mathbb{U}$ can be found in \cite{Mayne_2000}. Moreover, the cost function $J(x_{k},\boldsymbol{u}_{k})$ is
\begin{small}
\begin{equation}\label{original_cost_function}
J(x_{k},\boldsymbol{u}_{k})
=
\frac{1}{2}
\sum_{l=0}^{N-1}
\left[
\|x_{l|k}\|_{Q}^{2}
+
\|u_{l|k}\|_{R}^{2}
\right]
+
\frac{1}{2}
\|x_{N|k}\|_{P}^{2},
\end{equation}
\end{small}where $l|k$ denotes the $l$-th step ahead prediction from the current time $k$. $Q$, $R$ and $P$ are positive definite matrices, and $P$ is chosen as the solution of the discrete algebraic Riccati equation of the unconstrained problem. The standard MPC problem (\ref{original_MPC}) can be formulated as the QP problem (\ref{standard_QP}), as shown in Appendix \ref{appendices}.
\subsection{Existing Methods for Comparison}
Performance comparisons with the optimization software MOSEK \cite{Erling2003}, the embedded solver ECOS \cite{Domahidi2013ecos} and the FISTA \cite{Beck_2009} are provided. The MOSEK and ECOS quadratic programming functions in the MATLAB environment, i.e., $\text{mskqpopt}(\cdot)$ and $\text{ecosqp}(\cdot)$, are used; they are invoked as
\begin{small}
\begin{subequations}
\begin{align}
\begin{split}
&[\text{sol}]
=
\text{mskqpopt}
(\mathcal{H},
\mathcal{G},
\mathcal{A},
[\ ],
\mathcal{B},
[\ ],
[\ ],
[\ ],
\text{'minimize info'});\\
&\text{time}
=
\text{sol.info.MSK\_DINF\_INTPNT\_TIME;}
\end{split}
\\
\nonumber
\\
&[\text{sol},
\sim,
\sim,
\sim,
\sim,
\text{time}]
=
\text{ecosqp}
(\mathcal{H},
\mathcal{G},
\mathcal{A},
\mathcal{B});
\end{align}
\end{subequations}
\end{small}The version of MOSEK is 9.2.43 and the numerical experiments are performed by running MATLAB R2018a on a Windows 10 platform with a 2.9 GHz Core i5 processor and 8 GB RAM.
\subsection{Performance Evaluation of Algorithm \ref{algorithm1}}
Four system scales are considered: $n=m=2,\ 4,\ 6,\ 8$. The performance of the above methods is evaluated by solving $400$ random MPC problems at each system scale. Since we develop the efficient solving method for one control step, without loss of generality, a batch of stable and controllable plants with random initial conditions and constraints is used. The components of the dynamics and input matrices are randomly selected from the interval $[-1,1]$. Each component of the state and input is upper and lower bounded by random bounds generated from the intervals $[1,10]$ and $[-10,-1]$, respectively. The prediction horizon is $\small N=5$, and the controller parameters are $\small Q=I$ and $\small R=10I$. Only the iteration process in the first control step is considered and the stop criterion is $\|\xi^{p}-\xi^{p-1}\|_{2}\leq10^{-3}$. With $\alpha=20$ in Algorithm \ref{algorithm1}, the results are shown in Table \ref{table2}, in which "ave.iter" and "ave.time" are the abbreviations of "average iteration number" and "average execution time", and "vars/cons" denotes the number of variables and constraints. Table \ref{table2} implies that the average execution time can be reduced by using the proposed method. Since Table \ref{table2} shows that Algorithm \ref{algorithm1} and ECOS are much faster than MOSEK, only the discussions about Algorithm \ref{algorithm1} and ECOS are provided in the rest of this letter for conciseness.
\begin{figure}[hbtp]
\centering
\includegraphics[scale=0.53]{exc_time_10-eps-converted-to.pdf}
\caption{Average execution time of Algorithm \ref{algorithm1} and ECOS in the case of $\small n=m=8$.}
\label{fig_exc_time}
\end{figure}
To show the performance improvement of Algorithm \ref{algorithm1} as $\small \alpha\in\{2,\cdots,20\}$ increases, an example in the case of $\small n=m=8$ is given in Fig. \ref{fig_exc_time}, which presents the results in terms of the average execution time. Since only the upper bound of the convergence rate is reduced by increasing $\small\alpha$, the execution time may not strictly decline. Fig. \ref{fig_exc_time} implies that the execution time of Algorithm \ref{algorithm1} can be shortened by increasing $\small\alpha$ and is less than that of ECOS for the same MPC optimization problem. Notice that there is no significant difference in the execution time if $\small\alpha$ keeps increasing. In fact, this depends on the stop criterion; therefore, a suitable $\small\alpha$ can be selected according to the required solution accuracy.
\begin{figure}[hbtp]
\centering
\includegraphics[scale=0.54]{n_m_8-eps-converted-to.pdf}
\caption{Execution time for each experiment in the case of $n=m=8$.}
\label{fig_exc_time_8}
\end{figure}
\subsection{Statistical Significance of Experimental Result}
Table \ref{table2} verifies the effectiveness of Algorithm \ref{algorithm1} in terms of the average execution time; its statistical significance is discussed as follows. Since the sample size is large in our test, i.e., $400$ random experiments in each case, the paired $t$-test developed in Sections $10.3$ and $12.3$ of \cite{Wackerly2008} can be used. Denote the average execution times under ECOS and Algorithm \ref{algorithm1} as $\mu_e$ and $\mu_a$, respectively, and the difference in execution time between the two methods as $D_{i}$ for $i=1,\cdots,M$, in which $M=400$. If the average execution time for ECOS is larger, then $\mu_{D}=\mu_{e}-\mu_{a}>0$. Thus, we test
\begin{small}
\begin{equation*}
\text{H}_{0}:\ \mu_{D}=0\ \
\text{versus}\ \
\text{H}_{1}:\ \mu_{D}>0.
\end{equation*}
\end{small}Define the sample mean and variance as
\begin{small}
\begin{equation*}
\bar{D}
=
\frac{1}{M}
\sum_{i=1}^{M}D_{i},\ \
S_{D}^{2}
=
\frac{1}{M-1}
\sum_{i=1}^{M}
(D_{i}-\bar{D})^{2},
\end{equation*}
\end{small}then the test statistic is calculated as
\begin{small}
\begin{equation*}
t=
\frac{\bar{D}-\mu_{D}}{S_{D}/\sqrt{M}},
\end{equation*}
\end{small}which is the observed value of the statistic under the null hypothesis $\text{H}_{0}$, i.e., with $\mu_{D}=0$. In the case of $n=m=8$, for example, the execution time for each random experiment is given in Fig. \ref{fig_exc_time_8} and the test statistic is $t=15.7623$, which leads to an extremely small $p$-value compared with the significance level $0.001$. Hence, the result is statistically significant and suggests that ECOS yields a larger execution time than Algorithm \ref{algorithm1}. Similar results are obtained for the other system scales.
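The test statistic can be computed with a few lines of MATLAB; in the sketch below, \texttt{t\_ecos} and \texttt{t\_alg} are hypothetical $M\times1$ vectors of per-experiment execution times, and \texttt{tcdf} is from the Statistics and Machine Learning Toolbox.
\begin{verbatim}
% Paired t-test; H0: mu_D = 0 versus H1: mu_D > 0.
D    = t_ecos - t_alg;         % paired differences, M-by-1
M    = numel(D);               % sample size (M = 400 here)
Dbar = mean(D);                % sample mean
SD   = std(D);                 % sample std (1/(M-1) normalization)
t    = Dbar / (SD / sqrt(M));  % test statistic under H0
pval = 1 - tcdf(t, M-1);       % one-sided p-value
\end{verbatim}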
\begin{figure}[hbtp]
\centering
\includegraphics[scale=0.53]{solution_error_MPC-eps-converted-to.pdf}
\caption{Solution error between Algorithm \ref{algorithm1} and ECOS in the case of $n=m=8$.}
\label{fig_solution_error}
\end{figure}
\begin{figure}[hbtp]
\centering
\includegraphics[scale=0.53]{limit_analysis-eps-converted-to.pdf}
\caption{Average execution time of Algorithm \ref{algorithm1} and ECOS at different system scales.}
\label{fig_limit_analysis}
\end{figure}
\subsection{Error and Limitation Analysis of Algorithm \ref{algorithm1}}
To verify the accuracy of the solutions of Algorithm \ref{algorithm1}, the solution error $\small \xi^{p}-\xi^{ecos}$ is calculated once $\small \xi^{p}$ satisfies the stop criterion, where the ECOS solution is denoted as $\small \xi^{ecos}$. For example, given one random MPC problem in the case of $n=m=8$, each component of the solution error is shown in Fig. \ref{fig_solution_error}, in which lines of different colors denote different $\small\alpha$. The results in Fig. \ref{fig_solution_error} reveal that no component is greater than $\small2.2\times10^{-3}$ for any $\small\alpha$; hence, the solution of Algorithm \ref{algorithm1} is close to the ECOS solution. Moreover, notice that the solution errors for different $\small\alpha$ are close to each other, which means that the selection of $\small\alpha$ has little influence on the final solution. The same conclusion is obtained for other random optimization problems. In this way, the accuracy of the solutions of Algorithm \ref{algorithm1} is verified. However, the limitation of Algorithm \ref{algorithm1} is that it is only suitable for small-size MPC problems. This is illustrated in Fig. \ref{fig_limit_analysis}, in which the average execution times of Algorithm \ref{algorithm1} ($\alpha=20$) and ECOS are presented. Fig. \ref{fig_limit_analysis} implies that the performance of Algorithm \ref{algorithm1} degrades as the system scale increases. Extending Algorithm \ref{algorithm1} to solve large-scale optimization problems efficiently is a topic of future research.
\section{Conclusion}
\label{section5}
In this letter, QP problems are solved by a novel PGM. We show that the FISTA is a special case of the proposed method and that the convergence rate can be improved from $O(1/p^{2})$ to $O(1/p^{\alpha})$ by selecting the positive real roots of a group of higher-order polynomial equations as the iterative parameters. Based on a batch of random experiments, the effectiveness of the proposed method on MPC problems has been verified.
\begin{appendices}
\section{From Standard MPC to QP}
\label{appendices}
According to the nominal model (\ref{state_equation}), the relationship between the predicted nominal states and inputs in a finite horizon $\small N$ can be expressed as
\begin{small}
\begin{equation}
\boldsymbol{x}_{k}
=
A_{1}x_{k}
+
A_{2}\boldsymbol{u}_{k},
\end{equation}
\end{small}where
\begin{small}
\begin{equation}
A_{1}
=
\begin{bmatrix}
A \\
\vdots \\
A^{N}
\end{bmatrix},\
A_{2}
=
\begin{bmatrix}
B & \boldsymbol{0} & \cdots & \boldsymbol{0} \\
AB & B & \cdots & \boldsymbol{0} \\
\vdots & \vdots & \ddots & \vdots \\
A^{N-1}B & A^{N-2}B & \cdots & B
\end{bmatrix}.
\end{equation}
\end{small}Denote $\small Q_{1}=\text{diag}(Q,\cdots,Q,P)\in\mathcal{R}^{Nn\times Nn}$ and $\small R_{1}=\text{diag}(R,\cdots,R)\in\mathcal{R}^{Nm\times Nm}$; then the objective (\ref{original_cost_function}), with the equality constraints eliminated, can be written as
\begin{small}
\begin{equation}
J
(\boldsymbol{x}_{k},\boldsymbol{u}_{k})
=
\frac{1}{2}
\boldsymbol{u}_{k}^{T}\mathcal{H}\boldsymbol{u}_{k}
+
\mathcal{G}(x_{k})^{T}\boldsymbol{u}_{k}
+
c(x_{k}),
\end{equation}
\end{small}where $\mathcal{H}=A_{2}^{T}Q_{1}A_{2}+R_{1}$, $\mathcal{G}(x_{k})=A_{2}^{T}Q_{1}A_{1}x_{k}$ and $c(x_{k})=\frac{1}{2}x_{k}^{T}A_{1}^{T}Q_{1}A_{1}x_{k}$. Then the standard quadratic optimization objective is obtained. Let $\tilde{F}=\text{diag}(F,\cdots,F)\in\mathcal{R}^{Nf\times Nn}$, $\tilde{\Phi}=(\boldsymbol{0},\Phi)\in\mathcal{R}^{w\times Nn}$ ($\Phi$ is the terminal constraint on the predicted state $x_{N|k}$), $\bar{F}=(\tilde{F}^{T},\tilde{\Phi}^{T})^{T}\in\mathcal{R}^{(Nf+w)\times Nn}$ and $\bar{G}=\text{diag}(G,\cdots,G)\in\mathcal{R}^{Ng\times Nm}$; then the linear constraints of (\ref{original_MPC}) can be written as
\begin{small}
\begin{equation}
\mathcal{A}
\boldsymbol{u}_{k}
\leq
\mathcal{B}(x_{k}),
\end{equation}
\end{small}where
\begin{small}
\begin{equation}
\mathcal{A}
=
\begin{bmatrix}
\bar{F}A_{2} \\
\bar{G}
\end{bmatrix},\
\mathcal{B}(x_{k})
=
\begin{bmatrix}
\boldsymbol{1}-\bar{F}A_{1}x_{k} \\
\boldsymbol{1}
\end{bmatrix}.
\end{equation}
\end{small}In this way, the MPC problem (\ref{original_MPC}) is formulated in the quadratic programming form (\ref{standard_QP}). After solving the MPC problem, the first element of the optimal input trajectory $\small \boldsymbol{u}_{k}^{*}$ is applied to the plant at time $\small k$.
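A minimal MATLAB sketch of this condensing step is given below, assuming $(A,B,Q,R,P,N)$ and the current state \texttt{x\_k} are given; the terminal and inequality constraints are omitted for brevity.
\begin{verbatim}
% Condensed MPC: builds A1, A2 and the QP data H, G(x_k).
[n, m] = size(B);
A1 = zeros(N*n, n); A2 = zeros(N*n, N*m);
Ap = eye(n);
for i = 1:N
    Ap = A * Ap;                                % A^i
    A1((i-1)*n+1:i*n, :) = Ap;
    for j = 1:i
        A2((i-1)*n+1:i*n, (j-1)*m+1:j*m) = A^(i-j) * B;
    end
end
Q1 = blkdiag(kron(eye(N-1), Q), P);             % diag(Q,...,Q,P)
R1 = kron(eye(N), R);                           % diag(R,...,R)
H  = A2'*Q1*A2 + R1;                            % quadratic term
Gx = A2'*Q1*A1*x_k;                             % linear term G(x_k)
\end{verbatim}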
\end{appendices}
\bibliographystyle{IEEEtranS}
\section{Introduction}
Many engineering optimization problems can be formulated to quadratic programming (QP) problems. For example, model predictive control (MPC), which has been widely used in many industrial processes \cite{Qin_2003}. However, the solving of the QP problem is often computationally demanding. In practice, many industrial processes require a fast solution of the problem, for example, the control systems with high sampling rate \cite{Juan_2014}. Therefore, it is important to develop an accelerated algorithm for solving QP problems.
For reducing the computational load of the controller, QP problems are solved by using online optimization technique. Popular QP solvers use an interior-point method \cite{Domahidi2012}, an active-set method \cite{Ferreau_2014} and a dual Newton method \cite{Frasch_2015}. However, above solvers require the solution of the linearization system of the Karush-Kuhn-Tucker (KKT) conditions at every iteration. For this reason, the great attention has been given to the first-order methods for the online optimization \cite{Parys_2019,Giselsson_2013,Alberto_2021}. In recent years, the proximal gradient-based accelerated algorithms are widely used to solve MPC problems \cite{Giselsson2014}. Specifically, the iterative algorithm is designed based on the proximal gradient method (PGM) to deal with the constraint of Lagrange multiplier more easily \cite{Giselsson_2013,Parys_2019,Giselsson2014,Giselsson2015}. Moreover, methods in \cite{Beck_2009,Nesterov2013}, i.e., fast iterative shrinkage-thresholding algorithm (FISTA) improves the iteration convergence rate from $\small O(1/p)$ to $\small O(1/p^{2})$. The key idea of this improvement is that the positive real root of a specific quadratic polynomial equation is selected as the iterative parameter. Inspired by the work in \cite{Beck_2009} and \cite{Giselsson_2013}, an accelerated PGM algorithm is proposed for fast solving QP problems in this letter. We show that the FISTA in \cite{Beck_2009} is a special case of the proposed method and the convergence rate can be improved from $\small O(1/p^{2})$ in \cite{Beck_2009} to $\small O(1/p^{\alpha})$ by selecting the positive real roots of a group of high order polynomial equations as the iterative parameters. To assess the performance of the proposed algorithm, a batch of randomly generated MPC problems are solved. Then, comparing the resulted execution time to state-of-the-art optimization softwares, in particular MOSEK \cite{Erling2003} and ECOS \cite{Domahidi2013ecos}.
The paper is organized as follows. In Section \ref{section2}, the QP problem is formulated into the dual form and the PGM is introduced. The accelerated PGM for the dual problem is proposed in Section \ref{section3}. In Section \ref{section4}, the numerical experiment based on the MPC are provided. Section \ref{section5} concludes the result of this letter.
\section{Problem Formulation}
\label{section2}
\subsection{Primal and Dual Problems}
Consider the standard quadratic programming problem
\begin{small}
\begin{equation}\label{standard_QP}
\begin{split}
&\min\limits_{\xi}
\frac{1}{2}
\xi^{T}\mathcal{H}\xi
+
\mathcal{G}^{T}\xi
\\
&\ s.t.\ \mathcal{A}\xi\leq\mathcal{B}.
\end{split}
\end{equation}
\end{small}Assume that there exists $\xi$ such that $ \mathcal{A}\xi<\mathcal{B}$, which means that the Slater's condition holds and there is no duality gap \cite{Boyd}, the dual problem of (\ref{standard_QP}) is formulated as
\begin{small}
\begin{equation}\label{dual_P}
\sup\limits_{\mu\geq0}
\inf\limits_{\xi}
\begin{bmatrix}
\frac{1}{2}\xi^{T}\mathcal{H}\xi
+
\mathcal{G}^{T}\xi
+
\mu^{T}(\mathcal{A}\xi-\mathcal{B})
\end{bmatrix}.
\end{equation}
\end{small}Taking the partial derivative with respect to $\small \xi$ and applying the first-order optimality condition, we have
\begin{small}
\begin{equation*}
\begin{split}
&\frac{\partial}{\partial \xi}
\begin{bmatrix}
\frac{1}{2}\xi^{T}\mathcal{H}\xi+
(\mathcal{A}^{T}\mu+\mathcal{G})^{T}\xi-
\mu^{T}\mathcal{B}
\end{bmatrix}=0 \\
&\Rightarrow \xi=\mathcal{H}^{-1}(-\mathcal{A}^{T}\mu-\mathcal{G}).
\end{split}
\end{equation*}
\end{small}In this way, (\ref{dual_P}) is transformed into
\begin{small}
\begin{equation}\label{dual_problem}
\sup\limits_{\mu\geq0}
\begin{bmatrix}
-\frac{1}{2}
(\mathcal{A}^{T}\mu+\mathcal{G})^{T}
\mathcal{H}^{-1}
(\mathcal{A}^{T}\mu+\mathcal{G})
-
\mathcal{B}^{T}\mu
\end{bmatrix}.
\end{equation}
\end{small}Let $\small f(\mu)=\frac{1}{2}(\mathcal{A}^{T}\mu+\mathcal{G})^{T}
\mathcal{H}^{-1}(\mathcal{A}^{T}\mu+\mathcal{G})+\mathcal{B}^{T}\mu$ be the new objective; then the dual problem (\ref{dual_problem}) is equivalent to minimizing $\small f(\mu)$ subject to $\small \mu\geq0$.
\subsection{Proximal Gradient Method}
In this subsection, the PGM is used to solve the dual problem. Specifically, the following nonsmooth function $\small g$ is introduced to encode the constraint $\small \mu\geq0$ of $\small f(\mu)$
\begin{small}
\begin{equation}
g(\mu)
=
\begin{cases}
0, & \mbox{if } \mu\geq0 \\
+\infty, & \mbox{otherwise}.
\end{cases}
\end{equation}
\end{small}In this way, the constrained optimization problem $\small \min\limits_{\mu\geq0}f(\mu)$ is equivalent to the unconstrained one, i.e., $\small \min\limits_{\mu}f(\mu)+g(\mu)$. Based on the work in \cite{Beck_2009}, let $\small \zeta^{p}=\mu^{p}+\frac{\tau_{p}-1}{\tau_{p+1}}(\mu^{p}-\mu^{p-1})$, where $\small \tau_{p}>0$ for $\small p=1,2,\cdots$ and $\small p$ is the iteration number. Then the above problem can be solved by
\begin{small}
\begin{equation}\label{proximal_step}
\mu^{p+1}
=
\mathrm{P}_{\mu}
(\zeta^{p}-\frac{1}{L}\nabla f(\zeta^{p})),
\end{equation}
\end{small}where $\small L$ is the Lipschitz constant of $\small \nabla f$ and $\small \mathrm{P}_{\mu}$ is the Euclidean projection onto $\small \{\mu|\mu\geq0\}$. According to the result in \cite{Giselsson_2013}, the gradient is
$\small \nabla f(\mu)=\mathcal{A}\mathcal{H}^{-1}(\mathcal{A}^{T}\mu+\mathcal{G})+\mathcal{B}$, then we have
\begin{small}
\begin{equation}
\nabla f(\zeta^{p})
=
-
\mathcal{A}
\begin{bmatrix}
\xi^{p}
+
\frac{\tau_{p}-1}{\tau_{p+1}}(\xi^{p}-\xi^{p-1})
\end{bmatrix}+\mathcal{B}.
\end{equation}
\end{small}Therefore, (\ref{proximal_step}) can be written as
\begin{small}
\begin{equation}
\mu_{l}^{p+1}
=
\max
\begin{Bmatrix}
0,
\zeta_{l}^{p}+\frac{1}{L}
\begin{bmatrix}
\mathcal{A}_{l}
(
\xi^{p}
+
\frac{\tau_{p}-1}{\tau_{p+1}}(\xi^{p}-\xi^{p-1})
)
-\mathcal{B}_{l}
\end{bmatrix}
\end{Bmatrix},
\end{equation}
\end{small}where $\mu_{l}$ denotes the $l$-th component of the vector $\mu$, and $\mathcal{A}_{l}$ and $\mathcal{B}_{l}$ denote the $\small l$-th row of $\mathcal{A}$ and the $\small l$-th component of $\mathcal{B}$, respectively. The classical PGM to solve $\min\limits_{\mu}f(\mu)+g(\mu)$ is summarized as Algorithm \ref{algorithm1}, in which the iterative parameters $\tau_{p}$ and $\tau_{p+1}$ will be discussed in the next subsection.
\begin{algorithm}
\caption{Proximal Gradient Method.}
\label{algorithm1}
\begin{algorithmic}[1]
\REQUIRE ~~\\
Initial parameters $\small \zeta^{1}=\mu^{0}$, $\small \tau_{1}=1$ and $\small \bar{\xi}^{1}=\xi^{0}$.
\ENSURE ~~The optimal decision variable $\small \mu^{*}$. \\
\WHILE {$\small p\geq1$}
\STATE $\small \mu_{l}^{p}=\max\{0,\zeta_{l}^{p}+\frac{1}{L}(\mathcal{A}_{l}\bar{\xi}^{p}-\mathcal{B}_{l})\},\ \forall l$.\\
\STATE Looking up table for $\tau_{p}$ and $\tau_{p+1}$.\\
\STATE $\small \xi^{p}=\mathcal{H}^{-1}(-\mathcal{A}^{T}\mu^{p}-\mathcal{G})$.\\
\STATE $\small \zeta^{p+1}=\mu^{p}+\frac{\tau_{p}-1}{\tau_{p+1}}(\mu^{p}-\mu^{p-1})$.\\
\STATE $\small \bar{\xi}^{p+1}=\xi^{p}+\frac{\tau_{p}-1}{\tau_{p+1}}(\xi^{p}-\xi^{p-1})$.\\
\STATE $\small p=p+1.$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
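For concreteness, a minimal MATLAB sketch of Algorithm \ref{algorithm1} is given below. It is a sketch under stated assumptions rather than a definitive implementation: the function name, the stop tolerance and the initialization $\mu^{0}=0$ are illustrative choices, and the vector of iterative parameters is assumed to be generated offline (see Algorithm \ref{algorithm2} below).
\begin{verbatim}
function [xi, mu] = apgm_dual(H, G, A, B, T, tol)
% Minimal sketch of Algorithm 1 (accelerated PGM on the dual problem).
% T(p) stores tau_p; tol is the tolerance on ||xi^p - xi^{p-1}||_2.
L  = norm(A * (H \ A'), 2);         % Lipschitz constant of the dual gradient
mu = zeros(size(A, 1), 1);          % mu^0 (illustrative initialization)
xi = H \ (-A' * mu - G);            % xi^0
zeta = mu;  xibar = xi;             % zeta^1 = mu^0, xibar^1 = xi^0
mu_prev = mu;  xi_prev = xi;
for p = 1:numel(T) - 1
    mu = max(0, zeta + (A * xibar - B) / L);   % projected gradient step
    xi = H \ (-A' * mu - G);                   % primal recovery (step 4)
    w = (T(p) - 1) / T(p + 1);                 % extrapolation weight
    zeta  = mu + w * (mu - mu_prev);           % momentum on the multiplier
    xibar = xi + w * (xi - xi_prev);           % momentum on the primal variable
    if norm(xi - xi_prev, 2) <= tol, return; end
    mu_prev = mu;  xi_prev = xi;
end
end
\end{verbatim}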
\section{Accelerated MPC Iteration}
\label{section3}
\subsection{Accelerated Scheme and Convergence Analysis}
The traditional iterative parameters $\tau_{p}$ and $\tau_{p+1}$ are selected based on the positive real root of the following second-order polynomial equation
\begin{small}
\begin{equation}\label{2_polynomial}
\tau_{p+1}^{2}-\tau_{p+1}-\tau_{p}^{2}=0
\end{equation}
\end{small}with $\tau_{1}=1$. With the aid of (\ref{2_polynomial}), the convergence rate $O(1/p^{2})$ can be achieved \cite{Beck_2009}. In this work, we show that the convergence rate can be enhanced to $O(1/p^{\alpha})$ merely by selecting the iterative parameters appropriately. Specifically, for a given order $\alpha\in\{2,3,\cdots\}$, the iterative parameters are determined by the positive real root of the $\alpha$th-order equation
\begin{small}
\begin{equation}\label{alpha_polynomial}
\tau_{p+1}^{\alpha}-\tau_{p+1}^{\alpha-1}-\tau_{p}^{\alpha}=0
\end{equation}
\end{small}with the initial value $\tau_{1}=1$, instead of (\ref{2_polynomial}). This is the main difference between our method and the method in \cite{Beck_2009}.
\begin{lemma}\label{lemma1}
The $\alpha$th-order polynomial equation (\ref{alpha_polynomial}) has following properties:
\begin{enumerate}
\item For $\alpha\in\{2,3,\cdots\}$, the polynomial equation (\ref{alpha_polynomial}) has the unique positive real root.
\item For $p\in\{1,2,\cdots\}$, the unique positive real root has the lower bound as
\begin{small}
\begin{equation*}
\tau_{p}\geq\frac{p+\alpha-1}{\alpha}.
\end{equation*}
\end{small}
\end{enumerate}
\end{lemma}
\begin{proof}
For the first argument, we first show that $\tau_{p}>0$ for all positive integer $p\geq1$ with the aid of mathematical induction. Specifically, the base case $\tau_{1}>0$ holds since the given initial value $\tau_{1}=1$. Assume the induction hypothesis that $\tau_{p}>0$ holds. Then we have
\begin{small}
\begin{equation}\label{temp_1}
\tau_{p+1}^{\alpha}-\tau_{p+1}^{\alpha-1}-\tau_{p}^{\alpha}=0
\Rightarrow
\tau_{p+1}^{\alpha-1}(\tau_{p+1}-1)>0.
\end{equation}
\end{small}Since (\ref{temp_1}) holds for all $\alpha\in\{2,3,\cdots\}$, any positive real root of (\ref{alpha_polynomial}) must satisfy $\tau_{p+1}>1$; in particular, $\tau_{p+1}>0$. In this way, we conclude that $\tau_{p}>0$ for all $p\in\{1,2,\cdots\}$. For the uniqueness of the positive real root, let
\begin{small}
\begin{equation}
f_{1}(\tau_{p+1})\triangleq\tau_{p+1}^{\alpha}-\tau_{p+1}^{\alpha-1}-\tau_{p}^{\alpha},
\end{equation}
\end{small}which has the derivative as
\begin{small}
\begin{equation}
f_{1}^{'}(\tau_{p+1})=\tau_{p+1}^{\alpha-2}(\alpha\tau_{p+1}-\alpha+1),
\end{equation}
\end{small}which has zero points at $\tau_{p+1}=0$ and $\tau_{p+1}=\frac{\alpha-1}{\alpha}$. Therefore, on $\tau_{p+1}\geq0$, $f_{1}(\tau_{p+1})$ monotonically decreases from $f_{1}(0)$ to $f_{1}(\frac{\alpha-1}{\alpha})$ and monotonically increases from $f_{1}(\frac{\alpha-1}{\alpha})$ to $f_{1}(+\infty)$. Since $f_{1}(0)=-\tau_{p}^{\alpha}<0$ and $\lim\limits_{\tau_{p+1}\rightarrow+\infty}f_{1}(\tau_{p+1})=+\infty$, the function $f_{1}(\tau_{p+1})$ has exactly one positive zero point, which implies that the equation (\ref{alpha_polynomial}) has a unique positive real root.
For the second argument, we again use mathematical induction. The base case $\tau_{1}\geq1$ holds since the given initial value $\tau_{1}=1$. Assume the induction hypothesis that $\tau_{p}\geq\frac{p+\alpha-1}{\alpha}$ holds. To show $\tau_{p+1}\geq\frac{p+\alpha}{\alpha}$, it is sufficient to prove $f_{1}(\frac{p+\alpha}{\alpha})<0$, since $f_{1}$ is monotonically increasing on $[\frac{\alpha-1}{\alpha},+\infty)$ and $\frac{p+\alpha}{\alpha}>\frac{\alpha-1}{\alpha}$. Moreover, by the induction hypothesis $\tau_{p}\geq\frac{p+\alpha-1}{\alpha}>0$, it suffices to prove that the inequality
\begin{small}
\begin{equation}
\begin{pmatrix}
\frac{p+\alpha}{\alpha}
\end{pmatrix}^{\alpha}
-
\begin{pmatrix}
\frac{p+\alpha}{\alpha}
\end{pmatrix}^{\alpha-1}
-
\begin{pmatrix}
\frac{p+\alpha-1}{\alpha}
\end{pmatrix}^{\alpha}
<0
\end{equation}
\end{small}holds, which is equivalent to showing that
\begin{small}
\begin{equation}
f_{2}(p)
\triangleq
(\alpha-1)\ln
\begin{pmatrix}
\frac{p+\alpha}{\alpha}
\end{pmatrix}
+
\ln
\begin{pmatrix}
\frac{p}{\alpha}
\end{pmatrix}
-
\alpha\ln\begin{pmatrix}
\frac{p+\alpha-1}{\alpha}
\end{pmatrix}
<0
\end{equation}
\end{small}holds for all $p\in\{1,2,\cdots\}$. The derivative of $f_{2}(p)$ is
\begin{small}
\begin{equation}
f_{2}^{'}(p)
=
\frac{\alpha(\alpha-1)}{p(p+\alpha)(p+\alpha-1)},
\end{equation}
\end{small}which implies that the function $f_{2}(p)$ monotonically increases from $f_{2}(1)$ to $f_{2}(+\infty)$. Next, we show that $f_{2}(1)<0$. Notice that
\begin{small}
\begin{equation}\label{f_2_1}
f_{2}(1)
=
(\alpha-1)\ln
\begin{pmatrix}
\frac{1+\alpha}{\alpha}
\end{pmatrix}
+
\ln
\begin{pmatrix}
\frac{1}{\alpha}
\end{pmatrix},
\end{equation}
\end{small}which is a function of $\alpha$; after simplification, denote (\ref{f_2_1}) as
\begin{small}
\begin{equation}
f_{3}(\alpha)
\triangleq
(\alpha-1)\ln(\alpha+1)
-
\alpha\ln\alpha.
\end{equation}
\end{small}The first and second derivatives of $f_{3}(\alpha)$ are
\begin{small}
\begin{subequations}
\begin{align}
f_{3}^{'}(\alpha)
&=
\ln
\begin{pmatrix}
\frac{\alpha+1}{\alpha}
\end{pmatrix}
-
\frac{2}{\alpha+1},
\\
f_{3}^{''}(\alpha)
&=
\frac{\alpha-1}{\alpha(\alpha+1)^{2}},
\end{align}
\end{subequations}
\end{small}which implies that $f_{3}^{'}(\alpha)$ monotonically increases for $\small \alpha\in\{2,3,\cdots\}$. Since $\lim\limits_{\alpha\rightarrow+\infty}f_{3}^{'}(\alpha)=0$, we have $f_{3}^{'}(\alpha)<0$, i.e., $f_{3}(\alpha)$ monotonically decreases. Combined with $f_{3}(2)=\ln\frac{3}{4}<0$, this gives $f_{3}(\alpha)<0$ and hence $f_{2}(1)<0$. Finally, since $f_{2}(p)$ monotonically increases and $\lim\limits_{p\rightarrow+\infty}f_{2}(p)=0$, we have $f_{2}(p)<0$ for all $p\in\{1,2,\cdots\}$, so the unique positive real root $\tau_{p}$ has the lower bound $\frac{p+\alpha-1}{\alpha}$.
\end{proof}
For the purpose of saving computing time, the $\alpha$th-order equations are solved offline and the roots are stored in a table. In this work, the look-up table is obtained by recursively solving the polynomial equation (\ref{alpha_polynomial}) in the MATLAB environment, which is summarized as Algorithm \ref{algorithm2}. Notice that the MATLAB function $\text{roots}(\cdot)$ is used for the polynomial root seeking. The following theorem shows that the convergence rate can be improved to $O(1/p^{\alpha})$ by using (\ref{alpha_polynomial}).
\begin{theorem}\label{Theorem_2}
For $\alpha\in\{2,3,\cdots\}$, let $\xi^{*}$ and $\mu^{*}$ denote the optimizers of the problems (\ref{standard_QP}) and (\ref{dual_problem}) respectively, the convergence rate of the primal variable by Algorithm \ref{algorithm1} is
\begin{small}
\begin{equation}\label{convergence_rate_general}
\|\xi^{p}-\xi^{*}\|_{2}^{2}
\leq
\frac{\alpha^{\alpha}L\|\mu^{0}-\mu^{*}\|_{2}^{2}}{\underline{\sigma}(\mathcal{H})(p+\alpha-1)^{\alpha}},\ p=1,2,\cdots
\end{equation}
\end{small}where $\small \underline{\sigma}(\cdot)$ denotes the minimum eigenvalue.
\end{theorem}
\begin{proof}
Let $\small \upsilon^{p}=f(\mu^{p})-f(\mu^{*})$, according to Lemma 2.3 in \cite{Beck_2009}, we have
\begin{small}
\begin{subequations}
\begin{align}
\begin{split}
\frac{2}{L}(\upsilon^{p}-\upsilon^{p+1})
&\geq\|\mu^{p+1}-\zeta^{p+1}\|_{2}^{2}\\
&+2\langle\mu^{p+1}-\zeta^{p+1},\zeta^{p+1}-\mu^{p}\rangle,
\end{split}\label{lemma_4_1_1} \\
\begin{split}
-\frac{2}{L}\upsilon^{p+1}
&\geq\|\mu^{p+1}-\zeta^{p+1}\|_{2}^{2}\\
&+2\langle\mu^{p+1}-\zeta^{p+1},\zeta^{p+1}-\mu^{*}\rangle.
\end{split}\label{lemma_4_1_2}
\end{align}
\end{subequations}
\end{small}Following the line of Lemma 4.1 in \cite{Beck_2009}, multiply both sides of (\ref{lemma_4_1_1}) by $\small (\tau_{p+1}-1)$ and add the result to (\ref{lemma_4_1_2}), which leads to
\begin{small}
\begin{equation}\label{Most_Important_Ieq_1}
\begin{split}
\frac{2}{L}
\begin{bmatrix}
(\tau_{p+1}-1)\upsilon^{p}-\tau_{p+1}\upsilon^{p+1}
\end{bmatrix}\geq
\tau_{p+1}\|\mu^{p+1}-\zeta^{p+1}\|_{2}^{2}\\
+2\langle\mu^{p+1}-\zeta^{p+1},\tau_{p+1}\zeta^{p+1}-(\tau_{p+1}-1)\mu^{p}-\mu^{*}\rangle.
\end{split}
\end{equation}
\end{small}Based on the second argument of Lemma \ref{lemma1}, we obtain $\small \tau_{p+1}\geq1,\ \forall p\geq1$. Multiplying the left- and right-hand sides of (\ref{Most_Important_Ieq_1}) by $\small \tau_{p+1}^{\alpha-1}$ and $\small \tau_{p+1}$, respectively, we have
\begin{small}
\begin{equation}\label{Most_Important_Ieq_2}
\begin{split}
\frac{2}{L}
\begin{bmatrix}
\tau_{p+1}^{\alpha-1}(\tau_{p+1}-1)\upsilon^{p}-\tau_{p+1}^{\alpha}\upsilon^{p+1}
\end{bmatrix}\geq
\|\tau_{p+1}(\mu^{p+1}-\zeta^{p+1})\|_{2}^{2}\\
+2\tau_{p+1}\langle\mu^{p+1}-\zeta^{p+1},\tau_{p+1}\zeta^{p+1}-(\tau_{p+1}-1)\mu^{p}-\mu^{*}\rangle.
\end{split}
\end{equation}
\end{small}Let $\small y_{1}=\tau_{p+1}\zeta^{p+1}$, $\small y_{2}=\tau_{p+1}\mu^{p+1}$ and $\small y_{3}=(\tau_{p+1}-1)\mu^{p}+\mu^{*}$, the right-hand side of (\ref{Most_Important_Ieq_2}) can be written as
\begin{small}
\begin{equation}
\|y_{2}-y_{1}\|_{2}^{2}+2\langle y_{2}-y_{1},y_{1}-y_{3}\rangle
=
\|y_{2}-y_{3}\|_{2}^{2}
-\|y_{1}-y_{3}\|_{2}^{2}.
\end{equation}
\end{small}Since $\small \tau_{p}^{\alpha}=\tau_{p+1}^{\alpha}-\tau_{p+1}^{\alpha-1}$, the inequality (\ref{Most_Important_Ieq_2}) is equivalent to
\begin{small}
\begin{equation}\label{Most_Important_Ieq_3}
\begin{split}
\frac{2}{L}
\begin{bmatrix}
\tau_{p}^{\alpha}\upsilon^{p}-\tau_{p+1}^{\alpha}\upsilon^{p+1}
\end{bmatrix}&\geq
\|y_{2}-y_{3}\|_{2}^{2}
-\|y_{1}-y_{3}\|_{2}^{2} \\
&=
\|\tau_{p+1}\mu^{p+1}-(\tau_{p+1}-1)\mu^{p}-\mu^{*}\|_{2}^{2}\\
&-
\|\tau_{p+1}\zeta^{p+1}-(\tau_{p+1}-1)\mu^{p}-\mu^{*}\|_{2}^{2}.
\end{split}
\end{equation}
\end{small}Let $\small \kappa_{p}=\tau_{p}\mu^{p}-(\tau_{p}-1)\mu^{p-1}-\mu^{*}$; combining with $\small \tau_{p+1}\zeta^{p+1}=\tau_{p+1}\mu^{p}+(\tau_{p}-1)(\mu^{p}-\mu^{p-1})$, the right-hand side of (\ref{Most_Important_Ieq_3}) is equal to $\small \|\kappa_{p+1}\|_{2}^{2}-\|\kappa_{p}\|_{2}^{2}$. Therefore, similar to Lemma 4.1 in \cite{Beck_2009}, we have the following conclusion
\begin{small}
\begin{equation}
\frac{2}{L}\tau_{p}^{\alpha}\upsilon^{p}
-\frac{2}{L}\tau_{p+1}^{\alpha}\upsilon^{p+1}\geq
\|\kappa_{p+1}\|_{2}^{2}-\|\kappa_{p}\|_{2}^{2}.
\end{equation}
\end{small}According to Lemma 4.2 in \cite{Beck_2009}, letting $\small \bar{y}_{1}^{p}=\frac{2}{L}\tau_{p}^{\alpha}\upsilon^{p}$, $\small \bar{y}_{2}^{p}=\|\kappa_{p}\|_{2}^{2}$ and $\small \bar{y}_{3}=\|\mu^{0}-\mu^{*}\|_{2}^{2}$, we have $\small \bar{y}_{1}^{p}+\bar{y}_{2}^{p}\geq\bar{y}_{1}^{p+1}+\bar{y}_{2}^{p+1}$. Assuming that $\small \bar{y}_{1}^{1}+\bar{y}_{2}^{1}\leq\bar{y}_{3}$ holds, we have $\small \bar{y}_{1}^{p}+\bar{y}_{2}^{p}\leq\bar{y}_{3}$, which leads to $\small \bar{y}_{1}^{p}\leq\bar{y}_{3}$. Moreover, according to the second argument of Lemma \ref{lemma1}, we have
\begin{small}
\begin{equation*}
\begin{split}
\frac{2}{L}\tau_{p}^{\alpha}\upsilon^{p}\leq\|\mu^{0}-\mu^{*}\|_{2}^{2}
\Rightarrow
f(\mu^{p})-f(\mu^{*})
\leq
\frac{\alpha^{\alpha}L\|\mu^{0}-\mu^{*}\|_{2}^{2}}{2(p+\alpha-1)^{\alpha}}.
\end{split}
\end{equation*}
\end{small}The proof of the assumption $\small \bar{y}_{1}^{1}+\bar{y}_{2}^{1}\leq\bar{y}_{3}$ can be found in Theorem 4.4 of \cite{Beck_2009}. Then, following the procedures in Theorem 3 of \cite{Giselsson_2013}, we conclude that
\begin{small}
\begin{equation}
\|\xi^{p}-\xi^{*}\|_{2}^{2}\leq
\frac{2}{\underline{\sigma}(\mathcal{H})}(f(\mu^{p})-f(\mu^{*}))
\leq
\frac{\alpha^{\alpha}L\|\mu^{0}-\mu^{*}\|_{2}^{2}}{\underline{\sigma}(\mathcal{H})(p+\alpha-1)^{\alpha}}.
\end{equation}
\end{small}In this way, the convergence rate (\ref{convergence_rate_general}) is obtained.
\end{proof}
\begin{algorithm}[t]
\caption{Look-up table generation for the $\alpha$th-order polynomial equation (\ref{alpha_polynomial}).}
\label{algorithm2}
\begin{algorithmic}[1]
\REQUIRE ~~\\
The order $\alpha\geq2$, the initial root $\tau_{1}=1$, the initial iteration index $p=1$ and the table length $\mathcal{P}$.
\ENSURE ~~Look-up table $\mathcal{T}_{\alpha}$. \\
\STATE Look-up table initialization: $\mathcal{T}_{\alpha}=[\tau_{1}]$.\\
\WHILE {$p\leq\mathcal{P}$}
\STATE Polynomial coefficients: $\boldsymbol{p_c}=[1,-1,\text{zeros}(1, \alpha-2),-\mathcal{T}_{\alpha}(\text{end})^{\alpha}]$.\\
\STATE Polynomial roots: $\boldsymbol{p_r}=\text{roots}(\boldsymbol{p_c})$.\\
\STATE Finding the positive real root $\tau_{p+1}$ in the vector $\boldsymbol{p_r}$.\\
\STATE Updating the look-up table: $\mathcal{T}_{\alpha} =[\mathcal{T}_{\alpha},\tau_{p+1}]$.\\
\STATE $\small p=p+1.$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
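A direct MATLAB transcription of Algorithm \ref{algorithm2} is sketched below; the tolerance used to identify the real root and the final assertion of the lower bound from Lemma \ref{lemma1} are illustrative additions rather than part of the algorithm.
\begin{verbatim}
function T = lookup_table(alpha, P)
% Offline generation of the look-up table for the alpha-th order
% polynomial equation tau^alpha - tau^(alpha-1) - tau_p^alpha = 0.
T = 1;                                   % tau_1 = 1
for p = 1:P
    pc = [1, -1, zeros(1, alpha - 2), -T(end)^alpha];  % coefficients
    pr = roots(pc);                                    % all alpha roots
    tau = pr(abs(imag(pr)) < 1e-10 & real(pr) > 0);    % positive real root
    tau = real(tau(1));                                % unique by Lemma 1
    assert(tau >= (p + alpha)/alpha - 1e-9);           % lower bound check
    T = [T, tau];                                      % append tau_{p+1}
end
end
\end{verbatim}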
\begin{figure}[hbtp]
\centering
\includegraphics[scale=0.53]{upper_bound_rate-eps-converted-to.pdf}
\caption{Right-hand side of (\ref{convergence_rate_general}) with the variation of $\small \alpha$.}
\label{fig_upper_bound_rate}
\end{figure}
Theorem \ref{Theorem_2} shows that the FISTA in \cite{Beck_2009} is a special case of the proposed method and that the iteration performance is determined by (\ref{alpha_polynomial}). Specifically, a suitable selection of the iterative parameter $\small \tau_{p}$ improves the convergence rate from $\small O(1/p^{2})$ in \cite{Beck_2009} to $\small O(1/p^{\alpha})$. To show that the upper bound of the convergence rate is reduced, denote the right-hand side of (\ref{convergence_rate_general}) as
\begin{small}
\begin{equation}\label{upper_bound}
U_{p}
=
\frac{\alpha^{\alpha}}{(p+\alpha-1)^{\alpha}}C,\ p=1,2,\cdots
\end{equation}
\end{small}where $C$ is the constant part of the right-hand side of (\ref{convergence_rate_general}). The variation of $U_{p}$ for $p\in\{1,\cdots,10\}$ is shown in Fig. \ref{fig_upper_bound_rate}, in which lines of different colors denote different $\alpha\in\{2,\cdots,20\}$. Fig. \ref{fig_upper_bound_rate} implies that $U_{p}$ decreases as $\alpha$ increases.
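As a quick sanity check of (\ref{upper_bound}), the following MATLAB lines evaluate the bound with the illustrative normalization $C=1$; note that $U_{1}=C$ for every $\alpha$, so the improvement shows up for $p\geq2$.
\begin{verbatim}
% Evaluate U_p = C * alpha^alpha / (p + alpha - 1)^alpha with C = 1.
p = 1:10;
for alpha = [2 5 10 20]
    U = alpha^alpha ./ (p + alpha - 1).^alpha;
    fprintf('alpha = %2d:  U_2 = %.3f,  U_10 = %.2e\n', alpha, U(2), U(end));
end
\end{verbatim}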
\subsection{Cholesky Decomposition of $\mathcal{H}$}
According to the QP in Section \ref{section2}, the quadratic objective term $\mathcal{H}$ in (\ref{standard_QP}) may be a dense matrix, in which case solving (\ref{standard_QP}) by Algorithm \ref{algorithm1} directly consumes more computation time than with a banded matrix. To cope with this difficulty, the matrix decomposition technique can be used. Since $\mathcal{H}$ is symmetric and positive definite, there exists the Cholesky decomposition $\mathcal{H}=\mathcal{Z}^{T}\mathcal{Z}$, based on which the quadratic programming problem (\ref{standard_QP}) can be formulated into
\begin{small}
\begin{equation}\label{compact_original_problem3}
\begin{split}
&\min\limits_{\psi}
\frac{1}{2}
\psi^{T}I\psi
+
\mathcal{G}^{T}\mathcal{Z}^{-1}\psi
\\
&\ s.t.\ \mathcal{A}\mathcal{Z}^{-1}\psi\leq\mathcal{B},
\end{split}
\end{equation}
\end{small}where $\psi=\mathcal{Z}\xi$. Since $\mathcal{Z}$ is an upper triangular matrix with real and positive diagonal components, (\ref{compact_original_problem3}) can be solved by Algorithm \ref{algorithm1} and the control input can be calculated by $\xi=\mathcal{Z}^{-1}\psi$. In this way, the quadratic objective term is transformed into the identity matrix, which reduces the computation time in step $4$ of Algorithm \ref{algorithm1}.
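In MATLAB, this preconditioning amounts to a few lines around the built-in Cholesky factorization; the sketch below only illustrates the transformation of (\ref{standard_QP}) into (\ref{compact_original_problem3}) and the recovery step.
\begin{verbatim}
% Precondition the QP via the Cholesky factor H = Z' * Z.
Z    = chol(H);        % upper triangular with positive diagonal
Atil = A / Z;          % A * inv(Z), constraint matrix of the new QP
Gtil = Z' \ G;         % linear term, since G' * inv(Z) = (Z' \ G)'
% ... solve  min 0.5*psi'*psi + Gtil'*psi  s.t.  Atil*psi <= B
%     by Algorithm 1, then recover the original variable:
% xi = Z \ psi;        % back substitution using the triangular Z
\end{verbatim}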
\begin{table*}[t]
\centering
\caption{Iteration performance with four methods.}
\begin{tabular}{lcccccccc}
\toprule
& \multicolumn{2}{c}{$n=m=2$}
& \multicolumn{2}{c}{$n=m=4$}
& \multicolumn{2}{c}{$n=m=6$}
& \multicolumn{2}{c}{$n=m=8$}
\\
& \multicolumn{2}{c}{vars/cons: 10/40}
& \multicolumn{2}{c}{vars/cons: 20/80}
& \multicolumn{2}{c}{vars/cons: 30/120}
& \multicolumn{2}{c}{vars/cons: 40/160}
\\
& ave.iter & ave.time (s)
& ave.iter & ave.time (s)
& ave.iter & ave.time (s)
& ave.iter & ave.time (s) \\
\midrule
MOSEK
& -- & 0.10149 & -- & 0.10226 & -- & 0.10873 & -- & 0.10887 \\
ECOS
& -- & 0.00452 & -- & 0.00659 & -- & 0.00849 & -- & 0.01287 \\
FISTA
& 29.37 & 0.00098 & 115.95 & 0.00398 & 159.76 & 0.00800 & 272.88 & 0.01777 \\
Algorithm \ref{algorithm1} ($\alpha=20$)
& 26.56 & 0.00078 & 78.15 & 0.00251 & 119.03 & 0.00484 & 176.00 & 0.00785 \\
\bottomrule
\end{tabular}\label{table2}
\end{table*}
\section{Performance Analysis based on MPC}
\label{section4}
\subsection{Formulation of Standard MPC}
Consider the discrete-time linear system as
\begin{small}
\begin{equation}\label{state_equation}
x_{k+1}=Ax_{k}+Bu_{k},
\end{equation}
\end{small}where $A$ and $B$ are known time-invariant matrices, and $x_{k}\in\mathcal{R}^{n}$ and $u_{k}\in\mathcal{R}^{m}$ are subject to the linear constraints $Fx_{k}\leq\boldsymbol{1}$ and $Gu_{k}\leq\boldsymbol{1}$, respectively, in which $F\in\mathcal{R}^{f\times n}$, $G\in\mathcal{R}^{g\times m}$ and $\boldsymbol{1}$ is a vector with each component equal to $1$. The standard MPC problem can be presented as
\begin{small}
\begin{equation}\label{original_MPC}
\min \limits_{\boldsymbol{u}_{k}}
J(x_{k},\ \boldsymbol{u}_{k}),\ \
\text{s.t.}\ \boldsymbol{u}_{k}\in \mathbb{U},
\end{equation}
\end{small}where $x_{k}$ is the current state, the decision variable is the nominal input trajectory $\boldsymbol{u}_{k}=(u_{0|k},\cdots,u_{N-1|k})\in\mathcal{R}^{Nm}$, and $N$ is the prediction horizon. The construction of $\mathbb{U}$ can be found in \cite{Mayne_2000}. Moreover, the cost function $J(x_{k},\boldsymbol{u}_{k})$ is
\begin{small}
\begin{equation}\label{original_cost_function}
J(x_{k},\boldsymbol{u}_{k})
=
\frac{1}{2}
\sum_{l=0}^{N-1}
\begin{bmatrix}
\|x_{l|k}\|_{Q}^{2}
+
\|u_{l|k}\|_{R}^{2}
\end{bmatrix}
+
\frac{1}{2}
\|x_{N|k}\|_{P}^{2},
\end{equation}
\end{small}where $l|k$ denotes the $l$-step-ahead prediction from the current time $k$, and $Q$, $R$ and $P$ are positive definite matrices. $P$ is chosen as the solution of the discrete algebraic Riccati equation of the unconstrained problem. The standard MPC problem (\ref{original_MPC}) can be formulated as the QP problem (\ref{standard_QP}), as shown in Appendix \ref{appendices}.
\subsection{Existing Methods for Comparison}
The performance comparisons with the optimization software MOSEK \cite{Erling2003}, the embedded solver ECOS \cite{Domahidi2013ecos} and the FISTA \cite{Beck_2009} are provided. The MOSEK and ECOS quadratic programming functions in the MATLAB environment, i.e., $\text{mskqpopt}(\cdot)$ and $\text{ecosqp}(\cdot)$, are used; they are invoked as
\begin{small}
\begin{subequations}
\begin{align}
\begin{split}
&[\text{sol}]
=
\text{mskqpopt}
(\mathcal{H},
\mathcal{G},
\mathcal{A},
[\ ],
\mathcal{B},
[\ ],
[\ ],
[\ ],
\text{'minimize info');}\\
&\text{time}
=
\text{sol.info.MSK\_DINF\_INTPNT\_TIME;}
\end{split}
\\
\nonumber
\\
&[\text{sol},
\sim,
\sim,
\sim,
\sim,
\text{time}]
=
\text{ecosqp}
(\mathcal{H},
\mathcal{G},
\mathcal{A},
\mathcal{B});
\end{align}
\end{subequations}
\end{small}The version of MOSEK is 9.2.43, and the numerical experiments are performed by running MATLAB R2018a on a Windows 10 platform with a 2.9 GHz Core i5 processor and 8 GB RAM.
\subsection{Performance Evaluation of Algorithm \ref{algorithm1}}
Four system scales are considered: $n=m=2,\ 4,\ 6,\ 8$. The performance of the above methods is evaluated by solving $400$ random MPC problems at each system scale. Since we develop the efficient solving method for one control step, without loss of generality, a batch of stable and controllable plants with random initial conditions and constraints is used; a sketch of the generation procedure is given after this paragraph. The components of the dynamics and input matrices are randomly selected from the interval $[-1,1]$. Each component of the state and input is upper and lower bounded by random bounds generated from the intervals $[1,10]$ and $[-10,-1]$, respectively. The prediction horizon is $\small N=5$ and the controller parameters are $\small Q=I$ and $\small R=10I$. Only the iteration process in the first control step is considered and the stop criterion is $\|\xi^{p}-\xi^{p-1}\|_{2}\leq10^{-3}$. Let $\alpha=20$ in Algorithm \ref{algorithm1}; the results are shown in Table \ref{table2}, in which "ave.iter" and "ave.time" abbreviate "average iteration number" and "average execution time", and "vars/cons" denotes the numbers of variables and constraints. Table \ref{table2} implies that the average execution time can be reduced by using the proposed method. Since Table \ref{table2} also shows that Algorithm \ref{algorithm1} and ECOS are much faster than MOSEK, only Algorithm \ref{algorithm1} and ECOS are discussed in the rest of the letter for conciseness.
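The random test problems can be generated along the following lines; the rejection loop enforcing stability and controllability reflects our reading of the setup rather than a procedure stated above, and the function name is illustrative.
\begin{verbatim}
function [A, B, xmax, xmin, umax, umin] = random_plant(n, m)
% Random stable and controllable plant with random box constraints (sketch).
while true
    A = 2*rand(n) - 1;  B = 2*rand(n, m) - 1;   % entries in [-1, 1]
    Cm = B;                                     % controllability matrix
    for k = 1:n-1, Cm = [Cm, A^k * B]; end
    if max(abs(eig(A))) < 1 && rank(Cm) == n, break; end
end
xmax =  1 + 9*rand(n, 1);   xmin = -1 - 9*rand(n, 1);   % state bounds
umax =  1 + 9*rand(m, 1);   umin = -1 - 9*rand(m, 1);   % input bounds
end
\end{verbatim}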
\begin{figure}[hbtp]
\centering
\includegraphics[scale=0.53]{exc_time_10-eps-converted-to.pdf}
\caption{Average execution time of Algorithm \ref{algorithm1} and ECOS in the case of $\small n=m=8$.}
\label{fig_exc_time}
\end{figure}
To show the performance improvement of Algorithm \ref{algorithm1} as $\small \alpha\in\{2,\cdots,20\}$ increases, an example in the case of $\small n=m=8$ is given in Fig. \ref{fig_exc_time}, which presents the results in terms of the average execution time. Since only the upper bound of the convergence rate is reduced by increasing $\small\alpha$, the execution time may not strictly decline. Fig. \ref{fig_exc_time} implies that the execution time of Algorithm \ref{algorithm1} can be shortened by increasing $\small\alpha$ and is smaller than that of ECOS for the same MPC optimization problem. Notice that there is no significant difference in the execution time if $\small\alpha$ keeps increasing; in fact, this depends on the stop criterion, so a suitable $\small\alpha$ can be selected according to the required solution accuracy.
\begin{figure}[hbtp]
\centering
\includegraphics[scale=0.54]{n_m_8-eps-converted-to.pdf}
\caption{Execution time for each experiment in the case of $n=m=8$.}
\label{fig_exc_time_8}
\end{figure}
\subsection{Statistical Significance of Experimental Result}
Table \ref{table2} verifies the effectiveness of Algorithm \ref{algorithm1} in terms of the average execution time; the statistical significance of this result is discussed as follows. Since the sample size in our test is large, i.e., $400$ random experiments in each case, the paired $t$-test developed in Sections $10.3$ and $12.3$ of \cite{Wackerly2008} can be used. Denote the average execution times under ECOS and Algorithm \ref{algorithm1} as $\mu_e$ and $\mu_a$, and the difference of execution time between the two methods as $D_{i}$ for $i=1,\cdots,M$, in which $M=400$. If the average execution time for ECOS is larger, then $\mu_{D}=\mu_{e}-\mu_{a}>0$. Thus, we test
\begin{small}
\begin{equation*}
\text{H}_{0}:\ \mu_{D}=0\ \
\text{versus}\ \
\text{H}_{1}:\ \mu_{D}>0.
\end{equation*}
\end{small}Define the sample mean and variance as
\begin{small}
\begin{equation*}
\bar{D}
=
\frac{1}{M}
\sum_{i=1}^{M}D_{i},\ \
S_{D}^{2}
=
\frac{1}{M-1}
\sum_{i=1}^{M}
(D_{i}-\bar{D})^{2},
\end{equation*}
\end{small}then the test statistic is calculated as
\begin{small}
\begin{equation*}
t=
\frac{\bar{D}-\mu_{D}}{S_{D}/\sqrt{M}},
\end{equation*}
\end{small}which is the observed value of the statistic under the null hypothesis $\text{H}_{0}$. In the case of $n=m=8$, for example, the execution time for each random experiment is given in Fig. \ref{fig_exc_time_8} and the test statistic is $t=15.7623$, which leads to an extremely small $p$-value compared with the significance level $0.001$. Hence, the result is statistically significant and suggests that ECOS yields a larger execution time than Algorithm \ref{algorithm1}. Similar results are obtained for the other system scales.
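Given the two timing vectors, the test statistic is computed directly; in the sketch below, te and ta are assumed to be length-$M$ vectors of the per-experiment execution times of ECOS and Algorithm \ref{algorithm1}, and the normal approximation of the $p$-value is appropriate because of the large sample size.
\begin{verbatim}
% Paired t-test of H0: mu_D = 0 against H1: mu_D > 0 (one-sided).
D    = te - ta;                    % per-experiment time differences
M    = numel(D);
Dbar = mean(D);                    % sample mean of D
SD   = std(D);                     % sample std (1/(M-1) normalization)
t    = Dbar / (SD / sqrt(M));      % observed test statistic under H0
pval = 0.5 * erfc(t / sqrt(2));    % large-sample normal approximation
\end{verbatim}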
\begin{figure}[hbtp]
\centering
\includegraphics[scale=0.53]{solution_error_MPC-eps-converted-to.pdf}
\caption{Solution error between Algorithm \ref{algorithm1} and ECOS in the case of $n=m=8$.}
\label{fig_solution_error}
\end{figure}
\begin{figure}[hbtp]
\centering
\includegraphics[scale=0.53]{limit_analysis-eps-converted-to.pdf}
\caption{Average execution time of Algorithm \ref{algorithm1} and ECOS at different system scales.}
\label{fig_limit_analysis}
\end{figure}
\subsection{Error and Limitation Analysis of Algorithm \ref{algorithm1}}
To verify the accuracy of the solutions of Algorithm \ref{algorithm1}, the solution error $\small \xi^{p}-\xi^{ecos}$ is calculated once $\small \xi^{p}$ satisfies the stop criterion, where the ECOS solution is denoted as $\small \xi^{ecos}$. For example, given one random MPC problem in the case of $n=m=8$, each component of the solution error is shown in Fig. \ref{fig_solution_error}, in which lines of different colors denote different $\small\alpha$. The results in Fig. \ref{fig_solution_error} reveal that no component is greater than $\small2.2\times10^{-3}$ in any case of $\small\alpha$; hence, the solution of Algorithm \ref{algorithm1} is close to the ECOS solution. Moreover, notice that the solution errors with different $\small\alpha$ are close to each other, which means that the selection of $\small\alpha$ has little influence on the final solution. The same conclusion is obtained for other random optimization problems. In this way, the accuracy of the solutions of Algorithm \ref{algorithm1} is verified. However, the limitation of Algorithm \ref{algorithm1} is that it is only suitable for small-scale MPC problems. An illustration is given in Fig. \ref{fig_limit_analysis}, which presents the average execution times of Algorithm \ref{algorithm1} ($\alpha=20$) and ECOS. Fig. \ref{fig_limit_analysis} implies that the performance of Algorithm \ref{algorithm1} degrades as the system scale increases. The extension of Algorithm \ref{algorithm1} to solve large-scale optimization problems efficiently is a topic of future research.
\section{Conclusion}
\label{section5}
In this letter, QP problems are solved by a novel PGM. We show that the FISTA is a special case of the proposed method and that the convergence rate can be improved from $O(1/p^{2})$ to $O(1/p^{\alpha})$ by selecting the positive real roots of a group of high-order polynomial equations as the iterative parameters. Based on a batch of random experiments, the effectiveness of the proposed method on MPC problems has been verified.
\begin{appendices}
\section{From Standard MPC to QP}
\label{appendices}
According to the nominal model (\ref{state_equation}), the relationship between the predicted nominal states and inputs in a finite horizon $\small N$ can be expressed as
\begin{small}
\begin{equation}
\boldsymbol{x}_{k}
=
A_{1}x_{k}
+
A_{2}\boldsymbol{u}_{k},
\end{equation}
\end{small}where
\begin{small}
\begin{equation}
A_{1}
=
\begin{bmatrix}
A \\
\vdots \\
A^{N}
\end{bmatrix},\
A_{2}
=
\begin{bmatrix}
B & \boldsymbol{0} & \cdots & \boldsymbol{0} \\
AB & B & \cdots & \boldsymbol{0} \\
\vdots & \vdots & \ddots & \boldsymbol{0} \\
A^{N-1}B & A^{N-2}B & \cdots & B
\end{bmatrix}.
\end{equation}
\end{small}Denote $\small Q_{1}=\text{diag}(Q,\cdots,Q,P)\in\mathcal{R}^{Nn\times Nn}$ and $\small R_{1}=\text{diag}(R,\cdots,R)\in\mathcal{R}^{Nm\times Nm}$; then the objective (\ref{original_cost_function}), with the state predictions substituted, can be written as
\begin{small}
\begin{equation}
J
(\boldsymbol{x}_{k},\boldsymbol{u}_{k})
=
\frac{1}{2}
\boldsymbol{u}_{k}^{T}\mathcal{H}\boldsymbol{u}_{k}
+
\mathcal{G}(x_{k})^{T}\boldsymbol{u}_{k}
+
c(x_{k}),
\end{equation}
\end{small}where $\mathcal{H}=A_{2}^{T}Q_{1}A_{2}+R_{1}$, $\mathcal{G}(x_{k})=A_{2}^{T}Q_{1}A_{1}x_{k}$ and $c(x_{k})=\frac{1}{2}x_{k}^{T}A_{1}^{T}Q_{1}A_{1}x_{k}$. Then the standard quadratic optimization objective is obtained. Let $\tilde{F}=\text{diag}(F,\cdots,F)\in\mathcal{R}^{Nf\times Nn}$, $\tilde{\Phi}=(\boldsymbol{0},\Phi)\in\mathcal{R}^{w\times Nn}$ ($\Phi$ is the terminal constraint matrix on the predicted state $x_{N|k}$), $\bar{F}=(\tilde{F}^{T},\tilde{\Phi}^{T})^{T}\in\mathcal{R}^{(Nf+w)\times Nn}$ and $\bar{G}=\text{diag}(G,\cdots,G)\in\mathcal{R}^{Ng\times Nm}$; then the linear constraints of (\ref{original_MPC}) can be written as
\begin{small}
\begin{equation}
\mathcal{A}
\boldsymbol{u}_{k}
\leq
\mathcal{B}(x_{k}),
\end{equation}
\end{small}where
\begin{small}
\begin{equation}
\mathcal{A}
=
\begin{bmatrix}
\bar{F}A_{2} \\
\bar{G}
\end{bmatrix},\
\mathcal{B}(x_{k})
=
\begin{bmatrix}
\boldsymbol{1}-\bar{F}A_{1}x_{k} \\
\boldsymbol{1}
\end{bmatrix}.
\end{equation}
\end{small}In this way, the MPC problem (\ref{original_MPC}) is formulated into the quadratic programming form (\ref{standard_QP}). After solving the MPC problem, the first term of the optimal input trajectory $\small \boldsymbol{u}_{k}^{*}$ is applied to the plant at time $\small k$. For reference, the condensing procedure above is sketched in MATLAB below; the terminal constraint matrix $\Phi$ is taken as given.
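\begin{verbatim}
function [H, Gfun, Acal, Bfun] = mpc_to_qp(A, B, Q, R, P, F, G, Phi, N)
% Condense the MPC problem into the QP form (sketch of this appendix).
[n, m] = size(B);
A1 = zeros(N*n, n);  A2 = zeros(N*n, N*m);
for i = 1:N
    A1((i-1)*n+1:i*n, :) = A^i;
    for j = 1:i
        A2((i-1)*n+1:i*n, (j-1)*m+1:j*m) = A^(i-j) * B;
    end
end
Q1 = blkdiag(kron(eye(N-1), Q), P);     % diag(Q,...,Q,P)
R1 = kron(eye(N), R);                   % diag(R,...,R)
H    = A2' * Q1 * A2 + R1;
Gfun = @(x) A2' * Q1 * A1 * x;          % G(x_k)
Fbar = [kron(eye(N), F); zeros(size(Phi,1), (N-1)*n), Phi];
Gbar = kron(eye(N), G);
Acal = [Fbar * A2; Gbar];
Bfun = @(x) [ones(size(Fbar,1),1) - Fbar*A1*x; ones(size(Gbar,1),1)];
end
\end{verbatim}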
\end{appendices}
\bibliographystyle{IEEEtranS}
\section*{Abstract (Not appropriate in this style!)}%
\else \small
\begin{center}{\bf Abstract\vspace{-.5em}\vspace{\z@}}\end{center}%
\quotation
\fi
}%
}{%
}%
\@ifundefined{endabstract}{\def\endabstract
{\if@twocolumn\else\endquotation\fi}}{}%
\@ifundefined{maketitle}{\def\maketitle#1{}}{}%
\@ifundefined{affiliation}{\def\affiliation#1{}}{}%
\@ifundefined{proof}{\def\proof{\noindent{\bfseries Proof. }}}{}%
\@ifundefined{endproof}{\def\endproof{\mbox{\ \rule{.1in}{.1in}}}}{}%
\@ifundefined{newfield}{\def\newfield#1#2{}}{}%
\@ifundefined{chapter}{\def\chapter#1{\par(Chapter head:)#1\par }%
\newcount\c@chapter}{}%
\@ifundefined{part}{\def\part#1{\par(Part head:)#1\par }}{}%
\@ifundefined{section}{\def\section#1{\par(Section head:)#1\par }}{}%
\@ifundefined{subsection}{\def\subsection#1%
{\par(Subsection head:)#1\par }}{}%
\@ifundefined{subsubsection}{\def\subsubsection#1%
{\par(Subsubsection head:)#1\par }}{}%
\@ifundefined{paragraph}{\def\paragraph#1%
{\par(Subsubsubsection head:)#1\par }}{}%
\@ifundefined{subparagraph}{\def\subparagraph#1%
{\par(Subsubsubsubsection head:)#1\par }}{}%
\@ifundefined{therefore}{\def\therefore{}}{}%
\@ifundefined{backepsilon}{\def\backepsilon{}}{}%
\@ifundefined{yen}{\def\yen{\hbox{\rm\rlap=Y}}}{}%
\@ifundefined{registered}{%
\def\registered{\relax\ifmmode{}\r@gistered
\else$\m@th\r@gistered$\fi}%
\def\r@gistered{^{\ooalign
{\hfil\raise.07ex\hbox{$\scriptstyle\rm\RIfM@\expandafter\text@\else\expandafter\mbox\fi{R}$}\hfil\crcr
\mathhexbox20D}}}}{}%
\@ifundefined{Eth}{\def\Eth{}}{}%
\@ifundefined{eth}{\def\eth{}}{}%
\@ifundefined{Thorn}{\def\Thorn{}}{}%
\@ifundefined{thorn}{\def\thorn{}}{}%
\def\TEXTsymbol#1{\mbox{$#1$}}%
\@ifundefined{degree}{\def\degree{{}^{\circ}}}{}%
\newdimen\theight
\@ifundefined{Column}{\def\Column{%
\vadjust{\setbox\z@=\hbox{\scriptsize\quad\quad tcol}%
\theight=\ht\z@\advance\theight by \dp\z@\advance\theight by \lineskip
\kern -\theight \vbox to \theight{%
\rightline{\rlap{\box\z@}}%
\vss
}%
}%
}}{}%
\@ifundefined{qed}{\def\qed{%
\ifhmode\unskip\nobreak\fi\ifmmode\ifinner\else\hskip5\p@\fi\fi
\hbox{\hskip5\p@\vrule width4\p@ height6\p@ depth1.5\p@\hskip\p@}%
}}{}%
\@ifundefined{cents}{\def\cents{\hbox{\rm\rlap c/}}}{}%
\@ifundefined{tciLaplace}{\def\tciLaplace{\ensuremath{\mathcal{L}}}}{}%
\@ifundefined{tciFourier}{\def\tciFourier{\ensuremath{\mathcal{F}}}}{}%
\@ifundefined{textcurrency}{\def\textcurrency{\hbox{\rm\rlap xo}}}{}%
\@ifundefined{texteuro}{\def\texteuro{\hbox{\rm\rlap C=}}}{}%
\@ifundefined{euro}{\def\euro{\hbox{\rm\rlap C=}}}{}%
\@ifundefined{textfranc}{\def\textfranc{\hbox{\rm\rlap-F}}}{}%
\@ifundefined{textlira}{\def\textlira{\hbox{\rm\rlap L=}}}{}%
\@ifundefined{textpeseta}{\def\textpeseta{\hbox{\rm P\negthinspace s}}}{}%
\@ifundefined{miss}{\def\miss{\hbox{\vrule height2\p@ width 2\p@ depth\z@}}}{}%
\@ifundefined{vvert}{\def\vvert{\Vert}}{
\@ifundefined{tcol}{\def\tcol#1{{\baselineskip=6\p@ \vcenter{#1}} \Column}}{}%
\@ifundefined{dB}{\def\dB{\hbox{{}}}}{
\@ifundefined{mB}{\def\mB#1{\hbox{$#1$}}}{
\@ifundefined{nB}{\def\nB#1{\hbox{#1}}}{
\@ifundefined{note}{\def\note{$^{\dag}}}{}%
\defLaTeX2e{LaTeX2e}
\ifx\fmtnameLaTeX2e
\DeclareOldFontCommand{\rm}{\normalfont\rmfamily}{\mathrm}
\DeclareOldFontCommand{\sf}{\normalfont\sffamily}{\mathsf}
\DeclareOldFontCommand{\tt}{\normalfont\ttfamily}{\mathtt}
\DeclareOldFontCommand{\bf}{\normalfont\bfseries}{\mathbf}
\DeclareOldFontCommand{\it}{\normalfont\itshape}{\mathit}
\DeclareOldFontCommand{\sl}{\normalfont\slshape}{\@nomath\sl}
\DeclareOldFontCommand{\sc}{\normalfont\scshape}{\@nomath\sc}
\fi
\def\alpha{{\Greekmath 010B}}%
\def\beta{{\Greekmath 010C}}%
\def\gamma{{\Greekmath 010D}}%
\def\delta{{\Greekmath 010E}}%
\def\epsilon{{\Greekmath 010F}}%
\def\zeta{{\Greekmath 0110}}%
\def\eta{{\Greekmath 0111}}%
\def\theta{{\Greekmath 0112}}%
\def\iota{{\Greekmath 0113}}%
\def\kappa{{\Greekmath 0114}}%
\def\lambda{{\Greekmath 0115}}%
\def\mu{{\Greekmath 0116}}%
\def\nu{{\Greekmath 0117}}%
\def\xi{{\Greekmath 0118}}%
\def\pi{{\Greekmath 0119}}%
\def\rho{{\Greekmath 011A}}%
\def\sigma{{\Greekmath 011B}}%
\def\tau{{\Greekmath 011C}}%
\def\upsilon{{\Greekmath 011D}}%
\def\phi{{\Greekmath 011E}}%
\def\chi{{\Greekmath 011F}}%
\def\psi{{\Greekmath 0120}}%
\def\omega{{\Greekmath 0121}}%
\def\varepsilon{{\Greekmath 0122}}%
\def\vartheta{{\Greekmath 0123}}%
\def\varpi{{\Greekmath 0124}}%
\def\varrho{{\Greekmath 0125}}%
\def\varsigma{{\Greekmath 0126}}%
\def\varphi{{\Greekmath 0127}}%
\def{\Greekmath 0272}{{\Greekmath 0272}}
\def\FindBoldGroup{%
{\setbox0=\hbox{$\mathbf{x\global\edef\theboldgroup{\the\mathgroup}}$}}%
}
\def\Greekmath#1#2#3#4{%
\if@compatibility
\ifnum\mathgroup=\symbold
\mathchoice{\mbox{\boldmath$\displaystyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\textstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptscriptstyle\mathchar"#1#2#3#4$}}%
\else
\mathchar"#1#2#3#
\fi
\else
\FindBoldGroup
\ifnum\mathgroup=\theboldgroup
\mathchoice{\mbox{\boldmath$\displaystyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\textstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptscriptstyle\mathchar"#1#2#3#4$}}%
\else
\mathchar"#1#2#3#
\fi
\fi}
\newif\ifGreekBold \GreekBoldfalse
\let\SAVEPBF=\pbf
\def\pbf{\GreekBoldtrue\SAVEPBF}%
\@ifundefined{theorem}{\newtheorem{theorem}{Theorem}}{}
\@ifundefined{lemma}{\newtheorem{lemma}[theorem]{Lemma}}{}
\@ifundefined{corollary}{\newtheorem{corollary}[theorem]{Corollary}}{}
\@ifundefined{conjecture}{\newtheorem{conjecture}[theorem]{Conjecture}}{}
\@ifundefined{proposition}{\newtheorem{proposition}[theorem]{Proposition}}{}
\@ifundefined{axiom}{\newtheorem{axiom}{Axiom}}{}
\@ifundefined{remark}{\newtheorem{remark}{Remark}}{}
\@ifundefined{example}{\newtheorem{example}{Example}}{}
\@ifundefined{exercise}{\newtheorem{exercise}{Exercise}}{}
\@ifundefined{definition}{\newtheorem{definition}{Definition}}{}
\@ifundefined{mathletters}{%
\newcounter{equationnumber}
\def\mathletters{%
\addtocounter{equation}{1}
\edef\@currentlabel{\arabic{equation}}%
\setcounter{equationnumber}{\c@equation}
\setcounter{equation}{0}%
\edef\arabic{equation}{\@currentlabel\noexpand\alph{equation}}%
}
\def\endmathletters{%
\setcounter{equation}{\value{equationnumber}}%
}
}{}
\@ifundefined{BibTeX}{%
\def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em
T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}}{}%
\@ifundefined{AmS}%
{\def\AmS{{\protect\usefont{OMS}{cmsy}{m}{n}%
A\kern-.1667em\lower.5ex\hbox{M}\kern-.125emS}}}{}%
\@ifundefined{AmSTeX}{\def\AmSTeX{\protect\AmS-\protect\TeX\@}}{}%
\def\@@eqncr{\let\@tempa\relax
\ifcase\@eqcnt \def\@tempa{& & &}\or \def\@tempa{& &}%
\else \def\@tempa{&}\fi
\@tempa
\if@eqnsw
\iftag@
\@taggnum
\else
\@eqnnum\stepcounter{equation}%
\fi
\fi
\global\@ifnextchar*{\@tagstar}{\@tag}@false
\global\@eqnswtrue
\global\@eqcnt\z@\cr}
\def\@ifnextchar*{\@TCItagstar}{\@TCItag}{\@ifnextchar*{\@TCItagstar}{\@TCItag}}
\def\@TCItag#1{%
\global\@ifnextchar*{\@tagstar}{\@tag}@true
\global\def\@taggnum{(#1)}%
\global\def\@currentlabel{#1}}
\def\@TCItagstar*#1{%
\global\@ifnextchar*{\@tagstar}{\@tag}@true
\global\def\@taggnum{#1}%
\global\def\@currentlabel{#1}}
\def\QATOP#1#2{{#1 \atop #2}}%
\def\QTATOP#1#2{{\textstyle {#1 \atop #2}}}%
\def\QDATOP#1#2{{\displaystyle {#1 \atop #2}}}%
\def\QABOVE#1#2#3{{#2 \above#1 #3}}%
\def\QTABOVE#1#2#3{{\textstyle {#2 \above#1 #3}}}%
\def\QDABOVE#1#2#3{{\displaystyle {#2 \above#1 #3}}}%
\def\QOVERD#1#2#3#4{{#3 \overwithdelims#1#2 #4}}%
\def\QTOVERD#1#2#3#4{{\textstyle {#3 \overwithdelims#1#2 #4}}}%
\def\QDOVERD#1#2#3#4{{\displaystyle {#3 \overwithdelims#1#2 #4}}}%
\def\QATOPD#1#2#3#4{{#3 \atopwithdelims#1#2 #4}}%
\def\QTATOPD#1#2#3#4{{\textstyle {#3 \atopwithdelims#1#2 #4}}}%
\def\QDATOPD#1#2#3#4{{\displaystyle {#3 \atopwithdelims#1#2 #4}}}%
\def\QABOVED#1#2#3#4#5{{#4 \abovewithdelims#1#2#3 #5}}%
\def\QTABOVED#1#2#3#4#5{{\textstyle
{#4 \abovewithdelims#1#2#3 #5}}}%
\def\QDABOVED#1#2#3#4#5{{\displaystyle
{#4 \abovewithdelims#1#2#3 #5}}}%
\def\tint{\msi@int\textstyle\int}%
\def\tiint{\msi@int\textstyle\iint}%
\def\tiiint{\msi@int\textstyle\iiint}%
\def\tiiiint{\msi@int\textstyle\iiiint}%
\def\tidotsint{\msi@int\textstyle\idotsint}%
\def\toint{\msi@int\textstyle\oint}%
\def\tsum{\mathop{\textstyle \sum }}%
\def\tprod{\mathop{\textstyle \prod }}%
\def\tbigcap{\mathop{\textstyle \bigcap }}%
\def\tbigwedge{\mathop{\textstyle \bigwedge }}%
\def\tbigoplus{\mathop{\textstyle \bigoplus }}%
\def\tbigodot{\mathop{\textstyle \bigodot }}%
\def\tbigsqcup{\mathop{\textstyle \bigsqcup }}%
\def\tcoprod{\mathop{\textstyle \coprod }}%
\def\tbigcup{\mathop{\textstyle \bigcup }}%
\def\tbigvee{\mathop{\textstyle \bigvee }}%
\def\tbigotimes{\mathop{\textstyle \bigotimes }}%
\def\tbiguplus{\mathop{\textstyle \biguplus }}%
\newtoks\temptoksa
\newtoks\temptoksb
\newtoks\temptoksc
\def\msi@int#1#2{%
\def\@temp{{#1#2\the\temptoksc_{\the\temptoksa}^{\the\temptoksb}}
\futurelet\@nextcs
\@int
}
\def\@int{%
\ifx\@nextcs\limits
\typeout{Found limits}%
\temptoksc={\limits}%
\let\@next\@intgobble%
\else\ifx\@nextcs\nolimits
\typeout{Found nolimits}%
\temptoksc={\nolimits}%
\let\@next\@intgobble%
\else
\typeout{Did not find limits or no limits}%
\temptoksc={}%
\let\@next\msi@limits%
\fi\fi
\@next
}%
\def\@intgobble#1{%
\typeout{arg is #1}%
\msi@limits
}
\def\msi@limits{%
\temptoksa={}%
\temptoksb={}%
\@ifnextchar_{\@limitsa}{\@limitsb}%
}
\def\@limitsa_#1{%
\temptoksa={#1}%
\@ifnextchar^{\@limitsc}{\@temp}%
}
\def\@limitsb{%
\@ifnextchar^{\@limitsc}{\@temp}%
}
\def\@limitsc^#1{%
\temptoksb={#1}%
\@ifnextchar_{\@limitsd}{\@temp
}
\def\@limitsd_#1{%
\temptoksa={#1}%
\@temp
}
\def\dint{\msi@int\displaystyle\int}%
\def\diint{\msi@int\displaystyle\iint}%
\def\diiint{\msi@int\displaystyle\iiint}%
\def\diiiint{\msi@int\displaystyle\iiiint}%
\def\didotsint{\msi@int\displaystyle\idotsint}%
\def\doint{\msi@int\displaystyle\oint}%
\def\dsum{\mathop{\displaystyle \sum }}%
\def\dprod{\mathop{\displaystyle \prod }}%
\def\dbigcap{\mathop{\displaystyle \bigcap }}%
\def\dbigwedge{\mathop{\displaystyle \bigwedge }}%
\def\dbigoplus{\mathop{\displaystyle \bigoplus }}%
\def\dbigodot{\mathop{\displaystyle \bigodot }}%
\def\dbigsqcup{\mathop{\displaystyle \bigsqcup }}%
\def\dcoprod{\mathop{\displaystyle \coprod }}%
\def\dbigcup{\mathop{\displaystyle \bigcup }}%
\def\dbigvee{\mathop{\displaystyle \bigvee }}%
\def\dbigotimes{\mathop{\displaystyle \bigotimes }}%
\def\dbiguplus{\mathop{\displaystyle \biguplus }}%
\if@compatibility\else
\RequirePackage{amsmath}
\fi
\def\makeatother\endinput{\makeatother\endinput}
\bgroup
\ifx\ds@amstex\relax
\message{amstex already loaded}\aftergroup\makeatother\endinput
\else
\@ifpackageloaded{amsmath}%
{\if@compatibility\message{amsmath already loaded}\fi\aftergroup\makeatother\endinput}
{}
\@ifpackageloaded{amstex}%
{\if@compatibility\message{amstex already loaded}\fi\aftergroup\makeatother\endinput}
{}
\@ifpackageloaded{amsgen}%
{\if@compatibility\message{amsgen already loaded}\fi\aftergroup\makeatother\endinput}
{}
\fi
\egroup
\typeout{TCILATEX defining AMS-like constructs in LaTeX 2.09 COMPATIBILITY MODE}
\let\DOTSI\relax
\def\RIfM@{\relax\ifmmode}%
\def\FN@{\futurelet\next}%
\newcount\intno@
\def\iint{\DOTSI\intno@\tw@\FN@\ints@}%
\def\iiint{\DOTSI\intno@\thr@@\FN@\ints@}%
\def\iiiint{\DOTSI\intno@4 \FN@\ints@}%
\def\idotsint{\DOTSI\intno@\z@\FN@\ints@}%
\def\ints@{\findlimits@\ints@@}%
\newif\iflimtoken@
\newif\iflimits@
\def\findlimits@{\limtoken@true\ifx\next\limits\limits@true
\else\ifx\next\nolimits\limits@false\else
\limtoken@false\ifx\ilimits@\nolimits\limits@false\else
\ifinner\limits@false\else\limits@true\fi\fi\fi\fi}%
\def\multint@{\int\ifnum\intno@=\z@\intdots@
\else\intkern@\fi
\ifnum\intno@>\tw@\int\intkern@\fi
\ifnum\intno@>\thr@@\int\intkern@\fi
\int
\def\multintlimits@{\intop\ifnum\intno@=\z@\intdots@\else\intkern@\fi
\ifnum\intno@>\tw@\intop\intkern@\fi
\ifnum\intno@>\thr@@\intop\intkern@\fi\intop}%
\def\intic@{%
\mathchoice{\hskip.5em}{\hskip.4em}{\hskip.4em}{\hskip.4em}}%
\def\negintic@{\mathchoice
{\hskip-.5em}{\hskip-.4em}{\hskip-.4em}{\hskip-.4em}}%
\def\ints@@{\iflimtoken@
\def\ints@@@{\iflimits@\negintic@
\mathop{\intic@\multintlimits@}\limits
\else\multint@\nolimits\fi
\eat@
\else
\def\ints@@@{\iflimits@\negintic@
\mathop{\intic@\multintlimits@}\limits\else
\multint@\nolimits\fi}\fi\ints@@@}%
\def\intkern@{\mathchoice{\!\!\!}{\!\!}{\!\!}{\!\!}}%
\def\plaincdots@{\mathinner{\cdotp\cdotp\cdotp}}%
\def\intdots@{\mathchoice{\plaincdots@}%
{{\cdotp}\mkern1.5mu{\cdotp}\mkern1.5mu{\cdotp}}%
{{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}%
{{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}}%
\def\RIfM@{\relax\protect\ifmmode}
\def\RIfM@\expandafter\text@\else\expandafter\mbox\fi{\RIfM@\expandafter\RIfM@\expandafter\text@\else\expandafter\mbox\fi@\else\expandafter\mbox\fi}
\let\nfss@text\RIfM@\expandafter\text@\else\expandafter\mbox\fi
\def\RIfM@\expandafter\text@\else\expandafter\mbox\fi@#1{\mathchoice
{\textdef@\displaystyle\f@size{#1}}%
{\textdef@\textstyle\tf@size{\firstchoice@false #1}}%
{\textdef@\textstyle\sf@size{\firstchoice@false #1}}%
{\textdef@\textstyle \ssf@size{\firstchoice@false #1}}%
\glb@settings}
\def\textdef@#1#2#3{\hbox{{%
\everymath{#1}%
\let\f@size#2\selectfont
#3}}}
\newif\iffirstchoice@
\firstchoice@true
\def\Let@{\relax\iffalse{\fi\let\\=\cr\iffalse}\fi}%
\def\vspace@{\def\vspace##1{\crcr\noalign{\vskip##1\relax}}}%
\def\multilimits@{\bgroup\vspace@\Let@
\baselineskip\fontdimen10 \scriptfont\tw@
\advance\baselineskip\fontdimen12 \scriptfont\tw@
\lineskip\thr@@\fontdimen8 \scriptfont\thr@@
\lineskiplimit\lineskip
\vbox\bgroup\ialign\bgroup\hfil$\m@th\scriptstyle{##}$\hfil\crcr}%
\def\Sb{_\multilimits@}%
\def\endSb{\crcr\egroup\egroup\egroup}%
\def\Sp{^\multilimits@}%
\let\endSp\endSb
\newdimen\ex@
\[email protected]
\def\rightarrowfill@#1{$#1\m@th\mathord-\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}%
\def\leftarrowfill@#1{$#1\m@th\mathord\leftarrow\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill\mkern-6mu\mathord-$}%
\def\leftrightarrowfill@#1{$#1\m@th\mathord\leftarrow
\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}%
\def\overrightarrow{\mathpalette\overrightarrow@}%
\def\overrightarrow@#1#2{\vbox{\ialign{##\crcr\rightarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}%
\let\overarrow\overrightarrow
\def\overleftarrow{\mathpalette\overleftarrow@}%
\def\overleftarrow@#1#2{\vbox{\ialign{##\crcr\leftarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}%
\def\overleftrightarrow{\mathpalette\overleftrightarrow@}%
\def\overleftrightarrow@#1#2{\vbox{\ialign{##\crcr
\leftrightarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}%
\def\underrightarrow{\mathpalette\underrightarrow@}%
\def\underrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil
$\crcr\noalign{\nointerlineskip}\rightarrowfill@#1\crcr}}}%
\let\underarrow\underrightarrow
\def\underleftarrow{\mathpalette\underleftarrow@}%
\def\underleftarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil
$\crcr\noalign{\nointerlineskip}\leftarrowfill@#1\crcr}}}%
\def\underleftrightarrow{\mathpalette\underleftrightarrow@}%
\def\underleftrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th
\hfil#1#2\hfil$\crcr
\noalign{\nointerlineskip}\leftrightarrowfill@#1\crcr}}}%
\def\qopnamewl@#1{\mathop{\operator@font#1}\nlimits@}
\let\nlimits@\displaylimits
\def\setboxz@h{\setbox\z@\hbox}
\def\varlim@#1#2{\mathop{\vtop{\ialign{##\crcr
\hfil$#1\m@th\operator@font lim$\hfil\crcr
\noalign{\nointerlineskip}#2#1\crcr
\noalign{\nointerlineskip\kern-\ex@}\crcr}}}}
\def\rightarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@
$#1\copy\z@\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\box\z@\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}
\def\leftarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@
$#1\mathord\leftarrow\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\copy\z@\mkern-2mu$}\hfill
\mkern-6mu\box\z@$}
\def\qopnamewl@{proj\,lim}{\qopnamewl@{proj\,lim}}
\def\qopnamewl@{inj\,lim}{\qopnamewl@{inj\,lim}}
\def\mathpalette\varlim@\rightarrowfill@{\mathpalette\varlim@\rightarrowfill@}
\def\mathpalette\varlim@\leftarrowfill@{\mathpalette\varlim@\leftarrowfill@}
\def\mathpalette\varliminf@{}{\mathpalette\mathpalette\varliminf@{}@{}}
\def\mathpalette\varliminf@{}@#1{\mathop{\underline{\vrule\@depth.2\ex@\@width\z@
\hbox{$#1\m@th\operator@font lim$}}}}
\def\mathpalette\varlimsup@{}{\mathpalette\mathpalette\varlimsup@{}@{}}
\def\mathpalette\varlimsup@{}@#1{\mathop{\overline
{\hbox{$#1\m@th\operator@font lim$}}}}
\def\stackunder#1#2{\mathrel{\mathop{#2}\limits_{#1}}}%
\begingroup \catcode `|=0 \catcode `[= 1
\catcode`]=2 \catcode `\{=12 \catcode `\}=12
\catcode`\\=12
|gdef|@alignverbatim#1\end{align}[#1|end[align]]
|gdef|@salignverbatim#1\end{align*}[#1|end[align*]]
|gdef|@alignatverbatim#1\end{alignat}[#1|end[alignat]]
|gdef|@salignatverbatim#1\end{alignat*}[#1|end[alignat*]]
|gdef|@xalignatverbatim#1\end{xalignat}[#1|end[xalignat]]
|gdef|@sxalignatverbatim#1\end{xalignat*}[#1|end[xalignat*]]
|gdef|@gatherverbatim#1\end{gather}[#1|end[gather]]
|gdef|@sgatherverbatim#1\end{gather*}[#1|end[gather*]]
|gdef|@gatherverbatim#1\end{gather}[#1|end[gather]]
|gdef|@sgatherverbatim#1\end{gather*}[#1|end[gather*]]
|gdef|@multilineverbatim#1\end{multiline}[#1|end[multiline]]
|gdef|@smultilineverbatim#1\end{multiline*}[#1|end[multiline*]]
|gdef|@arraxverbatim#1\end{arrax}[#1|end[arrax]]
|gdef|@sarraxverbatim#1\end{arrax*}[#1|end[arrax*]]
|gdef|@tabulaxverbatim#1\end{tabulax}[#1|end[tabulax]]
|gdef|@stabulaxverbatim#1\end{tabulax*}[#1|end[tabulax*]]
|endgroup
\def\align{\@verbatim \frenchspacing\@vobeyspaces \@alignverbatim
You are using the "align" environment in a style in which it is not defined.}
\let\endalign=\endtrivlist
\@namedef{align*}{\@verbatim\@salignverbatim
You are using the "align*" environment in a style in which it is not defined.}
\expandafter\let\csname endalign*\endcsname =\endtrivlist
\def\alignat{\@verbatim \frenchspacing\@vobeyspaces \@alignatverbatim
You are using the "alignat" environment in a style in which it is not defined.}
\let\endalignat=\endtrivlist
\@namedef{alignat*}{\@verbatim\@salignatverbatim
You are using the "alignat*" environment in a style in which it is not defined.}
\expandafter\let\csname endalignat*\endcsname =\endtrivlist
\def\xalignat{\@verbatim \frenchspacing\@vobeyspaces \@xalignatverbatim
You are using the "xalignat" environment in a style in which it is not defined.}
\let\endxalignat=\endtrivlist
\@namedef{xalignat*}{\@verbatim\@sxalignatverbatim
You are using the "xalignat*" environment in a style in which it is not defined.}
\expandafter\let\csname endxalignat*\endcsname =\endtrivlist
\def\gather{\@verbatim \frenchspacing\@vobeyspaces \@gatherverbatim
You are using the "gather" environment in a style in which it is not defined.}
\let\endgather=\endtrivlist
\@namedef{gather*}{\@verbatim\@sgatherverbatim
You are using the "gather*" environment in a style in which it is not defined.}
\expandafter\let\csname endgather*\endcsname =\endtrivlist
\def\multiline{\@verbatim \frenchspacing\@vobeyspaces \@multilineverbatim
You are using the "multiline" environment in a style in which it is not defined.}
\let\endmultiline=\endtrivlist
\@namedef{multiline*}{\@verbatim\@smultilineverbatim
You are using the "multiline*" environment in a style in which it is not defined.}
\expandafter\let\csname endmultiline*\endcsname =\endtrivlist
\def\arrax{\@verbatim \frenchspacing\@vobeyspaces \@arraxverbatim
You are using a type of "array" construct that is only allowed in AmS-LaTeX.}
\let\endarrax=\endtrivlist
\def\tabulax{\@verbatim \frenchspacing\@vobeyspaces \@tabulaxverbatim
You are using a type of "tabular" construct that is only allowed in AmS-LaTeX.}
\let\endtabulax=\endtrivlist
\@namedef{arrax*}{\@verbatim\@sarraxverbatim
You are using a type of "array*" construct that is only allowed in AmS-LaTeX.}
\expandafter\let\csname endarrax*\endcsname =\endtrivlist
\@namedef{tabulax*}{\@verbatim\@stabulaxverbatim
You are using a type of "tabular*" construct that is only allowed in AmS-LaTeX.}
\expandafter\let\csname endtabulax*\endcsname =\endtrivlist
\def\endequation{%
\ifmmode\ifinner
\iftag@
\addtocounter{equation}{-1}
$\hfil
\displaywidth\linewidth\@taggnum\egroup \endtrivlist
\global\tag@false
\global\@ignoretrue
\else
$\hfil
\displaywidth\linewidth\@eqnnum\egroup \endtrivlist
\global\tag@false
\global\@ignoretrue
\fi
\else
\iftag@
\addtocounter{equation}{-1}
\eqno \hbox{\@taggnum}
\global\tag@false%
$$\global\@ignoretrue
\else
\eqno \hbox{\@eqnnum}
$$\global\@ignoretrue
\fi
\fi\fi
}
\newif\iftag@ \tag@false
\def\tag{\@ifnextchar*{\@TCItagstar}{\@TCItag}}
\def\@TCItag#1{%
\global\tag@true
\global\def\@taggnum{(#1)}%
\global\def\@currentlabel{#1}}
\def\@TCItagstar*#1{%
\global\tag@true
\global\def\@taggnum{#1}%
\global\def\@currentlabel{#1}}
\@ifundefined{tag}{
\def\tag{\@ifnextchar*{\@tagstar}{\@tag}}
\def\@tag#1{%
\global\tag@true
\global\def\@taggnum{(#1)}}
\def\@tagstar*#1{%
\global\tag@true
\global\def\@taggnum{#1}}
}{}
\def\tfrac#1#2{{\textstyle {#1 \over #2}}}%
\def\dfrac#1#2{{\displaystyle {#1 \over #2}}}%
\def\binom#1#2{{#1 \choose #2}}%
\def\tbinom#1#2{{\textstyle {#1 \choose #2}}}%
\def\dbinom#1#2{{\displaystyle {#1 \choose #2}}}%
\makeatother
\endinput
|
2,869,038,154,029 | arxiv | \section{Introduction}
\noindent The discovery of the covariance of Maxwell's equations with respect to the
conformal group $C = \left\{ L, D, P_4, S_4 \right\} $
(where $L$, $D$, $P_4$, $S_4$ denote \mbox{Lorentz,} \mbox{dilatation,} \mbox{
Poincar\'e,} and special conformal transformations respectively, building up
$C$)~\cite{1} has induced several
authors~\cite{2} to conjecture that Minkowski space-time ${\mathbb M}={\mathbb R}^{3,1}$ may be densely
contained in conformally compactified\footnote{From which
Robertson-Walker space-time ${\mathbb M}_{RW}= S^3 \times R^1$ is obtained,
familiar to cosmologists (being at the origin of the Cosmological
Principle~\cite{3}),
where $R^1$ is conceived as the infinite covering of $S^1$.} space-time ${\mathbb M}_c$:
\begin{equation}
{\mathbb M}_c=\frac{S^3 \times S^1}{{\mathbb Z}_2} \label{eq1}
\end{equation}
often conceived as the homogeneous space of the conformal group:
$$
{\mathbb M}_c=\frac{C}{c_1}
$$
where $c_1 = \left\{ L, D, S_4 \right\}$ is the stability group of the origin $x_\mu=0$.
As is well known, $C$ may be linearly represented by $SO(4,2)$, acting in ${\mathbb R}^{4,2}$ and
containing the Lorentz group $SO(3,1)$ as a subgroup which, because of the
relevance of space-time reflections for natural phenomena, should be
extended to $O(3,1)$. But then $SO(4,2)$ should also be extended to $O(4,2)$,
including conformal reflections $I$ (with respect to hyperplanes orthogonal
to the $5^{\rm th}$ and $6^{\rm th}$ axes), whose relevance for physics should then be
expected as well. To start with, in this case ${\mathbb M}_c=C/c_1$ seems not
to be the only automorphism space of $O(4,2)$, since:
\begin{equation}
I {\mathbb M}_c I^{-1} = I \frac{C}{c_1} I^{-1} = \frac{C}{c_2} = {\mathbb P}_c = \frac{S^3
\times S^1}{{\mathbb Z}_2} \label{eq2}
\end{equation}
where $c_2 =\left\{ L, D, P_4\right\}$ is the stability group of
infinity. Therefore, since $c_1$ and $c_2$ are conjugate, ${\mathbb M}_c$ and ${\mathbb P}_c$ are two copies of
the same homogeneous space ${\mathcal H}$ of the conformal group including
reflections, and, as we will see, both are needed to represent the group linearly.
Because of eq. (\ref{eq2}) we will call ${\mathbb M}_c$ and ${\mathbb P}_c$ conformally dual.
There are good arguments~\cite{4} (see also footnote \ref{fn1}) in favor of the
hypothesis that ${\mathbb P}_c$ may represent conformally compactified momentum space
${\mathbb P}={\mathbb R}^{3,1}$. In this case ${\mathbb M}_c$ and ${\mathbb P}_c$ build up conformally compactified phase
space, which then is a space of automorphisms of the conformal group
$C$ including reflections.
\section{Conformally compactified phase space}
For simultaneous compactification of ${\mathbb M}$ and ${\mathbb P}$ in ${\mathbb M}_c$ and ${\mathbb P}_c$ no exact Fourier
transform is known. It can only be approximated by a finite lattice phase
space~\cite{5}.
An exact Fourier transform may be defined instead in the 2-dimensional
space-time, when ${\mathbb M} ={\mathbb R}^{1,1}={\mathbb P}$ and for which
\begin{equation}
{\mathbb M}_c=\frac{S^1 \times S^1}{{\mathbb Z}_2}={\mathbb P}_c \label{eq4}
\end{equation}
Then, inscribing in each $S^1$, of radius $R$, of ${\mathbb M}_c$ and in each $S^1$ of radius $K$
of ${\mathbb P}_c$ a regular polygon with
\begin{equation}
2 N = 2 \pi R K \label{eq5}
\end{equation}
vertices, any function $f\left(x_{nm}\right)$ defined on the resulting lattice
$M_L \subset {\mathbb M}_c$ is correlated to the Fourier-transformed
$ F\left( k_{\rho\tau} \right)$ on the
$P_L \subset {\mathbb P}_c$ lattice by the finite Fourier series:
\begin{eqnarray}
f\left(x_{nm}\right)=\frac{1}{2 \pi R^2} \sum_{\rho, \tau = -N}^{N-1}
\varepsilon^{\left(n \rho - m \tau\right)} F\left(k_{\rho \tau}\right) \nonumber \\
\;\;\;\;\; \label{eq6} \\
F\left( k_{\rho\tau} \right) = \frac{1}{2 \pi K^2} \sum_{n,m = -N}^{N-1}
\varepsilon^{-\left(n \rho - m \tau \right)} f \left(x_{nm}\right) \nonumber
\end{eqnarray}
where $ \varepsilon=e^{i \frac{\pi}{N}}$ is the $2N$-th root of unity.
They may be called Fourier transforms since either for $R \rightarrow \infty$
or $K \rightarrow \infty$ (or both) they coincide with the standard
ones. This further confirms the identification of ${\mathbb P}_c$ with momentum
space, which is here on purpose characterized geometrically rather than
algebraically~(Poisson bracket). On this model the action of the conformal
group $O(2, 2)$ may be easily operated and tested.
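As a numerical side check (ours, not in the original), the pair (\ref{eq6}) is
exactly invertible precisely because of the constraint (\ref{eq5}); the
following Python sketch, with illustrative values $N=8$ and $R=1$, verifies
the round trip on random lattice data:
\begin{verbatim}
import numpy as np

# Illustrative check of the finite Fourier pair (eq6); N and R are arbitrary.
N = 8                          # the inscribed polygon has 2N vertices
R = 1.0                        # radius of each S^1 of M_c (assumed value)
K = 2 * N / (2 * np.pi * R)    # fixed by 2N = 2*pi*R*K (eq5)

idx = np.arange(-N, N)         # n, m, rho, tau all run over -N, ..., N-1
E = np.exp(1j * np.pi / N * np.outer(idx, idx))   # E[a, b] = eps^(a*b)

rng = np.random.default_rng(0)
f = rng.standard_normal((2 * N, 2 * N))           # f(x_nm) on the lattice M_L

# F(k_{rho,tau}) = 1/(2 pi K^2) sum_{n,m} eps^{-(n rho - m tau)} f(x_nm)
F = (E.conj() @ f @ E) / (2 * np.pi * K**2)
f_back = (E @ F @ E.conj()) / (2 * np.pi * R**2)

print(np.abs(f - f_back).max())   # ~1e-15: the pair is exactly invertible
\end{verbatim}
The normalizations $1/(2\pi R^2)$ and $1/(2\pi K^2)$ compose to $1/(2N)^2$
exactly because of (\ref{eq5}), which is why the round trip is exact.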
\section{Conformal duality}
The non-linear, local action of $I$ on ${\mathbb M}$ is well known; for $x_\mu \in {\mathbb M}$ we have:
\begin{equation}
I \; : \; x_\mu \rightarrow I\left( x_\mu \right) = \pm \frac{x_\mu}{x^2} \label{eq7}
\end{equation}
For $x^2 \not= 0$ and $x_\mu$ space-like, if $x$ indicates the distance of a
point from the
origin, we have \footnote{$I$ maps every point inside the sphere $S^2$, at a distance $x$ from the
center, to a point outside of it at a distance $x^{-1}$.
For ${\mathbb M}={\mathbb R}^{2,1}$ the sphere $S^2$ reduces to a circle $S^1$, and then
(\ref{eq7p}) is reminiscent of target-space duality in string theory~\cite{6}, which then
might be a consequence of conformal inversion. For ${\mathbb M}={\mathbb R}^{1,1}$, that is,
for the two-dimensional model, $I$ may be locally represented through a quotient
rational transformation by means of $I=i \sigma_2$, and the result is
(\ref{eq7p}), which in turn represents the action of $I$ for the conformal
group $G= \left\{ D, P_1, T_1\right\}$ on a straight line ${\mathbb R}^1$.}:
\begin{equation}
I \; : \; x \rightarrow I\left( x \right) = \frac{1}{x} .
\label{eq7p}
\end{equation}
Since ${\mathbb M}$ is densely contained in ${\mathbb M}_c$, $x_\mu$ defines a point of the homogeneous
space ${\mathcal H}$ of automorphisms for $C$; as such, $x_\mu$ must be conceived as
dimensionless in (\ref{eq7}), as usually done in mathematics.
Therefore, for physical applications, to represent space-time we must
substitute $x_\mu$ with $x_\mu/l$, where $l$ represents an (arbitrary) unit of
length; then from (\ref{eq7}) and (\ref{eq7p}) we obtain:
\underline{Proposition $P_1$}: Conformal reflections determine a map, in
space, of the microworld to the macroworld (with respect to $l$) and vice-versa.
The conformal group may be well represented in momentum space ${\mathbb P}={\mathbb R}^{3,1}$, densely
contained in ${\mathbb P}_c$, where the action of $I$ induces non-linear transformations
like (\ref{eq7}) and (\ref{eq7p}), where $x_\mu$ and $x$ are substituted by
$k_\mu \in {\mathbb P}$ and $k$. If we then take (\ref{eq7p}) and the corresponding relation for
${\mathbb P}$, we obtain (see also footnote 2):
\begin{equation}
I \; : \; x k \rightarrow I\left( x k \right) =
\frac{1 }{x k} \label{eq8}
\end{equation}
Now physical momentum $p$ is obtained after multiplying the wave-number $k$ by
an (arbitrary) unit of action $H$, by which (\ref{eq8}) becomes:
\begin{equation}
I \; : \; \frac{x p}{H} \rightarrow I\left( \frac{x p}{H} \right) =
\frac{H }{x p} \label{eq9}
\end{equation}
from which we obtain:
\underline{Proposition $P_2$}: Conformal reflections determine a map,
in phase space, of the
world of micro actions to the one of macro actions (with respect to $H$), and
vice-versa.
Now if we choose for the arbitrary unit $H$ Planck's constant $\hbar$, then from
propositions $P_1$ and $P_2$ we have:
\underline{Corollary $C_2$}: Conformal reflections determine a map between classical and
quantum mechanics.
Let us now recall the identifications ${\mathbb M}_c \equiv C/c_1$ and ${\mathbb P}_c \equiv
C/c_2$, to be conceived as two copies of
the homogeneous space ${\mathcal H}$, representing conformally compactified
space-time and
momentum space respectively, and that $I {\mathbb M}_c I^{-1} ={\mathbb P}_c$; then
$I$ represents a map\footnote{
The action of $I$ may be rigorously tested in the two-dimensional model, where
$I(x_{nm}) =k_{nm}$ and the action of $I$ is linear in the compactified phase
space, in contrast with its local non-linear action in ${\mathbb M}$ and ${\mathbb P}$, as will be further
discussed elsewhere.}
of every point $x_\mu$ of ${\mathbb M}$ to a point $k_\mu$ of
${\mathbb P}$: $ I \; : \; x_\mu \rightarrow I(x_\mu) = k_\mu$, and we have:
\underline{Proposition $P_3$}: Conformal reflections determine a map between
space-time and momentum space.
Let us now assume, as the history of celestial mechanics suggests, that
space-time ${\mathbb M}$ is the most appropriate for the description of classical
mechanics; then, as a consequence of propositions $P_1$, $P_3$ and of corollary
$C_2$, momentum space should be the most appropriate for the description of
quantum mechanics. The legitimacy of this conjecture seems in fact to be
supported by spinor geometry, as we will see.
\section{Quantum mechanics in momentum space}
Notoriously, the most elementary constituents of matter are fermions,
represented by spinors, whose geometry, as formulated by its discoverer E.
Cartan~\cite{7}, already has the form of equations of motion for fermions in
momentum space.
In fact, given a pseudo-euclidean, $2n$-dimensional vector space $V$ with
signature $\left(k,l \right)$, $k+l=2n$, and the associated Clifford algebra
${\mathbb C}\ell\left(k,l \right)$ with generators $\gamma_a$, a Dirac spinor $\psi$ is an
element of the endomorphism space of ${\mathbb C}\ell(k,l)$ and is defined by Cartan's equation
\begin{equation}
\gamma_a p^a \psi =0 \label{eq10}
\end{equation}
where $p_a$ are the components of a vector $p \in V$.
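As an illustration (ours), for the Minkowski signature $(3,1)$ a two-component
(Weyl) reduction of (\ref{eq10}) reads $(p^0 I - \vec p \cdot \vec\sigma)\,\psi=0$,
which admits $\psi \neq 0$ exactly when $p$ is null; a short Python check with
arbitrary sample vectors:
\begin{verbatim}
import numpy as np

# Weyl reduction of Cartan's equation (eq10) in signature (3,1):
# (p0*I - p.sigma) psi = 0 has psi != 0 iff det = p0^2 - |p|^2 = 0.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def cartan(p0, p1, p2, p3):
    return p0 * I2 - (p1 * sx + p2 * sy + p3 * sz)

M = cartan(1.0, 0.6, 0.8, 0.0)        # null vector: 1 = 0.36 + 0.64
print(abs(np.linalg.det(M)))          # ~0, so a nonzero spinor exists
psi = np.linalg.svd(M)[2][-1].conj()  # kernel vector = the Weyl spinor
print(np.linalg.norm(M @ psi))        # ~0

print(abs(np.linalg.det(cartan(1.0, 0.3, 0.0, 0.0))))  # time-like: nonzero
\end{verbatim}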
Now it may be shown that for the signatures $(k,l)= (3,1),(4,1)$,
the Weyl, (Maxwell), Majorana, and Dirac equations, respectively, may be
naturally obtained~\cite{8} from (\ref{eq10}), precisely in momentum space. For the
signature $(4,2)$ eq. (\ref{eq10}) contains the twistor equations and, for $\psi
\not= 0$, the vector $p$ is null: $p_a p^a =0$, and the directions of $p$ form
the projective quadric\footnote{The Weyl equation in ${\mathbb P}={\mathbb R}^{3,1}$, out of
which Maxwell's equation for the electromagnetic tensor $F_{\mu\nu}$
(expressed bilinearly in terms of Weyl spinors) may be obtained, is contained in
the twistor equation in ${\mathbb P}={\mathbb R}^{4,2}$, defining the projective quadric ${\mathbb P}_c =
\left( S^3 \times S^1 \right)/{\mathbb Z}_2$. This is a further argument why momentum
space ${\mathbb P}$ should be densely contained in ${\mathbb P}_c$. \label{fn1}} $(S^3 \times S^1)/{\mathbb Z}_2$
identical to conformally compactified momentum space ${\mathbb P}_c$ given in (\ref{eq2}).
For the signature $(5,3)$, instead, one may easily obtain~\cite{8} from (\ref{eq10})
the equation
\begin{equation}
\left( \gamma_\mu p^\mu + \vec{\pi} \cdot \vec{\sigma} \otimes \gamma_5+M\right) N =0
\label{eq11}
\end{equation}
where
$ \vec{\pi}=\langle \tilde{N} , \vec{\sigma} \otimes \gamma_5 N \rangle$
and $N=\left[ \matrix{ \psi_1 \cr \psi_2 } \right]$ ;
$M= \left[ \matrix { m & 0 \cr 0 & m} \right]$ ;
$\tilde N=\left[ \tilde \psi_1 , \tilde \psi_2 \right]$ ;
$\tilde \psi_j=\psi_j^\dagger \gamma_0$,
with $\psi_1$,$\psi_2$ - space-time Dirac spinors, and
$\vec{\sigma}=\left( \sigma_1,\; \sigma_2,\;\sigma_3 \right)$ Pauli matrices.
Eq. (\ref{eq11}) represents the equation, in momentum space, of the
proton-neutron doublet interacting with the pseudoscalar pion triplet
$\vec{\pi}$.
Also the equation for the electroweak model may be easily obtained~\cite{9} in
the frame of ${\mathbb C}\ell(5,3)$.
All this may further justify the conjecture that spinor geometry in
conformally compactified momentum space is the appropriate arena for the
description of the quantum mechanics of fermions.
It is remarkable that for signatures $(3,1)$, $(4,1)$ and $(5,3)$, $(7,1)$
(while not for $(4,2)$) the real
components $p_a$ of $p$ may be bilinearly expressed in terms of spinors~\cite{8}.
If spinor geometry in momentum space is at the origin of the quantum mechanics
of fermions, then their multiplicity, observed in natural phenomena, could
be a natural consequence of the fact that a Dirac spinor associated with
${\mathbb C}\ell(2n)$ has $2^n$ components, which naturally split into multiplets of space-time
spinors (as already appears in eq. (\ref{eq11})). Since in this approach vectors
appear as bilinearly composed of spinors, some of the problematic aspects of
dimensional reduction could be avoided by dealing merely with spinors~\cite{10}.
Also rotations naturally arise in spinor spaces as products of reflections
operated by the generators $\gamma_a$ of the Clifford algebras. These could then
be at the origin of the so-called internal symmetry. This appears in eq.
(\ref{eq11}), where the isospin symmetry of nuclear forces arises from conformal
reflections, which appear there as the units of the quaternion field of
numbers, of which the proton-neutron equivalence for strong interactions could
be a realization in nature.
For ${\mathbb C}\ell(8)$ and higher Clifford algebras, and the associated spinors,
octonions could be expected to play a role as recently advocated~\cite{10}.
\section{Some further consequences of conformal duality}
Compact phase space implies, for field theories, the absence of the
concept of infinity in both space-time and momentum and, provided we may
rigorously define Fourier-dual manifolds on which fields may be defined, it
would also imply
the absence of both infrared and ultraviolet divergences in perturbation
expansions. This is, for the moment, possible only in the four-dimensional
phase-space model, where such manifolds restrict to the lattices $M_L$ and $P_L$
on a double torus, where the Fourier transforms (\ref{eq6}) hold.
One could call $M_L$ and $P_L$ the physical
spaces, which are compact and discrete, to distinguish them from the
mathematical spaces ${\mathbb M}_c$ and ${\mathbb P}_c$, which are compact and continuous; the
latter are only conformally dual, while the former are both conformally and
Fourier dual.
In the realistic eight-dimensional phase space one would also expect to
find physical spaces represented by lattices~\cite{5}, as non-commutative
geometry also seems to suggest~\cite{11}.
Let us now consider a canonical example of a quantum system: the
hydrogen atom in stationary states. According to our hypothesis it should be
appropriate to deal with it in momentum space (as a non-relativistic limit of
the Dirac equation for an electron subject to an external e.m. field). This
is possible, as shown by V. Fock~\cite{12}, through the integral equation:
\begin{equation}
\phi\left( p \right) = \frac{\lambda}{2 \pi^2} \int_{S^3} \,
\frac{\phi\left(q\right)}{\left( p - q \right)^2 } d^3 q \label{eq12}
\end{equation}
where $S^3$ is the one-point compactification of momentum space ${\mathbb P}={\mathbb R}^{3}$,
and $\lambda=\frac{e^2}{\hbar c } \sqrt{\frac{p_0^2}{-2m E}}$, where
$p_0$ is a unit of momentum and $E$ the (negative) energy of the H-atom.
For $\lambda = n+1$, $\phi(p) = {Y_{nlm}}{\left(\alpha\beta\gamma \right)}$,
which are the harmonics on $S^3$, and for $p_0=mc$, we obtain:
$$
E_n =- \frac{m e^4}{2 \hbar^2 \left( n+1 \right)^2}
$$
which are the energy eigenvalues of the H-atom.
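Numerically (our illustration; the values $\alpha \simeq 1/137.036$ and
$mc^2 \simeq 511$ keV are standard inputs, not derived here), writing
$E_n = -mc^2\alpha^2/2(n+1)^2$ reproduces the familiar values in Python:
\begin{verbatim}
ALPHA = 1 / 137.035999   # fine structure constant (standard value, assumed)
MC2_EV = 510998.95       # electron rest energy in eV (standard value, assumed)
for n in range(4):
    E = -MC2_EV * ALPHA**2 / (2 * (n + 1) ** 2)  # = -m e^4/(2 hbar^2 (n+1)^2)
    print(n, round(E, 4))   # -13.6057, -3.4014, -1.5117, -0.8504 eV
\end{verbatim}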
It is interesting to observe that eq. (\ref{eq12}) is a purely geometrical
equation, where the only quantum parameter\footnote{The geometrical
determination of this parameter in eq. (\ref{eq12}) (through harmonic
analysis, say) could furnish a clue to understanding the geometrical origin of
quantum mechanics. This was a persistent hope of the late Wolfgang Pauli.}
is the dimensionless fine structure constant
$$
\frac{e^2}{\hbar c} = \frac{1}{137.036\ldots}
$$
According to this equation the stationary states of the H-atom may be
represented as eigenvibrations of the $S^3$ sphere (of radius $p_0$) in
conformally compactified momentum space, out of which the quantum numbers
$n$, $l$, $m$ result. If conformal duality is realized
in nature, then there should exist a classical system represented by
eigenvibrations of $S^3$ in ${\mathbb M}_c$ or ${\mathbb M}_{RW}$.
In fact this system could be the universe, since recent observations of
distant galaxies (in the direction of the N-S galactic poles) have revealed
that their distribution may be represented~\cite{13} by the $S^3$ eigenfunction
\begin{equation}
Y_{n,0,0} = k_n \frac{\sin \left(n+1\right) \chi }{\sin \chi}
\label{eq13}
\end{equation}
with $k_n$ a constant, $\chi$ the geodetic distance from the center of the
corresponding eigenvibration on the $S^3$ sphere of the ${\mathbb M}_{RW}$ universe.
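As a side check (ours), the functions $\sin((n+1)\chi)/\sin\chi$ are orthogonal
on $S^3$ with the geodesic weight $\sin^2\chi$, which a short Python sketch
confirms numerically:
\begin{verbatim}
import numpy as np

chi = np.linspace(1e-6, np.pi - 1e-6, 20001)
w = np.sin(chi) ** 2                      # S^3 weight in the geodesic angle

def Y(n):                                 # Y_{n,0,0} up to the constant k_n
    return np.sin((n + 1) * chi) / np.sin(chi)

def integrate(y):                         # simple trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(chi)))

for n in range(3):
    print([round(integrate(Y(n) * Y(m) * w), 4) for m in range(3)])
# rows of (pi/2) * delta_{nm}: [1.5708, 0, 0], [0, 1.5708, 0], ...
\end{verbatim}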
Now $ Y_{n,0,0} $ is exactly the eigenfunction of the H-atom, however in
momentum space. If the astronomical observations confirm eq. (\ref{eq13}), then the
universe and the H-atom would represent a realization in nature of
conformal duality. Here we have in fact that $ Y_{n,0,0}$ on ${\mathbb P}_c$
represents the (most symmetric) eigenfunction of the (quantum) H-atom, and
the same $ Y_{n,0,0}$ on ${\mathbb M}_c$ may represent the (visible) mass distribution of the
(classical) universe. They could then be an example\footnote{There could be
other examples of conformal duality represented by our
planetary system. Observe in fact that, in order to compare with the
density of matter, the square of the $S^3$ harmonic $Y_{nlm}$ has to be taken. Now
$Y_{n00}^2$ presents maxima for $r_n=\left(n+\frac{1}{2} \right)r_0$, and
it has been shown by Y.K. Gulak~\cite{14} that the values of the large semi-axes
of the 10 major solar planets satisfy this rule, which could then suggest
that they arise from a planetary cloud presenting the structure of $S^3$
eigenvibrations, as will be discussed elsewhere.}
of conformally dual systems: one classical and one quantum mechanical.
There could be another important consequence of conformal duality. In fact,
suppose that ${\mathbb M}_c$ and ${\mathbb P}_c$ are also Fourier dual\footnote{
Even if, for
defining rigorously Fourier duality for ${\mathbb M}_c$ and ${\mathbb P}_c$, one may
have to abandon standard differential calculus, locally it may be assumed to
be approximately true, with reasonable confidence, since the spacing of the
possible lattice will be extremely small. In fact, taking for $K$ the Planck
radius, one could
have on the order of $10^{30}$ points of the lattice per centimeter.
}.
Then to the eigenfunctions ${Y_{nlm}}{\left(\alpha\beta\gamma \right)}$ on
${\mathbb P}_c$ of the H-atom there will correspond
on ${\mathbb M}={\mathbb R}^{3,1}$ (densely contained in ${\mathbb M}_c$) their Fourier transforms; that is,
the known eigenfunctions $\Psi_{nlm}\left(x_1,x_2,x_3 \right)$ in ${\mathbb M}$ of
the H-atom stationary states.
Now, according to propositions $P_1$, $P_3$ and corollary $C_2$, for high values of the
action of the system, that is, for high values of $n$ in $Y_{nlm}$, the
system should be identified with the corresponding classical one, and in
space-time ${\mathbb M}$, where it is represented by $\Psi_{nlm}\left(x_1,x_2,x_3
\right)$.
In fact this is what is postulated by the correspondence principle: for high
values of the quantum numbers the wavefunction $\Psi_{nlm}\left(x_1,x_2,x_3
\right)$
identifies with the Kepler orbits; that is, the same system (with potential
proportional to $1/r$) dealt with in the framework of classical mechanics. In this
way, at least in this particular example, the correspondence principle
appears as a consequence of conformal duality\footnote{Also the properties of
Fourier transforms play a role. Consider in fact a classical system with fixed
orbits: a massive point particle on $S^1$, say. Its quantization appears in
the Fourier-dual momentum space ${\mathbb Z}$: ($m=0,\pm1, \pm 2 ,\dots $), and for
large $m$ the eigenfunctions identify with $S^1$.} and precisely of propositions
$P_1$, $P_3$ and corollary $C_2$. Obviously, unlike the previous case of
duality, in this case it is the same system (the two-body problem) dealt with
once quantum mechanically in ${\mathbb P}_c$ and once classically in ${\mathbb M}_c$. What they have
in common is the $SO(4)$ group of symmetry, named accidental symmetry when
discovered by W. Pauli for the Kepler orbits, while here it derives from the
properties of conformal reflections, which preserve $S^3$, as seen from (\ref{eq2}).
|
2,869,038,154,030 | arxiv | \section{Introduction}
\label{introduction}
The ability of reinforcement learning (RL) agents to solve very large problems efficiently depends on building and leveraging knowledge that can be re-used in many circumstances. One type of such knowledge comes in the form of
options \cite{Sutton:1999:MSF:319103.319108, Precup2000TemporalAI}, temporally extended actions that can be viewed as specialized skills which can improve learning and planning efficiency \cite{Precup2000TemporalAI,TRIO}. The option-critic framework \cite{bacon2017option} proposed a formulation to learn option policies as well as the termination conditions end-to-end, through gradient descent, just from the agent's experience and rewards. However, this can lead to the option set collapsing in various ways \cite{bacon2017option,termination-critic2019}, for example, options becoming primitive actions, one option learning to solve the entire task and dominating the others, or several options becoming very similar. These degeneracies not only negatively impact the agent's ability to re-use the learned option set in new situations, they often hurt performance. Furthermore, learning options in a model-free setting is often accompanied by increased sample and computational complexity over primitive action policies, without the desired performance improvements. This raises the fundamental question of why temporal abstraction is needed, especially when a primitive action policy achieves comparable results.
There have been attempts to tackle the problem of options collapsing onto multiple copies of the optimal policy \cite{bacon2017option,termination-critic2019}, as well as ensuring that options do not shrink too much over time \cite{deliiberationcost}. However, finding a solution that can easily generalize over a wide range of tasks with minimal human supervision is still an ongoing challenge. In this paper, we tackle the problem by constructing a \emph{diverse set of options}, which can be beneficial for increasing exploration as well as for robustness in learning challenging tasks \cite{gregor2016variational,diaynpaper-2018,termination-critic2019}.
A common approach for encouraging diversity in a policy is entropy regularization \cite{Williams1991FunctionOU, Mnih2016AsynchronousMF}, but it does not capture the idea of the set of options itself containing skills that differ from each other. Unlike in the case of primitive action policies, where each action is significantly distinct, options often learn similar skills, reducing the effectiveness of entropy regularization. To address this issue, we use intrinsic motivation. Augmenting the standard maximum-reward objective with an auxiliary bonus has shown encouraging results in promoting good exploration and performance \cite{ng1999policy, singh2010intrinsically, count_1}. We introduce an auxiliary reward which, when combined with the task reward, encourages diversity in the policies of the options. We empirically show how this diversity can help options explore better on several standard continuous control tasks.
We then focus on option termination. The termination objective used in option-critic \cite{bacon2017option} increases the likelihood of an option to terminate if the value of the current option is sub-optimal with respect to another. Though logical, this objective tends to suppress the worse option quickly, without adequate exploration, often resulting in a single option dominating the entire task \cite{bacon2017option, deliiberationcost, termination-critic2019}. To overcome this, we also propose a novel termination objective which produces diverse, robust and task-relevant options. Our approach suggests that instead of having options compete for selection, adequate exploration of available options should be encouraged so long as they exhibit diverse behavior. Upon testing this new objective quantitatively and qualitatively in a classic tabular setting as well as several standard discrete and continuous control tasks, we demonstrate that our approach achieves a new state-of-the-art performance. Furthermore, our approach demonstrates significant improvements in robustness, interpretability and reusability of specialized options in transfer learning.
\section{Background}
In RL, an agent interacts with an environment typically assumed to be a Markov Decision Process (MDP) $\mathcal{M} = (\mathcal{S}, \mathcal{A},\gamma, r, \mathcal{P})$ where $\mathcal{S}$ is the set of states, $\mathcal{A}$ is the action set, $\gamma \in [0,1)$ is the discount factor, $r:\mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ is the reward function and $\mathcal{P} :\mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow [0, 1]$ is the transition dynamics. A policy $\pi$ is a probability distribution over actions conditioned on states, $\pi: \mathcal{S} \times \mathcal{A} \rightarrow [0, 1]$. The value function of a policy $\pi$ is the expected discounted return $V_{\pi}(s) = \mathbb{E}_{\pi} \Big[ \sum_{t=0}^{\infty} \gamma^{t}r_{t+1}|s_{0} = s \Big] $.
Policy gradient methods aim to find a good policy by optimizing the expected return over a given family of parameterized stochastic policies $\pi_{\theta}$. Policy improvement is carried out by performing stochastic gradient ascent over the policy parameters.
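As a concrete (and deliberately minimal) illustration of such an ascent step -- our sketch, not taken from the cited works -- consider a softmax policy on a two-armed bandit with assumed payoffs, updated with the REINFORCE estimator:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)              # logits of a softmax policy over two actions
payoff = np.array([0.0, 1.0])    # toy bandit rewards (assumed)
alpha = 0.1                      # step size (assumed)

for _ in range(2000):
    pi = np.exp(theta - theta.max()); pi /= pi.sum()
    a = rng.choice(2, p=pi)
    grad_log = -pi.copy(); grad_log[a] += 1.0  # d log pi(a) / d theta
    theta += alpha * payoff[a] * grad_log      # stochastic ascent on E[R]

print(pi)   # approaches [0, 1]: the better action dominates
\end{verbatim}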
Techniques for defining useful abstractions through hierarchy have sparked a lot of interest \cite{parr1998reinforcement, dietterich2000hierarchical, Sutton:1999:MSF:319103.319108, Precup2000TemporalAI, mcgovern2001automatic, Stolle_learningoptions, Vezhnevets2017FeUdalNF}.
We use the options framework \cite{Sutton:1999:MSF:319103.319108,Precup2000TemporalAI}, which formalizes temporal abstraction by representing knowledge, learning and planning over different temporal scales. An option \textit{o} is a temporally extended action represented as a triple ($\mathcal{I_{\textit{o}}, \pi_{\textit{o}}, \beta_{\textit{o}} }$) where $\mathcal{I_{\textit{o}} \subseteq S }$ is an initiation set, $\pi_{\textit{o}}$ is an intra-option policy and $\beta_{\textit{o}}$: $\mathcal{S} \rightarrow [0,1]$ is a termination function.
The policy over options $\pi_{\Omega}$, selects an option from those available at a given state and executes it until termination. Upon termination, $\pi_{\Omega}$ selects a new option and this process is repeated. The option-critic architecture \cite{bacon2017option} is a gradient-based method for learning options end-to-end without explicitly providing any sub-goals, by updating the parameters of intra-option policies $(\theta_{\pi})$ and terminations ($\theta_{\beta}$).
The termination gradient in option-critic~\cite{bacon2017option} states that at a state, if the value of an option is sub-optimal compared to the value of the policy over options, the likelihood of its termination should be increased, as follows:
\begin{equation} \label{eq_terminationgradient}
\frac{ \partial L(\theta) }{ \partial \theta_{\beta} } = \mathbb{E} \bigg[- \frac{ \partial \beta (S_{t},O_{t}) }{ \partial \theta_{\beta} } A(S_{t},O_{t}) \bigg]
\end{equation}
where $ A(S_{t},O_{t}) = Q_{\pi}(S_{t},O_{t}) - V_{\pi}(S_{t}) $ is the termination advantage function.
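In a tabular implementation this update takes a simple form. The sketch below is ours; the sigmoid parameterization of $\beta$ and the learning rate are assumptions. It performs ascent on the return, so $\beta$ grows exactly where the advantage is negative:
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def oc_termination_step(nu, Q, V, s, o, lr=0.25):
    """One termination update in the spirit of the gradient above;
    nu[s, o] is the logit of beta(s, o). Ascent on the return:
    beta grows where the advantage A = Q - V is negative."""
    A = Q[s, o] - V[s]
    b = sigmoid(nu[s, o])
    nu[s, o] += lr * (-b * (1.0 - b) * A)   # -(dbeta/dnu) * A
    return nu
\end{verbatim}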
Since primitive actions are sufficient for learning any MDP, options often degenerate \cite{bacon2017option}. Several techniques have been proposed to tackle this problem \cite{bacon2017option,deliiberationcost,termination-critic2019}.
In the next two sections, we outline the main idea for our approach: encouraging diversity in option policies and encouraging the options to terminate in diverse locations.
\begin{figure*}[!ht]
\centering
\subfloat[HalfCheetah-v2]{\includegraphics[scale=0.1]{Figures/DEOC_halfCheetah_results.png} \label{Fig_DEOCvsPPOC_HalfCheetah}}
\subfloat[Hopper-v2]{\includegraphics[scale=0.1]{Figures/DEOC_hopper_results.png}\label{Fig_DEOCvsPPOC_Hopper}}
\subfloat[Walker2d-v2]{\includegraphics[scale=0.1]{Figures/DEOC_walker_results.png}\label{Fig_DEOCvsPPOC_Walker}}
\caption{\textbf{Diversity-Enriched Option-Critic (DEOC) compared against Option-Critic (OC)}. Each plot is averaged over 20 independent runs.
}
\label{Fig_DEOCvsPPOC}
\end{figure*}
\section{Encouraging Diversity While Learning}\label{IntrinsicReward_Section}
A good reward function can capture more than just information required to perform the task. In this section, we highlight the importance of diversity in options and an approach to achieve it using intrinsic motivation. We design a pseudo-reward signal complementing the task reward in order to encourage diversity.
While most relevant literature on learning diverse options \cite{gregor2016variational,diaynpaper-2018,termination-critic2019} uses states to distinguish and specialize options, we instead look directly at an option's behavior to assess its diversity. This idea is well suited when all options are available everywhere, when the state information is imperfect (for example, because the latent representation of the state is still being learned), and when the agent aims to transfer knowledge across tasks. This approach allows the agent to effectively reuse specialized options \cite{bacon2017option}. For example, an option specialized in leaping over hurdles can be reused if and when the agent encounters a hurdle anywhere in its trajectory. We study reusability and transfer characteristics of options in Section \ref{section_transfer_tasks}.
For simplicity of exposition, we use two options in our notation in this section; however, the approach can be easily extended to any number of options. An empirical study with a varying number of options is presented in Appendix \ref{app_variable_options}.
We construct our pseudo-reward function using concepts from information theory. Maximizing the entropy of a policy prevents the policy from quickly falling into a local optimum and has been shown to yield substantial improvements in exploration and robustness \cite{Williams1991FunctionOU, Mnih2016AsynchronousMF, haarnoja2018soft}. We maximize the entropy $\mathcal{H}( A^{\pi_{\text{\text{O}}_{1}}} \mid S)$ and $\mathcal{H}( A^{\pi_{\text{\text{O}}_{2}}} \mid S)$ where $ \mathcal{H} $ is the Shannon entropy computed with base \textit{e} and $ A $ represents the respective action distributions. Since we want different options to behave differently from each other at a given state, we maximize the divergence between their action distributions $\mathcal{H}(A^{\pi_{\text{\text{O}}_{1}}}; A^{\pi_{\text{\text{O}}_{2}}}\mid S)$. This aligns with our motivation that skill discrimination should rely on actions.
Lastly, we seek to maximize the stochasticity of the policy over options $\mathcal{H}( O^{\pi_{\Omega}} \mid S)$ to explore all available options at $S$. Combining all the above terms, we get the following pseudo reward $ \mathcal{R}_{bonus} $:
\begin{align}\label{eq_pseudoreward}
\mathcal{R}_{bonus} &= \mathcal{H}(A^{\pi_{\text{\text{O}}_{1}}} \mid S) + \mathcal{H}(A^{\pi_{\text{\text{O}}_{2}}} \mid S) \nonumber \\
&+ \mathcal{H}( O^{\pi_{\Omega}} \mid S) + \mathcal{H}(A^{\pi_{\text{\text{O}}_{1}}}; A^{\pi_{\text{\text{O}}_{2}}}\mid S)
\end{align}
The first three terms in Eq. \eqref{eq_pseudoreward} seek to increase the stochasticity of the policies and the fourth term encourages overall diversity in options. Since we use entropy regularization for policy updates in all our implementations as well as in the baseline experiments, we only use $\mathcal{H}(A^{\pi_{\text{\text{O}}_{1}}}; A^{\pi_{\text{\text{O}}_{2}}}\mid S)$ as our pseudo reward, $\mathcal{R}_{bonus}$, highlighting the significance of diversity in the option set.
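For concreteness, the retained term can be computed directly from the two option policies at a state. In the Python sketch below (ours), $\mathcal{H}(A^{\pi_{O_1}}; A^{\pi_{O_2}}\mid S)$ is instantiated as the Jensen--Shannon divergence; this instantiation is an assumption, since the text fixes the term only as a divergence between the two action distributions:
\begin{verbatim}
import numpy as np

def entropy(p, eps=1e-12):
    p = np.clip(p, eps, 1.0)
    return float(-np.sum(p * np.log(p)))   # Shannon entropy, base e

def r_bonus(pi1, pi2):
    """Divergence term of the pseudo-reward for two option policies
    (the entropy terms being covered by entropy regularization, as
    noted above). Jensen-Shannon is an assumed instantiation."""
    m = 0.5 * (pi1 + pi2)
    return entropy(m) - 0.5 * (entropy(pi1) + entropy(pi2))

print(r_bonus(np.array([0.9, 0.1]), np.array([0.1, 0.9])))  # diverse: large
print(r_bonus(np.array([0.9, 0.1]), np.array([0.9, 0.1])))  # identical: 0
\end{verbatim}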
We incorporate this objective within the standard RL framework by augmenting the reward function to include the pseudo reward bonus from Eq. \eqref{eq_pseudoreward}:
\begin{equation}\label{Eq_reward_augmentation}
\mathcal{R}_{aug}(S_{t},A_{t}) = (1 - \tau)R(S_{t},A_{t}) + \tau \mathcal{R}_{bonus}(S_{t})
\end{equation}
where $ \tau $ is a hyper-parameter which controls relative importance of the diversity term against the reward. The proposed reward augmentation yields the maximum diversity objective. The standard RL objective can be recovered in the limit as $\tau \rightarrow 0$.
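The augmentation itself is a convex combination; a one-line Python sketch, with an arbitrary illustrative value of $\tau$:
\begin{verbatim}
def augment(r_task, r_bonus, tau=0.1):   # tau = 0.1 is illustrative
    # convex combination of Eq. (3): tau -> 0 recovers the standard objective
    return (1.0 - tau) * r_task + tau * r_bonus
\end{verbatim}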
To demonstrate the benefits of maximizing diversity through augmenting the reward, we test our algorithm, Diversity-Enriched Option-Critic (DEOC), against Option-Critic (OC) in classic Mujoco \cite{todorov2012mujoco} environments.
We use the same hyper-parameter settings across all 20 seeds in all our experiments throughout the paper to test stability. Fig. \ref{Fig_DEOCvsPPOC} shows that encouraging diversity improves sample efficiency as well as performance.
Details regarding implementation and choices for the underlying algorithm, PPO \cite{Schulman2017ProximalPO}, are provided in Appendix \ref{App_Nonlinearcase}.
\section{Encouraging Diversity in Termination}\label{TerminationSection}
In Section \ref{IntrinsicReward_Section}, we empirically demonstrate that encouraging diversity in option policies improves exploration and performance of option-critic. However, unlike primitive action policies, where all actions are available at every step, options execute for a variable number of time steps until a termination condition is met, during which all other options remain dormant. Due to this, the maximum entropy objective fails to be as effective with options as with primitive action policies. Although having options terminate at every time step may solve this problem, it renders the use of options moot.
Additionally, option-critic's termination function solely validates the best option, suppressing other potentially viable options which may also lead to near-optimum behavior. As a consequence, at a given state, only the best current option gets updated, eventually leading to a single option dominating the entire task.
Noise in value estimates or state representations may also cause an option to terminate and consequently lead to the selection of a sub-optimal option. Selecting a sub-optimal option around ``vulnerable'' states can be catastrophic and severely hurt performance. In our case, despite $ \mathcal{R}_{bonus}$ encouraging diverse options, option-critic's termination function prevents exploiting this diversity due to inadequate exploration of all relevant options.
We tackle these problems by encouraging all options available at a given state to be explored, so long as they exhibit diverse behavior.\\
In this section we present a novel termination objective which no longer aims to maximize the expected discounted return, but instead focuses on the options' behavior, identifying states where options are distinct while still being relevant to the task.
We build our objective function to satisfy the following two conditions:
\begin{itemize}
\item \textbf{Options should terminate in states where the available options are diverse.}
In the classic four-rooms task \cite{Sutton:1999:MSF:319103.319108}, such states would be the hallways.
Terminations localized around hallways have shown significant improvements in performance in the transfer setting \cite{bacon2017option}.
\item \textbf{The diversity metric used in the termination objective should capture the diversity relative to other states in the sampled trajectories.} This prevents options terminating at every step while exploiting diversity effectively for exploration and stability.
\end{itemize}
\begin{algorithm}[!t]
\caption{Termination-DEOC (TDEOC) algorithm with tabular intra-option Q-Learning}
\label{deoc_algo}
\begin{algorithmic}
\STATE Initialize $\pi_{\Omega} $, $\pi_{o} $ and $\beta_{o} $
\STATE Choose $ o_{t} $ according to $ \pi_{\Omega} (o_{t}|s_{0}) $
\REPEAT
\STATE Act $ a_{t} \sim \pi_{o} (a_{t}|s_{t}) $ and observe $ s_{t+1} $ and $ r_{t} $
\STATE $ r'_{t} = (1 - \tau)r_{t} + \tau \, r_{bonus}(s_{t}) $
\IF{$ o_{t}$ terminates in $s_{t+1} $ }
\STATE Choose new $o _{t+1} $ according to $ \pi_{\Omega} (\cdot | s_{t+1}) $
\ELSE
\STATE $o_{t+1} = o_{t}$
\ENDIF
\STATE $\mathcal{D}(s_{t}) \leftarrow $ Standardized samples of $r_{bonus}(s_{t})$.
\STATE $\delta \leftarrow$ $r'_{t}$ - $Q_{U}(s_{t}, o_{t}, a_{t})$
\STATE $\delta \leftarrow$ $\delta$ $+$ $\gamma$(1 $-$ $\beta_{o_{t}} (s_{t+1}))Q_{\Omega}(s_{t+1},o_{t})$ + \\
$\: \: \:$ $\gamma \beta_{o_{t}} (s_{t+1}) max_{o_{t+1}} Q_{\Omega}(s_{t+1},o_{t+1})$ \\
$Q_{U}(s_{t}, o_{t}, a_{t}) \leftarrow $ $Q_{U}(s_{t}, o_{t}, a_{t}) + \alpha \delta$ \\
\STATE $\theta_{\pi} \leftarrow \theta_{\pi} + \alpha_{\theta_{\pi}} \frac{\partial log \pi_{o_{t}}(a_{t} | s_{t})}{\partial \theta} Q_{U}(s_{t}, o_{t}, a_{t}) $
\STATE $\theta_{\beta} \leftarrow \theta_{\beta} + \alpha_{\theta_{\beta}} \frac{\partial \beta_{o_{t}}(s_{t+1})}{\partial \nu}$ $\mathcal{D}(s_{t+1})$
\UNTIL{$s_{t+1}$ is terminal}
\end{algorithmic}
\end{algorithm}
The termination objective hence becomes:
\begin{equation} \label{eq_deocobjective}
L(\theta_{\beta}) = \mathbb{E} \big[\beta(S_{t},O_{t}) \mathcal{D}(S_{t}) \big]
\end{equation}
The term $ \mathcal{D}(S_{t}) $ indicates the relative diversity of options at a given state. We compute $ \mathcal{D}(S_{t}) $ by standardizing (to mean $ \mu=0 $ and standard deviation $ \sigma=1 $) the samples of $ \mathcal{R}_{bonus}(S_{t})$ defined in Eq. \eqref{eq_pseudoreward}, collected in the buffer.
\begin{equation} \label{eq_standardize}
\mathcal{D}(S_{t}) = \frac{\mathcal{R}_{bonus}(S_{t}) - \mu_{\mathcal{R}_{bonus}}}{ \sigma_{\mathcal{R}_{bonus}}}
\end{equation}
Our approach solves the issue of constant termination at all states, and the updates are scaled appropriately relative to the diversity values of other states in the buffer. Terminating while options are most diverse encourages both options to be selected fairly and explored by the policy over options.
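Computing $\mathcal{D}$ from a buffer is a direct standardization of Eq. \eqref{eq_standardize}; a minimal Python sketch (ours; the small constant guarding against a zero standard deviation is an implementation assumption):
\begin{verbatim}
import numpy as np

def relative_diversity(bonus_buffer):
    """Standardize sampled R_bonus values (mean 0, std 1) so that
    D reflects diversity relative to other states in the buffer."""
    b = np.asarray(bonus_buffer, dtype=float)
    return (b - b.mean()) / (b.std() + 1e-8)
\end{verbatim}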
\begin{theorem}\label{terminationtheorem}
Given
a set of Markov options $\Omega$ each with a stochastic termination function defined by Eq. \eqref{eq_deocobjective} and stochastic intra-option policies, with $|\Omega|<\infty$ and $|\mathcal{A}|<\infty$, repeated application of policy-options evaluation and improvement \cite{bacon2017option, Bacon2013phdthesis} yields convergence to a locally optimal solution.
Proof. See Appendix \ref{app_proofterminationtheorem}.
\end{theorem}
Note that as with $ \mathcal{R}_{bonus}(S_{t}) $, $ \mathcal{D}(S_{t}) $ is independent of the termination parameters. An added advantage of using relative diversity is the agent's ability to respond to events or obstacles in its trajectory.
Such events characterize some of the most sensitive states in the environment.
The relative diversity $ \mathcal{D}(S_{t}) $ in our objective is capable of identifying such states, causing both options to collectively explore and learn the event. We study transfer characteristics further in Section \ref{section_transfer_tasks}.
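Operationally, the core tabular updates of Algorithm \ref{deoc_algo} can be condensed as follows (our sketch; the intra-option policy-gradient step and the upkeep of $Q_\Omega$ are omitted, and all learning rates are illustrative):
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tdeoc_step(Q_U, Q_Omega, nu, s, o, a, r_aug, s2, D_next,
               gamma=0.99, lr_q=0.5, lr_beta=0.25):
    """Core TDEOC updates (cf. Algorithm 1). Q_U[s, o, a]: option-action
    values; Q_Omega[s, o]: option values; nu[s, o]: termination logits."""
    b = sigmoid(nu[s2, o])                       # beta_o(s_{t+1})
    delta = r_aug - Q_U[s, o, a]                 # intra-option Q-learning
    delta += gamma * ((1 - b) * Q_Omega[s2, o] + b * Q_Omega[s2].max())
    Q_U[s, o, a] += lr_q * delta
    # ascent on the termination objective: terminate where D_next is high
    nu[s2, o] += lr_beta * b * (1 - b) * D_next
\end{verbatim}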
\section{Experiments} \label{section_TDEOC_experiments}
We evaluate the effects of the new termination objective on several tasks, to test its performance and stability. The pseudo-code of the algorithm, Termination-DEOC (TDEOC), is presented in Algorithm \ref{deoc_algo}. Implementation details are provided in Appendix \ref{App_Implementation_details}. \par
\begin{figure} [!ht]
\centering
\subfloat[Termination-DEOC (TDEOC)]{\includegraphics[scale=0.25]{Figures/TDEOC-Termination_plots} \label{Fig_DEOC_termination_plot}}
\subfloat[Termination-DEOC (TDEOC) VS OC]{\includegraphics[scale=0.175]{Figures/Tabular_fourrooms_results-nips.png} \label{Fig_OC_termination_plot}}
\caption{\textbf{Visualization of Terminations for different options} after 1000 episodes. Darker colors correspond to higher termination likelihood. Both TDEOC and OC show higher terminations around hallways.
\textbf{Four-rooms transfer experiment with four options}. After 1000 episodes, the goal state, is moved from the east hallway to a random location in the south east room. TDEOC recovers faster than OC with a difference of almost 70 steps when the task is changed. Each line is averaged over 300 runs.}
\label{Fig_Termination_plots}
\end{figure}
\begin{figure*}[!ht]
\centering
\textsc{\textbf{Empirical Performance}}\\
\subfloat[Humanoid-v2]{\includegraphics[scale=0.12]{Figures/TDEOC_Humanoid_results-nips-2.png} \label{Fig_Humanoid_results}}
\subfloat[HalfCheetah-v2]{\includegraphics[scale=0.12]{Figures/TDEOC_halfcheetah_results-nips-2.png} \label{Fig_Halfcheetah_results}}
\subfloat[Sidewalk (Discrete)]{\includegraphics[scale=0.12]{Figures/TDEOC_Sidewalk_results-nips.png} \label{Fig_Sidewalk_results}}\\
\subfloat[Ant-v2]{\includegraphics[scale=0.12]{Figures/TDEOC_ant_results-nips-2.png} \label{Fig_Ant_results}}
\subfloat[Walker2d-v2]{\includegraphics[scale=0.12]{Figures/TDEOC_walker_results-nips-2.png} \label{Fig_Walker_results}}
\subfloat[TMaze(Discrete)]{\includegraphics[scale=0.12]{Figures/TDEOC_TMaze_discrete_results-nips.png} \label{Fig_Tmaze_discrete_results}} \\
\vspace{1mm}
\textsc{\textbf{Option Relevance}}\\
\subfloat[HalfCheetah-v2]{\includegraphics[scale=0.12]{Figures/Option_activity_HalfCheetah-nips.png} \label{Fig_HalfCheetah_relevance}}
\subfloat[Hopper-v2]{\includegraphics[scale=0.12]{Figures/Option_activity_Hopper-nips.png} \label{Fig_Ant_relevance}}
\subfloat[Walker2d-v2]{\includegraphics[scale=0.12]{Figures/Option_activity_Walker-nips.png} \label{Fig_Walker_relevance}} \\
\caption{\textbf{TDEOC results on standard Mujoco and Miniworld tasks}. Our proposed termination objective significantly improves exploration, performance, and each option's relevance to the task. \textbf{Option activity refers to the number of steps during which the option (Opt1 or Opt2) was active for buffer samples generated at the respective time steps.} Each plot is averaged over 20 independent runs.
}
\label{Fig_TDEOC_results}
\end{figure*}
\setlength{\parindent}{0ex} \textbf{Tabular Four-rooms Navigation Task}\qquad
We first test our algorithm TDEOC, on the classic four-rooms navigation task \cite{Sutton:1999:MSF:319103.319108} where transfer capabilities of options were demonstrated against primitive action frameworks \cite{bacon2017option}. Initially the goal is located in the east hallway and the agent starts at a uniformly sampled state. After 1000 episodes, the goal state is moved to a random location in the lower right room. The goal state yields a reward of +1 while all other states produce no rewards.
Visualizations of option terminations (Fig. \ref{Fig_DEOC_termination_plot}) show that
TDEOC identifies the hallways as the `\textit{bottleneck}' states where options tend to grow diverse. Fig. \ref{Fig_OC_termination_plot} shows that both TDEOC and OC have nearly the same learning speed for the first 1000 episodes. Upon changing the goal state, TDEOC recovers faster than OC by almost 70 steps while exhibiting lower variance.\par
\begin{figure*}[!ht]
\centering
\subfloat[HalfCheetahHurdle-v0]{\includegraphics[scale=0.12]{Figures/HalfCheetahWall_results-nips.png} \label{Fig_TDEOC_HalfCheetahHurdle}}
\subfloat[HopperIceWall-v0]{\includegraphics[scale=0.12]{Figures/HopperIce_results-nips.png} \label{Fig_TDEOC_HopperIce}}
\subfloat[TMaze (Continuous)]{\includegraphics[scale=0.12]{Figures/TDEOC_tmaze_transfer_results-nips.png} \label{Fig_TDEOC_TMaze_transfer_results}}
\hspace{1mm} \\
\subfloat[Steps$<$2e6]{\includegraphics[scale=0.225]{Figures/HalfCheetahWall_before.png}}\hspace{0.2mm}
\subfloat[Steps$>$2e6]{\includegraphics[scale=0.275]{Figures/HalfCheetahWall_after.png}}\hspace{2.0mm}
\subfloat[Steps$<$1e6]{\includegraphics[scale=0.084]{Figures/HopperIce_before.png}}\hspace{0.2mm}
\subfloat[Steps$>$1e6]{\includegraphics[scale=0.0933]{Figures/HopperIce_after.png}}\hspace{2.0mm}
\subfloat[Steps$<$ 2e5]{\includegraphics[scale=0.0631]{Figures/TMaze_Both.png}}\hspace{0.2mm}
\subfloat[Steps$>$ 2e5]{\includegraphics[scale=0.0631]{Figures/TMaze_Right.png}}
\caption{\textbf{TDEOC results on three transfer tasks in Mujoco} each averaged over 20 independent runs. The height of the hurdle in HalfCheetahWall-v0 is increased by 0.8 metres after 2e6 steps. For HopperIceWall-v0, the block is moved 0.5 metres away from the agent's starting point after 1e6 steps. As for TMaze, the most frequent goal is removed after 2e5 steps.
}
\label{Fig_DEOC_term_transfer_mujojco_plots}
\end{figure*}
\textbf{Continuous Control Tasks}\qquad
Next, we show the advantages of a diversity-targeted termination in the non-linear function approximation setting using standard Mujoco tasks \cite{todorov2012mujoco}.
We tested the performance of the TDEOC algorithm against Option-Critic (OC) and PPO \cite{Schulman2017ProximalPO}.
Fig. \ref{Fig_TDEOC_results} shows that while OC quickly stagnates at a sub-optimal solution, TDEOC keeps improving. We believe OC's stagnation is caused by sub-optimal option selection, triggered by terminations due to noisy value estimates. Since the sub-optimal option isn't adequately explored, it leads to the selection of a sub-optimal action, which can be catastrophic in states where \textit{balance} is vital.
TDEOC, on the other hand, learns to generate diverse yet relevant option trajectories, thereby gaining better control. This explains why TDEOC handles environment perturbations more robustly. To demonstrate this property, we visualize the activity of each option for TDEOC and OC, in terms of the number of steps the option was active for buffer samples generated at the respective time steps (Fig. \ref{Fig_HalfCheetah_relevance}, \ref{Fig_Ant_relevance}, \ref{Fig_Walker_relevance}). Unlike OC, where only one option stays relevant to the task, TDEOC encourages both options to be selected fairly. TDEOC achieves a new state-of-the-art performance, not only outperforming OC by a wide margin, but also PPO. Our approach extends easily to very complex, high-dimensional tasks such as Humanoid-v2. TDEOC also exhibits lower variance, demonstrating stable results across various random seeds despite using the same hyper-parameter settings.
See Appendix \ref{app_option_relevance} for additional results.
TDEOC however, exhibits slower learning during the initial phase. This is to be expected, as TDEOC grooms both options to remain relevant and useful, while OC only updates a single dominant option, which requires fewer samples.
We study the \textit{critical states} which inspire diversity in Section \ref{section_Interpreting_options}.\\
\textbf{Sparse Reward Tasks}\qquad
We evaluate our approach in 3D visual control tasks implemented in Miniworld \cite{gym_miniworld} with discrete actions and a visual sensory input. We consider the T-Maze and Sidewalk environments.
We observe that while OC stagnates to a sub-par solution for both tasks, TDEOC manages to learn a better solution faster (Fig. \ref{Fig_TDEOC_results}). TDEOC even outperforms PPO in Sidewalk, despite the added complexity of learning a hierarchy. We can also observe significantly lower variance in the TDEOC plots.
\subsection{Evaluating Transfer} \label{section_transfer_tasks}
A key advantage of using options is to learn skills which can be reused in similar tasks. In Section \ref{section_TDEOC_experiments}, we studied this property in the tabular four-rooms task. In this section, we further test our approach on tasks which require adapting to changes. The benefits of using options are best observed in tasks where hierarchical representation can be exploited. \\
\textbf{HalfCheetah Hurdle}\qquad
Through this experiment we evaluate the ability of the agent to react to changes happening later in the trajectory.
Reusing the HalfCheetah-v2 environment, a hurdle of height 0.12 metres is placed 10m away from the agent's starting position. After two million steps, the height of the hurdle is increased by 0.8 metres (to 2 metres).
Not only does TDEOC learn faster than option-critic and PPO (Fig. \ref{Fig_TDEOC_HalfCheetahHurdle}), it also adapts to the change more quickly. TDEOC also keeps improving after recovery, while PPO's and OC's performances stagnate. Despite OC being more robust to transfer \cite{bacon2017option}, PPO recovers faster than OC, as indicated by a smaller difference in performance after recovery than before. This suggests higher velocity can cause increased instability during recovery. \\
\textbf{Hopper Ice Wall}\qquad
We add a friction-less block in the Hopper-v2 task with dimensions (0.25m, 0.4m, 0.12m) corresponding to its length, width and height respectively, 2.8 metres away from the agent's starting position.
After a million steps, the block is moved 0.5 metres away from the agent's starting position.
Initially, TDEOC and option-critic have a similar rate of improvement. However, after the change, TDEOC learns to stabilize better and keeps improving (Fig. \ref{Fig_TDEOC_HopperIce}). \\
\textbf{TMaze Continuous}\qquad
We use a task similar to the sparse reward task TMaze from Miniworld \cite{Khetarpal2020OptionsOI}. There are two goals located at both ends of the hallway, each producing a reward of +1. After 200,000 steps, the goal most visited is removed, forcing the agent to seek the other goal. Although OC initially recovers from the change better, TDEOC surpasses OC, achieving better final performance (Fig. \ref{Fig_TDEOC_TMaze_transfer_results}).
\subsection{Interpreting Options Behavior} \label{section_Interpreting_options}
Learning tasks hierarchically through options can help us better understand the agent's solution. In this section, we study qualitatively the states targeted by TDEOC and the corresponding options behavior. Videos of all our experiments are provided on our website \footnote{\url{https://sites.google.com/view/deoc/home}}.\\
\begin{figure}
\centering
\subfloat[Trajectory before task change]{\includegraphics[scale=0.29]{Figures/TMaze_Trajectory_before.png}}\hspace{2mm}
\subfloat[Trajectory after most frequent goal is removed]{\includegraphics[scale=0.21]{Figures/TMaze_Trajectory_after.png}} \\
\subfloat[Terminations for option 1 ($\beta_{o1}$)]{\includegraphics[scale=0.315]{Figures/TMaze_Terminations_Op0.png}}\hspace{2mm}
\subfloat[Terminations for option 2 ($\beta_{o2}$)]{\includegraphics[scale=0.315]{Figures/TMaze_Terminations_Op1.png}}\\
\caption{\textbf{Visualizations on TMaze task using two options} (marked red and yellow respectively in (a) and (b)). Option terminations localize in the vertical hallway where the agent has yet to decide which goal to navigate towards.}
\label{Fig_Tmaze_transfer_terminations}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.3]{Figures/HopperSequence_Time.png}
\caption{\textbf{Sample trajectory of the Hopper-v2 task.} Terminations are localized near states where the agent is in the air. Both options collaborate to ensure proper posture and balance prior to descending.}
\label{Fig_hopper_trajectory}
\end{figure}
\textbf{TMaze}\qquad
We visualize option behaviors and terminations for the TMaze transfer task from Section \ref{section_transfer_tasks}. Figure \ref{Fig_Tmaze_transfer_terminations} visualizes a sample run where options terminate in the vertical hallway. Once the target goal is determined, a single option navigates towards it. This seems very intuitive, as the choice of navigating to either of the goals is still open in that hallway, indicating that the options are capable of diverse strategies, each focused on a specific goal. Therefore, the termination objective we proposed gives rise to intuitive and reusable option behaviors. \\
\textbf{Hopper-v2}\qquad
Next, we consider the Hopper-v2 simulation from Mujoco. From Fig. \ref{Fig_hopper_trajectory}, we can see that options terminate when the agent is in the air, just before descending. Naturally, landing without losing balance is very important, as even a slight mistake can cause catastrophic outcomes. TDEOC manages to employ both options around these states to complement each other, thereby achieving robust control and proper balance (See Appendix \ref{App_hopper}). \\
\textbf{OneRoom}\qquad
We consider the OneRoom task from Miniworld. We visualize a sample trajectory (Fig. \ref{Fig_oneroom_trajectory}) where one option learns to scan the room by turning on the spot, and upon observing the goal, the second option navigates towards it (see results in Appendix \ref{App_oneroom}). The termination objective hence gives rise to intuitive option strategies.
\begin{figure}[h]
\centering
\includegraphics[scale=0.20]{Figures/OneRoom_Trajectory.png}
\caption{\textbf{Option trajectories in OneRoom task.} The first option scans the environment for the goal while the other option moves forward towards it.}
\label{Fig_oneroom_trajectory}
\end{figure}
\section{Related work}
Over the years, identifying bottleneck states as useful sub-goals for options has had a lot of success \cite{McGovern_Automaticdiscovery, Stolle_learningoptions, Bacon2013OnTB}. Our approach can be seen to target bottleneck states characterized by diversity in the induced option set. Unsupervised skill discovery using diversity has also been shown to be capable of learning challenging tasks using an information theoretic objective \cite{gregor2016variational,diaynpaper-2018}. Our algorithm, however, exploits a novel behavioral diversity metric for option discovery while also being capable of learning the task simultaneously. While most similar works use states to distinguish and specialize options \cite{gregor2016variational,diaynpaper-2018,termination-critic2019}, our approach exploits the options' behavior.
Additionally, our approach can learn reusable and interpretable options even though we do not explicitly shape the initiation set~\cite{bagaria2020option, Khetarpal2020OptionsOI}.
Intrinsic motivation and reward modifications have been very successful in inducing certain desirable properties in RL algorithms, such as efficient exploration \cite{count_1, count_2, count_4, count_3}. Our approach falls in this category as well. A large body of literature relates to learning reward functions to improve performance \cite{ng1999policy,zheng2018learning}, generalization and robustness \cite{singh2010intrinsically}, and we would like to investigate the utility of such methods in learning options too.
\section{Discussions and Future Work}
In this paper, we highlighted the importance of learning a diverse set of options end-to-end. Inspired by intrinsic motivation, we presented Diversity-Enriched Option-Critic (DEOC), and proposed a novel diversity-targeted, information theoretic termination objective capable of generating diverse option strategies to encourage exploration and make learning more reliable. The new termination objective, coupled with DEOC's reward augmentation, produces relevant, robust and useful options. Our proposed algorithm, Termination-DEOC (TDEOC), significantly outperforms option-critic as well as PPO on a wide range of tasks. TDEOC can also potentially help in scaling tasks to longer horizons. In the future, we would like to further investigate general methods for inferring reward functions and optimization criteria that lead to efficient exploration and transfer in new domains.
\nocite{puterman_book}
\nocite{bellman1954}
\nocite{henderson2017multitask}
\nocite{reproducibility_checklist}
\section{Introduction}
Discourse markers are universal linguistic events subject to language variation. Although an extensive literature has already reported language-specific traits of these events (e.g.~\cite{Fraser1990,Fraser1999,Fischer2000,Beeching2014,Ghezzi2014}), little has been said about their cross-language behavior and, consequently, about building an inventory of multilingual lexica of discourse markers. Thus, this work describes new methods and approaches for the description, classification, and annotation of discourse markers in the specific domain of the Europarl corpus. The study of discourse markers in the context of translation is crucial due to the idiomatic nature of these structures (e.g.~\cite{Aijmer2007,Beeching2013}). Multilingual lexica together with the functional analysis of such structures are useful tools for the hard task of translating discourse markers into possible equivalents from one language to another.
Using Daniel Marcu's validated discourse markers for English~\cite{Marcu2000}, extracted from the Brown Corpus~\cite{Francis1979}, our purpose is to build multilingual lexica of discourse markers for other languages, based on machine translation techniques.
The major assumption in this study is that the usage of a discourse marker is independent of the language, i.e., the rhetorical function of a discourse marker in a sentence in one language is equivalent to the rhetorical function of its counterpart in another language.
\section{Methodology}
We used the European Parliament corpus\footnote{Available at \url{http://www.statmt.org/europarl/} (visited March 2015).}, version 7, in this experiment. The procedure is applied to all the pairs of languages available. The corpus consists of proceedings of the European Parliament. It includes versions in 21 European languages: Romance (French, Italian, Spanish, Portuguese, Romanian), Germanic (English, Dutch, German, Danish, Swedish), Slavic (Bulgarian, Czech, Polish, Slovak, Slovene), Finno-Ugric (Finnish, Hungarian, Estonian), Baltic (Latvian, Lithuanian), and Greek.
Machine translation systems can be classified according to the atomic units to be translated: for example, while for word-based methods the atomic unit is the word, for phrase-based methods it is the phrase. Thus, the most important knowledge sources of phrase-based methods are tables of possible phrase translations between language pairs. Phrase-based methods, due to their nature, have at least one interesting advantage for this specific work: they can handle non-compositional phrases.
Before computing the phrase table for each language pair in the corpus, alignment and normalization steps are necessary. These steps ensure that each phrase in one language has a counterpart in the other language. The alignment algorithm is an implementation of the Gale and Church algorithm for bilingual corpora~\cite{Gale1993}. However, in this step, the algorithm was adapted to take into account specific aspects of the Europarl corpus. This was necessary because the smallest alignment unit is the paragraph, not the phrase. In this case, when the number of lines per paragraph is different in the two languages, all the paragraphs are collapsed into a single line. After the previous step, a normalization step is carried out. First, all SGML (Standard Generalized Markup Language) tags are removed, since they are no longer needed in the aligned corpus; second, sentences are tokenized (using a tool provided by the Europarl package that constitutes a weak language dependency, i.e., it only separates word tokens); and, third, all text is converted to lowercase.
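As a rough illustration, the normalization step amounts to something like the following sketch in Python (the actual Europarl tools are language-aware and handle many more edge cases):
\begin{verbatim}
import re

def normalize(line):
    # 1) drop SGML tags such as <SPEAKER ...> or <P>
    line = re.sub(r"<[^>]+>", "", line)
    # 2) crude tokenization: detach punctuation from word tokens
    #    (a stand-in for the Europarl tokenizer)
    line = re.sub(r"([.,;:!?()\"])", r" \1 ", line)
    line = re.sub(r"\s+", " ", line).strip()
    # 3) lowercase everything
    return line.lower()

print(normalize('<P>Above all, we must act.'))
# -> 'above all , we must act .'
\end{verbatim}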
\subsection{Building the phrase table}
In order to create the phrase table between foreign-English language pairs, for all the experiments, we used the Moses decoder\footnote{\url{http://www.statmt.org/moses/index.php?n=Main.HomePage}}, particularly its train model, with the default parameters. This step uses the GIZA++ tool~\cite{Och2003}, an implementation of the IBM models, to establish word alignments.
\subsection{Pruning the phrase table}
As the phrase table contains all pairs found in the parallel corpus, there is much noise and, consequently, not all pairs are likely to be selected either as candidates or as good marker translations. This occurs because the models take into account all possible observations. We used a tool provided by the Moses Decoder that re-implements the algorithm proposed in~\cite{Johnson2007}, which prunes out unlikely pairs. This tool depends on the SALM tool~\cite{Zhang2006}.
\subsection{Selection of discourse markers candidates}
After the creation of the phrase table, we developed a method for extracting from the table only the desired translations. This was accomplished by searching the table for each of the target discourse markers, keeping the entries where the marker appears followed or preceded by a punctuation mark.
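In practice this is a single pass over the (pruned) phrase table. The sketch below assumes the usual Moses plain-text format, with fields separated by \texttt{|||} (source phrase, English phrase, scores, alignment, counts); the punctuation pattern is simplified:
\begin{verbatim}
import re

PUNCT = r"[.,;:!?]"

def extract_candidates(phrase_table_path, markers):
    # Collect phrase pairs whose English side is a target marker
    # directly preceded or followed by a punctuation mark.
    candidates = {m: [] for m in markers}
    with open(phrase_table_path, encoding="utf-8") as fh:
        for line in fh:
            fields = line.split(" ||| ")
            foreign, english = fields[0], fields[1].strip()
            for m in markers:
                pattern = rf"{PUNCT} {re.escape(m)}|{re.escape(m)} {PUNCT}"
                if re.fullmatch(pattern, english):
                    candidates[m].append((foreign, fields[2]))
    return candidates
\end{verbatim}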
\subsection{Filtering undesirable translations}
Since we want the best translations of the markers, we developed a way to filter and remove all the undesirable candidates generated in the previous step. The filtering process uses the information from the phrase table: the translation to the target language, the marker in English, the inverse phrase translation probability $\varphi(f|e)$, the inverse lexical weighting $lex(f|e)$, the direct phrase translation probability $\varphi(e|f)$, the direct lexical weighting $lex(e|f)$, and the word-to-word alignment. Beyond the phrase table information, several heuristics were used to help filter out bad alignments.
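A minimal sketch of such a filter, using the four probability scores described above (the threshold is illustrative; the actual heuristics combine several additional checks):
\begin{verbatim}
def keep(score_field, threshold=0.1):
    # score_field: third '|||' field of a Moses phrase table,
    # 'phi(f|e) lex(f|e) phi(e|f) lex(e|f)' (a constant phrase
    # penalty may follow in some Moses versions; it is ignored)
    values = [float(v) for v in score_field.split()[:4]]
    phi_fe, lex_fe, phi_ef, lex_ef = values
    return phi_fe >= threshold and phi_ef >= threshold
\end{verbatim}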
\section{Preliminary Results}
The following table presents some translation examples for the markers \emph{above all} and \emph{since} in different languages.
\begin{table}
\centering
\begin{tabular}{|p{.15\columnwidth}|p{.2\columnwidth}|p{.22\columnwidth}|p{.15\columnwidth}|p{.2\columnwidth}|}\hline
\textbf{English} & \textbf{Portuguese} & \textbf{French} & \textbf{German} & \textbf{Italian} \\\hline
above all & sobretudo & avant tout & vor allem & soprattutto \\
& sobretudo de & surtout & vor allen & soprattutto di \\
& acima de tudo & & & \\
& acima de todas & & & \\\hline
since & pois & puisque & da & dal momento \\
& desde então & depuis & seit & dal \\
& desde de & dans la mesure où & da seit & poiché \\
& desde que & & die seit & \\
& & & seit sich & \\\hline
\end{tabular}
\end{table}
For the languages selected in the above example, the original list of 427 English markers generated 846 translations for Portuguese (27 markers were pruned from the phrase table), 861 for French (40 removed after pruning), 906 for German (43 do not exist in the phrase table after pruning), and 1293 for Italian (46 do not exist in the phrase table after pruning).
Another experiment aiming at a cross-domain analysis of discourse markers in two spontaneous speech corpora (university lectures and map-task dialogues) was conducted to identify and classify discourse markers in Portuguese. The discourse markers collected were coded as conversational markers, i.e., those that only occur in oral communication, and as both conversational and textual markers, meaning those that can occur both in speech and in written texts. In order to compare the discourse markers in these corpora with the Portuguese lexicon of discourse markers in the Europarl corpus, extracted within this work, we searched the latter for their English equivalents. Results showed that, out of around 70 discourse markers, only 18 were available and had an English translation. As for the classification of the discourse markers found in the Europarl corpus, 7 were coded both as conversational and textual markers, and the remaining were classified only as conversational markers. A possible interpretation for these results is that the register used in the three corpora differs considerably. University lectures and map-task dialogues have a much more informal type of speech than Europarl. The following table presents some examples.
\begin{table}
\centering
\begin{tabular}{|l|l|}\hline
\textbf{Portuguese} & \textbf{English} \\\hline
A seguir & After that; after; afterwards; following; next; thereafter \\
Agora & From now on; now \\
Bem & All right; and; as well; fine; okay; well \\
Bom & Well \\
E depois & And then; but then; then \\
Enfim & Anyway; at last; finally; in short; lastly \\
Entretanto & But; in the meantime; meanwhile \\
Muito bem & All right; fine; okay \\
Ora & But; however; now; or; well; yet \\
Pois & Because; on the grounds; since; then; therefore \\\hline
\end{tabular}
\end{table}
\section{Conclusions}
This preliminary work describes new methods and approaches for the description, classification, and annotation of discourse markers based on the Europarl corpus. Building multilingual lexica is a much-needed task, especially in the fields of machine translation and (computational) linguistics. As the preliminary results show, multilingual lexica allow for the establishment of an inventory of discourse markers, for multiple translations of each entry in the inventory, and also for the analysis of contexts of usage in several languages. Per se, these lexica are a very useful tool to correlate cross-language discourse markers, validated by the fact that professional conference interpreters established those relations. Ultimately, this work may be considered a step forward towards the rhetorical analysis of discourse markers in a cross-language framework.
\bibliographystyle{plain}
\section{Introduction}
Let $\{\xi_n\}$ be a sequence of independent and identically distributed (i.i.d.) random variables, and $\{a_n\}$ be a sequence of positive real numbers. We consider the random series $\sum_{n=1}^{\infty}a_n\xi_n$. Such random series are basic objects in time series analysis and in regression models (see \cite{Davis-Resnick}), and there has been a lot of research. For example, \cite{Gluskin-Kwapien} and \cite{Latala} studied tail probabilities and moment estimates of the random series when $\{\xi_n\}$ have logarithmically concave tails. Of special interest are the series of positive random variables, or the series of the form $\sum_{n=1}^{\infty}a_n|\xi_n|^p$. Indeed, by Karhunen-Lo\'{e}ve expansion, the squared $L_2$ norm of a centered continuous Gaussian process $X(t), t\in[0,1],$ can be represented as $\|X\|_{L_2}^2=\sum_{n=1}^{\infty}\lambda_nZ_n^2$ where $\lambda_n$ are the eigenvalues of the associated covariance
operator, and $Z_n$ are i.i.d. standard Gaussian random variables. It is also known (see \cite{Lifshits-1994}) that the series $\sum_{n=1}^{\infty}a_n|Z_n|^p$ coincides with some bounded Gaussian process $\{Y_t,t\in \mathrm{T}\}$, where $\mathrm{T}$ is a suitable parameter set: $\sum_{n=1}^{\infty}a_n|Z_n|^p=\sup_{\mathrm{T}}Y_t.$
In this paper, we study the limiting behavior of the upper tail probability of the series
\begin{align}\label{eq:introduction}
\mathbb{P}\left\{\sum_{n=1}^{\infty}a_n|\xi_n|^p\geq r\right\}\qquad\text{ as }r\rightarrow\infty.
\end{align}
This probability is also called a large deviation probability (see \cite{Arcones}). As remarked in \cite{Gao-Li}, for a Gaussian process with $\|X\|_{L_2}^2=\sum_{n=1}^{\infty}\lambda_nZ_n^2,$ the eigenvalues $\lambda_n$ are rarely found exactly. Often, one only knows an asymptotic approximation. Thus, a natural question is to study the relation between
the upper tail probability of the original random series and the one with approximated eigenvalues. Also, it is much easier to analyze the rate function in the large deviation theory when $\{a_n\}$ are explicitly given instead of asymptotic approximation.
Throughout this paper, the following notations will be used. The $l^q$ norm of a real sequence $a=\{a_n\}$ is denoted by $||a||_q=\left(\sum_{n=1}^{\infty} |a_n|^q\right)^{1/q}.$ In particular, the $l^{\infty}$ norm should be understood as $||a||_{\infty}=\sup_n|a_n|.$
We focus on the following two types of comparisons. The first is at the exact level
\begin{align}\label{comparison-exact}
\frac{\mathbb{P}\left\{\sum_{n=1}^{\infty}a_n|\xi_n|\geq r\|a\|_2\beta+|\alpha|\sum_{n=1}^{\infty} a_n\right\}}{\mathbb{P}\left\{\sum_{n=1}^{\infty}b_n|\xi_n|\geq r\|b\|_2\beta+|\alpha|\sum_{n=1}^{\infty} b_n\right\}}\sim1\qquad\text{ as }r\rightarrow\infty
\end{align}
where $\{\xi_n\}$ are i.i.d. Gaussian random variables $N(\alpha,\beta^2);$ see Theorem \ref{exact-standard-normal} and Theorem \ref{exact-general-normal}. This is motivated by \cite{Gao-Hannig-Torcaso} in which the following exact level comparison theorems for small deviations were obtained: as $r\rightarrow0,$
$\mathbb{P}\left\{\sum_{n=1}^{\infty}a_n|\xi_n|\leq r\right\}\sim c\mathbb{P}\left\{\sum_{n=1}^{\infty}b_n|\xi_n|\leq r\right\}$ for i.i.d. random variables $\{\xi_n\}$ whose common distribution satisfies several weak assumptions in the vicinity of zero. The proof of the small deviation comparison is based on the equivalence form of $\mathbb{P}\left\{\sum_{n=1}^{\infty}a_n|\xi_n|\leq r\right\}$ introduced in \cite{Lifshits-1997}. Our proof of upper tail probability comparison (\ref{comparison-exact}) is also based on an equivalent form of $\mathbb{P}\left\{\sum_{n=1}^{\infty}a_n|\xi_n|\geq r\right\}$ in \cite{Lifshits-1994} for Gaussian random variables. The main difficulty is to come up with suitable inequalities which can be used for a specified function $\widehat{\varepsilon}(x,y)$ in Lemma \ref{Lifshits-Theorem2}, and such inequalities are obtained in Lemma \ref{core-lemma-1} and Lemma \ref{core-lemma-2}.
For more general random variables, difficulties arise due to the lack of known equivalent form of $\mathbb{P}\left\{\sum_{n=1}^{\infty}a_n|\xi_n|\geq r\right\}.$ Thus, instead of exact comparison, we consider logarithmic level comparison for upper tail probabilities
\begin{align}\label{comparison-log}
\frac{\log\mathbb{P}\left\{\sum_{n=1}^{\infty}a_n|\xi_n|\geq r\|a\|_q\right\}}{\log\mathbb{P}\left\{\sum_{n=1}^{\infty}b_n|\xi_n|\geq r\|b\|_q\right\}}\sim1\qquad\text{ as }r\rightarrow\infty.
\end{align}
It turns out that under suitable conditions on the sequences $\{a_n\}$ and $\{b_n\}$ the comparison (\ref{comparison-log}) holds true for i.i.d. random variables $\{\xi_n\}$ satisfying $$\lim_{u\rightarrow\infty}u^{-p}\log\mathbb{P}\left\{|\xi_1|\geq u\right\}=-c$$ for some finite constants $p\geq1$ and $c>0;$ see Theorem \ref{rough-comparison}. Here we note that logarithmic level comparisons for small deviation probabilities can be found in \cite{Gao-Li}.
From comparisons (\ref{comparison-exact}) and (\ref{comparison-log}), we see that two upper tail probabilities are equivalent as long as suitable scaling is made. We believe that this holds true for more general random variables; see the conjecture at the end of Section \ref{section-exact-comparison} for details.
\section{Exact comparisons for Gaussian random series}\label{section-exact-comparison}
\subsection{The main results}\label{exact-for-Gaussian}
The following two theorems are the main results in this section. The first one is on standard Gaussian random variables.
\begin{theorem}\label{exact-standard-normal}
Let $\{Z_n\}$ be a sequence of i.i.d. standard Gaussian random variables $N(0,1),$ and $\{a_n\},\{b_n\}$ be two non-increasing sequences of positive real numbers such that $\sum_{n=1}^{\infty}a_n<\infty,\sum_{n=1}^{\infty}b_n<\infty,$
\begin{equation}\label{a-n-b-n-converge}
\begin{aligned}
\prod_{n=1}^{\infty}\left(2-\frac{a_n/\|a\|_2}{b_n/\|b\|_2}\right)\text{ and }\prod_{n=1}^{\infty}\left(2-\frac{b_n/\|b\|_2}{a_n/\|a\|_2}\right)\text{ converge}.
\end{aligned}
\end{equation}
Then as $r\rightarrow\infty$
\begin{align*}
\mathbb{P}\left\{\sum_{n=1}^{\infty}a_n|Z_n|\geq r\|a\|_2\right\}\sim\mathbb{P}\left\{\sum_{n=1}^{\infty}b_n|Z_n|\geq r\|b\|_2\right\}.
\end{align*}
\end{theorem}
For general Gaussian random variables $Z_n,$ it turns out that condition (\ref{a-n-b-n-converge}) is not convenient for deriving the comparison because some more complicated terms appear in the proof. Therefore, an equivalent condition of another form is formulated, which leads to the following comparison.
\begin{theorem}\label{exact-general-normal}
Let $\{Z_n\}$ be a sequence of i.i.d. Gaussian random variables $N(\alpha,\beta^2),$ and $\{a_n\},\{b_n\}$ be two non-increasing sequences of positive real numbers such that $\sum_{n=1}^{\infty}a_n<\infty,\sum_{n=1}^{\infty}b_n<\infty,$
\begin{equation}\label{a-n-b-n-converge-regular-normal}
\begin{aligned}
\sum_{n=1}^{\infty}\left(1-\frac{a_n/\|a\|_2}{b_n/\|b\|_2}\right)\text{ converges, and }\sum_{n=1}^{\infty}\left(1-\frac{a_n/\|a\|_2}{b_n/\|b\|_2}\right)^2<\infty.
\end{aligned}
\end{equation}
Then as $r\rightarrow\infty$
\begin{align*}
\mathbb{P}&\left\{\sum_{n=1}^{\infty}a_n|Z_n|\geq r\|a\|_2\beta+|\alpha|\sum_{n=1}^{\infty} a_n\right\}\\
&\qquad\qquad\sim\mathbb{P}\left\{\sum_{n=1}^{\infty}b_n|Z_n|\geq r\|b\|_2\beta+|\alpha|\sum_{n=1}^{\infty} b_n\right\}.
\end{align*}
\end{theorem}
\subsection{Proofs of Theorem \ref{exact-standard-normal} and Theorem \ref{exact-general-normal}}
The function $\Phi$ stands for the distribution function of a standard Gaussian random variable
$$\Phi(x)=\int_{-\infty}^x\frac{1}{\sqrt{2\pi}}e^{-u^2/2}du.$$
The first lemma is our starting point.
\begin{lemma}[\cite{Lifshits-1994}]\label{Lifshits-Theorem2} Let $\{\xi_n\}$ be a sequence of i.i.d. Gaussian random variables $N(\alpha,\beta^2),$ and $\{a_n\}$ be a sequence of positive real numbers such that $\sum_{n=1}^{\infty}a_n<\infty.$ Then as $r\rightarrow\infty$
\begin{equation}\label{lemma-Lifshits-Theorem2}
\begin{aligned}
\mathbb{P}&\left\{\sum_{n=1}^{\infty}a_n|\xi_n|\geq r\right\}\\
&\sim\prod_{n=1}^{\infty}\widehat{\varepsilon}\left(\frac{a_n(r-|\alpha|\sum_{n=1}^{\infty}a_n)}{||a||_2^2\beta},\frac{\alpha}{\beta}\right)\cdot\left[1-\Phi\left(\frac{r-|\alpha|\sum_{n=1}^{\infty}a_n}{||a||_2\beta}\right)\right]
\end{aligned}
\end{equation}
where $\widehat{\varepsilon}(x,y)=\Phi(x+|y|)+\exp\{-2x|y|\}\Phi(x-|y|).$
\end{lemma}
\begin{lemma}[Lemma 5 in \cite{Gao-Hannig-Torcaso}]\label{lemma5Gao}
Suppose $\{c_n\}$ is a sequence of real numbers such that $\sum_{n=1}^{\infty}c_n$ converges, and $g$ has total variation $D$ on $[0,\infty).$ Then, for any monotonic non-negative sequence $\{d_n\},$
$$\left|\sum_{n\geq N}c_n g(d_n)\right|\leq (D+\sup_x|g(x)|)\sup_{k>N}\left|\sum_{n=N}^k c_n\right|.$$
\end{lemma}
As mentioned in the introduction, the key step of the proofs is to come up with suitable inequalities that can be used for the function $\widehat{\varepsilon}(x,y)$ in Lemma \ref{Lifshits-Theorem2}. For the proof of Theorem \ref{exact-standard-normal}, we need the following
\begin{lemma}\label{core-lemma-1}
For $a\leq0$ and small enough $\delta,$ we have
$$1+a\cdot \delta\leq (1+\delta)^a.$$
\end{lemma}
The proof of this lemma is trivial (it is Bernoulli's inequality for non-positive exponents). The proof of Theorem \ref{exact-general-normal} requires a more complicated inequality as follows.
\begin{lemma}\label{core-lemma-2}
For a fixed $\sigma>0$ and any $\gamma>0,$ there is a constant $\lambda(\sigma)$ only depending on $\sigma$ such that for any $|a|\leq \sigma$ and $|\delta|\leq \lambda,$
$$1+a\cdot \delta+\gamma\leq (1+\delta)^a(1+\delta^2)(1+\gamma)^2.$$
\end{lemma}
The proof of Lemma \ref{core-lemma-2} is elementary (but not trivial) which is given at the end of this section.
\begin{proof}[Proof of Theorem \ref{exact-standard-normal}]
By replacing $a_n$ and $b_n$ with $\tilde{a}_n=a_n/\|a\|_2$ and $\tilde{b}_n=b_n/\|b\|_2$ if necessary, we may assume that $\|a\|_2=\|b\|_2=1.$ It follows from Lemma \ref{Lifshits-Theorem2} that
$$\mathbb{P}\left\{\sum_{n=1}^{\infty}a_n|Z_n|\geq r\right\}\sim\prod_{n=1}^{\infty}2\Phi\left(ra_n\right)\cdot\left[1-\Phi\left(r\right)\right].$$
Therefore,
\begin{align*}
\frac{\mathbb{P}\left\{\sum_{n=1}^{\infty}a_n|Z_n|\geq r\right\}}{\mathbb{P}\left\{\sum_{n=1}^{\infty}b_n|Z_n|\geq r\right\}}\sim\prod_{n=1}^{\infty}\frac{\Phi\left(ra_n\right)}{\Phi\left(rb_n\right)}.
\end{align*}
Now we prove that $\prod_{n=N}^{\infty}\frac{\Phi\left(ra_n\right)}{\Phi\left(rb_n\right)}$ tends to $1$ as $N\rightarrow\infty$ uniformly in $r.$ Then the limit of $\prod_{n=1}^{\infty}\frac{\Phi\left(ra_n\right)}{\Phi\left(rb_n\right)}$ as $r\rightarrow\infty$ is equal to $1$ since the limit of each $\frac{\Phi\left(ra_n\right)}{\Phi\left(rb_n\right)}$ as $r\rightarrow\infty$ is $1.$
By applying Taylor's expansion to $\Phi$ up to the second order, we have
\begin{align*}
\Phi\left(ra_n\right)=&\Phi\left(rb_n\right)+\Phi'\left(rb_n\right)\left(ra_n-rb_n\right)\\
&+\frac{\Phi''(rc_n)}{2}\left(ra_n-rb_n\right)^2
\end{align*}
where $c_n$ is between $a_n$ and $b_n.$ It follows from $\Phi''(rc_n)\leq0$ that
\begin{align*}
\frac{\Phi\left(ra_n\right)}{\Phi\left(rb_n\right)}
\leq1+\frac{rb_n\Phi'\left(rb_n\right)}{\Phi\left(rb_n\right)}\left(\frac{a_n}{b_n}-1\right).
\end{align*}
Let us introduce a new function $g(x)=-\frac{x\Phi'(x)}{\Phi(x)}.$ Now we apply Lemma \ref{core-lemma-1} with $a=g(rb_n)$ to get
\begin{align*}
\frac{\Phi\left(ra_n\right)}{\Phi\left(rb_n\right)}
\leq\left(2-\frac{a_n}{b_n}\right)^{g(rb_n)}.
\end{align*}
It then follows from Lemma \ref{lemma5Gao} that
\begin{align*}
\prod_{n\geq N}\frac{\Phi\left(ra_n\right)}{\Phi\left(rb_n\right)}
&\leq\exp\left\{\sum_{n\geq N}g(rb_n)\log\left(2-\frac{a_n}{b_n}\right)\right\}\\
&\leq\exp\left\{(D+\sup_x|g(x)|)\sup_{k>N}\left|\sum_{n=N}^k\log\left(2-\frac{a_n}{b_n}\right)\right|\right\}\\
\end{align*}
which tends to $1$ as $N\rightarrow\infty$, uniformly in $r$, by condition (\ref{a-n-b-n-converge}). Thus
\begin{align*}
\limsup_{N\rightarrow\infty}\prod_{n\geq N}\frac{\Phi\left(ra_n\right)}{\Phi\left(rb_n\right)}
\leq 1.
\end{align*}
Similarly,
\begin{align*}
\limsup_{N\rightarrow\infty}\prod_{n\geq N}\frac{\Phi\left(rb_n\right)}{\Phi\left(ra_n\right)}
\leq 1
\end{align*}
which completes the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{exact-general-normal}]
From Lemma \ref{Lifshits-Theorem2} we get
\begin{align*}
\frac{\mathbb{P}\left\{\sum_{n=1}^{\infty}a_n|\xi_n|\geq r\|a\|_2\beta+|\alpha|\sum_{n=1}^{\infty} a_n\right\}}{\mathbb{P}\left\{\sum_{n=1}^{\infty}b_n|\xi_n|\geq r\|b\|_2\beta+|\alpha|\sum_{n=1}^{\infty} b_n\right\}}\sim\prod_{n=1}^{\infty}\frac{h(ra_n/\|a\|_2)}{h(rb_n/\|b\|_2)}
\end{align*}
where $h(x)=\Phi(x+|\alpha/\beta|)+\exp\{-2x|\alpha/\beta|\}\Phi(x-|\alpha/\beta|).$ Without loss of generality, we assume $\|a\|_2=\|b\|_2=1.$ We use the notation $f(x)=\exp\{-2x|\alpha/\beta|\}\Phi(x-|\alpha/\beta|),$ thus
$$h(ra_n)=\Phi(ra_n+|\alpha/\beta|)+f(ra_n).$$
Now we apply Taylor's expansions to $\Phi$ at point $rb_n+|\alpha/\beta|$, and to $f$ at point $rb_n$ both up to the second order, so
\begin{align*}
h(ra_n)=&\Phi(rb_n+|\alpha/\beta|)+rb_n\Phi'(rb_n+|\alpha/\beta|)\left(\frac{a_n}{b_n}-1\right)\\
&\qquad\qquad\qquad+\Phi''(rc_{1,n}+|\alpha/\beta|)\left(ra_n-rb_n\right)^2/2\\
&+f(rb_n)+rb_nf'(rb_n)\left(\frac{a_n}{b_n}-1\right)+\frac{r^2b_n^2f''(rc_{2,n})}{2}\left(\frac{a_n}{b_n}-1\right)^2
\end{align*}
where $c_{1,n}$ and $c_{2,n}$ are between $a_n$ and $b_n.$ Because $\Phi''\leq0,$
\begin{align*}
h(ra_n)\leq&h(rb_n)+rb_n\left[\Phi'(rb_n+|\alpha/\beta|)+f'(rb_n)\right]\left(\frac{a_n}{b_n}-1\right)\\
&\qquad\qquad\qquad+\frac{r^2b_n^2f''(rc_{2,n})}{2}\left(\frac{a_n}{b_n}-1\right)^2.
\end{align*}
Taking into account that $\left|r^2b_n^2f''(rc_{2,n})\right|\leq 2c(|\alpha/\beta|)$ for large $N$ uniformly in $r$ with some positive constant $c$ depending on $|\alpha/\beta|,$ we have
\begin{align*}
\frac{h(ra_n)}{h(rb_n)}\leq1+\frac{rb_n\left[\Phi'(rb_n+|\alpha/\beta|)+f'(rb_n)\right]}{h(rb_n)}\left(\frac{a_n}{b_n}-1\right)+c\left(\frac{a_n}{b_n}-1\right)^2.
\end{align*}
The function $g(x):=x\left[\Phi'(x+|\alpha/\beta|)+f'(x)\right]/h(x)$ is bounded and continuously differentiable on $[0,\infty)$ with a bounded derivative. Therefore it follows from Lemma \ref{core-lemma-2} that
\begin{align*}
\frac{h(ra_n)}{h(rb_n)}\leq\left(\frac{a_n}{b_n}\right)^{g(rb_n)}\left(1+\left(\frac{a_n}{b_n}-1\right)^2\right)\left(1+c\left(\frac{a_n}{b_n}-1\right)^2\right)^2.
\end{align*}
By taking the infinite product, we get
\begin{align*}
\prod_{n=N}^{\infty}\frac{h(ra_n)}{h(rb_n)}\leq\prod_{n=N}^{\infty}\left(\frac{a_n}{b_n}\right)^{g(rb_n)}\prod_{n=N}^{\infty}\left(1+\left(\frac{a_n}{b_n}-1\right)^2\right)\left(1+c\left(\frac{a_n}{b_n}-1\right)^2\right)^2.
\end{align*}
According to Lemma \ref{lemma5Gao}, the first product
\begin{align*}
\prod_{n=N}^{\infty}\left(\frac{a_n}{b_n}\right)^{g(rb_n)}&=\exp\left\{\sum_{n\geq N}g(rb_n)\log\left(\frac{a_n}{b_n}\right)\right\}\\
&\leq\exp\left\{(D+\sup_x|g(x)|)\sup_{k>N}\left|\sum_{n=N}^k \log\left(\frac{a_n}{b_n}\right)\right|\right\}
\end{align*}
which tends to $1$ because the series $\sum_{n=1}^{\infty} \log\left(\frac{a_n}{b_n}\right)$ is convergent (this is from condition (\ref{a-n-b-n-converge-regular-normal}), see Appendix for more details).
For the second product, we use $1+x\leq e^x$ to get
\begin{align*}
&\prod_{n=N}^{\infty}\left(1+\left(\frac{a_n}{b_n}-1\right)^2\right)\left(1+c\left(\frac{a_n}{b_n}-1\right)^2\right)^2\\
&\leq\exp\left\{(1+2c)\sum_{n\geq N}\left(\frac{a_n}{b_n}-1\right)^2\right\}
\end{align*}
and this tends to $1$ because of (\ref{a-n-b-n-converge-regular-normal}). Thus
\begin{align*}
\limsup_{N\rightarrow\infty}\prod_{n=N}^{\infty}\frac{h(ra_n)}{h(rb_n)}\leq 1.
\end{align*}
We can similarly prove $\limsup_{N\rightarrow\infty}\prod_{n=N}^{\infty}\frac{h(rb_n)}{h(ra_n)}\leq 1$ which ends the proof.
\end{proof}
\begin{proof}[Proof of Lemma \ref{core-lemma-2}] We first show that under the assumptions of Lemma \ref{core-lemma-2}, the following inequality holds
\begin{align}\label{middle}
1+a\cdot \delta\leq (1+\delta)^a(1+\delta^2).
\end{align}
Let us consider the function $p(\delta)$ for $|\delta|<1$ and $|a\delta|<1$ defined as
$$p(\delta)=a\log(1+\delta)+\log(1+\delta^2)-\log(1+a\delta).$$
It is clear that $p(0)=0$ and
\begin{align*}
p'(\delta)=\frac{\delta}{(1+\delta)(1+\delta^2)(1+a\delta)}\left[\delta^2\left(a^2+a\right)+\delta\left(2a+2\right)+\left(a^2-a+2\right)\right]
\end{align*}
so that $p'(\delta)/\delta$ is greater than $3/2$ for sufficiently small $\lambda_1$ depending on $\sigma$ with $|a|\leq \sigma$ and $0<|\delta|\leq \lambda_1,$ since $a^2-a+2\geq 7/4.$ Hence $p'(\delta)$ has the same sign as $\delta,$ so $p(\delta)\geq p(0)=0,$ and inequality (\ref{middle}) is thus proved.
Now we define a new function
$$q(\gamma)=(1+\delta)^a(1+\delta^2)(1+\gamma)^2-(1+a\delta+\gamma).$$
From (\ref{middle}) we have $q(0)\geq0.$ Furthermore,
$$q'(\gamma)=(1+\delta)^a(1+\delta^2)2(1+\gamma)-1$$
which can be made positive for small $\lambda_2$ depending on $\sigma$ with $|\delta|\leq \lambda_2.$ The proof is complete by taking $\lambda=\min\{\lambda_1,\lambda_2\}.$
\end{proof}
\subsection{Appropriate extensions}
By again using an equivalent form for $\mathbb{P}\left\{\sum_{n=1}^{\infty}a_n|Z_n|^p\geq r\right\}$ discussed in \cite{Lifshits-1994} with $1\leq p<2,$ we can similarly derive, without much difficulty, an exact comparison for the upper tail probabilities of $\sum_{n=1}^{\infty}a_n|Z_n|^p.$ We state this as the following proposition without proof.
\begin{prop}\label{exact-general-normal-p}
Let $\{\xi_n\}$ be a sequence of i.i.d. Gaussian random variables $N(\alpha,\beta^2),$ and $\{a_n\},\{b_n\}$ be two sequences of positive real numbers such that $\sum_{n=1}^{\infty}a_n<\infty,\sum_{n=1}^{\infty}b_n<\infty$ and
\begin{equation}\label{abs_sum}
\begin{aligned}
\sum_{n=1}^{\infty}\left|1-\frac{a_n/\sigma_a^p}{b_n/\sigma_b^p}\right|<\infty
\end{aligned}
\end{equation}
for $1\leq p<2,$ $\sigma_a=\left(\sum_{n=1}^{\infty}a_n^{m/p}\right)^{1/m}\beta$ with $m=2p/(2-p).$ Then as $r\rightarrow\infty$
\begin{align*}
\mathbb{P}&\left\{\sum_{n=1}^{\infty}a_n|\xi_n|^p\geq \left(r\sigma_a+|\alpha|\sum_{n=1}^{\infty} a_n^{1/p}\right)^p\right\}\\
&\qquad\qquad\sim\mathbb{P}\left\{\sum_{n=1}^{\infty}b_n|\xi_n|^p\geq \left(r\sigma_b+|\alpha|\sum_{n=1}^{\infty} b_n^{1/p}\right)^p\right\}.
\end{align*}
\end{prop}
Based on what we have observed for Gaussian random variables so far, it is reasonable to believe that after suitable scaling, two upper tail probabilities involving $\{a_n\}$ and $\{b_n\}$ separately are equivalent. Namely, we have the following.
\textbf{Conjecture}: Under suitable conditions on $\{a_n\}$ and $\{b_n\},$ for general i.i.d. random variables $\{\xi_n\},$ the following exact comparison holds
\begin{align*}
\mathbb{P}&\left\{\sum_{n=1}^{\infty}a_n|\xi_n|\geq h\Big(rf^{\xi}(a)+g^{\xi}(a)\Big)\right\}\sim\mathbb{P}\left\{\sum_{n=1}^{\infty}b_n|\xi_n|\geq h\Big(rf^{\xi}(b)+g^{\xi}(b)\Big)\right\}
\end{align*}
for some function $h(r)$ satisfying $\lim_{r\rightarrow\infty}h(r)=\infty,$ and for two suitable scaling coefficients $f^{\xi}(a)$ and $g^{\xi}(a)$ whose values at sequence $a=\{a_n\}$ only depend on $a$ and the structure of the distribution of $\xi_1$ (such as the mean, the variance, the tail behaviors, etc).
In the next section, we show that indeed two upper tail probabilities in the logarithmic level are equivalent after some scaling. This adds more evidence of our conjecture.
\section{Logarithmic level comparison}\label{-section-logarithmic-level-comparison}
In this section, we illustrate the logarithmic level comparison for more general random variables $\{\xi_n\}$ other than the Gaussian ones.
\begin{theorem}\label{rough-comparison}
Let $\{\xi_n\}$ be a sequence of i.i.d. random variables whose common distribution satisfies $\mathbb{E}|\xi_1|<\infty$ and
\begin{align}\label{log-condition-c}
\lim_{u\rightarrow\infty}u^{-p}\log\mathbb{P}\left\{|\xi_1|\geq u\right\}=-c
\end{align}for some constants $p\geq1$ and $0<c<\infty.$ Suppose that a sequence of positive real numbers $\{a_n\}$ is such that $\sum_{n=1}^{\infty}a_n^{2\wedge q}<\infty$ with $q$ given by $\frac{1}{p}+\frac{1}{q}=1.$ Then as $r\rightarrow\infty$
\begin{align}\label{end-rough}
\log\mathbb{P}\left\{\sum_{n=1}^{\infty}a_n|\xi_n|\geq r\right\}\sim -r^p\cdot c\cdot \|a\|_q^{-p}.
\end{align}
\end{theorem}
\begin{remark}
If $\xi_1$ is the standard Gaussian random variable, then $p=2$ and $c=1/2$ in condition (\ref{log-condition-c}). If $\xi_1$ is an exponential random variable with density function $e^{-x}$ on $[0,\infty),$ then $p=c=1.$ One can easily produce more examples. It is straightforward to deduce the following comparison result from (\ref{end-rough}).
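In the Gaussian case, the values $p=2$ and $c=1/2$ can be read off from the standard estimates
\begin{align*}
\left(\frac{1}{u}-\frac{1}{u^3}\right)\frac{e^{-u^2/2}}{\sqrt{2\pi}}\leq 1-\Phi(u)\leq \frac{1}{u}\cdot\frac{e^{-u^2/2}}{\sqrt{2\pi}},\qquad u>0,
\end{align*}
applied to $\mathbb{P}\left\{|\xi_1|\geq u\right\}=2(1-\Phi(u)),$ which give $u^{-2}\log\mathbb{P}\left\{|\xi_1|\geq u\right\}\rightarrow-1/2.$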
\end{remark}
\begin{cor}
Let $\{\xi_n\}$ be a sequence of i.i.d. random variables satisfying the assumptions in Theorem \ref{rough-comparison}. Suppose that two sequences of positive real numbers $\{a_n\}$ and $\{b_n\}$ satisfy $\sum_{n=1}^{\infty}a_n^{2\wedge q}<\infty$ and $\sum_{n=1}^{\infty}b_n^{2\wedge q}<\infty$ with $q$ given by $\frac{1}{p}+\frac{1}{q}=1.$ Then as $r\rightarrow\infty$
$$\frac{\log\mathbb{P}\left\{\sum_{n=1}^{\infty}a_n|\xi_n|\geq r\right\}}{\log\mathbb{P}\left\{\sum_{n=1}^{\infty}b_n|\xi_n|\geq r\right\}}\sim\left(\frac{\|b\|_q}{\|a\|_q}\right)^p$$
and
$$\frac{\log\mathbb{P}\left\{\sum_{n=1}^{\infty}a_n|\xi_n|\geq r\|a\|_q\right\}}{\log\mathbb{P}\left\{\sum_{n=1}^{\infty}b_n|\xi_n|\geq r\|b\|_q\right\}}\sim1.$$
\end{cor}
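Indeed, the first assertion follows directly from (\ref{end-rough}), while replacing $r$ with $r\|a\|_q$ there gives
\begin{align*}
\log\mathbb{P}\left\{\sum_{n=1}^{\infty}a_n|\xi_n|\geq r\|a\|_q\right\}\sim -r^p\cdot c,
\end{align*}
whose right-hand side no longer depends on the sequence, which yields the second assertion.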
The proof of Theorem \ref{rough-comparison} is based on the large deviation principle for random series which was derived in \cite{Arcones}. Let us recall a result from \cite{Arcones} (slightly adapted for our purpose).
\begin{lemma}[\cite{Arcones}]\label{Arones-result}
Let $\{\eta_k\}$ be a sequence of i.i.d. random variables with mean zero satisfying the following condition
\begin{equation}\label{appendix-equation}
\begin{cases}
&\lim_{u\rightarrow\infty}u^{-p}\log\mathbb{P}\left\{\eta_1\leq-u\right\}=-c_1;\\
&\lim_{u\rightarrow\infty}u^{-p}\log\mathbb{P}\left\{\eta_1\geq u\right\}=-c_2,
\end{cases}
\end{equation}
for some $p\geq1$ and $0<c_1,c_2\leq\infty$ with $\min\{c_1,c_2\}<\infty.$ Suppose $\{x_k\}$ is a sequence of real numbers such that $\sum_{k=1}^{\infty}|x_k|^{2\wedge q}<\infty.$ Then the family $\{n^{-1}\sum_{k=1}^{\infty}x_k\eta_k\}$ satisfies the large deviation principle with speed $n^p$ and a rate function
\begin{equation*}
I(z)=\inf\left\{\sum_{j=1}^{\infty}\psi(u_j):\sum_{j=1}^{\infty}u_jx_j=z\right\}, \,\,\,z\in\mathbb{R}
\end{equation*}
where
\begin{align*}
\psi(t)=
\begin{cases}
c_1|t|^p & \text{ if }t<0;\\
0 & \text{ if }t=0;\\
c_2|t|^p & \text{ if }t>0.
\end{cases}
\end{align*}
Namely, for any measurable set $A\subseteq\mathbb{R},$
\begin{align*}
&-\inf\{I(y):y\in\text{interior of }A\}\leq\liminf_{n\rightarrow\infty}n^{-p}\log\mathbb{P}\left\{n^{-1}\sum_{k=1}^{\infty}x_k\eta_k\in A\right\}\\
&\leq\limsup_{n\rightarrow\infty}n^{-p}\log\mathbb{P}\left\{n^{-1}\sum_{k=1}^{\infty}x_k\eta_k\in A\right\}\leq-\inf\{I(y):y\in\text{closure of }A\}.
\end{align*}
\end{lemma}
\begin{proof}[Proof of Theorem \ref{rough-comparison}] We apply Lemma \ref{Arones-result} to the i.i.d. random variables $\eta_k=|\xi_k|-\mathbb{E}|\xi_k|.$ The condition (\ref{log-condition-c}) implies that (\ref{appendix-equation}) is fulfilled. Let us consider a special measurable set $A=[1,\infty).$ By using the Lagrange multiplier, it follows that
\begin{align*}
-\inf\{I(y):y\in\text{interior of }A\}=-\inf\{I(y):y\in\text{closure of }A\}=-c\|x\|_q^{-p}
\end{align*}
(this can also be deduced from Lemma 3.1 of \cite{Arcones}). Then (\ref{end-rough}) follows from the large deviation principle.
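For completeness, we sketch the computation behind this infimum for $p>1$ (the case $p=1,$ $q=\infty,$ is analogous). Since the left tail of $\eta_1$ vanishes for $u>\mathbb{E}|\xi_1|,$ we have $c_1=\infty,$ which forces $u_j\geq0$ in the rate function, and $\psi(t)=ct^p$ for $t\geq0.$ Recalling that here $x_j=a_j>0,$ for $z\geq1$ H\"older's inequality gives
\begin{align*}
z=\sum_{j=1}^{\infty}u_jx_j\leq\left(\sum_{j=1}^{\infty}u_j^p\right)^{1/p}\|x\|_q,\qquad\text{ hence }\qquad \sum_{j=1}^{\infty}\psi(u_j)\geq c\,z^p\|x\|_q^{-p},
\end{align*}
with equality for $u_j=z\,x_j^{q-1}\|x\|_q^{-q}.$ Thus $I(z)=c\,z^p\|x\|_q^{-p},$ and its infimum over $z\geq1$ equals $c\,\|x\|_q^{-p}.$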
\end{proof}
Now let us assume
\begin{align*}
\lim_{u\rightarrow\infty}u^{-p}\log\mathbb{P}\left\{|\xi_1|\geq u\right\}=-c.
\end{align*}
Then it follows easily that
\begin{align*}
\lim_{u\rightarrow\infty}u^{-p/k}\log\mathbb{P}\left\{|\xi_1|^k\geq u\right\}=-c.
\end{align*}
So the logarithmic level comparison for $\xi_n^k$ can be similarly derived as follows.
\begin{prop}\label{rough-comparison-p}
Let $k>0$ be a positive real number, $\{\xi_n\}$ be a sequence of i.i.d. random variables whose common distribution satisfies $\mathbb{E}|\xi_1|^k<\infty$ and
\begin{align*}
\lim_{u\rightarrow\infty}u^{-p}\log\mathbb{P}\left\{|\xi_1|\geq u\right\}=-c
\end{align*}
for some constants $0<c<\infty$ and $p$ such that $p/k\geq1.$ Two sequences of positive real numbers $\{a_n\}$ and $\{b_n\}$ satisfy $\sum_{n=1}^{\infty}a_n^{2\wedge q}<\infty$ and $\sum_{n=1}^{\infty}b_n^{2\wedge q}<\infty$ where $q$ is given by $\frac{1}{p/k}+\frac{1}{q}=1.$ Then as $r\rightarrow\infty$
$$\frac{\log\mathbb{P}\left\{\sum_{n=1}^{\infty}a_n|\xi_n|^k\geq r\right\}}{\log\mathbb{P}\left\{\sum_{n=1}^{\infty}b_n|\xi_n|^k\geq r\right\}}\sim\left(\frac{\|b\|_q}{\|a\|_q}\right)^{p/k}$$
and
$$\frac{\log\mathbb{P}\left\{\sum_{n=1}^{\infty}a_n|\xi_n|^k\geq r\|a\|_q\right\}}{\log\mathbb{P}\left\{\sum_{n=1}^{\infty}b_n|\xi_n|^k\geq r\|b\|_q\right\}}\sim1.$$
\end{prop}
\section*{Appendix}\label{appendix}
In this section, we make a few remarks on the conditions in Theorem \ref{exact-standard-normal} and Theorem \ref{exact-general-normal}. First, we note that conditions (\ref{a-n-b-n-converge}) and (\ref{a-n-b-n-converge-regular-normal}) are not very restrictive, and examples of sequences satisfying these conditions can be produced. For instance, we can consider two sequences with
$$1-\frac{a_n/\|a\|_2}{b_n/\|b\|_2}=\frac{(-1)^n}{n}.$$
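For this choice both requirements in (\ref{a-n-b-n-converge-regular-normal}) are indeed satisfied, since
$$\sum_{n=1}^{\infty}\frac{(-1)^n}{n}=-\log2\qquad\text{ and }\qquad\sum_{n=1}^{\infty}\frac{1}{n^2}=\frac{\pi^2}{6}<\infty.$$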
To see the relation between (\ref{a-n-b-n-converge}) and (\ref{a-n-b-n-converge-regular-normal}), let us quote part of a useful theorem from \cite{Wermuth-1992}, from which many convergence results on infinite products and series can be easily derived.
\begin{lemma}[Part (a) of Theorem 1 in \cite{Wermuth-1992}]
Let $\{x_n\}$ be a sequence of real numbers. If any two of the four expressions
$$\prod_{n=1}^{\infty}(1+x_n),\quad \prod_{n=1}^{\infty}(1-x_n),\quad \sum_{n=1}^{\infty}x_n,\quad \sum_{n=1}^{\infty}x_n^2$$
are convergent, then this holds also for the remaining two.
\end{lemma}
Under condition (\ref{a-n-b-n-converge-regular-normal}) in Theorem \ref{exact-general-normal}, it follows from this result that
$$\prod_{n=1}^{\infty}\left(2-\frac{a_n/\|a\|_2}{b_n/\|b\|_2}\right)\text{ and }\prod_{n=1}^{\infty}\frac{a_n/\|a\|_2}{b_n/\|b\|_2}\text{ converge.}$$
This implies that $\sum_{n=1}^{\infty}\log\left(\frac{a_n/\|a\|_2}{b_n/\|b\|_2}\right)$ is convergent. The facts that
$$\sum_{n=1}^{\infty}\left(1-\frac{b_n/\|b\|_2}{a_n/\|a\|_2}\right)^2<\infty\text{ and }\prod_{n=1}^{\infty}\frac{b_n/\|b\|_2}{a_n/\|a\|_2}\text{ converge}$$
yield
$$\prod_{n=1}^{\infty}\left(2-\frac{b_n/\|b\|_2}{a_n/\|a\|_2}\right)\text{ is convergent.}$$
\section{Introduction}
\paragraph{Why euclidean field theory?}
During the last two decades it turned out that
the techniques of euclidean field theory are
powerful tools in order to construct
quantum field theory models.
Compared to the method of canonical quantization in Minkowski space,
which, for example, has been used for the construction of
$P(\phi)_2$ and Yukawa$_2$ models
\cite{GlJa1,GlJa2,GlJa3,GlJa4,Schra0,Schra1}, the functional integral
methods of euclidean field theory simplify the construction of
interactive quantum field theory models.
In particular,
the existence of the $\phi^4_3$ model as a Wightman theory
has been established by using euclidean methods
\cite{FeldOst,SeilSim76,MagSen76}
combined with the famous Osterwalder-Schrader
reconstruction theorem \cite{OstSchra1}.
For this model the methods of canonical quantization are much more
difficult to handle and by no means lead as far as euclidean techniques do.
Only the proof of the positivity of the
energy has been carried out within the hamiltonian framework
\cite{GlJa1,GlJa5}.
One reason why the functional integral point of view
simplifies a lot is that the theory of
classical statistical mechanics can be used.
For example, renormalization group analysis \cite{GawKup} and
cluster expansions \cite{Br} can be applied
in order to perform the continuum and the infinite volume limit of a
lattice regularized model.
Instead of working with non-commutative objects,
one considers the moments
\begin{eqnarray*}
\6S_n(x_1,\cdots,x_n)=\int\8d\mu(\phi) \ \phi(x_1)\cdots\phi(x_n)
\end{eqnarray*}
of reflexion positive measures
$\mu$ on the space of tempered distributions; these moments are
usually called Schwinger distributions or euclidean correlation
functions.
Heuristically, the functional integral point of view leads to
a conceptually simple construction scheme for a quantum field theory.
Starting from a given lagrangian density $L$,
the measure $\mu$ under consideration is simply given by
\begin{eqnarray*}
\8d\mu(\phi)&=&Z^{-1}\ \bigotimes_{x\in\7R^d}\8d\phi(x) \
\exp\biggl(-\int \8dx \ L(\phi(x),\8d\phi(x))\biggr)
\end{eqnarray*}
where the factor $Z^{-1}$ is for normalization. Therefore, the
lagrangian $L$ can be interpreted as a germ of a
quantum field theory. Moreover, this also leads
to a nice explanation of the minimal action principle.
However, giving the expression above a rigorous mathematical
meaning is always accompanied by serious technical difficulties.
\paragraph{Some comments on the Osterwalder-Schrader reconstruction
theorem.}
In order to motivate the main purpose of our paper, we shall
make some brief remarks on the Osterwalder-Schrader reconstruction
theorem \cite{OstSchra1} which relates Schwinger and Wightman distributions.
Let $T(S)$ be the tensor algebra over the
space of test functions $S$ (in $\7R^d$) and let us denote by
$J_E$ ($E$ stands for {\em euclidean}) the two-sided
ideal in $T(S)$, which is generated by elements
$f_1\otimes f_2-f_2\otimes f_1\in T(S)$ where
$f_1$ and $f_2$ have disjoint supports. We build the
algebra $T_E(S):=T(S)/J_E$ and take the closure $T^\2T_E(S)$ of
it in an {\em appropriate} locally convex topology. We claim that
the euclidean group $\8E(d)$ acts naturally by automorphisms
$(\alpha_g,g\in\8E(d))$ on $T^\2T_E(S)$.
A linear functional $\eta\in T^\2T_E(S)^*$ fulfills
the Osterwalder-Schrader axioms if the following conditions hold:
\begin{description}
\itno{E0} $\eta$ is continuous and unit preserving: $<\eta,\11>=1$.
\itno{E1} $\eta$ is invariant under euclidean transformations:
$\eta\circ\alpha_g=\eta$.
\itno{E2} $\eta$ is reflexion positive:
The sesqui-linear form $a\otimes b\mapsto <\eta,\iota_e(a^*)b>$ is
positive semi-definite
on those elements which are localized at
positive times with respect to the direction $e\in S^{d-1}$
where $\iota_e$ is the automorphism which corresponds to the
reflexion $e\mapsto -e$.
\end{description}
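Spelled out, condition {\it (E2)} states: for any finite family $a_1,\cdots,a_n$ of elements localized at positive times with respect to $e$ and any complex numbers $c_1,\cdots,c_n$ one has
\begin{eqnarray*}
\sum_{i,j=1}^{n}\bar c_ic_j<\eta,\iota_e(a_i^*)a_j> \ \geq \ 0 \ \ .
\end{eqnarray*}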
Given a linear functional $\eta$ which satisfies the conditions
{\it (E0)} to {\it (E2)}, the analytic properties
of the distributions
\begin{eqnarray*}
\6S_n(f_1,\cdots ,f_n)&:=&<\eta,f_1\otimes\cdots\otimes f_n>
\vspace{0.2cm} \\\vs
\mbox{ and } \ \
S_n(\xi_1,\cdots,\xi_n)&=& \6S_{n+1}(x_0,\cdots,x_n) \ \ ; \
\ \xi_j=x_{j+1}-x_j
\end{eqnarray*}
lead to the result:
\begin{The}\1:
There exists a distribution $\tilde W_n\in S'(\7R^{nd})$ supported in
the n-fold closed forward light cone $(\bar V_+)^n$
which is related to $S_n$ by the Fourier-Laplace transform:
\begin{eqnarray*}
S_n(\xi)=\int \8d^{nd}q \ \exp(-\xi^0q^0-\8i\vec{\xi}\vec{q}) \ \tilde W_n(q)
\end{eqnarray*}
\end{The}
The proof of this Theorem \cite{OstSchra1} relies essentially on the
choice of the topology $\2T$.
It does not apply for the ordinary $S$-topology, i.e.
it is not enough to require that the $\6S_n$'s are tempered distributions.
This was stated wrongly in
the first paper of \cite{OstSchra1} and was later
corrected in the second one. We claim that, nevertheless,
the Theorem might be true for the ordinary $S$-topology, but, at the
moment, there is no correct proof for it.
These problems show that the relation
between euclidean field theory and quantum field theory
is indeed subtle.
In order to formulate the famous Osterwalder-Schrader reconstruction theorem
from a more algebraic point of view,
we shall briefly introduce the notion of a
{\em local net} and a {\em vacuum state}.
\subparagraph{\it ${\8P^\uparrow_+}$-covariant local nets:}
A ${\8P^\uparrow_+}$-covariant local net of *-algebras is an
isotonous \footnote{Isotony:
$\2O_1\subset \2O_2$ implies $A(\2O_1)\subset A(\2O_2)$.} prescription
$\underline A:\2O\mapsto A(\2O)$, which assigns to each double cone
$\2O=(V_++x)\cap (V_-+y)$ a unital *-algebra $A(\2O)$, and on which the
Poincar\' e group ${\8P^\uparrow_+}$
acts covariantly, i.e. there is a
group homomorphism $\alpha\in{\rm Hom}({\8P^\uparrow_+},{\rm Aut} A)$, such that
$\alpha_gA(\2O)=A(g\2O)$. Here $A$ denotes the *-inductive limit of the net
$\underline A$. Furthermore,
the net fulfills locality, i.e. if $\2O,\2O_1$ are two
space-like separated regions $\2O\subset\2O_1'$ then
$[A(\2O),A(\2O_1)]=\{0\}$.
A ${\8P^\uparrow_+}$-covariant local net of C*-algebras is called
a {\em Haag-Kastler} net.
\subparagraph{\it Vacuum states:}
A state $\omega$ on $A$ is called a vacuum state iff
$\omega$ is ${\8P^\uparrow_+}$-invariant
(or translationally invariant), i.e. $\omega\circ\alpha_g=\omega$
for each $g\in{\8P^\uparrow_+}$, and for each $a,b\in A$
\begin{eqnarray*}
\int \8dx \ <\omega,a\alpha_{(1,x)}(b)> \ f(x) &=& 0
\end{eqnarray*}
for each test function $f\in S$ with
${\rm supp}(\tilde f)\cap \bar V_+=\emptyset$.
This implies that there exists a strongly continuous
representation $U$ of ${\8P^\uparrow_+}$
on the GNS Hilbert space of $\omega$ such that
\begin{eqnarray*}
U(g)\pi(a)U(g)^*=\pi(\alpha_ga)
\end{eqnarray*}
and the spectrum of $U(1,x)$ is contained in the closed forward light cone.
Here $\pi$ is the GNS representation of $\omega$.
Usually it is required that a vacuum state $\omega$ is a
pure state. This aspect is not so important for our purpose and
we do not assume this here.
\vskip 0.5cm
An example for a ${\8P^\uparrow_+}$-covariant local net of *-algebras is given by
the prescription
\begin{eqnarray*}
{\underline T}_M(S):\2O \ \longmapsto \ T_M(S(\2O))
\end{eqnarray*}
where $T_M(S):=T(S)/J_M$\footnote{The ideal
$J_M$ ($M$ stands for Minkowski) is the two-sided
ideal in $T(S)$, which is generated by elements
$f_1\otimes f_2-f_2\otimes f_1\in T(S)$ where
$f_1$ and $f_2$ have space-like separated supports.}
is the well known {\it Borchers-Uhlmann algebra}.
We should mention here that now the test functions in $S$ are
test functions in {\em Minkowski space-time}.
Let $\tau\in {\rm Hom}({\8P^\uparrow_+},\8{GL}(S))$ be the action of
the Poincar\' e group on the test functions which is given by
$\tau_gf=f\circ g^{-1}$ then
\begin{eqnarray*}
\alpha_g(f_1\otimes\cdots\otimes f_n)&:=& \tau_gf_1\otimes\cdots\otimes \tau_gf_n
\end{eqnarray*}
defines a covariant action of ${\8P^\uparrow_+}$ on ${\underline T}_M(S)$.
Now, the theorem above leads to the famous
Osterwalder-Schrader reconstruction theorem:
\begin{The}\1:
Given a linear functional $\eta$ which satisfies the conditions
{\it (E0)} to {\it (E2)}, then there exists a vacuum
state $\omega_\eta$ on the Borchers algebra $T_M(S)$
such that
\begin{eqnarray*}
<\omega_\eta,f_1\otimes\cdots\otimes f_n>&=&\6W_n(f_1,\cdots ,f_n)
\end{eqnarray*}
where $\6W_n$ is defined by
\begin{eqnarray*}
\6W_n(x)=\int \8d^{nd}q \ \exp(-\8i\xi q) \ \tilde W_n(q)
&;& \xi_j=x_{j+1}-x_j \ \ .
\end{eqnarray*}
\end{The}
The fact that $\omega_\eta$ is a vacuum state
on the Borchers algebra is completely equivalent to the
statement that the distributions $\6W_n$ fulfill the
Wightman axioms
in their usual form (except the clustering) (see \cite{StrWgh89}).
\paragraph{A heuristic proposal for the
treatment of gauge theories.}
As mentioned above, the main reason for using euclidean field
theory is for constructing quantum field theory models
with interaction. In four space time dimensions, the
most promising candidates for interactive quantum field theory
models are gauge theories. Scalar or multi-component scalar
field theories of $P(\phi)_4$-type are less promising
for describing interaction, since their construction either
runs into difficulties with renormalizability
or, as conjectured for the $\phi_4^4$-model, leads to a trivial theory
\cite{Froh82}.
The description of gauge theories within the
Wightman framework leads to some conceptional problems.
For example,
in order to study gauge invariant objects in quantum electrodynamics
one may think of
vacuum expectation values of products of the field strength $F_{\mu\nu}$
\begin{eqnarray*}
\6W_{\mu_1\nu_1,\cdots,\mu_n\nu_n}(x_1,\cdots, x_n)
=\<\Omega,F_{\mu_1\nu_1}(x_1)\cdots F_{\mu_n\nu_n}(x_n)\Omega\>
\end{eqnarray*}
which satisfy the Wightman axioms. Here, the problem arises
when one wishes to include fermions. For the minimal coupling
one has to study correlation functions of the gauge field
instead of those of the field strength. This leads to such well
known problems as indefinite metric, solving constraints and so forth.
Moreover, there is another problem which we would like to mention here.
Within the Wightman framework
the quantized version of the
gauge field $u_\mu$ is an operator valued distribution.
On the other hand, the classical
concept of a gauge field leads to the notion of a connection in a
vector or principal bundle over some manifold, which suggests
considering as gauge invariant objects Wilson loop variables
\begin{eqnarray*}
w_\gamma[u]={\rm tr}[\8{Pexp}(\smallint_\gamma u)]
\end{eqnarray*}
and string-like objects
\begin{eqnarray*}
s_\gamma[u,\psi]=\bar\psi(r(\gamma))\8{Pexp}(\smallint_\gamma u)\psi(s(\gamma))
\end{eqnarray*}
where $\psi$ is a smooth section in an appropriate vector bundle and
$\gamma$ is an oriented path which starts at $s(\gamma)$
and ends at $r(\gamma)$.
Unfortunately, expressing
$w_\gamma[u]$ in terms of Wightman fields leads to difficulties.
From a perturbation theoretical point of view one expects
that the distribution $u$ is too singular in order to be
restricted to a one-dimensional sub-manifold.
To motivate our considerations, we shall discuss here, heuristically,
an alternative proposal which might be related
to a quantized version of a gauge theory.
It is concerned with the direct quantization of regularized
Wilson loops
\begin{eqnarray*}
w_\gamma(f)[u]=\int\8dx \ \ w_{\gamma+x}[u] \ f(x) \ \ .
\end{eqnarray*}
Here we allow $f\in E'(\7R^d)$ to be a
distribution with compact support which has the form
\begin{eqnarray*}
f(x)= f_\Sgm(x)\delta_\Sgm(x)
\end{eqnarray*}
where $\Sgm$ is a $d-1$-dimensional hyper-plane and
$f_\Sgm\in C^\infty_0(\Sgm)$ and $\delta_\Sgm$ is the natural measure
on $\Sgm$. We claim that such a type
of regularization is necessary since in
$d$-dimensional quantum field theories there are no
bounded operators which are localized within $d-2$-dimensional
hyper-planes \cite{DrieSum86}.
Such a point of view has been
discussed by J. Fr\"ohlich \cite{Froh79}, E. Seiler \cite{Seil82}
or more recently by A. Ashtekar and J. Lewandowski \cite{AshLew}.
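For readers who prefer a concrete picture, the following is a minimal numerical sketch of the objects $w_\gamma[u]$: the path-ordered exponential along a polygonal loop is approximated by an ordered product of matrix exponentials of the (Lie-algebra valued) connection. The $\mathfrak{su}(2)$-valued connection chosen at the end is purely illustrative and is not part of the constructions of this paper.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def wilson_loop(u, loop, eps=1e-2):
    # approximate tr Pexp(int_gamma u) for a polygonal loop;
    # u(x) returns one anti-hermitian matrix per direction,
    # loop is an array of vertices with loop[0] == loop[-1]
    dim = u(loop[0])[0].shape[0]
    hol = np.eye(dim, dtype=complex)
    for a, b in zip(loop[:-1], loop[1:]):
        n = max(int(np.linalg.norm(b - a) / eps), 1)
        for k in range(n):
            x = a + (k + 0.5) / n * (b - a)   # midpoint of sub-segment
            dx = (b - a) / n
            A = sum(d * m for d, m in zip(dx, u(x)))
            hol = hol @ expm(A)               # path ordering
    return np.trace(hol).real

t3 = 0.5j * np.array([[1.0, 0.0], [0.0, -1.0]])  # su(2) generator
u = lambda x: [np.sin(x[1]) * t3, np.cos(x[0]) * t3]
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]], float)
print(wilson_loop(u, square))
\end{verbatim}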
In order to describe a quantum gauge theory in terms of regularized
Wilson loop variables one wishes to construct a
function $\gamma\mapsto \1w_\gamma$ which assigns to each
path $\gamma$ an operator valued distribution
$\1w_\gamma:f\mapsto \1w_\gamma(f)$, where the $\1w_\gamma(f)$
are represented by operators on some Hilbert space $\2H$.
Heuristically, one expects that the operators $\1w_\gamma(f)$
are unbounded \cite{Pol79}. One requires the following:
\begin{enumerate}
\itno 1
The operators $\1w_\gamma(f)$ are self-adjoint for real-valued
test functions, with a joint core $\2D\subset \2H$.
\itno 2
$\1w$ should transform
covariantly under the action of the Poincar\' e group, i.e.
\begin{eqnarray*}
\1w_{g\gamma}(f\circ g^{-1})= U(g)\1w_\gamma(f) U(g)^* \ \ ; \ \ g\in{\8P^\uparrow_+}\ \ ,
\end{eqnarray*}
where $U$ is a unitary strongly continuous representation of
the Poincar\' e group on $\2H$ and the spectrum of the
translations is contained in the closed forward light cone $\bar V_+$.
\itno 3
Moreover, the operators $\1w_\gamma(f)$ should satisfy the locality
requirement, i.e.
\begin{eqnarray*}
[\1E_{(\gamma,f)}(\Delta), \1E_{(\gamma_1,f_1)}(\Delta_1)]=0
\end{eqnarray*}
if the (convex hulls of the) regions $\gamma+{\rm supp}(f)$
and $\gamma_1+{\rm supp}(f_1)$ are space-like separated. Here
\begin{eqnarray*}
\1w_\gamma(f)&=&\int\8d \1E_{(\gamma,f)}(\lambda) \ \lambda
\end{eqnarray*}
is the spectral resolution of $\1w_\gamma(f)$.
\end{enumerate}
According to \cite{Froh79,Seil82},
it has been suggested to reconstruct Wilson loop operators
$\1w_\gamma$ from euclidean correlation functions of loops
\begin{eqnarray*}
\gamma_1,\cdots, \gamma_n \ \longmapsto \ \6S_n(\gamma_1,\cdots, \gamma_n)
\end{eqnarray*}
which satisfy axioms analogous to those of the usual
Schwinger distributions, namely reflexion
positivity and symmetry. However,
within the analysis of J. Fr\"ohlich, K. Osterwalder and
E. Seiler \cite{FrohOstSeil,Seil82},
the correlation function may have singularities
in those points where two loops intersect and
some additional technical conditions are assumed which are
related to the behavior of these singularities. It has been
proven (compare also \cite{Froh79}) that one can reconstruct from the
euclidean correlation functions $\6S_n$
an operator valued function $\gamma\mapsto \1w_\gamma$
together with a unitary strongly continuous representation of
${\8P^\uparrow_+}$ on $\2H$ \cite{FrohOstSeil}. Here $\1w_\gamma$ is only defined for
loops which are contained in some space-like plane and it
fulfills the covariance condition {\it (2)}.
E. Seiler \cite{Seil82} has also discussed an idea how to
prove locality {\it (3)}. We shall come back to this point later.
For our purpose, we look from an algebraic point of view
at the problem of reconstructing
a quantum field theory from euclidean data. Let us consider
functions
\begin{eqnarray*}
a:\2A_E\ni u \ \longmapsto \
a^\circ \biggl( \int \8d x \ w_{\gamma_j+x}[u] \ f_j(x); j=1,\cdots, n\biggr)
\end{eqnarray*}
on the space of smooth connections $\2A_E$ in a vector bundle $E$
over the euclidean space $\7R^d$ where $a^\circ$ is a bounded function
on $\7R^n$. These functions are bounded
and thus they generate an abelian C*-algebra $A$ with C*-norm
\begin{eqnarray*}
\|a\|=\sup_{u\in\2A_E}|a(u)| \ \ .
\end{eqnarray*}
We assign to a given
bounded region $\2U\subset\7R^d$ the C*-sub-algebra $A(\2U)\subset A$
which is generated by all functions of Wilson loop variables $w_\gamma(f)$
with $\gamma+{\rm supp}(f)\subset\2U$. The euclidean group $\8E(d)$ acts
naturally by automorphisms on $A$, namely the prescription
\begin{eqnarray*}
\alpha_g:a \ \longmapsto \ a\circ g^{-1}:u \ \longmapsto \ a(u\circ g)
\end{eqnarray*}
defines for each $g\in\8E(d)$ an appropriate automorphism of $A$, which,
of course, acts covariantly on the isotonous net
\begin{eqnarray*}
\underline A:\2U \ \longmapsto \ A(\2U) \ \ ,
\end{eqnarray*}
namely we have: $\alpha_gA(\2U)=A(g\2U)$.
Motivated by the work of E. Seiler, J. Fr\"ohlich
and K. Osterwalder \cite{Seil82,Froh79,FrohOstSeil}
as well as that of A. Ashtekar and J. Lewandowski \cite{AshLew},
we propose to
consider reflexion positive functionals on $A$, i.e. linear functionals
$\eta\in A^*$ which fulfill
conditions, corresponding to the axioms {\it (E0)}-{\it (E2)} above.
These functionals can be interpreted as the analogue of the
functional integral.
Note that if $\eta$ is a state, then $\eta$ is nothing but a measure
on the spectrum $X$ of the C*-algebra $A$.
The advantage of this point of view is based on the fact that
abelian C*-algebras are rather simple objects, namely
algebras of continuous functions on a (locally) compact
Hausdorff space.
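To make this explicit we recall a standard fact (added here for
completeness): by the Gelfand isomorphism one has $A\cong C(X)$, and by
the Riesz representation theorem a state $\eta$ corresponds to a unique
regular Borel probability measure $\mu_\eta$ on $X$ with
\begin{eqnarray*}
<\eta,a>&=&\int_X \hat a \ \8d\mu_\eta
\end{eqnarray*}
where $\hat a\in C(X)$ denotes the Gelfand transform of $a\in A$.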
\paragraph{Overview.}
In order to make the comprehension of the subsequent sections
easier, we shall give an overview of the content of our paper
by stating the main ideas and results. This paragraph is also addressed
to quick readers who are not so much interested in
technical details.
Motivated by the considerations above, we make
in Section \ref{axioms} a suggestion for axioms which an
euclidean field theory should satisfy.
We start from an isotonous net
\begin{eqnarray*}
\underline A:\2U \ \longmapsto \ A(\2U)\subset A
\end{eqnarray*}
of C*-algebras on which
the euclidean group $\8E(d)$ acts covariantly by
automorphisms $\alpha:\8E(d)\to{\rm Aut} A$, like in the example
of Wilson loop variables given in the previous paragraph.
However, we assume a somewhat weaker condition than commutativity
for $A$. For our considerations we only have to assume
that two operators commute if they are localized in
disjoint regions.
In addition to that, we consider a
reflexion positive functional $\eta$ on $A$.
We shall call the triple $(\underline A,\alpha,\eta)$, consisting of
the net $\underline A$ of C*-algebras, the action of the
euclidean group $\alpha$, and the reflexion positive functional,
an euclidean field.
We show in Section \ref{efthqfth} how to construct from
a given euclidean field a quantum field theory in a
particular vacuum representation.
In order to point out the relation between the euclidean
field $(\underline A,\alpha,\eta)$ and the Minkowskian world,
we briefly describe the construction of a
Hilbert space $\2H$ on which the reconstructed physical
observables are represented.
According to our axioms, the map
\begin{eqnarray*}
a\otimes b \ \longmapsto \ <\eta,\iota_e(a^*)b>
\end{eqnarray*}
is a positive semidefinite sesqui-linear form on the algebra $A(e)$
of operators which are localized in $e\7R_++\Sgm_e$ where
$\Sgm_e$ is the hyper-plane orthogonal to the
euclidean time direction $e\in S^{d-1}$. Here $\iota_e$ is
the automorphism on $A$ which corresponds to the reflexion
$e\mapsto -e$. By dividing out the null-space and taking the closure
we obtain a Hilbert space $\2H$. The construction
of the observables, which turn out to be bounded operators on $\2H$,
is based on two main steps.
\subparagraph{\it Step 1:}
In Section \ref{reconpoin}, we reconstruct a
unitary strongly continuous representation of the Poincar\' e group
$U$ on $\2H$. To carry through this analysis, it is not
necessary to introduce new ideas. The construction is essentially analogous
to the one which has been presented in \cite{FrohOstSeil}
(compare also \cite{Seil82}).
In order to keep the present paper self contained,
we feel obliged to discuss this point within our context in more detail.
\subparagraph{\it Step 2:}
We discuss in Section \ref{reconlocobs}
the construction of the physical observables.
At the moment this can only be done if we assume that
the algebra $A$ contains operators which are localized
at sharp times, i.e. we require that the algebra $A(e)\cap A(-e)$
is larger than $\7C\11$. We shall abbreviate this condition by
(TZ) which stands for {\em time-zero}. For the fix-point algebra $B(e)$ of
$\iota_e$ in $A(e)\cap A(-e)$ we obtain a *-representation $\pi$ on
$\2H$, where an operator $\pi(b)$, $b\in B(e)$, is given by
the prescription
\begin{eqnarray*}
\pi(b):\8p(a) \ \longmapsto \ \8p(ba) \ \ .
\end{eqnarray*}
Here $\8p$ is the
quotient map, identifying an operator $a\in A(e)$ with its
equivalence class $\8p(a)$ in $\2H$. Now, we consider
for a given Poincar\' e transform $g\in{\8P^\uparrow_+}$ and a given
time-zero operator $b\in B(e)$ the following bounded
operator:
\begin{eqnarray*}
\Phi(g,b)&:=&U(g)\pi(b)U(g)^* \ \ .
\end{eqnarray*}
We shall say that $\Phi(g,b)$ is localized in a
region $\2O$ in Minkowski space if
$b$ is localized in $\2U\subset \Sgm_e$ and
the transformed region $g\2U$ is contained in
the double cone $\2O$.
Let us denote the C*-algebra which is
generated by all operators $\Phi(g,b)$, which are localized in
$\2O$, by $\6A(\2O)$.
Hence we get an isotonous net of C*-algebras
\begin{eqnarray*}
\underline{\6A}:\2O \ \longmapsto \ \6A(\2O)
\end{eqnarray*}
indexed by double cones in Minkowski space on which
the Poincar\' e group acts covariantly by the automorphisms
$\alpha_g:={\rm Ad}(U(g))$, $g\in{\8P^\uparrow_+}$.
\subparagraph{\it The main result:}
\begin{enumerate}
\itno 1
The reconstructed isotonous net $\underline{\6A}$ is a
Haag-Kastler net: locality holds,
i.e. if $\2O,\2O_1$ are two double cones such that
$\2O\subset\2O_1'$ then $[\6A(\2O),\6A(\2O_1)]=\{0\}$.
\itno 2
Furthermore,
the ${\8P^\uparrow_+}$-invariant vector $\Om=\8p(\11)$ induces
a vacuum state
\begin{eqnarray*}
\omega:a \ \longmapsto \ <\omega,a>&:=&\<\Om,a\Om\> \ \ .
\end{eqnarray*}
\end{enumerate}
The non trivial aspect of this statement is the proof of locality.
As already mentioned above, E. Seiler has discussed an
idea of how to prove locality for a net of Wilson loops $\1w_\gamma$.
This idea does not rely on the fact that one considers loops.
It can also be used for general euclidean fields.
However, we have not found a complete proof in the
literature and therefore, this being also one purpose of our paper,
we shall present a complete proof
here (Section \ref{reconlocobs}).
The proof is based on the analytic properties of the functions
\begin{eqnarray*}
F(z_1,z_2)&:=&\<\psi,\Phi_{X_1}(z_1,b_1)\Phi_{X_2}(z_2,b_2)\hat\psi\>
\vspace{0.2cm} \\\vs
\hat F(z_1,z_2)&:=&\<\psi,\Phi_{X_2}(z_2,b_2)\Phi_{X_1}(z_1,b_1)\hat\psi\>
\ \ .
\end{eqnarray*}
We have introduced the operators
\begin{eqnarray*}
\Phi_X(z,b):=U(\exp(zX))\pi(b)U(\exp(-zX))
\end{eqnarray*}
where $b\in B(e)$ is a time-zero operator and $X$ is a boost generator
or $\8iH$ where $H$ is the hamiltonian with respect to the
time direction $e$.
Roughly, the argument for the proof of locality goes as follows:
Suppose $b_j$ is localized in $\2U_j\subset\Sgm_e$.
We shall show that the regions $G$ ($\hat G$) in which $F$ ($\hat F$)
are holomorphic are
\begin{enumerate}
\itno a connected and they contain pure imaginary points $(\8is_1,\8is_2)$ and
\itno b the intersection $G\cap\hat G$ contains all those points $(t_1,t_2)$
for which $\2O_1=\exp(t_1X_1)\2U_1$ and $\2O_2=\exp(t_2X_2)\2U_2$ are
space-like separated.
\end{enumerate}
But $F$ and $\hat F$ coincide in the pure imaginary points
since operators which are localized in disjoint regions commute.
This implies
\begin{eqnarray*}
F|_{G\cap\hat G}=\hat F|_{G\cap\hat G}
\end{eqnarray*}
and thus by {\it (b)} we conclude
\begin{eqnarray*}
\<\psi,[\Phi_{X_1}(t_1,b_1),\Phi_{X_2}(t_2,b_2)]\hat\psi\>&=&0
\end{eqnarray*}
if $\Phi_{X_1}(t_1,b_1)$ and $\Phi_{X_2}(t_2,b_2)$ are localized in
space-like separated regions. We claim that the regions
$G$ and $\hat G$ depend on the choice of the vector
$\hat\psi$. However, one can find a dense sub-space $D$
such that $F$ ($\hat F$) are holomorphic in
$G$ ($\hat G$) for all $\hat\psi\in D$. Thus the
commutator $[\Phi_{X_1}(t_1,b_1),\Phi_{X_2}(t_2,b_2)]$ vanishes
on a dense sub-space and, since $\Phi_X(t,b)$ is bounded
for real points $t\in\7R$, the commutator vanishes on $\2H$.
In order to get analyticity of $F$ within a region $G$ which is large enough,
we prove in the appendix a statement which is the analogue
of the famous Bargmann-Hall-Wightman theorem
\cite{HallWght57,Jost65,StrWgh89}.
In Section \ref{strucasp}, we discuss some miscellaneous consequences
of our result. Note, that for the application of our reconstruction
scheme it was crucial to assume that there are
non-trivial euclidean operators which can be localized at sharp times.
We shall give some remarks on the condition (TZ) in
Section \ref{tz}.
Our considerations can easily be generalized to
the case in which there are also fermionic operators present, or even
to super-symmetric theories.
Here one starts with an isotonous net
$\underline F:\2U\mapsto F(\2U)$ of $\7Z_2$-graded C*-algebras
which fulfills the time-zero condition (TZ), i.e. the
fix-point algebra $B(e)$ of $\iota_e$ in $F(e)\cap F(-e)$
is larger than $\7C\11$. The euclidean group acts covariantly by automorphisms
on $F$ and we require that the graded commutator
$[a,b]_g=0$ vanishes if $a$ and $b$ are localized in
disjoint regions.
Let $\eta$ be a reflexion positive functional, then, by
replacing the commutator by the graded commutator,
we conclude that the operators
\begin{eqnarray*}
\Phi(g,b) = U(g)\pi(b)U(g)^* \ \ \mbox{ ; $b\in B(e)$ and $g\in{\8P^\uparrow_+}$}
\end{eqnarray*}
generate a fermionic net $\underline{\6F}$ of C*-algebras.
This can really be done analogously to the construction of
the Haag-Kastler net $\underline{\6A}$, described above.
Finally, we close our paper by the Section \ref{conout}
{\em conclusion and outlook}.
\section{Axioms for euclidean field theories}
\label{axioms}
In the present section we make a suggestion for axioms which an euclidean
field theory should satisfy.
In the first step, we
introduce the notion of an {\em euclidean net of C*-algebras}.
Within our interpretation
this notion is related to {\em physical observations}.
\begin{Def}\1: \em
A $d$-dimensional {\em euclidean net} of C*-algebras is given by a
pair $(\underline A,\alpha)$ which consists of an isotonous net
\begin{eqnarray*}
\underline A:\7R^d\supset\2U \ \longmapsto \ A(\2U)
\end{eqnarray*}
of C*-algebras, indexed by bounded subsets
in $\7R^d$ and
a group homomorphism $\alpha\in{\rm Hom}(\8E(d),{\rm Aut}(A))$.\footnote{We denote
the C*-inductive limit of $\underline A$ by $A$. For an unbounded region $\Sgm$
the algebra $A(\Sgm)$ denotes the C*-sub-algebra which is generated by
the algebras $A(\2U)$, $\2U\subset \Sgm$.}
We require that the pair fulfills the conditions:
\begin{enumerate}
\itno 1 Locality: $\2U_1\cap\2U_2=\emptyset$ implies
$[A(\2U_1),A(\2U_2)]=\{0\}$.
\itno 2 Euclidean covariance: $\alpha_gA(\2U)=A(g\2U)$ for each $\2U$.
\end{enumerate}
\end{Def}
For an euclidean direction
$e\in S^{d-1}$ we consider the reflection $\theta_e:e\mapsto -e$
and the sub-group $\8E_e(d-1)$ which commutes with $\theta_e$.
Moreover, we set $\iota_e:=\alpha_{\theta_e}$.
As in the introduction, we denote by $A(e)$ the C*-algebra
$A(e\7R_++\Sgm_e)$ where $\Sgm_e$ is the hyper-plane orthogonal to $e$.
Now we formulate a selection criterion for linear functionals
on $A$ which corresponds to the selection criterion for
physical states. We shall see that the class of functionals, which is introduced
below, is the euclidean analogue of the set of vacuum states.
\begin{Def}\1: \em
We define $S(A,\alpha)$ to be the set of all continuous linear functionals $\eta$
on $A$ which fulfill the following conditions:
\begin{enumerate}
\itno 1
$e$-reflexion positivity: There exists a euclidean direction $e\in S^{d-1}$
such that
\begin{eqnarray*}
\forall a\in A(e): &<\eta,\iota_e(a^*)a>& \geq 0 \ \ .
\end{eqnarray*}
\itno 2
Unit preserving: $<\eta,\11>=1$.
\itno 3
Invariance: $\forall g\in \8E(d): \eta\circ\alpha_g=\eta$.
\end{enumerate}
\end{Def}
\paragraph{Remark:}
We easily observe that the definition of
$S(A,\alpha)$ is independent of the chosen direction $e$.
In what follows, we call the functionals in $S(A,\alpha)$
{\em reflexion positive}.
\vskip 0.5cm
For our purpose it is necessary to require a further condition for
the functionals under consideration.
\begin{Def}\1: \em
We denote by $S_R(A,\alpha)$ the set of all
reflexion positive functionals $\eta$ of $A$ for which the
map
\begin{eqnarray*}
\8E(d)\ni g \ \longmapsto \ <\eta,a(\alpha_gb)c>
\end{eqnarray*}
is a continuous function for each $a,b,c\in A$.
These functionals are called {\em regular reflexion positive}.
\end{Def}
We shall call a triple $(\underline A,\alpha,\eta)$ which consists of
an euclidean net and a regular reflexion positive
functional $\eta$ an {\em euclidean field}.
As already mentioned in the introduction, we have to assume that the
operators of the euclidean net can be localized at a sharp
$d-1$-dimensional hyper-plane. For an euclidean time direction $e$
we denote by $B(e)$ the fix-point algebra of
$A(e)\cap A(-e)$ under the reflexion $\iota_e$.
\paragraph{Condition (TZ):}
A $d$-dimensional euclidean net of C*-algebras $(\underline A,\alpha)$
fulfills the time-zero condition (TZ) iff $B(e)$ is a non-trivial
C*-algebra, i.e. it is not $\7C\11$.
We call the algebras $B(e)$ time-zero algebras.
For a region $\2U\subset \Sgm_e$, we denote by $B(e,\2U)$ the
sub-algebra which is generated by operators localized
in $\2U$.
\vskip 0.5cm
\paragraph{Remark:}
Let $(\underline A,\alpha)$ be a $d$-dimensional euclidean net of C*-algebras
which fulfills the condition (TZ). Then
the net
\begin{eqnarray*}
\underline B^e:\Sgm_e\supset \2U \ \longmapsto \ B(e,\2U)
\end{eqnarray*}
together with the group homomorphism
$\beta^e:=\alpha|_{E_e(d-1)}$ is, of course,
a $d-1$-dimensional euclidean net of C*-algebras.
\section{From euclidean field theory to quantum field theory}
\label{efthqfth}
In the present section, we discuss how to pass from a
euclidean field $(\underline A,\alpha,\eta)$ to a quantum
field theory in a particular vacuum representation.
In the first step we construct from a given euclidean field
$(\underline A,\alpha,\eta)$ a unitary strongly continuous
representation of the Poincar\' e group (Section \ref{reconpoin}).
In the second step we have to require that condition (TZ) is satisfied
in order to show that a concrete Haag-Kastler net can be reconstructed from
the elements of the time-zero algebras and the representation of
the Poincar\' e group (Section \ref{reconlocobs}).
\subsection{Reconstruction of the Poincar\'e group}
\label{reconpoin}
For $e\in S^{d-1}$ we introduce a positive semidefinite sesqui-linear form on
$A(e)$ as follows:
\begin{eqnarray*}
a\otimes b \ \longmapsto \ <\eta,\iota_e(a^*)b> \ \ .
\end{eqnarray*}
Its null space is given by
\begin{eqnarray*}
N(e,\eta):=\{a\in A(e)|\forall b\in A(e):<\eta,\iota_e(a^*)b>=0 \}
\end{eqnarray*}
and we obtain a pre-Hilbert space
\begin{eqnarray*}
D(e,\eta):=A(e)/N(e,\eta)
\end{eqnarray*}
The corresponding quotient map is denoted by
\begin{eqnarray*}
\8p_{(e,\eta)}:A(e) \ \longrightarrow \ D(e,\eta)
\end{eqnarray*}
and its closure $\2H(e,\eta)$ is a Hilbert space with scalar product
\begin{eqnarray*}
\<\8p_{(e,\eta)}(a),\8p_{(e,\eta)}(b)\>&:=&<\eta,\iota_e(a^*)b> \ \ .
\end{eqnarray*}
\begin{Lem}\1: \label{lem1}
The map
\begin{eqnarray*}
T_{(e,\eta)}:s\in\7R_+ \ \longmapsto \ T_{(e,\eta)}(s):
\8p_{(e,\eta)}(a) { \ \longmapsto \ } \8p_{(e,\eta)}(\alpha_{(1,se)}a)
\end{eqnarray*}
is a strongly continuous semi-group of contractions with a positive
generator $H_{(e,\eta)}\geq 0$.
\end{Lem}
\paragraph*{\it Proof.}
Since
\begin{eqnarray*}
<\eta,\iota_e(b^*)a> \ \ = \ \ 0
\end{eqnarray*}
for each $b\in A(e)$ implies
\begin{eqnarray*}
<\eta,\iota_e(b^*)\alpha_{se}a> \ \ = \ \ <\eta,\iota_e(\alpha_{se}b^*)a>=0
\end{eqnarray*}
for each
$b\in A(e)$, we conclude that
\begin{eqnarray*}
T_{(e,\eta)}(s)\8p_{(e,\eta)}(a)=0
\end{eqnarray*}
for $a\in N(e,\eta)$. Hence $T_{(e,\eta)}$
is well defined.
The fact that $T_{(e,\eta)}$ is a semi-group of contractions
follows by standard arguments,
i.e. a multiple application of the Cauchy-Schwarz inequality.
Finally, the strong continuity follows from the regularity
of $\eta$.
$\Box$\skb
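For the reader's convenience we spell out the contraction estimate
(a standard argument which we add here for completeness).
For $\psi=\8p_{(e,\eta)}(a)$ the reflexion positivity and the invariance
of $\eta$ yield
\begin{eqnarray*}
\|T_{(e,\eta)}(s)\psi\|^2&=&<\eta,\iota_e(a^*)\alpha_{(1,2se)}a>
\ \ = \ \ \<\psi,T_{(e,\eta)}(2s)\psi\>
\vspace{0.2cm} \\\vs
&\leq&\|\psi\| \ \|T_{(e,\eta)}(2s)\psi\| \ \ .
\end{eqnarray*}
Iterating this inequality and using the uniform bound
$\|T_{(e,\eta)}(t)\psi\|^2\leq\|\eta\| \ \|a\|^2$, which follows from the
continuity of $\eta$, one obtains
$\|T_{(e,\eta)}(s)\psi\|\leq
\|\psi\|^{1-2^{-n}}(\|\eta\|^{1/2}\|a\|)^{2^{-n}}
\ \longrightarrow \ \|\psi\|$.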
We consider the set $\8{Con}(e)$ of all cones $\Gam$ (in euclidean space)
of the form $\Gam=\7R_+(B_d(r)+e)+\eps e$
where $B_d(r)$ denotes the ball in $\7R^d$ with center $x=0$ and radius $r$.
In addition, we define the following subspace of $\2H(e,\eta)$
\begin{eqnarray*}
D(\Gam;\eta):=\8p_{(e,\eta)}A(\Gam) \ \ .
\end{eqnarray*}
\begin{Lem}\1: \label{lem2}
For each cone $\Gam\in\8{Con}(e)$,
the vector space $D(\Gam,\eta)$ is a dense subspace of $\2H(e,\eta)$.
\end{Lem}
\paragraph*{\it Proof.}
Lemma \ref{lem1} states that
$T_{(e,\eta)}$
is a semi-group of contractions with a positive generator. Furthermore,
$D(\Gam,\eta)$ is mapped into itself by $T_{(e,\eta)}(s)$.
Since for each operator $a\in A(e)$ there exists an $s>0$
such that
\begin{eqnarray*}
T_{(e,\eta)}(s)\8p_{(e,\eta)}(a)\in D(\Gam,\eta) \ \ ,
\end{eqnarray*}
we can apply a Reeh-Schlieder argument in order to prove that $D(\Gam,\eta)$
is a dense subspace of $\2H(e,\eta)$.
$\Box$\skb
\begin{Lem}\1: \label{lem3}
Let $\2V\subset\8E(d)$ be a small neighborhood of
the unit element $1\in\8E(d)$ and let $\Gam\in\8{Con}(e)$ be a cone
such that $\2V\Gam\subset e\7R_++\Sgm_e$.
Then $a\in A(\Gam)\cap N(e,\eta)$ implies $\alpha_ga\in N(e,\eta)$
for each $g\in \2V$.
\end{Lem}
\paragraph*{\it Proof.}
We have $<\eta,\iota_e(b^*)a> \ \ = \ \ 0$
for each $b\in A(e)$ and hence
$<\eta, \iota_e(b^*)\alpha_ga>=<\eta,\iota_e(\alpha_{\theta_e g}b^*)a>=0$.
Since we may choose $\2V$ to be $\theta_e$-invariant, we have
$\alpha_{\theta_e g}b^*\in A(e)$ and the result follows by
Lemma \ref{lem2}.
$\Box$\skb
\begin{The}\1: \label{thepoin}
Let $\eta\in S_R(A,\alpha)$ be a regular reflexion positive functional.
Then for each
$e\in S^{d-1}$ there exists a unitary strongly continuous
representation $U_{(e,\eta)}$
of the $d$-dimensional Poincar\'e group ${\8P^\uparrow_+}$
\begin{eqnarray*}
U_{(e,\eta)}\in{\rm Hom}[{\8P^\uparrow_+}, U(\2H(e,\eta))]
\end{eqnarray*}
such that the spectrum of the translations $x\to U_{(e,\eta)}(1,x)$
is contained in the closed forward light cone $\bar V_+$.
\end{The}
\paragraph*{\it Proof.}
The theorem
can be proven by using the proof of \cite[Theorem 8.10]{Seil82}.
We briefly illustrate the construction of the representation
$U_{(e,\eta)}$. Let $\2V\subset\8E(d)$ be a small neighborhood of
the unit element $1\in\8E(d)$. Then there exists a
cone $\Gam\in\8{Con}(e)$ such that $\2V\Gam\subset e\7R_++\Sgm_e$.
According to Lemma \ref{lem3} we may define for each
$g\in \2V$ the operator
\begin{eqnarray*}
V_{(e,\eta)}(g)\8p_{(e,\eta)}(a):=\8p_{(e,\eta)}(\alpha_ga)
\end{eqnarray*}
with domain $D(\Gam,\eta)$.
If $g$ belongs to the group $\8E_e(d-1)$ then
we conclude that $V_{(e,\eta)}(g)=U_{(e,\eta)}(g)$ is a unitary operator.
Let $\6e(d)$ be the Lie algebra of $\8E(d)$
and let $\6e_e(d-1)\subset\6e(d)$ be the
sub-Lie algebra of $\8E_e(d-1)\subset \8E(d)$. We decompose
$\6e(d)$ as follows:
\begin{eqnarray*}
\6e(d)=\6e_e(d-1)\oplus\6m_e(d-1)
\end{eqnarray*}
and we obtain another real Lie algebra:
\begin{eqnarray*}
\6p(d):=\6e_e(d-1)\oplus\8i\6m_e(d-1)
\end{eqnarray*}
which is the Lie algebra of the Poincar\'e group ${\8P^\uparrow_+}$.
For each $X\in\6m_e(d-1)$ there
exists a self adjoint operator $L_{(e,\eta)}(X)$ where $D(\Gam,\eta)$
consists of analytic vectors for $L_{(e,\eta)}(X)$ and
for each $s\in\7R$ with $\exp(sX)\in \2V$ we have:
\begin{eqnarray*}
V_{(e,\eta)}(\exp(sX))=\exp(sL_{(e,\eta)}(X)) \ \ .
\end{eqnarray*}
According to \cite[Theorem 8.10]{Seil82} we conclude that
the unitary operators
\begin{eqnarray*}
U_{(e,\eta)}(\exp(\8isX)):=\exp(\8isL_{(e,\eta)}(X)) &;& X\in\6m_e(d-1)
\vspace{0.2cm} \\\vs
U_{(e,\eta)}(g):=V_{(e,\eta)}(g) &;& g\in\8E_e(d-1)
\end{eqnarray*}
induce a unitary strongly continuous representation of the
Poincar\'e group ${\8P^\uparrow_+}$. The positivity of
the energy follows from the positivity of the transfer matrix
$T_{(e,\eta)}(1)$.
$\Box$\skb
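To illustrate the decomposition used above (an example added for the
reader's orientation, not part of the original argument): $\6m_e(d-1)$
contains the generators of euclidean rotations in the planes spanned
by $e$ and a spatial direction, as well as the generator of
translations in $e$-direction. Writing $M_{ej}$ for such a rotation
generator, the continued unitaries
\begin{eqnarray*}
U_{(e,\eta)}(\exp(\8isM_{ej}))&=&\exp(\8isL_{(e,\eta)}(M_{ej}))
\end{eqnarray*}
are the Lorentz boosts in the $j$-direction, whereas the continuation
of the euclidean time translations yields the unitary time evolution
whose self adjoint generator is, up to sign conventions, the
hamiltonian $H_{(e,\eta)}$ of Lemma \ref{lem1}.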
\paragraph{Remark:}
The vector $\Om_{(e,\eta)}:=\8p_{(e,\eta)}(\11)$ is invariant under
the action of the Poincar\'e group.
\subsection{Reconstruction of the net of local observables}
\label{reconlocobs}
In what follows,
we consider a euclidean net of C*-algebras $(\underline A,\alpha)$ which
fulfills the condition (TZ).
\begin{Pro}\1:
Let $\eta$ be a regular reflexion positive functional
on $A$. Then the map
\begin{eqnarray*}
\pi_{(e,\eta)}:B(e)\ni b \ \longmapsto \ \pi_{(e,\eta)}(b):
\8p_{(e,\eta)}(a) \ \longmapsto \ \8p_{(e,\eta)}(ba)
\end{eqnarray*}
is a well defined *-representation of $B(e)$.
\end{Pro}
\paragraph*{\it Proof.}
For each $a\in N(e,\eta)$ and for each $c\in A(e)$ we have
\begin{eqnarray*}
<\eta,\iota_e(c^*)ba> \ \ = \ \ <\eta,\iota_e(c^*b)a> \ \ = \ \ 0
\end{eqnarray*}
and hence $\pi_{(e,\eta)}(b)$ is a well defined linear and bounded operator.
By construction it is clear
that $\pi_{(e,\eta)}$ is a *-homomorphism.
$\Box$\skb
\paragraph{Remark:}
The restriction $\eta|_{B(e)}$
is a state of $B(e)$. Of course, the GNS-representation of
$\eta|_{B(e)}$ is a sub-representation of $\pi_{(e,\eta)}$.
\begin{Def}\1: \em
\begin{enumerate}
\itno 1
Let $\2O$ be a double cone in $\7R^d$. Then we define
$\6A_{(e,\eta)}(\2O)$ to be the C*-algebra on
$\2H(e,\eta)$ which is generated by operators
\begin{eqnarray*}
\Phi_{(e,\eta)}(g,b):=U_{(e,\eta)}(g)\pi_{(e,\eta)}(b)U_{(e,\eta)}(g)^*
\end{eqnarray*}
with $b\in B(e,\2U)$, $g\in{\8P^\uparrow_+}$ and $g\2U\subset\2O$.
\itno 2
We denote by $\underline{\6A}_{(e,\eta)}$ the net of C*-algebras which is
given by the prescription
\begin{eqnarray*}
\underline{\6A}_{(e,\eta)}:\2O \ \longmapsto \ \6A_{(e,\eta)}(\2O) \ \ .
\end{eqnarray*}
\end{enumerate}
\end{Def}
\begin{The}\1: \label{thehk}
The pair $(\underline{\6A}_{(e,\eta)},{\rm Ad}(U_{(e,\eta)}))$
is a ${\8P^\uparrow_+}$-covariant Haag-Kastler net which is represented
on $\2H(e,\eta)$.
\end{The}
\paragraph{Remark:}
\begin{enumerate}
\itno 1
Note that
\begin{eqnarray*}
\omega_{(e,\eta)}:\6A_{(e,\eta)}\ni a \ \longmapsto \ \<\Om_{(e,\eta)},a\Om_{(e,\eta)}\>
\end{eqnarray*}
is a vacuum state since $U_{(e,\eta)}$ is a positive energy
representation of the Poincar\'e group.
However, in general $\omega_{(e,\eta)}$ is not a pure state.
\itno 2
For the local algebra $\6A_{(e,\eta)}(\2O)$, we do not
take the von Neumann algebra generated by the corresponding
operators $\Phi_{(e,\eta)}(g,b)$ since this might lead to problems
with locality.
\end{enumerate}
\paragraph{Preparation of the proof of Theorem \ref{thehk}:}
For a Lie algebra element $X\in\8i\6m_e(d-1)$ and a complex
number $z\in\7C$ we define a linear (unbounded) operator on $\2H(e,\eta)$ by
\begin{eqnarray*}
\Phi_{(e,\eta,X)}(z,b):=
U_{(e,\eta)}(\exp(zX))\pi_{(e,\eta)}(b)U_{(e,\eta)}(\exp(-zX))
\end{eqnarray*}
on a dense domain $D(\Gam,\eta)$ where $\Gam\in\8{Con}(e)$ is
an appropriate cone.
In order to formulate our next result, we define for two generators
$X_1,X_2\in\8i\6m_e(d-1)$, for an interval $I$, for a
neighborhood $\2V\supset{\8L^\uparrow_+}$ of the unit element in $\8P_+(\7C)$, and for
two subsets $\2U_j\subset\Sgm_e$, $j=1,2$, the region
\begin{eqnarray*}
G(\2V;X_1,X_2;\2U_1,\2U_2;I)&:=&\bigcup_{g\in\2V}
\biggl\{(z_1,z_2)\in(\7R\times\8iI)^2\biggm |\forall \1x_j\in\2U_j:
\vspace{0.2cm} \\\vs
&&e \ \8{Im}[g(\exp(z_1X_1)\1x_1-\exp(z_2X_2)\1x_2)]\in \7R_+ \biggr\} \ \ .
\end{eqnarray*}
We shall prove in the appendix the lemma given below
which is the analogue of the famous BHW theorem (compare also
\cite{Jost65,StrWgh89}
and references given there):
\begin{Lem}\1: \label{lem21}
For a given interval $I$,
there exists a dense subspace $D\subset\2H(e,\eta)$, such that the function
\begin{eqnarray*}
F_{(X_1,X_2,b_1,b_2)}:(z_1,z_2) \ \longmapsto \
\<\psi_1,\Phi_{(e,\eta,X_1)}(z_1,b_1)\Phi_{(e,\eta,X_2)}(z_2,b_2)\psi_2\>
\end{eqnarray*}
is holomorphic in $G(\2V;X_1,X_2;\2U_1,\2U_2,I)$
for each $\psi_1,\psi_2\in D$.
\end{Lem}
We claim that the $\8E(d)$ invariance of $\eta$ yields that
the dense subspace $D\subset\2H(e,\eta)$ can be chosen in such a way that
\begin{eqnarray*}
&&\8I(\2V;X_2,X_1;\2U_2,\2U_1;I)
\vspace{0.2cm} \\\vs
&&:= G(\2V;X_2,X_1;\2U_2,\2U_1;I)\cap
G(\2V;X_1,X_2;\2U_1,\2U_2;I)\cap\8i\7R^2
\vspace{0.2cm} \\\vs
&&\not= \emptyset \ \ .
\end{eqnarray*}
\begin{Lem}\1: \label{lem22}
If $\2U_1\cap\2U_2=\emptyset$ and
$(s_1,s_2)\in \8I(\2V;X_2,X_1;\2U_2,\2U_1;I)$, then
\begin{eqnarray*}
F_{(X_1,X_2,b_1,b_2)}(\8i s_1,\8i s_2)=
F_{(X_2,X_1,b_2,b_1)}(\8i s_2,\8i s_1) \ \ .
\end{eqnarray*}
\end{Lem}
\paragraph*{\it Proof.}
The lemma is a direct consequence of the euclidean covariance and the
locality of the net $\underline A$.
$\Box$\skb
\paragraph{\it Proof of Theorem \ref{thehk}:}
We conclude from Theorem \ref{thepoin} and the construction of the
algebras $\6A_{(e,\eta)}(\2O)$ that $\underline{\6A}_{(e,\eta)}$
is a Poincar\'e covariant net of C*-algebras, represented on
$\2H(e,\eta)$.
It remains to be proven that $\underline{\6A}_{(e,\eta)}$ is a local net.
For this purpose it is sufficient to show that for each pair
\begin{eqnarray*}
(t_1,t_2)&\in& R(X_1,X_2;\2U_1,\2U_2)
\vspace{0.2cm} \\\vs
&:=&\{ (t_1,t_2) \in\7R^2 |\exp(t_1X_1)\2U_1\subset(\exp(t_2X_2)\2U_2)'\}
\end{eqnarray*}
the commutator
\begin{eqnarray*}
[\Phi_{(e,\eta,X_1)}(t_1,b_1),\Phi_{(e,\eta,X_2)}(t_2,b_2)]|_D=0
\end{eqnarray*}
vanishes on an appropriate dense domain $D\subset\2H(e,\eta)$.
Since the points in $R(X_1,X_2;\2U_1,\2U_2)$ are space-like points,
we conclude that there exist complex
Lorentz boosts $g_\pm\in\2V$ such that
\begin{eqnarray*}
\8{Im}[g_\pm R(X_1,X_2;\2U_1,\2U_2)]\subset V_\pm \ \ .
\end{eqnarray*}
Hence we have
\begin{eqnarray*}
R(X_1,X_2;\2U_1,\2U_2)\subset G(\2V;X_1,X_2;\2U_1,\2U_2;I)\cap
G(\2V;X_2,X_1;\2U_2,\2U_1;I) \ \ .
\end{eqnarray*}
Using Lemma \ref{lem22}, we conclude that
\begin{eqnarray*}
F_{(X_1,X_2,b_1,b_2)}(z_1,z_2)=F_{(X_2,X_1,b_2,b_1)}(z_2,z_1)
\end{eqnarray*}
for
\begin{eqnarray*}
(z_1,z_2)\in G(\2V;X_1,X_2;\2U_1,\2U_2;I)\cap
G(\2V;X_2,X_1;\2U_2,\2U_1;I)
\end{eqnarray*}
which finally yields
\begin{eqnarray*}
F_{(X_1,X_2,b_1,b_2)}(t_1,t_2)=F_{(X_2,X_1,b_2,b_1)}(t_2,t_1)
\end{eqnarray*}
for each $(t_1,t_2)\in R(X_1,X_2;\2U_1,\2U_2)$. This proves the locality
of $\underline{\6A}_{(e,\eta)}$.
$\Box$\skb
\section{Discussion of miscellaneous consequences}
\label{strucasp}
Due to Theorem \ref{thehk} we are able to pass from a
euclidean field $(\underline A,\alpha,\eta)$ to a quantum field theory
in a particular vacuum representation. One crucial condition to apply
our method is the existence of the time-zero algebras.
We shall see that the discussion
of Section \ref{tz} covers all possible situations
for euclidean fields which fulfill the condition (TZ).
Afterwards,
we discuss in Section \ref{fermi} how the
reconstruction scheme has to be generalized in order to
include fermionic operators.
\subsection{Some remarks on euclidean fields which satisfy the
time-zero condition}
\label{tz}
Let us consider a $d-1$-dimensional euclidean net $(\underline B,\beta)$ of
abelian C*-algebras.
\begin{Def}\1: \em\label{defrel}
Let $G$ be a group which contains $\8E(d-1)$ as a sub-group.
We define $A_0(G;B,\beta)$ to be the *-algebra
which is generated by pairs $(g,b)\in G\times B$ modulo the relations:
\begin{enumerate}
\itno 1
For each $g\in G$, the map $b \ \longmapsto \ (g,b)$ is a *-homomorphism.
\itno 2
For each $g\in G$, for each $h\in\8E(d-1)$, and for each $b\in B$:
\newline $(gh,b)=(g,\beta_hb)$
\end{enumerate}
\end{Def}
The algebra $A_0(G;B,\beta)$ possesses a natural C*-norm which is given by
\begin{eqnarray*}
\|a\|:=\sup_{(\2H,\pi)\in R(G;B,\beta)}\|\pi(a)\|_{\2B(\2H)}
\end{eqnarray*}
where $R(G;B,\beta)$ is the set of all representations $\pi$ of
$A_0(G;B,\beta)$ by bounded operators on a Hilbert space $\2H$.
The closure of $A_0(G;B,\beta)$ is denoted by $A(G;B,\beta)$.
\paragraph{Remark:}
There is a natural group homomorphism \newline
$\alpha\in{\rm Hom}(G,{\rm Aut} A(G;B,\beta))$
and a natural faithful embedding $\phi\in{\rm Hom}^*(B,A(G;B,\beta))$ given by:
\begin{eqnarray*}
\alpha_g(g_1,b)&:=&(gg_1,b)
\vspace{0.2cm} \\\vs
\phi(b)&:=&(1,b) \ \ .
\end{eqnarray*}
Of course, we have for each $h\in\8E(d-1)$:
\begin{eqnarray*}
\phi\circ\beta_h&=&\alpha_h\circ\phi \ \ .
\end{eqnarray*}
\vskip 0.5cm
We are mostly interested in two cases for $G$, namely $G={\8P^\uparrow_+}$ and
$G=\8E(d)$. For both groups $A(G;B,\beta)$ has a natural local structure
since ${\8P^\uparrow_+}$ and $\8E(d)$ act as groups on $\7R^d$.
\begin{Def}\1: \em
For a region $\2O\subset\7R^d$ we define $A(G;B,\beta|\2O)$ to be the
C*-sub-algebra in $A(G;B,\beta)$ which is generated by elements
$(g,b)$ with $b\in B(\2U)$ and $g\2U\subset \2O$ and we obtain nets
\begin{eqnarray*}
\underline A(G;B,\beta):\2O \ \longmapsto \ A(G;B,\beta|\2O) \ \ .
\end{eqnarray*}
\end{Def}
In order to get a Haag-Kastler net for $G={\8P^\uparrow_+}$ and a
euclidean net for $G=\8E(d)$, we consider the following ideals:
\begin{enumerate}
\itno 1
$J_c({\8P^\uparrow_+};B,\beta)$ is the two-sided ideal which is generated by
elements $[(g,b),(g_1,b_1)]$ where $(g,b)$ and $(g_1,b_1)$
are localized in space-like separated regions.
\itno 2
$J_c(\8E(d);B,\beta)$ is the two-sided ideal which is generated by
elements $[(g,b),(g_1,b_1)]$ where $(g,b)$ and $(g_1,b_1)$
are localized in disjoint regions.
\end{enumerate}
Thus the prescription
\begin{eqnarray*}
\underline{\6A}_G:\2O \ \longmapsto \ \6A_G(\2O):=
A(G;B,\beta|\2O)/J_c(G;B,\beta)
\end{eqnarray*}
is a ${\8P^\uparrow_+}$-covariant Haag-Kastler net for $G={\8P^\uparrow_+}$, and
an euclidean net of C*-algebras for $G=\8E(d)$.
\begin{Pro}\1: \label{prouni}
Let $(\underline A,\alpha)$ be a $d$-dimensional euclidean net which fulfills
the condition (TZ) and let $(B,\beta)$ be the $d-1$-dimensional
euclidean net, corresponding to the hyper-plane $\Sgm_e$. Then the map
\begin{eqnarray*}
\chi:\6A_{\8E(d)}\ni (g,b) \ \longrightarrow \ \alpha_g(b)\in A
\end{eqnarray*}
is a *-homomorphism which indeed preserves the net structure.
\end{Pro}
\paragraph*{\it Proof.}
By using the relations in
Definition \ref{defrel} we conclude, by some
straightforward computations, that $\chi$ is
a *-homomorphism which preserves the net structure.
$\Box$\skb
An application of Theorem \ref{thehk} gives:
\begin{Cor}\1:
For each regular reflexion positive functional $\eta$ on $\6A_{\8E(d)}$
there exists a vacuum state $\omega_\eta$ on $\6A_{\8P^\uparrow_+}$ such that
\begin{eqnarray*}
\omega_\eta|_B=\eta|_B \ \ .
\end{eqnarray*}
\end{Cor}
\paragraph{Remark:}
\begin{enumerate}
\itno 1
Note that we may view $B$ as a common sub-algebra of $\6A_{\8E(d)}$ and
$\6A_{\8P^\uparrow_+}$ since $B\cap J_c(G;B,\beta)=\{0\}$.
\itno 2
Consider an euclidean field $(\underline A,\alpha,\eta)$ for which the
time-zero algebra $B:=B(e)$ is non-trivial.
By Proposition \ref{prouni}, we conclude that there is a
positive energy representation $\pi_{(e,\eta)}$ of
$\underline{\6A}_{\8P^\uparrow_+}$ on the Hilbert space $\2H(e,\eta)$ whose
image is precisely the net $\underline{\6A}_{(e,\eta)}$. In particular
the GNS-representation of $\omega_\eta$ is a sub-representation of
$\pi_{(e,\eta)}$.
\itno 3
Both the algebra $\6A_{\8P^\uparrow_+}$ of observables in
Minkowski space and the euclidean algebra $\6A_{\8E(d)}$
can be considered as
sub-algebras of $\6A_{P_+(\7C)}$ where the algebra $\6A_{P_+(\7C)}$
is defined by
\begin{eqnarray*}
\6A_{P_+(\7C)}:=
A(P_+(\7C);B,\beta)/[J_c({\8P^\uparrow_+};B,\beta)\cup J_c(\8E(d);B,\beta)] \ \ .
\end{eqnarray*}
\end{enumerate}
\subsection{The treatment of fermionic operators}
\label{fermi}
In order to discuss the treatment of fermionic operators we
introduce the notion of a fermionic euclidean net. The
axioms for such a net coincide with those of an
euclidean net, except for the locality requirement.
\begin{Def}\1: \em
An isotonous and $\8E(d)$-covariant net $(\underline F,\alpha)$
\begin{eqnarray*}
\underline F:\7R^d\supset\2U \ \longmapsto \ F(\2U)=F_+(\2U)\oplus F_-(\2U)
\end{eqnarray*}
of $\7Z_2$-graded C*-algebras is called a {\em fermionic euclidean net}
iff $\2U_1\cap\2U_2=\emptyset$ implies
$[F(\2U_1),F(\2U_2)]_g=\{0\}$, where
$[\cdot,\cdot]_g$ denotes the graded commutator.
\end{Def}
For a given $d-1$-dimensional fermionic net $(\underline F,\beta)$, we
build the C*-algebras $A(\8E(d);F,\beta)$ and $A({\8P^\uparrow_+};F,\beta)$
as introduced in the previous section.
Note, that the algebra $A({\8P^\uparrow_+};F,\beta)$ possesses a
$\7Z_2$-grading, namely we have
\begin{eqnarray*}
A({\8P^\uparrow_+};F,\beta)=A_+({\8P^\uparrow_+};F,\beta)\oplus A_-({\8P^\uparrow_+};F,\beta)
\end{eqnarray*}
where the algebra $A_+({\8P^\uparrow_+};F,\beta)$ is spanned
by products of elements
$(g,b)$ containing an even number of generators in $G\times F_-$:
\begin{eqnarray*}
(g_1,b_1)\cdots (g_{2n},b_{2n}) \ \ .
\end{eqnarray*}
Correspondingly, the sub-space $A_-({\8P^\uparrow_+};F,\beta)$
is spanned by elements which are products of elements
$(g,b)$ containing an odd number of generators in $G\times F_-$:
\begin{eqnarray*}
(g_1,b_1)\cdots (g_{2n-1},b_{2n-1}) \ \ .
\end{eqnarray*}
Analogously to the purely bosonic case, we consider the two-sided ideals
\begin{enumerate}
\itno 1
$J_g({\8P^\uparrow_+};F,\beta)$ which is generated by
graded commutators $[(g,b),(g_1,b_1)]_g$ where $(g,b)$ and $(g_1,b_1)$
are localized in space like separated regions and
\itno 2
$J_g(\8E(d);F,\beta)$ which is generated by
graded commutators $[(g,b),(g_1,b_1)]_g$ where $(g,b)$ and $(g_1,b_1)$
are localized in disjoint regions.
\end{enumerate}
Thus the prescription
\begin{eqnarray*}
\underline{\6F}_G:\2O \ \longmapsto \ \6F_G(\2O):= A(G;F,\beta|\2O)/J_g(G;F,\beta)
\end{eqnarray*}
is a fermionic ${\8P^\uparrow_+}$-covariant Haag-Kastler net for $G={\8P^\uparrow_+}$, and
a fermionic euclidean net for $G=\8E(d)$.
By following the arguments in the proof of Theorem \ref{thehk}
and by keeping in mind that the ordinary commutator has to be
substituted by the graded commutator, we get the result:
\begin{Cor}\1:
For each regular reflexion positive functional $\eta$ on
the fermionic euclidean net $\6F_{\8E(d)}$
there exists a vacuum state $\omega_\eta$ on $\6F_{\8P^\uparrow_+}$ such that
\begin{eqnarray*}
\omega_\eta|_F=\eta|_F \ \ .
\end{eqnarray*}
\end{Cor}
\paragraph{Remark:}
As described in
Section \ref{reconlocobs}, the state is defined by
\begin{eqnarray*}
<\omega_\eta,\prod_{j=1}^n(g_j,b_j)>&=&
\<\Om_{(e,\eta)},\prod_{j=1}^n\Phi_{(e,\eta)}(g_j,b_j)\Om_{(e,\eta)}\> \ \ .
\end{eqnarray*}
\section{Conclusion and outlook}
\label{conout}
\subsection{Concluding remarks and comparison}
We have shown how a quantum field theory can be reconstructed
from a given euclidean field $(\underline A,\alpha,\eta)$ which fulfills the
condition (TZ). We think that in comparison to the
usual Osterwalder-Schrader reconstruction theorem the
reconstruction of a quantum field theory from
euclidean fields (in our sense) has the following advantages:
\paragraph{$\oplus$}
The Osterwalder-Schrader reconstruction theorem relates Schwinger
distributions to a Wightman theory.
One obtains an operator valued distribution
$\Phi$ which satisfies the Wightman axioms.
The reconstructed field operators $\Phi(f)$ are,
in general, unbounded operators and in order to
get a Haag-Kastler net of bounded operators one
has to prove that not only the field operators
$\Phi(f)$, $\Phi(f_1)$ commute if $f$ and $f_1$ have
space-like separated supports, but also
its corresponding spectral projections. Furthermore, as mentioned
in the introduction, in order to apply the results of
\cite{OstSchra1} one has to prove that the Schwinger distributions
are continuous with respect to an appropriate topology.
Since our considerations are based on C*-algebras, we
directly obtain, via our reconstruction scheme,
a Haag-Kastler net of {\em bounded} operators.
In our case, the technical conditions
which a reflexion positive functional has to satisfy
are more natural. It has to be continuous and
regular where the continuity is automatically fulfilled if one
considers reflexion positive states.
Our reconstruction scheme also includes
objects, like Wilson loop variables, which are not
point-like localized objects in a distributional sense.
This point of view may also be helpful for
constructing gauge theories.
Furthermore, one also may start with an abelian C*-algebra like
the example of Wilson loop variables, given in the introduction.
Abelian C*-algebras are rather simple objects, namely
nothing else but continuous functions on a compact
Hausdorff space. In comparison to the
construction of reflexion positive functionals on
the tensor algebra $T^\2T_E(S)$, one may hope that it is
easier to construct reflexion positive functionals for abelian
C*-algebras. This might simplify the construction of quantum
field theory models.
\vskip 0.5cm
Nevertheless, we also have to mention some drawbacks:
\paragraph{$\ominus$}
Unfortunately, our reconstruction scheme is not a
complete generalization of the Osterwalder-Schrader reconstruction.
This is due to the fact that we have assumed the existence of
enough operators in $A$ which can be localized on a sharp
$d-1$-dimensional hyper plane (condition (TZ)).
Such a condition is not needed within the
Osterwalder-Schrader framework and there are indeed examples
of quantum field theories which do not fulfill this
condition, for instance the generalized free field for
which the mass distribution is not $L_1$.
On the other hand, the known interacting models like
the $P(\phi)_2$, the Yukawa$_2$ as well as the $\phi^4_3$ model
fulfill the condition (TZ). Thus we think that
the existence of the time-zero algebras is not such a harmful
requirement.
\subsection{Work in progress}
The main aim of our work in progress is concerned with the
construction of examples for euclidean fields which go
beyond the free fields.
It would also be desirable to develop a generalization of
our reconstruction scheme which also leads directly
to a Haag-Kastler net but which does not rely on the
condition (TZ).
A further open question is concerned with a reconstruction scheme
for euclidean fields with cutoffs.
The main motivation for such considerations
is based on the
work of J. Magnen, V. Rivasseau, and R. S\'en\'eor \cite{MagRivSen93}
where it is claimed that the Yang-Mills$_4$ exists within a
finite euclidean volume.
\subsubsection*{{\it Acknowledgment:}}
I am grateful to Prof. Jakob Yngvason for
supporting this investigation with many ideas.
I am also grateful to Prof. Erhard Seiler and Prof. Jacques Bros
for many hints and discussions during the workshop at the
Erwin Schr\"odinger International Institute for Mathematical Physics
in Vienna (ESI) this autumn.
This investigation is financially supported by the
Deutsche Forschungsgemeinschaft (DFG), which is also gratefully acknowledged.
\newpage
\begin{appendix}
\section{Analytic properties}
Within this appendix we give a complete proof of
Lemma \ref{lem21}. We shall use a simplified version of the notation
introduced in the previous sections by dropping the indices $(e,\eta)$.
Let $(A,\alpha,\eta)$ be an euclidean field and let $U$
be the corresponding strongly continuous representation of the Poincar\'e
group on $\2H=\2H(e,\eta)$
which has been constructed by Theorem \ref{thepoin}.
Furthermore, let $\pi$ be the *-representation of the time-zero algebra
$B$ on $\2H$.
For a given tuple $(X,b)\in\8i\6m(d-1)^n\times B^n$,
we would like to study the analytic properties of the function
\begin{eqnarray*}
\Psi_n[X,b]:\7C^{2n}\ni (z,z') \ \longmapsto \
\prod_{j=1}^nU_{X_j}(z_j)\pi(b_j)U_{X_j}(z'_j)\psi
\end{eqnarray*}
where $\psi\in D(\Gam,\eta)$ and $\Gam$ is a cone which is contained in
$\8{Con}(e)$ and we write:
\begin{eqnarray*}
U_X(\zeta):=U(\exp(-\8i\zeta X)) \ \ .
\end{eqnarray*}
For this purpose, we introduce some technical definitions.
\begin{Def}\1: \em
For a generator $X\in\8i\6m(d-1)$, for an operator $b\in B(\2U)$
and for a cone $\Gam\in\8{Con}(e)$,
we define the regions:
\begin{eqnarray*}
I(\Gam,X)&:=&\{s'|\exp(-\8is'X)\Gam\subset e\7R_++\Sgm_e\}
\vspace{0.2cm} \\\vs
J(\Gam,X,b,s')&:=&\{s|\exp(-\8isX)[\exp(-\8is'X)\Gam\cup\2U]
\subset e\7R_++\Sgm_e\}
\vspace{0.2cm} \\\vs
G(\Gam|X,b)&:=&\bigcup_{s'\in I(\Gam,X)}[\7R+\8iJ(\Gam,X,b,s')\times
\7R+\8i\{s'\}]
\end{eqnarray*}
\end{Def}
\begin{Def}\1: \em
\begin{enumerate}
\itno 1
Consider a region $\2U$ which is contained in
$\Sgm_e+e\tau$, $\tau\geq 0$.
We define the corresponding time-zero algebra by
$B(\2U):=\alpha_{e\tau}B(\2U-e\tau)$.
\itno 2
For a given tuple
\begin{eqnarray*}
(X,b,s,s')\in\8i\6m(d-1)^n\times B(\2U_1)\times\cdots\times B(\2U_n)
\times\7R^{2n}
\end{eqnarray*}
we define recursively the regions
\begin{eqnarray*}
\Gam_0&:=&\Gam
\vspace{0.2cm} \\\vs
\Gam_1(s_1,s_1')&:=&\8{conv}(\exp(-\8is_1X_1)[\exp(-\8is'_1X_1)\Gam\cup\2U_1])
\vspace{0.2cm} \\\vs
\Gam_n(s_1\cdots s_n,s_1'\cdots s_n')&:=&
\8{conv}(\exp(-\8is_nX_n)[\exp(-\8is'_nX_n)\times
\vspace{0.2cm} \\
&&
\times\Gam_{n-1}(s_1\cdots s_{n-1},s_1'\cdots s_{n-1}')\cup\2U_n])
\end{eqnarray*}
\end{enumerate}
\end{Def}
\begin{Def}\1: \em
For each $n\in \7N$ we introduce the region:
\begin{eqnarray*}
G_n(\Gam;X,b):=\{(s_1\cdots s_n,s_1'\cdots s_n')|\forall k\leq n:
\Gam_k(s_1\cdots s_k,s_1'\cdots s_k')\subset e\7R_++\Sgm_e\} \ \ .
\end{eqnarray*}
\end{Def}
\begin{Lem}\1: \label{lema}
For a given tuple
\begin{eqnarray*}
(X,b)\in\8i\6m(d-1)^n\times B(\2U_1)\times\cdots\times B(\2U_n)
\end{eqnarray*}
the function $\Psi_n[X,b]$ is holomorphic in $\7R^{2n}+\8iG_n(\Gam;X,b)$.
\end{Lem}
\paragraph*{\it Proof.}
We prove the statement by induction.
The vector
$\psi\in D(\Gam,\eta)$ is contained in the domain of
$U_{X_1}(\8is_1')$ as long as $s_1'\in I(\Gam,X_1)$.
For a fixed value $s_1'\in I(\Gam,X_1)$ the
vector $\pi(b_1)U_{X_1}(\8is_1')\psi$ is contained
in the domain of $U_{X_1}(\8is_1)$ for $s_1\in J(\Gam,X_1,b_1,s_1')$.
This implies that $\Psi_1[X_1,b_1]$ is holomorphic in
$G(\Gam|X_1,b_1)\supset\7R^2+\8i G_1(\Gam;X,b)$.
Suppose $\Psi_{n-1}[X_1\cdots X_{n-1},b_1\cdots b_{n-1}]$ is
holomorphic in $\7R^{2(n-1)}+\8i G_{n-1}(\Gam;X,b)$. By the same argument as
above we conclude that for fixed values
$(s,s')\in G_{n-1}(\Gam;X,b)$ the function
\begin{eqnarray*}
(z_n,z_n') \ \longmapsto \ \Psi_n[X,b]
(\8is,z_n,\8is',z_n')
\end{eqnarray*}
is holomorphic in
\begin{eqnarray*}
G(\Gam_{n-1}(s,s')|X_n,b_n)
\end{eqnarray*}
and hence it is holomorphic in
\begin{eqnarray*}
\bigcup_{(s,s')\in G_{n-1}(\Gam;X,b)}
\7R^{2(n-1)}+\8i\{(s,s')\}\times G(\Gam_{n-1}(s,s')|X_n,b_n)
\end{eqnarray*}
which is a region containing $\7R^{2n}+\8iG_n(\Gam;X,b)$.
$\Box$\skb
\section{Proof of Lemma \ref{lem21}}
For a given euclidean field $(\underline A,\alpha,\eta)$ we introduce
the following notions:
\begin{Def}\1: \em
\begin{enumerate}
\itno 1
We define the subspace
\begin{eqnarray*}
D(\Gam;\eta):=\8p_{(e,\eta)}A(\Gam) \ \mbox{ and } \ \hat D(\Gam;\eta):=
U_{(e,\eta)}({\8L^\uparrow_+})D(\Gam;\eta) \ \ .
\end{eqnarray*}
\itno 2
Let $X\in\8i\6m(d-1)$. For two regions $\Gam_1\subset\Gam$ we define
\begin{eqnarray*}
I(\Gam_1,\Gam;X):=
\{s\in\7R_+|\exp(-\8isX)\Gam_1\subset\Gam\} \ \ .
\end{eqnarray*}
\itno 3
For a generator $X\in\8i\6m(d-1)$ we define
the region
\begin{eqnarray*}
\2U(s,X):=\exp(-\8isX)\2U
\end{eqnarray*}
for each $s\in\7R$.
\itno 4
Given two regions $\2U_1,\2U_2$ in $\7R^d$, we define
\begin{eqnarray*}
G_e(X_1,X_2;\2U_1,\2U_2;I)&:=&\biggl\{(z_1,z_2)\in(\7R\times\8iI)^2\biggm
|\forall \1x_j\in\2U_j:
\vspace{0.2cm} \\\vs
&&e \ \8{Im}(\exp(z_1X_1)\1x_1-\exp(z_2X_2)\1x_2)\in \7R_+ \biggr\}
\vspace{0.2cm} \\\vs
G^g_e(X_1,X_2;\2U_1,\2U_2;I)&:=&\biggl\{(z_1,z_2)\in(\7R\times\8iI)^2\biggm
|\forall \1x_j\in\2U_j:
\vspace{0.2cm} \\\vs
&&e \ \8{Im}[g(\exp(z_1X_1)\1x_1-\exp(z_2X_2)\1x_2)]\in \7R_+ \biggr\}
\end{eqnarray*}
where $g\in\8P_+(\7C)$ is a complex Poincar\' e transformation.
\end{enumerate}
\end{Def}
\begin{Lem}\1: \label{lemhola}
Let $\Gam_1,\Gam\in\8{Con}(e)$ be two conic regions
such that $\Gam_1\subset\Gam$ is a proper inclusion.
Then there exists an interval $I$ such that
for each $b_1\in B(\2U_1),b_2\in B(\2U_2)$
and for each $\psi_1,\psi_2\in \hat D(\Gam_1;\eta)$ the function
\begin{eqnarray*}
F^{(\psi_1,\psi_2)}_{(X_1,X_2,b_1,b_2)}:(z_1,z_2) \ \longmapsto \
\<\psi_1,\Phi_{X_1}(z_1,b_1)\Phi_{X_2}(z_2,b_2)\psi_2\>
\end{eqnarray*}
is holomorphic in $G_e(X_1,X_2;\2U_1,\2U_2;I)$.
\end{Lem}
\paragraph*{\it Proof.}
First we obtain, by an application of Lemma \ref{lema},
that for each $\psi_1\in\2H(e,\eta)$
and for each $\psi_2\in D(\Gam,\eta)$, the function
\begin{eqnarray*}
(z,\zeta) \ \longmapsto \
\<\psi_1,\Phi_{X_2}(z,b_2)U_X(\zeta)\psi_2\>
\end{eqnarray*}
is holomorphic for $\8{Im}\zeta\in I(\Gam_1,\Gam;X)$ and
$\8{Im}z\in I(\Gam,X_2)$,
for each $X\in\8i\6m(d-1)$. The holomorphy is due to the fact that
$U$ is a strongly continuous representation of the
Poincar\' e group and that $D(\Gam;\eta)$ consists of
analytic vectors for the boost generators.
For fixed values $s'\in I(\Gam_1,\Gam;X)$ and $s\in I(\Gam,X_2)$, we have
\begin{eqnarray*}
\Phi_{X_2}(\8is,b_2)U_X(-\8is')\psi_2\in
D(\hat\Gam;\eta)
\end{eqnarray*}
for each region $\hat\Gam\subset e\7R_++\Sgm_e$
which contains $\Gam\cup\2U_2(s,X_2)$.
\vskip 0.5cm
Now, for a given point $(z,\8is)\in G_e(X_1,X_2;\2U_1,\2U_2;I)$
there exists a conic region $\Gam(z,\8is)\in\8{Con}(e)$ with
$\Gam(z,\8is)\supset\Gam\cup\2U_2(s,X_2)$ such that
$D(\Gam(z,\8is);\eta)$ is contained in the domain of
$\Phi_{X_1}(z,b_1)$. Furthermore, for a
given interval $I$, the cone $\Gam$ can be
chosen to be small enough such that this holds for
each $(z,\8is)$ with $\8{Im}z,s\in I$.
Since $\Gam_1$ is $O(d-1)$-invariant, the result follows.
$\Box$\skb
Let $\2V\supset{\8L^\uparrow_+}$ be a neighborhood of ${\8L^\uparrow_+}$ in $\8P_+(\7C)$.
We may choose a cone $C(\Gam,\2V)\in\8{Con}(e)$
such that
\begin{eqnarray*}
g C(\Gam,\2V)\subset \Gam
\end{eqnarray*}
for each $g\in \8E(d)\cap\2V$. Note that the representation
$U$ can be extended to $\2V$ by unbounded operators
with domain $\hat D(\Gam_1,\eta)$ where $\Gam_1\subset C(\Gam,\2V)$.
In order to finish the proof of Lemma \ref{lem21}, we show the
following statement:
\begin{Lem}\1: \label{lemholb}
Let $\2U_1,\2U_2$ be two bounded disjoint regions
and let $\Gam_1\in\8{Con}(e)$ such that $\Gam_1\subset C(\Gam,\2V)$
is a proper inclusion.
Then the function $F^{(\psi_1,\psi_2)}_{(X_1,X_2,b_1,b_2)}$
has an extension $\hat F^{(\psi_1,\psi_2)}_{(X_1,X_2,b_1,b_2)}$
which is holomorphic in
\begin{eqnarray*}
G(\2V;X_1,X_2;\2U_1,\2U_2;I):=
\bigcup_{g\in\2V} G_e^g(X_1,X_2;\2U_1,\2U_2;I)
\end{eqnarray*}
for each $\psi_1,\psi_2\in \hat D(\Gam_1;\eta)$.
\end{Lem}
\paragraph*{\it Proof.}
For a given neighborhood $\2V\supset {\8L^\uparrow_+}$ of ${\8L^\uparrow_+}$ in $\8P_+(\7C)$
and for a given cone $\Gam\in\8{Con}(e)$, there
exists $\eps>0$ such that $g\2U_2+\eps e\subset \Gam$.
We easily observe that the substitution
\begin{eqnarray*}
\psi_j'&:=&T(\eps)U(g)\psi_j
\vspace{0.2cm} \\\vs
X_j'&:=& \exp(-\8i\eps H)gX_jg^{-1}\exp(\8i\eps H)
\end{eqnarray*}
yields
\begin{eqnarray*}
F^{(\psi_1',\psi_2')}_{(X_1',X_2',b_1,b_2)}(z_1,z_2)=
F^{(\psi_1,\psi_2)}_{(X_1,X_2,b_1,b_2)}(z_1,z_2)
\end{eqnarray*}
for each $(z_1,z_2)\in G_e(X_1,X_2;\2U_1,\2U_2;I)$ where
$H$ is the generator of translations in $e$-direction.
According to Lemma \ref{lemhola},
the function $F^{(\psi_1',\psi_2')}_{(X_1',X_2',b_1,b_2)}$ is
holomorphic in $G^g_e(X_1,X_2;\2U_1,\2U_2;I)$
which implies the result.
$\Box$\skb
\end{appendix}
\newpage
\section{Introduction}
The double Hawking temperature $2T_H$ appears in some approaches to the Hawking radiation from the black hole and cosmological horizons considered as quantum tunneling.
In the case of the black hole horizon, the $2T_H$ is the result of an incorrect choice of reference frame, while the correct choice (the Painleve-Gullstrand coordinate system) gives the Hawking temperature $T_H$.
In the de Sitter Universe the situation is different: the temperature $T=2T_H$ is physical, as it is the local temperature experienced by matter well inside the cosmological horizon. We discuss the possible connections between these two manifestations of the double Hawking temperature.
\section{$2T_H$ problem for black holes}
\label{2THblack hole}
In the semiclassical consideration of the Hawking radiation\cite{Hawking1975} in terms of the quantum tunneling \cite{Volovik1999,Wilczek2000,Volovik2009},
the Painleve-Gullstrand (PG) coordinate system \cite{Painleve,Gullstrand} is used with the metric:
\begin{equation}
ds^2= - dt^2(1-{\bf v}^2) - 2dt\, d{\bf r}\cdot {\bf v} + d{\bf r}^2 \,,
\label{PGmetric}
\end{equation}
where $v^2(r) =R/r$ and $R=2M$ is the position of the black hole horizon ($G=c=\hbar=1$).
This metric does not have a singularity at the horizon, and it gives the tunneling exponent corresponding to the Hawking temperature $T_H= 1/8\pi M$. This process can be interpreted as the pair production of two entangled particles, of which one goes towards the center of the black hole, while the other one escapes.
In some approaches to the Hawking radiation as quantum tunneling, the $2T_H$ problem arises.\cite{Akhmedov2006,Laxmi2021} We consider this problem using the Klein–Gordon equation for a massive field
in a curved background,\cite{Akhmedov2006}
which leads to the relativistic Hamilton--Jacobi equation for the classical action:
\begin{equation}
g^{\mu\nu}\partial_\mu S \partial_\nu S + m^2 =0\,.
\label{HJ}
\end{equation}
Let us start with the PG metric (\ref{PGmetric}), since it does not have coordinate singularity at the horizon.
One has for fixed energy $E$:\cite{Akhmedov2006}
\begin{equation}
-E^2 +(1-v^2) \left(\frac{dS}{dr}\right)^2 + 2vE\frac{dS}{dr}+m^2=0\,.
\label{KG}
\end{equation}
This gives for the classical action
\begin{equation}
S=-\int dr \frac{Ev}{1-v^2} \pm \int \frac{dr}{1-v^2}\sqrt{E^2-m^2(1-v^2)} \,.
\label{SPG}
\end{equation}
For the plus sign in the second term the imaginary parts of the two terms cancel each other. This corresponds to the incoming trajectory of the particle.
For the minus sign in the second term the total tunneling exponent describes the radiation with the Hawking temperature $T_H$:
\begin{eqnarray}
\exp{ \left(-2\,{\rm Im}\, S\right)}= \exp{ \left(-\frac{E}{2T_H} \right)} \exp{ \left(-\frac{E}{2T_H} \right)} =
\nonumber
\\
=\exp{ \left(-\frac{E}{T_H} \right)} \,.
\label{TH2}
\end{eqnarray}
Note that each of the two terms in Eq.(\ref{SPG}) corresponds to the effective temperature $2T_H$.
That is why if one of the two terms is lost in calculations, the temperature $2T_H$ erroneously emerges.
In Ref. \cite{Akhmedov2006} the Schwarzschild coordinates were also used (see also Ref.\cite{Srinivasan1999}):
\begin{equation}
ds^2=- \left( 1- \frac{R}{r} \right)dt^2 + \frac{dr^2}{\left( 1- \frac{R}{r} \right)} + r^2 d\Omega^2
\,.
\label{StaticMetric}
\end{equation}
This gives only the second term in Eq.(\ref{SPG}) and as a result the $2T_H$ temperature is obtained.
In Ref.\cite{Srinivasan1999} it was argued that the Hawking temperature is restored by consideration of the ratio between emission and absorption.
The first term in Eq.(\ref{SPG}) is missing because it is not taken into account that the Schwarzschild metric has a singularity at the horizon. As a result, the transition from PG coordinates to Schwarzschild coordinates requires the singular coordinate transformation:
\begin{equation}
dt \rightarrow d\tilde t +dr \frac{v}{1-v^2} \,.
\label{CoordinateTransformation}
\end{equation}
Then the corresponding action contains two terms:
\begin{equation}
S=-\int E \,dt \pm \int dr \frac{1}{1-v^2}\sqrt{E^2-m^2(1-v^2)} \,,
\label{SScgh}
\end{equation}
The first term in Eq. (\ref{SScgh}) gives the missing tunneling exponent:
\begin{eqnarray}
\exp{ \left(-2{\rm Im} \int E dt \right)} =
\nonumber
\\
= \exp{ \left(-2{\rm Im} \int \left(E d\tilde t + dr \frac{Ev}{1-v^2} \right) \right)} =
\nonumber
\\
= \exp{ \left(-2{\rm Im} \int dr \frac{Ev}{1-v^2} \right)} =
\nonumber
\\
= \exp{ \left(-2\pi E R\right)}
= \exp{ \left(-\frac{E}{2T_H} \right)} \,.
\label{Et}
\end{eqnarray}
So, the total tunneling exponent is
\begin{equation}
\exp{ \left(-2\,{\rm Im}\, S\right)}= \exp{ \left(-\frac{E}{2T_H}\right)}\exp{ \left(\pm\frac{E}{2T_H}\right)}\,.
\label{TunnSchw}
\end{equation}
The minus sign corresponds to the radiation from the PG black hole considered in the Schwarzschild coordinates, and it gives the Hawking radiation with $T=T_H$. Note again that in this approach Eq.(\ref{TunnSchw}) also contains two contributions, each corresponding to the effective temperature $2T_H$.
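As a consistency check, the radial integral in Eq.(\ref{Et}) can be evaluated explicitly (an elementary computation which we add here). Near the horizon one has $v=\sqrt{R/r}\rightarrow 1$ and $1-v^2=(r-R)/r$, so that the integrand has a simple pole at $r=R$:
\begin{eqnarray*}
\frac{Ev}{1-v^2}\approx \frac{ER}{r-R} \,,\quad
{\rm Im} \int dr \frac{Ev}{1-v^2}=\pi E R\,,
\end{eqnarray*}
where the second equality is the half-residue at the pole. With $R=2M$ and $T_H=1/8\pi M$ this reproduces $\exp(-2\pi ER)=\exp(-E/2T_H)$.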
Note that according to Refs.\cite{Volovik2020a,Volovik2021f,Volovik2021d,Volovik2022}, the hole object can be in three different states: the PG black hole, the PG white hole and the intermediate state, the neutral fully static object described by Schwarzschild coordinates. These three states can be obtained from each other by singular coordinate transformations in Eq.(\ref{CoordinateTransformation}). These singularities have been used for calculations of the macroscopic tunneling transitions between these objects and for calculations of the entropy of each object.\cite{Volovik2020a,Volovik2021f,Volovik2021d,Volovik2022} For the fully static Schwarzschild black hole, the imaginary parts of the two terms in Eq.(\ref{SPG}) cancel each other. This demonstrates that such a hole does not radiate and has zero entropy.
\section{$2T_H$ problem in the de Sitter spacetime}
Let us now go to the problem of the double-Hawking temperature in de Sitter spacetime. In the dS spacetime, one has $v^2= r^2H^2$, where $H$ is the Hubble parameter, and the cosmological horizon is at $R=1/H$. The same procedure as in Section \ref{2THblack hole} for the PG black hole gives the Hawking radiation in Eq.(\ref{TH2}) with the Hawking temperature $T_H=H/2\pi$.
However, there are several arguments that there is the local temperature experienced by matter well inside the horizon, and it is twice the Hawking temperature, $T=H/\pi=2T_H$.\cite{Volovik2009b,Volovik2021g,Volovik2022} This is supported in particular by calculations of the tunneling rate of ionization of atoms in the de Sitter space, which is determined by the local temperature.
The same local temperature describes the process of the splitting of a composite particle with mass $m$ into two components with $m_1 +m_2>m$, which is also not allowed in the Minkowski vacuum.\cite{Bros2008,Bros2010,Volovik2009b} The probability of this process (for $m\gg T_{\rm H}$) is:
\begin{eqnarray}
\Gamma(m \rightarrow m_1+m_2) \sim \exp{\left(-\frac{ m_1+m_2-m}{ 2T_H}\right)}\,.
\label{mto2}
\end{eqnarray}
In particular, for $m_1 =m_2=m$, the decay rate of a massive field in the de Sitter spacetime is obtained
\begin{eqnarray}
\Gamma \sim \exp{\left(-\frac{m}{2T_H} \right)} \,,
\label{MassCreation}
\end{eqnarray}
which is in agreement with Ref. \cite{Jatkar2012}.
Let us look at this problem using the action (\ref{SPG}). Till now we have considered this action for the calculation of the imaginary part for a particle with energy $E>m$ on the trajectory in the complex plane which connects the trajectory inside the horizon and the trajectory outside the horizon. This corresponds to the creation of two particles: one inside the horizon and another one outside the horizon.
However, there is also the trajectory which allows for the creation of a single particle fully inside the cosmological horizon. In this creation from ``nothing'', the particle with mass $m$ must have zero energy, $E=0$. This is possible, as follows from the second term in Eq.(\ref{SPG}), which gives the following imaginary part of the action at $E=0$:
\begin{eqnarray}
{\rm Im}\, S(E=0)= m \int _0^{1/H}\frac{dr}{\sqrt{1- r^2H^2}} =\frac{\pi}{2}\frac{m}{H}\,.
\label{LocalCreation1}
\end{eqnarray}
The probability of radiation
\begin{eqnarray}
\exp{ \left(-2\,{\rm Im}\, S\right)}= \exp{ \left(-\frac{\pi m}{H}\right)}=\exp{ \left(-\frac{m}{2T_H}\right)}\,.
\label{LocalCreation2}
\end{eqnarray}
corresponds to the thermal creation of particles by the environment with a local temperature equal to the double Hawking temperature, $T=H/\pi=2T_H$ in Eq.(\ref{MassCreation}). The same $T=2T_H$ is obtained for the tunneling rate of ionization of atoms in the de Sitter space, which looks thermal.
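The integral in Eq.(\ref{LocalCreation1}) is elementary; we display its evaluation for completeness. With the substitution $r=H^{-1}\sin\theta$ one finds
\begin{eqnarray*}
\int _0^{1/H}\frac{dr}{\sqrt{1- r^2H^2}} =\frac{1}{H}\int_0^{\pi/2}d\theta =\frac{\pi}{2}\frac{1}{H}\,,
\end{eqnarray*}
which gives ${\rm Im}\, S(E=0)=\pi m/2H$ as stated.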
The same approach is also applicable to the black hole. It describes the process of creation of a particle with zero energy, $E=0$, without creation of its partner inside the black hole. Observation of such single-particle creation is possible if the detector is at a finite distance $R_{\rm o}$ from the black hole. Then the second term in (\ref{SPG}) gives
\begin{eqnarray}
{\rm Im}\, S(E=0)= m \int _R^{R_{\rm o}}\frac{dr}{\sqrt{1- R/r}} \,.
\label{SingleBH1}
\end{eqnarray}
Far from the horizon, at $R_{\rm o}\gg R$, this process is exponentially suppressed:
\begin{equation}
p \sim \exp{(-2mR_{\rm o})}\,\,,\,\, R_{\rm o}\gg R \,.
\label{SingleBH2}
\end{equation}
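The integral in Eq.(\ref{SingleBH1}) can be computed in closed form (an elementary evaluation added here; substitute $r=R\cosh^2 u$):
\begin{eqnarray*}
{\rm Im}\, S(E=0)= m\left[\sqrt{R_{\rm o}(R_{\rm o}-R)}+R\,{\rm arccosh}\sqrt{R_{\rm o}/R}\right]
\approx mR_{\rm o}\,,\quad R_{\rm o}\gg R\,,
\end{eqnarray*}
which makes the suppression in Eq.(\ref{SingleBH2}) explicit.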
The more general case with nonzero $E<m$ was considered in Ref.\cite{Jannes2011}. It is the combined process, in which the Hawking radiation is measured by the observer at a finite distance $R_{\rm o}$ from the black hole. The process is described by two tunneling exponents. For $R_{\rm o}\gg R$ one obtains:\cite{Jannes2011}
\begin{eqnarray}
p \sim \exp\left(-\frac{E}{T_H}\right),\,\, E>m\,,
\label{SingleBH3}
\\
p \sim \exp\left(-\frac{E}{T_H}\right) \exp\left(-2R_{\rm o}\sqrt{m^2-E^2}\right), E<m.
\label{SingleBH4}
\end{eqnarray}
The first exponent in Eq.(\ref{SingleBH4}) comes from the horizon and corresponds to the conventional Hawking radiation, while the second one describes the tunneling of the created particle, which occurs outside the horizon. The second process, which takes place only at $E<m$, is exponentially suppressed for $R_{\rm o}\gg R$.
In the de Sitter background, on the contrary, both processes are related to the Hawking temperature: the single-particle process is described by the local temperature $T=H/\pi=2T_H$, while the entangled-pair process is characterized by the Hawking temperature $T_H$. An explanation of this connection between the two processes has been suggested in Sec. VB of Ref.\cite{Volovik2022}. In the de Sitter Universe all points are equivalent, and thus each point may serve as the event horizon for some other observer. That is why the observer at a given point sees the Hawking radiation as the creation of two particles: one inside the horizon viewed by the distant observer and the other outside this horizon. The probability of simultaneous radiation of two particles in the thermal environment with the local temperature $T=2T_H$ is:
\begin{equation}
\exp{ \left(-\frac{E}{2T_H}\right)}\exp{ \left(-\frac{E}{2T_H}\right)}=\exp{ \left(-\frac{E}{T_H}\right)}\,.
\label{DoubleHawking}
\end{equation}
For the distant observer, who can see only one of the two radiated particles, this looks like radiation at the Hawking temperature $T_H$.
\section{Conclusion}
There is a connection between Eqs. (\ref{LocalCreation2}) and (\ref{DoubleHawking}) for the de Sitter spacetime on one hand and the similar equations (\ref{TH2}) and (\ref{TunnSchw}) for the black hole horizon on the other hand. In both cases the temperature $T=2T_H$ enters. However, the physics is different.
In the case of the PG black hole, there are two contributions to the tunneling process of radiation, each governed by the temperature $T=2T_H$. They combine coherently to produce radiation at the Hawking temperature $T_H$. This process can be traditionally interpreted as the creation of a pair of entangled particles, of which one goes towards the centre of the black hole, while the other escapes from it.
In the case of de Sitter spacetime, the temperature $T=2T_H$ is physical. Instead of the creation of an entangled pair, this local temperature describes the thermal creation of a single (non-entangled) particle inside the cosmological horizon. Such a single-particle process is highly suppressed in the case of the black hole, see Eqs.(\ref{SingleBH2}) and (\ref{SingleBH4}).
How such a process influences the thermodynamics of the de Sitter spacetime remains an open question.\cite{Volovik2022}
{\bf Acknowledgements}. I thank E. Akhmedov for discussions. This work has been supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No. 694248).
\section{Introduction}
Recently, the investigation of entanglement properties in systems with many degrees of freedom has become a challenging line of research which has revealed a growing number of connections between quantum information and computational science, statistical mechanics, quantum field theory, as well as formerly distant topics such as spin systems and black hole physics.
In particular, much attention has been devoted to the study of strongly
correlated spin and/or electronic models in one dimension, with the
aim to unveil the relationship between entanglement properties of
the ground state and quantum phase transitions. For systems made up of more than two two-level subsystems, there is no unique way of characterizing the degree of entanglement stored by the system. Thus, several different measures of entanglement have been proposed in the literature over the
last few years and used to study the critical properties of the above
mentioned systems. These studies include more standard definitions
of entanglement measures, such as concurrence \cite{Fazio,Fazio2,Vidal},
as well as others, such as Renyi entropies \cite{Franchini}, local indicators \cite{Campos}, or fidelity \cite{Zanardi}.
In the realm of quantum field theory, one usually studies entanglement
properties of a system via the computation of the so-called Von Neumann
entropy $S=-\mbox{Tr}[\rho_{A}\log\rho_{A}]$ related to the reduced
density matrix $\rho_{A}$ of a subsystem A, as proposed in a seminal
paper by Calabrese and Cardy \cite{Cardy}, in which both the critical
(conformal) and the free massive cases have been examined. As for
the massless situation, much evidence has been accumulated proving
that, for a wide class of lattice systems, from the formula for $S$
one can extract the central charge of its scaling limit conformal
field theory. This is the case, for example, of spin $\kappa/2$ XXZ-chains
$(\kappa=1,2,3...)$ \cite{Weston} or of the $SU(3)$ AFM Heisenberg
chain \cite{Aguado}. In addition, some attention has been devoted
to the relationship between entanglement properties of the vacuum
and the boundary conditions of the theory \cite{Weston,Asorey}. Important
analytical progress has also been made in the context of massive integrable
quantum field models, for which a framework for the computation of
Von Neumann entropy has been developed using factorized scattering
and form factor techniques \cite{Doyon,Doyon2}.
In this paper we will first study the XYZ model and compute the exact
expression for the entanglement entropy for an infinite bipartition
of such quantum spin chain. As a measure of the entanglement, we will
use the Von Neumann entropy of the density matrix associated with
a semi-infinite segment of the chain. We will avoid the difficulties
concerning the direct computation of the density matrix by mapping
the quantum chain onto a two-dimensional classical spin system. As
Nishino \emph{et al.} \cite{Nishino1,Nishino2} first pointed out,
there is a connection between the density matrix of a quantum chain
and the classical partition function of a two-dimensional strip with
a cut perpendicular to it \cite{Peschel1}. In fact, the ground state
of the quantum chain described by a Hamiltonian $\hat{H}$ is also
an eigenvector of the row-to-row transfer matrix $T$ of the classical
model, provided that $[\hat{H},T]=0$. In \cite{Cardy}, this analogy
was used in order to compute the entanglement of a transverse Ising
chain and of a XXZ chain near their critical points. By exploiting
the existing link between the eight vertex model and the XYZ chain
\cite{Sutherland}, we will obtain a formula for the entanglement
entropy for the latter, which once more confirms the following universal
expression for $S$, which is valid near criticality when the correlation
length $\xi$ is much larger than the lattice spacing $a$: \begin{equation}
S\sim(c/6)\log(\xi/a)+U\label{eq:intro1}\end{equation}
where $c$ is the central charge of the conformal field theory describing
the critical point that the theory is approaching. This formula first
appeared in the context of black hole physics \cite{Srednicki}.
Here $U$ is a non-universal constant which is well known to depend
on the particular model under investigation. When the bipartition
is made up of two semi-infinite chains, as it is in this paper, it
has been noted by several authors \cite{Cardy,Weston,Asorey,Olalla}
that it contains information about the so-called Affleck-Ludwig boundary
entropy \cite{Affleck}.
At present, with the method of this paper, we are not able to extract the boundary-entropy information from the non-universal constant $U$. The study of the exact link between this term and the boundary entropy requires more accurate calculations of correlation functions, which are not yet easily accessible in integrable models. The present exact result, however, could become very useful once it can be compared with calculations coming from independent methods.
Finally, we will be interested in a particular and relevant scaling
limit of the XYZ chain, which yields the 1+1 dimensional sine-Gordon
model \cite{Luther,McCoy}. After a brief discussion of the connection
between these two models in the thermodynamic limit, we will present
the formula for the entanglement entropy of the latter which has a
dominant logarithmic term in perfect agreement with what we expected
by seeing the sine-Gordon model as a perturbed $c=1$ conformal field
theory. This formula also gives an analytic expression for the constant
$U$.
\section{Entanglement entropy for the XYZ model via corner transfer matrix\label{sec:XYZ}}
Let us consider the quantum spin-$\frac{1}{2}$ XYZ chain, which is
described by the following hamiltonian\begin{equation}
\hat{H}_{XYZ}=-{\displaystyle \sum_{n}}(J_{x}\sigma_{n}^{x}\sigma_{n+1}^{x}+J_{y}\sigma_{n}^{y}\sigma_{n+1}^{y}+J_{z}\sigma_{n}^{z}\sigma_{n+1}^{z})\label{eq:XYZ1}\end{equation}
where the $\sigma_{n}^{i}$ ($i=x,y,z$) are Pauli matrices acting
on the site $n$, the sum ranges over all sites $n$ of the chain
and the constants $J_{x}$, $J_{y}$ and $J_{z}$ take into account
the degree of anisotropy of the model. Without any loss of generality,
we can put $J_{x}=1$. In the following we will exploit the very well
known connection between the XYZ model and the eight vertex model
\cite{Baxter}. Indeed, as shown by Sutherland \cite{Sutherland},
when the coupling constants of the XYZ model are related to the parameters
$\Gamma$ and $\Delta$ of the eight vertex model %
\footnote{On the relation among $\Gamma,\Delta$ and the Boltzmann weights of
XYZ we adopt here the conventions of \cite{Baxter}. %
} at zero applied field by the relations\begin{equation}
J_{x}:J_{y}:J_{z}=1:\Gamma:\Delta\label{eq:XYZ4bis}\end{equation}
the row-to-row transfer matrix $T$ of the latter model commutes
with the Hamiltonian $\hat{H}$ of the former. It is customary \cite{Baxter}
to parametrize the constants $\Delta$ and $\Gamma$ in terms of elliptic
functions\begin{equation}
\Gamma=\frac{1+k^{2}\mbox{sn}^{2}(i\lambda)}{1-k\;\mbox{sn}^{2}(i\lambda)}\qquad,\qquad\Delta=-\frac{\mbox{cn}(i\lambda)\mbox{dn}(i\lambda)}{1-k\;\mbox{sn}^{2}(i\lambda)}\label{eq:XYZ3bis}\end{equation}
where $\mbox{sn}(x)$, $\mbox{cn}(x)$ and $\mbox{dn}(x)$ are Jacobian
elliptic functions while $\lambda$ and $k$ are parameters whose
domains are the following\begin{equation}
0<k<1\qquad,\qquad0<\lambda<I(k')\label{eq:XYZ4}\end{equation}
$I(k')$ being the complete elliptic integral of the first kind of
argument $k'=\sqrt{1-k^{2}}$. We recall that this parametrization
is particularly suitable to describe the anti-ferroelectric phase
of the eight vertex model, that is when $\Delta<-1$, although it can
be used in all cases by redefining the relations that hold between
$\Gamma$ and $\Delta$ and the Boltzmann weights.
Then, if relation (\ref{eq:XYZ4bis}) holds, in the thermodynamic
limit one can obtain \cite{Nishino1,Nishino2,Peschel1} the reduced
density matrix $\hat{\rho}_{1}$ of the XYZ model relative to a semi-infinite
chain as product of the four Corner Transfer Matrices $\hat{A}=\hat{C}$
and $\hat{B}=\hat{D}$ of the eight vertex model at zero field. The
starting point is the density matrix defined by $\hat{\rho}=\mid0\,\rangle\langle\,0\mid$,
where $\mid0\,\rangle\in\mathcal{H}$ is the ground state in the infinite
tensor product space $\mathcal{H}=\mathcal{H}_{1}\otimes\mathcal{H}_{2}$
of our quantum spin chain. The subscripts $_{1}$ and $_{2}$ refer
to the semi-infinite left and right chains. $\hat{\rho}_{1}$ is then
defined as the partial trace: \begin{equation}
\hat{\rho}_{1}={\rm Tr}_{\mathcal{H}_{2}}(\hat{\rho})\label{added}\end{equation}
More precisely one can write \begin{equation}
\hat{\rho}_{1}(\bar{\sigma},\bar{\sigma}')=(\hat{A}\hat{B}\hat{C}\hat{D})_{\bar{\sigma},\bar{\sigma}'}=(\hat{A}\hat{B})_{\bar{\sigma},\bar{\sigma}'}^{2}\label{eq:XYZ4a}\end{equation}
where $\bar{\sigma}$ and $\bar{\sigma}'$ are particular spin configurations
of the semi-infinite chain. \\
Generally speaking, the above quantity does not satisfy the constraint
${\rm Tr}\hat{\rho}_{1}=1$, so we must consider a normalized version
$\hat{\rho}_{1}'$. Let us define for future convenience the partition
function $\mathcal{Z}={\rm Tr}\hat{\rho}_{1}$. For the zero field
eight vertex model with fixed boundary conditions, as it is shown
in \cite{Baxter,Baxter1,Baxter2}, one can write down an explicit
form of the Corner Transfer Matrices in the thermodynamic limit \begin{eqnarray}
\hat{A}(u) & = & \hat{C}(u)=\left(\begin{array}{cc}
1 & 0\\
0 & s\end{array}\right)\otimes\left(\begin{array}{cc}
1 & 0\\
0 & s^{2}\end{array}\right)\otimes\left(\begin{array}{cc}
1 & 0\\
0 & s^{3}\end{array}\right)\otimes...\nonumber \\
\hat{B}(u) & = & \hat{D}(u)=\left(\begin{array}{cc}
1 & 0\\
0 & t\end{array}\right)\otimes\left(\begin{array}{cc}
1 & 0\\
0 & t^{2}\end{array}\right)\otimes\left(\begin{array}{cc}
1 & 0\\
0 & t^{3}\end{array}\right)\otimes...\label{eq:XYZ5}\end{eqnarray}
where $s$ and $t$ are functions of the parameters $\lambda$, $k$
and $u$ whose explicit expressions are given by $s=\exp\left[-\pi u/2I(k)\right]$
and $t=\exp\left[-\pi(\lambda-u)/2I(k)\right]$, $I(k)$ being an
elliptic integral of the first kind of modulus $k$. Thus the density
operator of formula (\ref{eq:XYZ4a}) is given by\begin{equation}
\hat{\rho}_{1}=\left(\begin{array}{cc}
1 & 0\\
0 & x\end{array}\right)\otimes\left(\begin{array}{cc}
1 & 0\\
0 & x^{2}\end{array}\right)\otimes\left(\begin{array}{cc}
1 & 0\\
0 & x^{3}\end{array}\right)\otimes...\label{eq:XYZ6}\end{equation}
where $x=(st)^{2}=\exp[-\pi\lambda/I(k)]$. We notice that $\hat{\rho}_{1}$
is a function of $\lambda$ and $k$ only and that, furthermore, it
can be rewritten as\begin{equation}
\hat{\rho}_{1}=(\hat{A}\hat{B})^{2}=e^{-\epsilon\hat{O}}\label{eq:XYZ7}\end{equation}
$\hat{O}$ is an operator with integer eigenvalues and\begin{equation}
\epsilon=\pi\lambda/I(k)\label{eq:eps}\end{equation}
The Von Neumann entropy $S$ can then be easily calculated according
to\begin{equation}
S=-\mbox{Tr}\hat{\rho}_{1}'\ln\hat{\rho}_{1}'=-\epsilon{\displaystyle \frac{\partial\ln\mathcal{Z}}{\partial\epsilon}}+\ln\mathcal{Z}\label{eq:XYZ8}\end{equation}
where, in our case, the partition function is given by\begin{equation}
\mathcal{Z}={\displaystyle \prod_{j=1}^{\infty}}(1+x^{j})={\displaystyle \prod_{j=1}^{\infty}}(1+e^{-\pi\lambda j/I(k)})\label{eq:XYZ9}\end{equation}
Thus we obtain an exact analytic expression for the entanglement
entropy of the XYZ model\begin{equation}
S_{XYZ}=\epsilon{\displaystyle \sum_{j=1}^{\infty}}{\displaystyle \frac{j}{(1+e^{j\epsilon})}}+{\displaystyle \sum_{j=1}^{\infty}}\ln(1+e^{-j\epsilon})\label{eq:XYZ10}\end{equation}
which is valid for generic values of $\lambda$ and $k$. This is
the main result of this section. When $\epsilon\ll1$, i.e. in the
scaling limit analogous to the one of \cite{Weston}, formula (\ref{eq:XYZ10})
can be approximated by its Euler-Maclaurin asymptotic expansion, yielding\begin{eqnarray}
S_{XYZ} & = & {\displaystyle \int_{0}^{\infty}}\;{\rm d}x\;\left({\displaystyle \frac{x\epsilon}{1+e^{x\epsilon}}}+\ln(1+e^{-x\epsilon})\right)-{\displaystyle \frac{\ln2}{2}}+O(\epsilon)\nonumber \\
\, & = & {\displaystyle \frac{\pi^{2}}{6}}{\displaystyle \frac{1}{\epsilon}}-{\displaystyle \frac{\ln2}{2}}+O(\epsilon)\label{eq:XYZ11}\end{eqnarray}
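Both integrals are elementary: substituting $u=x\epsilon$ and expanding in powers of $e^{-u}$, each reduces to the same alternating series,
\begin{equation}
\int_{0}^{\infty}{\rm d}x\;{\displaystyle \frac{x\epsilon}{1+e^{x\epsilon}}}=\int_{0}^{\infty}{\rm d}x\;\ln(1+e^{-x\epsilon})=\frac{1}{\epsilon}{\displaystyle \sum_{n=1}^{\infty}}\frac{(-1)^{n+1}}{n^{2}}=\frac{\pi^{2}}{12}\frac{1}{\epsilon}
\end{equation}
while the $-\ln2/2$ term is the standard Euler-Maclaurin boundary correction $-\frac{1}{2}f(0)$, with $f(0)=\ln2$ the value of the integrand at $x=0$.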
This will be used in the next subsection where we will check our
analytic results with some special known cases, namely the XXZ and
the XY chain.
\section{Some checks against known results}
Let us first consider the case $k=0$ (i.e. $\Gamma=1$) and the limit
$\lambda\to0^{+}$, which corresponds to the spin $1/2$ XXZ chain. In this
limit the eight vertex model reduces to the six vertex model. Let
us note that formula (\ref{eq:XYZ11}) coincides exactly with the
one proposed by Weston \cite{Weston} which was obtained in the study
of more general spin-$\kappa/2$ chains. In this limit the approximation
$\epsilon\ll1$ is still valid and we can use the result of equation
(\ref{eq:XYZ11}). From relation (\ref{eq:XYZ3bis}), it follows that
\begin{equation}
\lambda={\displaystyle \sqrt{2}}\sqrt{-\Delta-1}+O\left((-1-\Delta)^{3/2}\right)\label{eq:XYZ12}\end{equation}
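Moreover, at $k=0$ the complete elliptic integral reduces to $I(0)=\pi/2$, so that (an intermediate step we make explicit)
\begin{equation}
\epsilon=\frac{\pi\lambda}{I(0)}=2\lambda=2\sqrt{2}\sqrt{-\Delta-1}+O\left((-1-\Delta)^{3/2}\right)
\end{equation}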
Thus equation (\ref{eq:XYZ11}) gives\begin{equation}
S=\frac{\pi^{2}}{12\sqrt{2}}\,\frac{1}{\sqrt{-\Delta-1}}-\frac{\ln(2)}{2}+O\left((-1-\Delta)^{1/2}\right)\label{eq:XYZ13}\end{equation}
which can be written in a simpler form if we recall that, when $\epsilon\to0$
(i.e. $\Delta\rightarrow-1^{-}$), the correlation length is given
by \cite{Baxter} \begin{equation}
\ln\frac{\xi}{a}={\displaystyle {\frac{\pi^{2}}{\epsilon}}-2\ln(2)+O(e^{-\pi^{2}/\epsilon})}\end{equation}
where $a$ is the lattice spacing. Recalling that \begin{equation}
\epsilon={\displaystyle 2\sqrt{2}}\sqrt{-\Delta-1}+O\left((-1-\Delta)^{3/2}\right)\end{equation}
the expression for the entanglement entropy becomes \begin{equation}
S=\frac{1}{6}\ln\frac{\xi}{a}+U+O\left((-1-\Delta)^{1/2}\right)\end{equation}
where $U=-\ln(2)/6$. This last expression confirms the general theory
of equation (\ref{eq:intro1}) with $c=1$, which is exactly what
one should expect, the XXZ model along its critical line being a free
massless bosonic field theory with $c=1$.
As a second check, we consider the case $\Gamma=0$, which corresponds
to the XY chain. It is convenient now to describe the corresponding
eight vertex model by using Ising-like variables which are located
on the dual lattice \cite{Baxter}, thus obtaining an anisotropic
Ising lattice, rotated by $\pi/4$ with respect to the original one,
with interactions round the face with coupling constants $J,J'$,
as shown in figure \ref{fig:ising}. %
\begin{figure}
\begin{centering}
\includegraphics[scale=0.7]{doppioising}
\par\end{centering}
\caption{Decoupled anisotropic Ising lattices. Horizontal and vertical lines
belong to the original eight vertex model lattice, diagonal lines
belong to the dual Ising lattice.}
\label{fig:ising}
\end{figure}
In our case the Ising lattice decouples into two single sublattices
with interactions among nearest neighbors. Now \begin{equation}
\Delta=\sinh(2\beta J)\sinh(2\beta J')\equiv k_{I}^{-1}\label{eq:XYZ17}\end{equation}
so that, using the elliptic parametrization, one has\begin{equation}
\lambda=\frac{1}{2}I(k')\label{eq:XYZ18}\end{equation}
Thus $\epsilon$ of equation (\ref{eq:eps}) becomes \begin{equation}
\epsilon=\frac{\pi I(k_{I}')}{I(k_{I})}\label{eq:XYZ19}\end{equation}
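The last equality follows from the Landen transformation; we spell this step out, using the relation $k_{I}=2\sqrt{k}/(1+k)$, which can be checked from (\ref{eq:XYZ17}) and the parametrization of $\Delta$. The standard Landen identities
\begin{equation}
I(k_{I})=(1+k)I(k)\qquad,\qquad I(k_{I}')=\frac{1+k}{2}I(k')
\end{equation}
then give $\epsilon=\pi\lambda/I(k)=\pi I(k')/2I(k)=\pi I(k_{I}')/I(k_{I})$.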
Let us approach the critical line of the anisotropic Ising model
from the ferromagnetic phase, i.e. let us assume that $k_{I}\to1^{-}$.
In this case it is straightforward to write \begin{equation}
\epsilon=-{\displaystyle \frac{\pi^{2}}{\ln(1-k_{I})}}+O\left(\ln^{-2}(1-k_{I})\right)\label{eq:XYZ20}\end{equation}
so that the entanglement entropy is\begin{equation}
S=-\frac{1}{6}\ln(1-k_{I})+O\left(\ln^{-1}(1-k_{I})\right)\label{eq:XYZ21}\end{equation}
Since $\xi^{-1}=(1-k_{I})+O\left((1-k_{I})^{2}\right)$, we can easily
conclude that\begin{equation}
S=\frac{1}{6}\ln\frac{\xi}{a}+O\left(\ln^{-1}(1-k_{I})\right)\label{eq:XYZ22}\end{equation}
where again the leading term confirms the general result (\ref{eq:intro1}),
with $c=1$. This result is in agreement with what was found in previous
works \cite{Vidal,Cardy,Peschel3,Korepin} by means of different approaches.
\section{The sine-Gordon limit}
In \cite{Luther,McCoy} it has been shown that a particular scaling
limit of the XYZ model yields the sine-Gordon theory. In this section
we will use this fact to compute the exact entanglement entropy between
two semi-infinite intervals of a 1+1 dimensional sine-Gordon model.
In his article, Luther \cite{Luther} demonstrated that in the scaling
limit, where $a\to0$ while keeping the mass gap constant, the parameters
of the XYZ model and those of the sine-Gordon theory are connected
by the following relation (keeping $J_{x}=1$ from the beginning)\begin{equation}
M=8\pi\,\left({\displaystyle \frac{\sin\mu}{\mu}}\right)\,\left({\displaystyle \frac{l_{r}}{4}}\right)^{\pi/\mu}\label{eq:SG1}\end{equation}
where the parameter $\mu$ is defined as\begin{equation}
\mu\equiv\pi\left(1-{\displaystyle \frac{\beta^{2}}{8\pi}}\right)=\arccos\left({\displaystyle -J_{z}}\right)\label{eq:SG2}\end{equation}
Here $M$ is the sine-Gordon solitonic mass, and $l_{r}=l\, a^{-\mu/\pi}$,
with\begin{equation}
l^{2}={\displaystyle \frac{1-J_{y}^{2}}{1-J_{z}^{2}}}\label{eq:SG3}\end{equation}
These relations tell us how the coupling constant $J_{z}$ is connected
to the parameter $\beta$ of sine-Gordon, and how $J_{y}$ scales
when we take the scaling limit $a\to0$. It is clear from equation
(\ref{eq:SG3}) that in this limit $J_{y}\to1^{-}$. In the following
we work in the repulsive regime $4\pi<\beta^{2}<8\pi$ (which corresponds
to $0<\mu<\pi/2$ and $-1<J_{z}<0$). In this regime the mass gap
of the theory is the soliton mass $M$. Taking this limit we use the
following parametrization of the XYZ coupling constants\begin{equation}
\Gamma={\displaystyle \frac{J_{z}}{J_{y}}}\qquad,\qquad\Delta={\displaystyle \frac{1}{J_{y}}}\label{eq:SG4}\end{equation}
which amounts to a reparametrization of the Boltzmann weights of
XYZ suitable for the $|\Delta|\leq1$ disordered regime where we are
working now (see chapter 10 of \cite{Baxter} for details). As a consequence
of such reparametrization a minus sign appears in front of both equations
(\ref{eq:XYZ3bis}). Taking the sine-Gordon limit, $\lambda$ and
$k$ parametrizing $\Gamma$ and $\Delta$ must now satisfy the following
constraint\begin{equation}
\mbox{sn}^{2}(i\lambda)=-{\displaystyle \frac{\frac{J_{z}}{J_{y}}+1}{k^{2}-k\frac{J_{z}}{J_{y}}}}\label{eq:SG5}\end{equation}
Considering the parametrization of $\Delta$ and using the properties
of the Jacobian elliptic functions we can write\begin{equation}
\Delta^{2}=\frac{\mbox{cn}^{2}(i\lambda)\mbox{dn}^{2}(i\lambda)}{(1-k\;\mbox{sn}^{2}(i\lambda))^{2}}={\displaystyle \frac{\left(2k^{2}-k\frac{J_{z}}{J_{y}}+k^{2}\frac{J_{z}}{J_{y}}\right)\left(k^{2}-k\frac{J_{z}}{J_{y}}+\frac{J_{z}}{J_{y}}+1\right)}{(k^{2}+k)^{2}}}\label{eq:SG6}\end{equation}
Expanding around $k\to1^{-}$ and using $\Delta^{2}=1/J_{y}^{2}$ on both sides of the equation, we find\begin{equation}
\Delta^{2}=1+\frac{1}{4}(1-J_{z}^{2})(k-1)^{2}+O(k-1)^{3}\label{eq:SG7}\end{equation}
Using equation (\ref{eq:SG1}) we obtain\begin{equation}
l^{2}=l_{r}^{2}a^{2\mu/\pi}=4^{2-3\mu/\pi}\left({\displaystyle \frac{M\mu a}{\pi\sin\mu}}\right)^{2\mu/\pi}\label{eq:SG8}\end{equation}
where $\mu$ is completely fixed by choosing a particular value of
$J_{z}$. Now using the definitions (\ref{eq:SG3}) and (\ref{eq:SG4}), which give $J_{y}^{2}=1-l^{2}(1-J_{z}^{2})$ and hence $\Delta^{2}=1/J_{y}^{2}=1+l^{2}(1-J_{z}^{2})+O(l^{4})$, we find\begin{equation}
\Delta^{2}=1+(1-J_{z}^{2})4^{2-3\mu/\pi}\left({\displaystyle \frac{M\mu a}{\pi\sin\mu}}\right)^{2\mu/\pi}+O(a^{4\mu/\pi})\label{eq:SG9}\end{equation}
which is valid when $a\to0$. Comparing equation (\ref{eq:SG7})
with (\ref{eq:SG9}) we can identify in which way $k$ scales to $1^{-}$\begin{equation}
k=1-2^{3(1-\mu/\pi)}\left({\displaystyle \frac{M\mu a}{\pi\sin\mu}}\right)^{\mu/\pi}+O(a^{2\mu/\pi})\label{eq:SG10}\end{equation}
Remembering the constraint (\ref{eq:SG5}) and using the previous
expression for $k$ we have\begin{equation}
\mbox{sn}^{2}(i\lambda)={\displaystyle \frac{-J_{z}-1}{1-J_{z}}+O(a^{\mu/\pi})}\label{eq:SG11}\end{equation}
When $k\to1$ the elliptic function sn reduces to a hyperbolic tangent,
thus we obtain\begin{equation}
\tan^{2}\lambda={\displaystyle \frac{1+J_{z}}{1-J_{z}}+O(a^{\mu/\pi})\quad\longrightarrow\quad\lambda=\arctan\sqrt{{\displaystyle \frac{1+J_{z}}{1-J_{z}}}}+O(a^{\mu/\pi})}\label{eq:SG12}\end{equation}
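Using $J_{z}=-\cos\mu$ from Eq.(\ref{eq:SG2}), this can be simplified further; we make the intermediate step explicit:
\begin{equation}
\tan^{2}\lambda=\frac{1-\cos\mu}{1+\cos\mu}=\tan^{2}\frac{\mu}{2}\quad\Longrightarrow\quad\lambda=\frac{\mu}{2}+O(a^{\mu/\pi})
\end{equation}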
Now we can evaluate the expression (\ref{eq:XYZ10}) in this limit.
Using the following asymptotic behaviour of the elliptic integral
$I(x)$\begin{equation}
I(x)\approx-\frac{1}{2}\ln(1-x)+\frac{3}{2}\ln2+O(1-x),\qquad x\approx1^{-}\label{eq:SG13}\end{equation}
along with the approximation (\ref{eq:XYZ11}), we can write the
exact entanglement entropy of a bipartite XYZ model in the sine-Gordon
limit\begin{equation}
S_{sG}=-{\displaystyle \frac{\pi}{12}\frac{\ln(1-k)-3\ln2}{\arctan\sqrt{{\displaystyle \frac{1+J_{z}}{1-J_{z}}}}}-{\displaystyle \frac{\ln2}{2}}+O(1/\ln(a))}\label{eq:SG14}\end{equation}
The leading correction to this expression comes from the $O(\epsilon)$
term of equation (\ref{eq:XYZ11}). The constant $J_{z}$ is connected
to $\beta$ by\begin{equation}
J_{z}=-\cos\pi\left(1-\frac{\beta^{2}}{8\pi}\right)\label{eq:SG15}\end{equation}
thus using this property and the scaling expression (\ref{eq:SG11})
we can write down the entanglement entropy as\begin{equation}
S_{sG}={\displaystyle \frac{1}{6}\ln\left(\frac{1}{Ma}\right)+\frac{1}{6}\ln\left(\frac{\sin\left[\pi\left(1-\frac{\beta^{2}}{8\pi}\right)\right]}{\left(1-\frac{\beta^{2}}{8\pi}\right)}\right)+O(1/\ln(a))}\label{eq:SG16}\end{equation}
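For the reader's convenience we note how the $\ln2$ terms cancel in passing from (\ref{eq:SG14}) to (\ref{eq:SG16}): Eq.(\ref{eq:SG10}) gives $\ln(1-k)=3\left(1-\frac{\mu}{\pi}\right)\ln2+\frac{\mu}{\pi}\ln\frac{M\mu a}{\pi\sin\mu}+O(a^{\mu/\pi})$, so that, using $\lambda=\mu/2$,
\begin{equation}
S_{sG}=-\frac{\pi}{6\mu}\left[\ln(1-k)-3\ln2\right]-\frac{\ln2}{2}=-\frac{1}{6}\ln\frac{M\mu a}{\pi\sin\mu}+\frac{\ln2}{2}-\frac{\ln2}{2}+O(1/\ln(a))
\end{equation}
which is precisely (\ref{eq:SG16}).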
This result confirms the general theory due to \cite{Cardy,Doyon},
in the limit where the system is bipartite in two infinite intervals,
with the central charge equal to 1, as it should be, since the sine-Gordon
model can be considered a perturbation of a $c=1$ conformal field
theory (described by a free massless boson compactified on a circle
of radius $\sqrt{\pi}/\beta$) by a relevant operator of (left) conformal
dimension $\beta^{2}/8\pi$. We can write \begin{equation}
S_{sG}\approx{\displaystyle \frac{1}{6}\ln\left(\frac{1}{Ma}\right)+U(\beta)\qquad a\to0}\label{eq:SG17}\end{equation}
where the constant term $U(\beta)$ takes the value \begin{equation}
U(\beta)={\displaystyle \frac{1}{6}\ln\left(\frac{\sin\left[\pi\left(1-\frac{\beta^{2}}{8\pi}\right)\right]}{\left(1-\frac{\beta^{2}}{8\pi}\right)}\right)}\label{eq:SG18}\end{equation}
At $\beta^{2}=4\pi$, when the sine-Gordon model becomes the free
Dirac fermion theory, it assumes the value $U(\sqrt{4\pi})=\frac{1}{6}\log2=0.11552453...$,
while at $\beta^{2}=8\pi$, where the theory becomes a relevant perturbation
of the WZW conformal model of level 1 by its operator of left dimension
$\frac{1}{4}$, it becomes $U(\sqrt{8\pi})=\frac{1}{6}\log\pi=0.19078814...$.\\
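Both values follow directly from Eq.(\ref{eq:SG18}), which can be rewritten as $U(\beta)=\frac{1}{6}\ln\left(\pi\sin\mu/\mu\right)$ with $\mu=\pi\left(1-\frac{\beta^{2}}{8\pi}\right)$: at $\beta^{2}=4\pi$ one has $\mu=\pi/2$, so the argument of the logarithm is $\pi\cdot1/(\pi/2)=2$, while for $\beta^{2}\to8\pi$ one has $\mu\to0$ and $\sin\mu/\mu\to1$, so the argument tends to $\pi$.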
We notice that formula (\ref{eq:SG16}) yields the exact value of
the overall constant $U(\beta)$, since it has been derived from equation
(\ref{eq:XYZ11}) which is exact up to terms $O(\epsilon)$. As mentioned
in the introduction, $U$ contains a contribution from the Affleck-Ludwig
boundary entropy that we are not able to determine at this stage.
It would be extremely interesting to be able to perform the same calculation
in other regularization schemes of the sine-Gordon model. Also, consideration
of boundary conditions other than the ``vacuum'' one
considered here could shed light, in the spirit of \cite{Weston},
on the problem of clearly relating this constant to the boundary entropy.
In this respect the considerations made in Ref. \cite{Olalla} are
very important. Clearly this question deserves further investigation
that we plan for the future.
\section{Conclusions and outlooks}
We have derived an exact formula for the entanglement entropy of an infinite bipartition of the XYZ model in the thermodynamic limit, by which, as a test bench, we have re-obtained some well-known results about the entanglement entropy of two integrable systems, the XXZ model and the XY chain.
In addition we have obtained the entanglement entropy of the (repulsive)
sine-Gordon scaling limit of the XYZ model, which, on one side, confirms
once more the general theory due to \cite{Cardy,Doyon} and, on the
other side, yields for the first time an exact expression for such a quantity in the case of an interacting massive field theory. Since we are dealing with a non-free theory, an investigation of the connections
between this expression, Affleck-Ludwig boundary entropy \cite{Affleck}
and one point functions of sine-Gordon fields \cite{Zam} is non-trivial
and deserves further work.
Also it would be important to compare our results with those obtained
for massive 1+1 dimensional theories by the form factor approach of
\cite{Doyon,Doyon2} and generalized to any massive (even non-integrable)
theory in \cite{Doyon3}. This requires implementing our technique
for a subsystem $A$ consisting of a finite interval, a situation
that we plan to study in the future.
Finally, let us remark that another issue addressed in \cite{Cardy}
is the dependence of entanglement entropy on finite size effects.
To investigate this aspect one should implement finite size effects
into the XYZ model, possibly within the Bethe ansatz approach, and
then rescale to the continuum. A good question to ask is how far this
approach can be related to the definition of the sine-Gordon model on a cylinder by rescaling lattice models, both in the XYZ approach \cite{Davide,Davide2} and in the so-called light-cone approach of Destri and De Vega \cite{DdV87}.
How far the finite size effects on entanglement entropy may be encoded
in structures generalizing the non-linear integral equations governing
e.g. the finite size rescaling of the central charge $c$ (see e.g.
\cite{Ravanini} and references therein) and the Affleck-Ludwig boundary
entropy $g$ \cite{Dorey,Lishman}, is an intriguing issue, surely
deserving further investigation.
All these lines of developments can shed new light on the comprehension
of the entanglement problem in quantum field theory.
\section*{Acknowledgments}
We wish to thank Francesco Buccheri, Andrea Cappelli, Filippo Colomo,
Marcello Dalmonte, Cristian Degli Esposti Boschi, Benjamin Doyon,
Davide Fioravanti, Federica Grestini and Fabio Ortolani for useful
and very pleasant discussions. This work was supported in part by
two INFN COM4 grants (FI11 and NA41) and by the Italian Ministry of
Education, University and Research grant PRIN-2007JHLPEZ.
\section{Introduction}\label{sec:intro}
A major goal of computable structure theory is to understand the relationship between structure and computability. For example, on the computability side, a computable structure (model) $\mathcal{A}$ is said to be \emph{computably categorical} if any computable structure that is isomorphic to $\mathcal{A}$ is
computably isomorphic to $\mathcal{A}$. On the structural side, Goncharov and Dzgoev, and independently J. Remmel, showed that a computable linear order is computably categorical if and only if it has only finitely many successor pairs\ \cite{Dzgoev.Goncharov.1980}, \cite{Remmel.1981}. Thus, the structure of a computable linear order determines whether it is computably categorical.
The interaction of computable categoricity and other computability notions with various classes of algebraic and combinatorial structures has been intensively studied.
However, until recently, analytic structures such as metric spaces and Banach spaces have been ignored in this context. Thus, a research program has lately emerged to apply computable structure theory to analytic spaces. One obvious obstacle is that these spaces are generally uncountable. However, our understanding of computability on analytic spaces has advanced considerably in the last few decades, and this obstacle should no longer be seen as an impediment.
Here we examine the computable structure theory of $L^p$ spaces; in particular their computable categoricity. It is generally agreed that computability can only be studied on separable spaces (at least with our current understanding of computation). If an $L^p$ space is separable, then its underlying measure space is separable.
Thus, we restrict our attention to $L^p$ spaces of separable measure spaces. The computably categorical $\ell^p$ spaces have been classified \cite{McNicholl.2015}, \cite{McNicholl.2016},\cite{McNicholl.2016.1}. So, here we will focus on non-atomic measure spaces. Our main theorem is the following.
\begin{theorem}\label{thm:main}
If $\Omega$ is a nonzero, non-atomic, and separable measure space, and if $p \geq 1$ is a computable real, then every computable presentation of $L^p(\Omega)$ is computably isometrically isomorphic to the standard computable presentation of $L^p[0,1]$.
\end{theorem}
Note that when we say that a measure space $\Omega = (X, \mathcal{M}, \mu)$ is nonzero, we mean that there is a set $A \in \mathcal{M}$ so that $0 < \mu(A) < \infty$ (so that $L^p(\Omega)$ is nonzero).
There are several corollaries.
\begin{corollary}\label{cor:Lp[0,1]}
If $\Omega$ is a non-atomic and separable measure space, and if $p \geq 1$ is a computable real, then $L^p(\Omega)$ is computably categorical. In particular, for every computable real $p \geq 1$, $L^p[0,1]$ is computably categorical.
\end{corollary}
\begin{corollary}\label{cor:Lp.iso}
Let $p$ be a computable real so that $p \geq 1$, and suppose $\Omega_1$, $\Omega_2$ are measure spaces that are nonzero, non-atomic, and separable. Then, each computable presentation of
$L^p(\Omega_1)$ is computably isometrically isomorphic to each computable presentation of $L^p(\Omega_2)$.
\end{corollary}
\begin{corollary}\label{cor:msr}
If $\Omega^\#$ is a computable presentation of a nonzero and non-atomic measure space $\Omega$,
and if $p \geq 1$ is a computable real, then the induced computable presentation of $L^p(\Omega)$ is computably isometrically isomorphic to $L^p[0,1]$.
\end{corollary}
Previously, the second author showed that $\ell^p_n$ is computably categorical when $p \geq 1$ is a computable real. This provided the first non-trivial example of a computably categorical Banach space that is not a Hilbert space. Corollary \ref{cor:Lp[0,1]} provides the first example of a computably presentable and infinite-dimensional Banach space that is computably categorical even though it is not a Hilbert space.
Our main theorem can be seen as an effective version of a result of Carath\'eodory: if $\Omega$ is a measure space that is nonzero, non-atomic, and separable, then $L^p(\Omega)$ is isometrically isomorphic to $L^p[0,1]$ \cite{Cembranos.Mendoza.1997}.
However, our proof is not a mere effectivization of a classical proof. For, the classical proofs of Carath\'eodory's result all begin with a sequence of transformations on the underlying measure space. Specifically, it is first shown that there is a $\sigma$-finite measure space $\Omega_1$ so that $L^p(\Omega)$ is isometrically isomorphic to
$L^p(\Omega_1)$. It is then shown that there is a probability space $\Omega_2$ so that $L^p(\Omega_2)$ is isometrically isomorphic to $L^p(\Omega_1)$. Finally, $L^p(\Omega_2)$ is shown to be isometrically isomorphic to $L^p[0,1]$. This approach is the natural course to take in the classical setting wherein one has full access to the $L^p$ space and to the underlying measure space. But, in the world of effective mathematics, a computable presentation of $L^p(\Omega)$ does not necessarily yield a computable presentation of the underlying measure space; i.e. it allows us to `see' the vectors but not necessarily the measurable sets. This point will be made precise by way of an example in Section \ref{sec:comp.msr.spaces}. In particular, Theorem \ref{thm:main} is a stronger result than Corollary \ref{cor:msr}. Thus our demonstration of Theorem \ref{thm:main} yields a new proof of Carath\'eodory's result that does not make any transformations on the underlying measure space. Our main tool for doing this is the concept of a \emph{disintegration} of an $L^p(\Omega)$ space, which was previously used for $\ell^p$ spaces by McNicholl but which we introduce here for arbitrary $L^p$ spaces.
The paper is organized as follows. Section \ref{sec:background} presents background and preliminaries from
analysis and computability theory; in particular it gives a very brief survey of computable structure theory in the countable setting and a summary of prior results on analytic computable structure theory. More expansive surveys of classical computable structure theory can be found in \cite{Fokina.Harizanov.Melnikov.2014} and \cite{Harizanov.1998}. Section \ref{sec:overview} gives an overview of the proof of Theorem \ref{thm:main}.
In Section \ref{sec:classical}, we develop precursory new material from classical analysis; in particular disintegrations. Section \ref{sec:computable} contains our new results on computable analysis and forms the bridge from the classical material in Section \ref{sec:classical} to Theorem \ref{thm:main}. Section \ref{sec:comparison} contrasts our methods with those used for $\ell^p$ spaces. Section \ref{sec:rcc} explores relative computable categoricity of $L^p$ spaces. Results on computable measure spaces and related $L^p$ spaces are expounded in Section \ref{sec:comp.msr.spaces}.
Section \ref{sec:conclusion} gives concluding remarks.
\section{Background and preliminaries}\label{sec:background}
We first summarize the preparatory material we need from classical mathematics; we then briefly survey the background of classical (i.e. countable) computable structure theory and prior results in analytic computable structure theory.
\subsection{Classical world}\label{subsec:back.classical}
We summarize in this section the preparatory material from both classical and computable mathematics that is relevant to this paper. We begin with a few preliminaries from discrete mathematics. We then cover preliminaries from
measure theory and Banach spaces (in particular, $L^p$ spaces).
\subsubsection{Discrete preliminaries}
When $A$ is a finite set, we denote its cardinality by $\# A$.
When $\mathbb{P} = (P, \leq)$ is a partial order and $a,b \in P$, we write $a | b$ if $a,b$ are incomparable; i.e. if $a \not \leq b$ and $b \not \leq a$. A lower semilattice $(\Lambda, \leq)$ is \emph{simple} if $\mathbf{0}$ is the meet of any two incomparable elements of $\Lambda$. A lower semilattice $\Lambda'$ is a \emph{proper extension} of a lower semilattice $\Lambda$ if
$\Lambda \subset \Lambda'$ and for every $u \in \Lambda' - \Lambda$ there is no nonzero $v \in \Lambda$ so that $u > v$.
Suppose $\mathbb{P}_0 = (P_0, \leq_0)$ and $\mathbb{P}_1 = (P_1, \leq_1)$ are partial orders.
A map $f : P_0 \rightarrow P_1$ is \emph{monotone} if $f(a) \leq_1 f(b)$ whenever $a \leq_0 b$ and is \emph{antitone} if $f(b) \leq_1 f(a)$ whenever $a \leq_0 b$ \cite{Chajda.Halavs.Radomir.Kuhr.2007}.
$\mathbb{N}$ denotes the set of all nonnegative integers. $\mathbb{N}^*$ denotes the set of all finite sequences
of nonnegative integers. (We regard a sequence as a map whose domain is an initial segment of $\mathbb{N}$.) These sequences are referred to as \emph{nodes} and the empty sequence $\lambda$ is referred to as the \emph{root node}. When $\nu \in \mathbb{N}^*$, $|\nu|$ denotes the length of $\nu$ (i.e. the cardinality of the domain of $\nu$). When $\nu, \nu' \in \mathbb{N}^*$, we write
$\nu \subset \nu'$ if $\nu$ prefixes $\nu'$; in this case we also say that $\nu$ is an \emph{ancestor} of $\nu'$ and that $\nu'$ is a \emph{descendant} of $\nu$. Thus, $(\mathbb{N}^*, \subseteq)$ is a partial order. When $\nu, \nu' \in \mathbb{N}^*$, write
$\nu^\frown\nu'$ for the concatenation of $\nu$ with $\nu'$. We say that $\nu'$ is a \emph{child} of $\nu$ if
$\nu' = \nu^\frown(n)$ for some $n \in \mathbb{N}$ in which case we also say that $\nu$ is the \emph{parent} of $\nu'$.
If $\nu$ is a node, then $\nu^+$ denotes the set of all children of $\nu$ and if $\nu$ is a non-root node then
$\nu^-$ denotes the parent of $\nu$. We denote the lexicographic order of $\mathbb{N}^*$ by $<_{\rm lex}$.
If $S$ is a set of nodes, then $\nu \in S$ is \emph{terminal} if $\nu^+ \cap S = \emptyset$.
By a \emph{tree} we mean a set $S$ of nodes so that each ancestor of a node of $S$ also belongs to $S$; i.e. $S$ is closed under prefixes.
A set $S$ of nodes is an \emph{orchard} if it contains all of the non-root ancestors of each of
its nodes and does not contain the root node; equivalently, if $\emptyset \not \in S$ and $S \cup \{\emptyset\}$ is a tree. Note that if $(\Lambda, \leq)$ is a finite simple lower semilattice, then $(\Lambda - \{\mathbf{0}\}, \geq)$ is isomorphic to an orchard.
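For instance, $S = \{(0), (1), (0,0)\}$ is an orchard: it omits the root node, and the only non-root ancestor of one of its nodes, namely $(0) \subset (0,0)$, belongs to $S$; on the other hand, $S \cup \{\emptyset\}$ is a tree.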
\subsubsection{Measure-theoretic preliminaries}
We begin by summarizing relevant facts about separable measure spaces and atoms.
Suppose $\Omega = (X, \mathcal{M}, \mu)$ is a measure space. A collection $\mathcal{D} \subseteq \mathcal{M}$ of sets whose measures are all finite is \emph{dense in $\Omega$} if for every $A \in \mathcal{M}$ with finite measure and every $\epsilon > 0$ there exists $D \in \mathcal{D}$ so that $\mu(D \triangle A) < \epsilon$. A measure space is \emph{separable} if it has a countable dense set of measurable sets.
A measurable set $A$ of a measure space $\Omega$ is an \emph{atom} of $\Omega$ if $\mu(A) >0$ and if there is no measurable subset $B$ of $A$ so that $0 < \mu(B) < \mu(A)$. If $\Omega$ has no atoms, it is said to be \emph{non-atomic}. The following is due to Sierpinski \cite{Sierpinski.1922}.
\begin{theorem}\label{thm:atomic}
Suppose $\Omega$ is a non-atomic measure space.
Then, whenever $A$ is a measurable set and $0 < r < \mu(A) < \infty$, there is a
measurable subset $B$ of $A$ so that $\mu(B) = r$.
\end{theorem}
We will also use the following observation.
\begin{proposition}\label{prop:abs.cont}
Every finite measure that is absolutely continuous with respect to a non-atomic measure is itself non-atomic.
\end{proposition}
\begin{proof}
Suppose $\Omega = (X, \mathcal{M}, \mu)$ is a non-atomic measure space, and let $\nu$ be a finite measure that is absolutely continuous with respect to $\mu$.
We first claim that whenever $A$ is a measurable set so that $\nu(A) > 0$, there is a measurable subset $B$ of $A$ so that $\mu(B) < \infty$ and $\nu(B) > 0$. For, let $f = d\nu/d\mu$. Since $\nu$ is finite, $f$ is integrable. Since $\nu(A) > 0$, there is a simple function $s$ so that $0 \leq s \leq f$ and $\int_A s\ d\mu > 0$. Let
$B_a = s^{-1}[\{a\}] \cap A$ for each real number $a$.
Since $0 < \int_A s\ d\mu < \infty$, it follows that $0 < \mu(B_a) < \infty$ for some positive real $a$.
Then,
\begin{eqnarray*}
\nu(B_a) & = & \int_{B_a} f\ d\mu\\
& \geq & \int_{B_a} s\ d\mu\\
& = & a \mu(B_a) > 0.
\end{eqnarray*}
Now, let $A$ be a measurable set so that $\nu(A) > 0$. We show that $A$ is not an atom of $\nu$.
Choose a measurable subset $B$ of $A$ so that $\nu(B) > 0$ and $\mu(B) < \infty$. It suffices to show that
$B$ is not an atom. By way of contradiction, suppose it is. We define a descending sequence of measurable subsets of $B$ as follows. Set $B_0 = B$.
Suppose $B_n$ has been defined, $\mu(B_n) > 0$, $\nu(B_n) = \nu(B)$, and $\mu(B_n) = 2^{-n} \mu(B)$. Since $\Omega$ is non-atomic,
by Theorem \ref{thm:atomic}, there is a measurable subset $C$ of $B_n$ so that $\mu(C) = \frac{1}{2}\mu(B_n)$. Let $D = B_n - C$.
Since $B_n \subseteq B$, $B_n$ is an atom of $\nu$.
Thus, either $\nu(C)$ or $\nu(D)$ is equal to $\nu(B_n)$; without loss of generality, assume $\nu(C) = \nu(B_n)$. Set $B_{n+1} = C$.
Let $B' = \bigcap_n B_n$. Thus, $\mu(B') = 0$. On the other hand, $\nu(B') = \lim_n \nu(B_n) = \nu(B) \neq 0$, a contradiction since $\nu$ is absolutely continuous with respect to $\mu$. Thus, $B$ is not an atom of $\nu$.
\end{proof}
We identify measurable sets whose symmetric difference is null. When we refer to a collection of measurable sets as a lower semilattice, we mean it is a lower semilattice under the partial ordering of inclusion modulo sets of measure $0$.
\subsubsection{Banach space preliminaries}\label{subsec:banach}
We first cover material relevant to Banach spaces in general and then that which is specific to $L^p$ spaces.
Suppose $\mathcal{B}$ is a Banach space. When $X \subseteq \mathcal{B}$, we write
$\mathcal{L}(X)$ for the linear span of $X$ and
$\langle X \rangle$ for the closed linear span of $X$; i.e. $\langle X \rangle = \overline{\mathcal{L}(X)}$.
When $K$ is a subfield of $\mathbb{C}$, write $\mathcal{L}_K(X)$ for the linear span of $X$ over $K$; i.e.
\[
\mathcal{L}_K(X) = \{ \sum_{j = 0}^M \alpha_j v_j\ :\ M \in \mathbb{N}\ \wedge\ \alpha_0, \ldots, \alpha_M \in K\ \wedge\ v_0, \ldots, v_M \in X\}.
\]
Note that the linear span of $X$ is dense in $\mathcal{B}$ if and only if the linear span of $X$ over $\mathbb{Q}(i)$ is dense in $\mathcal{B}$.
When $S$ is a finite set, we let $\mathcal{B}^S$ denote the
set of all maps from $S$ into $\mathcal{B}$. When $f \in \mathcal{B}^S$, we write
$\norm{f}_S$ for $\max\{\norm{f(t)}\ :\ t \in S\}$. It follows that $\norm{\ }_S$ is a norm on $\mathcal{B}^S$
under which $\mathcal{B}^S$ is a Banach space.
Computability on Banach spaces will be defined in terms of structures and presentations. Although these
notions may be germane only to computability theory, they are nevertheless purely classical objects so we cover them and related concepts here.
A \emph{structure} on $\mathcal{B}$ is a map $D : \mathbb{N} \rightarrow \mathcal{B}$ so that $\mathcal{B} = \langle \operatorname{ran}(D) \rangle$. If $D$ is a structure on $\mathcal{B}$, then we call the pair $(\mathcal{B}, D)$ a \emph{presentation} of $\mathcal{B}$. Clearly, a Banach space has a presentation if and only if it is separable.
Among all presentations of a Banach space $\mathcal{B}$, one may
be designated as \emph{standard}; in this case, we will identify $\mathcal{B}$ with its standard presentation.
In particular, if $p \geq 1$ is a computable real, and if $D$ is a standard map of $\mathbb{N}$ onto the set of
characteristic functions of dyadic subintervals of $[0,1]$, then $(L^p[0,1], D)$ is the standard presentation of $L^p[0,1]$. If $R(n) = 1$ for all $n \in \mathbb{N}$, then $(\mathbb{C}, R)$ is the standard presentation of $\mathbb{C}$ as a Banach space over itself.
Each presentation of a Banach space induces corresponding classes of rational vectors and rational open balls as follows. Suppose $\mathcal{B}^\# = (\mathcal{B}, D)$ is
a presentation of $\mathcal{B}$. Each vector in the linear span of $\operatorname{ran}(D)$ over $\mathbb{Q}(i)$ will be called a
\emph{rational vector of $\mathcal{B}^\#$}. An \emph{open rational ball of $\mathcal{B}^\#$} is an open ball whose center is a rational vector of $\mathcal{B}^\#$ and whose radius is a positive rational number.
A presentation of $\mathcal{B}$ induces a corresponding presentation of $\mathcal{B}^S$ as follows. Suppose $\mathcal{B}^\# = (\mathcal{B}, D)$ is a presentation of $\mathcal{B}$. Let $S$ be a finite set, and let $D^S$ denote a standard map of $\mathbb{N}$ onto
the set of all maps from $S$ into $\operatorname{ran}(D)$. It follows that $(\mathcal{B}^S)^\# := (\mathcal{B}^S, D^S)$
is a presentation of $\mathcal{B}^S$.
We now cover preliminaries of $L^p$ spaces. Fix a measure space $\Omega = (X, \mathcal{M}, \mu)$ and a real $p \geq 1$. When $f \in L^p(\Omega)$,
we write $\operatorname{supp}(f)$ for the support of $f$; i.e. the set of all $t \in X$ so that $f(t) \neq 0$. Note that since we identify measurable sets whose symmetric difference is null, $\operatorname{supp}(f)$ is well-defined.
We say that vectors $f,g \in L^p(\Omega)$ are \emph{disjointly supported} if the intersection of their supports is null; i.e. if $f(t)g(t) = 0$ for almost every $t \in X$.
If $f,g \in L^p(\Omega)$, then we write $f \preceq g$ if $f(t) = g(t)$ for almost all $t \in X$ for which $f(t) \neq 0$; in this case we say that $f$ is a \emph{subvector} of $g$. Note that $f$ is a subvector of $g$ if and only if $g - f$ and $f$ are disjointly supported. Note also that $f \preceq g$ if and only if
$f = g \cdot \chi_A$ for some measurable set $A$.
When we refer to a collection $\mathcal{D} \subseteq L^p(\Omega)$ as a
lower semilattice, we mean it is a lower semilattice with respect to the subvector ordering.
Suppose $S$ is a set of nodes and $\phi : S \rightarrow L^p(\Omega)$. We say that $\phi$ is \emph{separating} if it maps incomparable nodes to disjointly supported vectors.
We now formulate a numerical test for disjointness of support. Suppose $p \geq 1$ and $p \neq 2$. When $z, w \in \mathbb{C}$ let:
\[
\sigma(z, w) = |4 - 2\sqrt{2}^p |^{-1} |2 (|z|^p + |w|^p) - (|z - w|^p + |z + w|^p)|
\]
We will use the following result from \cite{McNicholl.2016} which extends a theorem of J. Lamperti \cite{Lamperti.1958}.
\begin{theorem}\label{thm:lamperti}
Suppose $p \geq 1$ and $p \neq 2$.
\begin{enumerate}
\item For all $z,w \in \mathbb{C}$, \label{thm:lamperti::itm:ineq}
\[
\min\{|z|^p, |w|^p\} \leq \sigma(z,w).
\]
\item Furthermore, if $p < 2$, then \label{thm:lamperti::itm:sign}
\[
2|z|^p + 2|w|^p - |z+w|^p - |z-w|^p \geq 0
\]
and if $2 < p$ then
\[
2|z|^p + 2|w|^p - |z+w|^p - |z-w|^p \leq 0.
\]
\end{enumerate}
\end{theorem}
Again, suppose $p \geq 1$ and $p \neq 2$. Let $\Omega = (X, \mathcal{M}, \mu)$ be a measure space.
When $f,g \in L^p(\Omega)$, let
\[
\sigma(f,g) = |4 - 2\sqrt{2}^p |^{-1} \left| 2 (\norm{f}_p^p + \norm{g}_p^p) - (\norm{f - g}_p^p + \norm{f + g}_p^p) \right|.
\]
Since, by Theorem \ref{thm:lamperti} part \ref{thm:lamperti::itm:sign}, the integrand $2|f(t)|^p + 2|g(t)|^p - |f(t)+g(t)|^p - |f(t)-g(t)|^p$ does not change sign on $X$, the absolute value may be passed inside the integral; it follows that
\[
\sigma(f,g) = \int_X \sigma(f(t), g(t))\ d\mu(t).
\]
It then follows that $f,g$ are disjointly supported if and only if $\sigma(f,g) = 0$.
When $S$ is a finite set of nodes and $\psi : S \rightarrow L^p(\Omega)$, set
\[
\sigma(\psi) = \sum_{\nu | \nu'} \sigma(\psi(\nu), \psi(\nu')) + \sum_{\nu' \supset \nu} \sigma(\psi(\nu') - \psi(\nu), \psi(\nu'))
\]
where $\nu$, $\nu'$ range over $S$. Theorem \ref{thm:lamperti} now yields the following numerical test to see if a map is separating and antitone.
\begin{corollary}\label{cor:sigma}
Suppose $1 \leq p < \infty$ and $p \neq 2$. Suppose $S$ is a finite set of nodes and $\phi : S \rightarrow L^p(\Omega)$. Then, $\phi$ is a separating antitone map if and only if $\sigma(\phi) = 0$.
\end{corollary}
Now, suppose $S$ is a tree. Call a map $\phi : S \rightarrow L^p(\Omega)$ \emph{summative} if
\[
\phi(\nu) = \sum_{\nu' \in \nu^+ \cap S} \phi(\nu')
\]
whenever $\nu$ is a nonterminal node of $S$. A \emph{disintegration} is a summative, separating, and injective antitone map $\phi : S \rightarrow L^p(\Omega) - \{\mathbf{0}\}$ with the additional property that the linear span of its range is dense in $L^p(\Omega)$.
We define a \emph{partial disintegration of $L^p(\Omega)$} to be a separating and injective antitone map
of a finite orchard into $L^p(\Omega) - \{\mathbf{0}\}$.
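For example, let $S$ be the tree of all finite sequences of $0$'s and $1$'s, and for $\nu \in S$ let $\phi(\nu) = \chi_{I_\nu}$, where $I_\nu$ denotes the dyadic subinterval of $[0,1]$ of length $2^{-|\nu|}$ with left endpoint $\sum_{j < |\nu|} \nu(j) 2^{-(j+1)}$ (our notation for this illustration). Then, $\phi$ is summative since $\chi_{I_\nu} = \chi_{I_{\nu^\frown(0)}} + \chi_{I_{\nu^\frown(1)}}$, separating since incomparable nodes index intervals that overlap in at most one point, antitone and injective, and the linear span of its range is dense in $L^p[0,1]$; thus $\phi$ is an interval-valued disintegration of $L^p[0,1]$. Restricting $\phi$ to a finite orchard of such nodes yields a partial disintegration.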
Now, suppose $\Omega_1$ and $\Omega_2$ are measure spaces.
Suppose $\phi_1$, $\phi_2$ are antitone maps of $L^p(\Omega_1)$ and $L^p(\Omega_2)$ respectively. An \emph{isomorphism} of $\phi_1$ with $\phi_2$ is
an injective monotone map $f$ of $\operatorname{dom}(\phi_1)$ onto $\operatorname{dom}(\phi_2)$ so that
$\norm{\phi_2 (f(\nu))}_p = \norm{\phi_1(\nu)}_p$ for all $\nu \in \operatorname{dom}(\phi_1)$.
A map $\phi : S \rightarrow L^p[0,1]$ is \emph{interval-valued} if $\phi(\nu)$ is the characteristic function of an interval for each $\nu \in \operatorname{dom}(\phi)$.
\subsection{Computable world}\label{subsec:computable}
We assume the reader is familiar with the rudiments of computability theory such as computable functions, sets, c.e. sets, and oracle computation. An excellent reference is \cite{Cooper.2004}.
\subsubsection{Computable categoricity in the countable realm}
To give our work some context we synopsize some background material on computable structure theory
in the countable realm; this will motivate our definitions for Banach spaces below as well as some already given. In particular we give precise definitions of \emph{computable categoricity} and \emph{relative computable categoricity} and survey related results. More expansive expositions can be found in \cite{Fokina.Harizanov.Melnikov.2014} and \cite{Ash.Knight.2000}.
To begin, suppose $\mathcal{A}$ is a structure with domain $A$. A \emph{numbering} of $\mathcal{A}$ is a surjection of $\mathbb{N}$ onto $A$. If $\nu$ is a numbering of $\mathcal{A}$, then the pair $(\mathcal{A}, \nu)$ is called a
\emph{presentation} of $\mathcal{A}$. Suppose $\mathcal{A}^\# = (\mathcal{A}, \nu)$ is a presentation of $\mathcal{A}$.
We say that $\mathcal{A}^\#$ is a \emph{computable presentation} of $\mathcal{A}$ if:
\begin{itemize}
\item $\{(m,n)\ :\ \nu(m) = \nu(n)\}$ is computable,
\item for each $n$-ary relation $R$ of $\mathcal{A}$, $\{(x_1, \ldots, x_n)\ :\ R(\nu(x_1), \ldots, \nu(x_n))\}$ is computable, and
\item for each $n$-ary function $f : A^n \rightarrow A$ of $\mathcal{A}$,
$\{(x_1, \ldots, x_n, y)\ :\ f(\nu(x_1), \ldots, \nu(x_n)) = \nu(y)\}$ is computable.
\end{itemize}
Note we regard constants as $0$-ary functions.
We say that a countable structure $\mathcal{A}$ is \emph{computably presentable} if it has a computable presentation. It is well-known that there are countable structures without computable presentations; see \cite{Fokina.Harizanov.Melnikov.2014} for a survey of such results.
Suppose $\mathcal{A}_1$ and $\mathcal{A}_2$ are structures, and suppose $\mathcal{A}_j^\# = (\mathcal{A}_j, \nu_j)$ is a presentation of $\mathcal{A}_j$ for each $j$. We say that a map
$f : \mathcal{A}_1 \rightarrow \mathcal{A}_2$ is a \emph{computable map of $\mathcal{A}_1^\#$ into $\mathcal{A}_2^\#$} if there is a computable map $F : \mathbb{N} \rightarrow \mathbb{N}$ so that $f(\nu_1(n)) = \nu_2(F(n))$ for all $n \in \mathbb{N}$. We similarly define what it means for an oracle to compute a map of $\mathcal{A}_1^\#$ into $\mathcal{A}_2^\#$.
We say that a computably presentable countable structure $\mathcal{A}$ is \emph{computably categorical} if any two computable presentations of $\mathcal{A}$ are computably isomorphic. This is equivalent to saying that $\mathcal{A}_1^\#$ is computably isomorphic to $\mathcal{A}_2^\#$ whenever $\mathcal{A}_1^\#$ and $\mathcal{A}_2^\#$ are computable presentations of structures that are isomorphic to $\mathcal{A}$.
It is easy to see that $(\mathbb{Q}, <)$ is computably categorical (use Cantor's back-and-forth construction). On the other hand, a fairly straightforward diagonalization shows that $(\mathbb{N}, <)$ is not computably categorical.
As mentioned in the introduction, the interaction of computable categoricity and structure has been
studied extensively.
For example, J. Remmel showed that a computably presentable Boolean algebra is computably categorical if and only it is has finitely many atoms \cite{Remmel.1981.2}. Goncharov, Lempp, and Solomon proved that a computably presentable ordered Abelian group is computably categorical if and only if it has finite rank \cite{Goncharov.Lempp.Solomon.2003}. Recently, O. Levin proved that every computably presentable ordered field with finite transcendence degree is computably categorical \cite{Levin.2016}.
The effect of structure on other computability notions has been studied intensively; see e.g. \cite{Hirschfeldt.Khoussainov.Shore.Slinko.2002} for a very good overview.
We now define relative computable categoricity. We first define the diagram of a presentation. Suppose $\mathcal{A}$ is a structure and $\mathcal{A}^\# = (\mathcal{A}, \nu)$ is a presentation of $\mathcal{A}$.
The \emph{diagram} of $\mathcal{A}^\#$ is the join of the following sets.
\begin{itemize}
\item $\{(m,n)\ :\ \nu(m) = \nu(n)\}$.
\item $\{(x_1, \ldots, x_n)\ :\ R(\nu(x_1), \ldots, \nu(x_n))\}$ for each $n$-ary relation $R$ of $\mathcal{A}$.
\item $\{(x_1, \ldots, x_n, y)\ :\ f(\nu(x_1), \ldots, \nu(x_n)) = \nu(y)\}$ for each function $f : A^n \rightarrow A$ of $\mathcal{A}$.
\end{itemize}
We say that a computably presentable countable structure $\mathcal{A}$ is
\emph{relatively computably categorical} if whenever $\mathcal{A}^\#$ is a computable presentation of $\mathcal{A}$ and $\mathcal{B}^\#$ is a presentation of a structure $\mathcal{B}$ that is isomorphic to $\mathcal{A}$, the diagram of $\mathcal{B}^\#$ computes an isomorphism of $\mathcal{A}^\#$ onto $\mathcal{B}^\#$.
S. Goncharov gave a syntactic characterization of the relatively computable categorical countable structures \cite{Goncharov.1975}. Clearly every relatively computably categorical structure is computably
categorical. S. Goncharov also constructed a computably categorical structure that is not relatively computably categorical \cite{Goncharov.1977}. Numerous extensions of these results have been proven; see e.g. the survey \cite{Fokina.Harizanov.Melnikov.2014}.
The effect of structure on the separation of relative computable categoricity from computable categoricity has also been examined. For example, a computably categorical countable structure is relatively computably categorical if it is either a linear order, a Boolean algebra, or an Abelian $p$-group \cite{Dzgoev.Goncharov.1980}, \cite{Remmel.1981.2}, \cite{Goncharov.1980.2}, \cite{Smith.1981}, \cite{Calvert.Cenzer.Harizanov.Morozov.2009}.
We now turn to the foundations of computable structure theory on analytic spaces.
\subsubsection{Computability on Banach spaces}
Our approach to computable structure theory on Banach spaces parallels the development of computable structure theory on metric spaces in \cite{Greenberg.Knight.Melnikov.Turetsky.2016}; see also \cite{Pour-El.Richards.1989}. We first define what is meant by a computable presentation of a Banach space. We then define, for a computable presentation of a Banach space, the associated computable vectors, sequences, c.e. open sets, and c.e. closed sets. Next, we define the computable maps between computable presentations of Banach spaces. After we summarize fundamental relationships between these notions, we define computable categoricity for Banach spaces.
Suppose $\mathcal{B}$ is a Banach space and $\mathcal{B}^\# = (\mathcal{B}, D)$ is a presentation of
$\mathcal{B}$. We say that $\mathcal{B}^\#$ is a \emph{computable presentation} of $\mathcal{B}$
if the norm is computable on the rational vectors of $\mathcal{B}^\#$. More formally if there is an algorithm
that given any nonnegative integer $k$ and any finite sequence of scalars $\alpha_0, \ldots, \alpha_M \in \mathbb{Q}(i)$, computes a rational number $q$ so that $\left|\norm{\sum_j \alpha_j D(j)} - q\right| < 2^{-k}$. The standard presentation of $\mathbb{C}$ is a computable presentation, as is the standard presentation of $L^p[0,1]$ when $p \geq 1$ is a computable real. We say that $\mathcal{B}$ is \emph{computably presentable} if it has a computable presentation.
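For example, when $p \geq 1$ is a computable real, the standard presentation of $\ell^p$, whose distinguished vectors are the standard unit vectors $e_0, e_1, \ldots$, is computable: for,
\[
\norm{\sum_{j = 0}^M \alpha_j e_j}_p = \left( \sum_{j = 0}^M |\alpha_j|^p \right)^{1/p},
\]
and the right-hand side can be computed to any desired precision from $\alpha_0, \ldots, \alpha_M \in \mathbb{Q}(i)$.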
We note that if $\mathcal{B}^\#$ is a computable presentation of a Banach space $\mathcal{B}$, and if $S$
is a finite set, then $(\mathcal{B}^S)^\#$ (as defined in Section \ref{sec:classical}) is a computable presentation of $\mathcal{B}^S$.
We now define the computable vectors and sequences of a computable presentation of a Banach space. Fix a Banach space $\mathcal{B}$ and a computable presentation $\mathcal{B}^\#$ of $\mathcal{B}$. A vector $v \in \mathcal{B}$ is a \emph{computable vector of $\mathcal{B}^\#$} if
there is an algorithm that given any nonnegative integer $k$ computes a rational vector $u$ of $\mathcal{B}^\#$ so that $\norm{v - u} < 2^{-k}$. In other words, it is possible to compute arbitrarily good approximations of $v$. If $v$ is a computable vector of $\mathcal{B}^\#$, then a code of such an algorithm will be referred to as an \emph{index} of $v$.
A sequence $\{v_n\}_{n \in \mathbb{N}}$ of vectors in $\mathcal{B}$ is a \emph{computable sequence of $\mathcal{B}^\#$} if there is an algorithm that given any nonnegative integers $k,n$ as input computes a rational vector
$u$ of $\mathcal{B}^\#$ so that $\norm{u - v_n} < 2^{-k}$; in other words, $v_n$ is computable uniformly in $n$.
If $\{v_n\}_n$ is a computable sequence of vectors of $\mathcal{B}^\#$, then a code of such an algorithm shall be referred to as an index of $\{v_n\}_n$.
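To illustrate, consider a computable presentation of $L^p[0,1]$ whose distinguished vectors include the characteristic functions of the dyadic subintervals of $[0,1]$. Then, $f(t) = t$ is a computable vector of this presentation: setting
\[
s_n = \sum_{j = 0}^{2^n - 1} \frac{j}{2^n}\, \chi_{[j 2^{-n}, (j+1)2^{-n})},
\]
we have $|f - s_n| \leq 2^{-n}$ pointwise, and so $\norm{f - s_n}_p \leq 2^{-n}$.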
Suppose $L^p(\Omega)^\#$ is a computable presentation of $L^p(\Omega)$, and let $S$ be a set of nodes.
A map $\phi : S \rightarrow L^p(\Omega)$ is a \emph{computable map of $S$ into $L^p(\Omega)^\#$} if
there is an algorithm that computes an index of $\phi(\nu)$ from $\nu$ if $\nu \in S$ and does not halt on any node that is not in $S$. We similarly define
(computable) disintegrations of $L^p(\Omega)^\#$, etc.
We now define the c.e. open and closed subsets of a computable presentation $\mathcal{B}^\#$ of a Banach space $\mathcal{B}$. An open subset $U$ of $\mathcal{B}$ is a \emph{c.e. open subset of $\mathcal{B}^\#$} if the set of all open rational balls of $\mathcal{B}^\#$ that are
included in $U$ is c.e.. If $U$ is a c.e. open subset of $\mathcal{B}^\#$, then an index of $U$ is
a code of a Turing machine that enumerates all open rational balls that are included in $U$.
A closed subset $C$ of $\mathcal{B}$ is a \emph{c.e. closed subset of $\mathcal{B}^\#$} if the set of all open rational balls of $\mathcal{B}^\#$ that contain a point of $C$ is c.e.. If $C$ is a c.e. closed subset of $\mathcal{B}^\#$, then an \emph{index} of $C$ is a code of a Turing machine that enumerates all open rational balls that
contain a point of $C$.
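For example, when $\mathcal{B}$ is nonzero, the unit sphere $C = \{v \in \mathcal{B}\ :\ \norm{v} = 1\}$ is a c.e. closed subset of $\mathcal{B}^\#$. For, $d(u, C) = \left| \norm{u} - 1 \right|$ for every $u \in \mathcal{B}$, so an open rational ball $B(u,q)$ contains a point of $C$ if and only if $\left| \norm{u} - 1 \right| < q$; this condition is semi-decidable since the norm is computable on the rational vectors of $\mathcal{B}^\#$.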
Now we define computable maps. Suppose $\mathcal{B}_1^\#$ is a computable presentation of $\mathcal{B}_1$ and
$\mathcal{B}_2^\#$ is a computable presentation of $\mathcal{B}_2$. A map $T : \mathcal{B}_1 \rightarrow \mathcal{B}_2$ is a \emph{computable map of $\mathcal{B}_1^\#$ into $\mathcal{B}_2^\#$} if
there is a computable function $P$ that maps rational balls of $\mathcal{B}_1^\#$ to rational balls of
$\mathcal{B}_2^\#$ so that $T[B_1] \subseteq P(B_1)$ whenever $P(B_1)$ is defined and so that whenever $U$ is a neighborhood of $T(v)$,
there is a rational ball $B_1$ of $\mathcal{B}_1^\#$ so that $v \in B_1$ and $P(B_1) \subseteq U$. In other words, it is possible to compute arbitrarily good approximations of $T(v)$ from sufficiently good approximations of $v$. An index of such a function $P$ will be referred to as an index of $T$.
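For example, the map $T(v) = 2v$ is a computable map of $\mathcal{B}^\#$ into $\mathcal{B}^\#$: take $P(B(u, q)) = B(2u, 2q)$. Then, $T[B(u,q)] = P(B(u,q))$, and, since the rational vectors are dense in $\mathcal{B}$, every neighborhood of $T(v)$ includes $P(B_1)$ for some rational ball $B_1$ that contains $v$.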
Suppose $\mathcal{B}_1^\# = (\mathcal{B}_1, R_1)$. It is well-known that if $T$ is a bounded linear operator of $\mathcal{B}_1$ into $\mathcal{B}_2$, then $T$ is computable if and only if $\{T(R_1(n))\}_n$ is a computable sequence of $\mathcal{B}_2^\#$.
Note that if $L^p(\Omega)^\#$ is a computable presentation of $L^p(\Omega)$, then
$\sigma$ (as defined in Subsection \ref{subsec:banach}) is a computable real-valued map from $(L^p(\Omega)^S)^\#$ into $\mathbb{C}$.
The following are `folklore' and follow easily from the definitions.
\begin{proposition}\label{prop:preimage.c.e.open}
Suppose $\mathcal{B}_1$, $\mathcal{B}_2$ are Banach spaces. Let $\mathcal{B}_j^\#$ be a computable
presentation of $\mathcal{B}_j$ for each $j$, and let $T$ be a computable map of $\mathcal{B}_1^\#$ into $\mathcal{B}_2^\#$. Then, $T^{-1}[U]$ is a c.e. open subset of $\mathcal{B}_1^\#$ whenever $U$ is a c.e. open subset of $\mathcal{B}_2^\#$. Furthermore, an index of $T^{-1}[U]$ can be computed from indices of $T$ and $U$.
\end{proposition}
\begin{proposition}\label{prop:bounding}
Suppose $\mathcal{B}$ is a Banach space, and let $\mathcal{B}^\#$ be a computable presentation of
$\mathcal{B}$. If $f$ is a computable real-valued function from $\mathcal{B}^\#$ into $\mathbb{C}$ with the property that $f(v) \geq d(v, f^{-1}[\{0\}])$ for all $v \in \mathcal{B}$, then, $f^{-1}[\{0\}]$ is c.e. closed. Furthermore, an index of $f^{-1}[\{0\}]$ can be computed from an index of $f$.
\end{proposition}
\begin{proposition}\label{prop:comp.point}
Suppose $\mathcal{B}$ is a Banach space and $\mathcal{B}^\#$ is a computable presentation of
$\mathcal{B}$. Let $U$ be a c.e. open subset of $\mathcal{B}^\#$, and let $C$ be a c.e. closed subset
of $\mathcal{B}^\#$ so that $C \cap U \neq \emptyset$. Then, $C \cap U$ contains a computable vector of
$\mathcal{B}^\#$. Furthermore, an index of such a vector can be computed from indices of $U$ and $C$.
\end{proposition}
\begin{proposition}\label{prop:eff.cauchy}
Suppose $\mathcal{B}$ is a Banach space and $\mathcal{B}^\#$ is a computable presentation of
$\mathcal{B}$. Let $\{v_n\}_{n \in \mathbb{N}}$ be a computable sequence of $\mathcal{B}^\#$ so that
$\norm{v_n - v_{n+1}} < 2^{-n}$ for all $n \in \mathbb{N}$. Then,
$\lim_n v_n$ is a computable vector of $\mathcal{B}^\#$. Furthermore, an index of $\lim_n v_n$
can be computed from an index of $\{v_n\}_{n \in \mathbb{N}}$.
\end{proposition}
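(For, $\norm{v_m - \lim_n v_n} \leq \sum_{n \geq m} 2^{-n} = 2^{-(m-1)}$; so, given $k$, one computes a rational vector $u$ so that $\norm{u - v_{k+2}} < 2^{-(k+2)}$, and then $\norm{u - \lim_n v_n} < 2^{-(k+2)} + 2^{-(k+1)} < 2^{-k}$.)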
We now define a Banach space $\mathcal{B}$ to be \emph{computably categorical} if any two of its computable presentations are computably isometrically isomorphic; equivalently if $\mathcal{B}_1^\#$ is computably isomorphically isometric to $\mathcal{B}_2^\#$ whenever $\mathcal{B}_1^\#$, $\mathcal{B}_2^\#$ are computable presentations of Banach spaces that are isomorphically isometric to $\mathcal{B}$.
\subsubsection{Summary of prior work in analytic computable structure theory}\label{subsec:survey.prior}
The earliest work in analytic computable structure theory is implicit in the 1989 monograph of Pour-El and Richards \cite{Pour-El.Richards.1989}; namely, it is shown that $\ell^1$ is not computably categorical but that
all separable Hilbert spaces are. But, there was no more progress until 2013 when a number of results on metric spaces appeared. In particular, Melnikov and Nies showed
that computably presentable compact metric spaces are $\Delta_3^0$-categorical and that there is a computably presentable Polish space that is not $\Delta_2^0$-categorical \cite{Melnikov.Nies.2013}. At the same time, Melnikov showed that the Cantor space, Urysohn space, and all separable Hilbert spaces are computably categorical (as metric spaces), but that (as a metric space) $C[0,1]$ is not \cite{Melnikov.2013}.
Recently, Greenberg, Knight, Melnikov, and Turetsky announced an analog of Goncharov's syntactic characterization of relative computable categoricity for metric spaces \cite{Greenberg.Knight.Melnikov.Turetsky.2016}.
New results on Banach spaces began to appear in 2014. First, Melnikov and Ng showed that $C[0,1]$ is not computably categorical \cite{Melnikov.Ng.2014}. Then, McNicholl extended the work of Pour-El and Richards by showing that $\ell^p$ is computably categorical only when $p = 2$ and that $\ell^p$ is $\Delta_2^0$-categorical when $p$ is a computable real. McNicholl also showed that $\ell^p_n$ is computably categorical when $p$ is a computable real and $n$ is a positive integer \cite{McNicholl.2015}, \cite{McNicholl.2016},\cite{McNicholl.2016.1}. More recently, McNicholl and Stull have shown that if $p \geq 1$ is a computable real other than $2$, then whenever $\mathcal{B}^\#$ is a computable presentation of a Banach space $\mathcal{B}$ that is isometrically isomorphic to $\ell^p$, there is a least powerful Turing degree that computes a linear isometry of the standard presentation of $\ell^p$ onto $\mathcal{B}^\#$, and that these degrees are precisely the c.e. degrees \cite{McNicholl.Stull.2016}.
\section{Overview of the proof of Theorem \ref{thm:main}}\label{sec:overview}
As noted, every separable $L^2$ space is computably categorical since it is a Hilbert space. So, we can confine ourselves to the case $p \neq 2$. The three key steps to our proof of Theorem \ref{thm:main} are encapsulated in the following three theorems.
\begin{theorem}\label{thm:lifting.comp}
Suppose $p \geq 1$ is a computable real.
Let $L^p(\Omega_1)^\#$ be a computable presentation of $L^p(\Omega_1)$, and let
$L^p(\Omega_2)^\#$ be a computable presentation of $L^p(\Omega_2)$.
Suppose
there is a computable disintegration of $L^p(\Omega_1)^\#$ that is computably isomorphic to a computable disintegration of $L^p(\Omega_2)^\#$.
Then, there is a computable
linear isometry of $L^p(\Omega_1)^\#$ onto $L^p(\Omega_2)^\#$.
\end{theorem}
\begin{theorem}\label{thm:comp.disint.Lp01}
Let $p$ be a computable real so that $p \geq 1$, and let $\Omega$ be a non-atomic separable measure space.
Suppose $L^p(\Omega)^\#$ is a computable presentation of $L^p(\Omega)$, and suppose $\phi$ is a computable disintegration of $L^p(\Omega)^\#$ so that $\norm{\phi(\lambda)}_p = 1$. Then, there is a computable disintegration of $L^p[0,1]$ that is computably isomorphic to $\phi$.
\end{theorem}
\begin{theorem}\label{thm:disint.comp}
Let $p \geq 1$ be a computable real so that $p \neq 2$.
Suppose $\Omega$ is a separable nonzero measure space, and
suppose $L^p(\Omega)^\#$ is a computable presentation of $L^p(\Omega)$. Then, there is a computable disintegration of $L^p(\Omega)^\#$.
\end{theorem}
Theorem \ref{thm:main} follows immediately from Theorems \ref{thm:lifting.comp} through \ref{thm:disint.comp}. Our proofs of each of these theorems are supported by a certain amount of classical material (that is, material that is devoid of computability content) which is developed in Section \ref{sec:classical}. The transition from the classical realm to the computable is effected in Section \ref{sec:computable}.
\section{Classical world}\label{sec:classical}
We divide our work into three parts: isomorphism of disintegrations, extension of partial disintegrations, and approximation of separating antitone maps. Subsection \ref{subsec:isomorphism} contains our results on isomorphism of disintegrations; this material provides the classical component of the proofs of Theorems \ref{thm:lifting.comp} and \ref{thm:comp.disint.Lp01}. Subsection \ref{subsec:extension} contains our results on extensions of partial disintegrations, and our theorem on approximation of separating antitone maps appears in Subsection \ref{subsec:approx}; the results in these two subsections support our proof of Theorem \ref{thm:disint.comp}.
\subsection{Isomorphism results}\label{subsec:isomorphism}
Our proof of Theorem \ref{thm:lifting.comp} is based on the idea that an isomorphism can be lifted to form a linear isometry. We make this precise as follows.
\begin{definition}\label{def:lifts}
Suppose $\phi_1$, $\phi_2$ are disintegrations of $L^p(\Omega_1)$ and $L^p(\Omega_2)$ respectively, and suppose $f$ is an isomorphism of $\phi_1$ with $\phi_2$.
We say that $T : L^p(\Omega_1) \rightarrow L^p(\Omega_2)$ \emph{lifts} $f$ if
$T(\phi_1(\nu)) = \phi_2(f(\nu))$ for all $\nu \in \operatorname{dom}(\phi_1)$.
\end{definition}
We show here that liftings of isomorphisms exist and are unique. Namely, we prove the following.
\begin{theorem}\label{thm:extension.isomorphism}
Suppose $\Omega_1, \Omega_2$
are measure spaces and that $\phi_j$ is a disintegration of $L^p(\Omega_j)$ for each $j$.
Suppose $f$ is an isomorphism of $\phi_1$ with $\phi_2$. Then, there is a unique linear
isometry of $L^p(\Omega_1)$ onto $L^p(\Omega_2)$ that lifts $f$.
\end{theorem}
In Section \ref{sec:computable} we complete the proof of Theorem \ref{thm:lifting.comp} by showing that if $f$, $\phi_1$, $\phi_2$ are computable then the lifting of $f$ is computable.
The proof of Theorem \ref{thm:comp.disint.Lp01} is based on the following.
\begin{proposition}\label{prop:iso.int.disint}
Let $\Omega$ be a nonzero and non-atomic measure space, and let $\phi$ be a disintegration of $L^p(\Omega)$ so that
$\norm{\phi(\lambda)}_p = 1$. Suppose
$\psi$ is an interval-valued separating antitone map that is isomorphic to $\phi$,
and suppose $\operatorname{dom}(\psi)$ is a tree. Then, $\psi$ is a disintegration of $L^p[0,1]$.
\end{proposition}
In Section \ref{sec:computable}, we complete the proof of Theorem \ref{thm:comp.disint.Lp01} by showing that when $\phi$ is computable there \emph{is} a computable interval-valued separating antitone map that is computably isomorphic to $\phi$.
We now proceed with the proofs of Theorem \ref{thm:extension.isomorphism} and Proposition \ref{prop:iso.int.disint}.
\begin{proof}[Proof of Theorem \ref{thm:extension.isomorphism}:]
Suppose $\Omega_j = (X_j, \mathcal{M}_j, \mu_j)$. We first define a linear map $T$ on the linear span of $\operatorname{ran}(\phi_1)$. In particular, we let
\[
T(\sum_{\nu \in F} \alpha_\nu \phi_1(\nu)) = \sum_{\nu \in F} \alpha_\nu \phi_2(f(\nu))
\]
for every finite $F \subseteq \operatorname{dom}(\phi_1)$ and every corresponding family of scalars
$\{\alpha_\nu\}_{\nu \in F}$.
We first show that $T$ is well-defined. Suppose
\[
g = \sum_{\nu \in F_1} \alpha_\nu \phi_1(\nu) = \sum_{\nu \in F_2} \beta_\nu \phi_1(\nu).
\]
Without loss of generality, we assume $F_1 = F_2 = F$ where $F$ is a finite tree; this can be arranged by padding with zero coefficients and closing $F_1 \cup F_2$ downward within $\operatorname{dom}(\phi_1)$.
We first make some observations. Suppose $\phi : F \rightarrow L^p(\Omega)$ is a
separating antitone map. Let:
\begin{eqnarray*}
\nabla_\phi(\nu) & = & \phi(\nu) - \sum_{\nu' \in \nu^+ \cap F} \phi(\nu')\\
S_\phi(\nu) & = & \operatorname{supp}(\nabla_\phi(\nu))\\
\end{eqnarray*}
Note that $S_\phi(\nu) = \operatorname{supp}(\phi(\nu)) - \bigcup_{\nu' \in \nu^+ \cap F} \operatorname{supp}(\phi(\nu'))$ and that
$\operatorname{supp}(\phi(\lambda)) = \bigcup_\nu S_\phi(\nu)$. Also note that $\nabla_\phi(\nu)$ and $\nabla_\phi(\nu')$ are disjointly supported when $\nu \neq \nu'$.
We claim that if $\gamma_\nu \in \mathbb{C}$ for each $\nu \in F$, then
\[
\sum_{\nu \in F} \gamma_\nu \phi(\nu) = \sum_{\nu \in F} \left( \sum_{\mu \subseteq \nu} \gamma_\mu \right) \nabla_\phi(\nu).
\]
For, when $\mu \subseteq \nu$,
\begin{eqnarray*}
\nabla_\phi(\nu) & = & \phi(\nu)\cdot \chi_{S_\phi(\nu)} \\
& = & \phi(\mu) \cdot \chi_{\operatorname{supp}(\phi(\nu))} \chi_{S_\phi(\nu)}\\
& = & \phi(\mu) \chi_{S_\phi(\nu)}
\end{eqnarray*}
And, $\phi(\mu) \cdot \chi_{S_\phi(\nu)} = \mathbf{0}$ if $\mu \not \subseteq \nu$. So,
\begin{eqnarray*}
\sum_{\nu \in F} \gamma_\nu \phi(\nu) & = & \sum_{\nu \in F} \left( \sum_{\mu \in F} \gamma_\mu \phi(\mu) \right) \cdot \chi_{S_\phi(\nu)}\\
& = & \sum_{\nu \in F} \left( \sum_{\mu \subseteq \nu} \gamma_\mu \phi(\mu) \cdot \chi_{S_\phi(\nu)} \right) \\
& = & \sum_{\nu \in F} \left(\sum_{\mu \subseteq \nu} \gamma_\mu \right) \nabla_\phi(\nu).
\end{eqnarray*}
Thus,
\[
g = \sum_{\nu \in F} \left(\sum_{\mu \subseteq \nu} \alpha_\mu\right) \nabla_{\phi_1}(\nu) =
\sum_{\nu \in F} \left( \sum_{\mu \subseteq \nu} \beta_\mu \right) \nabla_{\phi_1}(\nu).
\]
Since nonzero disjointly supported vectors are linearly independent, it follows that
\[
\sum_{\mu \subseteq \nu} \alpha_\mu = \sum_{\mu \subseteq \nu} \beta_\mu
\]
whenever $\nabla_{\phi_1}(\nu) \neq \mathbf{0}$.
Let $\psi = \phi_2 \circ f$. Since $f$ is an isomorphism, $\psi$ is a disintegration, and
$\norm{\nabla_\psi(\nu)}_p = \norm{\nabla_{\phi_1}(\nu)}_p$ for all $\nu \in F$.
Thus,
\begin{eqnarray*}
\sum_{\nu \in F} \alpha_\nu \psi(\nu) & = & \sum_{\nu \in F} \left( \sum_{\mu \subseteq \nu} \alpha_\mu\right) \nabla_\psi(\nu)\\
& = & \sum_{\nu \in F} \left( \sum_{\mu \subseteq \nu} \beta_\mu\right) \nabla_\psi(\nu)\\
& = & \sum_{\nu \in F} \beta_\nu \psi(\nu).
\end{eqnarray*}
Therefore, $T$ is well-defined.
We now also note that
\begin{eqnarray*}
\norm{g}_p^p & = & \sum_{\nu \in F} \left| \sum_{\mu \subseteq \nu} \alpha_\mu \right|^p \norm{\nabla_{\phi_1}(\nu)}_p^p\\
& = & \sum_{\nu \in F} \left| \sum_{\mu \subseteq \nu} \alpha_\mu \right|^p \norm{\nabla_{\psi}(\nu)}_p^p\\
& = & \norm{T(g)}_p^p.
\end{eqnarray*}
It now follows that $T$ extends to a unique isometric linear map of $L^p(\Omega_1)$ into $L^p(\Omega_2)$;
denote this map by $T$ as well.
Since $\operatorname{ran}(\phi_2) \subseteq \operatorname{ran}(T)$, since $\operatorname{ran}(T)$ is a closed subspace, and since the linear span of $\operatorname{ran}(\phi_2)$ is dense in $L^p(\Omega_2)$, it follows that $T$ is surjective.
Now, suppose $S$ is an isometric linear map of $L^p(\Omega_1)$ onto $L^p(\Omega_2)$ so that
$S(\phi_1(\nu)) = \phi_2(f(\nu))$ for all $\nu \in \operatorname{dom}(\phi_1)$. So, $S(\phi_1(\nu)) = T(\phi_1(\nu))$ for all $\nu \in \operatorname{dom}(\phi_1)$.
That is, $S(g) = T(g)$ whenever $g \in \operatorname{ran}(\phi_1)$. Since the linear span of $\operatorname{ran}(\phi_1)$ is dense
in $L^p(\Omega_1)$, it follows that $S = T$.
\end{proof}
To prove Proposition \ref{prop:iso.int.disint}, we will need the following lemma.
\begin{lemma}\label{lm:descending}
Suppose $\{f_n\}_n$ is a sequence of vectors in $L^p(\Omega)$ so that $f_{n+1} \preceq f_n$ for all $n$. Then, $\{f_n\}_n$ converges pointwise and in the $L^p$-norm.
\end{lemma}
\begin{proof}
Since $f_{n+1} \preceq f_n$, $\{f_n\}_n$ converges pointwise to a function $f$ so that $f \preceq f_n$ for all $n$. In fact,
$f = f_n \cdot \chi_S$ where $S = \bigcap_n \operatorname{supp}(f_n)$. Thus, $f \in L^p(\Omega)$, and, by the dominated convergence theorem, $\lim_n \norm{f_n}_p = \norm{f}_p$. Since $f$ and $f_n - f$ are disjointly supported, $\norm{f_n - f}_p^p = \norm{f_n}_p^p - \norm{f}_p^p$, and therefore $\lim_n \norm{f_n- f}_p =0$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:iso.int.disint}:]
First, we claim that if $\epsilon > 0$, then there exists $n$ so that $\max\{\norm{\psi(\nu)}_p\ :\ |\nu| = n\} < \epsilon$. For, suppose otherwise. Let $S = \{\nu \in \operatorname{dom}(\psi)\ :\ \norm{\psi(\nu)}_p \geq \epsilon\}$.
Thus, since $\psi$ is an antitone map, $S$ is a tree.
Since $\psi$ is interval-valued, $S$ is a finitely branching tree; since $S$ is also infinite, it has an infinite branch $\beta$ by K\"onig's Lemma. That is, $\beta$ is a function from $\mathbb{N}$ into $S$ so that
$\beta(n+1) \supset \beta(n)$ for all $n \in \mathbb{N}$.
Let $f$ be an isomorphism of $\psi$ onto $\phi$. Then, by Lemma \ref{lm:descending}, $\lim_n \phi(f(\beta(n)))$ exists in the $L^p$-norm; let
$h$ denote this limit. Then, $\norm{h}_p \geq \epsilon$ and
\[
\langle \operatorname{ran}(\phi) \rangle \subseteq \langle h \rangle \oplus \{g \in L^p(\Omega)\ :\ \operatorname{supp}(g) \cap \operatorname{supp}(h) = \emptyset\}.
\]
Since $\Omega$ is non-atomic, by Theorem \ref{thm:atomic} and Proposition \ref{prop:abs.cont}, there is
a measurable set $A$ so that $\norm{h \cdot \chi_A}_p = \epsilon / 2$.
Therefore, $h \cdot \chi_A \not \in \langle \operatorname{ran}(\phi) \rangle$; a contradiction.
Now, to show that $\psi$ is a disintegration, it suffices to show that $\langle \operatorname{ran}(\psi) \rangle = L^p[0,1]$. To this end, it suffices to show that $\chi_I \in \langle \operatorname{ran}(\psi) \rangle$ whenever $I$ is a subinterval of $[0,1]$. Now suppose $[a,b] \subseteq [0,1]$. Since $\psi$ is interval-valued, for each $\nu$, there is an interval $I(\nu) \subseteq [0,1]$ so that $\psi(\nu) = \chi_{I(\nu)}$. Choose $\epsilon > 0$ and $n$ so that
$\norm{\psi(\nu)}_p < \epsilon$ whenever $\nu \in \operatorname{dom}(\psi)$ and $|\nu| = n$.
Since $\phi$ is a disintegration, and since $\norm{\phi(\lambda)}_p = 1$, it follows that
$\bigcup_{|\nu| = n} I(\nu) = [0,1]$ up to a set of measure zero.
Let $F = \{\nu \in \operatorname{dom}(\psi)\ :\ |\nu| = n \wedge\ I(\nu) \cap [a,b] \neq \emptyset\}$.
So, $[a,b] \subseteq \bigcup_{\nu \in F} I(\nu)$.
Thus, $\mu(\bigcup_{\nu \in F} I(\nu) - [a,b]) < 2\epsilon^p$; since $\epsilon$ is arbitrary, it follows that $\chi_{[a,b]} \in \langle \operatorname{ran}(\psi) \rangle$.
Hence, $\langle \operatorname{ran}(\psi) \rangle = L^p[0,1]$.
\end{proof}
\subsection{Extending partial disintegrations}\label{subsec:extension}
Our goal in this subsection is to prove the following which will support our proof of Theorem \ref{thm:disint.comp}.
\begin{theorem}\label{thm:extend.partial.disint}
Suppose $\Omega$ is a separable measure space and $1 \leq p < \infty$. Suppose $\phi$ is a partial disintegration of $L^p(\Omega)$. Then, for every finite subset $F$ of
$L^p(\Omega)$ and every nonnegative integer $k$, $\phi$ extends to a partial disintegration $\psi$ so that
$d(f, \langle \operatorname{ran}(\psi) \rangle) < 2^{-k}$ for every $f \in F$.
\end{theorem}
We divide the majority of the proof of Theorem \ref{thm:extend.partial.disint} into a sequence of lemmas as follows.
\begin{lemma}\label{lm:approx.reciprocal}
Let $\Omega$ be a measure space and suppose $1 \leq p < \infty$. Let $f \in L^p(\Omega)$ be supported on a set of finite measure. Then, for every $\epsilon > 0$, there is a simple function $s$ so that
$\operatorname{supp}(s) \subseteq \operatorname{supp}(f)$ and $\norm{s \cdot f - \chi_{\operatorname{supp}(f)}}_p < \epsilon$.
\end{lemma}
\begin{proof}
Suppose $\Omega = (X, \mathcal{M}, \mu)$. Let $A = \operatorname{supp}(f)$. Without loss of generality, suppose $\norm{f}_p > 0$. Let $\epsilon > 0$. For each nonnegative integer $k$ let
\[
A_k = \{t \in X\ :\ |f(t)| > 2^{-k}\}.
\]
Since $\mu(A) < \infty$, $\lim_k \mu(A - A_k) = 0$. Choose $k$ so that $\mu(A - A_k) < (\epsilon /2)^p$, and set $g = (1/f) \cdot \chi_{A_k}$. Thus, $g \in L^\infty(\Omega)$. So, there is a simple function $s$ so that $\operatorname{supp}(s) \subseteq A_k$ and $\norm{s - g}_\infty < \frac{\epsilon}{2\norm{f}_p}$. Then,
\begin{eqnarray*}
\norm{s \cdot f - \chi_{A_k}}_p^p & = & \norm{(s - g) \cdot f}_p^p \\
& = & \norm{ |s - g|^p |f|^p }_1\\
& \leq & \norm{ |s - g|^p}_\infty \norm{|f|^p}_1\\
& = & \norm{s - g}_\infty^p \norm{f}_p^p\\
& < & 2^{-p} \epsilon^p
\end{eqnarray*}
So,
\begin{eqnarray*}
\norm{s \cdot f - \chi_A}_p & \leq & \norm{s \cdot f - \chi_{A_k}}_p + \norm{\chi_{A_k} - \chi_A}_p\\
& < & \epsilon
\end{eqnarray*}
\end{proof}
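To illustrate Lemma \ref{lm:approx.reciprocal}, take $\Omega$ to be $[0,1]$ with Lebesgue measure and $f(t) = t$. Then, $\operatorname{supp}(f) = (0,1]$ up to a null set, $A_k = (2^{-k}, 1]$, and any simple function $s$ supported on $A_k$ that approximates $(1/f) \cdot \chi_{A_k}$ sufficiently well in the $L^\infty$ norm makes $s \cdot f$ uniformly close to $\chi_{A_k}$, which in turn is close to $\chi_{\operatorname{supp}(f)}$ in the $L^p$ norm once $k$ is large.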
\begin{lemma}\label{lm:density.vectors}
Suppose $\Omega$ is a measure space and $1 \leq p < \infty$. Suppose
$\mathcal{D} \subseteq L^p(\Omega)$ is a simple lower semilattice with the property that the upper semilattice generated by the supports of the vectors in $\mathcal{D}$ is dense in $\Omega$. Then, the linear span of $\mathcal{D}$ is dense in $L^p(\Omega)$.
\end{lemma}
\begin{proof}
Suppose $\Omega = (X, \mathcal{M}, \mu)$.
It suffices to show that if $\mu(A) < \infty$, then $\chi_A$ is a limit point of the linear span of $\mathcal{D}$ in the $L^p$-norm. So, suppose $\mu(A) < \infty$, and let $\epsilon > 0$. Choose $f_0, \ldots, f_n \in \mathcal{D}$ so that
$\mu(A \triangle \bigcup_{j = 0}^n \operatorname{supp}(f_j)) < (\epsilon / 3)^p$. Since $\mathcal{D}$ is simple, we can assume $f_0, f_1, \ldots, f_n$ are disjointly supported.
Thus,
\[
\norm{\chi_A - \sum_{j = 0}^n \chi_{\operatorname{supp}(f_j)}}_p < \epsilon / 3.
\]
Set $f = \sum_{j = 0}^n f_j$, and set $B = \bigcup_{j \leq n} \operatorname{supp}(f_j)$.
Thus, $B = \operatorname{supp}(f)$. By Lemma \ref{lm:approx.reciprocal}, there is a simple function $s$ so that
$\norm{sf - \chi_B}_p < \epsilon/3$ and so that $\operatorname{supp}(s) \subseteq \operatorname{supp}(f)$. Hence, $\norm{\chi_A - sf}_p < 2\epsilon/3$.
Let $s = \sum_{j = 0}^k \alpha_j \chi_{A_j}$ where $A_0, \ldots, A_k$ are pairwise disjoint measurable
subsets of $X$ and $\alpha_0, \ldots, \alpha_k$ are nonzero. Thus, $\mu(A_j) < \infty$ and
\[
sf = \sum_{j = 0}^k \alpha_j f \chi_{A_j}.
\]
Set $M = \max\{|\alpha_0|, \ldots, |\alpha_k|\}$. Since $|f|^p$ is integrable, we can choose $\delta > 0$ so that
\[
\int_E |f|^p d\mu < \left(\frac{\epsilon}{3(k+1)M}\right)^p
\]
whenever $E$ is a measurable subset of $X$ so that $\mu(E) < \delta$. For each $j$, there exist
$g_{j,0}, \ldots, g_{j, m_j} \in \mathcal{D}$ so that $\mu(A_j \triangle \bigcup_s \operatorname{supp}(g_{j,s}))< \delta$.
Set $B_{j,s} = \operatorname{supp}(g_{j,s})$ and let $H_j = \bigcup_s B_{j,s}$. Thus, by the triangle inequality,
\begin{eqnarray*}
\norm{sf - \sum_j \alpha_jf\chi_{H_j}}_p & \leq & \sum_j |\alpha_j| \left( \int_{A_j \triangle H_j} |f|^p\ d\mu \right)^{1/p}\\
& \leq & M(k+1) \cdot \frac{\epsilon}{3(k+1)M} = \frac{\epsilon}{3}.
\end{eqnarray*}
Thus, combining these inequalities and applying the triangle inequality yields
\[
\norm{\chi_A - \sum_j \alpha_j f \chi_{H_j}}_p < \epsilon.
\]
Now, note that
\[
f_t \chi_{B_{j,s}} = \left\{
\begin{array}{cc}
0 & \mbox{if $\mu(B_{j,s} \cap \operatorname{supp}(f_t)) = 0$}\\
f_t & \mbox{if $f_t \preceq g_{j,s}$}\\
g_{j,s} & \mbox{if $g_{j,s} \preceq f_t$}
\end{array}
\right.
\]
It follows that $\sum_j \alpha_j f \chi_{H_j}$ belongs to the linear span of $\mathcal{D}$.
\end{proof}
\begin{lemma}\label{lm:semilattice.adjoin}
Suppose $\Omega$ is a measure space and $\mathcal{D}$ is a finite simple lower semilattice
of measurable sets. Then, for every measurable set $A$ that does not belong to the upper semilattice generated by $\mathcal{D}$, $\mathcal{D}$ properly extends to a finite simple lower semilattice $\mathcal{D}'$ of measurable sets so that $A$ belongs to the upper semilattice generated by $\mathcal{D}'$.
\end{lemma}
\begin{proof}
When $Y \in \mathcal{D}$, define the \emph{remnant} of $Y$ to be
\[
Y - \bigcup\{Z\ :\ Z \in \mathcal{D}\ \wedge\ Z \subset Y\}.
\]
Let $\mathcal{R}$ denote the set of all remnants of sets in $\mathcal{D}$. Note that any two distinct sets in
$\mathcal{R}$ are disjoint. Let:
\begin{eqnarray*}
\mathcal{R}' & = & \{R \cap A\ :\ R \in \mathcal{R}\}\\
S_A & = & A - \bigcup \mathcal{D}\\
\mathcal{D}' & = & \mathcal{D} \cup \mathcal{R}' \cup \{S_A\}.
\end{eqnarray*}
We claim that $\mathcal{D}'$ is a simple lower semilattice. For, suppose $X_1, X_2 \in \mathcal{D}'$ are incomparable. We can suppose one of $X_1$, $X_2$ does not belong to $\mathcal{D}$. We can also assume one of $X_1$, $X_2$ does not belong to $\mathcal{R}'$. If $X_1$ or $X_2$ is $S_A$, then $X_1 \cap X_2 = \emptyset$. So, we can assume $X_1 \in \mathcal{D}$ and $X_2 \in \mathcal{R}'$. Thus, there exists
a remnant $R$ of a set $Y \in \mathcal{D}$ so that $X_2 = R \cap A$. Thus, $R \subseteq Y$. So,
$Y \not \subseteq X_1$. If $X_1 \cap Y$ is null, then so is $X_1 \cap X_2$. So, suppose $X_1 \subset Y$.
Then, $R \cap X_1 = \emptyset$, so $X_2 \cap X_1 = \emptyset$.
We now note that $\bigcup \mathcal{R} = \bigcup \mathcal{D}$. Thus, $A = S_A \cup \bigcup \mathcal{R}'$, and so $A$ belongs to the upper semilattice generated by $\mathcal{D}'$. Thus, $\mathcal{D} \subset \mathcal{D}'$.
We now show that $\mathcal{D}'$ properly extends $\mathcal{D}$. Suppose $B \in \mathcal{D}' - \mathcal{D}$ and suppose $C$ is a nonzero set in $\mathcal{D}$. If $B = S_A$, then
$B \cap C = \emptyset$ and so $B \not \supseteq C$. Suppose $B = R \cap A$ where $R$ is the remnant of $Y \in \mathcal{D}$. By way of contradiction, suppose $B \supset C$. Then, $Y \supset C$, and so $R \cap C = \emptyset$; this is impossible since $C \subseteq B \subseteq R$ and $C$ is nonempty. Thus, $\mathcal{D}'$ properly extends $\mathcal{D}$.
\end{proof}
\begin{lemma}\label{lm:extend.partial.disintegration}
Suppose $\phi$ is a partial disintegration of $L^p(\Omega)$, and suppose $\mathcal{D} \subseteq L^p(\Omega)$ is a finite simple lower semilattice of vectors in $L^p(\Omega)$ that properly extends $\operatorname{ran}(\phi)$. Then,
$\phi$ extends to a partial disintegration $\psi$ of $L^p(\Omega)$ with range $\mathcal{D} - \{\mathbf{0}\}$.
\end{lemma}
\begin{proof}
Let $S = \operatorname{dom}(\phi)$. By induction, we can assume $\#(\mathcal{D} - \operatorname{ran}\phi)=1$. Suppose $f$ is the unique element of $\mathcal{D}- \operatorname{ran}(\phi)$. Since $g\not\preceq f$ for all $g\in\operatorname{ran}\phi$, precisely two cases arise. The first is $f\not\preceq g$ for all $g\in \operatorname{ran}\phi$. For this case, we let
\[
t=\max\{z\in\mathbb{N}: (z) \in S\},
\]
set $S^\prime=S\cup\{(t+1)\}$, and define $\psi:S^\prime \to L^p(\Omega)$ by
\[
\psi(\nu) = \left\{\begin{array}{cc}
f & \nu = (t+1)\\
\phi(\nu) & \nu \neq (t+1)
\end{array}
\right.
\]
By choice of $t$ and the incomparability of $(t+1)$ with each element of $S$, $\psi$ is an injective antitone map. That $\psi$ is strong follows from the incomparability of $f$ with any element of $\operatorname{ran}\phi$. Furthermore, $S^\prime \cup \{\emptyset\}$ is a finite subtree of $\mathbb{N}^*$, so $\psi$ is a partial disintegration onto $\mathcal{D}$ that extends $\phi$.
The other case is that there exists $g\in \operatorname{ran}(\phi)$ so that $f\preceq g$. Since $S$ is finite, there is a $\preceq$-minimal vector $g \in \operatorname{ran}(\phi)$ so that $f \preceq g$. Since $\operatorname{ran}(\phi)$ is simple, and since $f$ is nonzero, $g$ is unique.
Note that $f$ is incomparable with every element $h$ of $\operatorname{ran}(\phi)$ so that $g \not \preceq h$.
We let:
\begin{eqnarray*}
t & = & \max\{z\in\mathbb{N}\ :\ \phi^{-1}(g)^\frown(z) \in S\} \\
S^\prime & = & S\cup\{\phi^{-1}(g)^\frown(t+1)\} \\
\end{eqnarray*}
For all $\nu \in S'$, let
\[
\psi(\nu) = \left\{\begin{array}{cc}
f & \nu \in S' - S\\
\phi(\nu) & \nu \in S
\end{array}
\right.
\]
By our choice of $g$ and $t$, $\psi$ is an injective antitone map. That $\psi$ is strong follows from the incomparability of $\phi^{-1}(g)^\frown(t+1)$ with every element $\mu$ of $S$ so that $\mu \not \subseteq \phi^{-1}(g)$. The set $S^\prime$ is also a finite orchard, so $\psi$ is a partial disintegration onto $\mathcal{D}$ which extends $\phi$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:extend.partial.disint}:]
Let $\mathcal{R} = \{R_0, R_1, \ldots\}$ be a countable dense set of measurable sets. We build a set $\mathcal{D} \supseteq \operatorname{ran}(\phi)$ that satisfies
the hypotheses of Lemma \ref{lm:density.vectors}. To ensure this, we ensure that each set in $\mathcal{R}$ belongs to the upper semilattice generated by the supports of the vectors in $\mathcal{D}$.
We construct $\mathcal{D}$ by defining a sequence $\mathcal{D}_0 \subseteq \mathcal{D}_1 \subseteq \ldots$
and setting $\mathcal{D} = \bigcup_n \mathcal{D}_n$. To begin, set $\mathcal{D}_0 = \operatorname{ran}(\phi) \cup \{\mathbf{0}\}$.
Let $n \in \mathbb{N}$, and suppose $\mathcal{D}_n$ has been defined. Let $\mathcal{F} = \{\operatorname{supp}(f)\ :\ f \in \mathcal{D}_n\}$. By way of induction, suppose $\mathcal{D}_n$ is a simple lower semilattice. Thus, $\mathcal{F}$ is a simple lower semilattice of measurable sets. By Lemma \ref{lm:semilattice.adjoin},
there is a finite simple lower semilattice $\mathcal{F}' \supseteq \mathcal{F}$ so that $R_n$ belongs to the upper semilattice generated by
$\mathcal{F}'$. Let $h_1 = \bigvee \mathcal{D}_n$. Let $h_2 = \chi_{R_n - \operatorname{supp}(h_1)}$. Let
$\mathcal{D}_{n+1} = \{(h_1 + h_2) \cdot \chi_S\ :\ S \in \mathcal{F}'\}$. Thus, since $\mathcal{F}'$ is a simple lower semilattice, $\mathcal{D}_{n+1}$ is a simple
lower semilattice under $\preceq$. We claim that $\mathcal{D}_n \subseteq \mathcal{D}_{n+1}$.
For, let $f \in \mathcal{D}_n$. Thus, $S := \operatorname{supp}(f) \in \mathcal{F}$.
So, $(h_1 + h_2)\cdot \chi_S \in \mathcal{D}_{n+1}$. But, $(h_1 + h_2) \cdot \chi_S = h_1 \cdot \chi_S = f$.
So, it follows from Lemma \ref{lm:density.vectors} that the linear span of $\mathcal{D}$ is dense in $L^p(\Omega)$. So, there exists
a finite $S\subseteq \mathcal{D} - \{\mathbf{0}\}$ so that $d(f, \langle S\rangle) < 2^{-k}$ for every $f \in F$.
We can assume $S = \mathcal{D}_m - \{\mathbf{0}\}$ for some sufficiently large $m$; then $S$ is a finite simple lower semilattice that properly extends $\operatorname{ran}(\phi)$. We can now apply Lemma \ref{lm:extend.partial.disintegration}.
\end{proof}
\subsection{Approximating separating antitone maps}\label{subsec:approx}
We show that the $\sigma$ functional defined in Section \ref{sec:background} can be used to estimate distance to the nearest separating antitone map.
\begin{theorem}\label{thm:sigma.estimate}
Suppose $\Omega$ is a measure space and $p$ is a real so that
$p \geq 1$ and $p \neq 2$. Suppose $\phi : S \rightarrow L^p(\Omega)$ is a partial disintegration of $L^p(\Omega)$, and $\psi : S' \rightarrow L^p(\Omega)$ where $S' \supseteq S$ is a finite orchard so that each $\nu \in S' - S$ is a descendant of a node in $S$. Then, there is a separating antitone map $\psi' : S' \rightarrow L^p(\Omega)$ so that
\begin{equation}
\norm{\psi' - \psi}_{S'}^p \leq \norm{\phi - \psi |_S}^p_S + 2^p\sigma(\phi \cup (\psi|_{S' - S})). \label{eqn:sigma.estimate}
\end{equation}
\end{theorem}
\begin{proof}
Let $\Delta = S' - S$. Let $\psi_0 = \phi \cup \psi|_\Delta$.
Let
\[
\hat{\sigma}(\psi_0)=\sum_{\nu|\nu^\prime}\min\{|\psi_0(\nu)|^p,|\psi_0(\nu^\prime)|^p\}
+\sum_{\nu^\prime\supset \nu}\min\{|\psi_0(\nu^\prime)-\psi_0(\nu)|^p,|\psi_0(\nu^\prime)|^p\}
\]
where $\nu,\nu^\prime$ range over $S'$.
When $\nu \in \Delta$, define the \emph{nullifiable set of $\nu$} to be the set of all $t \in X$ so that
$|\psi(\mu)(t)|^p \leq \hat{\sigma}(\psi_0)(t)$ for some $\mu \subseteq \nu$ that belongs to $\Delta$.
Denote the nullifiable set of $\nu$ by $N_\nu$. When $\nu \in \Delta$, define the \emph{source node of $\nu$}
to be the maximal $\mu \subseteq \nu$ so that $\mu \in S$. Thus, each $\nu \in \Delta$ has a source node.
Let $\nu \in S'$. If $\nu \in S$, then define $\psi'(\nu)$ to be $\phi(\nu)$. If $\nu \in \Delta$, then define
$\psi'(\nu)$ to be $\phi(\mu)\cdot(1 - \chi_{N_\nu})$ where $\mu$ is the source node of $\nu$.
Note that $N_\nu \subseteq N_{\nu'}$ if $\nu, \nu' \in \Delta$ and $\nu \subseteq \nu'$. Thus,
$\psi'$ is antitone.
Suppose $\nu, \nu' \in \Delta$ are incomparable. Suppose $t \not \in N_\nu$. Then $|\psi(\nu)(t)|^p > \hat{\sigma}(\psi_0)(t)$. Since $\min\{|\psi(\nu)(t)|^p, |\psi(\nu')(t)|^p\} \leq \hat{\sigma}(\psi_0)(t)$,
\[
|\psi(\nu')(t)|^p = \min\{|\psi(\nu)(t)|^p, |\psi(\nu')(t)|^p\} \leq \hat{\sigma}(\psi_0)(t).
\]
Hence,
$t \in N_{\nu'}$. Thus, $1 - \chi_{N_\nu}$ and $1 - \chi_{N_{\nu'}}$ are disjointly supported.
So, suppose $\nu, \nu' \in S'$ are incomparable. If either $\nu, \nu' \in S$ or if $\nu, \nu' \in \Delta$, then
$\psi'(\nu)$ and $\psi'(\nu')$ are incomparable. Suppose $\nu \in S$ and $\nu' \in \Delta$. Let $\mu$
denote the source node of $\nu'$. Then, $\mu \not \subset \nu$ and $\nu \not \subseteq \mu$.
Thus, $\mu$, $\nu$ are incomparable and so $\psi'(\mu)$ and $\psi'(\nu)$ are disjointly supported.
Thus, $\psi'(\nu')$ and $\psi'(\nu)$ are disjointly supported.
Now, note that
\[
\norm{\psi - \psi'}_{S'}^p \leq \norm{\phi - \psi|_S}_S^p + \norm{(\psi - \psi')|_\Delta}_\Delta^p.
\]
Suppose $\nu \in \Delta$ and $t \in X$. We claim that
$|\psi(\nu)(t) - \psi'(\nu)(t)|^p \leq 2^p \hat{\sigma}(\psi_0)(t)$. For, suppose $t \not \in N_\nu$.
Then, $\psi'(\nu)(t) = \phi(\mu)(t)$ where $\mu$ is the source node of $\nu$. Also,
$|\psi(\nu)(t)|^p > \hat{\sigma}(\psi_0)(t)$. So,
$|\psi(\nu)(t)|^p > \min\{| \phi(\mu)(t) - \psi(\nu)(t)|^p, |\psi(\nu)(t)|^p\}$.
Thus,
\[
|\phi(\mu)(t) - \psi(\nu)(t)|^p = \min\{| \phi(\mu)(t) - \psi(\nu)(t)|^p, |\psi(\nu)(t)|^p\} \leq \hat{\sigma}(\psi_0)(t) \leq 2^p \hat{\sigma}(\psi_0)(t).
\]
Suppose $t \in N_\nu$. Then, $\psi'(\nu)(t) = 0$. There exists $\mu' \subseteq \nu$ that belongs to $\Delta$ so that
$|\psi(\mu')(t)|^p \leq \hat{\sigma}(\psi_0)(t)$. Without loss of generality, suppose
$|\psi(\nu)(t)|^p > \hat{\sigma}(\psi_0)(t)$. So, $|\psi(\nu)(t)|^p > \min\{|\psi(\mu')(t) - \psi(\nu)(t)|^p, |\psi(\nu)(t)|^p\}$. Therefore, $|\psi(\mu')(t) - \psi(\nu)(t)|^p \leq \hat{\sigma}(\psi_0)(t)$. So,
$|\psi(\nu)(t)|^p \leq 2^p\hat{\sigma}(\psi_0)(t)$ since $|a + b|^p \leq 2^{p-1}(|a|^p + |b|^p)$.
\end{proof}
\section{Computable world}\label{sec:computable}
We now have all the pieces in place to prove Theorems \ref{thm:lifting.comp} and \ref{thm:comp.disint.Lp01}.
\begin{proof}[Proof of Theorem \ref{thm:lifting.comp}:]
Suppose $\phi_1$ is a computable disintegration of $L^p(\Omega_1)^\#$ and that
$\phi_2$ is a computable disintegration of $L^p(\Omega_2)^\#$ that is computably isomorphic to $\phi_1$.
The domain of $\phi_j$ is c.e., so there is a computable surjection $G_j'$ of $\mathbb{N}$ onto $\operatorname{dom}(\phi_j)$.
Let $G_j = \phi_j \circ G_j'$. Thus, $G_j$ is a structure on $L^p(\Omega_j)$. So, let
$L^p(\Omega_j)^+ = (L^p(\Omega_j), G_j)$. Since $\phi_j$ is a computable disintegration of $L^p(\Omega_j)^\#$, it follows that $L^p(\Omega_j)^+$ is a computable presentation of $L^p(\Omega_j)$ and that the identity map is a computable map of $L^p(\Omega_j)^+$ onto $L^p(\Omega_j)^\#$.
Let $f$ be a computable isomorphism of $\phi_1$ with $\phi_2$. Thus, by Theorem \ref{thm:extension.isomorphism}, there is a unique linear isometric map of $L^p(\Omega_1)$ onto $L^p(\Omega_2)$ that lifts $f$; denote this map by $T$.
Since $T$ lifts $f$, it follows that $\{T(G_1(n))\}_n$ is a computable sequence of $L^p(\Omega_2)^+$.
Since $T$ is bounded, it follows that
$T$ is a computable map of $L^p(\Omega_1)^+$ onto $L^p(\Omega_2)^+$.
Thus, there is a computable linear isometry of $L^p(\Omega_1)^\#$ onto $L^p(\Omega_2)^\#$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:comp.disint.Lp01}:]
Let $S = \operatorname{dom}(\phi)$. Every c.e. tree is computably isomorphic to a computable tree. So, without loss of generality, we assume $S$ is computable.
Set $I(\lambda) = [0,1]$.
Suppose $\nu \in S$, and let $\nu_0 <_{lex} \nu_1 <_{lex} \ldots$ be the children of $\nu$ in $S$.
For each $n$, set
\[
I(\nu_n) = \left[ \min I(\nu) + \sum_{j < n} \norm{\phi(\nu_j)}_p^p,\ \min I(\nu) + \sum_{j \leq n} \norm{\phi(\nu_j)}_p^p \right].
\]
Since the $\phi(\nu_j)$ are disjointly supported and $\phi(\nu_j) \preceq \phi(\nu)$, we have $\sum_{j \leq n} \norm{\phi(\nu_j)}_p^p \leq \norm{\phi(\nu)}_p^p$; since $\norm{\phi(\lambda)}_p = 1$, it follows by induction that $I(\nu) \subseteq [0,1]$ for all $\nu$.
Set $\psi(\nu) = \chi_{I(\nu)}$. It follows that $\psi$ is a separating antitone map.
It also follows that $\psi$ is computable, and that the identity map gives a computable isomorphism of $\psi$ with $\phi$. Thus, by Proposition \ref{prop:iso.int.disint}, $\psi$ is a disintegration.
\end{proof}
To prove Theorem \ref{thm:disint.comp}, we augment the classical material developed so far with the following three lemmas. The third lemma requires the notion of a success index which we define now.
\begin{definition}\label{def:success.index}
Suppose $L^p(\Omega)^\# = (L^p(\Omega), R)$ is a presentation.
Let $S$ be a finite orchard, and let $\psi : S \rightarrow L^p(\Omega)$. The \emph{success index of $\psi$} is the largest integer $N$ so that
$d(R(j), \langle \operatorname{ran}(\psi) \rangle) < 2^{-N}$ whenever $0 \leq j < N$ and
\[
\norm{ \psi(\nu) - \sum_{\nu' \in \nu^+ \cap S} \psi(\nu')}_p < 2^{-N}
\]
whenever $\nu$ is a nonterminal node of $S$.
\end{definition}
The success index of an antitone separating map can be viewed as a measure of how close it is to being a
disintegration. Antitone separating maps with larger success indices are closer to being disintegrations.
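For example, suppose $L^p[0,1]^\# = (L^p[0,1], R)$ is a presentation, and suppose $\psi$ is defined on the orchard $S = \{(0), (0,0), (0,1)\}$ by $\psi((0)) = \chi_{[0,1]}$, $\psi((0,0)) = \chi_{[0,1/2)}$, and $\psi((0,1)) = \chi_{[1/2,1]}$. The only nonterminal node of $S$ is $(0)$, and $\psi((0)) - (\psi((0,0)) + \psi((0,1))) = \mathbf{0}$; so, the success index of $\psi$ is the largest $N$ so that $d(R(j), \langle \operatorname{ran}(\psi) \rangle) < 2^{-N}$ whenever $0 \leq j < N$.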
\begin{lemma}\label{lm:c.e.open}
Suppose $\mathcal{B}$ is a Banach space and let $\mathcal{B}^\#$ be a computable presentation of $\mathcal{B}$. Let $S$ be a finite set of nodes.
\begin{enumerate}
\item The set of all injective maps in $\mathcal{B}^S$ is a c.e. open subset of $(\mathcal{B}^S)^\#$; furthermore, an index of this set can be computed from $S$. \label{lm:c.e.open::itm:injective}
\item Suppose $v_0, \ldots, v_k$ are computable vectors of $\mathcal{B}^\#$ and $N \in \mathbb{N}$.
Then, the set of all $\psi \in \mathcal{B}^S$ so that
$d(v_j, \langle \operatorname{ran}(\psi) \rangle) < 2^{-N}$ whenever $0 \leq j \leq k$ is a c.e. open subset of
$(\mathcal{B}^S)^\#$; furthermore an index of this set can be computed from $N, S$ and indices of $v_0, \ldots, v_k$. \label{lm:c.e.open::itm:distance}
\item Suppose $N \in \mathbb{N}$ and $S$ is an orchard. Then, the set of all $\psi \in \mathcal{B}^S$ so that
\[
\norm{\psi(\nu) - \sum_{\mu \in \nu^+ \cap S} \psi(\mu) } < 2^{-N}
\]
for every nonterminal node $\nu$ of $S$ is a c.e. open subset of $(\mathcal{B}^S)^\#$; furthermore, an index of this set can be computed from $N$ and $S$.
\label{lm:c.e.open::itm:summative}
\end{enumerate}
\end{lemma}
\begin{proof}
We will repeatedly use the following well-known fact: if $U,V$ are c.e. open subsets of $(\mathcal{B}^S)^\#$, then $U \cap V$ is a c.e. open subset of $(\mathcal{B}^S)^\#$ and an index of $U \cap V$ can be computed from indices of $U,V$.\\
(\ref{lm:c.e.open::itm:injective}): Let $\mathcal{S}_1$ denote the set of all injective maps in $\mathcal{B}^S$.
Suppose $\nu, \nu' \in S$ are distinct. Define $G_{\nu, \nu'} : \mathcal{B}^S \rightarrow \mathcal{B}$ by
\[
G_{\nu, \nu'}(\psi) = \psi(\nu) - \psi(\nu').
\]
Then, $G_{\nu, \nu'}$ is a computable map of $(\mathcal{B}^S)^\#$ into $\mathcal{B}$; furthermore an
index of $G_{\nu, \nu'}$ can be computed from $S$, $\nu$, and $\nu'$.
The set of nonzero vectors in $\mathcal{B}$ is a c.e. open subset of $\mathcal{B}^\#$.
So, by Proposition \ref{prop:preimage.c.e.open}, $U_{\nu, \nu'} := G_{\nu, \nu'}^{-1}[\mathcal{B} - \{\mathbf{0}\}]$ is a c.e. open subset of $(\mathcal{B}^S)^\#$; furthermore an index of $U_{\nu, \nu'}$ can be computed from
$\nu$, $\nu'$, and $S$.
Since $\mathcal{S}_1 = \bigcap_{\nu, \nu'} U_{\nu, \nu'}$, it follows that
$\mathcal{S}_1$ is a c.e. open subset of $(\mathcal{B}^S)^\#$ and that an index of $\mathcal{S}_1$
can be computed from $S$.\\
(\ref{lm:c.e.open::itm:distance}):
By considering intersections, it suffices to consider the case where $k = 0$.
Let $\mathcal{S}_2$ denote the set of all $\psi \in \mathcal{B}^S$ so that $d(v_0, \langle \operatorname{ran}(\psi) \rangle) < 2^{-N}$.
Observe that $\psi \in \mathcal{S}_2$ if and only if there exists a map $\beta: S \to \mathbb{Q}(i)$ so that
\begin{equation}
\norm{v_0 - \sum_{\nu \in S} \beta(\nu)\psi(\nu)}_p < 2^{-N}.\label{ineq:beta}
\end{equation}
For each such map $\beta$, define $F_\beta : \mathcal{B}^S \rightarrow \mathcal{B}$ by
\[
F_\beta(\psi) = \sum_{\nu \in S} \beta(\nu) \psi(\nu).
\]
Then, $F_\beta$ is a computable map of $(\mathcal{B}^S)^\#$ into $\mathcal{B}$; furthermore an index
of $F_\beta$ can be computed from $S$ and $\beta$.
Thus, by Proposition \ref{prop:preimage.c.e.open}, $V_\beta := F_\beta^{-1}(B(v_0, 2^{-N}))$ is a c.e.
open subset of $(\mathcal{B}^S)^\#$; furthermore an index of $V_\beta$ can be computed from $\beta$, $N$, $S$, and an index of $v_0$. Since $\mathcal{S}_2 = \bigcup_\beta V_\beta$, it follows that $\mathcal{S}_2$ is a c.e. open subset of $(\mathcal{B}^S)^\#$ and that an index of $\mathcal{S}_2$ can be computed from $S$, $N$, and an index of $v_0$.\\
(\ref{lm:c.e.open::itm:summative}):
Now, suppose $S$ is an orchard.
Let $\mathcal{S}_3$ denote the set of all $\psi \in (\mathcal{B}^S)^\#$ so that for every nonterminal node $\nu$ of $S$
\[
\norm{\psi(\nu) - \sum_{\nu' \in \nu^+ \cap S} \psi(\nu')} < 2^{-N}.
\]
Fix a nonterminal node $\nu$ of $S$.
Define a map $F_\nu : \mathcal{B}^S \rightarrow \mathcal{B}$ by
\[
F_\nu(\psi) = \psi(\nu) - \sum_{\nu' \in \nu^+ \cap S} \psi(\nu').
\]
Then, $F_\nu$ is a computable map of $(\mathcal{B}^S)^\#$ into $\mathcal{B}$; furthermore an index of
$F_\nu$ can be computed from $\nu$ and $S$.
Thus, $W_\nu := F_\nu^{-1}(B(\mathbf{0}, 2^{-N}))$ is a c.e. open subset of $(\mathcal{B}^S)^\#$, and an
index of $W_\nu$ can be computed from $S$, $N$, and $\nu$.
Since $\mathcal{S}_3 = \bigcap_\nu W_\nu$, $\mathcal{S}_3$ is a c.e. open subset of
$(\mathcal{B}^S)^\#$ and an index of $\mathcal{S}_3$ can be computed from $S$ and $N$.
\end{proof}
\begin{lemma}\label{lm:sep.anti.c.e.closed}
Suppose $p \geq 1$ is a computable real so that $p \neq 2$, and suppose $L^p(\Omega)^\#$ is a computable presentation of $L^p(\Omega)$. Let $S$ be a finite orchard. Then, the set of all separating antitone maps in $L^p(\Omega)^S$ is a c.e. closed subset of $(L^p(\Omega)^S)^\#$. Furthermore, an index of this set can be computed from $S$.
\end{lemma}
\begin{proof}
Let $\mathcal{H}$ denote the set of all separating antitone maps in $L^p(\Omega)^S$. For each
$\psi \in L^p(\Omega)^S$, let $f(\psi) = 2\sigma(\psi)^{1/p}$. Thus, $f$ is a computable nonnegative function from
$(L^p(\Omega)^S)^\#$ into $\mathbb{C}$; furthermore, an index of $f$ can be computed from $S$. By Theorem \ref{thm:sigma.estimate}, $f(\psi) \geq d(\psi, \mathcal{H})$. It follows from Corollary \ref{cor:sigma} that
$\mathcal{H} = f^{-1}[\{0\}]$. So, by Proposition \ref{prop:bounding}, $\mathcal{H}$ is a c.e. closed
subset of $(L^p(\Omega)^S)^\#$ and an index of $\mathcal{H}$ can be computed from $S$.
\end{proof}
\begin{lemma}\label{lm:approx.extension}
Suppose $p \geq 1$ is a computable real so that $p \neq 2$. Let $\Omega$ be a separable measure space, and let $L^p(\Omega)^\#$ be a computable presentation of $L^p(\Omega)$. Assume $\phi : S \rightarrow L^p(\Omega)$ is a computable partial disintegration of $L^p(\Omega)^\#$ whose success index is at least $n_1$. Then, for every $k,n \in \mathbb{N}$, there is a computable partial disintegration $\psi$ of $L^p(\Omega)^\#$ so that
$\operatorname{dom}(\psi) \supseteq S$, $\norm{\psi|_S - \phi}_S < 2^{-k}$, the success index of $\psi$ is at least $n$,
and the success index of $\psi|_S$ is at least $n_1$. Furthermore, $\operatorname{dom}(\psi)$ and an index of $\psi$ can be computed from $k,n$, and an index of $\phi$.
\end{lemma}
\begin{proof}
For the moment, fix a finite orchard $S'$. Let $U_{S'}$ denote the set of all injective maps in $L^p(\Omega)^{S'}$. Let $V_{S', n}$ denote the set of all maps on $S'$ whose success index is at least $n$.
Let $\mathcal{H}_{S'}$ denote the set of all separating antitone maps in $L^p(\Omega)^{S'}$.
By Lemma \ref{lm:c.e.open}, $U_{S'}$ and $V_{S', n}$ are c.e. open subsets of $(L^p(\Omega)^{S'})^\#$ and
indices of these sets can be computed from $S'$, $n$. By Lemma \ref{lm:sep.anti.c.e.closed}, $\mathcal{H}_{S'}$ is a c.e. closed subset of $(L^p(\Omega)^{S'})^\#$ and an index of $\mathcal{H}_{S'}$ can be computed from $S'$.
When $S' \supseteq S$, let $\pi_{S'}$ denote the canonical projection of $L^p(\Omega)^{S'}$ onto
$L^p(\Omega)^S$, and let
\[
C_{S'} = U_{S'} \cap V_{S',n} \cap \pi^{-1}_{S'}[B(\phi; 2^{-k}) \cap V_{S, n_1}] \cap \mathcal{H}_{S'}.
\]
By Theorem \ref{thm:extend.partial.disint}, there \emph{is} an $S'$ so that
$C_{S'} \neq \emptyset$. Such an $S'$ can be found by an effective search procedure. By Proposition \ref{prop:comp.point},
$C_{S'}$ contains a computable vector $\psi$ of $(L^p(\Omega)^{S'})^\#$ and an index of $\psi$ can be computed from $k$, $n$, and an index of $\phi$.
\end{proof}
We are now ready to prove Theorem \ref{thm:disint.comp}.
\begin{proof}[Proof of Theorem \ref{thm:disint.comp}:]
Suppose $L^p(\Omega)^\# = (L^p(\Omega), R)$.
Set $S_0 = \{(0)\}$. Since $\Omega$ is nonzero, $R(j_0) \neq \mathbf{0}$ for some $j_0$; such a number $j_0$ can be computed by a search procedure. Set $\hat{\phi}_0((0)) = R(j_0)$.
By Lemma \ref{lm:c.e.open} we can compute $k_0 \in \mathbb{N}$ so that every map in $B(\hat{\phi}_0; 2^{-k_0})$
is injective and never $0$.
It now follows from Lemma \ref{lm:approx.extension} that there is a sequence $\{\hat{\phi}_n\}_n$ of
computable partial disintegrations of $L^p(\Omega)^\#$ and a computable sequence $\{k_n\}_n$ of
nonnegative integers that have the following properties.
\begin{enumerate}
\item An index of $\hat{\phi}_n$ and a canonical index of $\operatorname{dom}(\hat{\phi}_n)$ can be computed from $n$.
\item If $S_n =\operatorname{dom}(\hat{\phi}_n)$, then $S_n \subseteq S_{n+1}$, $k_{n+1} > k_n$, and $\norm{ \hat{\phi}_{n+1}|_{S_n} - \hat{\phi}_n}_{S_n} < 2^{-(k_n + 1)}$.
\item Each map in $B(\hat{\phi}_n; 2^{-k_n})$ is injective, never zero, and has a success index that is at least $n$.
\end{enumerate}
So, let $\phi_{n,t} = \hat{\phi}_{t + n} |_{S_n}$ for all $n,t$. It follows that $\{\phi_{n,t}\}_t$ is a computable
sequence of $(L^p(\Omega)^{S_n})^\#$; furthermore, an index of this sequence can be computed from $n$.
It also follows that $\norm{\phi_{n,t+1} - \phi_{n,t}}_{S_n} < 2^{-(k_{n+t} + 1)} \leq 2^{-(t+1)}$. Thus, by Proposition \ref{prop:eff.cauchy}, $\phi_n := \lim_t \phi_{n,t}$ is a computable vector of $(L^p(\Omega)^{S_n})^\#$; furthermore, an index of $\phi_n$ can be computed from $n$. Also, $\norm{\hat{\phi}_n - \phi_n}_{S_n} \leq \sum_t 2^{-(k_{n+t} + 1)} \leq 2^{-k_n}$. Since each $\phi_{n,t}$ is a separating antitone map and the separating antitone maps form a closed subset of $L^p(\Omega)^{S_n}$, it follows that $\phi_n$ is a partial disintegration whose success index is at least $n$. Since $S_n \subseteq S_{n+1}$, $\phi_{n,t+1} \subseteq \phi_{n+1,t}$. Thus, $\phi_n \subseteq \phi_{n+1}$. Let $\phi = \bigcup_n \phi_n$.
The only thing that prevents $\phi$ from being a disintegration is that $\lambda \not \in \operatorname{dom}(\phi)$.
We fix this as follows. Let $S = \operatorname{dom}(\phi)$. For each $\nu \in S$, let
\[
\psi(\nu) = 2^{-\nu(0)} \norm{\phi((\nu(0)))}_p^{-1} \phi(\nu).
\]
Then, let
\[
\psi(\lambda) = \sum_{\nu \in \mathbb{N}^1 \cap S} \psi(\nu).
\]
Since $S$ is computable, it follows that $\psi(\lambda)$ is a computable vector of $L^p(\Omega)^\#$. It then follows that $\psi$ is a computable disintegration of $L^p(\Omega)^\#$.
\end{proof}
\section{A comparison of arguments for $\ell^p$ and $L^p(\Omega)$ spaces}\label{sec:comparison}
Here, we discuss why arguments previously used to show that certain $\ell^p$ spaces are not computably
categorical can not be applied to $L^p[0,1]$. We then discuss why our techniques for $L^p$ spaces of
non-atomic measure spaces can not be applied to $\ell^p$ spaces.
We begin by discussing why arguments for $\ell^p$ spaces can not be generalized to $L^p[0,1]$. As mentioned in Subsection \ref{subsec:survey.prior}, Pour-El and Richards proved that $\ell^1$ is not computably categorical. Their proof rests on an observation about the extreme points of the unit ball in $\ell^1$. However,
the unit ball in $L^1[0,1]$ does not have extreme points. Later, McNicholl showed that $\ell^p$ is computably categorical only when $p = 2$. His proof utilizes the Banach-Lamperti characterization of the isometries of $\ell^p$, which extends to $L^p$ spaces of $\sigma$-finite measure spaces. However, it also uses the fact that $\ell^p$ has a disjointly supported Schauder basis which $L^p[0,1]$ does not.
We now discuss why our arguments for $L^p$ spaces can not be applied to $\ell^p$ spaces.
In particular, we look at the three key steps stated in Section \ref{sec:overview}.
Theorems \ref{thm:lifting.comp} and \ref{thm:disint.comp} do not assume the underlying measure spaces
are non-atomic. But, Theorem \ref{thm:comp.disint.Lp01} does. And, the construction in \cite{McNicholl.Stull.2016} shows that when $p \neq 2$ there is a computable presentation $\mathcal{B}^\#$ of $\ell^p$ and a computable disintegration $\phi$ of $\mathcal{B}^\#$ that is not computably isomorphic to any computable
disintegration of the standard presentation of $\ell^p$. Thus, the analogue of Theorem \ref{thm:comp.disint.Lp01} fails when the underlying measure space is purely atomic.
\section{Relative computable categoricity}\label{sec:rcc}
We begin by defining what we mean by the diagram of a presentation of a Banach space. Our approach parallels that in \cite{Greenberg.Knight.Melnikov.Turetsky.2016}.
Suppose $\mathcal{B}$ is a Banach space and $\mathcal{B}^\# = (\mathcal{B}, R)$ is a presentation of
$\mathcal{B}$. We define the \emph{lower diagram} of $\mathcal{B}^\#$ to be the
set of all pairs $(v, r)$ where $v$ is a rational vector of $\mathcal{B}^\#$ and $\norm{v} < r$.
We define the \emph{upper diagram} of $\mathcal{B}^\#$ to be the set of all pairs $(v,r)$ so that $v$ is a rational vector of $\mathcal{B}^\#$ and $\norm{v} > r$. We define the \emph{diagram} of $\mathcal{B}^\#$ to be the union of the upper and lower diagrams of $\mathcal{B}^\#$.
Suppose $\Omega$ is a nonzero, separable, and non-atomic measure space, and let $L^p(\Omega)^\#$ be a
presentation of $L^p(\Omega)$. Our proofs are uniform enough to show that the diagram of $L^p(\Omega)^\#$ computes a linear isometry of $L^p(\Omega)^\#$ onto the standard presentation of $L^p[0,1]$.
Thus, we have proven the following.
\begin{theorem}\label{thm:rcc}
If $L^p(\Omega)$ is computably presentable, and if $\Omega$ is non-atomic and separable, then
$L^p(\Omega)$ is relatively computably categorical.
\end{theorem}
\section{Computable measure spaces}\label{sec:comp.msr.spaces}
We begin by defining what we mean by a computable presentation of a measure space. Our approach parallels that in \cite{Ding.Weihrauch.Wu.2009}.
To begin, suppose $\mathcal{R}$ is a ring of sets. A \emph{structure} on $\mathcal{R}$ is a map of $\mathbb{N}$ onto $\mathcal{R}$. If $R$ is a structure on $\mathcal{R}$, the pair $(\mathcal{R}, R)$ is called a \emph{presentation} of $\mathcal{R}$.
Now, suppose $\Omega = (X, \mathcal{M}, \mu)$ is a measure space. A \emph{structure} on $\Omega$ is a
structure on a ring that generates $\mathcal{M}$ and whose members have finite measure. If
$R$ is a structure on $\Omega$, then the pair $(\Omega, R)$ is called a presentation of $\Omega$.
Suppose $\mathcal{R}^\# = (\mathcal{R}, R)$ is a presentation of a ring of sets.
We say that $\mathcal{R}^\#$ is a \emph{computable presentation of $\mathcal{R}$} if there are computable functions $f,g$ from $\mathbb{N}^2$ into $\mathbb{N}$ so that for all $m,n \in \mathbb{N}$
\begin{eqnarray*}
R(n) \cup R(m) & = & R(f(n,m))\mbox{, and}\\
R(n) - R(m) & = & R(g(m,n)).
\end{eqnarray*}
Let $\Omega$ be a measure space. Suppose $R$ is a
structure on $\Omega$, and let $\mathcal{R} = \operatorname{ran}(R)$. We say that $\Omega^\# = (\Omega, R)$ is a \emph{computable presentation of $\Omega$} if $(\mathcal{R}, R)$ is a computable presentation
of $\mathcal{R}$ and if $\mu(R(n))$ can be computed from $n$; that is if there is an algorithm that given $n, k \in \mathbb{N}$ as input computes a rational number $q$ so that $|q - \mu(R(n))| < 2^{-k}$.
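For example, let $\Omega_0$ be $[0,1]$ with the Borel sets and Lebesgue measure, and let $R$ be a natural enumeration of the finite unions of subintervals of $[0,1]$ with rational endpoints (so that a canonical description of $R(n)$ can be computed from $n$). These sets form a ring that generates the Borel sets, unions and differences can be computed from canonical descriptions, and $\mu(R(n))$ is a rational number that can be computed from $n$. Thus, $(\Omega_0, R)$ is a computable presentation of $\Omega_0$.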
A measure space $\Omega = (X, \mathcal{M}, \mu)$ is said to be \emph{countably generated} if the $\sigma$-algebra $\mathcal{M}$ is generated by a countable collection of measurable sets each of which has finite measure; such a collection is said to \emph{generate} $\Omega$.
We have two key results.
\begin{theorem}\label{thm:comp.str.cms}
If $R$ is a computable structure on a measure space $\Omega$, and if $D_R(n) = \chi_{R(n)}$ for all $n \in \mathbb{N}$, then $(L^p(\Omega), D_R)$ is a computable presentation of $L^p(\Omega)$ for every
computable real $p \geq 1$.
\end{theorem}
\begin{theorem}\label{thm:non.comp.msr.space}
There is a countably generated measure space $\Omega$ that does not have a computable presentation
but so that $L^p(\Omega)$ has a computable presentation whenever $1 \leq p < \infty$ is computable.
\end{theorem}
To prove Theorem \ref{thm:comp.str.cms}, we need some preliminary material on measure spaces.
It is well-known that every countably generated measure space is separable but not conversely. In particular, the following is essentially Theorem A p. 168 of Halmos \cite{Halmos.1950}.
\begin{theorem}\label{thm:density.ring}
Suppose $\Omega = (X, \mathcal{M}, \mu)$ is a measure space and that
$\mathcal{G}$ is a countable set that generates $\Omega$. Then, the ring generated by $\mathcal{G}$ is dense in $\mathcal{M}$.
\end{theorem}
\begin{corollary}\label{cor:structure.1}
Suppose $\Omega = (X, \mathcal{M}, \mu)$ is a measure space that is generated by a countable set $\mathcal{G}$, and let $\mathcal{R}$ denote the ring generated by $\mathcal{G}$. Then,
the linear span of $\{\chi_R\ :\ R \in \mathcal{R}\}$ is dense in $L^p(\Omega)$.
\end{corollary}
\begin{proof}
Let $\mathcal{R}' = \{\chi_R\ :\ R \in \mathcal{R}\}$.
It follows from Theorem \ref{thm:density.ring} that $\chi_A$ lies in the subspace generated by $\mathcal{R}'$ whenever $A$ is a measurable set whose measure is finite. Thus, $s$ belongs to the subspace generated by $\mathcal{R}'$ whenever $s$ is a simple function whose support has finite measure.
Since these functions are dense in $L^p(\Omega)$, so is the linear span of $\mathcal{R}'$.
\end{proof}
Suppose $\Omega^\# = (\Omega, R)$ is a presentation of $\Omega$. Set $D_R(n) = \chi_{R(n)}$ for all $n \in \mathbb{N}$. It follows from Corollary
\ref{cor:structure.1} that $(L^p(\Omega), D_R)$ is a presentation of $L^p(\Omega)$ which we refer to as the \emph{induced presentation}. We are now ready to prove Theorem \ref{thm:comp.str.cms}.
\begin{proof}[Proof of Theorem \ref{thm:comp.str.cms}:]
It follows from Corollary \ref{cor:structure.1} that the linear span of \\
$\{\chi_{R(n)}\ |\ n \in \mathbb{N}\}$ is dense in $L^p(\Omega)$.
Suppose $\alpha_0, \ldots, \alpha_M \in \mathbb{Q}(i)$. For each $h \in \{0,1\}^{M+1}$ set:
\begin{eqnarray*}
S_h & = & \bigcap_{h(j) = 1} R(j) \cap \bigcap_{h(j) = 0} (X - R(j))\\
\beta_h & = & \sum_{h(j)=1} \alpha_j
\end{eqnarray*}
When $h$ is not identically $0$, $S_h$ can be written as $\bigcap_{h(j) = 1} R(j) - \bigcup_{h(j) = 0} R(j)$ and so belongs to the ring generated by $\{R(0), \ldots, R(M)\}$; since $R$ is a computable structure on $\Omega$, an index for $S_h$ as a member of $\mathcal{R}$, and hence $\mu(S_h)$, can be computed uniformly from $h$ for such $h$.
Note also that $S_{h_1} \cap S_{h_2} = \emptyset$ whenever $h_1, h_2$ are distinct, and that $\beta_h = 0$ when $h$ is identically $0$. Since
\[
\sum_{n = 0}^M \alpha_n \chi_{R(n)} = \sum_h \beta_h \chi_{S_h},
\]
it follows that
\[
\norm{\sum_{n = 0}^M \alpha_n \chi_{R(n)}}_p^p = \sum_{h \not\equiv 0} |\beta_h|^p \mu(S_h).
\]
Thus, $\norm{\sum_{n = 0}^M \alpha_n \chi_{R(n)}}_p$ can be computed uniformly from $M, \alpha_0, \ldots, \alpha_M$.
\end{proof}
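For illustration, the computation in the preceding proof can be carried out explicitly over the toy ring of unit intervals introduced above. The following sketch (a hypothetical illustration, assuming that bitmask encoding) evaluates the norm by summing $|\beta_h|^p \mu(S_h)$ over the disjoint pieces:
\begin{verbatim}
from itertools import product

def lp_norm(alphas, masks, p):
    # || sum_n alphas[n] * chi_{R(masks[n])} ||_p over the ring of
    # finite unions of unit intervals; each basic interval has measure 1.
    M = len(alphas)
    total = 0.0
    for h in product([0, 1], repeat=M):
        if not any(h):
            continue                     # beta_h = 0 when h is all zeros
        s = -1                           # start from "everything"
        for j in range(M):
            s &= masks[j] if h[j] else ~masks[j]   # build S_h
        beta = sum(a for a, hj in zip(alphas, h) if hj)
        total += abs(beta) ** p * bin(s).count("1")
    return total ** (1.0 / p)

# 2*chi_[0,2) - chi_[1,3): pieces [0,1) -> 2, [1,2) -> 1, [2,3) -> -1
print(lp_norm([2.0, -1.0], [0b011, 0b110], p=2.0))    # sqrt(6)
\end{verbatim}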
To prove Theorem \ref{thm:non.comp.msr.space} we will need the following observation.
\begin{proposition}\label{prop:lsc.measure}
Suppose $\Omega=(X, \mathcal{M}, \mu)$ is a finite measure space and $\Omega^\#$ is a computable
presentation of $\Omega$. Then, $\mu(X)$ is a lower semi-computable real.
\end{proposition}
\begin{proof}
Suppose $\Omega^\# = (\Omega, R)$, where $\Omega = (X, \mathcal{M}, \mu)$. Thus, $X = \bigcup_n R(n)$. Let:
\[
F_n = R(n) - \bigcup_{m < n} R(m).
\]
Thus, $F_0, F_1, \ldots$ are pairwise disjoint and $X = \bigcup_n F_n$. Furthermore,
$\mu(F_n)$ is computable uniformly from $n$. Thus, $\mu(X) = \sum_n \mu(F_n)$ is lower semi-computable.
\end{proof}
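Over the same toy ring, the following sketch (an illustration only) emits the increasing rational lower bounds $\sum_{n < N} \mu(F_n)$; for a finite measure space these increase to $\mu(X)$, while in the toy example they grow without bound since $\mu(X) = \infty$ there:
\begin{verbatim}
def lower_bounds(R, mu_R, N):
    # Partial sums of mu(F_n), where F_n = R(n) - union_{m<n} R(m).
    seen, total, bounds = 0, 0, []
    for n in range(N):
        F_n = R(n) & ~seen        # the "new" part of R(n)
        total += mu_R(F_n)        # computable: F_n lies in the ring
        seen |= R(n)
        bounds.append(total)
    return bounds

R = lambda n: (1 << (n + 1)) - 1          # R(n) = [0, n+1)
mu_R = lambda mask: bin(mask).count("1")
print(lower_bounds(R, mu_R, 5))           # [1, 2, 3, 4, 5]
\end{verbatim}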
\begin{proof}[Proof of Theorem \ref{thm:non.comp.msr.space}:]
Let $X = [0,1]$. Let $\mathcal{M}$ denote the $\sigma$-algebra generated by the dyadic
subintervals of $[0,1]$. Let $r$ be a positive real that is not lower semi-computable.
Whenever $A \in \mathcal{M}$, let $\mu(A) = r \cdot m(A)$ where $m$ denotes Lebesgue measure.
Thus, $\Omega := (X, \mathcal{M}, \mu)$ is a countably generated measure space.
Since $\mu(X) = r$, it follows from Proposition \ref{prop:lsc.measure} that $\Omega$ does not have a computable presentation.
Now, let $\{I_n\}_n$ be a standard enumeration of the dyadic subintervals of $[0,1]$, and let
$D(n) = r^{-1/p} \chi_{I_n}$. It follows that $D$ is a computable structure on $L^p(\Omega)$.
For, $\norm{D(n)}_p^p = r^{-1}\mu(I_n) = r^{-1} \cdot r\, m(I_n) = m(I_n)$, so $\norm{D(n)}_p = m(I_n)^{1/p}$; moreover, each sum of the form $\sum_{n = 0}^M \alpha_n D(n)$ can be effectively rewritten as a sum of the form $\sum_{j = 0}^k \beta_j D(n_j)$ where
$D(n_0), \ldots, D(n_k)$ are disjointly supported, whence its norm is $(\sum_{j = 0}^k |\beta_j|^p m(I_{n_j}))^{1/p}$.
\end{proof}
\section{Conclusion}\label{sec:conclusion}
A long-term goal of analytic computable structure theory should be to classify the computably categorical Banach spaces. Among the Banach spaces most encountered in practice in both pure and applied mathematics are the $L^p$ spaces. So, a nearer-term subgoal is to classify the computably categorical
$L^p$ spaces. As mentioned in the introduction, all such spaces must be separable, and therefore their underlying measure spaces must be separable. As shown in Section \ref{sec:comp.msr.spaces}, the computable presentability of an $L^p$ space does not imply the computable presentability of its underlying measure space.
When analyzing the computable categoricity of $L^p$ spaces, it makes sense to divide them into the $L^p$ spaces of separable atomic spaces and the $L^p$ spaces of the separable non-atomic spaces. Here, we have resolved the matter of the $L^p$ spaces of non-atomic measure spaces.
That leaves the atomic spaces to be considered. These can be divided into those that are purely atomic and those that are not. Every separable atomic space has countably many atoms. So, the purely atomic case has already been resolved; namely $\ell^p$ is computably categorical only when $p = 2$ and $\ell^p_n$ is computably categorical for all $p,n$ \cite{McNicholl.2015}, \cite{McNicholl.2016}, \cite{McNicholl.2016.1}. So, only the $L^p$ spaces of atomic but not purely atomic spaces remain to be examined, and a future paper will do so.
\section*{Acknowledgements}
The authors thank Ananda Weerasinghe for helpful discussions and Tyler Brown for proofreading.
\section{Introduction}
Power losses due to the turbine wake are reported to be as high as 20\% \citep{barthelmie2008flow}. In addition to ambient flow and turbine characteristics, e.g. the turbulence intensity of the incoming flow and the thrust and power coefficients of the turbine, the velocity deficit and turbulence intensity of the turbine wake are significantly affected by a low frequency, large-scale coherent oscillation, the so-called wake meandering. However, the wake meandering mechanism is still far from fully understood \citep{okulov2014regular, kang2014onset}, and low-order models \citep{larsen2008wake,yang2012computational} cannot take the wake meandering effects into account accurately. This poses a significant difficulty for wind farm control and optimization. In this study, we attempt to elucidate the dynamics of turbine wake meandering for different turbine operating conditions and assess the corresponding nacelle effects. \\
\indent Directly behind the turbine a system of helical vortices dominates the flow. As first proposed by \citet{joukowski1912vortex} for an $N$-bladed propeller, the wake consists of $N$ helical tip vortices, each with circulation $\Gamma$, shed from the tip of each turbine blade, and a single counter-rotating hub vortex of circulation $N\Gamma$ oriented along the centerline in the streamwise direction. If the circulation along the blade is not uniform, short-lived trailing vortices are also shed from the trailing edge of the blade. Tip vortices have been characterized by several experimental techniques (\cite{chamorro2009wind, hu2012dynamic, sherry2013interaction, hong2014natural}) and numerical studies (\cite{ivanell2009analysis, ivanell2010stability, troldborg2007actuator}). These studies have shown that the tip vortices convect downstream due to the relatively high speed flow near the blade tip and eventually break down, a process which depends on many factors, such as the turbulence in the incoming flow, the rotational speed, the geometry of the turbine blade and the interactions of the helical vortices. The theoretical work of \citet{widnall1972stability} on helical vortex instability showed that the tip vortex instability includes both short-wave and long-wave instabilities but is mainly caused by mutual induction because of the pitch of the helical structure. \citet{felli2011mechanisms} were able to observe all three instability modes for a propeller in a water tunnel, while \citet{sarmast2014mutual} showed the prevalence of the mutual induction in tip vortices using mode decomposition on tip vortices obtained from a numerical simulation.
Recently, \citet{yang2016coherent} employed large-eddy simulation (LES) in conjunction with large-scale snow particle image velocimetry (PIV) experiments to investigate the complexity of the coherent structures in the tip shear layer of a utility scale turbine and uncovered a new instability mode of the tip vortices.
The hub vortex has been studied less extensively than the tip vortices. \citet{iungo2013linear} used linear stability analysis on a model turbine and showed that the hub vortex is also unstable. \citet{felli2011mechanisms} visualized the hub vortex and witnessed its breakdown in relation to the tip vortices. \citet{okulov2007stability} performed a theoretical stability analysis and concluded that the system of tip vortices and a hub vortex is unconditionally unstable.
In recent geometry-resolving simulations of a hydrokinetic turbine \citep{kang2014onset} and a model wind turbine \citep{foti2016wake} using LES and the curvilinear immersed boundary method, it was shown that the helical hub vortex forms behind the nacelle and expands in an inner wake (a strong wake at the centerline of the turbine wake) formed by the nacelle. The inner wake recovers and expands quickly allowing the hub vortex to interact with the outer wake. The work of \citet{kang2014onset} and \citet{foti2016wake} further showed that such interaction of the hub vortex with the outer wake augments the intensity of wake meandering at far wake locations. \\
\indent \citet{medici2008measurements} first reported experimental observations of meandering in the far wake of a turbine. The oscillations were attributed to bluff-body vortex shedding effects. Similar low frequency oscillations were recorded by \citet{chamorro2013interaction}, \citet{okulov2014regular}, and \citet{howard2015statistics}. In these studies, a similar value of the Strouhal number (the non-dimensional frequency normalized by the rotor diameter and the incoming velocity at hub height) of about $0.3$ was observed for various turbines under different operating conditions. Statistics of the amplitude and curvature of the wake meandering for a model wind turbine under different operating conditions were examined by \citet{howard2015statistics} using a filtering technique on data obtained from PIV measurements. \\
\indent Numerical investigations, especially using LES, have recently been used to investigate the turbine wake. Due to the high spatial resolution requirement of geometry-resolving simulations using immersed boundary methods, as in \citet{kang2014onset} and \citet{foti2016wake}, turbine modeling in the form of actuator disks or actuator lines has been used extensively. The actuator disk model, which represents the rotor as a porous disk as in \citet{glauert1935airplane, yang2012computational, calaf2010large, porte2011large}, and the actuator line model, which represents the blades as rotating lines with distributed lift and drag forces \citep{so̸rensen2002numerical, ivanell2010stability, yang2015large}, are commonly employed.
Previous numerical (e.g. \citet{yang2012computational, calaf2010large, yang2015large}) and experimental (e.g. \citet{espana2011spatial}) results showed that both actuator disk models and actuator line models are able to produce large-scale motion of the wake consistent with wake meandering. However, actuator-based models without accounting for the influence of the nacelle may not be able to capture the hub vortex expansion and its interaction with the outer wake and thus cannot accurately describe the effects of nacelle on the dynamics of wake meandering.
\citet{kang2014onset} simulated the flow past a hydrokinetic turbine using geometry-resolving immersed boundary method, actuator disk model and actuator line model. The study found that the hub vortex from the actuator-type simulations remains columnar without significant interaction with the outer wake resulting in significantly less turbulence kinetic energy in the far wake.
In a recent work by \citet{yang2017actuator}, a new class of actuator surface models for turbine blades and nacelle was developed and validated with the hydrokinetic turbine \citep{kang2014onset} and MEXICO (Model Experiments in Controlled Conditions) turbine cases with overall good agreement. Being able to capture the interaction of the hub vortex with the outer wake on a relatively coarse grid, the actuator models for turbine blades and nacelle provide a computationally efficient approach to investigate the effects of nacelle and different turbine and ambient flow conditions on wake meandering. \\
\indent In this work, we employ both wind tunnel experiments and LES with the new class of actuator surface models developed by \citet{yang2017actuator} to investigate: i) the influence of turbine operating conditions; and ii) the effects of the nacelle on wake meandering of a model turbine.
In the literature, there are several wind tunnel experiments investigating turbine wakes under different operating conditions, such as experiments of the ``Blind Test'' turbine of diameter 0.9 m and optimal tip speed ratio 6 at Norwegian University of Science and Technology \citep{krogstad2012performance, eriksen2012experimental, adaramola2011experimental, pierella2014blind}, and experiments of the MEXICO turbine of rotor diameter 4.5 m and optimal tip speed ratio 6.7 at the German-Dutch wind tunnel~\citep{snel2007mexico, shen2012actuator, nilsson2015validation}. Very few experiments addressed the influence of turbine operating conditions on wake meandering.
The effects of turbine operating condition on wake meandering were studied by \citet{howard2015statistics}. However, we note that the turbine employed in this paper, with diameter 1.1 meters, has more reasonable power coefficients (the maximum $C_P$ is 0.45) as compared with that (the maximum $C_P$ is 0.16) of the miniature model wind turbine with diameter 0.128 meters employed in \citet{howard2015statistics}. We consider, therefore, that the wake of the turbine employed in this work closely resembles the wake state of utility-scale turbines.
The model turbine utilized herein for wind tunnel experiments is intricately designed such that individual blades can be pitched similar to variable speed utility-scale turbines which employ blade pitching to control power output based on the operating conditions.
Like utility-scale turbines, the model wind turbine exhibits three main operating regions \citep{pao2009tutorial}.
Region 1 is a low wind speed regime in which the turbine is not operated. In Region 2, the turbine operates at its optimal condition with maximum power coefficient and a blade pitch angle of 1 degree, while in Region 3, the turbine operates with a relatively low power coefficient and a relatively high blade pitch angle of 7 degrees.
To investigate the energetic coherent structures of wake meandering, the meander filtering technique, proposed by \citet{howard2015statistics} and applied by \citet{foti2016wake} to the miniature model wind turbine with diameter 0.128 meters, and the dynamic mode decomposition technique \citep{schmid2010dynamic} are employed to analyze the time series of the computed three-dimensional flow fields. \\
\indent This paper is organized as follows. Section \ref{sec:exp_setup} gives a brief summary of the wind turbine design and experimental setup, which provide motivation and validation for the simulations. The numerical methods of the LES and turbine modeling are described in section \ref{sec:numerical}. Section \ref{sec:cases} details the selected test cases and computational setup. The results and analysis are presented in section \ref{sec:results}. Finally, we discuss our results and conclude in section \ref{sec:conclusion}.
\section{Experimental setup}\label{sec:exp_setup}
\indent Wind tunnel experiments and numerical simulations play a key role in wind turbine design, performance, and optimization. While the wind tunnel experiments cannot reproduce the utility-scale conditions, they can provide key insights and ensure that numerical simulations are reliable through proper validation. The present experiments supply sufficient detail in terms of measurements of the velocity flow field statistics and energy spectra in the wake of the turbine, turbine power output, and a robust description of the model and wind tunnel environment that allow us to validate numerical simulations of the model turbine in different operating regimes. Below we detail the wind tunnel experiment, wind turbine model and the turbine power output as a function of blade pitch and tip speed ratio which delineates the turbine operating regimes. \\
\indent Testing was performed in the closed-return wind tunnel of the Politecnico di Milano (see \citet{campagnolo2013wind}).
It is a boundary layer chamber primarily used for civil, environmental and wind power engineering applications. Within this test section, whose cross sectional area is $13.84\:\textrm{m}\times 3.84$~m with a length of 36~m, wind speeds up to 14~m/s can be generated by means of 14 fans, each one consuming up to 100~kW.\\
\indent This tunnel size allows for the testing of relatively large models with low blockage effects, while atmospheric boundary layer (ABL) conditions can be simulated by the use of turbulence generators such as spires placed at the chamber inlet. A turntable, whose diameter is 13~m, allows for the complete experimental setup to be yawed with respect to the wind tunnel axis, in order to simulate the effect of wind direction changes on the entire setup.
\subsection{Wind turbine model: general layout}
Tests were conducted with a scaled wind turbine model with a diameter $D = 1.1 \: \textrm{m}$ and height $H = 0.8 \: \textrm{m}$, in the following named \texttt{G1} (for \underline{G}eneric wind turbine, \underline{1}~m diameter rotor). The model was designed to satisfy several specific design requirements: i.) a realistic energy conversion process, which means reasonable aerodynamic loads and damping when compared to those of full-scale wind turbines, as well as wakes of realistic geometry, velocity deficit and turbulence intensity, ii.) collective blade pitch and torque control, as well as yaw setting realized by properly misaligning the tower base with respect to the wind tunnel axis, in order to enable the testing of control strategies, and iii.) a sufficient onboard sensorization of the machine, including measures of rotor azimuth, main shaft torque, rotor speed and tower base loads with good accuracy.\\
\begin{figure}
\begin{center}
\includegraphics[width=.75\columnwidth]{./fig01.pdf}
\caption{\label{figure: G1 rotor-nacelle assembly} Overall view of the \texttt{G1} model.}
\end{center}
\end{figure}
\indent The \texttt{G1}, whose rated rotor speed is equal to 850\,rpm (clockwise rotation), is equipped with three blades mounted on the hub with two bearings, in order to limit flapwise or edgewise free-play. The collective pitch angle can be varied by means of three conical spiral pinions fixed at the blade roots, in turn moved by a driving wheel. The latter, mounted on two bearings held by the rotating hub, is connected, by means of a flexible joint, to a Maxon \texttt{EC-16 60W} brushless motor equipped with a \texttt{GP22C-128:1} precision gearhead and \texttt{MR-128CPT} relative encoder. The motor is housed in the hollow shaft and is commanded by an
electronic control board \texttt{EPOS2 24/2 DC} housed in the hub spinner.
Electrical signals from and to the pitch control board are transmitted by a through-bore 12-channels slip ring located within the rectangular carrying box holding the main shaft. \\
\indent A \texttt{LORENZ MESSTECHNIK DR2112-R} torque sensor (1\,Nm, 0.2\%), located after the two shaft bearings, allows for the measurement of the torque provided by a Maxon \texttt{EC-4pole22 90W} brushless motor equipped with a \texttt{GP22HP-14:1} precision gearhead and \texttt{ENC HEDL 5540 500IMP} tacho. The motor is located in the rear part of the nacelle and is operated as a generator by using an \texttt{ESCON 50/5 4-Q} servocontroller. An optical encoder, located between the slip ring and the
rear shaft bearing, allows for the measurement of the rotor azimuth. \\
\indent The tower is designed so that the first fore-aft and side-side natural frequencies of the nacelle-tower group are properly placed with respect to the harmonic per-rev excitations. At its base, strain gages are glued to four CNC-machined small bridges, which were properly sized in order to have sufficiently large strains, in turn required to get accurate outputs from the strain gages. Two electronic boards provide for the power supply and adequate conditioning of this custom-made load cell. Calibration of the cell was performed by using dead weights, stressing the tower with fore-aft and side-side bending moments. Finally, a full 2-by-2 sensitivity matrix is obtained by linear regression.
Aerodynamic covers of the nacelle and hub ensure a satisfactory quality of the flow in the central rotor area. Fig.~\ref{figure: G1 rotor-nacelle assembly} highlights the main features of the \texttt{G1} model.
\subsection{Wind turbine model: rotor aerodynamics}
Due to the small dimensions of the scaled wind turbine, the low-Reynolds number airfoil RG14~\citep{Airfoils} is chosen for the model wind turbine blades. The blade is designed to achieve a constant lift coefficient $C_\mathrm{l}$ along the blade span, as described in~\citet{Burton2001}. Blade chord and twist angle are shown in Fig.~\ref{fig:Blade_design}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=\textwidth]{./fig02.pdf}
\caption{\label{fig:Blade_design}Blade chord and twist angle.}
\end{center}
\end{figure}
The performance of the \texttt{G1} rotor is measured for different values of the airfoil Reynolds numbers and at several combinations of tip speed ratios (TSR) $\lambda$ and collective pitch settings $\beta$. Significant differences are noticed between the measured and theoretical Blade Element Momentum (BEM)-based aerodynamic performance computed using nominal polars, obtained by other authors from wind tunnel measurements or numerical simulations. To correct for this problem, an identification procedure~\citep{Cacciola2014} is used to calibrate the polars, leading to a satisfactory agreement as shown in Fig.~\ref{figure: rotor performance}.
\subsection{Wind turbine model: control algorithms}
The \texttt{G1} model is controlled by a \texttt{M1 Bachmann} hard-real-time module. Similarly to what is done on real wind turbines, collective pitch-torque
control laws are implemented on, and executed in real time by, the control hardware. Sensor readings are used online to compute the desired pitch and torque
demands, which are in turn sent to the actuator control boards via analog or digital communication. \\
\indent Power control is based on the standard wind turbine control structure with two distinct regions, as described in~\citet{Bossanyi2000a}. At low wind speeds, when the wind turbine is operating in Region 2, the main objective is the maximization of the wind turbine power output. This is achieved by keeping the rotor blade pitch angle at a constant value, while the aerodynamic torque reference follows a quadratic function of rotor speed.
On the other hand, in high winds, when the wind turbine is operating in Region 3, the generator torque is kept at a constant value (defined by the power reference), while a PID pitch controller is used to track the rotor speed reference. Transition between the two operating regions is achieved by a control logic that prevents pitch activity at low wind speeds and torque activity at high wind speeds; a minimal sketch of this logic is given below. \\
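The following sketch illustrates this standard two-region logic; the gains and thresholds are hypothetical placeholders, not the values used on the \texttt{G1}:
\begin{verbatim}
# Hypothetical gains/thresholds for illustration only.
K_OPT, Q_RATED, W_RATED = 2.2e-3, 0.6, 89.0  # torque gain, rated torque/speed
KP, KI, BETA_FINE = 0.05, 0.20, 1.0          # PI pitch gains, fine pitch [deg]

def control_step(omega, integ, dt):
    """One update of the two-region logic: (torque, pitch, integrator)."""
    torque = min(K_OPT * omega**2, Q_RATED)  # Region 2: quadratic torque law
    if torque < Q_RATED:
        return torque, BETA_FINE, 0.0        # no pitch activity in low winds
    err = omega - W_RATED                    # Region 3: track rated speed
    integ += KI * err * dt
    pitch = max(BETA_FINE, KP * err + integ) # torque held at rated value
    return Q_RATED, pitch, integ
\end{verbatim}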
\indent The friction acting on the main shaft, due to the bearings and the slip ring brushes, is measured before testing, in order to have a reliable measurement of the aerodynamic power. This is obtained by rotating the sole rotor hub, i.e. without blades installed, at several speeds, from rated down to zero in steps of 50\,rpm. The average measured generator torque is then stored in look-up tables as a function of rotor speed, and it is added in real time during operation to the generator torque reference, in order to allow for the tracking of the aerodynamic torque reference.
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{./fig03.pdf}
\caption{\label{figure: rotor performance} \texttt{G1} power (left) and thrust (right) experimental (solid lines) and BEM (dashed lines) coefficients, as function of TSR and blade pitch.}
\end{center}
\end{figure}
\section{Numerical methods}\label{sec:numerical}
\indent The measurements of the flow field behind the wind turbine are used in part for validation of numerical methods. For the present numerical simulations, we employ a large-eddy simulation (LES) solver capable of solving the three-dimensional, filtered continuity and momentum equations in generalized curvilinear coordinates \citep{ge2007numerical} with a hybrid staggered/non-staggered grid formulation \citep{gilmanov2005hybrid}. The governing equations are discretized with three-point central finite differencing and integrated in time using an efficient fractional step method. In compact tensor notation (repeated indices imply summation) the continuity and momentum equations are as follows ($i,j=1,2,3$):
\begin{equation}
J\frac{\partial U^{i}}{\partial \xi^{i}}=0,
\label{eqn:eq_continuity_general}
\end{equation}
\begin{align}
\frac{1}{J}\frac{\partial U^{i}}{\partial t}=& \frac{\xi _{l}^{i}}{J}\left( -%
\frac{\partial }{\partial \xi^{j}}({U^{j}u_{l}})+\frac{\mu}{\rho}%
\frac{\partial }{\partial \xi^{j}}\left( \frac{g^{jk}}{J}\frac{%
\partial u_{l}}{\partial \xi^{k}}\right) -\frac{1}{\rho}\frac{\partial }{\partial \xi^{j}} \left(\frac{%
\xi _{l}^{j}p}{J} \right)-\frac{1}{\rho}\frac{\partial \tau _{lj}}{\partial
\xi^{j}} + F_l\right) ,
\label{eqn:eq_momentum_general}
\end{align}
where $\xi _{l}^{i}={\partial \xi^{i}}/{\partial x_{l}}$ are the transformation metrics, $J$ is the Jacobian of the geometric transformation, $u_{i}$ is the $i$th component of the velocity vector in Cartesian coordinates, $U^{i}$=${(\xi _{m}^{i}/J)u_{m}}$ is the contravariant volume flux, $g^{jk}=\xi _{l}^{j}\xi _{l}^{k}$ are the components of the contravariant metric tensor, $\rho $ is the density, $\mu $ is the dynamic viscosity, $p$ is the pressure, and $\tau_{ij}$ represents the anisotropic part of the subgrid-scale stress tensor. A body force $F_l$ is used to account for the forces exerted by the actuator-based model. The Smagorinsky model \citep{smagorinsky1963general} with the dynamic procedure developed by \citet{germano1991dynamic} is used for closure of $\tau _{ij}$:
\begin{equation}
{\tau }_{ij}-\frac{1}{3}\tau _{kk}\delta _{ij}=-2\mu_{t}\widetilde{S}_{ij},
\label{eqn:LES_subgrid_eq}
\end{equation}
where the $\widetilde{(\cdot)}$ denotes the grid filtering operation, and $\widetilde{S}_{ij}$ is the filtered strain-rate tensor.
The eddy viscosity $\mu_{t}$ is given by
\begin{equation}
\mu_{t}=\rho C_{s}{\Delta}^{2}|\widetilde{S}|,
\label{eqn:LES_eddyviscosity_eq}
\end{equation}
where $C_{s}$ is the dynamically calculated Smagorinsky constant \citep{germano1991dynamic}, $\Delta$ is the filter size taken as the cubic root of the cell volume, and $|\widetilde{S}|= (2\widetilde{S}_{ij}\,\widetilde{S}_{ij})^{\frac{1}{2}}$. In computing $C_{s}$, contraction of the Germano identity is carried out using the formulation for general curvilinear coordinates presented in \citet{armenio2000lagrangian}. A local averaging is then performed for the calculation of $C_{s}$ since there are no homogeneous directions in the present cases.
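For illustration, the following sketch evaluates Eq.~(\ref{eqn:LES_eddyviscosity_eq}) for a frozen velocity field on a uniform Cartesian grid; the dynamic computation of $C_s$ via the Germano identity and the curvilinear metrics are omitted, and $C_s$ is simply prescribed:
\begin{verbatim}
import numpy as np

def eddy_viscosity(u, v, w, dx, rho=1.2, Cs=0.0256):
    # mu_t = rho * Cs * Delta^2 * |S|, |S| = sqrt(2 S_ij S_ij);
    # in the solver Cs is computed dynamically, here it is prescribed.
    grads = [np.gradient(q, dx) for q in (u, v, w)]  # grads[i][j] = du_i/dx_j
    Smag2 = np.zeros_like(u)
    for i in range(3):
        for j in range(3):
            Sij = 0.5 * (grads[i][j] + grads[j][i])
            Smag2 += 2.0 * Sij * Sij
    return rho * Cs * dx**2 * np.sqrt(Smag2)  # Delta = dx for cubic cells
\end{verbatim}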
\subsection{Actuator surface model for turbine blades and nacelle}\label{sec:actuator surface model}
\indent In the actuator surface model for the blade, the blade geometry is represented by a surface formed by the chord lines at every radial location of the blade.
The forces are calculated in the same way as in the actuator line model using the blade element approach.
The lift ($\bm{L}$) and drag ($\bm{D}$) at each radial location are calculated as follows:
\begin{equation}
\bm{L}=\frac{1}{2} \rho C_L c |\bm{V}_{rel}|^2\bm{n}_{L},
\end{equation}
and
\begin{equation}
\bm{D}=\frac{1}{2} \rho C_D c |\bm{V}_{rel}|^2\bm{n}_{D},
\end{equation}
where $C_L$ and $C_D$ are the lift and drag coefficients from a look-up table based on the value of angle of attack, $c$ is the chord, $\bm{V}_{rel}$ is the relative incoming velocity, and $\bm{n}_{L}$ and $\bm{n}_{D}$ are the unit vectors in the directions of lift and drag, respectively. The relative incoming velocity $\bm{V}_{rel}$ is computed by
\begin{equation}
\bm{V}_{rel} (\bm{X}_{LE})= u_x(\bm{X}_{LE})\bm{e}_x+(u_\theta(\bm{X}_{LE})-\Omega r)\bm{e}_\theta
\end{equation}
where $\bm{X}_{LE}$ represents the leading edge coordinates of the blade, $\Omega$ is the rotational speed of the rotor, $\bm{e}_x$ and $\bm{e}_\theta$ are the unit vectors in the axial flow and rotor rotating directions, respectively. In the present model, the axial and azimuthal components of the flow velocity, i.e. $u_x$ and $u_\theta$ are computed at the leading edge of the blade. Generally, the leading edge point LE does not coincide with any background nodes. In the present work, the smoothed discrete delta function (i.e. the smoothed four-point cosine function) proposed by \citet{yang2009smoothing} is employed to interpolate the flow velocity at the leading edge of the blade from the background grid nodes. The forces computed on each grid node of the leading edge are then uniformly distributed onto the corresponding actuator surface meshes with the same radius from the rotor center. The forces on the blade actuator surface are then distributed to the background grid nodes for the flow field using the same discrete delta function employed in the velocity interpolation process. The stall delay model developed by \citet{du19983} and the tip-loss correction proposed by \citet{shen2005tip} are employed to take into account the three-dimensional effects. \\
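A sketch of the blade-element force evaluation at a single radial station is given below; the polar lookup \texttt{polar} is a stand-in for the calibrated airfoil tables, and the sign conventions depend on the rotor orientation:
\begin{verbatim}
import numpy as np

def blade_element_forces(u_x, u_t, Omega, r, c, twist, polar, rho=1.2):
    # Relative velocity at the leading edge: axial and rotational parts.
    v_ax, v_rot = u_x, u_t - Omega * r
    Vrel = np.hypot(v_ax, v_rot)
    phi = np.arctan2(v_ax, -v_rot)   # inflow angle w.r.t. the rotor plane
    alpha = phi - twist              # twist includes the local pitch here
    CL, CD = polar(alpha)            # look-up table lift/drag coefficients
    q = 0.5 * rho * c * Vrel**2      # force magnitude per unit span
    return q * CL, q * CD            # along the lift and drag unit vectors
\end{verbatim}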
\indent In the actuator surface model for the nacelle, the nacelle geometry is represented by the actual surface of the nacelle with distributed forces. The force on the actuator surface is decomposed into two parts: the normal component and the tangential component. The normal component of the force is computed in a way to satisfy the non-penetration condition, which is similar to the direct forcing immersed boundary methods \citep{uhlmann2005immersed}, as follows:
\begin{equation} \label{eq:Fn_ASN}
F_n=\frac{\left(\bm{u}^d(\bm{X})-\bm{\tilde{u}}(\bm{X})\right)\cdot \bm{n}(\bm{X})}{\Delta t},
\end{equation}
where $\bm{X}$ represents the coordinates of the nacelle surface mesh, $\bm{u}^d(\bm{X})$ is the desired velocity on the nacelle surface, $\bm{n}(\bm{X})$ is the unit vector in the normal direction of the nacelle, $\bm{\tilde{u}}$ is the velocity estimated from the previous flow field using an explicit Euler scheme, and $\Delta t$ is the time step.
The tangential force is assumed to be proportional to the incoming velocity $U$ and is computed as follows:
\begin{equation}\label{eq:Ft_ASN}
F_{t}=\frac{1}{2}c_{f}U^2
\end{equation}
where $c_{f}$ is calculated from the empirical relation proposed by F. Schultz-Grunow~\cite{schlichting2003boundary}. The direction of the tangential force is determined by the local tangential velocity.
The smoothed discrete delta function is employed for the velocity interpolation and force distribution as in the actuator surface model for blades. Validations of the proposed actuator surface models for turbine blades and nacelle can be found in \citet{yang2017actuator}.
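The two force components of Eqs.~(\ref{eq:Fn_ASN}) and (\ref{eq:Ft_ASN}) may be sketched as follows, with $c_f$ prescribed rather than evaluated from the Schultz-Grunow correlation:
\begin{verbatim}
def nacelle_forces(u_tilde, u_desired, n_hat, U_inc, dt, cf=0.005):
    # Normal force (direct-forcing style): drives u.n to the desired value.
    F_n = sum((ud - ut) * ni
              for ud, ut, ni in zip(u_desired, u_tilde, n_hat)) / dt
    # Tangential force: friction-type, proportional to U^2.
    F_t = 0.5 * cf * U_inc**2
    return F_n, F_t
\end{verbatim}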
\section{Test cases and computational setup}\label{sec:cases}
\indent Here we discuss the computational details and overall setup for the two wind tunnel experiments where measurements of the velocity flow field are taken at two different operating conditions of Region 2 and Region 3 with the corresponding setup shown in Table \ref{tbl:cond}. \\
\begin{table}
\begin{center}
\def~{\hphantom{0}}
\begin{tabular}{lccccc}
Region & $\beta$ [deg] & Tip-speed ratio & $C_p$ & $C_T$ & $Re_D = U_{hub}D/\nu $\\
2 & 1.4 & 8.1 & 0.45 & 0.79 & $4.4\times 10^5$\\
3 & 7.0 & 8.1 & 0.25 & 0.3 & $5.1 \times 10^5$\\
\end{tabular}
\caption{Turbine operating condition parameters and performance. }
\label{tbl:cond}
\end{center}
\end{table}
\indent The wake behind the rotor was traversed and measured using two tri-axial fiber-film probes (\textit{Dantec 55R91}), while the wind speed was also constantly measured by means of a Pitot tube located 1.5D in front of the rotor disk at hub height, as shown in Fig.~\ref{fig:exp setup}. The pressure signals were acquired by a \textit{Mensor CPT-6100} transducer (full scale 0.36~psi). \\
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{./fig04.pdf}
\caption{\label{fig:exp setup}Experimental setup in the wind tunnel}
\end{center}
\end{figure}
\indent Simulations are designed to reproduce the real wind tunnel environment based on the experiments including the spires at the inlet to induce a turbulent boundary layer that resembles an atmospheric boundary layer.
A sketch of the entire wind tunnel domain with upstream spires and turbine is shown in Fig. \ref{fig:inflow}(a). A precursor LES of the entire wind tunnel without the turbine but resolving the details of the spire geometry using the CURVIB method is performed to acquire inflow conditions for subsequent turbine simulations \citep{foti2017use}. The dimensions of the wind tunnel domain are as follows: spanwise width $L_z/D=12.5$, height $L_y/D=3.5$, and streamwise length $L_x/D = 31.8$. The computational domain is discretized with 600$\times$180$\times$1200 cells in the spanwise, vertical and streamwise directions, respectively. Fourteen spires, resolved using the CURVIB method, with height $H/D = 1.8$ and width $S/D=0.45$, are placed one meter apart symmetrically around the wind tunnel centerline. The trailing edge of the spires is placed at a streamwise distance of $2.3$ diameters downstream of the domain inlet. A uniform inflow velocity $U_b$ is applied at the inlet. A wall model for smooth walls \citep{yang2015large} is employed on the top, bottom and side walls of the domain, while boundary conditions on the spires are applied on the grid nodes in the vicinity of the spires using wall model reconstruction as described in \citet{kang2012numerical}.
Further analysis of the precursor simulation and the generated inflow resemblance to an atmospheric boundary layer can be found in \citet{foti2017use}. The instantaneous flow field is acquired and saved at the location upstream of the turbine at $x/D = -1.5$, and is fed into the wind turbine simulations. \\
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{./fig05.pdf}
\caption{\label{fig:inflow}(a) Sketch of the wind tunnel domain with spires at the inlet $(x/D=-20.5)$ and turbine located at ($x/D=0$ and $z/D=-2.5$). Comparison of the vertical profiles at $x/D =3.5$ from a precursor simulation (solid lines) with measurements (circles) for an empty wind tunnel for (b) mean streamwise velocity and (c) rms streamwise velocity.}
\end{center}
\end{figure}
\indent The mean streamwise velocity and rms (root-mean-square) streamwise velocity from the precursor simulation with the spires without the wind turbine at $x/D=3.5$ are shown in Fig. \ref{fig:inflow}(b) and Fig. \ref{fig:inflow}(c), respectively, where both the simulations and experimental measurements converge to the same profile. At this location the velocity at turbine hub height is $U_{hub}/U_b=1.0$, the incoming shear velocity is $u_\tau/U_b = 0.03$, and the boundary layer height is $\delta/D = 1.8$, the same as the height of the spires. The Reynolds number based on $U_{hub}$ and $D$ is $4.0\times 10^5$. The values of the Reynolds number of the empty wind tunnel simulation and the turbine simulations in both operating conditions are beyond the value for which Reynolds number independence for turbine wakes is observed \citep{chamorro2012reynolds}. The inflow fields generated from this precursor simulation are employed to provide inflow conditions in the simulations for both turbine operating conditions, without running additional precursor simulations to match the specific values of the Reynolds number. \\
\indent The turbine simulations are performed in a domain $12.5D\times 2.7D\times 16D$ in spanwise, vertical and streamwise directions, respectively. The numbers of grid nodes are 272$\times$120$\times$589 in spanwise, vertical and streamwise directions, respectively, resulting in a grid spacing $D/50$ in all three directions. This resolution is comparable to other studies using actuator-type models \citep{ivanell2010stability, troldborg2010numerical} and is sufficient to capture the main characteristics of the turbine wake as shown in Fig. \ref{fig:exp_6ms} and Fig. \ref{fig:exp_7ms} where we compare the computed results with experimental measurements.
With this domain, the turbine is placed at the same location $x_0/D=20.5$ downstream of the spires and $z_0/D=-2.5$ as prescribed in the experiments. Similar to the precursor simulations, top, bottom and side walls of the domain utilize a wall model for smooth walls.
Two simulations for each turbine operating conditions based on the experimental setup shown in Table \ref{tbl:cond} are carried out: one with the actuator surface models for blades and nacelle (R2-BN and R3-BN for Region 2 and Region 3, respectively); the other one with actuator surface model for blade only (R2-B and R3-B for Regions 2 and 3, respectively).
\section{Results}\label{sec:results}
\indent We present and analyze the computational and experimental results in this section. In section \ref{sec:Time-averaged}, we first compare the computed profiles with measurements at different downstream locations and show the time-averaged flow field and turbulence statistics for simulations with and without a nacelle model for the two different turbine operating conditions. We then present instantaneous flow fields for different cases and analyze the power spectral density in section \ref{sec:instantaneous}, and investigate the wake meandering using a filtering technique and dynamic mode decomposition method in section \ref{sec:meander} and section \ref{sec:dmd}, respectively.
\subsection{Time-averaged characteristics}\label{sec:Time-averaged}
\indent \indent To evaluate the capability of the actuator surface models in predicting the wake from the model wind turbine, we compare the computational results with the experimental measurements for the turbine operating in Region 2 and Region 3 in Fig. \ref{fig:exp_6ms} and Fig. \ref{fig:exp_7ms}, respectively.
Figure \ref{fig:exp_6ms}(a) shows the comparisons of the spanwise profiles of the mean streamwise velocity deficit, $1-U/U_\infty$ across the wake of the turbine. In the profiles closest to the turbine at $x/D=1.4$ and $x/D=1.7$, the effect of the nacelle manifests itself as an increase of the streamwise velocity deficit along the centerline. The employed nacelle model captures this inner wake feature from the nacelle, but the absence of a nacelle model results in an unphysical jet at the center of the profiles. On the other hand, the shear layer of the outer wake, mainly caused by the forces on the blades extending to $z/D \pm 0.5$ (along the horizontal dotted lines in Fig. \ref{fig:exp_6ms}) is captured well by both simulations with and without a nacelle model.
Further downstream of the rotor, the inner wake of the nacelle begins to merge with the outer wake and the streamwise velocity deficit begins to flatten along the centerline. The unphysical jet in the simulation without the nacelle model, created by the absence of the nacelle model, begins to dissipate at further downstream locations as the corresponding agreement with experimental measurements becomes better. Far from the rotor $x/D > 4.0$, the wake recovery becomes slow for all the cases.
Figure \ref{fig:exp_6ms}(b) shows the comparisons of the mean vertical velocity $V$ which represents the rotational velocity on this plane. Both simulations with and without a nacelle model agree well with the experimental measurements. Close to the turbine, the rotation of the blades imparts a strong mean rotation on the inner wake that peaks just away from the center. Despite the high rotational velocity occurring near the nacelle boundary, the nacelle model does not affect the mean vertical velocity, which is due only to the rotation of the rotor. Farther downstream the mean rotation quickly dissipates.
The mean spanwise velocity $W$ is shown in Fig. \ref{fig:exp_6ms}(c). The simulations compare well with the measurements despite the small magnitude of this velocity component.
The final experimental measurement for this operating regime, shown in Fig. \ref{fig:exp_6ms}(d), is the streamwise rms velocity $u^\prime$.
The cases with and without a nacelle model capture the magnitude of the turbulence intensity of nearly $0.2$ within the tip shear layer showing agreement with the experiments. Near the centerline and $x/D=1.4$, both the measurements and the simulation with a nacelle model capture the shear layer of the inner wake with a slight increase of the turbulence intensity. The simulation without the nacelle model shows an unphysical increase peaking at the centerline. As the wake progresses downstream, the centerline peak of turbulence intensity in the simulation without a nacelle model decreases and converges to the measurement profiles at $x/D=3$. Furthermore, as the inner and outer wake shear layer expand, the turbulence intensity along the tip positions relaxes to about $u^\prime/U_{hub} = 0.1$ where the measurements, simulations with and without a nacelle model agree well with each other.\\
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{./fig06.pdf}
\caption{\label{fig:exp_6ms} Comparisons of the computed and measured spanwise profiles for Region 2 (a) streamwise velocity deficit, (b) vertical velocity, (c) spanwise velocity, and (d) streamwise RMS velocity $u^\prime$. The horizontal dotted lines at $z/D = \pm 0.5$ are the tip positions. }
\end{center}
\end{figure}
\indent The spanwise profiles of the mean velocity deficit of the measurements and simulations for the turbine operating in Region 3 are shown in Fig. \ref{fig:exp_7ms}(a). In comparison with the turbine operating in Region 2, significantly less velocity deficit is found in the region near the blade tips at near wake locations ($x/D=1.4, 1.7, 2$). Near the centerline the velocity deficit is lower and the inner wake of the nacelle is smaller with a flattened profile. Similarly, the simulation without a nacelle model underpredicts the velocity deficit along the centerline until $x/D > 2$. Far away from the turbine rotor, the wake relaxes and all of the simulation cases are able to accurately simulate the wake.
The comparison of the vertical velocity $V$ profiles (rotational component on this plane) is shown in Fig. \ref{fig:exp_7ms}(b). In comparison with the turbine operating in Region 2, the magnitude of rotational velocity near the blade tips are smaller for the turbine operating in Region 3 at near wake locations ($x/D=1.4, 1.7, 2$). The spanwise velocity $W$ profiles shown in Fig. \ref{fig:exp_7ms}(c), on the other hand, are similar to the profiles for the turbine operating in Region 2 and both simulations with and without nacelle model show good agreement with the measurements.
Finally, the streamwise rms velocity is shown in Fig. \ref{fig:exp_7ms}(d). Similar to what we observed in Fig. \ref{fig:exp_6ms}(d) for the turbine operating in Region 2, the lack of a nacelle model introduces peaks in the streamwise turbulence intensity in the region near the rotor centerline at near wake locations ($x/D=1.4, 1.7, 2$). The tip shear layer is accurately simulated by all the simulations. As the wake progresses downstream, differences in the mean flow field are not obvious. \\
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{./fig07.pdf}
\caption{\label{fig:exp_7ms} Comparisons of the computed and measured spanwise profiles for Region 3 (a) streamwise velocity deficit, (b) vertical velocity, (c) spanwise velocity, and (d) streamwise RMS velocity $u^\prime$. The horizontal dotted lines at $z/D = \pm 0.5$ are the tip positions.}
\end{center}
\end{figure}
\indent Next, we will investigate the spatial distribution of the mean flow field via contour plots. Mean flow and turbulence quantities for Region 2 simulations with and without a nacelle model on the vertical-streamwise centerline plane are shown in Figs. \ref{fig:6_avg}(a)-(d) and Figs. \ref{fig:6_avg}(e)-(h), respectively.
The mean streamwise velocity $U/U_{hub}$ for the simulation with a nacelle model in Fig. \ref{fig:6_avg}(a) shows the outer wake formed by the turbine rotor and the inner wake behind the nacelle (the outer and inner wakes are demarcated on the figure). In the mean velocity contour shown in Fig. \ref{fig:6_avg}(e) for the simulation without a nacelle model, a jet exists along the centerline, as discussed for the spanwise profiles shown in Fig. \ref{fig:exp_6ms} and Fig. \ref{fig:exp_7ms} and seen in the simulations of \citet{kang2014onset}, as a consequence of not having the blockage effect of a nacelle body. The outer wake shear layer at the top tip position immediately downstream of the turbine is very sharp and slowly expands. The inner wake forms a core region where the hub vortex begins to grow. \\
\indent The turbulence kinetic energy $k/u_\tau^2$ (TKE) contours from the simulations with and without a nacelle model are shown in Fig. \ref{fig:6_avg}(b) and Fig. \ref{fig:6_avg}(f), respectively.
As seen, the intense turbulence regions start at the rotor tips. From the TKE of the simulation with a nacelle model, the outer wake and the expanding inner wake are observed. The inner wake expands outwards towards, and intercepts, the outer wake a few diameters downstream of the turbine. Near this intersection, there is a marked increase in TKE caused by the interaction of the inner wake, expanding under the action of the hub vortex, with the outer wake. The growth of a large region of high TKE in the tip shear layer commences near its intersection with the inner wake shear layer.
In \citet{kang2014onset} and with further evidence in \citet{foti2016wake}, this region of high TKE is associated with the onset of wake meandering.
For the simulation without a nacelle model, a hub vortex is still formed along the centerline of the turbine but no nacelle inner wake is formed. The increase in streamwise velocity causes the hub vortex to remain strong and columnar. Expansion of the hub vortex towards the remnants of the tip vortices at the tip position does not occur. Instead, the onset of wake meandering occurs with less turbulence energy and further downstream. The location of the maximum TKE with $k/u_{\tau}^2=52.9$ occurs at $x/D=3$ for the simulation with a nacelle model; while it is found to be at $x/D=3.5$ with $k/u_{\tau}^2=52.8$ for the simulation without a nacelle model. The locations are shown on the TKE contours in Fig. \ref{fig:6_avg}(b) and Fig. \ref{fig:6_avg}(f) as large circles.\\
\indent The mean rotational velocity in the wake is expressed as the mean spanwise velocity $W/U_{hub}$ where positive is the direction into the page. For reference, the turbine rotates clockwise, thus the top blade tip is moving out of the page. In Fig. \ref{fig:6_avg}(c) and Fig. \ref{fig:6_avg}(g), showing the simulations with and without a nacelle model, respectively, a mean rotational rate is present at turbine near wake locations. Both figures show a wake that is rotating counter-clockwise (opposite the rotation of the turbine blades) with the strongest rotation occurring near the root of the blade. By the tip of the blade, the rotation is nearly zero. By $x/D=5$, the mean rotation in the wake has dissipated to be negligible. \\
\indent The streamwise-vertical Reynolds stress $\overline{u^\prime v^\prime}/u_\tau^2$ for simulations with and without a nacelle model, shown in Fig. \ref{fig:6_avg}(d) and Fig. \ref{fig:6_avg}(h), respectively, shows the turbulence shear stress and mixing that occurs in the shear layers of the inner and outer wake. There are several regions of intense shear stress in the simulation with a nacelle model:
i.) The turbine tip shear layer which extends into the wake as a thin layer for the first few diameters downstream where the tip vortices remain coherent occupying the interface between the fast moving outer flow and the turbine wake. As the tip vortices start to breakdown the region of mixing begins to expand through entrainment of fluid from outside the wake;
ii.) Behind the nacelle along the shear layer of the inner wake. This region of high Reynolds stress promotes mixing of the slowest moving inner wake with the outer wake behind the turbine blades. Moreover, this allows for entrainment of fluid into the inner wake and rapid expansion of the inner wake and hub vortex. Encompassing the inner wake is a region of opposite-sign low Reynolds shear stress that originates from the root of the turbine blades;
iii.) Much like the TKE, there is a large region of high Reynolds shear stress that appears downstream along the tip positions and down towards the center of the wake. Here, the inner and outer wake merge together and intense mixing occurs. As the wake meandering region begins at approximately $x/D=2$ downstream, the sign of the Reynolds stress indicates that slower moving fluid in the wake will mix and entrain fluid from the outer wake to initiate wake recovery.
The streamwise-vertical Reynolds stress of the simulation without the nacelle model (Fig. \ref{fig:6_avg}(h)) provides more evidence of the strong columnar hub vortex, which is dominated by strong Reynolds shear stress originating from the root of the turbine blades. While the Reynolds shear stress is relatively high in the inner wake, it is dominated by the root vortices, which do not promote a laterally expanding inner wake from a nacelle as in the simulation with the nacelle model. The Reynolds shear stress from the root vortices indicates that mixing of the faster moving fluid from behind the turbine blades towards the centerline does not occur as effectively as in the simulation with a nacelle model. The centerline jet creates a non-physical acceleration of the streamwise flow promoting a stable columnar vortex. Further downstream, the increased Reynolds stress behaves similarly to the nacelle model cases but with less turbulence mixing. Overall, the Reynolds shear stress comparisons of the simulations with and without a nacelle suggest quantitative differences in the intensity of the far wake meandering region. These important issues will be discussed extensively in subsequent sections of this paper. \\
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{./fig08.pdf}
\caption{\label{fig:6_avg} Comparisons of vertical centerplane contours between simulations with (left images) and without (right images) a nacelle model for turbine operating in Region 2 for (a) and (e) time-averaged downwind velocity, (b) and (f) turbulence kinetic energy, (c) and (g) time-averaged spanwise (rotational) velocity, and (d) and (h) downwind-vertical Reynolds shear stress. Notations demarcating the inner and outer wake are shown on Fig. (a), (d), and (h).}
\end{center}
\end{figure}
\indent Figures \ref{fig:7_avg}(a)-(d) and Figs. \ref{fig:7_avg}(e)-(h) show contours of mean flow quantities for the turbine operating in Region 3 on the vertical-streamwise centerplane for simulations with and without a nacelle model, respectively. The mean streamwise velocity contours in Fig. \ref{fig:7_avg}(a) shows similar outer and inner wakes patterns as that in Fig. \ref{fig:6_avg}(a) for the turbine operating in Region 2.
However, in comparison with that for the turbine operating in Region 2, the strength of the velocity deficit in the outer wake for the turbine operating in Region 3 is significantly less because of smaller power and thrust coefficients that resulted from the large pitch angle of the blade. The reduced velocity deficit enables faster recovery of the wake with less expansion in the radial direction for the Region 3 case. Similar to the Region 2 cases, lack of a nacelle model also results in an unphysical centerline jet which affects the wake development. \\
\indent The TKE $k/u_\tau^2$ contours are shown in Fig. \ref{fig:7_avg}(b) and Fig. \ref{fig:7_avg}(f), for simulations with and without a nacelle model, respectively. As seen, the maximum TKE in the top tip shear layer is higher and the region with high TKE is wider for the simulation with a nacelle model in comparison with that without a nacelle model. The location of maximum TKE occurs at $x/D=5$ downstream for the simulation with a nacelle model, while the corresponding location for the simulation without a nacelle model occurs further downstream at $x/D=5.5$.
Comparing the TKE contours in Fig. \ref{fig:7_avg}(b) and Fig. \ref{fig:6_avg}(b) for the two operating conditions, we see differences in both the magnitude of the TKE and the downstream location where the highest TKE occurs. The region with high TKE for the turbine operating in Region 3 is longer and extends at further downstream locations but with significantly less maximum TKE in comparison with that in Region 2.
The rotation of the wake for the turbine operating in Region 3, presented in Fig. \ref{fig:7_avg}(c) and Fig. \ref{fig:7_avg}(g), respectively, is more confined and exhibits a comparably weaker maximum rotation. However, the mean rotation in the wake takes about the same, or even a somewhat longer, distance from the rotor to dissipate. The streamwise-vertical Reynolds stress $\overline{u^\prime v^\prime}/u_\tau^2$ is consistent with the results from the turbine operating in Region 2, but with lower turbulence mixing levels due to the smaller velocity deficit and weaker wake meandering, as will be shown in section~\ref{sec:meander}. \\
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{./fig09.pdf}
\caption{\label{fig:7_avg} Comparisons of vertical centerplane contours between simulations with (left images) and without (right images) a nacelle model for turbine operating in Region 3 for (a) and (e) time-averaged downwind velocity, (b) and (f) turbulence kinetic energy, (c) and (g) time-averaged spanwise (rotational) velocity, and (d) and (h) downwind-vertical Reynolds shear stress. Notations demarcating the inner and outer wake are shown on Fig. (a). }
\end{center}
\end{figure}
\indent Turbulence intensities at far wake locations can be considered as footprints of wake meandering. In order to further evaluate the nacelle effects at far wake locations, we show in Fig. \ref{fig:turb_uu} different components of the turbulence intensity at various downstream locations. For the Region 2 cases, the streamwise turbulence intensity computed from the case with a nacelle model is slightly higher than that without a nacelle model at the $x/D=3, 4$ and $5$ locations. However, the differences are more significant for the other two components at the $x/D=3$ and 4 locations. For the Region 3 cases, the streamwise turbulence intensity computed from the simulation with a nacelle model is larger than that without a nacelle model, especially at locations $x/D=4$ and 5 from the hub height to the top tip position. The differences for the other two components of turbulence intensity, on the other hand, are very minor downstream. However, at $x/D=3$ near the hub height, differences in the radial and azimuthal turbulence intensities exist due to the formation of the unphysical hub vortex developed without a nacelle model. Comparing the turbulence intensities between the two turbine operating conditions, we see that at the $x/D=3$ and 4 locations the turbulence intensities are larger for the Region 2 condition for all three components; at the $x/D=5$ and 7 locations, on the other hand, the streamwise turbulence intensity near the top tip location is larger for the Region 3 condition. Overall, Fig. \ref{fig:turb_uu} quantitatively indicates the significant effects of turbine operating condition and nacelle on turbulence intensity at far wake locations for this model wind turbine. \\
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{./fig10.pdf}
\caption{\label{fig:turb_uu} Vertical profiles of the turbulence variances for (a) $\overline{u^\prime u^\prime}/u_\tau^2$, (b) $\overline{v^\prime v^\prime}/u_\tau^2$, and (c) $\overline{w^\prime w^\prime}/u_\tau^2$, non-dimensionalized by $u_\tau^2$, where $u_{\tau}$ is the friction velocity of the incoming turbulent boundary layer flow. The horizontal dotted lines show the bottom and top tip positions at $y/D=0.23$ and $1.23$, respectively.}
\end{center}
\end{figure}
\subsection{Instantaneous flow fields and spectral analysis}\label{sec:instantaneous}
\indent In this section, we focus on the instantaneous flow fields to gain an intuitive picture of the different wake meandering patterns for the two operating conditions and for simulations with and without a nacelle model. The analysis of the mean flow field demonstrated that both the inner wake and the outer wake form behind the turbine. In the mean sense, the inner wake expands into the outer wake and the turbulence kinetic energy increases substantially downstream.
To further investigate the instantaneous evolution of the wake and substantiate the narrative presented above, several time instances of both operating conditions from the simulations with a nacelle are shown in Fig. \ref{fig:asn_inst}. Each successive instance is separated by one rotor rotation period $T = 0.07$ s. The instantaneous flow field from the Region 2 case, in Fig. \ref{fig:asn_inst}(a), contains an inner wake directly behind the nacelle and a large outer wake extending from the extent of the rotor blades. Over the successive figures the slow precession of the hub vortex and the expansion of the inner wake become evident. Around $x/D = 2$, the inner wake begins to interact with the outer wake, and large lateral excursions of the outer wake mark the onset of the meandering. By $x/D=4$ the full extent of the wake meandering has commenced. After the emergence of the meandering, progressively larger spanwise displacements of the wake around the streamwise velocity minimum locations occur. The locus of the velocity minima along the streamwise direction defines a helical centerline of the meandering wake; a simple procedure for extracting it is sketched below. Over a period of 4 rotor rotations, the wake meandering convects downstream, and the amplitude grows. A slightly different wake evolution is seen in Fig. \ref{fig:asn_inst}(b) for the Region 3 case. Both the inner wake, with its helically precessing hub vortex core, and the outer wake form but remain more columnar. As discussed before, due to the blade pitch the outer wake is weaker compared to Region 2. The outer wake remains columnar with slight distortions from interactions outside the wake while the inner wake slowly expands. The onset of wake meandering does not occur until after $x/D=5$. \\
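The centerline can be extracted from a low-pass filtered planar velocity field (an assumption of this illustration; the filtering technique itself is discussed in section \ref{sec:meander}) as follows:
\begin{verbatim}
import numpy as np

def wake_centerline(U, z):
    # U[i, k]: filtered streamwise velocity at station x_i, spanwise z_k.
    # The wake center at each x is the spanwise location of the velocity
    # minimum; smoothing U beforehand suppresses small-scale turbulence.
    return z[np.argmin(U, axis=1)]
\end{verbatim}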
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{./fig11.pdf}
\caption{\label{fig:asn_inst} Contours of the instantaneous streamwise velocity at four successive time instances for (a) turbine operating in Region 2 (R2-BN) and (b) turbine operating in Region 3 (R3-BN).}
\end{center}
\end{figure}
\indent Similar instantaneous snapshots of the streamwise velocity for the simulations of both turbine operating conditions without a nacelle are shown in Fig. \ref{fig:as_inst}. Close to the turbine, a jet along the centerline is clearly evident in both simulations. Instantaneously, the near wake region transitions into wake meandering at approximately the same distance from the turbine. \\
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{./fig12.pdf}
\caption{\label{fig:as_inst} Contours of the instantaneous streamwise velocity without a nacelle model at four successive time instances for (a) turbine operating in Region 2 (R2-B) and (b) turbine operating in Region 3 (R3-B).}
\end{center}
\end{figure}
\indent Figure \ref{fig:exp_psd} shows comparisons of the measured and computed Fourier power spectral densities of the streamwise component of the velocity fluctuations at the top tip position at several locations in the wake for both operating conditions. The non-dimensional frequency, the Strouhal number $St = f D/ U_{hub}$, is defined by the turbine diameter $D$ and hub height velocity $U_{hub}$.
At $x/D = 2$, the measured spectra show peaks at $St = 2.3$ and $St = 0.8$ and a longer flat region at low frequencies. The former peak is the rotor frequency and is also present in the simulation results. Both signatures are muted in the Region 3 cases. A lower frequency around $St \sim 0.2-0.3$ is present in Fig. \ref{fig:exp_psd}(b), Fig. \ref{fig:exp_psd}(c), and Fig. \ref{fig:exp_psd}(d). The low frequencies are found in both measurements and simulations and are stronger in the simulations that include a nacelle model.
Farther downstream, the high frequencies in the computed spectra have slightly more energy, possibly because the LES does not have sufficient grid resolution to adequately capture the high frequency modes. However, the processes we are concerned with are the low frequency modes, which will be discussed below. The energy at these frequencies is similar to the measured data. \\
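For reference, spectra of this type can be obtained from a velocity time series with a standard periodogram estimate. The short Python sketch below is illustrative only; the sampling rate, the turbine parameters, and the synthetic signal are placeholders rather than the values used in this study.
\begin{verbatim}
# Sketch: PSD of streamwise velocity fluctuations in Strouhal units.
# All numerical parameters below are illustrative placeholders.
import numpy as np
from scipy.signal import welch

fs = 1000.0           # sampling frequency [Hz] (placeholder)
D, U_hub = 1.1, 5.0   # rotor diameter [m], hub velocity [m/s] (placeholders)

u = np.random.randn(2**16)   # stand-in for a measured velocity series
u_prime = u - u.mean()       # streamwise velocity fluctuations

f, Phi11 = welch(u_prime, fs=fs, nperseg=4096)
St = f * D / U_hub           # non-dimensional frequency St = f D / U_hub
# plot Phi11 (or the pre-multiplied form f*Phi11) against St on log axes
\end{verbatim}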
\indent Figure \ref{fig:exp_psd}(b) at $x/D=4$ shows that, for the turbine operating in Region 2, the simulation with a nacelle model has more energy in the low frequency modes than the simulation without a nacelle. Similarly, a difference in turbulence intensities between the simulations with and without the nacelle model is also present in Fig. \ref{fig:turb_uu}. At the $x/D=6$ and 9 locations in Fig. \ref{fig:exp_psd} the low frequency modes have similar energy, consistent with the streamwise turbulence intensities in Fig. \ref{fig:turb_uu} at the $x/D=5$ and 7 locations. For the turbine operating in Region 3, the energy in the low frequency modes for the simulations with a nacelle model at locations $x/D=4, 6$ and $9$ is higher than for the simulations without a nacelle, consistent with the streamwise turbulence intensity at the $x/D=4, 5$ and 7 locations.
The energy differences are another indication that without a nacelle the turbulent energy at low frequencies is reduced. This further shows that the nacelle model is necessary for accurate simulations of a wind turbine: without it, the high energy, low frequency contributions cannot be fully taken into account. \\
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{./fig13.pdf}
\caption{\label{fig:exp_psd} Comparisons of the Fourier power spectral density of the streamwise component of the velocity fluctuations $\Phi_{11}$ with that from measurements, with the top images and bottom images in Region 2 and Region 3, respectively, along the tip position $y/D = 0.72$, and $z/D=1$ (a) at $x/D=2$, (b) at $x/D=4$, (c) at $x/D=6$, and (d) at $x/D=9$.}
\end{center}
\end{figure}
\indent The contours on the $St$-$x$ plane of the Fourier power spectral density of the streamwise velocity fluctuations pre-multiplied by frequency, $f\Phi_{11}$, for the simulations with the nacelle model are shown in Fig. \ref{fig:linefft} for three different radial locations. This allows us to visualize the evolution of the frequency modes in the wake of the turbine. Starting at the centerline, $y_h = y-H=0$, there are several frequency regions of high energy. Nearest to the turbine, at $x/D=0.5$, a high energy mode is centered around $St=0.7$. Given the proximity to the turbine along the centerline, this is immediately recognized as the hub vortex. The Strouhal number is similar to the measured hub vortex frequencies of other turbines \citep{iungo2013linear, howard2015statistics, viola2014prediction, foti2016wake}. In the Region 2 case, the peak hub vortex energy occurs directly downstream of the turbine and quickly dissipates. By $x/D=2$, the energy level has dropped significantly, but it remains, relatively, the strongest energy mode in the flow at that axial location. This indicates that the hub vortex contains most of the energy along the centerline. However, after $x/D=2$ another, lower frequency mode becomes apparent: the wake meandering frequency, most prevalent at $St=0.3$, which has been observed in numerous studies: $St=0.23$ \citep{okulov2007stability}, $St=0.28$ \citep{chamorro2013interaction}, $St = 0.15$ \citep{foti2016wake} and $St=0.15-0.25$ \citep{medici2008measurements}. Both the wake meandering frequency and the hub vortex frequency remain the dominant frequencies throughout the rest of the domain along the centerline, a clear indication that while the hub vortex breaks down, its remnants continue to affect the flow far downstream. The centerline frequency contours of Region 3 show some drastic distinctions from the former case. Here, the hub vortex energy peaks further from the rotor. Between $2 < x/D < 4$, little low-frequency turbulent energy is present, indicating that the hub vortex has not broken down and no interaction with the outer wake has occurred. Not until $x/D > 6$, much later than in Region 2 and consistent with the mean turbulence statistics, is the wake meandering mode activated. Moreover, the interaction between the inner and outer wake not only affects the wake meandering but also strengthens the hub vortex frequency mode along the centerline very close to the turbine. \\
\indent In both cases, near the mid-blade location, $y_h/D = 0.2$, the hub vortex frequency is not distinguishable from other frequency modes close to the turbine. Further downstream, the wake meandering frequency appears, first in Region 2 at $x/D=2.5$, consistent with the turbulence kinetic energy shown above. The peak wake meandering energy occurs around $x/D=4$, and high energy is observed far downstream as the wake meanders throughout the far wake. For the Region 3 case, the wake meandering is observed to peak much further downstream, around $x/D=5$, and is also present far downstream. The energy present in the wake meandering mode at this radial position is higher compared to the centerline. Here, there is more turbulent energy in the wake meandering mode because of the proximity to the location where the expanding inner wake interacts with the outer wake. Near the peak wake meandering location, higher frequency modes are also present and can be attributed to the complex interaction of the inner wake, including the hub vortex, with the outer wake. \\
\indent Closer to the blade tips, at $y_h/D = 0.44$, similar to the mid-blade location, the hub vortex frequency is not present near the turbine. Higher frequencies associated with the rotor frequency and the tip vortices are present but quickly dissipate and become indistinguishable from the wake meandering. Consistent with the findings from the turbulence kinetic energy and the mid-blade power spectral density, wake meandering in Region 2 begins earlier than in Region 3. The peak energy levels are located slightly upstream compared with the mid-blade and centerline locations. Turbulence energy in the outer wake is highest due to the strength of the tip shear layer and its interaction with the hub vortex core formed behind the rotor. From these contours, we conclude that wake meandering begins in the outer wake, and the interaction of the inner wake with the outer wake introduces more turbulence. Evidence from the streamwise power spectra in Fig. \ref{fig:exp_psd} shows that without the expanding inner wake the energy of the wake meandering is diminished. From the onset of wake meandering in the outer wake shear layer, the meandering modes begin to propagate downstream and towards the centerline, explaining why the centerline energy peak of wake meandering occurs slightly further downstream compared with the top tip location. \\
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{./fig14.pdf}
\caption{\label{fig:linefft} Contours of the Fourier power spectral density of the streamwise component of the velocity fluctuations pre-multiplied by frequency, $f\Phi_{11}$, as a function of the axial direction, $x/D$, at several radial positions: (a),(d) $y_h/D=0$; (b),(e) $y_h/D=0.2$; (c),(f) $y_h/D=0.44$. Left column (a)-(c): R2-BN; right column (d)-(f): R3-BN. }
\end{center}
\end{figure}
\subsection{Meander Profiles}\label{sec:meander}
\indent In this section, analysis is performed by reconstructing wake meandering into meander profiles that track the streamwise velocity minimum locations, to further understand the dynamics of the wake of the turbine. In the instantaneous contours, Fig. \ref{fig:asn_inst}, we see that the streamwise velocity minima follow the center of the meandering wake. By tracking these positions with a three-dimensional profile, we investigate the dynamics in terms of the amplitude and wavelength of the meanders and obtain statistics of the dynamical wake. The reconstruction technique was first described in \citet{howard2015statistics} and developed for temporally and spatially resolved three-dimensional LES flow fields in \citet{foti2016wake}. A three-step procedure is utilized for the three-dimensional helical meandering profile reconstruction: i) use the finite temporal averaging scheme proposed by \citet{chrisohoides2003experimental} to find the coherent time scale $\tau_c$ of the wake, ii) locate the streamwise velocity minima along the axial direction, and iii) spatially filter the velocity minimum locations to create a continuous profile. \\
\indent The proposed method for reconstruction of the wake meandering is based on tracking wake meandering as a large-scale coherent structure in the wake of the turbine. The flow field is temporally filtered to eliminate the high frequency fluctuations which are not associated with the low frequency wake meandering. The temporal filtering process uses finite averaging over a coherent time scale $\tau_c$ to decompose the flow field into the usual triple decomposition of a turbulent flow as described by \citet{hussain1986coherent}. A temporally and spatially evolving flow field $\boldsymbol{u}(\boldsymbol{x},t)$ (bold indicates a vector quantity) can be rewritten as follows:
\begin{equation}
\boldsymbol{u}(\boldsymbol{x},t) = \boldsymbol{U}(\boldsymbol{x}) + \tilde{\boldsymbol{u}}(\boldsymbol{x},t) + \boldsymbol{u_i}(\boldsymbol{x},t), \label{eqn:triple}
\end{equation}
where $\boldsymbol{U}(\boldsymbol{x})$ is the mean, strictly spatial term, $\tilde{\boldsymbol{u}}(\boldsymbol{x},t)$ is the coherent term and $\boldsymbol{u_i}(\boldsymbol{x},t)$ is the incoherent term. Based on the work of \citet{chrisohoides2003experimental}, the optimal size of an interval averaging window is equivalent to the coherent time scale and can be determined by starting with a finite averaging over a time window of $\tau$ as follows:
\begin{equation}
u_\tau(x,t) = \frac{1}{\tau} \int^{t+\tau/2}_{t-\tau/2} u(x,t^\prime) dt^\prime.
\label{eqn:interval}
\end{equation}
The proper coherent time scale can be found by employing fluctuation analysis and the central limit theorem. If $\tau$ is too small, the large-scale motions will be dominated by incoherent fluctuations, while averaging over too long a window will smear out the coherent structures. Fluctuation analysis shows that the coherent time scale $\tau_c$ is the time window at which the standard deviation over the temporal window begins to scale as $\tau^{-1/2}$. With this temporal window, the finite averaged velocity is $u_\tau(x,t) = U(x) + \tilde{u}(x,t)$. For more background, see \citet{chrisohoides2003experimental} and \citet{foti2016wake}.
\indent The optimal coherent time scale for all cases is found to be about $\tau_c = 0.6T$. This value is close to the optimal time scale of a model turbine discussed in \citet{foti2016wake}. \\
\indent With the coherent time scale determined and the flow decomposed into its three parts, the minimum streamwise velocity, $U(x) + \tilde{u}(x,t)$, locations along the axial direction are tracked over 200 rotor revolutions at the coherent time scale resolution. Physically, the velocity minima track near the center of the hub vortex close to the turbine and the meandering in the far wake. A low-pass spatial filter with a length scale $l_c = D/2$ is then applied to the velocity minima to obtain a continuous three-dimensional profile, as shown in \citet{howard2015statistics}. \\
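A minimal Python sketch of this three-step procedure is given below. It assumes the velocity is sampled on a regular grid; the specific filter choices (a moving average in time and a Gaussian low-pass in space) and all variable names are illustrative stand-ins for the actual implementation.
\begin{verbatim}
# Sketch of the meander-profile reconstruction (illustrative only).
import numpy as np
from scipy.ndimage import uniform_filter1d, gaussian_filter1d

def reconstruct_meander(u, dt, tau_c, y, z, l_c, dx):
    """u: velocity array of shape (nt, nx, ny, nz)."""
    # i) finite temporal average over the coherent time scale tau_c
    n_avg = max(1, int(round(tau_c / dt)))
    u_coh = uniform_filter1d(u, size=n_avg, axis=0)   # U + u_tilde

    # ii) locate the streamwise velocity minimum at each axial station
    nt, nx = u.shape[0], u.shape[1]
    yz_min = np.empty((nt, nx, 2))
    for t in range(nt):
        for i in range(nx):
            j, k = np.unravel_index(np.argmin(u_coh[t, i]),
                                    u_coh.shape[2:])
            yz_min[t, i] = y[j], z[k]

    # iii) low-pass spatial filter (cutoff ~ l_c) for a continuous profile
    return gaussian_filter1d(yz_min, sigma=l_c / dx, axis=1)
\end{verbatim}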
\indent Figures \ref{fig:meander}(a)-(d) show examples of a three-dimensional meander profile projected on the hub height plane for both turbine operating conditions and simulations with and without the nacelle model. In each figure, the solid line represents the meandering profile, and the circle markers are the velocity minimum locations in the wake. The meander profile is superimposed on the instantaneous vorticity magnitude, $|\omega| D/U_{hub}$. The trends addressed previously in the instantaneous streamwise snapshots in Fig. \ref{fig:asn_inst} about the dynamics of the wake for each case are observed in the meander profiles.
In Fig. \ref{fig:meander}(a), the snapshot of the meander profile from the simulation with a nacelle model for the turbine operating in Region 2 shows an energetic wake and a meander profile that starts behind the nacelle and quickly expands, tracking the hub vortex toward the outer wake. Accompanying the expansion of the hub vortex and the increasing amplitude of the meander profile is an increased vorticity magnitude near the peaks of the profile at the tip shear layer (shown by dashed lines in Figs. \ref{fig:meander}(a)-(d)). This is further evidence that the expansion of the hub vortex towards the tip shear layer is accompanied by increases in turbulence intensity as the tip shear layer and hub vortex interact and the meandering of the wake commences.
Figure \ref{fig:meander}(b) shows the snapshot of the simulation without a nacelle model in Region 2. The meander profile cannot start immediately downstream of the rotor because the centerline jet affects the location of the velocity minima near the turbine; the profile starts at $x/D=3$ where the centerline jet dissipates. A meander profile of the Region 3 simulation with a nacelle is shown in Fig. \ref{fig:meander}(c). Unlike in the Region 2 simulations, the inner wake is less energetic, and the meander profile is a tight spiral for 5 diameters behind the turbine. After $x/D=5$ the meander profile has large amplitudes towards the tip shear layer. Similar to Region 2, instantaneous high vorticity regions are found near the peaks of the meander profile. The simulation without the nacelle model operating in Region 3 is shown in Fig. \ref{fig:meander}(d), with trends similar to the simulation with the nacelle model. Further analysis of many meanders is necessary to understand the trends in the wake. \\
\indent The statistics of the meander profiles are useful in determining the dynamics of the hub vortex and wake meandering.
It is useful to investigate the wave-like characteristics, amplitude and wavelength, of the meander profiles as a function of axial distance from the turbine.
The behavior of amplitude and wavelength can be interpreted in both the near wake and far wake separately.
In order to obtain the wave features, the three-dimensional helical profile is projected onto the two-dimensional hub height plane where amplitudes $A$ and wavelengths $\lambda$ are readily calculated from the extrema of the profile.
In Fig. \ref{fig:meander}(e), the average amplitude $\overline{A}/D$ as a function of the downstream distance for each test case is shown.
In the simulation with a nacelle model in Region 2, the amplitude quickly increases behind the turbine with the expanding hub vortex.
By $x/D=2$ the amplitude increases to a peak in the near wake, followed by a plateau as the hub vortex interacts with the outer wake. The amplitude increases further as the wake begins to meander.
On the other hand, in Region 3, the amplitude is much lower in the near wake, consistent with the tight spiraling hub vortex.
The amplitude increases linearly until $x/D = 5$ where the wake meandering commences. The average amplitude in Region 3 remains lower than the amplitude in Region 2 throughout the wake.
Simulations without the nacelle yield meander profiles with lower amplitudes regardless of the turbine operating condition. In Region 2, the average amplitudes of the simulations with and without the nacelle model remain similar until $x/D=4.5$, where the differences become more pronounced. The amplitude is more attenuated for the simulation without a nacelle in Region 3 starting from $x/D=5$, where wake meandering commences. We can relate the amplitude $A$ to the energy $E$ of the meander profile, $A^2 \sim E$.
Consistent with the turbulence intensities in Fig. \ref{fig:turb_uu} and the power spectrum in Fig. \ref{fig:exp_psd}, the energy in the wake meandering is reduced by 40\% without a nacelle model.
Figure \ref{fig:meander}(e) provides quantitative evidence that a nacelle model is needed to adequately simulate the dynamics of the wake. \\
\indent The average wavelength $\overline{\lambda}/D$ of the meander profiles is shown in Fig. \ref{fig:meander}(f). Unlike the amplitudes, the average wavelength is approximately the same for each case regardless of the nacelle model. The similarity of the profiles indicates that the wavelength is a large-scale feature of the flow not tied to the near-wake dynamics. For all cases, the mean recovery of the wake in the far wake is generally the same. The evidence shows that the mean elongation of the meander profile is caused by the recovering wake of the turbine: as the wake recovers, the meander is stretched in the streamwise direction. \\
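As a concrete illustration of how the amplitudes and wavelengths are extracted from the projected profiles, the following Python sketch computes them from the local extrema; the exact definitions used here (half the peak-to-peak excursion, twice the extrema spacing) are one reasonable convention and not necessarily the precise ones used for the figures.
\begin{verbatim}
# Sketch: amplitude and wavelength from extrema of a projected profile.
import numpy as np
from scipy.signal import argrelextrema

def amplitude_wavelength(x, y):
    """x: axial positions; y: spanwise meander profile at hub height."""
    imax = argrelextrema(y, np.greater)[0]
    imin = argrelextrema(y, np.less)[0]
    ext = np.sort(np.concatenate([imax, imin]))   # alternating extrema
    A = 0.5 * np.abs(np.diff(y[ext]))             # half peak-to-peak
    lam = 2.0 * np.diff(x[ext])                   # ~ twice extrema spacing
    return x[ext][1:], A, lam                     # location, A(x), lambda(x)
\end{verbatim}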
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{./fig15.pdf}
\caption{\label{fig:meander} Instantaneous vorticity magnitude, $|\omega|D/U_{hub}$, with velocity minima (dots) and meander profile (line) for (a) R2-BN, (b) R2-B, (c) R3-BN, (d) R3-B. Average meander profile (e) amplitude, $\overline{A}$, and (f) wavelength, $\overline{\lambda}$, with respect to distance from the rotor plane, $x/D$, non-dimensionalized by the diameter $D$. }
\end{center}
\end{figure}
\indent The statistics of the meander profiles are further investigated in Fig. \ref{fig:meander_pdf} with the probability density function (PDF) of the amplitude and the wavelength shown at different locations downstream.
The amplitude PDF for the simulation with a nacelle model in Region 2 flattens as the distance from the turbine increases. At $x/D=2$ the highest amplitudes extend only to $A/D = 0.15$. The maximum amplitudes eventually extend beyond $A/D=0.25$, indicating that the centerline of the wake is displaced out to the tip shear layer by wake meandering. Conversely, the amplitude PDF of the simulation with a nacelle model in Region 3 has both a lower median and a lower standard deviation than Region 2. The standard deviation of the amplitude normalized by the average amplitude, $\sigma_A/\overline{A}$, an indication of the uncertainty in the amplitude, is nearly constant throughout the domain and in all simulations, at around 0.7.
At the $x/D=2$ location, the PDF is very thin, consistent with the tightly spiraling hub vortex. At locations further downstream, the amplitudes increase, but the maximum in Region 3 is always less than in Region 2. As for the simulations without a nacelle model, they are not shown at $x/D=2$ due to the manifestation of the unphysical centerline jet, which affects the location of the velocity minima in the inner wake. At locations further downstream, the simulations without the nacelle have slightly lower amplitude medians than the corresponding simulations with a nacelle model. The maximum amplitudes achieved are also lower. \\
\indent The PDFs of the wavelength show a gradual growth of the median and variance as the locations move further downstream. At $x/D=4$, most wavelengths for all cases are near $\lambda/D=1$. By $x/D=8$, the wavelengths increase significantly and the PDFs all have flattened peaks over $1 < \lambda/D < 3$. Although the wavelength PDFs of the simulations with a nacelle model far from the turbine have a slight shift towards higher wavelengths, the wavelengths for both operating conditions and nacelle treatments are not substantially different. \\
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{./fig16.pdf}
\caption{\label{fig:meander_pdf} Probability density function of the amplitude, $A/D$ (first row) and wavelength, $\lambda/D$ (second row) of the meander profiles at (a) $x/D=2$, (b) $x/D=4$, (c) $x/D=6$, and (d) $x/D=8$.}
\end{center}
\end{figure}
\subsection{Dynamic Mode Decomposition and Meander Profile}\label{sec:dmd}
\indent Up to this point, we have concentrated on the development of two large coherent structures in the wake: the hub vortex and wake meandering. Each structure is associated with a dominant frequency, and the dynamics are interpreted through the meander profile reconstruction. To further elucidate these coherent structures and their effect on the wake of the turbine, we decompose the wake using dynamic mode decomposition (DMD). DMD was first theoretically introduced by \citet{rowley2009spectral} as a technique to decompose the flow by spectral analysis of the Koopman operator, a linear operator associated with the full non-linear system. The Arnoldi-like method was improved by \citet{schmid2010dynamic} and is able to compute the modes from a finite sequence of snapshots of the flow field. \citet{sarmast2014mutual} used DMD to separate wind turbine tip vortex modes for analysis. The modal decomposition is able to extract spatial structures and their corresponding frequencies and growth rates without having the explicit dynamic operator. Because DMD can separate specific spatial modes by their individual frequencies, we can explicitly extract the modes related to the hub vortex and wake meandering. With the spectral analysis performed above, the specific frequencies of the hub vortex and the meandering wake are readily known from Fig. \ref{fig:linefft}. \\
\indent First, we give a brief overview of the algorithm; for more information, please see \citet{schmid2010dynamic} and \citet{sarmast2014mutual}. For our simulations, a sequence of three-dimensional instantaneous flow fields $\boldsymbol{u}_i(\boldsymbol{x}_j,t_i)$, $t_i = i\Delta t$, $i=0,1,\dots,N-1$, is assembled into a matrix
\begin{equation}
U_n = [\boldsymbol{u}_0, \boldsymbol{u}_1,..., \boldsymbol{u}_{N-1}],
\end{equation}
where $N$ is the number of snapshots, and $\Delta t$ is the time between each snapshot. In DMD, a linear mapping $A$ or $\widetilde{A}$ is assumed to relate a flow field $\boldsymbol{u}_j$ to the succeeding flow field $\boldsymbol{u}_{j+1}$ such that
\begin{equation}
\boldsymbol{u}_{j+1} = A \boldsymbol{u}_j = e^{\widetilde{A}\Delta t}\boldsymbol{u}_j,
\end{equation}
and decomposing the flow field into spatial eigenmodes $\phi_k(\boldsymbol{x}_j)$ and temporal coefficients $a_k(t_i)$
\begin{equation}
\boldsymbol{u}_{j} = \sum_{k=0}^{N-1} \boldsymbol{\phi}_k a_k(t_j) = \sum_{k=0}^{N-1} \boldsymbol{\phi}_k e^{\imath \omega_k j \Delta t} = \sum_{k=0}^{N-1} \boldsymbol{\phi}_k \lambda_k^j ,
\end{equation}
where $\imath \omega_k$ are the eigenvalues of $\widetilde{A}$, and $\lambda_k$ are the eigenvalues of $A$. The amplitude $d_k$ and energy $d_k^2$ of the spatial dynamic mode $\boldsymbol{\phi}_k$ are defined such that $\boldsymbol{\phi}_k = d_k \boldsymbol{v}_k$ with $\boldsymbol{v}_k^T \boldsymbol{v}_k = 1$. The eigenvalues $\lambda_k$, also referred to as the Ritz values, come in complex conjugate pairs and lie on the complex unit circle, $|\lambda_k| = 1$. The more familiar complex frequency is obtained as $\imath \omega_k = \log(\lambda_k) / \Delta t$; the real part of $\omega_k$ is the temporal frequency, and the imaginary part gives the exponential growth rate of the dynamic mode.\\
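A compact sketch of the standard SVD-based variant of the algorithm \citep{schmid2010dynamic} is given below for reference. The function and variable names are illustrative, and the optional rank truncation \texttt{r} is an assumption of this sketch rather than a step prescribed above.
\begin{verbatim}
# Sketch: SVD-based dynamic mode decomposition (illustrative).
import numpy as np

def dmd(snapshots, dt, r=None):
    """snapshots: (n_points, N) matrix of flow-field snapshots."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if r is not None:                        # optional rank truncation
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s   # low-rank mapping
    lam, W = np.linalg.eig(A_tilde)              # Ritz values lambda_k
    Phi = Y @ Vh.conj().T / s @ W                # dynamic modes phi_k
    i_omega = np.log(lam) / dt                   # i*omega_k
    d = np.linalg.lstsq(Phi, X[:, 0], rcond=None)[0]  # amplitudes d_k
    return Phi, lam, i_omega, d
\end{verbatim}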
\indent A substantial number of three-dimensional instantaneous snapshots are saved for each simulation in order to cover the amount of time needed to resolve the low frequencies of the hub vortex and wake meandering. Each snapshot contains the computational cells within a box one diameter wide and high, centered on the turbine and spanning the entire length of the computational domain. The mean flow is subtracted from each snapshot to obtain the fluctuating part of the flow field. The minimum time between snapshots is $\Delta t = T/12$, where $T$ is the rotor period, a time difference small enough to ensure that not only the low frequency modes but also the blade rotation frequency are captured. To ensure convergence of the mode decomposition, the number of snapshots is increased until the norm of the residuals, $\epsilon$, of the mapping operator becomes sufficiently small. Figure \ref{fig:dmd_validation}(a) shows the residuals decrease by several orders of magnitude as the number of snapshots is increased to $N = 512$. Based on the residuals, the number of snapshots used for subsequent analysis will be $N=512$. Moreover, several series of snapshots are used and averaged so the modes and frequencies are averaged over a time spanning $100T$. With $\Delta t = T/12$ and $N=512$, the low frequency precession of wake meandering will occur as many as five times, enabling its temporal resolution for our purposes. \\
\indent The energy $d_k^2$ and frequency $St = \Re( 2 \pi \imath \omega_k ) D/U_{hub}$ of the $k$th mode for the simulations with the nacelle model in Region 2 and Region 3 are shown in Fig. \ref{fig:dmd_validation}(b) and Fig. \ref{fig:dmd_validation}(c), respectively. The energy of each mode is normalized by the maximum energy. The maximum energy is associated with the wake meandering frequency $St=0.3$. Also included in each figure are two normalized energy spectra: one located in the near wake at $x/D=2$ and one located in the far wake at $x/D=5$. Both energy spectra have a maximum energy at $St=0.3$ (note that the extrema of the dynamic mode energies and of the energy spectra occur at the same frequency). A few energy peaks are present in the DMD. In the low frequency region, $St<1$, the dynamic mode energy peaks at the aforementioned $St=0.3$ and at $St=0.74$, frequencies similar to what was determined above to be the wake meandering frequency and the hub vortex frequency, respectively. The energy spectra confirm that DMD decomposes the flow field into modes corresponding to those present in the spectral analysis, with the hub vortex frequency only present in the energy spectrum at $x/D=2$ and the wake meandering frequency present in both energy spectra shown. However, due to the locations chosen for the energy spectra, the high frequencies related to the rotor frequency are not captured there but are found readily in the DMD. At the higher frequencies of the dynamic modes, $St > 1$, there are several peaks, including the rotor frequency of $St=2.3$. The energized modes with high frequencies, including $St=1$, relate to the rotor frequencies. Most high frequency modes contribute less to the energy of the flow than the low frequency modes like wake meandering and are related to energy modes present very close to the turbine blades. \\
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{./fig17.pdf}
\caption{\label{fig:dmd_validation} Validation of the dynamic mode decomposition. (a) Norm of the residuals $\epsilon$ of the DMD modes as the number of snapshots $N$ increases. Dynamic mode energy spectrum $d_k^2$ (circle), energy spectrum at $x/D=2$ (solid), energy spectrum at $x/D=5$ (dashed) for (b) R2-BN and (c) R3-BN.}
\end{center}
\end{figure}
\indent Next, we analyze the spatial modes provided by DMD of the simulations with the nacelle model for both operating conditions. We select the modes pertaining to the meandering wake and the hub vortex. Figure \ref{fig:dmd_modes}(a) shows the wake meandering dynamic mode at $St = 0.29$, represented by the streamwise coherent velocity $u_k$ and the two-dimensional ($x-z$) plane vector field, for the simulation in Region 2. The dynamic mode shows structures of sources and sinks of the streamwise coherent velocity at $x/D>2$, consistent with the instantaneous velocity field of wake meandering in Fig. \ref{fig:asn_inst}(a). The structures expand in the streamwise direction, similar to the wavelength elongation of the wake meandering due to the recovering wake. The streamwise coherent velocity from the dynamic mode corresponding to wake meandering, with a frequency $St = 0.3$, in the simulation in Region 3 is shown in Fig. \ref{fig:dmd_modes}(b).
Similar to the dynamic mode of the simulation in Region 2, sinks and sources in the coherent velocity appear downstream of the turbine. The wake meandering features begin to form further downstream, in agreement with the mean and instantaneous flow fields of the simulation with the nacelle model in Region 3. The meandering patterns are slightly weaker and are pulled more towards the centerline than in the simulation in Region 2. Another noteworthy feature of the wake meandering dynamic mode in both simulations is the prominent velocity regions around the location of the hub vortex. The velocity field in the mode shows an elongation and expansion of the hub vortex.
Separate spatial dynamic modes contain more of the dynamics of the hub vortex. The strongest hub vortex mode in the simulation in Region 2, with a frequency $St=0.73$ and an energy $d^2_k$ about 40\% of the wake meandering energy, is shown in Fig. \ref{fig:dmd_modes}(c). There is a strong streamwise velocity near the nacelle, indicative of the meandering in the inner wake caused by the hub vortex. The streamwise velocity expands outwards towards the tip shear layers quickly, in agreement with the analysis of the instantaneous wake. Downstream of the expansion towards the tip shear layers, the coherent meandering of the hub vortex is lost, but high fluctuations persist far downstream. The mode demonstrates that the hub vortex has an impact downstream, as the spectral analysis in Fig. \ref{fig:linefft}(a) suggests.
Figure \ref{fig:dmd_modes}(d) shows the streamwise coherent velocity dynamic mode of the simulation in Region 3 with a frequency of $St=0.74$. The dynamic mode of the hub vortex is very different from that of the simulation in Region 2. The streamwise velocity remains in its tight spiral around the centerline with weak positive and negative regions of the streamwise coherent velocity. The Region 3 hub vortex mode is further evidence that the hub vortex does not expand quickly and does not interact with the tip shear layer with an intensity similar to that seen in the simulation in Region 2. It is clear that the operating condition affects the expansion and stability of the hub vortex. \\
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{./fig18.pdf}
\caption{\label{fig:dmd_modes} Selected dynamic modes visualized by contours of the streamwise velocity and the two-dimensional ($x-z$) plane vector field. (a) R2-BN: $St=0.29$, (b) R3-BN: $St=0.3$, (c) R2-BN: $St=0.73$, and (d) R3-BN: $St=0.74$.}
\end{center}
\end{figure}
\indent To elucidate and characterize how certain frequencies create the dynamics in the far wake, the dynamic modes related to wake meandering and the hub vortex shown in Fig. \ref{fig:dmd_modes} are extracted to obtain a coherent velocity, which is used in conjunction with the meander reconstruction analysis described in section \ref{sec:meander}.
That is, instead of using finite temporal averaging to obtain a coherent velocity, the coherent velocity of the selected dynamic modes is used to create dynamic mode meander profiles analogous to the meander profiles shown above.
Two dynamic mode meandering profiles are created from selected dynamic modes: i) Summing the wake meandering mode and hub vortex mode (h+m) and ii) Selecting only the wake meandering mode (m).
The mean velocity and the coherent velocity, $U + u_k = U + \widetilde{u}$, are used to find the velocity minimum locations at each axial station downstream of the turbine, as described in the previous section. The dynamic mode meander profile is then obtained by low-pass spatial filtering of the velocity minima to form a continuous profile. Dynamic mode meander profiles are collected for each time instance using the velocity reconstructed from only the selected dynamic modes. \\
\indent Figure \ref{fig:dmd_meander}(a) shows an instance of both dynamic mode meander profiles, together with a meander profile created using finite average filtering, overlaid on the velocity $U + u_k$ from the simulation with the nacelle model in Region 2.
Immediately downstream of the turbine, both dynamic mode meander profiles have an increased amplitude but a similar wavelength relative to the finite-average-filtered profile. In the near wake, the meander profiles created with and without the hub vortex mode show significant differences until $x/D=2.5$, where the profiles begin to converge; the hub vortex mode interacts with the wake meandering mode. However, the hub vortex mode makes a large contribution to the meander profile after $x/D=5$, where the two profiles begin to diverge again.
The dynamic meander profiles from the simulation in Region 3, shown in Fig. \ref{fig:dmd_meander}(b), are similar to each other until $x/D=6$, where the profiles begin to diverge. This is in contrast to the Region 2 profiles because in Region 3 the hub vortex mode is significantly weaker. \\
\indent The average amplitude $\overline{A}/D$ of the dynamic mode meander profiles is shown in Fig. \ref{fig:dmd_meander}(c). The statistics of the dynamic mode meander profiles for Region 2 reveal that the mean amplitude is slightly higher for the (h+m) profile, which includes the hub vortex, than for the (m) profile, although the difference is not significant. In contrast, the hub vortex mode has a large effect on the average amplitude of the dynamic mode meander profile for the simulation in Region 3: the average amplitude of the (h+m) profile is significantly higher.
The average amplitude for both turbine operating regions captures the trends of the wake meandering and is higher than the corresponding meander profile amplitude shown in Fig. \ref{fig:meander}(e). The higher amplitude in the far wake for the dynamic modes suggests that there are some higher frequency modes that smooth the wake meandering.
However, the wavelength statistics in Fig. \ref{fig:dmd_meander}(d) show conclusively that the two dynamic modes can fully capture the expansion and elongation of the meandering wake, as the average wavelength compares well with Fig. \ref{fig:meander}(f). The wavelength grows relatively linearly, suggesting that the wake meandering dynamic mode is responsible through the gradual elongation of the coherent regions in the streamwise velocity.
Probability density functions of the amplitudes of the dynamic mode meander profiles at $x/D=3$ and $x/D=6$ are shown in Fig. \ref{fig:dmd_meander}(e) and Fig. \ref{fig:dmd_meander}(f), respectively. Noticeably, the meander profiles that contain the hub vortex mode have a higher probability of larger amplitudes. While adding a second mode necessarily increases the energy content compared with using the wake meandering mode alone, the energy and amplitude far from the turbine are markedly higher, indicating that the hub vortex mode has a large effect there. \\
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{./fig19.pdf}
\caption{\label{fig:dmd_meander} Dynamic mode meander profiles created from hub vortex and wake meandering mode (h+m), dynamic mode meander profile from wake meandering mode (m) and meander profile from complete flow field overlaid on contours and velocity minima (white dots) of the sum of the average velocity and selected hub vortex and wake meandering modes, $U + u_k$, non-dimensionalized by the hub velocity $U_{hub}$ for (a) R2-BN and (b) R3-BN. For reference, the corresponding temporal averaged wake meander profile is shown (dashed-dotted line). Characteristics of dynamic mode meander profiles (c) amplitude, $\overline{A}$, and (d) wavelength $\overline{\lambda}$ with respect to distance from rotor plane. Probability density function of amplitude, $A/D$, at (e) $x/D=3$ and (f) $x/D=6$.}
\end{center}
\end{figure}
\indent The above analysis shows that at minimum two dynamic modes are needed to capture most of the fundamental features of wake meandering downstream in the wake. While the energy of the meandering wake is slightly over-predicted with the two modes, the wavelength is captured quite well. Figure \ref{fig:dmd_meander_ratio}(a) shows the ratio of the average amplitude of the dynamic mode meander profile to that of the filtered meander profile of the complete flow field at $x/D=8$, as the number of dynamic modes, arranged by frequency in ascending order, included in the flow field is increased. It shows conclusively that the amplitude of the meander profile from the dynamic mode decomposition converges to that of the meander profile from temporal filtering with just a few of the low frequency modes for either operating condition. The ratio for each operating condition increases monotonically from $St=0$ to $St = 0.74$, the hub vortex frequency; afterwards, it slowly converges to unity. The low frequencies are the most important in capturing the meandering wake. This is a further indication that the low frequency meandering is captured using a few modes, namely the modes corresponding to the wake meandering and hub vortex frequencies. Figure \ref{fig:dmd_meander_ratio}(b) similarly shows the ratio of the average meander profile wavelength from the dynamic mode decomposition to that from the temporally filtered meander profile at $x/D=8$. The wavelength ratio peaks for both operating conditions at $St = 0.34$, near the wake meandering frequency, but when the hub vortex frequency is included the ratio trends towards unity, showing that both modes are necessary for describing the dynamic motions of the wake.
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{./fig20.pdf}
\caption{\label{fig:dmd_meander_ratio} Ratio of (a) the averaged amplitude and (b) the averaged wavelength of the dynamic mode meander profiles to those from the complete flow field as the number of employed frequency modes in ascending order are increased at $x/D=8$.}
\end{center}
\end{figure}
\section{Discussion and Conclusions}\label{sec:conclusion}
\indent Wind tunnel experiments and large-eddy simulations are carried out to investigate the flow past a model wind turbine with a rotor diameter of 1.1 m, sited in a wind tunnel and operating in Region 2 and Region 3.
The incoming turbulent flow is generated by spires at the inlet of the wind tunnel in both the experiments and the simulations. The flow field statistics of the wake behind the wind turbine predicted by the simulations are in good agreement with the measurements.
Simulations with and without a nacelle model are carried out to systematically study the effect of the nacelle on the hub vortex and the meandering motions at far wake locations.
While the simulations without a nacelle model accurately predict the mean velocity profiles in the far wake, they fail to capture the wake just behind the nacelle, instead predicting a jet and significantly different distributions of turbulence intensity close to the turbine along the centerline. Specifically, without a nacelle model the wake along the centerline remains a columnar jet instead of forming a hub vortex. The downwind locations of wake meandering correlate well with the region of high turbulence intensity for simulations both with and without a nacelle model. However, the simulations without a nacelle model predict lower turbulence intensity at far wake locations, and the location of maximum turbulence intensity is further downwind. A significant influence of the turbine operating condition on far wake turbulence intensity and wake meandering is observed: large amplitude wake meandering happens further downwind, and the region of high turbulence intensity is significantly longer, for the Region 3 operating condition.\\
\indent To further investigate the characteristics of wake meandering for different operating conditions and the effects of the nacelle, the instantaneous large-scale coherent wake meander profiles are constructed using the finite time averaging technique developed by \citet{howard2015statistics} and \citet{foti2016wake}. We show that the amplitude of wake meandering is smaller for the simulations without a nacelle model for both operating conditions. The amplitude is also lower for the turbine operating in Region 3 in comparison with Region 2. However, the wavelength of the wake meandering and its growth rate are similar at different downwind locations regardless of nacelle modelling or turbine operating condition. \\
\indent The wake meandering frequency is associated with $St \sim 0.3$ in many different studies \citep{medici2008measurements, chamorro2013interaction, okulov2014regular}, while the hub vortex is measured to have a frequency of $St \sim 0.7$ in \citet{iungo2013linear}, \citet{howard2015statistics} and \citet{foti2016wake}. For the present turbine, frequencies in those ranges are found for both turbine operating conditions. The spectral analysis reveals that the presence of these frequencies correlates with the locations where the coherent motions of wake meandering and the hub vortex are most prevalent. Dynamic mode decomposition of the flow allows us to extract and isolate the modes related to both the wake meandering frequency and the hub vortex frequency. The wake meandering dynamic mode consists of the large coherent oscillatory motions in the far wake. The hub vortex dynamic mode, while strongest around the nacelle, persists far downstream, especially in Region 2 where the hub vortex is more unstable. Two meander profiles are created, using only the wake meandering mode and using both the wake meandering and hub vortex modes, respectively. It is observed that the amplitude of the meander profile using both modes is higher than that using only the wake meandering mode, especially at downstream locations. \\
\indent From the evidence provided through our analyses, our work has further confirmed the importance of the nacelle for wake meandering in the far wake of a 1.1 m diameter wind turbine under different operating conditions, in addition to previous works on a 0.5 m diameter hydrokinetic turbine \citep{kang2014onset} and a 0.13 m diameter model wind turbine \citep{foti2016wake}. Future work will look at the nacelle effect for utility scale wind turbines and investigate the similarity between scales for the hub vortex and wake meandering.
\begin{acknowledgements}
This work was supported by U.S. Department of Energy (DE-EE0002980, DE-EE0005482 and DE-AC04-94AL85000), Xcel Energy through the Renewable Development Fund (grant RD4-13), and Sandia National Laboratories. Computational resources were provided by Sandia National Laboratories and the University of Minnesota Supercomputing Institute. Sandia National Laboratories is a multimission laboratory managed and operated
by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525.
\end{acknowledgements}
\section{Introduction}
Although Moore's Law has governed the semiconductor industry for over half a century, it is widely recognized that it is becoming harder to sustain. ``Integration of separately packaged smaller functions'' is considered by Moore himself~\cite{Moore.2006} and the semiconductor industry to be its extension.
The traditional VLSI system is implemented on a monolithic die, also known as a system-on-chip (SoC). For the past few decades, the growth of transistor counts on a single die has been sustained by steady improvements in process technology and increases in die area. However, as process technology improvement has slowed down and the chip area is approaching the limit of the lithographic reticle, transistor growth is going to stagnate~\cite{Loh.2021}\cite{Naffziger.2021}. Meanwhile, a larger chip means a more complex design, and the resulting poor yield leads to even higher costs. Re-partitioning a monolithic SoC into several chiplets can improve the overall yield of the dies, thereby reducing the cost.
Besides yield improvement, chiplet reuse is another characteristic of multi-chiplet architecture. In the traditional design flow, IP or module reuse is widely used; however, this approach still requires repeating system verification and chip physical design, which occupy a large part of the total non-recurring engineering (NRE) cost. Therefore, chiplet reuse, which saves the overhead of re-verifying the system and redoing the chip physical design, can save even more cost.
With the advent of many multi-chip designs, especially products from industry~\cite{Naffziger.2021}\cite{Xia.2021}, the economic effectiveness of multi-chiplet architecture has become a consensus. However, in practice, we find that the cost advantage of a multi-chip system is not easy to achieve due to the overhead of packaging and the die-to-die (D2D) interface. Compared with SoC, the cost of a multi-chip system is much more difficult to evaluate at the early stage of VLSI system design. Without careful evaluation, adopting a multi-chiplet architecture may lead to even higher costs. Previous works~\cite{Stow.2016}\cite{Stow.2017} focus on the manufacturing cost of dies and silicon interposers but neglect other significant costs such as substrates, D2D overhead, and NRE cost.
To better guide VLSI system design and explain architecture challenges~\cite{Loh.2021} such as the partitioning problem, we build a quantitative cost model, \textit{Chiplet Actuary}, based on three typical multi-chip integration technologies. Based on this model, we discuss the total cost of different integration schemes from various perspectives. External data~\cite{Khan.2020}\cite{HIR}\cite{Li.2020}\cite{LLC_assembly}\cite{D5}\cite{ODSA} and in-house data are used to provide a relatively accurate estimate of the final total cost. In summary, this paper makes the following major contributions to VLSI system design:
\begin{itemize}[topsep=3pt, leftmargin=12pt, itemsep=2pt]
\item We abstract monolithic SoC and multi-chip integration into different levels of concepts: module, chip, and package, by which we build a unified architecture.
\item We present a quantitative cost model \textit{Chiplet Actuary} to estimate various components of the total system cost. To the best of our knowledge, this model is the first to introduce D2D overhead and NRE cost.
\item Based on \textit{Chiplet Actuary}, we put forward an analytical method for decision-making on chiplet architecture problems: which integration scheme to use, into how many chiplets to partition, whether to reuse packaging, how to leverage chiplet reusability, and how to exploit heterogeneity. Instructive insights are specified in Section \ref{summary}.
\end{itemize}
\section{Background}
\subsection{Multi-chip Integration}
\begin{figure}[b]
\includegraphics[width=0.42\textwidth]{figures/Synopsys.pdf}
\caption{Different multi-chip integration technologies\protect~\cite{Synopsys} \label{synopsys}}
\end{figure}
Multi-chip integration is not a recent innovation but a technology that has been developing over decades to build better VLSI systems. As shown in Figure \ref{synopsys}, the most widely used integration scheme is assembling different dies on a unifying substrate, also known as the typical multi-chip module (MCM) or system-in-package (SiP). Compared with MCM, integrated fan-out (InFO) technology is relatively more advanced. Developed from fan-out wafer-level packaging (FOWLP), InFO uses a redistribution layer (RDL) to offer smaller footprints and better electrical performance than the conventional substrate. According to the process sequence, InFO can be divided into chip-first and chip-last (or RDL-first). In addition to 2D integration, silicon-interposer-based 2.5D integration, also called Chip-on-Wafer-on-Substrate (CoWoS) by TSMC, uses a chip in a relatively mature process node to interconnect and integrate chiplets and memory dies. Though these three mainstream technologies are all used for multi-chip integration, they differ in package size, IO count, data rates, and cost. Therefore, chip designers are supposed to choose the right solution according to design objectives and cost constraints.
\subsection{Yield Model}
One of the core components of the cost model is the yield model, which has been an important topic since the advent of the integrated circuit industry. For predicting die yields, the Poisson, Negative Binomial, and other industry models are used to provide accurate results. Among these models, Seeds' model and the Negative Binomial model are the most widely used, both taking the form~\cite{Cunningham.1990}
\begin{equation}
{\rm Yield_{~die}}=\left(1 + \frac{DS}{c}\right)^{-c} ,
\end{equation}
where $D$ is the defect density, $S$ is the die area, and $c$ is the cluster parameter in the Negative Binomial model or the number of critical levels in Seeds' model. We have followed this model and used more realistic parameters. Figure \ref{Yield} shows the yield-area and the cost-area relations of different technologies under this model. All costs are normalized to the cost per area of the raw wafer.
\begin{figure}[htbp]
\setlength{\abovecaptionskip}{5pt}
\includegraphics[width=0.39\textwidth]{figures/Yield.pdf}
\caption{Yield/Cost-Area relation of different technologies \label{Yield}}
\end{figure}
The traditional SoC is manufactured in a serial production line, so the overall yield is estimated as the product
\begin{equation}
Y_{\rm ~overall} = Y_{\rm ~wafer} \times Y_{\rm ~die} \times Y_{\rm ~packaging} \times Y_{\rm ~test} .
\end{equation}
However, for the multi-chip system, yield cannot be estimated by simple multiplication because of the more complex manufacturing flow.
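A direct transcription of the die yield model and the serial SoC yield into Python is shown below; the numerical values are placeholders, not the calibrated parameters used in the experiments.
\begin{verbatim}
# Sketch: Seeds'/Negative-Binomial die yield and serial SoC yield.
def die_yield(D, S, c):
    """Yield = (1 + D*S/c)^(-c); D [defects/cm^2], S [cm^2]."""
    return (1.0 + D * S / c) ** (-c)

# placeholder parameters, not calibrated values
y_die = die_yield(D=0.10, S=1.0, c=3.0)          # ~0.906
y_wafer, y_packaging, y_test = 0.99, 0.99, 0.99
y_overall_soc = y_wafer * y_die * y_packaging * y_test
\end{verbatim}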
\subsection{NRE and RE Cost}
The total cost of VLSI systems can be roughly divided into two kinds: non-recurring engineering (NRE) cost and recurring engineering (RE) cost. NRE cost refers to the one-time cost of designing a VLSI system, including software, IP licensing, module/chip/package design, verification, masks, etc. RE cost refers to the fabrication costs for massive production, including wafers, packaging, test, etc.
For one VLSI system, the final engineering cost consists of the RE cost and the amortized NRE cost. Amortization depends mainly on the production volume: if the production quantity is small, the NRE cost dominates; conversely, if the quantity is large enough, the NRE cost becomes negligible.
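The amortization logic can be made concrete with a one-line calculation; the numbers below are placeholders chosen only to illustrate the two regimes.
\begin{verbatim}
# Sketch: per-unit engineering cost with amortized NRE (placeholders).
def cost_per_unit(nre, re_per_unit, quantity):
    return re_per_unit + nre / quantity

cost_per_unit(nre=5e7, re_per_unit=50.0, quantity=1e4)  # 5050.0, NRE-dominated
cost_per_unit(nre=5e7, re_per_unit=50.0, quantity=1e7)  # 55.0, RE-dominated
\end{verbatim}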
\section{Chiplet Actuary Model}
\subsection{High Level Abstraction}
Our model is built to compare the RE and NRE costs of monolithic SoC and multi-chip integration. Because the problem is complex, we use several necessary assumptions to exclude non-primary factors:
\begin{itemize}[topsep=3pt, leftmargin=15pt, itemsep=2pt]
\item All chiplets under the same process node share the same die-to-die (D2D) interface with different channel numbers;
\item Performance and power are not considered in this model;
\item Different parts of the NRE cost are independent so that they can be estimated separately.
\end{itemize}
Besides the above assumptions, many other approximations are used in the model. More details can be found in our open-source code of the model\footnote{Repository URL: https://github.com/Yinxiao-Feng/DAC2022.git}.
\begin{figure}[htbp]
\includegraphics[width=0.41\textwidth]{figures/High_level.pdf}
\caption{High-level cost model diagram \label{HL}}
\end{figure}
As shown in Figure \ref{HL}, module, chip and package are the three main concepts involved in our model. A group of systems is built from a group of modules. Each module corresponds to a chiplet. Each system can be an SoC formed directly from modules or a multi-chip integration formed from chiplets. The relations can be described as follows:
\begin{equation}
\begin{aligned}
m_i \in & \{m_1, m_2, ... , m_{D2D}\} = M \\
c_i = & ~{\rm Chip}(\{m_i, m_{D2D}\}) \in C \\
{\rm SoC}_j = & ~{\rm Package}({\rm Chip}(\{m_{k_1}, m_{k_2}, \dots\})) \\
{\rm MCM}_j = & ~{\rm Package}(\{c_{k_1}, c_{k_2}, \dots\}) ,
\end{aligned}
\end{equation}
where $m$ and $c$ denote a module and a chiplet, and $\rm Package(\cdot)$ and $\rm Chip(\cdot)$ are the methods that form a system from chips and a chip from modules, respectively. Different from the general concept of a module, our module refers to an indivisible group of functional units. The D2D interface is a special module; each chiplet is made up of a functional module together with a D2D interface. D2D interfaces under different process nodes are regarded as distinct modules.
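The abstraction can also be expressed directly as data structures. The Python sketch below mirrors the relations above; the concrete fields (areas, node names, the D2D area fraction) are illustrative assumptions, not values from the model.
\begin{verbatim}
# Sketch of the module/chip/package abstraction (illustrative fields).
from dataclasses import dataclass

@dataclass(frozen=True)
class Module:
    name: str
    area: float    # mm^2
    node: str      # process node, e.g. "7nm"

@dataclass(frozen=True)
class Chip:
    modules: tuple  # functional module(s) plus a D2D module

def make_chiplet(m: Module, d2d_fraction: float) -> Chip:
    """Pair a module with a D2D interface sized as an area fraction."""
    d2d = Module("D2D_" + m.node, m.area * d2d_fraction, m.node)
    return Chip(modules=(m, d2d))

# an MCM system is then a package over a set of chiplets
core = Module("core", area=50.0, node="7nm")
mcm = [make_chiplet(core, d2d_fraction=0.1) for _ in range(4)]
\end{verbatim}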
\subsection{RE Cost Model}
The RE cost in our model consists of five parts: 1) cost of raw chips, 2) cost of chip defects, 3) cost of raw packages, 4) cost of package defects, 5) cost of wasted known good dies (KGDs) resulting from packaging defects. Other costs such as bumping, wafer sort, and package test are also included but not itemized separately because they are not so significant~\cite{Stow.2016}\cite{Stow.2017}.
On the basis of previous works~\cite{Stow.2016}\cite{Stow.2017}, we make several improvements. The first is the consideration of D2D interface overhead. For any multi-chip system, especially one with high interconnection bandwidth, the D2D interface occupies a considerable portion of the area~\cite{ODSA}. In our model, we regard the D2D interface as a special module shared by all chiplets. It takes a certain percentage of the chip area depending on the technology and architecture.
Then, more multi-chip integration models are included. MCM is similar to SoC in that chips are flipped directly onto a unified organic substrate. The difference is that MCM needs additional substrate layers for interconnection, so MCM has a growth factor on the substrate RE cost. As for InFO and 2.5D, the interposer cost is calculated similarly to the die cost, and the bump cost and bonding yield are counted twice, on the chip side and on the substrate side. The total cost resulting from packaging is
\begin{equation}
\begin{aligned}
\rm
Cost_{~packaging} = & \rm ~Cost_{~Raw~Package} \\
+ & {\rm ~Cost_{~interposer}} \times \left(\frac{1}{y_1 \times y_2^n\times y_3}-1\right) \\
+ & {\rm ~Cost_{~substrate}} \times \left(\frac{1}{y_3}-1\right) \\
+ & {\rm ~Cost_{~KGD}} \times \left(\frac{1}{y_2^n \times y_3}-1\right) ,
\end{aligned}
\end{equation}
where $y_1$ is the yield of the interposer, $y_2$ is the bonding yield of the chips, and $y_3$ is the bonding yield of the interposer. The difference between chip-first and chip-last is also considered. As shown in the equations below,
\begin{equation}
\begin{aligned}
\rm
Cost_{~chip-first} & \rm = \frac{\sum{\frac{C_{chip}}{Y_{chip}}} + C_{package}}{Y_{package}} \\
\rm
Cost_{~chip-last} & = \frac{\rm \frac{C_{package}}{Y_{package}}+\sum{\left(\frac{C_{chip}}{Y_{chip}}+C_{bond}\right)}}{Y_{\rm bonding}^n} ,
\end{aligned}
\end{equation}
though the chip-first packaging flow is simpler, the poor packaging yield would result in a huge waste of KGDs. Therefore, chip-last packaging is the preferred choice for multi-chip systems, and our experiments below are based on it.
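The packaging-related terms translate directly into code. The sketch below transcribes the two equations above; the helper names are illustrative and all argument values are left to the caller.
\begin{verbatim}
# Sketch of the packaging-related RE cost terms (illustrative helpers).
def packaging_cost(raw_pkg, interposer, substrate, kgd_cost, y1, y2, y3, n):
    """y1: interposer yield, y2: chip bonding yield,
       y3: interposer bonding yield, n: number of chips."""
    return (raw_pkg
            + interposer * (1.0 / (y1 * y2**n * y3) - 1.0)
            + substrate * (1.0 / y3 - 1.0)
            + kgd_cost * (1.0 / (y2**n * y3) - 1.0))

def cost_chip_first(c_chips, y_chips, c_pkg, y_pkg):
    return (sum(c / y for c, y in zip(c_chips, y_chips)) + c_pkg) / y_pkg

def cost_chip_last(c_chips, y_chips, c_pkg, y_pkg, c_bond, y_bond):
    kgd = sum(c / y + c_bond for c, y in zip(c_chips, y_chips))
    return (c_pkg / y_pkg + kgd) / y_bond ** len(c_chips)
\end{verbatim}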
We break down the various components of the total RE cost to better analyze the reasons behind it. Since we find that the cost of wasted KGDs resulting from packaging takes a significant proportion of the total cost, especially when the die cost is high and the packaging yield is poor, this part of the cost is counted separately.
\subsection{NRE Cost Model}
NRE cost is rarely discussed quantitatively in previous works because it depends on the particular circumstances of each design team. Nevertheless, because it is so essential, we need to build a model to guide the design of VLSI systems.
We use the area as the unified measure. In our model, the NRE cost consists of three parts: 1) cost for designing modules, 2) cost for designing chips, 3) cost for designing package. For any chip $c$, the NRE cost can be estimated by the equation
\begin{equation}
{\rm Cost} = K_cS_c + \sum_{m_i \in c}{K_{m_i}S_{m_i}} + C ,
\end{equation}
where $S_c$ is the area of the chip and $S_{m_i}$ is the area of module $i$. $K_c$ and $K_m$ are factors associated with design complexity and design capability: $K_c$ is determined by NRE costs related to the chip area, such as system verification and chip physical design; $K_m$ is determined by NRE costs related to the module area, such as module design and block verification; and $C$ is the fixed NRE cost for each chip, independent of area, such as IP licensing and full masks. The NRE model can reflect the difference between module-reuse-based SoC and chiplet-reuse-based multi-chip integration. For a group of systems $J$ built as monolithic SoCs, the total NRE cost can be expressed as
\begin{equation}
\begin{aligned}
{\rm Cost} & = \sum_{j \in J}{(K_{c_j}S_{c_j} + C_j + K_{p_j}S_{p_j} + C_{p_j})} \\
& + \sum_{m_i \in M}K_{m_i}S_{m_i} ,
\end{aligned}
\end{equation}
where $K_{p_j}$ is the cost factor of system $j$ related to the integration technology, $S_{p_j}$ is the package area, and $C_{p_j}$ is the fixed NRE cost for each package independent of area. The same module needs to be designed only once, but every chip needs to be individually designed. If we build these systems by multi-chip integration, the total NRE cost changes into
\begin{equation}
\begin{aligned}
{\rm Cost} & = \sum_{j \in J}{(K_{p_j}S_{p_j} + C_{p_j})} \\
& + \sum_{c_i \in C}(K_{c_i}S_{c_i} + K_{m_i}S_{m_i} + C_i) \\
& + \sum_{n}{C_{{\rm D2D}_n}} ,
\end{aligned}
\end{equation}
where $C_{{\rm D2D}_n}$ is the NRE cost of designing the D2D interface under process node $n$. It is obvious that multi-chip integration benefits not only from module reuse but also from chip reuse.
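A compact sketch of the two NRE totals may also help; again, this is our illustrative rendering, with tuple layouts and names chosen by us rather than taken from the model's released code:
\begin{verbatim}
def nre_soc(soc_systems, module_areas, K_m):
    # soc_systems: one (K_c, S_c, C, K_p, S_p, C_p) tuple per SoC design;
    # every chip is designed individually, every module only once.
    chips = sum(K_c * S_c + C + K_p * S_p + C_p
                for (K_c, S_c, C, K_p, S_p, C_p) in soc_systems)
    modules = sum(K_m * S for S in module_areas)
    return chips + modules

def nre_multichip(packages, chiplets, d2d_costs):
    # packages: (K_p, S_p, C_p) per system; chiplets: one
    # (K_c, S_c, K_m, S_m, C) tuple per distinct chiplet design;
    # d2d_costs: one interface design cost per process node used.
    pkg = sum(K_p * S_p + C_p for (K_p, S_p, C_p) in packages)
    chip = sum(K_c * S_c + K_m * S_m + C
               for (K_c, S_c, K_m, S_m, C) in chiplets)
    return pkg + chip + sum(d2d_costs)
\end{verbatim}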
\begin{figure*}[tb]
\setlength{\textfloatsep}{0pt}
\includegraphics[width=0.98\textwidth]{figures/RE_Cost.pdf}
\caption{Normalized RE cost comparison among different integrations under different technologies \label{RE_Cost}}
\end{figure*}
\section{Model Validation and Discussion}
Data used in the experiments are from commercial databases~\cite{LLC_assembly}, public information~\cite{Khan.2020}\cite{HIR}\cite{Li.2020}\cite{D5}\cite{ODSA}, and in-house sources. The experimental results are convincing under these settings, but applying the model to other cases requires incorporating the latest relevant data as model parameters.
\subsection{Validation and Comparison of RE cost}
\label{RE}
We validate our model against public works. AMD came up with the well-known chiplet architecture~\cite{Naffziger.2021}. As Figure \ref{AMD} shows, AMD claims that their chiplet-based products have a considerable cost advantage over monolithic SoC. We validate our model on AMD's design based on external and in-house data. Considering that the TSMC 7nm and GF 12nm processes had only just reached mass production when the Zen3 project was initiated, relatively high defect density parameters (0.13 for 7nm, 0.12 for 12nm, speculated from public data~\cite{D5}) are used. The comparison shows die cost results similar to AMD's: multi-chip integration can save up to 50\% of the die cost. However, AMD's comparison omits the additional costs of reintegration. When taking packaging overhead into account, the advantages of multi-chip are reduced; for the 16-core system in particular, the packaging cost accounts for 30\%. As the yield of 7nm technology has improved in recent years, the advantage becomes even smaller.
\begin{figure}[tb]
\includegraphics[width=0.37\textwidth]{figures/AMD.pdf}
\caption{Normalized RE cost comparison for AMD's chiplet architecture.\label{AMD} \protect \footnotemark }
\end{figure}
\footnotetext{The cost of packaging is the sum of the cost of raw package, the cost of package defects, and the cost of wasted KGDs.}
Based on recent data, we study further explorations of RE cost under various integrations and technologies. We divide a monolithic chip into different numbers of chiplets and then assemble them by various integration methods. Referring to EPYC~\cite{Naffziger.2021}, a 10\% D2D interface area overhead is assumed, and no reuse is utilized. All costs are normalized to that of the 100 $mm^2$ SoC.
As Figure \ref{RE_Cost} shows, there are significant advantages for advanced technology (5nm) because the cost resulting from die defects accounts for more than 50\% of the total manufacturing cost of the monolithic SoC at 800 $mm^2$. As for mature technology (14nm), though there are also up to 35\% cost-savings from yield improvement, the cost advantage of multi-chip is not as significant because of the D2D and packaging overhead (>25\% for MCM, >50\% for 2.5D). For any technology node, the benefits increase with area, and the turning point for advanced technology comes earlier than for mature technology. Since InFO- and 2.5D-based multi-chip integration relies on a large monolithic interposer, it also suffers from the poor yield of the complex packaging process; moreover, bonding defects lead to wasted KGDs, so the cost of packaging (50\% at 7nm, 900$mm^2$, 2.5D) is comparable with the chip cost. Therefore, advanced packaging technologies are only cost-effective under advanced process technology.
Another important insight concerns granularity. The cost benefits from smaller chiplet granularity have diminishing marginal utility. As the number of chiplets increases (3$\rightarrow$5), the additional cost-saving from die defects becomes negligible (<10\% at 5nm, 800$mm^2$, MCM) while the overhead grows.
\subsection{Total Cost Comparison of Single System}
\label{single_system}
Though RE cost is a major consideration, the NRE cost is often the determinant, especially for systems without huge production guarantees. Take a system with 800$mm^2$ of module area as an example. We implement the system as a monolithic SoC and as a two-chiplet MCM, respectively. D2D overhead is again assumed to be 10\%. NRE cost is amortized over each system depending on the number of modules and chips included. All costs are normalized to the RE cost of the SoC.
\begin{figure}[tb]
\includegraphics[width=0.47\textwidth]{figures/single_total.pdf}
\caption{Normalized total cost structure of single system}
\label{single}
\end{figure}
As shown in Figure \ref{single}, because of the large total module area, the NRE overhead of the D2D interface and packaging is no more than 2\% and 9\% (2.5D), respectively, and the total NRE cost for designing modules remains the same. However, each chiplet carries a high fixed NRE cost, such as masks; hence, multi-chip leads to very high NRE costs (36\% at 500k units) for designing and manufacturing chips. For 5nm systems, multi-chip architecture starts to pay back when the quantity reaches two million. For smaller systems, the turning point in production quantity is even higher. Thus, monolithic SoC is often the better choice for a single system unless the area or the production quantity is large enough.
\section{Chiplet Reuse Scheme Exploration}
\label{sec_reuse}
There are several common chiplet reuse schemes in the industry, such as EPYC~\cite{Naffziger.2021} and LEGO~\cite{Xia.2021}. In this section, we show how these architectures achieve cost benefits and, from these explorations, how to adopt multi-chiplet architectures appropriately.
\begin{figure}[h]
\includegraphics[width=0.45\textwidth]{figures/reuse_pattern.pdf}
\caption{Different reuse schemes \label{reuse_pattern}}
\end{figure}
\subsection{Single Chiplet Multiple Systems (SCMS)}
\label{sec_SCMS}
As shown in Figure \ref{reuse_pattern}(a), SCMS is a multi-chip architecture that uses a single kind of chiplet to build several systems\footnote{Symmetrical placement requires a symmetrical chiplet; otherwise, two mirrored chiplets are necessary.}. We take a 7nm chiplet with 200$mm^2$ of module area as an example. Three systems containing 1, 2, and 4 chiplets are built based on MCM and 2.5D, and the production quantity for each system is assumed to be 500,000. Two conditions, with and without package reuse, are also considered. All costs are normalized to the RE cost of the 4X MCM system.
As Figure \ref{SCMS} shows, chiplet reuse yields a vast chip NRE cost-saving (nearly three-quarters for the 4X system) compared with monolithic SoC. The advantage of the SCMS reuse scheme is that only one chiplet is needed, so it comes into effect immediately without making multiple chips. This architecture is suitable for one product line with different grades. The disadvantages are that D2D interconnections lead to significant overhead and, since there is only one kind of chiplet, there is no possibility of heterogeneous technology.
\begin{figure}[tb]
\includegraphics[width=0.4\textwidth]{figures/SCMS.pdf}
\caption{Normalized total cost of SCMS reuse scheme\label{SCMS}}
\end{figure}
If the package is reused among these three systems, the NRE cost of the package for the largest 4X system is reduced by two-thirds. However, the total cost of the smallest 1X system increases by more than 20\%. Package reuse saves amortized package NRE cost for larger systems but wastes RE cost for smaller systems. Therefore, whether to reuse the package depends on which of the two accounts for the larger proportion.
For advanced packaging such as 2.5D, if the 4X interposer is reused in the 1X system, packaging accounts for more than 50\% of the cost. Therefore, package reuse is uneconomical for high-cost 2.5D integrations, but 2.5D can still benefit from chiplet reuse.
\subsection{One Center Multiple Extensions (OCME)}
\label{sec_OCME}
As Figure \ref{reuse_pattern}(b) shows, the OCME architecture has a reused die (C) in the center and various extension chips with the same footprint placed around it. We take a 7nm system with four 160$mm^2$ sockets as an example. Two different extension dies \{X, Y\} are used to build four different systems, and the production quantity for each system is assumed to be 500,000. Both conditions, with and without package reuse, are taken into account, and all costs are normalized to the RE cost of the largest MCM system. We also examine the possibility that the center die is designed in a relatively mature process technology (14nm).
\begin{figure}[b]
\includegraphics[width=0.41\textwidth]{figures/OCMS.pdf}
\caption{Normalized total cost of OCME reuse scheme}
\label{OCME}
\end{figure}
Figure \ref{OCME} shows the amortized total cost of SoC, ordinary MCM, package-reused MCM, and package-reused heterogeneous MCM. The reuse benefit is not as evident (NRE cost-saving < 50\%) as in the SCMS scheme because three different chiplets are used and the average reuse per chiplet is lower. Therefore, the OCME scheme needs more systems to come into effect.
The advantage of the OCME reuse scheme is the possibility of heterogeneity. With heterogeneous integration, shown in Figure~\ref{OCME}, the total costs are further reduced by more than 10\%; for the single-C system in particular, the cost-saving is almost one half. For systems that share a large area of modules that do not benefit from advanced process technology nodes, adopting the OCME scheme is more cost-effective.
\vspace{-2pt}
\subsection{A few Sockets Multiple Collocations (FSMC)}
\label{sec_FSMC}
Besides the two schemes above, a package with several chip sockets can hold even more systems. As shown in Figure \ref{reuse_pattern}(c), assume there are $n$ different chiplets with the same footprint and the package has $k$ sockets. It follows that up to $\sum_{i=1}^{k}{(C_{n+i-1}^{i})}$ different systems can be built; it takes only six chiplets and one four-socket package to build up to 119 diverse systems. We ideally assume that all of these reuse possibilities are utilized and that each system has a production quantity of 500,000. Five situations, from low to high reuse counts, are compared by average normalized cost.
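The collocation count is a standard multiset count, and the expression above can be evaluated with a small helper (ours, shown on a deliberately small configuration):
\begin{verbatim}
from math import comb

def num_systems(n_chiplets: int, k_sockets: int) -> int:
    # Multisets of size i drawn from n chiplet types: C(n + i - 1, i);
    # occupying anywhere from 1 to k sockets gives the sum below.
    return sum(comb(n_chiplets + i - 1, i)
               for i in range(1, k_sockets + 1))

print(num_systems(3, 2))  # 3 single-chiplet systems + 6 pairs = 9
\end{verbatim}
Note that the formula is an upper bound; placement symmetry and other physical constraints can reduce the number of distinct systems that are actually buildable.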
\begin{figure}[tb]
\includegraphics[width=0.34\textwidth]{figures/FSMC.pdf}
\caption{Normalized total cost of FSMC reuse scheme}
\label{FSMC}
\end{figure}
As shown in Figure \ref{FSMC}, the more the chiplets are reused, the greater the benefit from NRE cost amortization. When reusability is fully exploited, the amortized NRE cost becomes small enough to ignore. At this point, the huge cost-saving potential of multi-chip architecture is revealed: the cost advantage comes not only from RE cost-saving but also from NRE cost-saving.
The results above are idealized because not every chiplet can be reused many times, and not every system has actual demand. After all, few systems reach billions of units, so room for NRE cost amortization always exists. Given that most chip design teams have limited design capability and production volume, it may be more economical to build systems from a few excellent chiplets contributed by different specialized teams.
\vspace{-2pt}
\section{Summary}
\label{summary}
Multi-chip architecture has become a future trend. However, from our point of view, the benefit of multi-chip architecture is not unconditional but depends on many complicated factors. To help chip architects make better decisions on multi-chip architectures, we build a quantitative model for cost comparison among the different alternatives. Our model allows designers to estimate the cost at an early design stage. We have also shown how multi-chip architecture can actually benefit from yield improvement, chip and package reuse, and heterogeneity. The takeaways of this paper are summarized as follows:
\begin{itemize}[topsep=3pt, leftmargin=10pt, itemsep=2pt]
\item Multi-chip architecture begins to pay off when the cost of die defects exceeds the total cost resulting from packaging; the closer the system is to the \textit{Moore Limit} (the largest area at the most advanced technology), the higher the cost benefit of multi-chip architecture. RE cost benefits from smaller chiplet granularity have diminishing marginal utility, so splitting a single system into two or three chiplets is usually sufficient. (Section \ref{RE})
\item For a single system, monolithic SoC is a better choice unless the production quantity is large enough to amortize the NRE overhead of multiple chiplets. (Section \ref{single_system})
\item Whether to reuse packaging depends on whether the RE or the amortized NRE cost is dominant. (Section \ref{sec_SCMS}, \ref{sec_OCME})
\item For systems of multiple grades, the SCMS scheme brings significant cost advantages; for systems that share a large area of ``unscalable'' modules, the OCME scheme is more cost-effective; the FSMC scheme provides maximum reuse possibilities. (Section \ref{sec_reuse})
\item The basic principle is to build more systems from fewer chiplets, and the cost benefits of chiplet reuse are more evident for finely segmented demands. (Section \ref{sec_FSMC})
\item Despite all these benefits, Moore's Law has unfortunately not been fundamentally extended. For ultra-high-performance systems close to the \textit{Moore Limit}, the interconnection requirements are too high to be supported by the organic substrate, so advanced packaging technologies such as InFO and 2.5D are necessary. However, with a monolithic interposer, advanced packaging technologies still suffer from poor yield and area limits.
\end{itemize}
|
2,869,038,154,039 | arxiv | \section{Introduction} \label{intro}
\subsection{Background and Related Work}
The next-generation power grid, i.e., the smart grid, relies on advanced control and communication technologies. This critical cyber infrastructure makes the smart grid vulnerable to hostile cyber-attacks \cite{Liang16,wang2013cyber,yan2012survey}. The main objective of attackers is to damage or mislead the state estimation mechanism in the smart grid in order to cause wide-area power blackouts or to manipulate electricity market prices \cite{Xie10}. There are many types of cyber-attacks; among them, false data injection (FDI), jamming, and denial of service (DoS) attacks are well known. FDI attacks add malicious fake data to meter measurements \cite{Liu09,Bobba10,Li_15,Necip18}, jamming attacks corrupt meter measurements via additive noise \cite{Necip18Arxiv}, and DoS attacks block the system's access to meter measurements \cite{Asri15,zhang2011distributed,Necip18}.
The smart grid is a complex network, and any failure or anomaly in one part of the system may inflict huge damage on the overall system in a short period of time. Hence, early detection of cyber-attacks is critical for a timely and effective response. In this context, the framework of quickest change detection \cite{Poor08,Basseville93,Veeravalli14,Polunchenko12} is quite useful. In quickest change detection problems, a change occurs in the sensing environment at an unknown time, and the aim is to detect the change as soon as possible with a minimal level of false alarms, based on measurements that become available sequentially over time. After obtaining measurements at a given time, the decision maker either declares a change or waits for the next time interval to collect further measurements. In general, as the desired detection accuracy increases, the detection speed decreases. Hence, the stopping time, at which a change is declared, should be chosen to optimally balance the tradeoff between detection speed and detection accuracy.
If the probability density functions (pdfs) of meter measurements for the pre-change case, i.e., normal system operation, and the post-change case, i.e., after an attack/anomaly, can be modeled sufficiently accurately, the well-known cumulative sum (CUSUM) test is the optimal online detector \cite{Moustakides_86} under Lorden's criterion \cite{Lorden_71}. Moreover, if the pdfs can be modeled with some unknown parameters, the generalized CUSUM test, which makes use of estimates of the unknown parameters, has asymptotic optimality properties \cite{Basseville93}. However, CUSUM-based detection schemes require perfect models for both the pre- and post-change cases. In practice, the capabilities of an attacker, and correspondingly the attack types and strategies, can be totally unknown. For instance, an attacker can arbitrarily combine and launch multiple attacks simultaneously, or it can launch a new, unknown type of attack. Hence, it may not always be possible to know the attacking strategies ahead of time and to accurately model the post-change case, and universal detectors, not requiring any attack model, are needed in general. Moreover, the (generalized) CUSUM algorithm has optimality properties in minimizing a least favorable (worst-case) detection delay subject to false alarm constraints \cite{Moustakides_86,Basseville93}. Since the worst-case detection delay is a pessimistic metric, it is, in general, possible to obtain algorithms that perform better than the (generalized) CUSUM algorithm.
Considering the pre-change and post-change cases as hidden states due to the unknown change-point, a quickest change detection problem can be formulated as a partially observable Markov decision process (POMDP) problem. For the problem of online attack/anomaly detection in the smart grid, the system operates under normal conditions in the pre-change state and, using the system model, the pre-change measurement pdf can be specified highly accurately. On the other hand, the post-change measurement pdf can take different unknown forms depending on the attacker's strategy. Furthermore, the transition probability between the hidden states is unknown in general. Hence, the exact model of the POMDP is unknown.
Reinforcement learning (RL) algorithms are known to be effective in controlling uncertain environments. Hence, the described POMDP problem can be effectively solved using RL. In particular, as a solution, either the underlying POMDP model can be learned and then a model-based RL algorithm for POMDPs \cite{Ross11,Finale10} can be used or a model-free RL algorithm \cite{Jaakkola94,Perkins02,LochSingh98,Lanzi00,Peshkin99} can be used without learning the underlying model. Since the model-based approach requires a two-step solution that is computationally more demanding and only an approximate model can be learned in general, we prefer to use the model-free RL approach.
Outlier detection schemes such as the Euclidean detector \cite{Manandhar14} and the cosine-similarity metric based detector \cite{Rawat15}
are universal in that they do not require any attack model. They mainly compute a dissimilarity metric between the actual meter measurements and the measurements predicted by the Kalman filter, and declare an attack/anomaly if the amount of dissimilarity exceeds a predefined threshold. However, such detectors do not consider the temporal relation between attacked/anomalous measurements and make sample-by-sample decisions. Hence, they are unable to distinguish instantaneous high-level random system noise from long-term (persistent) anomalies caused, e.g., by an unfriendly intervention in the system. Compared to outlier detection schemes, then, more reliable universal attack detection schemes are needed.
In this work, we consider the smart grid security problem from the defender's perspective and seek an effective detection scheme using RL techniques {(single-agent RL)}. Note that the problem can be considered from an attacker's perspective as well, where the objective would be to determine the attacking strategies leading to the maximum possible damage to the system. Such a problem is particularly useful in vulnerability analysis, i.e., to identify the worst possible damage an attacker may inflict on the system and accordingly to take the necessary precautions. In the literature, several studies investigate vulnerability analysis using RL; see, e.g., \cite{YChen18} for FDI attacks and \cite{JYan17} for sequential network topology attacks. We further note that the problem can also be considered from both the defender's and the attacker's perspectives simultaneously, which corresponds to a game-theoretic setting.
{The extension of single-agent RL to multiple agents is the multi-agent RL framework, which heavily involves game theory since, in this case, the optimal policies of the agents depend both on the environment and on the policies of the other agents. Moreover, stochastic games extend Markov decision processes to the multi-agent case, where the game is sequential and consists of multiple states, and both the transition from one state to another and the payoffs (rewards/costs) depend on the joint actions of all agents. Several RL-based solution approaches have been proposed for stochastic games; see, e.g., \cite{nowe2012,littman1994,claus1998,hu2003nash,weinberg2004}. Further, if the game (the underlying state of the environment, the actions and payoffs of the other agents, etc.) is partially observed, it is called a partially observable stochastic game, for which finding a solution is more difficult in general.}
\subsection{Contributions}
In this paper, we propose an online cyber-attack detection algorithm using the framework of model-free RL for POMDPs. The proposed algorithm is universal, i.e., it does not require attack models. This makes the proposed scheme widely applicable and also proactive, in the sense that new, unknown attack types can be detected. Since we follow a model-free RL approach, the defender learns a direct mapping from observations to actions (\emph{stop} or \emph{continue}) by trial and error. In the training phase, although it is possible to obtain/generate observation data for the pre-change case using the system model under normal operating conditions, it is generally difficult to obtain real attack data. For this reason, we follow a robust detection approach by training the defender with low-magnitude attacks that correspond to the worst-case scenarios from a defender's perspective, since such attacks are quite difficult to detect. The trained defender then becomes sensitive to slight deviations of meter measurements from normal system operation. The robust detection approach significantly limits the action space of an attacker as well: to avoid detection, an attacker can only exploit very low attack magnitudes, which are of little practical interest due to their minimal damage to the system. To the best of our knowledge, this work is the first attempt at online cyber-attack detection in the smart grid using RL techniques.
\subsection{Organization and Notation}
We introduce the system model and the state estimation mechanism in Sec.~\ref{sys_mod}. We present the problem formulation in Sec.~\ref{prob_form} and the proposed solution approach in Sec.~\ref{solutions}. We then illustrate the performance of the proposed RL-based detection scheme via extensive simulations in Sec.~\ref{numerical}. Finally, we conclude the paper in Sec.~\ref{conclusion}. Boldface letters denote vectors and matrices, all vectors are column vectors, and $\pmb{o}^\mathrm{T}$ denotes the transpose of $\pmb{o}$. $\mathrm{P}$ and $\mathrm{E}$ denote the probability and expectation operators, respectively. Table~\ref{table:symbols} summarizes the common symbols and parameters used in the paper.
\renewcommand{\arraystretch}{1.1}
\begin{table}[t]
\centering
\begin{tabular}{ | p{1cm} | l |}
\hline
Symbol & Meaning \\ \hline \hline
$\Gamma$ & Stopping time \\ \hline
$\tau$ & Change-point \\ \hline
$I$ & Number of quantization levels \\ \hline
$M$ & Window size \\ \hline
$Q(o,a)$ & $Q$-value corresponding to observation-action pair $(o,a)$ \\ \hline
$\alpha$ & Learning rate \\ \hline
$\epsilon$ & Exploration rate \\ \hline
$T$ & Maximum length of a learning episode \\ \hline
$E$ & Number of learning episodes \\
\hline
\end{tabular}
\caption{{Common symbols/parameters in the paper.}}
\label{table:symbols}
\end{table}
\renewcommand{\arraystretch}{1}
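To indicate how the symbols in Table~\ref{table:symbols} interact, the following is a generic tabular sketch of an $\epsilon$-greedy update over quantized observations. It is our illustration only; the actual detection algorithm is developed in Sec.~\ref{solutions}.
\begin{verbatim}
import numpy as np

def epsilon_greedy(Q, o, epsilon, rng):
    # Actions: 0 = continue, 1 = stop (declare an attack). Costs are
    # minimized, so the greedy action has the smallest Q-value.
    if rng.random() < epsilon:
        return int(rng.integers(Q.shape[1]))
    return int(Q[o].argmin())

def q_update(Q, o, a, cost, o_next, alpha, gamma=1.0):
    # Q(o,a) <- (1-alpha) Q(o,a) + alpha (cost + gamma min_a' Q(o',a'))
    Q[o, a] = (1 - alpha) * Q[o, a] \
              + alpha * (cost + gamma * Q[o_next].min())
\end{verbatim}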
\section{System Model and State Estimation} \label{sys_mod}
\subsection{System Model}
Suppose that there are $K$ meters in a power grid consisting of $N+1$ buses, where usually $K > N$ to provide the necessary measurement redundancy against noise \cite{Abur04}. One of the buses is considered the reference bus, and the system state at time $t$ is denoted by $\mathbf{x}_t = [x_{1,t}, \dots, x_{N,t}]^\mathrm{T}$, where $x_{n,t}$ denotes the phase angle at bus $n$ at time $t$. Let the measurement taken at meter $k$ at time $t$ be denoted by $y_{k,t}$ and the measurement vector by $\mathbf{y}_t = [y_{1,t}, \dots, y_{K,t}]^\mathrm{T}$. Based on the widely used linear DC model \cite{Abur04}, we model the smart grid with the following state-space equations:
\begin{gather} \label{eq:state_upd}
\mathbf{x}_{t} = \mathbf{A} \mathbf{x}_{t-1} + \mathbf{v}_t, \\ \label{eq:meas_model}
\mathbf{y}_t = \mathbf{H} \mathbf{x}_t + \mathbf{w}_t,
\end{gather}
where $\mathbf{A} \in \mathbb{R}^{N \times N}$ is the system (state transition) matrix, $\mathbf{H} \in \mathbb{R}^{K \times N}$ is the measurement matrix determined based on the network topology, $\mathbf{v}_t = [v_{1,t}, \dots, v_{N,t}]^\mathrm{T}$ is the process noise vector, and ${\mathbf{w}_t = [w_{1,t}, \dots, w_{K,t}]^\mathrm{T}}$ is the measurement noise vector. We assume that $\mathbf{v}_t$ and $\mathbf{w}_t$ are independent additive white Gaussian random processes where ${\mathbf{v}_t \sim \mathbf{\mathcal{N}}(\mathbf{0},\sigma_v^2 \, \mathbf{I}_N)}$, ${\mathbf{w}_t \sim \mathbf{\mathcal{N}}(\mathbf{0},\sigma_w^2 \, \mathbf{I}_K)}$, and $\mathbf{I}_K \in \mathbb{R}^{K \times K}$ is an identity matrix. Moreover, we assume that the system is observable, i.e., the observability matrix
\begin{equation}\nonumber
\mathbf{O} \triangleq \left[
\begin{smallmatrix}
\mathbf{H} \\
\mathbf{H} \mathbf{A} \\
\vdots \\
\mathbf{H} \mathbf{A}^{N-1}
\end{smallmatrix}
\right]
\end{equation}
has rank $N$.
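This condition can be verified numerically by stacking the blocks of $\mathbf{O}$ and testing its rank, as in the sketch below; the particular $\mathbf{A}$ and $\mathbf{H}$ are placeholders chosen for illustration.
\begin{verbatim}
import numpy as np

def is_observable(A: np.ndarray, H: np.ndarray) -> bool:
    # Stack H, HA, ..., HA^{N-1} and check that the rank equals N.
    N = A.shape[0]
    O = np.vstack([H @ np.linalg.matrix_power(A, i) for i in range(N)])
    return np.linalg.matrix_rank(O) == N

rng = np.random.default_rng(0)
N, K = 3, 5
A = np.eye(N)                    # e.g., a quasi-static state model
H = rng.standard_normal((K, N))  # placeholder measurement matrix
print(is_observable(A, H))
\end{verbatim}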
The system model given in \eqref{eq:state_upd} and \eqref{eq:meas_model} corresponds to normal system operation. In case of a cyber-attack, however, the measurement model in \eqref{eq:meas_model} no longer holds. For instance (see also the illustrative sketch following this list),
\begin{enumerate}
\item in case of an FDI attack launched at time $\tau$, the measurement model can be written as
\begin{equation}\nonumber
\mathbf{y}_t = \mathbf{H} \mathbf{x}_t + \mathbf{w}_t + \mathbf{b}_t \mbox{$1\!\!1$} \{t \geq \tau\},
\end{equation}
where $\mbox{$1\!\!1$}$ is an indicator function and $\mathbf{b}_t \triangleq [b_{1,t},\dots,b_{K,t}]^\mathrm{T}$ denotes the injected malicious data at time $t \geq \tau$, {where $b_{k,t}$ denotes the false datum injected into the $k$th meter at time $t$},
\item in case of a jamming attack with additive noise, the measurement model can be written as
\begin{equation}\nonumber
\mathbf{y}_t = \mathbf{H} \mathbf{x}_t + \mathbf{w}_t + \mathbf{u}_t \mbox{$1\!\!1$} \{t \geq \tau\},
\end{equation}
where $\mathbf{u}_t \triangleq [u_{1,t},\dots,u_{K,t}]^\mathrm{T}$ denotes the random noise realization at time $t \geq \tau$ {and $u_{k,t}$ denotes the jamming noise corrupting the $k$th meter at time $t$},
\item {in case of a hybrid FDI/jamming attack \cite{Necip18Arxiv}, the meter measurements take the following form:
\begin{equation}\nonumber
\mathbf{y}_t = \mathbf{H} \mathbf{x}_t + \mathbf{w}_t + (\mathbf{b}_t + \mathbf{u}_t) \mbox{$1\!\!1$} \{t \geq \tau\},
\end{equation}}
\item in case of a DoS attack, meter measurements can be partially unavailable to the system controller. The measurement model can then be written as
\begin{equation}\nonumber
\mathbf{y}_t = \mathbf{D}_t (\mathbf{H} \mathbf{x}_t + \mathbf{w}_t),
\end{equation}
where $\mathbf{D}_t = \mathrm{diag}(d_{1,t}, \dots, d_{K,t})$ is a diagonal matrix consisting of $0$s and $1$s. Particularly, if $y_{k,t}$ is available, then $d_{k,t} = 1$, otherwise $d_{k,t} = 0$. Note that $\mathbf{D}_t = \mathbf{I}_K$ for $t<\tau$,
\item {in case of a network topology attack, the measurement matrix changes. Denoting the measurement matrix under topology attack at time $t\geq\tau$ by $\bar{\mathbf{H}}_t$, we have
\begin{equation}\nonumber
\mathbf{y}_t =
\begin{cases}
\mathbf{H} \mathbf{x}_t + \mathbf{w}_t, & \mbox{if } t<\tau \\
\bar{\mathbf{H}}_t \mathbf{x}_t + \mathbf{w}_t, & \mbox{if } t \geq \tau,
\end{cases}
\end{equation}}
\item {in case of a mixed topology and hybrid FDI/jamming attack, the measurement model can be written as follows:
\begin{equation}\nonumber
\mathbf{y}_t =
\begin{cases}
\mathbf{H} \mathbf{x}_t + \mathbf{w}_t, & \mbox{if } t<\tau \\
\bar{\mathbf{H}}_t \mathbf{x}_t + \mathbf{w}_t + \mathbf{b}_t + \mathbf{u}_t, & \mbox{if } t \geq \tau.
\end{cases}
\end{equation}}
\end{enumerate}
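For concreteness, a minimal simulation sketch of some of the attack models enumerated above is given below. This is our illustration rather than part of the model; the magnitudes, the DoS drop probability, and whether $\sigma_{k,t}$ is read as a variance or a standard deviation are placeholder choices:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def measure(H, x, sigma_w, t, tau, attack="none"):
    # Baseline model y = Hx + w, modified for t >= tau.
    K = H.shape[0]
    y = H @ x + sigma_w * rng.standard_normal(K)
    if t < tau:
        return y
    if attack == "fdi":        # additive injected data b_t
        y += rng.uniform(-0.07, 0.07, K)
    elif attack == "jamming":  # additive random noise u_t
        y += 1e-3 * rng.standard_normal(K)
    elif attack == "hybrid":   # b_t + u_t
        y += (rng.uniform(-0.05, 0.05, K)
              + 1e-3 * rng.standard_normal(K))
    elif attack == "dos":      # D_t y: each meter dropped w.p. 0.2
        y *= (rng.random(K) > 0.2).astype(float)
    return y
\end{verbatim}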
\subsection{State Estimation}
Since the smart grid is regulated based on estimated system states, state estimation is a fundamental task in the smart grid, which is conventionally performed using static least squares (LS) estimators \cite{Liu09,Bobba10,Esmalifalak11}. However, in practice, the smart grid is a highly dynamic system due to time-varying load and power generation \cite{Tan17}. Furthermore, time-varying cyber-attacks can be designed and performed by the adversaries. Hence, dynamic system modeling as in \eqref{eq:state_upd} and \eqref{eq:meas_model}, and correspondingly using a dynamic state estimator, can be quite useful for real-time operation and security of the smart grid \cite{Necip18,Necip18Arxiv}.
For a discrete-time linear dynamic system, if the noise terms are Gaussian, the Kalman filter is the optimal linear estimator in minimizing the mean squared state estimation error \cite{Kalman_60}. Note that for the Kalman filter to work correctly, the system needs to be observable. The Kalman filter is an online estimator consisting of prediction and measurement update steps at each iteration. Denoting the state estimates at time $t$ with $\hat{\mathbf{x}}_{t|t'}$ where $t' = t-1$ and $t' = t$ for the prediction and measurement update steps, respectively, the Kalman filter equations at time $t$ can be written as follows:
\emph{Prediction}:
\begin{gather} \nonumber
\hat{\mathbf{x}}_{t|t-1} = \mathbf{A} \hat{\mathbf{x}}_{t-1|t-1}, \\ \label{eq:pred}
\mathbf{F}_{t|t-1} = \mathbf{A} \mathbf{F}_{t-1|t-1} \mathbf{A}^\mathrm{T} + \sigma_v^2 \, \mathbf{I}_N,
\end{gather}
\emph{Measurement update}:
\begin{gather} \nonumber
\mathbf{G}_{t} = \mathbf{F}_{t|t-1} \mathbf{H}^\mathrm{T} (\mathbf{H} \mathbf{F}_{t|t-1} \mathbf{H}^\mathrm{T} + \sigma_w^2 \, \mathbf{I}_K)^{-1}, \\ \nonumber
\hat{\mathbf{x}}_{t|t} = \hat{\mathbf{x}}_{t|t-1} + \mathbf{G}_{t} (\mathbf{y}_t - \mathbf{H} \hat{\mathbf{x}}_{t|t-1}), \\ \label{eq:meas_upd_fdata}
\mathbf{F}_{t|t} = \mathbf{F}_{t|t-1} - \mathbf{G}_{t} \mathbf{H} \mathbf{F}_{t|t-1},
\end{gather}
where $\mathbf{F}_{t|t-1}$ and $\mathbf{F}_{t|t}$ denote the estimates of the state covariance matrix based on the measurements up to $t-1$ and $t$, respectively. Moreover, $\mathbf{G}_{t}$ is the Kalman gain matrix at time $t$.
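In code, one prediction/measurement-update cycle of the above equations could look as follows (a sketch; the explicit matrix inversion is used for clarity rather than numerical robustness):
\begin{verbatim}
import numpy as np

def kalman_step(x_hat, F, y, A, H, sigma_v2, sigma_w2):
    N, K = A.shape[0], H.shape[0]
    # Prediction
    x_pred = A @ x_hat
    F_pred = A @ F @ A.T + sigma_v2 * np.eye(N)
    # Measurement update
    S = H @ F_pred @ H.T + sigma_w2 * np.eye(K)  # innovation cov.
    G = F_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + G @ (y - H @ x_pred)
    F_new = F_pred - G @ H @ F_pred
    return x_new, F_new
\end{verbatim}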
{We next demonstrate the effect of cyber-attacks on the state estimation mechanism via an illustrative example. We consider a random FDI attack with various magnitude/intensity levels and show how the mean squared state estimation error of the Kalman filter changes when FDI attacks are launched to the system. We assume that the attacks are launched at $\tau = 100$, i.e., the system is operated under normal (non-anomalous) conditions up to time $100$ and under attacking conditions afterwards. Three attack magnitude levels are considered:
\begin{itemize}
\item Level 1: $b_{k,t} \sim \mathcal{U}[-0.04,0.04]$, $\forall k \in \{1,\dots,K\}$, $\forall t \geq \tau$,
\item Level 2: $b_{k,t} \sim \mathcal{U}[-0.07,0.07]$, $\forall k \in \{1,\dots,K\}$, $\forall t \geq \tau$,
\item Level 3: $b_{k,t} \sim \mathcal{U}[-0.1,0.1]$, $\forall k \in \{1,\dots,K\}$, $\forall t \geq \tau$,
\end{itemize}
where $\mathcal{U}[\zeta_1,\zeta_2]$ denotes a uniform random variable in the range of $[\zeta_1,\zeta_2]$. The corresponding mean squared error (MSE) versus time curves are presented in Fig.~\ref{fig:MSE_vs_Time}. We observe that in case of cyber-attacks, the state estimates are deviated from the actual system states where the amount of deviation increases as the attack magnitudes get larger.}
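The experiment can be reproduced along the following lines, reusing the \texttt{measure} and \texttt{kalman\_step} sketches above. The toy dimensions and ground-truth trajectory below are ours; only the noise variances match the ones used later in Sec.~\ref{numerical}:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)
N, tau = 2, 100
A = np.eye(N)
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0]])
sigma_v2, sigma_w2 = 1e-4, 2e-4
x, x_hat, F, mse = np.zeros(N), np.zeros(N), np.eye(N), []
for t in range(1, 201):
    x = A @ x + np.sqrt(sigma_v2) * rng.standard_normal(N)
    y = measure(H, x, np.sqrt(sigma_w2), t, tau, attack="fdi")
    x_hat, F = kalman_step(x_hat, F, y, A, H, sigma_v2, sigma_w2)
    mse.append(np.sum((x - x_hat) ** 2))
# mse[t] grows after t = tau, as in the figure.
\end{verbatim}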
\begin{figure}[t]
\center
\includegraphics[width=77mm]{MSE_vs_Time.eps}
\caption{{Mean squared state estimation error vs. time where random FDI attacks with various magnitude levels are launched at time $\tau = 100$.}}
\label{fig:MSE_vs_Time}
\end{figure}
\section{Problem Formulation} \label{prob_form}
Before we introduce our problem formulation, we briefly explain a POMDP setting as follows. Given an agent and an environment, a discrete-time POMDP is defined by the seven-tuple $(\mathcal{S}, \mathcal{A}, \mathcal{T}, \mathcal{R}, \mathcal{O}, \mathcal{G}, \gamma)$ where $\mathcal{S}$ denotes the set of (hidden) states of the environment, $\mathcal{A}$ denotes the set of actions of the agent, $\mathcal{T}$ denotes the set of conditional transition probabilities between the states, $\mathcal{R}: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ denotes the reward function that maps the state-action pairs to rewards, $\mathcal{O}$ denotes the set of observations of the agent, $\mathcal{G}$ denotes the set of conditional observation probabilities, and $\gamma \in [0, 1]$ denotes a discount factor that indicates how much present rewards are preferred over the future rewards.
At each time $t$, the environment is in a particular hidden state $s_t \in \mathcal{S}$. Obtaining an observation $o_t \in \mathcal{O}$ depending on the current state of the environment with the probability $\mathcal{G}(o_t|s_t)$, the agent takes an action $a_t \in \mathcal{A}$ and receives a reward $r_t = \mathcal{R}(s_t,a_t)$ from the environment based on its action and the current state of the environment. At the same time, the environment makes a transition to the next state $s_{t+1}$ with the probability $\mathcal{T}(s_{t+1}|s_t,a_t)$. The process is repeated until a terminal state is reached. In this process, the goal of the agent is to determine an optimal policy $\pi: \mathcal{O} \rightarrow \mathcal{A}$ that maps observations to actions and maximizes the agent's expected total discounted reward, i.e., $\mathrm{E} \big[ \sum_{t = 0}^{\infty} \gamma^{t} r_{t} \big]$. Equivalently, if an agent receives costs instead of rewards from the environment, then the goal is to minimize the expected total discounted cost. Considering the latter, the POMDP problem can be written as follows:
\begin{equation}\label{eq:POMDP}
\min_{\pi: \, \mathcal{O} \rightarrow \mathcal{A}}~ \mathrm{E} \Big[ \sum_{t = 0}^{\infty} \gamma^{t} r_{t} \Big].
\end{equation}
Next, we explain the online attack detection problem in a POMDP setting. We assume that at an unknown time $\tau$, a cyber-attack is launched to the system and our aim is to detect the attack as quickly as possible after it occurs, where the attacker's capabilities/strategies are completely unknown. This defines a quickest change detection problem where the aim is to minimize the average detection delay as well as the false alarm rate. This problem can, in fact, be expressed as a POMDP problem (see Fig.~\ref{fig:state_machine}). In particular, due to the unknown attack launch time $\tau$, there are two hidden states: \emph{pre-attack} and \emph{post-attack}. At each time $t$, after obtaining the measurement vector $\mathbf{y}_t$, two actions are available for the agent (defender): \emph{stop} and declare an attack or \emph{continue} to have further measurements. We assume that whenever the action \emph{stop} is chosen, the system moves into a \emph{terminal} state, and always stays there afterwards.
Furthermore, although the conditional observation probability for the \emph{pre-attack} state can be inferred based on the system model under normal operating conditions, since the attacking strategies are unknown, the conditional observation probability for the \emph{post-attack} state is assumed to be totally unknown. Moreover, due to the unknown attack launch time $\tau$, state transition probability between the \emph{pre-attack} and the \emph{post-attack} states is unknown.
\begin{figure}[t]
\vspace{-0.4cm}
\center
\includegraphics[width=64mm]{state_machine_diagram.eps}
\caption{State-machine diagram for the considered POMDP setting. The hidden states and the (hidden) transition between them happening at time $t = \tau$ are illustrated with the dashed circles and the dashed line, respectively. The defender receives costs ($r$) depending on its actions and the underlying state of the environment. Whenever the defender chooses the action \emph{stop}, the system moves into a \emph{terminal} state and the defender receives no further cost.}
\label{fig:state_machine}
\end{figure}
Since our aim is to minimize the detection delays and the false alarm rate, both the false alarm and the detection delay events should be associated with some costs. Let the relative cost of a detection delay compared to a false alarm event be $c>0$. Then, if the true underlying state is \emph{pre-attack} and the action \emph{stop} is chosen, a false alarm occurs and the defender receives a cost of $1$. On the other hand, if the underlying state is \emph{post-attack} and the action \emph{continue} is chosen, then the defender receives a cost of $c$ due to the detection delay. For all other (hidden) state-action pairs, the cost is assumed to be zero. Also, once the action \emph{stop} is chosen, the defender does not receive any further costs while staying in the \emph{terminal} state. The objective of the defender is to minimize its expected total cost by properly choosing its actions. Particularly, based on its observations, the defender needs to determine the stopping time at which an attack is declared.
Let $\Gamma$ denote the stopping time chosen by the defender. Moreover, let $\mathrm{P}_k$ denote the probability measure if the attack is launched at time $k$, i.e., $\tau = k$, and let $\mathrm{E}_k$ denote the corresponding expectation. Note that since the attacking strategies are unknown, $\mathrm{P}_k$ is assumed to be unknown. For the considered online attack detection problem, we can derive the expected total discounted cost as follows:
\begin{align} \nonumber
\mathrm{E} \Big[ \sum_{t = 0}^{\infty} \gamma^{t} r_{t} \Big] &= \mathrm{E}_\tau \Big[ \mbox{$1\!\!1$}\{\Gamma < \tau\} + \sum_{t = \tau}^{\Gamma} c \Big] \\ \nonumber
&= \mathrm{E}_\tau \big[ \mbox{$1\!\!1$}\{\Gamma < \tau\} + c \, (\Gamma-\tau)^+ \big] \\ \label{eq:obj_func}
&= \mathrm{P}_\tau (\{\Gamma < \tau\}) + c \, \mathrm{E}_\tau \big[(\Gamma-\tau)^+\big],
\end{align}
where $\gamma = 1$ is chosen since the present and future costs are equally weighted in our problem, $\{\Gamma < \tau\}$ is a false alarm event that is penalized with a cost of $1$, and $\mathrm{E}_\tau \big[(\Gamma-\tau)^+\big]$ is the average detection delay where each detection delay is penalized with a cost of $c$ and $(\cdot)^+ = \max(\cdot, 0)$.
Based on \eqref{eq:POMDP} and \eqref{eq:obj_func}, the online attack detection problem can be written as follows:
\begin{gather} \label{eq:opt_prob1}
\min_{\Gamma}~ \mathrm{P}_\tau (\{\Gamma < \tau\}) + c \, \mathrm{E}_\tau \big[(\Gamma-\tau)^+\big].
\end{gather}
Since $c$ corresponds to the relative cost between the false alarm and the detection delay events, by varying $c$ and solving the corresponding problem in \eqref{eq:opt_prob1}, a tradeoff curve between average detection delay and false alarm rate can be obtained. Moreover, $c < 1$ can be chosen to prevent frequent false alarms.
Since the exact POMDP model is unknown due to the unknown attack launch time $\tau$ and the unknown attacking strategies, and since RL algorithms are known to be effective over uncertain environments, we follow a model-free RL approach to obtain a solution to \eqref{eq:opt_prob1}. Then, a direct mapping from observations to actions, i.e., the stopping time $\Gamma$, needs to be learned. Note that the optimal action is \emph{continue} if the underlying state is \emph{pre-attack} and \emph{stop} if the underlying state is \emph{post-attack}. Hence, to determine the optimal actions, the underlying state needs to be inferred using observations, and the observation signal should be sufficiently informative to reduce the uncertainty about the underlying state. As described in Sec.~\ref{sys_mod}, the defender observes the measurements $\mathbf{y}_t$ at each time $t$. The simplest approach would be to form the observation space directly from the measurement vector $\mathbf{y}_t$, but we would like to process the measurements and form the observation space with a signal related to the deviation of the system from its normal operation.
Furthermore, it is, in general, possible to obtain identical observations in the \emph{pre-attack} and the \emph{post-attack} states. This is called perceptual aliasing and prevents us from making a good inference about the underlying state by looking at the observation at a single time only. We further note that in our problem, deciding on an attack solely based on a single observation corresponds to an outlier detection scheme, for which more practical detectors that do not require a learning phase are available, see e.g., \cite{Manandhar14,Rawat15}. However, we are particularly interested in detecting sudden and persistent attacks/anomalies that more likely happen due to an unfriendly intervention to the system rather than random disturbances due to high-level system noise realizations.
Since different states require different optimal actions, the ambiguity on the underlying state should be further reduced with additional information derived from the history of observations. In fact, there may be cases where the entire history of observations is needed to determine the optimal solution in a POMDP problem \cite{Meuleau97}. However, due to computational limitations, only a finite memory can be used in practice and an approximately optimal solution can be obtained. A simple approach is to use a finite-size sliding window of observations as a memory and map the most recent history window to an action, as described in \cite{LochSingh98}. This approach is particularly suitable for our problem as well since we assume persistent attacks/anomalies that happen at an unknown point of time and continue thereafter. That is, only the observations obtained after an attack are significant from the attack detection perspective.
Let the function that processes a finite history of measurements and produces the observation signal be denoted with $f(\cdot)$ so that the observation signal at time $t$ is $o_t = f(\{\mathbf{y}_t\})$. Then, at each time, the defender observes $f(\{\mathbf{y}_t\})$ and decides on the stopping time $\Gamma$, as illustrated in Fig.~\ref{fig:prob_form}. The aim of the defender is to obtain a solution to \eqref{eq:opt_prob1} by using an RL algorithm, as detailed in the subsequent section.
\begin{figure}[t]
\vspace{-0.4cm}
\center
\includegraphics[width=88mm]{prob_form.eps}
\caption{A graphical description of the online attack detection problem in the smart grid. The measurements $\{\mathbf{y}_t\}$ are collected through smart meters and processed to obtain $o_t = f(\{\mathbf{y}_t\})$. The defender observes $f(\{\mathbf{y}_t\})$ at each time $t$ and decides on the attack declaration time $\Gamma$.}
\label{fig:prob_form}
\end{figure}
\section{Solution Approach} \label{solutions}
Firstly, we explain our methodology to obtain the observation signal $o_t = f(\{\mathbf{y}_t\})$. Note that the pdf of meter measurements in the \emph{pre-attack} state can be inferred using the baseline measurement model in \eqref{eq:meas_model} and the state estimates provided by the Kalman filter. In particular, the pdf of the measurements under normal operating conditions can be estimated as follows:
\begin{gather} \nonumber
\mathbf{y}_t \sim \mathbf{\mathcal{N}}(\mathbf{H} \hat{\mathbf{x}}_{t|t},\sigma_w^2 \, \mathbf{I}_K).
\end{gather}
The likelihood of measurements based on the baseline density estimate, denoted with $L(\mathbf{y}_t)$, can then be computed as follows:
\begin{align} \nonumber
L(\mathbf{y}_t) &= (2 \pi \sigma_w^2)^{-\frac{K}{2}} \exp \Big(\frac{-1}{2 \sigma_w^2} (\mathbf{y}_t - \mathbf{H} \hat{\mathbf{x}}_{t|t})^\mathrm{T} (\mathbf{y}_t - \mathbf{H} \hat{\mathbf{x}}_{t|t}) \Big) \\ \nonumber
&= (2 \pi \sigma_w^2)^{-\frac{K}{2}} \exp \Big(\frac{-1}{2 \sigma_w^2} \eta_t \Big),
\end{align}
where
\begin{gather} \label{eq:eta_t}
\eta_t \triangleq (\mathbf{y}_t - \mathbf{H} \hat{\mathbf{x}}_{t|t})^\mathrm{T} (\mathbf{y}_t - \mathbf{H} \hat{\mathbf{x}}_{t|t})
\end{gather}
is the estimate of the negative log-scaled likelihood.
In case the system is operated under normal conditions, the likelihood $L(\mathbf{y}_t)$ is expected to be high. Equivalently, small (close to zero) values of $\eta_t$ may indicate the normal system operation. On the other hand, in case of an attack/anomaly, the system deviates from normal operating conditions and hence the likelihood $L(\mathbf{y}_t)$ is expected to decrease in such cases. Then, persistent high values of $\eta_t$ over a time period may indicate an attack/anomaly. Hence, $\eta_t$ may help to reduce the uncertainty about the underlying state to some extent.
However, since $\eta_t$ can take any nonnegative value, the observation space is continuous and hence learning a mapping from each possible observation to an action is computationally infeasible. To reduce the computational complexity in such continuous spaces, we can quantize the observations. We then partition the observation space into $I$ mutually exclusive and disjoint intervals using the quantization thresholds $\beta_0 = 0 < \beta_1 < \dots < \beta_{I-1} < \beta_I = \infty$ so that if $\beta_{i-1} \leq \eta_t < \beta_{i}$, $i \in \{1,\dots,I\}$, the observation at time $t$ is represented with $\theta_i$. Then, possible observations at any given time are $\theta_1, \dots, \theta_I$. Since $\theta_i$'s are representations of the quantization levels, each $\theta_i$ needs to be assigned a different value.
Furthermore, as explained before, although $\eta_t$ may be useful to infer the underlying state at time $t$, it is possible to obtain identical observations in the \emph{pre-attack} and \emph{post-attack} states. For this reason, we propose to use a finite history of observations. Let the size of the sliding observation window be $M$ so that there are $I^M$ possible observation windows and the sliding window at time $t$ consists of the quantized versions of ${\{\eta_j: t-M+1 \leq j \leq t\}}$. Henceforth, by an observation $o$, we refer to an observation window so that the observation space $\mathcal{O}$ consists of all possible observation windows. For instance, if $I = M = 2$, then $\mathcal{O} = \{ [\theta_1, \theta_1], [\theta_1, \theta_2], [\theta_2, \theta_1], [\theta_2, \theta_2] \}$.
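Putting the pieces together, the observation signal can be computed as sketched below (ours; the integer encoding of the levels $\theta_i$ is an implementation choice, and the threshold values match those used later in Sec.~\ref{numerical}):
\begin{verbatim}
import numpy as np
from collections import deque

def eta(y, H, x_hat):
    # Negative log-scaled likelihood estimate.
    r = y - H @ x_hat
    return float(r @ r)

def quantize(e, thresholds):
    # Level in {0, ..., I-1} given beta_1 < ... < beta_{I-1}.
    return int(np.searchsorted(thresholds, e, side="right"))

betas = [0.95e-2, 1.05e-2, 1.15e-2]     # I = 4 levels
M = 4
window = deque([0] * M, maxlen=M)       # sliding observation window
window.append(quantize(1.2e-2, betas))  # -> level 3 (top interval)
\end{verbatim}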
For each possible observation-action pair $(o,a)$, we propose to learn a $Q(o,a)$ value, i.e., the expected future cost, using an RL algorithm where all $Q(o,a)$ values are stored in a $Q$-table of size $I^M \times 2$. After learning the $Q$-table, the policy of the defender will be choosing the action $a$ with the minimum $Q(o,a)$ for each observation $o$. In general, increasing $I$ and $M$ may improve the learning performance but at the same time results in a larger $Q$-table, which would require increasing the number of training episodes and hence the computational complexity of the learning phase. Hence, $I$ and $M$ should be chosen considering the expected tradeoff between performance and computational complexity.
\begin{algorithm}[t]\small
\caption{\small Learning Phase -- SARSA Algorithm}
\label{alg:training}
\baselineskip=0.37cm
\begin{algorithmic}[1]
\STATE Initialize $Q(o,a)$ arbitrarily, $\forall o \in \mathcal{O}$ and $\forall a \in \mathcal{A}$.
\FOR {$e = 1:E$}
\STATE $t \gets 0$
\STATE $s \gets$ \emph{pre-attack}
\STATE Choose an initial $o$ based on the \emph{pre-attack} state and choose the initial $a = \emph{continue}$.
\WHILE {$s \neq$ \emph{terminal} and $t < T$}
\STATE $t \gets t+1$
\IF {$a =$ \emph{stop}}
\STATE $s \gets$ \emph{terminal}
\STATE $r \gets \mbox{$1\!\!1$}\{t < \tau\}$
\STATE $Q(o,a) \gets Q(o,a) + \alpha \, (r - Q(o,a))$
\ELSIF {$a =$ \emph{continue}}
\IF {$t \geq \tau$}
\STATE $r \gets c$
\STATE $s \gets$ \emph{post-attack}
\ELSE
\STATE $r \gets 0$
\ENDIF
\STATE Collect the measurements $\mathbf{y}_t$.
\STATE Employ the Kalman filter using \eqref{eq:pred} and \eqref{eq:meas_upd_fdata}.
\STATE Compute $\eta_t$ using \eqref{eq:eta_t} and quantize it to obtain $\theta_i$ if $\beta_{i-1} \leq \eta_t < \beta_{i}$, $i \in \{1,\dots,I\}$.
\STATE Update the sliding observation window $o$ with the most recent entry $\theta_i$ and obtain $o'$.
\STATE Choose action $a'$ from $o'$ using the $\epsilon$-greedy policy based on the $Q$-table (that is being learned).
\STATE $Q(o,a) \gets Q(o,a) + \alpha \, (r + Q(o',a') - Q(o,a))$
\STATE $o \gets o'$, $a \gets a'$
\ENDIF
\ENDWHILE
\ENDFOR
\STATE Output: $Q$-table, i.e., $Q(o,a)$, $\forall o \in \mathcal{O}$ and $\forall a \in \mathcal{A}$.
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[t]\small
\caption{\small Online Attack Detection}
\label{alg:test}
\baselineskip=0.37cm
\begin{algorithmic}[1]
\STATE Input: $Q$-table learned in Algorithm \ref{alg:training}.
\STATE Choose an initial $o$ based on the \emph{pre-attack} state and choose the initial $a = \emph{continue}$.
\STATE $t \gets 0$
\WHILE {$a \neq$ \emph{stop}}
\STATE $t \gets t+1$
\STATE Collect the measurements $\mathbf{y}_t$.
\STATE Determine the new $o$ as in the lines 20--22 of Algorithm 1.
\STATE $a \gets {\arg \min}_{a} Q(o,a)$.
\ENDWHILE
\STATE Declare an attack and terminate the procedure.
\end{algorithmic}
\end{algorithm}
The considered RL-based detection scheme consists of learning and online detection phases. In the literature, SARSA, a model-free RL control algorithm \cite{Sutton98reinforcement}, was numerically shown to perform well in model-free POMDP settings \cite{Peshkin99}. Hence, in the learning phase, the defender is trained with many episodes of experience using the SARSA algorithm and a $Q$-table is learned by the defender. For training, a simulation environment is created and during the training procedure, at each time, the defender takes an action based on its observation and receives a cost in return for its action from the simulation environment, as illustrated in Fig.~\ref{fig:RL}. Based on this experience, the defender updates and learns a $Q$-table. Then, in the online detection phase, based on the observations, the action with the lowest expected future cost ($Q$ value) is chosen at each time using the previously learned $Q$-table. The online detection phase continues until the action \emph{stop} is chosen by the defender. Whenever \emph{stop} is chosen, an attack is declared and the process is terminated.
\begin{figure}[t]
\vspace{-0.4cm}
\center
\includegraphics[width=64mm]{RL.eps}
\caption{An illustration of the interaction between the defender and the simulation environment during the learning procedure. The environment provides an observation $o$ based on its internal state $s$; the agent chooses an action $a$ based on its observation and receives a cost $r$ from the environment in return for its action. Based on this experience, the defender updates $Q(o,a)$. This process is repeated many times during the learning procedure.}
\label{fig:RL}
\end{figure}
Note that after declaring an attack, whenever the system is recovered and returned back to the normal operating conditions, the online detection phase can be restarted. That is, once a defender is trained, no further training is needed. We summarize the learning and the online detection stages in Algorithms \ref{alg:training} and \ref{alg:test}, respectively. In Algorithm~\ref{alg:training}, $E$ denotes the number of learning episodes, $T$ denotes the maximum length of a learning episode, $\alpha$ is the learning rate, and $\epsilon$ is the exploration rate, where the $\epsilon$-greedy policy chooses the action with the minimum $Q$ value with probability $1-\epsilon$ and the other action (for exploration purposes during the learning process) with probability $\epsilon$.
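A compact sketch of the core updates of Algorithm~\ref{alg:training} is given below. This is our Python rendering, not the authors' implementation; the 0/1 action encoding and the window-to-row encoding are implementation choices:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(2)
I, M, alpha, eps = 4, 4, 0.1, 0.1
Q = np.zeros((I ** M, 2))   # rows: windows; cols: continue=0, stop=1

def window_to_index(window):
    idx = 0
    for q in window:          # interpret the window as base-I digits
        idx = idx * I + q
    return idx

def eps_greedy(o):
    if rng.random() < eps:
        return int(rng.integers(2))   # explore
    return int(np.argmin(Q[o]))       # exploit: min expected cost

def sarsa_update(o, a, r, o_next=None, a_next=None):
    # gamma = 1; for the terminal 'stop' action the target is just r.
    target = r if o_next is None else r + Q[o_next, a_next]
    Q[o, a] += alpha * (target - Q[o, a])
\end{verbatim}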
{Since RL is an iterative procedure, the same actions are repeated at each iteration (learning episode). The time complexity of an RL algorithm can then be considered as the time complexity of a single iteration \cite{Kokar}. As the SARSA algorithm performs one update on the $Q$-table at a time and the maximum time limit for a learning episode is $T$, the time complexity of Algorithm 1 is $O(T)$. Moreover, the overall complexity of the learning procedure is $O(T E)$, as $E$ is the number of learning episodes. Notice that the time complexity does not depend on the size of the action and observation spaces. On the other hand, as $I$ and/or $M$ increase, a larger $Q$-table needs to be learned, which requires increasing $E$ for better learning. Furthermore, the space complexity (memory cost) of Algorithm 1 is $M + 2 \, I^M$ due to the sliding observation window of size $M$ and the $Q$-table of size $I^M\times2$. Note that the space complexity is fixed over time. During the learning procedure, based on the smart grid model and some attack models (used to obtain low-magnitude attacks that correspond to small deviations from the normal system operation), we can obtain the measurement data online and the defender is trained with the observed data stream. Hence, the learning phase does not require storage of large amounts of training data, as only a sliding observation window of size $M$ needs to be stored at each time.}
{In Algorithm 2, at each time, the observation $o$ is determined and using the $Q$-table (learned in Algorithm 1), the corresponding action $a$ with the minimum cost is chosen. Hence, the complexity at a time is $O(1)$. This process is repeated until the action \emph{stop} is chosen at the stopping time $\Gamma$. Furthermore, similarly to Algorithm 1, the space complexity of Algorithm 2 is $M + 2 \, I^M$.}
{\textit{Remark 1:} Since our solution approach is model-free, i.e., it is not particularly designed for specific types of attacks, the proposed detector does not distinguish between an attack and other types of persistent anomalies such as network topology faults. In fact, the proposed algorithm can detect any attack/anomaly as long as effect of such attack/anomaly on the system is at a distinguishable level, i.e., the estimated system states are at least slightly deviated from the actual system states. On the other hand, since we train the agent (defender) with low-magnitude attacks with some known attack types (to create the effect of small deviations from actual system operation in the \emph{post-attack} state) and we test the proposed detector against various cyber-attacks in the numerical section (see Sec.~\ref{numerical}), we lay the main emphasis on online attack detection in this study. In general, the proposed detector can be considered as an online anomaly detection algorithm.}
{\textit{Remark 2:} The proposed solution scheme can be applied in a distributed smart grid system, where the learning and detection tasks are still performed in a single center but the meter measurements are obtained in a distributed manner. We briefly explain this setup as follows:
\begin{itemize}
\item In the wide-area monitoring model of smart grids, there are several local control centers and a global control center. Each local center collects and processes measurements of a set of smart meters in its neighborhood, and communicates with the global center and with the neighboring local centers.
\item The system state is estimated in a distributed manner, e.g., using the distributed Kalman filter designed for wide-area smart grids in \cite{Necip18}.
\item Let $\mathbf{h}_k^T \in \mathbb{R}^{N}$ be the $k$th row of the measurement matrix, i.e., ${\mathbf{H}^T = [\mathbf{h}_1, \dots, \mathbf{h}_K]}$. Then, estimate of the negative log-scaled likelihood, $\eta_t$, can be written as follows (see \eqref{eq:eta_t}):
\begin{gather} \label{eq:eta_t_v2}
\eta_t = \sum_{k=1}^{K} (y_{k,t} - \mathbf{h}_k^T \hat{\mathbf{x}}_{t|t})^2.
\end{gather}
By employing the distributed Kalman filter, the local centers can estimate the system state at each time $t$. Then, they can compute the term $(y_{k,t} - \mathbf{h}_k^T \hat{\mathbf{x}}_{t|t})^2$ for the meters in their neighborhood. Let the number of local centers be $R$ and the set of meters in the neighborhood of the $r$th local center be denoted with $\mathcal{S}_r$. Then, $\eta_t$ in \eqref{eq:eta_t_v2} can be rewritten as follows:
\begin{align} \nonumber
\eta_t &= \sum_{r=1}^{R} \underbrace{\sum_{k \in \mathcal{S}_r} (y_{k,t} - \mathbf{h}_k^T \hat{\mathbf{x}}_{t|t})^2}_{\eta_{t,r}} \\ \nonumber
&= \sum_{r=1}^{R} \eta_{t,r}.
\end{align}
\item In the distributed implementation, each local center can compute $\eta_{t,r}$ and report it to the global center, which then sums $\{\eta_{t,r}, r = 1,2,\dots,R\}$ and obtains $\eta_t$ (see the sketch after this list).
\item The learning and detection tasks (Algorithms 1 and 2) are then performed at the global center in the same way as explained above.
\end{itemize}}
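{The aggregation step above can be sketched as follows (ours; the partition of the meters is a hypothetical toy example, and the partial statistics sum exactly to the centralized $\eta_t$):
\begin{verbatim}
import numpy as np

def local_eta(y_r, H_r, x_hat):
    # eta_{t,r}: partial sum over the meters of local center r.
    res = y_r - H_r @ x_hat
    return float(res @ res)

# Hypothetical K = 4 meters split between R = 2 local centers:
H = np.array([[1., 0.], [0., 1.], [1., 1.], [1., -1.]])
x_hat = np.array([0.1, -0.2])
y = H @ x_hat + 0.01 * np.ones(4)
eta_parts = [local_eta(y[:2], H[:2], x_hat),
             local_eta(y[2:], H[2:], x_hat)]
eta_t = sum(eta_parts)   # the global center sums the R reports
\end{verbatim}}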
\section{Simulation Results} \label{numerical}
\subsection{Simulation Setup and Parameters}
Simulations are performed on an IEEE-14 bus power system that consists of $N+1 = 14$ buses and $K = 23$ smart meters. The initial state variables (phase angles) are determined using the DC optimal power flow algorithm for case-14 in MATPOWER \cite{Zimmerman11}. The system matrix $\mathbf{A}$ is chosen to be an identity matrix and the measurement matrix $\mathbf{H}$ is determined based on the IEEE-14 power system. The noise variances for the normal system operation are chosen as $\sigma_v^2 = 10^{-4}$ and $\sigma_w^2 = 2 \times 10^{-4}$.
For the proposed RL-based online attack detection scheme, the number of quantization levels is chosen as $I = 4$ and the quantization thresholds are chosen as $\beta_1 = 0.95\times10^{-2}$, $\beta_2 = 1.05\times10^{-2}$, and $\beta_3 = 1.15\times10^{-2}$ via an offline simulation by monitoring $\{\eta_t\}$ during the normal system operation. Further, $M = 4$ is chosen, i.e., sliding observation window consists of $4$ entries. Moreover, the learning parameters are chosen as $\alpha = 0.1$ and $\epsilon = 0.1$, and the episode length is chosen to be $T = 200$. In the learning phase, the defender is firstly trained over $4\times10^5$ episodes where the attack launch time is $\tau = 100$ and then trained further over $4\times10^5$ episodes where $\tau = 1$ to ensure that the defender sufficiently explores the observation space under normal operating conditions as well as the attacking conditions. More specifically, since a learning episode is terminated whenever the action \emph{stop} is chosen and observations under an attack become available to the defender only for $t \geq \tau$, we choose $\tau = 1$ in the half of the learning episodes to make sure that the defender is sufficiently trained under the post-attack regime.
To illustrate the tradeoff between the average detection delay and the false alarm probability, the proposed algorithm is trained for both $c = 0.02$ and $c = 0.2$. Moreover, to obtain a detector that is robust and effective against small deviations of measurements from the normal system operation, the defender needs to be trained with very low-magnitude attacks that correspond to slight deviations from the baseline. For this purpose, some known attack types with low magnitudes are used. In particular, in one half of the learning episodes, random FDI attacks are used with attack magnitudes being realizations of the uniform random variable $\pm \, \mathcal{U}[0.02,0.06]$, i.e., $b_{k,t} \sim \mathcal{U}[0.02,0.06]$, $\forall k \in \{1,\dots,K\}$, $\forall t \geq \tau$. In the other half of the learning episodes, random hybrid FDI/jamming attacks are used where $b_{k,t} \sim \mathcal{U}[0.02,0.06]$, $u_{k,t} \sim \mathcal{N}(0,\sigma_{k,t})$, and $\sigma_{k,t} \sim \mathcal{U}[2\times10^{-4},4\times10^{-4}]$, $\forall k \in \{1,\dots,K\}$, $\forall t \geq \tau$.
\subsection{Performance Evaluation}
\begin{figure}[t]
\center
\includegraphics[width=77mm]{FDI.eps}
\caption{Average detection delay vs. probability of false alarm curves for the proposed algorithm and the benchmark tests in case of a random FDI attack.}
\label{fig:FDI}
\end{figure}
In this section, performance of the proposed RL-based attack detection scheme is evaluated and compared with some existing detectors in the literature. {Firstly, we report the average false alarm period, $\mathrm{E}_\infty[\Gamma]$, of the proposed detection scheme, i.e., the first time on the average the proposed detector gives an alarm although no attack/anomaly happens at all ($\tau = \infty$). The average false alarm periods are obtained as $\mathrm{E}_\infty[\Gamma] = 9.4696\times10^5$ for $c=0.2$ and $\mathrm{E}_\infty[\Gamma] = 7.9210\times10^6$ for $c=0.02$. As expected, false alarm rate of the proposed detector reduces as the relative cost of the false alarm event, $1/c$, increases.}
Based on the optimization problem in \eqref{eq:opt_prob1}, our performance metrics are the probability of false alarm, i.e., $\mathrm{P}_\tau (\{\Gamma < \tau\})$, and the average detection delay, i.e., $\mathrm{E}_\tau \big[(\Gamma-\tau)^+\big]$. Notice that both performance metrics depend on the unknown attack launch time $\tau$. Hence, in general, the performance metrics need to be computed for each possible $\tau$. For a representative performance illustration, we choose $\tau$ as a geometric random variable with parameter $\rho$ such that $P(\tau = k) = \rho \, (1-\rho)^{k-1}, k = 1,2,3, \dots$ where $\rho \sim \mathcal{U}[10^{-4},10^{-3}]$ is a uniform random variable.
With Monte Carlo simulations over 10000 trials, we compute the probability of false alarm and the average detection delay of the proposed detector, the Euclidean detector \cite{Manandhar14}, and the cosine-similarity metric based detector \cite{Rawat15}. To obtain the performance curves, we vary the thresholds of the benchmark tests and vary $c$ for the proposed algorithm. To evaluate the proposed algorithm, we use Algorithm~\ref{alg:test}, which makes use of the $Q$-tables learned in Algorithm~\ref{alg:training} for $c = 0.02$ and $c = 0.2$. {Furthermore, we report the precision, recall, and F-score for all simulation cases. As the computation of these measures requires counting the detected and missed trials, we define an upper bound on the detection delay (the maximum acceptable detection delay): if an attack is detected within this bound, the trial is counted as detected; otherwise, it is counted as missed. As an example, we choose this bound as $10$ time units. Then, we compute the precision, recall, and F-score out of $10000$ trials as follows:
\begin{equation}\nonumber
\mbox{Precision} = \frac{\mbox{\# trials } (\tau \leq \Gamma \leq \tau + 10)}{\mbox{\# trials } (\tau \leq \Gamma \leq \tau + 10) + \mbox{\# trials } (\Gamma < \tau)},
\end{equation}
\begin{equation}\nonumber
\mbox{Recall} = \frac{\mbox{\# trials } (\tau \leq \Gamma \leq \tau + 10)}{\mbox{\# trials } (\tau \leq \Gamma \leq \tau + 10) + \mbox{\# trials } (\Gamma > \tau + 10)},
\end{equation}
and
\begin{equation}\nonumber
\mbox{F-score} = 2 ~ \frac{\mbox{Precision} \times \mbox{Recall}}{\mbox{Precision} + \mbox{Recall}},
\end{equation}
where ``\# trials'' means ``the number of trials with''.}
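Given the $(\tau, \Gamma)$ pairs recorded over the Monte Carlo trials, these measures can be computed as in the following sketch (our code; the delay bound of $10$ matches the choice above):
\begin{verbatim}
def prf(trials, bound=10):
    # trials: list of (tau, Gamma) pairs from Monte Carlo runs.
    detected = sum(t <= g <= t + bound for t, g in trials)
    false_al = sum(g < t for t, g in trials)
    missed   = sum(g > t + bound for t, g in trials)
    precision = detected / (detected + false_al)
    recall = detected / (detected + missed)
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, f_score
\end{verbatim}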
\begin{figure}[t]
\center
\includegraphics[width=77mm]{Structured_FDI.eps}
\caption{{Performance curves for the proposed algorithm and the benchmark tests in case of a structured ``stealth'' FDI attack.}}
\label{fig:stealth_FDI}
\end{figure}
We evaluate the proposed and the benchmark detectors under the following attack scenarios:
\begin{enumerate}
\item Firstly, we evaluate the detectors against a random FDI attack {where $b_{k,t} \sim \mathcal{U}[-0.07,0.07]$, $\forall k \in \{1,\dots,K\}$ and $\forall t \geq \tau$.} The corresponding tradeoff curves are presented in Fig.~\ref{fig:FDI}.
\item {We then evaluate the detectors against a structured ``stealth'' FDI attack \cite{Liu09}, where the injected data $\mathbf{b}_t$ lies on the column space of the measurement matrix $\mathbf{H}$. We choose $\mathbf{b}_t = \mathbf{H} \mathbf{g}_t$ where $\mathbf{g}_t \triangleq [g_{1,t}, \dots, g_{N,t}]^\mathrm{T}$ and $g_{n,t} \sim \mathcal{U}[0.08,0.12]$, $\forall n \in \{1,\dots,N\}$ and $\forall t \geq \tau$. The corresponding performance curves are illustrated in Fig.~\ref{fig:stealth_FDI}.}
\item Then, we evaluate the detectors in case of a jamming attack with zero-mean AWGN where {$u_{k,t} \sim \mathcal{N}(0,\sigma_{k,t})$ and $\sigma_{k,t} \sim \mathcal{U}[10^{-3},2\times10^{-3}]$, $\forall k \in \{1,\dots,K\}$ and $\forall t \geq \tau$.} The corresponding tradeoff curves are presented in Fig.~\ref{fig:jamming}.
\item Next, we evaluate the detectors in case of a jamming attack with jamming noise correlated over the meters where $\mathbf{u}_t \sim \mathbf{\mathcal{N}}(\mathbf{0},\mathbf{U}_t)$, $\mathbf{U}_t = \pmb{\Sigma}_t \pmb{\Sigma}_t^\mathrm{T}$, and $\pmb{\Sigma}_t$ is a random Gaussian matrix with its entry at the $i$th row and the $j$th column is $\pmb{\Sigma}_{t,i,j} \sim \mathcal{N}(0,8\times10^{-5})$. The corresponding performance curves are given in Fig.~\ref{fig:corr_jamm}.
\item Moreover, we evaluate the detectors under a hybrid FDI/jamming attack {where $b_{k,t} \sim \mathcal{U}[-0.05,0.05]$, $u_{k,t} \sim \mathcal{N}(0,\sigma_{k,t})$, and $\sigma_{k,t} \sim \mathcal{U}[5\times10^{-4},10^{-3}]$, $\forall k \in \{1,\dots,K\}$ and $\forall t \geq \tau$.} The corresponding tradeoff curves are presented in Fig.~\ref{fig:hybrid}.
\item Then, we evaluate the detectors in case of a random DoS attack where the measurement of each smart meter become unavailable to the system controller at each time with probability $0.2$. That is, for each meter $k$, $d_{k,t}$ is $0$ with probability $0.2$ and $1$ with probability $0.8$ at each time $t \geq \tau$. The performance curves against the DoS attack are presented in Fig.~\ref{fig:dos}.
\item {Further, we consider a network topology attack where the lines between the buses 9-10 and 12-13 break down. The measurement matrix $\bar{\mathbf{H}}_t$ for $t\geq\tau$ is changed accordingly. The corresponding tradeoff curves are given in Fig.~\ref{fig:topology}.}
\item {Finally, we consider a mixed topology and hybrid FDI/jamming attack, where the lines between buses 9-10 and 12-13 break down for $t\geq\tau$ and further, we have $b_{k,t} \sim \mathcal{U}[-0.05,0.05]$, $u_{k,t} \sim \mathcal{N}(0,\sigma_{k,t})$, and $\sigma_{k,t} \sim \mathcal{U}[5\times10^{-4},10^{-3}]$, $\forall k \in \{1,\dots,K\}$ and $\forall t \geq \tau$. The corresponding performance curves are presented in Fig.~\ref{fig:mixed}.}
\end{enumerate}
{Table~\ref{table:performance_c0p2} and Table~\ref{table:performance_c0p02} summarize the precision, recall, and F-score for the proposed RL-based detector for $c=0.2$ and $c=0.02$, respectively against all the considered simulation cases above. Moreover, for the random FDI attack case, Fig.~\ref{fig:F-measures} illustrates the precision versus recall curves for the proposed and benchmark detectors. Since we obtain similar results for the other attack cases, we report the results for the random FDI attack case as a representative.}
\begin{figure}[t]
\center
\includegraphics[width=77mm]{Jamming.eps}
\caption{Performance curves for the proposed algorithm and the benchmark tests in case of a jamming attack with AWGN.}
\label{fig:jamming}
\end{figure}
\begin{figure}[t]
\center
\includegraphics[width=77mm]{Corr_Jamm.eps}
\caption{Performance curves for the proposed algorithm and the benchmark tests in case of a jamming attack with jamming noise correlated over the space.}
\label{fig:corr_jamm}
\end{figure}
For almost all cases, we observe that the proposed RL-based detection scheme significantly outperforms the benchmark tests. This is because, through the training process, the defender learns to differentiate instantaneous high-level system noise from persistent attacks launched to the system. The trained defender is thus able to significantly reduce its false alarm rate. Moreover, since the defender is trained with low attack magnitudes, it becomes sensitive to small deviations of the system from its normal operation. On the other hand, the benchmark tests are essentially outlier detection schemes making sample-by-sample decisions; hence they are unable to distinguish high-level noise realizations from real attacks, which makes such schemes more vulnerable to false alarms. Finally, in case of DoS attacks, since the meter measurements become partially unavailable so that the system greatly deviates from its normal operation, all detectors are able to detect the DoS attacks with almost zero average detection delays (see Fig.~\ref{fig:dos}).
\begin{figure}[t]
\center
\includegraphics[width=77mm]{Hybrid.eps}
\caption{Performance curves for the proposed algorithm and the benchmark tests in case of a hybrid FDI/jamming attack.}
\label{fig:hybrid}
\end{figure}
\begin{figure}[t]
\center
\includegraphics[width=77mm]{DoS.eps}
\caption{Performance curves for the proposed algorithm and the benchmark tests in case of a DoS attack.}
\label{fig:dos}
\end{figure}
\begin{figure}[t]
\center
\includegraphics[width=77mm]{Topology.eps}
\caption{{Performance curves for the proposed algorithm and the benchmark tests in case of a network topology attack.}}
\label{fig:topology}
\end{figure}
\begin{figure}[t]
\center
\includegraphics[width=77mm]{Mixed.eps}
\caption{{Performance curves for the proposed algorithm and the benchmark tests in case of a mixed network topology and hybrid FDI/jamming attack.}}
\label{fig:mixed}
\end{figure}
\renewcommand{\arraystretch}{1.1}
\begin{table*}[t]
\centering
\begin{tabular}{ | p{1.6cm} | l | l | l | l | l | l | l | l |}
\hline
Measure & FDI & Jamming & Corr. Jamm. & Hybrid & DoS & Structured FDI & Topology & Mixed \\ \hline \hline
Precision & 0.9977 & 0.9974 & 0.9968 & 0.9973 & 0.9977 & 0.9968 & 0.9972 & 0.9973 \\ \hline
Recall & 1 & 1 & 1 & 1 & 1 & 0.9756 & 0.9808 & 1 \\ \hline
F-score & 0.9988 & 0.9987 & 0.9984 & 0.9986 & 0.9988 & 0.9861 & 0.9890 & 0.9986 \\
\hline
\end{tabular}
\vspace{-0.05cm}
\caption{{Precision, recall, and F-score for the proposed detector ($c = 0.2$) in detection of various cyber-attacks.}}
\label{table:performance_c0p2}
\end{table*}
\begin{table*}[t]
\centering
\begin{tabular}{ | p{1.6cm} | l | l | l | l | l | l | l | l |}
\hline
Measure & FDI & Jamming & Corr. Jamm. & Hybrid & DoS & Structured FDI & Topology & Mixed \\ \hline \hline
Precision & 0.9998 & 0.9994 & 0.9998 & 0.9997 & 0.9995 & 0.9993 & 0.9999 & 0.9995 \\ \hline
Recall & 1 & 1 & 1 & 1 & 1 & 0.9449 & 0.9785 & 1 \\ \hline
F-score & 0.9999 & 0.9997 & 0.9999 & 0.9998 & 0.9997 & 0.9713 & 0.9891 & 0.9997 \\
\hline
\end{tabular}
\vspace{-0.05cm}
\caption{{Precision, recall, and F-score for the proposed detector ($c = 0.02$) in detection of various cyber-attacks.}}
\label{table:performance_c0p02}
\end{table*}
\renewcommand{\arraystretch}{1}
\begin{figure}[t]
\center
\includegraphics[width=77mm]{random_FDI_F_scores.eps}
\caption{{Precision vs. recall for the proposed and the benchmark detectors against a random FDI attack.}}
\label{fig:F-measures}
\end{figure}
\section{Concluding Remarks} \label{conclusion}
In this paper, an online cyber-attack detection problem is formulated as a POMDP problem and a solution based on the model-free RL for POMDPs is proposed. The numerical studies illustrate the advantages of the proposed detection scheme in fast and reliable detection of cyber-attacks targeting the smart grid. The results also demonstrate the high potential of RL algorithms in solving complex cyber-security problems. In fact, the algorithm proposed in this paper can be further improved using more advanced methods. Particularly, the following directions can be considered as future works:
\begin{itemize}
\item compared to the finite-size sliding window approach, more sophisticated memory techniques can be developed,
\item compared to discretizing the continuous observation space and using a tabular approach to compute the $Q$ values, linear/nonlinear function approximation techniques, e.g., neural networks, can be used to compute the $Q$ values,
\item and deep RL algorithms can be useful to improve the performance.
\end{itemize}
Finally, we note that the proposed online detection method is widely applicable to any quickest change detection problem where the pre-change model can be derived with some accuracy but the post-change model is unknown. This is, in fact, commonly encountered in many practical applications where the normal system operation can be modeled sufficiently accurately and the objective is the online detection of anomalies/attacks that are difficult to model. Moreover, depending on specific applications, if real post-change, e.g., attack/anomaly, data can be obtained, the real data can be further enhanced with simulated data and the training can be performed accordingly, that would potentially improve the detection performance.
|
2,869,038,154,040 | arxiv | \section*{Acknowledgment}
Support by Grants-in-Aid (Nos.~14084206 and~17340116)
from MEXT/JSPS, Japan, is acknowledged.
Useful discussion with A. Hatabu is appreciated.
|
2,869,038,154,041 | arxiv | \section*{Introduction}
In this paper we consider a number of at first apparently unrelated questions.
First, we study the set of closed geodesics with a single self-intersection (Section \ref{geodonegen}). We show that any such geodesic is contained in an embedded pair of pants (Theorem \ref{pantsthm}) and use that observation together with estimates on the length of the geodesic in terms of the lengths of the boundary components of the pair of pants (Theorem \ref{pantlength}) and results of M.~Mirzakhani to show that the number of geodesics with a single self-intersection and length bounded by $L$ on a hyperbolic surface $S$ is \emph{asymptotic} to $L$ raised to the dimension of the Teichmuller space of $S.$ As a side observation we find that on any hyperbolic surface any curve with one self-intersection has to have length at least $2\acosh 3$ (this is attained for a thrice punctured sphere.
In the case of the punctured torus, we succeed in extending McShane's identity to the set of all geodesics with one self-intersection, thus:
\[
\boxed{
\sum\left(1-\sqrt{1-\left(\dfrac{6}{t(\gamma)}\right)^2}\right) =2,
}
\]
where the sum is taken over all the geodesics with a single self-intersection (and does not depend on the hyperbolic structure).
One approach to understanding self-intersecting curves is to lift to a cover where the curve is simple (that this is always possible is the subject of G. Peter Scott's classic paper \cite{scottlerf}. We show that for geodesics with a single self-intersection a four-fold covering is always sufficient (see Section \ref{cover}), but the method of proof raises the question on when a given covering of a collection of curves on a surface can be extended to a covering of the whole surface. We study these questions in Sections \ref{coversec} and \ref{regcoversec}.
Finally, we are led to the following related question: It is well-known that free groups and fundamental groups of closed surfaces are residually finite. It is reasonable to ask, given an element $g$ of one of these groups $G,$ what the index of a subgroup (normal or otherwise) of $G$ not containing $g$ is. We are able to obtain a number of upper bounds, as follows:
For free and surface groups, given an element $g$ of length $n,$ there is a subgroup of index $O(n)$ not containing $g.$ This is Theorem \ref{bourabee}, which is proved by considering coverings.
For free groups, there is a normal subgroup of index exponential in $n$ which contains \emph{no} element of length $n.$ For surface groups, we are only able to bound the index by exponential of $O(n^2).$ This is the content of Theorems \ref{expthm} and \ref{expthmsurf}. Theorem \ref{expthm} is the result for free groups, and uses the results of Lubotzky-Phillips-Sarnak on expansion of Cayley graphs of special linear groups over finite fields. Theorem \ref{expthmsurf} concerns surface groups, and uses the result of Baumslag that surface groups are residually free (this can be made quantitative, as was pointed out to the author by Henry Wilton).
For free and surface groups there is a normal subgroup of index bounded by $O(n^3)$ which does not contain $g.$ This uses arithmetic representations of both free and surface groups and a little number theory. (Theorems \ref{nongamb}, \ref{nongambsurf})
If $g$ lies at the $k$-th level of the lower central series, for both free and surface groups we get a bound $O(\log^k n)$ for the index of the normal subgroup not containing $g.$ This is the content of Theorems \ref{lowerng} and \ref{lowerngsurf}. The former is proved by constructing unipotent representations, the second follows from the former via the residual freeness approach. The idea of considering the lower central series comes from the (highly recommended) paper of J. Malestein and A. Putman \cite{maleput}
The above result may be put in context by Theorem \ref{avbourabee}, which states that (for free groups) the \emph{average} index of a subgroup not containing a given element is smaller than $3$ -- the proof gives the same result for normal subgroups, but the rank has to be at least four in the normal case. The argument uses the results of the author on distribution of homology classes in free groups, but the argument can be easily tweaked to work for surface groups.
\section{Geodesics with a single intersection on general hyperbolic surfaces}
\label{geodonegen}
Consider a hyperbolic surface $S$ and a geodesic $\gamma$ with exactly one double point $p.$ Let $\gamma_1$ and $\gamma_2$ be the two simple loops into which $\gamma$ is decomposed by $p,$ so that in $\pi_1(S, p)$ we can write $\gamma = \gamma_1 \gamma_2^{-1}.$ It is easy to see that
$\gamma_3 = (\gamma_1 \gamma_2)^{-1}$ is freely homotopic to a simple curve, and indeed:
\begin{theorem}
\label{pantsthm}
The geodesic $\gamma$ is contained in an embedded pair of pants whose boundary components are freely homotopic to $\gamma_1, \gamma_2, \gamma_3.$
\end{theorem}
\begin{proof}
Since $\gamma_1,\gamma_2, \gamma_3$ admit disjoint simple representatives, it is standard (see, e.g., Freedman, Hass, Scott \cite{freedmanhassscott}) that their geodesic representatives are disjoint, and obviously bound a pair of pants. Since a pair of pants is a convex surface, it follows that $\gamma=\gamma_1 \gamma_2^{-1}$ is contained in it.
\end{proof}
Theorem \ref{pantsthm} establishes a bijective correspondence between closed geodesics with a single double point and embedded pairs of pants.
The next observation is:
\begin{theorem}
\label{pantlength}
Let $P$ be a three-holed sphere with boundary components $a, b, c$. Let $\gamma$ be a closed geodesic in $P$ freely homotopic to $a^{-1}b.$ Then, the length of $\gamma$ satisfies:
\begin{equation}
\label{cosheq}
\cosh \ell(\gamma)/2 = 2 \cosh \ell(a)/2 \cosh \ell(b)/2 + \cosh\ell(c)/2.
\end{equation}
\end{theorem}
\begin{proof}
By the Cayley-Hamilton Theorem,
\[
A^{-1} = \tr(A) I - A.
\]
So,
\[
A^{-1} B = \tr(A) B - AB,
\]
and so
\begin{equation}
\label{trace1}
\tr(A^{-1} B) = \tr(A) \tr(B) - \tr(AB) = \tr(A) \tr(B) - \tr(C).
\end{equation}
The result follows from this, the relationship between trace and the translation length and the not-quite-obvious fact that we can take $A, B, C$ with traces negative (this follows from deep results of W. Goldman \cite{goldmanreps}, but in this case we need simply to have one case easy to compute -- the thrice-punctured sphere does nicely).
\end{proof}
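Both the trace identity \eqref{trace1} and the resulting length formula \eqref{cosheq} are easy to sanity-check numerically; the sketch below is ours and purely illustrative. It verifies the identity on random determinant-one matrices and evaluates the length bound of the corollary that follows:
\begin{verbatim}
import numpy as np, math
rng = np.random.default_rng(0)

def random_det_one(rng):
    while True:
        M = rng.standard_normal((2, 2))
        d = np.linalg.det(M)
        if d > 1e-6:
            return M / np.sqrt(d)

A, B = random_det_one(rng), random_det_one(rng)
lhs = np.trace(np.linalg.inv(A) @ B)
rhs = np.trace(A) * np.trace(B) - np.trace(A @ B)
assert np.isclose(lhs, rhs)

def ell_gamma(la, lb, lc):
    # Length of the once-self-intersecting geodesic, Eq. (cosheq).
    ch = 2 * math.cosh(la / 2) * math.cosh(lb / 2) + math.cosh(lc / 2)
    return 2 * math.acosh(ch)

print(ell_gamma(0, 0, 0))   # 2*acosh(3) ~ 3.5255, the minimum
\end{verbatim}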
\begin{corollary}
A closed geodesic with a single double point on a hyperbolic surface $S$ is no shorter than $2\acosh 3 = 2\log(3 + 2 \sqrt{2}) \approx 3.52549.$
Such a geodesic is exactly of length $2\acosh 3$ if and only if $S$ is a three-cusped sphere.
\end{corollary}
\begin{proof}
Since $\cosh(x)$ is monotonic for $x\geq 0,$ the result follows immediately from Eq. \eqref{cosheq}.
\end{proof}
We are only interested in the case where $\ell(a), \ell(b), \ell(c)$ are large, in which case we can write
\[
\ell(\gamma)/2 \approx \log(\exp((\ell(a)+\ell(b))/2) + \exp(\ell(c)/2)).
\]
Now, the boundary of the pair of pants can be viewed as a multicurve on the surface $S$ of length $l(a, b, c) = \ell(a)+ \ell(b)+ \ell(c).$
Further, the three-component multicurves which bound pairs of pants fall into a finite number of mapping class group orbits.
Letting $l(a, b) = \ell(a) + \ell(b),$ we see that
\[
\ell(\gamma)/2 \approx \log(\exp(l(a, b)/2) + \exp(\ell(c)/2)).
\]
Let $l_1 = \max(l(a, b)/2, \ell(c)/2),$ and let $l(a, b, c)/2 = (k+1)l_1,$ where $k \leq 1.$
Then,
\[
l_1 < \log(\exp(l(a, b)/2) + \exp(\ell(c)/2)) \leq l_1 + \log 2.
\]
So, for large $l_1,$ we see that
\[
\ell(\gamma)/l(a, b, c) \approx 1/(k+1).
\]
From this, and the results of Rivin (\cite{geomded}) and Mirzakhani (\cite{mirzakhcurves}; see also \cite{rivinmirzakh}), we see that the order of growth of the number of geodesics with a single double point is the same as the order of growth of an orbit of a multicurve. By the results of Mirzakhani \cite{mirzakhcurves} on the \emph{equidistribution} of the mapping class group orbit of a multicurve in measured lamination space, we get:
\begin{theorem}
\label{mainthm}
The number of the geodesics of length not exceeding $L$ in the mapping class group orbit of a geodesic with a single double point on $S$ is asymptotic to
\[L^{\mbox{dimension of Teichmuller space of $S$}}.
\]
\end{theorem}
\begin{corollary}
\label{maincor}
The number of geodesics with a single double point of length not exceeding $L$ is asymptotic to
\[L^{\mbox{dimension of Teichmuller space of $S$}}.\]
\end{corollary}
\begin{proof}
The result is immediate from Theorem \ref{mainthm} and the observation that there is a finite number of mapping class orbits of geodesics with a bounded number of self-intersections.
\end{proof}
\section{Punctured tori}
\label{ptorus}
The results of the previous section have a particularly nice form when the surface $S$ is a once-punctured torus. A punctured torus $T$ can be cut along a simple closed geodesic $\gamma$ into a sphere with two boundary components (each of length $\ell(\gamma)$) and one cusp. This means that to each simple geodesic $\gamma$ we can associate two geodesics $\gamma_1$ and $\gamma_2,$ both of which have a single self-intersection and are of the same length; if the translation corresponding to $\gamma$ is $A$ and that corresponding to $\gamma_1$ is $B,$ then, from Eq.~\eqref{trace1},
\begin{equation}
\label{trace2}
\tr(B) = 3 \tr(A).
\end{equation}
If $\ell(\gamma)$ is large, Eq.~\eqref{trace2} implies that $\ell(\gamma_1) \approx \ell(\gamma) + 2\log 3.$ Let $N_0(L, T)$ be the number of simple geodesics of length bounded above by $L$ on the punctured torus $T,$ and let $N_1(L, T)$ be the number of geodesics with a single double point on the same torus. Then:
\begin{theorem}
\label{simpnosimp}
\[N_0(L, T) \sim N_1(L, T)/2.\]
\end{theorem}
\begin{remark}
The same result holds when instead of a punctured torus we consider a torus with a geodesic boundary component.
\end{remark}
In addition, we easily deduce an analogue of McShane's identity (\cite{mcshaneid}) for curves with a single self-intersection. Recall that McShane's identity states that on a punctured torus
\[
\sum \dfrac{1}{e^{\ell(\gamma)} + 1} = \dfrac12,
\]
where the sum is taken over all the simple geodesics on the punctured torus.
McShane's identity can be rewritten by using the trace $t(\gamma)$ of the hyperbolic translation corresponding to $\gamma$ as follows:
\[
\sum \left( 1 - \sqrt{1-\left(\dfrac{2}{t(\gamma)}\right)^2}\right) = 1.
\]
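For the reader's convenience, here is the computation behind this rewriting. Since the trace and the length are related by $t(\gamma)=2\cosh(\ell(\gamma)/2),$ we have
\[
\sqrt{1-\left(\dfrac{2}{t(\gamma)}\right)^2}=\sqrt{1-\dfrac{1}{\cosh^2(\ell(\gamma)/2)}}=\tanh(\ell(\gamma)/2),
\]
and
\[
1-\tanh(\ell(\gamma)/2)=\dfrac{e^{-\ell(\gamma)/2}}{\cosh(\ell(\gamma)/2)}=\dfrac{2}{e^{\ell(\gamma)}+1},
\]
so each summand is exactly twice the corresponding summand in the original identity.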
Using this last form and Eq.~\eqref{trace2}, we obtain the ``McShane's identity for self-intersection''
\begin{equation}
\label{mc2}
\sum\left(1-\sqrt{1-\left(\dfrac{6}{t(\gamma)}\right)^2}\right) =2,
\end{equation}
where the sum is taken over all geodesics with a single double point.
\begin{remark}
Combinatorial results on geodesics with a single self-intersection on a punctured torus have been obtained by D. Crisp and W. Moran in \cite{crisp1}.
\end{remark}
\section{Removing intersections by covering}
\label{cover}
It is a celebrated result of G. Peter Scott \cite{scottlerf} that for any hyperbolic surface $S$ and any closed geodesic $\gamma,$ there is a finite cover $\pi: \tilde{S} \rightarrow S,$ such that the lift $\tilde{\gamma}$ to $\tilde{S}$ is simple (non-self-intersecting). Since for any $k,$ there is a finite number of mapping classes of geodesics on $S$ with no more than $k$ self-intersections, and the minimal degree of $\pi$ corresponding to a curve $\gamma$ is invariant under the mapping class group, it follows that there exists some bound $d_S(k)$ so that one can ``desingularize'' any curve with up to $k$ self-intersections by going to a cover of degree at most $d_S(k).$ Unfortunately, Scott's argument appears to give no such bound.
The question of providing good bounds for $d_S(k)$ is, as far as I can say, wide open. Here we will attempt to start the ball rolling by giving sharp bounds for $d_S(1).$
Our first observation is that if $S$ is a three-holed sphere, then $d_S(1) = 2.$ To prove this we consider (without loss of generality) the case where $S$ is a thrice-cusped sphere (the quotient of the hyperbolic plane by $\Gamma(2)$). The proof is contained in the diagram below. As can easily be seen, the lifts of two of the boundary components (say, $A$ and $B$) are connected, and the lift of $C$ has two connected components.
Now, suppose that $\gamma$ is contained in a closed surface $T.$ Since $\gamma$ has a three-holed sphere neighborhood $S,$ if the covering map described above extends to all of $T,$ then $d_T(1) = 2.$ However, there are obviously examples where the map does \emph{not} extend (for example, when $T\backslash S$ has three connected components, or, more generally, when the boundary of one of the components of $T\backslash S$ has one connected component, the lift of which is connected -- since the Euler characteristic of a surface with one boundary component is odd, such a surface is not a double cover. It is easy to see that in all other cases the double cover does extend). It is, however, clear that a further double cover removes the obstruction, and we obtain the following result:
\begin{theorem}
For any oriented hyperbolic surface $S$ and a geodesic $\gamma\subset S$ with a single double point, there is a four-fold cover of $S$ where $\gamma$ lifts to a simple curve.
\end{theorem}
\begin{remark}
The result is not sharp for some surfaces with boundary (for example, the thrice punctured sphere). The result is vacuously true for the 2-sphere and the 2-torus equipped with metrics of constant curvature, since \emph{all} the geodesics for those metrics are simple.
\end{remark}
\section{Extending covering spaces}
\label{coversec}
The results of the previous section suggest the following question:
\begin{question}
\label{coverq}
Given an oriented surface $S$ with boundary $\partial S,$ and a covering map of $1$-manifolds
$\pi: \widetilde{C} \rightarrow \partial S$ the fibers of which have constant cardinality $n.$ When does $\pi$ extend to a covering map $\Pi: \widetilde{S}\rightarrow S,$ where $\partial \widetilde{S} \simeq \widetilde{C}$?
\end{question}
It turns out that Question \ref{coverq} has a complete answer -- Theorem \ref{sullthm} below (due, essentially, to D. Husemoller \cite{husemollercovers}). First, we note that to every degree $n$ covering map $\sigma: X\rightarrow Y$ we can associate a permutation representation $\Sigma: \pi_1(Y) \rightarrow S_n.$ Further, two coverings $\sigma_1$ and $\sigma_2$ are equivalent if and only if the associated representations $\Sigma_1$ and $\Sigma_2$ are conjugate (see, for example, \cite{hatcheralgtop}[Chapter 1] for the details). This means that the boundary covering map $\pi$ is represented by a collection of $k=|\pi_0(\partial S)|$ conjugacy classes $\Gamma_1, \dotsc, \Gamma_k$ in $S_n,$ each of which is the conjugacy class of the image of the generator of the fundamental group of the corresponding component under the associated permutation representation.
\begin{theorem}
\label{sullthm}
A covering $\pi:\widetilde{C}\rightarrow C=\partial S$ extends to a covering of the surface $S$ if and only if the following conditions hold:
\begin{enumerate}
\item $S$ is a planar surface (that is, the genus of $S$ is zero) and there exists a collection $\{\sigma_i\}_{i=1}^k$ of elements of the symmetric group $S_n$ with $\sigma_i \in \Gamma_i$ such that
$\sigma_1 \sigma_2 \dots \sigma_k = e,$ where $e\in S_n$ is the identity.
\item $S$ is not a planar surface, and the sum of the parities of $\Gamma_1, \dotsc, \Gamma_k$ vanishes.
\end{enumerate}
\end{theorem}
\begin{proof}
In the planar case, the fundamental group of $S$ is freely generated by the generators $\gamma_1, \dotsc, \gamma_{k-1}$ of the fundamental groups of (any) $k-1$ of the boundary components. The generators of all $k$ boundary components satisfy $\gamma_1 \dotsm \gamma_k = e,$ whence the result in this case.
In the nonplanar case, let us first consider the case where $k=1.$ The generator $\gamma$ of the single boundary component is then a product of $g$ commutators (where $g$ is the genus of the surface), and so $\Sigma(\gamma)$ is in the commutator subgroup of $S_n,$ which is the alternating group $A_n,$ so the class $\Sigma(\gamma)$ has to be even. On the other hand, it is a result of O. Ore \cite{orecomm} that any even permutation is a commutator $\alpha \beta \alpha^{-1} \beta^{-1},$ and thus sending some pair of handle generators to $\alpha$ and $\beta$ respectively and the other generators of $\pi_1(S)$ to $e$ defines the requisite homomorphism of $\pi_1(S)$ to $S_n.$
If $k>1,$ the surface $S$ is a connected sum of a surface of genus $g>0$ (by assumption) and a planar surface with $k$ boundary components. Let $\gamma$ be the ``connected summing'' circle. By the planar case, there is no obstruction to defining $\Pi$ on the planar side (since $\gamma$ is not part of the original data). However, $\Sigma(\gamma)$ will be the inverse of the product of elements $\sigma_i \in \Gamma_i$ and so its parity will be the sum of the parities of $\Gamma_i.$ To extend the cover to the non-planar side of the connected sum, it is necessary and sufficient for this sum to be even.
\end{proof}
Some remarks are in order. The first one concerns the planar case of Theorem \ref{sullthm}. It is not immediately obvious how one might be able to figure out whether, given some conjugacy classes in the symmetric group, there are representatives of these classes which multiply out to the identity. Luckily, there is the following result of Frobenius (see \cite{serregalois}[p. 69]):
\begin{theorem}
\label{frob}
Let $C_1, \dotsc, C_k$ be conjugacy classes in a finite group $G.$ The number $n$ of solutions to the equation $g_1 g_2 \dots g_k = e,$ where $g_i\in C_i$ is given by
\[
n = \dfrac{1}{|G|} |C_1| \dots |C_k| \sum_{\chi} \dfrac{\chi(x_1) \dots \chi(x_k)}{\chi(1)^{k-2}},
\]
where $x_i \in C_i$ and the sum is over all the complex irreducible characters of $G.$
\end{theorem}
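To make Theorem \ref{frob} concrete, here is a small computational sanity check (a toy illustration of ours, with the classes and the character table of $S_3$ hard-coded by hand), comparing the character-theoretic count with brute-force enumeration for two classes of transpositions and the class of $3$-cycles:
\begin{verbatim}
# Toy check of Frobenius' formula in G = S_3.
from itertools import permutations, product

def compose(a, b):                  # (a*b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(len(b)))

elems = list(permutations(range(3)))
e = (0, 1, 2)
transpositions = [g for g in elems if sum(g[i] != i for i in range(3)) == 2]
three_cycles = [g for g in elems if all(g[i] != i for i in range(3))]

# Brute-force count of (g1, g2, g3) with g1 g2 g3 = e.
brute = sum(compose(compose(g1, g2), g3) == e
            for g1, g2, g3 in
            product(transpositions, transpositions, three_cycles))

# Frobenius' formula with k = 3; character values on
# (transposition, transposition, 3-cycle), one row per character.
chars = [(1, 1, 1), (-1, -1, 1), (0, 0, -1)]  # trivial, sign, standard
dims = [1, 1, 2]                              # chi(1)
frob = (len(transpositions)**2 * len(three_cycles) / 6) * \
    sum(c1 * c2 * c3 / d**(3 - 2) for (c1, c2, c3), d in zip(chars, dims))

print(brute, frob)                            # prints: 6 6.0
\end{verbatim}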
Special cases of the planar case are considered in \cite{Kulkarnicovers}; enumeration questions for covers are considered in a number of papers by A.~Mednykh -- see \cite{Mednykhenum} and references therein.
The second remark is on Ore's result that every element of $A_n$ is a commutator. This result was strengthened by E. Bertram in \cite{bertramcycles} and, independently and much later, by H. Cejtin and the author in \cite{henry} (the second argument has the virtue of being completely algorithmic; the first, aside from being 30 years earlier, proves a stronger result) to the statement that every even permutation $\sigma$ is the product of two $n$-cycles (Bertram actually shows that it is the product of two $l$-cycles for any
$l\geq (M(\sigma) + C(\sigma))/2,$ where $M(\sigma)$ is the number of elements moved by $\sigma$ while $C(\sigma)$ is the number of cycles in the cycle decomposition of $\sigma$).
The significance of this to coverings is that we have a very simple way of constructing a covering of a surface with one boundary component with specified cycle structure of the covering of the component, as follows.
First, the proof of Theorem \ref{sullthm} shows that the construction reduces to the case where $g=1,$ so that we are constructing a covering of a torus with a single perforation.
Suppose now that the permutation can be written as $\sigma\tau\sigma^{-1}\tau^{-1},$ where $\sigma$ is an $n$-cycle. This means that the ``standard'' generators of the punctured torus group go to $\sigma$ and $\tau,$ respectively. To construct the cover, then, take the standard square fundamental domain $D$ for the torus (the puncture is at the vertices of the square), then arrange $n$ of these fundamental domains in a row, and then make a strip, by gluing the rightmost edge to the leftmost edge. Then, for each $i,$ the upper edge of the $i$-th domain from the left ($D_i$) is glued to the lower edge of $D_{\tau(i)}.$
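The bookkeeping in this construction is easy to mechanize. The following sketch (our own illustration; the sample permutations are made up) computes the cycle type of the commutator $\sigma\tau\sigma^{-1}\tau^{-1}$: its cycles correspond to the boundary components of the resulting cover, and the cycle lengths are the degrees with which these components cover the puncture.
\begin{verbatim}
def compose(a, b):                  # (a*b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(len(b)))

def inverse(a):
    inv = [0] * len(a)
    for i, j in enumerate(a):
        inv[j] = i
    return tuple(inv)

def cycle_type(p):
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j); j = p[j]; length += 1
            lengths.append(length)
    return sorted(lengths, reverse=True)

n = 5
sigma = tuple((i + 1) % n for i in range(n))  # the n-cycle 0 -> 1 -> ... -> 0
tau = (1, 0, 2, 3, 4)                         # sample choice: a transposition
comm = compose(compose(sigma, tau), compose(inverse(sigma), inverse(tau)))
print(cycle_type(comm))   # [3, 1, 1]: three boundary components,
                          # covering the puncture with degrees 3, 1, 1
\end{verbatim}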
In an upcoming joint paper with Manfred Droste we extend the results of this section to \emph{infinite} covers.
\section{Quantifying residual finiteness}
\label{resfin}
Khalid Bou-Rabee in \cite{khalidresid} has analyzed the following question: given a residually finite group $G$ and an element $g\in G,$ how high an index subgroup $H< G$ must one take so that $g \notin H,$ in terms of the word-length of $g$? Bou-Rabee answers the question for important classes of groups, including arithmetic lattices and nilpotent groups.
Here we wish to point out that for surface groups (including free groups) we have the following bound:
\begin{theorem}
\label{bourabee}
Given an element $g\in G$ of word length $l(g),$ there is a subgroup $H$ of $G$ of index $O(l(g)),$ such that $g \notin H.$
\end{theorem}
\begin{proof}
Let $F$ be a surface such that $\pi_1(F) = G.$
It is obviously equivalent to construct a cover $\widetilde{F}$ of the surface $F$ whose fundamental group is $H,$ such that $g$ does not lift to $\widetilde{F}.$ There are two cases. The first is when the geodesic $\gamma(g)$ in the conjugacy class of $g$ is simple. In that case there are two further cases: the first arises when $\gamma(g)$ is homologically nontrivial. In that case, there is a geodesic $\beta$ transversely intersecting $\gamma(g)$ in one point. Cutting $F$ along $\beta$ and then doubling gives us a double cover where $g$ does not lift. The second case is when $\gamma(g)$ bounds. In that case, cut along $\gamma(g)$ to obtain two surfaces with boundary. Each of them admits a (connected) cover which restricts to a triple connected cover over $\gamma(g).$ Gluing along this cover, we obtain a cover of $F$ where $g$ does not lift.
The second case is when $g$ is \emph{not} simple. In this case, an examination of G.~P.~Scott's argument in \cite{scottlerf} shows that there is a cover $\widetilde{F}$ of $F$ of index linear in the word length of $g$ where the lift of $g$ is simple. The first case analyzed above then completes the argument.
\end{proof}
The usual definition of residual finiteness is the following: a group $G$ is residually finite, if for every $g\in G$ there is a homomorphism $\psi_g: G \rightarrow H,$ where $H$ is finite and such that $\psi_g(g) \neq e.$ In other words, it postulates the existence of a \emph{normal} subgroup of finite index ($\ker \psi_g$) which does not contain $g.$ Now, since every subgroup of index $k$ in an infinite group $G$ contains a normal subgroup of index $k!$ (index in $G,$ that is), the two points of view on residual finiteness are logically equivalent \emph{if} we don't care too much about the index. If we do, note that Theorem \ref{bourabee} gives us the following Corollary:
\begin{corollary}
\label{bouracor}
Let $G$ be a surface group. Given an element $g \in G$ of word length $l(g),$ there is a \emph{normal} subgroup $H$ of index at most $(c l(g))!$ which does not contain $g.$
\end{corollary}
Corollary \ref{bouracor} can be improved considerably for free groups:
\begin{theorem}
\label{expthm}
Consider the free group on $k$ letters $F_k$, and let $n > 1.$ There exists a normal subgroup $H_n$ of $F_k$ of index $f(n)$ which contains \emph{no} non-trivial elements of word length smaller than $n,$ where the index $f(n)$ can be bounded by
\[
f(n) \leq c (2k-1)^{3n/4}
\]
for some constant $c.$
\end{theorem}
\begin{proof}
We first note that if we have a homomorphism $\phi$ of $F_k=\langle a_1, \dots, a_k\rangle$ onto a finite group $H$ with the Cayley graph $C_H$ of $H$ with respect to the generating set $\phi(a_1), \dotsc, \phi(a_k),$ then no word in $F_k$ shorter than the girth of $C_H$ is in the kernel of $\phi.$
We now use the following result of Lubotzky, Phillips, and Sarnak (\cite{lps}, see \cite{dsv} for an expository account):
\begin{citation}
For $p, q$ prime, with $p\geq 5$, $q \gg p$, and $p$ a quadratic non-residue mod $q,$ there is a symmetric generating set $S$ of the group $\PSL(2, q)$ of cardinality $p+1$ such that the Cayley graph of $\PSL(2, q)$ with respect to $S$ has girth no smaller than $4\log_p q - \log_p 4.$
\end{citation}
It follows that no element of $F_{(p+1)/2}$ of length shorter than $n(p, q)=4\log_p q - \log_p 4$ is killed by the homomorphism $\phi_q$ that sends the free generators of $F_{(p+1)/2}$ and their inverses to $S.$
The group $\PSL(2, q)$ has order $m_q = q(q^2-1)/2 \sim q^3/2,$ which, using the girth bound $n(p, q),$ we can rewrite as
\[
m_q \sim \tfrac12\left(4p^{n(p, q)}\right)^{3/4}=2^{1/2}\,p^{3n(p, q)/4}.
\]
If $k\neq(p+1)/2$ for every prime $p,$ we can find a subgroup of small index in $F_k$ which \emph{is} a free group on $(p+1)/2$ letters for some prime $p.$ Using Dirichlet's theorem on primes in arithmetic progressions we can then find a suitable $q.$
\end{proof}
Theorem \ref{bourabee} can be combined with the results of \cite{walks} to obtain the following result:
\begin{theorem}
\label{avbourabee}
Consider the set $B_N$ of all elements in the free group $F_k$ having length no more than $N$ in the generators. Then the \emph{average} index of the subgroup not containing a given element over $B_N$ is bounded above by a constant (which can be taken to be approximately $2.92$).
\end{theorem}
\begin{proof}[Proof Sketch]
Theorem \ref{bourabee} together with the results of \cite{walks} reduce the question to the same question, but with $F_k$ replaced by $\mathbb{Z}^k.$ Consider an element $x=(x_1, \dotsc, x_k) \in \mathbb{Z}^k.$ The element $x$ is \emph{not} contained in $p\mathbb{Z} \times \mathbb{Z}^{k-1}$ if $p$ does not divide $x_1.$ The result now follows by Lemma \ref{primelem} below.
\end{proof}
\begin{lemma}
\label{primelem}
For every $n$ define $p(n)$ to be the smallest prime which does \emph{not} divide $n$. Then the expectation of $p(n)$ over all $n< N$ converges to $c=2.920\dotsc$ as $N$ tends to infinity.
\end{lemma}
\begin{proof}
For a fixed prime $p,$ the probability that $p(n) = p$ for $n<N$ converges, as $N\to\infty,$ to
\[
\dfrac{p-1}{p}/\prod_{q<p}q,
\]
where the product is over all the primes smaller than $p,$ and so the expectation of $p(n)$ is given by:
\[
\mathbb{E}(p) = \sum_{p}(p-1)/\prod_{q<p}q,
\]
and the latter sum converges rapidly to $2.92005...$
\end{proof}
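The numerical value of the constant is easy to reproduce; the following few lines (an illustration of ours) sum the rapidly convergent series over the first few primes.
\begin{verbatim}
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23]
total, primorial = 0.0, 1
for p in primes:
    total += (p - 1) / primorial   # p * P(p(n) = p) = (p-1)/primorial
    primorial *= p
print(total)                       # 2.9200509...
\end{verbatim}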
We note that the above argument \emph{does not} work if we replace the words \emph{index of subgroup} by \emph{index of normal subgroup}, since in that case we don't have the necessary control over the commutator subgroup. The bound given by Theorem \ref{expthm} is certainly not good enough -- we need a uniform estimate on the index of the normal subgroup of order $o(n^k).$
We can get an estimate of the right type as follows:
\begin{theorem}
\label{nongamb}
Let $w$ be represented by a word of length $n$ in $F_2.$ There exists a prime $p\leq c n$ such that $w$ is not in the kernel of the homomorphism of $F_2$ to $\SL_2(\mathbb{Z}/p\mathbb{Z})$ which maps the generators of $F_2$ to $g=\begin{pmatrix}1&2\\0&1\end{pmatrix}$ and $h=\begin{pmatrix}1&0\\2&1\end{pmatrix}$ respectively.
\end{theorem}
\begin{proof}
First, consider $g$ and $h$ to be elements of $\SL_2(\mathbb{Z}).$ The matrix given by the word $w(g, h)$ has entries no larger than $3^n$ in absolute value (each multiplication by $g^{\pm 1}$ or $h^{\pm 1}$ at most triples the largest entry). By the Chinese Remainder Theorem, if $p_1, \dotsc, p_k$ are primes whose product exceeds $3^n,$ any such matrix which is the identity modulo all of the $p_i$ is, in fact, the identity matrix, and thus the word $w$ is the trivial element of $F_2$ (since $g$ and $h$ lie in the principal congruence subgroup of level $2$ in the modular group, and that principal congruence subgroup is free). By the prime number theorem, the product of all the primes not exceeding $m$ is asymptotic to $e^m$ for $m$ large, whence the result.
\end{proof}
The method of proof of Theorem \ref{nongamb} is quite suggestive. For example, consider a word $w$ of length $n$ in $F_2=\langle a, b\rangle$ where the sum of the exponents of all the terms is \emph{not} zero. Then, by mapping both $a$ and $b$ to $g=\begin{pmatrix}1 & 1\\0 & 1\end{pmatrix}$ we see that the entries of $w(g, g)$ are no bigger than $n,$ and so such a word is not in the kernel of a homomorphism of $F_2$ into a group of order at most $\log n.$ If the exponent sum of $w$ is zero, but the individual exponent sums of $a$ and $b$ are not, we can modify the construction by sending $a$ to $g$ (as above) but sending $b$ to $g^2.$ This will give us the same result up to an additive constant. If the individual exponent sums are zero, the method fails, but the element $w$ is in the commutator subgroup (so $w$ is in the kernel of every homomorphism of $F_2$ to an abelian group). However, we can replace the abelian group by a $2$-step nilpotent group of ($3\times 3$) unipotent matrices. Using Lemma \ref{unipot} below, the proof of Theorem \ref{nongamb} shows that if $w$ is in the second level of the lower central series of $F_2,$ then it is not in the kernel of a homomorphism to a group of order $O(\log^2(n)),$ where $n$ is the length of $w,$ and so on. We thus obtain the following statement, which we believe to be sharp:
\begin{theorem}
\label{lowerng}
For every element $w$ of length $n$ at the $k$-th level in the lower central series of $F_2,$ there is a normal subgroup $H(w)$ of index $O(\log^k(n))$ which does not contain $w$. The subgroup $H(w)$ is the kernel of a homomorphism onto a $k$-step nilpotent subgroup represented by $(k+1)\times (k+1)$ unipotent matrices.
\end{theorem}
\begin{remark}
Since $F_n$ is a finite index subgroup of $F_2$ (for any $n\geq 2$) we could have replaced $F_2$ by $F_n$ in the statement of Theorem \ref{lowerng}.
\end{remark}
We have used the following lemma:
\begin{lemma}
\label{unipot}
Let $m_1, \dotsc, m_k$ be $n\times n$ unipotent matrices. Let $M=w(m_1, \dotsc, m_k),$ where $w$ is a word of length $m.$ Then, the entries of $M$ grow no faster than $O(m^{n-1}),$ where the implicit constant in the $O$-notation depends only on the matrices $m_1, \dotsc, m_k.$
\end{lemma}
\begin{proof}
Write each $m_i$ as $m_i = I_n + m_i^\prime.$ The matrices $m_1^\prime, \dotsc, m_k^{\prime}$ generate a nilpotent ideal $\mathcal{I},$ with $\mathcal{I}^n=0.$ This means that $w(m_1, \dotsc, m_k)$ can be written as a sum of $j$-fold products of the $m_i^\prime,$ where $j$ ranges between $0$ and $n-1.$ The number of $j$-fold products is at most $\binom{m}{j} = O(m^j),$ whence the result.
\end{proof}
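As a quick empirical illustration of the lemma (our own sketch, with two made-up $3\times 3$ unipotent generators, so that $n=3$), the largest entry of the word $(UV)^{m/2}$ grows quadratically in the word length $m$, i.e., like $m^{n-1}$:
\begin{verbatim}
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

U = [[1, 1, 0], [0, 1, 0], [0, 0, 1]]   # elementary unipotent generators
V = [[1, 0, 0], [0, 1, 1], [0, 0, 1]]
for m in (10, 100, 1000):
    M = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    for _ in range(m // 2):
        M = matmul(matmul(M, U), V)
    print(m, max(abs(x) for row in M for x in row))
# output: (10, 15), (100, 1275), (1000, 125250) -- quadratic growth
\end{verbatim}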
The remaining question, then, is: how deeply in the lower central series can an element of word-length $n$ be? It is clear that a word of length $n$ cannot have depth greater than $n,$ so $k=O(n).$ It has been proved by J. Malestein and A. Putman in \cite{maleput} that $k=\Omega(\sqrt{n}).$
\subsection{Surface Groups}
\label{surfsec}
The first observation is that the proof of Theorem \ref{nongamb} does not depend on the fact that the group is free, but only on the existence of representations of the group into $\SL(2, \mathcal{O}),$ where $\mathcal{O}$ is the ring of integers in some algebraic number field -- instead of the prime number theorem we then use Landau's Prime Ideal Theorem (see, e.g., \cite{montvaughan}) to get exactly the same estimate. To show that every surface group admits such a representation we use a (stronger) observation of C. Maclachlan and A. Reid (see \cite{MacReidCanJ}):
\begin{observation}
\label{picardobs}
The \emph{Picard Modular Group} -- $\PSL(2, \mathbb{Z}[i])$ -- contains the fundamental group of the compact surface of genus $2,$ and thus the fundamental group of every compact surface.
\end{observation}
This gives us the following version of Theorem \ref{nongamb}:
\begin{theorem}
\label{nongambsurf}
Let $w$ be represented by a word of length $n$ in the fundamental group $\Gamma_g$ of the surface of genus $g.$ There exists a normal subgroup of index $O(n^3)$ which does not contain $w.$
\end{theorem}
The arithmetic method does not (at least not obviously) extend the proofs of Theorems \ref{lowerng} and \ref{expthm} to the case of surface groups. To extend these results we note that we need only extend them to the fundamental group $\Gamma_2$ of the surface of genus $2.$ This is so because of the following observations:
\begin{observation}
\label{surfacefinite}
Every surface group $\Gamma_g$ is a subgroup of finite index of $\Gamma_2$ (in multiple ways, but we pick a fixed (e.g., cyclic) covering for each $g$).
\end{observation}
\begin{observation}
\label{distobs}
By word-hyperbolicity, there exists a constant $c,$ such that every word of length $l$ in $\Gamma_g$ has word length between $l/c$ and $c l$ in $\Gamma_2.$ In fact, it is not hard to show that for the cyclic cover, the constant $c$ can be taken to be $g-1$ -- it can be shown that with a more judicious choice of covering this can be improved to $O(\log g).$
\end{observation}
\begin{observation}
\label{surfacelcs}
If an element of $\Gamma_g$ lies at the $k$-th level of the lower central series of $\Gamma_g,$ then it lies in \emph{at most} the $k$-th level of the lower central series of $\Gamma_2.$ This is a more-or-less immediate corollary of the definition.
\end{observation}
\begin{observation}
\label{subgroupint}
Let $H$ be a subgroup of finite index $k$ in $\Gamma_2.$ Then $H \cap \Gamma_g$ is of index at most $k$ in $\Gamma_g.$ This is a standard exercise.
\end{observation}
To deal with $\Gamma_2,$ we use the following method, suggested by Henry Wilton: Write $\Gamma_2$ as $\Gamma_2 = \langle a, b, c, d ~\left| [a, b] = [c, d]\right.\rangle.$
There is a standard retraction $r:\Gamma_2 \rightarrow F_2=\langle a, b\rangle,$ with $r(a)=r(c) = a,$ and $r(b)=r(d) = b.$ The retraction $r$ has the obvious property of not decreasing the lower central series depth, but it does have the unfortunate property of having a nontrivial kernel. However, this can be dealt with by using the following result, attributed by H. Wilton to G. Baumslag:
\begin{lemma}[\cite{wiltonex}[Lemma 4.13]]
\label{wiltonlem}
Let $\mathbb{F}$ be a free group, let $z \in \mathbb{F},$ $z\neq 1,$ and let
\[
g = a_0 z^{i_1} a_1 \dotsm z^{i_n}a_n,
\]
with $a_0, a_1, \dotsc, a_n \in \mathbb{F}.$ Assume further that $n\geq 1$ and $[a_k, z] \neq 1$ for $0< k<n.$ Then, if for every $1\leq k\leq n$ it is true that
\[
|z^{i_k}|\geq |a_{k-1}| + |a_k| + |z|,
\]
then $g$ does not represent the trivial word in $\mathbb{F}.$
\end{lemma}
To use Lemma \ref{wiltonlem} we introduce the \emph{Dehn Twist automorphism} $\phi$ of $\Gamma_2,$ which is defined by $\phi(a) = a, \phi(b) = b, \phi(c) = c^{[a, b]}, \phi(d) = d^{[a, b]},$ and recall the following easy fact:
\begin{lemma}
\label{centralizer}
Consider elements $x, y$ in the fundamental group $G$ of a compact surface $S.$ Then $[x, y] = 1$ if and only if there exists an element $w$ and integers $k, l$ such that $x=w^k,$ $y=w^l.$
\end{lemma}
\begin{proof}
Represent $G$ as a Fuchsian group of isometries of $\mathbb{H}^2.$ Then, $x$ and $y$ are isometries of $\mathbb{H}^2.$ Since $S$ is a compact manifold, both $x$ and $y$ are hyperbolic elements, and since they commute, they have the same axis. Since the group $G$ is discrete, the translation distances of $x$ and $y$ are commensurate, and so $x=\gamma^m$ and $y=\gamma^n$ for some translation $\gamma$ (which is not necessarily in $G.$) However, by the obvious application of the Euclidean algorithm, $\beta=\gamma^{(m, n)} \in G,$ and $x=\beta^{m/(m, n)},$ and $y=\beta^{n/(m, n)},$ so $k=m/(m, n), l=n/(m, n)$ and $w=\beta$ as stated.
\end{proof}
Now we are ready to prove the key observation
\begin{theorem}
\label{limitgp}
Let $g\in \Gamma_2,$ $g\neq 1.$ Then $r(\phi^{l(g)/4}(g) )\neq 1,$ where $l(g)$ is the minimal length of $g$ in terms of the generating set $\{a, b, c, d\}.$
\end{theorem}
\begin{proof} Let $w(g)$ be a shortest word in $a, b, c, d$ representing $g.$
First, write $w(g) = L_1 R_1 \dotsm L_k R_k,$ where the $L_i$ are blocks of $a$s and $b$s and the $R_i$ are blocks of $c$s and $d$s. Further, let $z_1 = [a, b]$ and $z_2 = [c, d]$ (these represent the same element $z$ of $\Gamma_2,$ but we think of them as words for now). Now, apply the following rewriting process (see Algorithm \ref{gen1}) to $w(g)$:
First, if any $L_i$ is a power of $z_1$, replace it by the same power of $z_2.$ Second, if any $R_j$ is a power of $z_2,$ replace it by the same power of $z_1.$ Then repeat the two steps until neither can be applied. Note that this process will terminate eventually, since each step reduces the total number of blocks (in fact, it reduces \emph{both} the number of $L$ blocks and the number of $R$ blocks). Call the resulting word $w_0(g)$ ($|w_0(g)| = |w(g)|,$ since $w(g)$ was assumed minimal). By abuse of notation, let
$w_0(g) = L_1 R_1\dots L_k R_k$ (where the $k$ might be different from the $k$ in $w(g)$). Note that applying the automorphism $\phi^m$ to $g$ replaces
each occurrence of a block $R_j$ in $w_0$ by $z^{-m} R_j z^m,$ and hence applying
$r\circ \phi^m$ to $g$ maps $g$ to
the word $u_0(g)=L_1 z_1^{-m} r(R_1) z_1^m L_2 z_1^{-m} r(R_2) z_1^m \dotsm L_k z_1^{-m} r(R_k) z_1^m.$ By construction of $w_0$ and Lemma \ref{centralizer}, the hypotheses of Lemma \ref{wiltonlem} hold, as long as $4(m-1)\geq |w_0(g)|.$
\end{proof}
\begin{algorithm}
\label{gen1}
\caption{Rewriting Algorithm}
\begin{algorithmic}[1]
\LOOP
\IF {$w(g) = z_1^p$}
\STATE Return $w(g).$
\ELSIF{Any $L_i$ is a power of $z_1,$ so that $L_i=z_1^{m_i}$}
\STATE Replace $L_i$ by $z_2^{m_i}.$
\ELSIF {any $R_j$ is a power of $z_2,$ so that $R_j=z_2^{n_j}$}
\STATE Replace $R_j$ by $z_1^{n_j}.$
\ENDIF
\ENDLOOP
\end{algorithmic}
\end{algorithm}
\begin{lemma}
\label{wordlen}
The length of $r(\phi^{l(g)/4}(g))$ is bounded above by $l(g)^2+l(g).$
\end{lemma}
\begin{proof}
Computation.
\end{proof}
As a corollary of Theorem \ref{limitgp} and Lemma \ref{wordlen}, we have the following extensions of Theorems \ref{lowerng} and \ref{expthm} respectively:
\begin{theorem}
\label{lowerngsurf}
For every element $w$ of length $n$ at the $k$-th level in the lower central series of the fundamental group of a closed surface of genus $g$ there is a normal subgroup $H(w)$ of index $O(\log^k(n))$ which does not contain $w$.
\end{theorem}
\begin{theorem}
\label{expthmsurf}
Consider the fundamental group $\Gamma_g$ of a surface of genus $g$, and let $n > 1.$ There exists a normal subgroup $H_n$ of $\Gamma_g$ of index $f(n)$ which contains \emph{no} non-trivial elements of word length smaller than $n,$ where the index $f(n)$ can be bounded by
\[
f(n) =O(g^{O(n^2)}).
\]
\end{theorem}
I do not expect that the bound in the statement of Theorem \ref{expthmsurf} is close to sharp (but it \emph{is} a bound).
\section{Regular coverings}
\label{regcoversec}
D. Futer asked whether the results of the previous section had analogues when the covering given by $\Pi$ was additionally required to be \emph{regular}. This seems to be a hard question in general. For example, in the case where $S$ is a planar surface with $k$ boundary components, we have the following result:
\begin{theorem}
\label{regplanar}
In order for a covering of the boundary of $S$ to extend to a \emph{regular} covering of $S,$ it is necessary and sufficient that, in addition to the requirements of Theorem \ref{sullthm} (part 1), there be a subgroup $G < S_n$ with $|G| = n$ which is generated by $\gamma_1, \dotsc, \gamma_k,$ where $\gamma_i \in \Gamma_i,$ for $i=1, \dotsc, k.$
\end{theorem}
As far as the author knows, there is no particularly efficient way of deciding whether the condition of Theorem \ref{regplanar} is satisfied.
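In practice, for very small $n$ one can simply search by brute force. The following sketch (our own illustration, not an efficient algorithm) tests the condition of Theorem \ref{regplanar} directly; as sample data it confirms the condition for the double cover of the thrice-cusped sphere constructed in Section \ref{cover}.
\begin{verbatim}
from itertools import product

def compose(a, b):
    return tuple(a[b[i]] for i in range(len(b)))

def generated(gens, n):             # subgroup generated by gens
    group = {tuple(range(n))}
    frontier = set(group)
    while frontier:
        new = {compose(g, x) for g in gens for x in frontier} - group
        group |= new
        frontier = new
    return group

def extends_regularly(classes, n):  # classes: lists of permutations
    e = tuple(range(n))
    for reps in product(*classes):
        prod = e
        for g in reps:
            prod = compose(prod, g)
        if prod == e and len(generated(reps, n)) == n:
            return True
    return False

# Two classes of the transposition and the class of the identity in S_2:
t, e2 = (1, 0), (0, 1)
print(extends_regularly([[t], [t], [e2]], 2))   # True
\end{verbatim}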
Here is a more satisfactory (if not very positive) result:
\begin{theorem}
\label{regq}
Let $S$ be a surface with one boundary component. There does not exist a nontrivial \emph{regular} covering of finite index $\Pi: \widetilde{S}\rightarrow S$ where $\widetilde{S}$ also has one boundary component.
\end{theorem}
\begin{proof}
Let the degree of the covering be $n.$ If $\widetilde{S}$ has one boundary component, the generator of $\pi_1(\partial S)$ gives rise to the cyclic group $\mathbb{Z}/n\mathbb{Z},$ which is a subgroup of the deck group of $\Pi.$ Since the deck group has order $n$ (by regularity), the covering is cyclic (so that the deck group is, in fact, $\mathbb{Z}/n \mathbb{Z}$). A cyclic group is abelian, and since the generator of $\pi_1(\partial S)$ is a product of commutators, it is killed by the map $\Sigma.$ But this contradicts the statement of the first sentence of this proof (that this same element generates the entire deck group).
\end{proof}
The proof also shows the following:
\begin{theorem}
\label{regq2}
Let $\Pi: \widetilde{S}\rightarrow S,$ where $S$ has a single boundary component, be an \emph{abelian} regular covering of degree $n.$ Then $\widetilde{S}$ has $n$ boundary components.
\end{theorem}
We can combine our results in the following omnibus theorem:
\begin{theorem}
\label{regq3}
Let $S$ be a surface, whose boundary has $k$ connected components, and let the conjugacy classes of the coverings of these components be $\Gamma_1, \dotsc, \Gamma_k.$ In order for a constant cardinality $n$ covering of $\partial S$ to extend to a regular covering of $S$ it is necessary and sufficient that there be elements $\gamma_1\in \Gamma_1, \dotsc, \gamma_k \in \Gamma_k$ such that $\gamma_1, \dotsc, \gamma_k$ generate an order $n$ subgroup of $S_n$ and $\gamma_1\dotsm \gamma_k = e.$
\end{theorem}
\bibliographystyle{plain}
|
2,869,038,154,042 | arxiv | \section{Introduction}
We fix once for all a real number $M>0$ and a bounded connected open set $\Omega$ in $\mathbb R^2$ of class $C^{3}$.
Then, for $\varepsilon>0$ small, we consider the Neumann eigenvalue problem
\begin{equation}\label{Neumann}
\left\{\begin{array}{ll}
-\Delta u_{\varepsilon}=\lambda(\varepsilon)\rho_{\varepsilon}u_{\varepsilon} & {\rm in}\ \Omega,\\
\frac{\partial u_{\varepsilon}}{\partial\nu}=0 & {\rm on}\ \partial\Omega,
\end{array}\right.
\end{equation}
in the unknowns $\lambda(\varepsilon)$ (the eigenvalue) and $u_{\varepsilon}$ (the eigenfunction). The factor $\rho_\varepsilon$ is defined by
\begin{equation*}
\rho_{\varepsilon}(x):=\left\{\begin{array}{ll}
\varepsilon & {\rm in}\ \Omega\setminus\overline\omega_{\varepsilon},\\
\frac{M-\varepsilon |\Omega\setminus\overline\omega_{\varepsilon}|}{|\omega_{\varepsilon}|} & {\rm in}\ \omega_{\varepsilon},
\end{array}\right.
\end{equation*}
where
\begin{equation*}
\omega_{\varepsilon}:=\left\{x\in\Omega:{\rm dist}\left(x,\partial\Omega\right)<\varepsilon\right\}
\end{equation*}
is the strip of width $\varepsilon$ near the boundary $\partial\Omega$ of $\Omega$ (see Figure \ref{strip}). Here and in the sequel $\nu$ denotes the outer unit normal to $\partial\Omega$.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{membrane2.pdf}
\caption{The strip $\omega_{\varepsilon}$ of width $\varepsilon$ near the boundary $\partial\Omega$ of $\Omega$.}
\label{strip}
\end{figure}
It is well-known that the eigenvalues of \eqref{Neumann} have finite multiplicity and form an increasing sequence
$$
\lambda_0({\varepsilon})<\lambda_1(\varepsilon)\leq\lambda_2(\varepsilon)\leq\cdots\leq\lambda_j(\varepsilon)\leq\cdots\nearrow +\infty.
$$
In addition $\lambda_0(\varepsilon)=0$ and the eigenfunctions corresponding to $\lambda_0(\varepsilon)$ are the constant functions on $\Omega$. We will agree to repeat the eigenvalues according to their multiplicity.
Problem \eqref{Neumann} arises in the study of the transverse vibrations of a thin elastic membrane which occupies at rest the planar domain $\Omega$ (see e.g., \cite{cohil}). The mass of the membrane is distributed accordingly to the density $\rho_{\varepsilon}$. Thus the total mass is given by
\begin{equation*}
\int_{\Omega}\rho_{\varepsilon}dx=M
\end{equation*}
and it is constant for all $\varepsilon>0$. In particular, most of the mass is concentrated in a $\varepsilon$-neighborhood of the boundary $\partial\Omega$, while the remaining is distributed in the rest of $\Omega$ with a density proportional to $\varepsilon$. The eigenvalues $\lambda_j(\varepsilon)$ are the squares of the natural frequencies of vibration when the boundary of the membrane is left free. The corresponding eigenfunctions represent the profiles of vibration.
Then, we introduce the classical Steklov eigenvalue problem
\begin{equation}\label{Steklov}
\left\{\begin{array}{ll}
\Delta u=0, & {\rm in}\ \Omega,\\
\frac{\partial u}{\partial\nu}=\frac{M}{|\partial\Omega|}\mu u, & {\rm on}\ \partial\Omega,
\end{array}\right.
\end{equation}
in the unknowns $\mu$ (the eigenvalue) and $u$ (the eigenfunction). The spectrum of \eqref{Steklov} consists of an increasing sequence of non-negative eigenvalues of finite multiplicity, which we denote by
$$
\mu_0<\mu_1\leq\mu_{2}\leq\cdots\leq\mu_{j}\leq\cdots\nearrow+\infty.
$$
One easily verifies that $\mu_{0}=0$ and that the corresponding eigenfunctions are the constant functions on $\Omega$. In addition, one can prove that
for all $j\in\mathbb N$ we have
$$
\lambda_j(\varepsilon)\rightarrow\mu_{j}\quad\text{ as }\varepsilon\rightarrow 0
$$
(see, {e.g.}, Arrieta {\it et al.}~\cite{arrieta}, see also Buoso and Provenzano \cite{buosoprovenzano} and Theorem \ref{convergence} here below). Accordingly, one may think of the $\mu_{j}$'s as the squares of the natural frequencies of vibration of a free elastic membrane with total mass $M$ concentrated on the $1$-dimensional boundary $\partial\Omega$ with constant density $M/{|\partial\Omega|}$. A classical reference for the study of problem \eqref{Steklov} is the paper \cite{steklov} by Steklov. We refer to Girouard and Polterovich \cite{girouardpolterovich} for a recent survey paper and to the recent works of Lamberti and Provenzano \cite{lambertiprovenzano1} and of Lamberti \cite{lamberti1} for related problems. We also refer to Buoso and Provenzano \cite{buosoprovenzano} for a detailed analysis of the analogous problem for the biharmonic operator.
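For instance, when $\Omega$ is the unit disk, separation of variables in polar coordinates shows that the harmonic functions $r^j\cos(j\theta)$ and $r^j\sin(j\theta)$ satisfy $\frac{\partial u}{\partial\nu}=ju$ on $\partial\Omega$; since $|\partial\Omega|=2\pi$, the nonzero eigenvalues of \eqref{Steklov} are $\mu=2\pi j/M$ for $j\geq 1$, each of multiplicity two.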
The aim of the present paper is to study the asymptotic behavior of the eigenvalues $\lambda_j(\varepsilon)$ of problem \eqref{Neumann} and the corresponding eigenfunctions $u_{j,\varepsilon}$ as $\varepsilon$ goes to zero, i.e., when the thin strip $\omega_{\varepsilon}$ shrinks to the boundary of $\Omega$. To do so, we show the validity of an asymptotic expansion for $\lambda_j(\varepsilon)$ and $u_{j,\varepsilon}$ as $\varepsilon$ goes to zero. In addition, we provide explicit expressions for the first two coefficients in the expansions in terms of solutions of suitable auxiliary problems. In particular, we establish a closed formula for the derivative of $\lambda_j(\varepsilon)$ at $\varepsilon=0$. We observe that such a derivative may be seen as the {\em topological derivative} of $\lambda_j$ for the domain perturbation considered in this paper. We will confine ourselves to the case when $\lambda_j(\varepsilon)$ converges to a simple eigenvalue $\mu_j$ of \eqref{Steklov}. We observe that such a restriction is justified by the fact that Steklov eigenvalues are generically simple (see e.g., Albert \cite{albert} and Uhlenbeck \cite{uhl}).
As we have written here above, problems \eqref{Neumann} and \eqref{Steklov} concern the elastic behavior of a membrane with mass distributed in a very thin region near the boundary. In view of such a physical interpretation, one may wish to know whether the normal modes of vibration are decreasing or increasing when $\varepsilon>0$ approaches $0$.
To answer this question one can compute the value of the derivative of $\lambda_j(\varepsilon)$ at $\varepsilon=0$ by exploiting the closed formula that we will obtain. When $\Omega$ is a ball, we can find explicit expressions for the eigenvalues $\lambda_j(\varepsilon)$ and for the corresponding eigenfunctions (in this case every eigenvalue is double). In Appendix B we have verified that in such a special case the eigenvalues are locally decreasing when $\varepsilon$ approaches $0$ from above. Accordingly, the Steklov eigenvalues of the ball are local minimizers of the $\lambda_j(\varepsilon)$. This result is in agreement with the value of the derivative of $\lambda_j(\varepsilon)$ at $\varepsilon=0$ that one may compute from our closed formula obtained for a general domain $\Omega$ of class $C^3$.
We observe here that asymptotics for vibrating systems (membranes or bodies) containing masses along curves or masses concentrated at certain points have been considered by several authors in the last decades (see, {\it e.g.}, Golovaty {\it et al.}~\cite{gol1}, Lobo and P\'erez \cite{lope1} and Tchatat \cite{tchatatbook}). We also refer to Lobo and P{\'e}rez \cite{loboperez1,loboperez2} where the authors consider the vibration of membranes and bodies carrying concentrated masses near the boundary, and to Golovaty {\it et al.}~\cite{gomezpereznazarov2,gomezpereznazarov1}, where the authors consider spectral stiff problems in domains surrounded by thin bands. Let us recall that these problems have been addressed also for vibrating plates (see Golovaty {\it et al.}~\cite{golonape_plates1,golonape_plates2} and the references therein). We also mention the alternative approach based on potential theory and functional analysis proposed in Musolino and Dalla Riva \cite{musolinodallariva} and Lanza de Cristoforis \cite{lanza}.
The paper is organized as follows. In Section \ref{sec:2} we introduce the notation and certain preliminary tools that are used throughout the paper. In Section \ref{sec:3} we state our main Theorems \ref{asymptotic_eigenvalues} and \ref{asymptotic_eigenfunctions}, which concern the asymptotic expansions of the eigenvalues and of the eigenfunctions of \eqref{Neumann}, respectively. In Theorem \ref{asymptotic_eigenvalues} we also provide the explicit formula for the topological derivative of the eigenvalues of \eqref{Neumann}. The proof of Theorems \ref{asymptotic_eigenvalues} and \ref{asymptotic_eigenfunctions} is presented in Sections \ref{sec:4} and \ref{sec:5}. In Section \ref{sec:4} we justify the asymptotic expansions of Theorems \ref{asymptotic_eigenvalues} and \ref{asymptotic_eigenfunctions} up to the zero order terms. Then in Section \ref{sec:5} we justify the asymptotic expansions up to the first order terms {and, as a byproduct, we prove the validity of the formula for the topological derivative.} At the end of the paper we have included two Appendices. In the Appendix A we consider an auxiliary problem and prove its well-posedness. In the last Appendix B we consider the case when $\Omega$ is the unit ball and prove that the Steklov eigenvalues are local minimizers of the Neumann eigenvalues for $\varepsilon$ small enough.
\section{Preliminaries}\label{sec:2}
\subsection{A convenient change of variables}
Since $\Omega$ is of class $C^{3}$, it is well-known that there exists ${\varepsilon'_\Omega}>0$ such that the map $x\mapsto x-\varepsilon\nu(x)$ is a diffeomorphism {of class $C^2$} from $\partial\Omega$ to $\partial\omega_{\varepsilon}\cap\Omega$ for all $\varepsilon\in(0,{\varepsilon'_\Omega})$. We will exploit this fact to introduce curvilinear coordinates in the strip $\omega_{\varepsilon}$. To do so, we denote by $\gamma:[0,|\partial\Omega|)\rightarrow\partial\Omega$ the arc length parametrization of the boundary $\partial\Omega$. Then, one verifies that the map $\psi:[0,|\partial\Omega|)\times(0,\varepsilon)\rightarrow \omega_{\varepsilon}$ defined by $\psi(s,t):=\gamma(s)-t\nu(\gamma(s))$, for all $(s,t)\in [0,|\partial\Omega|)\times(0,\varepsilon)$, is a diffeomorphism and we can use the curvilinear coordinates $(s,t)$ in the strip $\omega_\varepsilon$.
We denote by $\kappa(s)$ the signed curvature of $\partial\Omega$, namely we set $\kappa(s)=\gamma_1'(s)\gamma_2''(s)-\gamma_2'(s)\gamma_1''(s)$ for all $s\in [0,|\partial\Omega|)$.
In order to study problem \eqref{Neumann} it is also convenient to introduce a change of variables by setting $\xi=t/{\varepsilon}$. Accordingly, we denote by $\psi_{\varepsilon}$ the function from $[0,|\partial\Omega|)\times(0,1)$ to $\omega_{\varepsilon}$ defined by $\psi_{\varepsilon}(s,\xi):=\gamma(s)-\varepsilon\xi\nu(\gamma(s))$ for all $(s,\xi)\in[0,|\partial\Omega|)\times(0,1)$. The variable $\xi$ is usually called `rapid variable'. We observe that in this new system of coordinates $(s,\xi)$, the strip $\omega_{\varepsilon}$ is transformed into a band of length $|\partial\Omega|$ and width $1$ (see Figures \ref{fig43} and \ref{fig44}). Moreover, we note that if $\varepsilon<({\sup_{s\in[0,|\partial\Omega|)}|\kappa(s)|})^{-1}$, then we have
\begin{equation}\label{positiveinf}
\inf_{(s,\xi)\in[0,|\partial\Omega|)\times(0,1)}\bigl(1-\varepsilon\xi\kappa(s)\bigr)>0\,,
\end{equation}
so that $|\det D\psi_{\varepsilon}|=\varepsilon(1-\varepsilon\xi\kappa(s))$ for all $(s,\xi)\in[0,|\partial\Omega|)\times(0,1)$.
We will also need to write the gradient of a function $u$ on $\omega_\varepsilon$ with respect to the coordinates $(s,\xi)$. To do so we take
\[
\varepsilon''_\Omega:=\min\left\{\varepsilon'_\Omega\,,\, \biggl({\sup_{s\in[0,|\partial\Omega|)}|\kappa(s)|}\biggr)^{-1}\right\}
\]
and we consider $\varepsilon\in(0,\varepsilon''_\Omega)$. Then we have
\begin{equation*}
\left(\nabla u\circ\psi_{\varepsilon}\right)(s,\xi)=\left(\begin{array}{ll}{\frac{\gamma_1'(s)}{1-\varepsilon\xi\kappa(s)}\partial_s(u\circ\psi_{\varepsilon}(s,\xi))-\frac{\gamma_2'(s)+\varepsilon\xi\gamma_1''(s)}{\varepsilon(1-\varepsilon\xi\kappa(s))}\partial_{\xi}(u\circ\psi_{\varepsilon}(s,\xi))}\\{\frac{\gamma_2'(s)}{1-\varepsilon\xi\kappa(s)}\partial_s(u\circ\psi_{\varepsilon}(s,\xi))+\frac{\gamma_1'(s)-\varepsilon\xi\gamma_2''(s)}{\varepsilon(1-\varepsilon\xi\kappa(s))}\partial_{\xi}(u\circ\psi_{\varepsilon}(s,\xi))}\end{array}\right)
\end{equation*}
and therefore
\begin{equation}\label{grad2}
\begin{split}
&\left(\nabla u\circ\psi_{\varepsilon}\cdot\nabla v\circ\psi_{\varepsilon}\right)(s,\xi)\\
&\qquad=\frac{1}{\varepsilon^2}\partial_{\xi}(u\circ\psi_{\varepsilon}(s,\xi))\partial_{\xi}(v\circ\psi_{\varepsilon}(s,\xi))+\frac{\partial_s (u\circ\psi_{\varepsilon}(s,\xi))\partial_s (v\circ\psi_{\varepsilon}(s,\xi))}{(1-\varepsilon\xi\kappa(s))^2}
\end{split}
\end{equation}
for all $(s,\xi)\in[0,|\partial\Omega|)\times(0,1)$.
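As a quick sanity check of \eqref{grad2}, one can consider the case where $\Omega$ is the unit disk, so that $\gamma(s)=(\cos s,\sin s)$ and $\kappa\equiv 1$, and take $u$ radial, $u(x)=f(|x|)$. Then $u\circ\psi_{\varepsilon}(s,\xi)=f(1-\varepsilon\xi)$ does not depend on $s$, and \eqref{grad2} gives $|\nabla u|^2\circ\psi_{\varepsilon}=\varepsilon^{-2}\bigl(\partial_{\xi}f(1-\varepsilon\xi)\bigr)^{2}=f'(1-\varepsilon\xi)^{2}$, as expected.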
\begin{figure}[!ht]
\centering
\includegraphics[width=0.6\textwidth]{strip1.pdf}
\caption{The strip $\omega_{\varepsilon}$ in the curvilinear coordinates $(s,t)$.}\label{fig43}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.6\textwidth]{strip2.pdf}
\caption{In the rescaled coordinates $(s,\xi)$, with $\xi=t/\varepsilon$, the strip $\omega_{\varepsilon}$ becomes a band of length $|\partial\Omega|$ and width $1$.}\label{fig44}
\end{figure}
\subsection{Some remarks about $\rho_\varepsilon$}
We can write $\rho_{\varepsilon}=\varepsilon+\frac{1}{\varepsilon}\tilde\rho_{\varepsilon}\chi_{\omega_{\varepsilon}}$, where $\chi_{\omega_{\varepsilon}}$ is the characteristic function of $\omega_{\varepsilon}$ and
\begin{equation}\label{def_tilde_rho}
\tilde\rho_{\varepsilon}:=\varepsilon\left(\frac{M-\varepsilon |\Omega\setminus\overline\omega_{\varepsilon}|}{|\omega_{\varepsilon}|}\right)-\varepsilon^2.
\end{equation}
Then we observe that for $\varepsilon\in(0,\varepsilon''_\Omega)$ we have
\begin{equation}\label{oe}
|\omega_{\varepsilon}|=\varepsilon |\partial\Omega|-\frac{\varepsilon^2}{2}K\,,
\end{equation}
where $K$ is defined by
\begin{equation}\label{K}
K:=\int_0^{|\partial\Omega|}\kappa(s)ds.
\end{equation}
By \eqref{positiveinf} it follows that $|\partial\Omega|-\frac{\varepsilon}{2}K>0$. Then by \eqref{def_tilde_rho} and \eqref{oe} one verifies that there exists a real analytic map $\tilde{R}$ from $(-\varepsilon''_\Omega,\varepsilon''_\Omega)$ to $\mathbb{R}$ such that
\begin{equation}\label{asymptotic_rho}
\tilde\rho_{\varepsilon}=\frac{M}{|\partial\Omega|}+\frac{\frac{1}{2}K M-|\Omega||\partial\Omega|}{|\partial\Omega|^2}\varepsilon+\varepsilon^2\tilde{R}(\varepsilon)\qquad\forall\varepsilon\in(0,\varepsilon''_\Omega)\,.
\end{equation}
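We observe that formula \eqref{oe} can be verified directly when $\Omega$ is the unit disk: in that case $|\omega_{\varepsilon}|=\pi-\pi(1-\varepsilon)^{2}=2\pi\varepsilon-\pi\varepsilon^{2}$, which coincides with $\varepsilon|\partial\Omega|-\frac{\varepsilon^{2}}{2}K$, since $|\partial\Omega|=2\pi$ and $K=2\pi$.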
We can now fix once and for all a real number
\begin{equation}\label{eO}
\text{$\varepsilon_\Omega\in (0,\varepsilon''_\Omega)$ such that $\inf_{\varepsilon\in(0,\varepsilon_\Omega)}\tilde\rho_{\varepsilon}>0.$}
\end{equation}
\subsection{Weak formulation of problem (\ref{Neumann}) and the resolvent operator $\mathcal A_\varepsilon$}
For all $\varepsilon\in(0,\varepsilon_\Omega)$, we denote by $\mathcal H_{\varepsilon}(\Omega)$ the Hilbert space consisting of the functions in the standard Sobolev space $H^1(\Omega)$ endowed with the bilinear form
\begin{equation}\label{bilinear_eps}
\left\langle u,v\right\rangle_{\varepsilon}:=\int_{\Omega}\nabla u\cdot\nabla v dx+\int_{\Omega}\rho_{\varepsilon}u v dx\ \ \forall u,v\in \mathcal H_{\varepsilon}(\Omega).
\end{equation}
The bilinear form \eqref{bilinear_eps} induces on $H^1(\Omega)$ a norm which is equivalent to the standard one. We denote such a norm by $\|\cdot\|_{\varepsilon}$.
We note that the weak formulation of problem \eqref{Neumann} can be stated as follows: a pair $(\lambda(\varepsilon), u_{\varepsilon})\in \mathbb R\times H^1(\Omega)$ is a solution of \eqref{Neumann} in the weak sense if and only if
\begin{equation*}
\int_{\Omega}\nabla u_{\varepsilon}\cdot\nabla\varphi dx=\lambda(\varepsilon)\int_{\Omega}\rho_{\varepsilon} u_{\varepsilon}\varphi dx\ \ \forall \varphi\in H^1(\Omega).
\end{equation*}
Then, for all $\varepsilon\in(0,\varepsilon_\Omega)$ we introduce the linear operator $\mathcal A_{\varepsilon}$ from $\mathcal H_{\varepsilon}(\Omega)$ to itself which maps a function $f\in\mathcal H_{\varepsilon}(\Omega)$ to the function $u\in\mathcal H_{\varepsilon}(\Omega)$ such that
\begin{equation}\label{A_eps}
\int_{\Omega}\nabla u\cdot\nabla\varphi dx+\int_{\Omega}\rho_{\varepsilon} u\varphi dx=\int_{\Omega}\rho_{\varepsilon}f\varphi dx\ \ \forall\varphi\in\mathcal H_{\varepsilon}(\Omega).
\end{equation}
{We note that such a function $u\in\mathcal H_{\varepsilon}(\Omega)$ exists by the Riesz representation theorem and it is unique because $\int_{\Omega}\nabla u\cdot\nabla\varphi dx+\int_{\Omega}\rho_{\varepsilon} u\varphi dx=0$ for all $\varphi\in\mathcal H_{\varepsilon}(\Omega)$ implies that $\norm{u}_\varepsilon=0$. }
\medskip
In the sequel we {will heavily} exploit the following lemma. We refer to Ole{\u\i}nik {\it et al.}~\cite[III.1]{oleinik} for its proof.
\begin{lemma}\label{lemma_fondamentale}
Let {$A$ be a compact, self-adjoint and positive linear operator} from a separable Hilbert space $H$ to itself. Let $u\in H$, with $\|u\|_H=1$. Let $\eta,r>0$ be such that $\|A u-\eta u\|_H\leq r$. Then, there exists an eigenvalue $\eta^*$ of the operator $A$ which {satisfies} the inequality $|\eta-\eta^*|\leq r$. Moreover, for any $r^*>r$ there exists $u^*\in H$ with $\|u^*\|_H=1$, $u^*$ belonging to the space generated by all the eigenfunctions associated with an eigenvalue of the operator $A$ lying on the segment $[\eta-r^*,\eta+r^*]$, and such that
$$
\|u-u^*\|_H\leq\frac{2 r}{r^*}.
$$
\end{lemma}
We observe that the operator $\mathcal{A}_\varepsilon$ is a good candidate for the application of Lemma \ref{lemma_fondamentale}. Indeed, we have the following Proposition \ref{Ae}.
\begin{proposition}\label{Ae}
For all $\varepsilon\in(0,\varepsilon_\Omega)$ the map $\mathcal{A}_\varepsilon$ is a compact, self-adjoint and positive linear operator from $\mathcal H_{\varepsilon}(\Omega)$ to itself.
\end{proposition}
\proof The proof that $\mathcal{A}_\varepsilon$ is self-adjoint and positive can be effected by noting that $\langle\mathcal{A}_\varepsilon f,g\rangle_\varepsilon=\int_{\Omega}\rho_{\varepsilon}fg\, dx$ for all $f,g\in\mathcal H_{\varepsilon}(\Omega)$. To prove that $\mathcal{A}_\varepsilon$ is compact we denote by $\tilde{\mathcal{A}}_\varepsilon$ the linear operator from $L^2(\Omega)$ to $\mathcal H_{\varepsilon}(\Omega)$ which takes a function $f\in L^2(\Omega)$ to the unique element $u\in \mathcal H_{\varepsilon}(\Omega)$ which satisfies the condition in \eqref{A_eps}. By the Riesz representation theorem one verifies that $\tilde{\mathcal{A}}_\varepsilon$ is well defined. In addition, we can prove that $\tilde{\mathcal{A}}_\varepsilon$ is bounded. Indeed, we have
\[
\|\tilde{\mathcal{A}}_\varepsilon f\|_\varepsilon^2=\langle\tilde{\mathcal{A}}_\varepsilon f,\tilde{\mathcal{A}}_\varepsilon f\rangle_\varepsilon=\int_{\Omega}\rho_{\varepsilon}f\tilde{\mathcal{A}}_\varepsilon f\, dx
\]
and by a computation based on the H\"older inequality one verifies that
\[
\int_{\Omega}\rho_{\varepsilon}f\tilde{\mathcal{A}}_\varepsilon f\, dx=\int_{\Omega}\rho^{\frac{1}{2}}_{\varepsilon}f\rho^{\frac{1}{2}}_{\varepsilon}\tilde{\mathcal{A}}_\varepsilon f\, dx \le \left(\int_{\Omega}\rho_{\varepsilon}f^2dx\right)^{\frac{1}{2}}\left(\int_{\Omega}\rho_{\varepsilon}(\tilde{\mathcal{A}}_\varepsilon f)^2dx\right)^{\frac{1}{2}}\,,
\]
which implies that $\|{\tilde{\mathcal{A}}_\varepsilon f}\|_\varepsilon\le (\sup_{x\in\Omega}{\rho_\varepsilon(x)}^{\frac{1}{2}})\norm{f}_{L^2(\Omega)}$ for all $f\in L^2(\Omega)$. Then we denote by $\mathcal{E}_\varepsilon$ the embedding map from $\mathcal{H}_\varepsilon(\Omega)$ to $L^2(\Omega)$. Via the natural isomorphism from $\mathcal{H}_\varepsilon(\Omega)$ to $H^1(\Omega)$, one deduces that $\mathcal{E}_\varepsilon$ is compact. Since $\mathcal{A}_\varepsilon=\tilde{\mathcal{A}}_\varepsilon\circ\mathcal{E}_\varepsilon$, we conclude that $\mathcal{A}_\varepsilon$ is compact as well.
\qed
\medskip
We conclude this subsection by observing that the $L^2(\Omega)$ norm of a function in $H^1(\Omega)$ is uniformly bounded by its $\|\cdot\|_\varepsilon$ norm for all $\varepsilon\in(0,\varepsilon_\Omega)$. We will prove such a result in Proposition \ref{L2<eps} below by exploiting the following Lemma \ref{poinc_peso}.
\begin{lemma}\label{poinc_peso}
There exists $C_{\Omega}>0$ such that
\[
\left\|u-\frac{1}{M}\int_{\Omega}\rho_{\varepsilon}u dx\right\|_{L^2(\Omega)}\leq C_{\Omega}\|\nabla u\|_{L^2(\Omega)},
\]
for all $u\in H^1(\Omega)$ and for all $\varepsilon\in(0,\varepsilon_\Omega)$.
\proof
We argue by contradiction and we assume that there exist a sequence $\{\tau_k\}_{k\in\mathbb{N}}\subset(0,\varepsilon_\Omega)$ and a sequence $\{w_k\}_{k\in\mathbb{N}}\subset H^1(\Omega)$ such that
\begin{equation}\label{assurdo}
\left\|w_k-\frac{1}{M}\int_{\Omega}\rho_{\tau_k}w_k dx\right\|_{L^2(\Omega)}> k\|\nabla w_k\|_{L^2(\Omega)}
\end{equation}
for all $k\in\mathbb{N}$. Since $\{\tau_k\}_{k\in\mathbb{N}}$ is bounded there exist ${\tau}\in [0,\varepsilon_\Omega]$ and a subsequence of $\{\tau_k\}_{k\in\mathbb{N}}$, which we still denote by $\{\tau_k\}_{k\in\mathbb{N}}$, such that $\tau_k\to{\tau}$ as $k\to\infty$. Then we set
$$
v_k:=\left\|w_k-\frac{1}{M}\int_{\Omega}\rho_{\tau_k}w_k dx\right\|_{L^2(\Omega)}^{-1}\left(w_k-\frac{1}{M}\int_{\Omega}\rho_{\tau_k}w_k dx\right)
$$
for all $k\in\mathbb{N}$.
We verify that $\int_{\Omega}\rho_{\tau_k}v_k dx=0$ and $\|v_k\|_{L^2(\Omega)}=1$ for all $k\in\mathbb N$, and from \eqref{assurdo}, $\|\nabla v_k\|_{L^2(\Omega)}<\frac{1}{k}$. Then $\{v_k\}_{k\in\mathbb N}$ is bounded in $H^1(\Omega)$ and we can extract a subsequence, which we still denote by $\lbrace v_k\rbrace_{k\in\mathbb N}$, such that $v_k\rightharpoonup v$ weakly in $H^1(\Omega)$ and $v_k\rightarrow v$ strongly in $L^2(\Omega)$, for some $v\in H^1(\Omega)$. Moreover, since $\|\nabla v_k\|_{L^2(\Omega)}<\frac{1}{k}$ one can verify that $\nabla v=0$ a.e.~in $\Omega$, and thus $v$ is constant on $\Omega$. In addition,
\begin{equation}\label{poic_peso.eq1}
\|v\|_{L^2(\Omega)}=\lim_{k\to\infty}\|v_k\|_{L^2(\Omega)}=1.
\end{equation}
We now prove that \eqref{poic_peso.eq1} leads to a contradiction. Indeed, we can prove that $v=0$. We consider separately the case when ${\tau}>0$ and the case when ${\tau}=0$. For ${\tau}>0$ we verify that $\lim_{k\rightarrow\infty}\int_{\Omega}\rho_{\tau_k}v_k dx=\int_{\Omega}\rho_{{\tau}}v dx$. Then $\int_{\Omega}\rho_{{\tau}}v dx=0$, because
$\int_{\Omega}\rho_{\tau_k}v_k dx=0$. Since $\rho_{{\tau}}>0$ and $v$ is constant, it follows that $v=0$. If instead ${\tau}=0$, then, by an argument based on \cite[Lemmas 3.1.22, 3.1.28]{phd} we have $\lim_{k\rightarrow\infty}\int_{\Omega}\rho_{\tau_k}v_k dx=\frac{M}{|\partial\Omega|}\int_{\partial\Omega}v d\sigma$.
Since $\int_{\Omega}\rho_{\tau_k}v_k dx=0$, it follows that $\int_{\partial\Omega}v d\sigma=0$. Since $v$ is constant on $\Omega$, we deduce that $v=0$.
\endproof
\end{lemma}
We are now ready to prove Proposition \ref{L2<eps}.
\begin{proposition}\label{L2<eps}
If $\varepsilon\in(0,\varepsilon_\Omega)$ and $v\in H^1(\Omega)$, then
$$
\|v\|_{L^2(\Omega)}\le \max\left\{C_{\Omega},\sqrt\frac{|\Omega|}{M}\right\}\|v\|_{\varepsilon},$$
where $C_\Omega$ is the constant which appears in Lemma \ref{poinc_peso}.
\end{proposition}
\begin{proof}
First we observe that
\begin{equation}\label{J1e_eq1}
\int_{\Omega}\rho_{\varepsilon} v dx=\int_{\Omega}\rho_{\varepsilon}^{\frac{1}{2}}\rho_{\varepsilon}^{\frac{1}{2}}v dx\leq \left(\int_{\Omega}\rho_{\varepsilon}dx\right)^{\frac{1}{2}}\left(\int_{\Omega}\rho_{\varepsilon}v^2 dx\right)^{\frac{1}{2}}=M^{\frac{1}{2}}\left(\int_{\Omega}\rho_{\varepsilon}v^2 dx\right)^{\frac{1}{2}}.
\end{equation}
Then, by Lemma \ref{poinc_peso} and by \eqref{J1e_eq1} we deduce that
\begin{equation*}
\begin{split}
\|v\|_{L^2(\Omega)}&=\norm{v-\frac{1}{M}\int_{\Omega}\rho_{\varepsilon} v dx+\frac{1}{M}\int_{\Omega}\rho_{\varepsilon} v dx}_{L^2(\Omega)}\\
& \leq\norm{v-\frac{1}{M}\int_{\Omega}\rho_{\varepsilon} v dx}_{L^2(\Omega)}+\norm{\frac{1}{M}\int_{\Omega}\rho_{\varepsilon} v dx}_{L^2(\Omega)}\\
& \leq C_{\Omega}\|\nabla v\|_{L^2(\Omega)}+\sqrt\frac{|\Omega|}{M}\left(\int_{\Omega}\rho_{\varepsilon}v^2 dx\right)^{\frac{1}{2}}\,.
\end{split}
\end{equation*}
Now the validity of the proposition follows by a straightforward computation.
\end{proof}
\subsection{Known results on the limit behavior of $\lambda_j(\varepsilon)$}
In the following Theorem \ref{convergence} we recall some results on the {limit} behavior of the eigenelements of problem \eqref{Neumann}.
\begin{theorem}\label{convergence}
The following statements hold.
\begin{enumerate}
\item[(i)]
For all $j\in\mathbb N$ it holds
$$
\lim_{\varepsilon\rightarrow 0}\lambda_j(\varepsilon)=\mu_{j}.
$$
\item[(ii)] Let $\mu_j$ be a simple eigenvalue of problem \eqref{Steklov} and let $\lambda_j(\varepsilon)$ be such that $\lim_{\varepsilon\rightarrow 0}\lambda_j(\varepsilon)=\mu_{j}$. Then there exists $\varepsilon_j>0$ such that $\lambda_j(\varepsilon)$ is simple for all $\varepsilon\in(0,\varepsilon_j)$.
\end{enumerate}
\end{theorem}
The proof of Theorem \ref{convergence} can be carried out by using the notion of compact convergence for the resolvent operators, and can also be obtained as a consequence of the more general results proved in Arrieta {\it et al.}~\cite{arrieta} (see also Buoso and Provenzano \cite{buosoprovenzano}).
From Theorem \ref{convergence}, it follows that the function $\lambda_j(\cdot)$ which takes $\varepsilon>0$ to $\lambda_j(\varepsilon)$ can be extended with continuity at $\varepsilon =0$ by setting $\lambda_j(0):=\mu_{j}$ for all $j\in {\mathbb{N}}$.
\section{Description of the main results}\label{sec:3}
In this section we state our main Theorems \ref{asymptotic_eigenvalues} and \ref{asymptotic_eigenfunctions} which will be proved in Sections \ref{sec:4} and \ref{sec:5} below. We will use the following notation: if $j\in\mathbb N$ and $\mu_j$ is a simple eigenvalue of problem \eqref{Steklov}, then we take
\[
\varepsilon_{\Omega,j}:=\min\{\varepsilon_j\,,\,\varepsilon_\Omega\}
\]
with $\varepsilon_j$ as in Theorem \ref{convergence} and $\varepsilon_\Omega$ as in \eqref{eO}, so that $\lambda_j(\varepsilon)$ is a simple eigenvalue of \eqref{Neumann} for all $\varepsilon\in(0,\varepsilon_{\Omega,j})$. If $f$ is an invertible function, then $f^{(-1)}$ denotes the inverse of $f$, as opposed to $r^{-1}$ and $f^{-1}$, which denote the reciprocal of a non-zero real number $r$ or of a non-vanishing function $f$.
In the following Theorem \ref{asymptotic_eigenvalues} we provide an asymptotic expansion of the eigenvalue $\lambda_j(\varepsilon)$ up to a remainder of order $\varepsilon^2$.
\begin{theorem}\label{asymptotic_eigenvalues}
Let $j\in\mathbb N$. Assume that $\mu_j$ is a simple eigenvalue of problem \eqref{Steklov}. Then
\begin{equation}\label{expansion_eigenvalues}
\lambda_j(\varepsilon)=\mu_j+\varepsilon\mu_j^1+O(\varepsilon^2)\quad\text{as }\varepsilon\rightarrow 0
\end{equation}
where
\begin{equation}\label{top_der_formula}
\mu_j^1=\frac{|\Omega|\mu_j}{M}-\frac{|\partial\Omega|\mu_j}{M}\int_{\Omega}u_j^2dx+\frac{2M\mu_j^2}{3|\partial\Omega|}+\frac{\mu_j}{2}\int_{\partial\Omega}u_j^2\kappa{\circ\gamma^{(-1)}} d\sigma-\frac{K\mu_j}{2|\partial\Omega|}.
\end{equation}
The constant $K$ is given by \eqref{K} and $u_j\in H^1(\Omega)$ is the unique eigenfunction of problem \eqref{Steklov} associated with the eigenvalue $\mu_j$ {which satisfies the additional condition}
\begin{equation}\label{ucondition}
\int_{\partial\Omega}u_j^2 d\sigma=1.
\end{equation}
\end{theorem}
In Theorem \ref{asymptotic_eigenfunctions} below we show an asymptotic expansion for the eigenfunction $u_{j,\varepsilon}$ associated with $\lambda_j(\varepsilon)$.
\begin{theorem}\label{asymptotic_eigenfunctions}
Let $j\in\mathbb N$ and assume that $\mu_j$ is a simple eigenvalue of problem \eqref{Steklov}. {Let $0<\varepsilon_{\Omega,j}<\varepsilon_\Omega$ be} such that $\lambda_j(\varepsilon)$ is a simple eigenvalue of problem \eqref{Neumann} for all $\varepsilon\in(0,\varepsilon_{\Omega,j})$. Let $u_j$ be the unique eigenfunction of problem \eqref{Steklov} associated with $\mu_j$ {which satisfies the additional condition \eqref{ucondition}.}
For all $\varepsilon\in (0,\varepsilon_{\Omega,j})$, let $u_{j,\varepsilon}$ be the unique eigenfunction of problem \eqref{Neumann} corresponding to $\lambda_j(\varepsilon)$ {which satisfies the additional condition
\begin{equation}\label{uecondition}
\frac{|\partial\Omega|}{M}\int_{\Omega}\rho_{\varepsilon}u_{j,\varepsilon}^2dx=1\,.
\end{equation}}
Then there exist $u_j^1\in H^1(\Omega)$ and $w_j\in H^1([0,|\partial\Omega|)\times(0,1))$ such that
\begin{equation}\label{expansion_eigenfunctions}
u_{j,\varepsilon}=u_j+\varepsilon u_j^1+\varepsilon v_{j,\varepsilon}+O(\varepsilon^2)\quad{\text{in }\ L^2(\Omega)\text{ as }\varepsilon\rightarrow 0},
\end{equation}
where the function $v_{j,\varepsilon}\in H^1(\Omega)$ is the extension by $0$ of $w_j\circ\psi_{\varepsilon}^{(-1)}$ to $\Omega$.
\end{theorem}
We shall present explicit formulas for $w_j$ {in terms of $\mu_j$ and $u_j$} (see formula \eqref{w0}) and we shall identify $u_j^1$ as the solution to a certain boundary value problem (see problem \eqref{u1_problem_simple_true}). {We also note that $\norm{v_{j,\varepsilon}}_{L^2(\Omega)}\in O(\sqrt{\varepsilon})$, so that the third term in \eqref{expansion_eigenfunctions} is in $O(\varepsilon^{\frac{3}{2}})$ in $L^2(\Omega)$ (cf. Proposition \ref{vjeL2}).}
The proof of Theorems \ref{asymptotic_eigenvalues} and \ref{asymptotic_eigenfunctions} consists of two steps. In the first step (Section \ref{sec:4}) we show that the quantity $\lambda_j(\varepsilon)-\mu_j$ is of order $\varepsilon$ as $\varepsilon$ tends to zero. Moreover, we introduce the function $w_j$ and we show that {$\|u_{j,\varepsilon}-u_j\|_{L^2(\Omega)}$} is of order $\varepsilon$ as $\varepsilon$ tends to zero. In the second step (Section \ref{sec:5}) we complete the proof of Theorems \ref{asymptotic_eigenvalues} and \ref{asymptotic_eigenfunctions} {by proving the validity of \eqref{expansion_eigenvalues} and \eqref{expansion_eigenfunctions}} and we introduce the boundary value problem which identifies $u_j^1$.
\section{First step}\label{sec:4}
We begin here the proof of Theorems \ref{asymptotic_eigenvalues} and \ref{asymptotic_eigenfunctions}. Accordingly, we fix $j\in\mathbb N$ and we take $\mu_j$, $u_j$, $\varepsilon_{\Omega,j}$, $\lambda_j(\varepsilon)$, and $u_{j,\varepsilon}$ as in the statements of Theorems \ref{asymptotic_eigenvalues} and \ref{asymptotic_eigenfunctions}. The aim of this section is to prove the following intermediate result.
\begin{proposition}\label{intermediate}
We have
\begin{equation}\label{intermediate.eq1}
\text{$\lambda_j(\varepsilon)=\mu_j+O(\varepsilon)$ as $\varepsilon\to 0$}
\end{equation}
and
\begin{equation}\label{intermediate.eq2}
\text{$u_{j,\varepsilon}=u_j+O(\varepsilon)$ in $L^2(\Omega)$ as $\varepsilon\to 0$.}
\end{equation}
\end{proposition}
In other words, we wish to justify the expansions \eqref{expansion_eigenvalues} and \eqref{expansion_eigenfunctions} up to a remainder of order $\varepsilon$. (We observe here that Theorem \ref{convergence} states the convergence of $\lambda_j(\varepsilon)$ to $\mu_j$, but it does not provide any information on the rate of convergence.)
We introduce the following notation. We denote by $w_{j}$ the function from $[0,|\partial\Omega|)\times[0,1]$ to $\mathbb R$ defined by
\begin{equation}\label{w0}
w_{j}(s,\xi):=-\frac{M\mu_j}{2|\partial\Omega|}(u_j\circ\gamma(s))\left(\xi-1\right)^2\quad\forall (s,\xi)\in[0,|\partial\Omega|)\times[0,1]\,.
\end{equation}
By a straightforward computation one verifies that $w_{j}$ solves the following problem
\begin{equation}\label{w0probl}
\left\{\begin{array}{ll}
-\partial^2_{\xi}w_{j}(s,\xi)=\frac{M\mu_j}{|\partial\Omega|}(u_j\circ\gamma(s)), & (s,\xi)\in [0,|\partial\Omega|)\times(0,1),\\
\partial_{\xi}w_{j}(s,0)=\frac{M\mu_j}{|\partial\Omega|}(u_j\circ\gamma(s)), & s\in[0,|\partial\Omega|),\\
w_j(s,1)=\partial_{\xi}w_{j}(s,1)=0, & s\in [0,|\partial\Omega|).
\end{array}
\right.
\end{equation}
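Indeed, differentiating \eqref{w0} with respect to $\xi$ gives
\[
\partial_{\xi}w_{j}(s,\xi)=-\frac{M\mu_j}{|\partial\Omega|}(u_j\circ\gamma(s))(\xi-1)\,,\qquad -\partial^2_{\xi}w_{j}(s,\xi)=\frac{M\mu_j}{|\partial\Omega|}(u_j\circ\gamma(s))\,,
\]
so that the first equation in \eqref{w0probl} holds; evaluating the first identity at $\xi=0$ gives the second equation, while the conditions at $\xi=1$ follow because $(\xi-1)^2$ and its $\xi$-derivative both vanish there.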
Then for all $\varepsilon\in(0,\varepsilon_{\Omega,j})$ we denote by $v_{j,\varepsilon}\in H^1(\Omega)$ the extension by $0$ of $w_{j}\circ\psi_{\varepsilon}^{(-1)}$ to $\Omega$. We note that by construction $v_{j,\varepsilon}\in H^1(\Omega)$. We also observe that the $L^2(\Omega)$ norm of $v_{j,\varepsilon}$ is in $O(\sqrt\varepsilon)$ as $\varepsilon\to 0$. Indeed, we have the following proposition.
\begin{proposition}\label{vjeL2} There is a constant $C>0$ such that $\|v_{j,\varepsilon}\|_{L^2(\Omega)}\le C\sqrt\varepsilon$ for all $\varepsilon\in(0,\varepsilon_{\Omega,j})$.
\end{proposition}
\begin{proof}
Since $v_{j,\varepsilon}$ is the extension by $0$ of $w_{j}\circ\psi_{\varepsilon}^{(-1)}$ to $\Omega$, by the rule of change of variables in integrals we have
\[
\begin{split}
\int_{\Omega}v_{j,\varepsilon}^2 dx=\int_{\omega_{\varepsilon}}v_{j,\varepsilon}^2 dx&=\varepsilon\int_0^{|\partial\Omega|}\int_0^1w^2_j(s,\xi)(1-\varepsilon\xi\kappa(s))\,d\xi ds\\
&\le \left(\norm{w_j}^2_{L^2([0,|\partial\Omega|)\times(0,1))}\sup_{(s,\xi)\in[0,|\partial\Omega|)\times(0,1)}|1-\varepsilon\xi\kappa(s)|\right)\varepsilon\,,
\end{split}
\]
and the statement follows by taking square roots.
\end{proof}
We also observe that $\sqrt\varepsilon\norm{v_{j,\varepsilon}}_\varepsilon$ is uniformly bounded for $\varepsilon\in(0,\varepsilon_{\Omega,j})$. Namely, we have the following proposition.
\begin{proposition}\label{vjee} There is a constant $C>0$ such that $\sqrt\varepsilon\norm{v_{j,\varepsilon}}_\varepsilon\le C$ for all $\varepsilon\in(0,\varepsilon_{\Omega,j})$.
\end{proposition}
\begin{proof}
We have
\begin{equation}\label{vjee.eq1}
\norm{v_{j,\varepsilon}}^2_\varepsilon=\int_{\omega_\varepsilon}\rho_\varepsilon\,v_{j,\varepsilon}^2dx+\int_{\omega_\varepsilon}|\nabla v_{j,\varepsilon}|^2dx\,.
\end{equation}
Since $\rho_\varepsilon = \varepsilon+\frac{1}{\varepsilon}\tilde\rho_\varepsilon$ on $\omega_\varepsilon$ we have
\[
\int_{\omega_\varepsilon}\rho_\varepsilon\,v_{j,\varepsilon}^2dx= \left(\varepsilon+\frac{1}{\varepsilon}\tilde\rho(\varepsilon)\right)\norm{v_{j,\varepsilon}}^2_{L^2(\Omega)}\,.
\]
Thus, by Proposition \ref{vjeL2} and by \eqref{asymptotic_rho} we deduce that
\begin{equation}\label{vjee.eq2}
\int_{\omega_\varepsilon}\rho_\varepsilon\,v_{j,\varepsilon}^2dx\le C\qquad\forall\varepsilon\in(0,\varepsilon_{\Omega,j})
\end{equation}
for some $C>0$. By \eqref{grad2} and by the rule of change of variables in integrals we have
\[
\begin{split}
&\int_{\omega_\varepsilon}|\nabla v_{j,\varepsilon}|^2dx\\
&\qquad=\int_0^{|\partial\Omega|}\int_0^1\left(\frac{1}{\varepsilon^2}(\partial_{\xi}w_j(s,\xi))^2+\frac{(\partial_s w_j(s,\xi))^2}{(1-\varepsilon\xi\kappa(s))^2}\right)\varepsilon(1-\varepsilon\xi\kappa(s))\,d\xi ds\\
&\qquad=\frac{1}{\varepsilon} \int_0^{|\partial\Omega|} \int_0^1(\partial_{\xi}w_j(s,\xi))^2 (1-\varepsilon\xi\kappa(s))\,d\xi ds +\varepsilon\int_0^{|\partial\Omega|}\int_0^1 \frac{(\partial_s w_j(s,\xi))^2}{1-\varepsilon\xi\kappa(s)}\,d\xi ds.
\end{split}
\]
From \eqref{w0} we observe that
\begin{equation}\label{regu0}
|\partial_{\xi}w_j(s,\xi)|=\frac{M\mu_j}{|\partial\Omega|}(1-\xi)|u_j\circ\gamma(s)|
\end{equation}
and
\begin{equation}\label{regu1}
|\partial_{s}w_j(s,\xi)|=\frac{M\mu_j}{2|\partial\Omega|}(\xi-1)^2|\partial_s (u_j\circ\gamma)(s)|.
\end{equation}
Since $\Omega$ is assumed to be of class $C^3$, a classical elliptic regularity argument shows that $u_j\in C^2(\overline\Omega)$ (see e.g., Agmon {\it et al.}~\cite{agmon1}). In addition, by the regularity of $\Omega$, we have that $\gamma$ is of class $C^3$ from $[0,|\partial\Omega|)$ to $\mathbb R^2$. Thus, from \eqref{regu0} and \eqref{regu1} it follows that $|\partial_{\xi}w_j(s,\xi)|$, $|\partial_{s}w_j(s,\xi)|\leq C\|u_j\|_{C^1(\overline\Omega)}$. Then by condition \eqref{positiveinf} we verify that
\begin{equation}\label{vjee.eq3}
\int_{\omega_\varepsilon}|\nabla v_{j,\varepsilon}|^2dx\le C\frac{1}{\varepsilon}\qquad\forall\varepsilon\in(0,\varepsilon_{\Omega,j})\,.
\end{equation}
Now, by \eqref{vjee.eq1}, \eqref{vjee.eq2}, and \eqref{vjee.eq3} we deduce the validity of the proposition.
\end{proof}
We now consider the operator $\mathcal A_{\varepsilon}$ introduced in Section \ref{sec:2}. We recall that $\mathcal A_{\varepsilon}$ is a compact self-adjoint operator from $\mathcal H_{\varepsilon}(\Omega)$ to itself. In addition, $\lambda_j(\varepsilon)$ is an eigenvalue of \eqref{Neumann} if and only if $\frac{1}{1+\lambda_j(\varepsilon)}$ is an eigenvalue of $\mathcal A_{\varepsilon}$ and Theorem \ref{convergence} implies that
\[
\lim_{\varepsilon\to 0}\frac{1}{1+\lambda_{j}(\varepsilon)}=\frac{1}{1+\mu_j}\,.
\]
Since $\mu_j$ is a simple eigenvalue of \eqref{Steklov}, we can prove that $\frac{1}{1+\lambda_j(\varepsilon)}$ is also simple for $\varepsilon$ small enough and we have the following Lemma \ref{only}.
\begin{lemma}\label{only} There exist $\delta_j\in(0,\varepsilon_{\Omega,j})$ and $r^*_j>0$ such that, for all $\varepsilon\in(0,\delta_j)$ the only eigenvalue of $\mathcal A_{\varepsilon}$ in the interval
\[
\left[\frac{1}{1+{\mu_j}}-r^*_j,\frac{1}{1+{\mu_j}}+r^*_j\right]
\]
is $\frac{1}{1+\lambda_{j}(\varepsilon)}$.
\end{lemma}
\proof
Since $\mu_j$ and $\lambda_j(\varepsilon)$ are simple we have $\mu_j\neq\mu_{j-1}$, $\mu_j\neq\mu_{j+1}$, $\lambda_j(\varepsilon)\ne\lambda_{j-1}(\varepsilon)$ and $\lambda_j(\varepsilon)\ne\lambda_{j+1}(\varepsilon)$ for all $\varepsilon\in(0,\varepsilon_{\Omega,j})$. Then, by Theorem \ref{convergence} (i) and by a standard continuity argument we can find $\delta_j\in(0,\varepsilon_{\Omega,j})$ and $r^*_j>0$ such that
\[
\left|\frac{1}{1+\mu_j}-\frac{1}{1+\lambda_{j-1}(\varepsilon)}\right|>r^*_j\,,\quad\left|\frac{1}{1+\mu_j}-\frac{1}{1+\lambda_{j+1}(\varepsilon)}\right|>r^*_j\,,
\]
and
\[
\left|\frac{1}{1+\mu_j}-\frac{1}{1+\lambda_{j}(\varepsilon)}\right|\leq r^*_j
\] for all $\varepsilon\in(0,\delta_j)$.
\qed
\medskip
To prove Proposition \ref{intermediate} we plan to apply Lemma \ref{lemma_fondamentale} to $\mathcal A_{\varepsilon}$ with $H=\mathcal H_{\varepsilon}(\Omega)$, $\eta=\frac{1}{1+\mu_j}$, $u=\frac{u_j+\varepsilon v_{j,\varepsilon}}{\|u_j+\varepsilon v_{j,\varepsilon}\|_{\varepsilon}}$, and $r=C\varepsilon <r^*_j$, where $C>0$ is a constant which does not depend on $\varepsilon$. Accordingly, we have to verify that the assumptions of Lemma \ref{lemma_fondamentale} are satisfied.
As a first step, we prove the following
\begin{lemma}\label{C1}
There exists a constant $C_1>0$ such that
\begin{equation}\label{condition_1}
\left|\left\langle\mathcal A_{\varepsilon}(u_j+\varepsilon v_{j,\varepsilon})-\frac{1}{1+\mu_j}(u_j+\varepsilon v_{j,\varepsilon}),\varphi\right\rangle_{\varepsilon}\right|\leq C_1\varepsilon \|\varphi\|_{\varepsilon}
\end{equation}
for all $\varphi\in\mathcal H_{\varepsilon}(\Omega)$ and for all $\varepsilon\in(0,\varepsilon_{\Omega,j})$.
\end{lemma}
\proof By \eqref{bilinear_eps} and \eqref{A_eps} we have
\begin{equation}\label{mod1_lem}
\begin{split}
&\left|\left\langle\mathcal A_{\varepsilon}(u_j+\varepsilon v_{j,\varepsilon})-\frac{1}{1+\mu_j}(u_j+\varepsilon v_{j,\varepsilon}),\varphi\right\rangle_{\varepsilon}\right|\\
&\quad=\left|\int_{\Omega}\rho_{\varepsilon} u_j\varphi dx+\int_{\omega_{\varepsilon}}\varepsilon\rho_{\varepsilon}v_{j,\varepsilon}\varphi dx
-\frac{1}{1+\mu_j}\left(\int_{\Omega}\nabla u_j\cdot\nabla\varphi dx+\int_{\Omega}\rho_{\varepsilon}u_j\varphi dx\right.\right.\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\left.\left.\int_{\omega_{\varepsilon}}\varepsilon\nabla v_{j,\varepsilon}\cdot\nabla\varphi dx+\int_{\omega_{\varepsilon}}\varepsilon\rho_{\varepsilon}v_{j,\varepsilon} \varphi dx\right)\right|\\
&\quad=\frac{\mu_j}{1+\mu_j}\left|\varepsilon\int_{\Omega}u_j \varphi dx+\int_{\omega_{\varepsilon}}\frac{1}{\varepsilon}\tilde\rho_{\varepsilon}u_j\varphi dx-\frac{M}{|\partial\Omega|}\int_{\partial\Omega}u_j\varphi d\sigma\right.\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\left.+\varepsilon\int_{\omega_{\varepsilon}}\rho_{\varepsilon}v_{j,\varepsilon}\varphi dx-\frac{\varepsilon}{\mu_j}\int_{\omega_{\varepsilon}}\nabla v_{j,\varepsilon}\cdot\nabla\varphi dx\right|
\end{split}
\end{equation}
(see also \eqref{def_tilde_rho} for the definition of $\tilde\rho_{\varepsilon}$); here the second equality follows from the weak formulation of \eqref{Steklov}, that is, from the identity $\int_{\Omega}\nabla u_j\cdot\nabla\varphi\, dx=\frac{M\mu_j}{|\partial\Omega|}\int_{\partial\Omega}u_j\varphi\, d\sigma$ (cf.~\eqref{piece5} below). We observe that by the rule of change of variables in integrals we have
\begin{equation}\label{J2e_eq1.0}
\begin{split}
\int_{\omega_{\varepsilon}}\frac{1}{\varepsilon}\tilde\rho_{\varepsilon}u_j\varphi dx&=\int_0^{|\partial\Omega|}\int_0^1\tilde\rho_{\varepsilon}(u_j\circ\psi_{\varepsilon}(s,\xi))(\varphi\circ\psi_{\varepsilon}(s,\xi))(1-\varepsilon\xi\kappa(s))d\xi ds\\
&=\int_0^{|\partial\Omega|}\int_0^1\tilde\rho_{\varepsilon}(u_j\circ\psi_{\varepsilon})(\varphi\circ\psi_{\varepsilon})d\xi ds\\
&\quad-\int_0^{|\partial\Omega|}\int_0^1\tilde\rho_{\varepsilon}(u_j\circ\psi_{\varepsilon}(s,\xi))(\varphi\circ\psi_{\varepsilon}(s,\xi))\varepsilon\xi\kappa(s) d\xi ds
\end{split}
\end{equation}
and
\begin{equation}\label{J5_eq1.0}
\begin{split}
&\frac{\varepsilon}{\mu_j}\int_{\omega_{\varepsilon}}\nabla v_{j,\varepsilon}\cdot\nabla\varphi dx\\
&=\frac{\varepsilon^2}{\mu_j}\int_0^{|\partial\Omega|}\int_0^1\left(\frac{1}{\varepsilon^2}\partial_{\xi} w_{j}(s,\xi)\partial_{\xi}(\varphi\circ\psi_{\varepsilon})(s,\xi)+\partial_s w_{j}(s,\xi)\partial_s(\varphi\circ\psi_{\varepsilon})(s,\xi)\right.\\
&\quad\left. +\varepsilon\xi\kappa(s)\sum_{n=1}^{+\infty}(n+1)(\varepsilon\xi\kappa(s))^{n-1}\partial_s w_{j}(s,\xi)\partial_s(\varphi\circ\psi_{\varepsilon})(s,\xi)\right)(1-\varepsilon\xi\kappa(s))d\xi ds\\
&=\frac{1}{\mu_j}\int_0^{|\partial\Omega|}\int_0^1\partial_{\xi} w_{j}(s,\xi)\partial_{\xi}(\varphi\circ\psi_{\varepsilon})(s,\xi)d\xi\,ds\\
&\quad +\frac{1}{\mu_j}\int_0^{|\partial\Omega|}\int_0^1\partial_{\xi} w_{j}(s,\xi)\partial_{\xi}(\varphi\circ\psi_{\varepsilon})\varepsilon\xi\kappa(s)d\xi ds\\
&\quad+\frac{\varepsilon^2}{\mu_j}\int_0^{|\partial\Omega|}\int_0^1\bigg(\partial_s w_{j}(s,\xi)\partial_s(\varphi\circ\psi_{\varepsilon})(s,\xi)\\
&\quad+\varepsilon\xi\kappa(s)\sum_{n=1}^{+\infty}(n+1)(\varepsilon\xi\kappa(s))^{n-1}\partial_s w_{j}(s,\xi)\partial_s(\varphi\circ\psi_{\varepsilon})(s,\xi)\bigg)(1-\varepsilon\xi\kappa(s))d\xi ds
\end{split}
\end{equation}
(see also \eqref{grad2}). In addition, by integrating by parts and by \eqref{w0probl} one verifies that
\begin{equation}\label{J5_eq1.1}
\begin{split}
&\frac{1}{\mu_j}\int_0^{|\partial\Omega|}\int_0^1\partial_{\xi} w_{j}(s,\xi)\partial_{\xi}(\varphi\circ\psi_{\varepsilon})(s,\xi)d\xi\,ds\\
&=-\frac{M}{|\partial\Omega|}\int_{\partial\Omega}u_j\varphi d\sigma+\frac{M}{|\partial\Omega|}\int_0^{|\partial\Omega|}\int_0^1(u_j\circ\psi_{\varepsilon}(s,0))(\varphi\circ\psi_{\varepsilon}(s,\xi)) d\xi ds\,.
\end{split}
\end{equation}
Then by \eqref{J2e_eq1.0}, \eqref{J5_eq1.0}, and \eqref{J5_eq1.1} one deduces that the right hand side of the equality in \eqref{mod1_lem} equals
\[
\frac{\mu_j}{1+\mu_j}\left|J_{1,\varepsilon}+J_{2,\varepsilon}+J_{3,\varepsilon}+J_{4,\varepsilon}+J_{5,\varepsilon}+J_{6,\varepsilon}\right|
\]
with
\begin{eqnarray*}
J_{1,\varepsilon}&:=&\varepsilon\int_{\Omega}u_j \varphi dx,\\
J_{2,\varepsilon}&:=&-\int_0^{|\partial\Omega|}\int_0^1\tilde\rho_{\varepsilon}(u_j\circ\psi_{\varepsilon}(s,\xi))(\varphi\circ\psi_{\varepsilon}(s,\xi))\varepsilon\xi\kappa(s) d\xi ds,\\
J_{3,\varepsilon}&:=&\varepsilon\int_{\omega_{\varepsilon}}\rho_{\varepsilon}v_{j,\varepsilon}\varphi dx,\\
J_{4,\varepsilon}&:=&-\frac{1}{\mu_j}\int_0^{|\partial\Omega|}\int_0^1\partial_{\xi} w_{j}(s,\xi)\partial_{\xi}(\varphi\circ\psi_{\varepsilon})(s,\xi)\varepsilon\xi\kappa(s)d\xi ds,\\
J_{5,\varepsilon}&:=&-\frac{\varepsilon^2}{\mu_j}\int_0^{|\partial\Omega|}\int_0^1\bigg(\partial_s w_{j}(s,\xi)\partial_s(\varphi\circ\psi_{\varepsilon})(s,\xi)\\
&&+\varepsilon\xi\kappa(s)\sum_{n=1}^{+\infty}(n+1)(\varepsilon\xi\kappa(s))^{n-1}\partial_s w_{j}(s,\xi)\partial_s(\varphi\circ\psi_{\varepsilon})(s,\xi)\bigg)(1-\varepsilon\xi\kappa(s))d\xi ds,\\
J_{6,\varepsilon}&:=&\int_0^{|\partial\Omega|}\int_0^1\tilde\rho_{\varepsilon}(u_j\circ\psi_{\varepsilon})(\varphi\circ\psi_{\varepsilon})d\xi ds-\frac{M}{|\partial\Omega|}\int_0^{|\partial\Omega|}\int_0^1(u_j\circ\gamma)(\varphi\circ\psi_{\varepsilon}) d\xi ds.\\
\end{eqnarray*}
To prove the validity of the lemma we will show that there exists $C>0$ such that
\begin{equation}\label{aim}
|J_{k,\varepsilon}|\le C\varepsilon\|\varphi\|_{\varepsilon}\qquad\forall\varepsilon\in(0,\varepsilon_{\Omega,j})\,,\varphi\in\mathcal H_{\varepsilon}(\Omega)
\end{equation}
for all $k\in\{1,\dots,6\}$. In the sequel we find it convenient to adopt the following convention: we will denote by $C$ a positive constant which does not depend on $\varepsilon$ and $\varphi$ and which may be re-defined line by line.
We begin with $J_{1,\varepsilon}$. We observe that there exists $C>0$ such that
\begin{equation}\label{J1e_eq2}
\|u_j\|_{\varepsilon}\leq C
\end{equation}
for all $\varepsilon\in(0,\varepsilon_{\Omega,j})$. The proof of \eqref{J1e_eq2} can be effected by noting that
\begin{equation}\label{J1e_eq2.1}
\lim_{\varepsilon\rightarrow 0}\int_{\Omega}\rho_{\varepsilon}u_j^2 dx=\frac{M}{|\partial\Omega|}\int_{\partial\Omega}u_j^2 d\sigma=\frac{M}{|\partial\Omega|}
\end{equation}
and by a standard continuity argument. Then, by the H\"older inequality and by Proposition \ref{L2<eps} we deduce that
\begin{equation*}
{\left|J_{1,\varepsilon}\right|\leq\varepsilon\int_{\Omega}\left|u_j\varphi\right| dx}\leq \varepsilon \|u_j\|_{L^2(\Omega)}\|\varphi\|_{L^2(\Omega)}\leq C\varepsilon\|u_j\|_{\varepsilon}\|\varphi\|_{\varepsilon}\leq C\varepsilon\|\varphi\|_{\varepsilon}
\end{equation*}
for all $\varepsilon\in(0,\varepsilon_{\Omega,j})$. Accordingly \eqref{aim} holds with $k=1$.
Now we consider $J_{2,\varepsilon}$. We write
\begin{equation*}
J_{2,\varepsilon}=-\int_0^{|\partial\Omega|}\int_0^1\tilde\rho_{\varepsilon}(u_j\circ\psi_{\varepsilon}(s,\xi))(\varphi\circ\psi_{\varepsilon}(s,\xi))\frac{\xi\kappa(s)}{1-\varepsilon\xi\kappa(s)}\varepsilon(1-\varepsilon\xi\kappa(s))d\xi ds\,.
\end{equation*}
Then we observe that by \eqref{positiveinf} there exists a constant $C$ such that
\begin{equation}\label{J2e_eq2}
\left|\frac{\xi\kappa(s)}{1-\varepsilon\xi\kappa(s)}\right|<C\qquad\forall\varepsilon\in(0,\varepsilon_{\Omega,j})\,,\;(s,\xi)\in[0,|\partial\Omega|)\times(0,1)\,.
\end{equation}
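Indeed, by \eqref{positiveinf} the quantity $1-\varepsilon\xi\kappa(s)$ is bounded away from zero uniformly in $\varepsilon$, $s$, and $\xi$, while $|\xi\kappa(s)|\leq \sup_{s\in[0,|\partial\Omega|)}|\kappa(s)|$, which is finite because $\Omega$ is of class $C^3$.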
Hence, by the Cauchy-Schwarz inequality and by \eqref{J1e_eq2} we have
\begin{equation*}
\begin{split}
\left|J_{2,\varepsilon}\right|&\leq C\varepsilon\int_{\omega_{\varepsilon}}\frac{\tilde\rho_{\varepsilon}}{\varepsilon}|u_j\varphi|dx\\
&\leq C\varepsilon\int_{\Omega}\rho_{\varepsilon}|u_j\varphi|dx\leq C\varepsilon\norm{u_j}_{\varepsilon}\norm{\varphi}_{\varepsilon}\leq C\varepsilon\norm{\varphi}_{\varepsilon}\quad\forall\varepsilon\in(0,\varepsilon_{\Omega,j})
\end{split}
\end{equation*}
and the validity of \eqref{aim} with $k=2$ is proved.
We now pass to consider $J_{3,\varepsilon}$. By the H\"older inequality we have
\begin{equation*}
\left|J_{3,\varepsilon}\right|\le\varepsilon\int_{\omega_{\varepsilon}}\rho_{\varepsilon}^{\frac{1}{2}}|v_{j,\varepsilon}|\, \rho_{\varepsilon}^{\frac{1}{2}}|\varphi| dx\le
\varepsilon\left(\int_{\omega_{\varepsilon}}\rho_{\varepsilon}v^2_{j,\varepsilon}\,dx\right)^{\frac{1}{2}} \left(\int_{\omega_{\varepsilon}}\rho_{\varepsilon}\varphi^2\,dx\right)^{\frac{1}{2}}\,.
\end{equation*}
Then \eqref{aim} with $k=3$ follows by \eqref{vjee.eq2}.
For $J_{4,\varepsilon}$ we observe that we can write
\[
J_{4,\varepsilon}=-\frac{1}{\mu_j}\int_0^{|\partial\Omega|}\int_0^1\partial_{\xi} w_{j}(s,\xi)\partial_{\xi}(\varphi\circ\psi_{\varepsilon})(s,\xi)\frac{\xi\kappa(s)}{1-\varepsilon\xi\kappa(s)}\varepsilon(1-\varepsilon\xi\kappa(s))d\xi ds\,.
\]
Then by \eqref{J2e_eq2}, by the rule of change of variables in integrals, and by the H\"older inequality we have
\begin{equation*}
\left|J_{4,\varepsilon}\right|\le C\varepsilon\|\nabla\varphi\|_{L^2(\Omega)}\qquad\forall\varepsilon\in(0,\varepsilon_{\Omega,j})\,.
\end{equation*}
Thus \eqref{aim} with $k=4$ follows by the definition of $\|\cdot\|_\varepsilon$ (cf.~\eqref{bilinear_eps}).
Similarly, by the rule of change of variables in integrals, and by the H\"older inequality one deduces that
\begin{equation*}
\left|J_{5,\varepsilon}\right|\le C\varepsilon\|\nabla\varphi\|_{L^2(\Omega)}\qquad\forall\varepsilon\in(0,\varepsilon_{\Omega,j})
\end{equation*}
and \eqref{aim} with $k=5$ follows by the definition of $\|\cdot\|_\varepsilon$.
Finally we consider $J_{6,\varepsilon}$. By a straightforward computation one verifies that
\begin{equation}\label{J7J8}
J_{6,\varepsilon}=J_{7,\varepsilon}+J_{8,\varepsilon}
\end{equation}
with
\[
\begin{split}
J_{7,\varepsilon}&:=\int_0^{|\partial\Omega|}\int_0^1 \left(\tilde\rho_{\varepsilon}-\frac{M}{|\partial\Omega|}\right)(u_j\circ\gamma)(\varphi\circ\psi_{\varepsilon})d\xi ds\,,\\
J_{8,\varepsilon}&:=\int_0^{|\partial\Omega|}\int_0^1\tilde\rho_{\varepsilon}\bigl((u_j\circ\psi_{\varepsilon}(s,\xi))-(u_j\circ\psi_{\varepsilon}(s,0))\bigr)(\varphi\circ\psi_{\varepsilon}(s,\xi)) d\xi ds\,.
\end{split}
\]
We first study $J_{7,\varepsilon}$. By \eqref{asymptotic_rho} it follows that
\begin{equation}\label{J7}
\begin{split}
&|J_{7,\varepsilon}|=\left|\int_0^{|\partial\Omega|}\int_0^1 \left(\frac{\frac{1}{2}K M-|\Omega||\partial\Omega|}{|\partial\Omega|^2}\varepsilon+\varepsilon^2\tilde{R}(\varepsilon)\right)(u_j\circ\gamma)(\varphi\circ\psi_{\varepsilon}) d\xi ds\right|\\
&\qquad\leq C\varepsilon\int_0^{|\partial\Omega|}\int_{0}^1|(u_j\circ\gamma)(\varphi\circ\psi_{\varepsilon})|d\xi ds\,.
\end{split}
\end{equation}
Hence, by the H\"older inequality, by the rule of change of variables in integrals, by condition \eqref{ucondition}, and by \eqref{positiveinf} we have
\[
\begin{split}
&|J_{7,\varepsilon}|\leq C\varepsilon\left(\int_{\partial\Omega}u_j^2d\sigma\right)^{\frac{1}{2}}\left(\int_0^{|\partial\Omega|}\int_0^1(\varphi\circ\psi_{\varepsilon}(s,\xi))^2\frac{\varepsilon(1-\varepsilon\xi\kappa(s))}{\varepsilon(1-\varepsilon\xi\kappa(s))}d\xi ds\right)^{\frac{1}{2}}\\
&\qquad\leq C\varepsilon \left(\int_{\omega_{\varepsilon}}\frac{1}{\varepsilon}\varphi^2dx\right)^{\frac{1}{2}}\leq C\varepsilon \norm{\varphi}_{\varepsilon}\,.
\end{split}
\]
We now turn to $J_{8,\varepsilon}$. Since $\Omega$ is assumed to be of class $C^{3}$ and $u_j$ is a solution of \eqref{Steklov}, a classical elliptic regularity argument shows that $u_j\in C^{2}(\overline\Omega)$ (see e.g., \cite{agmon1}). In addition, by the regularity of $\Omega$ we also have that $\psi_{\varepsilon}$ is of class $C^2$ from $[0,|\partial\Omega|)\times(0,1)$ to $\mathbb{R}^2$. Thus $u_j\circ\psi_{\varepsilon}$ is of class $C^2$ from $[0,|\partial\Omega|)\times(0,1)$ to $\mathbb{R}$ and we can prove that for all $\varepsilon\in(0,\varepsilon_{\Omega,j})$ and $(s,\xi)\in [0,|\partial\Omega|)\times(0,1)$ there exists $\xi^*$ such that
\[
(u_j\circ\psi_{\varepsilon}(s,\xi))-(u_j\circ\psi_{\varepsilon}(s,0))=\xi\partial_{\xi}(u_j\circ\psi_{\varepsilon})(s,\xi^*).
\]
Then, by taking $t^*:=\varepsilon\xi^*$ we have
\begin{multline}\label{J801}
J_{8,\varepsilon}=\tilde\rho_{\varepsilon}\int_0^{|\partial\Omega|}\int_0^1\xi\partial_{\xi}(u_j\circ\psi_{\varepsilon})(s,\xi^*)(\varphi\circ\psi_{\varepsilon}(s,\xi))d\xi ds\\
=\tilde\rho_{\varepsilon}\int_0^{|\partial\Omega|}\int_0^{\varepsilon} t\partial_t(u_j\circ\psi)(s,t^*)(\varphi\circ\psi(s,t))\frac{dt}{\varepsilon}ds\,.
\end{multline}
Hence, by the H\"older inequality we deduce that
\begin{multline}\label{J802}
|J_{8,\varepsilon}|\leq C\|u_j\|_{C^{1}(\overline\Omega)}\int_0^{|\partial\Omega|}\int_0^{\varepsilon}\frac{t}{\varepsilon^{\frac{1}{2}}}\frac{|\varphi\circ\psi|}{\varepsilon^{\frac{1}{2}}}dt ds\\
\leq C\|u_j\|_{C^{1}(\overline\Omega)}\left(\int_0^{|\partial\Omega|}\int_0^{\varepsilon}\frac{t^2}{\varepsilon} dt ds\right)^{\frac{1}{2}}\left(\int_0^{|\partial\Omega|}\int_0^{\varepsilon}\frac{(\varphi\circ\psi)^2}{\varepsilon} dt ds\right)^{\frac{1}{2}}\\
= C\|u_j\|_{C^{1}(\overline\Omega)}\frac{|\partial\Omega|^{\frac{1}{2}}}{\sqrt 3}\varepsilon\left(\int_0^{|\partial\Omega|}\int_0^{\varepsilon}\frac{(\varphi\circ\psi)^2}{\varepsilon} dt ds\right)^{\frac{1}{2}}.
\end{multline}
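Here we have used the elementary computation
\[
\left(\int_0^{|\partial\Omega|}\int_0^{\varepsilon}\frac{t^2}{\varepsilon}\, dt\, ds\right)^{\frac{1}{2}}=\left(\frac{|\partial\Omega|\,\varepsilon^2}{3}\right)^{\frac{1}{2}}=\frac{|\partial\Omega|^{\frac{1}{2}}}{\sqrt{3}}\,\varepsilon\,.
\]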
We now observe that $\det D\psi(s,t)=1-t\kappa(s)$ for all $(s,t)\in [0,|\partial\Omega|)\times(0,\varepsilon)$ and
\[
\inf_{(s,t)\in [0,|\partial\Omega|)\times(0,\varepsilon)}(1-t\kappa(s))>0
\]
for all $\varepsilon\in(0,\varepsilon_{\Omega,j})$ (cf.~\eqref{positiveinf}). Thus, by the rule of change of variables in integrals we compute
\begin{equation}\label{J8}
|J_{8,\varepsilon}|\le C\varepsilon\left(\int_0^{|\partial\Omega|}\int_0^{\varepsilon}\frac{(\varphi\circ\psi)^2}{\varepsilon} \frac{1-t\kappa(s)}{1-t\kappa(s)}dt ds\right)^{\frac{1}{2}}\le C\varepsilon\|\varphi\|_\varepsilon\,.
\end{equation}
Finally, by \eqref{J7J8}, \eqref{J7}, and \eqref{J8} one deduces that \eqref{aim} holds also for $k=6$. Our proof is now complete.\qed
\medskip
Our next step is to verify that $\norm{u_j+\varepsilon v_{j,\varepsilon}}^2_{\varepsilon}-\frac{M}{|\partial\Omega|}(1+\mu_j)$ is in $O(\varepsilon)$ as $\varepsilon\to 0$. To do so, we prove the following lemma.
\begin{lemma}\label{ultimolemmastep1}
There exists a constant $C>0$ such that
\begin{equation*}
\left|\|u_j+\varepsilon v_{j,\varepsilon}\|^2_{\varepsilon}-\frac{M}{|\partial\Omega|}(1+\mu_j)\right|\le C\varepsilon\qquad\forall\varepsilon\in(0,\varepsilon_{\Omega,j})\,.
\end{equation*}
\end{lemma}
\proof
A straightforward computation shows that
\begin{equation*}
\begin{split}
&\|u_j+\varepsilon v_{j,\varepsilon}\|^2_{\varepsilon}-\frac{M}{|\partial\Omega|}(1+\mu_j)\\
&\qquad=\langle u_j+\varepsilon v_{j,\varepsilon}\,,\, u_j+\varepsilon v_{j,\varepsilon}\rangle_\varepsilon-\frac{M}{|\partial\Omega|}(1+\mu_j)=\sum_{k=1}^5L_{k,\varepsilon},
\end{split}
\end{equation*}
with
\small
\begin{eqnarray*}
L_{1,\varepsilon}&:=&\int_{\Omega}\rho_{\varepsilon}u_j^2 dx-\frac{M}{|\partial\Omega|},\\
L_{2,\varepsilon}&:=&\int_{\Omega}|\nabla u_j|^2 dx-\frac{M\mu_j}{|\partial\Omega|},\\
L_{3,\varepsilon}&:=&2\varepsilon\int_{\omega_{\varepsilon}}\rho_{\varepsilon}u_jv_{j,\varepsilon} dx,\\
L_{4,\varepsilon}&:=&2\varepsilon\int_{\omega_{\varepsilon}}\nabla u_j\cdot\nabla v_{j,\varepsilon}dx,\\
L_{5,\varepsilon}&:=&\varepsilon^2\int_{\omega_{\varepsilon}}\rho_{\varepsilon}v_{j,\varepsilon}^2 dx+\varepsilon^2\int_{\omega_{\varepsilon}}|\nabla v_{j,\varepsilon}|^2dx\,.
\end{eqnarray*}
\normalsize
To prove the validity of the lemma we will show that there exists $C>0$ such that
\begin{equation}\label{LlessCe}
|L_{k,\varepsilon}|\leq C\varepsilon\qquad\forall \varepsilon\in(0,\varepsilon_{\Omega,j})\,,
\end{equation}
for all $k\in\{1,\dots,5\}$. In the sequel we will denote by $C$ a positive constant which does not depend on $\varepsilon$ and which may be re-defined line by line.
We begin with $L_{1,\varepsilon}$. We observe that by condition \eqref{ucondition} we have
\[
L_{1,\varepsilon}=\frac{1}{\varepsilon}\int_{\omega_{\varepsilon}}\tilde\rho_{\varepsilon}u_j^2 dx-\frac{M}{|\partial\Omega|}\int_{\partial\Omega}u_j^2 d\sigma+\varepsilon\int_{\Omega}u_j^2 dx\,.
\]
Hence, by \eqref{asymptotic_rho} we deduce that
\begin{equation}\label{L1.1}
\begin{split}
L_{1,\varepsilon}&=\frac{M}{|\partial\Omega|}\left(\frac{1}{\varepsilon}\int_{\omega_{\varepsilon}}u_j^2 dx-\int_{\partial\Omega}u_j^2 d\sigma\right)\\
&\quad +\left(\frac{\frac{1}{2}KM-|\Omega||\partial\Omega|}{|\partial\Omega|^2}+\varepsilon \tilde{R}(\varepsilon)\right)\int_{\omega_{\varepsilon}}u_j^2dx+\varepsilon\int_{\Omega}u_j^2 dx.
\end{split}
\end{equation}
Since $\Omega$ is assumed to be of class $C^{3}$ and $u_j$ is a solution of \eqref{Steklov}, a classical elliptic regularity argument shows that $u_j\in C^{2}(\overline\Omega)$ (see e.g., \cite{agmon1}). Then one verifies that
\begin{equation}\label{L1.2}
\int_{\omega_{\varepsilon}}u_j^2dx\le C|\partial\Omega|\,\|u_j\|^2_{C(\overline\Omega)}\,\varepsilon\qquad\text{and }\qquad\varepsilon\int_{\Omega}u_j^2 dx\le |\Omega|\, \|u_j\|^2_{C(\overline\Omega)}\,\varepsilon\,.
\end{equation}
In addition, the map which takes $(s,t)\in[0,|\partial\Omega|)\times(0,\varepsilon)$ to $\tilde u_j(s,t):=(u_j\circ\psi(s,t))^2(1-t\kappa(s))$ is of class $C^2$. It follows that
\begin{equation}\label{L1.3}
\begin{split}
&\left|\frac{1}{\varepsilon}\int_{\omega_{\varepsilon}}u_j^2 dx-\int_{\partial\Omega}u_j^2 d\sigma\right|\\
&\qquad\le\int_0^{|\partial\Omega|}\frac{1}{\varepsilon}\int_0^{\varepsilon}\left|(u_j\circ\psi(s,t))^2(1-t\kappa(s))-(u_j\circ\psi(s,0))^2\right|dtds\\
&\qquad\leq\int_0^{|\partial\Omega|}\frac{1}{\varepsilon}\left(\int_0^{\varepsilon}\norm{\tilde u_j}_{C^1([0,|\partial\Omega|]\times[0,\varepsilon])}\,tdt\right)ds\leq C\varepsilon.
\end{split}
\end{equation}
Then the validity of \eqref{LlessCe} with $k=1$ follows by \eqref{L1.1}, \eqref{L1.2}, and \eqref{L1.3}.
We now consider $L_{2,\varepsilon}$. Since $u_j$ is an eigenfunction of \eqref{Steklov}, a standard argument based on the divergence theorem shows that
\[
\int_{\Omega}|\nabla u_j|^2 dx=\frac{M\mu_j}{|\partial\Omega|}\int_{\partial\Omega}u_j^2 d\sigma\,.
\]
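Indeed, the identity above is nothing but the weak formulation of \eqref{Steklov} tested with $\varphi=u_j$ (cf.~\eqref{piece5} below).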
Then, by condition \eqref{ucondition} we have
\begin{equation*}
L_{2,\varepsilon}=0,
\end{equation*}
which readily implies that \eqref{LlessCe} holds with $k=2$.
To prove \eqref{LlessCe} for $k=3$ we observe that $\rho_\varepsilon = \varepsilon+\frac{1}{\varepsilon}\tilde\rho_\varepsilon$ on $\omega_\varepsilon$. Thus by a computation based on the rule of change of variables in integrals we have
\[
\begin{split}
L_{3,\varepsilon}&=2\varepsilon \left(\varepsilon+\frac{1}{\varepsilon}\tilde\rho(\varepsilon)\right)\int_{\omega_{\varepsilon}}u_jv_{j,\varepsilon} dx\\
&= 2\varepsilon^2 \left(\varepsilon+\frac{1}{\varepsilon}\tilde\rho(\varepsilon)\right)\int_0^{|\partial\Omega|}\int_0^1 u_j\circ\psi_\varepsilon(s,\xi)w_j(s,\xi)(1-\varepsilon\xi\kappa(s))\,d\xi ds\,.
\end{split}
\]
Hence,
\[
|L_{3,\varepsilon}|\le 2\varepsilon^2 \left|\varepsilon+\frac{1}{\varepsilon}\tilde\rho(\varepsilon)\right|\left(1+\varepsilon_{\Omega,j}\sup_{s\in[0,|\partial\Omega|)}|\kappa(s)|\right)\norm{u_j}_{L^\infty(\Omega)}\int_0^{|\partial\Omega|}\int_0^1|w_j| d\xi ds
\]
and the validity of \eqref{LlessCe} with $k=3$ follows by \eqref{asymptotic_rho}.
We now consider the case when $k=4$. By \eqref{grad2} and by the rule of change of variables in integrals we have
\[
\begin{split}
&L_{4,\varepsilon}\\
&=-2\varepsilon\int_0^{|\partial\Omega|}\int_0^1\left(\frac{1}{\varepsilon^2}\partial_{\xi}(u_j\circ\psi_{\varepsilon}(s,\xi))\partial_{\xi}w_j(s,\xi)+\frac{\partial_s (u_j\circ\psi_{\varepsilon}(s,\xi))\partial_s w_j(s,\xi)}{(1-\varepsilon\xi\kappa(s))^2}\right)\\
&\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\times\varepsilon(1-\varepsilon\xi\kappa(s))\,d\xi ds\,.
\end{split}
\]
Now, by the equality $\psi_\varepsilon(s,\xi)=\gamma(s)+\varepsilon\xi\nu(\gamma(s))$ and by the membership of $u_j$ in $C^{2}(\overline\Omega)$, we verify that
\[
\left|\partial_{\xi}(u_j\circ\psi_{\varepsilon}(s,\xi))\right|=\varepsilon\left|\nu(\gamma(s))\cdot\nabla u_j(\psi_\varepsilon(s,\xi))\right|\le\varepsilon \| u_j \|_{C^1(\overline\Omega)}
\]
for all $\varepsilon\in(0,\varepsilon_{\Omega,j})$ and for all $(s,\xi)\in[0,|\partial\Omega|)\times(0,1)$. Hence, by \eqref{positiveinf} and by a straightforward computation, we deduce that \eqref{LlessCe} holds with $k=4$.
Finally, the validity of \eqref{LlessCe} for $k=5$ is a consequence of Proposition \ref{vjee} and of equality $L_{5,\varepsilon}=\varepsilon^2\norm{v_{j,\varepsilon}}^2_\varepsilon$.
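Indeed, Proposition \ref{vjee} gives
\[
L_{5,\varepsilon}=\varepsilon^2\norm{v_{j,\varepsilon}}^2_\varepsilon=\varepsilon\left(\sqrt{\varepsilon}\,\norm{v_{j,\varepsilon}}_\varepsilon\right)^{2}\leq C\varepsilon\qquad\forall\varepsilon\in(0,\varepsilon_{\Omega,j})\,.
\]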
\endproof
\medskip
We are now ready to prove Proposition \ref{intermediate} by Lemma \ref{lemma_fondamentale}.
\medskip
\noindent{\em Proof of Proposition \ref{intermediate}.}
We first prove \eqref{intermediate.eq1}. By Lemma \ref{ultimolemmastep1} there exists $\varepsilon^*_j\in(0,\varepsilon_{\Omega,j})$ such that
\[
\|u_j+\varepsilon v_{j,\varepsilon}\|_{\varepsilon}>\frac{1}{2}\sqrt{\frac{M}{|\partial\Omega|}}(1+\mu_j)^{\frac{1}{2}}\qquad\forall\varepsilon\in(0,\varepsilon^*_j)\,.
\]
Hence, by multiplying both sides of \eqref{condition_1} by $\norm{u_j+\varepsilon v_{j,\varepsilon}}_{\varepsilon}^{-1}$ we deduce that
\begin{equation}\label{condition_11}
\left|\left\langle\mathcal A_{\varepsilon}\left(\frac{u_j+\varepsilon v_{j,\varepsilon}}{\|u_j+\varepsilon v_{j,\varepsilon}\|_{\varepsilon}}\right)-\frac{1}{1+\mu_j}\left(\frac{u_j+\varepsilon v_{j,\varepsilon}}{\|u_j+\varepsilon v_{j,\varepsilon}\|_{\varepsilon}}\right),\varphi\right\rangle_{\varepsilon}\right|\leq C_2\,\varepsilon\, \|\varphi\|_{\varepsilon}
\end{equation}
for all $\varphi\in H^1(\Omega)$ and $\varepsilon\in(0,\varepsilon^*_j)$, with $C_2:=2\sqrt{\frac{|\partial\Omega|}{M}}(1+\mu_j)^{-\frac{1}{2}}C_1$. By taking $\varphi=\mathcal A_{\varepsilon}\left(\frac{u_j+\varepsilon v_{j,\varepsilon}}{\|u_j+\varepsilon v_{j,\varepsilon}\|_{\varepsilon}}\right)-\frac{1}{1+\mu_j}\left(\frac{u_j+\varepsilon v_{j,\varepsilon}}{\|u_j+\varepsilon v_{j,\varepsilon}\|_{\varepsilon}}\right)$ in \eqref{condition_11}, we obtain
\begin{equation*}
\left\|\mathcal{A}_{\varepsilon}\left(\frac{u_j+\varepsilon v_{j,\varepsilon}}{\|u_j+\varepsilon v_{j,\varepsilon}\|_{\varepsilon}}\right)-\frac{1}{1+\mu_j}\left(\frac{u_j+\varepsilon v_{j,\varepsilon}}{\|u_j+\varepsilon v_{j,\varepsilon}\|_{\varepsilon}}\right)\right\|_{\varepsilon}\leq C_2\, \varepsilon\qquad\forall\varepsilon\in(0,\varepsilon^*_j)\,.
\end{equation*}
As a consequence, one can verify that the assumptions of Lemma \ref{lemma_fondamentale} hold with $A=\mathcal{A}_\varepsilon$, $H=\mathcal H_{\varepsilon}(\Omega)$, $\eta=\frac{1}{1+\mu_j}$, $u=\frac{u_j+\varepsilon v_{j,\varepsilon}}{\|u_j+\varepsilon v_{j,\varepsilon}\|_{\varepsilon}}$, and $r=C_2\,\varepsilon$ with $\varepsilon\in(0,\varepsilon^*_j)$ (see also Proposition \ref{Ae}). Accordingly, for all $\varepsilon\in(0,\varepsilon^*_j)$ there exists an eigenvalue $\eta^*_\varepsilon$ of $\mathcal A_{\varepsilon}$ such that
\begin{equation}\label{quasi_auto_1}
\left|\frac{1}{1+\mu_j}-\eta^*_\varepsilon\right|\leq C_2\varepsilon.
\end{equation}
Now we take $\varepsilon_{\Omega,j}^\#:=\min\{\varepsilon^*_j\,,\,\delta_j\,,\,C_2^{-1}r^*_j\}$ with $\delta_j$ and $r^*_j$ as in Lemma \ref{only}. By \eqref{quasi_auto_1} and Lemma \ref{only}, the eigenvalue $\eta^*_\varepsilon$ has to coincide with $\frac{1}{1+\lambda_j(\varepsilon)}$ for all $\varepsilon\in(0,\varepsilon_{\Omega,j}^\#)$. It follows that
\begin{equation*}
\left| \mu_j- \lambda_j(\varepsilon)\right|\leq C_2\left|(1+\mu_j)(1+\lambda_j(\varepsilon))\right|\varepsilon\qquad\forall\varepsilon\in(0,\varepsilon_{\Omega,j}^\#).
\end{equation*}
Then the validity of \eqref{intermediate.eq1} follows by Theorem \ref{convergence} (i) and by a straightforward computation.
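More precisely, since $\lambda_j(\varepsilon)\rightarrow\mu_j$ as $\varepsilon\rightarrow 0$ by Theorem \ref{convergence} (i), the product $(1+\mu_j)(1+\lambda_j(\varepsilon))$ remains bounded for $\varepsilon\in(0,\varepsilon_{\Omega,j}^\#)$, so that $\left|\mu_j-\lambda_j(\varepsilon)\right|\leq C\varepsilon$ for some constant $C>0$ independent of $\varepsilon$; this is precisely \eqref{intermediate.eq1}.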
We now consider \eqref{intermediate.eq2}. By Lemma \ref{lemma_fondamentale} with $r^*=r^*_j$ it follows that for all $\varepsilon\in(0,\varepsilon_{\Omega,j}^\#)$ there exists a function $u^*_\varepsilon\in\mathcal H_{\varepsilon}(\Omega)$ with $\|u^*_\varepsilon\|_{\varepsilon}=1$ which belongs to the space generated by all the eigenfunctions of $\mathcal{A}_\varepsilon$ associated with the eigenvalues contained in the segment $\left[\frac{1}{1+\mu_j}-r^*_j,\frac{1}{1+\mu_j}+r^*_j\right]$ and such that
\begin{equation}\label{u*ineq}
\left\|u^*_\varepsilon-\frac{u_j+\varepsilon v_{j,\varepsilon}}{\|u_j+\varepsilon v_{j,\varepsilon}\|_{\varepsilon}}\right\|_{\varepsilon}\leq\frac{2C_2}{r^*_j}\varepsilon.
\end{equation}
Since $\varepsilon\in(0,\varepsilon^\#_{\Omega,j})$, Lemma \ref{only} implies that $\frac{1}{1+\lambda_j(\varepsilon)}$ is the only eigenvalue of $\mathcal A_{\varepsilon}$ which belongs to the segment $\left[\frac{1}{1+\mu_j}-r^*_j,\frac{1}{1+\mu_j}+r^*_j\right]$. In addition $\lambda_j(\varepsilon)$ is simple for $\varepsilon<\varepsilon_{\Omega,j}^\#$ (because $\varepsilon_{\Omega,j}^\#\le \varepsilon_{\Omega,j}$). It follows that $u^*_\varepsilon$ coincides with the only eigenfunction with norm one corresponding to $\lambda_j(\varepsilon)$, namely
$u^*_\varepsilon=\frac{u_{j,\varepsilon}}{\|u_{j,\varepsilon}\|_{\varepsilon}}$.
Thus by \eqref{u*ineq} we have
\begin{equation}\label{intermediate.eq3}
\left\|\frac{u_{j,\varepsilon}}{\|u_{j,\varepsilon}\|_{\varepsilon}}- \frac{u_j+\varepsilon v_{j,\varepsilon}}{\|u_j+\varepsilon v_{j,\varepsilon}\|_{\varepsilon}}\right\|_{\varepsilon}\leq \frac{2C_2}{r^*_j}\varepsilon\qquad\forall\varepsilon\in(0,\varepsilon^\#_{\Omega,j}).
\end{equation}
We plan to prove that \eqref{intermediate.eq3} implies that
\begin{equation}\label{ineq_eigenfunctions_step1}
\|u_{j,\varepsilon}-u_j-\varepsilon v_{j,\varepsilon}\|_{L^2(\Omega)}\leq C_3 \varepsilon\qquad\forall\varepsilon\in(0,\varepsilon^\#_{\Omega,j})
\end{equation}
for some $C_3>0$. Then the validity of \eqref{intermediate.eq2} will follow by Proposition \ref{vjeL2}. To do so, we observe that by \eqref{uecondition} we have
\begin{equation}\label{intermediate.eq4}
\norm{u_{j,\varepsilon}}_\varepsilon=\sqrt\frac{M}{|\partial\Omega|}(1+\lambda_j(\varepsilon))^{\frac{1}{2}}\qquad\forall\varepsilon\in(0,\varepsilon^\#_{\Omega,j}).
\end{equation}
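Indeed, taking $u_{j,\varepsilon}$ as a test function in the weak formulation of \eqref{Neumann} gives $\int_{\Omega}|\nabla u_{j,\varepsilon}|^2dx=\lambda_j(\varepsilon)\int_{\Omega}\rho_{\varepsilon}u_{j,\varepsilon}^2dx$, so that $\norm{u_{j,\varepsilon}}^2_\varepsilon=(1+\lambda_j(\varepsilon))\int_{\Omega}\rho_{\varepsilon}u_{j,\varepsilon}^2dx$, and \eqref{intermediate.eq4} follows from \eqref{uecondition}.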
It follows that
\begin{multline}\label{intermediate.eq45}
\norm{u_{j,\varepsilon}}_\varepsilon-\norm{u_{j}+\varepsilon v_{j,\varepsilon}}_\varepsilon\\
=\left(\sqrt\frac{M}{|\partial\Omega|}(1+\lambda_j(\varepsilon))^{\frac{1}{2}}-\sqrt\frac{M}{|\partial\Omega|}(1+\mu_j)^{\frac{1}{2}}\right)+\left(\sqrt\frac{M}{|\partial\Omega|}(1+\mu_j)^{\frac{1}{2}}-\norm{u_{j}+\varepsilon v_{j,\varepsilon}}_\varepsilon\right)\,.
\end{multline}
Then a computation based on \eqref{intermediate.eq1} and on Lemma \ref{ultimolemmastep1} shows that
\begin{equation}\label{intermediate.eq5}
\left|\norm{u_{j,\varepsilon}}_\varepsilon-\norm{u_{j}+\varepsilon v_{j,\varepsilon}}_\varepsilon\right|< C_4\, \varepsilon\qquad\forall\varepsilon\in(0,\varepsilon^\#_{\Omega,j})
\end{equation}
for some $C_4>0$.
Now we note that
\begin{multline}\label{intermediate.eq6}
\|u_{j,\varepsilon}-u_j-\varepsilon v_{j,\varepsilon}\|_{\varepsilon}=\norm{\|u_{j,\varepsilon}\|_{\varepsilon}\frac{u_{j,\varepsilon}}{\|u_{j,\varepsilon}\|_{\varepsilon}}-\|u_j+\varepsilon v_{j,\varepsilon}\|_{\varepsilon} \frac{u_j+\varepsilon v_{j,\varepsilon}}{\|u_j+\varepsilon v_{j,\varepsilon}\|_{\varepsilon}}}_\varepsilon\\
\le \norm{u_{j,\varepsilon}}_\varepsilon\norm{\frac{u_{j,\varepsilon}}{\|u_{j,\varepsilon}\|_{\varepsilon}}- \frac{u_j+\varepsilon v_{j,\varepsilon}}{\|u_j+\varepsilon v_{j,\varepsilon}\|_{\varepsilon}}}_\varepsilon+\left|\norm{u_{j,\varepsilon}}_\varepsilon-\norm{u_{j}+\varepsilon v_{j,\varepsilon}}_\varepsilon\right|\,.
\end{multline}
Hence, by \eqref{intermediate.eq3}, \eqref{intermediate.eq4}, and \eqref{intermediate.eq5} we deduce that
\[
\|u_{j,\varepsilon}-u_j-\varepsilon v_{j,\varepsilon}\|_\varepsilon\leq C_5\, \varepsilon\qquad\forall\varepsilon\in(0,\varepsilon^\#_{\Omega,j})
\]
for some $C_5>0$. Now the validity of \eqref{ineq_eigenfunctions_step1} follows by Proposition \ref{L2<eps}.
\qed
\section{Second Step}\label{sec:5}
In this section we complete the proof of Theorems \ref{asymptotic_eigenvalues} and \ref{asymptotic_eigenfunctions}. Accordingly, we fix $j\in\mathbb N$ and we take $\mu_j$, $\mu_j^1$, $u_j$, $\varepsilon_{\Omega,j}$, $\lambda_j(\varepsilon)$, and $u_{j,\varepsilon}$ as in the statements of Theorems \ref{asymptotic_eigenvalues} and \ref{asymptotic_eigenfunctions}.
We denote by $u_j^1$ the unique solution in $H^1(\Omega)$ of the boundary value problem
\begin{equation}\label{u1_problem_simple_true}
\begin{cases}
-\Delta u_j^1=\mu_j u_j, & {\rm in}\ \Omega,\\
\partial_{\nu}u_j^1-\frac{M\mu_j}{|\partial\Omega|}u_j^1=\left(\frac{M\mu_j}{2|\partial\Omega|^2}(K-|\partial\Omega|\kappa{\circ\gamma^{(-1)}})-\frac{2M^2\mu_j^2}{3|\partial\Omega|^2}-\frac{|\Omega|\mu_j}{|\partial\Omega|}\right)u_j+\frac{M\mu_j^1}{|\partial\Omega|}u_j, & {\rm on}\ \partial\Omega,
\end{cases}
\end{equation}
which satisfies the additional condition
\begin{equation}\label{uj1condition}
\int_{\partial\Omega}u_j^1 u_j d\sigma=\left(\frac{\mu_j^1}{2\mu_j}+\frac{M\mu_j}{3|\partial\Omega|}\right)\,.
\end{equation}
The existence and uniqueness of $u_j^1$ is a consequence of Proposition \ref{A1} in the Appendix. Then we introduce the auxiliary function $w_j^1(s,\xi)$ from $[0,|\partial\Omega|)\times [0,1]$ to $\mathbb R$ defined by
\begin{equation}\label{w1}
\begin{split}
w_j^1(s,\xi)&:=-\frac{\kappa(s)M\mu_j}{6|\partial\Omega|}(u_j\circ\psi_{\varepsilon}(s,0))(\xi-1)^3\\
&\quad+\frac{M^2\mu_j^2}{24|\partial\Omega|^2}(u_j\circ\psi_{\varepsilon}(s,0))(\xi^2+2\xi+9)(\xi-1)^2\\
&\quad+\left(\frac{|\Omega|\mu_j}{2|\partial\Omega|}(u_j\circ\psi_{\varepsilon}(s,0))-\frac{M}{2|\partial\Omega|}(\mu_j (u_j^1\circ\psi_{\varepsilon}(s,0))\right.\\
&\qquad\qquad \left.+\mu_j^1 (u_j\circ\psi_{\varepsilon}(s,0)))-\frac{KM\mu_j}{4|\partial\Omega|^2}(u_j\circ\psi_{\varepsilon}(s,0))\right)(\xi-1)^2,
\end{split}
\end{equation}
for all $(s,\xi)\in[0,|\partial\Omega|)\times[0,1]$ (see \eqref{K} for the definition of $K$). A straightforward computation shows that
\[
\begin{split}
&-\partial^2_{\xi}w_j^1(s,\xi)\\
&=-\kappa(s)\partial_{\xi}w_j(s,\xi)+\frac{M}{|\partial\Omega|}\bigg(\mu_j (u_j^1\circ\psi_{\varepsilon}(s,0))+\mu_jw_j(s,\xi)+\mu_j^1(u_j\circ\psi_{\varepsilon}(s,0))\\
&\qquad\qquad-\xi\frac{M\mu_j^2}{|\partial\Omega|}(u_j\circ\psi_{\varepsilon}(s,0))-\frac{|\Omega|\mu_j}{M}(u_j\circ\psi_{\varepsilon}(s,0))+\frac{K\mu_j}{2|\partial\Omega|}(u_j\circ\psi_{\varepsilon}(s,0))\bigg)
\end{split}
\]
for all $(s,\xi)\in[0,|\partial\Omega|)\times (0,1)$. Moreover, $w_j^1$ satisfies
\begin{equation*}
w_j^1(s,1)=\partial_{\xi}w_j^1(s,1)=0
\end{equation*}
for all $s\in[0,|\partial\Omega|)$; indeed, each summand in \eqref{w1} contains the factor $(\xi-1)^2$, so that both $w_j^1$ and $\partial_{\xi}w_j^1$ vanish at $\xi=1$. Then, for all $\varepsilon\in(0,\varepsilon_{\Omega,j})$ we denote by $v_{j,\varepsilon}^1\in H^1(\Omega)$ the extension by $0$ of $w_j^1\circ\psi_{\varepsilon}^{(-1)}$ to $\Omega$. We note that by construction $v_{j,\varepsilon}^1\in H^1(\Omega)$. We also observe that the $L^2(\Omega)$ norm of $v_{j,\varepsilon}^1$ is in $O(\sqrt{\varepsilon})$ as $\varepsilon\rightarrow 0$. Indeed, we have the following proposition.
\begin{proposition}\label{vj1eL2} There is a constant $C>0$ such that $\|v_{j,\varepsilon}^1\|_{L^2(\Omega)}\le C\sqrt\varepsilon$ for all $\varepsilon\in(0,\varepsilon_{\Omega,j})$.
\end{proposition}
The proof is similar to that of Proposition \ref{vjeL2} and it is accordingly omitted. We also observe that $\sqrt{\varepsilon}\|v_{j,\varepsilon}^1\|_{\varepsilon}$ is uniformly bounded for $\varepsilon\in(0,\varepsilon_{\Omega,j})$, as it is stated in the following proposition.
\begin{proposition}\label{vj2ee} There is a constant $C>0$ such that $\sqrt\varepsilon\norm{v_{j,\varepsilon}^1}_\varepsilon\le C$ for all $\varepsilon\in(0,\varepsilon_{\Omega,j})$.
\end{proposition}
The proof of Proposition \ref{vj2ee} can be effected by following the footsteps of the proof of Proposition \ref{vjee} and it is accordingly omitted.
Possibly choosing smaller values for $r^*_j$ and $\delta_j$, we have the following Lemma \ref{only2}, which is the analogue of Lemma \ref{only}.
\begin{lemma}\label{only2} There exist $\delta_j\in(0,\varepsilon_{\Omega,j})$ and $r^*_j>0$ such that, for all $\varepsilon\in(0,\delta_j)$ the only eigenvalue of $\mathcal A_{\varepsilon}$ in the interval
\[
\left[\frac{1}{1+{\mu_j+\varepsilon\mu_j^1}}-r^*_j,\frac{1}{1+{\mu_j+\varepsilon\mu_j^1}}+r^*_j\right]
\]
is $\frac{1}{1+\lambda_{j}(\varepsilon)}$.
\end{lemma}
The proof of Lemma \ref{only2} is similar to that of Lemma \ref{only} and accordingly it is omitted.
We now consider the operator $\mathcal A_{\varepsilon}$ introduced in Section \ref{sec:2}. In order to complete the proof of Theorems \ref{asymptotic_eigenvalues} and \ref{asymptotic_eigenfunctions} we plan to apply Lemma \ref{lemma_fondamentale} to $\mathcal A_{\varepsilon}$ with
\[
\text{$H=\mathcal H_{\varepsilon}(\Omega)$, $\eta=\frac{1}{1+\mu_j+\varepsilon\mu_j^1}$, $u=\frac{u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1}{\|u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1\|_{\varepsilon}}$, and $r=C\varepsilon^2<r^*_j$,}
\]
where $C>0$ is a constant which does not depend on $\varepsilon$. As we did in Section \ref{sec:4}, we have to verify that the assumptions of Lemma \ref{lemma_fondamentale} are satisfied. We observe here that, due to Proposition \ref{vj1eL2}, the $L^2(\Omega)$ norm of $\varepsilon^2 v^1_{j,\varepsilon}$ is in $o(\varepsilon^2)$ and accordingly the term $\varepsilon^2 v^1_{j,\varepsilon}$ is negligible in the approximation \eqref{expansion_eigenfunctions}. However, since we will deduce \eqref{expansion_eigenfunctions} from a suitable approximation in the $\|\cdot\|_\varepsilon$ norm (cf.~inequality \eqref{r*step2} below), we have to take into account also the contribution of $\varepsilon^2 v^1_{j,\varepsilon}$ (see also Proposition \ref{vj2ee}).
We begin with the following lemma.
\begin{lemma}
There exists a constant $C_6>0$ such that
\small
\begin{equation}\label{condition_2}
\left|\left\langle\mathcal A_{\varepsilon}(u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1)-\frac{1}{1+\mu_j+\varepsilon\mu_j^1}(u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1),\varphi\right\rangle_{\varepsilon}\right|\leq C_6\varepsilon^2 \|\varphi\|_{\varepsilon},
\end{equation}
\normalsize
for all $\varphi\in \mathcal H_{\varepsilon}(\Omega)$ and for all $\varepsilon\in(0,\varepsilon_{\Omega,j})$.
\end{lemma}
\proof
By \eqref{bilinear_eps} and \eqref{A_eps} we have
\begin{equation}\label{eprod}
\begin{split}
&\left|\left\langle\mathcal A_{\varepsilon}(u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1)-\frac{1}{1+\mu_j+\varepsilon\mu_j^1}(u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1),\varphi\right\rangle_{\varepsilon}\right|\\
&=\frac{1}{|1+\mu_j+\varepsilon\mu_j^1|}\left|(\mu_j+\varepsilon\mu_j^1)\int_{\Omega}\rho_{\varepsilon}(u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1)\varphi dx\right.\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\left.-\int_{\Omega}\nabla (u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1)\cdot\nabla\varphi dx\right|.
\end{split}
\end{equation}
We consider the summands appearing in the absolute value on the right-hand side of equality \eqref{eprod} separately and we re-organize them in a more suitable way. We start with the terms involving $u_j$ and $u_j^1$. We have
\begin{equation}\label{xy0}
\mu_j\int_{\Omega}\rho_{\varepsilon}u_j\varphi dx=\varepsilon\mu_j\int_{\Omega}u_j\varphi dx+\mu_j\int_{\omega_{\varepsilon}}\frac{\tilde\rho_{\varepsilon}}{\varepsilon}u_j\varphi dx.
\end{equation}
By using \eqref{asymptotic_rho} we observe that
\begin{multline}\label{new1}
\mu_j\int_{\omega_{\varepsilon}}\frac{\tilde\rho_{\varepsilon}}{\varepsilon}u_j\varphi dx=\mu_j\int_{\omega_{\varepsilon}}\frac{M}{\varepsilon|\partial\Omega|}u_j\varphi dx+\mu_j\int_{\omega_{\varepsilon}}\frac{K M}{2 |\partial\Omega|^2}u_j\varphi dx\\
-\mu_j\int_{\omega_{\varepsilon}}\frac{|\Omega|}{|\partial\Omega|}u_j\varphi dx+\mu_j\,\varepsilon\,\tilde{R}(\varepsilon)\int_{\omega_{\varepsilon}}u_j\varphi dx.
\end{multline}
By the rule of change of variables in integrals we have for the first term in the right-hand side of \eqref{new1}
\begin{multline}\label{new2}
\mu_j\int_{\omega_{\varepsilon}}\frac{M}{\varepsilon|\partial\Omega|}u_j\varphi dx
=\mu_j\int_0^{|\partial\Omega|}\int_0^1\frac{M}{|\partial\Omega|}(u_j\circ\psi_{\varepsilon})(\varphi\circ\psi_{\varepsilon}) d\xi ds\\
-\mu_j\varepsilon\int_0^{|\partial\Omega|}\int_0^1\frac{M}{|\partial\Omega|}\xi\kappa(s)(u_j\circ\psi_{\varepsilon}(s,\xi))(\varphi\circ\psi_{\varepsilon}(s,\xi)) d\xi ds,
\end{multline}
while for the second term in the right-hand side of \eqref{new1} we have
\begin{multline}\label{new3}
\mu_j\int_{\omega_{\varepsilon}}\frac{K M}{2|\partial\Omega|^2}u_j\varphi dx
=\mu_j\varepsilon\int_0^{|\partial\Omega|}\int_0^1\frac{K M}{2|\partial\Omega|^2}(u_j\circ\psi_{\varepsilon})(\varphi\circ\psi_{\varepsilon}) d\xi ds\\
-\mu_j\varepsilon^2\int_0^{|\partial\Omega|}\int_0^1\frac{K M}{2|\partial\Omega|^2}\xi\kappa(s)(u_j\circ\psi_{\varepsilon}(s,\xi))(\varphi\circ\psi_{\varepsilon}(s,\xi)) d\xi ds.
\end{multline}
For the third term in the right-hand side of \eqref{new1} we have
\begin{multline}\label{new4}
-\mu_j\int_{\omega_{\varepsilon}}\frac{|\Omega|}{|\partial\Omega|}u_j\varphi dx
=-\mu_j\varepsilon\int_0^{|\partial\Omega|}\int_0^1\frac{|\Omega|}{|\partial\Omega|}(u_j\circ\psi_{\varepsilon})(\varphi\circ\psi_{\varepsilon}) d\xi ds\\
-\mu_j\varepsilon^2\int_0^{|\partial\Omega|}\int_0^1\frac{|\Omega|}{|\partial\Omega|}\xi\kappa(s)(u_j\circ\psi_{\varepsilon}(s,\xi))(\varphi\circ\psi_{\varepsilon}(s,\xi)) d\xi ds.
\end{multline}
We set
\begin{equation}\label{rem1}
\begin{split}
R_1(\varepsilon)&:=\mu_j \,\varepsilon\,\tilde{R}(\varepsilon)\int_{\omega_{\varepsilon}}u_j\varphi dx\\
&\quad-\mu_j\varepsilon^2\int_0^{|\partial\Omega|}\int_0^1\frac{K M}{2|\partial\Omega|^2}\xi\kappa(s)(u_j\circ\psi_{\varepsilon}(s,\xi))(\varphi\circ\psi_{\varepsilon}(s,\xi)) d\xi ds\\
&\quad-\mu_j\varepsilon^2\int_0^{|\partial\Omega|}\int_0^1\frac{|\Omega|}{|\partial\Omega|}\xi\kappa(s)(u_j\circ\psi_{\varepsilon}(s,\xi))(\varphi\circ\psi_{\varepsilon}(s,\xi)) d\xi ds.
\end{split}
\end{equation}
Then, by \eqref{xy0}-\eqref{rem1}, we have
\begin{equation}\label{piece1}
\begin{split}
\mu_j\int_{\Omega}\rho_{\varepsilon}u_j\varphi dx&=\varepsilon\mu_j\int_{\Omega}u_j\varphi dx+\mu_j\int_0^{|\partial\Omega|}\int_0^1\frac{M}{|\partial\Omega|}(u_j\circ\psi_{\varepsilon})(\varphi\circ\psi_{\varepsilon}) d\xi ds\\
&\quad-\mu_j\varepsilon\int_0^{|\partial\Omega|}\int_0^1\frac{M}{|\partial\Omega|}\xi\kappa(s)(u_j\circ\psi_{\varepsilon}(s,\xi))(\varphi\circ\psi_{\varepsilon}(s,\xi)) d\xi ds\\
&\quad+\mu_j\varepsilon\int_0^{|\partial\Omega|}\int_0^1\frac{K M}{2|\partial\Omega|^2}(u_j\circ\psi_{\varepsilon})(\varphi\circ\psi_{\varepsilon}) d\xi ds\\
&\quad-\mu_j\varepsilon\int_0^{|\partial\Omega|}\int_0^1\frac{|\Omega|}{|\partial\Omega|}(u_j\circ\psi_{\varepsilon})(\varphi\circ\psi_{\varepsilon}) d\xi ds+R_1(\varepsilon).
\end{split}
\end{equation}
In a similar way we observe that
\begin{equation}\label{piece2}
\varepsilon\mu_j\int_{\Omega}\rho_{\varepsilon}u_j^1\varphi dx=\varepsilon\mu_j\int_0^{|\partial\Omega|}\int_0^1\frac{M}{|\partial\Omega|}(u_j^1\circ\psi_{\varepsilon})(\varphi\circ\psi_{\varepsilon})d\xi ds+R_2(\varepsilon),
\end{equation}
where
\begin{equation}\label{rem2}
\begin{split}
R_2(\varepsilon)&:=\varepsilon^2\mu_j\int_{\Omega}u_j^1\varphi dx+\mu_j\varepsilon\int_{\omega_{\varepsilon}}\left(\frac{KM}{2|\partial\Omega|^2}-\frac{|\Omega|}{|\partial\Omega|}+\varepsilon\tilde R(\varepsilon)\right)u_j^1\varphi dx\\
&\quad-\varepsilon^2\mu_j\int_0^{|\partial\Omega|}\int_0^1\frac{M}{|\partial\Omega|}(u_j^1\circ\psi_{\varepsilon}(s,\xi))(\varphi\circ\psi_{\varepsilon}(s,\xi))\xi\kappa(s)d\xi ds.
\end{split}
\end{equation}
We also observe that
\begin{equation}\label{piece3}
\varepsilon\mu_j^1\int_{\Omega}\rho_{\varepsilon}u_j\varphi dx=\varepsilon\mu_j^1\int_0^{|\partial\Omega|}\int_0^1\frac{M}{|\partial\Omega|}(u_j\circ\psi_{\varepsilon}(s,\xi))(\varphi\circ\psi_{\varepsilon}(s,\xi))d\xi ds+R_3(\varepsilon),
\end{equation}
where
\begin{equation*}
\begin{split}
R_3(\varepsilon)&:=\varepsilon^2\mu_j^1\int_{\Omega}u_j\varphi dx+\mu_j^1\varepsilon\int_{\omega_{\varepsilon}}\left(\frac{KM}{2|\partial\Omega|^2}-\frac{|\Omega|}{|\partial\Omega|}+\varepsilon\tilde R(\varepsilon)\right)u_j\varphi dx\\
&\quad-\varepsilon^2\mu_j^1\int_0^{|\partial\Omega|}\int_0^1\frac{M}{|\partial\Omega|}(u_j\circ\psi_{\varepsilon}(s,\xi))(\varphi\circ\psi_{\varepsilon}(s,\xi))\xi\kappa(s)d\xi ds.
\end{split}
\end{equation*}
We also find it convenient to set
\begin{equation}\label{rem4}
R_4(\varepsilon):=\varepsilon^2\mu_j^1\int_{\Omega}u_j^1\varphi dx.
\end{equation}
Since $u_j$ is an eigenfunction of \eqref{Steklov} associated with the eigenvalue $\mu_j$, a standard argument based on the divergence theorem shows that
\begin{equation}\label{piece5}
\int_{\Omega}\nabla u_j\cdot\nabla\varphi dx=\int_{\partial\Omega}\frac{M\mu_j}{|\partial\Omega|}u_j\varphi d\sigma.
\end{equation}
Moreover, the pair $(u_j^1,\mu_j^1)$ solves problem \eqref{u1_problem_simple_true}, and therefore
\begin{equation}\label{piece6}
\begin{split}
&\varepsilon\int_{\Omega}\nabla u_j^1\cdot\nabla\varphi dx\\
&\quad=\varepsilon\int_{\Omega}\mu_ju_j\varphi dx\\
&\qquad+\varepsilon\int_{\partial\Omega}\left(\frac{M\mu_j}{2|\partial\Omega|^2}(K-|\partial\Omega|\kappa\circ\gamma^{(-1)})-\frac{2M^2\mu_j^2}{3|\partial\Omega|^2}+\frac{M\mu_j^1-|\Omega|\mu_j}{|\partial\Omega|}\right)u_j\varphi d\sigma\\
&\qquad+\varepsilon\int_{\partial\Omega}\frac{M\mu_j}{|\partial\Omega|}u_j^1\varphi d\sigma.
\end{split}
\end{equation}
Now we consider the terms involving $v_{j,\varepsilon}$ and $v_{j,\varepsilon}^1$. We have
\begin{equation}\label{sopra}
\varepsilon\mu_j\int_{\Omega}\rho_{\varepsilon}v_{j,\varepsilon}\varphi dx=\varepsilon^2\mu_j\int_{\omega_{\varepsilon}}v_{j,\varepsilon}\varphi dx+\mu_j\int_{\omega_{\varepsilon}}\tilde\rho_{\varepsilon}v_{j,\varepsilon}\varphi dx\,.
\end{equation}
By the rule of change of variables in integrals and by \eqref{asymptotic_rho} we observe that for the second summand in the right-hand side of \eqref{sopra} it holds
\begin{equation*}
\begin{split}
&\mu_j\int_{\omega_{\varepsilon}}\tilde\rho_{\varepsilon}v_{j,\varepsilon}\varphi dx\\
&=\varepsilon\mu_j\int_0^{|\partial\Omega|}\int_0^1\frac{M}{|\partial\Omega|}w_j(\varphi\circ\psi_{\varepsilon})d\xi ds\\
&\quad-\varepsilon^2\mu_j \int_0^{|\partial\Omega|}\int_0^1\frac{M}{|\partial\Omega|}\kappa(s)\xi w_j(s,\xi)(\varphi\circ\psi_{\varepsilon}(s,\xi))d\xi ds\\
&\quad+\varepsilon^2\mu_j\int_0^{|\partial\Omega|}\int_0^1\left(\frac{KM}{2|\partial\Omega|^2}-\frac{|\Omega|}{|\partial\Omega|}+\varepsilon\tilde R(\varepsilon)\right)w_j(s,\xi)(\varphi\circ\psi_{\varepsilon}(s,\xi))(1-\varepsilon\xi\kappa(s))d\xi ds.
\end{split}
\end{equation*}
Hence, by \eqref{w0} we write
\begin{equation}\label{piece7}
\begin{split}
&\varepsilon\mu_j\int_{\Omega}\rho_{\varepsilon}v_{j,\varepsilon}\varphi dx\\
&\qquad=-\varepsilon\mu_j\int_0^{|\partial\Omega|}\int_0^1\frac{M^2}{2|\partial\Omega|^2}(\xi-1)^2(u_j\circ\psi_{\varepsilon}(s,0))(\varphi\circ\psi_{\varepsilon}(s,\xi))d\xi ds+R_5(\varepsilon),
\end{split}
\end{equation}
where
\begin{equation*}
\begin{split}
&R_5(\varepsilon):=\varepsilon^2\mu_j\int_{\omega_{\varepsilon}}v_{j,\varepsilon}\varphi dx
-\varepsilon^2\mu_j \int_0^{|\partial\Omega|}\int_0^1\frac{M}{|\partial\Omega|}\kappa(s)\xi w_j(s,\xi)(\varphi\circ\psi_{\varepsilon}(s,\xi))d\xi ds\\
&+\varepsilon^2\mu_j\int_0^{|\partial\Omega|}\int_0^1\left(\frac{KM}{2|\partial\Omega|^2}-\frac{|\Omega|}{|\partial\Omega|}+\varepsilon\tilde R(\varepsilon)\right)w_j(s,\xi)(\varphi\circ\psi_{\varepsilon}(s,\xi))(1-\varepsilon\xi\kappa(s))d\xi ds.
\end{split}
\end{equation*}
We also find it convenient to set
\begin{equation}\label{rem5}
R_6(\varepsilon):=\varepsilon^2\int_{\Omega}\rho_{\varepsilon}(\mu_j v_{j,\varepsilon}^1+\mu_j^1v_{j,\varepsilon}+\varepsilon\mu_j^1 v_{j,\varepsilon}^1)\varphi dx.
\end{equation}
Now, by \eqref{grad2} and \eqref{w0}, by the theorem on change of variables in integrals, and by integrating by parts with respect to the variable $\xi$ we have that
\begin{equation*}
\begin{split}
&\varepsilon\int_{\Omega}\nabla v_{j,\varepsilon}\cdot\nabla\varphi dx=\varepsilon\int_{\omega_{\varepsilon}}\nabla v_{j,\varepsilon}\cdot\nabla\varphi dx\\
&\quad=\varepsilon^2\int_0^{|\partial\Omega|}\int_0^1\Big(\frac{1}{\varepsilon^2}\partial_{\xi}w_j(s,\xi)\partial_{\xi}(\varphi\circ\psi_{\varepsilon})(s,\xi)\\
&\qquad+\frac{\partial_s w_j(s,\xi)\partial_s(\varphi\circ\psi_{\varepsilon})(s,\xi)}{(1-\varepsilon\xi\kappa(s))^2}\Big)(1-\varepsilon\xi\kappa(s))d\xi ds\\
&\quad=\int_0^{|\partial\Omega|}\int_0^1\frac{M\mu_j}{|\partial\Omega|}(u_j\circ\psi_{\varepsilon}(s,0))(\varphi\circ\psi_{\varepsilon}(s,\xi)) d\xi ds-\int_{\partial\Omega}\frac{M\mu_j}{|\partial\Omega|}u_j\varphi d\sigma\\
&\qquad-\varepsilon\int_0^{|\partial\Omega|}\int_0^1\frac{\mu_j M}{|\partial\Omega|}\kappa(s)\left(2\xi-1\right)(u_j\circ\psi_{\varepsilon}(s,0))(\varphi\circ\psi_{\varepsilon}(s,\xi))d\xi ds\\
&\qquad+\varepsilon^2\int_0^{|\partial\Omega|}\int_0^1 \left(\frac{\partial_s w_j(s,\xi)\partial_s(\varphi\circ\psi_{\varepsilon})(s,\xi)}{(1-\varepsilon\xi\kappa(s))^2}\right)(1-\varepsilon\xi\kappa(s))d\xi ds.
\end{split}
\end{equation*}
We write
\begin{equation}\label{piece8}
\begin{split}
&\varepsilon\int_{\Omega}\nabla v_{j,\varepsilon}\cdot\nabla\varphi dx\\
&\quad=\int_0^{|\partial\Omega|}\int_0^1\frac{M\mu_j}{|\partial\Omega|}(u_j\circ\psi_{\varepsilon}(s,0))(\varphi\circ\psi_{\varepsilon}(s,\xi)) d\xi ds-\int_{\partial\Omega}\frac{M\mu_j}{|\partial\Omega|}u_j\varphi d\sigma\\
&\qquad-\varepsilon\int_0^{|\partial\Omega|}\int_0^1\frac{\mu_j M}{|\partial\Omega|}\kappa(s)\left(2\xi-1\right)(u_j\circ\psi_{\varepsilon}(s,0))(\varphi\circ\psi_{\varepsilon}(s,\xi))d\xi ds+R_7(\varepsilon),
\end{split}
\end{equation}
where
\begin{equation*}
R_7(\varepsilon):=\varepsilon^2\int_0^{|\partial\Omega|}\int_0^1 \left(\frac{\partial_s w_j(s,\xi)\partial_s(\varphi\circ\psi_{\varepsilon})(s,\xi)}{(1-\varepsilon\xi\kappa(s))^2}\right)(1-\varepsilon\xi\kappa(s))d\xi ds.
\end{equation*}
Analogously, from \eqref{grad2}, \eqref{w1}, by a change of variables in the integrals and integrating by parts with respect to the variable $\xi$, we see that
\begin{equation}\label{new9}
\begin{split}
&\varepsilon^2\int_{\Omega}\nabla v_{j,\varepsilon}^1\cdot\nabla\varphi dx=\varepsilon^2\int_{\omega_{\varepsilon}}\nabla v_{j,\varepsilon}^1\cdot\nabla\varphi dx\\
&=-\varepsilon\int_{\partial\Omega}\left(\frac{M\mu_j(K-|\partial\Omega|\kappa)}{2|\partial\Omega|^2}-\frac{2M^2\mu_j^2}{3|\partial\Omega|^2}+\frac{M\mu_j^1-|\Omega|\mu_j}{|\partial\Omega|}\right)u_j\varphi d\sigma\\
&\quad-\varepsilon\int_{\partial\Omega}\frac{M\mu_j}{|\partial\Omega|}u_j^1\varphi d\sigma\\
&\quad+\varepsilon\int_0^{|\partial\Omega|}\int_0^1\frac{M\mu_j\kappa(s)}{|\partial\Omega|}(\xi-1)(u_j\circ\psi_{\varepsilon}(s,0))(\varphi\circ\psi_{\varepsilon}(s,\xi)) d\xi ds\\
&\quad+\varepsilon\int_0^{|\partial\Omega|}\int_0^1\frac{M}{|\partial\Omega|}\Big(\mu_j (u_j^1\circ\psi_{\varepsilon}(s,0))-\frac{M\mu_j^2}{2|\partial\Omega|}(\xi-1)^2 (u_j\circ\psi_{\varepsilon}(s,0))\\
&\quad\qquad\qquad\qquad+\mu_j^1 (u_j\circ\psi_{\varepsilon}(s,0))-\frac{M\mu_j^2}{|\partial\Omega|}\xi (u_j\circ\psi_{\varepsilon}(s,0))-\frac{|\Omega|\mu_j}{M}(u_j\circ\psi_{\varepsilon}(s,0))\\
&\quad\qquad\qquad\qquad+\frac{K\mu_j}{2|\partial\Omega|}(u_j\circ\psi_{\varepsilon}(s,0))\Big)(\varphi\circ\psi_{\varepsilon}(s,\xi)) d\xi ds+R_8(\varepsilon),
\end{split}
\end{equation}
where
\begin{equation*}
\begin{split}
R_8(\varepsilon)&:=\varepsilon^3\int_0^{|\partial\Omega|}\int_0^1 \left(\frac{\partial_s w_j^1(s,\xi)\partial_s(\varphi\circ\psi_{\varepsilon})(s,\xi)}{(1-\varepsilon\xi\kappa(s))^2}\right)(1-\varepsilon\xi\kappa(s))d\xi ds\\
&\quad-\varepsilon^2 \int_0^{|\partial\Omega|}\int_0^1 \xi\kappa(s)\partial_{\xi}w_j^1(s,\xi)\partial_{\xi}(\varphi\circ\psi_{\varepsilon})(s,\xi)d\xi ds.
\end{split}
\end{equation*}
From \eqref{piece1}, \eqref{piece2}, \eqref{piece3}, \eqref{rem4}, \eqref{piece5}, \eqref{piece6}, \eqref{piece7}, \eqref{rem5}, \eqref{piece8} and \eqref{new9}, and by a standard computation, it follows that the right-hand side of the equality in \eqref{eprod} equals
\begin{equation*}
\frac{1}{|1+\mu_j+\varepsilon\mu_j^1|}\left|I_{1,\varepsilon}+I_{2,\varepsilon}+I_{3,\varepsilon}+I_{4,\varepsilon}+I_{5,\varepsilon}+I_{6,\varepsilon}+I_{7,\varepsilon}\right|
\end{equation*}
with
\footnotesize
\begin{eqnarray}
I_{1,\varepsilon}&=&\frac{M\mu_j}{|\partial\Omega|}\int_0^{|\partial\Omega|}\int_0^1\Big((u_j\circ\psi_{\varepsilon}(s,\xi))-(u_j\circ\psi_{\varepsilon}(s,0))\nonumber\\
&&\ \ \ \ \ \ \ \ \ \ \ \ +\varepsilon\frac{M\mu_j}{|\partial\Omega|}\xi (u_j\circ\psi_{\varepsilon}(s,0))\Big)(\varphi\circ\psi_{\varepsilon}(s,\xi))d\xi ds\label{f1}\\
I_{2,\varepsilon}&=&-\varepsilon\frac{M\mu_j}{|\partial\Omega|}\int_0^{|\partial\Omega|}\int_0^1\left((u_j\circ\psi_{\varepsilon}(s,\xi))\right.\nonumber\\
&&\left.\ \ \ \ \ \ \ \ \ \ \ \ -(u_j\circ\psi_{\varepsilon}(s,0))\right)(\varphi\circ\psi_{\varepsilon}(s,\xi))\xi\kappa(s)d\xi ds\label{f2}\\
I_{3,\varepsilon}&=&\varepsilon\frac{\mu_j^1 M}{|\partial\Omega|}\int_0^{|\partial\Omega|}\int_0^1\left((u_j\circ\psi_{\varepsilon}(s,\xi))-(u_j\circ\psi_{\varepsilon}(s,0))\right)(\varphi\circ\psi_{\varepsilon}(s,\xi))d\xi ds\label{f3}\\
I_{4,\varepsilon}&=&\varepsilon\frac{M\mu_j}{|\partial\Omega|}\int_0^{|\partial\Omega|}\int_0^1\left((u_j^1\circ\psi_{\varepsilon}(s,\xi))-(u_j^1\circ\psi_{\varepsilon}(s,0))\right)(\varphi\circ\psi_{\varepsilon}(s,\xi))d\xi ds\label{f5}\\
I_{5,\varepsilon}&=&-\varepsilon\frac{\mu_j|\Omega|}{|\partial\Omega|}\int_0^{|\partial\Omega|}\int_0^1\left((u_j\circ\psi_{\varepsilon}(s,\xi))-(u_j\circ\psi_{\varepsilon}(s,0))\right)(\varphi\circ\psi_{\varepsilon}(s,\xi))d\xi ds\label{f6}\\
I_{6,\varepsilon}&=&\varepsilon\frac{\mu_j K M}{2|\partial\Omega|^2}\int_0^{|\partial\Omega|}\int_0^1\left((u_j\circ\psi_{\varepsilon}(s,\xi))-(u_j\circ\psi_{\varepsilon}(s,0))\right)(\varphi\circ\psi_{\varepsilon}(s,\xi))d\xi ds,\label{f7}\\
I_{7,\varepsilon}&=&\sum_{k=1}^8R_k(\varepsilon)\label{f8}.
\end{eqnarray}
\normalsize
To prove the validity of the lemma we will show that there exists $C>0$ such that
\begin{equation}\label{aim2}
|I_{k,\varepsilon}|\leq C\varepsilon^2\|\varphi\|_{\varepsilon}\ \ \ \forall\varepsilon\in(0,\varepsilon_{\Omega,j})\,,\varphi\in\mathcal H_{\varepsilon}(\Omega)
\end{equation}
for all $k\in\left\{1,...,7\right\}$. Throughout the rest of the proof we find it convenient to denote by $C$ a positive constant which does not depend on $\varepsilon$ and $\varphi$ and which may be re-defined line by line.
Since $\Omega$ is assumed to be of class $C^{3}$, $u_j$ is a solution of \eqref{Steklov} and $u_j^1$ is a solution of \eqref{u1_problem_simple_true}, a classical elliptic regularity argument shows that $u_j,u_j^1\in C^{2}(\overline\Omega)$ (see e.g., \cite{agmon1}). Thus we conclude that the terms \eqref{f2}-\eqref{f7} can be bounded from above by $C\varepsilon^2\|\varphi\|_{\varepsilon}$ by the same argument used to study $J_{8,\varepsilon}$ in the proof of Lemma \ref{C1} (cf.~\eqref{J801}-\eqref{J8}). Hence \eqref{aim2} holds for $k\in\left\{2,...,6\right\}$.
Now we estimate $I_{1,\varepsilon}$ (cf.~\eqref{f1}). It is convenient to pass to the coordinates $(s,t)$ by the change of variables $x=\psi(s,t)$. From the regularity assumptions on $\Omega$ we have that $\psi$ is of class $C^2$ from $[0,|\partial\Omega|)\times(0,\varepsilon)$ to $\mathbb R^2$, for all $\varepsilon\in(0,\varepsilon_{\Omega,j})$. Thus $u_j\circ\psi$ is of class $C^2$ from $[0,|\partial\Omega|)\times(0,\varepsilon)$ to $\mathbb R$, for all $\varepsilon\in(0,\varepsilon_{\Omega,j})$. Therefore, for each $(s,t)\in[0,|\partial\Omega|)\times(0,\varepsilon)$, there exist $t^*\in(0,t)$ and $t^{**}\in(0,t^*)$ such that
\begin{equation*}
(u_j\circ\psi)(s,t)-(u_j\circ\psi)(s,0)=t\partial_t(u_j\circ\psi)(s,t^*)
\end{equation*}
and
\begin{equation*}
\partial_t(u_j\circ\psi)(s,t^*)-\partial_t(u_j\circ\psi)(s,0)=t^*\partial^2_t(u_j\circ\psi)(s,t^{**}).
\end{equation*}
Moreover we note that $\partial_t u_j(\psi(s,0))=-\partial_{\nu} u_j(\gamma(s))=-\frac{\mu_j M}{|\partial\Omega|}u_j(\psi(s,0))$.
Then by \eqref{f1} we have
\[
\begin{split}
I_{1,\varepsilon}&=\frac{M\mu_j}{|\partial\Omega|}\int_0^{|\partial\Omega|}\frac{1}{\varepsilon}\int_0^{\varepsilon}t \bigl(\partial_t(u_j\circ\psi)(s,t^*)-\partial_t(u_j\circ\psi)(s,0)\bigr)(\varphi\circ\psi(s,t))dt ds\\
&=\frac{M\mu_j}{|\partial\Omega|}\int_0^{|\partial\Omega|}\frac{1}{\varepsilon}\int_0^{\varepsilon}t\,t^* \partial^2_t(u_j\circ\psi)(s,t^{**})(\varphi\circ\psi(s,t))dt ds.
\end{split}
\]
Then, by a computation based on the H\"older inequality, we verify that
\[
\begin{split}
\left|I_{1,\varepsilon}\right|&\leq\frac{M\mu_j}{|\partial\Omega|}\frac{1}{\varepsilon}\int_0^{|\partial\Omega|}\int_0^{\varepsilon}t^2\left|\partial^2_t (u_j\circ\psi)(s,t^{**})\right|\left|\varphi\circ\psi(s,t)\right|dt ds\\
&\leq\frac{M\mu_j }{|\partial\Omega|}\|u_j\|_{C^2(\overline\Omega)}\int_0^{|\partial\Omega|}\int_0^{\varepsilon}\frac{t^2}{\varepsilon^{\frac{1}{2}}}\frac{\left|\varphi\circ\psi(s,t)\right|}{\varepsilon^{\frac{1}{2}}}dt ds\\
&\leq\frac{M\mu_j }{|\partial\Omega|}\|u_j\|_{C^2(\overline\Omega)}\left(\int_0^{|\partial\Omega|}\int_0^{\varepsilon}\frac{t^4}{\varepsilon}dt ds\right)^{\frac{1}{2}}
\left(\int_0^{|\partial\Omega|}\int_0^{\varepsilon}\frac{\left(\varphi\circ\psi(s,t)\right)^2}{\varepsilon}dt ds\right)^{\frac{1}{2}}\\
&\leq\frac{M\mu_j C}{|\partial\Omega|\sqrt{5}}\varepsilon^2\|\varphi\|_{\varepsilon}
\end{split}
\]
where in the latter inequality we have used the argument of \eqref{J8}.
We conclude that \eqref{aim2} holds with $k=1$.
In order to complete the proof we have to estimate \eqref{f8}. Since $I_{7,\varepsilon}=\sum_{k=1}^8R_k(\varepsilon)$, we consider separately each $R_k(\varepsilon)$ and we start with $R_1(\varepsilon)$. By H\"older's inequality we can prove the following estimate for the first term in the definition \eqref{rem1} of $R_1(\varepsilon)$,
\begin{equation*}
\left|\mu_j \varepsilon\tilde{R}(\varepsilon)\int_{\omega_{\varepsilon}}u_j\varphi dx\right|\le\frac{\mu_j \varepsilon\tilde{R}(\varepsilon)}{\tilde\rho_{\varepsilon}}\int_{\omega_{\varepsilon}}\tilde\rho_{\varepsilon} \left|u_j\varphi \right|dx\leq \frac{\mu_j \varepsilon\tilde{R}(\varepsilon)}{\tilde\rho_{\varepsilon}}\|u_j\|_{\varepsilon}\|\varphi\|_{\varepsilon}\leq C\varepsilon^2\|\varphi\|_{\varepsilon}.
\end{equation*}
For the second summand in the right-hand side of \eqref{rem1}, a computation based on the rule of change of variables in integrals and on H\"older's inequality shows that
\begin{multline}\label{est_ch_var}
{\left|\mu_j\varepsilon^2\int_0^{|\partial\Omega|}\int_0^1\frac{K M}{2|\partial\Omega|^2}\xi\kappa(s)(u_j\circ\psi_{\varepsilon}(s,\xi))(\varphi\circ\psi_{\varepsilon}(s,\xi)) d\xi ds\right|}\\
{\leq \varepsilon^2\frac{C}{\tilde\rho_{\varepsilon}}\int_{\omega_{\varepsilon}}\frac{\tilde\rho_{\varepsilon}}{\varepsilon}\left|u_j\varphi \right| dx\leq C\varepsilon^2\|u_j\|_{\varepsilon}\|\varphi\|_{\varepsilon}.}
\end{multline}
Analogously, for the third summand in the right-hand side of \eqref{rem1} we have
\begin{multline*}
\left|\mu_j\varepsilon^2\int_0^{|\partial\Omega|}\int_0^1\frac{|\Omega|}{|\partial\Omega|}\xi\kappa(s)(u_j\circ\psi_{\varepsilon}(s,\xi))(\varphi\circ\psi_{\varepsilon}(s,\xi)) d\xi ds\right|\leq C\varepsilon^2\|u_j\|_{\varepsilon}\|\varphi\|_{\varepsilon}.
\end{multline*}
This proves that $\left|R_1(\varepsilon)\right|\leq C\varepsilon^2\|\varphi\|_{\varepsilon}$. Let us now consider $R_2(\varepsilon)$. By H\"older's inequality and by Proposition \ref{L2<eps}, one deduces the following inequality for the first term in the definition \eqref{rem2} of $R_2(\varepsilon)$,
\begin{equation*}
\left|\varepsilon^2\mu_j\int_{\Omega} u_j^1\varphi dx\right|\leq C\varepsilon^2\|u_j^1\|_{L^2(\Omega)}\|\varphi\|_{L^2(\Omega)}\leq C\varepsilon^2\|\varphi\|_{\varepsilon}.
\end{equation*}
For the second summand in the right-hand side of \eqref{rem2} we observe that, by an argument based on the H\"older inequality, we have
\begin{multline*}
\left|\mu_j\varepsilon\int_{\omega_{\varepsilon}}\left(\frac{KM}{2|\partial\Omega|^2}-\frac{|\Omega|}{|\partial\Omega|}+\varepsilon\tilde R(\varepsilon)\right)u_j^1\varphi dx\right|\\
{\leq C\mu_j\frac{\varepsilon^2}{\tilde\rho_{\varepsilon}}\int_{\omega_{\varepsilon}}\frac{\tilde\rho_{\varepsilon}}{\varepsilon}\left|u^1_j\varphi \right| dx\leq C\varepsilon^2\|u^1_j\|_{\varepsilon}\|\varphi\|_{\varepsilon}\le C\varepsilon^2\|\varphi\|_{\varepsilon},}
\end{multline*}
where in the latter inequality we have used the fact that
\begin{equation*}
\|u^1_j\|_{\varepsilon}\leq C\qquad\forall\varepsilon\in(0,\varepsilon_{\Omega,j})
\end{equation*}
for some $C>0$, a fact that can be proved by arguing as in \eqref{J1e_eq2}, \eqref{J1e_eq2.1}.
By a similar argument, we can also prove that the third summand in the right-hand side of \eqref{rem2} is smaller than $C\varepsilon^2\|\varphi\|_{\varepsilon}$. Hence, we deduce that
\[
\left|R_2(\varepsilon)\right|\leq C\varepsilon^2\|\varphi\|_{\varepsilon}.
\]
The proof that $|R_k(\varepsilon)|\leq C\varepsilon^2\|\varphi\|_{\varepsilon}$ for $k\in\left\{3,...,6\right\}$ can be effected by a straightforward modification of the proof that $|R_k(\varepsilon)|\leq C\varepsilon^2\|\varphi\|_{\varepsilon}$ for $k\in\left\{1,2\right\}$ and by exploiting Lemmas \ref{vjeL2} and \ref{vj1eL2}.
We now consider $R_7(\varepsilon)$. By integrating by parts with respect to the variable $s$, we have
\[
\begin{split}
&R_7(\varepsilon)=\varepsilon^2\int_0^{|\partial\Omega|}\int_0^1\frac{\partial_s w_j(s,\xi)\partial_s(\varphi\circ\psi_{\varepsilon})(s,\xi)}{(1-\varepsilon\xi\kappa(s))} d\xi ds\\
&\ =-\varepsilon^2\int_0^{|\partial\Omega|}\int_0^1\left(\frac{(1-\varepsilon\xi\kappa(s))\partial^2_sw_j(s,\xi)+\varepsilon\xi\kappa'(s)\partial_sw_j(s,\xi)}{(1-\varepsilon\xi\kappa(s))^2}\right)(\varphi\circ\psi_{\varepsilon}(s,\xi))d\xi ds.
\end{split}
\]
Then, by \eqref{positiveinf} and since $\kappa$ and $\kappa'$ are bounded on $(0,|\partial\Omega|)$ (because $\Omega$ is of class $C^3$) we deduce that
\[
\left|R_7(\varepsilon)\right|\leq C\varepsilon^2\int_0^{|\partial\Omega|}\int_0^1\left(\left|\partial^2_s w_j\right|+\left|\partial_sw_j\right|\right)\left|\varphi\right| d\xi ds.
\]
Hence, by the definition of $w_j$ in \eqref{w0} and by a computation based on the H\"older inequality (see also \eqref{est_ch_var}) we find that
\[
\left|R_7(\varepsilon)\right|\leq C\varepsilon^2\|u_j\|_{C^2(\overline\Omega)}\|\varphi\|_{\varepsilon}\,.
\]
We conclude that $\left|R_7(\varepsilon)\right|\leq C\varepsilon^2\|\varphi\|_{\varepsilon}$. In a similar way one can show that $\left|R_8(\varepsilon)\right|\leq C\varepsilon^2\|\varphi\|_{\varepsilon}$. The proof of the lemma is now complete.
\endproof
\end{lemma}
In the next step we verify that $\|u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1\|_{\varepsilon}^2-\frac{M}{|\partial\Omega|}\left(1+\mu_j+\varepsilon\mu_j^1\right)$ is in $O(\varepsilon^2)$ as $\varepsilon\rightarrow 0$.
\begin{lemma}\label{propedeutico_step2}
There exists a constant $C>0$ such that
\begin{equation*}
\left|\|u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1\|_{\varepsilon}^2-\frac{M}{|\partial\Omega|}\left(1+\mu_j+\varepsilon\mu_j^1\right)\right|\leq C\varepsilon^2,\ \ \ \forall\varepsilon\in (0,\varepsilon_{\Omega,j}).
\end{equation*}
\proof
A straightforward computation shows that
$$
\|u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1\|_{\varepsilon}^2=\sum_{k=1}^8N_{k,\varepsilon},
$$
where
\footnotesize
\[
\begin{split}
N_{1,\varepsilon}&:=\varepsilon\int_{\Omega}2\varepsilon u_ju_j^1+\varepsilon^2\left(u_j^1\right)^2dx+\int_{\Omega}\varepsilon^2|\nabla u_j^1|^2dx\\
&\quad+\varepsilon\int_{\omega_{\varepsilon}}\left(\varepsilon^2 v_{j,\varepsilon}^2+\varepsilon^4\left(v_{j,\varepsilon}^1\right)^2+2\varepsilon u_j v_{j,\varepsilon}+2\varepsilon^2 u_j^1 v_{j,\varepsilon}+2\varepsilon^3 u_j^1v_{j,\varepsilon}^1+2\varepsilon^2 u_j v_{j,\varepsilon}^1+2\varepsilon^3 v_{j,\varepsilon} v_{j,\varepsilon}^1\right)dx\\
&\quad+\int_{\omega_{\varepsilon}}\frac{\tilde\rho_{\varepsilon}}{\varepsilon}\left(\varepsilon^2\left(u_j^1\right)^2+\varepsilon^2 v_{j,\varepsilon}^2+\varepsilon^4\left(v_{j,\varepsilon}^1\right)^2+2\varepsilon^2 u_j^1 v_{j,\varepsilon}+2\varepsilon^3 u_j^1v_{j,\varepsilon}^1+2\varepsilon^2 u_j v_{j,\varepsilon}^1+2\varepsilon^3 v_{j,\varepsilon} v_{j,\varepsilon}^1\right)dx\\
&\quad+\int_{\omega_{\varepsilon}}\varepsilon^4|\nabla v_{j,\varepsilon}^1|^2+2\varepsilon^2\nabla u_j\cdot\nabla v_{j,\varepsilon}^1+2\varepsilon^2\nabla v_{j,\varepsilon}\cdot\nabla u_j^1+2\varepsilon^3\nabla v_{j,\varepsilon}\cdot\nabla v_{j,\varepsilon}^1+2\varepsilon^3\nabla u_j^1\cdot\nabla v_{j,\varepsilon}^1dx,\\
N_{2,\varepsilon}&:=\varepsilon\int_{\Omega}u_j^2dx,\\
N_{3,\varepsilon}&:=\int_{\omega_{\varepsilon}}\frac{\tilde\rho_{\varepsilon}}{\varepsilon}u_j^2 dx,\\
N_{4,\varepsilon}&:=2\int_{\omega_{\varepsilon}}\tilde\rho_{\varepsilon}u_j u_j^1 dx,\\
N_{5,\varepsilon}&:=2\int_{\omega_{\varepsilon}}\tilde\rho_{\varepsilon}u_j v_{j,\varepsilon} dx,\\
N_{6,\varepsilon}&:=\int_{\Omega}|\nabla u_j|^2+2\varepsilon\nabla u_j\cdot\nabla u_j^1\,dx,\\
N_{7,\varepsilon}&:=\varepsilon^2\int_{\omega_{\varepsilon}}|\nabla v_{j,\varepsilon}|^2dx,\\
N_{8,\varepsilon}&:=2\varepsilon\int_{\omega_{\varepsilon}}\nabla u_j\cdot\nabla v_{j,\varepsilon} dx.
\end{split}
\]\normalsize
We begin by considering $N_{1,\varepsilon}$. By standard elliptic regularity (see \cite{agmon1}) the functions $u_j$ and $u_j^1$ are of class $C^2$ on $\overline\Omega$. Then, by Propositions \ref{vjeL2}, \ref{vjee}, \ref{vj1eL2}, and \ref{vj2ee}, and by a standard computation one shows that
\[
\left|N_{1,\varepsilon}\right|\le C\varepsilon^2
\]
for some $C>0$.
We now re-write the $N_{k,\varepsilon}$'s with $k\in\left\{3,...,8\right\}$, in a more suitable way.
We start with $N_{3,\varepsilon}$. By the membership of $u_j$ in $C^{2}(\overline\Omega)$ and by the definition of the change of variable $\psi$, we deduce that the map from $[0,\varepsilon]$ to $\mathbb{R}$ which takes $t$ to $u_j\circ\psi(s,t)$ is of class $C^2$. Then, by the Taylor formula we have
\[
(u_j\circ\psi(s,{t}))^2=(u_j\circ\psi(s,0))^2+2tu_j(\psi(s,0))\partial_t(u_j\circ\psi)(s,0)+F(t)t^2\qquad\forall t\in[0,\varepsilon]\,,
\]
where $F\in C([0,\varepsilon])$. Since $u_j$ is a solution of \eqref{Steklov} it follows that
\begin{equation}\label{ujpsiTaylor}
(u_j\circ\psi(s,{t}))^2=(u_j\circ\psi(s,0))^2+2t\frac{M\mu_j}{|\partial\Omega|}(u_j\circ\psi(s,0))^2+F(t)t^2.
\end{equation}
Then, by the definition of $N_{3,\varepsilon}$, by \eqref{ujpsiTaylor}, and by the expansion \eqref{asymptotic_rho} of $\tilde\rho_{\varepsilon}$, we deduce that
\normalsize
\[
\begin{split}
N_{3,\varepsilon}&
=\int_0^{|\partial\Omega|}\int_0^{\varepsilon}\left(\frac{M}{\varepsilon|\partial\Omega|}+\frac{\frac{1}{2}KM-|\Omega||\partial\Omega|}{|\partial\Omega|^2}+{\varepsilon}\tilde{R}(\varepsilon)\right)\\
&\qquad\times \left((u_j\circ\psi(s,0))^2+2t\frac{M\mu_j}{|\partial\Omega|}(u_j\circ\psi(s,0))^2+F(t)t^2\right) (1-t\kappa(s))dtds.
\end{split}
\]
\normalsize
From standard computations and recalling that $\int_{\partial\Omega}u_j^2 d\sigma=1$, it follows that
$$
N_{3,\varepsilon}=\frac{M}{|\partial\Omega|}+\varepsilon\left(\frac{\frac{1}{2}KM-|\Omega||\partial\Omega|}{|\partial\Omega|^2}\right)-\varepsilon\frac{M^2\mu_j}{|\partial\Omega|^2}-\varepsilon\frac{M}{2|\partial\Omega|}\int_{\partial\Omega}u_j^2\kappa{\circ\gamma^{(-1)}} d\sigma+Q_{3,\varepsilon},
$$
where $Q_{3,\varepsilon}$ satisfies the inequality
$$
|Q_{3,\varepsilon}|\leq C\varepsilon^2.
$$
By a similar computation and by exploiting \eqref{w0} and \eqref{uj1condition}, one can also verify that
$$
N_{4,\varepsilon}=\varepsilon \frac{2M}{|\partial\Omega|}\left(\frac{\mu_j^1}{2\mu_j}+\frac{M\mu_j}{3|\partial\Omega|}\right)+Q_{4,\varepsilon}
$$
with
$$
|Q_{4,\varepsilon}|\leq C\varepsilon^2
$$
and that
$$
N_{5,\varepsilon}=-\varepsilon\frac{M^2\mu_j}{3|\partial\Omega|^2}+Q_{5,\varepsilon},
$$
with
$$
|Q_{5,\varepsilon}|\leq C\varepsilon^2.
$$
Now we turn to consider $N_{6,\varepsilon}$. By a computation based on the divergence theorem and on the equality $\partial_\nu u_j=\frac{M\mu_j}{|\partial\Omega|}u_j$, we find that
\[
\begin{split}
N_{6,\varepsilon}&=\frac{M\mu_j}{|\partial\Omega|}\int_{\partial\Omega}u_j^2d\sigma+2\varepsilon\frac{M\mu_j}{|\partial\Omega|}\int_{\partial\Omega}u_ju_j^1d\sigma+Q_{6,\varepsilon}\\
&=\frac{M\mu_j}{|\partial\Omega|}+\varepsilon\left(\frac{M\mu_j^1}{|\partial\Omega|}+\frac{2M^2\mu_j^2}{3|\partial\Omega|^2}\right)+Q_{6,\varepsilon},
\end{split}
\]
with
$$
Q_{6,\varepsilon}:=\varepsilon^2\int_{\Omega}|\nabla u_j^1|^2dx.
$$
Since $u_j^1\in C^2(\overline\Omega)$, we deduce that
$$
|Q_{6,\varepsilon}|\leq C\varepsilon^2.
$$
Next we consider $N_{7,\varepsilon}$. By passing to coordinates $(s,\xi)$ in the definition of $N_{7,\varepsilon}$, and by using formulas \eqref{grad2} and \eqref{w0}, one shows that
$$
N_{7,\varepsilon}=\varepsilon\frac{M^2\mu_j^2}{3|\partial\Omega|^2}+Q_{7,\varepsilon}
$$
with
$$
|Q_{7,\varepsilon}|\leq C \varepsilon^2.
$$
Finally we consider $N_{8,\varepsilon}$. By the membership of $u_j$ in $C^2(\overline\Omega)$, by the equality $\partial_\nu u_j=\frac{M\mu_j}{|\partial\Omega|}u_j$, and since $\partial_t(u_j\circ\psi)(s,0)=-\partial_{\nu}u_j(\gamma(s))$, we have
\[
\partial_\xi (u_j\circ\psi_{\varepsilon})(s,\xi)=-\varepsilon(\partial_\nu u_j)\circ\psi_{\varepsilon}(s,0)+\varepsilon^2\tilde U(s,\xi)=-\varepsilon\frac{M\mu_j}{|\partial\Omega|}(u_j\circ\psi_{\varepsilon})(s,0)+\varepsilon^2\tilde U(s,\xi),
\]
where $\tilde U$ is a continuous function on $[0,|\partial\Omega|]\times[0,1]$. Then, by passing to coordinates $(s,\xi)$ in the definition of $N_{8,\varepsilon}$, and by using formulas \eqref{grad2} and \eqref{w0}, one verifies that
$$
N_{8,\varepsilon}=-\varepsilon\frac{M^2\mu_j^2}{|\partial\Omega|^2}+Q_{8,\varepsilon},
$$
with
$$
|Q_{8,\varepsilon}|\leq C \varepsilon^2.
$$
Now we set $Q_\varepsilon:=N_{1,\varepsilon}+\sum_{k=3}^8 Q_{k,\varepsilon}$. A straightforward computation shows that
\begin{equation*}
\begin{split}
&\|u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1\|_{\varepsilon}^2-\frac{M}{|\partial\Omega|}\left(1+\mu_j+\varepsilon\mu_j^1\right)\\
&\quad=\frac{M}{|\partial\Omega|}\Bigg[1+\mu_j+\varepsilon\Bigg(\frac{|\partial\Omega|}{M}\int_{\Omega}u_j^2dx\\
&\qquad-\frac{|\Omega|}{M}-\frac{2M\mu_j}{3|\partial\Omega|}+\frac{K}{2|\partial\Omega|}-\frac{1}{2}\int_{\partial\Omega}u_j^2\kappa{\circ\gamma^{(-1)}} d\sigma+\frac{\mu_j^1}{\mu_j}+\mu_j^1\Bigg)\Bigg]\\
&\qquad-\frac{M}{|\partial\Omega|}\left(1+\mu_j+\varepsilon\mu_j^1\right)+Q_\varepsilon.
\end{split}
\end{equation*}
We note that by \eqref{top_der_formula} we have
$$
\frac{\mu_j^1}{\mu_j}=-\frac{|\partial\Omega|}{M}\int_{\Omega}u_j^2dx+\frac{|\Omega|}{M}+\frac{2M\mu_j}{3|\partial\Omega|}-\frac{K}{2|\partial\Omega|}+\frac{1}{2}\int_{\partial\Omega}u_j^2\kappa\circ\gamma^{(-1)} d\sigma,
$$
therefore
$$
\|u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1\|_{\varepsilon}^2-\frac{M}{|\partial\Omega|}\left(1+\mu_j+\varepsilon\mu_j^1\right)=Q_\varepsilon.
$$
The conclusion of the proof of the lemma follows by observing that $|Q_\varepsilon|\leq C\varepsilon^2$ for all $\varepsilon\in(0,\varepsilon_{\Omega,j})$.
\endproof
\end{lemma}
We are now ready to prove Theorems \ref{asymptotic_eigenvalues} and \ref{asymptotic_eigenfunctions} by Lemma \ref{lemma_fondamentale}.
\noindent{\em Proof of Theorems \ref{asymptotic_eigenvalues} and \ref{asymptotic_eigenfunctions}.}
We first prove \eqref{expansion_eigenvalues}. By a standard continuity argument it follows that there exists $\varepsilon_{\mu_j^1}\in(0,\varepsilon_{\Omega,j})$ such that
$$
1+\mu_j+\varepsilon\mu_j^1>1
$$
for all $\varepsilon\in(0,\varepsilon_{\mu_j^1})$. By Lemma \ref{propedeutico_step2} there exists $\varepsilon_j^*\in(0,\varepsilon_{\mu_j^1})$ such that
$$
\|u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1\|_{\varepsilon}>\frac{1}{2}\sqrt{\frac{M}{|\partial\Omega|}}\left(1+\mu_j+\varepsilon\mu_j^1\right)^{\frac{1}{2}}\ \ \ \forall\varepsilon\in(0,\varepsilon_j^*).
$$
By multiplying both sides of \eqref{condition_2} by $\|u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1\|_{\varepsilon}^{-1}$ we deduce that
\begin{equation}\label{plug_step2}
\begin{split}
&\left|\left\langle\mathcal A_{\varepsilon}\left(\frac{u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1}{\|u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1\|_{\varepsilon}}\right)\right.\right.\\
&\qquad\qquad\left.\left.-\frac{1}{1+\mu_j+\varepsilon\mu_j^1}\left(\frac{u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1}{\|u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1\|_{\varepsilon}}\right),\varphi\right\rangle_{\varepsilon}\right|\leq C_7\varepsilon^2 \|\varphi\|_{\varepsilon},
\end{split}
\end{equation}
for all $\varphi\in H^1(\Omega)$ and $\varepsilon\in(0,\varepsilon_j^*)$ with $C_7:=2\sqrt{\frac{|\partial\Omega|}{M}}(1+\mu_j+\varepsilon\mu_j^1)^{-\frac{1}{2}}C_6$. By taking
\[
\varphi=\mathcal A_{\varepsilon}\left(\frac{u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1}{\|u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1\|_{\varepsilon}}\right)-\frac{1}{1+\mu_j+\varepsilon\mu_j^1}\left(\frac{u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1}{\|u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1\|_{\varepsilon}}\right)
\] in \eqref{plug_step2}, we obtain
\begin{multline*}
\left\|\mathcal A_{\varepsilon}\left(\frac{u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1}{\|u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1\|_{\varepsilon}}\right)\right.\\
\left.-\frac{1}{1+\mu_j+\varepsilon\mu_j^1}\left(\frac{u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1}{\|u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1\|_{\varepsilon}}\right)\right\|_{\varepsilon}
\leq C_7\varepsilon^2.
\end{multline*}
As a consequence, we see that the assumptions of Lemma \ref{lemma_fondamentale} hold with $A=\mathcal A_{\varepsilon}$, $H=\mathcal H_{\varepsilon}(\Omega)$, $\eta=\frac{1}{1+\mu_j+\varepsilon\mu_j^1}$, $u=\frac{u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1}{\|u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1\|_{\varepsilon}}$, $r=C_7\varepsilon^2$ with $\varepsilon\in(0,\varepsilon_j^*)$. Accordingly, for all $\varepsilon\in(0,\varepsilon_j^*)$ there exists an eigenvalue $\eta^*$ of $\mathcal A_{\varepsilon}$ such that
\begin{equation}\label{quasi_auto_2}
\left|\frac{1}{1+\mu_j+\varepsilon\mu_j^1}-\eta^*\right|\leq C_7\varepsilon^2.
\end{equation}
Now we take $\varepsilon^{\sharp}_{\Omega,j}:=\min\left\{\varepsilon_j^*,\delta_j,C_7^{-1}r_j^*\right\}$ with $\delta_j$ and $r_j^*$ as in Lemma \ref{only2}. By \eqref{quasi_auto_2} and Lemma \ref{only2}, the eigenvalue $\eta^*$ has to coincide with $\frac{1}{1+\lambda_j(\varepsilon)}$ for all $\varepsilon\in(0,\varepsilon^{\sharp}_{\Omega,j})$. It follows that
\begin{equation*}
|\lambda_j(\varepsilon)-\mu_j-\varepsilon\mu_j^1|\leq C_7|(1+\mu_j+\mu_j^1\varepsilon)(1+\lambda_j(\varepsilon))|\varepsilon^2\ \ \ \forall\varepsilon\in (0,\varepsilon^{\sharp}_{\Omega,j}).
\end{equation*}
The validity of \eqref{expansion_eigenvalues} follows from Theorem \ref{convergence} and a straightforward computation.
We now consider \eqref{expansion_eigenfunctions}. By Lemma \ref{lemma_fondamentale} with $r=r_j^*$ it follows that for all $\varepsilon\in(0,\varepsilon^{\sharp}_{\Omega,j})$, there exists a function $u_{\varepsilon}^*\in\mathcal H_{\varepsilon}(\Omega)$ with $\|u_{\varepsilon}^*\|_{\varepsilon}=1$ which belongs to the space generated by all the eigenfunctions of $\mathcal A_{\varepsilon}$ associated with eigenvalues contained in the segment $\left[\frac{1}{1+\mu_j+\varepsilon\mu_j^1}-r_j^*,\frac{1}{1+\mu_j+\varepsilon\mu_j^1}+r_j^*\right]$ and such that
\begin{equation}\label{unieq2}
\left\|u_{\varepsilon}^*-\frac{u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1}{\|u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1\|_{\varepsilon}}\right\|_{\varepsilon}\leq\frac{2C_7}{r_j^*}\varepsilon^2.
\end{equation}
Since $\varepsilon\in(0,\varepsilon_{\Omega,j}^{\sharp})$, Lemma \ref{only2} implies that $\frac{1}{1+\lambda_j(\varepsilon)}$ is the only eigenvalue of $\mathcal A_{\varepsilon}$ which belongs to the segment $\left[\frac{1}{1+\mu_j+\varepsilon\mu_j^1}-r_j^*,\frac{1}{1+\mu_j+\varepsilon\mu_j^1}+r_j^*\right]$. In addition $\lambda_j(\varepsilon)$ is simple for $\varepsilon<\varepsilon^{\sharp}_{\Omega,j}$ (because $\varepsilon^{\sharp}_{\Omega,j}\leq\varepsilon_{\Omega,j}$). It follows that $u_{\varepsilon}^*$ coincides with the only eigenfunction with norm one corresponding to $\lambda_j(\varepsilon)$, namely $u_{\varepsilon}^*=\frac{u_{j,\varepsilon}}{\|u_{j,\varepsilon}\|_{\varepsilon}}$. Thus by \eqref{unieq2}
\begin{equation}\label{r*step2}
\left\|\frac{u_{j,\varepsilon}}{\|u_{j,\varepsilon}\|_{\varepsilon}}-\frac{u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1}{\|u_j+\varepsilon v_{j,\varepsilon}+\varepsilon u_j^1+\varepsilon^2 v_{j,\varepsilon}^1\|_{\varepsilon}}\right\|_{\varepsilon}\leq \frac{2C_7}{r_j^*}\varepsilon^2\ \ \ \forall\varepsilon\in(0,\varepsilon^{\sharp}_{\Omega,j}).
\end{equation}
By exploiting \eqref{expansion_eigenvalues} and \eqref{r*step2} and by arguing as in the proof of \eqref{ineq_eigenfunctions_step1} (cf.~\eqref{intermediate.eq4}-\eqref{intermediate.eq6}), one can prove that
\begin{equation*}
\|u_{j,\varepsilon}-u_j-\varepsilon v_{j,\varepsilon}-\varepsilon u_j^1-\varepsilon^2v_{j,\varepsilon}^1\|_{L^2(\Omega)}\leq C_8\varepsilon^2,
\end{equation*}
for some $C_8>0$. Then the validity of \eqref{expansion_eigenfunctions} follows by Proposition \ref{vj1eL2}. This concludes the proof of Theorems \ref{asymptotic_eigenvalues} and \ref{asymptotic_eigenfunctions}.
\qed
\begin{appendices}
\section{}
Let $u_j$ be the unique eigenfunction associated with a simple eigenvalue $\mu_j$ of problem \eqref{Steklov} such that $\int_{\partial\Omega}u_j^2d\sigma=1$. We consider the following problem
\begin{equation}\label{u1_problem_simple}
\left\{\begin{array}{ll}
-\Delta u=f, & {\rm in}\ \Omega,\\
\partial_{\nu}u-\frac{M\mu_j}{|\partial\Omega|}u=g_1+\lambda g_2, & {\rm on}\ \partial\Omega,
\end{array}\right.
\end{equation}
where $f\in L^2(\Omega)$, $g_1,g_2\in L^2(\partial\Omega)$ are given data which satisfy the condition $\int_{\partial\Omega}g_2u_jd\sigma\ne0$, and where the unknowns are the scalar $\lambda$ and the function $u$. The weak formulation of problem \eqref{u1_problem_simple} reads: find $(\lambda,u)\in\mathbb R\times H^1(\Omega)$ such that
\begin{equation}\label{u1_weak}
\int_{\Omega}\nabla u\cdot\nabla\varphi dx-\frac{M\mu_j}{|\partial\Omega|}\int_{\partial\Omega}u\varphi d\sigma=\int_{\Omega}f\varphi dx+\int_{\partial\Omega}g_1\varphi d\sigma+\lambda\int_{\partial\Omega}g_2\varphi d\sigma,
\end{equation}
for all $\varphi\in H^1(\Omega)$. We have the following proposition.
\begin{proposition}\label{A1}
Problem \eqref{u1_problem_simple} admits a weak solution $(u,\lambda)\in H^1(\Omega)\times\mathbb R$ if and only if
\begin{equation}\label{lambda_appendix}
\lambda=-\left(\int_{\Omega}fu_j dx+\int_{\partial\Omega}g_1u_j d\sigma\right)\left(\int_{\partial\Omega}g_2u_j d\sigma\right)^{-1}.
\end{equation}
Moreover, if $u$ is a solution of \eqref{u1_problem_simple}, then any other solution of \eqref{u1_problem_simple} is given by $u + \alpha\, u_j$ for some $\alpha\in\mathbb R$.
\proof
Let $\mathcal A_1$ be the operator from $H^1(\Omega)$ to $H^1(\Omega)'$ which takes $u\in H^1(\Omega)$ to the functional $\mathcal A_1[u]$ defined by
\begin{equation*}
\mathcal A_1[u][\varphi]:=\int_{\Omega}\nabla u\cdot\nabla\varphi dx+\int_{\partial\Omega}u\varphi d\sigma\,,\ \varphi\in H^1(\Omega).
\end{equation*}
As is well-known, $\mathcal A_1$ is a homeomorphism from $H^1(\Omega)$ to $H^1(\Omega)'$. Then we consider the trace operator ${\rm Tr}$ from $H^1(\Omega)$ to $L^2(\partial\Omega)$, and the operator $\mathcal J$ from $L^2(\partial\Omega)$ to $H^1(\Omega)'$ defined by
$$
\mathcal J[u][\varphi]:=\int_{\partial\Omega}u\,{\rm Tr}[\varphi] d\sigma\,,\ \forall \varphi\in H^1(\Omega).
$$
We define the operator $\mathcal A_2$ from $H^1(\Omega)$ to $H^1(\Omega)'$ as
\begin{equation*}
\mathcal A_2:=-\left(1+\frac{M\mu_j}{|\partial\Omega|}\right)\mathcal J\circ{\rm Tr}.
\end{equation*}
Since ${\rm Tr}$ is compact and $\mathcal J$ is bounded, $\mathcal A_2$ is also compact. It follows that the operator $\mathcal A:=\mathcal A_1+\mathcal A_2$ from $H^1(\Omega)$ to $H^1(\Omega)'$ is Fredholm of index zero, being the compact perturbation of an invertible operator. Now we denote by $B(\lambda)$ the element of $H^1(\Omega)'$ defined by
$$
B(\lambda)[\varphi]:=\int_{\Omega}f\varphi dx+\int_{\partial\Omega}g_1\,{\rm Tr}[\varphi] d\sigma+\lambda\int_{\partial\Omega}g_2\,{\rm Tr}[\varphi] d\sigma\,,\ \varphi\in H^1(\Omega).
$$
Problem \eqref{u1_weak} is recast into: find $(\lambda,u)\in\mathbb R\times H^1(\Omega)$ such that
\begin{equation*}
\mathcal A[u]=B(\lambda).
\end{equation*}
The kernel of $\mathcal A$ is finite dimensional and it is the space of those $u^*$ such that
$$
\int_{\Omega}\nabla u^*\cdot\nabla\varphi dx-\frac{M\mu_j}{|\partial\Omega|}\int_{\partial\Omega}u^*\,{\rm Tr}[\varphi] d\sigma=0\ \ \forall\varphi\in H^1(\Omega).
$$
Since we have assumed that $\mu_j$ is a simple eigenvalue associated with the eigenfunction $u_j$, it follows that the kernel of $\mathcal A$ coincides with the one-dimensional subspace of $H^1(\Omega)$ generated by $u_j$. Therefore, problem \eqref{u1_problem_simple} has a solution if and only if $B(\lambda)$ satisfies the equality
$$
B(\lambda)[u_j]=\int_{\Omega}fu_j dx+\int_{\partial\Omega}g_1u_j d\sigma+\lambda\int_{\partial\Omega}g_2u_j d\sigma=0.
$$
Since we have also assumed that $\int_{\partial\Omega}g_2u_jd\sigma\ne0$, it follows that problem \eqref{u1_weak} has a solution if and only if $\lambda$ is given by \eqref{lambda_appendix}. To prove the last statement of the proposition we observe that the solution $u$ of problem \eqref{u1_weak} is defined up to elements in the kernel of $\mathcal A$, which is generated by $u_j$.
\endproof
\end{proposition}
\section{}
In this section we consider the case when $\Omega$ coincides with the unit ball $B$ of $\mathbb R^2$. In this specific case the eigenvalues of problem \eqref{Steklov} are given by
\begin{equation*}
\mu_{2j-1}=\mu_{2j}=\frac{2\pi j}{M}\,,\ j\in\mathbb N\setminus\lbrace0\rbrace,
\end{equation*}
while $\mu_{0}=0$ and, due to the symmetry of the problem, all the positive eigenvalues have multiplicity two (see, e.g., Girouard and Polterovich \cite{girouardpolterovich}). To investigate the problem, it is convenient to use polar coordinates $(r,\theta)\in[0,+\infty)\times[0,2\pi)$ in $\mathbb R^2$ and to introduce the corresponding change of variables $x=\phi_s(r,\theta)=(r\cos(\theta),r\sin(\theta))$. The eigenfunctions associated with the eigenvalue $\mu_{2j-1}=\mu_{2j}$ are the two-dimensional harmonic polynomials $u_{j,1}, u_{j,2}$ of degree $j$, which can be written in polar coordinates as
\begin{eqnarray*}
u_{j,1}(r,\theta)&=&r^j\cos(j\theta),\\
u_{j,2}(r,\theta)&=&r^j\sin(j\theta).
\end{eqnarray*}
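Indeed, one can check these eigenpairs directly: $u_{j,1}$ and $u_{j,2}$ are harmonic in $B$, being the real and the imaginary part of $z^j$, and on $\partial B$
\begin{equation*}
\partial_{\nu}u_{j,i}=\partial_r u_{j,i}\big|_{r=1}=j\,u_{j,i}\,,\qquad i=1,2,
\end{equation*}
so that the boundary condition $\partial_{\nu}u=\frac{M\mu}{|\partial B|}u$ of problem \eqref{Steklov} is satisfied precisely when $\frac{M\mu}{2\pi}=j$, that is for $\mu=\frac{2\pi j}{M}$.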
Problem \eqref{Neumann} for $\Omega=B$ has been considered in Lamberti and Provenzano \cite{lambertiprovenzano2,lambertiprovenzano1}. In such works it has been proved that all the eigenvalues of problem \eqref{Neumann} on $B$ have multiplicity which is an integer multiple of two, except the first one which is equal to zero and has multiplicity one. Moreover, for a fixed $j\in\mathbb N\setminus\lbrace 0\rbrace$, there exists $\varepsilon_{j}>0$ such that $\lambda_j(\varepsilon)$ has multiplicity two for all $\varepsilon\in(0,\varepsilon_{j})$ (see also Theorem \ref{convergence}). The positive eigenvalues of \eqref{Neumann} on $B$ can be labelled with two indexes $k$ and $l$ and denoted by $\lambda_{2k-1,l}(\varepsilon)=\lambda_{2k,l}(\varepsilon)$, for $k,l\in\mathbb N\setminus\lbrace 0\rbrace$. The corresponding eigenfunctions, which we denote by $u_{0,l,\varepsilon}, u_{k,l,\varepsilon,1}$ and $u_{k,l,\varepsilon,2}$ can be written in the following form
\begin{eqnarray*}
u_{0,l,\varepsilon}&=&R_{0,l}(r),\\
u_{k,l,\varepsilon,1}&=&R_{k,l}(r)\cos(k\theta),\\
u_{k,l,\varepsilon,2}&=&R_{k,l}(r)\sin(k\theta),
\end{eqnarray*}
where $R_{k,l}(r)$ are suitable linear combinations of Bessel functions of the first and second kind of order $k$. Moreover, it has been proved that $\lambda_{2k-1,1}(\varepsilon)\rightarrow\mu_{2k-1}$, $\lambda_{2k,1}(\varepsilon)\rightarrow\mu_{2k}$, $\lambda_{2k-1,l}(\varepsilon)\rightarrow+\infty$, $\lambda_{2k,l}(\varepsilon)\rightarrow+\infty$ for $l\geq 2$, $u_{k,1,\varepsilon,1}\rightarrow u_{k,1}$ and $u_{k,1,\varepsilon,2}\rightarrow u_{k,2}$ in the $L^2(\Omega)$ sense, as $\varepsilon\rightarrow 0$.
We note that, in principle, Theorem \ref{asymptotic_eigenvalues} cannot be applied to this case, since all the eigenvalues are multiple. Nevertheless, we have the following result concerning the derivative of the eigenvalues of \eqref{Neumann} at $\varepsilon=0$ when $\Omega=B$.
\begin{theorem}[Lamberti and Provenzano \cite{lambertiprovenzano2,lambertiprovenzano1}]\label{thmball}
For the eigenvalues of problem \eqref{Neumann} on the unit ball $B$ we have the following asymptotic expansion
\begin{equation}\label{asymptotic_ball}
\begin{split}
\lambda_{2j-1,1}(\varepsilon)&=\mu_{2j-1}+\left(\frac{2j\mu_{2j-1}}{3}+\frac{\mu_{2j-1}^2}{2(j+1)}\right)\varepsilon+O(\varepsilon^2)\\
&=\frac{2\pi j}{M}+\frac{2j^2 \pi}{M}\left(\frac{2}{3}+\frac{\pi}{M(1+j)}\right)\varepsilon+O(\varepsilon^2),
\end{split}
\end{equation}
as $\varepsilon\rightarrow 0$. The same formula holds if we substitute $\lambda_{2j-1,1}(\varepsilon)$ and $\mu_{2j-1}$ with $\lambda_{2j,1}(\varepsilon)$ and $\mu_{2j}$ respectively.
\end{theorem}
The proof of Theorem \ref{thmball} is strictly related to the fact that $\Omega$ is a ball and relies on the use of Bessel functions, which allow one to recast problem \eqref{Neumann} in the form of an equation $\mathcal F(\lambda,\varepsilon)=0$ in the unknowns $\lambda,\varepsilon\in\mathbb R$. The method used in \cite{lambertiprovenzano2} requires standard but lengthy computations, suitable Taylor expansions and estimates on the corresponding remainders, as well as recursive formulas for the cross-products of Bessel functions and their derivatives.
We note that the first term in the asymptotic expansion of all the eigenvalues of \eqref{Neumann} on $B$ is positive; therefore, locally near the limiting problem \eqref{Steklov}, the eigenvalues decrease as $\varepsilon\rightarrow 0$. Hence, we can say that the Steklov eigenvalues $\mu_{j}$ minimize the Neumann eigenvalues $\lambda_j(\varepsilon)$ for $\varepsilon$ small enough. We note that this does not prove global monotonicity of $\lambda_j(\varepsilon)$, which in fact fails for every $j$; see Figures \ref{fig1} and \ref{fig2}.
We now observe that, if we plug $u_j=\pi^{-\frac{1}{2}}(r^j\cos(j\theta))\circ\phi_s^{(-1)}$ into formula \eqref{top_der_formula} and we recall that the mean curvature $\kappa$ of $\partial B$ is constant and equals $1$, then we re-obtain equality \eqref{asymptotic_ball}. So we can say that, in a sense, Theorem \ref{asymptotic_eigenvalues} continues to hold also in the case when $\Omega$ is a ball, despite the fact that the eigenvalues are multiple in this case. This is not surprising. In fact, we could have replaced throughout the paper the space $H^1(\Omega)$ with the space $H^1_j(\Omega)$ of those functions $u$ in $H^1(\Omega)$ which are orthogonal to $(r^j\cos(j\theta))\circ\phi_s^{(-1)}$ with respect to the $H^1(\Omega)$ scalar product. In this way the eigenvalue $\mu_{2j-1}$ becomes simple and an argument based on Theorem \ref{asymptotic_eigenvalues} could be applied to study the asymptotic behavior.
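For the reader's convenience, we sketch this computation. For $\Omega=B$ one has $|\partial B|=2\pi$, $|B|=\pi$, $K=\int_{\partial B}\kappa\,d\sigma=2\pi$, $\int_{\partial B}u_j^2\,d\sigma=1$, and
\begin{equation*}
\int_{B}u_j^2\,dx=\frac{1}{\pi}\int_0^{2\pi}\cos^2(j\theta)\,d\theta\int_0^1 r^{2j+1}\,dr=\frac{1}{2(j+1)}.
\end{equation*}
Writing $\mu:=\mu_{2j-1}=\frac{2\pi j}{M}$, formula \eqref{top_der_formula} then gives
\begin{equation*}
\frac{\mu^1}{\mu}=-\frac{2\pi}{M}\cdot\frac{1}{2(j+1)}+\frac{\pi}{M}+\frac{2M\mu}{6\pi}-\frac{2\pi}{4\pi}+\frac{1}{2}=\frac{\pi j}{M(j+1)}+\frac{2j}{3}=\frac{\mu}{2(j+1)}+\frac{2j}{3},
\end{equation*}
that is, $\mu^1=\frac{2j\mu}{3}+\frac{\mu^2}{2(j+1)}$, which is exactly the coefficient of $\varepsilon$ in \eqref{asymptotic_ball}.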
We also remark that formula \eqref{asymptotic_ball} for the derivatives of the eigenvalues when $\Omega=B$ has been generalized to dimension $N>2$ in \cite{lambertiprovenzano2}. Again, the proof relies on the use of Bessel functions and explicit computations.
The method used in the present paper is more general and allows to find a formula for the derivative of the eigenvalues $\lambda(\varepsilon)$ of problem \eqref{Neumann} for a quite wide class of domains in $\mathbb R^2$. A generalization of such formula for domains in $\mathbb R^N$ for $N>2$, the boundary of which can be globally parametrized with the unit sphere $S^{N-1}\subset \mathbb R^N$, will be part of a future work.
\section*{}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.6\textwidth]{Neumann_vs_Steklov_1.pdf}
\caption{$\lambda_{2k-1,l}$ with $M=\pi$ in the range $(\varepsilon,\lambda)\in(0,1)\times(0,150)$. In particular blue ($k=0,l=1,2,3,4$), red ($k=1,l=1,2,3,4$), green ($k=2,l=1,2,3$), purple ($k=3,l=1,2,3$), orange ($k=4,l=1,2$).}
\label{fig1}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.6\textwidth]{Neumann_vs_Steklov_2.pdf}
\caption{$\lambda_{2k-1,l}$ with $M=\pi$ in the range $(\varepsilon,\lambda)\in(0,1)\times(0,50)$. In particular blue ($k=0,l=1,2$), red ($k=1,l=1,2$), green ($k=2,l=1,2$), purple ($k=3,l=1$), orange ($k=4,l=1$), blue ($k=5,l=1$), pink ($k=6,l=1$).}
\label{fig2}
\end{figure}
\end{appendices}
\section*{Acknowledgements}
The authors are deeply thankful to Professor Pier Domenico Lamberti and to Professor Sergei A.~Nazarov for the fruitful discussions on the topic. The authors also thank the Center for Research and Development in Mathematics and Applications (CIDMA) of the University of Aveiro for the hospitality offered during the development of the work. In addition, the authors acknowledge the support of `Progetto di Ateneo: Singular perturbation problems for differential operators -- CPDA120171/12' - University of Padova. Matteo Dalla Riva acknowledges the support of HORIZON 2020 MSC EF project FAANon (grant agreement MSCA-IF-2014-EF-654795) at the University of Aberystwyth, UK. Luigi Provenzano acknowledges the financial support from the research project `INdAM GNAMPA Project 2015 - Un approccio funzionale analitico per problemi di perturbazione singolare e di omogeneizzazione'. Luigi Provenzano is a member of the Gruppo Nazionale
per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).
|
2,869,038,154,043 | arxiv | \section{Introduction}
Deep neural networks (\textsc{Dnn}s) have advanced the state of the art in various natural language processing (NLP) tasks, such as machine translation~\cite{Vaswani:2017:NIPS}, semantic role labeling~\cite{Strubell:2018:EMNLP}, and language representations~\cite{bert2018}. The strength of \textsc{Dnn}s lies in their ability to capture different linguistic properties of the input by different layers~\cite{Shi:2016:EMNLP,raganato2018analysis}, and composing (i.e. aggregating) these layer representations can further improve performance by providing more comprehensive linguistic information of the input~\cite{Peters:2018:NAACL,Dou:2018:EMNLP}.
Recent NLP studies show that single neurons in neural models, which are defined as individual dimensions of the representation vectors, carry distinct linguistic information~\cite{Bau:2019:ICLR}.
A follow-up work further reveals that simple properties such as coordinating conjunction (e.g., ``but/and'') or determiner (e.g., ``the'') can be attributed to individual neurons, while complex linguistic phenomena such as syntax (e.g., part-of-speech tag) and semantics (e.g., semantic entity type) are distributed across neurons~\cite{Dalvi:2019:AAAI}.
These observations are consistent with recent findings in neuroscience, which show that task-relevant information can be decoded from a group of neurons interacting with each other~\cite{Morcos:2016:Nature}. One question naturally arises: {\em can we better capture complex linguistic phenomena by composing/grouping the linguistic properties embedded in individual neurons?}
The starting point of our approach is an observation in neuroscience: {\em stronger neuron interactions}, i.e., direct signal exchanges between neurons, enable more information processing in the nervous system~\cite{koch1983nonlinear}. We believe that simulating the neuron interactions of the nervous system is an appealing alternative for representation composition, which can potentially better learn the compositionality of natural language with subtle operations at a smaller granularity.
Concretely, we employ bilinear pooling~\cite{Lin:2015:ICCV}, which executes pairwise multiplicative interactions among individual representation elements, to achieve \emph{strong} neuron interactions.
We also introduce a low-rank approximation to make the original bilinear models computationally feasible~\cite{kim2016hadamard}.
Furthermore, as bilinear pooling only encodes multiplicative second-order features, we propose \emph{extended bilinear pooling} to incorporate first-order representations, which can capture more comprehensive information of the input sentences.
We validate the proposed neuron interaction based (NI-based) representation composition on top of multi-layer multi-head self-attention networks (\textsc{MlMhSan}s). The reason is two-fold. First, \textsc{MlMhSan}s are critical components of various SOTA \textsc{Dnn} models, such as \textsc{Transformer}~\cite{Vaswani:2017:NIPS}, \textsc{Bert}~\cite{bert2018}, and \textsc{Lisa}~\cite{Strubell:2018:EMNLP}. Second, \textsc{MlMhSan}s involve compositions of both multi-layer representations and multi-head representations, which allows us to investigate the universality of NI-based composition. Specifically,
\begin{itemize}
\item First, we conduct experiments on the machine translation task, a benchmark to evaluate the performance of neural models. Experimental results on the widely-used WMT14 English$\Rightarrow$German and English$\Rightarrow$French data show that the NI-based composition consistently improves performance over \textsc{Transformer} across language pairs. Compared with existing representation composition strategies~\cite{Peters:2018:NAACL,Dou:2018:EMNLP}, our approach shows its superiority in efficacy and efficiency.
\item Second, we carry out linguistic analysis~\cite{conneau2018acl} on the learned representations from the NMT encoder, and find that NI-based composition indeed captures more syntactic and semantic information as expected. These results provide support for our hypothesis that modeling strong neuron interactions helps to better capture complex linguistic information via advanced composition functions, which is essential for downstream NLP tasks.
\end{itemize}
This paper is an early step in exploring neuron interactions for representation composition in NLP tasks, which we hope will be a long and fruitful journey.
We make the following contributions:
\begin{itemize}
\item Our study demonstrates the necessity of modeling neuron interactions for representation composition in deep NLP tasks. We employ bilinear pooling to simulate the strong neuron interactions.
\item We propose {\em extended bilinear pooling} to incorporate first-order representations, which produces a more comprehensive representation.
\item Experimental results show that representation composition benefits the widely-employed \textsc{MlMhSan}s by aggregating information learned by multi-layer and/or multi-head attention components.
\end{itemize}
\section{Background}
\subsection{Multi-Layer Multi-Head Self-Attention}
In the past two years, \textsc{MlMhSan}s-based models have established SOTA performance across different NLP tasks. The main strength of \textsc{MlMhSan}s lies in the powerful representation learning capacity provided by the multi-layer and multi-head architectures.
\textsc{MlMhSan}s perform a series of nonlinear transformations from the input sequences to final output sequences.
Specifically, \textsc{MlMhSan}s are composed of a stack of $L$ identical layers ({\em multi-layer}), each of which is calculated as
\begin{eqnarray}
{\bf H}^l &=& \textsc{Self-Att}({\bf H}^{l-1}) + {\bf H}^{l-1},
\label{eqn:enc}
\end{eqnarray}
where a residual connection is employed around each layer~\cite{he2016CVPR}.
$\textsc{Self-Att}(\cdot)$ is a self-attention model, which
captures dependencies among hidden states in ${\bf H}^{l-1}$:
\begin{eqnarray}
\textsc{Self-Att}({\bf H}^{l-1}) = \textsc{Att}({\bf Q}^l, {\bf K}^{l-1}) \ {\bf V}^{l-1} \label{eq:out},
\end{eqnarray}
where $\{{\bf Q}^l, {\bf K}^{l-1}, {\bf V}^{l-1}\}$
are the query, key and value vectors that are transformed from the lower layer ${\bf H}^{l-1}$, respectively.
Instead of performing a single attention function,~\citeauthor{Vaswani:2017:NIPS}~\shortcite{Vaswani:2017:NIPS} found it beneficial to capture different context features with multiple individual attention functions (\emph{multi-head}).
Concretely, the multi-head attention model first transforms $\{{\bf Q}, {\bf K}, {\bf V}\}$ into $H$ subspaces with different, learnable linear projections:\footnote{Here we skip the layer index for simplification.}
\begin{equation}
{\bf Q}_h, {\bf K}_h, {\bf V}_h = {\bf Q}{\bf W}_h^{Q}, {\bf K}{\bf W}_h^{K}, {\bf V}{\bf W}_h^{V},
\end{equation}
where $\{{\bf Q}_h, {\bf K}_h, {\bf V}_h\}$ are respectively the query, key, and value representations of the $h$-th head. $\{{\bf W}_h^{Q}, {\bf W}_h^{K}, {\bf W}_h^{V}\}$
denote parameter matrices associated with the $h$-th head.
$H$ self-attention functions (Equation~\ref{eq:out}) are applied in parallel to produce the output states $\{{\bf O}_1,\dots, {\bf O}_H\}$.
Finally, the $H$ outputs are concatenated and linearly transformed to produce a final representation:
\begin{eqnarray}
\label{eq:concat_linear}
{\bf H} = [{\bf O}_1, \dots, {\bf O}_H] \ {\bf W}^O, \label{eq:concat}
\end{eqnarray}
where ${\bf W}^O \in \mathbb{R}^{d \times d}$ is a trainable matrix.
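For concreteness, the computation in Equations~\ref{eqn:enc}--\ref{eq:concat} can be sketched in a few lines of NumPy. The snippet below is only an illustrative re-implementation under our notation (it uses the scaled dot-product attention of~\newcite{Vaswani:2017:NIPS}, and the per-head projections are realized by slicing single $d\times d$ matrices, which is equivalent to the per-head matrices $\{{\bf W}_h^{Q}, {\bf W}_h^{K}, {\bf W}_h^{V}\}$):
\begin{verbatim}
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mlmhsan_layer(H_prev, W_q, W_k, W_v, W_o, num_heads):
    """One multi-head self-attention layer with a residual connection.

    H_prev:             [seq_len, d] hidden states from layer l-1
    W_q, W_k, W_v, W_o: [d, d] projection matrices
    """
    seq_len, d = H_prev.shape
    d_h = d // num_heads
    Q, K, V = H_prev @ W_q, H_prev @ W_k, H_prev @ W_v
    heads = []
    for h in range(num_heads):                       # H parallel attention heads
        s = slice(h * d_h, (h + 1) * d_h)            # the h-th subspace
        scores = Q[:, s] @ K[:, s].T / np.sqrt(d_h)  # scaled dot-product scores
        heads.append(softmax(scores) @ V[:, s])      # O_h = Att(Q_h, K_h) V_h
    O = np.concatenate(heads, axis=-1) @ W_o         # [O_1, ..., O_H] W^O
    return O + H_prev                                # residual connection
\end{verbatim}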
\subsection{Representation Composition}
Composing (i.e. aggregating) representations learned by different layers or attention heads has been shown to be beneficial for \textsc{MlMhSan}s~\cite{Dou:2018:EMNLP,Ahmed:2018:arXiv}.
Without loss of generality, from here on, we refer to $\{{\bf r}_1, \dots, {\bf r}_N\} \subset \mathbb{R}^{d}$ as the representations to compose, where ${\bf r}_i$ can be a layer representation (${\bf H}^l$, Equation~\ref{eqn:enc}) or a head representation (${\bf O}_h$, Equation~\ref{eq:concat}). The composition is expressed as
\begin{eqnarray}
\mathbf{\widetilde{H}} = \textsc{Compose}({\bf r}_1, \dots, {\bf r}_N),
\label{eqn:neural}
\end{eqnarray}
where $\textsc{Compose}(\cdot)$ can be arbitrary functions, such as linear combination\footnote{The linear composition of multi-head representations (Equation~\ref{eq:concat}) can be rewritten in the format of weighted sum: ${\bf O}=\sum_{h=1}^H {\bf O}_h {\bf W}^O_h$ with ${\bf W}^O_h \in \mathbb{R}^{\frac{d}{H} \times d}$.}~\cite{Peters:2018:NAACL,Ahmed:2018:arXiv} and hierarchical aggregation~\cite{Dou:2018:EMNLP}.
Although effective to some extent, these approaches do not model neuron interactions among the representation vectors, which we believe is valuable for representation composition in deep NLP models.
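For reference, the linear strategy amounts to the following sketch (illustrative only; \texttt{reps} holds the $N$ representations and the $N$ scalar weights \texttt{w} are learned, e.g., softmax-normalized as in~\newcite{Peters:2018:NAACL}):
\begin{verbatim}
def linear_compose(reps, w):
    """Weighted sum: Compose(r_1, ..., r_N) = sum_i w_i * r_i.

    reps: list of N NumPy arrays of shape [seq_len, d]; w: N scalars.
    """
    return sum(w_i * r_i for w_i, r_i in zip(w, reps))
\end{verbatim}
Every output neuron only receives additively weighted copies of the corresponding input neurons; no multiplicative interaction across neurons takes place.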
\section{Approach}
\begin{figure*}[t]
\centering
\subfloat[Bilinear Pooling]{
\includegraphics[width=0.42\textwidth]{bilinear.pdf}
} \hspace{0.1\textwidth}
\subfloat[Extended Bilinear Pooling]{
\includegraphics[width=0.42\textwidth]{extended_bilinear.pdf}
}
\caption{Illustration of (a) {\em bilinear pooling} that models fully neuron-wise multiplicative interaction, and (b) {\em extended bilinear pooling} that captures both second- and first-order neuron interactions.}
\label{fig:bilinear-pooling}
\end{figure*}
\subsection{Motivation}
Different types of neurons in the nervous system carry distinct signals~\cite{cohen2012nature}. Similarly, neurons in deep NLP models, i.e., individual dimensions of representation vectors, carry distinct linguistic information~\cite{Bau:2019:ICLR,Dalvi:2019:AAAI}.
Studies in neuroscience reveal that stronger neuron interactions bring more information processing capability~\cite{koch1983nonlinear},
which we believe also applies to deep NLP models.
In this work, we explore the strong neuron interactions provided by bilinear pooling for representation composition. Bilinear pooling~\cite{Lin:2015:ICCV} is a recently proposed feature fusion approach in the vision field.
Instead of linearly combining all representations, bilinear pooling executes pairwise multiplicative interactions among individual representations, to model \emph{full} neuron interactions as shown in Figure~\ref{fig:bilinear-pooling}(a).
Note that there are many possible ways to implement the neuron interactions.
The aim of this paper is not to explore this whole space but simply to show that one fairly straightforward implementation works well on a strong benchmark.
\subsection{Bilinear Pooling for Neuron Interaction}
\label{sec:LRBP}
\paragraph{Bilinear Pooling}
Bilinear pooling~\cite{Tenenbaum:2000:NeuroComputation} is defined as an \emph{outer product} of two representation vectors followed by a linear projection. As illustrated in Figure~\ref{fig:bilinear-pooling}(a), all elements of the two vectors have direct multiplicative interactions with each other. However, in the scenario of multi-layer and multi-head composition, we generally have more than two representation vectors to compose (i.e., $L$ layers and $H$ attention heads). To utilize the full second-order (i.e. multiplicative) interactions in bilinear pooling, we concatenate all the representation vectors and feed the concatenated vector twice to the bilinear pooling.
Concretely, we have:
\begin{eqnarray}
{\bf R} &=& | \widehat{\bf R} \widehat{\bf R}^{\top} | {\bf W}^B, \\
\widehat{\bf R} &=& [{\bf r}_1, \dots, {\bf r}_N],
\label{eq:con}
\end{eqnarray}
where $\widehat{\bf R} \widehat{\bf R}^{\top} \in \mathbb{R}^{Nd \times Nd}$ is the outer product of the concatenated representation $\widehat{\bf R}$, and $|\cdot|$ denotes serializing the matrix into a vector of dimensionality $(Nd)^2$. In this way, all elements in the partial representations are able to interact with each other in a multiplicative way.
However, the size of the parameter matrix ${\bf W}^B \in \mathbb{R}^{(Nd)^2 \times d}$ and the computational cost increase cubically with the dimensionality $d$, which becomes problematic when training or decoding on a GPU with limited memory\footnote{For example, a regular \textsc{Transformer} model would require 36 billion ($(Nd)^2 \times d$) parameters for $d=1000$ and $N=6$.}.
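A direct transcription of the full bilinear composition makes this cost explicit (an illustrative sketch; the size of \texttt{W\_b} is exactly the problem):
\begin{verbatim}
import numpy as np

def full_bilinear_pooling(reps, W_b):
    """Naive bilinear composition: serialized outer product, then projection.

    reps: list of N vectors of shape [d]
    W_b:  [(N*d)**2, d] projection, prohibitively large in practice
    """
    R_hat = np.concatenate(reps)                # [N*d]
    outer = np.outer(R_hat, R_hat).reshape(-1)  # serialized outer product
    return outer @ W_b                          # [d]
\end{verbatim}
For $d=1000$ and $N=6$, \texttt{W\_b} alone would hold the $3.6\times 10^{10}$ entries mentioned above, which motivates the low-rank approximations discussed next.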
There have been a few attempts to reduce the computational complexity of the original bilinear pooling.~\newcite{Gao:2016:CVPR} propose {\em compact bilinear pooling} to reduce the quadratic expansion of dimensionality for image classification.~\newcite{kim2016hadamard} and~\newcite{Kong:2017:CVPR} propose {\em low-rank bilinear pooling} for visual question answering and image classification respectively, which further reduces the parameters to be learned and achieves comparable effectiveness with full bilinear pooling. In this work, we focus on the low-rank approximation for its efficiency, and generalize from the original model for deep representations.
\paragraph{Low-Rank Approximation}
In the full bilinear models, each output element $R_i \in \mathbb{R}^{1}$ can be expressed as
\begin{eqnarray}
R_i &=& \sum_{j=1}^{Nd} \sum_{k=1}^{Nd} { w}^B_{jk,i} \widehat{ R}_j \widehat{ R}^{\top}_k \nonumber \\
&=& \widehat{\bf R}^{\top} {\bf W}^B_i \widehat{\bf R} \label{eq:full-bilinear},
\end{eqnarray}
where ${\bf W}^B_i \in \mathbb{R}^{Nd \times Nd}$ is a weight matrix to produce output element $ R_i$.
The low-rank approximation enforces the rank of ${\bf W}^B_i$ to be at most $r \leq Nd$~\cite{pirsiavash2009bilinear}, so that it can be factorized as ${\bf U}_i {\bf V}_i^{\top}$ with ${\bf U}_i \in \mathbb{R}^{Nd \times r}$ and ${\bf V}_i \in \mathbb{R}^{Nd \times r}$. Accordingly, Equation~\ref{eq:full-bilinear} can be rewritten as
\begin{eqnarray}
{R}_i &=& \widehat{\bf R}^{\top} {\bf U}_i {\bf V}_i^{\top} \widehat{\bf R} \nonumber \\
&=& (\widehat{\bf R}^{\top} {\bf U}_i \odot \widehat{\bf R}^{\top}{\bf V}_i) \mathbbm{1}_r,
\label{eq:low-rank}
\end{eqnarray}
where $\mathbbm{1}_r$ is an $r$-dimensional vector of ones, and $\odot$ represents the element-wise product.
By replacing $\mathbbm{1}_r$ with ${\bf P} \in \mathbb{R}^{r\times d}$, and redefining ${\bf U} \in \mathbb{R}^{Nd\times r}$ and ${\bf V} \in \mathbb{R}^{Nd \times r}$, the low-rank approximation can be defined as
\begin{equation}
{\bf R} = (\widehat{\bf R}^{\top}{\bf U} \odot \widehat{\bf R}^{\top}{\bf V}) {\bf P}.
\label{eq:final}
\end{equation}
In this way, the computation complexity is reduced from $O(d^3)$ to $O(d^2)$, and the parameter matrices ${\bf U}$, ${\bf V}$, and ${\bf P}$ are now feasible to fit in GPU memory.
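To make the computation concrete, a minimal NumPy sketch of the low-rank composition in Equation~\ref{eq:final} is given below; the tensor sizes and variable names are illustrative assumptions of ours, not taken from any released implementation:
\begin{verbatim}
import numpy as np

# Illustrative sizes: N partial representations of
# dimensionality d, composed through rank-r factors.
N, d, r = 6, 8, 4
rng = np.random.default_rng(0)

R_hat = rng.standard_normal(N * d)   # concatenated vector
U = rng.standard_normal((N * d, r))  # low-rank factor U
V = rng.standard_normal((N * d, r))  # low-rank factor V
P = rng.standard_normal((r, d))      # output projection P

# (R_hat^T U) elementwise-times (R_hat^T V), then project:
# O(d^2) cost instead of O(d^3) for full bilinear pooling.
R = ((R_hat @ U) * (R_hat @ V)) @ P
print(R.shape)                       # -> (d,)
\end{verbatim}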
\paragraph{Extended Bilinear Pooling with First-Order Representation}
Previous work in information theory has proven that second-order and first-order representations encode different types of information~\cite{goudreau1994firstorder}, which we believe also holds on NLP tasks.
As bilinear pooling only encodes second-order (i.e., multiplicative) interactions among individual neurons, we propose the \emph{extended bilinear pooling} to inherit the advantages of first-order representations and form a more comprehensive representation.
Specifically, we append $\mathbf{1}$s to the representation vectors. As illustrated in Figure~\ref{fig:bilinear-pooling}(b), we respectively append $\mathbf{1}$ to the two $\widehat{\bf R}$ vectors; the outer product of the extended vectors then produces both second-order and first-order interactions among the elements. According to Equation~\ref{eq:final}, the final representation is revised as:
\begin{equation}
{\bf R_f} = (\begin{bmatrix}\widehat{\bf R}\\1\end{bmatrix}^{\top} {\bf U} \odot \begin{bmatrix}\widehat{\bf R}\\1\end{bmatrix}^{\top} {\bf V})~ {\bf P},
\end{equation}
where $\widehat{\bf R}$ is the concatenated representation as in Equation~\ref{eq:con}.
As a result, the final representation $\bf R_f$ preserves both multiplicative bilinear features (as in Equation~\ref{eq:final}) and first-order linear features (as in Equation~\ref{eq:concat_linear}).
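A hedged sketch of the extended variant is equally short: one simply appends the constant $1$ to the concatenated vector before the same low-rank computation, so the outer product carries first-order terms alongside the second-order ones (again, all sizes and names below are our own illustrative choices):
\begin{verbatim}
import numpy as np

N, d, r = 6, 8, 4
rng = np.random.default_rng(0)

R_hat = rng.standard_normal(N * d)
R_ext = np.append(R_hat, 1.0)        # shape (N*d + 1,)

U = rng.standard_normal((N * d + 1, r))
V = rng.standard_normal((N * d + 1, r))
P = rng.standard_normal((r, d))

# Final representation with both first- and second-order
# interactions, as in the equation above.
R_f = ((R_ext @ U) * (R_ext @ V)) @ P
\end{verbatim}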
\begin{table*}[t]
\centering
\begin{tabular}{c|l||r c c||c}
{\bf \#} & {\bf Model} & \bf {\# Para.} & \bf {Train} & \bf Decode & \bf BLEU\\
\hline
1 & \textsc{Transformer-Base} & 88.0M & 2.02 & 1.50 & $27.31$\\
\hline
\multicolumn{6}{c}{\em Existing representation composition} \\
\hline
2 & ~~+ Multi-Layer: Linear Combination & +3.1M &1.98 &1.46 & $27.77$ \\
\hdashline
3 &~~+ Multi-Layer: Hierarchical Aggregation & +23.1M & 1.62 & 1.36 & $28.32$\footnotemark \\
4 &~~+ Multi-Head: Hierarchical Aggregation & +13.6M & 1.74 & 1.38 & 28.13 \\
5 &~~+ Both (3+4) & +36.7M & 1.42 & 1.25 & 28.42 \\
\hline
\multicolumn{6}{c}{\em This work: neuron-interaction based representation composition} \\
\hline
6 & ~~+ Multi-Layer: {\em NI-based Composition} & +16.8M & 1.93 & 1.44& $28.31$\\
7 & ~~+ Multi-Head: {\em NI-based Composition} & +14.1M & 1.92 &1.43 & $28.29$\\
8 & ~~+ Both (6+7) &+30.9M & 1.87& 1.40 & {\bf 28.54} \\
\end{tabular}
\caption{Translation performance on WMT14 English$\Rightarrow$German
translation task. ``\# Para.'' denotes the number of parameters, and ``Train'' and ``Decode'' respectively denote the training speed (steps/second) and decoding speed (sentences/second). We compare our model with linear combination~\cite{Peters:2018:NAACL} and hierarchical aggregation~\cite{Dou:2018:EMNLP}. }
\label{tab:comparison}
\end{table*}
\paragraph{Applying to \textsc{Transformer}}
\textsc{Transformer}~\cite{Vaswani:2017:NIPS} consists of an encoder and a decoder, each of which is a stack of 6 layers, where we can apply multi-layer composition (excluding the embedding layer) to produce the final representations of the encoder and decoder. Besides, each layer has one (in the encoder) or two (in the decoder) multi-head attention components with $H$ heads, to which we can apply multi-head composition to substitute Equation~\ref{eq:concat}. The two sorts of representation composition can be used individually, while combining them is expected to further improve the performance.
\footnotetext{The original result in \cite{Dou:2018:EMNLP} is $28.63$, which is \emph{case-insensitive}. As we report case-sensitive BLEU scores, we requested the corresponding case-sensitive result from \citeauthor{Dou:2018:EMNLP}.}
\section{Experiments}
\begin{table*}[t]
\centering
\begin{tabular}{l||rcl|rcl}
\multirow{2}{*}{\bf Architecture} & \multicolumn{3}{c|}{\bf EN$\Rightarrow$DE} & \multicolumn{3}{c}{\bf EN$\Rightarrow$FR}\\
\cline{2-7}
& \# Para. & Train & BLEU & \# Para. & Train & BLEU\\
\hline \hline
\multicolumn{7}{c}{{\em Existing NMT systems}: {\cite{Vaswani:2017:NIPS}}} \\
\hline
\textsc{Transformer-Base} & 65M & n/a & $27.3$ & n/a & n/a& $38.1$ \\
\textsc{Transformer-Big} & 213M & n/a & $28.4$ & n/a & n/a & $41.8$\\
\hline\hline
\multicolumn{7}{c}{{\em Our NMT systems}} \\ \hline
\textsc{Transformer-Base} & 88M & 2.02 & $27.31$ & 95M & 2.01& $39.28$ \\
~~~ + NI-Based Composition & 118M & 1.87& $28.54^\Uparrow$ &125M &1.85 & $40.15^\Uparrow$ \\
\hline
\textsc{Transformer-Big} & 264M & 0.85 & $28.58$ &278M &0.84 & $41.41$ \\
~~~ + NI-Based Composition & 387M & 0.61 & $29.17^\Uparrow$ & 401M & 0.59 & $42.10^\Uparrow$\\
\end{tabular}
\caption{Comparing with existing NMT systems on WMT14 English$\Rightarrow$German (``EN$\Rightarrow$DE'') and English$\Rightarrow$French (``EN$\Rightarrow$FR'') translation tasks.
``$\Uparrow$'': significantly better than the baseline ($p < 0.01$) using bootstrap resampling~\cite{Koehn2004Statistical}.}
\label{tab:main}
\end{table*}
\subsection{Setup}
\paragraph{Dataset}
We conduct experiments on the WMT2014 English$\Rightarrow$German (En$\Rightarrow$De) and English$\Rightarrow$French (En$\Rightarrow$Fr) translation tasks. The En$\Rightarrow$De dataset consists of about 4.56 million sentence pairs. We use newstest2013 as the development set and newstest2014 as the test set.
The En$\Rightarrow$Fr dataset consists of $35.52$ million sentence pairs. We use the concatenation of newstest2012 and newstest2013 as the development set and newstest2014 as the test set.
We employ BPE~\cite{sennrich2016neural} with 32K merge operations for both language pairs.
We adopt the case-sensitive 4-gram NIST BLEU score~\cite{papineni2002bleu} as our evaluation metric and bootstrap resampling~\cite{Koehn2004Statistical} for statistical significance test.
\paragraph{Models}
We evaluate the proposed approaches on the advanced \textsc{Transformer} model~\cite{Vaswani:2017:NIPS}, and implement them on top of an open-source toolkit -- THUMT~\cite{zhang2017thumt}. We follow~\citeauthor{Vaswani:2017:NIPS}~\shortcite{Vaswani:2017:NIPS} to set the configurations and have reproduced their reported results on the En$\Rightarrow$De task. The parameters of the proposed models are initialized by the pre-trained \textsc{Transformer} model.
We have tested both \emph{Base} and \emph{Big} models, which differ in hidden size (512 vs. 1024) and number of attention heads (8 vs. 16). Concerning the low-rank parameter (Equation~\ref{eq:low-rank}), we set the low-rank dimensionality $r$ to 512 and 1024 in the \emph{Base} and \emph{Big} models respectively. All models are trained on eight NVIDIA P40 GPUs, each allocated a batch size of 4096 tokens. In consideration of computation cost, we study model variations with the \emph{Base} model on the En$\Rightarrow$De task, and evaluate overall performance with the \emph{Big} model on both the En$\Rightarrow$De and En$\Rightarrow$Fr tasks.
\subsection{Comparison to Existing Approaches}
In this section, we evaluate the impacts of different representation composition strategies on the En$\Rightarrow$De translation task with \textsc{Transformer-Base}, as listed in Table~\ref{tab:comparison}.
\paragraph{Existing Representation Composition} (Rows 1-5)
The conventional \textsc{Transformer} model adopts multi-head composition with linear combination, but only uses the top-layer representation as its default setting.
Accordingly, we keep the linear multi-head composition (Row 1) unchanged, and choose two representative multi-layer composition strategies (Rows 2 and 3): the widely-used linear combination~\cite{Peters:2018:NAACL} and the effective hierarchical aggregation~\cite{Dou:2018:EMNLP}. The hierarchical aggregation merges states of different layers through a CNN-like tree structure with the filter size being two, to hierarchically preserve and combine feature channels.
As seen, linearly combining all layers (Row 2) achieves +0.46 BLEU improvement over \textsc{Transformer-Base} with almost the same training and decoding speeds.
Hierarchical aggregation for multi-layer composition (Row 3) yields larger improvement in terms of BLEU score, but at the cost of considerable speed decrease.
To make a fair comparison, we also implement hierarchical aggregation for multi-head composition (Rows 4 and 5), which consistently improves performances at the cost of introducing more parameters and slower speeds.
\paragraph{The Proposed Approach} (Rows 6-8)
First, we apply our NI-based composition, i.e., \emph{extended bilinear pooling}, for multi-layer composition with the default linear multi-head composition (Row 6). We find that the approach achieves almost the same translation performance as hierarchical aggregation (Row 3), while keeping the training and decoding speeds as \emph{efficient} as linear combination.
Then, we apply the NI-based approach for multi-head composition with the default top layer exploitation (Row 7). We can see that our approach gains +0.98 BLEU point over \textsc{Transformer-Base} and achieves more improvement than hierarchical aggregation (Row 4). The two results demonstrate that our NI-based approach can be effectively applied to different representation composition scenarios.
Finally, we simultaneously apply the NI-based approach to the multi-layer and multi-head composition (Row 8). Our model achieves further improvement over the individual models and hierarchical aggregation (Row 5), showing that \textsc{Transformer} can benefit from the complementary composition of multiple heads and historical layers. In the following experiments, we adopt NI-based composition for both the multi-layer and multi-head compositions as the default strategy.
\subsection{Main Results on Machine Translation}
In this section, we validate the proposed NI-based representation composition on both WMT14 En$\Rightarrow$De and En$\Rightarrow$Fr translation tasks. Experimental results are listed in Table~\ref{tab:main}. The performances of our implemented \textsc{Transformer} match the results on both language pairs reported in previous work~\cite{Vaswani:2017:NIPS}, which we believe makes the evaluation convincing.
Incorporating NI-based composition consistently and significantly improves translation performance for both base and big \textsc{Transformer} models across language pairs, demonstrating the effectiveness and universality of the proposed NI-based representation composition. It is encouraging to see that \textsc{Transformer-Base} with NI-based composition achieves performance competitive with that of \textsc{Transformer-Big} on the En$\Rightarrow$De task, with only about half the parameters and twice the training speed. This further demonstrates that our performance gains are not simply brought by additional parameters. Note that the improvement on the En$\Rightarrow$De task is larger than that on the En$\Rightarrow$Fr task, which can be attributed to the sizes of the training data (4M vs. 35M sentence pairs).
\begin{table}[t]
\centering
\begin{tabular}{c|c||c | c | c}
\multicolumn{2}{c||}{\bf Task} & {\bf Base} & {\bf \textsc{Ours}} & \bf $\bigtriangleup$\\
\hline\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{{\bf Surface}}}
& SeLen & 92.20 & 92.11 & -0.1\%\\
& WC & 63.00 & 63.50 & +0.8\%\\
\cdashline{2-5}
& Ave. & 77.60 & 77.81 & +0.3\%\\
\hline
\multirow{4}{*}{\rotatebox[origin=c]{90}{{\bf Syntactic}}}
& TrDep & 44.74 & 44.96 & +0.5\%\\
& ToCo & 79.02 & 81.31 & \bf +2.9\%\\
& BShif & 71.24 & 72.44 & \bf +1.7\%\\
\cdashline{2-5}
& Ave. & 65.00 & 66.24 & \bf +1.9\%\\
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{{\bf Semantic}}}
& Tense & 89.24 & 89.26 & +0.0\%\\
& SubNm & 84.69 & 87.05 & \bf +2.8\%\\
& ObjNm & 84.53 & 86.91 & \bf +2.8\%\\
& SOMO & 52.13 & 52.52 & +0.7\%\\
& CoIn & 62.47 & 64.93 & \bf +3.9\%\\
\cdashline{2-5}
& Ave. & 74.61 & 76.13 & \bf +2.0\%\\
\end{tabular}
\caption{Classification accuracies on 10 probing tasks of evaluating the linguistic properties (``Surface'', ``Syntactic'', and ``Semantic''). ``Ave.'' denotes the averaged accuracy in each category. ``$\bigtriangleup$'' denotes the relative improvement, and we highlight the numbers $\geq 1\%$.}
\label{tab:probing}
\end{table}
\subsection{Analysis}
In this section, we conduct extensive analysis to deeply understand the proposed models in terms of 1) the linguistic properties learned by the NMT encoder; 2) the influences of the first-order representation and the low-rank constraint; and 3) the translation performance on sentences of varying lengths.
\paragraph{Targeted Linguistic Evaluation on NMT Encoder}
Machine translation is a complex task, which consists of both the understanding of the input sentence (encoder) and the generation of output conditioned on such understanding (decoder). In this probing experiment, we evaluate the understanding part using \textsc{Transformer} encoders that are trained on the En$\Rightarrow$De NMT data, and are fixed in the probing tasks with only MLP classifiers being trained on probing data.
Recently,~\newcite{conneau2018acl} designed 10 probing tasks to study what linguistic properties are captured by representations from sentence encoders.
A probing task is a classification problem that focuses on simple linguistic properties of input sentences, including surface information, syntactic information, and semantic information.
For example, ``WC'' tests whether it is possible to recover information about the original words given the sentence embedding. ``BShif'' checks whether two consecutive tokens have been inverted. ``SubNm'' focuses on the number of the subject of the main clause.
For a more detailed description of the 10 tasks, interested readers can refer to the original paper~\cite{conneau2018acl}.
We conduct probing tasks to examine whether the NI-based representation composition can benefit the \textsc{Transformer} encoder to produce more informative representation.
Table~\ref{tab:probing} lists the results.
The NI-based composition outperforms the baseline in most probing tasks, showing that our composition strategy indeed helps the \textsc{Transformer} encoder generate more informative representations, especially at the syntactic and semantic levels. The averaged gains in syntactic and semantic tasks are significant, showing that our strategy makes \textsc{San}s capture more high-level linguistic properties. Note that the lower values in surface tasks (e.g., SeLen) are consistent with the conclusion in \cite{conneau2018acl}: as a model captures deeper linguistic properties, it tends to forget these superficial features.
\begin{figure}[h]
\centering
\includegraphics[width=0.35\textwidth]{order.pdf}
\caption{Effect of first-order representation on WMT14 En$\Rightarrow$De translation task.}
\label{fig:residual}
\end{figure}
\paragraph{Effect of First-Order Representation}
As aforementioned, we extend the conventional bilinear pooling by appending $\mathbf{1}$s to the representation vectors, thereby incorporating first-order representations (i.e., linear combination) and capturing both multiplicative bilinear features and additive linear features. Here we conduct an ablation study to validate the effectiveness of each component. We experiment on multi-layer and multi-head representation composition respectively, and the results are shown in Figure~\ref{fig:residual}.
Several observations can be made.
First, we notice that by replacing linear combination with bilinear pooling alone (``NI-based composition w/o first-order'' in Figure~\ref{fig:residual}), the translation performance significantly improves in both multi-layer and multi-head composition, demonstrating the effectiveness of full neuron interactions and second-order features.
We further observe that it is indeed beneficial to extend bilinear pooling with linear combination (``NI composition'' in Figure~\ref{fig:residual}), which captures the complementary information between them and forms a more comprehensive representation of the input.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{rank.pdf}
\caption{BLEU scores on the En$\Rightarrow$De test set with different rank constraints for bilinear pooling. ``Baseline'' denotes \textsc{Transformer-Base}.}
\label{fig:low-rank}
\end{figure}
\paragraph{Effect of Low-Rank Constraint} In this experiment, we study the impact of low-rank constraint $r$ (Equation~\ref{eq:low-rank}) on bilinear pooling, as shown in Figure~\ref{fig:low-rank}.
It is interesting to investigate whether the model with a smaller setting of $r$ can also achieve considerable results.
We examine groups of multi-head composition models with different $r$ on the En$\Rightarrow$De translation task. From Figure~\ref{fig:low-rank}, we can see that the translation performance increases with larger $r$ values, and the model with $r=512$ achieves the best performance\footnote{The maximum value of $r$ is 512 since the rank of a matrix ${\bf W} \in \mathbb{R}^{Nd \times Nd}$ is bounded by $Nd$.}. Note that even when the dimensionality $r$ is reduced to 32, our model can still consistently outperform the baseline with only 0.9M parameters added (not shown in the figure). This reconfirms our claim that the improvements in BLEU score cannot simply be attributed to the additional parameters.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{length.pdf}
\caption{BLEU scores on the En$\Rightarrow$De test set with respect to various input sentence lengths. ``Baseline'' denotes \textsc{Transformer-Base}.}
\label{fig:length}
\end{figure}
\paragraph{Length Analysis}
We group sentences of similar lengths together and compute the BLEU score for each group, as shown in Figure~\ref{fig:length}. Generally, the performance of \textsc{Transformer} goes up with the increase of input sentence lengths, which is different from the results on single-layer RNNSearch models (i.e., performance decreases on longer sentences) as shown in~\cite{tu2016modeling}. We attribute this phenomenon to the advanced \textsc{Transformer} architecture including multiple layers, multi-head attention and feed-forward networks.
Clearly, our NI-based approaches outperform the baseline \textsc{Transformer} in all length segments, including only using multi-layer composition or multi-head composition, which verifies our contribution that representation composition indeed benefits \textsc{San}s. Moreover, multi-layer composition and multi-head composition are complementary to each other regarding different length segments, and simultaneously applying them achieves further performance gain.
\section{Related Work}
\paragraph{Bilinear Pooling}
Bilinear pooling has been well studied in the computer vision community; it was first introduced by \newcite{Tenenbaum:2000:NeuroComputation} to separate style and content. Bilinear pooling has since been used to replace fully-connected layers in neural networks by introducing second-order statistics, and has been applied to fine-grained recognition~\cite{Lin:2015:ICCV}. While bilinear models provide richer representations than linear models~\cite{goudreau1994firstorder}, bilinear pooling produces a high-dimensional feature of quadratic expansion, which may constrain model structures and computational resources. To address this challenge, \newcite{Gao:2016:CVPR} propose compact bilinear pooling through random projections for image classification, which is further applied to visual question answering~\cite{fukui2016multimodal}.
~\newcite{kim2016hadamard} and~\newcite{Kong:2017:CVPR} independently propose low-rank approximation on the transformation matrix of bilinear pooling, which aims to reduce the model size and corresponding computational burden. Their models are applied to visual question answering and fine-grained image classification, respectively.
While most work focuses on computer vision tasks, our work is among the few studies~\cite{dozat2017ICLR,Delbrouck:2017:arXiv} which prove that the idea of bilinear pooling can have promising applications in NLP tasks. Our approach differs in two respects: 1) we apply bilinear pooling to representation composition in NMT, while they apply it to the attention model in either parsing or multimodal NMT; and 2) we extend the original bilinear pooling to incorporate first-order representations, which consistently improves translation performance in different scenarios (Figure~\ref{fig:residual}).
\paragraph{Multi-Layer Representation Composition}
Exploiting multi-layer representations has been well studied in the NLP community. \newcite{Peters:2018:NAACL} have found that linearly combining different layers is helpful and improves performance on various NLP tasks.
In the context of NMT, several neural network based approaches to fuse information across historical layers have been proposed, such as dense information flow~\cite{Shen:2018:NAACL}, iterative and hierarchical aggregation~\cite{Dou:2018:EMNLP}, routing-by-agreement~\cite{Dou:2019:AAAI}, and transparent attention~\cite{Bapna:2018:EMNLP}.
In this work, we consider representation composition from a novel perspective of \emph{modeling neuron interactions}, which we prove is a promising and effective direction.
Besides, we generalize layer aggregation to representation composition in \textsc{San}s by also considering multi-head composition, and we propose a unified NI-based approach to aggregate both types of representations.
\paragraph{Multi-Head Self-Attention}
Multi-head attention has shown promising results in many NLP tasks, such as machine translation~\cite{Vaswani:2017:NIPS} and semantic role labeling~\cite{Strubell:2018:EMNLP}. The strength of multi-head attention lies in the rich expressiveness by using multiple attention functions in different representation subspaces.
Previous work shows that multi-head attention can be further enhanced by encouraging individual attention heads to extract distinct information.
For example,~\newcite{Li:2018:EMNLP} propose disagreement regularizations to encourage different attention heads to encode distinct features, and~\newcite{Strubell:2018:EMNLP} employ different attention heads to capture different linguistic features.
\newcite{Li:2019:NAACL} is a pioneering work on empirically validating the importance of information aggregation for multi-head attention. Along the same direction, we apply the NI-based approach to compose the representations learned by different attention heads (as well as different layers), and empirically reconfirm their findings.
\section{Conclusion}
In this work, we propose NI-based representation composition for \textsc{MlMhSan}s, by modeling strong neuron interactions in the representation vectors generated by different layers and attention heads. Specifically, we employ bilinear pooling to capture pairwise multiplicative interactions among individual neurons, and propose \emph{extended bilinear pooling} to further incorporate first-order representations.
Experiments on machine translation tasks show that our approach effectively and efficiently improves translation performance over the \textsc{Transformer} model, and multi-head composition and multi-layer composition are complementary to each other. Further analyses reveal that our model makes the encoder of \textsc{Transformer} capture more syntactic and semantic properties of input sentences.
Future work includes exploring more neuron interaction based approaches for representation composition other than the bilinear pooling, and applying our model to a variety of network architectures such as \textsc{Bert}~\cite{bert2018} and \textsc{Lisa}~\cite{Strubell:2018:EMNLP}.
\section{Acknowledgement}
The work described in this paper was partially supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (No. CUHK 14210717 of the General Research Fund), and Microsoft Research Asia (2018 Microsoft Research Asia Collaborative Research Award). We thank the anonymous reviewers for their comments and suggestions.
Understanding the nucleosynthesis and evolution of Asymptotic Giant
Branch (AGB) stars are of primary importance as they are the major
factories of some key elements in the Universe (Busso et al.
1999, Herwig 2005). They are the predominant sites for the slow
neutron-capture nucleosyntesis, and major contributors of elements heavier
than iron; upto half of all the heavy elements are produced through
s-process (Busso et al. 1999). There are certain isotopes like
$^{86}$Sr, $^{96}$Mo, $^{104}$Pd, $^{116}$Sn etc., which are known to be
produced only through the s-process. It has been estimated that a third
of the total carbon content in the Galaxy is produced in AGB stars, which
is about the same amount as produced in CCSNe and Wolf-Rayet stars
(Dray et al. 2003). Besides these, the intermediate-mass AGB stars are
the major producers of $^{14}$N in the Galaxy (Henry et al. 2000,
Merle et al. 2016).
The exact physical conditions and nucleosynthetic processes occuring at the
interior of AGB stars are not clearly understood that hinders a better
understanding of the contribution of these stars to the Galactic chemical
enrichment. This demands a need for detailed chemical composition studies
for an extended sample of AGB stars. However, the spectra of the AGB
stars are complicated as it is overwhelmed with the molecular
contributions arising due to their low photospheric temperature.
This makes the derivation of exact elemental abundance difficult.
In this regard, the extrinsic stars, which are known to have received
products of AGB phase of evolution via binary mass transfer mechanisms,
form vital tools to trace the AGB nucleosynthesis. The important classes
of such extrinsic stars are barium stars as the analysis of their generally
hotter spectra is more accurate (Bidelman \& Keenan 1951),
CH stars (Keenan 1942) and CEMP-s stars (Beers \& Christlieb 2005).
Most of them are radial
velocity variables (McClure et al. 1980, McClure 1983, 1984,
McClure \& Woodsworth 1990, Udry et al. 1998a,b, Lucatello et al. 2005)
associated with a now invisible white dwarf companion.
Detailed studies on barium stars include Allen \& Barbuy (2006a),
Smiljanic et al. (2007), de Castro et al. (2016), Yang et al. (2016) and
many others. However, these studies have not included abundances of
several heavy elements such as Rb for all the stars and also for C, N
and O. In this work, we have undertaken to carry out a detailed
spectroscopic analysis for a sample of ten barium/CH star candidates
and derived whenever possible the abundances of C, N, O and the neutron
density dependent [Rb/Zr] abundance ratio to investigate the neutron
source in the former companion AGB stars. There are two important
neutron sources for the s-process in the He intershell of AGB
stars: $^{13}$C($\alpha$, n)$^{16}$O reaction during the radiative
inter-pulse period and $^{22}$Ne($\alpha$, n)$^{25}$Mg reaction
during the convective thermal pulses. $^{13}$C($\alpha$, n)$^{16}$O
reaction is the dominant neutron source in low-mass AGB stars with
initial mass $\leq$ 3 M$_{\odot}$. The temperature
T $\geq$ 90 $\times$ 10$^{6}$ K required for the operation of this
reaction is provides a neutron density N$_{n}$ $\sim$ 10$^{8}$ cm$^{-3}$
in a timescale of $\geq$ 10$^{3}$ years (Straniero et al. 1995,
Gallino et al. 1998, Goriely \& Mowlavi 2000, Busso et al. 2001).
A temperature 300$\times$10$^{6}$ K, required for the activation
of $^{22}$Ne source is achieved during the TPs in
intermediate-mass AGB stars (initial mass $\geq$ 4 M$_{\odot}$).
It produces a neutron density N$_{n}$ $\sim$ 10$^{13}$ cm$^{-3}$
in a timescale of $\sim$ 10 years. The temperature required for
the $^{22}$Ne source is reached in low-mass stars during the
last few TPs providing N$_{n}$ $\sim$ 10$^{10}$ - 10$^{11}$ cm$^{-3}$
(Iben 1975, Busso et al. 2001). The Rb is produced only when
the N$_{n}$ $>$ 5$\times$ 10$^{8}$ cm$^{-3}$, otherwise Sr, Y,
Zr etc. are produced. Hence, the [Rb/Zr] ratio can be used as
an indicator of mass of AGB stars. We could determine
Rb abundance in four of our program stars; HD~32712, HD~36650,
HD~179832 and HD~211173.
In section 2, we describe the source of the spectra used in this
study. Section 3 describes the methodology used for the determination
of atmospheric parameters, elemental abundances and radial velocities.
A discussion on the stellar mass determination is also provided in
the same section. A comparison of our result with the literature
values are presented in section 3. In section 4, we discuss the
procedures adopted for the abundance determination of different
elements. Section 5 provides a discussion on abundance uncertainties.
Section 6 provides the discussion on the elemental abundance ratios and their
interpretations based on the existing nucleosynthesis theories. This section
also provides a comparison of the observational data with the FRUITY models
of Cristallo et al. (2009, 2011, 2015b) and a parametric model
based analysis. A discussion on the individual stars are also
given in the same section. Conclusions are drawn in section 7.
\section{OBJECT SELECTION, DATA ACQUISITION AND DATA REDUCTION}
The objects analyzed in this study are taken from the CH star
catalog of Bartkevicius (1996). Six of them are also found listed
in the barium star catalog of L\"u (1991). These stars lie among
the typical CH stars in the color – magnitude
((B-V) v/s MV) diagram. The spectra of these
objects are acquired from three different sources. For HD~219116,
HD~154276 and HD~147609, the high resolution spectra
($\lambda/\delta\lambda \sim 60,000 $) were obtained on October 2015, May 2017
and June 2017 using the high resolution fiber fed Hanle Echelle
SPectrograph (HESP) attached to the 2m Himalayan Chandra Telescope
(HCT) at the Indian Astronomical Observatory, Hanle. The wavelength
coverage of the HESP spectra spans from 3530 - 9970 {\rm \AA}. The
Data are reduced following the standard procedures using various
tasks in Image Reduction and Analysis Facility
(IRAF\footnote{IRAF is distributed by the National Optical Astronomical
Observatories, which is operated by the Association for Universities
for Research in Astronomy, Inc., under contract to the National
Science Foundation}) software. For HD~24035 and HD~207585 high
resolution spectra ($\lambda/\delta\lambda \sim 48,000 $) are
obtained with the UVES (Ultraviolet and Visual Echelle Spectrograph)
of the 8.2m Very Large Telescope (VLT) of ESO at Cerro Paranal, Chile.
A high resolution spectrum of HD~219116 is also obtained from UVES/VLT.
The wavelength coverage of the UVES spectra spans from 3290 - 6650 {\rm \AA}.
For HD~32712, HD~36650, HD~94518, HD~211173 and HD~179832, high
resolution spectra ($\lambda/\delta\lambda \sim 48,000 $) are obtained
with the FEROS (Fiber-fed Extended Range Optical Spectrograph) of the
1.52 m telescope of ESO at La Silla, Chile. The wavelength coverage
of the FEROS spectra spans from 3520 - 9200 {\rm \AA}. Basic data of
the program stars along with the source of spectra are given in the
Table \ref{basic data of program stars}. A few sample spectra are
shown in Figure \ref{sample_spectra}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{sample_spectra.eps}
\caption{ Sample spectra of the program stars in the wavelength region
5840 to 5863 {\bf {\rm \AA}}.}\label{sample_spectra}
\end{figure}
{\footnotesize
\begin{table*}
\caption{Basic data for the program stars.}\label{basic data of program stars}
\begin{tabular}{lcccccccccc}
\hline
Star &RA$(2000)$ &Dec.$(2000)$ &B &V &J &H &K &Exposure &Date of obs. & Source \\
& & & & & & & &(seconds) & & of spectrum\\
\hline
HD 24035 &03 43 42.53 &$-72$ 36 32.80 &9.74 &8.51 &6.567 &6.043 &5.919 &900 &05/04/2002 &UVES \\
HD 32712 &05 01 34.91 &$-$58 31 15.05 &9.71 &8.55 &6.634 &6.054 &5.912 &1200 &11/11/1999 &FEROS \\
HD 36650 &05 27 42.92 &$-$68 04 27.16 &9.91 &8.79 &6.812 &6.297 &6.190 &1200 &10/11/1999 &FEROS \\
HD 94518 &10 54 12.20 &$-$31 09 34.58 &8.95 &8.36 &7.182 &6.891 &6.824 &900 &02/01/2000 &FEROS \\
HD 147609 &16 21 51.99 &+27 22 27.19 &9.69 &9.18 &8.211 &8.035 &7.948 &2400(3) &01/06/2017 &HESP \\
HD 154276 &17 03 49.15 &+17 11 21.08 &9.80 &9.13 &7.911 &7.624 &7.549 &2400(3) &06/05/2017 &HESP \\
HD 179832 &19 16 30.00 &$-$49 13 13.01 &9.46 &8.44 &6.660 &6.163 &6.031 &600 &14/07/2000 &FEROS \\
HD 207585 &21 50 34.71 &$-$24 11 11.68 &10.50 &9.78 &8.633 &8.341 &8.301 &240 &24/04/2002 &UVES \\
HD 211173 &22 15 57.01 &$-$31 51 38.52 &9.43 &8.49 &6.810 &6.332 &6.218 &600 &14/07/2000 &FEROS \\
HD 219116 &23 13 30.24 &$-$17 22 08.71 &10.29 &9.25 &7.602 &7.137 &7.012 &240 &19/05/2002 &UVES \\
& & & & & & & &2400(3) &30/10/2015 &HESP \\
\hline
\end{tabular}
The numbers in the parenthesis with exposures indicate the number of
frames taken.
\end{table*}
}
\section{STELLAR ATMOSPHERIC PARAMETERS AND RADIAL VELOCITY ESTIMATION}
We have estimated the photometric temperature of the program stars using the
temperature calibration equations of Alonso et al. (1994, 1996) for dwarfs
and Alonso et al. (1999, 2001) for giants, and following the detailed
procedures as described in our earlier papers (Goswami et al. 2006, 2016).
We made use of the 2MASS J, H, K magnitudes taken from SIMBAD
(Cutri et al. 2003) for this calculation. The photometric temperature
estimates were used as initial guesses for deriving the
spectroscopic effective temperature of each object.
To determine the stellar atmospheric parameters, we have used a set of
clean, unblended Fe I and Fe II lines with excitation potential in the
range 0.0 - 6.0 eV and equivalent width 20 - 180 {\rm m\AA}.
IRAF software is used for the equivalent width measurements. A
pseudo-continuum (normalized to unity) is fitted to the observed spectrum using a
spline function. The equivalent width of each spectral line is
measured by fitting a Gaussian profile. An initial
model atmosphere was selected from the Kurucz grid of model atmospheres
with no convective overshooting (http://cfaku5.cfa.harvard.edu/) using
the photometric temperature estimate and the guess of log g value for
giants/dwarfs. A final model atmosphere was adopted iteratively,
starting from the initially selected one, using the most recent version
of the radiative transfer code MOOG (Sneden 1973) based on the
assumptions of Local Thermodynamic Equilibrium (LTE).
\par The effective temperature is determined by the method of excitation
equilibrium, forcing the slope of the abundances from Fe I lines versus
excitation potentials of the measured lines to be zero. The micro-turbulent
velocity at a fixed effective temperature is determined by demanding that
there be no dependence of the derived Fe I abundance on the reduced
equivalent width of the corresponding lines; it is fixed at the value
which gives zero slope in the plot of Fe I abundance versus equivalent
width. The surface gravity, log\,g, is
determined by means of ionization balance, that is, by forcing the Fe I
and Fe II lines to produce the same abundance at the selected effective
temperature and microturbulent velocity. The estimated abundances from
Fe I and Fe II lines as functions of excitation potential and
equivalent width, respectively, are shown in figures that are made
available as on-line materials.
A comparison of our results with the literature values whenever
available shows a close match well within the error limits.
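As an illustration of the excitation-equilibrium criterion described above, the toy Python snippet below checks the slope of Fe I abundance against lower excitation potential; the line data are invented for demonstration and are not our measured values:
\begin{verbatim}
import numpy as np

# Invented (chi [eV], log eps) pairs for a few Fe I lines;
# at the adopted T_eff the fitted slope should be ~zero.
chi = np.array([0.9, 2.2, 2.8, 3.4, 4.2, 4.6])
logeps = np.array([7.02, 7.05, 6.98, 7.01, 7.03, 6.99])
slope, intercept = np.polyfit(chi, logeps, 1)
print("slope = %+.4f dex/eV" % slope)
\end{verbatim}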
\par Radial velocities of the program stars are calculated
using a set of clean and unblended lines of several elements.
In Table \ref{atmospheric parameters} we present the derived atmospheric
parameters and radial velocities of the program stars. Our radial
velocity estimates are used to study the kinematic properties of the stars.
A table giving the results from kinematic analysis is presented in the Appendix (Table \ref{kinematic_analysis}).
Three objects in our sample, HD~24035, HD~147609 and HD~207585,
are confirmed binaries with orbital periods of 377.83 $\pm$ 0.35 days
(Udry et al. 1998a), 672 $\pm$ 2 days (Escorza et al. 2019), and
1146 $\pm$ 1.5 days (Escorza et al. 2019) respectively. Our estimated radial
velocity ($-$1.56 km s$^{-1}$) for HD~24035 is slightly higher than the range
of radial velocities found in the literature ($-$2.14 to $-$19.81 km s$^{-1}$) for this
object. However, for HD~207585 ($-$65.9 km s$^{-1}$) and
HD~147609 ($-$18.17 km s$^{-1}$), our estimates fall well within the ranges of
velocities available in the literature, i.e., ($-$52.2 to $-$74.1 km s$^{-1}$)
and ($-$19.2 to $-$11.9 km s$^{-1}$) respectively.
{\footnotesize
\begin{table*}
\caption{Derived atmospheric parameters for the program stars.} \label{atmospheric parameters}
\begin{tabular}{lccccccccc}
\hline
Star &T$\rm_{eff}$ & log g &$\zeta$ & [Fe I/H] &[Fe II/H] & V$_{r}$ & V$_{r}$ \\
& (K) & cgs &(km s$^{-1}$) & & & (km s$^{-1}$) & (km s$^{-1}$) \\
& $\pm 100$ & $\pm 0.2$& $\pm 0.2$ & & & & \\
\hline
HD 24035 & 4750 & 2.20 & 1.58 & $-$0.51$\pm$0.19 & $-$0.50$\pm$0.16 & $-$1.56$\pm$0.25 & $-$12.51$\pm$0.13\\
HD 32712 & 4550 & 2.53 & 1.24 & $-$0.25$\pm$0.12 & $-$0.25$\pm$0.12 & +10.37$\pm$0.02 & +11.27$\pm$0.16 \\
HD 36650 & 4880 & 2.40 & 1.30 & $-$0.02$\pm$0.12 & $-$0.02$\pm$0.14 & +36.40$\pm$0.19 & +31.52$\pm$0.47 \\
HD 94518 & 5700 & 3.86 & 1.30 & $-$0.55$\pm$0.10 & $-$0.55$\pm$0.12 & +92.20$\pm$0.43 & +92.689$\pm$0.015 \\
HD 147609 & 6350 & 3.50 & 1.55 & $-$0.28$\pm$0.16 & $-$0.28$\pm$0.12 & $-$18.17$\pm$1.47 & $-$17.11$\pm$0.82\\
HD 154276 & 5820 & 4.28 & 0.63 & $-$0.09$\pm$0.13 & $-$0.10$\pm$0.14 & $-$64.17$\pm$1.42 & $-$55.94$\pm$0.17\\
HD 179832 & 4780 & 2.70 & 0.99 & +0.23$\pm$0.04 & +0.22$\pm$0.06 & +6.73$\pm$0.03 & +7.64$\pm$0.13\\
HD 207585 & 5800 & 3.80 & 1.00 & $-$0.38$\pm$0.12 & $-$0.38$\pm$0.11 & $-$65.97$\pm$0.07 & $-$60.10$\pm$1.20 \\
HD 211173 & 4900 & 2.60 & 1.15 & $-$0.17$\pm$0.10 & $-$0.17$\pm$0.09 & $-$27.84$\pm$0.25 & $-$28.19$\pm$0.63\\
HD 219116 & 5050 & 2.50 & 1.59 & $-$0.45$\pm$0.11 & $-$0.44$\pm$0.11 & $-$40.90$\pm$0.25 & $-$11.00$\pm$7.30 \\
\hline
\end{tabular}
In Columns 7 and 8 we present radial velocities from the respective spectra and SIMBAD respectively
\end{table*}
}
\par We have determined the mass of the program stars from their
location in the Hertzsprung-Russell diagram (Girardi et al. 2000 database
of evolutionary tracks) using the spectroscopic temperature estimate,
T$\rm_{eff}$, and the luminosity, log$(L/L_{\odot})$.\\
log (L/L$_{\odot}$)=0.4(M$_{bol\odot}$ - V - 5 - 5log ($\pi$) + A$_{V}$ - BC) \\
The visual magnitudes V are taken from Simbad and the parallaxes $\pi$
from Gaia DR2 (https://gea.esac.esa.int/archive/). The bolometric
correction, BC, is calculated using the empirical calibrations of
Alonso et al. (1995) for dwarfs and Alonso et al. (1999) for giants.
The interstellar extinction A$_{V}$ is calculated using the calibration
equations given in Chen et al. (1998). From the estimated mass,
log\,{g} is calculated using \\
log (g/g$_{\odot}$)= log (M/M$_{\odot}$) + 4log (T$_{eff}$/T$_{eff\odot}$) - log (L/L$_{\odot}$)\\
We have adopted the solar values log g$_{\odot}$ = 4.44, T$_{eff\odot}$ = 5770 K and M$_{bol\odot}$ = 4.74 mag.
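The two relations above can be evaluated directly, as in the short Python sketch below; the A$_{V}$ and BC values used here are placeholders for illustration, not the calibrated values adopted in this work:
\begin{verbatim}
import math

V, plx_mas = 9.13, 11.554     # V mag, parallax (mas)
A_V, BC = 0.05, -0.10         # placeholders, illustrative
Teff, mass = 5820.0, 1.0      # K, solar masses
M_bol_sun, Teff_sun, logg_sun = 4.74, 5770.0, 4.44

pi_arcsec = plx_mas / 1000.0
logL = 0.4 * (M_bol_sun - V - 5.0
              - 5.0 * math.log10(pi_arcsec) + A_V - BC)
logg = (logg_sun + math.log10(mass)
        + 4.0 * math.log10(Teff / Teff_sun) - logL)
print("log(L/Lsun) = %.3f, log g = %.2f" % (logL, logg))
\end{verbatim}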
\par We have used z = 0.004 tracks for HD~24035 and HD~94518;
z = 0.008 for HD~207585 and HD~219116; z = 0.019 for HD~32712,
HD~36650, HD~147609, HD~154276 and HD~211173, and z = 0.030 for
HD~179832. As an example, the evolutionary tracks for a few objects
are shown in Figure \ref{track_019}. The mass estimates are presented
in Table \ref{mass age}.
{\footnotesize
\begin{table*}
\caption{Estimates of log\,{g} and mass using the parallax method} \label{mass age}
\begin{tabular}{lcccccc}
\hline
Star name & Parallax & $M_{bol}$ & log(L/L$_{\odot}$) & Mass(M$_{\odot}$) & log g & log g (spectroscopic) \\
& (mas) & & & & (cgs) & (cgs) \\
\hline
HD 24035 & 4.612$\pm$0.101 & 1.401$\pm$0.05 & 1.339$\pm$0.02 & 0.70$\pm$0.21 & 2.61$\pm$0.02 & 2.20 \\
HD 32712 & 2.621$\pm$0.026 & 0.081$\pm$0.022 & 1.868$\pm$0.01 & 1.80$\pm$0.26 & 2.41$\pm$0.005 & 2.53 \\
HD 36650 & 2.655$\pm$0.027 & 0.474$\pm$0.023 & 1.710$\pm$0.01 & 2.20$\pm$0.26 & 2.78$\pm$0.01 & 2.40 \\
HD 94518 & 13.774$\pm$0.05 & 3.872$\pm$0.01 & 0.351$\pm$0.003 & 0.85$\pm$0.06 & 4.00$\pm$0.005 & 3.86 \\
HD 147609 & 4.301$\pm$0.107 & 1.955$\pm$0.055 & 1.118$\pm$0.02 & 1.70$\pm$0.05 & 3.71$\pm$0.02 & 3.50 \\
HD 154276 & 11.554$\pm$0.025 & 4.339$\pm$0.005 & 0.164$\pm$0.002 & 1.00$\pm$0.05 & 4.29$\pm$0.002 & 4.28 \\
HD 179832 & 2.914$\pm$0.052 & 0.216$\pm$0.041 & 1.814$\pm$0.02 & 2.5$\pm$0.28 & 2.70$\pm$0.02 & 2.70 \\
HD 207585 & 5.3146$\pm$0.407 & 3.313$\pm$0.165 & 0.575$\pm$0.065 & 1.05$\pm$0.05 & 3.90$\pm$0.045 & 3.80 \\
HD 211173 & 3.387$\pm$0.066 & 0.839$\pm$0.042 & 1.564$\pm$0.02 & 2.20$\pm$0.24 & 2.93$\pm$0.02 & 2.60 \\
HD 219116 & 1.584$\pm$0.044 & 0.002$\pm$0.06 & 1.901$\pm$0.02 & 2.35$\pm$0.17 & 2.68$\pm$0.015 & 2.50 \\
\hline
\end{tabular}
\end{table*}
}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{track_019_2k19_sep_16.eps}
\caption{The locations of HD~32712, HD~36650, HD~154276, HD~211173 and HD~147609 in the HR diagram.
The evolutionary tracks for 0.9, 1.0, 1.2, 1.7, 1.8, 2.0, 2.2, and 2.5 M$_{\odot}$
are shown from bottom to top for z = 0.019.} \label{track_019}
\end{figure}
{\footnotesize
\begin{table*}
\caption{Comparison of estimated stellar parameters with literature values} \label{Comparison }
\begin{tabular}{lccccccc}
\hline
Star & T$_{eff}$ & log g &$\zeta$ & [Fe I/H] & [Fe II/H] & Ref. \\
& (K) & &(km s$^{-1}$) & & \\
\hline
HD~24035 & 4750 & 2.20 & 1.58 & $-$0.51 & $-$0.50 & 1 \\
& 4700 & 2.50 & 1.30 & $-$0.23 & $-$0.28 & 2 \\
& 4500 & 2.00 & - & $-$0.14 & - & 3 \\
HD~32712 & 4550 & 2.53 & 1.24 & $-$0.25 & $-$0.25 & 1 \\
& 4600 & 2.10 & 1.30 & $-$0.24 & $-$0.25 & 2 \\
HD~36650 & 4880 & 2.40 & 1.30 & $-$0.02 & $-$0.02 & 1 \\
& 4800 & 2.30 & 1.50 & $-$0.28 & $-$0.28 & 2 \\
HD~94518 & 5700 & 3.86 & 1.30 & $-$0.55 & $-$0.55 & 1 \\
& 5859 & 4.20 & 4.15 & $-$0.56 & - & 4 \\
& 5859 & 4.15 & 1.20 & $-$0.49 & $-$0.50 & 5 \\
& 5709 & 3.86 & 2.23 & $-$0.84 & - & 6 \\
HD~147609 & 6350 & 3.50 & 1.55 & $-$0.28 & $-$0.28 & 1 \\
& 6411 & 3.90 & 1.26 & $-$0.23 & - & 7 \\
& 5960 & 3.30 & 1.50 & $-0.45$ & $+0.08$ & 8 \\
& 6270 & 3.50 & 1.20 & - & - & 9 \\
& 6300 & 3.61 & 1.20 & - & - & 10 \\
HD~154276 & 5820 & 4.28 & 0.63 & $-$0.09 & $-$0.10 & 1 \\
& 5722 & 4.28 & 0.93 & $-$0.29 & - & 5 \\
& 5731 & 4.35 & 1.28 & $-$0.30 & - & 11 \\
HD~179832 & 4780 & 2.70 & 0.99 & +0.23 & +0.22 & 1\\
HD~207585 & 5800 & 3.80 & 1.00 & $-$0.38 & $-$0.38 & 1 \\
& 5800 & 4.00 & - & $-$0.20 & - & 3 \\
& 5400 & 3.30 & 1.80 & $-$0.57 & - & 12 \\
& 5400 & 3.50 & 1.50 & $-$0.50 & - & 13 \\
HD~211173 & 4900 & 2.60 & 1.15 & $-$0.17 & $-$0.17 & 1 \\
& 4800 & 2.50 & - & $-$0.12 & - & 3 \\
HD~219116 & 5050 & 2.50 & 1.59 & $-$0.45 & $-$0.44 & 1 \\
& 4900 & 2.30 & 1.60 & $-$0.61 & $-$0.62 & 2 \\
& 4800 & 1.80 & - & $-$0.34 & - & 3 \\
& 5300 & 3.50 & 2.00 & $-$0.30 & - & 14 \\
& 5300 & 3.50 & - & $-$0.34 & - & 15 \\
\hline
\end{tabular}
References: 1. Our work, 2. de Castro et al. (2016), 3. Masseron et al. (2010),
4. Battistini \& Bensby (2015), 5. Bensby et al. (2014), 6. Axer et al. (1994),
7. Escorza et al. (2019) 8. Allen \& Barbuy (2006a), 9. North et al. (1994a),
10. Th\'evenin \& Idiart (1999), 11. Ramirez et al. (2013), 12. Luck \& Bond (1991),
13. Smith \& Lambert (1986a), 14. Smith et al. (1993), 15. Cenarro et al. (2007) \\
\end{table*}
}
\section{ABUNDANCE DETERMINATION}
Abundances of most of the elements are determined from the measured
equivalent width of lines of the neutral and ionized atoms using the
most recent version of MOOG and the adopted model atmospheres.
Absorption lines corresponding to different elements are identified
by comparing closely the spectra of program stars with the Doppler
corrected spectrum of the star Arcturus. The log $gf$ and the lower
excitation potential values of the lines are taken from the Kurucz
database of atomic line lists. The equivalent widths of the spectral
lines are measured using various tasks in IRAF.
A master line list including all the elements was generated.
For the elements showing hyper-fine splitting and for molecular bands,
spectrum synthesis of MOOG was used to find the abundances.
Elements Sc, V, Mn, Co, Cu, Ba, La and Eu are affected by hyperfine
splitting. The hyperfine components of Sc and Mn are taken
from Prochaska \& McWilliam (2000); V, Co and Cu from Prochaska et al. (2000);
Ba from McWilliam (1998); La from Jonsell et al. (2006); and Eu from Worley
et al. (2013). All the abundances are found relative to the respective
solar values (Asplund et al. 2009).
\par The abundance estimates are given in Tables \ref{abundance_table1}
through \ref{abundance_table3} and the lines used for the
abundance estimation are presented in Tables \ref{linelist1} and
\ref{linelist2}. The detailed abundance analyses and discussion
are given in section 6.
\begin{figure}
\centering
\includegraphics[width=\columnwidth, height= \columnwidth]{OI_6300_2k19_sep_16.eps}
\includegraphics[width=\columnwidth, height= \columnwidth]{OI_triplet_2k19_sep_16.eps}
\caption{ Synthesis of [O I] line around 6300 {\rm \AA} (Top panel) and
O I triplet around 7770 {\rm \AA} (Bottom panel, LTE abundance estimates).
Dotted line represents synthesized spectra and the solid line indicates
the observed spectra. The short dashed line represents the synthetic
spectrum corresponding to $\Delta$[O/Fe] = $-$0.3 and the long dashed line
corresponds to $\Delta$[O/Fe] = +0.3.} \label{OI_synth}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{carbon_5165_2k19_sep_16.eps}
\caption{ Synthesis of C$_{2}$ band around 5165 {\rm \AA}. Dotted line
represents synthesized spectra and the solid line indicates the observed
spectra. The short dashed line represents the synthetic spectrum
corresponding to $\Delta$[C/Fe] = $-$0.3 and the long dashed line
corresponds to $\Delta$[C/Fe] = +0.3.} \label{carbon_5165}
\end{figure}
\section{ABUNDANCE UNCERTAINTIES}
The uncertainties in the elemental abundances have two main components:
random error and systematic error. Random error arises from the
uncertainties in the line parameters such as measured equivalent width,
line blending and oscillator strength. Since the random error varies
inversely as the square-root of the number of lines, we can reduce
this error by using the maximum possible number of lines. Systematic
error is due to the uncertainties in the adopted stellar atmospheric
parameters.
The total uncertainty in the elemental abundance log $\epsilon$ is
calculated as
\[
\sigma_{\log\epsilon}^{2} = \sigma_{ran}^{2}
+ \left(\frac{\partial \log\epsilon}{\partial T_{eff}}\right)^{2} \sigma_{T_{eff}}^{2}
+ \left(\frac{\partial \log\epsilon}{\partial \log g}\right)^{2} \sigma_{\log g}^{2}
+ \left(\frac{\partial \log\epsilon}{\partial \zeta}\right)^{2} \sigma_{\zeta}^{2}
+ \left(\frac{\partial \log\epsilon}{\partial {\rm [Fe/H]}}\right)^{2} \sigma_{\rm [Fe/H]}^{2}
\]
\noindent where $\sigma_{ran} = \sigma_{s}/\sqrt{N}$, and $\sigma_{s}$
is the standard deviation of the abundances derived from the $N$
lines of the particular species. The typical uncertainties in the
stellar atmospheric parameters are
T$_{eff}$ $\sim$ $\pm$100 K, log g $\sim$ $\pm$0.2 dex, $\zeta$ $\sim$ $\pm$0.2 km s$^{-1}$ and [Fe/H] $\sim$ $\pm$0.1 dex. The abundance uncertainty arising
from the error in each stellar atmospheric parameter is estimated by
varying one parameter at a time by an amount equal to its corresponding
uncertainty, keeping the others fixed, and computing the changes in
the abundances. We have applied this procedure to a representative star
in our sample, HD~211173, with the assumption that the uncertainties
due to different parameters are independent, following de Castro
et al. (2016), Karinkuzhi et al. (2018) and Cseh et al. (2018). The
estimated differential abundances are given in
Table \ref{differential_abundance}. The procedure has been applied
to the abundances estimated from the equivalent width measurement as
well as the spectral synthesis calculation.
Finally, the uncertainty in [X/Fe] is calculated as
$\sigma_{[X/Fe]}^{2} = \sigma_{X}^{2} + \sigma_{Fe}^{2}$.
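A minimal Python sketch of this quadrature budget is given below; the individual abundance shifts are illustrative stand-ins in the style of Table~\ref{differential_abundance}, not the actual tabulated values:
\begin{verbatim}
import math

sigma_ran = 0.05            # sigma_s / sqrt(N)
# Abundance responses to +100 K in T_eff, +0.2 dex in
# log g, +0.2 km/s in zeta, +0.1 dex in [Fe/H]
# (illustrative values only):
shifts = [0.10, 0.05, -0.08, 0.03]
sigma_X = math.sqrt(sigma_ran**2
                    + sum(s**2 for s in shifts))
sigma_Fe = 0.18             # same recipe applied to Fe
sigma_XFe = math.sqrt(sigma_X**2 + sigma_Fe**2)
print("sigma_logeps = %.2f" % sigma_X)
print("sigma_[X/Fe] = %.2f" % sigma_XFe)
\end{verbatim}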
{\footnotesize
\begin{table*}
\caption{Differential Abundance ($\Delta$log$\epsilon$) of different species due to the variations
in stellar atmospheric parameters for HD~211173}
\label{differential_abundance}
\resizebox{\textwidth}{!}{\begin{tabular}{lcccccccccccc}
\hline
Element & $\Delta$T$_{eff}$ & $\Delta$T$_{eff}$ & $\Delta$log g & $\Delta$log g & $\Delta$$\zeta$ & $\Delta$$\zeta$ & $\Delta$[Fe/H] & $\Delta$[Fe/H] & ($\Sigma \sigma_{i}^{2}$)$^{1/2}$ & ($\Sigma \sigma_{i}^{2}$)$^{1/2}$ & $\sigma_{[X/Fe]}$ & $\sigma_{[X/Fe]}$\\
& (+100 K) & ($-$100 K) & (+0.2 dex) & ($-$0.2 dex) & (+0.2 kms$^{-1}$) & ($-$0.2 kms$^{-1}$) & (+0.1 dex) & ($-$0.1 dex) & (+$\Delta$) & ($-$$\Delta$) & (+$\Delta$) & ($-$$\Delta$) \\
\hline
C & 0.00 & 0.00 & +0.03 & $-$0.03 & $-$0.03 & +0.03 & +0.01 & $-$0.01 & 0.04 & 0.04 & 0.19 & 0.18 \\
N & +0.10 & $-$0.10 & 0.00 & 0.00 & +0.02 & $-$0.02 & +0.05 & $-$0.05 & 0.11 & 0.11 & 0.21 & 0.21 \\
O & $-$0.19 & +0.19 & +0.06 & $-$0.06 & 0.00 & 0.00 & 0.00 & 0.00 & 0.20 & 0.20 & 0.27 & 0.26 \\
Na I & +0.07 & $-$0.08 & $-$0.02 & +0.02 & $-$0.05 & +0.05 & 0.00 & +0.01 & 0.09 & 0.10 & 0.21 & 0.21 \\
Mg I & +0.06 & $-$0.05 & 0.00 & +0.01 & $-$0.06 & +0.07 & 0.00 & +0.01 & 0.08 & 0.09 & 0.21 & 0.20 \\
Al I & +0.06 & $-$0.07 & 0.00 & 0.00 & $-$0.02 & +0.02 & 0.00 & 0.00 & 0.06 & 0.07 & 0.20 & 0.19 \\
Si I & $-$0.03 & +0.03 & +0.04 & $-$0.04 & $-$0.03 & +0.03 & +0.01 & $-$0.01 & 0.06 & 0.06 & 0.20 & 0.20\\
Ca I & +0.10 & $-$0.11 & $-$0.04 & +0.03 & $-$0.10 & +0.09 & 0.00 & 0.00 & 0.15 & 0.15 & 0.24 & 0.23 \\
Sc II & $-$0.02 & +0.02 & +0.09 & $-$0.09 & $-$0.09 & +0.08 & +0.02 & $-$0.03 & 0.13 & 0.13 & 0.22 & 0.21 \\
Ti I & +0.14 & $-$0.15 & $-$0.01 & +0.01 & $-$0.08 & +0.08 & 0.00 & 0.00 & 0.16 & 0.17 & 0.24 & 0.24 \\
Ti II & $-$0.02 & 0.00 & +0.07 & $-$0.08 & $-$0.10 & +0.09 & +0.02 & $-$0.03 & 0.13 & 0.12 & 0.23 & 0.22 \\
V I & +0.16 & $-$0.17 & $-$0.01 & 0.00 & $-$0.07 & +0.07 & $-$0.01 & +0.01 & 0.18 & 0.18 & 0.25 & 0.25 \\
Cr I & +0.13 & $-$0.13 & $-$0.02 & +0.02 & $-$0.13 & +0.12 & 0.00 & 0.00 & 0.18 & 0.18 & 0.26 & 0.25 \\
Cr II & $-$0.08 & +0.07 & +0.10 & $-$0.09 & $-$0.08 & +0.09 & +0.01 & $-$0.02 & 0.15 & 0.15 & 0.25 & 0.24 \\
Mn I & +0.09 & $-$0.10 & $-$0.02 & +0.01 & $-$0.16 & +0.14 & $-$0.01 & 0.00 & 0.18 & 0.17 & 0.26 & 0.24 \\
Fe I & +0.07 & $-$0.07 & 0.00 & $-$0.01 & $-$0.13 & +0.12 & +0.10 & $-$0.10 & 0.18 & 0.17 & -- & -- \\
Fe II & $-$0.09 & +0.07 & +0.10 & $-$0.10 & $-$0.10 & +0.09 & +0.10 & $-$0.10 & 0.20 & 0.18 & -- & -- \\
Co I & +0.07 & $-$0.07 & +0.02 & $-$0.03 & $-$0.06 & +0.06 & +0.01 & $-$0.02 & 0.09 & 0.10 & 0.20 & 0.20 \\
Ni I & +0.04 & $-$0.03 & +0.02 & $-$0.02 & $-$0.10 & +0.10 & +0.01 & $-$0.01 & 0.11 & 0.11 & 0.21 & 0.20 \\
Cu I & +0.09 & $-$0.09 & $-$0.01 & 0.00 & $-$0.15 & +0.12 & +0.03 & $-$0.02 & 0.18 & 0.15 & 0.25 & 0.23 \\
Zn I & $-$0.05 & +0.06 & +0.07 & $-$0.06 & $-$0.08 & +0.09 & +0.02 & $-$0.01 & 0.12 & 0.12 & 0.22 & 0.21 \\
Rb I & +0.10 & $-$0.10 & 0.00 & 0.00 & $-$0.03 & +0.03 & 0.00 & 0.00 & 0.10 & 0.10 & 0.21 & 0.20 \\
Sr I & +0.15 & $-$0.16 & $-$0.03 & +0.02 & $-$0.22 & +0.22 & 0.00 & +0.01 & 0.27 & 0.27 & 0.32 & 0.32 \\
Y I & +0.16 & $-$0.17 & $-$0.01 & 0.00 & $-$0.02 & +0.03 & 0.00 & +0.01 & 0.16 & 0.17 & 0.24 & 0.24 \\
Y II & $-$0.01 & 0.00 & +0.08 & $-$0.08 & $-$0.14 & +0.14 & +0.02 & $-$0.03 & 0.16 & 0.16 & 0.24 & 0.24 \\
Zr I & +0.17 & $-$0.19 & $-$0.01 & 0.00 & $-$0.03 & +0.03 & $-$0.01 & 0.00 & 0.17 & 0.19 & 0.25 & 0.26 \\
Zr II & $-$0.03 & +0.01 & +0.09 & $-$0.09 & $-$0.09 & +0.11 & +0.02 & $-$0.03 & 0.13 & 0.15 & 0.22 & 0.23 \\
Ba II & +0.02 & $-$0.03 & +0.05 & $-$0.06 & $-$0.19 & +0.15 & +0.03 & $-$0.04 & 0.20 & 0.17 & 0.27 & 0.24 \\
La II & +0.01 & 0.00 & +0.09 & $-$0.09 & $-$0.06 & +0.07 & +0.03 & $-$0.03 & 0.11 & 0.12 & 0.21 & 0.21 \\
Ce II & +0.01 & $-$0.01 & +0.09 & $-$0.08 & $-$0.11 & +0.15 & +0.04 & $-$0.03 & 0.15 & 0.17 & 0.23 & 0.25 \\
Pr II & +0.01 & $-$0.02 & +0.08 & $-$0.09 & $-$0.03 & +0.03 & +0.03 & $-$0.04 & 0.09 & 0.10 & 0.22 & 0.21 \\
Nd II & +0.01 & $-$0.02 & +0.08 & $-$0.09 & $-$0.09 & +0.09 & +0.03 & $-$0.04 & 0.12 & 0.11 & 0.22 & 0.21 \\
Sm II & +0.02 & $-$0.02 & +0.09 & $-$0.08 & $-$0.05 & +0.07 & +0.04 & $-$0.03 & 0.11 & 0.11 & 0.21 & 0.21 \\
Eu II & $-$0.02 & +0.01 & +0.09 & $-$0.09 & $-$0.03 & +0.04 & +0.03 & $-$0.03 & 0.10 & 0.10 & 0.21 & 0.20 \\
\hline
\end{tabular}}
\end{table*}
}
\section{ABUNDANCE ANALYSIS AND DISCUSSION}
\subsection{Light element abundance analysis: C, N, O, $^{12}$C/$^{13}$C,
Na, Al, $\alpha$- and Fe-peak elements}
The [O I] line at 6300.304 {\rm \AA} is used to derive
the oxygen abundances whenever possible; otherwise, the
O I triplet lines at around 7770 {\rm \AA} are used.
The O I triplet lines are known to be affected by the non-LTE
effects (Eriksson \& Toft 1979, Johnson et al. 1974, Baschek
et al. 1977, Kiselman 1993, Amarsi et al. 2016).
The corrections are made to the LTE abundance obtained with
these lines following Bensby et al. (2004) and Afsar et al. (2012).
The [O I] line at 6363.776 {\rm \AA} is found to be blended and not
usable for abundance determination in any of the stars. The spectrum
synthesis fits of the O I triplet lines for a few program stars are shown
in Figure \ref{OI_synth}. All three lines of the O I IR triplet gave
the same abundance values, except for HD 147609. In the case of HD 147609,
the lines at 7771 and 7774 {\rm \AA} gave the same abundance, and the line
at 7775 {\rm \AA} gave an abundance which is 0.15 dex lower. In this case,
we have taken the average of the three values as the final oxygen abundance.
\par We have estimated the oxygen abundance in all the program stars except
HD~24035. The derived abundance of oxygen is in the range
$-$0.26$\leq$[O/Fe]$\leq$0.97. Oxygen is underabundant in HD~36650
and HD~211173 with [O/Fe] values $-$0.23 and $-$0.26 respectively.
HD~32712 and HD~179832 show near-solar values. Purandardas et al. (2019)
found [O/Fe]$\sim$$-$0.33 for a barium star in their sample, close
to our lower limit. A mild overabundance is found in the stars
HD~154276 and HD~219116 with [O/Fe] values 0.38 and 0.21 respectively.
In the other three stars, we found an [O/Fe]$>$0.6, with
HD~207585 showing the largest enhancement of 0.97.
The first dredge-up (FDU) is not expected to alter the
oxygen abundance.
The carbon abundances are derived using the spectral synthesis
calculation of the C$_{2}$ band at 5165 {\rm \AA} (Figure \ref{carbon_5165}) for
six objects. The G-band of CH at 4300 {\rm \AA} is used for two stars for which the
C$_{2}$ band at 5165 {\rm \AA} is not usable for the abundance determination.
For the objects for which we could estimate the carbon abundance using both
the C$_{2}$ and CH bands, we find that the CH band returns a value for
carbon lower by about 0.2 to 0.3 dex. We could determine the carbon abundance
in all the objects except HD 154276 and HD 179832. Carbon is found
to be underabundant in most of the stars analyzed here. The [C/Fe] value
ranges from $-$0.28 to 0.61. The stars HD~24035, HD~147609
and HD~207585 show a mild overabundance of carbon with values
0.41, 0.38 and 0.61 respectively, whereas it is near-solar in HD~32712
and HD~219116. HD~36650, HD~94518 and HD~211173 show a mild underabundance
with [C/Fe] values $-$0.22, $-$0.28 and $-$0.23 respectively.
These values are consistent with those generally noticed in barium stars
(Barbuy et al. 1992, North et al. 1994a).
With the estimated carbon abundances, we have derived the
abundances of nitrogen using the spectrum synthesis calculation of
the $^{12}$CN lines in the 8000 {\rm \AA} region in HD~32712, HD~36650,
HD~94518 and HD~211173. In the other objects, where this region is
not usable or unavailable, the CN band at 4215 {\rm \AA} is used. The
molecular lines for CN and C$_{2}$ are taken from Brooke et al. (2013),
Sneden et al. (2014) and Ram et al. (2014).
\par The nitrogen abundance is estimated in seven of the program stars.
Estimated [N/Fe] values range from 0.24 to 1.41 dex with HD~24035 and
HD~94518 showing [N/Fe] $>$ 1.0 dex. Such higher values of nitrogen
have already been noted in some barium stars by several authors
(Smith 1984, Luck \& Lambert 1985, Barbuy et al. 1992, Allen
\& Barbuy 2006a, Smiljanic et al. 2006, Merle et al. 2016,
Karinkuzhi et al. 2018). Nitrogen enhancement with [N/Fe]$>$1 is
possible if the star was previously enriched by pollution from
a massive AGB companion experiencing Hot-Bottom Burning (HBB).
In super-massive AGB stars nitrogen can be substantially produced
at the base of the convective envelope when the temperature of the
envelope exceeds 10$^{8}$ K (Doherty et al. 2014a).
\par We could derive the carbon isotopic ratio, $^{12}$C/$^{13}$C,
using the spectral synthesis calculation of the $^{12}$CN lines at
8003.292, 8003.553, 8003.910 \AA, and $^{13}$CN features at 8004.554,
8004.728, 8004.781 \AA, for four stars, HD 32712, HD 36650, HD 211173
and HD 219116. The values for this ratio are 20.0, 7.34, 20.0 and 7.34
respectively. Values in the range 7--20 (Barbuy et al. 1992,
Smith et al. 1993, Smith 1984, Harris et al. 1985, Karinkuzhi et al. 2018)
and 13--33 (Tomkin \& Lambert 1979, Sneden et al. 1981) are found in
literature for barium stars.
\par The lower level of carbon enrichment and the low $^{12}$C/$^{13}$C ratio,
along with the larger overabundance of nitrogen, indicate that the matter
has undergone CN processing and the products have been brought to the
surface by the FDU. From their locations in the H-R diagram,
the stars for which we could estimate the $^{12}$C/$^{13}$C ratio
are on the ascent of the first giant branch (FGB).
These stars have undergone the FDU at the beginning of the FGB.
It has been noted that less evolved barium stars show higher
carbon abundance as they have not reached the
FDU (Barbuy et al. 1992, Allen \& Barbuy 2006a). Among our program stars,
HD~207585, which is on the subgiant branch, shows the maximum
enhancement of carbon. However, HD~94518 shows
the least enrichment among the program stars despite being the least
evolved one, a dwarf barium star.
According to Vanture (1992), if the accreted material is mixed into
the hydrogen burning region of the star,
either during the main sequence or the first ascent of the giant branch,
further nucleosynthesis can occur, thereby reducing the carbon
abundance. Smiljanic et al. (2006) ascribe the reduction in the
surface carbon abundance to rotational mixing.
Also, even though the star has not reached the stage of dredge-up,
the difference in mean molecular weight between the accreted material
and the pristine stellar material in the interior can induce
thermohaline mixing, and this could reduce the surface carbon abundance
by an order of magnitude compared to the unaltered case
(Stancliffe et al. 2007).
\par We have estimated the C/O ratios of the program stars except
for HD~24035, HD~154276 and HD~179832. The estimated C/O ratios are $<$1,
as normally seen in barium stars (Table \ref{hs_ls}).
\par The estimated Na abundances in
the range 0.05$\leq$[Na/Fe]$\leq$0.42 are similar to what
is normally seen for disk stars, normal field giants and some barium
stars (Antipova et al. 2004, de Castro et al. 2016, Karinkuzhi et al. 2018).
Na, Mg and Al are produced in the carbon burning stages of
massive stars (Woosley \& Weaver 1995); hence SNe II are the probable
sources of these elements in the disk. The thick and thin disk dwarf
stars in the Galaxy do not show any trend in the [Na/Fe] ratio with
metallicity (Edvardsson et al. 1993, Reddy et al. 2003, Reddy et al. 2006).
A similar pattern to that of the dwarf stars is observed in the case of field
giants (Mishenina et al. 2006, Luck \& Heiter 2007).
Owing to their common origin, all the stars in the disk are expected
to show similar abundances.
An enhanced abundance of Na can be expected in AGB stars during the
inter-pulse stage from $^{22}$Ne produced in the previous hot pulses via
$^{22}$Ne(p, $\gamma$)$^{23}$Na (NeNa chain) (Mowlavi 1999,
Goriely \& Mowlavi 2000). This Na can be brought to the
surface during the TDU. Hence an overabundance of Na may be expected
in the barium stars.
However, Na enrichment can also arise in stars prior to the AGB phase.
El Eid \& Champagne (1995) and Antipova et al. (2004) related this
overabundance of Na to the nucleosynthesis associated with the
evolutionary stage of the star. According to them, Na is synthesized in the
convective H-burning core of main-sequence stars through the NeNa chain.
Later, these products are mixed to the surface during the FDU. As a result,
it is possible to observe sodium enrichment in giants rather than in dwarfs.
Boyarchuk et al. (2001) and de Castro et al. (2016) found an anti-correlation
of [Na/Fe] with log g; we observe a similar trend in our sample.
According to Denissenkov \& Ivanov (1987), a star with a minimum mass
of 1.5 M$_{\odot}$ will be able to raise the Na abundance through the
NeNa chain even on the main sequence itself. Even though the Na enriched
material can be synthesized in the AGB star and subsequently transferred
to the barium star, there may be a non-negligible contribution to the Na
enrichment from the barium star itself.
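For reference, the NeNa chain invoked above may be written schematically
(a standard representation, quoted here for orientation rather than taken
from the references above) as
\[
{}^{20}\mathrm{Ne}(p,\gamma)\,{}^{21}\mathrm{Na}(\beta^{+}\nu)\,
{}^{21}\mathrm{Ne}(p,\gamma)\,{}^{22}\mathrm{Na}(\beta^{+}\nu)\,
{}^{22}\mathrm{Ne}(p,\gamma)\,{}^{23}\mathrm{Na}(p,\alpha)\,{}^{20}\mathrm{Ne},
\]
its last two links being the ones that respectively produce and destroy
the $^{23}$Na discussed above.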
\par The derived abundances of aluminium in HD~154276 and HD~211173
are near-solar, with [Al/Fe] values of $-$0.12 and $-$0.11 respectively.
Yang et al. (2016) found a range $-$0.22$\leq$[Al/Fe]$\leq$0.56,
Allen \& Barbuy (2006a) $-$0.1$\leq$[Al/Fe]$\leq$0.1, and de Castro
et al. (2016) $-$0.07$\leq$[Al/Fe]$\leq$0.43 for their samples of barium stars.
\par The estimated abundances of Mg are in the range
$-$0.10$\leq$[Mg/Fe]$\leq$0.44. Mg enrichment would be expected
in the barium stars if the s-process overabundance resulted from
neutrons produced during the convective thermal pulses through the
reaction $^{22}$Ne($\alpha$,n)$^{25}$Mg.
We could not find any such enhancement of Mg in our sample when compared
with values from the disk stars and normal giants.
This argues against $^{22}$Ne($\alpha$,n)$^{25}$Mg being the
operating neutron source.
\par The estimated abundances of the other elements, from Si
to Zn, are found to be well within the range normally seen for disk
stars.
\subsection{Heavy element abundance analysis}
\subsubsection{\textbf{The light s-process elements: Rb, Sr, Y, Zr}}
\par The abundance of Rb is derived using the spectral synthesis
calculation of the Rb I resonance line at 7800.259 \r{A} in the stars
HD~32712, HD~36650, HD~179832 and HD~211173. We could not detect the
Rb I lines in the warmer program stars. The Rb I resonance line
at 7947.597 \r{A} is not usable for the abundance estimation.
The hyperfine components of Rb are taken from Lambert \& Luck (1976).
The spectrum synthesis of Rb for three of the program stars is shown
in Figure \ref{Rb_7800}.
Rubidium is found to be underabundant in all four program stars,
with [Rb/Fe] ranging from $-$1.35 to $-$0.82.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Rb_7800_2k19_sep_16.eps}
\caption{Synthesis of the Rb I line around 7800 {\rm \AA}. The dotted line
represents the synthesized spectrum and the solid line the
observed spectrum. The short-dashed line represents the synthetic spectrum
corresponding to $\Delta$[Rb/Fe] = $-$0.3 and the long-dashed line that
corresponding to $\Delta$[Rb/Fe] = +0.3.} \label{Rb_7800}
\end{figure}
\par Strontium abundances are derived from the spectral synthesis
calculation of the Sr I line at 4607.327 \r{A} whenever possible.
HD~154276 shows a mild underabundance with [Sr/Fe]$\sim$$-$0.22,
while HD~32712 and HD~179832 show near-solar values. The other stars
show enrichment with [Sr/Fe]$\geq$0.66.
\par The abundance of Y is derived from the spectral synthesis calculation
of the Y I line at 6435.004 \r{A} in all the program stars except
HD~94518, HD~147609, HD~154276 and HD~179832, where no useful Y I lines
were detected. The spectral synthesis of the Y II line at 5289.815 \r{A} is
used in HD~94518, while the equivalent width measurement of several
Y II lines is used in the other stars. The abundances estimated from
Y I lines range from 0.38 to 1.61, and those from Y II lines from 0.07 to 1.37.
\par
The spectral synthesis of the Zr I line at 6134.585 \r{A} is used in all
the stars except HD~94518, HD~147609 and HD~154276, where this line was
not detected. We could detect useful Zr II lines in all the program
stars except HD~36650 and HD~219116. In HD~24035, the equivalent
width measurements of the Zr II lines at 4317.321 and 5112.297 \r{A} are used.
Spectral synthesis calculation of the Zr II line at 4208.977 \r{A} is used
in HD~94518, HD~147609 and HD~154276, while the line at 5112.297 \r{A} is used
in HD~32712, HD~179832, HD~207585 and HD~211173. The measurements using
Zr I lines give 0.38$\leq$[Zr I/Fe]$\leq$1.29, and the Zr II lines
return $-$0.08$\leq$[Zr II/Fe]$\leq$1.89. The spectrum synthesis of
Zr for a few program stars is shown in Figure \ref{Zr_synth}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Zr_6134_2k19_sep_16.eps}
\caption{Synthesis of the Zr I line at 6134.585 {\rm \AA}. The dotted line
represents the synthesized spectrum and the solid line the observed
spectrum. The short-dashed line represents the synthetic spectrum
corresponding to $\Delta$[Zr/Fe] = $-$0.3 and the long-dashed line that
corresponding to $\Delta$[Zr/Fe] = +0.3.} \label{Zr_synth}
\end{figure}
\subsubsection{\textbf{The heavy s-process elements: Ba, La, Ce, Pr, Nd}}
The abundance of Ba is derived from the spectral synthesis.
Ba II lines at 5853.668, 6141.713 and 6496.897 \r{A} are used
in HD~154276, whereas, in all the other stars, we have used the
line at 5853.668 \r{A}. The spectrum synthesis fits for Ba for a few
program stars are shown in Figure \ref{Ba_synth}.
Ba shows slight overabundance in HD~154276 with [Ba/Fe]$\sim$0.22, while
HD~179832 and HD~211173 show moderate enhancement with values 0.41 and
0.57 respectively. All other program stars show the overabundance
of Ba in the range 0.79 to 1.71.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Ba_2k19_sep_16.eps}
\caption{Synthesis of the Ba II line at 5853.668 {\rm \AA}. The dotted line
represents the synthesized spectrum and the solid line the
observed spectrum. The short-dashed line represents the synthetic spectrum
corresponding to $\Delta$[Ba/Fe] = $-$0.3 and the long-dashed line that
corresponding to $\Delta$[Ba/Fe] = +0.3.} \label{Ba_synth}
\end{figure}
\par The lanthanum abundance is obtained from the spectral synthesis
analysis of the La II line at 4322.503 \r{A} in HD~147609. For all other
stars, spectral synthesis analysis of the La II line at 4921.776 \r{A} is used.
The estimated La abundances are in the range 0.20$\leq$[La/Fe]$\leq$1.70.
HD~154276 shows a mild enhancement of La with [La/Fe]$\sim$0.20.
All other program stars are overabundant in La with [La/Fe] ranging
from 0.52 to 1.70.
\par The abundance of Ce is obtained from the equivalent width
measurement of several Ce II lines. The star HD~154276 shows near-solar
abundance for Ce with [Ce/Fe]$\sim$0.18, whereas all other stars are
overabundant in Ce with [Ce/Fe]$>$0.7.
\par The abundance of Pr is derived from the equivalent width
measurement of Pr II lines whenever possible. We could not estimate the Pr
abundance in HD~94518 and HD~154276 as there were no useful lines detected.
HD~179832 is mildly enhanced in Pr with [Pr/Fe]$\sim$0.23 while other stars
show the enrichment in the range 0.85 to 1.98.
\par Abundance of Nd is estimated from the spectral synthesis
calculation of Nd II lines at 4177.320 and 4706.543 \r{A} in
HD~154276. In all other stars, we have used the equivalent width
measurement of several Nd II lines. A near-solar value is obtained
for the Nd abundance in the star HD~179832 with [Nd/Fe]$\sim$0.04,
whereas a moderate enhancement is found in HD~154276 with
[Nd/Fe]$\sim$0.40. All other objects show an enrichment in Nd
with [Nd/Fe]$>$0.81.
\subsubsection{\textbf{The r-process elements: Sm, Eu}}
\par Samarium abundance is derived by the spectral synthesis of
Sm II line at 4467.341 \r{A} in HD~154276. The equivalent width
measurement of several Sm II lines is used to obtain the Sm abundance
in the rest of the program stars. All the good Sm lines are found
in the bluer wavelength region of the spectra. The maximum number of
Sm II lines used is eight, in HD~207585. The estimated Sm abundances
give a near-solar value for HD~154276 with [Sm/Fe]$\sim$0.07 while
all other stars are enriched in Sm with values ranging from 0.78 to 2.04.
\par The Eu abundance is derived from the spectral synthesis of Eu II line
at 4129.725 \r{A}
in HD~94518, HD~147609 and HD~207585. In all other stars except HD~154276,
spectral synthesis calculation of the Eu II line at 6645.064 \r{A} is used. In
HD~154276, no useful lines for abundance analysis are detected. The
estimated Eu abundance covers the range 0.00$\leq$[Eu/Fe]$\leq$0.49.
The r-process element Eu is not expected to show enhancement in
Ba stars according to their formation scenario.
\par When the observed abundance ratios are compared with their
counterparts in other barium stars from the literature, the light
as well as the heavy elements, including Eu, are found to follow the
Galactic trend.
In order to quantify the s-process content of the stars, we have estimated
the mean abundance ratio of the s-process elements (Sr, Y, Zr, Ba, La, Ce,
Nd), [s/Fe], for our stars. The estimated values of [s/Fe] are provided
in Table \ref{hs_ls}.
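Explicitly, [s/Fe] is taken here as the unweighted mean over the seven
elements listed (our convention; for a star lacking one of these elements,
the average is taken over those available):
\[
[\mathrm{s/Fe}] = \frac{1}{7}\sum_{X} [X/\mathrm{Fe}],
\qquad X \in \{\mathrm{Sr, Y, Zr, Ba, La, Ce, Nd}\}.
\]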
The star HD~154276 shows the lowest value of the [s/Fe] ratio.
A comparison of the [s/Fe] ratios observed in our program stars with those
in Ba stars and normal giants from the literature is shown in
Figure \ref{rejected_ba_star}.
The stars which are rejected as Ba stars from the analysis of
de Castro et al. (2016) are also shown for a comparison. The [s/Fe]
value of HD~154276 falls among these rejected stars.
Most of these rejected Ba stars are listed as marginal Ba stars in
MacConnell et al. (1972). There is no clear consensus in the literature on
how high the [s/Fe] value should be for a star to be considered a
Ba star. According to de Castro et al. (2018), this value is +0.25,
while Sneden et al. (1981) found a value of +0.21, Pilachowski (1977)
found +0.50 and Rojas et al. (2013) found a value $>$0.34. If we adhere
to the values of these authors, the star HD~154276, with [s/Fe] = 0.11,
cannot be considered a Ba star. However, if we follow the criterion
of Yang et al. (2016) that [Ba/Fe] should be at least 0.17 for the star
even to be a mild Ba star, HD~154276 can be considered a mild Ba star
with [Ba/Fe]$\sim$0.22.
\par A comparison of the heavy element abundances with the literature
values, whenever available, is presented in
Table \ref{abundance_comparison_literature}. In most cases our
estimates agree within error bars with the literature values.
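For convenience, the quantities listed in the tables follow the standard
bracket notation,
\[
[X/\mathrm{H}] = \log\epsilon(X) - \log\epsilon(X)_{\odot},
\qquad
[X/\mathrm{Fe}] = [X/\mathrm{H}] - [\mathrm{Fe/H}],
\]
with $\log\epsilon(X) = \log_{10}(N_{X}/N_{\mathrm{H}}) + 12$ and the solar
values $\log\epsilon(X)_{\odot}$ taken from Asplund (2009), as indicated in
the table footnotes.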
{\footnotesize
\begin{table*}
\caption{Elemental abundances in HD 24035, HD 32712 and HD 36650} \label{abundance_table1}
\resizebox{\textwidth}{!}
{\begin{tabular}{lccccccccccccccc}
\hline
& & & & HD 24035 & & & HD 32712 & & & HD 36650 & \\
\hline
& Z & solar log$\epsilon^{\ast}$ & log$\epsilon$ & [X/H] & [X/Fe] & log$\epsilon$ & [X/H] & [X/Fe] & log$\epsilon$ & [X/H] & [X/Fe]\\
\hline
C & 6 & 8.43 & 8.33(syn) & $-$0.10 & 0.41 & 8.13(syn) & $-$0.3 & $-$0.05 & 8.19(syn) & $-$0.24 & $-$0.22 \\
N & 7 & 7.83 & 8.73(syn) & 0.90 & 1.41 & 8.10(syn) & 0.27 & 0.52 & 8.38(syn) & 0.55 & 0.57 \\
O & 8 & 8.69 & -- & -- & -- & 8.42(syn) & $-$0.27 & $-$0.02 & 8.20(syn) & $-$0.49 & $-$0.47 \\
Na I & 11 & 6.24 & 6.15$\pm$0.17(4) & $-$0.09 & 0.42 & 6.20$\pm$0.13(4) & $-$0.04 & 0.21 & 6.33$\pm$0.08(4) & 0.09 & 0.11 \\
Mg I & 12 & 7.60 & 7.23(1) & $-$0.37 & 0.14 & 7.25$\pm$0.20(2) & $-$0.35 & $-$0.10 & 7.69$\pm$0.04(2) & 0.09 & 0.11 \\
Si I & 14 & 7.51 & 7.26$\pm$0.04(2) & $-$0.25 & 0.14 & 7.60$\pm$0.19(3) & 0.09 & 0.34 & 7.28$\pm$0.10(3) & $-$0.23 & $-$0.21 \\
Ca I & 20 & 6.34 & 5.91$\pm$0.13(10) & $-$0.43 & 0.08 & 5.92$\pm$0.09(11) & $-$0.42 & $-$0.17 & 6.23$\pm$0.12(16) & $-$0.11 & $-$0.09 \\
Sc II & 21 & 3.15 & 2.65(syn) & $-$0.50 & 0.01 & 2.95(syn) & $-$0.20 & 0.05 & 3.08(syn) & $-$0.12 & $-$0.10 \\
Ti I & 22 & 4.95 & 4.76$\pm$0.11(19) & $-$0.19 & 0.32 & 4.68$\pm$0.11(27) & $-$0.27 & $-$0.02 & 4.92$\pm$0.09(24) & $-$0.03 & $-$0.01 \\
Ti II & 22 & 4.95 & 4.68$\pm$0.03(2) & $-$0.27 & 0.24 & 4.75$\pm$0.11(5) &$-$0.20 & 0.05 & 4.97$\pm$0.14(7) & 0.02 & 0.04 \\
V I & 23 & 3.93 & 3.30(syn) & $-$0.63 & $-$0.12 & 3.46(syn) & $-$0.47 & $-$0.22 & 3.92(syn) & $-$0.53 & $-$0.51 \\
Cr I & 24 & 5.64 & 5.11$\pm$0.06(6) & $-$0.53 & $-$0.02 & 5.33$\pm$0.17(7) & $-$0.31 & $-$0.06 & 5.60$\pm$0.15(9) & $-$0.04 & $-$0.02 \\
Cr II & 24 & 5.64 & -- & -- & -- & 5.72$\pm$0.17(4) & 0.08 & 0.33 & 5.54$\pm$0.08(3) & $-$0.10 & $-$0.08 \\
Mn I & 25 & 5.43 & 4.85(syn) & $-$0.58 & $-$0.07 & 4.86(syn) & $-$0.57 & $-$0.32 & 5.08(syn) & $-$0.40 & $-$0.38 \\
Fe I & 26 & 7.50 & 6.99$\pm$0.19(87) & $-$0.51 & - & 7.25$\pm$0.12(84) & $-$0.25 & - & 7.48$\pm$0.12(92) & $-$0.02 & - \\
Fe II & 26 & 7.50 & 7.00$\pm$0.16(7) & $-$0.50 & - & 7.25$\pm$0.15(9) & $-$0.25 & - & 7.48$\pm$0.14(6) & $-$0.02 & - \\
Co I & 27 & 4.99 & 4.67(syn) & $-$0.32 & 0.19 & 4.43(syn) & $-$0.56 & $-$0.31 & 4.86(syn) & $-$0.43 & $-$0.41 \\
Ni I & 28 & 6.22 & 6.07$\pm$0.14(13) & $-$0.15 & 0.36 & 6.04$\pm$0.18(11) & $-$0.18 & 0.07 & 6.33$\pm$0.10(11) & 0.11 & 0.13 \\
Cu I & 29 & 4.19 & -- & -- & -- & -- & -- & -- & 4.57(syn) & $-$0.09 & $-$0.07 \\
Zn I & 30 & 4.56 & 3.92(1) & $-$0.64 & $-$0.13 & 4.42(1) & $-$0.14 & 0.11 & -- & -- & -- \\
Rb I & 37 & 2.52 & -- & -- & -- & 1.14(syn) & $-$1.38 & $-$1.13 & 1.68(syn) & $-$0.84 & $-$0.82 \\
Sr I & 38 & 2.87 & -- & -- & -- & 2.65(syn) & $-$0.22 & 0.03 & 3.78(syn) & 0.64 & 0.66 \\
Y I & 39 & 2.21 & 3.31(syn) & 1.1 & 1.61 & 2.52(syn) & 0.31 & 0.56 & 2.70(syn) & 0.49 & 0.51 \\
Y II & 39 & 2.21 & -- & -- & -- & 3.00$\pm$0.15(6) & 0.79 & 1.04 & 2.89$\pm$0.13(10) & 0.68 & 0.70 \\
Zr I & 40 & 2.58 & 3.28(syn) & 0.7 & 1.21 & 2.85(syn) & 0.27 & 0.52 & 3.07(syn) & 0.49 & 0.51 \\
Zr II & 40 & 2.58 & 3.96$\pm$0.07(2) & 1.38 & 1.89 & 3.15(syn) & 0.57 & 0.82 & -- & -- & -- \\
Ba II & 56 & 2.18 & 3.38(syn) & 1.20 & 1.71 & 3.32(syn) & 1.14 & 1.39 & 2.95(syn) & 0.77 & 0.79 \\
La II & 57 & 1.10 & 2.22(syn) & 1.12 & 1.63 & 2.10(syn) & 1.00 & 1.25 & 1.81(syn) & 0.60 & 0.62 \\
Ce II & 58 & 1.58 & 2.77$\pm$0.12(9) & 1.19 & 1.70 & 3.01$\pm$0.11(11) & 1.43 & 1.68 & 2.55$\pm$0.13(11) & 0.97 & 0.99 \\
Pr II & 59 & 0.72 & 2.19$\pm$0.18(6) & 1.47 & 1.98 & 2.14$\pm$0.18(6) & 1.42 & 1.67 & 1.55$\pm$0.13(3) & 0.83 & 0.85 \\
Nd II & 60 & 1.42 & 2.32$\pm$0.17(15) & 0.90 & 1.41 & 2.93$\pm$0.18(11) & 1.51 & 1.76 & 2.34$\pm$0.16(15) & 0.92 & 0.94 \\
Sm II & 62 & 0.96 & 2.48$\pm$0.13(4) & 1.52 & 2.03 & 2.42$\pm$0.17(6) & 1.46 & 1.71 & 1.93$\pm$0.12(6) & 0.97 & 0.99 \\
Eu II & 63 & 0.52 & 0.50(syn) & $-$0.02 & 0.49 & 0.61(syn) & 0.09 & 0.34 & 0.59(syn) & 0.07 & 0.09 \\
\hline
\end{tabular}}
$\ast$ Asplund (2009). The number inside the parenthesis shows
the number of lines used for the abundance determination.
\end{table*}
}
{\footnotesize
\begin{table*}
\caption{Elemental abundances in HD 94518, HD 147609 and HD 154276} \label{abundance_table2}
\resizebox{\textwidth}{!}
{\begin{tabular}{lccccccccccccccc}
\hline
& & & & HD 94518 & & & HD 147609 & & & HD 154276 & \\
\hline
& Z & solar log$\epsilon^{\ast}$ & log$\epsilon$ & [X/H] & [X/Fe] & log$\epsilon$ & [X/H] & [X/Fe] & log$\epsilon$ & [X/H] & [X/Fe]\\
\hline
C & 6 & 8.43 & 7.60(syn) & $-$0.83 & $-$0.28 & 8.53(syn) & 0.10 & 0.38 & -- & -- & -- \\
N & 7 & 7.83 & 8.63(syn) & 0.80 & 1.35 & - & - & - & - & - & - \\
O & 8 & 8.69 & 8.79(syn) & 0.10 & 0.65 & 9.05(syn) & 0.36 & 0.64 & 8.91(syn) & 0.22 & 0.32 \\
Na I & 11 & 6.24 & 5.88$\pm$0.06(4) & $-$0.44 & 0.11 & 6.26$\pm$0.19(2) & 0.02 & 0.30 & 6.20$\pm$0.05(3) & $-$0.04 & 0.06 \\
Mg I & 12 & 7.60 & 7.37$\pm$0.12(4) & $-$0.23 & 0.32 & 7.49$\pm$0.08(4) & $-$0.11 & 0.17 & 7.81$\pm$0.06(3) & 0.21 & 0.31 \\
Al I & 13 & 6.45 & -- & -- & -- & -- & -- & -- & 6.23$\pm$0.08(2) & $-$0.22 & $-$0.12 \\
Si I & 14 & 7.51 & 6.90$\pm$0.20(4) & $-$0.61 & $-$0.06 & 7.37$\pm$0.05(4) & $-$0.14 & 0.14 & 7.54$\pm$0.05(5) & 0.03 & 0.13 \\
Ca I & 20 & 6.34 & 6.01$\pm$0.13(18) & $-$0.33 & 0.22 & 6.04$\pm$0.21(21) & $-$0.30 & 0.21 & 6.22$\pm$0.20(27) & $-$0.12 & $-$0.02\\
Sc II & 21 & 3.15 & 2.72(syn) & $-$0.63 & $-$0.08 & 2.90(syn) & $-$0.25 & 0.03 & 3.25(syn) & 0.10 & 0.20 \\
Ti I & 22 & 4.95 & 4.64$\pm$0.13(12) & $-$0.31 & 0.24 & 4.66$\pm$0.06(10) & $-$0.29 & $-$0.01 & 5.02$\pm$0.17(27) & 0.07 & 0.17 \\
Ti II & 22 & 4.95 & 4.81$\pm$0.14(13) & $-$0.14 & 0.41 & 4.67$\pm$0.16(8) & $-$0.28 & 0.00 & 5.06$\pm$0.21(18) & 0.11 & 0.21 \\
V I & 23 & 3.93 & 3.15(syn) & $-$0.78 & $-$0.23 & 3.53(syn) & $-$0.40 & $-$0.12 & 3.90(syn) & $-$0.03 & 0.07 \\
Cr I & 24 & 5.64 & 5.07$\pm$0.14(15) & $-$0.57 & $-$0.02 & 5.29$\pm$0.15(9) & $-$0.35 & $-$0.07 & 5.51$\pm$0.20(11) & $-$0.13 & $-$0.03 \\
Cr II & 24 & 5.64 & 5.06$\pm$0.19(4) & $-$0.58 & $-$0.03 & 5.29$\pm$0.11(3) & $-$0.35 & $-$0.07 & 5.53$\pm$0.07(5) & $-$0.11 & $-$0.01 \\
Mn I & 25 & 5.43 & 4.61(syn) & $-$1.1 & $-$0.55 & 5.03(syn) & $-$0.40 & $-$0.12 & 5.17$\pm$0.15(6) & $-$0.26 & $-$0.16 \\
Fe I & 26 & 7.50 & 6.95$\pm$0.10(110) & $-$0.55 & - & 7.22$\pm$0.16(151) & $-$0.28 & - & 7.41$\pm$0.13(150) & $-$0.09 & - \\
Fe II & 26 & 7.50 & 6.95$\pm$0.12(15) & $-$0.55 & - & 7.22$\pm$0.12(20) & $-$0.28 & - & 7.40$\pm$0.14(15) & $-$0.10 & - \\
Co I & 27 & 4.99 & 4.52(syn) & $-$0.64 & $-$0.09 & -- & -- & -- & 4.85$\pm$0.10(2) & $-$0.14 & $-$0.04 \\
Ni I & 28 & 6.22 & 5.70$\pm$0.15(26) & $-$0.52 & 0.03 & 5.88$\pm$0.11(14) & $-$0.34 & $-$0.06 & 6.13$\pm$0.15(19) & $-$0.09 & 0.01 \\
Cu I & 29 & 4.19 & 3.67(syn) & $-$0.82 & $-$0.27 & -- & -- & -- & -- & -- & -- \\
Zn I & 30 & 4.56 & 4.15$\pm$0.10(2) & $-$0.41 & 0.14 & 4.30 $\pm$0.07(2) & $-$0.26 & 0.02 & 4.64$\pm$0.00(2) & 0.08 & 0.18 \\
Sr I & 38 & 2.87 & 3.59(syn) & 0.63 & 1.18 & 4.10(syn) & 1.23 & 1.51 & 2.55(syn) & $-$0.32 & $-$0.22 \\
Y II & 39 & 2.21 & 2.16(syn) & $-$0.05 & 0.50 & 3.00$\pm$0.14(9) & 0.79 & 1.07 & 2.18$\pm$0.19(4) & $-$0.03 & 0.07 \\
Zr II & 40 & 2.58 & 2.32(syn) & $-$0.26 & 0.29 & 3.30(syn) & 0.72 & 1.00 & 2.40(syn) & $-$0.18 & $-$0.08 \\
Ba II & 56 & 2.18 & 2.58(syn) & 0.35 & 0.90 & 3.30(syn) & 1.12 & 1.40 & 2.30(syn) & 0.12 & 0.22 \\
La II & 57 & 1.10 & 2.12(syn) & 0.08 & 0.58 & 2.09(syn) & 0.99 & 1.27 & 1.20(syn) & 0.10 & 0.20 \\
Ce II & 58 & 1.58 & 1.94$\pm$0.09(9) & 0.36 & 0.91 & 2.56$\pm$0.11(8) & 0.98 & 1.26 & 1.66$\pm$0.03(2) & 0.08 & 0.18 \\
Pr II & 59 & 0.72 & -- & -- & -- & 1.79(1) & 1.07 & 1.35 & -- & -- & -- \\
Nd II & 60 & 1.42 & 1.92$\pm$0.20(9) & 0.50 & 1.05 & 2.21$\pm$0.17(8) & 0.79 & 1.07 & 1.72$\pm$0.12(syn)(2) & 0.30 & 0.40 \\
Sm II & 62 & 0.96 & 1.80$\pm$0.08(4) & 0.84 & 1.39 & 1.97$\pm$ 0.19(4) & 1.01 & 1.29 & 0.93(syn) & $-$0.03 & 0.07 \\
Eu II & 63 & 0.52 & 0.12(syn) & $-$0.40 & 0.15 & 0.37(syn) & $-$0.15 & 0.13 & -- & -- & -- \\
\hline
\end{tabular}}
$\ast$ Asplund (2009). The number inside the parenthesis shows
the number of lines used for the abundance determination.
\end{table*}
}
{\footnotesize
\begin{table*}
\caption{Elemental abundances in HD 179832, HD 207585, HD 211173 and HD 219116} \label{abundance_table3}
\resizebox{\textwidth}{!}
{\begin{tabular}{lccccccccccccccc}
\hline
& & & & HD 179832 & & & HD 207585 & & & HD 211173 & & & HD 219116 & \\
\hline
& Z & solar log$\epsilon^{\ast}$ & log$\epsilon$ & [X/H] & [X/Fe] & log$\epsilon$ & [X/H] & [X/Fe] & log$\epsilon$ & [X/H] & [X/Fe] & log$\epsilon$ & [X/H] & [X/Fe]\\
\hline
C & 6 & 8.43 & -- & -- & -- & 8.66(syn) & 0.23 & 0.61 & 8.03(syn) & $-$0.40 & $-$0.23 & 8.03(syn) & $-$0.43 & 0.02 \\
N & 7 & 7.83 & -- & -- & -- & 8.20(syn) & 0.37 & 0.75 & 8.20(syn) & 0.37 & 0.54 & 7.85(syn) & 0.02 & 0.47 \\
O I & 8 & 8.69 & 8.93(syn) & 0.24 & 0.01 & 9.28(syn) & 0.59 & 0.97 & 8.26(syn) & $-$0.43 & $-$0.26 & 8.45(syn) & $-$0.24 & 0.21 \\
Na I & 11 & 6.24 & 6.52$\pm$0.13(2) & 0.28 & 0.05 & 6.11$\pm$0.11(4) & $-$0.13 & 0.25 & 6.30$\pm$0.14(4) & 0.06 & 0.23 & 6.05$\pm$0.16(4) & $-$0.19 & 0.26 \\
Mg I & 12 & 7.60 & 7.83$\pm$0.02(2) & 0.23 & 0.00 & 7.29$\pm$0.12(3) & $-$0.31 & 0.07 & 7.66$\pm$0.08(2) & 0.06 & 0.23 & 7.47$\pm$0.02(3) & $-$0.11 & 0.34 \\
Al I & 13 & 6.45 & -- & -- & -- & -- & -- & -- & 6.17$\pm$0.07(2) & $-$0.28 & $-$0.11 & -- & -- & -- \\
Si I & 14 & 7.51 & 7.71$\pm$0.08(4) & 0.20 & $-$0.03 & 7.25$\pm$0.02(2) & $-$0.26 & 0.12 & 6.97$\pm$0.11(2) & $-$0.54 & $-$0.37 & 7.08$\pm$0.20(2) & $-$0.43 & 0.02 \\
Ca I & 20 & 6.34 & 6.27$\pm$0.06(9) & $-$0.07 & $-$0.30 & 6.22$\pm$0.17(11) & $-$0.12 & 0.26 & 6.23$\pm$0.17(15) & $-$0.11 & 0.06 & 6.02$\pm$0.14(16) & $-$0.34 & 0.13 \\
Sc II & 21 & 3.15 & 3.35(syn) & 0.20 & $-$0.03 & 2.63(syn) & $-$0.52 & $-$0.14 & 2.79(syn) & $-$0.36 & $-$0.19 & 2.66(syn) & $-$0.49 & $-$0.04 \\
Ti I & 22 & 4.95 & 5.06$\pm$0.07(4) & 0.11 & $-$0.12 & 4.58$\pm$0.15(7) & $-$0.37 & 0.01 & 4.77$\pm$0.11(27) & $-$0.18 & $-$0.01 & 4.73$\pm$0.09(21) & $-$0.22 & 0.23 \\
Ti II & 22 & 4.95 & 5.39$\pm$0.08(4) & 0.44 & 0.21 & 4.82$\pm$0.12(9) & $-$0.13 & 0.25 & 4.76$\pm$0.16(9) & $-$0.19 & $-$0.02 & 4.65$\pm$0.15(6) & $-$0.3 & 0.15 \\
V I & 23 & 3.93 & 4.03(syn) & 0.10 & $-$0.13 & 3.11(syn) & $-$0.82 & $-$0.44 & 3.47(syn) & $-$0.46 & $-$0.29 & 3.67(syn) & $-$0.26 & 0.19 \\
Cr I & 24 & 5.64 & 5.78$\pm$0.12(2) & 0.14 & $-$0.09 & 5.39$\pm$0.15(11) & $-$0.25 & 0.13 & 5.45$\pm$0.16(11) & $-$0.19 & $-$0.02 & 5.34$\pm$0.16(9) & $-$0.3 & 0.15 \\
Cr II & 24 & 5.64 & 5.71$\pm$0.02(2) & 0.07 & $-$0.16 & 5.38$\pm$0.15(4) & $-$0.26 & 0.12 & 5.24$\pm$0.16(4) & $-$0.4 & $-$0.23 & 5.20$\pm$0.09(2) & $-$0.44 & 0.01 \\
Mn I & 25 & 5.43 & 5.25$\pm$0.09(3) & $-$0.18 & $-$0.41 & 4.60(syn) & $-$0.83 & $-$0.45 & 5.13(syn) & $-$0.30 & $-$0.13 & 4.78(syn) & $-$0.65 & $-$0.20 \\
Fe I & 26 & 7.50 & 7.73$\pm$0.01(68) & 0.23 & - & 7.12$\pm$0.12(107) & $-$0.38 & - & 7.33$\pm$0.10(109) & $-$0.17 & - & 7.05$\pm$0.11(92) & $-$0.45 & - \\
Fe II & 26 & 7.50 & 7.72$\pm$0.04(9) & 0.22 & - & 7.12$\pm$0.11(12) & $-$0.38 & - & 7.33$\pm$0.09(11) & $-$0.17 & - & 7.06$\pm$0.12(9) & $-$0.44 & - \\
Co I & 27 & 4.99 & 5.24$\pm$0.08(6) & 0.25 & 0.02 & 4.55(syn) & $-$0.44 & $-$0.06 & 4.58(syn) & $-$0.41 & $-$0.24 & 4.69$\pm$0.10(7) & $-$0.3 & 0.15 \\
Ni I & 28 & 6.22 & 6.43$\pm$0.07(10) & 0.21 & $-$0.02 & 5.83$\pm$0.14(16) & $-$0.39 & $-$0.01 & 6.14$\pm$0.17(27) & $-$0.08 & 0.09 & 5.91$\pm$0.12(11) & $-$0.31 & 0.14 \\
Cu I & 29 & 4.19 & -- & -- & -- & 4.36(syn) & $-$0.63 & $-$0.25 & 3.92(syn) & $-$0.27 & $-$0.10 & 4.09(syn) & $-$0.12 & 0.33 \\
Zn I & 30 & 4.56 & 4.94$\pm$0.11(2) & 0.38 & 0.15 & -- & -- & -- & 4.43$\pm$0.02(2) & $-$0.13 & 0.04 & 4.04(1) & $-$0.56 & 0.11 \\
Rb I & 37 & 2.52 & 1.40(syn) & $-$1.12 & $-$1.35 & -- & -- & -- & 1.35(syn) & $-$1.17 & $-$1.00 & -- & -- & -- \\
Sr I & 38 & 2.87 & 3.12(syn) & 0.25 & 0.02 & -- & -- & -- & 3.40(syn) & 0.53 & 0.70 & 3.13(syn) & 0.26 & 0.71 \\
Y I & 39 & 2.21 & -- & -- & -- & 2.77(syn) & 0.56 & 0.94 & 2.42(syn) & 0.21 & 0.38 & 2.49(syn) & 0.28 & 0.73 \\
Y II & 39 & 2.21 & 2.55$\pm$0.05(5) & 0.34 & 0.11 & 3.20$\pm$0.08(9) & 0.99 & 1.37 & 2.69$\pm$0.07(8) & 0.48 & 0.65 & 2.51$\pm$0.09(3) & 0.30 & 0.75 \\
Zr I & 40 & 2.58 & 4.10(syn) & 1.52 & 1.29 & 3.30(syn) & 0.72 & 1.10 & 2.79(syn) & 0.21 & 0.38 & 2.79(syn) & 0.21 & 0.66 \\
Zr II & 40 & 2.58 & 4.25(syn) & 1.67 & 1.44 & 3.74(syn) & 0.82 & 1.20 & 2.80(syn) & 0.22 & 0.39 & -- & -- & -- \\
Ba II & 56 & 2.18 & 2.82(syn) & 0.64 & 0.41 & 3.50(syn) & 1.22 & 1.60 & 2.58(syn) & 0.40 & 0.57 & 2.90(syn) & 0.77 & 1.22 \\
La II & 57 & 1.10 & 1.85(syn) & 0.75 & 0.52 & 2.47(syn) & 1.32 & 1.70 & 1.88(syn) & 0.78 & 0.95 & 2.00(syn) & 0.9 & 1.35 \\
Ce II & 58 & 1.58 & 2.55$\pm$0.11(2) & 0.97 & 0.74 & 2.92$\pm$0.16(14) & 1.34 & 1.72 & 2.15$\pm$0.10(11) & 0.57 & 0.74 & 2.70$\pm$0.20(10) & 1.12 & 1.57 \\
Pr II & 59 & 0.72 & 1.18$\pm$0.05(2) & 0.46 & 0.23 & 1.93$\pm$0.11(3) & 1.21 & 1.59 & 1.93$\pm$0.11(2) & 1.21 & 1.59 & 1.54$\pm$0.18(3) & 0.82 & 1.27 \\
Nd II & 60 & 1.42 & 1.69$\pm$0.04(2) & 0.27 & 0.04 & 2.66$\pm$0.10(19) & 1.24 & 1.62 & 1.98$\pm$0.17(14) & 0.56 & 0.73 & 2.10$\pm$0.12(10) & 0.68 & 1.13 \\
Sm II & 62 & 0.96 & 1.97$\pm$0.04(5) & 1.01 & 0.78 & 2.62$\pm$0.17(8) & 1.66 & 2.04 & 1.66$\pm$0.07(4) & 0.70 & 0.87 & 2.09$\pm$0.18(7) & 1.13 & 1.58 \\
Eu II & 63 & 0.52 & 0.75(syn) & 0.23 & 0.00 & 0.42(syn) & $-$0.10 & 0.28 & 0.48(syn) & $-$0.04 & 0.13 & 0.50(syn) & $-$0.02 & 0.43 \\
\hline
\end{tabular}}
$\ast$ Asplund (2009). The number inside the parenthesis shows the
number of lines used for the abundance determination.
\end{table*}
}
\subsection{The [hs/ls] ratio as an indicator of neutron source}
In Table \ref{hs_ls}, we have presented the estimated [ls/Fe], [hs/Fe] and
[hs/ls] ratios for the program stars, where ls refers to the light
s-process elements (Sr, Y and Zr) and hs to the heavy s-process elements
(Ba, La, Ce and Nd).
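Written out explicitly (assuming the usual unweighted means over these
elements, which is the convention we follow), the indices are
\[
[\mathrm{ls/Fe}] = \tfrac{1}{3}\left([\mathrm{Sr/Fe}] + [\mathrm{Y/Fe}]
+ [\mathrm{Zr/Fe}]\right), \qquad
[\mathrm{hs/Fe}] = \tfrac{1}{4}\left([\mathrm{Ba/Fe}] + [\mathrm{La/Fe}]
+ [\mathrm{Ce/Fe}] + [\mathrm{Nd/Fe}]\right),
\]
\[
[\mathrm{hs/ls}] = [\mathrm{hs/Fe}] - [\mathrm{ls/Fe}].
\]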
The [hs/ls] ratio is a useful indicator of neutron source in the
former AGB star. As the metallicity decreases, the neutron exposure
increases. As a result, lighter s-process elements are bypassed in
favour of heavy elements. Hence, [hs/ls] ratio increases with
decreasing metallicity. The models of Busso et al. (2001) have shown
the behaviour of this ratio with metallicity for AGB stars of mass 1.5
and 3.0 M$_{\odot}$ for different $^{13}$C pocket efficiencies.
According to these models, the maximum value of [hs/ls] is $\sim$ 1.2
which is at metallicities $\sim$ $-$1.0 and $\sim$ $-$0.8 for
the 3 and 1.5 M$_{\odot}$ models respectively for the standard
$^{13}$C pocket efficiency. In their models, Goriely \& Mowlavi (2000)
have shown the run of the [hs/ls] ratio with metallicity for different
thermal pulses for AGB stars in the range 1.5--3 M$_{\odot}$. The maximum
value of [hs/ls]$\sim$0.6 occurs at metallicity $\sim$ $-$0.5. It was
noted that, in all these models, the [hs/ls] ratio does not follow a linear
anti-correlation with metallicity, rather exhibits a loop like behaviour.
The ratio increases with decreasing metallicity up to a particular value
of [Fe/H] and then starts to drop. Our [hs/ls] ratio has a maximum
value of $\sim$ 1.15 which occurs at a metallicity of $\sim$ $-$0.25.
The anti-correlation of [hs/ls] with metallicity suggests the operation
of the $^{13}$C($\alpha$, n)$^{16}$O neutron source, since
$^{13}$C($\alpha$, n)$^{16}$O is found to be anti-correlated with
metallicity (Clayton 1988, Wallerstein 1997).
As seen from Table \ref{hs_ls}, all the stars show positive values
for the [hs/ls] ratio. At metallicities higher than solar, a negative value
is expected for this ratio and at lower metallicities, a positive
value is expected for low-mass AGB stars where
$^{13}$C($\alpha$, n)$^{16}$O is the neutron source (Busso et al. 2001,
Goriely \& Mowlavi 2000). However, it is possible that AGB stars with
masses in the range 5--8 M$_{\odot}$ can also exhibit low [hs/ls]
ratios considering the $^{22}$Ne($\alpha$, n)$^{25}$Mg neutron source
(Karakas \& Lattanzio 2014). The models of Karakas \& Lattanzio (2014)
predicted that the ls elements are predominantly produced over the
hs elements for AGB stars of mass 5 and 6 M$_{\odot}$. The [hs/ls]
ratio is correlated with the neutron exposure.
The $^{22}$Ne($\alpha$, n)$^{25}$Mg source has a smaller neutron
exposure compared to the $^{13}$C($\alpha$, n)$^{16}$O source.
Hence, in the stars where $^{22}$Ne($\alpha$, n)$^{25}$Mg operates,
we expect a lower [hs/ls] ratio. The lower exposure of the
neutrons produced from the $^{22}$Ne source, together with the
prediction of low [hs/ls] ratios in massive AGB star models,
has been taken as evidence for the operation of
$^{22}$Ne($\alpha$, n)$^{25}$Mg in massive AGB stars. A Mg
enrichment is expected in the stars where this reaction takes place.
As none of our stars shows such an enrichment, we discard
$^{22}$Ne($\alpha$, n)$^{25}$Mg as a possible neutron source
for any of our program stars, as far as the [hs/ls] ratio is concerned.
This is also supported by our
estimates of Rb and Zr, as discussed in the following section.
\subsection{Rb as a probe to the neutron density at the s-process site}
In addition to the [hs/ls] ratio, the abundance of rubidium can
also provide clues to the mass of the companion AGB stars.
The AGB star models predict higher Rb abundances for massive AGB stars,
where the neutron source is the $^{22}$Ne($\alpha$,n)$^{25}$Mg
reaction (Abia et al. 2001, van Raai et al. 2012). In the s-process
nucleosynthesis path, the branching points at the unstable
nuclei $^{85}$Kr and $^{86}$Rb control the Rb production. The
amount of Rb produced along this s-process path is determined by
the probability of these unstable nuclei capturing a neutron
before $\beta$-decaying, which in turn depends on the neutron density
at the s-process site (Beer \& Macklin 1989, Tomkin \& Lambert 1983,
Lambert et al. 1995).
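Schematically, the outcome at such a branching point is governed by the
neutron-capture branching factor (a standard s-process expression, quoted
here for illustration rather than taken from the above references),
\[
f_{n} = \frac{\lambda_{n}}{\lambda_{n} + \lambda_{\beta}},
\qquad \lambda_{n} = N_{n}\langle\sigma v\rangle,
\]
where $\lambda_{\beta}$ is the $\beta$-decay rate of the branching nucleus
and $\lambda_{n}$ its neutron-capture rate; $f_{n}$, and hence the Rb
production, increases with the neutron density N$_{n}$.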
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{rejected_ba_stars_2k19_sep_16.eps}
\caption{\small{Observed [s/Fe] ratios of the
program stars with respect to metallicity [Fe/H].
Red open circles represent normal giants from literature (Luck \& Heiter 2007).
Green crosses, blue four-sided stars, cyan filled pentagons, red
eight-sided crosses represent strong Ba giants, weak Ba giants,
Ba dwarfs, Ba sub-giants respectively from literature (de Castro
et al. 2016, Yang et al. 2016, Allen \& Barbuy 2006a).
Magenta six-sided stars represent the stars rejected as Ba stars
by de Castro et al. (2016). HD~24035 (filled hexagon), HD~32712
(starred triangle), HD~36650 (filled triangle), HD~94518 (filled circle),
HD~147609 (five-sided star), HD~154276 (open hexagon), HD~179832
(open triangle), HD~207585 (six-sided cross), HD~211173 (nine-sided star)
and HD~219116 (filled square).}}\label{rejected_ba_star}
\end{figure}
The production of $^{87}$Rb from $^{85}$Kr and
$^{86}$Rb is possible only at higher neutron densities,
N$_{n}$ $>$ 5$\times$10$^{8}$ n/cm$^{3}$; otherwise, Sr, Y, Zr, etc. are
produced (Beer 1991, Lugaro \& Chieffi 2011). The $^{87}$Rb
isotope has a magic number of neutrons and hence is fairly stable
against neutron capture. Also, the neutron capture cross-section of
$^{87}$Rb is very small ($\sigma$ $\sim$ 15.7 mbarn at 30 keV)
compared to that of $^{85}$Rb ($\sigma$ $\sim$ 234 mbarn) (Heil
et al. 2008a). Hence, once the nucleus $^{87}$Rb is produced, it
will accumulate. Therefore, the isotopic ratio $^{87}$Rb/$^{85}$Rb
could be a direct indicator of the neutron density at
the s-process site and, as a consequence, help to infer the mass
of the AGB star. But it is impossible to distinguish the lines
due to these two isotopes of Rb in stellar spectra
(Lambert \& Luck 1976, Garc\'ia-Hern\'andez et al. 2006).
However, the abundance of Rb relative to other elements in
this region of the s-process path, such as Sr, Y, and Zr,
can be used to estimate the average neutron density of the s-process.
Detailed nucleosynthesis models for stars with masses
in the range 5--9 M$_{\odot}$ at solar metallicity
predict [Rb/(Sr,Zr)]$>$0 (Karakas et al. 2012). A positive value
of [Rb/Sr] or [Rb/Zr] ratio indicates a higher neutron
density, whereas a negative value indicates a low neutron density.
This fact has been used as evidence to conclude that the
$^{13}$C($\alpha$, n)$^{16}$O reaction acts as the
neutron source in M, MS and S stars (Lambert et al. 1995) and that
C stars must be low-mass AGB stars with M$<$3 M$_{\odot}$
(Abia et al. 2001). The observed [Rb/Zr] ratios in AGB
stars both in our Galaxy and the Magellanic Clouds show a
value $<$ 0 for low-mass AGB stars and a value $>$ 0 for intermediate-mass
(4--6 M$_{\odot}$) AGB stars (Plez et al. 1993, Lambert et al. 1995,
Abia et al. 2001, Garc\'ia-Hern\'andez et al. 2006, 2007, 2009,
van Raai et al. 2012).
The estimated [Rb/Zr] and [Rb/Sr] ratios (Table 13) are
negative for all the stars for which we could estimate
them. The observed [Rb/Fe] and [Zr/Fe] ratios are shown
in Figure \ref{Rb_Zr}. The observed ranges of Rb and Zr in
low- and intermediate-mass AGB stars (shaded regions)
in the Galaxy and Magellanic Clouds are also shown for a
comparison. It is clear that the abundances of Rb and Zr
are consistent with the range normally observed in the
low-mass AGB stars.
\subsection{Comparison with FRUITY models and a parametric model based
analysis}
A publicly available data set for the s-process in AGB stars
(http://fruity.oa-teramo.inaf.it/, hosted at the web site of
the Teramo Observatory (INAF))
is the FRANEC Repository of Updated Isotopic Tables \& Yields (FRUITY)
models (Cristallo et al. 2009, 2011, 2015b). These models cover the
whole range of metallicity observed for Ba stars, from z = 0.001 to
z = 0.020, for the mass range 1.3--6.0 M$_{\odot}$. The computations
comprise evolutionary models starting from the pre-main
sequence to the tip of the AGB phase through the core He-flash. During
the core H-burning, no core overshoot has been considered, and
semi-convection is assumed during the core He-burning. The only mixing
considered in these models arises from convection; additional
mixing phenomena such as rotation are not considered. The
calculations are based on a full nuclear network considering all the
stable and relevant unstable isotopes from hydrogen to bismuth.
This includes 700 isotopes and about 1000 nuclear processes such as
charged particle reactions, neutron captures, and $\beta$-decays
(Straniero et al. 2006, G\"orres et al. 2000, Jaeger et al. 2001,
Abbondanno et al. 2004, Patronis et al. 2004, Heil et al. 2008b).
The details of the input physics and the computational algorithms
are provided in Straniero et al. (2006). In this model, $^{13}$C
pocket is formed through time-dependent overshoot mechanisms, which
is controlled by a free overshoot parameter ($\beta$) in the
exponentially declining convective velocity function. This parameter
is set in such a way that the neutrons released are enough to
maximise the production of s-process elements. For the low-mass
AGB star models (initial mass $<$4 M$_{\odot}$), neutrons are
released by the $^{13}$C($\alpha$, n)$^{16}$O reaction during the
interpulse phase in radiative conditions, when the temperature
within the pocket reaches T $\sim$ 1.0 $\times$ 10$^{8}$ K,
with typical densities of 10$^{6}$--10$^{7}$ neutrons cm$^{-3}$.
However, in the case of the metal-rich models (z = 0.0138, z = 0.006
and z = 0.003), $^{13}$C is only partially burned during the interpulse;
the surviving part is ingested in the convective zone generated by the
subsequent thermal pulse (TP) and then burned at
T $\sim$ 1.5 $\times$ 10$^{8}$ K, producing a neutron density
of 10$^{11}$ neutrons cm$^{-3}$. For larger z, the
$^{22}$Ne($\alpha$, n)$^{25}$Mg neutron source
is marginally activated during the TPs; but for low z, it
becomes an important source when most of the $^{22}$Ne is
primary (Cristallo et al. 2009, 2011). For the intermediate-mass
AGB star models, the s-process distributions are dominated by
the $^{22}$Ne($\alpha$, n)$^{25}$Mg neutron source, which is
efficiently activated during TPs. The contribution from the
$^{13}$C($\alpha$, n)$^{16}$O reaction is strongly reduced in
the massive stars.
This is due to the smaller extent of the $^{13}$C pocket in them.
It has been shown that the extent of the $^{13}$C pocket decreases
with increasing core mass of the AGB star, due to the shrinking
and compression of the He-intershell (Cristallo et al. 2009).
These massive models experience Hot Bottom Burning and Hot-TDUs
at lower metallicities (Cristallo et al. 2015b).
We have compared our observational data with the FRUITY models.
The model predictions are unable to reproduce the [hs/ls] ratios
characterizing the surface composition of the stars.
A comparison of the observed [hs/ls] ratios with metallicity
shows a large spread (Figure \ref{fruity_ls_hs_hsls}), somewhat similar
to the comparison between the model and observational data as shown
in Cristallo et al. (2011, Figure 12).
The observed discrepancy may be explained by considering different
$^{13}$C pocket efficiencies in the AGB models. In the FRUITY models
a standard $^{13}$C pocket is considered; however, it needs to
be checked whether a variation in the size of the $^{13}$C pocket would
give a better match with the observed spread. The absence of stellar
rotation in the current FRUITY models may also be a cause of the
observed discrepancy. Rotation-induced mixing alters the extent
of the $^{13}$C pocket (Langer et al. 1999), which in turn affects
the s-process abundance pattern. However, a study made by
Cseh et al. (2018) using the rotating star models available for the
metallicity range of Ba stars (Piersanti et al. 2013) could not
reproduce the observed abundance ratios of stars studied in
de Castro et al. (2016).
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{RbZr_2k19_may_15.eps}
\caption{The observed abundances [Rb/Fe] vs [Zr/Fe].
HD~32712 (starred triangle), HD~36650 (filled triangle),
HD~179832 (open triangle), and HD~211173 (nine-sided star).
The regions shaded with short-dashed lines and with dots
correspond to the observed ranges of Zr and Rb in
intermediate-mass and low-mass AGB stars respectively
in the Galaxy and the Magellanic Clouds
(van Raai et al. 2012). The four stars occupy the region
of low-mass AGB stars except for HD~179832 (open triangle),
which lies marginally below this region.} \label{Rb_Zr}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth, height= 0.8\columnwidth]{fruity_hsls_2k19_sep_16.eps}
\caption{Comparison of predicted and observed values of [hs/ls] ratios.}
\label{fruity_ls_hs_hsls}
\end{figure}
The observed abundance ratios for eight neutron-capture elements are
compared with their counterparts in the low-mass AGB stars from the
literature that are found to be associated with the
$^{13}$C($\alpha$, n)$^{16}$O neutron source
(Figure \ref{agb_heavy_comparison}). As discussed in
de Castro et al. (2016) the scatter observed in the ratios may be a
consequence of different dilution factors during the mass transfer,
as well as the orbital parameters, metallicity and initial mass.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{agb_heavy_2k19_sep_16.eps}
\caption{\small{Comparison of abundance ratios of neutron-capture elements
observed in the program stars and the AGB stars with respect to
metallicity [Fe/H]. Red crosses represent the AGB stars from literature
(Smith \& Lambert 1985, 1986b, 1990, Abia \& Wallerstein 1998).}}
\label{agb_heavy_comparison}
\end{center}
\end{figure}
We have performed a parametric analysis in order to find the
dilution experienced by the s-rich material after the mass transfer.
The dilution factor, d, is defined as M$_{\star}^{env}$/M$_{AGB}^{transf}$ = 10$^{d}$,
where M$_{\star}^{env}$ is the mass of the envelope of the observed star
after the mass transfer and M$_{AGB}^{transf}$ is the mass transferred from the AGB star.
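Equivalently, $d = \log_{10}(M_{\star}^{env}/M_{AGB}^{transf})$. As a worked
illustration using entries simply read off Table \ref{parametric model}:
a best-fit $d = 0.07$ corresponds to an envelope of only
$10^{0.07} \approx 1.2$ times the accreted mass, i.e., essentially
undiluted material, whereas $d = 1.31$ corresponds to a dilution by a
factor of $10^{1.31} \approx 20$.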
The dilution factor is derived by comparing the observed abundances
with the abundances predicted by the FRUITY models for the heavy elements
(Rb, Sr, Y, Zr, Ba, La, Ce, Pr, Nd, Sm and Eu). The solar
values have been taken as the initial composition. The observed
elemental abundances are fitted with the parametric model function
(Husti et al. 2009). The best-fit masses and corresponding dilution
factors, along with the $\chi^{2}$ values, are given in
Table \ref{parametric model}. The goodness of fit of the parametric
model function is determined by the uncertainties in the observed
abundances. The best fits obtained are shown in Figures
\ref{parametric1} - \ref{parametric4}. All the Ba stars are found
to have low-mass AGB companions with M $\leq$ 3 M$_{\odot}$.
Among our stars, HD~147609 is found to have a companion of 3 M$_{\odot}$ by
Husti et al. (2019), whereas our estimate is 2.5 M$_{\odot}$.
{\footnotesize
\begin{table*}
\caption{The best fitting mass, dilution factor and reduced chi-square values.} \label{parametric model}
\resizebox{\textwidth}{!}{\begin{tabular}{lcccccccccccc}
\hline
Star name/ & & HD~24035 & HD~32712 & HD~36650 & HD~94518 & HD~147609 & HD~154276 & HD~179832 & HD~207585 & HD~211173 & HD~219116 \\
mass (M/M$_{\odot}$) & & & & & & & & & & \\
\hline
\hline
1.5 & d & - & - & - & 0.22 & - & 0.71 & 0.21 & - & - & 0.04 \\
& $\chi^{2}$ & - & - & - & 9.91 & - & 1.46 & 51.39 & - & - & 1.40 \\
\hline
2.0 & d & - & 0.001 & - & 0.52 & - & 1.18 & 0.65 & - & - & 0.36 \\
& $\chi^{2}$ & - & 16.14 & - & 10.14 & - & 1.43 & 48.04 & - & - & 1.55 \\
\hline
2.5 & d & 0.07 & 0.08 & 0.10 & 0.62 & 0.08 & 1.31 & 0.82 & 0.07 & 0.03 & 0.46 \\
& $\chi^{2}$ & 1.92 & 17.64 & 8.15 & 10.31 & 1.39 & 1.60 & 48.06 & 4.28 & 18.15 & 1.66 \\
\hline
3.0 & d & - & - & 0.04 & 0.27 & - & 1.20 & 0.75 & - & - & 0.10 \\
& $\chi^{2}$ & - & - & 8.08 & 9.92 & - & 1.52 & 48.01 & - & - & 1.33 \\
\hline
4.0 & d & - & - & - & - & - & 0.58 & - & - & - & - \\
 & $\chi^{2}$ & - & - & - & - & - & 1.01 & - & - & - & - \\
\hline
5.0 & d & - & - & - & - & - & 0.09 & - & - & - & - \\
 & $\chi^{2}$ & - & - & - & - & - & 0.91 & - & - & - & - \\
\hline
\end{tabular}}
\end{table*}
}
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth, height= 0.8\columnwidth]{hd24035_par_2k19_sep_16.eps}
\includegraphics[width=0.8\columnwidth, height= 0.8\columnwidth]{hd32712_par_2k19_sep_16.eps}
\includegraphics[width=0.8\columnwidth, height= 0.8\columnwidth]{hd36650_par_2k19_sep_16.eps}
\caption{The solid curve represents the best fit of the parametric model function.
The points with error bars indicate the observed abundances in
(i) \textit{Top panel:} HD~24035 (ii) \textit{Middle panel:} HD~32712
(iii) \textit{Bottom panel:} HD~36650.}
\label{parametric1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth, height= 0.8\columnwidth]{hd94518_par_2k19_sep_16.eps}
\includegraphics[width=0.8\columnwidth, height= 0.8\columnwidth]{hd147609_par_2k19_sep_16.eps}
\includegraphics[width=0.8\columnwidth, height= 0.8\columnwidth]{hd154276_par_2k19_sep_16.eps}
\caption{The solid curve represents the best fit of the parametric model function.
The points with error bars indicate the observed abundances in
(i) \textit{Top panel:} HD~94518 (ii) \textit{Middle panel:} HD~147609
(iii) \textit{Bottom panel:} HD~154276.}
\label{parametric2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth, height= 0.8\columnwidth]{hd179832_par_2k19_sep_16.eps}
\includegraphics[width=0.8\columnwidth, height= 0.8\columnwidth]{hd207585_par_2k19_sep_16.eps}
\caption{The solid curve represents the best fit of the parametric model function.
The points with error bars indicate the observed abundances in
(i) \textit{Top panel:} HD~179832
(ii) \textit{Bottom panel:} HD~207585.}
\label{parametric3}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth, height= 0.8\columnwidth]{hd211173_par_2k19_sep_16.eps}
\includegraphics[width=0.8\columnwidth, height= 0.8\columnwidth]{hd219116_par_2k19_sep_16.eps}
\caption{The solid curve represents the best fit of the parametric model function.
The points with error bars indicate the observed abundances in
(i) \textit{Top panel:} HD~211173
(ii) \textit{Bottom panel:} HD~219116.}
\label{parametric4}
\end{figure}
\subsection{Discussion on individual stars}
{\small\textbf{HD~24035, HD~219116, HD~32712, HD~36650, HD~207585
and HD~211173:}}\\
These objects are listed in the CH star catalogue of
Bartkevicius (1996) as well as in the barium star catalogue
of L\"u (1991). While Smith et al. (1993) classified HD~219116 as
a CH subgiant, MacConnell et al. (1972) and Mennessier et al. (1997)
suggested these objects to be giant barium stars.
Based on our temperature and luminosity estimates, their locations
in the H-R diagram correspond to the giant phase of evolution, except
for HD~207585 which is found to be a strong Ba sub-giant.
Earlier studies on these objects include Masseron et al. (2010) and
de Castro et al. (2016) on abundance analysis. We have estimated
the abundances of all the important s-process elements and Eu in
these objects except Sr in HD~24035. Based on our abundance analysis,
we find these objects to satisfy the criteria for
s-process enriched stars (Beers \& Christlieb 2005), with
[Ba/Fe]$>$1 and [Ba/Eu]$>$0.50 respectively. Following
Yang et al. (2016), they can also be included in the strong
Ba giant category, while HD~211173 is a mild Ba giant.
They show the characteristics of Ba stars,
with estimated C/O $\sim$0.95, 0.51, 0.56, 0.24 and 0.59 for
HD~219116, HD~32712, HD~36650, HD~207585 and HD~211173 respectively.
HD~24035 shows the largest enhancement of Ba among our
program stars, with [Ba/Fe]$\sim$1.71,
and the largest mean s-process abundance, with [s/Fe]$\sim$1.55.
A comparison with FRUITY models shows that the former AGB
companions of HD~24035, HD~32712, HD~36650, HD~219116, HD~207585
and HD~211173 are low-mass objects with masses 2.5 M$_{\odot}$,
2.0 M$_{\odot}$, 3.0 M$_{\odot}$, 3.0 M$_{\odot}$, 2.5 M$_{\odot}$,
and 2.5 M$_{\odot}$ respectively.
From the kinematic analysis we find these objects to belong
to the thin disk population with probability $\geq$0.97.
The estimated spatial velocities, $<$85 km s$^{-1}$, also satisfy
the criterion of Chen et al. (2004) for stars to be thin
disk objects. From radial velocity monitoring, HD~24035
and HD~207585 are confirmed to be binaries with orbital periods
of 377.82$\pm$0.35 days (Udry et al. 1998a) and 672$\pm$2 days
(Escorza et al. 2019) respectively. \\
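Here and below, the spatial velocity is taken to be the usual total space
velocity (we assume the standard convention, with velocity components
relative to the local standard of rest),
\[
V_{\mathrm{spa}} = \left(U_{\mathrm{LSR}}^{2} + V_{\mathrm{LSR}}^{2}
+ W_{\mathrm{LSR}}^{2}\right)^{1/2},
\]
so that the thin disk criterion of Chen et al. (2004) quoted above reads
V$_{\mathrm{spa}}$ $<$ 85 km s$^{-1}$. \\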
\noindent {\small\textbf{HD~94518:}}
This object belongs to the CH star catalogue of Bartkevicius (1996).
Our abundance analysis places it in the strong Ba star
category, with C/O$\sim$0.06.
The abundance pattern observed in this star resembles
that of a 1.5 M$_{\odot}$ AGB star.
The position of this star in the H-R diagram shows it to be a
subgiant. Kinematic analysis shows this object to belong to the
thick disk population with a probability $\sim$0.95. \\
\noindent {\small\textbf{HD~147609:}}
This star is listed in the CH star catalogue of Bartkevicius (1996).
It is a strong Ba dwarf with C/O$\sim$0.27.
Comparison of the observed abundances in HD~147609 with the
FRUITY models shows a close resemblance to those seen in a
2.5 M$_{\odot}$ AGB star.
Kinematic analysis has shown that this object belongs to the thin disk
population, with the characteristic spatial velocity of thin disk objects.
The radial velocity monitoring study by Escorza et al. (2019)
has confirmed this object to be a binary with an orbital period of
1146$\pm$1.5 days. \\
\noindent {\small\textbf{HD~154276:}}
This star is listed in the CH star catalogue of Bartkevicius (1996).
Our analysis has shown that this star is a dwarf.
Our analysis based on the mean s-process abundance, [s/Fe], revealed
that this object cannot be considered a Ba star. \\
\noindent {\small\textbf{HD~179832:}}
This object belongs to the CH star catalogue of Bartkevicius (1996).
We have presented a first-time detailed abundance analysis for
this object. Our analysis has shown that this object is a mild Ba giant.
The abundance trend observed in this star suggests that the former
companion AGB star's mass is 3 M$_{\odot}$.
From the kinematic analysis, HD~179832 is found to be a thin disk object
with a probability of 0.99. The spatial velocity is estimated to be
11.97 km s$^{-1}$, as expected for thin disk stars (Chen et al. 2004). \\
{\footnotesize
\begin{table*}
\caption{Comparison of the heavy elemental abundances of our program stars with the literature values.} \label{abundance_comparison_literature}
\resizebox{0.6\textheight}{!}{\begin{tabular}{lcccccccccc}
\hline
Star name & [Fe I/H] & [Fe II/H] & [Fe/H] & [Rb I/Fe] & [Sr/Fe] & [Y I/Fe] & [Y II/Fe] & [Zr I/Fe] & Ref \\
\hline
HD~24035 & $-$0.51 & $-$0.50 & $-$0.51 & - & - & 1.61 & - & 1.21 & 1 \\
& $-$0.23 & $-$0.28 & $-$0.26 & - & - & 1.35 & - & 1.20 & 2 \\
& - & - & $-$0.14 & - & - & - & - & - & 3 \\
HD~32712 & $-$0.25 & $-$0.25 & $-$0.25 & $-$1.13 & 0.03 & 0.56 & 1.04 & 0.52 & 1 \\
& $-$0.24 & $-$0.25 & $-$0.25 & - & - & 0.74 & - & 0.56 & 2 \\
HD~36650 & $-$0.02 & $-$0.02 & $-$0.02 & $-$0.82 & 0.66 & 0.51 & 0.70 & 0.51 & 1 \\
& $-$0.28 & $-$0.28 & $-$0.28 & - & - & 0.55 & - & 0.46 & 2 \\
HD~94518 & $-$0.55 & $-$0.55 & $-$0.55 & - & 1.18 & - & 0.50 & - & 1 \\
& $-$0.49 & $-$0.50 & $-$0.50 & - & 0.55 & - & - & - & 4 \\
HD~147609 & $-$0.28 & $-$0.28 & $-$0.28 & - & 1.51 & - & 1.07 & - & 1 \\
& $-0.45$ & $+0.08$ & $-$0.45 & - & 1.32 & - & 1.57 & 0.89 & 5 \\
& - & - & - & - & - & 0.96 & - & 0.80 & 6 \\
HD~154276 & $-$0.09 & $-$0.10 & $-$0.10 & - & $-$0.22 & - & 0.07 & - & 1 \\
& - & - & $-$0.29 & - & - & $-$0.07 & - & - & 4 \\
HD~179832 & +0.23 & +0.23 & +0.22 & $-$1.35 & 0.02 & - & 0.11 & 1.29 & 1 \\
HD~207585 & $-$0.38 & $-$0.38 & $-$0.38 & - & - & 0.94 & 1.37 & 1.10 & 1 \\
& - & - & $-$0.50 & - & - & - & - & - & 3 \\
& - & - & $-$0.57 & - & - & 1.29 & - & 1.50 & 7 \\
HD~211173 & $-$0.17 & $-$0.17 & $-$0.17 & $-$1.00 & 0.70 & 0.38 & 0.65 & 0.38 & 1 \\
& - & - & $-$0.12 & - & - & - & - & - & 3 \\
HD~219116 & $-$0.45 & $-$0.44 & $-$0.45 & - & 0.71 & 0.73 & 0.75 & 0.66 & 1 \\
& $-$0.61 & $-$0.62 & $-$0.62 & - & - & 0.59 & - & 0.65 & 2 \\
& - & - & $-$0.34 & - & - & - & - & - & 3 \\
& - & - & $-$0.30 & - & - & - & - & - & 8 \\
\hline
Star name & [Zr II/Fe] & [Ba II/Fe] & [La II/Fe] & [Ce II/Fe] & [Pr II/Fe] & [Nd II/Fe] & [Sm II/Fe] & [Eu II/Fe] & Ref \\
\hline
HD~24035 & 1.89 & 1.71 & 1.63 & 1.70 & 1.98 & 1.41 & 2.03 & 0.19 & 1 \\
& - & - & 2.70 & 1.58 & - & 1.58 & - & - & 2 \\
& - & 1.07 & 1.01 & 1.63 & - & - & - & 0.32 & 3 \\
HD~32712 & 0.82 & 1.39 & 1.25 & 1.68 & 1.67 & 1.76 & 1.71 & 0.04 & 1 \\
& - & - & 1.53 & 1.16 & - & 1.19 & - & - & 2 \\
HD~36650 & - & 0.79 & 0.62 & 0.99 & 0.85 & 0.94 & 0.99 & $-$0.21 & 1 \\
& - & - & 0.83 & 0.68 & - & 0.57 & - & - & 2 \\
HD~94518 & 0.29 & 0.90 & 0.58 & 0.91 & - & 1.05 & 1.39 & $-$0.17 & 1 \\
& - & 0.77 & - & - & - & - & - & - & 4 \\
HD~147609 & 1.00 & 1.40 & 1.27 & 1.26 & 1.35 & 1.07 & 1.29 & 0.13 & 1 \\
& 1.56 & 1.57 & 1.63 & 1.64 & 1.22 & 1.32 & 1.09 & 0.74 & 5 \\
& - & - & - & - & - & 0.98 & - & - & 6 \\
HD~154276 & $-$0.08 & 0.22 & 0.20 & 0.18 & - & 0.40 & 0.07 & - & 1 \\
& - & $-$0.03 & - & - & - & - & - & - & 4 \\
HD~179832 & 1.44 & 0.41 & 0.52 & 0.74 & 0.23 & 0.04 & 0.78 & 0.00 & 1 \\
HD~207585 & 1.20 & 1.60 & 1.70 & 1.72 & 1.59 & 1.62 & 2.04 & $-$0.02 & 1 \\
& - & 1.23 & 1.37 & 1.41 & - & - & - & 0.58 & 3 \\
& - & - & 1.60 & 0.84 & 0.61 & 0.93 & 1.05 & - & 7 \\
HD~211173 & 0.39 & 0.57 & 0.95 & 0.74 & 1.59 & 0.73 & 0.87 & $-$0.17 & 1 \\
& - & 0.35 & 0.29 & 0.73 & - & - & - & 0.15 & 3 \\
HD~219116 & - & 1.22 & 1.35 & 1.57 & 1.27 & 1.13 & 1.58 & 0.13 & 1 \\
& - & - & 1.21 & 1.07 & - & - & - & - & 2 \\
& - & 0.77 & 0.56 & 0.80 & - & - & - & 0.17 & 3 \\
& - & 0.90 & - & - & - & 1.43 & - & - & 8 \\
\hline
\end{tabular}}
References: 1. Our work, 2. de Castro et al. (2016), 3. Masseron et al. (2010),
4. Bensby et al. (2014), 5. Allen \& Barbuy (2006a), 6. North et al. (1994a),
7. Luck \& Bond (1991), 8. Smith et al. (1993) \\
\end{table*}
}
\section{CONCLUSIONS}
Results from high resolution spectroscopic analysis of ten objects are
presented. All the objects are listed in the CH star catalog of
Bartkevicius (1996). Six of them are also listed in the barium star catalog
of L\"u (1991). Except for one object, HD~154276, all the objects are shown
to be bona fide barium stars from our analysis. Although HD~154276
satisfies the criterion of Yang et al. (2016) to be a mild
barium star, our detailed abundance analysis shows this object to
be a normal metal-poor star. An analysis based on the mean s-process
abundance clearly
shows that this particular star lies among the stars rejected as
barium stars by de Castro et al. (2016).
\par For some of the objects analysed here, although they are common to
samples analysed by different authors, abundances of important elements
such as Rb, C, N and O are not found in the literature. New results for
these elements are presented in this work.
\par We have presented abundance results for HD~179832 for the first time
and shown it to be a mild barium giant. A kinematic analysis has shown it to be
a thin-disk object, and a parametric model based analysis indicates that the
mass of the object's former companion AGB star is about 3M$_{\odot}$.
\par The sample of stars analysed here covers a metallicity range
from $-$0.55 to +0.23, and a kinematic analysis has shown that all of them
belong to the Galactic disk, as expected for barium stars.
\par The estimated masses of the barium stars are consistent
with those observed for other barium stars (Allen \& Barbuy 2006a,
Liang et al. 2003, Antipova et al. 2004, de Castro et al. 2016).
The abundance estimates are consistent with the operation of
$^{13}$C($\alpha$, n)$^{16}$O source in the former low-mass AGB companion.
\par We did not find any enhancement
of Mg in our sample, which rules out the
$^{22}$Ne($\alpha$, n)$^{25}$Mg reaction as the neutron source. An enhancement of Mg
abundances compared with their counterparts in disk stars and
normal giants would have indicated the operation
of $^{22}$Ne($\alpha$, n)$^{25}$Mg.
\par The detection of the Rb I line at 7800.259 {\rm \AA}
in the spectra of HD~32712, HD~36650, HD~179832 and HD~211173
allowed us to determine the [Rb/Zr] ratio. This ratio gives an indication of
the neutron source at the s-process site
and in turn provides clues to the mass of the star.
We have obtained negative values for this ratio in all four stars,
indicating the operation of the $^{13}$C($\alpha$, n)$^{16}$O reaction.
As this reaction occurs in the low-mass AGB stars, we confirm
that the former companions
of these stars are low-mass AGB stars with M $\leq$ 3 M$_{\odot}$.
\par Distribution of abundance patterns and [hs/ls] ratios also indicate
low-mass companions for the objects for which [Rb/Zr] could not be estimated.
A comparison of the observed abundances with the predictions from FRUITY
models, and with those observed in low-mass AGB
stars from the literature, confirms low masses for the former companion AGB stars.
\section{ACKNOWLEDGMENT}
We thank the staff at IAO and at the remote control station at
CREST, Hosakotte for assisting during the observations.
Funding from the DST SERB project No. EMR/2016/005283 is gratefully
acknowledged.
This work made use of the SIMBAD astronomical database, operated
at CDS, Strasbourg, France, and the NASA ADS, USA.
This work has made use of data from the European Space Agency (ESA)
mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia
Data Processing and Analysis Consortium
(DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium).
T.M. acknowledges support provided by the Spanish Ministry
of Economy and Competitiveness (MINECO) under grant AYA-
2017-88254-P. Based on data collected using HESP, UVES
and FEROS. The authors would like to thank the referee for useful suggestions
that had improved the readability of the paper.
{\footnotesize
\begin{table*}
\caption{Equivalent widths (in m\r{A}) of Fe lines used for deriving
atmospheric parameters.} \label{linelist1}
\resizebox{\textwidth}{!}{\begin{tabular}{lccccccccccccccc}
\hline
Wavelength(\r{A}) & El & $E_{low}$(eV) & log gf & HD~24035 & HD~32712 & HD~36650 & HD~94518 & HD~147609 & HD~154276 & HD~179832 & HD~207585 & HD~211173 & HD~219116 & Ref \\
\hline
4114.445 & Fe I & 2.832 & $-$1.220 & - & - & - & - & - & - & - & -& - & - & 1 \\
4132.899 & & 2.850 & $-$1.010 & - & - & - & 88.7(6.94) & - & - & - & 83.4(6.96) & 132.2(7.31) & - & 1 \\
4153.900 & & 3.400 & $-$0.320 & 129.9(6.79) & - & - & - & - & - & - & 90.8(6.90) & - & - & 1 \\
4154.499 & & 2.830 & $-$0.690 & - & - & - & - & - & - & - & - & - & - & 2 \\
4184.891 & & 2.832 & $-$0.860 & - & - & - & - & - & - & - & -& - & - & 1 \\
\hline
\end{tabular}}
The numbers in the parentheses in columns 5-14 give the derived
abundances from the respective line. \\
References: 1. F\"uhr et al. (1988) 2. Kurucz (1988)\\
\textbf{Note:} This table is available in its entirety in online only.
A portion is shown here for guidance regarding its form and content.
\end{table*}
}
{\footnotesize
\begin{table*}
\caption{Equivalent widths (in m\r{A}) of lines used for deriving
elemental abundances.} \label{linelist2}
\resizebox{\textwidth}{!}{\begin{tabular}{lccccccccccccccc}
\hline
Wavelength(\r{A}) & El & $E_{low}$(eV) & log gf & HD~24035 & HD~32712 & HD~36650 & HD~94518 & HD~147609 & HD~154276 & HD~179832 & HD~207585 & HD~211173 & HD~219116 & Ref \\
\hline
5682.633 & Na I & 2.102 & $-0.700$ & 126.7(6.29) & 133.5(6.31) & 126.0(6.44) & 63.5(5.86) & 66.2(6.13) & 83.5(6.23) & 132.2(6.55) & 79.0(6.17) & 121.5(6.43) & 104.5(6.13) & 1 \\
5688.205 & & 2.105 & $-$0.450 & 137.1(6.21) & 151.3(6.31) & 130.1(6.26) & 85.2(5.94) & 98.2(6.39) & - & 145.1(6.50) & 98.0(6.22) & 136.2(6.41) & 126.8(6.24) & 1 \\
6154.226 & & 2.102 & $-$1.560 & 49.4(5.91) & 66.9(6.04) & 67.7(6.30) & 20.5(5.90) & - & 27.4(6.13) & - & 22.2(6.00) & 60.1(6.21) & 37.9(5.92) & 1 \\
6160.747 & & 2.104 & $-$1.260 & 88.3(6.20) & 92.7(6.16) & 87.5(6.31) & 29.2(5.81) & - & 47.9(6.22) & - & 38.0(6.03) & 74.8(6.15) & 57.2(5.93) & 1 \\
4571.096 & Mg I & 0.000 & $-$5.691 & - & - & - & - & - & 108.9(7.88) & - & - & - & - & 2 \\
4702.991 & & 4.346 & $-$0.666 & - & 179.6(7.10) & - & 179.6(7.27) & 156.2(7.56) & - & - & 149.0(7.16) & - & 180.0(7.48) & 3 \\
4730.029 & & 4.346 & $-$2.523 & - &- & 85.1(7.71) & 45.5(7.46) & 32.3(7.48) & 60.2(7.80) & 87.1(7.81) & - & 76.8(7.60) & - & 3 \\
\hline
\end{tabular}}
The numbers in the parentheses in columns 5-14 give the derived abundances from the respective line. \\
References: 1. Kurucz et al. (1975), 2. Laughlin et al. (1974), 3. Lincke et al. (1971)\\
\textbf{Note:} This table is available in its entirety in online only.
A portion is shown here for guidance regarding its form and content.
\end{table*}
}
{\footnotesize
\begin{table*}
\caption{Estimates of [ls/Fe], [hs/Fe], [s/Fe], [hs/ls], [Rb/Sr], [Rb/Zr], C/O} \label{hs_ls}
\begin{tabular}{lcccccccc}
\hline
Star name & [Fe/H] & [ls/Fe] & [hs/Fe] & [s/Fe] & [hs/ls] & [Rb/Sr] & [Rb/Zr] & C/O\\
\hline
HD 24035 & $-$0.51 & 1.41 & 1.61 & 1.55 & 0.20 & -- & -- & -- \\
HD 32712 & $-$0.25 & 0.37 & 1.52 & 1.03 & 1.15 & $-$1.06 & $-$1.65 & 0.51 \\
HD 36650 & $-$0.02 & 0.56 & 0.84 & 0.72 & 0.28 & $-$1.48 & $-$1.33 & 0.56 \\
HD 94518 & $-$0.55 & 0.67 & 0.86 & 0.77 & 0.19 & -- & -- & 0.06 \\
HD 147609 & $-$0.28 & 1.19 & 1.25 & 1.23 & 0.06 & -- & -- & 0.27 \\
HD 154276 & $-$0.10 & $-$0.08 & 0.25 & 0.11 & 0.33 & -- & -- & -- \\
HD 179832 & +0.23 & 0.47 & 0.66 & 0.45 & 0.19 & $-$1.37 & $-$2.64 & -- \\
HD 207585 & $-$0.38 & 1.02 & 1.66 & 1.45 & 0.64 & -- & -- & 0.24 \\
HD 211173 & $-$0.17 & 0.49 & 0.75 & 0.64 & 0.26 & $-$1.70 & $-$1.38 & 0.59 \\
HD 219116 & $-$0.45 & 0.70 & 1.32 & 1.05 & 0.62 &-- & -- & 0.95 \\
\hline
\end{tabular}
\end{table*}
}
\section{Introduction}
Knowledge graph (KG) embedding is a crucial method for a wide range of KG-based applications including question answering, recommendation systems and drug discovery. Prominent examples include the \emph{additive} (or \emph{translational}) family \citep{DBLP:conf/nips/BordesUGWY13,DBLP:conf/aaai/WangZFC14,DBLP:conf/aaai/LinLSLZ15} and the \emph{multiplicative} (or \emph{bilinear}) family \citep{DBLP:conf/icml/NickelTK11,DBLP:journals/corr/YangYHGD14a,DBLP:conf/icml/LiuWY17}.
These approaches, however, are typically built upon the Euclidean geometry, which requires extremely high dimensionality when dealing with hierarchical KGs such as WordNet \citep{DBLP:journals/cacm/Miller95} and Gene Ontology \citep{ashburner2000gene}.
Recent studies \citep{DBLP:conf/nips/NickelK17,DBLP:conf/nips/ChamiYRL19} show that hyperbolic geometry, with its exponentially growing volume, is more suitable for embedding hierarchical data, significantly reducing the dimensionality and improving KG embeddings \citep{suzuki2018riemannian,DBLP:conf/acl/ChamiWJSRR20}.
Although hierarchies are the most ubiquitous structures in KGs, KGs are usually organized highly heterogeneously: a KG consists of a mixture of multiple hierarchical and non-hierarchical relations.
Typically, different hierarchical relations (e.g., isA and partOf) form distinct tree-like substructures, while different non-hierarchical relations (e.g., likes and friendOf) capture different types of interactions between entities at the same level.
Fig.\ref{fig:example}(left) shows an example of a KG consisting of heterogeneous graph structures. However, current hyperbolic KG embedding methods such as MuRP \citep{suzuki2018riemannian} can only model a single hierarchy. RotH \citep{DBLP:conf/acl/ChamiWJSRR20} alleviates this issue by learning relation-specific curvatures to distinguish the geometric characteristics of different relations. However, RotH still cannot model non-hierarchical relations (e.g., cyclic structures).
To deal with data with heterogeneous geometry, recent works \citep{law2020ultrahyperbolic,sim2021directed,DBLP:journals/corr/abs-2106-03134} propose to learn embedding in a pseudo-Riemannian manifold that interleaves the hyperbolic and spherical submanifolds. The key idea is that the hyperbolic submanifold is able to embed hierarchies while the spherical submanifold is able to embed non-hierarchies (e.g, cyclic structures). Pseudo-Riemannian manifolds have shown promising results on embedding hierarchical graphs with cycles. However, such powerful representation space has not yet been exploited in KG embedding.
In this paper, we propose pseudo-Riemannian knowledge graph embeddings (PseudoE), the first KG embedding method in a pseudo-Riemannian manifold that simultaneously embeds multiple hierarchical and non-hierarchical relations in a single space. In particular, we model entities as points in the pseudo-Riemannian manifold and model relations as pseudo-orthogonal transformations that cover various \emph{isometries} of the pseudo-Riemannian manifold, including rotation/reflection and translation.
We derive a relational parameterization with linear complexity by exploiting the theorem of hyperbolic Cosine-Sine decomposition and Givens transformation operators.
PseudoE is shown to be \emph{fully expressive}, and the geometric operations allow for modelling multiple logical patterns including inversion, composition, symmetry and anti-symmetry.
Experimental results on standard KG benchmarks show that PseudoE outperforms previous Euclidean- and hyperbolic-based approaches, in both low-dimensional and high-dimensional settings.
\section{Related Work}
\subsection{Knowledge Graph Embeddings.}
Various KG embedding approaches have been proposed, such as TransE \citep{DBLP:conf/nips/BordesUGWY13}, ComplEx \citep{DBLP:conf/icml/TrouillonWRGB16} and RotatE \citep{DBLP:conf/iclr/SunDNT19}. These approaches are built upon the Euclidean geometry, and often require high embedding dimensionality when dealing with hierarchical KGs. Recent works propose to learn KG embeddings in the hyperbolic geometry, such as MuRP \citep{suzuki2018riemannian}, which models a relation as a combination of Möbius multiplication and Möbius addition, as well as RotH/RefH \citep{DBLP:conf/acl/ChamiWJSRR20}, which models relations as hyperbolic isometries (rotation/reflection and translation) to capture logical patterns and achieves the state-of-the-art for the KG completion task.
RotH/RefH learn relation-specific curvatures to distinguish the geometric characteristics of different relations, but still cannot tackle non-hierarchical relations, since hyperbolic space is not the optimal geometry for representing non-hierarchies. 5$\star$E \citep{DBLP:conf/aaai/NayyeriVA021} proposes $5$ transformations (inversion, reflection, translation, rotation, and homothety) to support multiple graph structures, but the embeddings are still learned in the Euclidean space. Our method is learned in a pseudo-Riemannian manifold that generalizes the hyperbolic and spherical manifolds, the optimal geometries for hierarchical and non-hierarchical data, respectively.
\subsection{Pseudo-Riemannian Embeddings.}
A few recent works have explored the application of pseudo-Riemannian geometry in representation learning. \citet{sun2015space} first applied pseudo-Riemannian geometry (or \textit{pseudo-Euclidean space}) to embed non-metric data while preserving local information. \citet{clough2017embedding} exploited Lorentzian spacetime for embedding directed acyclic graphs. More recently, \citet{law2020ultrahyperbolic} proposed learning graph embeddings on the pseudo-hyperboloid and provided some necessary geodesic tools, \citet{sim2021directed} further extended this to directed graph embedding, and \citet{DBLP:journals/corr/abs-2106-03134,law2021ultrahyperbolic} extended pseudo-Riemannian embeddings to support neural network operators.
However, pseudo-Riemannian geometry has not yet been exploited in the setting of KG embeddings.
\section{Preliminaries}
\textbf{Pseudo-Riemannian geometry.} A pseudo-Riemannian manifold $(\mathcal{M},g)$ is a differentiable manifold $\mathcal{M}$ equipped with a non-degenerate metric tensor $g$, which is called a pseudo-Riemannian metric defined by,
\begin{small}
\begin{align}
\forall \mathbf{x},\mathbf{y} \in \mathbb{R}^{p,q}, \langle \mathbf{x}, \mathbf{y}\rangle_q = \sum_{i=1}^{p}x_i y_i-\sum_{j=p+1}^{p+q}x_jy_j.\label{eq:metric}
\end{align}
\end{small}
where $\mathbb{R}^{p,q}$ is the pseudo-Euclidean space (or equivalently, spacetime) with the dimensionality of $d=p+q$. A point in $\mathbb{R}^{p,q}$ is interpreted as an \emph{event}, where the first $p$ dimensions and the last $q$ dimensions are called the space and time dimensions, respectively.
The pair $(p,q)$ is called the signature of spacetime.
Two important special cases are the Riemannian ($q=0$) and Lorentz ($q=1$) geometries. The main difference of the more general pseudo-Riemannian geometry ($q \geq 2, p > 1$) is that the metric tensor need not possess positive definiteness: the scalar product induced by the metric could be positive, negative or zero.
\noindent
\textbf{Pseudo-hyperboloid.} By exploiting the scalar product induced by Eq.~(\ref{eq:metric}), pseudo-hyperboloid is defined as the submanifold in the ambient space $\mathbb{R}^{p,q}$, given by,
\begin{equation}\label{eq:pseudo-hyperboloid}
\mathcal{Q}_{\alpha}^{p, q}=\left\{\mathbf{x}=\left(x_{1}, x_{2}, \cdots, x_{p+q}\right)^{\top} \in \mathbb{R}^{p,q}:\|\mathbf{x}\|_{q}^{2}=-\alpha^{2}\right\}
\end{equation}
where $\alpha$ is the radius of curvature and $\|\mathbf{x}\|_{q}^{2}=\langle \mathbf{x}, \mathbf{x}\rangle_{q}$.
The pseudo-hyperboloid can be seen as a generalization of hyperbolic and spherical manifolds: hyperbolic and spherical manifolds can be defined as special cases of the pseudo-hyperboloid by setting all time dimensions except one to zero and by setting all space dimensions to zero, respectively, i.e., $\mathbb{H}_{\alpha} = \mathcal{Q}_{\alpha}^{p,1}, \mathbb{S}_{-\alpha} = \mathcal{Q}_{\alpha}^{0,q}$. It is commonly known that hyperbolic and spherical manifolds are the optimal geometries for hierarchical and cyclic structures, respectively. Hence, the pseudo-hyperboloid can embed both hierarchical and cyclic structures. Furthermore, the pseudo-hyperboloid contains numerous copies of hyperbolic and spherical manifolds as well as their submanifolds (e.g., the double cone; see Fig.\ref{fig:example}(b)), providing additional inductive bias for representing complex graph structures that are specific to those geometries.
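To make the membership condition concrete, the following NumPy sketch (purely illustrative; the helper names are ours and not part of any released implementation) constructs a point of $\mathcal{Q}_{\alpha}^{2,2}$ by solving for the last time coordinate:
\begin{verbatim}
import numpy as np

def sq_norm_q(x, q):
    # ||x||_q^2 = <x, x>_q: the last q coordinates count negatively.
    p = len(x) - q
    return x[:p] @ x[:p] - x[p:] @ x[p:]

alpha = 1.0
# Fix the space part and one time coordinate, then solve for the
# remaining time coordinate so that ||x||_q^2 = -alpha^2.
space, t1 = np.array([0.3, 0.4]), 1.0
t2 = np.sqrt(alpha**2 + space @ space - t1**2)
x = np.concatenate([space, [t1, t2]])
print(np.isclose(sq_norm_q(x, q=2), -alpha**2))  # True
\end{verbatim}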
\noindent
\textbf{Pseudometric.} A pseudometric on $\mathcal{Q}$ is a function $\mathrm{d}: \mathcal{Q}\times\mathcal{Q}\rightarrow \mathbb{R}$ which satisfies the axioms for a metric, except for the \emph{identity of indiscernibles}, that is, one may have $\mathrm{d}_{\gamma}(\mathbf{x}, \mathbf{y})=0 \text{ for some distinct values } \mathbf{x}\neq\mathbf{y}$.
In other words, the axioms for a pseudometric are given as:
\begin{align}\label{eq:pseusometric}
&1.\ \mathrm{d}_{\gamma}(\mathbf{x}, \mathbf{x})=0 \\
&2.\
\mathrm{d}_{\gamma}(\mathbf{x}, \mathbf{y})=\mathrm{d}_{\gamma}(\mathbf{y}, \mathbf{x}) \geq 0 \text{ (symmetry)}\\
&3.\
\mathrm{d}_{\gamma}(\mathbf{x}, \mathbf{z})\leq \mathrm{d}_{\gamma}(\mathbf{x}, \mathbf{y}) + \mathrm{d}_{\gamma}(\mathbf{y}, \mathbf{z}) \text{ (triangle inequality)}
\end{align}
\section{Pseudo-Riemannian Knowledge Graph Embeddings}
Let $\mathcal{E}$ and $\mathcal{R}$ denote the set of entities and relations.
A KG $\mathcal{K}$ is a set of triplets $(h, r, t) \in \mathcal{K}$ where $h, t \in \mathcal{E}, r \in \mathcal{R}$ denote the head, tail and their relation, respectively.
The objective is to learn, for each entity, an embedding $\mathbf{e} \in \mathcal{Q}^{p,q}$ lying in the pseudo-hyperboloid, together with a relation-specific transformation $f_r:\mathcal{Q}^{p,q} \rightarrow \mathcal{Q}^{p,q}$ within the pseudo-hyperboloid.
\subsection{Pseudo-Orthogonal Transformation}
To this end, we consider pseudo-orthogonal (or $J$-orthogonal) transformation \citep{DBLP:journals/siamrev/Higham03}, a generalization of \emph{orthogonal transformation} into pseudo-Riemannian geometry. Formally, a matrix $Q \in \mathbb{R}^{d \times d}$ is called $J$-orthogonal if
\begin{equation}
Q^{T} J Q=J
\end{equation}
where
$
J=\left[\begin{array}{cc}
I_{p} & 0 \\
0 & -I_{q}
\end{array}\right], p+q=n
$.
$J$ is a signature matrix of signature $(p,q)$. Such $J$-orthogonal matrices form a multiplicative group called the pseudo-orthogonal group $O(p,q)$. Conceptually, a matrix in $O(p,q)$ is a linear transformation of the pseudo-hyperboloid that preserves the bilinear form (i.e., $Q\mathbf{x} \in \mathcal{Q}^{p,q}\ \forall \mathbf{x} \in \mathcal{Q}^{p,q}$), and therefore acts as an \emph{isometry} (distance-preserving transformation) on the pseudo-Riemannian manifold.
One special case of pseudo-orthogonal transformation is the well-known Lorentz transformation used in special relativity and the corresponding group is called Lorentz group $O(p,1)$.
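As a minimal numerical check (a NumPy sketch of ours, not taken from any reference implementation), the $2 \times 2$ Lorentz boost below satisfies the $J$-orthogonality condition for signature $(1,1)$:
\begin{verbatim}
import numpy as np

def J(p, q):
    return np.diag(np.concatenate([np.ones(p), -np.ones(q)]))

def is_J_orthogonal(Q, p, q):
    Jm = J(p, q)
    return np.allclose(Q.T @ Jm @ Q, Jm)

# A hyperbolic rotation (Lorentz boost) mixing one space and one
# time axis; cosh^2 - sinh^2 = 1 makes it J-orthogonal.
t = 0.7
Q = np.array([[np.cosh(t), np.sinh(t)],
              [np.sinh(t), np.cosh(t)]])
print(is_J_orthogonal(Q, p=1, q=1))  # True
\end{verbatim}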
\begin{proposition}\label{pro:lorent}
Lorentz transformation is a pseudo-orthogonal transformation with signature $(p,1)$.
\end{proposition}
Proposition \ref{pro:lorent} is important as it connects our method with some existing methods such as MuRP, RotH/RefH and HyboNet (all of these methods can be seen as special cases of our method); see Sec.~5 for details.
\subsection{Hyperbolic Cosine-Sine Decomposition}
Modelling relations as $J$-orthogonal matrices requires $\mathcal{O}(d^2)$ parameter space, the same as many bilinear models such as RESCAL \citep{DBLP:conf/icml/NickelTK11}. However, many works such as DistMult show that relation embeddings with linear complexity have advantages in both effectiveness and efficiency.
To reduce the space complexity of the relation parameterization, we first decompose the $J$-orthogonal matrix into several components via the Hyperbolic Cosine-Sine Decomposition \citep{DBLP:journals/siammax/StewartD05}.
\begin{theorem}[Hyperbolic Cosine-Sine Decomposition]
Let $Q$ be $J$-orthogonal and assume that $q \leq p.$ Then there are orthogonal matrices $U_{1}, V_{1} \in \mathbb{R}^{p \times p}$ and $U_{2}, V_{2} \in \mathbb{R}^{q \times q}$ s.t.
\begin{equation}\label{eq:cs_decomposition}
Q=\left[\begin{array}{cc}
U_{1} & 0 \\
0 & U_{2}
\end{array}\right]\left[\begin{array}{ccc}
C & 0 & -S \\
0 & I_{p-q} & 0 \\
-S & 0 & C
\end{array}\right]\left[\begin{array}{cc}
V_{1}^{T} & 0 \\
0 & V_{2}^{T}
\end{array}\right]
\end{equation}
where $C=\operatorname{diag}\left(c_{i}\right), S=\operatorname{diag}\left(s_{i}\right)$ and $C^{2}-S^{2}=I$. For cases where $q > p$, the decomposition can be defined analogously (see Appendix for details).
\end{theorem}
Essentially, a $J$-orthogonal matrix $Q$ is decomposed into several geometric transformations. The orthogonal matrices $U_{1}, V_{1}, U_2, V_2$ represent \emph{rotations} (if the determinant $\operatorname{det}(U)=1$) or \emph{reflections} (if the determinant $\operatorname{det}(U)=-1$).
$U_{1}, V_{1}$ represent space rotations/reflections while $U_{2}, V_{2}$ represent time rotations/reflections. The middle matrix, parameterized by the diagonal matrices $C,S$, acts as a \emph{translation} or \emph{generalized boost}\footnote{In Lorentz geometry, it is called a \emph{boost} or \emph{hyperbolic rotation}.} across the space and time coordinates.
Note that both the rotation/reflection and translation operators are important for KG embedding. Rotations/reflections are able to model complex logical patterns including inversion, composition, symmetry and anti-symmetry, while translation is able to capture geometric patterns such as hierarchies, by connecting points between different levels of a single hierarchy or points across distinct hierarchies.
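The decomposition can also be verified numerically. The sketch below (illustrative only; it assembles random factors rather than learned ones) builds $Q$ as in Eq.~(\ref{eq:cs_decomposition}) with $p=3$, $q=2$ and checks $Q^TJQ=J$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
p, q = 3, 2

def random_orthogonal(n):
    # The Q factor of a QR decomposition of a Gaussian matrix
    # is a random orthogonal matrix.
    return np.linalg.qr(rng.normal(size=(n, n)))[0]

U1, V1 = random_orthogonal(p), random_orthogonal(p)
U2, V2 = random_orthogonal(q), random_orthogonal(q)

b = rng.normal(size=q)
C, S = np.diag(np.sqrt(1.0 + b**2)), np.diag(b)  # C^2 - S^2 = I

M = np.eye(p + q)                 # middle factor; I_{p-q} block stays
M[:q, :q] = C;  M[:q, p:] = -S
M[p:, :q] = -S; M[p:, p:] = C

U = np.block([[U1, np.zeros((p, q))], [np.zeros((q, p)), U2]])
V = np.block([[V1, np.zeros((p, q))], [np.zeros((q, p)), V2]])
Q = U @ M @ V.T

J = np.diag(np.concatenate([np.ones(p), -np.ones(q)]))
print(np.allclose(Q.T @ J @ Q, J))  # True: Q is J-orthogonal
\end{verbatim}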
\subsection{Relation Parameterization}
To capture both logical and geometric patterns, we parameterize rotation/reflection and translation, respectively.
\noindent
\textbf{Rotation/Reflection.} Parameterizing rotation/reflection matrices is non-trivial; some approaches have been proposed, such as using the Cayley transform, but they require $\mathcal{O}(d^2)$ space complexity. Similar to \citep{DBLP:conf/acl/ChamiWJSRR20}, we parameterize rotation/reflection using Givens transformations denoted by $2 \times 2$ matrices. Supposing the number of dimensions $d$ is even, rotation and reflection can be denoted by block-diagonal matrices of the form
\begin{equation}\label{eq:rot_ref}
\begin{array}{c}
\operatorname{Rot}\left(\Theta_{r}\right)=\operatorname{diag}\left(G^{+}\left(\theta_{r, 1}\right), \ldots, G^{+}\left(\theta_{r, \frac{d}{2}}\right)\right) \\
\operatorname{Ref}\left(\Phi_{r}\right)=\operatorname{diag}\left(G^{-}\left(\phi_{r, 1}\right), \ldots, G^{-}\left(\phi_{r, \frac{d}{2}}\right)\right) \\
\text { where } G^{\pm}(\theta):=\left[\begin{array}{cc}
\cos (\theta) & \mp \sin (\theta) \\
\sin (\theta) & \pm \cos (\theta)
\end{array}\right]
\end{array}
\end{equation}
$\Theta_{r}:=\left(\theta_{r, i}\right)_{i \in\left\{1, \ldots \frac{d}{2}\right\}}$ and $\Phi_{r}:=\left(\phi_{r, i}\right)_{i \in\left\{1, \ldots \frac{d}{2}\right\}}$ are relation-specific parameters. Hence, the rotation/reflection matrices in Eq.~(\ref{eq:cs_decomposition}) can be parameterized by
\begin{equation}
R_r=\left[\begin{array}{cc}
\operatorname{\tilde{R}}\left(\Theta_{r_{U_1}}\right) & 0 \\
0 & \operatorname{\tilde{R}}\left(\Theta_{r_{U_2}}\right)
\end{array}\right]
\end{equation}
where $\tilde{R}$ can be either a rotation or a reflection matrix as in Eq.~(\ref{eq:rot_ref}). Note that although rotation is enough to model and infer symmetric patterns \citep{DBLP:conf/emnlp/WangLLS21} (i.e., by setting the rotation angle $\theta=\pi$ or $\theta=0$), reflection is more powerful for representing symmetric relations since its second power is the identity.
AttH \citep{DBLP:conf/emnlp/WangLLS21} combines rotations and reflections by using an attention mechanism learned in the tangent space. In our paper, we also combine rotation and reflection operators, but in a different way: we set $U_1,U_2$ to be rotation matrices and $V_1,V_2$ to be reflection matrices.
Our ablation study will show that such a combination leads to additional performance improvements.
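A minimal sketch of the Givens parameterization (ours, for illustration) shows the $\mathcal{O}(d)$ construction and the two key properties used above: orthogonality of the $G^{+}$ blocks and involutivity of the $G^{-}$ blocks.
\begin{verbatim}
import numpy as np

def givens_block(theta, reflect=False):
    c, s = np.cos(theta), np.sin(theta)
    if reflect:
        return np.array([[c, s], [s, -c]])  # G^-(theta), det = -1
    return np.array([[c, -s], [s, c]])      # G^+(theta), det = +1

def block_diag_givens(angles, reflect=False):
    # Rot(Theta_r) / Ref(Phi_r): d/2 independent 2x2 blocks,
    # hence only O(d) relation parameters.
    d = 2 * len(angles)
    R = np.zeros((d, d))
    for i, th in enumerate(angles):
        R[2*i:2*i+2, 2*i:2*i+2] = givens_block(th, reflect)
    return R

Rot = block_diag_givens([0.3, 1.2])
Ref = block_diag_givens([0.3, 1.2], reflect=True)
print(np.allclose(Rot.T @ Rot, np.eye(4)))  # orthogonal
print(np.allclose(Ref @ Ref, np.eye(4)))    # reflections square to I
\end{verbatim}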
\textbf{Translation.} The translation matrix is parameterized by two diagonal matrices $C,S$, which requires only a linear number of parameters.
To satisfy the condition $C^{2}-S^{2}=I$ in Eq.~(\ref{eq:cs_decomposition}), we parameterize $C=\operatorname{diag}\left(\sqrt{1+b_{1}^{2}}, \ldots, \sqrt{1+b_{q}^{2}}\right)$ and $S=\operatorname{diag}\left(b_{1}, \ldots, b_{q}\right)$. Therefore, the translation matrix can be denoted by
\begin{equation}\tiny
B_r=\left[\begin{array}{ccc}
\operatorname{diag}\left(\sqrt{1+b_{1}^{2}}, \ldots, \sqrt{1+b_{q}^{2}}\right) & 0 & \operatorname{diag}\left(b_{1}, \ldots, b_{q}\right) \\
0 & I_{p-q} & 0 \\
\operatorname{diag}\left(b_{1}, \ldots, b_{q}\right) & 0 & \operatorname{diag}\left(\sqrt{1+b_{1}^{2}}, \ldots, \sqrt{1+b_{q}^{2}}\right)
\end{array}\right]
\end{equation}
The final transformation function of relation $r$ is given by,
\begin{equation}\label{eq:f_r}
f_r=R_r^1 B_r R_r^2
\end{equation}
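Putting the pieces together, the following sketch (with hand-picked angles and boost parameters rather than learned values) assembles $f_r$ for $p=q=2$ and checks that it maps a point of $\mathcal{Q}_{\alpha}^{2,2}$ back onto the manifold:
\begin{verbatim}
import numpy as np

p, q, alpha = 2, 2, 1.0
J = np.diag([1.0, 1.0, -1.0, -1.0])

def rot2(t):
    # One 2x2 Givens rotation block G^+(t).
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def blockdiag2(A, B):
    # diag(A, B): one block for the space axes, one for the time axes.
    out = np.zeros((4, 4))
    out[:2, :2], out[2:, 2:] = A, B
    return out

R1 = blockdiag2(rot2(0.4), rot2(-0.9))   # R_r^1
R2 = blockdiag2(rot2(1.3), rot2(0.2))    # R_r^2

b = np.array([0.5, -0.2])                # free boost parameters
Br = np.zeros((4, 4))                    # translation factor B_r
Br[:q, :q] = np.diag(np.sqrt(1 + b**2)); Br[:q, p:] = np.diag(b)
Br[p:, :q] = np.diag(b);                 Br[p:, p:] = np.diag(np.sqrt(1 + b**2))

f_r = R1 @ Br @ R2                       # f_r = R_r^1 B_r R_r^2

e = np.array([0.3, 0.4, 0.5, 1.0])       # ||e||_q^2 = 0.25 - 1.25 = -1
y = f_r @ e
print(np.isclose(e @ J @ e, -alpha**2),  # True: e on the manifold
      np.isclose(y @ J @ y, -alpha**2))  # True: image stays on it
\end{verbatim}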
\subsection{Objective Function.}
Given the transformation function $f_r$ and entity embeddings $e$, we design a score function for each triplet $(h,r,t)$ as
\begin{equation}
s(h, r, t)=-d_{\mathcal{Q}}^{2}\left(f_{r}\left(\mathbf{e}_{h}\right), \mathbf{e}_{t}\right)+b_{h}+b_{t}+\delta
\end{equation}
where $\mathbf{e}_{h}, \mathbf{e}_{t} \in \mathcal{Q}^{p,q}$ are the embeddings of the head entity $h$ and the tail entity $t$, $b_h,b_t \in \mathbb{R}$ are entity-specific biases that act as margins in the scoring function, and $\delta$ is a global margin hyper-parameter.
$d_{\mathcal{Q}}$ is a function that quantifies nearness/distances
between two points in the manifold.
Existing approaches consider the pseudometric (or geodesic distance) $\mathrm{d}_{\gamma}$. Different from Riemannian manifolds, which are geodesically connected, pseudo-Riemannian manifolds are not geodesically connected: there exist \emph{broken cases} in which the geodesic, and hence the geodesic distance, cannot be defined. Some approximate alternatives to the geodesic distance have been proposed \citep{law2020ultrahyperbolic,DBLP:journals/corr/abs-2106-03134}. However, the geodesic distance is a \emph{pseudosemimetric} \cite{buldygin2000metric}, i.e., a symmetric premetric that satisfies the axioms of a classic metric except for the identity of indiscernibles.
In our case, we want the embedding $\mathbf{e}_{h}$ to be moved \emph{exactly} to the vicinity of $\mathbf{e}_{t}$ by the relational transformation $f_r$. Hence, instead of the geodesic distance, we consider a manifold distance.
\citet{law2020ultrahyperbolic} shows that a distance is a good proxy for the geodesic distance $d_{\gamma}(\cdot, \cdot)$ if it preserves distance relations: $\mathrm{d}_{\gamma}(\mathbf{a}, \mathbf{b})<\mathrm{d}_{\gamma}(\mathbf{c}, \mathbf{d})$ iff $\mathrm{d}_{q}(\mathbf{a}, \mathbf{b})<\mathrm{d}_{q}(\mathbf{c}, \mathbf{d})$. However, this relation is only satisfied in two special cases (i.e., the sphere and the hyperboloid) in which the geodesic distance is well-defined.
Therefore, we derive a Manhattan-like distance as the sum of distances in these two special cases, i.e., the sphere and hyperboloid manifolds.
The intuition is that minimizing the manifold distance $d_{\mathcal{Q}}(x,y)$ is equivalent to minimizing the Manhattan-like distance $d_{\mathcal{Q}}(x,\rho_x(y)) + d_{\mathcal{Q}}(\rho_x(y),y)$, where $d_{\mathcal{Q}}(x,\rho_x(y))$ is a spherical (space-like) distance and $d_{\mathcal{Q}}(\rho_x(y),y)$ is a hyperbolic (time-like) distance. Here $\rho_x(y)$ is a mapping of $y$ such that $\rho_x(y)$ and $x$ share the same space dimensions. This projection ensures that $x$ and $\rho_x(y)$ are connected by a pure space-like (spherical) geodesic, while $\rho_x(y)$ and $y$ are connected by a pure time-like (hyperbolic) geodesic.
Given this projection, we have the following theorem.
\begin{theorem}
Given two points $\mathbf{x},\mathbf{y} \in \mathcal{Q}_{\alpha}^{p, q}$, the manifold distance $d_{\mathcal{Q}}(x,y)$ satisfies the following inequality,
\begin{align}
d_{\mathcal{Q}}(x,y) \leq \min\{& d_\mathbb{S}(y,\rho_y(x)) + d_\mathbb{H}(\rho_y(x),x), \\
& d_\mathbb{S}(x,\rho_x(y)) + d_\mathbb{H}(\rho_x(y),y)\}
\end{align}
\end{theorem}
\noindent where $d_\mathbb{S}$ and $d_\mathbb{H}$ are the spherical distance and the hyperbolic distance, respectively. The inequality can be seen as a generalization of the triangle inequality in the manifold, and the equality holds if and only if the two points are connected by a pure space-like or time-like geodesic.
\subsection{Diffeomorphism Optimization.}
For each triplet, we randomly corrupt its head or tail entity with $k$ entities, calculate the probabilities for triplets as $p=\sigma(s(h, r, t))$ with the sigmoid function, and minimize the binary cross entropy loss.
\begin{equation}
\mathcal{L}=-\frac{1}{N} \sum_{i=1}^{N}\left(\log p^{(i)}+\sum_{j=1}^{k} \log \left(1-\tilde{p}^{(i, j)}\right)\right)
\end{equation}
where $p^{(i)}$ and $\tilde{p}^{(i, j)}$ are the probabilities for correct and corrupted triplets respectively, and $N$ is the sample number.
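For clarity, a NumPy sketch of this objective applied to precomputed scores follows (the scores below are random placeholders; in practice they come from $s(h,r,t)$ for the true and corrupted triplets):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(pos_scores, neg_scores):
    # pos_scores: (N,) scores of true triplets.
    # neg_scores: (N, k) scores of the k corruptions of each triplet.
    p_pos = sigmoid(pos_scores)
    p_neg = sigmoid(neg_scores)
    return -np.mean(np.log(p_pos) + np.sum(np.log(1.0 - p_neg), axis=1))

pos = rng.normal(loc=2.0, size=100)        # placeholder true scores
neg = rng.normal(loc=-2.0, size=(100, 5))  # placeholder, k = 5
print(bce_loss(pos, neg))
\end{verbatim}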
It is noteworthy that directly optimizing in the pseudo-Riemannian manifold is practically challenging. The issue is caused by the fact that there exist some points that cannot be connected by a geodesic in the manifold (hence there is no tangent direction for gradient descent). One way to sidestep the problem is to define the entity embeddings in Euclidean space and use a diffeomorphism to map the points into the manifold. Since diffeomorphisms are differentiable and bijective mappings, the canonical chain rule can be exploited to perform standard gradient descent. In particular, we use the following diffeomorphism \citep{DBLP:journals/corr/abs-2106-03134} to map $\mathbf{x}$ into the pseudo-hyperboloid.
\begin{theorem}\label{lm:sbr}\citep{DBLP:journals/corr/abs-2106-03134}
For any point $\mathbf{x} \in \mathcal{Q}_{\beta}^{s, t}$, there exists a diffeomorphism $\psi: \mathcal{Q}_{\beta}^{s, t} \rightarrow \mathbb{S}_{-\beta}^{t} \times \mathbb{R}^{s}$ that maps $\mathbf{x}$ into the product manifolds of a sphere and the Euclidean space. The mapping and its inverse are given by,
\end{theorem}
\begin{equation}
\psi(\mathbf{x})=\left(\begin{array}{c}
\sqrt{|\beta|} \frac{\mathbf{t}}{\|\mathbf{t}\|} \\
\mathbf{s}
\end{array}\right), \quad \psi^{-1}(\mathbf{z})=\left(\begin{array}{c}
\frac{\sqrt{|\beta|+\|\mathbf{v}\|^{2}}}{\sqrt{|\beta|}} \mathbf{u} \\
\mathbf{v}
\end{array}\right),
\end{equation}
where $\mathbf{x}=\left(\begin{array}{c}\mathbf{t} \\ \mathbf{s}\end{array}\right) \in \mathcal{Q}_{\beta}^{s, t}$ with $\mathbf{t} \in \mathbb{R}_{*}^{t}$ and $\mathbf{s} \in \mathbb{R}^{s}$, and $\mathbf{z}=\left(\begin{array}{c}\mathbf{u} \\ \mathbf{v}\end{array}\right) \in \mathbb{S}_{-\beta}^{t} \times \mathbb{R}^{s}$ with $\mathbf{u} \in \mathbb{S}_{-\beta}^{t}$ and $\mathbf{v} \in \mathbb{R}^{s}$. With these mappings, any vector $\mathbf{x} \in \mathbb{R}_{*}^{q+1} \times \mathbb{R}^{p}$ can be mapped to $\mathcal{Q}_{\beta}^{p, q}$ via $\varphi=\psi^{-1} \circ \psi$.
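A sketch of $\varphi=\psi^{-1} \circ \psi$ (ours; it assumes the time-first layout of the theorem and $\beta>0$) shows how unconstrained Euclidean parameters are projected onto the manifold:
\begin{verbatim}
import numpy as np

def to_pseudo_hyperboloid(x, t_dim, beta=1.0):
    # Rescale the time part of an arbitrary vector so that the result
    # satisfies ||space||^2 - ||time||^2 = -beta.
    t, s = x[:t_dim], x[t_dim:]         # time-first layout
    u = np.sqrt(beta) * t / np.linalg.norm(t)            # psi
    t_new = np.sqrt(beta + s @ s) / np.sqrt(beta) * u    # psi^{-1}
    return np.concatenate([t_new, s])

x = np.array([0.2, -1.3, 0.7, 0.1])     # unconstrained parameters
y = to_pseudo_hyperboloid(x, t_dim=2)
print(np.isclose(y[2:] @ y[2:] - y[:2] @ y[:2], -1.0))  # on the manifold
\end{verbatim}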
\section{Theoretical Analysis}
\subsection{Complexity}
The parameters include the relation-specific rotation, reflection and translation, as well as the entity embeddings and entity-specific biases.
The total number of parameters is then $\mathcal{O}(|\mathcal{E}| d)$. The additional
cost is proportional to the number of relations, which is usually much smaller than the number of
entities.
\subsection{Subsumption}
HyboNet models relations as Lorentz transformations. PseudoE subsumes HyboNet since the Lorentz transformation is a special case of the pseudo-orthogonal transformation with $q=1$.
\begin{theorem}
Lorentz transformation is equivalent to the combination of Möbius multiplication and Möbius addition.
\end{theorem}
\begin{theorem}
Lorentz transformation can be decomposed into rotation/reflection and translation.
\end{theorem}
Theorems 4 and 5 imply that MuRP and RotH/RefH are special cases of HyboNet. Hence, PseudoE also subsumes MuRP and RotH/RefH.
\subsection{Expressibility}
A key theoretical property of link prediction models is their ability to be fully expressive, which we define formally as follows.
\begin{definition}[Full Expressivity]
A KGE model M is fully expressive if, for any given disjoint sets of true and false
facts, there exists a parameter configuration for M such that M accurately classifies all the given
facts.
\end{definition}
\begin{proposition}[Expressivity]
For any truth assignment over entities $\mathcal{E}$ and relations $\mathcal{R}$ containing $|\mathcal{W}|$ true facts, there exists a model with embedding vectors of size $\min (|\mathcal{E}||\mathcal{R}|+1,|\mathcal{W}|+1)$ that represents the assignment.
\end{proposition}
\subsection{Inference patterns.} PseudoE can naturally infer relation patterns including symmetry, anti-symmetry, inversion and composition.
\begin{proposition}
PseudoE can model symmetric relations.
\end{proposition}
\begin{proposition}
PseudoE can model anti-symmetric relations.
\end{proposition}
\begin{proposition}
PseudoE can model inverse relations.
\end{proposition}
\begin{proposition}
PseudoE can model relation composition.
\end{proposition}
\section{Experiments}
We evaluate the performance of PseudoE on the KG completion task. We conjecture that (1) PseudoE outperforms its hyperbolic and Euclidean counterparts in both low and high dimensions (Sec.\ref{sec:low_dim}); (2) the signature works as a knob for controlling the embedding geometry, and hence influences the performance of PseudoE; and (3) the combination of rotation and reflection operators outperforms a single rotation or reflection operator.
\subsection{Experiment setup.}
\noindent
\textbf{Dataset.} We use three standard benchmarks: WN18RR, FB15k-237 and YAGO3-10.
WN18RR is a subset of WordNet containing 11 lexical relationships.
FB15k-237 is a subset of Freebase that contains general world knowledge. The datasets
contain both hierarchical (e.g., part\_of) and non-hierarchical (e.g., similar\_to) relations, some of which induce logical patterns, e.g., the symmetric relations hasNeighbor and isMarriedTo.
For each KG, we add inverse relations to the datasets, which is the standard data augmentation protocol \cite{DBLP:conf/icml/LacroixUO18}. We used the same train/valid/test sets as in \citep{DBLP:conf/icml/LacroixUO18}. Additionally, we estimate the global graph curvature and $\delta$-hyperbolicity. The datasets' statistics are summarized in Table~\ref{tab:dataset}. As we can see, WN18RR is more hierarchical than FB15k-237 since it has a smaller global graph curvature.
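For reference, the inverse-relation augmentation can be sketched as follows (the indexing convention, with the inverse of relation $r$ stored as $r+|\mathcal{R}|$, is a common choice and assumed here):
\begin{verbatim}
# For every triple (h, r, t), add the inverse triple (t, r_inv, h).
def add_inverse_relations(triples, num_relations):
    augmented = list(triples)
    for h, r, t in triples:
        augmented.append((t, r + num_relations, h))
    return augmented

triples = [(0, 0, 1), (1, 1, 2)]
print(add_inverse_relations(triples, num_relations=2))
# [(0, 0, 1), (1, 1, 2), (1, 2, 0), (2, 3, 1)]
\end{verbatim}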
\begin{table}[]
\centering
\begin{tabular}{lcccc}
\hline Dataset & \#entities & \#relations & \#triples & $\xi_{G}$ \\
\hline WN18RR & $41 \mathrm{k}$ & 11 & $93 \mathrm{k}$ & $-2.54$ \\
\hline FB15k-237 & $15 \mathrm{k}$ & 237 & $310 \mathrm{k}$ & $-0.65$ \\
\hline YAGO3-10 & $123 \mathrm{k}$ & 37 & $1 \mathrm{M}$ & $-0.54$ \\
\hline
\end{tabular}
\caption{Datasets statistics. The lower the metric $\xi_{G}$ is, the more tree-like the knowledge graph is.}
\label{tab:dataset}
\end{table}
\noindent
\textbf{Evaluation protocol.} We report two popular ranking-based metrics: mean reciprocal rank (MRR), the average of the inverse of the true entity ranking in the prediction; and H@K, the percentage of the correct entities appearing within the top K positions of the predicted ranking.
As a standard, we report the two metrics in the filtered setting: all true triples in the KG are filtered out during evaluation.
\begin{table*}[]
\centering
\label{tab:low_dim}
\caption{Link prediction results (\%) on WN18RR and FB15k-237 for low-dimensional embeddings ($d=32$) in the filtered setting. The first group of models are Euclidean models, the second groups are non-Euclidean models. RotatE results are reported without self-adversarial negative sampling for fair comparison. Top three results are highlighted.}
\begin{tabular}{lrrrrrrrrrrrrrrr}
\hline & \multicolumn{4}{c}{WN18RR} & \multicolumn{4}{c}{FB15k-237} & \multicolumn{4}{c}{ YAGO3-10} \\
Model & MRR & H@ 1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 \\
\hline
TransE & 36.6 & 27.4 & 43.3 & 51.5 & 29.5 & 21.0 & 32.2 & 46.6 & - & - & - & - \\
RotatE & 38.7 & 33.0 & 41.7 & 49.1 & 29.0 & 20.8 & 31.6 & 45.8 & - & - & - & - \\
ComplEx & 42.1 & 39.1 & 43.4 & 47.6 & 28.7 & 20.3 & 31.6 & 45.6 & 33.6 & 25.9 & 36.7 & 48.4 \\
QuatE & 42.1 & 39.6 & 43.0 & 46.7 & 29.3 & 21.2 & 32.0 & 46.0 & - & - & - & - \\
5*E & 44.9 & 41.8 & 46.2 & 51.0 & 32.3 & 24.0 & 35.5 & 50.1 & - & - & - & - \\
\hline
MuRP & 46.5 & 42.0 & 48.4 & 54.4 & 32.3 & 23.5 & 35.3 & 50.1 & 23.0 & 15.0 & 24.7 & 39.2 \\
RotH & \underline{47.2} & \underline{42.8} & \underline{49.0} & \underline{55.3} & 31.4 & 22.3 & 34.6 & 49.7 & 39.3 & 30.7 & 43.5 & 55.9 \\
RefH & 44.7 & 40.8 & 46.4 & 51.8 & 31.2 & 22.4 & 34.2 & 48.9 & 38.1 & 30.2 & 41.5 & 53.0\\
AttH & 46.6 & 41.9 & 48.4 & 55.1 & \underline{32.4} & \underline{23.6} & \underline{35.4} & \underline{50.1} & \underline{39.7} & \underline{31.0} & \underline{43.7} & \underline{56.6} \\
\hline
PseudoE (q=2) & 48.0 & 43.4 & 50.0 & 55.7 & 33.1 & 24.1 & 35.5 & 50.3 & 39.5 & 31.2 & 43.9 & 56.8 \\
PseudoE (q=4) & \textbf{48.8} & \textbf{44.0} & \textbf{50.3} & \textbf{56.1} & 33.4 & 24.3 & 36.0 & 51.0 & 40.0 & 31.5 & 44.3 & 57.5 \\
PseudoE (q=6) & 48.3 & 42.5 & 49.1 & 55.9 & \textbf{33.8} & \textbf{24.9} & \textbf{36.5} & \textbf{51.7} & \textbf{40.5} & \textbf{31.8} & \textbf{44.7} & \textbf{58.0} \\
PseudoE (q=8) & 47.5 & 42.3 & 49.0 & 55.7 & 32.6 & 24.6 & 36.2 & 51.0 & 39.4 & 31.3 & 43.4 & 57.8 \\
\hline
\end{tabular}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{imgs/total_dim.pdf}
\caption{The MRR of WN18RR for dimension $d \in \{10,16,20,32,50,200,500\}$. Average and standard
deviation computed over 10 runs for PseudoE.}
\label{fig:total_dim}
\end{figure}
\noindent
\textbf{Baselines.} For comparisons, we consider both Euclidean SotA models and hyperbolic SotA models.
\subsection{Results.}\label{sec:low_dim}
Following previous non-Euclidean approaches \citep{DBLP:conf/acl/ChamiWJSRR20},
we first evaluate PseudoE in the low-dimensional setting ($d = 32$), which is roughly one order of
magnitude smaller than SotA Euclidean methods. Table \ref{tab:low_dim} compares the performance of PseudoE to that of the baselines. Overall, it is clear that the hyperbolic methods outperform their Euclidean counterparts; among them, RotH achieves the best results on WN18RR while AttH performs best on FB15k-237, showcasing the benefits of hyperbolic geometry for embedding hierarchical structures. However, our proposed method, PseudoE with varying time dimensions ($q=2,4,6,8$), further improves over all hyperbolic methods. In particular, PseudoE with only $2$ time dimensions consistently outperforms all baselines, suggesting that the heterogeneous structure imposed by the pseudo-Riemannian geometry leads to better representations. The best performance on WN18RR is achieved by PseudoE ($q=4$), while the best performances on FB15k-237 and YAGO3-10 are achieved by PseudoE ($q=6$). We believe this is due to the fact that WN18RR is more hierarchical than FB15k-237 and YAGO3-10, which validates our conjecture that the number of time dimensions controls the hierarchy of the embedding space. Fig.~\ref{fig:time_dim} compares the performance of PseudoE with varying time dimensions. It clearly shows that the performance increases initially, but decreases after the time dimension exceeds a suitable value.
\begin{table*}[]
\centering
\caption{Link prediction results (\%) on WN18RR and FB15k-237 for high-dimensional embeddings (best for $d \in \{200,400,500\}$ ) in the filtered setting. Top three results are highlighted.}
\begin{tabular}{lrrrrrrrrrrrrrr}
\hline & \multicolumn{4}{c}{WN18RR} & \multicolumn{4}{c}{FB15k-237} & \multicolumn{4}{c}{ YAGO3-10} \\
Model & MRR & H@ 1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 \\
\hline
TransE & 48.1 & 43.3 & 48.9 & 57.0 & 34.2 & 24.0 & 37.8 & 52.7 & - & - & - & - \\
DistMult & 43.0 & 39.0 & 44.0 & 49.0 & 24.1 & 15.5 & 26.3 & 41.9 & 34.0 & 24.0 & 38.0 & 54.0 \\
RotatE & 47.6 & 42.8 & 49.2 & 57.1 & 33.8 & 24.1 & 37.5 & 53.3 & 49.5 & 40.2 & 55.0 & 67.0 \\
ComplEx & 48.0 & 43.5 & 49.5 & 57.2 & 35.7 & 26.4 & 39.2 & 54.7 & 56.9 & 49.8 & 60.9 & 70.1\\
QuatE & 48.8 & 43.8 & 50.8 & 58.2 & 34.8 & 24.8 & 38.2 & 55.0 & - & - & - & - \\
5*E & \underline{50.0} & \underline{45.0} & 51.0 & \underline{59.0} & \underline{37.0} & \underline{28.0} & \underline{40.0} & \underline{56.0} & - & - & - & - \\
\hline
MuRP & 48.1 & 44.0 & 49.5 & 56.6 & 33.5 & 24.3 & 36.7 & 51.8 & 35.4 & 24.9 & 40.0 & 56.7 \\
RotH & 49.6 & 44.9 & \underline{51.4} & 58.6 & 34.4 & 24.6 & 38.0 & 53.5 & 57.0 & 49.5 & 61.2 & 70.6 \\
RefH & 46.1 & 40.4 & 48.5 & 56.8 & 34.6 & 25.2 & 38.3 & 53.6 & 57.6 & 50.2 & 61.9 & 71.1 \\
AttH & 48.6 & 44.3 & 49.9 & 57.3 & 34.8 & 25.2 & 38.4 & 54.0 & 56.8 & 49.3 & 61.2 & 70.2 \\
\hline
PseudoE (q=2) & 49.7 & 44.8 & 51.4 & 58.8 & 36.4 & 27.4 & 39.5 & 55.4 \\
PseudoE (q=4) & 50.3 & 45.1 & 51.7 & 58.9 & 37.0 & 27.9 & 40.1 & 56.1\\
PseudoE (q=6) & 50.1 & 45.0 & 51.5 & 59.0 & 36.5 & 27.5 & 39.5 & 55.3\\
PseudoE (q=8) & 49.9 & 44.9 & 51.2 & 58.5 & 34.8 & 25.3 & 38.4 & 53.7 \\
\hline
\end{tabular}
\end{table*}
\begin{table*}[]
\centering
\begin{tabular}{lcccccc}
\hline Relation & $\mathrm{Khs}_{G}$ & $\xi_{G}$ & RotE & RotH & PseudoE (Rotation, q=4) & $\Delta$(P-H) \\
\hline \text { member meronym } & 1.00 & -2.90 & 32.0 & 39.9 & 41.5 & 1.5\% \\
\text { hypernym } & 1.00 & -2.46 & 23.7 & 27.6 & 27.8 & 0.7 \% \\
\text { has part } & 1.00 & -1.43 & 29.1 & 34.6 & 35.3 & 2.0 \% \\
\text { instance hypernym } & 1.00 & -0.82 & 48.8 & 52.0 & 53.1 & 6.1 \% \\
\textbf { member of domain region } & 1.00 & -0.78 & 38.5 & 36.5 & 45.0 & 23.3 \% \\
\textbf { member of domain usage } & 1.00 & -0.74 & 45.8 & 43.8 & 51.2 & 16.8 \% \\
\text { synset domain topic of } & 0.99 & -0.69 & 42.5 & 44.7 & 47.5 & 5.17 \% \\
\text { also see } & 0.36 & -2.09 & 63.4 & 70.5 & 83.7 & 18.7 \% \\
\text { derivationally related form } & 0.07 & -3.84 & 96.0 & 96.8 & 98.6 & 1.80 \% \\
\text { similar to } & 0.07 & -1.00 & 100.0 & 100.0 & 100.0 & 0.00 \% \\
\text { verb group } & 0.07 & -0.50 & 97.4 & 97.4 & 97.4 & 0.00 \% \\
\hline
\end{tabular}
\caption{Comparison of H@10 for WN18RR relations. Higher $\mathrm{Khs}_{G}$ and lower $\xi_{G}$ mean more hierarchical relations.}
\label{tab:my_label}
\end{table*}
\subsection{Ablation Studies}
\section{Conclusion}
This paper proposes PseudoE, a KG embedding method in a pseudo-Riemannian manifold that generalizes the hyperbolic and spherical manifolds.
\bibliographystyle{named}
\section{Introduction}
Knowledge graph (KG) embeddings, which map entities and relations into a low-dimensional space, have emerged as an effective way for a wide range of KG-based applications \citep{DBLP:journals/bmcbi/CelebiUYGDD19, DBLP:conf/www/0003W0HC19,DBLP:conf/wsdm/HuangZLL19}. In the last decade, various KG embedding methods have been proposed. Prominent examples include the \emph{additive} (or \emph{translational}) family \citep{DBLP:conf/nips/BordesUGWY13,DBLP:conf/aaai/WangZFC14,DBLP:conf/aaai/LinLSLZ15} and the \emph{multiplicative} (or \emph{bilinear}) family \citep{DBLP:conf/icml/NickelTK11,DBLP:journals/corr/YangYHGD14a,DBLP:conf/icml/LiuWY17}.
Most of these approaches, however, are built on the Euclidean geometry that suffers from inherent limitations when dealing with hierarchical KGs such as WordNet \citep{DBLP:journals/cacm/Miller95}.
Recent studies \citep{DBLP:conf/nips/ChamiYRL19, DBLP:conf/nips/NickelK17} show that hyperbolic geometries (e.g., the Poincaré ball or Lorentz model) are more suitable for embedding hierarchical data because of their exponentially growing volumes.
Such a \emph{tree-like} geometric space has been exploited in developing various hyperbolic KG embedding models such as MuRP \citep{DBLP:conf/nips/BalazevicAH19}, RotH \citep{DBLP:conf/acl/ChamiWJSRR20} and HyboNet \citep{DBLP:journals/corr/abs-2105-14686}, boosting the performance of link prediction on KGs with rich hierarchical structures while remarkably reducing the dimensionality.
Although hierarchies are the most dominant structures, real-world KGs usually exhibit heterogeneous topological structures, e.g., a KG consists of multiple hierarchical and non-hierarchical relations.
Typically, different hierarchical relations (e.g., \textit{subClassOf} and \textit{partOf}) form distinct hierarchies, while various non-hierarchical relations (e.g., \textit{similarTo} and \textit{sisterTerm}) capture the corresponding interactions between the entities at the same hierarchy level \citep{bai2021modeling}.
Fig.\ref{fig:example}(a) shows an example of a KG consisting of heterogeneous graph structures.
However, current hyperbolic KG embedding methods such as MuRP \citep{DBLP:conf/nips/BalazevicAH19} and HyboNet \citep{DBLP:journals/corr/abs-2105-14686} can only model a globally homogeneous hierarchy.
RotH \citep{DBLP:conf/acl/ChamiWJSRR20} implicitly considers the topological "heterogeneity" of KGs and alleviates this issue by learning relation-specific curvatures that distinguish the topological characteristics of different relations.
However, this does not entirely solve the problem, because hyperbolic geometry inherently \emph{mismatches} non-hierarchical data (e.g., data with cyclic structure) \citep{DBLP:conf/iclr/GuSGR19}.
To deal with data with heterogeneous topologies, a recent work \cite{DBLP:conf/www/WangWSWNAXYC21} learns KG embeddings in a product manifold and shows some improvements on KG completion. However, such a product manifold is still a homogeneous space in which all data points have the same degree of heterogeneity (i.e., hierarchy and cyclicity), whereas KGs require relation-specific geometric mappings, e.g., the relation \textit{partOf} should be more "hierarchical" than the relation \textit{similarTo}.
Different from previous works, we consider an ultrahyperbolic manifold that seamlessly interleaves the hyperbolic and spherical manifolds. Fig.\ref{fig:example} (b) shows an example of ultrahyperbolic manifold that contains multiple distinct geometries. Ultrahyperbolic manifold has demonstrated impressive capability on embedding graphs with heterogeneous topologies such as hierarchical graphs with cycles \citep{law2020ultrahyperbolic,sim2021directed,DBLP:journals/corr/abs-2106-03134}.
However, such powerful representation space has not yet been exploited for embedding KGs with heterogeneous topologies.
\begin{figure}
\centering
\subfloat[\centering ]{{\includegraphics[width=.64\columnwidth]{imgs/kg_example.pdf}}}
\subfloat[\centering ]{{\includegraphics[width=.36\columnwidth]{imgs/ultra.png} }}
\caption{(a) A KG contains multiple distinct hierarchies (e.g., \textit{subClassOf} and \textit{partOf}) and non-hierarchical relations (e.g., \textit{similarTo} and \textit{sisterTerm}).
(b) An ultrahyperbolic manifold generalizing hyperbolic and spherical manifolds (figure from \citep{law2020ultrahyperbolic}).}
\label{fig:example}
\vspace{-0.2cm}
\end{figure}
In this paper, we propose ultrahyperbolic KG embeddings (UltraE), the first KG embedding method that simultaneously embeds multiple distinct hierarchical relations and non-hierarchical relations in a single but heterogeneous geometric space.
The intuition behind the idea is that there exist multiple kinds of local geometries that could describe their corresponding relations. For example, as shown in Fig.~\ref{fig:distance}(a), two points in the same \emph{circular conic section} are described by spherical geometry, while two points in the same half of a \emph{hyperbolic conic section} can be described by hyperbolic geometry.
In particular, we model entities as points in the ultrahyperbolic manifold and model relations as pseudo-orthogonal transformations, i.e., isometries in the ultrahyperbolic manifold.
We exploit the theorem of hyperbolic Cosine-Sine decomposition \citep{DBLP:journals/siammax/StewartD05} to decompose the pseudo-orthogonal matrices into various geometric operations including circular rotations/reflections and hyperbolic rotations. Circular rotations/reflections allow for modeling relational patterns (e.g., composition), while hyperbolic rotations allow for modeling hierarchical graph structures. As Fig.~\ref{fig:distance}(b) shows, a combination of circular rotations/reflections and hyperbolic rotations induces various geometries including circular, elliptic, parabolic and hyperbolic geometries.
These geometric operations are parameterized by Givens rotations/reflections \citep{DBLP:conf/acl/ChamiWJSRR20} and trigonometric functions, such that the number of relation parameters grows linearly w.r.t.\ the embedding dimensionality.
The entity embeddings are parametrized in Euclidean space and projected to the ultrahyperbolic manifold with differentiable and bijective mappings, allowing for stable optimization via standard Euclidean based gradient descent algorithms.
\noindent
\textbf{Contributions.} Our key contributions are summarized as follows:
\begin{itemize}
\item We propose a novel KG embedding method, dubbed UltraE, that models entities in an ultrahyperbolic manifold seamlessly covering various geometries including hyperbolic, spherical and their combinations. UltraE enables modeling multiple hierarchical and non-hierarchical structures in a single but heterogeneous space.
\item We propose to decompose the relational transformation into various operators and parameterize them via Givens rotations/reflections such that the number of parameters is linear to the dimensionality. The decomposed operators allow for modeling multiple relational patterns including inversion, composition, symmetry, and anti-symmetry.
\item We propose a novel Manhattan-like distance in the ultrahyperbolic manifold, to retain the identity of indiscernibles while without suffering from the broken geodesic issues.
\item We show the theoretical connection of UltraE with some existing approaches. Particularly, by exploiting the theorem of Lorentz transformation, we identify the connections between multiple hyperbolic KG embedding methods, including MuRP, RotH/RefH and HyboNet.
\item We conduct extensive experiments on three standard benchmarks, and the experimental results show that UltraE outperforms previous Euclidean, hyperbolic and mixed-curvature (product manifold) baselines on KG completion tasks.
\end{itemize}
\section{Preliminaries}
In this paper, points on a manifold are denoted by boldface lowercase letters $\mathbf{x,y}$. Matrices are denoted by boldface capital letters $\mathbf{U},\mathbf{V},\mathbf{I}$.
The embedding spaces are denoted by blackboard bold capital letters like $\mathbb{R},\mathbb{H},\mathbb{S},\mathbb{U}$ denoting Euclidean, hyperbolic, spherical and ultrahyperbolic manifolds, respectively.
\subsection{Pseudo-Riemannian Geometry} A pseudo-Riemannian manifold $(\mathcal{M},g)$ is a differentiable manifold $\mathcal{M}$ equipped with a metric tensor $g: T_{\mathbf{x}}\mathcal{M} \times T_{\mathbf{x}}\mathcal{M} \rightarrow \mathbb{R}$ defined in the entire tangent space $T_{\mathbf{x}}\mathcal{M}$, where $g$ is non-degenerate (i.e., $g(\mathbf{x}, \mathbf{y})=0$ for all $\mathbf{y} \in T_{\mathbf{x}}\mathcal{M} \backslash\{\mathbf{0}\}$ implies that $\mathbf{x}=\mathbf{0}$) and indefinite (i.e., $g$ could be positive, negative or zero). Such a metric is called a pseudo-Riemannian metric, defined as
\begin{small}
\begin{align}\label{eq:metric}
\forall \mathbf{x},\mathbf{y} \in \mathbb{R}^{p,q}, \langle \mathbf{x}, \mathbf{y}\rangle_q = \sum_{i=1}^{p}\mathbf{x}_i \mathbf{y}_i-\sum_{j=p+1}^{p+q}\mathbf{x}_j\mathbf{y}_j,
\end{align}
\end{small}
where $\mathbb{R}^{p,q}$ is a pseudo-Euclidean space (or space-time) with the dimensionality $d=p+q$, where $p\geq 0, q\geq 0$. The space $\mathbb{R}^{p,q}$ has a rich background in physics \citep{o1983semi}. A point in $\mathbb{R}^{p,q}$ is interpreted as an \emph{event}, where the first $p$ dimensions and last $q$ dimensions are \emph{space-like} and \emph{time-like} dimensions, respectively.
The pair $(p,q)$ is called the signature of space-time.
Two important special cases are the Riemannian ($q=0$) and Lorentz ($q=1$) geometries. The more general cases of pseudo-Riemannian geometry ($q \geq 2, p \geq 1$), however, do not possess positive definiteness, i.e., the scalar product induced by the metric could be positive (space-like), negative (time-like) or zero (light-like).
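As a small numerical illustration of this indefiniteness (a NumPy sketch of ours), with signature $(p,q)=(1,1)$ the self-product can take any sign:
\begin{verbatim}
import numpy as np

def pseudo_inner(x, y, q):
    # <x, y>_q: Euclidean on the first p coordinates,
    # negative on the last q coordinates.
    p = len(x) - q
    return x[:p] @ y[:p] - x[p:] @ y[p:]

print(pseudo_inner(np.array([2.0, 1.0]), np.array([2.0, 1.0]), q=1))
# 3.0: space-like
print(pseudo_inner(np.array([1.0, 2.0]), np.array([1.0, 2.0]), q=1))
# -3.0: time-like
print(pseudo_inner(np.array([1.0, 1.0]), np.array([1.0, 1.0]), q=1))
# 0.0: light-like
\end{verbatim}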
\begin{figure}
\centering
\subfloat[\centering ]{{\includegraphics[width=.42\columnwidth]{imgs/proj.pdf} }}%
\qquad
\subfloat[\centering ]{{\includegraphics[width=0.48\columnwidth]{imgs/geometrices.pdf}}}%
\qquad
\caption{(a) An illustration of a spherical (\textit{green}) geometry in the circular conic section and a hyperbolic (\textit{blue}) geometry in the hyperbolic conic section. The Manhattan-like distance of two points is defined by summing up the \emph{energy} moving from one point to another point with a circular rotation and a hyperbolic rotation. $\rho_{\mathbf{x}}(\mathbf{y})$ is a projection of $\mathbf{y}$ on an circular conic section crossing $\mathbf{x}$, such that $\rho_{\mathbf{x}}(\mathbf{y})$ and $\mathbf{x}$ are connected by a circular rotation while $\rho_{\mathbf{x}}(\mathbf{y})$ and $\mathbf{y}$ are connected by a hyperbolic rotation. (b) An illustration of the various geometries covered by the circular rotation and hyperbolic rotation, including circular, elliptic, parabolic and hyperbolic geometries.}
\label{fig:distance}
\end{figure}
\subsection{Ultrahyperbolic Manifold}
By exploiting the scalar product induced by Eq.~(\ref{eq:metric}), an ultrahyperbolic manifold (or pseudo-hyperboloid) is defined as a submanifold in the ambient space $\mathbb{R}^{p,q}$, given by
\begin{equation}\label{eq:pseudo-hyperboloid}
\mathbb{U}_{\alpha}^{p, q}=\left\{\mathbf{x}=\left(x_{1}, x_{2}, \cdots, x_{p+q}\right)^{\top} \in \mathbb{R}^{p,q}:\|\mathbf{x}\|_{q}^{2}=-\alpha^{2}\right\},
\end{equation}
where $\alpha$ is a non-negative real number denoting the radius of curvature. $\|\mathbf{x}\|_{q}^{2}=\langle \mathbf{x}, \mathbf{x}\rangle_{q}$ is a norm of the induced scalar product.
The ultrahyperbolic manifold $\mathbb{U}_{\alpha}^{p, q}$ can be seen as a generalization of hyperbolic and spherical manifolds, i.e., hyperbolic and spherical manifolds can be defined as special cases of ultrahyperbolic manifold by setting all time dimensions except one to be zero and setting all space dimensions to be zero, respectively, i.e., $\mathbb{H}_{\alpha} = \mathbb{U}_{\alpha}^{p,1}, \mathbb{S}_{\alpha} = \mathbb{U}_{\alpha}^{0,q}$. It is commonly known that hyperbolic and spherical manifolds are optimal geometric spaces for hierarchical and cyclic structures, respectively. Hence, ultrahyperbolic manifold is able to embed both hierarchical and cyclic structures.
\section{Ultrahyperbolic Knowledge Graph Embeddings}
Let $\mathcal{E}$ and $\mathcal{R}$ denote the set of entities and relations.
A KG $\mathcal{K}$ consists of a set of triples $(h, r, t) \in \mathcal{K}$ where $h, t \in \mathcal{E}, r \in \mathcal{R}$ denote the head, the tail and their relation, respectively.
The objective is to associate each entity with an embedding $\mathbf{e} \in \mathbb{U}^{p,q}$ in the ultrahyperbolic manifold, as well as a relation-specific transformation $f_r:\mathbb{U}^{p,q} \rightarrow \mathbb{U}^{p,q}$ that transforms one entity to another one in the ultrahyperbolic manifold.
\subsection{Relation as Pseudo-Orthogonal Matrix}
We propose to model relations as pseudo-orthogonal (or $J$-orthogonal) transformations \citep{DBLP:journals/siamrev/Higham03}, a generalization of \emph{orthogonal transformation} in pseudo-Riemannian geometry.
Formally, a real, square matrix $\mathbf{Q} \in \mathbb{R}^{d \times d}$ is called $J$-orthogonal if
\begin{equation}
\mathbf{Q}^{T} \mathbf{J} \mathbf{Q}=\mathbf{J},
\end{equation}
where
$\mathbf{J}=\left[\begin{array}{cc}
\mathbf{I}_{p} & \mathbf{0} \\
\mathbf{0} & -\mathbf{I}_{q}
\end{array}\right], p+q=d
$ and $\mathbf{I}_p$, $\mathbf{I}_q$ are identity matrices.
$\mathbf{J}$ is called a signature matrix of signature $(p,q)$. Such $J$-orthogonal matrices form a multiplicative group called pseudo-orthogonal group $O(p,q)$. Conceptually, a matrix $\mathbf{Q} \in O(p,q)$ is an \emph{isometry} (distance-preserving transformation) in the ultrahyperbolic manifold that preserves the bilinear form (i.e., $\forall \mathbf{x} \in \mathbb{U}^{p,q}, \mathbf{Q}\mathbf{x} \in \mathbb{U}^{p,q} $). Therefore, the matrix acts as a linear transformation in the ultrahyperbolic manifold.
There are two challenges in modeling relations as $J$-orthogonal transformations: 1) a $J$-orthogonal matrix requires $\mathcal{O}(d^2)$ parameters; 2) directly optimizing the $J$-orthogonal matrices results in constrained optimization, which is practically challenging within the standard gradient-based framework.
\subsubsection{Hyperbolic Cosine-Sine Decomposition}
To solve these issues, we seek to decompose the $J$-orthogonal matrix by exploiting the Hyperbolic Cosine-Sine (CS) Decomposition.
\begin{proposition}[Hyperbolic CS Decomposition \citep{DBLP:journals/siammax/StewartD05}]\label{prop:cs_decomposition}
Let $\mathbf{Q}$ be $J$-orthogonal and assume that $q \leq p.$ Then there are orthogonal matrices $\mathbf{U}_{1}, \mathbf{V}_{1} \in \mathbb{R}^{p \times p}$ and $\mathbf{U}_{2}, \mathbf{V}_{2} \in \mathbb{R}^{q \times q}$ s.t.
\begin{equation}\label{eq:cs_decomposition}
\mathbf{Q}=\left[\begin{array}{cc}
\mathbf{U}_{1} & \mathbf{0} \\
\mathbf{0} & \mathbf{U}_{2}
\end{array}\right]\left[\begin{array}{ccc}
\mathbf{C} & \mathbf{0} & \mathbf{S} \\
\mathbf{0} & \mathbf{I}_{p-q} & \mathbf{0} \\
\mathbf{S} & \mathbf{0} & \mathbf{C}
\end{array}\right]\left[\begin{array}{cc}
\mathbf{V}_{1}^{T} & \mathbf{0} \\
\mathbf{0} & \mathbf{V}_{2}^{T}
\end{array}\right],
\end{equation}
where $\mathbf{C}=\operatorname{diag}\left(c_1, \ldots, c_q\right), \mathbf{S}=\operatorname{diag}\left(s_1, \ldots, s_q\right)$ and $\mathbf{C}^{2}-\mathbf{S}^{2}=\mathbf{I}_q$. For cases where $q > p$, the decomposition can be defined analogously. For simplicity, we only consider $q \leq p$.
\end{proposition}
Geometrically, the $J$-orthogonal matrix is decomposed into various geometric operators. The orthogonal matrices $\mathbf{U}_{1}, \mathbf{V}_{1}$ represent circular rotation or reflection (depending on their determinant)\footnote{Depending on the determinant, an orthogonal matrix $\mathbf{U}$ denotes a rotation iff $\operatorname{det}(\mathbf{U})=1$ or a reflection iff $\operatorname{det}(\mathbf{U})=-1$.} in the space dimensions, while $\mathbf{U}_{2}, \mathbf{V}_{2}$ represent circular rotation or reflection in the time dimensions. The intermediate matrix, which is uniquely determined by $\mathbf{C},\mathbf{S}$, denotes a hyperbolic rotation (analogous to a circular rotation) across the space and time dimensions. Fig.~\ref{fig:rotation} shows a $2$-dimensional example of circular rotation and hyperbolic rotation.
It is worth noting that both circular rotation/reflection and hyperbolic rotation are important operations for KG embeddings.
On the one hand, circular rotations/reflections are able to model complex relational patterns including inversion, composition, symmetry, and anti-symmetry. Besides, these relational patterns usually form
non-hierarchical structures (e.g., cycles). Hence, circular rotations/reflections inherently encode non-hierarchical graph structures.
Hyperbolic rotation, on the other hand, is able to model hierarchies, i.e., by connecting entities at different levels of hierarchies.
Therefore, the decomposition in proposition \ref{prop:cs_decomposition} shows that
$J$-orthogonal transformation is powerful for representing both relational patterns and graph structures.
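As a further numerical illustration (ours; the factors below are arbitrary choices for $p=2$, $q=1$), assembling the three factors of Eq.~(\ref{eq:cs_decomposition}) indeed yields a $J$-orthogonal matrix:
\begin{verbatim}
import numpy as np

p, q = 2, 1
J = np.diag([1.0] * p + [-1.0] * q)

theta, mu = 0.3, 0.9
U1 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])   # orthogonal, p x p
U2 = np.array([[1.0]])                             # orthogonal, q x q
V1, V2 = np.eye(p), np.eye(q)                      # identities for brevity

# middle factor [[C, 0, S], [0, I_{p-q}, 0], [S, 0, C]] with q = 1
M = np.array([[np.cosh(mu), 0.0, np.sinh(mu)],
              [0.0,         1.0, 0.0],
              [np.sinh(mu), 0.0, np.cosh(mu)]])

def block_diag(A, B):
    out = np.zeros((A.shape[0] + B.shape[0], A.shape[1] + B.shape[1]))
    out[:A.shape[0], :A.shape[1]] = A
    out[A.shape[0]:, A.shape[1]:] = B
    return out

Q = block_diag(U1, U2) @ M @ block_diag(V1, V2).T
assert np.allclose(Q.T @ J @ Q, J)                 # Q is J-orthogonal
\end{verbatim}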
\begin{figure}[]
\centering
\includegraphics[width=0.8\columnwidth]{imgs/rotation.png}
\caption{A two-dimensional example of circular rotation (\textit{left}) and hyperbolic rotation (\textit{right}), where $\theta$ is the angle of rotations.}
\label{fig:rotation}
\end{figure}
\subsection{Relation Parameterization}
In light of this, we now parameterize circular rotation/reflection and hyperbolic rotation, respectively.
\subsubsection{Circular Rotation/Reflection.} Parameterizing circular rotation or reflection via orthogonal matrices is non-trivial; there are \emph{trivialization} approaches such as the Cayley transform \citep{shepard2015representation}, but such parameterizations require $\mathcal{O}(d^2)$ parameters. To reduce the complexity, we consider Givens transformations represented by $2 \times 2$ matrices. Supposing the dimensions $p,q$ are even, circular rotation and reflection can be represented by block-diagonal matrices of the form
\begin{equation}\label{eq:rot_ref}
\begin{array}{c}
\operatorname{\mathbf{Rot}}\left(\Theta_{r}\right)=\operatorname{diag}\left(\mathbf{G}^{+}\left(\theta_{r, 1}\right), \ldots, \mathbf{G}^{+}\left(\theta_{r, \frac{p+q}{2}}\right)\right) \\
\operatorname{\mathbf{Ref}}\left(\Phi_{r}\right)=\operatorname{diag}\left(\mathbf{G}^{-}\left(\phi_{r, 1}\right), \ldots, \mathbf{G}^{-}\left(\phi_{r, \frac{p+q}{2}}\right)\right) \\
\text {where} \quad \mathbf{G}^{\pm}(\theta):=\left[\begin{array}{cc}
\cos (\theta) & \mp \sin (\theta) \\
\sin (\theta) & \pm \cos (\theta)
\end{array}\right]
\end{array},
\end{equation}
where $\Theta_{r}:=\left(\theta_{r, i}\right)_{i \in\left\{1, \ldots \frac{p+q}{2}\right\}}$ and $\Phi_{r}:=\left(\phi_{r, i}\right)_{i \in\left\{1, \ldots \frac{p+q}{2}\right\}}$ are relation-specific parameters.
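A direct implementation of this parameterization might look as follows (a sketch; the function and variable names are ours). Note that each reflection block squares to the identity:
\begin{verbatim}
import numpy as np

def givens(theta, reflect=False):
    c, s = np.cos(theta), np.sin(theta)
    # G^-(theta) if reflect else G^+(theta), as defined above
    return np.array([[c, s], [s, -c]]) if reflect else np.array([[c, -s], [s, c]])

def block_diag_givens(angles, reflect=False):
    d = 2 * len(angles)
    out = np.zeros((d, d))
    for i, th in enumerate(angles):
        out[2*i:2*i+2, 2*i:2*i+2] = givens(th, reflect)
    return out

Theta = np.array([0.3, -1.2, 2.5])             # (p+q)/2 = 3 angles, d = 6
Rot = block_diag_givens(Theta)                 # circular rotation
Ref = block_diag_givens(Theta, reflect=True)   # circular reflection

assert np.allclose(Rot @ Rot.T, np.eye(6))     # orthogonal
assert np.allclose(Ref @ Ref, np.eye(6))       # reflections are involutions
\end{verbatim}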
Although circular rotation is theoretically able to infer symmetric patterns \citep{DBLP:conf/emnlp/WangLLS21} (i.e., by setting the rotation angle to $\theta=\pi$ or $\theta=0$), circular reflection can more effectively represent symmetric relations since its second power is the identity. AttH \citep{DBLP:conf/acl/ChamiWJSRR20} combines circular rotations and circular reflections by using an attention mechanism learned in the tangent space, which requires additional parameters.
We also combine circular rotation and circular reflection operators, but in a different way. Since the $J$-orthogonal matrix is decomposed into two rotation/reflection matrices, we set the first matrix in Eq.~(\ref{eq:cs_decomposition}) to be a circular rotation and the third to be a circular reflection matrix, given by
\begin{equation}\small\label{eq:uv}
\begin{split}
\mathbf{U_{\Theta_r}}=\left[\begin{array}{cc}
\operatorname{\mathbf{Rot}}\left(\Theta_{r_{p}}\right) & \mathbf{0} \\
\mathbf{0} & \operatorname{\mathbf{Rot}}\left(\Theta_{r_{q}}\right)
\end{array}\right],
\end{split}
\quad
\begin{split}
\mathbf{V_{\Phi_r}}=\left[\begin{array}{cc}
\operatorname{\mathbf{Ref}}\left(\Phi_{r_{p}}\right) & \mathbf{0} \\
\mathbf{0} & \operatorname{\mathbf{Ref}}\left(\Phi_{r_{q}}\right)
\end{array}\right]
\end{split}.
\end{equation}
Clearly, the parameterization of circular rotation and reflection in Eq.~(\ref{eq:rot_ref}), as well as their combination in Eq.~(\ref{eq:uv}), loses a certain degree of freedom of the $J$-orthogonal transformation. However, it 1) results in a linear ($\mathcal{O}(d)$) memory complexity of relational embeddings; 2) significantly reduces the risk of overfitting; and 3) is sufficiently expressive to model complex relational patterns as well as graph structures. This is similar to many other Euclidean models, such as SimplE \citep{DBLP:conf/nips/Kazemi018}, that sacrifice some degrees of freedom of the multiplicative model (i.e., RESCAL \citep{DBLP:conf/icml/NickelTK11}) parameterized by quadratic matrices while pursuing a linearly complex, less overfitting-prone and highly expressive relational model. Our ablation studies will show that such a combination of rotation and reflection results in performance gains.
\subsubsection{Hyperbolic Rotation.}
The hyperbolic rotation matrix is parameterized by two diagonal matrices $\mathbf{C},\mathbf{S}$ that satisfy the condition $\mathbf{C}^{2}-\mathbf{S}^{2}=\mathbf{I}_q$. The hyperbolic rotation matrix can be seen as a generalization of the $2\times2$ hyperbolic rotation given by $\left[\begin{array}{ll}\cosh (\mu) & \sinh (\mu) \\ \sinh (\mu) & \cosh (\mu)\end{array}\right]$, where $\sinh$ and $\cosh$ are the hyperbolic analogues of the $\sin$ and $\cos$ functions. Clearly, it satisfies the condition $\cosh (\mu)^2-\sinh(\mu)^2=1$. Analogously, to satisfy the condition $\mathbf{C}^{2}-\mathbf{S}^{2}=\mathbf{I}_q$, we parameterize $\mathbf{C},\mathbf{S}$ by diagonal matrices
\begin{align}
& \mathbf{C}(\mu)=\operatorname{diag}\left(\cosh (\mu_1), \ldots, \cosh (\mu_q)\right), \\
& \mathbf{S}(\mu)=\operatorname{diag}\left(\sinh (\mu_1), \ldots, \sinh (\mu_q)\right),
\end{align}
where $\mu=(\mu_1, \cdots, \mu_q)$ is the parameter of hyperbolic rotation to learn.
Therefore, the hyperbolic rotation matrix can be denoted by
\begin{equation}\scriptsize\label{eq:translation}
\mathbf{H}_{\mu_r}=\left[\begin{array}{ccc}
\operatorname{diag}\left(\cosh (\mu_{r,1}), \ldots, \cosh (\mu_{r,q})\right) & \mathbf{0} & \operatorname{diag}\left(\sinh (\mu_{r,1}), \ldots, \sinh (\mu_{r,q})\right) \\
\mathbf{0} & \mathbf{I}_{p-q} & \mathbf{0} \\
\operatorname{diag}\left(\sinh (\mu_{r,1}), \ldots, \sinh (\mu_{r,q})\right) & \mathbf{0} & \operatorname{diag}\left(\cosh (\mu_{r,1}), \ldots, \cosh (\mu_{r,q})\right)
\end{array}\right].
\end{equation}
Given the parameterization of each component, the final transformation function of relation $r$ is given by
\begin{equation}\label{eq:f_r}
f_r=\mathbf{U}_{\Theta_r} \mathbf{H}_{\mu_r} \mathbf{V}_{\Phi_r}.
\end{equation}
Notably, the combination of circular rotation/reflection and hyperbolic rotation covers various kinds of geometric transformations in the ultrahyperbolic manifold, including circular, elliptic, parabolic, and hyperbolic transformations (See Fig.~\ref{fig:distance}(a)). Hence, our relational embedding is able to work with all corresponding geometrical spaces.
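The composition can be exercised end-to-end with a short sketch (ours; $p=q=2$ and all angles are arbitrary choices). Since $f_r$ is $J$-orthogonal, it preserves $\langle \mathbf{x}, \mathbf{x}\rangle_{q}$ and hence maps the manifold to itself:
\begin{verbatim}
import numpy as np

p = q = 2                                  # d = 4, signature (2, 2)
J = np.diag([1.0] * p + [-1.0] * q)

def hyperbolic_rotation(mu):               # Eq. (eq:translation) with p = q
    C, S = np.diag(np.cosh(mu)), np.diag(np.sinh(mu))
    return np.block([[C, S], [S, C]])

rot = np.array([[np.cos(0.4), -np.sin(0.4)],
                [np.sin(0.4),  np.cos(0.4)]])
ref = np.array([[np.cos(1.1),  np.sin(1.1)],
                [np.sin(1.1), -np.cos(1.1)]])
U = np.kron(np.eye(2), rot)                # rotations in space and time blocks
V = np.kron(np.eye(2), ref)                # reflections in space and time blocks
H = hyperbolic_rotation(np.array([0.5, -0.2]))

f_r = U @ H @ V
x = np.array([1.0, 2.0, 0.5, 0.3])
assert np.isclose(x @ J @ x, (f_r @ x) @ J @ (f_r @ x))
\end{verbatim}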
\subsection{Objective and Manhattan-like Distance}
\subsubsection{Objective Function.}
Given $f_r$ and entity embeddings $e$, we design a score function for each triplet $(h,r,t)$ as
\begin{equation}
s(h, r, t)=-d_{\mathbb{U}}^{2}\left(f_{r}\left(\mathbf{e}_{h}\right), \mathbf{e}_{t}\right)+b_{h}+b_{t}+\delta,
\end{equation}
where $f_{r}\left(\mathbf{e}_{h}\right) = \mathbf{U}_{\Theta_r} \mathbf{H}_{\mu_r} \mathbf{V}_{\Phi_r} \mathbf{e}_{h}$, and $\mathbf{e}_{h}, \mathbf{e}_{t} \in \mathbb{U}^{p,q}$ are the embeddings of the head entity $h$ and the tail entity $t$; $b_h,b_t \in \mathbb{R}$ are entity-specific biases, each of which defines an entity-specific \emph{sphere of influence} \citep{DBLP:conf/nips/BalazevicAH19} surrounding the center point; $\delta$ is a global margin hyper-parameter; and $d_{\mathbb{U}}(\cdot)$ is a function that quantifies the nearness/distance between two points in the ultrahyperbolic manifold.
\subsubsection{Manhattan-like Distance.}
Defining a proper distance $d_{\mathbb{U}}(\cdot)$ in the ultrahyperbolic manifold is non-trivial. Different from Riemannian manifolds that are geodesically connected, ultrahyperbolic manifolds are not geodesically connected, and there exist \emph{broken cases} in which the geodesic distance is not defined \citep{law2020ultrahyperbolic}.
Some approximation approaches \citep{law2020ultrahyperbolic,DBLP:journals/corr/abs-2106-03134} have been proposed that satisfy some of the axioms of a classic metric (e.g., they are symmetric premetrics). However, these distances lack the identity of indiscernibles; that is, one may have $d_{\mathbb{U}}(\mathbf{x}, \mathbf{y})=0$ for some distinct points $\mathbf{x}\neq\mathbf{y}$. This is not a problem for metric learning that aims to preserve pair-wise distances
\citep{law2020ultrahyperbolic,DBLP:journals/corr/abs-2106-03134}. However, our preliminary experiments found that a distance lacking the identity of indiscernibles results in unstable and non-convergent training. We conjecture that this is because the target of KG embedding differs from that of graph embedding, which aims at preserving pair-wise distances. KG embedding aims at satisfying $f_r(\mathbf{e}_h) \approx \mathbf{e}_t$ for each positive triple $(h,r,t)$, but not for negative triples. Hence, we need to retain the \emph{identity of indiscernibles}, that is, $d_{\mathbb{U}}(\mathbf{x}, \mathbf{y})=0 \Leftrightarrow \mathbf{x}=\mathbf{y}$.
To address this issue, we propose a novel Manhattan-like distance function, which is defined by a composition of a spherical distance and a hyperbolic distance.
Fig.~\ref{fig:distance} shows the Manhattan-like distance.
Formally, given two points $\mathbf{x},\mathbf{y} \in \mathbb{U}^{p,q}$, we first define a projection $\rho_{\mathbf{x}}(\mathbf{y})$ of $\mathbf{y}$ onto the \emph{circular conic section} crossing $\mathbf{x}$, such that $\rho_{\mathbf{x}}(\mathbf{y})$ and $\mathbf{x}$ share the same space dimensions while $\rho_{\mathbf{x}}(\mathbf{y})$ and $\mathbf{y}$ lie on a common hyperbolic subspace:
\begin{equation}\label{eq:dis_proj}
\rho_{\mathbf{x}}(\mathbf{y}) = \left(\begin{array}{c}
\mathbf{x}_p \\
\alpha\, \mathbf{y}_q\, \frac{\|\mathbf{x}_p\|}{\|\mathbf{y}_p\|}
\end{array}\right).
\end{equation}
This projection ensures that $\mathbf{x}$ and $\rho_\mathbf{x}(\mathbf{y})$ are connected by a purely space-like geodesic, while $\rho_\mathbf{x}(\mathbf{y})$ and $\mathbf{y}$ are connected by a purely time-like geodesic.
The distance function on $\mathbb{U}$ can hence be defined as a Manhattan-like distance, i.e., the smaller of the two sums of distances, given as
\begin{align}\label{eq:distance}
d_{\mathbb{U}}(\mathbf{x},\mathbf{y}) = \min\{& d_\mathbb{S}(\mathbf{y},\rho_\mathbf{y}(\mathbf{x})) + d_\mathbb{H}(\rho_\mathbf{y}(\mathbf{x}),\mathbf{x}), \\
& d_\mathbb{S}(\mathbf{x},\rho_\mathbf{x}(\mathbf{y})) + d_\mathbb{H}(\rho_\mathbf{x}(\mathbf{y}),\mathbf{y})\},
\end{align}
where $d_\mathbb{S}$ and $d_\mathbb{H}$ are spherical and hyperbolic distances, respectively, which are well-defined and maintain the identity of indiscernibles.
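Structurally, Eq.~(\ref{eq:distance}) amounts to taking the cheaper of two ``Manhattan detours''. The following sketch (ours) records just that composition; the projection $\rho$ and the component distances are passed in as callables, since their closed forms depend on conventions not repeated here:
\begin{verbatim}
def manhattan_distance(x, y, rho, d_S, d_H):
    """Eq. (eq:distance): minimum over the two detours, where rho(a, b)
    denotes the projection rho_a(b), d_S is the spherical distance and
    d_H the hyperbolic distance."""
    via_y = d_S(y, rho(y, x)) + d_H(rho(y, x), x)  # spherical + hyperbolic leg
    via_x = d_S(x, rho(x, y)) + d_H(rho(x, y), y)
    return min(via_y, via_x)
\end{verbatim}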
\subsection{Optimization}
For each triplet $(h,r,t)$, we create $k$ negative samples by randomly corrupting its head or tail entity. The probability of a triple is calculated as $p=\sigma(s(h, r, t))$ where $\sigma(.)$ is a sigmoid function. We minimize the binary cross entropy loss, given as
\begin{equation}
\mathcal{L}=-\frac{1}{N} \sum_{i=1}^{N}\left(\log p^{(i)}+\sum_{j=1}^{k} \log \left(1-\tilde{p}^{(i, j)}\right)\right),
\end{equation}
where $p^{(i)}$ and $\tilde{p}^{(i, j)}$ are the probabilities for positive and negative triplets respectively, and $N$ is the number of samples.
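A minimal sketch of this objective (ours; plain numpy, with the scores assumed to be precomputed by the score function above) is:
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(pos_scores, neg_scores):
    """pos_scores: (N,) scores s(h, r, t) of the true triples;
       neg_scores: (N, k) scores of the k corrupted triples each."""
    p = sigmoid(pos_scores)                     # probabilities of positives
    p_neg = sigmoid(neg_scores)                 # probabilities of negatives
    per_sample = np.log(p) + np.log(1.0 - p_neg).sum(axis=1)
    return -per_sample.mean()

# example: loss for two triples with two negatives each
loss = bce_loss(np.array([2.1, 0.3]),
                np.array([[-1.0, 0.2], [-0.5, -2.0]]))
\end{verbatim}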
Notably, directly optimizing the embeddings in the ultrahyperbolic manifold is challenging. The issue is caused by the fact that there exist pairs of points that cannot be connected by a geodesic in the manifold (hence there is no tangent direction for gradient descent). One way to sidestep the problem is to define the entity embeddings in Euclidean space and use a \emph{diffeomorphism} to map the points into the manifold.
In particular, we consider the following diffeomorphism.
\begin{theorem}[Diffeomorphism \citep{DBLP:journals/corr/abs-2106-03134}]\label{lm:sbr}
For any point $\mathbf{x} \in \mathbb{U}_{\alpha}^{p, q}$, there exists a diffeomorphism $\psi: \mathbb{U}_{\alpha}^{p, q} \rightarrow \mathbb{R}^{p} \times \mathbb{S}_{\alpha}^{q}$ that maps $\mathbf{x}$ into the product manifolds of a sphere and the Euclidean space. The mapping and its inverse are given by
\end{theorem}
\begin{equation}
\psi(\mathbf{x})=\left(\begin{array}{c}
\mathbf{s} \\
\alpha \frac{\mathbf{t}}{\|\mathbf{t}\|}
\end{array}\right), \quad \psi^{-1}(\mathbf{z})=\left(\begin{array}{c}
\mathbf{v}\\
\frac{\sqrt{\alpha^2+\|\mathbf{v}\|^{2}}}{\alpha} \mathbf{u}
\end{array}\right) \text {, }
\end{equation}
where $\mathbf{x}=\left(\begin{array}{c}\mathbf{s} \\ \mathbf{t}\end{array}\right) \in \mathbb{U}_{\alpha}^{p,q}$ with $\mathbf{s} \in \mathbb{R}^{p}$ and $\mathbf{t} \in \mathbb{R}_{*}^{q}$. $\mathbf{z}=\left(\begin{array}{c}\mathbf{v} \\ \mathbf{u} \end{array}\right) \in \mathbb{R}^{p} \times \mathbb{S}_{\alpha}^{q} $ with $\mathbf{v} \in \mathbb{R}^{p}$ and $\mathbf{u} \in \mathbb{S}_{\alpha}^{q}$.
With these mappings, any vector $\mathbf{x} \in \mathbb{R}^{p} \times \mathbb{R}_{*}^{q}$ can be mapped to $\mathbb{U}_{\alpha}^{p, q}$ by the double projection $\varphi=\psi^{-1} \circ \psi$. Note that since the diffeomorphism is differentiable and bijective, the canonical chain rule can be exploited to perform standard gradient descent optimization.
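The double projection is simple to write out; the sketch below (ours, with $\alpha=1$ and assuming the convention $\langle \mathbf{x},\mathbf{x}\rangle_{q}=-\alpha^{2}$ on the manifold) maps an arbitrary point with nonzero time part onto $\mathbb{U}_{\alpha}^{p,q}$:
\begin{verbatim}
import numpy as np

alpha, p, q = 1.0, 2, 2

def psi(x):                                # U^{p,q} -> R^p x S^q
    s, t = x[:p], x[p:]
    return np.concatenate([s, alpha * t / np.linalg.norm(t)])

def psi_inv(z):                            # R^p x S^q -> U^{p,q}
    v, u = z[:p], z[p:]
    return np.concatenate([v, np.sqrt(alpha**2 + v @ v) / alpha * u])

def phi(x):                                # double projection psi^{-1} o psi
    return psi_inv(psi(x))

x = np.array([0.4, -1.3, 2.0, 0.7])        # arbitrary point with t != 0
y = phi(x)
# y lies on the manifold: ||y_p||^2 - ||y_q||^2 = -alpha^2
assert np.isclose(y[:p] @ y[:p] - y[p:] @ y[p:], -alpha**2)
\end{verbatim}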
\section{Theoretical Analysis}
In this section, we provide some theoretical analyses of UltraE and some related approaches.
\subsection{Complexity Analysis}
To make the model scalable to the size of the current KGs and keep up with their growth, a KG embedding model should have linear time and parameter (space) complexity \citep{DBLP:journals/corr/abs-1304-7158,DBLP:conf/nips/Kazemi018}.
In our case, the number of relation parameters of circular rotation, circular reflection, and hyperbolic rotation grows linearly with the dimensionality given by $p+q$. The total number of parameters is then $\mathcal{O}( (N_e + N_r)\times d)$, where $N_e$ and $N_r$ are the numbers of entities and relations and $d=p+q$ is the embedding dimensionality. Similar to TransE \cite{DBLP:conf/nips/BordesUGWY13} and RotH \cite{DBLP:conf/acl/ChamiWJSRR20}, UltraE has time complexity $\mathcal{O}(d)$.
\subsection{Connections with Hyperbolic Methods}
UltraE has close connections with some existing hyperbolic KG embedding methods, including HyboNet \citep{DBLP:journals/corr/abs-2105-14686}, RotH/RefH \citep{DBLP:conf/acl/ChamiWJSRR20}, and MuRP \citep{DBLP:conf/nips/BalazevicAH19}. To show this, we first introduce Lorentz transformation.
\begin{definition}\label{def:lorent}
Lorentz transformation is a pseudo-orthogonal transformation with signature $(p,1)$.
\end{definition}
\noindent
\textbf{HyboNet} \citep{DBLP:journals/corr/abs-2105-14686} embeds entities as points in a Lorentz space and models relations as Lorentz transformations. According to Definition \ref{def:lorent}, we have the following proposition.
\begin{proposition}
UltraE, if parameterized by a full $J$-orthogonal matrix, generalizes HyboNet.
\end{proposition}
That is, HyboNet is the special case of UltraE (with full $J$-orthogonal matrix parameterization) in which $q=1$.
By exploiting the polar decomposition \citep{ratcliffe1994foundations}, a Lorentz transformation matrix $\mathbf{T}$ can be decomposed into $\mathbf{T}=\mathbf{R}_{\mathbf{U}} \mathbf{R}_{\mathbf{b}}$, where
\begin{equation}
\mathbf{R_{U}}=\left[\begin{array}{cc}
\mathbf{U} & \mathbf{0} \\
\mathbf{0} & 1
\end{array}\right], \mathbf{R_{b}}=\left[\begin{array}{cc}
\left(\mathbf{I}+\mathbf{b} \mathbf{b}^{\top}\right)^{\frac{1}{2}} & \mathbf{b}^{\top} \\
\mathbf{b} & \sqrt{1+\|\mathbf{b}\|_{2}^{2}}
\end{array}\right],
\end{equation}
where $\mathbf{R_{U}}$ is an orthogonal matrix.
In Lorentz geometry, $\mathbf{R_{U}}$ and $\mathbf{R_{b}}$ are called Lorentz rotation and Lorentz boost, respectively. $\mathbf{R_{U}}$ represents rotation or reflection in space dimension (without changing the time dimension), while $\mathbf{R_{b}}$ denotes a hyperbolic rotation across the time dimension and each space dimension.
\cite{DBLP:journals/spl/TabaghiD21} established an equivalence between Lorentz boost and Möbius addition (or hyperbolic translation). Hence, HyboNet inherently models each relation as a combination of a rotation/reflection and a hyperbolic translation.
\noindent
\textbf{RotH/RefH} \citep{DBLP:conf/acl/ChamiWJSRR20}, interestingly, also models each relation as a combination of a rotation/reflection and a hyperbolic translation that is implemented by Möbius addition.
Hence, HyboNet subsumes RotH/RefH,\footnote{Note that RotH/RefH consider the Poincaré ball while HyboNet considers the Lorentz model. The subsumption still holds since the Poincaré ball is isometric to the Lorentz model.} although exact equivalence does not hold because the rotation/reflection of RotH/RefH is parameterized by Givens rotations/reflections \citep{DBLP:conf/acl/ChamiWJSRR20}.
\noindent
\textbf{MuRP} \citep{DBLP:conf/nips/BalazevicAH19} models relations as a combination of Möbius multiplication (with a diagonal matrix) and Möbius addition. Note that \cite{DBLP:journals/corr/abs-2105-14686} established that a Lorentz rotation is equivalent to Möbius multiplication, and \cite{DBLP:journals/spl/TabaghiD21} proved that a Lorentz boost is equivalent to Möbius addition. Hence, HyboNet subsumes MuRP, although exact equivalence does not hold because the Möbius multiplication in MuRP is parameterized by a diagonal matrix.
To sum up, UltraE generalizes HyboNet to allow for arbitrary signature $(p,q)$, while HyboNet subsumes RotH/RefH and MuRP.
\begin{table}[]
\centering
\caption{The statistics of KGs, where $\xi_{G}$ measures the tree-likeness (the lower the $\xi_{G}$ is, the more tree-like the KG is).}
\begin{tabular}{lcccc}
\hline Dataset & \#entities & \#relations & \#triples & $\xi_{G}$ \\
\hline WN18RR & 41k & 11 & 93k & $-2.54$ \\
\hline FB15k-237 & 15k & 237 & 310k & $-0.65$ \\
\hline YAGO3-10 & 123k & 37 & 1M & $-0.54$ \\
\hline
\end{tabular}
\label{tab:dataset}
\end{table}
\subsection{Inference Patterns}
UltraE can naturally infer relation patterns including symmetry, anti-symmetry, inversion and composition. As discussed above, the relation transformation $f_{r}=U_{\Theta_r} H_{\mu_r} V_{\Phi_r}$ consists of three operations: a circular rotation, a hyperbolic rotation, and a circular reflection.
Each of the three operation matrices can be set to the identity matrix.
Therefore, there are several different combinations of parameter settings that encode the relational patterns below, demonstrating the comprehensive capability of the proposed UltraE for encoding relational patterns.
For the sake of proof, we assume $H_{\mu_r}$ is the identity matrix $\mathbf{I}$, and $\Theta_r,\Phi_r\in[-\pi,\pi)$.
\begin{proposition}
Let $r$ be a symmetric relation such that for each triple $(e_h, r, e_t)$, its symmetric triple $(e_t, r, e_h)$ also holds. This symmetric property of $r$ can be encoded into UltraE.
\end{proposition}
\begin{proof}
If $r$ is a symmetric relation, by taking $H_{\mu_r}=\mathbf{I}$ and $U_{\Theta_r}=\mathbf{I}$, we have
\begin{align}
\mathbf{e}_{h} = f_{r}\left(\mathbf{e}_{t}\right) = V_{\Phi_r} \mathbf{e}_{t}, \
\mathbf{e}_{t} = f_{r}\left(\mathbf{e}_{h}\right) = V_{\Phi_r} \mathbf{e}_{h}
\Rightarrow V_{\Phi_r}^2 = \mathbf{I} \nonumber
\end{align}
which holds true when $\Phi_r=0$ or $\Phi_r=-\pi$.
\end{proof}
\begin{proposition}
Let $r$ be an anti-symmetric relation such that for each triple $(e_h, r, e_t)$, its symmetric triple $(e_t, r, e_h)$ is not true. This anti-symmetric property of $r$ can be encoded into UltraE.
\end{proposition}
\begin{proof}
If $r$ is an anti-symmetric relation, by taking $H_{\mu_r}=\mathbf{I}$ and $U_{\Theta_r}=\mathbf{I}$, we have
\begin{align}
\mathbf{e}_{h} = f_{r}\left(\mathbf{e}_{t}\right) = V_{\Phi_r} \mathbf{e}_{t}, \
\mathbf{e}_{t} = f_{r}\left(\mathbf{e}_{h}\right) = V_{\Phi_r} \mathbf{e}_{h}
\Rightarrow \mathbf{e}_{h} = \mathbf{e}_{t} \nonumber
\end{align}
which holds true when $\Phi_r\neq0$ and $\Phi_r\neq-\pi$.
\end{proof}
\begin{proposition}
Let $r_1$ and $r_2$ be inverse relations such that for each triple $(e_h, r_1, e_t)$, its inverse triple $(e_t, r_2, e_h)$ is also true. This inverse property of $r_1$ and $r_2$ can be encoded into UltraE.
\end{proposition}
\begin{proof}
If $r_1$ and $r_2$ are inverse relations, by taking $H_{\mu_{r_1}}=H_{\mu_{r_2}}=\mathbf{I}$ and $V_{\Phi_{r_1}}=V_{\Phi_{r_2}}=\mathbf{I}$, we have
\begin{align}
\mathbf{e}_{h} = f_{r_1}\left(\mathbf{e}_{t}\right) = U_{\Theta_{r_1}} \mathbf{e}_{t}, \
\mathbf{e}_{t} = f_{r_2}\left(\mathbf{e}_{h}\right) = U_{\Theta_{r_2}} \mathbf{e}_{h}
\Rightarrow U_{\Theta_{r_1}}U_{\Theta_{r_2}} = \mathbf{I} \nonumber
\end{align}
which holds true when $\Theta_{r_1}+\Theta_{r_2}=0$.
\end{proof}
\begin{proposition}
Let relation $r_1$ be composed of $r_2$ and $r_3$ such that triple $(e_h, r_1, e_t)$ exists when $(e_h, r_2, e_t)$ and $(e_h, r_3, e_t)$ exist. This composition property can be encoded into UltraE.
\end{proposition}
\begin{proof}
If $r_1$ is composed of $r_2$ and $r_3$, by taking $H_{\mu_{r_i}}=\mathbf{I}$ and $V_{\Phi_{r_i}}=\mathbf{I}$ for $i\in\{1,2,3\}$, we have
\begin{align}
&\mathbf{e}_{h} = f_{r_1}\left(\mathbf{e}_{t}\right) = U_{\Theta_{r_1}} \mathbf{e}_{t}, \
\mathbf{e}_{h} = f_{r_2}\left(\mathbf{e}_{t}\right) = U_{\Theta_{r_2}} \mathbf{e}_{t}, \\
&\mathbf{e}_{h} = f_{r_3}\left(\mathbf{e}_{t}\right) = U_{\Theta_{r_3}} \mathbf{e}_{t} \
\Rightarrow U_{\Theta_{r_1}} = U_{\Theta_{r_2}}U_{\Theta_{r_3}} \nonumber
\end{align}
which holds true when $\Theta_{r_1}=\Theta_{r_2}+\Theta_{r_3}$ or $\Theta_{r_1}=\Theta_{r_2}+\Theta_{r_3}+2\pi$ or $\Theta_{r_1}=\Theta_{r_2}+\Theta_{r_3}-2\pi$.
\end{proof}
\begin{table*}[]
\centering
\caption{Link prediction results (\%) on WN18RR, FB15k-237 and YAGO3-10 for low-dimensional embeddings ($d=32$) in the filtered setting. The first group of models are Euclidean models, the second group are non-Euclidean models, and MuRMP is a mixed-curvature baseline. RotatE, MuRE, MuRP, RotH, RefH and AttH results are taken from \citep{DBLP:conf/acl/ChamiWJSRR20}.
RotatE results are reported without self-adversarial negative sampling for fair comparison. The best score and best baseline are in \textbf{bold} and underlined, respectively. }
\begin{tabular}{lrrrrrrrrrrrr}
\hline & \multicolumn{4}{c}{WN18RR} & \multicolumn{4}{c}{FB15k-237} & \multicolumn{4}{c}{ YAGO3-10} \\
Model & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 \\
\hline
TransE & 36.6 & 27.4 & 43.3 & 51.5 & 29.5 & 21.0 & 32.2 & 46.6 & - & - & - & - \\
RotatE & 38.7 & 33.0 & 41.7 & 49.1 & 29.0 & 20.8 & 31.6 & 45.8 & - & - & - & - \\
ComplEx & 42.1 & 39.1 & 43.4 & 47.6 & 28.7 & 20.3 & 31.6 & 45.6 & 33.6 & 25.9 & 36.7 & 48.4 \\
QuatE & 42.1 & 39.6 & 43.0 & 46.7 & 29.3 & 21.2 & 32.0 & 46.0 & - & - & - & - \\
5$\star$E & 44.9 & 41.8 & 46.2 & 51.0 & 32.3 & 24.0 & 35.5 & 50.1 & - & - & - & - \\
MuRE & 45.8 & 42.1 & 47.1 & 52.5 & 31.3 & 22.6 & 34.0 & 48.9 & 28.3 & 18.7 & 31.7 & 47.8 \\
\hline
MuRP & 46.5 & 42.0 & 48.4 & 54.4 & 32.3 & 23.5 & 35.3 & 50.1 & 23.0 & 15.0 & 24.7 & 39.2 \\
RotH & \underline{47.2} & \underline{42.8} & \underline{49.0} & \underline{55.3} & 31.4 & 22.3 & 34.6 & 49.7 & 39.3 & 30.7 & 43.5 & 55.9 \\
RefH & 44.7 & 40.8 & 46.4 & 51.8 & 31.2 & 22.4 & 34.2 & 48.9 & 38.1 & 30.2 & 41.5 & 53.0\\
AttH & 46.6 & 41.9 & 48.4 & 55.1 & \underline{32.4} & \underline{23.6} & \underline{35.4} & 50.1 & \underline{39.7} & \underline{31.0} & \underline{43.7} & \underline{56.6} \\
MuRMP & 47.0 & 42.6 & 48.3 & 54.7 & 31.9 & 23.2 & 35.1 & \underline{50.2} & 39.5 & 30.8 & 42.9 & \underline{56.6} \\
\hline
UltraE (q=2) & 48.1 & 43.4 & 50.0 & 55.4 & 33.1 & 24.1 & 35.5 & 50.3 & 39.5 & 31.2 & 43.9 & 56.8 \\
UltraE (q=4) & \textbf{48.8} & \textbf{44.0} & \textbf{50.3} & \textbf{55.8} & 33.4 & 24.3 & 36.0 & 51.0 & 40.0 & 31.5 & 44.3 & 57.0 \\
UltraE (q=6) & 48.3 & 42.5 & 49.1 & 55.5 & \textbf{33.8} & \textbf{24.7} & \textbf{36.3} & \textbf{51.4} & \textbf{40.5} & \textbf{31.8} & \textbf{44.7} & \textbf{57.2} \\
UltraE (q=8) & 47.5 & 42.3 & 49.0 & 55.1 & 32.6 & 24.6 & 36.2 & 51.0 & 39.4 & 31.3 & 43.4 & 56.5 \\
\hline
\end{tabular}
\label{tab:low_dim}
\end{table*}
\section{Empirical Evaluation}
In this section, we evaluate the performance of UltraE on link prediction in three KGs that contain both hierarchical and non-hierarchical relations.
We systematically study the major components of our framework and show that (1) UltraE outperforms Euclidean and non-Euclidean baselines on embedding KGs with heterogeneous topologies, especially in low-dimensional cases (Sec.~\ref{sec:low_dim}); (2) the signature of embedding space works as a knob for controlling the geometry, and hence influences the performance of UltraE (Sec.~\ref{sec:signature}); (3) UltraE is able to improve the embeddings of relations with heterogeneous topologies (Sec.~\ref{sec:topo}); and (4) the combination of rotation and reflection outperforms a single operator (Sec.~\ref{sec:operators}).
\subsection{Experiment Setup}
\subsubsection{Dataset.} We use three standard benchmarks: WN18RR \citep{DBLP:conf/nips/BordesUGWY13}, a subset of WordNet containing $11$ lexical relationships; FB15k-237 \citep{DBLP:conf/nips/BordesUGWY13}, a subset of Freebase containing general world knowledge; and YAGO3-10 \citep{DBLP:conf/cidr/MahdisoltaniBS15}, a subset of YAGO3 containing information about relationships between people.
All three datasets contain hierarchical (e.g., \textit{partOf}) and non-hierarchical (e.g., \textit{similarTo}) relations, and some of which contain relational patterns like symmetry (e.g., \textit{isMarriedTo}).
For each KG, we follow the standard data augmentation protocol \cite{DBLP:conf/icml/LacroixUO18} and the same train/valid/test splits as used in \cite{DBLP:conf/icml/LacroixUO18} for fair comparison.
Following the previous work \cite{DBLP:conf/acl/ChamiWJSRR20}, we use the global graph curvature \citep{DBLP:conf/iclr/GuSGR19} to measure the geometric properties of the datasets. The statistics of the datasets are summarized in Table \ref{tab:dataset}. As we can see, all datasets are globally hierarchical (i.e., the curvature is negative), but none of them is a pure tree structure.
Comparatively, WN18RR is more hierarchical than FB15k-237 and YAGO3-10 since it has a smaller global graph curvature.
\begin{table*}[]
\centering
\caption{Link prediction results (\%) on WN18RR, FB15k-237 and YAGO3-10 for high-dimensional embeddings (best for $d \in \{200,400,500\}$) in the filtered setting. RotatE, MuRE, MuRP, RotH, RefH and AttH results are taken from \citep{DBLP:conf/acl/ChamiWJSRR20}. RotatE results are reported without self-adversarial negative sampling. The best score and best baseline are in \textbf{bold} and underlined, respectively.}
\begin{tabular}{lrrrrrrrrrrrr}
\hline & \multicolumn{4}{c}{WN18RR} & \multicolumn{4}{c}{FB15k-237} & \multicolumn{4}{c}{ YAGO3-10} \\
Model & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 \\
\hline
TransE & 48.1 & 43.3 & 48.9 & 57.0 & 34.2 & 24.0 & 37.8 & 52.7 & - & - & - & - \\
DistMult & 43.0 & 39.0 & 44.0 & 49.0 & 24.1 & 15.5 & 26.3 & 41.9 & 34.0 & 24.0 & 38.0 & 54.0 \\
RotatE & 47.6 & 42.8 & 49.2 & 57.1 & 33.8 & 24.1 & 37.5 & 53.3 & 49.5 & 40.2 & 55.0 & 67.0 \\
ComplEx & 48.0 & 43.5 & 49.5 & 57.2 & 35.7 & 26.4 & 39.2 & 54.7 & 56.9 & 49.8 & 60.9 & 70.1\\
QuatE & 48.8 & 43.8 & 50.8 & 58.2 & 34.8 & 24.8 & 38.2 & 55.0 & - & - & - & - \\
5$\star$E & \underline{50.0} & \underline{\textbf{45.0}} & 51.0 & \underline{59.0} & \underline{\textbf{37.0}} & \underline{\textbf{28.0}} & \underline{\textbf{40.0}} & 56.0 & - & - & - & - \\
MuRE & 47.5 & 43.6 & 48.7 & 55.4 & 33.6 & 24.5 & 37.0 & 52.1 & 53.2 & 44.4 & 58.4 & 69.4 \\
\hline
MuRP & 48.1 & 44.0 & 49.5 & 56.6 & 33.5 & 24.3 & 36.7 & 51.8 & 35.4 & 24.9 & 40.0 & 56.7 \\
RotH & 49.6 & 44.9 & \underline{51.4} & 58.6 & 34.4 & 24.6 & 38.0 & 53.5 & 57.0 & 49.5 & 61.2 & 70.6 \\
RefH & 46.1 & 40.4 & 48.5 & 56.8 & 34.6 & 25.2 & 38.3 & 53.6 & \underline{57.6} & \underline{50.2} & \underline{61.9} & \underline{\textbf{71.1}} \\
AttH & 48.6 & 44.3 & 49.9 & 57.3 & 34.8 & 25.2 & 38.4 & 54.0 & 56.8 & 49.3 & 61.2 & 70.2 \\
MuRMP & 48.1 & 44.1 & 49.6 & 56.9 & 35.8 & 27.3 & 39.4 & \underline{56.1} & 49.5 & 44.8 & 59.1 & 69.8 \\
\hline
UltraE (q=20) & 48.5 & 44.2 & 50.0 & 57.3 & 34.9 & 25.1 & 38.5 & 54.1 & 56.9 & 49.5 & 61.0 & 70.3 \\
UltraE (q=40) & \textbf{50.1} & \textbf{45.0} & \textbf{51.5} & \textbf{59.2} & 35.1 & 27.5 & \textbf{40.0} & 56.0 & 57.5 & 49.8 & 62.0 & 70.8 \\
UltraE (q=80) & 49.7 & 44.8 & 51.2 & 58.5 & 36.8 & 27.6 & \textbf{40.0} & \textbf{56.3} & \textbf{58.0} & \textbf{50.6} & \textbf{62.3} & \textbf{71.1} \\
UltraE (q=160) & 48.6 & 44.5 & 50.3 & 57.4 & 35.4 & 26.0 & 39.0 & 55.5 & 57.0 & 49.5 & 61.8 & 70.5 \\
\hline
\end{tabular}
\label{tab:high_dim}
\end{table*}
\noindent
\subsubsection{Evaluation protocol.}
Two popular ranking-based metrics are reported: 1) mean reciprocal rank (MRR), the mean of the inverse of the true entity ranking in the prediction; and 2) hit rate $H@K$ ($K \in \{1,3,10\}$), the percentage of the correct entities appearing in the top $K$ ranked entities. As a standard, we report the metrics in the filtered setting \citep{DBLP:conf/nips/BordesUGWY13}, i.e., when calculating the ranking during evaluation, we filter out all true triples in the training set, since predicting a low rank for these triples should not be penalized.
\noindent
\subsubsection{Hyperparameters.}
For each KG, we explore the batch size $\in \{500,1000\}$, global margin $\in \{2,4,6,8\}$ and learning rate $\in \{3\times 10^{-3},5\times 10^{-3},7\times 10^{-3}\}$ on the validation set. The negative sampling size is fixed to $50$. The maximum number of epochs is set to $1000$. The radius of curvature $\alpha$ is fixed to $1$ since our model does not need relation-specific curvatures but is able to learn relation-specific mappings in the ultrahyperbolic manifold. The signature of the product manifold is set to be the same as in \citep{DBLP:conf/www/WangWSWNAXYC21}.
\noindent
\subsubsection{Baselines.} Our baselines are divided into two groups:
\begin{itemize}
\item \textbf{Euclidean models}. 1) TransE \citep{DBLP:conf/nips/BordesUGWY13}, the first translational model; 2) RotatE \citep{DBLP:conf/iclr/SunDNT19}, a rotation model in a complex space; 3) DistMult \citep{DBLP:journals/corr/YangYHGD14a}, a multiplicative model with a diagonal relational matrix; 4) ComplEx \citep{DBLP:conf/icml/TrouillonWRGB16}, an extension of DistMult in a complex space; 5) QuatE \citep{DBLP:conf/aaai/CaoX0CH21}, a generalization of complex KG embedding in a hypercomplex space; 6) 5$\star$E \citep{DBLP:conf/aaai/NayyeriVA021}, which models a relation as five transformation functions; 7) MuRE \citep{DBLP:conf/nips/BalazevicAH19}, a Euclidean model with a diagonal relational matrix.
\item \textbf{Non-Euclidean models.} 1) MuRP \citep{DBLP:conf/nips/BalazevicAH19}, a hyperbolic model with a diagonal relational matrix; 2) MuRS, a spherical analogy of MuRP; 3) RotH/RefH \citep{DBLP:conf/acl/ChamiWJSRR20}, a hyperbolic embedding with rotation or reflection; 4) AttH \citep{DBLP:conf/acl/ChamiWJSRR20}, a combination of RotH and RefH by attention mechanism; 5) MuRMP \citep{DBLP:conf/www/WangWSWNAXYC21}, a generalization of MuRP in the product manifold.
\end{itemize}
We compare UltraE with varied signatures (time dimensions).
Since for all KGs the hierarchies are much more dominant than the cyclicities, and we assume that both the space and time dimensions are even, we set the time dimension to a relatively small value (i.e., $q=2,4,6,8$ and $q=20,40,80,160$ for the low-dimension and high-dimension settings, respectively) for comparison. The full range of possible time dimensions with $q \leq p$ is studied in Sec.~\ref{sec:signature}.
\subsection{Overall Results}
\label{sec:low_dim}
\subsubsection{Low-dimensional Embeddings.}
Following previous non-Euclidean approaches \citep{DBLP:conf/acl/ChamiWJSRR20}, we first evaluate UltraE in the low-dimensional setting ($d = 32$). Table \ref{tab:low_dim} shows the performance of UltraE and the baselines. Overall, it is clear that UltraE with varying time dimensions ($q=2,4,6,8$) improves over all baseline methods. UltraE, even with only $2$ time dimensions, consistently outperforms all baselines, suggesting that the heterogeneous structure imposed by the pseudo-Riemannian geometry leads to better representations.
In particular, the best performance of WN18RR is achieved by UltraE ($q=4$) while the best performances of FB15k-237 and YAGO3-10 are achieved by UltraE ($q=6$). We believe that this is because WN18RR is more hierarchical than FB15k-237 and YAGO3-10, validating our conjecture that the number of time dimensions controls the geometry of the embedding space.
Besides, we observed that the mixed-curvature baseline MuRMP does not consistently improve the hyperbolic methods. We conjecture that this is because MuRMP cannot properly model relational patterns.
\noindent
\subsubsection{High-dimensional Embeddings}
Table \ref{tab:high_dim} shows the results of link prediction in high dimensions (best for $d \in \{200,400,500\}$). Overall, UltraE achieves either better or competitive results against a variety of other models.
In particular, we observed that neither the hyperbolic nor the mixed-curvature methods achieve significant performance gains over the Euclidean-based methods. We conjecture that this is because when the dimension is sufficiently large, both Euclidean and hyperbolic geometries have sufficient ability to represent complex hierarchies in KGs.
However, UltraE outperforms almost all compared approaches, with only 5$\star$E achieving competitive results. Again, the performance gain is not as significant as in the low-dimensional cases, which further validates the hypothesis that KG embeddings are less sensitive to the choice of embedding space in high dimensions. The additional performance gain might come from the flexibility in inferring relational patterns.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{imgs/total_dim.pdf}
\caption{The performance (MRR) of various methods on WN18RR, with $d \in \{10,16,20,32,50,200,500\}$. UltraE is implemented with only rotation and $q=4$.
The results of MuRE, MuRS and MuRMP are taken from \cite{DBLP:conf/www/WangWSWNAXYC21} with $d \in \{10,15,20,40,100,200,500\}$. All results are averaged over 10 runs.}
\label{fig:total_dim}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{imgs/time_dim.pdf}
\caption{The performance (H@10) of UltraE with varied signature (time dimensions) under the condition of $d=p+q=32, q \leq p$ on WN18RR. The dashed horizontal lines denote the results of RotH. As $q$ increases, the performance first increases and starts to decrease after reaching a peak.}
\label{fig:time_dim}
\end{figure}
\begin{table}[]
\centering
\vspace{0.1cm}
\caption{Comparison of H@10 for WN18RR relations. Higher $\mathbf{Khs}_{G}$ and lower $\xi_{G}$ mean a more hierarchical structure. UltraE is implemented with rotation only and with the best signature $(4,28)$. }
\resizebox{\columnwidth}{!}{
\begin{tabular}{lcccccc}
\hline Relation & $\mathbf{Khs}_{G}$ & $\xi_{G}$ & RotE & RotH & UltraE (Rot) \\
\hline
\text {member meronym } & 1.00 & -2.90 & 32.0 & 39.9 & 41.3 \\
\text {hypernym } & 1.00 & -2.46 & 23.7 & 27.6 & 28.6 \\
\text {has part } & 1.00 & -1.43 & 29.1 & 34.6 & 36.0 \\
\text {instance hypernym } & 1.00 & -0.82 & 48.8 & 52.0 & 53.2 \\
\textbf {member of domain region } & 1.00 & -0.78 & 38.5 & 36.5 & 43.3 \\
\textbf { member of domain usage } & 1.00 & -0.74 & 45.8 & 43.8 & 50.3 \\
\text { synset domain topic of } & 0.99 & -0.69 & 42.5 & 44.7 & 46.3 \\
\text { also see } & 0.36 & -2.09 & 63.4 & 70.5 & 73.5 \\
\hline
\text { derivationally related form } & 0.07 & -3.84 & 96.0 & 96.8 & 97.1 \\
\text { similar to } & 0.07 & -1.00 & 100.0 & 100.0 & 100.0 \\
\text { verb group } & 0.07 & -0.50 & 97.4 & 97.4 & 98.0 \\
\hline
\end{tabular}}
\label{tab:relation_type}
\end{table}
\begin{table}[]
\centering
\vspace{0.1cm}
\caption{Comparison of H@10 on YAGO3-10 relations. UltraE (Rot) and UltraE (Ref) are implemented with only rotation and only reflection, respectively. We choose the best signature $(6,26)$. }
\resizebox{\columnwidth}{!}{
\begin{tabular}{lccccc}
\hline
Relation & Anti-symmetric & Symmetric & UltraE (Rot) & UltraE (Ref) & UltraE \\
\hline
hasNeighbor & $\times$ & $\checkmark$ & 75.3 & \textbf{100.0} & \textbf{100.0} \\
isMarriedTo & $\times$ & $\checkmark$ & 94.0 & 94.4 & \textbf{100.0} \\
actedIn & $\checkmark$ & $\times$ & 14.7 & 12.7 & \textbf{15.3} \\
hasMusicalRole & $\checkmark$ & $\times$ & 43.5 & 37.0 & \textbf{46.0} \\
directed & $\checkmark$ & $\times$ & 51.5 & 45.3 & \textbf{56.8} \\
graduatedFrom & $\checkmark$ & $\times$ & 26.8 & 16.3 & \textbf{27.5} \\
playsFor & $\checkmark$ & $\times$ & 67.2 & 64.0 & \textbf{66.8} \\
wroteMusicFor & $\checkmark$ & $\times$ & \textbf{28.4} & 18.8 & 27.9 \\
hasCapital & $\checkmark$ & $\times$ & \textbf{73.2} & 68.3 & \textbf{73.2} \\
dealsWith & $\times$ & $\times$ & 30.4 & 29.7 & \textbf{43.6} \\
isLocatedIn & $\times$ & $\times$ & 41.5 & 39.8 & \textbf{42.8} \\
\hline
\end{tabular}}
\label{tab:relational_pattern}
\end{table}
\subsection{Parameter Sensitivity}
\subsubsection{The effect of dimensionality.}
To investigate the effect of dimensionality, we conduct experiments on WN18RR and compare UltraE ($q=4$) against various state-of-the-art counterparts with varying dimensionality. For a fair comparison with RotH that only considers rotation, we only use rotation for the implementation of UltraE, denoted by UltraE (Rot).
Fig. \ref{fig:total_dim} shows the results obtained by averaging over 10 runs.
It clearly shows that the mixed-curvature method MuRMP outperforms its single-geometry counterparts (MuRE, MuRP), showcasing the limitation of a single homogeneous geometry in capturing the intrinsic heterogeneous structures. However, RotH performs slightly better than MuRMP, especially at high dimensionality; we conjecture that this is due to the capability of RotH to infer relational patterns.
UltraE achieves further improvements across a broad range of dimensions, suggesting the benefits of ultrahyperbolic manifold for modeling relation-specific geometries as well as inferring relational patterns.
\subsubsection{The effect of signature.}\label{sec:signature} We study the influence of the signature on WN18RR by setting a varying number of time dimensions under the condition of $d=p+q=32, p \geq q$. Fig. \ref{fig:time_dim} shows that in all three benchmarks, by increasing $q$, the performance grows first and starts to decline after reaching a peak, which is consistent with our hypothesis that the signature acts as a knob for controlling the geometric properties. One might also note that compared with hyperbolic baselines (the dashed horizontal lines), the performance gain for WN18RR is relatively smaller than those of FB15k-237 and YAGO3-10. We conjecture that this is because WN18RR is more hierarchical than FB15k-237 and YAGO3-10, and the hyperbolic embedding performs already well. This assumption is further validated by the fact that the best time dimension of WN18RR ($q=4$) is smaller than that of FB15k-237 and YAGO3-10 ($q=6$).
\subsubsection{The effect of relation types.}\label{sec:topo} In this part, we investigate the per-relationship performance of UltraE on WN18RR.
Similar to RotE and RotH that only consider rotation, we consider UltraE (Rot) as before.
Two metrics that describe the geometric properties of each relation are reported, including global graph curvature and Krackhardt hierarchy score \citep{DBLP:conf/acl/ChamiWJSRR20}, for which higher $\mathbf{Khs}_{G}$ and lower $\xi_{G}$ means more hierarchical.
As shown in Table \ref{tab:relation_type}, although RotH outperforms RotE on most of the relation types, the performance is not on par with RotE on the relations ``member of domain region'' and ``member of domain usage''. UltraE (Rot), however, consistently outperforms both RotE and RotH on all relations, with significant performance gains on the relations ``member of domain region'' and ``member of domain usage'' that RotH fails on.
The overall observation also verifies the flexibility and effectiveness of the proposed method in dealing with heterogeneous topologies of KGs.
\subsubsection{The effect of rotation and reflection.}\label{sec:operators}
To investigate the role of rotation and reflection, we compare UltraE against its two variants: UltraE with only rotation (UltraE (Rot)) and UltraE with only reflection (UltraE (Ref)).
Table \ref{tab:relational_pattern} shows the per-relationship results on YAGO3-10. We observe that UltraE with rotation performs better on anti-symmetric relations while UltraE with reflection performs better on symmetric relations, suggesting that reflection is more suitable for representing symmetric patterns. On almost all relations, including relations that are neither symmetric nor anti-symmetric, with the exception of ``wroteMusicFor'', UltraE outperforms both the rotation and reflection variants, showcasing that combining multiple operators learns more expressive representations.
\section{Related Work}
\subsection{Knowledge Graph Embeddings}
Recent progress on KG embeddings has been achieved from many perspectives. One line of work aims at improving the expressivity of relational operations, from \emph{additive} operations \citep{DBLP:conf/nips/BordesUGWY13,DBLP:conf/aaai/WangZFC14,DBLP:conf/aaai/LinLSLZ15} to \emph{multiplicative} operations \citep{DBLP:conf/icml/NickelTK11,DBLP:journals/corr/YangYHGD14a,DBLP:conf/icml/LiuWY17}. Among these, the rotation model \citep{DBLP:conf/iclr/SunDNT19} allows for better representation of relational patterns such as symmetry, anti-symmetry, inversion and composition.
Another line of work exploits more expressive embedding spaces, moving from Euclidean to hyperbolic space. Various hyperbolic KG embeddings have been proposed, including MuRP \citep{DBLP:conf/nips/BalazevicAH19}, which models a relation as a combination of Möbius multiplication and Möbius addition, as well as RotH/RefH \citep{DBLP:conf/acl/ChamiWJSRR20}, which model relations as hyperbolic isometries (rotation/reflection and translation) to infer relational patterns. RotH/RefH learn relation-specific curvatures to distinguish the geometric characteristics of different relations, but still cannot tackle non-hierarchical relations since hyperbolic space is not the optimal geometry for non-hierarchies. HyboNet \citep{DBLP:journals/corr/abs-2105-14686} is a multiplicative hyperbolic model in Lorentz geometry but requires a quadratic number of parameters.
5$\star$E \citep{DBLP:conf/aaai/NayyeriVA021} proposes $5$ transformations (inversion, reflection, translation, rotation, and homothety) to support multiple graph structures but the embeddings are still learned in the Euclidean space. Unlike all previous methods that focus on homogeneous geometric space, our method is learned in the ultrahyperbolic manifold, a heterogeneous geometric space with multiple kinds of local geometries.
\subsection{Ultrahyperbolic Embeddings}
Some recent works have explored the application of ultrahyperbolic or pseudo-Riemannian geometry in representation learning. Pseudo-Riemannian geometry (or \textit{pseudo-Euclidean space}) was first applied to embed non-metric data while preserving local information \citep{sun2015space}. \cite{clough2017embedding} exploited Lorentzian space-time for embedding directed acyclic graphs. More recently, \cite{law2020ultrahyperbolic} proposed learning graph embeddings on the pseudo-hyperboloid and provided some necessary geodesic tools, \cite{sim2021directed} further extended this to directed graph embedding, and the pseudo-Riemannian embedding was extended to support neural network operators \citep{DBLP:journals/corr/abs-2106-03134,law2021ultrahyperbolic}.
However, pseudo-Riemannian geometry has not yet been exploited in the setting of KG embeddings.
\section{Conclusion}
This paper proposes UltraE, an ultrahyperbolic KG embedding method in a pseudo-Riemannian manifold that interleaves hyperbolic and spherical geometries, allowing for simultaneously modeling multiple hierarchical and non-hierarchical structures in KGs.
We derive a relational embedding by exploiting the pseudo-orthogonal transformation, which is decomposed into various geometric operators including circular rotations/reflections and hyperbolic rotations, allowing for inferring complex relational patterns in KGs. We propose a Manhattan-like distance that measures the nearness of points in the ultrahyperbolic manifold.
The embeddings are optimized by standard gradient descent thanks to the differentiable and bijective mapping.
We discuss theoretical connections of UltraE with other hyperbolic methods.
On three standard KG datasets, UltraE outperforms many previous Euclidean and non-Euclidean counterparts, especially in low-dimensional settings.
\section*{Acknowledgments}
The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Bo Xiong. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No: 860801.
\bibliographystyle{named}
|
2,869,038,154,046 | arxiv | \section{Introduction}
The initial value formulation of general relativity provides a powerful
tool for studying time-dependent solutions, especially in situations where one
cannot solve the equations explicitly. The initial value constraints can place
restrictions on the possible topologies and geometries of the Cauchy surface
on which the initial conditions for the problem are defined. The approach
was largely pioneered by Wheeler \cite{Wheeler} and Misner
\cite{MisnerWheeler,Misner:1960zz,Misner2}, under the name of
{\it Geometrodynamics}, with further early developments
by Lindquist and Brill \cite{Lindquist,Brill:1963yv}, and by other authors.
The initial value formulation has subsequently been extended to larger
systems of matter coupled to Einstein gravity, and among these, the theories
encountered in supergravities are of particular interest. First of all,
such extensions of Einstein gravity
have the merit of being consistent with the usual positive
energy theorems of classical general relativity. Furthermore, they are
potentially of intrinsic physical interest if they arise as low energy
limits of string theory or
M-theory. Some results in one such theory, namely a particular
four-dimensional Einstein-Maxwell-Dilaton
(EMD) theory coming from string theory, were studied by Ortin \cite{ortin}.
In \cite{cvegibpop} a larger class of EMD theories and other string-related
supergravities were investigated, with an emphasis on time-symmetric
initial data sets. The focus in \cite{cvegibpop} was exclusively on
four-dimensional theories. Clearly, in the context of supergravity,
string theory and M-theory it is of interest to extend the investigation to
dimensions greater than four, and that provides the motivation for the
present paper. As in \cite{cvegibpop} we shall, for simplicity, focus on
the case of time-symmetric initial data; i.e., on the case where the
metric is taken to be static on the initial time surface, and the
second fundamental form vanishes there.
The bulk of the earlier studies in four dimensions involved making an
ansatz introduced by Lichnerowicz \cite{lichnerowicz}, in which the
spatial metric on the initial surface was taken to be a conformal factor
times a fiducial static metric $\bar g_{ij}$, where $\bar g_{ij}$ might
typically be the flat Euclidean metric, or the metric on the 3-sphere, or
the metric on $S^1\times S^2$. The Hamiltonian constraint now becomes
an equation for the spatial Laplacian of the conformal factor, allowing
rather simple solutions if a suitable restriction of its coordinate
dependence, adapted to the symmetry of the fiducial metric, is imposed. By this
means data sets that describe the initial data for multiple black holes,
or black holes in a closed universe, or wormholes in the $S^1\times S^2$ case,
can be constructed.
In higher dimensions the possibilities for choosing fiducial metrics
$\bar g_{ij}$ become more extended. For example, if there are $d$
spatial dimensions one can write the metric on a round $d$-sphere $S^d$
in a variety of different ways, such as in terms of foliations of
$S^p\times S^q$ surfaces with a ``latitude'' coordinate $\mu$, where
$p+q=d-1$. If one then makes an assumption that the conformal factor
depends only on $\mu$, then depending upon how the integers $p$ and $q$
partition $d-1$, one will obtain solutions of the initial data
constraints corresponding to different distributions of black hole centres.
There are also many possibilities extending the $S^1\times S^2$ wormhole
choice that was considered when $d=3$.
In this paper, we shall consider some of these higher-dimensional
generalisations in some detail. After setting up the notation for
time-symmetric initial data in section 2, we turn in section 3 to the
case of the higher-dimensional vacuum Einstein equations. We
construct solutions of the initial-value constraints both for a flat
Euclidean fiducial metric $\bar g_{ij}$, and for cases with a spherical
metric, described in a variety of different ways as described above.
We also consider one example, for $d=4$, where the fiducial metric is
taken to be
the Fubini-Study metric on the complex projective plane $\CP^2$.
In section 4 we consider the Einstein-Maxwell equations, extending this to
the Einstein-Maxwell-Dilaton system in section 5. Another generalisation,
to an Einstein-Dilaton system coupled to two electromagnetic fields, is
considered in section 6. In section 7 we give a rather general discussion
of Einstein gravity coupled to $p$ dilatons and $q$ Maxwell fields. This
encompasses many examples that arise in supergravities in various dimensions.
We describe in section 8 how the initial-value problem for these
general Einstein-Maxwell-Dilaton systems may be mapped into the initial-value
problem for corresponding purely Einstein-Dilaton systems.
In section 9, we turn to a consideration of initial data for wormhole
solutions in higher dimensions, generalising results in the literature on
the $d=3$ case. We consider wormholes associated with using
a fiducial metric on $S^1\times S^{d-1}$. We obtain solutions for
wormhole initial data in higher-dimensional pure Einstein,
Einstein-Maxwell, and the various Einstein-Maxwell-Dilatons systems
mentioned above. We include in our discussion a calculation of
the masses and the charges for these wormhole configurations, and
the interaction energies between multiple wormhole
throats. The paper ends with conclusions in section 10.
\section{The Constraints for Time-Symmetric Initial Data}
In the general ADM decomposition, an $n$-dimensional metric $d\hat s^2$
is written in the form
\be
d\hat s^2= -N^2\, dt^2 + g_{ij}\, (dx^i + N^i\, dt)(dx^j + N^j\, dt)\,,
\ee
whose inverse is given by
\be
\Big(\fft{\del}{\del \hat s}\Big)^2 = -\fft{1}{N^2}\,
\Big(\fft{\del}{\del t} - N^i \del_i\Big)^2 + g^{ij}\, \del_i\otimes \del_j
\,.
\ee
The unit vector normal to the $t=\,$constant surfaces is given by
\be
n= n^\mu\, \del_\mu = \fft1{N}\, \Big(\fft{\del}{\del t} - N^i \del_i\Big)\,.
\ee
The Hamiltonian constraint for the $n$-dimensional
Einstein equations $\hat R_{\mu\nu}- \ft12 \hat R\, \hat g_{\mu\nu}
= \hat T_{\mu\nu}$ is
then given by the double projection with $n^\mu\, n^\nu$:
\be
\hat R_{\mu\nu}\, n^\mu\, n^\nu + \ft12 \hat R =
\hat T_{\mu\nu}\, n^\mu\, n^\nu\,.
\ee
By the Gauss-Codacci equations this implies
\be
R + K^2 - K^{ij}\, K_{ij} = 2 \hat T_{\mu\nu}\, n^\mu\, n^\nu\,,
\ee
where $R$ is the Ricci scalar of the $(n-1)$-dimensional spatial metric
$g_{ij}$ and $K_{ij}$ is the second fundamental form.
If we consider time-symmetric data on the initial surface, which we take
to be at $t=0$, then
$K_{ij}=0$ and $N^i=0$ on this surface, and the Hamiltonian constraint becomes
\be
R = 2 N^{-2}\, \hat T_{00}\,.\label{Hamcon}
\ee
Note that the momentum constraint will simply be $\hat T_{0i}=0$.
Following a procedure introduced by Lichnerowicz \cite{lichnerowicz},
we may seek solutions to the Hamiltonian constraint by considering the
case where the metric $g_{ij}$ is conformally related to a fixed,
time-independent background metric $\bar g_{ij}$, with $g_{ij}=
\Phi^\alpha\, \bar g_{ij}$. It is straightforward to see that if
we choose $\alpha= 4/(d-2)$ then we shall have
\be
R = \Phi^{-\, \fft{d+2}{d-2}}\, \Big[ -\fft{4(d-1)}{d-2}\, \bar\square +
\bar R\Big]\, \Phi\,,\qquad
g_{ij}= \Phi^{\fft{4}{d-2}}\, \bar g_{ij}\,,\label{lichR}
\ee
where $\bar R$ is the Ricci scalar of the background metric $\bar g_{ij}$,
and $\bar\square$ is the covariant Laplacian $\bar\nabla^i\bar\nabla_i$
in the background metric. Here, and in what follows, we are using
$d$ to denote the number of spatial dimensions, so
\be
d=n-1\,.
\ee
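As an illustrative aside (not part of the original derivation), the identity (\ref{lichR}) can be spot-checked symbolically. The following sketch, assuming sympy and specialising to a flat background with $d=3$, computes the Ricci scalar of $g_{ij}=\Phi^{4/(d-2)}\,\delta_{ij}$ from the Christoffel symbols and compares it with the right-hand side of (\ref{lichR}):
\begin{verbatim}
import sympy as sp

d = 3
x = sp.symbols('x1:4')                         # coordinates x1, x2, x3
Phi = sp.Function('Phi')(*x)
g = Phi**sp.Rational(4, d - 2) * sp.eye(d)     # conformally flat metric
ginv = g.inv()

def Gamma(k, i, j):                            # Christoffel symbols
    return sum(ginv[k, l] * (sp.diff(g[l, i], x[j]) + sp.diff(g[l, j], x[i])
               - sp.diff(g[i, j], x[l])) for l in range(d)) / 2

def Ric(i, j):                                 # Ricci tensor
    return sum(sp.diff(Gamma(k, i, j), x[k]) - sp.diff(Gamma(k, i, k), x[j])
               + sum(Gamma(k, k, l) * Gamma(l, i, j)
                     - Gamma(k, j, l) * Gamma(l, i, k)
                     for l in range(d)) for k in range(d))

R = sum(ginv[i, j] * Ric(i, j) for i in range(d) for j in range(d))
lap = sum(sp.diff(Phi, xi, 2) for xi in x)     # flat Laplacian of Phi
rhs = -sp.Rational(4 * (d - 1), d - 2) * Phi**(-sp.Rational(d + 2, d - 2)) * lap
print(sp.simplify(R - rhs))                    # 0
\end{verbatim}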
\section{Data for Vacuum Einstein Equations}
If we consider the constraint equations for the pure vacuum Einstein
equations then we shall simply have the Hamiltonian constraint $R=0$ on
the initial $t=0$ surface which, from (\ref{lichR}), will give the linear
equation
\be
-\bar\square\Phi + \fft{d-2}{4(d-1)}\,\bar R\, \Phi=0\label{Philin}
\ee
for $\Phi$. Any solution of this equation will give rise to consistent
time-symmetric initial data for the vacuum Einstein equations. Because
the equation is linear in $\Phi$, one can of course superpose solutions.
In the bulk of this paper where we consider various matter couplings
to gravity we shall study the simplest case
where the background metric is just taken to be flat, with $\bar g_{ij}=
\delta_{ij}$. Before doing so, in this section we shall also make some
observations
about the vacuum Einstein case with more complicated curved background metrics.
\subsection{Vacuum data with flat $\bar g_{ij}$}\label{vacuumflatsec}
The simplest choice for the background metric $\bar g_{ij}$ in (\ref{lichR})
is to take it to be flat, with $\bar g_{ij}=\delta_{ij}$. We then get
vacuum initial data by taking $\Phi$ to be any harmonic function in the
flat metric, obeying
\be
\del_i\del_i\, \Phi=0\,.
\ee
We may therefore take $\Phi$ to be of the form
\be
\Phi = 1 + \ft12 \sum_{n=1}^N \fft{M_n}{|\bx - \bx_n|^{d-2}}\,,
\ee
where $\bx$ denotes the $d$-vector $\bx=(x^1,x^2,\cdots ,x^d)$.
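One can confirm numerically that each centre contributes a harmonic term; the following finite-difference check (an illustration only, with an arbitrary point, mass and step size) takes $d=4$:
\begin{verbatim}
import numpy as np

d, M, h = 4, 1.0, 1e-3

def Phi(x):
    return 1.0 + M / (2.0 * np.linalg.norm(x)**(d - 2))

x0 = np.array([0.7, -0.4, 1.1, 0.2])     # any point away from the centre
lap = 0.0
for i in range(d):
    e = np.zeros(d); e[i] = h
    lap += (Phi(x0 + e) - 2.0 * Phi(x0) + Phi(x0 - e)) / h**2
print(lap)                               # ~ 0, up to O(h^2) error
\end{verbatim}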
In general, the case with $N$ centres corresponds to initial data for
a system of $N$ black holes; this would, of course, evolve as a time-dependent
solution, which one could study numerically, but which one could not expect
to solve explicitly. The $N=1$ case with a single singularity, however,
simply gives the initial data for the
$(d+1)$-dimensional Schwarzschild solution. Taking the singularity,
without loss of generality, to be at the origin (so $\bx_1=0$), and
taking $M_1=M$, then in terms of hyperspherical polar coordinates
in the Euclidean $d$-space we have
\bea
g_{ij}\, dx^i dx^j &=& \Phi^{\fft4{d-2}}\, \Big(d\rho^2 + \rho^2\,
d\Omega_{d-1}^2\Big)\,,\nn\\
\Phi &=& 1 +\fft{M}{2 \rho^{d-2}}\,,\label{schwid}
\eea
where $\rho^2 = x^i x^i$ and $d\Omega_{d-1}^2$ is the metric on the unit
$(d-1)$-sphere. To see how this corresponds to the initial data for
the Schwarzschild solution, we observe that the metric in (\ref{schwid})
can be written in the standard $d$-dimensional Schwarzschild form
\be
g_{ij}\, dx^i dx^j = \Big(1-\fft{2M}{r^{d-2}}\Big)^{-1}\, dr^2 +
r^2\, d\Omega_{d-1}^2
\ee
if
\be
r= \rho\, \Big(1+ \fft{M}{2\rho^{d-2}}\Big)^{\fft{2}{d-2}}\quad \hbox{and}\quad
\Big(1-\fft{2M}{r^{d-2}}\Big)^{-1}\, dr^2=
\Big(1 +\fft{M}{2 \rho^{d-2}}\Big)^{\fft{4}{d-2}}\, d\rho^2\,.\label{rrhorel}
\ee
A straightforward calculation shows that indeed if $r$ is given in terms of
$\rho$ by the first equation in (\ref{rrhorel}), then the second equation is
satisfied too. Note that the first equation can be inverted to give
$\rho$ as a function of $r$, with the result
\be
\rho = r\,
\Big[\ft12 \Big(1 + \sqrt{1-\fft{2M}{r^{d-2}}}\Big)\Big]^{\fft2{d-2}}\,.
\ee
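For example, in four spacetime dimensions ($d=3$) this gives the familiar
isotropic-coordinate relation
\be
\rho= \ft14\, \big(\sqrt{r} + \sqrt{r-2M}\,\big)^2\,.
\ee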
\subsection{Vacuum data with non-flat background $\bar g_{ij}$}
In the case of four spacetime dimensions, the Lichnerowicz procedure
has been used in a variety of applications, with the background metric
$\bar g_{ij}$ being taken to be either flat, or else the standard metric
on the unit 3-sphere or on $S^1\times S^2$. These latter cases have been
used to construct initial data for black holes in closed universes, or for
wormholes. In higher dimensions the possibilities for the choice of the
background metric $\bar g_{ij}$ are more diverse, and there are many cases
where one can solve explicitly for $\Phi$ on the initial $t=0$ surface.
(Of course, it does not necessarily mean that the data will evolve into
desirable solutions, but it does provide interesting cases for further
investigation.) In what follows, we present some simple examples of
curved background metrics.
\subsubsection{Unit $d$-sphere background}
To illustrate some of the possibilities in higher dimensions, let us first
consider the case when the background metric $\bar g_{ij}$ describes the
unit $d$-sphere. There are many ways that this background metric can be
written; here, we shall consider the cases
\be
\bar g_{ij}\, dx^i\, dx^j = d\bar s^2 = d\mu^2 + \sin^2\mu\, d\Omega_p^2 +
\cos^2\mu\, d\widetilde \Omega_q^2\,,\qquad p+q=d-1\,,\label{dsphere}
\ee
where $d\Omega_p^2$ and $d\widetilde \Omega_q^2$ are unit metrics on
a $p$-sphere and $q$-sphere respectively. The ``latitude'' coordinate
$\mu$ ranges over $0\le\mu\le\ft12\pi$, except when $q=0$, when it ranges
over $0\le\mu\le \pi$, and when $p=0$, when it ranges over
$-\ft12\pi\le\mu\le\ft12\pi$.
The Ricci scalar of the unit $d$-sphere is given
by $\bar R= d\, (d-1) $, and so the
equation (\ref{Philin}) for $\Phi$ is the Helmholtz equation\footnote{Note
that the hyperspherical harmonics on the unit $d$-sphere obey
$-\bar\square Y =\lambda Y$ with eigenvalues $\lambda=\ell\,(\ell+d-1)$
and $\ell=0,1,2,\ldots$, and since none of these eigenvalues coincide
with the eigenvalue in (\ref{helmholtz}) (which is in fact negative),
the solutions for $\Phi$ that we are
seeking will necessarily have singularities on the sphere.}
\be
-\bar\square\Phi + \ft14 d\, (d-2)\, \Phi=0\,.\label{helmholtz}
\ee
A simple ansatz for solving this explicitly in the (\ref{dsphere}) metrics
is to assume $\Phi$ is a function only of the latitude coordinate $\mu$, and
so (\ref{helmholtz}) becomes
\be
\Phi'' + (p\, \cot\mu - q\, \tan\mu)\, \Phi' -\ft14(p+q+1)(p+q-1)\, \Phi=0\,,
\label{pqeqn}
\ee
where a prime denotes a derivative with respect to $\mu$.
If we consider the simplest case where $q=0$ and hence $d=p+1$, the unit
$d$-sphere is viewed as a foliation by $(d-1)$-spheres. The solution
to (\ref{pqeqn}) can be written as
\be
\Phi = \fft{c_1}{(\cos\ft12\mu)^{p-1}} + \fft{c_2}{(\sin\ft12\mu)^{p-1}}\,,
\ee
where $c_1$ and $c_2$ are arbitrary constants. The first term has a
singularity at the north pole, and the second term is singular at the
south pole. If we choose $c_1=c_2= \sqrt{\fft{M}{2}}\, 2^{-\ft12 (p-1)}$, so that
\be
\Phi= \sqrt{\fft{M}{2}}\, 2^{-\ft12 (p-1)}\, \Big[\fft1{(\cos\ft12\mu)^{p-1}} +
\fft1{(\sin\ft12\mu)^{p-1}}\Big]\,,\label{schwid2}
\ee
then after defining a new radial variable $r$ by letting
\be
r^{\ft12 (p-1)} = \sqrt{\fft{M}{2}}\, \Big[ (\tan\ft12\mu)^{\ft12(p-1)}
+ (\cot\ft12\mu)^{\ft12(p-1)}\Big]\,,
\ee
the spatial $d$-metric, given as in (\ref{lichR}), becomes
\be
ds^2= \Phi^{\ft{4}{d-2}}\, (d\mu^2 + \sin^2\mu\, d\Omega_{d-1}^2) =
\Big( 1- \fft{2M}{r^{d-2}}\Big)^{-1}\, dr^2 +
r^2\, d\Omega_{d-1}^2\,.
\ee
This can be recognised as the time-symmetric initial data for the
$(d+1)$-dimensional generalisation of the Schwarzschild black hole.
The horizon of the black hole corresponds to the equator, $\mu=\ft12\pi$.
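Indeed, setting $\mu=\ft12\pi$ in the definition of $r$ gives
$r^{\ft12(p-1)}=\sqrt{2M}$, i.e.\ $r^{d-2}=2M$, which is precisely the
horizon radius of this metric.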
Of course since the metric on a round $d$-sphere is conformally related
to the flat Euclidean $d$-metric in any dimension, we can straightforwardly
relate the
initial data we constructed here to the previous initial data for the
Schwarzschild black hole that we constructed in section \ref{vacuumflatsec}
using a flat background metric. The Euclidean and sphere metrics are
related by
\be
d\rho^2 + \rho^2\, d\Omega_{d-1}^2 = \Omega^2\, \big( d\mu^2 +
\sin^2\mu\, d\Omega_{d-1}^2\big)
\ee
where
\be
\Omega= \fft{c^2}{\cos^2\ft12\mu}\,, \qquad \rho = 2 c^2 \tan\ft12\mu\,,
\ee
with $c$ an arbitrary constant,
and using this one can easily verify that the $\Phi$ functions given in
(\ref{schwid}) and (\ref{schwid2}) for the flat and the spherical
background metrics are related by
\be
\Phi_{\rm sphere} = \Phi_{\rm flat}\, \Omega^{\fft{d-2}{2}}\,.
\ee
In the manner described, for example, in \cite{LW,clifton}, one could take
a superposition of solutions for $\Phi$ like the one in (\ref{schwid2}),
but expressed
in terms of a rotated choice of polar axes for the $d$-sphere. By this means,
one could set time-symmetric initial data for multiple black holes.
We can also consider the solutions of the equation (\ref{pqeqn}) for $\Phi$
in the case where $p$ and $q$ are both non-zero. In this description
the $d$-sphere, with $d=p+q+1$, is foliated by $S^p\times S^q$ surfaces. The
solutions are given in terms of hypergeometric functions by
\bea
\Phi &=&
c_1 \, F\Big[\fft{p+q-1}{4},\fft{p+q+1}{4},\fft{q+1}{2}; \cos^2\mu\Big]
\nn\\
&&
+ c_2\, (\cos\mu)^{1-q}\, F\Big[\fft{p-q+1}{4},\fft{p-q+3}{4}, \fft{3-q}{2};
\cos^2\mu\Big]\,.
\eea
The explicit solutions are of varying complexity depending on the choice of
$p$ and $q$. A fairly simple example is when $p=q=2$. In this case,
the solution to (\ref{pqeqn}) for $\Phi$ for this metric on the unit
5-sphere is given by
\be
\Phi= \fft1{\cos\mu}\, \Big(\fft{a_1}{\sin\ft12\mu} + \fft{a_2}{\cos\ft12\mu}
\Big)\,.\label{bhid}
\ee
This has a simple pole singularity in a 3-plane times a 2-sphere surface
at $\mu=0$ and a simple pole singularity on another 3-plane times 2-sphere
surface at $\mu=\ft12\pi$. (Recall here the 5-sphere is spanned by
$0\le\mu\le\ft12\pi$, with a foliation of different 2-spheres
contracting onto the origin of a 3-plane at each endpoint.) The
initial data described by (\ref{bhid}) would correspond to taking
certain continuous superpositions of elementary mass-point initial data of
the kind we discussed previously.
\subsubsection{$\CP^2$ background}
Further possibilities for background metrics that could give
explicitly solvable time-symmetric initial data include taking $\bar g_{ij}$
to be a metric on a product of spheres, or else taking a metric on
a complex projective space or products of these, possibly with spheres as
well. As a simple example, consider the case $d=4$ with $\bar g_{ij}$
taken to be the Fubini-Study metric on $\CP^2$. The metric
\be
d\bar s^2= d\mu^2 + \ft14 \sin^2\mu\, \cos^2\mu\, (d\psi+\cos\theta\, d\phi)^2
+ \ft14 \sin^2\mu\, (d\theta^2 + \sin^2\theta\, d\phi^2)\label{cp2met}
\ee
is Einstein with $\bar R_{ij}= 6 \bar g_{ij}$ and hence $\bar R=24$.
From (\ref{Philin}) we have $-\bar\square\Phi + 4\Phi=0$, and so if we assume
$\Phi$ depends only on $\mu$ we have
\be
\Phi'' + (3\cot\mu-\tan\mu)\, \Phi' - 4\Phi=0\,,
\ee
for which the general solution is
\be
\Phi= \fft{c_1}{\sin^2\mu} + \fft{c_2\, \log\cos\mu}{\sin^2\mu}\,.
\label{cp2Phi}
\ee
The first term exhibits the leading-order $1/\mu^2$ behaviour of a point
charge at the NUT at $\mu=0$, while the second term has the characteristic
$\log(\ft12\pi-\mu)$ behaviour of a charge distributed over the bolt at
$\mu=\ft12\pi$. It would be interesting to investigate what this
time-symmetric initial data describes in this, and other cases.
\section{Einstein-Maxwell Equations}\label{EMsec}
Consider the Einstein-Maxwell system in $n=(d+1)$ dimensions, described
by the action
\be
I= \int d^nx \sqrt{-\hat g} (\hat R - \hat F^2)\,,
\ee
where $\hat F^2= \hat g^{\mu\rho}\,\hat g^{\nu\sigma}\,
\hat F_{\mu\nu}\, \hat F_{\rho\sigma}$,
for which the equations of motion are
\be
\hat R_{\mu\nu} -\ft12\hat R\, \hat g_{\mu\nu} = 2(\hat F_{\mu\rho}\,
\hat F_{\nu\sigma} \hat g^{\rho\sigma} -\ft14 \hat F^2\, \hat g_{\mu\nu})\,,
\qquad \hat \nabla_\mu \hat F^{\mu\nu}=0\,.\label{einstmaxeqn}
\ee
We shall consider the evolution of purely electric time-symmetric
initial data, for which only the components of $\hat F_{\mu\nu}$ specified
by
\be
n^\mu \, \hat F_{\mu i}= -E_i
\ee
are non-zero. Projecting the Einstein equations in (\ref{einstmaxeqn}) with
$n^\mu n^\nu$ then gives the Hamiltonian constraint
\be
R= 2 E^2\,,\label{Hamconeinmax}
\ee
where $E^2 = g^{ij}\, E_i\, E_j$. We also have the Gauss law constraint
$\nabla_i E^i=0$.
Generalising a discussion of Misner and
Wheeler \cite{MisnerWheeler} to the case of general dimensions, we may seek
a solution of the constraint equations, with $g_{ij}$ given as in
(\ref{lichR}) and $\bar g_{ij}=\delta_{ij}$ as before, in the form
\be
\Phi= (CD)^\alpha\,,\qquad E_i= \beta\, \del_i\log\fft{C}{D}\,.\label{PhiE}
\ee
The aim is to choose the constants $\alpha$ and $\beta$ appropriately, such
that the constraint equations will be satisfied if $C$ and $D$ are arbitrary
harmonic functions in the flat background Euclidean metric $\delta_{ij}$.
Substituting $R$ from (\ref{lichR}) (with $\bar R=0$ since we are taking a
flat background metric $\bar g_{ij}=\delta_{ij}$) into (\ref{Hamconeinmax}),
together with (\ref{PhiE}), it is straightforward to see that if we
choose
\be
\alpha = \fft12\,,\qquad \beta = \sqrt{\fft{d-1}{2(d-2)}}\,,
\ee
then the Hamiltonian constraint is indeed satisfied if $C$ and $D$ are
arbitrary harmonic functions in the Euclidean background metric $\bar g_{ij}=
\delta_{ij}$, i.e.
\be
\del_i\del_i C=0\,,\qquad \del_i\del_i D=0\,.\label{CDharm}
\ee
Furthermore, the Gauss law constraint $\nabla_i E^i=0$ is also satisfied
subject to (\ref{CDharm}).
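In verifying the Hamiltonian constraint, the essential point is the identity,
valid for any functions $C$ and $D$ harmonic in the flat metric,
\be
\del_i\del_i\, (CD)^{\ft12} = -\ft14\, (CD)^{\ft12}\,
\Big(\del_i \log\fft{C}{D}\Big)^2\,,
\ee
from which (\ref{Hamconeinmax}) follows directly with the stated values of
$\alpha$ and $\beta$.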
If we take the case where $C$ and $D$ both have a single pole at the
origin, then in hyperspherical polar coordinates we can take
\be
C= 1 +\fft{M-Q}{2\rho^{d-2}}\,,\qquad D= 1 + \fft{M+Q}{2\rho^{d-2}}\,.
\ee
The spatial metric takes the form
\be
g_{ij} dx^i dx^j = (C D)^{\fft{2}{d-2}}\, (d\rho^2 + \rho^2 d\Omega_{d-1}^2)\,,
\ee
and if we define the area coordinate $r=\rho\, (CD)^{1/(d-2)}$ this becomes
\be
g_{ij} dx^i dx^j = \Big(1- \fft{2M}{r^{d-2}} + \fft{Q^2}{r^{2d-4}}\Big)^{-1}\,
dr^2 + r^2 d\Omega_{d-1}^2\,,
\ee
which can be recognised as the spatial part of the $(d+1)$-dimensional
Reissner-Nordstr\"om metric.
\section{Einstein-Maxwell-Dilaton System}\label{EMDsec}
A rather general class of theories that are relevant in string theory
are encompassed by the Einstein-Maxwell-Dilaton (EMD) system in $n=d+1$
dimensions, described by the Lagrangian
\be
{\cal L}= \sqrt{-\hat g}\, (\hat R -\ft12 \hat g^{\mu\nu}\, \del_\mu\phi
\del_\nu\phi - \ft14 e^{a\phi}\, \hat F^2)\,,
\ee
where $a$ is an arbitrary constant.
The Hamiltonian and Gauss law constraints will be
\bea
R &=&\ft12 g^{ij}\del_i\phi\, \del_j\phi + \ft12 e^{a\phi}\, g^{ij}\,
E_i E_j\,,\nn\\
0&=& \nabla_i(e^{a\phi}\, g^{ij}\, E_j)\,.\label{emdcons}
\eea
As usual, we shall consider a flat Euclidean background metric, with
$g_{ij}= \Phi^{4/(d-2)}\, \delta_{ij}$.
Upon using (\ref{lichR}) the Hamiltonian constraint becomes
\be
\fft{4(d-1)}{(d-2)}\, \del_i\del_i \Phi +
\ft12 \Phi\, \Big[ \del_i\phi\, \del_i\phi + e^{a\phi}\, E_i E_i\Big]=0\,.
\ee
It is straightforward to verify that we can solve the Hamiltonian and Gauss
law constraints by introducing three arbitrary harmonic functions
$C$, $D$ and $W$ in the Euclidean $d$-space, in terms of which we write
\bea
\Phi &=& (CD)^{\fft{(d-2)}{(d-1)\, \Delta}}\,W^{\fft{a^2}{\Delta}}\,,\nn\\
e^{a\phi} &=& \Big(\fft{CD}{W^2}\Big)^{\fft{2a^2}{\Delta}}\,,\nn\\
E_i &=& \fft{2}{\sqrt\Delta}\, \Big(\fft{CD}{W^2}\Big)^{-\fft{a^2}{\Delta}}\,
\del_i\log\fft{C}{D}\,,\label{emddata}
\eea
where we have introduced the parameter $\Delta$, as in \cite{lupopemaximal},
which is related to $a$ by the expression
\be
a^2 = \Delta - \fft{2(d-2)}{d-1}\,.\label{Deltaparam}
\ee
\subsection{Non-extremal black hole}
The solution for a static non-extremal black hole in the EMD theory
with arbitrary dilaton coupling $a$ in $n=d+1$ dimensions
was constructed in \cite{emdbh}, and is given by
\bea
ds^2 &=& - h \, f^{-2(d-2)}\, dt^2 + f^2\, h^{-1}\, dr^2 +
r^2\, f^2\, d\Omega_{d-1}^2\,,\nn\\
h&=& 1 - \Big(\fft{r_H}{r}\Big)^{d-2}\,,\qquad
f= \Big(1 + \fft{\alpha}{r^{d-2}}\Big)^{\ft{2}{(d-1)\Delta}}\,,\nn\\
e^{a\phi} &=& \Big(1 + \fft{\alpha}{r^{d-2}}\Big)^{\ft{2 a^2}{\Delta}}\,,
\qquad
A = \fft2{\sqrt\Delta}\, \sqrt{1+\fft{r_H^{d-2}}{\alpha}}\,
\Big(1+ \fft{\alpha}{r^{d-2}}\Big)^{-1}\, dt
\,,\label{emdbh}
\eea
where $\Delta$ is defined in (\ref{Deltaparam}).
Defining a new radial coordinate $\rho$ by
\be
r^{d-2}= \rho^{d-2}\, \Big(1+\fft{uv}{\rho^{d-2}}\Big)^2\,,
\ee
where $u$ and $v$ are constants related to the horizon radius $r_H$ and
the parameter $\alpha$ by
\be
r_H^{d-2}= 4 u v\,,\qquad \alpha = (u-v)^2\,,
\ee
a straightforward calculation shows that the metric (\ref{emdbh}) becomes
\be
ds^2 = -N^2\, dt^2 + \Phi^{\ft{4}{d-2}}\,
(d\rho^2 + \rho^2\, d\Omega_{d-1}^2)\,,\label{staticmet}
\ee
with
\bea
\Phi &=& \Big[ \Big(1 + \fft{u^2}{\rho^{d-2}}\Big)
\Big(1 + \fft{v^2}{\rho^{d-2}}\Big)\Big]^{\ft{d-2}{(d-1)\Delta}}
\, \Big(1 + \fft{uv}{\rho^{d-2}}\Big)^{\ft{a^2}{\Delta}}\,,\nn\\
N &=& \Big(1 - \fft{uv}{\rho^{d-2}}\Big)\,
\Big[ \Big(1 + \fft{u^2}{\rho^{d-2}}\Big)
\Big(1 + \fft{v^2}{\rho^{d-2}}\Big)
\Big]^{-\fft{2(d-2)}{(d-1)\Delta}}\,
\Big(1 + \fft{uv}{\rho^{d-2}}\Big)^{\ft{4(d-2)}{(d-1)\Delta} -1}\,.
\eea
Comparing with (\ref{emddata}), we see that the static non-extremal
black hole is generated by starting from the initial data in which the
harmonic functions $C$, $D$ and $W$ are taken to be
\be
C=1 +\fft{u^2}{\rho^{d-2}}\,,\qquad
D=1 +\fft{v^2}{\rho^{d-2}}\,,\qquad
W=1 +\fft{uv}{\rho^{d-2}}\,.
\ee
If one takes more general solutions for the initial data, with $C$,
$D$ and $W$ having singularities at multiple locations, the
evolution would give rise to time-dependent solutions that could be
constructed only numerically. However, if one takes very specific
initial data with multiple singularities, it can give rise to static
solutions. This will happen in the case of initial data for the
multi-centre extremal black holes, discussed below.
\subsection{Multi-centre extremal black holes}
The static multi-centre extremal black holes in $n=d+1$ dimensions
are given by
\bea
ds^2 &=& - C^{-\ft{4(d-2)}{(d-1)\Delta}}\, dt^2 + C^{\ft{4}{(d-1)\Delta}}\,
dy^i\, dy^i\,,\nn\\
A&=& \fft{2}{\sqrt{\Delta}}\, C^{-1}\, dt\,,\qquad
e^{a\phi} = C^{\ft{2 a^2}{\Delta}}\,,
\eea
where $C$ is an arbitrary harmonic function in the $d$-dimensional
Euclidean space with metric $dy^i\, dy^i$. Comparison with (\ref{emddata})
shows that indeed the harmonic function $C$ provides the initial data
for these solutions, with $D=W=1$.
\section{Einstein-Two-Maxwell-Dilaton System}\label{E2MDsec}
An extension of the Einstein-Maxwell-Dilaton system containing two
Maxwell fields, with just one dilaton, is of considerable interest.
The theory, which we shall refer to by the acronym E2MD, has the
Lagrangian
\be
{\cal L}= \sqrt{-g}\, \Big(R - \ft12 (\del\phi)^2
-\ft14 e^{a\phi}\, F_1^2 -\ft14 e^{b\phi}\, F_2^2\Big)\,.
\ee
The theory and its black hole solutions were studied extensively in
$n=d+1$ dimensions in \cite{hong2field}. It is convenient to
parameterise the dilaton coupling constants $a$ and $b$ as
\be
a^2= \fft{4}{N_1} - \fft{2(d-2)}{d-1}\,,\qquad
b^2= \fft{4}{N_2} - \fft{2(d-2)}{d-1}\,.
\ee
It was found that while
black hole solutions with both field strengths carrying charge cannot be
found explicitly for general values of $a$ and $b$, they can be obtained
if
\be
a b = -\fft{2(d-2)}{d-1}\,,\label{abrel}
\ee
and we shall assume this from now on. This condition also implies
\be
a N_1 + b N_2=0\,,\qquad N_1 + N_2= \fft{2(d-1)}{d-2}\,.
\ee
The Hamiltonian and Gauss law constraints will be
\bea
R &=&\ft12 g^{ij}\del_i\phi\, \del_j\phi + \ft12 e^{a\phi}\, g^{ij}\,
E^1_i E^1_j + \ft12 e^{b\phi}\, g^{ij}\,
E^2_i E^2_j\,,\label{E2MDHam0}\\
0&=& \nabla_i(e^{a\phi}\, g^{ij}\, E^1_j)\,,\qquad
0=\nabla_i(e^{b\phi}\, g^{ij}\, E^2_j)\label{E2MDGauss}\,.
\eea
We shall again consider a flat Euclidean background metric, with
$g_{ij}= \Phi^{4/(d-2)}\, \delta_{ij}$.
Upon using (\ref{lichR}) the Hamiltonian constraint (\ref{E2MDHam0}) becomes
\be
\fft{4(d-1)}{(d-2)}\, \del_i\del_i \Phi +
\ft12 \Phi\, \Big[ \del_i\phi\, \del_i\phi + e^{a\phi}\, E^1_i E^1_i
+ e^{b\phi}\, E^2_i E^2_i \Big]=0\,.\label{E2MDHam}
\ee
It is now a straightforward exercise to make an appropriate ansatz for
solving the Hamiltonian and Gauss law constraints in terms of harmonic
functions, and then to solve for the various exponents in the ansatz in
order to satisfy the constraint equations. Motivated by the form of the ansatz
that was employed in four dimensions in \cite{cvegibpop}, we have
made an ansatz here involving four harmonic functions, $C_1$, $D_1$, $C_2$
and $D_2$, and we find we can solve the Gauss law constraints
(\ref{E2MDGauss}) and the Hamiltonian constraint (\ref{E2MDHam}) by
writing
\bea
\Phi &=& (C_1\, D_1)^{\ft{(d-2)\,N_1}{4(d-1)}}\,
(C_2\, D_2)^{\ft{(d-2)\,N_2}{4(d-1)}}\,,\nn\\
e^\phi&=& (C_1\, D_1)^{\ft12 a N_1}\, (C_2\, D_2)^{\ft12 b N_2}=
\Big(\fft{C_1\, D_1}{C_2\, D_2}\Big)^{\ft12 a N_1}\,,\nn\\
E^1_i &=& \sqrt{N_1}\, \Big(\fft{C_1\, D_1}{C_2\, D_2}\Big)^{
-\ft{(d-2)\, N_2}{2(d-1)}}\, \del_i\log\fft{C_1}{D_1}\,,\nn\\
E^2_i &=& \sqrt{N_2}\, \Big(\fft{C_2\, D_2}{C_1\, D_1}\Big)^{
-\ft{(d-2)\, N_1}{2(d-1)}}\, \del_i\log\fft{C_2}{D_2}\,.\label{datasol}
\eea
\subsection{Static non-extremal black hole}
The spherically-symmetric non-extremal black hole solutions in the
E2MD theory, when $a$ and $b$ obey the relation (\ref{abrel}), can be
found in \cite{hong2field}. They are given by
\begin{eqnarray}
ds^2 &=& -(H_1^{N_1} H_2^{N_2})^{-\fft{(d-2)}{d-1}} f dt^2 +
(H_1^{N_1} H_2^{N_2})^{\fft{1}{d-1}} (f^{-1} dr^2 + r^2 d\Omega_{d-1}^2)\,,\cr
A_1&=& \fft{\sqrt{N_1}\,c_1}{s_1}\, H_1^{-1} dt\,,\qquad
A_2 = \fft{\sqrt{N_2}\,c_2}{s_2}\, H_2^{-1} dt\,,\cr
\phi &=& \ft12 N_1\, a \log H_1 + \ft12 N_2 \,b \log H_2\,,\qquad
f=1 - \fft{\mu}{r^{d-2}}\,,\cr
H_1&=&1 + \fft{\mu \, s_1^2}{r^{d-2}}\,,\qquad
H_2 = 1 + \fft{\mu \, s_2^2}{r^{d-2}}\,,\label{solution1}
\end{eqnarray}
where we are using a standard notation where $s_i=\sinh\delta_i$ and
$c_i=\cosh\delta_i$. If we make the coordinate transformation
\be
r^{d-2}= \rho^{d-2}\, \Big(1+ \fft{\mu}{4\rho^{d-2}}\Big)^2\,,
\ee
the metric can be cast into the standard static form (\ref{staticmet}),
with $\Phi$ and $\phi$
given as in (\ref{datasol}), where the harmonic functions
take the specific forms
\be
C_1= 1+\fft{\mu\, e^{2\delta_1}}{4\rho^{d-2}}\,,\quad
D_1= 1+\fft{\mu\, e^{-2\delta_1}}{4\rho^{d-2}}\,,\quad
C_2= 1+\fft{\mu\, e^{2\delta_2}}{4\rho^{d-2}}\,,\quad
D_2= 1+\fft{\mu\, e^{-2\delta_2}}{4\rho^{d-2}}\,. \label{specialCD}
\ee
The metric function $g_{00}=-N^2$ is given by
\be
N^2 = \Big(1-\fft{\mu^2}{16\rho^{2(d-2)}}\Big)^2\,
\Big[(C_1\, D_1)^{N_1}\, (C_2\, D_2)^{N_2}\Big]^{-\ft{d-2}{d-1}}\,.
\ee
After some calculation, one can verify that the field strengths
in this non-extremal solution indeed imply that $E_i^1$ and $E_i^2$
in the initial-value data are consistent with the expressions we found
in (\ref{datasol}) for the general ansatz with four independent
harmonic functions. Thus we conclude that in the special case
where the four harmonic functions take the particular form given in
(\ref{specialCD}), they give rise to the initial data for the
non-extremal black hole (\ref{solution1}).
\section{Gravity with $p$ Dilatons and $q$ Maxwell Fields}\label{multiemdsec}
\subsection{The theories}
A general class of theories that encompasses the relevant bosonic
sectors of various supergravities is provided by considering the Lagrangian
\be
{\cal L}= \sqrt{-g}\, \Big( R - \ft12\sum_{\alpha=1}^p (\del\phi_\alpha)^2
- \ft14 \sum_{I=1}^q X_I^{-2}\,
F_{(I)}^2\Big)\,, \qquad X_I= e^{-\ft12 \vec a_I\cdot\vec \phi}
\label{genpq}
\ee
in $n=d+1$ spacetime dimensions,
where $\vec\phi=(\phi_1,\phi_2,\cdots, \phi_p)$ is the $p$-vector of
dilaton fields, and $\vec a_I$ is a set of $q$ constant dilaton $p$-vectors
that characterise the couplings of the dilatons to the $q$ Maxwell
fields $F_{(I)}$. We can obtain multi-centre BPS black hole solutions,
and spherically-symmetric static non-extremal black hole solutions, whenever
the dilaton vectors obey the relations \cite{lupomultiscalar}
\be
\vec a_I\cdot\vec a_J = 4 \delta_{IJ} - \fft{2(d-2)}{d-1}\,.\label{adotrel}
\ee
We shall assume the dilaton vectors obey this relation from now on.
For a given dimension and a given number $p$ of dilaton fields, the most
general theory of the form (\ref{genpq}) will correspond to the case where
$q$ is chosen to be as large as possible, subject to the set of
dilaton vectors $\vec a_I$ obeying (\ref{adotrel}). Obviously,
one can always find $p$ such $p$-vectors. To see this, let $\vec e_I$
be an orthonormal basis in $\R^p$, where the vector $\vec e_I$ has an
entry 1 at the $I$th position, with all other components zero. Thus
$\vec e_I\cdot\vec e_J=\delta_{IJ}$. If we define
$\vec e \equiv \sum_I \vec e_I$ then clearly the vectors
\be
\vec a_I = \alpha \, \vec e_I + \beta\, \vec e
\ee
will obey the relations
\be
\vec a_I\cdot\vec a_J= \alpha^2\, \delta_{IJ} + 2\alpha\beta + p \beta^2\,.
\ee
Solving for $\alpha$ and $\beta$ such that this reproduces (\ref{adotrel}),
we find
\be
\alpha=2\,,\qquad \beta = -\fft{2}{p} \pm \fft{2}{p}\,
\sqrt{1-\fft{(d-2)p}{2(d-1)}}\,.
\ee
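Indeed, with $\alpha=2$ both the diagonal and off-diagonal components of
(\ref{adotrel}) reduce to the single quadratic condition
\be
p\,\beta^2 + 4\beta + \fft{2(d-2)}{d-1}=0\,,
\ee
whose roots are the two values of $\beta$ given above.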
The only further question
is whether one can find a set of more than these $p$ such vectors,
that all obey (\ref{adotrel}). Any additional dilaton vector
or vectors, over and above the $p$ already constructed above,
would necessarily have to be a
linear combination of the first $p$ dilaton vectors. Let us suppose
that such an additional dilaton vector $\vec u$ existed, over and above
the $p$ dilaton vectors $\vec a_I$ with $1\le I\le p$. Thus we must
have
\be
\vec u = \sum_{I=1}^p c_I\, \vec a_I\,,
\ee
where $c_I$ are some constants. The $\vec a_I$ satisfy (\ref{adotrel}),
and we must also require
\be
\vec u\cdot\vec u= 4-\fft{2(d-2)}{d-1}\,,\qquad
\vec u\cdot \vec a_I = -\fft{2(d-2)}{d-1}\,.
\ee
Using (\ref{adotrel}) we easily see that these conditions imply
\be
c_I = -1\,,\qquad p=\fft{d}{d-2}\,,
\ee
and so we only have solutions with integer $p$ if
$(d,p)=(3,3)$ or $(d,p)=(4,2)$.
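In deriving this, it is helpful to note that the conditions on
$\vec u\cdot\vec a_J$ give
\be
4 c_J = \fft{2(d-2)}{d-1}\, \Big(\sum_{I=1}^p c_I - 1\Big)\,,
\ee
whose right-hand side is independent of $J$, so all the $c_J$ must be equal;
the normalisation of $\vec u\cdot\vec u$ then fixes $c_J=-1$ and $p=d/(d-2)$.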
Thus in four spacetime dimensions we can have a theory of the type
(\ref{genpq}), with three dilatons and four Maxwell fields. This corresponds
to the bosonic subsector of four-dimensional STU supergravity in which
the three additional axionic scalars are set to zero. In five spacetime
dimensions we can have a theory of the type (\ref{genpq}) with two dilatons
and three Maxwell fields. This corresponds to a bosonic subsector of
five-dimensional STU supergravity. In all other cases, the
requirement that the dilaton vectors in the Lagrangian (\ref{genpq}) obey
the relations (\ref{adotrel}) restricts us to having only $p$ Maxwell fields
when there are $p$ dilatonic scalar fields.
\subsection{Ansatz for initial-value constraints}
The time-symmetric initial-value constraints for the theory (\ref{genpq})
are
\bea
R &=& \ft12 g^{ij}\, \del_i\vec\phi\cdot \del_j\vec\phi +
\ft12 g^{ij}\, \sum_{I=1}^q X_I^{-2}\,E^I_i\, E^I_j\,,\label{Hamilgen}
\\
0 &=& \nabla_i( g^{ij}\, X_I^{-2}\, E^I_j)\,.\label{Gaussgen}
\eea
We shall, as usual, use a flat background $d$-metric, and so we write
\be
g_{ij}= \Phi^{\ft{4}{d-2}}\, \delta_{ij}\,.
\ee
The ansatz for the initial-value data for four-dimensional STU supergravity was
discussed in \cite{cvegibpop}; it involved a total of eight arbitrary
harmonic functions (two for each of the four Maxwell fields). Multi-centre
BPS black holes were constructed in arbitrary dimensions
for the theories described by (\ref{genpq}),
with dilaton vectors obeying (\ref{adotrel}), in \cite{lupotrxu}, and these
provide a useful guide for writing an ansatz for initial-value data
in the general case. One can also construct spherically-symmetric
non-extremal black hole solutions in all dimensions. These
solutions provide further guidance for writing an ansatz for
initial-value data in general dimensions. In particular, we find
that in order to encompass an initial-value formulation for these
non-extremal solutions, we must go beyond the natural-looking generalisation
of the four-dimensional STU supergravity example that would involve $2q$
harmonic functions for the $q$ Maxwell fields. Namely, we must introduce
one further arbitrary harmonic function, which we shall call $W$.
After some experimentation, we are led to
consider the following ansatz for the initial data:
\bea
\Phi &=& \Pi^{\ft{d-2}{4(d-1)}}\, W^\gamma\,,
\quad \vec\phi = \ft12\sum_{I=1}^q
\vec a_I\, \log (C_I\, D_I)-\vec a\, \log W\,,\quad
E^I_i = \fft{\Phi^2}{C_I\, D_I}\, \del_i
\log\fft{C_I}{D_I}\,,\label{IVgen}\nn\\
\Pi&\equiv & \prod_{I=1}^q C_I \, D_I\,,\qquad
\gamma\equiv 1-\fft{q(d-2)}{2(d-1)}\,,\qquad
\vec a \equiv \sum_{I=1}^q\, \vec a_I\,.
\eea
(Note that for four-dimensional STU supergravity we have $d=3$ and
$q=4$, implying $\gamma=0$ and $\vec a=0$, so the additional harmonic function $W$ is absent
in this special case.)
One can easily verify from the definition of $X_I$ in (\ref{genpq}),
and using the relations (\ref{adotrel}), that the ansatz for $\vec\phi$
in (\ref{IVgen}) implies
\be
X_I^{-2}= \Phi^{-4}\, (C_I\, D_I)^2\,,\label{XIsol}
\ee
and then it is easy to see that the Gauss law constraints (\ref{Gaussgen})
are satisfied if the functions $C_I$ and $D_I$ are harmonic in the flat
background metric,
\be
\del_i\del_i\, C_I=0\,,\qquad \del_i\del_i\, D_I=0\,.\label{CIDIharm}
\ee
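In verifying (\ref{XIsol}), it is useful to note that the relations
(\ref{adotrel}), together with the ansatz for $\vec\phi$ in (\ref{IVgen}),
imply
\be
\vec a_I\cdot\vec\phi = 2\log (C_I\, D_I) - \fft{d-2}{d-1}\,\log\Pi
- 4\gamma\, \log W\,,
\ee
where we have used $\vec a_I\cdot\vec a = 4\gamma$.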
Using (\ref{lichR}), we find, upon substituting the ans\"atze (\ref{IVgen})
into the Hamiltonian constraint (\ref{Hamilgen}) that it gives
\be
-\Pi^{-1}\, \del_i\del_i \, \Pi + (\del_i\log\Pi)^2 -
\fft{4\gamma\, (d-1)}{d-2}\, W^{-1}\, \del_i\del_i\, W =
\ft12\sum_{I=1}^q \Big\{ [\del_i\log(C_I\, D_I)]^2 +
[\del_i\log\fft{C_I}{D_I}]^2\Big\}\,.
\ee
After some algebra, we find that this is indeed satisfied if the
functions $W$, $C_I$ and $D_I$ are harmonic, obeying $\del_i\del_i\, W=0$
and (\ref{CIDIharm}). Thus
we have established that (\ref{IVgen}) indeed gives a solution of the
initial-value constraints (\ref{Hamilgen}) and (\ref{Gaussgen}), where
$W$, $C_I$ and $D_I$ are arbitrary harmonic functions in the Euclidean
background metric.
As mentioned earlier, special cases of the theories we are considering
in this section include the gravity, dilaton and Maxwell-field sectors of
four-dimensional STU supergravity (with $(d,p,q)=(3,3,4)$) and five-dimensional
STU supergravity (with $(d,p,q)=(4,2,3)$). In both of these cases the constant
$\gamma$ in (\ref{IVgen}) is zero, and so the harmonic function $W$ does
not arise in the initial-data ansatz. The four-dimensional STU supergravity
case was discussed in \cite{cvegibpop}. Some other special cases also
correspond to the gravity, dilaton and Maxwell-field sectors of supergravities.
These include a six-dimensional case with $(d,p,q)=(5,2,2)$ and
a seven-dimensional case with $(d,p,q)=(6,2,2)$.
\subsection{Extremal multi-centre black holes}
As was discussed in general in \cite{lupotrxu}, these solutions are
given by
\bea
ds^2 &=& -H^{-\ft{d-2}{d-1}}\, dt^2 + H^{\ft1{d-1}}\, dx^i\, dx^i\,,\nn\\
\vec\phi &=& \ft12 \sum_{I=1}^q\, \vec a_I\, \log H_I \,,\qquad
A^I = -H_I^{-1}\, dt\,,
\eea
where $H\equiv \prod_{I=1}^q H_I$, and the $H_I$ are arbitrary harmonic
functions in the Euclidean space with metric
$dx^i\, dx^i$. Clearly this solution matches with the initial data in
(\ref{IVgen}), in the special case with
\be
C_I=H_I\,,\qquad D_I=1\,,\qquad W =1\,.
\ee
\subsection{Non-extremal spherically-symmetric black holes}
Non-extremal spherically-symmetric black hole solutions can easily
be found in the
theory defined by (\ref{genpq}) and (\ref{adotrel}), and they are
given by
\bea
ds^2 &=& -H^{-\ft{d-2}{d-1}}\, f\, dt^2 +
H^{\ft1{d-1}}\, \Big( \fft{dr^2}{f} + r^2 \, d\Omega_{d-1}^2\Big)\,,
\label{nonextpq}\\
\vec\phi &=& \ft12 \sum_{I=1}^q \vec a_I\, \log H_I\,,\qquad
A^I= \big(1- H_I^{-1} \big)\, \coth\delta_I\, dt\,,\nn\\
H_I&=& 1 + \fft{2m \sinh^2\delta_I}{r^{d-2}}\,,\qquad
f= 1 - \fft{2m}{r^{d-2}}\,,\qquad H= \prod_{I=1}^q H_I\,.\nn
\eea
Introducing a new radial variable $\rho$ by
\be
r^{d-2}= \rho^{d-2}\, \Big( 1 + \fft{m}{2\rho^{d-2}}\Big)^2\,,
\ee
we find that the metric $ds^2$ in (\ref{nonextpq}) becomes
\be
ds^2 = -N^2\, dt^2 + \Phi^{\ft{4}{d-2}}\, (d\rho^2 + \rho^2\, d\Omega_{d-1}^2)
\,,
\ee
where $\Phi$ is given in (\ref{IVgen}) with the harmonic functions $C_I$,
$D_I$ and $W$ being given by
\be
C_I = 1+\fft{m e^{2\delta_I}}{2\rho^{d-2}}\,,\qquad
D_I = 1+\fft{m e^{-2\delta_I}}{2\rho^{d-2}}\,,\qquad
W= 1+\fft{m}{2\rho^{d-2}}\,,\label{nonextharm}
\ee
and
\be
N^2= \Phi^{-4}\, W^2\, \Big(1-\fft{m}{2\rho^{d-2}}\Big)^2\,.
\ee
The functions $H_I$ in (\ref{nonextpq}) are given by $H_I=W^{-2}\, C_I\, D_I$,
and hence the potentials $A^I$ in the non-extremal solution are simply
given by
\be
A^I= \Big(-\fft{1}{C_I} + \fft1{D_I}\Big)\, dt\,,
\ee
and the dilatonic scalars are given by the expression in (\ref{IVgen}).
Thus we see that the non-extremal spherically-symmetric black hole solutions
do indeed have initial data given by (\ref{IVgen}), with the harmonic
functions $C_I$, $D_I$ and $W$ taking the special spherically-symmetric
form (\ref{nonextharm}).
\section{Mapping Einstein-Maxwell-Dilaton to Einstein-Scalar}
It was observed in \cite{ortin}, and developed further in \cite{cvegibpop},
that the time-symmetric initial data for a system of gravity coupled to
Maxwell fields and dilatonic scalars can be straightforwardly mapped into
the time-symmetric initial data for an extended system of scalar
fields coupled to gravity. Although \cite{ortin,cvegibpop} discussed this
specifically for four-dimensional spacetimes, the extension to arbitrary
dimensions is immediate.
\subsection{Mapping of Einstein-Maxwell-Dilaton data}\label{EMDmapsec}
The mapping can be illustrated by considering the EMD theories
with multiple dilatons and Maxwell fields that we discussed in section
\ref{multiemdsec}. Making the replacement
\be
E^I_i \longrightarrow X_I^{-1}\, \del_i \psi_I\,,\label{Etopsi}
\ee
the Hamiltonian constraint (\ref{Hamilgen}) becomes
\be
R= \ft12 g^{ij}\, (\del_i\vec\phi\cdot\del_j\vec\phi +
\del_i\psi_I\, \del_j\psi_I)\,,
\ee
which is the same as the Hamiltonian constraint for a system of
free scalar fields $(\vec\phi,\psi_I)$ coupled to gravity. In view of
(\ref{XIsol}), the ansatz for $E^I_i$ in (\ref{IVgen}) becomes
simply
\be
\psi_I= \log\fft{C_I}{D_I}\label{psiCD}
\ee
for the scalar fields $\psi_I$. Furthermore, under (\ref{Etopsi})
the Gauss law constraints (\ref{Gaussgen}) give simply
\be
\del_i\Big( C_I\, D_I\, \del_i\psi_I\Big)=0\,,
\ee
and so these are indeed satisfied when $\psi_I$ is given by (\ref{psiCD}),
since $C_I$ and $D_I$ are harmonic.
\subsection{General $N$-scalar system coupled to gravity}\label{ESsec}
If we consider a general system of $N$ scalar fields $\sigma_\AA$,
$1\le \AA\le N$, coupled to gravity and described by the Lagrangian
\be
{\cal L}= \sqrt{-\hat g} (\hat R - \ft12 \hat g^{\mu\nu}\,\sum_{\AA=1}^N
\del_\mu\sigma_\AA\, \del_\nu\sigma_\AA)\,,
\ee
then the Hamiltonian constraint for time-symmetric initial data is
$R=\ft12 g^{ij}\, \sum_\AA \del_i\sigma_\AA\, \del_j\sigma_\AA$, and so
writing $g_{ij}=\Phi^{4/(d-2)}\,\delta_{ij}$ as usual, the constraint becomes
\be
-\fft{4(d-1)}{d-2}\, \del_i\del_i\Phi = \ft12 \Phi\,
\sum_{\AA=1}^N \del_i\sigma_\AA\,\del_i\sigma_\AA\,.\label{Hscalar}
\ee
Following the strategy used in four dimensions in \cite{cvegibpop} we make
the ansatz
\be
\Phi= \prod_{a=1}^M\, X_a^{n_a} \,,\qquad
\sigma_\AA= \sqrt{\fft{8(d-1)}{d-2}}\,
\sum_{a=1}^M m_a^\AA\, \log X_a\,,\label{Phisig}
\ee
where $X_a$ are a set of $M$ harmonic functions, $\del_i\del_i X_a=0$,
and the constants $n_a$ and $m_a^\AA$ are determined by requiring that
the Hamiltonian constraint (\ref{Hscalar}) be satisfied. One finds
that this holds if
\be
n_a\, n_b + \sum_{\AA=1}^N m_a^\AA\, m_b^\AA = n_a\, \delta_{ab}\,.
\label{mndot0}
\ee
As in \cite{cvegibpop}, by defining the $M$ $(N+1)$-component vectors
\be
{\bf m}_a =(m_a^\AA,n_a)\equiv (m_a^1,\,m_a^2,\cdots,m_a^N,n_a)\label{bfm}
\ee
in $\R^{N+1}$, (\ref{mndot0}) becomes
\be
{\bf m}_a\cdot {\bf m}_b= n_a\, \delta_{ab}\,.\label{bfmdot}
\ee
Defining ${\bf Q}_a= {\bf m}_a/\sqrt{n_a}$ one has
\be
{\bf Q}_a\cdot {\bf Q}_b=\delta_{ab}\,,
\ee
showing that there is a one-one mapping between orthonormal $M$-frames in
$\R^{N+1}$ and solutions of the Hamiltonian constraint conditions
(\ref{mndot0}). This means that we must have $M\le N+1$.
\subsection{Specialisation to the scalar theories from EMD}
In the mapping from Einstein-Maxwell-Dilaton theories with $p$
dilatons and $q$ Maxwell fields to a purely Einstein-Scalar system with
$(p+q)$ scalar fields, which we described in section \ref{EMDmapsec},
the initial-value constraints were solved in terms of the harmonic
functions $C_I$, $D_I$ and $W$. We may relate this to the general
scalar discussion in section \ref{ESsec} by noting that, in an obvious
notation, the harmonic functions $X_a$ and the scalar fields $\sigma_\AA$
are now split as
\be
X_a=\{C_I, D_I, W\}\,,\qquad \sigma_\AA
=\{\psi_I,\, \vec\phi\, \}\,.\label{XCDW}
\ee
Comparing the expressions for $\Phi$, in eqn (\ref{IVgen}), and
$\psi_I$, in eqn (\ref{psiCD}), with eqn (\ref{Phisig}) we see that
for these cases the vectors ${\bf m}_a$ defined in (\ref{bfm}) are given,
following the same notation as for $X_a$ in (\ref{XCDW}), by
\bea
{\bf m}_{{\sst C}_I} &=&\beta\, (\vec e_I,\, \ft12\vec a_I,\, 2\beta)\,,\nn\\
{\bf m}_{{\sst D}_I} &=&\beta\, (-\vec e_I,\, \ft12\vec a_I,\, 2\beta)\,,\nn\\
{\bf m}_w &=& (\vec 0,\, -\beta\, \vec a,\, \gamma)\,,\label{mforemd}
\eea
where $\vec e_I$ is an orthonormal basis for $\R^q$,
$\beta =\sqrt{\ft{d-2}{8(d-1)}}$, and $\gamma$ and $\vec a$ are
defined in (\ref{IVgen}). One can easily verify that the vectors
defined in (\ref{mforemd}) indeed satisfy (\ref{bfmdot}).
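For example, using (\ref{adotrel}) and $8\beta^2=\ft{d-2}{d-1}$, one finds
\be
{\bf m}_{{\sst C}_I}\cdot {\bf m}_{{\sst C}_J} = \beta^2\,\Big(\vec e_I\cdot \vec e_J
+ \ft14\, \vec a_I\cdot\vec a_J + 4\beta^2\Big) = 2\beta^2\, \delta_{IJ}\,,
\ee
which is of the required form (\ref{bfmdot}), since the exponent of $C_I$ in
$\Phi$ is $n_{{\sst C}_I}=\ft{d-2}{4(d-1)}=2\beta^2$.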
\section{Wormholes}
\subsection{Wormhole initial data for vacuum Einstein equations}
It was observed by Misner \cite{Misner:1960zz} in the case of four spacetime
dimensions that if one takes the spatial background metric to be $S^1\times
S^2$, then this can give rise to initial data that generates
wormhole spacetimes. One can analogously consider $S^1\times S^{d-1}$
spatial backgrounds for $(d+1)$-dimensional spacetimes. Taking the
background metric to be
\be
d\bar s^2 = d\mu^2 + d\sigma^2 + \sin^2\sigma\, d\Omega_{d-2}^2\,,
\label{S1Sd-1}
\ee
then from (\ref{lichR}), the condition for the vanishing of the Ricci
scalar
for the spatial metric $ds^2 = \Phi^{4/(d-2)}\, d\bar s^2$ is that $\Phi$
should satisfy
\be
-\bar\square \Phi + \fft{(d-2)^2}{4}\, \Phi=0\,,\label{PhieqnS1Sd-1}
\ee
since the $S^1\times S^{d-1}$ metric (\ref{S1Sd-1}) has Ricci scalar $\bar R=
(d-1)(d-2)$. One can easily see that a solution for $\Phi$ is given by
\be
\Phi= c^{\ft{d-2}{2}}\,
\Big(\cosh\mu -\cos\sigma\Big)^{-\fft{d-2}{2}}\,,\label{Phising}
\ee
where $c$ is any constant.
If we take the solution (\ref{Phising}) itself, the metric
$ds^2=\Phi^{4/(d-2)}\, d\bar s^2$ is nothing but the flat Euclidean metric
$ds^2=dx^a dx^a + dz^2$ written in bi-hyperspherical coordinates, with the
Euclidean coordinates $x^i=(x^a,z)$ given by
\be
x^a= \fft{c\, u^a\, \sin\sigma}{\cosh\mu-\cos\sigma}\,,\qquad
z= \fft{c\, \sinh\mu}{\cosh\mu - \cos\sigma}\,,
\ee
where the $u^a$, constrained by $u^a\, u^a=1$, parameterise points on the
unit $(d-2)$-sphere whose metric is $d\Omega_{d-2}^2=du^a\, du^a$.
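One can verify that the surfaces of constant $\mu$ are round $(d-1)$-spheres
in this Euclidean space, since
\be
x^a\, x^a + (z- c\coth\mu)^2 = \fft{c^2}{\sinh^2\mu}\,,
\ee
centred on the $z$-axis at $z=c\coth\mu$, with radius $c/|\sinh\mu|$.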
Of course since the metric (\ref{S1Sd-1}) is invariant under
translations of
the $\mu$ coordinate, and
(\ref{PhieqnS1Sd-1}) is a linear equation, one can
form superpositions to obtain the more general solutions
\be
\Phi=\sum_n\, A_n\, \Big(\cosh(\mu-\mu_n) -\cos\sigma\Big)^{-\fft{d-2}{2}}
\,,\label{PhieqnS1Sd-1gen}
\ee
where the $A_n$ and $\mu_n$ are arbitrary constants. Since one would like
the wormhole metric to be single-valued and hence periodic in the $\mu$
coordinate on $S^1$, it is appropriate to take $\mu_n= -2 n\, \mu_0$ and
$A_n=c^{\ft{d-2}{2}}$, with the summation in (\ref{PhieqnS1Sd-1gen})
being taken over
all the integers, and with $2\mu_0$ being the period of $\mu$:
\be
\Phi(\mu,\sigma)= c^{\ft{d-2}{2}}\, \sum_{n\in Z}
\Big(\cosh(\mu+2n\, \mu_0) -\cos\sigma\Big)^{-\fft{d-2}{2}}
\,.\label{Phiperiodic}
\ee
In a natural generalisation of the case of four spacetime dimensions
that was discussed in \cite{Lindquist} and elsewhere, one can easily
see that if we consider the elementary harmonic function
\be
\fft1{| \bx + \bd_n|^{d-2}}
\ee
in the Euclidean space with coordinates $\bx=\{x^1,\ldots x^{d-1}, z\}$, where
$-\bd_n$ is the location of the singularity, with
\be
\bd_n=\{0,\ldots,0,c\, \coth n\mu_0\}\,,\label{dnlocs}
\ee
then
\be
\fft1{| \bx + \bd_n|^{d-2}} =
\fft{(\sinh n \mu_0)^{d-2}}{c^{d-2}}\,
(\cosh\mu -\cos\sigma)^{\ft{d-2}{2}}\, \Big(\cosh (\mu+2n \mu_0)
-\cos\sigma\Big)^{-\ft{d-2}{2}}\,,\label{elterm}
\ee
and so the periodic conformal function $\Phi$ constructed in (\ref{Phiperiodic})
can be expressed as
\be
\Phi = c^{\ft{d-2}{2}}\, \Big(\cosh\mu - \cos\sigma\Big)^{-\ft{d-2}{2}}\,
\hat\Phi\,,\label{PhihatPhi}
\ee
where
\be
\hat\Phi = 1 + \sum_{n\ge 1}\fft{c^{d-2}}{(\sinh n \mu_0)^{d-2}}\,
\Big[\, \fft1{|\bx + \bd_n|^{d-2}} +
\fft1{|\bx - \bd_n|^{d-2}} \,\Big]\,.
\ee
Thus the initial-time spatial $d$-metric
metric $ds^2=\Phi^{\ft4{d-2}}\, d\bar s^2$, where $d\bar s^2$ is
the $S^1\times S^{d-1}$ metric (\ref{S1Sd-1}), can be written as
\be
ds^2 = \hat\Phi^{\ft{4}{d-2}}\, dx^i dx^i\,,
\ee
which is the metric for a sum over infinitely-many mass points at the
locations $-\bd_n$ and $+\bd_n$, with strengths $c^{d-2}\,
(\sinh n\mu_0)^{-(d-2)}$, giving a description of a two-centre
wormhole in $(d+1)$ spacetime dimensions. Comparing with the form of
the multi-black hole initial data discussed in section \ref{vacuumflatsec},
we see that the total mass of the wormhole is given by
\be
M= 4 c^{d-2}\, \sum_{n\ge 1} \fft1{(\sinh n\mu_0)^{d-2}}\,.\label{whmass}
\ee
This generalises the result for the four-dimensional spacetime wormhole
considered by Misner in \cite{Misner:1960zz}, which corresponded to the
case $d=3$.
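The identification of the mass follows by comparing the large-$|\bx|$
expansion of $\hat\Phi$ with the normalisation used in section
\ref{vacuumflatsec}, each image point at $\pm\bd_n$ contributing
$2c^{d-2}\, (\sinh n\mu_0)^{-(d-2)}$ to the total mass.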
The infinite sums in the expression (\ref{whmass}) can in fact be evaluated
explicitly, in terms of the $q$-polygamma function
\be
\psi^{(r)}_q(z)= \fft{\del^r\, \psi_q(z)}{\del z^r}\,,\qquad
\psi_q(z)=\fft1{\Gamma_q(z)}\, \fft{\del \Gamma_q(z)}{\del z}=
-\log(1-q) +\log q\, \sum_{k\ge 0}\fft{q^{k+z}}{1-q^{k+z}}\,,
\ee
where $\Gamma_q(z)$ is the $q$ generalisation of the usual gamma function
$\Gamma(z)$. For example, one finds that
\bea
\sum_{n\ge 1}\fft1{\sinh n \mu_0} &=& \fft{\im \pi - \psi_q(1)
+ \psi_q(1-\ft{\im \pi}{\mu_0})}{\mu_0}\,,\nn\\
\sum_{n\ge 1}\fft1{(\sinh n \mu_0)^2} &=&
\fft{-2\mu_0 + \psi^{(1)}_q(1) + \psi^{(1)}_q(1-\ft{\im\pi}{\mu_0})}{
\mu_0^2}\,,
\eea
where $q\equiv e^{\mu_0}$. Interestingly, in the special case $\mu_0=\pi$,
the sum for $d=4$ has a simple expression,
\be
\sum_{n\ge 1}\fft1{(\sinh n \pi)^2} = \fft1{6}-\fft1{2\pi}\,.
\ee
Another example with a simple expression is when $\mu_0=\pi$ in $d=6$, for
which one has
\be
\sum_{n\ge 1}\fft1{(\sinh n \pi)^4} = \fft1{3\pi} -\fft{11}{90} +
\fft{[\Gamma(\ft14)]^8}{1920 \pi^6}\,.
\ee
\subsection{Wormhole initial data for Einstein-Maxwell}
By an elementary extension of the calculation described in section
\ref{EMsec}, one can verify that writing
\be
ds^2 = \Phi^{\ft4{d-2}}\, d\bar s^2\,,
\ee
where $d\bar s^2$ is the metric (\ref{S1Sd-1}) on $S^1\times S^{d-1}$, the
initial-value constraints (\ref{Hamconeinmax}) and $\nabla_i E^i=0$
for the Einstein-Maxwell system are satisfied by again writing
\be
\Phi=(C D)^{\ft12}\,,\qquad E_i= \sqrt{\fft{d-1}{2(d-2)}}\, \del_i
\log\fft{C}{D}\,,\label{PhiCD2}
\ee
where now $C$ and $D$ are arbitrary solutions of the Helmholtz equation
\be
-\bar\square C+ \fft{(d-2)^2}{4}\, C=0\,,\qquad
-\bar\square D + \fft{(d-2)^2}{4}\, D=0\,.
\ee
Thus we can solve the constraint equations by taking each of $C$ and $D$
to be functions of the general form (\ref{PhieqnS1Sd-1gen}). Since
we would again like to construct initial data that is periodic in the
circle coordinate $\mu$, it is important that we have $\Phi(\mu,\sigma)=
\Phi(\mu+2\mu_0,\sigma)$, where $2\mu_0$ is the period of $\mu$. However,
as can be seen from (\ref{PhiCD2}), the functions $C$ and $D$ can be allowed
to have the more general holonomy properties
\be
C(\mu+2\mu_0,\sigma)= e^{-\lambda}\, C(\mu,\sigma)\,,\qquad
D(\mu+2\mu_0,\sigma)= e^{\lambda}\, D(\mu,\sigma)\,,
\ee
where $\lambda$ is a constant; the conformal factor $\Phi=(CD)^{\ft12}$
is then still strictly periodic. This is compatible also with the
single-valuedness of the solution for $E_i$ in (\ref{PhiCD2}), since
$\log(C/D)$ shifts only by the constant $-2\lambda$, leaving
$\del_i\log(C/D)$ periodic. We
can construct solutions $C$ and $D$ with the required holonomy by
taking
\bea
C(\mu,\sigma) &=& c^{\ft{d-2}{2}}\, \sum_{n\in Z} e^{n\lambda}\,
\Big(\cosh(\mu+2n\mu_0) - \cos\sigma\Big)^{-\ft{d-2}{2}}\,,\nn\\
D(\mu,\sigma)&=&c^{\ft{d-2}{2}}\, \sum_{n\in Z} e^{-n\lambda}\,
\Big(\cosh(\mu+2n\mu_0) - \cos\sigma\Big)^{-\ft{d-2}{2}}\,.\label{CDhol}
\eea
These series are convergent provided that $|\lambda|<(d-2)\, |\mu_0|$.
It is again useful to re-express $C$ and $D$ in terms of
harmonic functions $\hat C$ and $\hat D$ in the conformally-related
Euclidean space. Thus from (\ref{elterm}) we see that $\hat C$ and $\hat D$
defined by
\be
C= \fft{c^{\ft{d-2}{2}}}{[\cosh\mu -\cos\sigma]^{\ft{d-2}{2}}}\,\hat C\,,\qquad
D= \fft{c^{\ft{d-2}{2}}}{[\cosh\mu -\cos\sigma]^{\ft{d-2}{2}}}\,\hat D
\label{CDhChD}
\ee
are given by
\bea
\hat C &=& 1 + \sum_{n\ge 1}\, \fft{c^{d-2}}{(\sinh n\mu_0)^{d-2}}\,
\Big[ \fft{e^{n\lambda}}{|\bx + \bd_n|^{d-2}} +
\fft{e^{-n\lambda}}{|\bx - \bd_n|^{d-2}}\Big]\,,\nn\\
\hat D &=& 1 + \sum_{n\ge 1}\, \fft{c^{d-2}}{(\sinh n\mu_0)^{d-2}}\,
\Big[ \fft{e^{-n\lambda}}{|\bx + \bd_n|^{d-2}} +
\fft{e^{n\lambda}}{|\bx - \bd_n|^{d-2}}\Big]\,.\label{hChDdef}
\eea
Defining $\hat\Phi=(\hat C \hat D)^{\ft12}$ we therefore have
\be
ds^2 = \Phi^{\ft{4}{d-2}}\, d\bar s^2 = \hat\Phi^{\ft{4}{d-2}}\,
dx^i dx^i\,,\label{metricgen}
\ee
and so we straightforwardly find that the total mass of the Einstein-Maxwell
wormhole is given by
\be
M= 4 c^{d-2}\, \sum_{n\ge 1} \fft{\cosh n\lambda}{(\sinh n\mu_0)^{d-2}}\,.
\label{MEM}
\ee
This reduces to the result in \cite{Lindquist} when $d=3$,
corresponding to the case of Einstein-Maxwell wormholes in four
spacetime dimensions. The mass is finite provided that the condition
$|\lambda|< (d-2)\, |\mu_0|$ that we mentioned previously is satisfied.
The electric charge threading each wormhole throat can be calculated from a
Gaussian integral
\be
Q= \fft1{\omega_{d-1}}\, \int_S E_i\, n^i\, dS\,,
\ee
where $\omega_{d-1}$ is the volume of the unit $(d-1)$ sphere, and $n^i$ is the
unit vector normal to the $(d-1)$-surface with area element $dS$ enclosing
the charged mass points that comprise the wormhole throat under
consideration. In our case the mass points at $\bd_n$ for $1\le n\le\infty$
are associated with one throat, and the mass points at $-\bd_n$ with the
other.
Since the spatial metric $ds^2$ on the initial surface is equal to
$(\hat C\hat D)^{\ft{2}{d-2}}\, dx^i dx^i$, the area element $dS =
(\hat C\hat D)^{\ft{d-1}{d-2}}\, d\hat S$ and the unit vector
$n^i = (\hat C\hat D)^{-\ft1{d-2}}\, \hat n^i$, where $d\hat S$ and $\hat n^i$
are the corresponding quantities in the Euclidean metric $dx^i dx^i$.
Thus, from (\ref{PhiCD2}) and (\ref{CDhChD}) we have
\be
Q= \fft{1}{\omega_{d-1}}\, \sqrt{\fft{d-1}{2(d-2)}}\,
\int_{\hat S}\,(\hat D\, \del_i \hat C- \hat C\, \del_i \hat D)\,
d\hat S^i\,.\label{QCD}
\ee
The corresponding charge for the other throat will be $-Q$.
One way to evaluate (\ref{QCD}) is to convert it, using the divergence
theorem, into
\be
Q= \fft{1}{\omega_{d-1}}\, \sqrt{\fft{d-1}{2(d-2)}}\,
\int_{V}\,(\hat D\, \del_i\del_i \hat C- \hat C\, \del_i \del_i \hat D)\,
d^dx\,,\label{QCD2}
\ee
and make use of the fact that the harmonic functions $\hat C$ and $\hat D$,
defined in (\ref{hChDdef}), satisfy
\bea
\del_i\del_i \hat C &=& -(d-2)
\sum_{n\ge 1}\, \fft{c^{d-2}\, \omega_{d-1}}{(\sinh n\mu_0)^{d-2}}\,
[e^{n\lambda}\, \delta^d(\bx+\bd_n) +
e^{-n\lambda}\, \delta^d(\bx-\bd_n)]\,,\nn\\
\del_i\del_i \hat D &=& -(d-2)
\sum_{n\ge 1}\, \fft{c^{d-2}\, \omega_{d-1} }{(\sinh n\mu_0)^{d-2}}\,
[e^{-n\lambda}\, \delta^d(\bx+\bd_n) +
e^{n\lambda}\, \delta^d(\bx-\bd_n)]\,.
\eea
To calculate the total charge for the wormhole throat corresponding to
the $\bx = \bd_n$ sequence of mass points, we should choose the
volume $V$ in the integral (\ref{QCD2}) to enclose all these mass points, but
none of those located at the points $\bx=-\bd_n$. Thus we find
\be
Q= \sqrt{\fft{(d-1)(d-2)}{2}}\,\Big[
\sum_{n\ge 1} \fft{2 c^{d-2}\,\sinh n\lambda}{
(\sinh n\mu_0)^{d-2}} +
\sum_{m\ge1}\sum_{n\ge 1}\, \fft{2 c^{2d-4}\, \sinh(m+n)\lambda}{
|\bd_m+\bd_n|^{d-2}\, (\sinh m \mu_0\, \sinh n\mu_0)^{d-2}}\Big]\,,
\label{Qsum}
\ee
Note that the terms that arise
involving $|\bd_m - \bd_n|^{-(d-2)}$ cancel by
antisymmetry. The $m=n$ ``self-energy'' terms require some care, since the
denominators $|\bd_m - \bd_n|^{d-2}$
go to zero. One way to handle this is to introduce regulators by sending
$\bx\rightarrow \bx +\bm{\epsilon}$ in $\hat C$, and $\bx\rightarrow
\bx -\bm{\epsilon}$ in $\hat D$. The terms involving
$|\bd_m - \bd_n|^{-(d-2)}$ now become
$|\bd_m - \bd_n+2 \bm{\epsilon}|^{-(d-2)}$ and still cancel by
antisymmetry, prior to sending the regulator to zero. An alternative way to
evaluate the charge is to work directly with the expression (\ref{QCD}) for
$Q$, and evaluate the contribution for each of the included mass points
$\bx=\bd_n$ by integrating over a small $(d-1)$-sphere
surrounding that point. After summing over the contributions from all
the mass points, one arrives at the same result (\ref{Qsum}) that we obtained
above. In this calculation, the analogous regularisation of the
potentially-divergent ``self-energy'' terms occurs because they cancel
pairwise by antisymmetry before taking the limit in which the radius of the
small spheres surrounding the mass points goes to zero.
In view of the definition (\ref{dnlocs}) for $\bd_n$ we
have $|\bd_m+\bd_n|= c\, (\coth m\mu_0 + \coth n\mu_0)$, and hence
\bea
Q &=& \sqrt{\fft{(d-1)(d-2)}{2}}\, \sum_{n\ge 1} \fft{2 c^{d-2}\,\sinh n\lambda}{
(\sinh n\mu_0)^{d-2}} + \sqrt{\fft{(d-1)(d-2)}{2}}\,
\sum_{m\ge1}\sum_{n\ge 1}\, \fft{2 c^{d-2}\, \sinh(m+n)\lambda}{
(\sinh (m+n) \mu_0 )^{d-2}}\,,\nn\\
&=& \sqrt{\fft{(d-1)(d-2)}{2}}\,
\sum_{m\ge0}\sum_{n\ge 1}\, \fft{2 c^{d-2}\, \sinh(m+n)\lambda}{
(\sinh (m+n) \mu_0 )^{d-2}}\,.\label{Qres2}
\eea
Defining $p=m+n$, the double summation $\sum_{m=0}^\infty \sum_{n=1}^\infty$
can be rewritten as $\sum_{p\ge 1} \sum_{n=1}^p$, and so (\ref{Qres2})
becomes\footnote{This agrees in the special case $d=3$ with what Lindquist
would have had in his result for four-dimensional spacetime, if he
had not accidentally omitted the factor of $p$ in the numerator of his
expression.}
\be
Q=2 c^{d-2}\,
\sqrt{\fft{(d-1)(d-2)}{2}}\, \sum_{p\ge 1} \fft{p\, \sinh p\lambda}{
(\sinh p\mu_0)^{d-2}}\,.\label{QEM}
\ee
It is interesting to note from the expressions (\ref{MEM}) for the
total mass $M$ and (\ref{QEM}) for the charge that
\be
Q= \fft12 \sqrt{\fft{(d-1)(d-2)}{2}}\, \fft{\del M}{\del\lambda}\,.
\ee
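This follows upon differentiating (\ref{MEM}) term by term, which gives
$\del M/\del\lambda = 4c^{d-2}\, \sum_{p\ge1} p\, \sinh p\lambda\,
(\sinh p\mu_0)^{-(d-2)}$.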
\subsection{Einstein-Maxwell-Dilaton wormholes}
The Einstein-Maxwell-Dilaton system discussed in section \ref{EMDsec} allows
one to set up wormhole initial data also. One can straightforwardly check
that by taking the background metric $d\bar s^2$ to be the $S^1\times S^{d-1}$
metric (\ref{S1Sd-1}), the Hamilton and Gauss-law constraints (\ref{emdcons})
are satisfied if $\Phi$, $\phi$ and $E_i$ are given by (\ref{emddata})
and the functions $C$, $D$ and $W$ now obey the Helmholtz equations
\be
-\bar\square C+ \fft{(d-2)^2}{4}\, C=0\,,\quad
-\bar\square D + \fft{(d-2)^2}{4}\, D=0\,,\quad
-\bar\square W + \fft{(d-2)^2}{4}\, W=0\,.\label{CDWeqns}
\ee
As in the previous wormhole examples, we can construct solutions
with appropriate periodicity or holonomy properties by taking
suitable linear superpositions of elementary solutions. In this example,
we see that we can ensure the necessary periodicity of the conformal
factor $\Phi$, the dilaton $\phi$ and the electric field $E_i$ by
arranging that the solutions for $C$, $D$ and $W$ obey\footnote{Note
that we do not encounter the problems that were seen in \cite{ortin} for
the Einstein-Maxwell-Dilaton system, where the dilaton had non-trivial
monodromy and was not periodic in the $\mu$ coordinate. This is related to
the fact that our ansatz involves three harmonic functions, $C$, $D$ and $W$,
thus generalising the 3-function ansatz in \cite{cvegibpop}, whereas the
more restrictive ansatz in \cite{ortin} has only two harmonic functions.}
\be
C(\mu+ 2\mu_0,\sigma)= e^{-\lambda}\, C(\mu,\sigma)\,,\quad
D(\mu+ 2\mu_0,\sigma)= e^{\lambda}\, D(\mu,\sigma)\,,\quad
W(\mu+2\mu_0,\sigma)= W(\mu,\sigma)\,,
\ee
and so we may take
\bea
C(\mu,\sigma) &=& c^{\ft{d-2}{2}}\, \sum_{n\in Z} e^{n\lambda}\,
\Big(\cosh(\mu+2n\mu_0) - \cos\sigma\Big)^{-\ft{d-2}{2}}\,,\nn\\
D(\mu,\sigma)&=&c^{\ft{d-2}{2}}\, \sum_{n\in Z} e^{-n\lambda}\,
\Big(\cosh(\mu+2n\mu_0) - \cos\sigma\Big)^{-\ft{d-2}{2}}\,,\nn\\
W(\mu,\sigma)&=&c^{\ft{d-2}{2}}\, \sum_{n\in Z}
\Big(\cosh(\mu+2n\mu_0) - \cos\sigma\Big)^{-\ft{d-2}{2}}\,.
\label{CDWhol}
\eea
We again have the expressions (\ref{metricgen}) for the metric $ds^2$ in
terms of the $S^1\times S^{d-1}$ metric $d\bar s^2$ and the Euclidean metric
$dx^i dx^i$, with $\hat\Phi$ related to $\Phi$ as in (\ref{PhihatPhi}) and
now
\be
\hat\Phi = (\hat C\, \hat D)^{\ft{(d-2)}{(d-1)\Delta}}\,
\hat W^{\ft{a^2}{\Delta}}\,,
\ee
with
\bea
\hat C &=& 1 + \sum_{n\ge 1}\, \fft{c^{d-2}}{(\sinh n\mu_0)^{d-2}}\,
\Big[ \fft{e^{n\lambda}}{|\bx + \bd_n|^{d-2}} +
\fft{e^{-n\lambda}}{|\bx - \bd_n|^{d-2}}\Big]\,,\nn\\
\hat D &=& 1 + \sum_{n\ge 1}\, \fft{c^{d-2}}{(\sinh n\mu_0)^{d-2}}\,
\Big[ \fft{e^{-n\lambda}}{|\bx + \bd_n|^{d-2}} +
\fft{e^{n\lambda}}{|\bx - \bd_n|^{d-2}}\Big]\,,\label{hChDhWdef}\\
\hat W &=&1 + \sum_{n\ge 1}\, \fft{c^{d-2}}{(\sinh n\mu_0)^{d-2}}\,
\Big[ \fft{1}{|\bx + \bd_n|^{d-2}} +
\fft{1}{|\bx - \bd_n|^{d-2}}\Big]\,.
\eea
Comparing with the form of $\Phi$ in section \ref{vacuumflatsec} we
straightforwardly find that the total wormhole mass is given by
\be
M= \fft{16(d-2)\, c^{d-2}}{(d-1)\Delta}\, \sum_{n\ge1}
\fft{\sinh^2\ft{n\lambda}{2}}{(\sinh n\mu_0)^{d-2}} +
4c^{d-2}\, \sum_{n\ge1} \fft1{(\sinh n\mu_0)^{d-2}}\,.\label{MEMD}
\ee
The calculation of the electric charge proceeds in a very similar fashion
to that for the Einstein-Maxwell case, which we discussed earlier. Now,
we shall have
\bea
Q &=& \fft1{\omega_{d-1}}\, \int e^{a\phi}\, E_i\, n^i\, dS\,,\nn\\
&=& \fft{2}{\sqrt{\Delta}\, \omega_{d-1}}\,
\int(\hat D\, \del_i\hat C - \hat C\, \del_i \hat D)\, d\hat S^i\,,
\eea
and hence by the same steps as for Einstein-Maxwell, we find
\be
Q= \fft{4(d-2) c^{d-2}}{\sqrt{\Delta}}\,
\sum_{p\ge 1} \fft{p \sinh p\lambda}{(\sinh p\mu_0)^{d-2}}\,.\label{QEMD}
\ee
We again have a simple relation between the mass $M$, given by (\ref{MEMD}),
and the charge $Q$ given by (\ref{QEMD}), namely
\be
Q= \fft{(d-1)\, \sqrt{\Delta}}{2}\, \fft{\del M}{\del\lambda}\,.
\ee
\subsection{Multi-Maxwell wormholes}
For the remaining examples of time-symmetric initial data involving
multiple Maxwell fields, which we discussed in sections \ref{E2MDsec}
and \ref{multiemdsec}, we shall just briefly summarise the results for
wormhole initial data.
In the case of two Maxwell fields and a single
dilaton, described in section \ref{E2MDsec}, we find that
each of the pairs of functions $(C_1,D_1)$ and $(C_2,D_2)$ will now take
the form given in (\ref{CDhol}), with independent $\lambda$
parameters $\lambda_1$ and $\lambda_2$ allowed for the two pairs, so that
\be
C_I(\mu+2\mu_0,\sigma)=e^{-\lambda_I}\, C_I(\mu,\sigma)\,,\qquad
D_I(\mu+2\mu_0,\sigma)=e^{\lambda_I}\, D_I(\mu,\sigma)\,,\quad I=1,2\,.
\ee
The total wormhole mass is then given by
\be
M= \fft{2(d-2)\, c^{d-2}}{(d-1)}\, \sum_{n\ge 1}
\fft{1}{(\sinh n\mu_0)^{d-2}}\,
\Big[ N_1\, \cosh n\lambda_1 + N_2\, \cosh n\lambda_2\Big]\,.
\ee
There are now two charges, one for each Maxwell field, and these are
given by
\be
Q_I= 2(d-2) c^{d-2}\, \sqrt{N_I}\, \sum_{p\ge 1}
\fft{p\, \sinh p\lambda_I}{(\sinh p \mu_0)^{d-2}}\,.
\ee
The charges can be expressed in terms of the mass as follows:
\be
Q_I= \fft{(d-1)}{\sqrt{N_I}}\, \fft{\del M}{\del\lambda_I}\,.
\ee
For the case of $p$ dilaton fields and $q$ Maxwell fields
discussed in section \ref{multiemdsec}, we find that the initial-value
constraints can be solved, in the $S^1\times S^{d-1}$ background metric,
by the ansatz (\ref{IVgen}), where now the functions $C_I$, $D_I$ and
$W$ obey the equation (\ref{CDWeqns}). Single-valuedness of the metric,
dilatons and electric fields requires that we have the holonomy
relations
\be
C_I(\mu+ 2\mu_0,\sigma)= e^{-\lambda_I}\, C_I(\mu,\sigma)\,,\quad
D_I(\mu+ 2\mu_0,\sigma)= e^{\lambda_I}\, D_I(\mu,\sigma)\,,\quad
W(\mu+2\mu_0,\sigma)= W(\mu,\sigma)\,.
\ee
We can take the functions $C_I$, $D_I$ and $W$ to be given as in
(\ref{CDWhol}), with the different $\lambda_I$ parameters for each
pair $(C_I,D_I)$. We find the total mass of the wormhole is given by
\be
M= \fft{4(d-2)\, c^{d-2}}{(d-1)}\, \sum_{I=1}^q \sum_{n\ge1}
\fft{\sinh^2\ft{n\lambda_I}{2}}{(\sinh n\mu_0)^{d-2}} +
4c^{d-2}\, \sum_{n\ge1} \fft1{(\sinh n\mu_0)^{d-2}}\,.
\ee
The total
charges associated with one of the two wormhole throats are given by
\be
Q_I = 2(d-2) c^{d-2}\, \sum_{p\ge 1}
\fft{p\,\sinh p\lambda_I}{(\sinh p \mu_0)^{d-2}}\,,
\ee
with the other carrying charges of equal magnitudes but opposite signs.
The charges and the mass are related by
\be
Q_I= (d-1)\, \fft{\del M}{\del\lambda_I}\,.
\ee
\subsection{Wormhole interaction energy}
A manifold with $N$ Einstein-Rosen bridges in an asymptotically flat spacetime,
with each bridge leading to a different
asymptotically flat spacetime, has a metric of the form \cite{Brill:1963yv}
\be
ds^2 = \hat\Phi^{\ft{4}{d-2}}\,dx^i dx^i\,,
\ee
where
\be
\hat\Phi= 1+ \sum_{i=1}^N \fft{\alpha_i}{r_i^{d-2}}\,,
\ee
and $r_i=|\bx -\bx_i|$, with $\bx_i$ being the location of the $i$th mass point.
The total mass $M$ of this system is given by
\be
M= 2\sum_{i=1}^N \alpha_i\,.
\ee
In the limit when $\bx$ approaches the $i$th mass point, one has
\be
r_i \rightarrow 0, \qquad r_{j} \rightarrow r_{ij} \; \; (j \neq i)\,,
\ee
and the
metric takes the form
\be
ds^2 \rightarrow \left[ \fft{\alpha_i}{r_i^{d-2}} +
A_i \right]^\ft{4}{d-2} (dr_i^2 + r_i^2 \; d\Omega_{d-1}^2) =
\left( \fft{\alpha_i}{r_i^{d-2}} \right)^\ft{4}{d-2}
\left[ 1 + A_i \fft{r_i^{d-2}}{\alpha_i} \right]^\ft{4}{d-2}
(dr_i^2 + r_i^2 \; d\Omega_{d-1}^2),
\ee
where
\be
A_i = 1 + \sum_{j \neq i} \fft{\alpha_j}{r_{ij}^{d-2}} \,,
\ee
and we are using $r_i$ as the radial coordinate near $r_i=0$.
Now introducing the new coordinate
\be
r_i'^{d-2} = \fft{\alpha_i^2}{r_i^{d-2}}\,,
\ee
the line element in the corresponding limit
$r_i' \rightarrow \infty$ takes the form
\be
ds^2 \rightarrow \left[ 1 + \fft{A_i \alpha_i}{r_i'^{d-2}}
\right]^\ft{4}{d-2} (dr_i'^2 + r_i'^2 \; d\Omega_{d-1}^2).
\ee
This implies that the bare mass of the individual bridge is
\be
m_i = 2A_i \,\alpha_i = 2\alpha_i + 2\alpha_i \,
\sum_{j \neq i} \fft{\alpha_j}{r_{ij}^{d-2}}\,,
\ee
and their sum,
\be
\sum_{i=1}^N m_i = 2 \, \sum_{i=1}^N \alpha_i +
2 \sum_{i=1}^N \sum_{j \neq i} \fft{\alpha_i \,
\alpha_j}{r_{ij}^{d-2}} \, = \, M + 2 \, \sum_{i=1}^N \sum_{j \neq i}
\fft{\alpha_i \, \alpha_j}{r_{ij}^{d-2}}\,,
\ee
is not equal to the total mass of the system. Hence, the energy of
gravitational interaction is
\be
M_{int} = M - \sum_{i=1}^N \, m_i = - 2 \, \sum_{i=1}^N
\sum_{j \neq i} \, \fft{\alpha_i \, \alpha_j}{r_{ij}^{d-2}}.
\ee
We can now use these results to obtain the
interaction energy of various wormholes.
\subsubsection{Einstein wormhole}
For the wormhole manifold in pure Einstein gravity we have
\be
\hat\Phi = 1 + \sum_{n\ge 1}\fft{c^{d-2}}{(\sinh n \mu_0)^{d-2}}\,
\Big[\, \fft1{|\bx + \bd_n|^{d-2}} +
\fft1{|\bx - \bd_n|^{d-2}} \,\Big]\,.
\ee
All image points at $\bd_n$ contribute to the mass $m_1$ of one mouth
of the wormhole, and the rest (at $-\bd_n$) to the mass $m_2$ of the
other mouth.
The mass of the $n^{th}$ image point is
\be
{\mathfrak m}_n = 2\alpha_n + 2\alpha_n \, \sum_{m \neq n} \fft{\alpha_m}{r_{nm}^{d-2}},
\ee
where
\be
\alpha_n = \fft{c^{d-2}}{(\sinh n \mu_0)^{d-2}}.
\ee
Hence, the masses of the wormhole mouths are
\be
m_2 = m_1 = \sum_{n \ge 1} {\mathfrak m}_n = \fft{M}{2} + 2 \sum_{n \ge 1}
\sum_{m \neq n} \, \fft{\alpha_n \, \alpha_m}{r_{mn}^{d-2}}\,.\label{withinf}
\ee
The terms where $m$ is negative will have denominators $r_{mn}=|\bd_m -\bd_n|$
of the form $|\bd_p+\bd_n|= c \sinh(p+n)\mu_0$, with $n$ and $p=-m$ both
positive. The double
sum will converge for such terms. However, when $m$ is positive the
denominators will be $|\bd_m-\bd_n|= c |\sinh(m-n)\mu_0|$ with $m$ and $n$
positive, and even though the terms with $m=n$ are excluded, the
double sum will diverge. As discussed in detail in \cite{Lindquist},
this problem can be resolved by subtracting out the (infinite) interaction
energy between the bare masses which together make up $m_i$. In other
words, one makes a ``mass renormalisation'' by adding the infinite
(negative) term
\be
\delta m_1 =
-2 \sum_{n\ge 1} \sum_{m\ge1,\, m\ne n} \fft{\alpha_n\, \alpha_m}{
r_{mn}^{d-2}} =
-2 \sum_{n\ge 1} \sum_{\substack{m\ge1\\ m\ne n}} \fft{\alpha_n\, \alpha_m}{
|\bd_m-\bd_n|^{d-2}} \,.
\ee
This leads to the
``renormalised'' mass
\be
m_2 = m_1 = \fft{M}{2} + 2 \sum_{n \ge 1} \sum_{m \leq -1} \, \fft{\alpha_n \, \alpha_m}{r_{mn}^{d-2}} = \fft{M}{2} + 2 \sum_{n \ge 1} \sum_{m \geq 1} \, \fft{c^{d-2}}{[\sinh(m+n)\mu_0]^{d-2}},
\ee
or, after reorganising the double summation,
\be
m_2 = m_1 = \fft{M}{2} + 2 \,c^{d-2}\,\sum_{p \ge 2}
\fft{p-1}{[\sinh p\mu_0]^{d-2}}.
\ee
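The reorganisation uses the elementary counting identity that, for any summand depending only on $p=m+n$,
\be
\sum_{n\ge 1}\, \sum_{m\ge 1} g(m+n) = \sum_{p\ge 2} (p-1)\, g(p)\,,
\ee
since there are exactly $p-1$ pairs $(m,n)$ with $m,n\ge 1$ and $m+n=p$.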
Now, the (finite) interaction energy between the two mouths is given by
\be
M_{int} = M - (m_1 + m_2) =
- 4 \,c^{d-2}\,\sum_{p \ge 2} \fft{p-1}{[\sinh p\mu_0]^{d-2}}.
\ee
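As a quick numerical illustration (not part of the derivation), the series for $M$, $m_1$ and $M_{int}$ can be summed directly; the following minimal Python sketch assumes the illustrative values $d=4$, $\mu_0=1$ and $c=1$, for which all sums converge exponentially fast:
\begin{verbatim}
import math

d, mu0, c = 4, 1.0, 1.0

def term(p):
    # c^{d-2} / sinh(p*mu0)^{d-2}
    return c**(d - 2) / math.sinh(p * mu0)**(d - 2)

M = 4 * sum(term(n) for n in range(1, 60))                  # total mass
m1 = M / 2 + 2 * sum((p - 1) * term(p) for p in range(2, 60))
M_int = M - 2 * m1    # = -4 c^{d-2} sum_{p>=2} (p-1) term(p)
print(M, m1, M_int)
\end{verbatim}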
\subsubsection{Other wormholes}
We may now apply the same procedure to the case of $q$ Maxwell and $p$
dilaton fields. For this system we have
\bea
\hat \Phi = \hat \Pi^{\ft{d-2}{4(d-1)}}\, \hat W^\gamma\,, \qquad
\hat \Pi&\equiv & \prod_{I=1}^q \hat C_I \, \hat D_I\,,\qquad
\gamma\equiv 1-\fft{q(d-2)}{2(d-1)}\,,
\eea
where
\bea
\hat C_I &=& 1 + \sum_{n\ge 1}\, \fft{c^{d-2}}{(\sinh n\mu_0)^{d-2}}\,
\Big[ \fft{e^{n\lambda_I}}{|\bx + \bd_n|^{d-2}} +
\fft{e^{-n\lambda_I}}{|\bx - \bd_n|^{d-2}}\Big] = 1+ \sum_{n\neq 0} \fft{c_{In}}{r_n^{d-2}} \, ,\nn\\
\hat D_I &=& 1 + \sum_{n\ge 1}\, \fft{c^{d-2}}{(\sinh n\mu_0)^{d-2}}\,
\Big[ \fft{e^{-n\lambda_I}}{|\bx + \bd_n|^{d-2}} +
\fft{e^{n\lambda_I}}{|\bx - \bd_n|^{d-2}}\Big] = 1+ \sum_{n\neq 0} \fft{d_{In}}{r_n^{d-2}}\,,\nn \\
\hat W &=&1 + \sum_{n\ge 1}\, \fft{c^{d-2}}{(\sinh n\mu_0)^{d-2}}\,
\Big[ \fft{1}{|\bx + \bd_n|^{d-2}} +
\fft{1}{|\bx - \bd_n|^{d-2}}\Big]= 1+ \sum_{n\neq 0} \fft{w_{n}}{r_n^{d-2}}\,. \label{CDWint}
\eea
The total mass of the wormhole is given by
\be
\fft{M}{2} = \sum_{n \neq 0} \left[ \fft{d-2}{4(d-1)} \sum_{I=1}^q \left(\, c_{In} + d_{In}\,\right) + \gamma\,w_{n} \right] \,.
\ee
In the limit
\be
r_n \rightarrow 0, \qquad r_{m} \rightarrow r_{nm} \; \; (m \neq n),
\ee
we get
\bea
\hat{C}_I & \rightarrow & \left[ \fft{c_{In}}{r_n^{d-2}} + C_{In} \right]\,, \qquad C_{In} = 1 + \sum_{m \neq n} \fft{c_{Im}}{r_{nm}^{d-2}}\,, \nn \\
\hat{D}_I & \rightarrow & \left[ \fft{d_{In}}{r_n^{d-2}} + D_{In} \right]\,, \qquad D_{In} = 1 + \sum_{m \neq n} \fft{d_{Im}}{r_{nm}^{d-2}}\,, \nn \\
\hat{W} & \rightarrow & \left[ \fft{w_{n}}{r_n^{d-2}} + W_{n} \right]\,, \qquad \, W_{n} = 1 + \sum_{m \neq n} \fft{w_{m}}{r_{nm}^{d-2}}\,,
\eea
and the metric takes the form
\be
ds^2 \rightarrow \left( \fft{\alpha_n}{r_n^{d-2}} \right)^\ft{4}{d-2} \prod_{I=1}^q \left[ \left( 1 + C_{In} \fft{r_n^{d-2}}{c_{In}} \right) \left( 1 + D_{In} \fft{r_n^{d-2}}{d_{In}} \right) \right]^\ft{1}{d-1} \left[ 1 + W_n \fft{r_n^{d-2}}{w_n} \right]^\ft{ 4 \, \gamma}{d-2} (dr_n^2 + r_n^2 \; d\Omega_{d-1}^2) \,,
\ee
where
\be
\alpha_n = w_n^\gamma \prod_{I=1}^q (c_{In} \, d_{In})^\ft{d-2}{4(d-1)} \,.
\ee
From (\ref{CDWint}) we can see that
\be
c_{In} = c^{d-2} \,\fft{e^{-n\lambda_I}}{(\sinh n \mu_0)^{d-2}}\,,
\qquad d_{In} = c^{d-2} \,\fft{e^{n\lambda_I}}{(\sinh n \mu_0)^{d-2}}\,,
\qquad w_n = \fft{c^{d-2}}{(\sinh n \mu_0)^{d-2}} \,,
\ee
and hence
\be
(\,c_{In} \, d_{In} \,) = w_n^2 \, \quad \implies \quad \alpha_n = w_n^\gamma \prod_{I=1}^q (c_{In} \, d_{In})^\ft{d-2}{4(d-1)} = w_n \,,
\ee
i.e.
\be
\alpha_n^2 = w_n^2 = c_{In} \, d_{In} \,.
\ee
Now, defining a new radial coordinate
\be
r_n'^{d-2} = \fft{\alpha_n^2}{r_n^{d-2}}\,,
\ee
the line element in the limit $r_n' \rightarrow \infty$ takes the form
\be
ds^2 \rightarrow \prod_{I=1}^q \left[ \left( 1 + C_{In} \fft{d_{In}}{r_n'^{d-2}} \right) \left( 1 + D_{In} \fft{c_{In}}{r_n'^{d-2}} \right) \right]^\ft{1}{d-1} \left[ 1 + W_n \fft{w_n}{r_n'^{d-2}} \right]^\ft{ 4 \, \gamma}{d-2} (dr_n'^2 + r_n'^2 \; d\Omega_{d-1}^2)\,.
\ee
The mass of the $n^{th}$ image point in this system is
\be
{\mathfrak m}_n = \fft{d-2}{2(d-1)} \sum_{I=1}^q \left[\, C_{In}\,d_{In} +
D_{In}\,c_{In}\,\right] + 2\gamma\,W_{n}\,w_{n}\,,
\ee
or
\be
{\mathfrak m}_n = \fft{d-2}{2(d-1)} \sum_{I=1}^q \left[\, c_{In} + d_{In}\,\right] +
2 \gamma\,w_{n} + \fft{d-2}{2(d-1)} \sum_{I=1}^q \sum_{m\neq n} \left[\,
\fft{c_{In}\,d_{Im} + d_{In}\,c_{Im}}{r_{mn}^{d-2}} \right] +
2 \gamma \, \sum_{m \neq n} \, \fft{w_n \, w_m}{r_{mn}^{d-2}}\,.
\ee
Following a discussion similar to that in the previous subsection, the
``renormalised mass'' of the wormhole mouth is
\be
m_2 = m_1 = \fft{M}{2} + \fft{d-2}{d-1}\, c^{d-2}\,\sum_{I=1}^{q} \, \sum_{p \ge 1} (p-1)\fft{\cosh p\lambda_I}{[\sinh p\mu_0]^{d-2}} + 2 \,\gamma\, c^{d-2}\,\sum_{p \ge 1} \fft{p-1}{[\sinh p\mu_0]^{d-2}}\,.
\ee
The interaction energy is then given by
\be
M_{int} = M - m_1 - m_2 = - \fft{4(d-2)}{(d-1)} \, c^{d-2}\, \sum_{I=1}^q \sum_{p\ge1} (p-1) \fft{\sinh^2\ft{p\lambda_I}{2}}{(\sinh p\mu_0)^{d-2}} -
4c^{d-2}\, \sum_{p\ge1} \fft{p-1}{(\sinh p \mu_0)^{d-2}}\,.
\ee
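The second form follows from writing $\cosh p\lambda_I = 1 + 2\sinh^2\ft{p\lambda_I}{2}$ in the renormalised mass and noting that $\fft{q(d-2)}{d-1} + 2\gamma = 2$, which follows from the definition of $\gamma$ and removes the explicit $q$ dependence of the $\lambda$-independent term.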
In a similar fashion, in the case of two Maxwell fields and a single
dilaton, described in section \ref{E2MDsec}, the interaction energy is
given by
\be
M_{int}= - \fft{2(d-2)}{(d-1)}\, c^{d-2}\, \sum_{p\ge 1}
\fft{p-1}{(\sinh p\mu_0)^{d-2}}\,
\Big[ N_1\, \cosh p\lambda_1 + N_2\, \cosh p\lambda_2\Big]\,.
\ee
For the Einstein-Maxwell-Dilaton system discussed in section \ref{EMDsec},
the interaction energy is
\be
M_{int}= - \fft{16(d-2)\, c^{d-2}}{(d-1)\Delta}\, \sum_{n\ge1} (n-1)
\fft{\sinh^2\ft{n\lambda}{2}}{(\sinh n\mu_0)^{d-2}} -
4c^{d-2}\, \sum_{n\ge1} \fft{n-1}{(\sinh n\mu_0)^{d-2}}\,.
\ee
For the Einstein-Maxwell case, described in section
\ref{EMsec}, the interaction energy is
\be
M_{int} = - 4\, c^{d-2}\, \sum_{n\ge 1} (n-1)
\,\fft{\cosh n\lambda}{(\sinh n\mu_0)^{d-2}}\,.
\ee
\section{Conclusions}
The geometrodynamical approach to studying solutions of the Einstein
equations and the coupled Einstein-Maxwell equations was pioneered by
Wheeler, Misner and others in the late 1950s and early 1960s. The idea was
to look at the initial-value constraints in a Hamiltonian formulation
of the equations of motion, extracting as much information as possible
about the properties of the (in general time-dependent) solutions that
would evolve from the initial data. For simplicity, the initial
data were typically taken to be time independent, corresponding to an initial
slice at a moment of time-reflection symmetry in the subsequent evolution.
One can calculate some general features of the solutions that will
evolve from the initial data, even though in practice the explicit solution
of the evolution equations is beyond reach.
The early work on geometrodynamics was all focused on the case of
four-dimensional spacetimes.
More recently, wider classes of four-dimensional theories were considered,
in which additional matter fields of the kind occurring in supergravity
theories were included \cite{ortin,cvegibpop}.
In this paper, we have presented results for time-symmetric initial
value data satisfying the constraint equations in higher-dimensional
theories of gravity coupled to scalar and Maxwell fields. These theories
encompass particular cases that correspond to higher-dimensional theories
of supergravity, and thus they also have relevance for the low-energy limits
of string theories or M-theory. We considered initial data both
for multiple black hole
evolutions and also for wormhole spacetimes. In the case of wormhole
spacetimes, we studied some of the properties of the solutions
in detail, including the masses and charges associated with the individual
wormhole throats in a multi-wormhole spacetime, and the interaction energies
between the throats.
Our focus in this paper has been the construction of consistent time-symmetric
initial data for multiple black holes or wormholes in higher-dimensional
theories such as those that arise in supergravities or in string theory and
M-theory. In general, one does not expect to be able to solve the
evolution equations for the initial-data sets explicitly, but it could
nonetheless be of interest to try to investigate further some of the features
that might be expected to arise in such solutions.
A further point is that the solutions to the initial-value constraints
that we considered all
made use of an ansatz introduced first by Lichnerowicz in the case of
four-dimensional spacetimes, in which the spatial metric on the
initial surface is taken to be a conformal factor times a fixed
fiducial metric of high symmetry, such as the Euclidean metric, or the
metric on $S^3$ or $S^1\times S^2$. When one considers higher spacetime
dimensions, such a conformal factor parameterises a smaller fraction
of the total space of possible spatial geometries. It might therefore be
interesting to explore more general ans\"atze for parameterising the
spatial metrics on the initial surface.
\section*{Acknowledgments}
This work was supported in part by DOE grant DE-FG02-13ER42020.
\section{Analysis}\label{sec:analysis}
We finally analyze the different components of our proposed model
by investigating, i.a., mapping matrices and attention weights.
\paragraph{Meta-Embeddings.}
We first analyze the rank of the mapping matrices $Q_i$ (see Section \ref{sec:meta-embeddings}) of our models trained with and without adversarial training. In both cases, their rank is equal to the original embedding size. Thus, no information is lost by mapping the embeddings to the common space.
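A check of this kind can be reproduced directly on the learned weight matrices; the following sketch is purely illustrative (a random matrix stands in for a learned $Q_i$, and the shapes are placeholders):
\begin{verbatim}
import numpy as np

Q = np.random.randn(2048, 300)   # stand-in for a learned mapping Q_i
print(np.linalg.matrix_rank(Q))  # 300: full rank, no information lost
\end{verbatim}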
Second, we investigate the attention weights assigned to the different embedding types.
Figure~\ref{fig:att_weights_pos} shows the averaged weights for POS tagging models in all 27 languages.
FastText embeddings get the highest weights across languages.
In particular, we observe that monolingual embeddings (BPEmb, fastText\xspace, character) get higher weights for Basque (Eu), which is considered a language isolate~\cite{trask1997history}.
Non-European languages (Hi, Fa) seem to rely more on subword-based embeddings than on character embeddings.
\begin{figure}[t]
\centering
\includegraphics[width=.49\textwidth]{figures/attention_weights_pos.pdf}
\caption{Averaged attention weights for POS models of different languages. }
\label{fig:att_weights_pos}
\end{figure}
Figure~\ref{fig:att_weights_sent} shows, for a sentence from the clinical domain, how the attention weights of the domain-specific embeddings deviate from their average. It demonstrates that the clinical embedding weights are higher for in-domain words,
such as ``mg'', ``PRN'' (which stands for ``pro re nata'') or ``PO'' (which refers to ``per os'') and lower for general-domain words, such as ``every'', ``6'' or ``hours''.
\begin{figure}[t]
\centering
\includegraphics[width=.49\textwidth]{figures/attention_weights_sent_1.pdf}
\vspace{-0.5cm}
\caption{Changes in domain-embeddings influence on meta-embeddings. The model prefers domain-specific embeddings for in-domain words.}
\label{fig:att_weights_sent}
\end{figure}
Figure~\ref{fig:att_weights_pos_tag} shows the average attention weights with respect to the POS tag of the words. While fastText\xspace gets the highest weights in general, there are label-dependent differences, e.g., for tags like ADP, PUNCT or SYM, which usually mark short high-frequency words. Thus, we next analyze the influence of our features on the attention weights.
\begin{figure}[t]
\centering
\includegraphics[width=.49\textwidth]{figures/attention_weights_per_pos_tag.pdf}
\vspace{-0.8cm}
\caption{Averaged attention weights w.r.t.\ POS tags. }
\label{fig:att_weights_pos_tag}
\end{figure}
\paragraph{Feature-Based Attention.}
\begin{figure}[t!]
\centering
\includegraphics[width=.43\textwidth]{figures/attention_weights_features.png}
\caption{Averaged attention weights w.r.t.\ word frequencies (left) and word length (right).}
\label{fig:att_weights_freq}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=.49\textwidth]{figures/attention_weights_shape.pdf}
\vspace{-0.8cm}
\caption{Influence of domain-embeddings for the most influential word shapes.}
\label{fig:att_weights_shape}
\end{figure}
In Figure~\ref{fig:att_weights_freq}, we investigate the average attention weights with respect to word frequency (left) and word length (right).
While the model assigns almost equal weights to all embeddings for high-frequency words, it prefers fastText\xspace embeddings for less frequent words.
We observe similar behavior for the word length: The models strongly prefer fastText\xspace for longer words, which are often less frequent.
Finally, the influence of domain-specific embeddings with respect to word shapes is shown in Figure~\ref{fig:att_weights_shape}. In particular, for complex word shapes that include numbers and punctuation marks, the domain-embeddings are assigned higher weights.
These observations verify the choice of our features for attention.
\paragraph{Adversarial Training.}
Figure \ref{fig:common_space} shows the common embedding space with and without adversarial training for the example of two different embeddings. Adversarial training clearly helps to avoid embedding-type clusters. In the supplementary material, we show that this is also true when mapping more embedding types to a common space.
\section{Data Preprocessing}
For NER and POS tagging, we use the gold segmentation provided by the data.
For concept extraction (CE), we use the preprocessing scripts from~\newcite{alsentzer-etal-2019-publicly} for the English i2b2 corpus and the Spanish Clinical Case Corpus tokenizer~\cite{spaccc/Intxaurrondo19} for the Spanish PharmaCoNER corpus. We noticed that the Spanish tokenizer sometimes merges contiguous words of a multi-word expression into a single token joined with underscores. As a result, some tokens cannot be aligned with the corresponding entity annotations. To address this, we split those tokens into their components in a postprocessing step.
We use the official training/development/testing splits in all settings.
\section{Hyperparameters and Training}\label{sec:app:training}
To ensure reproducibility, we describe details of our models and training procedure in the following.
\paragraph{NLI.}
For SNLI we use the hyperparameters proposed by \newcite{kiela-etal-2018-dynamic}.
The embedding projections are set to 256 dimensions and we use 256 hidden units per direction for the BiLSTM.
The classifier consists of an MLP with a 1024-dimensional hidden layer.
Adam is used for optimization \cite{kingma2014adam}. The initial learning rate is set to 0.0004 and dropped by a factor of 0.2 when the development accuracy stops improving. Dropout is set to 0.2. Our NLI models have 950k trainable parameters.
\paragraph{NER, CE, and POS.}
We follow the hyperparameter configuration from \newcite{flair/Akbik18}
and use 256 hidden units per direction for the BiLSTM.
Labels for sequence tagging are encoded in BIOSE format.
The attention layer has a hidden size $H$ of 10.
We set the mapping size $E$ to the size of the largest embedding in all experiments, i.e., 2,046 dimensions, the size of one FLAIR embedding, or 3,048 dimensions for English when ELMo is used.
The discriminator $D$ is trained every 10$^{th}$ batch.
We perform a hyperparameter search for the $\lambda$ parameter in \{1e-3, 1e-4, 1e-5, 1e-6\} for models using adversarial training. The best configurations along with performance on the validation sets are reported in Table~\ref{tab:validation}.
We use stochastic gradient descent and an initial learning rate of 0.1. The learning rate is halved after 3 consecutive epochs without improvements on the development set.
Note that we use the same hyperparameters for all our models and all tasks.
The NER and POS models have up to 13.9M trainable parameters.
We train the models for a maximum of 100 epochs and select the best model according to the task's metric on the development set if available, or according to the training loss if the model was trained on the combined training and development set. The training of a single model takes between 1 and 8 hours, depending on the dataset size, on an Nvidia Tesla V100 GPU with 32GB VRAM.
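The learning-rate schedule described above corresponds to PyTorch's \texttt{ReduceLROnPlateau}; the following is a minimal sketch of the assumed setup, not the released training script:
\begin{verbatim}
import torch

model = torch.nn.Linear(10, 2)   # placeholder model
opt = torch.optim.SGD(model.parameters(), lr=0.1)
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(
    opt, mode="max", factor=0.5, patience=3)  # halve after 3 stale epochs

for epoch in range(100):
    dev_f1 = 0.0   # replace with the real development-set metric
    sched.step(dev_f1)
\end{verbatim}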
\begin{table}[t!]
\footnotesize
\centering
\begin{tabular}{lcccc}
\toprule
& NER & POS & CE & NLI \\
\midrule
Character & X & X & X & \\
BPEmb & X & X & X & \\
XLM-RoBERTa & X & X & X & \\
fastText\xspace (Crawl) & X & X & X & X \\
fastText\xspace (Wiki) & & & & X \\
BOW2 & & & & X \\
GloVe & & & & X \\
FLAIR & X & & X & \\
ELMo (Only EN) & X & & X & \\
Domain-Specific & & & X & \\
\bottomrule
\end{tabular}
\caption{Embeddings used in our experiments.}
\label{tab:embeds}
\end{table}
\section{Embeddings}\label{sec:app:embeddings}
The embeddings used in our experiments are shown in Table~\ref{tab:embeds}. All of the embeddings stay fixed during training. We use the published fine-tuned version of XLM-RoBERTa of ~\newcite{conneau2019unsupervised} in our NER experiments, but do not perform further fine-tuning. The domain-specific embeddings for the concept extraction in the clinical domain (CE) are Clinical BERT for English~\cite{alsentzer-etal-2019-publicly} and Spanish clinical fastText embeddings~\cite{emb/bio/Soares19}.
\section{Comparison to existing NER model}
Similar to the NLI experiments, we included our proposed meta-embedding methods in an existing NER model, namely the one from \newcite{flair/Akbik18}. The results are shown in Table~\ref{tab:ner_compare_results}. Again, our methods consistently improve upon the existing meta-embedding approach.
Note that the results in Table~\ref{tab:ner_compare_results} here differ from the results in Table 2 in the paper because another set of embeddings was used. Here, we aimed at including our meta-embedding approach using the same embeddings as \newcite{flair/Akbik18} and to investigate the different components of our proposed method.
\begin{table}[t]
\setlength\tabcolsep{3.5pt}
\centering
\footnotesize
\begin{tabular}{clcc}
\toprule
& Model & English & German \\
\midrule
\multirow{3}{*}{\rotatebox{90}{}}
& Concatenation\xspace
& 91.70$\pm$.08 & 83.84$\pm$.33 \\
& Summation\xspace
& 92.32$\pm$.12 & 85.03$\pm$.27 \\
& \baselineMeta (ATT)\xspace
& 92.28$\pm$.09 & 86.41$\pm$.33 \\
\midrule
\multirow{3}{*}{\rotatebox{90}{OUR}}
& ATT+FEAT\xspace
& 92.38$\pm$.09 & 86.71$\pm$.40 \\
& ATT+ADV\xspace
& 92.42$\pm$.13 & 86.67$\pm$.18 \\
& ATT+FEAT+ADV\xspace
& 92.48$\pm$.08 & 87.03$\pm$.10 $\ast$\xspace \\
\bottomrule
\end{tabular}
\caption{NER Results (\fscore) on the CoNLL 2003 data. Concatenation is similar to the model of \newcite{flair/Akbik18} but without training on the development set. $\ast$\xspace denotes statistical significance.}
\label{tab:ner_compare_results}
\setlength\tabcolsep{6pt}
\end{table}
\section{Further Analysis}
Figure~\ref{fig:adversarial_space_4} shows the effect of adversarial training when using 4 embedding types. Similar to the training with 2 embedding types, we observe strong type-based clusters for each embedding type in the beginning. These clusters get resolved during adversarial training and the embedding types become indistinguishable for the discriminator.
\begin{figure}
\centering
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[width=1.0\textwidth]{figures/space_4_embeds_pca.png}
\caption{w/o adversarial training.}
\label{fig:space_before_4}
\end{subfigure}
~
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[width=1.0\textwidth]{figures/space_4_embeds_adv_pca.png}
\caption{w/ adversarial training. }
\label{fig:adversarial_space_4}
\end{subfigure}
\caption{PCA plots of the common embedding space before (left) and after (right) adversarial training with 4 embeddings for SNLI experiments.}
\label{fig:common_space_4}
\end{figure}
\begin{table}
\centering
\footnotesize
\begin{tabular}{llccc}
\toprule
Task & Language & $\lambda$ & Dev F1 & Test F1 \\
\midrule
\multirow{2}{*}{\rotatebox{90}{CE}}
& En & 1e-5 & - & 89.23 \\
& Es & 1e-4 & - & 91.97 \\
\midrule
\multirow{4}{*}{\rotatebox{90}{NER}}
& De & 1e-3 & - & 91.10 \\
& En & 1e-6 & - & 93.67 \\
& Es & 1e-5 & - & 89.66 \\
& Nl & 1e-6 & - & 92.62 \\
\midrule
\multirow{26}{*}{\rotatebox{90}{POS Tagging}}
& Bg & 1e-6 & 99.29 & 99.33 \\
& Cs & 1e-6 & 99.35 & 99.22 \\
& Da & 1e-4 & 98.53 & 98.78 \\
& De & 1e-5 & 96.24 & 95.30 \\
& El & 1e-3 & 99.02 & 98.84 \\
& En & 1e-4 & 97.24 & 97.43 \\
& Es & 1e-4 & 96.90 & 97.47 \\
& Et & 1e-4 & 97.05 & 96.03 \\
& Eu & 1e-6 & 97.45 & 97.28 \\
& Fa & 1e-3 & 98.65 & 98.41 \\
& Fi & 1e-5 & 97.40 & 97.41 \\
& Fr & 1e-4 & 97.65 & 96.95 \\
& Ga & 1e-4 & 95.20 & 89.47 \\
& He & 1e-3 & 98.56 & 97.96 \\
& Hi & 1e-4 & 98.18 & 98.11 \\
& Hr & 1e-4 & 99.42 & 97.67 \\
& Id & 1e-3 & 92.80 & 93.66 \\
& It & 1e-3 & 98.75 & 98.60 \\
& Nl & 1e-4 & 97.56 & 94.36 \\
& No & 1e-5 & 99.30 & 99.01 \\
& Pl & 1e-4 & 98.74 & 98.71 \\
& Pt & 1e-4 & 98.86 & 98.67 \\
& Ro & 1e-5 & 94.16 & 95.29 \\
& Sl & 1e-4 & 99.29 & 99.11 \\
& Sv & 1e-3 & 98.72 & 98.74 \\
& Ta & 1e-3 & 90.50 & 91.10 \\
\midrule
NLI & En & 1e-3 & 86.57 & 85.89 \\
\bottomrule
\end{tabular}
\caption{Validation results and best $\lambda$ value found. NER and CE models were trained on a combination of training and development set.}
\label{tab:validation}
\end{table}
\section{Feature-Based Meta-Embeddings with Adversarial Training}\label{sec:approach}
We next describe meta-embeddings and their shortcomings and present our proposed approach
to improve
them with features and adversarial training.
\subsection{Meta-Embeddings} \label{sec:meta-embeddings}
We combine various embedding types as input to our neural networks.
As some embeddings are more beneficial for certain words, e.g., domain-specific embeddings for in-domain words, we weight the embeddings with an attention function, similar to \newcite{kiela-etal-2018-dynamic}.
In contrast to concatenation, this allows the network to decide on the word level which embedding type to focus on and reduces the dimensionality of the input to the first layer of our neural networks.
As our method is independent of the particular embeddings, we keep this section general and list the embeddings we use in our experiments in Section \ref{sec:embeddings}.
Given $n$ embedding types, we first map all embeddings $e_i, 1 \le i \le n$
to the same space with dimension $E$, using a non-linear mapping $x_i = \tanh(Q_i \cdot \vec{e_i} + \vec{b_i})$.
As the different embedding types have different dimensions, we learn individual weight matrices $Q_i$ and bias vectors $b_i$ per embedding type. The result $x_i \in \mathbb{R}^E$ is
the input embedding of the $i$-th type, $e_i$, mapped to size $E$, which is a hyperparameter.
\renewcommand{\vec}[1]{\mathbf{#1}}
Second, attention weights $\alpha_i$ are computed by:
\begin{equation}
\alpha_i = \frac{\exp(V \cdot \tanh(W x_i))}{\sum_{l=1}^n \exp(V \cdot \tanh(W x_l))}
\label{eq:attBaseline}
\end{equation}
with $W \in \mathbb{R}^{H \times E}$ and $V \in \mathbb{R}^{1 \times H}$ being parameter matrices that are randomly initialized and learned during training.
Finally, the embeddings $x_i$ are weighted using the attention weights $\alpha_i$ resulting in the word representation: $e^{ATT} = \sum_i \alpha_i \cdot x_i$.
This word representation
forms the input to our neural models which are described in Section \ref{sec:architectures}.
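A PyTorch-style sketch of this computation is given below; the module name, tensor shapes and default sizes are our own illustration, not the released implementation:
\begin{verbatim}
import torch
import torch.nn as nn

class MetaEmbedding(nn.Module):
    def __init__(self, input_dims, E=2048, H=10):
        super().__init__()
        # one projection Q_i (with bias b_i) per embedding type
        self.proj = nn.ModuleList(nn.Linear(d, E) for d in input_dims)
        self.W = nn.Linear(E, H, bias=False)
        self.V = nn.Linear(H, 1, bias=False)

    def forward(self, embeddings):
        # embeddings: list of n tensors of shape (batch, seq, d_i)
        x = torch.stack([torch.tanh(p(e))
                         for p, e in zip(self.proj, embeddings)], dim=2)
        scores = self.V(torch.tanh(self.W(x)))  # (batch, seq, n, 1)
        alpha = torch.softmax(scores, dim=2)    # weights over the n types
        return (alpha * x).sum(dim=2)           # e^{ATT}: (batch, seq, E)
\end{verbatim}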
\subsection{Shortcomings of Meta-Embeddings}
\label{sec:shortcomings}
By manual analysis of data, meta-embedding spaces and attention weights, we identified the following two shortcomings of existing meta-embedding approaches.
\textbf{Uninformed Computation of Attention Weights.}
Equation \ref{eq:attBaseline} for calculating attention weights only depends on $x_i$, the representation of the current word.\footnote{\newcite{kiela-etal-2018-dynamic} proposed two versions: using the word embeddings or using the hidden states of a bidirectional LSTM encoder. Our observation holds for both of them.}
As motivated before, different embedding types are beneficial for different types of words.
Given a sentence like ``Vicodin Tablets PO every six hours PRN'', the terms ``Vicodin'', ``PO'' and ``PRN'' can be best represented by embeddings that have been specialized on the clinical domain while the other words can also be modeled by embeddings trained on, e.g., news or Wikipedia data.
Despite this intuition, the model does not have access to those word characteristics and, thus, has to learn to recognize them in an unsupervised way during training.
We argue that depending on the amount of training data, this approach might not be successful (especially for new words in the test data) and might slow down the training process.
Therefore, we propose to give the attention function access to basic word characteristics (see Section \ref{sec:feature-based-attention}).
\textbf{Embedding Type Clusters in Common Space.}
As described in Section \ref{sec:meta-embeddings}, all embeddings are mapped into a common space before computing the attention weights and weighted averages. Figure \ref{fig:space_before} shows that the embeddings form type-based clusters in this space. The figure only shows two embedding types for a better overview but we observed similar structures for other embedding types as well (see supplementary material). Especially the dense cluster of fastText\xspace embeddings will draw the positions of the meta-embeddings towards this cluster when computing a weighted average of fastText\xspace and GloVe embeddings. However, it would arguably be more meaningful if the meta-embeddings could spread more freely in the space according to the meaning and context of the word.
To account for this, we propose adversarial learning of meta-embeddings in Section \ref{sec:adversarial}, leading to the common space shown in Figure \ref{fig:adversarial_space}.
\subsection{Proposed Method} \label{sec:our_embeddings}
In this section, we describe our contributions to address the previously mentioned shortcomings.
Our proposed method is visualized in Figure~\ref{fig:model}.
\begin{figure}
\centering
\includegraphics[width=.48\textwidth]{figures/model.png}
\caption{Overview of our model architecture.
$C$ (classifier), $D$ (discriminator) and $F$ (feature extractor) denote the components of adversarial training.}
\label{fig:model}
\end{figure}
\subsubsection{Feature-Based Attention}
\label{sec:feature-based-attention}
To allow the model to make an informed decision which embeddings to focus on, we propose to use the features described below as an additional input to the attention function. The word features are represented as a vector $f \in \mathbb{R}^F$ and integrated into the attention function from Equation \ref{eq:attBaseline} as follows:
\begin{align}
\alpha_i = \frac{\exp(V \cdot \tanh(W x_i + U f))}{\sum_{l=1}^n \exp(V \cdot \tanh(W x_l + U f))}
\label{eq:attention}
\end{align}
with $U \in \mathbb{R}^{H \times F}$ being a parameter matrix that is learned during training.
\paragraph{Features.}
We use the following task-independent features based on simple word characteristics.
\emph{- Length}:
Long words, in particular compounds,
are often less frequent in embedding vocabularies,
such that the word length can be an indicator for rare or out-of-vocabulary words.
We encode the lengths in 20-dimensional one-hot vectors. Words with more than 19 characters share the same vector.
\emph{- Frequency}:
High-frequency words can typically be modeled well by word-based embeddings, while low-frequency words are better captured with subword-based embeddings.
Moreover, frequency is domain-dependent and, thus, can help to decide between embeddings from different domains.
We estimate the frequency $f$ of a word in the general domain from its rank $r$ in the fastText\xspace-based embeddings provided by \newcite{fastText/Grave18}:
$f(r) = k / r$ with $k = 0.1$, following \newcite{manning1999foundations}.
Finally, we group the words into 20 bins as in~\newcite{mikolov12-lm} and represent their frequency with a 20-dimensional one-hot vector.
\emph{- Word Shape}:
Word shapes capture certain linguistic features
and are often part of manually designed feature sets, e.g., for CRF classifiers~\cite{lafferty01-crf}.
For example, uncommon word shapes can be indicators for domain-specific words, which can benefit from domain-specific embeddings.
We create 12 binary features that capture information on the word shape, including whether the first, any or all characters are uppercased, alphanumerical, digits or punctuation marks.
\emph{- Word Shape Embeddings:}
In addition, we train word shape embeddings (25 dimensions) similar to \newcite{limsopatham-collier-2016-bidirectional}.
For this, the shape of each word is converted by replacing letters with \textit{c} or \textit{C} (depending on the capitalization), digits with \textit{n} and punctuation marks with \textit{p}. For instance, \textit{Dec. 12th} would be converted to \textit{Cccp nncc}.
The resulting shapes are one-hot encoded and a trainable randomly initialized linear layer is used to compute the shape representation.
All sparse feature vectors (binary or one-hot encoded) are fed through a linear layer to generate a dense representation.
Finally, all features are concatenated into a single feature vector $f$ of 77 dimensions which is used in the attention function as described earlier.
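A sketch of how such a feature vector can be assembled is given below; the exact flag set, frequency binning and function names are assumptions for illustration, not the authors' exact code:
\begin{verbatim}
import string
import torch

def word_shape(word):
    # "Dec. 12th" -> "Cccp nncc": C/c for letters, n for digits, p else
    return "".join("C" if ch.isupper() else "c" if ch.isalpha()
                   else "n" if ch.isdigit() else "p" for ch in word)

def word_features(word, rank, shape_vec):
    # shape_vec: 25-dim trainable embedding of word_shape(word),
    # looked up elsewhere via a one-hot plus linear layer
    length = torch.zeros(20)                # one-hot word length, capped
    length[min(len(word), 19)] = 1.0
    freq = torch.zeros(20)                  # f(r) = 0.1/r, 20 bins
    freq[min(int(torch.log2(torch.tensor(float(rank))).item()), 19)] = 1.0
    flags = torch.tensor([                  # 12 binary shape flags
        word[0].isupper(), word.isupper(), word.islower(), word.istitle(),
        word.isalpha(), word.isalnum(), word.isdigit(),
        any(ch.isdigit() for ch in word),
        any(ch.isupper() for ch in word),
        any(ch in string.punctuation for ch in word),
        all(ch in string.punctuation for ch in word),
        word[0] in string.punctuation,
    ], dtype=torch.float)
    return torch.cat([length, freq, flags, shape_vec])  # 20+20+12+25 = 77
\end{verbatim}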
\subsubsection{Adversarial Learning of Meta-Embeddings}
\label{sec:adversarial}
As motivated in Figure \ref{fig:common_space} and Section \ref{sec:shortcomings}, the second part of our contribution is to avoid type-based clusters in the common embedding space.
In particular, we propose to use adversarial training for this purpose.
We adopt gradient-reversal training with three components:
a feature extractor $F$ consisting of the different embedding layers and the mapping function $Q$ to the common embedding space, a discriminator $D$ that tries to distinguish the different types of embeddings from each other, and a downstream classifier $C$ which is either a sequence tagger or a sentence classifier in our experiments (and is described in more detail in Section \ref{sec:architectures}).
The feature generator is shared between discriminator and downstream classifier and trained with gradient reversal to fool the discriminator.
To be more specific, the discriminator $D$ is a multinomial non-linear classification model with a standard cross-entropy loss function $L_D$.
In our sequence tagging experiments, the downstream classifier $C$ has a conditional random field (CRF) output layer and is trained with a CRF loss $L_C$ to maximize the log probability of the correct tag sequence \cite{ner/Lample16}. In our sentence classification experiments, $C$ is a multinomial classifier with cross-entropy loss $L_C$.
Let $\theta_F$, $\theta_D$, $\theta_C$ be the parameters of the feature generator, discriminator and downstream classifier, respectively.
Gradient reversal training will update the parameters as follows:
\begin{align}
\theta_D &= \theta_D - \eta \lambda \frac{\partial L_D}{\partial \theta_D}; \;\;\;
\theta_C = \theta_C - \eta \frac{\partial L_C}{\partial \theta_C}\\
\theta_F &= \theta_F - \eta (\frac{\partial L_C}{\partial \theta_F} - \lambda \frac{\partial L_D}{\partial \theta_F})
\end{align}
with $\eta$ being the learning rate and $\lambda$ being a hyperparameter to control the discriminator influence.
Thus, the parameters of the feature generator $\theta_F$ are updated in the opposite direction of the gradients from the discriminator loss. This has the effect that the different embedding types cannot form clusters in the common embedding space since these would be easily distinguishable by the discriminator.
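Gradient reversal itself can be implemented as a small custom autograd function; the following standard sketch follows \newcite{ganin2016} and is not necessarily identical to our implementation:
\begin{verbatim}
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)        # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # negated, scaled gradient in the backward pass
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1e-4):
    return GradReverse.apply(x, lam)

# The discriminator D sees grad_reverse(x_i), so minimizing L_D pushes
# the mappings Q_i to make the embedding types indistinguishable.
\end{verbatim}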
\section{Neural Architectures}
\label{sec:architectures}
In this section, we present the architectures we use for sentence classification and sequence tagging.
Note that our contribution concerns the embedding layer which can be used as input to any model for NLP, e.g., also sequence-to-sequence models.
\subsection{Input Layer}\label{sec:embeddings}
The input to our neural networks is the feature-based meta-embedding layer as described in Section \ref{sec:approach}.
Our methodology does not depend on the embedding types, i.e., it can incorporate any token representation.
In our experiments, we use the following embeddings:
GloVe \cite{pennington-etal-2014-glove},
BOW2 \cite{levy-goldberg-2014-dependency},
character-based embeddings (randomly initialized and accumulated to token embeddings using a bidirectional long short term memory network with 25 hidden units in each direction),
FLAIR \cite{flair/Akbik18},
\footnote{We treat the FLAIR forward and backward embeddings as independent embeddings.}
fastText\xspace \cite{fastText/Grave18},
byte-pair encoding embeddings (BPEmb) \cite{bpemb/heinzerling18},
XLM-RoBERTa \cite{conneau2019unsupervised} (multilingual embeddings for 100 languages),\footnote{The final embedding vector is computed using the scalar mixing operation proposed by \newcite{liu-etal-2019-linguistic} which combines all layers of the transformer model.}
ELMo~\cite{elmo/Peters18} and
domain-specific embeddings (
Clinical BERT \cite{alsentzer-etal-2019-publicly} trained on MIMIC,
and Spanish clinical fastText\xspace embeddings \cite{emb/bio/Soares19} trained on Wikipedia health categories and articles of the Scielo online archive).
More details on which embeddings we use for which task can be found in the supplementary material.
\subsection{Model for Sentence Classification}
For sentence classification, we follow \newcite{kiela-etal-2018-dynamic} and use a bidirectional long-short term memory (BiLSTM) sentence encoder with max-pooling. To determine natural language inference, premise and hypothesis are encoded individually. Then, their representations $u$ and $v$ are combined using $[u, v, u * v, |u-v|]$. Finally, a two-layer feed-forward network performs the classification.
\subsection{Model for Sequence Tagging}
\label{sec:system}
Our sequence tagger follows a state-of-the-art architecture \cite{ner/Lample16} with
bidirectional long-short term memory (BiLSTM) network \cite{lstm/Hochreiter97} and conditional random field (CRF) output layer \cite{lafferty01-crf}.
For training, the forward algorithm is used to sum the scores for all possible sequences.
During decoding, the Viterbi algorithm is applied to obtain the sequence with the maximum score.
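A minimal sketch of such a tagger is shown below, assuming the third-party \texttt{pytorch-crf} package for the CRF layer (an assumption for illustration, not our actual implementation):
\begin{verbatim}
import torch.nn as nn
from torchcrf import CRF   # pip install pytorch-crf

class Tagger(nn.Module):
    def __init__(self, emb_dim, num_tags, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden,
                            bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, embeds, tags, mask):
        scores = self.out(self.lstm(embeds)[0])
        return -self.crf(scores, tags, mask=mask)  # forward-algorithm NLL

    def decode(self, embeds, mask):
        # Viterbi decoding of the best-scoring tag sequence
        return self.crf.decode(self.out(self.lstm(embeds)[0]), mask=mask)
\end{verbatim}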
\section{Conclusion}\label{sec:conclusion}
In this paper, we identified two shortcomings of attention-based meta-embeddings:
First, attention weights are computed without the knowledge of characteristics of the word.
To address this, we extend the attention function with word features.
Second, the embeddings might form type-based clusters in the common embedding space.
To avoid this, we propose adversarial training for learning the mapping function.
We demonstrate the effectiveness of our approach on sentence classification and sequence tagging tasks
in different languages and domains.
Our analysis shows
that our approach successfully addresses the above mentioned shortcomings.
A possible future direction is the evaluation of our method on sequence-to-sequence tasks.
\section{Experiments and Results}\label{sec:experiments}
We now describe datasets, baseline models and the results of our experiments
for the tasks of natural language inference (NLI), named entity recognition (NER), and part-of-speech tagging (POS).
Training details can be found in the supplementary material.
\subsection{Data}
For NLI, we use the
SNLI corpus \cite{bowman-etal-2015-large}.
For NER, we use the four CoNLL benchmark datasets from the news domain (English/German/Dutch/Spanish) \cite{data/conll/Sang02,data/conll/Sang03}.
In addition,
we conduct experiments on two datasets from the clinical domain, the English i2b2 2010 concept extraction task \cite{i2b2/task/uzuner2010} and the Spanish PharmaCoNER task \cite{gonzalez-agirre-etal-2019-pharmaconer}.
For POS tagging, we use the universal dependencies treebanks version 1.2 (UPOS tag) and use the 27 languages for which \newcite{yasunaga-etal-2018-robust} reported numbers.
\subsection{Baseline Meta-Embeddings}
We use four baselines for our proposed feature-based meta-embeddings with adversarial training.
Note that for each experiment, our baselines and proposed models use the same set of embeddings.
\textbf{Concatenation\xspace.}
All $n$ embedding vectors $e_i$ are concatenated and used as input to the BiLSTM: $e^{CONCAT} = [e_1, ..., e_n]$ with $[,]$ denoting vector concatenation.
Note that this leads to a very high-dimensional input representation and therefore requires more parameters in the next neural network layer, which can be inefficient in practice.\footnote{The total number of model parameters depends not only on the input representation but also on the other layers, including the $Q_i$ mappings of attention-based meta-embeddings.}
\textbf{Summation\xspace.}
Our second baseline (SUM)
uses the sum of the embedding vectors (i.e., weighting all embedding types equally) in order to reduce the number of input dimensions: $e^{SUM} = \sum_{k=1}^n x_k$.
To be able to sum the different embeddings (with different dimensionalities), the embeddings are first mapped to the same size
as described in Section \ref{sec:meta-embeddings}.
As our third baseline, we test the \textbf{normalization} of all embeddings to a unit diameter before their summation to align the embedding spaces.
\textbf{\baselineMeta (ATT)\xspace.}
The fourth baseline implements the standard attention-based meta-embeddings\xspace approach, as proposed by \newcite{kiela-etal-2018-dynamic} and described in Section \ref{sec:meta-embeddings}.
\subsection{Evaluation Results}
We now present the results of our experiments.
In a first step, we integrate our proposed methods in an existing model for NLI, also showing the impact of our individual components. Then, we perform experiments for multilingual NER and POS tagging, and report statistical significance for our models in comparison to the meta-embeddings baseline with attention.
For NLI, we apply Welch’s t-test
on the model with median performance on the development set.
For NER and POS tagging, we use paired permutation testing with 2$^{20}$ permutations.
In all cases, we use a significance level of 0.05
and apply the Fisher correction for testing on multiple datasets following \newcite{dror-etal-2017-replicability}.
\paragraph{Natural Language Inference.}
To investigate the different components of our proposed method and show that it can easily be implemented into existing models, we extend the attention-based meta-embeddings approach by \newcite{kiela-etal-2018-dynamic}.\footnote{We use their code provided at \url{https://github.com/facebookresearch/DME}. Note that our numbers slightly differ from the numbers reported by \newcite{kiela-etal-2018-dynamic} as they used six embeddings. However, the two MT-based embeddings~\cite{hill2014embedding} are not accessible any longer.}
In particular, we add both our proposed feature-based attention and the adversarial training for learning the mapping between embedding spaces, and conduct experiments with both extensions in isolation and in combination.
Table~\ref{tab:nli_compare_results} provides the results in comparison to the baseline approaches.
Our model shows statistically significant differences to the existing meta-embeddings.\footnote{The
state-of-the-art model for SNLI with 91.9 test accuracy is based on fine-tuning BERT~\cite{zhang2019semantics}.}
Similar to \newcite{kiela-etal-2018-dynamic}, we observe that the attention-based meta-embeddings are not always superior to the unweighted summation.
This suggests that our two previously identified shortcomings of existing meta-embeddings can lead to performance decreases.
The normalization of all embeddings to a unit diameter performs worse than the summation, showing that this is not enough to alleviate the shortcoming of unaligned embedding spaces.
However, including the features, the adversarial training or both leads to consistent improvements.
\begin{table}[t]
\setlength\tabcolsep{3.5pt}
\centering
\footnotesize
\begin{tabular}{clll}
\toprule
& Model & \multicolumn{1}{c}{Dev} & \multicolumn{1}{c}{Test} \\
\midrule
\multirow{4}{*}{\rotatebox{90}{Baselines}}
& Concatenation\xspace
& 86.01$\pm$.22 & 85.43$\pm$.11 \\
& Summation\xspace
& 86.33$\pm$.33 & 85.57$\pm$.31 \\
& Sum. + Normalization\xspace
& 85.62$\pm$.20 & 85.16$\pm$.28 \\
& \baselineMeta (ATT)\xspace
& 86.21$\pm$.22 & 85.64$\pm$.35 \\
\midrule
\multirow{3}{*}{\rotatebox{90}{OUR}}
& ATT+FEAT\xspace
& 86.48$\pm$.26 & 85.76$\pm$.22 \\
& ATT+ADV\xspace
& 86.50$\pm$.09 & 85.83$\pm$.17 \\
& ATT+FEAT+ADV\xspace
& 86.57$\pm$.12 $\ast$\xspace & 85.89$\pm$.18 $\ast$\xspace \\
\bottomrule
\end{tabular}
\caption{SNLI results (accuracy). Comparing against the models of \newcite{kiela-etal-2018-dynamic} using four embeddings. $\ast$\xspace denotes statistical significance.}
\label{tab:nli_compare_results}
\setlength\tabcolsep{6pt}
\end{table}
\paragraph{Named Entity Recognition.}
Table \ref{tab:ner_results} shows the results on the popular CoNLL benchmark datasets for NER in comparison to the state of the art.
Our methods consistently improve upon the existing meta-embedding approaches and achieve state-of-the-art performance on 2 out of 4 languages
and competitive results on the others,
while maintaining a comparably low-dimensional input representation. In total, our model creates a representation of 2,048 dimensions, while other state-of-the-art systems, e.g., \newcite{flair/Akbik19} use up to 8,292 dimensions to represent input tokens.
\begin{table}[t]
\centering
\footnotesize
\setlength\tabcolsep{4.8pt}
\begin{tabular}{lllll}
\toprule
Model & \multicolumn{1}{c}{En} & \multicolumn{1}{c}{De} & \multicolumn{1}{c}{Nl} & \multicolumn{1}{c}{Es} \\
\midrule
\newcite{flair/Akbik19}
& 93.18 & 88.27 & 90.12 & - \\
\newcite{ner/Strakova19}
& 93.38 & 85.10 & 92.69 & 88.81 \\
\newcite{conneau2019unsupervised}
& 92.92 & 85.81 & 92.53 & 89.72 \\
\newcite{yu2020named}
& 93.3 & 90.2 & \bf 93.5 & \bf 90.3 \\
\midrule
Concatenation\xspace
& 93.45 & 90.43 & 92.32 & 89.28 \\
Summation\xspace
& 93.53 & 90.78 & 92.29 & 88.89 \\
Sum. + Normalization\xspace
& 93.56 & 90.85 & 92.41 & 89.13 \\
Meta-embeddings\xspace
& 93.47 & 90.61 & 92.45 & 88.91 \\
ATT+FEAT\xspace
& 93.56 & 90.87 & 92.57 & 89.11 \\
ATT+ADV\xspace
& 93.64 & 90.94 & 92.54 & 89.29 \\
Our ATT+FEAT+ADV\xspace
& \bf 93.75 & \bf 91.17 $\ast$\xspace
& 92.62 & 89.66 $\ast$\xspace \\
\bottomrule
\end{tabular}
\caption{NER results (\fscore) on CoNLL 2002 and 2003 datasets.
$\ast$\xspace denotes statistical significance. }
\setlength\tabcolsep{6pt}
\label{tab:ner_results}
\end{table}
Our baseline setting Concatenation\xspace slightly outperforms a model with a similar concatenation from~\newcite{flair/Akbik19}, which is also based on combining different embedding types,
probably because we added the recently released XLM-RoBERTa embeddings as well as BPEmb embeddings in this setting.
Our other baseline systems Summation\xspace and \baselineMeta (ATT)\xspace, which use summation and attention-based weighting instead of concatenation, achieve consistently better results on all languages, except for Spanish.
However, our proposed system ATT+FEAT+ADV\xspace
increases performance even further with statistically significant differences compared to the existing meta-embeddings\xspace approach on 2 out of 4 languages,
showing that our method is able to alleviate the previously mentioned shortcomings of meta-embeddings.
\paragraph{POS Tagging.}
Those effects are also reflected in the POS tagging results, as shown in Table~\ref{tab:pos_results},
even though the differences are smaller for this task. We set the new state of the art for 25 out of 27 languages from the UD 1.2 corpus. In particular, we observe remarkable differences between our method and the summation or concatenation for the low-resource languages, with up to 4 \fscore points for Tamil (Ta). On average, our proposed method is 1 \fscore point better than the existing meta-embeddings\xspace approach for low-resource languages, with statistically significant differences in 3 out of 6 languages.
\newcommand{\midrule}{\midrule}
\begin{table}[t!]
\centering
\footnotesize
\setlength\tabcolsep{5pt}
\begin{tabular}{l|ll|llll}
\toprule
& \multicolumn{1}{c}{Adv} & \multicolumn{1}{c|}{mBPE} & \multicolumn{1}{c}{CAT} & \multicolumn{1}{c}{SUM} & \multicolumn{1}{c}{ATT} & \multicolumn{1}{c}{OUR} \\
\midrule
Bg & 98.53 & 98.7 & 98.94 & 99.19 & 99.26 & \textbf{99.33} \\
Cs & 98.81 & 98.9 & 99.05 & 99.16 & 99.19 & \textbf{99.22} $\ast$\xspace \\
Da & 96.74 & 97.0 & 98.03 & 98.44 & 98.67 & \textbf{98.78} \\
De & 94.35 & 94.0 & 94.76 & 94.95 & 95.00 & \textbf{95.30} $\ast$\xspace \\
En & 95.82 & 95.6 & 97.10 & 97.34 & 97.39 & \textbf{97.43} \\
Es & 96.44 & 96.5 & 97.31 & 97.30 & 97.32 & \textbf{97.47} \\
Eu & 94.71 & 95.6 & 96.32 & 96.93 & 97.15 & \textbf{97.28} \\
Fa & 97.51 & 97.1 & 97.75 & 98.13 & \textbf{98.41} & \textbf{98.41} \\
Fi & 95.40 & 94.6 & 96.89 & 97.04 & 96.58 & \textbf{97.41} \\
Fr & 96.63 & 96.2 & 96.75 & 96.91 & \textbf{97.01} & 96.95 \\
He & 97.43 & 96.6 & 97.49 & 97.76 & 97.86 & \textbf{97.96} $\ast$\xspace \\
Hi & 97.21 & 97.0 & 97.93 & 97.85 & 98.04 & \textbf{98.11} \\
Hr & 96.32 & 96.8 & 97.55 & 97.48 & 97.48 & \textbf{97.67} \\
Id & \textbf{94.03} & 93.4 & 93.32 & 93.35 & 93.60 & 93.66 \\
It & 98.08 & 98.1 & 98.28 & 98.43 & 98.59 & \textbf{98.60} \\
Nl & 93.09 & 93.8 & 93.25 & 93.79 & 93.68 & \textbf{94.36} $\ast$\xspace \\
No & 98.08 & 98.1 & 98.85 & 98.96 & 99.00 & \textbf{99.01} \\
Pl & 97.57 & 97.5 & 97.95 & 98.68 & 98.69 & \textbf{98.71} \\
Pt & 98.07 & 98.2 & 98.31 & 98.53 & 98.55 & \textbf{98.67} \\
Sl & 98.11 & 98.0 & 98.63 & 99.03 & 99.10 & \textbf{99.11} \\
Sv & 96.70 & 97.3 & 98.16 & 98.42 & 98.53 & \textbf{98.74} $\ast$\xspace \\
\cline{1-7}
Avg & 96.65 & 96.6 & 97.27 & 97.51 & 97.58 & \textbf{97.72} \\
\midrule
El & 98.24 & 97.9 & 98.22 & 98.54 & 98.71 & \textbf{98.84} \\
Et & 91.32 & 92.8 & 91.63 & 93.20 & 95.19 & \textbf{96.03} $\ast$\xspace \\
Ga & \textbf{91.11} & 91.0 & 86.32 & 86.95 & 87.79 & 89.47 $\ast$\xspace \\
Hu & 94.02 & 94.0 & 95.93 & 96.22 & 97.14 & \textbf{97.72} \\
Ro & 91.46 & 89.7 & 92.69 & 94.57 & \textbf{95.29} & \textbf{95.29} \\
Ta & 83.16 & 88.7 & 84.51 & 87.73 & 87.93 & \textbf{91.10} $\ast$\xspace \\
\cline{1-7}
Avg & 91.55 & 92.4 & 91.55 & 92.87 & 93.67 & \textbf{94.74} \\
\bottomrule
\end{tabular}
\caption{Results (accuracy) for POS tagging (using gold segmentation). \textit{Adv} refers to results from \newcite{yasunaga-etal-2018-robust} and \textit{mBPE} to \newcite{heinzerling-strube-2019-sequence}. As \newcite{yasunaga-etal-2018-robust}, we split into high- (top) and low-resource languages (bottom). $\ast$\xspace denotes statistical significance.}
\label{tab:pos_results}
\setlength\tabcolsep{6pt}
\end{table}
\paragraph{Domain-specific NER.}
Table \ref{tab:clinical_ner_results} shows that we achieve state-of-the-art performance for NER in the clinical domain on both English and Spanish datasets. Thus, our method works robustly across domains. In Section \ref{sec:analysis}, we analyze when the model selects domain embeddings.
\begin{table}[th!]
\footnotesize
\centering
\begin{tabular}{lll}
\toprule
Model & \multicolumn{1}{c}{English} & \multicolumn{1}{c}{Spanish} \\
\midrule
\newcite{alsentzer-etal-2019-publicly} & 87.7 & - \\
\newcite{xiong-etal-2019-deep} & - & 91.1\\
\newcite{lange2020closing} & 88.9 & 91.4 \\
\midrule
Concatenation\xspace & 87.97 & 90.66 \\
Summation\xspace & 88.69 & 90.23 \\
Sum. + Normalization\xspace & 88.60 & 90.39 \\
\baselineMeta (ATT)\xspace & 88.87 & 91.33 \\
ATT+FEAT\xspace & 88.93 & 91.51 \\
ATT+ADV\xspace & 89.07 & 91.42 \\
Our ATT+FEAT+ADV\xspace & \bf 89.23 & \bf 91.97 $\ast$\xspace \\
\bottomrule
\end{tabular}
\caption{Clinical NER results (\fscore) for English and Spanish concept extraction. $\ast$\xspace: statistical significance.}
\label{tab:clinical_ner_results}
\end{table}
\section{Introduction}\label{sec:introduction}
The number of available word representations has increased considerably over the last years.
There are traditional context-independent word embeddings, such as word2vec \cite{mikolov-word2vec} and GloVe \cite{pennington-etal-2014-glove}, subword embeddings, such as byte-pair encoding embeddings \cite{bpemb/heinzerling18} and character-based embeddings \cite[e.g.,][]{ner/Lample16, ma-hovy-2016-end}, and more recently, context-aware embeddings, such as ELMo \cite{elmo/Peters18}, FLAIR \cite{flair/Akbik18} and BERT \cite{bert/Devlin19}.
Different embedding types can be beneficial in different cases, for instance, word embeddings are powerful for frequent words while character- or subword-based embeddings can model out-of-vocabulary words.
Also the data used for training embeddings has a large impact on performance, e.g., domain-specific embeddings can capture in-domain words that do not appear in general domains like news text.
As a result, restricting a model to only one embedding variant is suboptimal.
Therefore, some studies consider so-called meta-embeddings that combine various embedding types.
\newcite{yin-schutze-2016-learning}, for instance, concatenate different embeddings and then reduce their dimensionality.
\newcite{kiela-etal-2018-dynamic} propose to combine different embeddings with weighted sums where the weights are learned via attention.
\begin{figure}
\centering
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[width=1.0\textwidth]{figures/space_2_embeds_pca.png}
\caption{w/o adversarial training.}
\label{fig:space_before}
\end{subfigure}
~
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[width=1.0\textwidth]{figures/space_2_embeds_adv_pca.png}
\caption{w/ adversarial training. }
\label{fig:adversarial_space}
\end{subfigure}
\caption{PCA plots of the common embedding space before (left) and after (right) adversarial training.}
\label{fig:common_space}
\end{figure}
In this paper, we identify two shortcomings of existing approaches of attention-based meta-embeddings:
First, the computation of attention weights is \emph{uninformed}, ignoring properties of the word.
Even though certain embedding types are preferred for different words, e.g., for low-frequency words \cite{kiela-etal-2018-dynamic},
the model has no direct access to the information of how frequent a word is. Thus, we argue that the computation of attention weights is uninformed and the model needs to learn this information along with the attention weights during training in an unsupervised way.
While this might be possible for word properties like frequency, it is arguably much harder for, e.g., the domain-specificity of a word.
To address this shortcoming, we propose to learn \emph{feature-based} meta-embeddings that use word features, such as frequency and shape, as
input to the attention function. This helps the model to select the most appropriate embedding type in an informed manner.
The second shortcoming concerns the common space to which all embeddings need to be mapped to compute a weighted average over them. Figure \ref{fig:space_before} shows that the embeddings form
\emph{embedding-type-specific clusters}
in this space.
As a result, the positions of meta-embeddings (i.e., weighted averages of embeddings of different types) might be noisy and not distributed
by meaning in the embedding space.
We demonstrate that this reduces performance.
To remedy this problem, we propose to use \emph{adversarial training} to learn mapping functions such that a discriminator cannot distinguish the embedding types in the common space.
We conduct experiments for both sentence classification and sequence tagging. We use the natural language inference (NLI) task in direct comparison against existing meta-embeddings and further experiment with named entity recognition (NER) and part-of-speech tagging (POS) benchmarks
for several languages. We also investigate the effect of in-domain embeddings on NER in the clinical domain on two languages.
The code for our models will be made available.
To sum up, our contributions are as follows:
\noindent (i) We identify two shortcomings of existing meta-embedding approaches and show their negative impact on performance: uninformed attention weight computation and unaligned embedding spaces.
\noindent (ii) We propose feature-based meta-embeddings which are trained using adversarial learning as input representation for NLP tasks.
\noindent (iii) We evaluate our approach on different tasks, languages and domains, and
our experiments and analysis show its positive effects.
In particular, we set the new state of the art for 29 out of 33 POS and NER
benchmarks across languages and domains.
\section{Related Work}\label{sec:related}
This section surveys related work on meta-embeddings, attention and adversarial training.
\textbf{Meta-Embeddings.}
\label{sec:relatedWorkEmbeddings}
Previous work has seen performance gains by, for example, combining various types of word embeddings \cite{tsuboi-2014-neural} or
the same type trained on different corpora \cite{luo2014}.
For the combination, some alternatives have been proposed, such as different input channels of a convolutional neural network \cite{kim-2014-convolutional,zhang-etal-2016-mgnc}, concatenation followed by dimensionality reduction \cite{yin-schutze-2016-learning} or averaging of embeddings \cite{coates-bollegala-2018-frustratingly}, e.g., for combining embeddings from multiple languages~\cite{reid2020combining}.
More recently, auto-encoders \cite{bollegala-bao-2018-learning}, ensembles of sentence encoders \cite{poerner-etal-2020-sentence}
and attention-based methods \cite{kiela-etal-2018-dynamic} have been introduced. The latter allows a dynamic (input-based) combination of multiple embeddings.
\newcite{emb/Winata19} and \newcite{Priyadharshini2020} used similar attention functions to combine embeddings from different languages for NER in code-switching settings.
In this paper, we follow the idea
of attention-based meta-embeddings. We identify two shortcomings of existing approaches and propose a task-independent
method for improving them.
\textbf{Extended Attention.}
Attention has been introduced in the context of machine translation \cite{bahdanau2014neural} and is since
then widely used in NLP
\cite[i.a.,][]{tai-etal-2015-improved, xu15-show-attend-tell, yang-etal-2016-hierarchical,vaswani2017}.
One part of our proposed method is the integration of features into the attention function.
This is similar to extending the source of attention for uncertainty detection \cite{adel-schutze-2017-exploring} or
relation extraction \cite{zhang-etal-2017-position,li-etal-2019-improving}.
In contrast to these works, we use task-independent features derived from the token itself. Thus, we can use the same attention function for different tasks.
\textbf{Adversarial Training.}
The second part of our proposed method is motivated by the usage of adversarial training \cite{goodfellow2014} for creating input representations that are independent of a specific domain or feature.
This is related to using adversarial training for domain adaptation \cite{ganin2016} or coping with bias or confounding variables
\cite{li-etal-2018-towards,raff2018,zhang2018bias,barrett-etal-2019-adversarial,mchardy-etal-2019-adversarial}. Following \newcite{ganin2016}, we use gradient reversal training in this paper.
Recent studies use adversarial training on the word level
to enable cross-lingual transfer from a source to a target language \cite{zhang-etal-2017-adversarial,keung-etal-2019-adversarial,wang-etal-2019-weakly,bari2020}.
In contrast, our discriminator is not binary but multinomial (as in \newcite{chen-cardie-2018-multinomial})
and allows us to create a common space for embeddings from different embedding types.
Adversarial training has also been used
to strengthen non-textual representations, e.g., knowledge graphs~\cite{zeng2020learning} or networks~\cite{dai2019adversarial}.
\section{Introduction}
Nematic liquid crystals (NLCs) are classical examples of mesophases or intermediate phases of matter between the solid and liquid phases, with a degree of long-range orientational order \cite{dg}. Nematics are directional materials with locally preferred directions of molecular alignment, described as nematic ``directors'' in the literature. The directional nature of nematics makes them highly susceptible to external light and electric fields, and the working material of choice for the multi-billion dollar liquid crystal display industry \cite{bahadur1984liquid}. Recently, there has been substantial interest in new applications of NLCs in nano-technology, microfluidics, photonics and even security \cite{lagerwall2012new}. We build on a series of papers on NLCs in square wells, originally reported in \cite{tsakonas2007multistable} and followed up in recent years in \cite{luo2012multistability}, \cite{kralj2014order}, \cite{kusumaatmaja2015free}, \cite{lewis2014colloidal}, \cite{Walton2018}, \cite{canevari2017order}, \cite{wang2018order} etc. In \cite{tsakonas2007multistable}, the authors experimentally and numerically study NLC equilibria inside square wells with tangent boundary conditions on the lateral surfaces, which means that the nematic molecules on these surfaces preferentially lie in the plane of the surfaces. They study shallow wells and argue that it is sufficient to study the nematic profile on the square cross-section and hence model NLC equilibria on a square with tangent boundary conditions, which require the nematic directors to be tangent to the square edges, creating a necessary mismatch at the square corners. They report two experimentally observed NLC equilibria on micron-sized wells: the \emph{diagonal} solution, for which the nematic director is along a square diagonal, and the \emph{rotated} solution, for which the nematic director rotates by $\pi$ radians between a pair of parallel edges. They further model this system within a reduced two-dimensional continuum Landau-de Gennes (LdG) approach and recover the diagonal and rotated solutions numerically. The reduction from a 3D well to a 2D square domain can be rigorously justified using $\Gamma$-convergence \cite{wang2018order}.
In \cite{kralj2014order}, the authors study the effects of square size on NLC equilibria for this model problem with tangent boundary conditions. They measure square size in units of a material-dependent length scale, the biaxial correlation length, which is typically in the nanometer regime. For micron-sized squares, the authors recover the diagonal and rotated solutions within a continuum LdG approach as before. As they reduce the square size, particularly from the micron to the nano-scale, they find a unique Well Order Reconstruction Solution (WORS) for squares smaller than a certain critical size, which in turn depends on the material constants and temperature. The WORS is an interesting NLC equilibrium for two reasons: (i) it partitions the square into four quadrants, and the nematic director is approximately constant in each quadrant according to the tangent condition on the corresponding edge; and (ii) the WORS has a defect line along each square diagonal, and the two mutually perpendicular defect lines intersect at the square centre, yielding the quadrant structure. Indeed, we speculate that this distinctive defect structure could be a special optical signature of the WORS, if experimentally realised. The WORS has been analysed in \cite{canevari2017order} and \cite{wang2018order} in terms of solutions of the Allen-Cahn equation, and it is rigorously proven that the WORS is globally stable for sufficiently small squares, i.e., for nano-scale geometries. Recent work shows that the WORS is also observable in molecular simulations and is hence not a continuum artefact.
A potential criticism is that the WORS is an artefact of the 2D square domain and is hence not relevant for 3D scenarios. In this paper, we address the important question: does the WORS survive in a three-dimensional square box? As proven in \cite{wang2018order}, the WORS does survive in the \emph{thin film limit}, but can we observe the WORS for square wells with a finite height? The answer is affirmative, and we identify two physically relevant 3D scenarios for which the WORS exists, for all values of the well height and for all temperatures below the nematic supercooling temperature, i.e., for temperatures that favour a bulk ordered nematic phase. The paper is organised as follows. In Section~\ref{sub:ldg}, we review the LdG theory for NLCs and introduce the domain and the boundary conditions in Section~\ref{sect:Omega}. Our analytical results are restricted to Dirichlet tangent conditions for the nematic directors on the lateral surfaces of the well, phrased in the LdG framework. In Section~\ref{sect:natural}, we work with 3D wells that have natural boundary conditions on the top and bottom surfaces and study the existence, stability and qualitative properties of the WORS as a special case of a more general family of LdG equilibria; we believe these results to be of general interest. In Section~\ref{sect:surface}, we work with 3D wells that have realistic surface energies that favour planar boundary conditions on the top and bottom, and again prove the existence of the WORS for arbitrary well heights and low temperatures, accompanied by companion results for the surface energy. In Section~\ref{sec:numerics}, we perform a detailed numerical study of the 3D LdG model on 3D square wells. We discover novel mixed 3D solutions that interpolate between different diagonal solutions when the WORS is unstable. Further, we numerically study the effect of surface anchoring on the lateral surfaces on the stability of the WORS, in contrast to the analysis, which is restricted to Dirichlet conditions on the lateral surfaces. The WORS ceases to exist as we weaken the tangential boundary conditions on the lateral surfaces; this is expected from \cite{kralj2014order}, since the tangent conditions naturally induce the symmetry of the WORS in severe confinement. Our numerical results yield quantitative estimates for the existence of the WORS as a function of the anchoring strength on the lateral surfaces, and these estimates can be of value in further work. We summarise our conclusions in Section~\ref{sec:summary}.
\section{Preliminaries}
\subsection{The Landau-de Gennes model}
\label{sub:ldg}
We work with the Landau-de Gennes (LdG) theory for nematic liquid crystals.
The LdG theory is a powerful continuum theory for nematic liquid crystals
and describes the nematic state by a macroscopic order parameter ---
the LdG $\Qvec$-tensor, which is a symmetric traceless $3\times 3$ matrix i.e.
\[
\Qvec\in S_0 := \left\{ \Qvec\in \mathbb{M}^{3\times 3}\colon
Q_{ij} = Q_{ji}, \ Q_{ii} = 0 \right\}.
\]
A $\Qvec$-tensor is said to be (i) isotropic if~$\Qvec=0$, (ii) uniaxial
if $\Qvec$ has a pair of degenerate non-zero eigenvalues and can be written in the form
\[
\Qvec = s\left(\nvec \otimes \nvec - \frac{\mathbf{I}}{3} \right)
\]
where $\nvec$ is the eigenvector with the non-degenerate eigenvalue and
(iii) biaxial if~$\Qvec$ has three distinct eigenvalues.
We assume that the domain is a three-dimensional well, filled with nematic liquid crystals,
\[
V := \Omega\times (0, \, h),
\]
where $\Omega\subseteq{\mathbb R}^2$ is the two-dimensional cross-section of the well
(more precisely, a truncated square, as described in
Section~\ref{sect:Omega}) and~$h$ is the well height
\cite{tsakonas2007multistable, kralj2014order}. Let~$\Gamma$
be the union of the top and bottom plates, that is,
\[
\Gamma := \Omega\times\{0, \, h\}.
\]
In the absence of surface anchoring energy, we work with a simple form of the LdG energy given by~\cite{dg}
\begin{equation} \label{eq:dim-energy}
\mathcal{F}_\lambda[\Qvec] := \int_{V} \left(\frac{L}{2} \left|\nabla\Qvec \right|^2
+ f_b(\Qvec) \right) \d V.
\end{equation}
The term~$\abs{\nabla\Qvec}^2 := \partial_k Q_{ij} \partial_k Q_{ij}$,
for $i$, $j$, $k = 1, \, 2, \, 3$, is an elastic energy density
which penalises spatial inhomogeneities and $L>0$
is a material-dependent elastic constant.
The thermotropic bulk potential, $f_b$, is given by
\begin{equation} \label{eq:3}
f_b(\Qvec) := \frac{A}{2} \tr\Qvec^2 - \frac{B}{3} \tr\Qvec^3 + \frac{C}{4}\left(\tr\Qvec^2 \right)^2.
\end{equation}
The variable~$A = \alpha (T - T^*)$ is the re-scaled temperature,
$\alpha$, $B$, $C>0$ are material-dependent constants
and~$T^*$ is the characteristic supercooling
temperature~\cite{dg,newtonmottram}.
It is well-known that all stationary points of $f_b$ are either uniaxial or isotropic~\cite{dg,newtonmottram,ejam2010}.
The re-scaled temperature~$A$ has three characteristic values:
(i)~$A=0$, below which the isotropic phase $\Qvec=0$ loses stability,
(ii) the nematic-isotropic transition temperature,
$A={B^2}/{27 C}$, at which $f_b$ is minimized by the isotropic
phase and a continuum of uniaxial states with $s=s_+ ={B}/{3C}$ and
$\nvec$ arbitrary, and (iii) the nematic superheating temperature,
$A = {B^2}/{24 C}$, above which the isotropic state is the
unique critical point of $f_b$.
For a given $A<0$, let
$\mathcal{N} := \left\{ \Qvec \in S_0\colon
\Qvec = s_+ \left(\nvec\otimes \nvec - \Ivec/3 \right) \right\}$
denote the set of minimizers of the bulk potential, $f_b$, with
\begin{equation} \label{eq:s+}
s_+ := \frac{B + \sqrt{B^2 + 24|A| C}}{4C}
\end{equation}
and~$\nvec \in S^2$ arbitrary.
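For completeness, we recall the elementary computation behind~\eqref{eq:s+}. Substituting the uniaxial ansatz $\Qvec = s\left(\nvec\otimes \nvec - \Ivec/3 \right)$ into~\eqref{eq:3} and using $\tr\Qvec^2 = \frac{2}{3}s^2$ and $\tr\Qvec^3 = \frac{2}{9}s^3$, we obtain
\begin{equation*}
f_b(s) = \frac{A}{3}s^2 - \frac{2B}{27}s^3 + \frac{C}{9}s^4, \qquad
f_b'(s) = \frac{2s}{9}\left(3A - Bs + 2Cs^2\right) \!,
\end{equation*}
so that the non-zero stationary points are $s = \left(B \pm \sqrt{B^2 - 24AC}\right)/(4C)$. For $A<0$, the larger root is precisely $s_+$ in~\eqref{eq:s+} and corresponds to the minimum of~$f_b$.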
We non-dimensionalize the system using a change of variables,
$\bar{\rvec} = \rvec/ \lambda$,
where $\lambda$ is a characteristic length scale of the cross-section $\Omega$.
The rescaled domain and the rescaled top and bottom surfaces become
\begin{equation}
\overline{V} := \overline{\Omega} \times (0, \, \epsilon),
\qquad \overline{\Gamma} := \overline{\Omega} \times \{0, \, \epsilon\}
\end{equation}
where $\overline{\Omega}$ is the rescaled two-dimensional
domain and~$\epsilon := h / \lambda$.
The re-scaled LdG energy functional is
\begin{equation} \label{eq:LdG}
\overline{\mathcal{F}_\lambda}[\Qvec] := \frac{\mathcal{F}_\lambda[\Qvec]}{L \lambda} =
\int_{\overline{V}} \left(\frac{1}{2}\left| \overline{\nabla} \Qvec \right|^2
+ \frac{\lambda^2}{L} f_b(\Qvec) \right) \, \overline{\d V}.
\end{equation}
In~\eqref{eq:LdG},
$\overline{\nabla}$ is the gradient with respect to
the re-scaled spatial coordinates, $\overline{\d V}$
is the re-scaled volume element and~$\overline{\d S}$ is the re-scaled area element.
In what follows, we drop the \emph{bars} and all statements
are to be understood in terms of the re-scaled variables.
Critical points of ~\eqref{eq:LdG} satisfy the
Euler-Lagrange system of partial differential equations
\begin{equation} \label{eq:EL}
\Delta \Qvec = \frac{\lambda^2}{L}
\left\{ A\Qvec - B\left(\Qvec\Qvec - \frac{\Ivec}{3}|\Qvec|^2 \right)
+ C|\Qvec|^2 \Qvec \right\} \! .
\end{equation}
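Before specialising the domain and the boundary conditions, we record a minimal numerical illustration of the bulk term on the right-hand side of~\eqref{eq:EL}. The Python snippet below is a sketch of ours (the function name \texttt{bulk\_gradient} is hypothetical and not part of any standard library); it evaluates the bulk term for a given $\Qvec$-tensor and checks that the result is again symmetric and traceless, so that gradient-flow or Newton iterations for~\eqref{eq:EL} remain in~$S_0$. It also verifies that the bulk term vanishes at a uniaxial minimizer with $s = s_+$.
\begin{verbatim}
import numpy as np

def bulk_gradient(Q, A, B, C):
    # Bulk term of (eq:EL), up to the factor lambda^2/L:
    # A*Q - B*(Q@Q - |Q|^2 I/3) + C*|Q|^2 Q
    normsq = np.tensordot(Q, Q)          # |Q|^2 = Q_ij Q_ij
    return (A * Q - B * (Q @ Q - normsq * np.eye(3) / 3.0)
            + C * normsq * Q)

# Example: uniaxial Q with director z-hat and order parameter s = s_+
A, B, C = -1.0, 1.0, 1.0
s = (B + np.sqrt(B**2 - 24.0 * A * C)) / (4.0 * C)   # s_+ = 1.5 here
n = np.array([0.0, 0.0, 1.0])
Q = s * (np.outer(n, n) - np.eye(3) / 3.0)
G = bulk_gradient(Q, A, B, C)
assert abs(np.trace(G)) < 1e-12 and np.allclose(G, G.T)
assert np.allclose(G, 0.0)   # Q in N is a critical point of f_b
\end{verbatim}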
\subsection{The 2D domain, $\Omega$,
and the boundary conditions on the lateral surfaces}
\label{sect:Omega}
Let $\Omega$ be our two-dimensional (2D) cross-section which
is a truncated square with diagonals along the coordinate axes:
\begin{equation} \label{Omega}
\Omega := \left\{(x, \, y)\in{\mathbb R}^2\colon
|x| < 1 - \eta, \ |y| < 1 - \eta,
\ |x+y| < 1, \ |x-y| < 1 \right\}
\end{equation}
(see Figure~\ref{fig:square}).
Here~$\eta\in (0, \, 1)$ is a small, but fixed parameter.
The boundary, $\partial\Omega$, consists of four ``long'' edges~$C_1$,\ldots, $C_4$,
parallel to the lines ~$y = x$ and~$y = -x$,
and four ``short'' edges~$S_1, \, \ldots, \, S_4$, of length~$2\eta$,
parallel to the $x$ and $y$-axes respectively.
The four long edges~$C_i$ are labeled counterclockwise and $C_1$
is the edge contained in the first quadrant, i.e.
\[
C_1 := \left\{(x, \, y)\in{\mathbb R}^2\colon x + y = 1, \ \eta \leq x \leq 1 - \eta \right\} \! .
\]
The short edges~$S_i$ are introduced to remove the sharp square vertices.
They are also labeled counterclockwise and
$S_1 := \{(1 - \eta, \, y)\in{\mathbb R}^2\colon |y|\leq \eta\}$.
\begin{figure}[t]
\centering
\includegraphics[height=5cm]{label_square.eps}
\caption{The `truncated square'~$\Omega$. }
\label{fig:square}
\end{figure}
We impose Dirichlet conditions
on the lateral surfaces of the well,
$\partial\Omega\times (0, \, \epsilon)$:
\begin{equation} \label{eq:bc-Dir}
\Qvec = \Qvec_{\mathrm{b}} \quad \textrm{on }
\partial V\setminus\Gamma = \partial\Omega\times (0, \, \epsilon),
\end{equation}
where the boundary datum~$\Qvec_{\mathrm{b}}$ is independent of the $z$-variable,
$\partial_z\Qvec_{\mathrm{b}}\equiv 0$.
Following the literature on planar multistable
nematic systems~\cite{tsakonas2007multistable, luo2012multistability, kralj2014order}, we impose
\emph{tangent} uniaxial Dirichlet conditions on the long edges, $C_1, \, \ldots, \, C_4$:
\begin{equation} \label{eq:bc1}
\Qvec_{\mathrm{b}}(x, \, y, \, z) := \begin{cases}
s_+\left( \nvec_1 \otimes \nvec_1 - \Ivec/3 \right) & \textrm{if } (x, \, y)\in C_1 \cup C_3 \\
s_+\left( \nvec_2 \otimes \nvec_2 - \Ivec/3 \right) & \textrm{if } (x, \, y)\in C_2 \cup C_4;
\end{cases}
\end{equation}
where $s_+$ is defined in (\ref{eq:s+})
and
\begin{equation} \label{eq:n12}
\nvec_1 := \frac{1}{\sqrt{2}}\left(-1, \, 1, \, 0 \right),
\qquad \nvec_2 := \frac{1}{\sqrt{2}}\left(1, \, 1, \, 0 \right).
\end{equation}
We prescribe Dirichlet conditions on the short edges too, in terms of a function
$g_0\colon [-\eta, \, \eta]\to [-s_+/2, \, s_+/2]$
chosen to eliminate the discontinuities of the tangent Dirichlet boundary condition, e.g.,
\[
g_0(s) := \frac{s_+}{2\eta} s, \qquad
\textrm{for } -\eta \leq s \leq\eta,
\]
but the choice of $g_0$ does not affect qualitative predictions or numerical results.
We define
\begin{equation} \label{eq:bc2}
\Qvec_{\mathrm{b}}(x, \, y, \, z) := \begin{cases}
g_0(y) \left(\nvec_1 \otimes \nvec_1 - \nvec_2\otimes \nvec_2 \right)
- \dfrac{s_+}{6}\left(2 \zhat\otimes\zhat
- \nvec_1\otimes \nvec_1 - \nvec_2\otimes \nvec_2 \right)
& \textrm{if } (x, \, y)\in S_1\cup S_3, \\
g_0(x)\left(\nvec_1 \otimes \nvec_1 - \nvec_2\otimes \nvec_2 \right)
- \dfrac{s_+}{6}\left(2 \zhat\otimes\zhat
- \nvec_1\otimes \nvec_1 - \nvec_2\otimes \nvec_2 \right)
& \textrm{if } (x, \, y)\in S_2\cup S_4.
\end{cases}
\end{equation}
Given the Dirichlet conditions~\eqref{eq:bc-Dir},
our admissible class of~$\Qvec$-tensors is
\begin{equation} \label{eq:admissible}
\mathcal{B} := \left\{ \Qvec \in W^{1,2}(V, \, S_0)\colon
\Qvec = \Qvec_{\mathrm{b}} \ \textrm{on }
\partial\Omega\times(0, \, \epsilon) \right\}.
\end{equation}
\section{The Well Order Reconstruction Solution (WORS) in a three-dimensional (3D) context and related results}
\label{sec:Wors_3D}
In \cite{kralj2014order}, the authors numerically report the Well Order Reconstruction Solution (WORS) on the 2D domain, $\Omega$, with the Dirichlet conditions (\ref{eq:bc1}), which is further analysed in \cite{canevari2017order}. At the fixed temperature $A = -\frac{B^2}{3C}$, the WORS corresponds to a classical solution of the Euler-Lagrange equations (\ref{eq:EL}) of the form
\begin{equation}
\label{eq:wors}
\Qvec_{WORS} = q \left( \nvec_1 \otimes \nvec_1 - \nvec_2 \otimes \nvec_2 \right) - \frac{B}{6C} \left(2 \zhat \otimes \zhat -\nvec_1 \otimes \nvec_1 - \nvec_2\otimes \nvec_2 \right)
\end{equation}
with a single degree of freedom, $q\colon\Omega\to \mathbb{R}$, which satisfies the Allen-Cahn equation and has the following symmetry properties:
\[
q \left(x, 0 \right) = q\left( 0, y \right) = 0, \qquad xy \, q(x,y) \geq 0.
\]
The WORS has a constant set of eigenvectors: the unit-vectors $\nvec_1, \nvec_2$
defined by~\eqref{eq:n12} and the coordinate unit-vector $\zhat$. Very importantly, it has two mutually perpendicular defect lines along the square diagonals, intersecting at the square centre and described by the nodal lines of $q$ above. These are defect lines in the sense that $\Qvec_{WORS}$ is uniaxial along the diagonals with negative order parameter, which physically implies that the nematic molecules lie in the plane of the square without a preferred in-plane direction along the defect lines, i.e., they are locally disordered in the square plane there.
In \cite{canevari2017order}, the authors prove that the WORS is globally stable for $\lambda$ small enough and unstable for $\lambda$ large enough. The analysis in \cite{canevari2017order} is restricted to a special temperature, but numerics show that the WORS exists for all $A<0$ with the diagonal defect lines, and the eigenvalue associated with $\zhat$ is negative for all $A<0$. The negative eigenvalue (associated with $\zhat$) implies that nematic molecules lie in the $(x,y)$-plane and, for non-zero $q$, there is a locally defined nematic director in the $(x,y)$-plane. In the next sections, we study the relevance of the WORS in 3D contexts, i.e., does the WORS survive in 3D scenarios, and what can be said about its qualitative properties?
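As a quick sanity check of~\eqref{eq:wors} (an illustration of ours, not part of the analysis), the following Python lines assemble $\Qvec_{WORS}$ for sample values of the scalar degree of freedom and confirm the eigenvalue structure described above: on the diagonals, where $q = 0$, two eigenvalues coincide and the tensor is uniaxial with negative order parameter along~$\zhat$, while for $q \neq 0$ the three eigenvalues are distinct.
\begin{verbatim}
import numpy as np

B, C = 1.0, 1.0
n1 = np.array([-1.0, 1.0, 0.0]) / np.sqrt(2.0)
n2 = np.array([ 1.0, 1.0, 0.0]) / np.sqrt(2.0)
zhat = np.array([0.0, 0.0, 1.0])

def Q_wors(q):
    # WORS ansatz (eq:wors) with scalar degree of freedom q
    return (q * (np.outer(n1, n1) - np.outer(n2, n2))
            - B / (6.0 * C) * (2.0 * np.outer(zhat, zhat)
                               - np.outer(n1, n1) - np.outer(n2, n2)))

print(np.linalg.eigvalsh(Q_wors(0.0)))  # [-B/(3C), B/(6C), B/(6C)]: uniaxial
print(np.linalg.eigvalsh(Q_wors(0.3)))  # three distinct eigenvalues: biaxial
\end{verbatim}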
\subsection{Natural boundary conditions on the top and bottom plates}
\label{sect:natural}
In this section, we study a special class of LdG critical points, including the WORS, with natural boundary conditions
on the top and bottom plates.
Minimizers of~$\mathcal{F}_\lambda$ (see~\eqref{eq:LdG}),
in the admissible class~$\mathcal{B}$ in~\eqref{eq:admissible}
satisfy the Euler-Lagrange system~\eqref{eq:EL},
subject to the Dirichlet boundary conditions~\eqref{eq:bc-Dir}
along with \emph{natural} or Neumann boundary conditions
on the top and bottom plates i.e.
\begin{equation} \label{eq:bc-nat}
\partial_z\Qvec = 0 \qquad \textrm{on } \Gamma = \Omega\times\{0, \, \epsilon\} .
\end{equation}
Throughout this section,
we will treat~$L>0$, $B$, $C>0$ as fixed parameters,
while~$\lambda$ and~$A$ may vary.
\begin{proposition} \label{prop:2D}
For any~$\lambda>0$ and~ $A < 0$,
there exist minimizers~$\Qvec$ of ~$\mathcal{F}_\lambda$,
in~\eqref{eq:LdG}, in the admissible class~$\mathcal{B}$,
(see~\eqref{eq:admissible}). Moreover, minimizers are independent of the $z$-variable,
that is~$\partial_z\Qvec=0$ on~$V$, and they minimize the 2D functional
\begin{equation} \label{eq:2D-LdG}
I[\Qvec] := \int_{\Omega}
\left(\frac{1}{2}\left|\nabla\Qvec \right|^2
+ \frac{\lambda^2}{L} f_b(\Qvec) \right) \d S
\end{equation}
in the class
\begin{equation} \label{eq:2D-admissible}
\mathcal{B}^\prime := \left\{ \Qvec \in W^{1,2}(\Omega, \, S_0)\colon
\Qvec = \Qvec_{\mathrm{b}} \ \textrm{on } \partial\Omega \right\} \! .
\end{equation}
\end{proposition}
This result can be proved, e.g., as in~\cite[Theorem~0]{Bethuel1992}.
\newline \emph{Any} ($z$-independent) critical point of the functional~$I$
in the admissible class~$\mathcal{B}^\prime$ is also a solution of the
three-dimensional system~\eqref{eq:EL},
subject to the boundary conditions~\eqref{eq:bc-Dir} and~\eqref{eq:bc-nat}.
This necessarily implies that the WORS is an LdG critical point on 3D wells $V$ of arbitrary height $\epsilon$, with natural boundary conditions on $\Gamma$.
Therefore, in the rest of this section, we restrict ourselves to a 2D problem --- the analysis of critical points of~$I$
in ~$\mathcal{B}^\prime$.
Our first result concerns the existence of a WORS-like solution for all $A<0$, as proven below.
\begin{proposition} \label{prop:WORS}
For any~$\lambda>0$ and~$A<0$, there exists a solution~$(q_1^{WORS}, \, q_3^{WORS})$
of the system
\begin{equation} \label{PDE-WORS}
\left\{\begin{aligned}
\Delta q_1 &= \frac{\lambda^2}{L} q_1 \left\{A + 2B q_3 + 2C\left(q_1^2 + 3q_3^2 \right)\right\} \\
\Delta q_3 &= \frac{\lambda^2}{L} q_3 \left\{A - Bq_3 + 2C\left(q_1^2 + 3q_3^2\right)\right\}
+ \frac{\lambda^2B}{3L}q_1^2
\end{aligned} \right.
\end{equation}
subject to the boundary conditions
\begin{equation}
q_1(x, \, y) = q_{1\mathrm{b}} (x, \, y) :=
\begin{cases}
-{s_+}/{2} & \textrm{if } (x, \, y)\in C_1\cup C_3 \\
{s_+}/{2} & \textrm{if } (x, \, y)\in C_2\cup C_4 \\
g_0(y) & \textrm{if } (x, \, y)\in S_1\cup S_3 \\
g_0(x) & \textrm{if } (x, \, y)\in S_2\cup S_4.
\end{cases}
\end{equation}
and $q_3 = -{s_+}/{6}$ on $\partial\Omega$, which satisfies
\begin{equation} \label{WORS}
xy\, q_1(x, \, y) \geq 0, \quad
q_3(x, \, y) < 0
\qquad \textrm{for any } (x, \, y)\in\Omega.
\end{equation}
Then
\begin{equation} \label{Q-WORS2D}
\begin{split}
\Qvec(x, \, y) &= q_1^{WORS}(x, \, y) \left(\nvec_1 \otimes \nvec_1 - \nvec_2\otimes \nvec_2 \right)
+ q_3^{WORS}(x, \, y) \left(2 \zhat\otimes\zhat
- \nvec_1\otimes \nvec_1 - \nvec_2\otimes \nvec_2 \right) \! ,
\end{split}
\end{equation}
is a WORS solution of the Euler-Lagrange system \eqref{eq:EL} on~$V$, subject to the Dirichlet conditions \eqref{eq:bc-Dir} and natural boundary conditions on~$\Gamma$.
\end{proposition}
\begin{proof}
We follow the approach in~\cite{canevari2017order}. Let
\[
\Omega_+ := \left\{(x, \, y)\in \Omega\colon x>0, \ y > 0\right\}
\]
be the portion of~$\Omega$ that is contained in the first quadrant.
For solutions of the form (\ref{Q-WORS2D}), the LdG energy reduces to
\begin{equation} \label{eq:LdG-WORS2}
\begin{split}
G[q_1, \, q_3] &:= \int_{\Omega_+} \left\{
\abs{\nabla q_1}^2 + 3 \abs{\nabla q_3}^2
+ \frac{\lambda^2}{L} \left(A (q_1^2 + 3 q_3^2 )
+ 2 B q_3 q_1^2 - 2 B q_3^3 + C(q_1^2 + 3 q_3^2)^2\right)\right\} \d S.
\end{split}
\end{equation}
We minimize~$G$ in the admissible class
\[
\mathcal{G} := \left\{(q_1, \, q_3)\in H^1(\Omega_+)^2\colon q_3 \leq 0 \textrm{ in } \Omega_+, \ \
q_1 = q_{1\mathrm{b}} \textrm{ on } \partial\Omega\cap \overline{\Omega_+}, \ \
q_3 = -s_+/6 \textrm{ on } \partial\Omega\cap \overline{\Omega_+}, \ \
q_1 = 0 \textrm{ on } \partial\Omega_+\setminus\partial\Omega\right\} \!.
\]
We impose no boundary conditions for~$q_3$ on~$\partial\Omega_+\setminus\partial\Omega$.
The function~$q_{1\mathrm{b}}$ is compatible with the Dirichlet conditions~\eqref{eq:bc-Dir}.
The class~$\mathcal{G}$ is closed and convex. Therefore,
a routine application of the direct method of the calculus of variations shows
that a minimizer~$(q_1^{WORS}, \, q_3^{WORS})$ exists. Moreover,
we can assume without loss of generality that $q_1^{WORS}\geq 0$ on~$\Omega_+$
--- otherwise, we replace $q_1^{WORS}$ with~$|q^{WORS}_1|$ and note
that $G[q_1^{WORS}, \, q_3^{WORS}] = G[|q_1^{WORS}|, \, q_3^{WORS}]$.
We now claim that~$q_3^{WORS}<0$ in~$\Omega_+$. To prove this,
let us consider a function~$\varphi\in H^1(\Omega_+)$ such that~$\varphi\geq 0$
in~$\Omega_+$ and~$\varphi = 0$ on~$\partial\Omega\cap \overline{\Omega_+}$.
For sufficiently small~$t\geq 0$, the function
$q_3^t := q_3^{WORS} - t\varphi$ is an admissible perturbation of~$q_3^{WORS}$,
and hence, we have
\[
\frac{\d}{\d t}_{|t = 0} G[q_1^{WORS}, \, q_3^t] \geq 0
\]
because $(q_1^{WORS}, \, q_3^{WORS})$ is a minimizer. The derivative on the
left-hand side can be computed explicitly and we obtain
\begin{equation} \label{subsol}
\int_{\Omega_+} \left\{ -6 \nabla q_3^{WORS}\cdot\nabla\varphi
- \frac{\lambda^2}{L} f(q_1^{WORS}, \, q_3^{WORS}) \, \varphi\right\} \d S \geq 0
\end{equation}
where
\begin{equation} \label{f}
f(q_1, \, q_3) := 6 q_3 \left(A - Bq_3 + 6C q_3^2\right)
+ 2(B + 6C q_3)q_1^2
\end{equation}
For~$|q_3|$ sufficiently small, we have
$A - Bq_3 + 6C q_3^2 < 0$ and~$B + 6C q_3>0$, because~$A<0$ and~$B>0$.
Therefore, there exists~$\delta>0$ (depending only on~$A$, $B$, $C$)
such that
\begin{equation} \label{signf}
f(q_1, \, q_3) > 0 \qquad \textrm{for any } q_1\in{\mathbb R}
\textrm{ and } q_3\in [-\delta, \, 0].
\end{equation}
Now, we define
\begin{equation} \label{varphi-subsol}
\varphi := \begin{cases}
q_3^{WORS} + \delta & \textrm{where } q_3^{WORS} > -\delta \\
0 & \textrm{where } q_3^{WORS}\leq -\delta.
\end{cases}
\end{equation}
By taking~$\delta < s_+/6$, we can make sure that~$\varphi = 0$
on~$\partial\Omega\cap \overline{\Omega_+}$.
Then, we can substitute~$\varphi$ into~\eqref{subsol} and obtain
\begin{equation*}
\int_{\{q_3^{WORS}>-\delta\}} \left\{ 6 \abs{\nabla q_3^{WORS}}^2
+ \frac{\lambda^2}{L}
f(q_1^{WORS}, \, q_3^{WORS}) \, (q_3^{WORS} + \delta)\right\}
\d S \leq 0.
\end{equation*}
Due to~\eqref{signf}, we conclude that
$q_3^{WORS}\leq -\delta < 0$ in~$\Omega_+$.
In particular, $(q_1^{WORS}, \, q_3^{WORS})$ lies in the interior of the
admissible set~$\mathcal{G}$ and hence, it solves the Euler-Lagrange
system~\eqref{PDE-WORS} for the functional~$G$, together with the natural boundary condition
$\partial_\nu q_3=0$ on~$\partial \Omega_+ \setminus\partial \Omega$.
We extend~$(q_1^{WORS}, \, q_3^{WORS})$ to the whole of~$\Omega$
by reflections about the planes~$\{x = 0\}$ and~$\{y = 0\}$:
\[
q_1^{WORS}(x, \, y) := \mathrm{sign}(xy)\, q_1^{WORS}(\abs{x}, \, \abs{y}), \qquad
q_3^{WORS}(x, \, y) := q_3^{WORS}(\abs{x}, \, \abs{y})
\]
for any~$(x, \, y)\in \Omega\setminus\overline{\Omega_+}$.
An argument based on elliptic regularity,
as in~\cite[Theorem~3]{dangfifepeletier},
shows that $(q_1^{WORS}, \, q_3^{WORS})$ is a solution
of~\eqref{PDE-WORS} on~$\Omega$, satisfies the boundary conditions
and~\eqref{WORS}, by construction.
\end{proof}
The WORS is a special case of critical points of~\eqref{eq:2D-LdG} that have $\zhat$ as a constant eigenvector and can be completely described by three degrees of freedom, i.e., they can be written as
\begin{equation} \label{Q-ansatz}
\begin{split}
\Qvec(x, \, y) &= q_1(x, \, y) \left(\nvec_1 \otimes \nvec_1 - \nvec_2\otimes \nvec_2 \right)
+ q_2(x, \, y) \left(\nvec_1 \otimes \nvec_2 + \nvec_2\otimes \nvec_1 \right) \\
&\qquad\qquad +
q_3(x, \, y) \left(2 \zhat\otimes\zhat
- \nvec_1\otimes \nvec_1 - \nvec_2\otimes \nvec_2 \right) \! ,
\end{split}
\end{equation}
where~$q_1$, $q_2$, $q_3$ are scalar functions and~$\nvec_1$,
$\nvec_2$ are given by~\eqref{eq:n12}.
For solutions of the form~\eqref{Q-ansatz}, the LdG Euler-Lagrange
system~\eqref{eq:EL} reduces to
\begin{equation} \label{eq:EL123}
\left\{\begin{aligned}
\Delta q_1 &= \frac{\lambda^2}{L} q_1 \left\{A + 2B q_3 + 2C\left(q_1^2 + q_2^2 + 3q_3^2 \right)\right\} \\
\Delta q_2 &= \frac{\lambda^2}{L} q_2 \left\{A + 2B q_3 + 2C\left(q_1^2 + q_2^2 + 3q_3^2 \right)\right\} \\
\Delta q_3 &= \frac{\lambda^2}{L} q_3 \left\{A - Bq_3 + 2C\left(q_1^2 + q_2^2 + 3q_3^2\right)\right\}
+ \frac{\lambda^2B}{3L}\left( q_1^2 + q_2^2 \right).
\end{aligned} \right.
\end{equation}
This is precisely the Euler-Lagrange system associated with the functional
\begin{equation} \label{eq:J}
J[q_1, \, q_2, \, q_3]: = \int_{\Omega} \left(
\abs{\nabla q_1}^2 + \abs{\nabla q_2}^2 + 3 \abs{\nabla q_3}^2
+ \frac{\lambda^2}{L} F(q_1, \, q_2, \, q_3)\right) \d S,
\end{equation}
where~$F$ is the polynomial potential given by
\begin{equation} \label{eq:F}
F(q_1, \, q_2, \, q_3) := A \left( q_1^2 + q_2^2 + 3 q_3^2 \right)
+ 2 B q_3 \left( q_1^2 + q_2^2\right) - 2 B q_3^3 + C\left( q_1^2 + q_2^2 + 3 q_3^2 \right)^2.
\end{equation}
The Dirichlet boundary condition~\eqref{eq:bc-Dir} for~$\Qvec$
translates into boundary conditions for~$q_1$, $q_2$ and~$q_3$:
\begin{equation} \label{eq:BC123}
q_1 = q_{1\mathrm{b}}, \quad q_2 = 0, \quad q_3 = -{s_+}/{6}
\qquad \textrm{on } \partial\Omega,
\end{equation}
where the function~$q_{1\mathrm{b}}$ is defined by
\begin{equation} \label{eq:BC1}
q_{1\mathrm{b}} (x, \, y) :=
\begin{cases}
-{s_+}/{2} & \textrm{if } (x, \, y)\in C_1\cup C_3 \\
{s_+}/{2} & \textrm{if } (x, \, y)\in C_2\cup C_4 \\
g_0(y) & \textrm{if } (x, \, y)\in S_1\cup S_3 \\
g_0(x) & \textrm{if } (x, \, y)\in S_2\cup S_4.
\end{cases}
\end{equation}
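Although the analysis in this section does not rely on numerics, the system~\eqref{eq:EL123} with the boundary conditions~\eqref{eq:BC123} is straightforward to relax numerically, and the following schematic Python sketch may help the reader reproduce WORS-type profiles. We stress that this is an assumption-laden illustration of ours, not the solver used in Section~\ref{sec:numerics}: it works on the full square $[-1,1]^2$ rather than the truncated domain~$\Omega$ (so the corner smoothing $g_0$ is replaced by a jump in the boundary data), and it evolves a pseudo-time relaxation whose steady states solve~\eqref{eq:EL123}.
\begin{verbatim}
import numpy as np

A, B, C, lam2_L = -1.0, 1.0, 1.0, 50.0   # illustrative; lam2_L = lambda^2/L
s_plus = (B + np.sqrt(B**2 + 24.0 * abs(A) * C)) / (4.0 * C)

N = 129
x = np.linspace(-1.0, 1.0, N)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")

q1_b = -0.5 * s_plus * np.sign(X * Y)    # tangent data; a jump replaces g_0
q1 = np.zeros((N, N))
q1[0, :], q1[-1, :] = q1_b[0, :], q1_b[-1, :]
q1[:, 0], q1[:, -1] = q1_b[:, 0], q1_b[:, -1]
q2 = np.zeros((N, N))                    # q2 = 0 on the boundary, as in (eq:BC123)
q3 = -s_plus / 6.0 * np.ones((N, N))     # q3 = -s_+/6 on the boundary

def lap(u):
    # five-point Laplacian on interior nodes
    return (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
            - 4.0 * u[1:-1, 1:-1]) / h**2

dt = 0.2 * h**2                          # explicit pseudo-time step
for _ in range(20000):
    r2 = q1**2 + q2**2 + 3.0 * q3**2
    f1 = q1 * (A + 2.0 * B * q3 + 2.0 * C * r2)
    f2 = q2 * (A + 2.0 * B * q3 + 2.0 * C * r2)
    f3 = q3 * (A - B * q3 + 2.0 * C * r2) + (B / 3.0) * (q1**2 + q2**2)
    q1[1:-1, 1:-1] += dt * (lap(q1) - lam2_L * f1[1:-1, 1:-1])
    q2[1:-1, 1:-1] += dt * (lap(q2) - lam2_L * f2[1:-1, 1:-1])
    q3[1:-1, 1:-1] += dt * (lap(q3) - lam2_L * f3[1:-1, 1:-1])
# At steady state, (q1, q2, q3) solves (eq:EL123) on the grid; the nodal
# lines of q1 along the coordinate axes mimic the WORS defect lines.
\end{verbatim}
Since the $q_2$-equation in~\eqref{eq:EL123} is linear in~$q_2$, the relaxation above preserves $q_2\equiv 0$ and stays within the WORS class~\eqref{Q-WORS2D}; seeding $q_2$ with small random noise probes the loss of stability analysed in Proposition~\ref{prop:unstable} below.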
By adapting the methods in \cite{Ignatetal},
we can construct solutions~$(q_1, \, q_2, \, q_3)$
to the system~\eqref{eq:EL123}, subject to the boundary
conditions~\eqref{eq:BC123}, that
satisfy~$q_3<0$ in~$\Omega$ and are locally stable. The WORS is a specific example of such a solution with a constant eigenframe and two degrees of freedom. In fact, the results in \cite{wang2018order} show that the WORS loses stability with respect to solutions of the form \eqref{Q-ansatz}, with $q_2 \neq 0$, as $\lambda$ increases.
We say that a solution~$(q_1, \, q_2, \, q_3)$ of~\eqref{eq:EL123} is
locally stable if, for any perturbations $\varphi_1$, $\varphi_2$,
$\varphi_3\in C^1_{\mathrm{c}}(\Omega)$, there holds
\begin{equation} \label{eq:second_var}
\delta^2 J(q_1, \, q_2, \, q_3) [\varphi_1, \, \varphi_2, \, \varphi_3] :=
\frac{\d^2}{\d t^2}_{|t = 0} J[q_1 + t\varphi_1, \, q_2 + t\varphi_2, \, q_3 + t\varphi_3]\geq 0.
\end{equation}
Given a locally stable solution~$(q_1, \, q_2, \, q_3)$ of~\eqref{eq:EL123},
the corresponding $\Qvec$-tensor, defined by~\eqref{Q-ansatz},
is a solution of~\eqref{eq:EL} and is locally stable in the restricted
class of $\Qvec$-tensors that have $\zhat$ as a constant eigenvector.
\begin{proposition} \label{prop:stable}
For any~$A<0$, there exists a solution~$(q_{1,*}, \, q_{2,*}, \, q_{3,*})$
of the system~\eqref{eq:EL123}, subject to the boundary
conditions~\eqref{eq:BC123}, that is locally stable
and has~$q_{3,*}<0$ everywhere in~$\Omega$.
\end{proposition}
\begin{proof}
Let~$\mathcal{A}$ be the set of triplets
$(q_1, \, q_2, \, q_3)\in H^1(\Omega)^3$
that satisfy the boundary conditions~\eqref{eq:BC123}.
The boundary data are piecewise of class~$C^1$, so the class~$\mathcal{A}$ is non-empty.
Moreover, $\mathcal{A}$ is convex and closed in~$H^1(\Omega)^3$.
To construct solutions with negative~$q_3$, we first introduce the class
\begin{equation} \label{A-}
\mathcal{A}^- := \left\{(q_1, \, q_2, \, q_3)\in\mathcal{A}
\colon q_3\leq 0 \ \textrm{ a.e. on }\Omega \right\} \!.
\end{equation}
The class~$\mathcal{A}^-$ is a non-empty, convex and closed subset of~$H^1(\Omega)^3$.
A routine application of the direct method in the calculus of variations
shows that the functional~$J$, defined by~\eqref{eq:J},
has a minimizer~$(q_{1,*}, \, q_{2,*}, \, q_{3,*})$ in the class~$\mathcal{A}^-$.
To complete the proof, it suffices to show that~$q_{3,*}\leq-\delta$ in~$\Omega$,
for some strictly positive constant~$\delta$. Once this inequality is proved,
it will follow that~$(q_{1,*}, \, q_{2,*}, \, q_{3,*})$ lies in the interior of~$\mathcal{A}^-$
and hence, it is a locally stable solution of the Euler-Lagrange system~\eqref{eq:EL123}.
To prove that~$q_{3,*}\leq-\delta$ in~$\Omega$, we follow the same method
as in Proposition~\ref{prop:WORS}. Let~$\varphi\in H^1(\Omega)$ be
such that~$\varphi\geq 0$ in~$\Omega$ and~$\varphi = 0$ on~$\partial\Omega$,
then
\[
\frac{\d}{\d t}_{|t = 0} J[q_{1,*}, \, q_{2,*}, \, q_{3,*} - t\varphi] \geq 0,
\]
and hence,
\begin{equation} \label{subsol*}
\int_{\Omega} \left\{ -6 \nabla q_{3,*}\cdot\nabla\varphi
- \frac{\lambda^2}{L} f_*(q_{1,*}, \, q_{2,*}, \, q_{3,*}) \, \varphi\right\} \d S \geq 0,
\end{equation}
where
\[
f_*(q_1, \, q_2, \, q_3) := 6 q_3 \left(A - Bq_3 + 6C q_3^2\right)
+ 2(B + 6C q_3)\left(q_1^2 + q_2^2\right).
\]
As before, there exists a number~$\delta>0$
(depending only on~$A$, $B$, $C$) such that
\begin{equation*}
f_*(q_1, \, q_2, \, q_3) > 0 \qquad \textrm{for any } q_1\in{\mathbb R}, \ q_2\in{\mathbb R}
\textrm{ and } q_3\in [-\delta, \, 0].
\end{equation*}
We can now show that~$q_{3,*}\leq -\delta$ in~$\Omega$
by repeating the same arguments of Proposition~\ref{prop:WORS}.
\end{proof}
We now consider solutions of~\eqref{eq:EL123}, \eqref{eq:BC123}
that satisfy~$q_3<0$ in~$\Omega$, and prove bounds on $q_3$ as a function of the re-scaled temperature $A$.
\begin{lemma} \label{lemma:maxprinciple}
Any solution~$(q_1, \, q_2, \, q_3)$ of the system~\eqref{eq:EL123},
subject to ~\eqref{eq:BC123}, satisfies
\[
q_1^2 + q_2^2 + 3q_3^2 \leq \frac{s_+^2}{3} \qquad \textrm{in } \Omega,
\]
where $s_+$ is the constant defined by~\eqref{eq:s+}.
\end{lemma}
This lemma can be deduced from the corresponding
maximum principle for the full LdG system~\eqref{eq:EL};
see for instance~\cite[Proposition~3]{majumdar2010landau}.
\begin{lemma} \label{lemma:Hardy123}
Let~$(q_1, \, q_2, \, q_3)$ be a solution of the system~\eqref{eq:EL123},
subject to ~\eqref{eq:BC123},
such that~$q_3<0$ everywhere in~$\Omega$. Then $ q_1^2 + q_2^2 < 9q_3^2 $ everywhere in $\Omega$.
\end{lemma}
\begin{proof}
Define the functions~$\xi_1:= -q_1/q_3$ and~$\xi_2 := -q_2/q_3$.
Then, for~$k\in\{1, \, 2\}$, we have
\begin{gather*}
\nabla\xi_k = -\frac{1}{q_3}\nabla q_k + \frac{q_k}{q_3^2} \nabla q_3 \\
\Delta\xi_k = -\frac{1}{q_3}\Delta q_k + \frac{q_k}{q_3^2} \Delta q_3
+ \frac{2}{q_3^2}\nabla q_3\cdot\nabla q_k
- \frac{2q_k}{q_3^3} \abs{\nabla q_3}^2
= -\frac{1}{q_3}\Delta q_k + \frac{q_k}{q_3^2} \Delta q_3
- \frac{2}{q_3}\nabla q_3\cdot\nabla\xi_k
\end{gather*}
Using the system~\eqref{eq:EL123}, for~$k\in\{1, \, 2\}$ we obtain
\begin{equation} \label{hardy31}
\begin{split}
\Delta\xi_k + \frac{2}{q_3}\nabla q_3\cdot\nabla\xi_k
&= \frac{\lambda^2}{L}\bigg\{-\frac{q_k}{q_3}\left(A + 2B q_3 + 2C(q_1^2 + q_2^2 + 3q_3^2)\right) \\
&\qquad\qquad
+ \frac{q_k}{q_3}\left(A - B q_3 + 2C(q_1^2 + q_2^2 + 3q_3^2)\right)\bigg\}
+ \frac{\lambda^2B}{3L}\frac{q_k}{q_3^2}\left( q_1^2 + q_2^2 \right) \\
&= \frac{\lambda^2B}{3L} q_k \left(-9 + \xi_1^2 + \xi_2^2\right) \! .
\end{split}
\end{equation}
Now, we define a non-negative function~$\xi$
by~$\xi^2 := \xi_1^2+\xi_2^2$. We have
\[
\Delta(\xi^2/2) = \xi_1\Delta\xi_1 + \xi_2\Delta\xi_2
+ \abs{\nabla\xi_1}^2 + \abs{\nabla\xi_2}^2
\]
and hence, thanks to~\eqref{hardy31},
\[
\Delta(\xi^2/2) + \frac{2}{q_3}\nabla q_3\cdot
\left(\xi_1\nabla\xi_1 + \xi_2\nabla\xi_2\right)
= \frac{\lambda^2B}{3L} (q_1\xi_1 + q_2\xi_2)\left(\xi^2 - 9\right)
+ \abs{\nabla\xi_1}^2 + \abs{\nabla\xi_2}^2.
\]
Finally, we obtain
\begin{equation} \label{hardy32}
\Delta(\xi^2/2) + \frac{2}{q_3}\nabla q_3\cdot\nabla(\xi^2/2)
\geq \underbrace{-\frac{\lambda^2B}{3L} \frac{q_1^2 + q_2^2}{q_3}}_{\geq 0}
\left(\xi^2 - 9\right).
\end{equation}
From the boundary conditions~\eqref{eq:BC123}, we know
that~$\xi = \abs{\xi_1} \leq 3$ on~$\partial\Omega$.
Then, the (strong) maximum principle applied to the differential
inequality~\eqref{hardy32} implies that $\xi^2 < 9$
everywhere inside~$\Omega$. Thus, the lemma follows.
\end{proof}
We define
\[
s_- := \frac{B - \sqrt{B^2 + 24|A| C}}{4C}<0.
\]
In the following propositions, we prove bounds on~$q_3$, in terms of~$s_+$
(see~\eqref{eq:s+}) and~$s_-$.
\begin{proposition} \label{prop:bounds-highA}
Let $-\frac{B^2}{3C} \leq A < 0$ so that
\begin{equation}\label{ineq1}
\frac{s_{-}}{3}\geq-\frac{s_{+}}{6}\geq-\frac{B}{6C}.
\end{equation} Let $(q_1, \, q_2, \, q_3)$
be any solution of the PDE system~\eqref{eq:EL123},
satisfying the boundary conditions~\eqref{eq:BC123},
with $q_3<0$ in $\Omega$. Then
\begin{equation} \begin{split}
-\frac{s_{+}}{6}\leq q_3\leq\frac{s_{-}}{3} \quad \text{in}\quad\Omega.
\end{split} \end{equation}
\end{proposition}
\begin{proof}
Firstly, we shall prove the upper bound
$q_3\leq s_{-}/3$ in $\Omega$. Assume for a contradiction,
that the maximum of $q_3$ is attained at some point
$(x_0, \, y_0)\in\Omega$ such that $q_3(x_0, \, y_0)>s_{-}/3$.
Then $s_-/3 < q_3(x_0, \, y_0) < 0$ and, by~\eqref{ineq1}, $q_3(x_0, \, y_0) > -\frac{B}{6C}$. Since $s_-/3$ and $s_+/3$ are the roots of the quadratic $6Cq^2 - Bq + A$, we have the factorization
\begin{equation*}
Aq_3 - Bq_3^2 + 6Cq_3^3 = 6C\, q_3 \left(q_3 - \frac{s_-}{3}\right) \left(q_3 - \frac{s_+}{3}\right) \!,
\end{equation*}
whose three factors at $(x_0, \, y_0)$ are negative, positive and negative respectively, so that
\begin{equation*}
Aq_3(x_0,y_0) - Bq_3^2(x_0,y_0) + 6Cq_3^3(x_0,y_0) > 0,
\end{equation*}
and
\begin{equation*}
2Cq_3(x_0,y_0)+\frac{B}{3}>2C\left(-\frac{B}{6C}\right)+\frac{B}{3}=0.
\end{equation*}
We evaluate both sides of the equation for $q_3\in C^2(\Omega)$
in~\eqref{eq:EL123} at the point~$(x_0, \, y_0)$:
\begin{equation*} \begin{split}
\underbrace{\Delta q_3(x_0,y_0)}_{\leq 0}
&=\underbrace{\frac{\lambda^2}{L}\left\{Aq_3(x_0,y_0)-Bq_3^2(x_0,y_0)+6Cq_3^3(x_0,y_0)\right\}}_{>0} \\
&+\underbrace{\frac{\lambda^2}{L}\left\{2Cq_3(x_0,y_0)+\frac{B}{3}\right\}
\left(q_1^2(x_0,y_0)+q_2^2(x_0,y_0)\right)}_{\geq 0},
\end{split} \end{equation*}
which leads to a contradiction. Since~$q_3 = -s_+/6\leq s_-/3$
on~$\partial\Omega$, we conclude that $q_3\leq s_-/3$ on~$\Omega$.
We now prove the weaker lower bound $q_3\geq-\frac{B}{6C}$
in $\Omega$. Assume, for a contradiction, that the minimum of $q_3$
is attained at some point $(x_1,y_1)\in\Omega$ such that
$q_3(x_1,y_1)<-\frac{B}{6C}$. By~\eqref{ineq1}, $q_3(x_1, \, y_1) < -\frac{B}{6C} \leq \frac{s_-}{3}$, so all three factors in the factorization above are negative at $(x_1, \, y_1)$ and
\begin{equation*}
Aq_3(x_1,y_1) - Bq_3^2(x_1,y_1) + 6Cq_3^3(x_1,y_1) < 0,
\end{equation*}
and
\begin{equation*}
2Cq_3(x_1,y_1)+\frac{B}{3} < 2C\left(-\frac{B}{6C}\right)+\frac{B}{3}=0.
\end{equation*}
Recalling the equation for $q_3\in C^2(\Omega)$ in~\eqref{eq:EL123} and the boundary conditions~\eqref{eq:BC123}, we get an immediate contradiction and obtain the lower bound
$q_3\geq-\frac{B}{6C}$.
We are now ready to prove the optimal lower bound
$q_3\geq-\frac{s_{+}}{6}$ in $\Omega$.
Recalling Lemma~\ref{lemma:Hardy123} and $q_3\geq-\frac{B}{6C}$, we have that:
\begin{equation}\label{eq:MC}
\begin{split}
\Delta q_3 &=\frac{\lambda^2}{L}\left\{Aq_3-Bq_3^2+6Cq_3^3\right\}+\frac{\lambda^2}{L}\left\{2Cq_3+\frac{B}{3}\right\}(q_1^2+q_2^2) \\
&\leq \frac{\lambda^2}{L}\left\{Aq_3-Bq_3^2+6Cq_3^3\right\}+\frac{\lambda^2}{L}\left\{2Cq_3+\frac{B}{3}\right\}9q_3^2 \\
&\leq \frac{\lambda^2}{L}\left\{Aq_3+2Bq_3^2+24Cq_3^3\right\} \quad \text{in}\quad\Omega.
\end{split}
\end{equation}
Assume, for a contradiction, that the minimum of $q_3$ is attained at some point $(x_2, \, y_2)\in\Omega$ such that $q_3(x_2,y_2)<-\frac{s_{+}}{6}$. Since $-s_+/6$ and $-s_-/6$ are the roots of the quadratic $24Cq^2 + 2Bq + A$, we have the factorization
\begin{equation*}
Aq_3 + 2Bq_3^2 + 24Cq_3^3 = 24C\, q_3 \left(q_3 + \frac{s_+}{6}\right) \left(q_3 + \frac{s_-}{6}\right) \!,
\end{equation*}
and all three factors are negative at $(x_2, \, y_2)$, because $q_3(x_2, \, y_2) < -\frac{s_+}{6} < 0 < -\frac{s_-}{6}$. Hence
\begin{equation*}
Aq_3(x_2,y_2) + 2Bq_3^2(x_2,y_2) + 24Cq_3^3(x_2,y_2) < 0,
\end{equation*}
which when combined with the equation~\eqref{eq:MC}, yields $\Delta q_3(x_2, y_2)<0$.
This is a contradiction, and the desired result follows.
\end{proof}
\begin{corollary} \label{cor:bounds}
Assume that $A=-\frac{B^2}{3C}$ and let $(q_1, \, q_2, \, q_3)$ be a solution of~\eqref{eq:EL123}
subject to boundary conditions~\eqref{eq:BC123} with $q_3<0$ in $\Omega$.
Then $q_3 = -\frac{s_{+}}{6}$ in~$\Omega$.
\end{corollary}
Finally, we have the following inequalities for $A<-\frac{B^2}{3C}$:
\begin{gather}
\frac{s_{-}}{3}<-\frac{s_{+}}{6}<-\frac{B}{6C}. \label{ineq2}
\end{gather}
\begin{proposition} \label{prop:bounds-lowA}
Assume that $A<-\frac{B^2}{3C}$. Let $(q_1, \, q_2, \, q_3)$ be any solution of the
PDE system~\eqref{eq:EL123} satisfying ~\eqref{eq:BC123}
with $q_3<0$ in $\Omega$. Then
\begin{equation}
\frac{s_{-}}{3}\leq q_3\leq-\frac{s_{+}}{6} \quad \text{in}\quad\Omega.
\end{equation}
\end{proposition}
\textbf{Remark:} The proof of Proposition~\ref{prop:bounds-lowA} is completely analogous to
that of Proposition~\ref{prop:bounds-highA}. We first prove the lower bound $q_3\geq s_-/3$,
then the weaker upper bound $q_3\leq-\frac{B}{6C}$, and finally the sharp upper bound $q_3\leq -s_+/6$.
Each step is obtained by repeating almost word by word the arguments of Proposition~\ref{prop:bounds-highA}.
We omit the details for brevity. $\Box$
\subsubsection{Stability/Instability of the WORS with natural boundary conditions.}
We first recall a result from \cite{canevari2017order} that ensures that the WORS is globally stable with natural boundary conditions on $\Gamma$, for arbitrary well heights, i.e., for all values of $\epsilon$.
\begin{lemma} \label{lemma:uniqueness-s}
For any $A < 0$, there exists~$\lambda_0 > 0$ (depending only
on~$A$, $B$, $C$, $L$) such that, for~$\lambda < \lambda_0$, the
functional~\eqref{eq:2D-LdG} has a unique critical point
in the class~\eqref{eq:2D-admissible}.
\end{lemma}
This result follows from a general uniqueness criterion for critical
points of functionals of the form~\eqref{eq:LdG}; see, e.g.,
\cite[Lemma~8.2]{lamy2014}, \cite[Lemma~3.2]{canevari2017order}. The WORS exists for all $\lambda$ and $A<0$ and an immediate consequence is that the WORS is the unique LdG energy minimizer for sufficiently small $\lambda$.
We now study the instability of the WORS, when~$\lambda$ is large and~$A$ is low enough, with respect to in-plane perturbations of the eigenframe; such perturbations necessarily have a non-zero $q_2$ component. To this end, we take a
function~$\varphi\in C^1_{\mathrm{c}}(\Omega)$ and consider the perturbation
\[
\Qvec_t(x, \, y) := \Qvec(x, \, y) + t\varphi(x, \, y)
\left(\nvec_1\otimes\nvec_2 + \nvec_2\otimes\nvec_1 \right) \!,
\]
where~$\nvec_1$, $\nvec_2$ are defined by~\eqref{eq:n12}
and~$t\in{\mathbb R}$ is a small parameter. We compute the second
variation of the LdG energy ~\eqref{eq:2D-LdG}, about the WORS solution as discussed in Proposition~\ref{prop:WORS}:
\begin{equation} \label{H}
H_\lambda[\varphi] := \frac{1}{2}\frac{\d^2}{\d t^2} I[\Qvec_t]_{|t=0} =
\int_{\Omega} \left(\abs{\nabla\varphi}^2 + \frac{\lambda^2}{L} \varphi^2
\left(A + 2 Bq_{3} + 2C(q_{1}^2 + 3 q_{3}^2)\right)\right) \d S
\end{equation}
(see \cite[Section~5.3]{wang2018order}).
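Reusing the arrays and parameters of the relaxation sketch above (again purely as an illustration of ours), the sign of~$H_\lambda[\varphi]$ can be estimated by a simple quadrature:
\begin{verbatim}
def H_lambda(phi, q1, q3):
    # Quadrature for (H); phi must vanish on the boundary of the grid
    gx = (phi[1:, :] - phi[:-1, :]) / h
    gy = (phi[:, 1:] - phi[:, :-1]) / h
    grad2 = (gx**2).sum() + (gy**2).sum()
    pot = A + 2.0 * B * q3 + 2.0 * C * (q1**2 + 3.0 * q3**2)
    return (grad2 + lam2_L * (phi**2 * pot).sum()) * h**2

phi = np.cos(0.5 * np.pi * X) * np.cos(0.5 * np.pi * Y)  # zero on the boundary
print(H_lambda(phi, q1, q3))   # negative for lam2_L large enough; see below
\end{verbatim}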
\begin{proposition} \label{prop:unstable}
Let $A\leq -\frac{B^2}{3C}$. Let~$(q_1, \, q_2, \, q_3)$ be
a solution of~\eqref{eq:EL123}, subject to the boundary conditions~\eqref{eq:BC123},
such that $q_2 = 0$ and~$q_3 < 0$ everywhere in~$\Omega$, such as the WORS-solution constructed in Proposition~\ref{prop:WORS}.
For any function~$\varphi\in C^1_{\mathrm{c}}(\Omega)$
that is not identically equal to zero, there exists a
number~$\lambda_0>0$ (depending on~$A$, $B$, $C$, $L$ and~$\varphi$) such that
$H_\lambda[\varphi]<0$ when~$\lambda\geq\lambda_0$.
\end{proposition}
\begin{proof}
Due to Lemma~\ref{lemma:maxprinciple} and Proposition~\ref{prop:bounds-lowA}, we have
\begin{equation*} \begin{split}
A+2Bq_3+2C(q_1^2 + 3q_3^2)
&\leq A-\frac{Bs_{+}}{3}+\frac{2Cs_{+}^2}{3} \\
&=A-\frac{B}{3}\left(\frac{B+\sqrt{B^2+24|A|C}}{4C}\right)+\frac{2C}{3}\left(\frac{2B^2+2B\sqrt{B^2+24|A|C}+24|A|C}{16C^2}\right) \\
&=A+|A|=0.
\end{split} \end{equation*}
The equality holds if and only if $q_{3} = -s_+/6$ and~$q_{1}^2 + 3q_{3}^2 = s_+^2/3$,
that is, if and only if $\abs{q_{1}} = s_+/2$ and~$q_{3} = -s_+/6$. However,
from Lemma~\ref{lemma:Hardy123} we know that $3q_{3} < q_{1} < -3q_{3}$
inside~$\Omega$, so we must have
\[
A + 2Bq_{3} + 2C(q_{1}^2 + 3q_{3}^2) < 0 \qquad
\textrm{everywhere inside } \Omega.
\]
Then, for any fixed~$\varphi\in C^\infty_{\mathrm{c}}(\Omega)$
that is not identically equal to zero, the quantity~$H_\lambda[\varphi]$
defined by~\eqref{H} becomes strictly negative for~$\lambda$ large enough.
\end{proof}
\subsection{Surface anchoring on the top and bottom plates}
\label{sect:surface}
In this section, we consider more experimentally relevant
boundary conditions on the top and bottom plates,
$\Gamma:= \Omega \times\{0, \, \epsilon\}$.
Instead of natural boundary conditions, we impose surface
energies on~$\Gamma$. The free energy functional,
in dimensionless units, becomes
\begin{equation} \label{eq:LdG-s}
\mathcal{F}_\lambda[\Qvec] :=
\int_V \left(\frac{1}{2}\abs{\nabla\Qvec}^2
+ \frac{\lambda^2}{L}f_b(\Qvec)\right) \d V
+ \frac{\lambda}{L} \int_{\Gamma} f_s(\Qvec) \, \d S,
\end{equation}
where~$f_s$ is the surface anchoring energy density defined by
\cite{OsipovHess, Sluckin, SenSullivan, golovaty2017dimension}
\begin{equation} \label{eq:fs}
f_s(\Qvec) := \alpha_z \left(\Qvec\zhat\cdot\zhat + \frac{s_+}{3}\right)^2
+ \gamma_z \abs{(\Ivec - \zhat\otimes\zhat)\Qvec\zhat}^2,
\end{equation}
where~$\alpha_z$ and~$\gamma_z$ are positive coefficients.
We remark that the second term in~\eqref{eq:fs}, $\gamma_z |(\Ivec - \zhat\otimes\zhat)\Qvec\zhat|^2$,
is equal to zero if and only if $\Qvec\zhat$ is parallel to~$\zhat$.
Therefore, the surface energy density~$f_s$ favours $\Qvec$-tensors that have~$\zhat$ as an eigenvector, with
constant eigenvalue~$-s_+/3$, on the top and bottom plates.
We retain the Dirichlet boundary conditions \eqref{eq:bc-Dir} on the lateral surfaces, and the admissible class is $\mathcal{B}$, defined by~\eqref{eq:admissible}.
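As a small illustrative aid (a sketch of ours; the name \texttt{f\_s} simply mirrors the notation above), the surface density~\eqref{eq:fs} can be evaluated directly, and one checks that planar uniaxial states with director in the $(x,y)$-plane and $s = s_+$ carry no surface energy:
\begin{verbatim}
import numpy as np

def f_s(Q, alpha_z, gamma_z, s_plus):
    # Surface anchoring density (eq:fs)
    zhat = np.array([0.0, 0.0, 1.0])
    Qz = Q @ zhat
    normal = Qz @ zhat + s_plus / 3.0       # Q zhat . zhat + s_+/3
    inplane = Qz - (Qz @ zhat) * zhat       # (I - zhat x zhat) Q zhat
    return alpha_z * normal**2 + gamma_z * (inplane @ inplane)

n = np.array([1.0, 0.0, 0.0])                 # in-plane director
s_plus = 1.5
Q = s_plus * (np.outer(n, n) - np.eye(3) / 3.0)
assert abs(f_s(Q, 1.0, 1.0, s_plus)) < 1e-12  # planar uniaxial: no penalty
\end{verbatim}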
\begin{lemma} \label{lemma:EL-surface}
Critical points of the functional~\eqref{eq:LdG-s}, in the admissible
class~$\mathcal{B}$ defined by~\eqref{eq:admissible},
satisfy the Euler-Lagrange system~\eqref{eq:EL},
subject to Dirichlet boundary conditions~\eqref{eq:bc-Dir} on
the lateral surfaces and
\begin{equation} \label{eq:bc-Neu}
\partial_\nu\Qvec + \frac{\lambda}{L} \Gvec(\Qvec) = 0
\qquad \textrm{on } \Gamma.
\end{equation}
Here, $\nu$ is the outward-pointing unit normal to~$V$
and~$\Gvec$ is defined by
\[
\begin{split}
\Gvec(\Qvec) := \left( \begin{matrix}
-\dfrac{2}{3}\alpha_z\left(Q_{33} + \dfrac{s_+}{3}\right) & 0
& \gamma_z Q_{13} \\
0 & -\dfrac{2}{3}\alpha_z\left(Q_{33} + \dfrac{s_+}{3}\right)
& \gamma_z Q_{23} \\
\gamma_z Q_{13} & \gamma_z Q_{23} &
\dfrac{4}{3}\alpha_z\left(Q_{33} + \dfrac{s_+}{3}\right)
\end{matrix} \right) \! .
\end{split}
\]
\end{lemma}
\begin{remark} \label{remark:Lagrange}
The matrix~$\Gvec(\Qvec)$ is symmetric and traceless. Therefore,
the Lagrange multipliers associated with the symmetry and tracelessness
constraints have already been embedded in the definition of~$\Gvec$.
\end{remark}
\begin{remark} \label{remark:2-3D}
Because of the boundary condition~\eqref{eq:bc-Neu},
$z$-independent solutions ($\partial_z\Qvec=0$) may not, in general,
be solutions of the 3D problem with surface energy anchoring
on the top and bottom plates. However, when~$A = -B^2/(3C)$
we know that there exist $z$-independent solutions with~$Q_{33}=-s_+/3$;
they correspond to triplets~$(q_1, \, q_2, \, q_3)$
with constant~$q_3=-s_+/6$ (see Corollary~\ref{cor:bounds}).
These $z$-independent solutions with constant~$Q_{33}$ are also solutions
of the 3D problem with surface energies on the top and bottom plates.
\end{remark}
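Indeed, in the ansatz~\eqref{Q-ansatz} one has $Q_{33} = 2q_3$ and $Q_{13} = Q_{23} = 0$, so a $z$-independent solution with constant $q_3 = -s_+/6$ satisfies $Q_{33} = -s_+/3$ and hence $\Gvec(\Qvec) = 0$; the boundary condition~\eqref{eq:bc-Neu} then reduces to $\partial_z\Qvec = 0$ on~$\Gamma$, which holds automatically.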
\begin{proof}[Proof of Lemma~\ref{lemma:EL-surface}]
Let~$\Qvec\in\mathcal{B}$ be a critical point for~$\mathcal{F}_\lambda$,
and let~$\Pvec\in H^1(V, \, S_0)$ be any perturbation
such that $\Pvec = 0$ on~$\partial V\setminus\Gamma$.
We compute the first variation of~$\mathcal{F}_\lambda$
with respect to~$\Pvec$:
\[
\begin{split}
0=&\frac{\d}{\d t}_{|t=0} \mathcal{F}_\lambda [\Qvec+t\Pvec]
= \int_V \left(\nabla\Qvec:\nabla\Pvec
+ \frac{\lambda^2}{L} \left(A\Qvec\cdot\Pvec - B\Qvec^2\cdot\Pvec
+ C\abs {\Qvec}^2\Qvec\cdot\Pvec\right)\right) \d V \\
&\qquad\qquad + \frac{\lambda}{L}
\int_{\Gamma} \left(2\alpha_z(\Pvec\zhat\cdot\zhat)
\left(\Qvec\zhat\cdot\zhat + \frac{s_+}{3}\right)
+ 2\gamma_z (\Ivec - \zhat\otimes\zhat)\Qvec\zhat\cdot
(\Ivec - \zhat\otimes\zhat)\Pvec\zhat\right) \d S,
\end{split}
\]
where~$\Qvec\cdot\Pvec := \tr(\Qvec\Pvec) = Q_{ij}P_{ij}$.
By integrating by parts, and noting that~$\tr\Pvec = 0$, we obtain:
\begin{equation} \label{EL1}
\begin{split}
&\int_V \left(-\Delta\Qvec +
\frac{\lambda^2}{L} \left( A\Qvec - B\Qvec^2
+ \frac{B}{3}\abs{\Qvec}^2\Ivec + C\abs {\Qvec}^2\Qvec\right)\right)
\cdot\Pvec \, \d V +
\int_\Gamma \partial_\nu\Qvec\cdot\Pvec \, \d S \\
&\qquad\qquad + \frac{\lambda}{L} \int_{\Gamma}
\left(2\alpha_z(\Pvec\zhat\cdot\zhat)
\left(\Qvec\zhat\cdot\zhat + \frac{s_+}{3}\right)
+ 2\gamma_z (\Ivec - \zhat\otimes\zhat)\Qvec\zhat\cdot
(\Ivec - \zhat\otimes\zhat)\Pvec\zhat\right) \d S = 0.
\end{split}
\end{equation}
We now deal with the integral on~$\Gamma$.
We first remark that
\begin{equation} \label{EL1.1}
\begin{split}
(\Pvec\zhat\cdot\zhat)
\left(\Qvec\zhat\cdot\zhat + \frac{s_+}{3}\right)
&= \left(Q_{33} + \frac{s_+}{3}\right)P_{33} \\
&= \left(\begin{matrix}
-\dfrac{1}{3}\left(Q_{33} + \dfrac{s_+}{3}\right) & 0 & 0 \\
0 & -\dfrac{1}{3}\left(Q_{33} + \dfrac{s_+}{3}\right) & 0 \\
0 & 0 & \dfrac{2}{3}\left(Q_{33} + \dfrac{s_+}{3}\right)
\end{matrix}\right)\cdot\Pvec
\end{split}
\end{equation}
because~$\tr\Pvec = 0$. We also have
\begin{equation} \label{EL1.2}
\begin{split}
(\Ivec - \zhat\otimes\zhat)\Qvec\zhat\cdot
(\Ivec - \zhat\otimes\zhat)\Pvec\zhat
= \sum_{i=1}^2 Q_{i3}P_{i3}
= \frac{1}{2}\left(\begin{matrix}
0 & 0 & Q_{13} \\
0 & 0 & Q_{23} \\
Q_{13} & Q_{23} & 0 \\
\end{matrix}\right)\cdot\Pvec
\end{split}
\end{equation}
Using~\eqref{EL1}, \eqref{EL1.1} and~\eqref{EL1.2} we obtain
\begin{equation*}
\begin{split}
&\int_V \left(-\Delta\Qvec + \frac{\lambda^2}{L}
\left( A\Qvec - B\Qvec^2 + \frac{B}{3}\abs{\Qvec}^2\Ivec
+ C\abs {\Qvec}^2\Qvec\right)\right) \cdot\Pvec \, \d V
+ \int_\Gamma \left(
\partial_\nu\Qvec + \frac{\lambda}{L}
\Gvec(\Qvec)\right)\cdot\Pvec \, \d S = 0
\end{split}
\end{equation*}
for any~$\Pvec\in H^1( V, \, S_0)$ such that $\Pvec= 0$
on~$\partial V\setminus\Gamma$, and the lemma follows.
\end{proof}
\begin{lemma}\label{lemma:maxprinciple3d}
There exists a constant~$M$ (depending only on~$A$, $B$,
$C$ but \emph{not on} $\lambda$, $L$, $\epsilon$) such that
any solution~$\Qvec$ of the system~\eqref{eq:EL}, subject to the
boundary conditions~\eqref{eq:bc-Dir} and~\eqref{eq:bc-Neu},
satisfies
\[
\abs{\Qvec} \leq M \qquad \textrm{in } V.
\]
\end{lemma}
\begin{proof}
Let~$\Pvec := \Qvec + s_+(\zhat\otimes\zhat)/2$.
We have $\partial_\nu(\abs{\Pvec}^2/2) = \partial_\nu\Pvec\cdot\Pvec
= \partial_\nu\Qvec\cdot\Pvec$ and hence,
by~\eqref{eq:bc-Neu}, we deduce that
\begin{equation} \label{maxprinc1}
\begin{split}
-\frac{L}{\lambda}\partial_\nu(\abs{\Pvec}^2/2)
= \Gvec(\Qvec)\cdot\Pvec &= 2\gamma_z\sum_{i=1}^2 Q_{i3}^2
+ \dfrac{2}{3}\alpha_z\left(Q_{33} + \dfrac{s_+}{3}\right)
\left(-Q_{11}- Q_{22} + 2Q_{33} + s_+ \right) \\
&= 2\gamma_z\sum_{i=1}^2 Q_{i3}^2
+ 2\alpha_z\left(Q_{33} + \dfrac{s_+}{3}\right)^2\geq 0
\qquad \textrm{on }\Gamma.
\end{split}
\end{equation}
Similarly, we manipulate the Euler-Lagrange system to obtain
\begin{equation} \label{maxprinc2}
\begin{split}
\frac{L}{\lambda^2}\Delta(\abs{\Pvec}^2/2)
&= \frac{L}{\lambda^2}\Delta\Qvec\cdot\left(\Qvec +
\dfrac{s_+}{2}\zhat\otimes\zhat\right)
+ \frac{L}{\lambda^2}\abs{\nabla\Qvec}^2 \\
&\geq A\abs{\Qvec}^2 - B\tr(\Qvec^3) + C\abs {\Qvec}^4
+ \frac{s_+}{2}\left( \left(A + C\abs {\Qvec}^2\right)
Q_{33} - B Q_{3k}Q_{3k} + \dfrac{B}{3}\abs{\Qvec}^2\right)
\end{split}
\end{equation}
The right-hand side of~\eqref{maxprinc2}
is a quartic polynomial in~$\Qvec$, with leading order
term~$C\abs{\Qvec}^4$ and~$C>0$. Therefore, there exists
a positive number~$M_1$ (depending on $A$, $B$ and~$C$ only)
such that the right-hand side of~\eqref{maxprinc2}
is positive when~$\abs{\Qvec}\geq M_1$. By the triangle inequality,
we have
\[
\abs{\Pvec} \geq M_2 := M_1 + \frac{s_+}{2}
\quad \implies \quad \abs{\Qvec} =
\abs{\Pvec - \frac{s_+}{2}\zhat\otimes\zhat} \geq M_1
\]
and hence the right-hand side of~\eqref{maxprinc2} is positive when
$\abs{\Pvec}\geq M_2$. Finally, the boundary
datum~$\Qvec_{\mathrm{b}}$ on the lateral surfaces,
defined by~\eqref{eq:bc-Dir}, satisfies
$\abs{\Qvec_{\mathrm{b}}}\leq (2/3)^{1/2}s_+$
on~$\partial\Omega\times(0, \, \epsilon)$.
By applying the maximum principle
to~\eqref{maxprinc1} and~\eqref{maxprinc2}, we obtain that
\[
\abs{\Pvec} \leq \max\left \{M_2, \,
\left(\sqrt{\frac{2}{3}} + \frac{1}{2}\right)s_+ \right\}
\qquad \textrm{in } V
\]
Then, by the triangle inequality, $\Qvec$ is also
bounded in terms of~$A$, $B$ and~$C$ only.
\end{proof}
Adapting the methods in~\cite{canevari2017order}, for any values of~$\lambda$
and~$\epsilon$, it is possible to construct
a WORS-like solution for this 3D problem with surface anchoring
on the top and bottom plates. The WORS has a constant eigenframe and, hence,
it can be completely described in terms of two
degrees of freedom as before:
\begin{equation} \label{Q-WORS}
\begin{split}
\Qvec(x, \, y, \, z) &= q_1(x, \, y, \, z)
\left(\nvec_1 \otimes \nvec_1 - \nvec_2\otimes \nvec_2 \right)
+ q_3(x, \, y, \, z) \left(2 \zhat\otimes\zhat
- \nvec_1\otimes \nvec_1 - \nvec_2\otimes \nvec_2 \right) \! ,
\end{split}
\end{equation}
where~$q_1$, $q_3$ are scalar functions, $\nvec_1$, $\nvec_2$ are given by~\eqref{eq:n12}, and $q_1$ vanishes on the square diagonals and satisfies the symmetry property
\begin{equation} \label{WORS3}
x y \, q_1(x, \, y, \, z) \geq 0 \qquad \textrm{for any } (x, \, y, \, z)\in V.
\end{equation}
\begin{proposition} \label{prop:WORS3D}
For any~$\lambda$, $\epsilon$ and~$A$, there exists a solution of the form~\eqref{Q-WORS} of the system~\eqref{eq:EL},
subject to the boundary conditions~\eqref{eq:bc-Dir} and~\eqref{eq:bc-Neu},
which satisfies~\eqref{WORS3} with $q_1=0$ along the square diagonals and has~$q_3 < 0$ on~$V$.
\end{proposition}
\begin{proof}
Let
\[
V_+ := \left\{(x, \, y, \, z)\in V\colon x>0, \ y > 0\right\} \!.
\]
Following the approach in~\cite{canevari2017order}, we consider
the functional
\begin{equation} \label{eq:LdG-WORS3}
\begin{split}
G[q_1, \, q_3] &:= \int_{V_+} \left\{
\abs{\nabla q_1}^2 + 3 \abs{\nabla q_3}^2
+ \frac{\lambda^2}{L} \left(A (q_1^2 + 3 q_3^2 )
+ 2 B q_3 q_1^2 - 2 B q_3^3 + C(q_1^2 + 3 q_3^2)^2\right)\right\} \d V \\
&\qquad\qquad + \frac{4\lambda\alpha_z}{L}
\int_{\Gamma\cap\overline{V}_+} \left(q_3 + \frac{s_+}{6}\right)^2 \d S,
\end{split}
\end{equation}
obtained by substituting the ansatz~\eqref{Q-WORS}
into \eqref{eq:LdG-s}.
We minimize~$G$ among the finite-energy pairs $(q_1, \, q_3)\in H^1(V_+)^2$,
subject to the constraint~$q_3\leq 0$ on~$V_+$ and to the boundary conditions
\begin{equation} \label{eq:bc3D}
q_1 = q_{1\mathrm{b}} \ \textrm{ and } \ q_3 = -{s_+}/{6}
\quad \textrm{on } (\partial\Omega\times (0, \, \epsilon) )\cap \overline{V_+},
\qquad q_1 = 0 \quad \textrm{on } \partial V_+ \setminus\partial V,
\end{equation}
where the function~$q_{1\mathrm{b}}$ is defined by~\eqref{eq:BC1}.
A routine application of the direct method of the calculus of variations shows
that a minimizer~$(q_1^{WORS}, \, q_3^{WORS})$ exists. Without loss of generality,
we can assume that $q_1^{WORS}\geq 0$ on~$V_+$; otherwise, we replace $q_1^{WORS}$
with~$|q^{WORS}_1|$ and note that $G[q_1^{WORS}, \, q_3^{WORS}] = G[|q_1^{WORS}|, \, q_3^{WORS}]$.
We claim that~$q_3^{WORS}\leq-\delta$ for some strictly positive constant~$\delta$,
depending only on~$A$, $B$ and~$C$. The proof of this claim follows the argument in
Proposition~\ref{prop:WORS}. We take a perturbation~$\varphi\in H^1(V_+)$ such that
$\varphi\geq 0$ in~$V_+$ and~$\varphi = 0$ on~$\partial V\cap\overline{V_+}$.
Then, $q_3^t := q_3^{WORS} - t\varphi$, for~$t\geq 0$, is
an admissible perturbation for~$q_3^{WORS}$
and from the optimality condition
\[
\frac{\d}{\d t}_{|t = 0} G[q_1^{WORS}, \, q_3^t] \geq 0
\]
we deduce
\begin{equation} \label{subsol3}
\int_{V_+} \left\{ -6 \nabla q_3^{WORS}\cdot\nabla\varphi
- \frac{\lambda^2}{L} f(q_1^{WORS}, \, q_3^{WORS}) \, \varphi\right\} \d V
- \frac{8\lambda\alpha_z}{L} \int_{\Gamma\cap\overline{V}_+}
\left(q_3 + \frac{s_+}{6}\right)\varphi \, \d S \geq 0.
\end{equation}
The function~$f(q_1^{WORS}, \, q_3^{WORS})$ is defined by~\eqref{f},
and by~\eqref{signf} we know that there exists a constant~$\delta \in (0, \, s_+/6)$
such that $f(q_1, \, q_3)>0$ for any~$q_1\in{\mathbb R}$ and any~$q_3\in [-\delta, \, 0]$.
We choose~$\varphi$ as in~\eqref{varphi-subsol} and,
due to~\eqref{subsol3}, we deduce that
$q_3^{WORS}\leq -\delta$ in~$V_+$.
Since~$q_3^{WORS}$ is strictly negative, we can consider perturbations
of the form $q_3^t := q_3^{WORS} + t\varphi$, irrespective of the sign of~$\varphi$,
provided that~$\abs{t}$ is sufficiently small. As a consequence,
$(q_1^{WORS}, \, q_3^{WORS})$ solves the Euler-Lagrange system
\begin{equation} \label{eq:EL13}
\left\{\begin{aligned}
\Delta q_1 &= \frac{\lambda^2}{L} q_1 \left\{A + 2B q_3 + 2C\left(q_1^2 + 3q_3^2 \right)\right\} \\
\Delta q_3 &= \frac{\lambda^2}{L} q_3 \left\{A - Bq_3 + 2C\left(q_1^2 + 3q_3^2\right)\right\}
+ \frac{\lambda^2B}{3L}q_1^2
\end{aligned} \right.
\end{equation}
on~$V_+$, as well as the boundary conditions
\begin{equation} \label{eq:bc_neu-wors}
\partial_\nu q_1 = 0, \quad
\partial_\nu q_3 + \frac{4\lambda\alpha_z}{3L} \left(q_3 + \frac{s_+}{6}\right) = 0
\qquad \textrm{on } \Gamma\cap\overline{V}_+
\end{equation}
and~$\partial_\nu q_3=0$ on~$\partial V_+ \setminus\partial V$.
We extend~$(q_1^{WORS}, \, q_3^{WORS})$ to the whole of~$V$
by reflections about the planes~$\{x = 0\}$ and~$\{y = 0\}$:
\[
q_1^{WORS}(x, \, y, \, z) := \mathrm{sign}(xy)\, q_1^{WORS}(\abs{x}, \, \abs{y}, \, z), \qquad
q_3^{WORS}(x, \, y, \, z) := q_3^{WORS}(\abs{x}, \, \abs{y}, \, z)
\]
for any~$(x, \, y, \, z)\in V\setminus\overline{V_+}$.
The functions $q_1^{WORS}$, $q_3^{WORS}$, defined above, solve the Euler-Lagrange
system~\eqref{eq:EL13} on $V\setminus(\{x=0\}\cup\{y=0\})$.
In fact, an argument based on elliptic regularity,
along the lines of~\cite[Theorem~3]{dangfifepeletier},
shows that $(q_1^{WORS}, \, q_3^{WORS})$ is a solution of~\eqref{eq:EL13} on the whole of~$V$.
Finally, using~\eqref{eq:EL13}, \eqref{eq:bc3D} and~\eqref{eq:bc_neu-wors}, we can check that
the $\Qvec$-tensor associated with~$(q_1^{WORS}, \, q_3^{WORS})$,
as defined by~\eqref{Q-WORS}, has all the required properties.
\end{proof}
Adapting a general criterion for uniqueness of critical points
(see, e.g., \cite[Lemma~8.2]{lamy2014}),
we can show that the functional~\eqref{eq:LdG-s} has a unique
critical point in the admissible class~\eqref{eq:admissible}
when~$\lambda$ is small enough, irrespective of~$\epsilon$; this implies that the WORS is globally stable for sufficiently small $\lambda$, with surface energies included.
\begin{proposition} \label{prop:uniq_small_lambda}
There exists a positive number~$\lambda_0$
(depending only on~$A$, $B$, $C$) such that,
when~$\lambda<\lambda_0$, the system~\eqref{eq:EL}
has a unique solution that satisfies the boundary
conditions~\eqref{eq:bc-Dir}, \eqref{eq:bc-Neu}.
\end{proposition}
The main step of the proof is the following
\begin{lemma} \label{lemma:strictly_convex}
For any $M > 0$, there exists a
$\lambda_0 = \lambda_0(M, \, A, \, B, \, C, \, L, \, \Omega)$ such that,
for $\lambda < \lambda_0$, the functional~$\mathcal{F}_\lambda$
given by~\eqref{eq:LdG-s} is strictly convex in the class
\begin{equation}
X = \left\{\Qvec \in H^{1}(V, \, S_0)\colon
|\Qvec| \leq M \ \textrm{ on } V, \quad
\Qvec = \Qvec_{\mathrm{b}} \ \textrm{ on }
\partial\Omega\times(0, \, \epsilon)\right \} \! .
\end{equation}
\end{lemma}
Once Lemma~\ref{lemma:strictly_convex} is proved,
Proposition~\ref{prop:uniq_small_lambda} follows.
Indeed, let us consider the constant~$M$ given by
Lemma~\ref{lemma:maxprinciple3d}. Then, any solution
of the system~\eqref{eq:EL}, subject to the boundary
conditions~\eqref{eq:bc-Dir}, \eqref{eq:bc-Neu},
must belong to the class~$X$, by Lemma~\ref{lemma:maxprinciple3d}.
However, if $\mathcal{F}_\lambda$ is strictly convex in~$X$, then
it cannot have more than one critical point in~$X$.
\begin{proof}[Proof of Lemma~\ref{lemma:strictly_convex}]
For any $\Qvec_1, \Qvec_2 \in X$, we have
\begin{equation} \label{Poinc0}
\begin{aligned}
\mathcal{F}_{\lambda}\left( \frac{\Qvec_1 + \Qvec_2}{2} \right)
& = \int_{V} \frac{1}{8} |\nabla(\Qvec_1 + \Qvec_2)|^2
+ \frac{\lambda^2}{L} f_b \left(\frac{\Qvec_1 + \Qvec_2}{2}\right) \d V
+ \frac{\lambda}{L} \int_{\Gamma} f_s\left(\frac{\Qvec_1 + \Qvec_2}{2} \right) \d S \\
& = \int_{V} \frac{1}{4} \left( |\nabla \Qvec_1|^2 + |\nabla \Qvec_2|^2
- \frac{1}{2} |\nabla(\Qvec_1 - \Qvec_2)|^2 \right)
+ \frac{\lambda^2}{L} f_b \left( \frac{\Qvec_1 + \Qvec_2}{2} \right) \d V \\
&\qquad\qquad + \frac{\lambda}{L} \int_{\Gamma}
f_s \left(\frac{\Qvec_1 + \Qvec_2}{2} \right) \d S. \\
\end{aligned}
\end{equation}
Since $f_s(\Qvec)$ is a convex function of $\Qvec$, we have
\begin{equation} \label{Poinc.5}
\int_{\Gamma} f_s \left(\frac{\Qvec_1 + \Qvec_2}{2}\right) \d S
\leq \frac{1}{2}\int_{\Gamma} \left(f_s(\Qvec_1) + f_s (\Qvec_2)\right) \d S.
\end{equation}
We now deal with the bulk term, $f_b$.
Both~$\Qvec_1$ and~$\Qvec_2$ are equal to~$\Qvec_{\mathrm{b}}$
on~$\partial\Omega\times(0, \, \epsilon)$ and hence,
$\Qvec_2 - \Qvec_1 = 0$ on~$\partial\Omega\times (0, \, \epsilon)$.
For a.e. fixed $z_0 \in (0, \, \epsilon)$,
by the Poincar\'e inequality on $\Omega$, we have
\begin{equation}
\left\|\Qvec_1(\cdot, \, \cdot, \, z_0)
- \Qvec_2(\cdot, \, \cdot, \, z_0)\right\|_{L^2(\Omega)}^2
\leq C_1(\Omega) \left\| \nabla_{x, y}(\Qvec_1(\cdot, \, \cdot, \, z_0)
- \Qvec_2(\cdot, \, \cdot, \, z_0))\right\|^2_{L^2(\Omega)} \! ,
\end{equation}
where~$C_1(\Omega)$ is a positive constant that only
depends on the geometry of~$\Omega$.
By integrating the previous inequality with respect
to~$z_0\in (0, \, \epsilon)$, we deduce that
\begin{equation} \label{Poinc}
\int_{V}\abs{\Qvec_1- \Qvec_2}^2 \d V
\leq C_1(\Omega) \int_{V} \abs{\nabla(\Qvec_1 - \Qvec_2)}^2 \d V \! .
\end{equation}
Since $|\Qvec_1| \leq M$ and
$|\Qvec_2| \leq M$ everywhere in~$V$, we have
\begin{equation} \label{Poinc2}
\int_V\left| f_b \left(\frac{\Qvec_1 + \Qvec_2}{2}\right)
- \frac{1}{2}f_b(\Qvec_1) - \frac{1}{2}f_b(\Qvec_2)\right| \d V
\leq \left\|f_b\right\|_{W^{2,\infty} (B_M)}
\int_{V} \abs{\Qvec_1 - \Qvec_2}^2 \d V,
\end{equation}
where $B_M = \left \{ \Qvec \in S_0\colon |\Qvec| \leq M \right\}$
and $\left\|f_b\right\|_{W^{2,\infty} (B_M)}$ is a positive constant that
bounds the second derivatives of~$f_b$ in~$B_M$
(in particular, $\left\|f_b\right\|_{W^{2,\infty} (B_M)}$
only depends on $M$, $A$, $B$ and~$C$).
Combining~\eqref{Poinc} and~\eqref{Poinc2}, we find
a positive constant~$C_2 = C_2 (f_b, \, \Omega, \, M)
:= C_1(\Omega) \left\|f_b\right\|_{W^{2,\infty} (B_M)}$ such that
\begin{equation} \label{Poinc3}
\int_V f_b \left(\frac{\Qvec_1 + \Qvec_2}{2}\right) \d V \leq
\frac{1}{2} \int_V \left(f_b(\Qvec_1) + f_b(\Qvec_2) \right) \d V
+ C_2 \int_V \abs{\nabla(\Qvec_1 - \Qvec_2)}^2 \d V.
\end{equation}
Now, we use~\eqref{Poinc.5} and~\eqref{Poinc3}
to bound the right-hand side of~\eqref{Poinc0}. We obtain
\begin{equation}
\begin{aligned}
\mathcal{F}_{\lambda}\left(\frac{\Qvec_1 + \Qvec_2}{2}\right)
& \leq \frac{1}{2} \left(\mathcal{F}_{\lambda}(\Qvec_1)
+ \mathcal{F}_{\lambda}(\Qvec_2)\right)
+ \left(\frac{C_2\lambda^2}{L} - \frac{1}{8}\right)
\int_V \abs{\nabla(\Qvec_1 - \Qvec_2)}^2 \d V.
\end{aligned}
\end{equation}
If we take $\lambda < \lambda_0 := (\frac{L}{8C_2})^{1/2}$,
then we have
\begin{equation}
\mathcal{F}_{\lambda}\left(\frac{\Qvec_1 + \Qvec_2}{2}\right)
\leq \frac{1}{2} \left(\mathcal{F}_{\lambda}(\Qvec_1)
+ \mathcal{F}_{\lambda}(\Qvec_2)\right)
\end{equation}
and the equality holds if and only if~$\Qvec_1 = \Qvec_2$.
This proves that~$\mathcal{F}_\lambda$ is strictly convex in~$X$.
\end{proof}
We deduce that the WORS-solution survives in 3D wells, independently of the well height, with both natural boundary conditions and realistic surface energies on the top and bottom surfaces. Moreover, the WORS is globally stable for $\lambda$ small enough, independent of the well height. In the next section, we complement our analysis with numerical examples.
\section{Numerics}
\label{sec:numerics}
\subsection{Numerical Methods}
For computational convenience, in this section we take the cross-section of the well, $\Omega$,
to be a (non-truncated) square with sides parallel to the coordinate axes, i.e.~$\Omega = (-1, \, 1)^2$.
We consider the general $\Qvec$-tensor with five degrees of freedom
\begin{equation}\label{q12345}
\begin{aligned}
\Qvec(x, y, z) & = q_1(x, y, z)(\xhat \otimes \xhat - \yhat \otimes \yhat) + q_2(x, y, z) (\xhat \otimes \yhat + \yhat \otimes \xhat) \\
& + q_3(x, y, z) (2 \zhat \otimes \zhat - \xhat \otimes \xhat - \yhat \otimes \yhat) \\
& + q_4(x, y, z)(\xhat \otimes \zhat + \zhat \otimes \xhat) + q_5(x, y, z)(\yhat \otimes \zhat + \zhat \otimes \yhat ), \\
\end{aligned}
\end{equation}
where $\xhat$, $\yhat$ and $\zhat$ are unit-vectors in the $x$-, $y$- and $z$-directions respectively.
Moreover, instead of considering Dirichlet conditions (infinite strong anchoring) on the lateral surfaces, we consider finite anchoring on the lateral surfaces, which allows us to study nematic equilibria without excluding the corners of the well \cite{Walton2018}. More precisely, we impose surface energies on the lateral sides given by \cite{kralj2014order}
\begin{equation}\label{side_anchoring_1}
\begin{aligned}
& f_s(\Qvec) = \omega_1 \left| \Qvec - g(x) \left( \xhat \otimes \xhat - \frac{1}{3} \mathbf{I} \right) \right|^2, \quad y = \pm 1; \\
& f_s(\Qvec) = \omega_2 \left| \Qvec - g(y) \left( \yhat \otimes \yhat - \frac{1}{3} \mathbf{I} \right) \right|^2, \quad x = \pm 1; \\
\end{aligned}
\end{equation}
where $\omega_i = \frac{W_i \lambda}{L}$ is the non-dimensionalized anchoring strength, $W_i$ is the surface anchoring strength, and the function $g \in C^{\infty}([-1, 1])$ eliminates the discontinuity at the corners, e.g.
\begin{equation}
g(x) = s_{+}, \quad \forall x \in [-1 + \delta, 1 - \delta], \qquad g(-1) = g(1) = 0,
\end{equation}
for a small constant $\delta$. The choice of $g$ does not affect numerical results qualitatively.
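For concreteness, one admissible choice (any function with these properties serves equally well) is the standard $C^{\infty}$ transition built from the bump function $\eta(t) = e^{-1/t}$ for $t > 0$ and $\eta(t) = 0$ for $t \leq 0$, namely
\begin{equation}
g(x) = s_{+} \, \frac{\eta\left( \frac{1 - |x|}{\delta} \right)}{\eta\left( \frac{1 - |x|}{\delta} \right) + \eta\left( 1 - \frac{1 - |x|}{\delta} \right)},
\end{equation}
which equals $s_{+}$ on $[-1 + \delta, 1 - \delta]$ and vanishes at $x = \pm 1$.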
We take $W_1 = W_2 = 10^{-2} \mathrm{Jm}^{-2}$ to account for the strong anchoring on the lateral sides of the well \cite{ravnik2009landau}.
On $\Gamma$, the top and bottom surfaces,
the surface energy density for finite tangential anchoring (\ref{eq:fs}) can be written as
\begin{equation}\label{fs_z}
f_s(\Qvec) = w_z \left(\alpha_z \left( \Qvec \zhat \cdot \zhat + \frac{1}{3} s_{+} \right)^2 + \gamma_z \big| \left( \mathbf{I} - \zhat \otimes \zhat \right) \Qvec \zhat \big|^2\right),
\end{equation}
where $w_z = \frac{W_z \lambda}{L}$ is the non-dimensionalized anchoring strength. The surface energy (\ref{fs_z}) favors planar boundary conditions on the top and bottom surfaces, such that $\zhat$ is an eigenvector of $\Qvec$ with associated eigenvalue $-\frac{1}{3}s_{+}$.
Instead of solving the Euler-Lagrange equations for the LdG free energy, we use the energy-minimization based numerical method \cite{gartlanddavis, wang2017topological} to find the minimizers of the current system. The physical domain can be rescaled to $\Omega_c = \{ (\bar{x}, \bar{y}, \bar{z}) ~|~ \bar{x} \in [0, 2 \pi], \bar{y} \in [0, 2 \pi], \bar{z} \in [-1, 1] \}$. Since $\Qvec$ is a symmetric and traceless matrix, $\Qvec$ can be written as
\begin{equation}
\Qvec =
\begin{pmatrix}
p_1 & p_2 & p_3 \\
p_2 & p_4 & p_5 \\
p_3 & p_5 & -p_1 - p_4 \\
\end{pmatrix}.
\end{equation}
We can expand $p_i$ in terms of special functions: Fourier series on $\bar{x}$ and $\bar{y}$, and Chebyshev polynomials on $\bar{z}$, i.e.
\begin{equation}\label{Expand}
p_i(\bar{x}, \bar{y}, \bar{z}) = \sum_{l=1-L}^{L-1}\sum_{m = 1-M}^{M-1} \sum_{n = 0}^{N-1} p^{lmn}_{i} X_l(\bar{x}) Y_{m}(\bar{y})Z_{n}(\bar{z}),
\end{equation}
where $L$, $M$, $N$ specify the truncation limits of the expanded series, $Z_n$ denotes the Chebyshev polynomial of degree $n$, and $X_l$, $Y_m$ are defined as
\begin{equation}
X_{l}(\bar{x}) =
\begin{cases}
\cos l \bar{x} \quad l \geq 0, \\
\sin |l| \bar{x} \quad l < 0.
\end{cases}
\quad
Y_{m}(\bar{y}) =
\begin{cases}
\cos m \bar{y} \quad m \geq 0, \\
\sin |m| \bar{y} \quad m < 0.
\end{cases}
\end{equation}
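As an illustration of the expansion (\ref{Expand}), the following Python sketch evaluates one component $p_i$ at a point of the rescaled domain; the coefficient layout \texttt{coeff[l + L - 1, m + M - 1, n]} and the identification of $Z_n$ with the Chebyshev polynomial of the first kind are assumptions made for this sketch, not a description of our actual implementation.
\begin{verbatim}
import numpy as np
from numpy.polynomial.chebyshev import chebval

def fourier_basis(k, t):
    # X_k(t) = cos(k t) for k >= 0 and sin(|k| t) for k < 0
    return np.cos(k * t) if k >= 0 else np.sin(-k * t)

def eval_component(coeff, xbar, ybar, zbar, L, M):
    # coeff[l + L - 1, m + M - 1, n] stores p_i^{lmn};
    # chebval sums the Chebyshev series in zbar.
    value = 0.0
    for l in range(1 - L, L):
        for m in range(1 - M, M):
            c_n = coeff[l + L - 1, m + M - 1, :]
            value += (fourier_basis(l, xbar) * fourier_basis(m, ybar)
                      * chebval(zbar, c_n))
    return value
\end{verbatim}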
Inserting (\ref{Expand}) into the LdG free energy (\ref{eq:LdG}) with the surface energy terms (\ref{side_anchoring_1}) and (\ref{fs_z}), we obtain a function of $\mathbf{p} = ( p^{lmn}_{i} ) \in \mathbb{R}^{D}$, where $D = 5(2L - 1)(2M - 1)N$.
The minimizers of the function $F(\mathbf{p})$ can be found by standard optimization methods. In the following simulations, we mainly use L-BFGS, a quasi-Newton method that is efficient for our problem \cite{wright1999numerical}.
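A minimal sketch of this outer minimization step is given below; it assumes a (hypothetical) routine \texttt{free\_energy(p)} returning the discretized energy $F(\mathbf{p})$ together with its gradient $\nabla_D F(\mathbf{p})$, and it delegates the L-BFGS iteration to SciPy rather than reproducing our own implementation.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def minimize_energy(free_energy, p0, max_iter=5000):
    # free_energy(p) -> (F, grad) evaluates the discretized
    # Landau-de Gennes energy and its gradient in the coefficients p.
    result = minimize(free_energy, p0, jac=True, method="L-BFGS-B",
                      options={"maxiter": max_iter, "ftol": 1e-12})
    return result.x, result.fun
\end{verbatim}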
The energy-minimization based numerical approach with L-BFGS usually converges to a local minimizer given a proper initial guess, although this is not guaranteed. Similar to Ref. \cite{robinson2017molecular}, we can verify the stability of an obtained solution $\mathbf{p}$ by computing the smallest eigenvalue $\lambda_1$ of the Hessian matrix $\mathbf{G}(\mathbf{p})$ at $\mathbf{p}$:
\begin{equation}\label{Rl}
\lambda_1 = \min_{\vvec \in \mathbb{R}^{D}, \, \vvec \neq 0} \frac{ \langle \mathbf{G}(\mathbf{p}) \vvec, \, \vvec \rangle }{ \langle \vvec, \, \vvec \rangle },
\end{equation}
where $\langle\cdot, \, \cdot \rangle$ is the standard inner product in $\mathbb{R}^{D}$.
A solution is locally stable (metastable) if $\lambda_1 > 0$.
Practically, $\lambda_1$ can be computed by solving the gradient flow equation of $\vvec$
\begin{equation}\label{eq1}
\frac{\pp \vvec}{\pp t} = - \frac{2 \gamma}{ \langle \vvec, \, \vvec \rangle } \left( \mathbf{G} \vvec - \frac{\langle \mathbf{G} \vvec, \, \vvec \rangle}{ \langle \vvec, \, \vvec \rangle} \vvec \right),
\end{equation}
where $\gamma(t)$ is a relaxation parameter, and $\mathbf{G}\vvec = \mathbf{G} (\mathbf{p}) \vvec$ is approximated by
\begin{equation}
\mathbf{G}(\mathbf{p}) \vvec = \frac{ \nabla_{D} F(\mathbf{p} + l \vvec) - \nabla_D F (\mathbf{p} - l \vvec)}{2l},
\end{equation}
for some small constant $l$. We can choose $\gamma(t)$ properly to accelerate the convergence of the dynamical system (\ref{eq1}).
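For illustration, a sketch of this procedure with a constant relaxation parameter $\gamma$ and explicit Euler steps is given below; \texttt{grad\_F} (the routine returning $\nabla_D F$) and the step sizes are placeholders, and the iterate is renormalized at every step so that $\langle \vvec, \vvec \rangle = 1$.
\begin{verbatim}
import numpy as np

def hessian_vec(grad_F, p, v, l=1e-6):
    # Central-difference approximation of G(p) v.
    return (grad_F(p + l * v) - grad_F(p - l * v)) / (2.0 * l)

def smallest_eigenvalue(grad_F, p, dim, gamma=0.5,
                        tol=1e-8, max_steps=100000):
    rng = np.random.default_rng(0)
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    lam_old = np.inf
    for _ in range(max_steps):
        Gv = hessian_vec(grad_F, p, v)
        lam = v @ Gv                          # Rayleigh quotient, <v, v> = 1
        v = v - 2.0 * gamma * (Gv - lam * v)  # Euler step of the flow
        v /= np.linalg.norm(v)
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam
\end{verbatim}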
In what follows, we frequently refer to the biaxiality parameter \cite{majumdar2010landau}
\[
\beta^2 = 1 - 6 \frac{\left(\textrm{tr} \Qvec^3 \right)^2}{|\Qvec|^6}
\]
such that $0\leq \beta^2 \leq 1$ and $\beta^2 = 0$ if and only if $\Qvec$ is uniaxial or isotropic (for which we set $\beta^2=0$ by default).
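In post-processing, $\beta^2$ is evaluated pointwise from the reconstructed $\Qvec$; a direct translation of the formula reads as follows (the tolerance guarding the isotropic case is our own convention).
\begin{verbatim}
import numpy as np

def biaxiality(Q, tol=1e-14):
    # beta^2 = 1 - 6 (tr Q^3)^2 / |Q|^6 for a symmetric traceless 3x3 Q.
    norm_sq = np.sum(Q * Q)        # |Q|^2
    if norm_sq < tol:              # isotropic point: beta^2 := 0
        return 0.0
    tr_Q3 = np.trace(Q @ Q @ Q)
    return 1.0 - 6.0 * tr_Q3**2 / norm_sq**3
\end{verbatim}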
\subsection{Numerical Results}
In the following, we take $A = -\frac{B^2}{3C}$ unless stated otherwise, so that all material constants in the Landau-de Gennes free energy are fixed.
The two key dimensionless variables are
\begin{equation}
\bar{\lambda}^2 = \frac{2C \lambda^2}{L}, \quad \epsilon = \frac{h}{\lambda},
\end{equation}
which describe the cross-sectional size and height of the square well respectively. Other dimensionless variables are related to the surface energy on all six surfaces.
\subsubsection{Strong anchoring on the lateral surfaces}
Firstly, we consider strong anchoring on the lateral surfaces, by taking $W_1 = W_2 = 10^{-2} \mathrm{Jm}^{-2}$ in (\ref{side_anchoring_1}). For the surface energy on the top and bottom plates (see~\eqref{fs_z}), we take $W_z = 10^{-2} \mathrm{Jm}^{-2}$ unless stated otherwise.
For relatively large $\bar{\lambda}^2$, we find the well-known diagonal and rotated solutions as stable configurations for arbitrary $\epsilon$~\cite{tsakonas2007multistable}. These are essentially described by $\Qvec$-tensors of the form
\[
\Qvec = q \left( \nvec \otimes \nvec - \frac{\mathbf{I}_2}{2} \right) + q_3 \left( \zhat\otimes \zhat - \frac{\mathbf{I}_3}{3} \right)
\]
where $\nvec = \left(\cos \theta, \sin \theta, 0 \right)$, $q>0$, $q_3 <0$, $\mathbf{I}_2$ is the identity matrix in two dimensions and $\mathbf{I}_3$ is the identity matrix in three dimensions respectively. Moreover, $\theta$ is a solution of $\Delta \theta = 0$ on a square subject to appropriately defined Dirichlet conditions \cite{lewis2014colloidal}. In the case of the diagonal solution, $\nvec$ roughly aligns along one of the square diagonals whereas for the rotated solution, $\theta$ rotates by approximately $\pi$ radians between a pair of opposite square edges. In Fig. \ref{3D_WRD}(a)-(b), we plot a diagonal and a rotated solution for $\bar{\lambda}^2 = 100$ and $\epsilon = 4$ with $A = - \frac{B^2}{3C}$. These solutions are z-invariant, as $|\pp_z \Qvec|^2 \approx 10^{-12}$ in our numerical solutions.
\begin{figure}[!htb]
\begin{center}
\begin{overpic}[height = 12.5em]{D_h_2_L_100_W_1e-2_3D_view.eps}
\put(-5, 100){(a)}
\end{overpic}
\hspace{1em}
\begin{overpic}[height = 12.5em]{D_h_2_L_100_W_1e-2_M.eps}
\end{overpic}
\hspace{2em}
\begin{overpic}[height = 12.5em]{R_h_2_L_100_W_1e-2_3D_view.eps}
\put(-5, 100){(b)}
\end{overpic}
\hspace{1em}
\begin{overpic}[height = 12.5em]{R_h_2_L_100_W_1e-2_M.eps}
\end{overpic}
\end{center}
\vspace{2em}
\begin{center}
\begin{overpic}[height = 12.5em]{WORS_h_2_L_5_W_1e-2_3D_view.eps}
\put(-5, 100){(c)}
\end{overpic}
\hspace{1em}
\begin{overpic}[height = 12.5em]{WORS_h_2_L_5_W_1e-2_M.eps}
\end{overpic}
\end{center}
\caption{(a) A diagonal solution for $\bar{\lambda}^2 = 100$ and $\epsilon = 4$.
(b) A rotated solution for $\bar{\lambda}^2 = 100$ and $\epsilon = 4$.
(c) The WORS for $\bar{\lambda}^2 = 5$ and $\epsilon = 4$.
The colors show the biaxiality parameter $\beta^2$, arranged such that high to low values (one to zero) correspond to variations from red to blue. The white lines indicate the director direction $\nvec$.}\label{3D_WRD}
\end{figure}
\begin{figure*}[hbt]
\centering
\begin{overpic}[height = 12.5em]{DD_h_2_L_100_3D_View.eps}
\put(-5, 95){(a)}
\end{overpic}
\begin{overpic}[height = 12.5em]{DD_t_h_2_L_100.eps}
\end{overpic}
\begin{overpic}[height = 12.5em]{DD_m_h_2_L_100.eps}
\end{overpic}
\begin{overpic}[height = 12.5em]{DD_b_h_2_L_100.eps}
\end{overpic}
\vspace{1em}
\begin{overpic}[height = 12.5em]{DD_h_2_L_10_3D_View.eps}
\put(-5, 95){(b)}
\end{overpic}
\begin{overpic}[height = 12.5em]{DD_t_h_2_L_10.eps}
\end{overpic}
\begin{overpic}[height = 12.5em]{DD_m_h_2_L_10.eps}
\end{overpic}
\begin{overpic}[height = 12.5em]{DD_b_h_2_L_10.eps}
\end{overpic}
\caption{(a) The locally stable mixed 3D solution for $\bar{\lambda}^2 = 100$ and $\epsilon = 4$, shown by 3D view and cross sections at $z = 0$, $z = 2$, and $z = 4$ respectively. (b) The locally stable mixed 3D solution for $\bar{\lambda}^2 = 10$ and $\epsilon = 4$, shown by 3D view and cross sections at $z = 0$, $z = 2$, and $z = 4$ respectively. The colors show the biaxiality parameter $\beta^2$, arranged such that high to low values (one to zero) correspond to variations from red to blue. The white lines indicate the director direction $\nvec$.}\label{3D_1}
\end{figure*}
For sufficiently small $\bar{\lambda}^2$, we always get the WORS for arbitrary $\epsilon$, in accordance with the uniqueness results for small $\lambda$ in previous sections. In Fig. \ref{3D_WRD}(c), we plot the WORS for $\bar{\lambda}^2 = 5$ and $\epsilon = 4$ with $A = - \frac{B^2}{3C}$.
Interestingly, for $\epsilon$ large enough, we can have additional mixed locally stable solutions for relatively large $\bar{\lambda}^2$. In Fig. \ref{3D_1}(a)-(b), we plot a mixed 3D solution for $\bar{\lambda}^2 = 100$ and $10$, with $\epsilon = 4$. These mixed solutions can be obtained by taking a mixed initial condition as
$\Qvec = s_{+} \left(\nvec \otimes \nvec - \frac{1}{3} \mathbf{I}_3 \right)$ with
\begin{equation}
\nvec(x, y, z) =
\begin{cases}
\frac{1}{\sqrt{2}}(1, ~1, 0), \quad z \geq \frac{\epsilon}{2} \\
\frac{1}{\sqrt{2}}(1, -1, 0), \quad z < \frac{\epsilon}{2}. \\
\end{cases}
\end{equation}
The initial condition has two separate diagonal profiles on the top and bottom surfaces with a mismatch at the centre of the well, at $z = \frac{\epsilon}{2}$. In this case, the L-BFGS procedure converges to a locally stable solution that has different diagonal configurations on the top and bottom plates. On the middle slice, we have a BD-like profile (referring to the terminology in \cite{canevari2017order}), where the corresponding $\Qvec$ tensor is of the form
\[
\Qvec_{BD} = q_1 (\xhat \otimes \xhat - \yhat \otimes \yhat ) + q_3 \left( 2 \zhat \otimes \zhat - \xhat \otimes \xhat - \yhat \otimes \yhat \right)
\]
with two degrees of freedom, $q_3 < 0$ on the middle slice and $q_1 = 0$ near a pair of parallel edges of the square cross-section ($q_1=0$ describes a transition layer between two distinct values of $q_1$). We compute the smallest eigenvalue of the Hessian matrix corresponding to this solution, which is positive; hence, this mixed solution is numerically stable. Indeed, these mixed solutions have lower free energy than the rotated solutions for $\bar{\lambda}^2 = 100$ and $\epsilon = 4$.
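For reproducibility, the mixed initial guess above can be sampled on a grid before projection onto the spectral basis; a sketch (with $s_{+}$ passed in as a parameter) is:
\begin{verbatim}
import numpy as np

def mixed_initial_Q(z, eps, s_plus):
    # Uniaxial Q = s_+ (n x n - I/3), with n switching between the two
    # square diagonals across the mid-height z = eps / 2.
    if z >= 0.5 * eps:
        n = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
    else:
        n = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)
    return s_plus * (np.outer(n, n) - np.eye(3) / 3.0)
\end{verbatim}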
Numerical simulations show that mixed solutions cease to exist when $\epsilon$ or $\bar{\lambda}^2$ is small enough. For $\bar{\lambda}^2 = 100$, we cannot find such solutions for $\epsilon \leq 0.8$.
We can generate more 3D configurations by mixing diagonal and rotated configurations on the top and bottom surfaces or two different rotated solutions but these are unstable according to our numerical simulations.
\subsubsection{Weak anchoring on the lateral surfaces}
In this section, we relax surface anchoring on the lateral surfaces and fix $W_z = 10^{-2} \mathrm{Jm}^{-2}$ on the top and bottom plates with $\epsilon = 0.2$.
In Fig. \ref{WORS_Weak}, we plot numerical solutions for $W_1 = W_2 = 10^{-2} \mathrm{Jm}^{-2}, 2 \times 10^{-3} \mathrm{Jm}^{-2}, 10^{-3} \mathrm{Jm}^{-2}$ and $10^{-4} \mathrm{Jm}^{-2}$, respectively, with $\bar{\lambda}^2 = 5$ and $\epsilon = 0.1$. All four solutions are obtained by using a diagonal-like initial condition.
In the strong anchoring case ($W_1 = W_2 = 10^{-2} \mathrm{Jm}^{-2}$), we get the WORS as expected, since the WORS is the unique critical point when $\bar{\lambda}^2$ is small enough. However, for $W_1 = W_2 = 10^{-3} \mathrm{Jm}^{-2}$, we get a diagonal-like solution in which maximum biaxiality is achieved around the corners. By further decreasing the anchoring strength, the nematic director becomes almost uniformly aligned along the diagonal direction. Similar results were reported in \cite{kralj2014order}.
\begin{figure}[!htb]
\centering
\begin{overpic}[width = 0.24\columnwidth]{D_ini_h_0_1_L_5_W_1e-2.eps}
\put(-5, 95){(a)}
\end{overpic}
\hfill
\begin{overpic}[width = 0.24\columnwidth]{D_ini_h_0_1_L_5_W_2e-3.eps}
\put(-5, 95){(b)}
\end{overpic}
\hfill
\begin{overpic}[width = 0.24\columnwidth]{D_h_0_1_L_5_W_1e-3.eps}
\put(-5, 95){(c)}
\end{overpic}
\hfill
\begin{overpic}[width = 0.24\columnwidth]{D_ini_h_0_1_L_5_W_1e-4.eps}
\put(-5, 95){(d)}
\end{overpic}
\caption{Transition from a diagonal solution to the WORS by increasing the anchoring strength on the lateral surfaces for $\bar{\lambda}^2 = 5$ and $\epsilon = 0.1$, shown by a horizontal cross-section. (a) $W_i = 10^{-2} \mathrm{Jm}^{-2}$; (b) $W_i = 2 \times 10^{-3} \mathrm{Jm}^{-2}$; (c) $W_i = 10^{-3} \mathrm{Jm}^{-2}$; (d) $W_i = 10^{-4} \mathrm{Jm}^{-2}$. The colors show the biaxiality parameter $\beta^2$, arranged such that high to low values (one to zero) correspond to variations from red to blue. The white lines indicate the director direction $\nvec$.
}\label{WORS_Weak}
\end{figure}
For $W_1 = W_2 = 10^{-3} \mathrm{Jm}^{-2}$, we can get the WORS by further decreasing $\bar{\lambda}^2$. However, the WORS ceases to exist for $W_1 = W_2 = 10^{-4} \mathrm{Jm}^{-2}$. Quantitatively, we can compute the bifurcation points $\bar{\lambda}^{2}_{*}$, such that the WORS is the unique solution for $\bar{\lambda}^2 < \bar{\lambda}^{2}_{*}$, as a function of the anchoring strength $W_1 = W_2 = W$, shown in Fig. \ref{Bifur}. We can find $\bar{\lambda}_{*}^2$ by decreasing $\bar{\lambda}^2$ until diagonal-like initial conditions converge to the WORS, since diagonal solutions cease to exist for $\bar{\lambda}^2 < \bar{\lambda}^{2}_{*}$. The result in Fig. \ref{Bifur} is computed with $\epsilon = 0.1$. However, this result is independent of $\epsilon$, as both the diagonal solutions and the WORS are z-invariant for $A = -\frac{B^2}{3C}$.
\begin{figure}[!h]
\centering
\begin{overpic}[width = 0.6 \columnwidth]{W_burf.eps}
\end{overpic}
\caption{Bifurcation points $\bar{\lambda}^{2}_{*}$, such that the WORS is the unique solution for $\bar{\lambda}^2 < \bar{\lambda}_{*}^2$, as a function of the anchoring strength. The blue dashed line indicates the bifurcation point of the WORS for Dirichlet boundary conditions in a 2D square domain. }\label{Bifur}
\end{figure}
Another way to relax the surface anchoring is to consider the surface energy
\begin{equation}\label{relax_1}
\begin{aligned}
& \text{on}~~y = \pm 1: \\
& f_s(\Qvec) = \omega_1 \left( \alpha \left(\Qvec \xhat \cdot \xhat - \frac{2}{3} s_{+} \right)^2 + \gamma \big| \left( \mathbf{I} - \xhat \otimes \xhat \right) \Qvec \xhat \big|^2 \right); \\
& \text{on}~~x = \pm 1: \\
& f_s(\Qvec) = \omega_2 \left( \alpha \left( \Qvec \yhat \cdot \yhat - \frac{2}{3} s_{+} \right)^2 + \gamma \big| \left( \mathbf{I} - \yhat \otimes \yhat \right) \Qvec \yhat \big|^2 \right),\\
\end{aligned}
\end{equation}
where $\omega_i = \frac{W_i \lambda}{L}$ is the non-dimensionalized anchoring strength, $\alpha > 0$ and $\gamma > 0$ are constants. The second term in the surface energy (\ref{relax_1}) forces $\xhat$ ($\yhat$) to be an eigenvector of $\Qvec$ on the plane $y = 0,~1$ ($x = 0,~1$), while the first term forces the eigenvalue associated with $\xhat$ ($\yhat$) to be $\frac{2}{3}s_{+}$. Since the second term in (\ref{relax_1}) can be zero if we take $\Qvec = s_{+} \left( \zhat \otimes \zhat - \frac{1}{3} \mathbf{I} \right)$, which also makes the surface energy on the top and bottom plates ($\Gamma$) zero, we keep $\alpha$ non-zero to get interesting defect patterns.
We fix $W_i = 10^{-2} \mathrm{Jm}^{-2}$ and vary $\alpha$ and $\gamma$ to relax the anchoring on the lateral surfaces.
Fig. \ref{Res_Relax_1} shows the numerical result for $\epsilon = 0.1$ and $\bar{\lambda}^2 = 5$ with various $\alpha$ and $\gamma$, by using diagonal-like initial conditions.
For $\alpha = \gamma = 1$ and $W_i = 10^{-2} \mathrm{Jm}^{-2}$, we can get a WORS-like solution, which has a strong biaxial region near the boundary. The biaxial regions near the boundary become larger as $\alpha$ gets smaller. We then fix $\alpha = 1$ and vary $\gamma$. If $\gamma$ is small enough, the nematic director (the leading eigenvector of the $\Qvec$-tensor) is not tangent to the square edges and the WORS ceases to exist.
\begin{figure}[h]
\centering
\begin{overpic}[width = 0.24\columnwidth]{WORS_h_0_1_L_5_Relax.eps}
\put(-5, 95){(a)}
\end{overpic}
\hfill
\begin{overpic}[width = 0.24\columnwidth]{D_ini_h_0_1_L_5_relax_a_0_1_g_1.eps}
\put(-5, 95){(b)}
\end{overpic}
\hfill
\begin{overpic}[width = 0.24\columnwidth]{D_ini_h_0_1_L_5_relax_a_1_g_0_2.eps}
\put(-5, 95){(c)}
\end{overpic}
\hfill
\begin{overpic}[width = 0.24\columnwidth]{D_ini_h_0_1_L_5_relax_a_1_g_0_1.eps}
\put(-5, 95){(d)}
\end{overpic}
\caption{Numerical solutions for surface energy (\ref{relax_1}) with a diagonal-like initial condition for $\bar{\lambda}^2 = 5$ and $\epsilon = 0.2$, shown by the cross-section at $z = 0.1$: (a) $\alpha = \gamma = 1$; (b) $\alpha = 0.1$ and $\gamma = 1$; (c) $\alpha = 1$ and $\gamma = 0.2$; (d) $\alpha = 1$ and $\gamma = 0.1$. The colors show the biaxiality parameter $\beta^2$, arranged such that high to low values (one to zero) correspond to variations from red to blue. The white lines indicate the director direction $\nvec$.}\label{Res_Relax_1}
\end{figure}
The above examples show that the WORS ceases to exist if the anchoring on the lateral surfaces is weak enough, and we always get a diagonal-like solution when the WORS ceases to exist.
It should be remarked that the diagonal-like solutions tend to be defect-free around the corners with weak anchoring, as the nematic directors are not forced to be tangential
to the square edges and there is no biaxial-uniaxial or biaxial-isotropic interface near the corners.
\subsubsection{Escaped Solutions}
In Ref. \cite{wang2018order}, the authors show that there exist two escaped solutions with non-zero $q_4$ and $q_5$, and $q_3 > 0$, in the reduced 2D square domain for relatively large $\bar{\lambda}^2$. Our simulations show that these two escaped solutions can exist in 3D wells for similar values of $\bar{\lambda}^2$ if the anchoring strength on the top and bottom plates is weak enough, and the escaped solutions are numerically locally stable. Fig. \ref{3D_escaped}(a)-(b) show the nematic director and biaxiality parameter in the middle slices of these two types of escaped configurations for $\bar{\lambda}^2 = 100$, $\epsilon = 4$ and $W_z = 10^{-5} \mathrm{Jm}^{-2}$, which are quite similar to the escaped configurations in a cylindrical cavity \cite{kralj1993stability}. The value of $q_3$ in the configuration of Fig. \ref{3D_escaped}(a) is plotted in Fig. \ref{3D_escaped}(d), and $q_3 > 0$ in the center of the well.
\begin{figure}[!htb]
\begin{center}
\begin{overpic}[height = 12.5em]{ES_-1_L_100_1e-5.eps}
\put(-5, 95){(a)}
\end{overpic}
\hspace{1em}
\begin{overpic}[height = 12.5em]{ES_+1_L_100_1e-5.eps}
\put(-5, 95){(b)}
\end{overpic}
\hspace{1em}
\begin{overpic}[height = 12.5em]{ES_-1_h_2_L_100_W_1e-5_3D_view.eps}
\put(-5, 100){(c)}
\end{overpic}
\hspace{1em}
\begin{overpic}[height = 12.5em]{ES_-1_L_100_1e-5_q3.png}
\put(-5, 100){(d)}
\end{overpic}
\end{center}
\caption{(a) Middle slice in the escaped configuration with $-1$-disclination line in the center for $\bar{\lambda}^2 = 100$ and $\epsilon = 4$. (b) Middle slice in the escaped configuration with $+1$-disclination line in the center for $\bar{\lambda}^2 = 100$ and $\epsilon = 4$. The colors show the biaxiality parameter $\beta^2$, arranged such that high to low values (one to zero) correspond to variations from red to blue. The white rods indicate the director direction $\nvec$. (c)-(d) 3D view and $q_3$ in the escaped configuration with $-1$-disclination line in the center for $\bar{\lambda}^2 = 100$ and $\epsilon = 4$. }\label{3D_escaped}
\end{figure}
Strictly speaking, the two configurations in Fig. \ref{3D_escaped} are not z-invariant if $W_z \neq 0$. The escaped solutions cease to exist if either $\epsilon$ is small enough or the anchoring $W_z$ is large enough.
We can compute the critical anchoring strength $W_z$ on the top and bottom plates, at which the escaped configurations cease to exist, as a function of $\epsilon$ for $\bar{\lambda}^2 = 100$, shown in Fig. \ref{ES_Height}.
\begin{figure}[!h]
\centering
\begin{overpic}[width = 0.6\columnwidth]{ES_Wz_Height.eps}
\end{overpic}
\caption{Critical anchoring strength on the top and bottom plates, at which the escaped configurations lose their stability, as a function of $\epsilon$. }\label{ES_Height}
\end{figure}
\section{Summary}
\label{sec:summary}
In a series of papers \cite{kralj2014order}, \cite{canevari2017order} and \cite{wang2018order}, the authors study WORS-type solutions, i.e., critical points of the LdG free energy on square domains that have a constant eigenframe with a distinct diagonal defect line connecting the four square vertices. It is natural to ask if WORS-type solutions are relevant for 3D domains or if they are a 2D artefact. Our essential finding in this paper is that the WORS is an LdG critical point for 3D wells with a square cross-section and experimentally relevant tangent boundary conditions on the lateral surfaces, for arbitrary well height, with both natural boundary conditions and realistic surface energies on the top and bottom surfaces. In fact, for sufficiently small $\lambda$, the size of the square cross-section, the WORS is the global LdG minimizer for these 3D problems, exemplifying the 3D relevance of WORS-type solutions for all temperatures below the nematic supercooling temperature.
We also numerically demonstrate the existence of stable mixed 3D solutions with two different diagonal profiles on the top and bottom well surfaces, for wells with sufficiently large $\epsilon$ and $\lambda$. These are again interesting from an applications point of view and are 3D solutions that are not covered by a purely 2D study. It is interesting to see that whilst the BD solution is an unstable LdG critical point on a 2D square domain, it interpolates between the two distinct diagonal profiles for a stable mixed 3D solution. Further work will be based on a study of truly 3D solutions that are not $z$-invariant and on whether they can be related to the 2D solutions on squares reported in previous work, i.e., can we use the zoo of 2D LdG critical points on a square domain reported in \cite{robinson2017molecular, wang2018order} to construct exotic 3D solutions in a 3D square well? This will be of substantial mathematical and applications-oriented interest.
\section{Acknowledgements}
G.C.'s research was supported by the Basque Government through the BERC 2018-2021 program; by the Spanish Ministry of Science, Innovation and Universities: BCAM Severo Ochoa accreditation SEV-2017-0718; and by the Spanish Ministry of Economy and Competitiveness: MTM2017-82184-R. A.M. was supported by fellowships EP/J001686/1 and EP/J001686/2, is supported by an OCIAM Visiting Fellowship and the Keble Advanced Studies Centre. Y.W. would like to thank Professor Chun Liu, for his constant support and helpful advice.
\bibliographystyle{plain}
\section{Introduction}
This paper is a sequel to \cite{Li2} and \cite{Li}, in which we studied the
diagonalization of self-adjoint operators modulo norm ideals in semifinite von
Neumann algebras (see \cite{Kadison}, \cite{M1}-\cite{M3} or \cite{V} for more
details about von Neumann algebras). In particular, we gave an analogue of the
Kato-Rosenblum theorem in a semifinite von Neumann algebra in \cite{Li}.
Let $\mathcal{H}$ be a complex separable infinite dimensional Hilbert space.
Assume $H$ and $H_{1}$ are densely defined self-adjoint operators on
$\mathcal{H}$ satisfying that $H_{1}-H$ is in the trace class, then the
Kato-Rosenblum theorem asserts that the wave operator $W_{\pm}\left(
H_{1},H\right) $ of $H$ and $H_{1}$ exists and consequently the absolutely
continuous parts of $H$ and $H_{1}$ are unitarily equivalent. Thus, if a
self-adjoint operator $H$ in $\mathcal{B}(\mathcal{H})$ has a nonzero
absolutely continuous spectrum, then $H$ can not be a sum of a diagonal
operator and a trace class operator. In \cite{Li}, we introduce the concept of
generalized wave operator $W_{\pm}$ based on the notion of norm absolutely
continuous projections. An analogue of Kato-Rosenblum theorem in a semifinite
von Neumann algebra $\mathcal{M}$ is obtained by showing the existence of the
generalized wave operator $W_{\pm}.$ To be more precise, we proved that for
self-adjoint operators $H$ and $H_{1}$ affiliated with $\mathcal{M}$
satisfying $H_{1}-H\in\mathcal{M\cap L}^{1}\left( \mathcal{M},\tau\right) ,$
the generalized wave operator $W_{\pm}\left( H_{1},H\right) $ exists and
then the norm absolutely continuous part of $H$ and $H_{1}$ are unitarily
equivalent. It implies that a self-adjoint operator $H$ affiliated with
$\mathcal{M}$ can not be a sum of a diagonal operator in $\mathcal{M}$ (see
Definition 1.0.1 in \cite{Li2}) and an operator in $\mathcal{M\cap L
^{1}\left( \mathcal{M},\tau\right) $ if it has a non-zero norm absolutely
continuous projection in $\mathcal{M}$
The above statements illustrate that showing the existence of wave operators
is the key step in proving both versions of the Kato-Rosenblum theorem. In
mathematical scattering theory, the wave operator $W_{\pm}$ is an elementary
concept and the existence of $W_{\pm}$ is one of the main research topics in
this area. There are two typical approaches to showing the existence of
$W_{\pm}$: one is the time-dependent approach, which has been used in
\cite{Kato} and \cite{Ro}, and the other is the stationary approach (see
\cite{BE}, \cite{D} or \cite{Y1}). The methods which do not make explicit use
of the time variable $t$ are known as stationary approaches. An important
merit of a stationary approach is that it yields explicit (stationary) formulas
for the wave operators. We notice that the method used in \cite{Li} to prove
the Kato-Rosenblum theorem in $\mathcal{M}$ is a time-dependent approach. So it
is natural to ask whether there is a stationary approach in $\mathcal{M}$. Thus,
to explore a stationary method in $\mathcal{M}$ is the main purpose of the
current article. We will also prove the Kato-Rosenblum theorem in $\mathcal{M}$
from \cite{Li} by a stationary approach.
The notion of the norm absolutely continuous support $P_{ac}^{\infty}\left(
H\right) $ of a self-adjoint operator $H$ affiliated with $\mathcal{M}$ plays a
very important role in the Kato-Rosenblum theorem in $\mathcal{M}.$ So in this
article, we are going to characterize $P_{ac}^{\infty}\left( H\right) $ by
applying the Kato smoothness given in \cite{Kato2}: we assert that
\[
P_{ac}^{\infty}\left( H\right) =\vee\left\{ R\left( G^{\ast}\right)
:G\in\mathcal{M}\text{ is }H\text{-smooth}\right\} .
\]
Therefore, for a self-adjoint operator $H$ affiliated with $\mathcal{M},$ if there is an
$H$-smooth operator in $\mathcal{M},$ then $H$ is not a sum of a diagonal
operator in $\mathcal{M}$ and an operator in $\mathcal{M\cap L}^{1}\left(
\mathcal{M},\tau\right) $.
The organization of this paper is as follows. In Section 2, we prepare related
notation, definitions and lemmas. We list the relation between the resolvent
$R_{H}\left( z\right) =\left( H-z\right) ^{-1}$ and unitary $U_{H
(t)=\exp\left( -itH\right) $ for a self-adjoint operator $H$ on
$\mathcal{H}.$ We also recall the definitions of Kato smoothness and
generalized wave operators. Some basic properties of generalized wave
operators are discussed in this section too. Section 3 is focused on the main
results of this paper. We first characterize the norm absolutely continuous
support $P_{ac}^{\infty}\left( H\right) $ of a self-adjoint operator $H$
affiliated to $\mathcal{M}$ by applying the Kato smoothness. After giving the
concepts of generalized weak wave operators $\widetilde{W}_{\pm}$, generalized
stationary wave operators $\mathcal{U}_{\pm}$ in $\mathcal{M}$, we give a
stationary proof of the Kato-Rosenblum theorem in $\mathcal{M}.$
\section{Preliminaries and Notation}
Let $\mathcal{H}$ be a complex Hilbert space and $\mathcal{B}\left(
\mathcal{H}\right) $ be the set of all bounded linear operators on
$\mathcal{H}.$ In this article, we assume that $\mathcal{M}\subseteq
\mathcal{B}\left( \mathcal{H}\right) $ is a countably decomposable, properly
infinite semifinite von Neumann algebra with a faithful normal tracial weight
$\tau$ and $\mathcal{A}\left( \mathcal{M}\right) $ is the set of densely
defined, closed operators affiliated with $\mathcal{M}.$
\subsection{The Unitary Group and Resolvent of a Self-adjoint Operator}
The resolvent $R_{H}\left( z\right) =\left( H-z\right) ^{-1}$ and the unitary
group $U_{H}(t)=\exp\left( -itH\right) $ for a self-adjoint operator $H$ on
$\mathcal{H}$ will be frequently used in the current paper, so we recall
their properties and relations below.
Let $H$ be any self-adjoint operator with domain $\mathcal{D}\left( H\right)
$ in $\mathcal{H}$ and $\left\{ \left( E_{H}\left( \lambda\right) \right)
\right\} _{\lambda\in\mathbb{R}}$ be the spectral resolution of the identity
for $H.$ For $f,g$ in $\mathcal{H},$ the unitary group $U_{H}(t)=\exp\left(
-itH\right) $ has the sesquilinear form
\begin{equation}
\left\langle U_{H}(t)f,g\right\rangle =\left\langle \exp\left( -itH\right)
f,g\right\rangle =\int_{-\infty}^{\infty}\exp\left( -i\lambda t\right)
d\left\langle E_{H}\left( \lambda\right) f,g\right\rangle . \label{g1}
\end{equation}
Similarly, its resolvent $R_{H}\left( z\right) =\left( H-z\right) ^{-1}$
has the sesquilinear form
\[
\left\langle R_{H}\left( z\right) f,g\right\rangle =\int_{-\infty}^{\infty
}\left( \lambda-z\right) ^{-1}d\left\langle E_{H}\left( \lambda\right)
f,g\right\rangle .
\]
The connection between the above two sesquilinear forms is given by the relation
\begin{equation}
R_{H}\left( \lambda\pm i\varepsilon\right) =\pm i\int_{0}^{\infty}
\exp\left( -\varepsilon t\pm i\lambda t\right) \exp\left( \mp itH\right)
dt. \label{g3}
\end{equation}
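For instance, for the upper sign, \eqref{g1} and Fubini's theorem give, for all
$f,g\in\mathcal{H},$
\[
i\int_{0}^{\infty}e^{-\varepsilon t+i\lambda t}\left\langle e^{-itH}
f,g\right\rangle dt=\int_{-\infty}^{\infty}\left( i\int_{0}^{\infty}
e^{\left( i\left( \lambda-s\right) -\varepsilon\right) t}dt\right)
d\left\langle E_{H}\left( s\right) f,g\right\rangle =\int_{-\infty}^{\infty}
\frac{d\left\langle E_{H}\left( s\right) f,g\right\rangle }{s-\lambda
-i\varepsilon}=\left\langle R_{H}\left( \lambda+i\varepsilon\right)
f,g\right\rangle .
\]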
A complete proof of equality (\ref{g3}) is given in Section 1.4 of \cite{Y1}. Set
\[
\delta_{H}\left( \lambda,\varepsilon\right) =\frac{1}{2\pi i}\left[
R_{H}\left( \lambda+i\varepsilon\right) -R_{H}\left( \lambda-i\varepsilon
\right) \right] =\frac{\varepsilon}{\pi}R_{H}\left( \lambda+i\varepsilon
\right) R_{H}\left( \lambda-i\varepsilon\right) \geq0,
\]
then
\begin{equation}
\varepsilon\pi^{-1}\left\Vert R_{H}\left( \lambda\pm i\varepsilon\right)
f\right\Vert ^{2}=\left\langle \delta_{H}\left( \lambda,\varepsilon\right)
f,f\right\rangle \label{g21}
\end{equation}
and
\begin{equation}
\left\langle \delta_{H}\left( \lambda,\varepsilon\right) f,g\right\rangle
=\frac{\varepsilon}{\pi}\int_{-\infty}^{\infty}\frac{1}{\left( s-\lambda
-i\varepsilon\right) \left( s-\lambda+i\varepsilon\right) }d\left\langle
E_{H}\left( s\right) f,g\right\rangle . \label{g40}
\end{equation}
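We remark that the second equality in the definition of $\delta_{H}\left(
\lambda,\varepsilon\right) $ follows from the first resolvent identity, since
\[
R_{H}\left( \lambda+i\varepsilon\right) -R_{H}\left( \lambda-i\varepsilon
\right) =2i\varepsilon R_{H}\left( \lambda+i\varepsilon\right) R_{H}\left(
\lambda-i\varepsilon\right) ,
\]
and $\delta_{H}\left( \lambda,\varepsilon\right) \geq0$ because
$R_{H}\left( \lambda-i\varepsilon\right) =R_{H}\left( \lambda
+i\varepsilon\right) ^{\ast}.$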
Denote by $\mathcal{H}_{ac}\left( H\right) $ the set of all those vectors
$x\in\mathcal{H}$ such that the mapping $\lambda\longmapsto\left\langle
E_{H}\left( \lambda\right) x,x\right\rangle ,$ with $\lambda\in\mathbb{R}$,
is a locally absolutely continuous function on $\mathbb{R}$ (see \cite{Kato4}
or \cite{Y1} for more details). From the argument in Section 1.4 \cite{Y1}, we
conclude that
\begin{equation}
\lim_{\varepsilon\rightarrow0}\left\langle \delta_{H}\left( \lambda
,\varepsilon\right) f,g\right\rangle =\frac{d\left\langle E_{H}\left(
\lambda\right) f,g\right\rangle }{d\lambda},\text{ \ \ a.e. }\lambda
\in\mathbb{R} \label{g6}
\end{equation}
for $f$ or $g$ in $\mathcal{H}_{ac}\left( H\right) $. We also have
\begin{equation}
\frac{d\left\langle E_{H}\left( \lambda\right) E_{H}\left( \Lambda\right)
f,g\right\rangle }{d\lambda}=\mathcal{X}_{\Lambda}\left( \lambda\right)
\frac{d\left\langle E_{H}\left( \lambda\right) f,g\right\rangle }{d\lambda
},\text{ a.e. }\lambda\in\mathbb{R}, \label{g30}
\end{equation}
where $\mathcal{X}_{\Lambda}\left( \cdot\right) $ is the characteristic
function of the Borel set $\Lambda$. The proof of equality (\ref{g30}) can be
found in the proof of Theorem X.4.4 in \cite{Kato4} or Section 1.3 in
\cite{Y1}.
\subsection{Kato Smoothness and generalized wave operators}
Kato smoothness plays a very important role in mathematical scattering
theory. It can be equivalently formulated in terms of the corresponding
unitary group. We recall it in this section.
For a self-adjoint operator $H,$ an operator $G:\mathcal{H\rightarrow H}$ is
called $H$-bounded if $\mathcal{D}\left( H\right) \subseteq\mathcal{D
\left( G\right) $ and $GR_{H}\left( z\right) $ is bounded for $z$ in the
resolvent set $\rho=\rho\left( H\right) .$
\begin{theorem}
\label{K1}(Theorem 4.3.1 in \cite{Y1} or Theorem 5.1 in \cite{Kato2}) Let $H$
be a densely defined self-adjoint operator in $\mathcal{H}$. Assume that
$G:\mathcal{H\rightarrow H}$ is an $H$-bounded operator, then the following
conditions are equivalent.
\begin{enumerate}
\item $\gamma_{1}^{2}=\frac{1}{2\pi}\sup_{f\in\mathcal{D}\left( H\right)
,\left\Vert f\right\Vert =1}\int_{\mathbb{R}}\left\Vert Ge^{\pm itH}
f\right\Vert ^{2}dt<\infty;$
\item $\gamma_{2}^{2}=\frac{1}{\left( 2\pi\right) ^{2}}\sup_{\left\Vert
f\right\Vert =1,\varepsilon>0}\int_{\mathbb{R}}\left( \left\Vert
GR_{H}(\lambda+i\varepsilon)f\right\Vert ^{2}+\left\Vert GR_{H}\left(
\lambda-i\varepsilon\right) f\right\Vert ^{2}\right) d\lambda <\infty;$
\item $\gamma_{3}^{2}=\sup_{\left\Vert f\right\Vert =1,\varepsilon>0}
\int_{\mathbb{R}}\left\Vert G\delta_{H}\left( \lambda,\varepsilon\right)
f\right\Vert ^{2}d\lambda<\infty;$
\item $\gamma_{4}^{2}=\sup_{\lambda\in\mathbb{R},\varepsilon>0}\left\Vert
G\delta_{H}\left( \lambda,\varepsilon\right) G^{\ast}\right\Vert <\infty;$
\item $\gamma_{5}^{2}=\sup_{\Lambda\subseteq\mathbb{R}}\frac{\left\Vert
GE_{H}\left( \Lambda\right) G^{\ast}\right\Vert }{\left\vert \Lambda
\right\vert }<\infty.$
\end{enumerate}
All the constants $\gamma_{j}=\gamma_{j}\left( G\right) ,$ $j=1,\cdots,5,$
are equal to one another.
\end{theorem}
\begin{definition}
\label{K2}Let $H$ be a self-adjoint operator acting on the Hilbert space
$\mathcal{H}.$ If $G$ is $H$-bounded and one of the inequalities (1)-(5) holds
(and then all of them), then the operator $G$ is called Kato smooth relative to
the operator $H$ ($H$-smooth). The common value of the quantities $\gamma
_{1},\cdots,\gamma_{5}$ is denoted by $\gamma_{H}\left( G\right) .$
\end{definition}
\begin{remark}
There are other expressions for the number $\gamma_{H}\left( G\right) $
given in Section 4.3 of \cite{Y1}. In particular, for each sign "$\pm$",
\begin{equation}
\gamma_{H}^{2}\left( G\right) =\left( \frac{1}{2\pi}\right) ^{2}
\sup_{\left\Vert f\right\Vert =1,\varepsilon>0}\int_{-\infty}^{\infty
}\left\Vert GR_{H}\left( \lambda\pm i\varepsilon\right) f\right\Vert
^{2}d\lambda. \label{g7}
\end{equation}
\end{remark}
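As a standard illustration, let $H=-i\frac{d}{dx}$ on $L^{2}\left(
\mathbb{R}\right) $ and let $G$ be the operator of multiplication by a
function $g\in L^{2}\left( \mathbb{R}\right) .$ Since $E_{H}\left(
\Lambda\right) $ acts as multiplication by $\mathcal{X}_{\Lambda}$ on the
Fourier transform side, $GE_{H}\left( \Lambda\right) $ is an integral
operator with kernel $g\left( x\right) \left( 2\pi\right) ^{-1}
\int_{\Lambda}e^{i\xi\left( x-y\right) }d\xi$, whose Hilbert-Schmidt norm
satisfies $\left\Vert GE_{H}\left( \Lambda\right) \right\Vert _{2}
^{2}=\left( 2\pi\right) ^{-1}\left\vert \Lambda\right\vert \left\Vert
g\right\Vert _{L^{2}}^{2}$ by Plancherel's theorem. Hence
\[
\left\Vert GE_{H}\left( \Lambda\right) G^{\ast}\right\Vert \leq\left\Vert
GE_{H}\left( \Lambda\right) \right\Vert ^{2}\leq\left( 2\pi\right)
^{-1}\left\vert \Lambda\right\vert \left\Vert g\right\Vert _{L^{2}}^{2},
\]
so condition (5) holds and $G$ is $H$-smooth with $\gamma_{H}\left( G\right)
^{2}\leq\left( 2\pi\right) ^{-1}\left\Vert g\right\Vert _{L^{2}}^{2}.$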
Before giving the definition of generalized wave operators in $\mathcal{M},$
we need to recall the following concepts which first appeared in \cite{Li}.
\begin{definition}
\label{M0}(\cite{Li})Let $H$ be a self-adjoint element in $\mathcal{A}\left(
\mathcal{M}\right) $ and let $\left\{ E_{H}\left( \lambda\right) \right\}
_{\lambda\in\mathbb{R}}$ be the spectral resolution of the identity for $H$ in
$\mathcal{M}.$ We define $\mathcal{P}_{ac}^{\infty}\left( H\right) $ to be
the collection of those projections $P$ in $\mathcal{M}$ such that:
the mapping $\lambda\longmapsto PE_{H}\left( \lambda\right) P$ from
$\lambda\in\mathbb{R}$ into $\mathcal{M}$ is locally absolutely continuous,
i.e., for all $a,b\in\mathbb{R}$ with $a<b$ and every $\varepsilon>0,$ there
exists a $\delta>0$ such that $\sum_{i}\left\Vert PE_{H}\left( b_{i}\right)
P-PE_{H}\left( a_{i}\right) P\right\Vert <\varepsilon$ for every finite
collection $\left\{ \left( a_{i},b_{i}\right) \right\} $ of disjoint
intervals in $\left[ a,b\right] $ with $\sum_{i}\left( b_{i}-a_{i}\right)
<\delta.$
A projection $P\in\mathcal{P}_{ac}^{\infty}\left( H\right) $ is called a
norm absolutely continuous projection with respect to $H.$ Define
\[
P_{ac}^{\infty}\left( H\right) =\vee\left\{ P:P\in\mathcal{P}_{ac}^{\infty
}\left( H\right) \right\} .
\]
Such $P_{ac}^{\infty}\left( H\right) $ is called the norm absolutely
continuous support of $H$ in $\mathcal{M}$ and denote the range of
$P_{ac}^{\infty}\left( H\right) $ by $\mathcal{H}_{ac}^{\infty}\left(
H\right) .$
\end{definition}
\begin{remark}
\label{I1}Let $P_{ac}\left( H\right) $ be the projection from $\mathcal{H}$
onto $\mathcal{H}_{ac}\left( H\right) .$ In \cite{Li}, it has been shown
that $P_{ac}^{\infty}\left( H\right) \leq P_{ac}\left( H\right) $ and
$P_{ac}^{\infty}\left( H\right) \in\mathcal{M\cap A}^{\prime}$ where
$\mathcal{A}$ is the von Neumann subalgebra generated by $\left\{
E_{H}\left( \lambda\right) \right\} _{\lambda\in\mathbb{R}}$ in
$\mathcal{M}$ and $\mathcal{A}^{\prime}$ denotes the commutant of
$\mathcal{A}.$
\end{remark}
Now, we are ready to recall the definition of generalized wave operators.
\begin{definition}
\label{I2}(\cite{Li})Let $H$, $H_{1}\in\mathcal{A}\left( \mathcal{M}\right)
$ be a pair of self-adjoint operators and $J$ be an operator in $\mathcal{M}$.
The generalized wave operator for a pair of self-adjoint operators $H$,
$H_{1}$ and $J$ in $\mathcal{M}$ is the operator
\[
W_{\pm}(H_{1},H;J)=s.o.t\text{-}\lim_{t\rightarrow\pm\infty}e^{itH_{1
}Je^{-itH}P_{ac}^{\infty}\left( H\right)
\]
provided that the $s.o.t.$ (strong operator topology) limit exists.
\end{definition}
We note that the relation containing the signs "$\pm$" is understood as two
independent equalities. After slightly modifying the proof of Theorem 5.2.5 in
\cite{Li}, we can get the next result.
\begin{theorem}
\label{I3}Let $H$, $H_{1}\in\mathcal{A}\left( \mathcal{M}\right) $ be a pair
of self-adjoint operators and $J$ be an operator in $\mathcal{M}$. If $W_{\pm
}(H_{1},H;J)$ exists for a pair of self-adjoint operators $H$, $H_{1}$ and
$J,$ then for any Borel function $\varphi
\[
\varphi\left( H_{1}\right) W_{\pm}(H_{1},H;J)=W_{\pm}(H_{1},H;J)\varphi
\left( H\right) .
\]
In particular, for any Borel set $\Lambda\subseteq\mathbb{R}$
\[
E_{H_{1}}\left( \Lambda\right) W_{\pm}(H_{1},H;J)=W_{\pm}(H_{1}
,H;J)E_{H}\left( \Lambda\right) .
\]
\end{theorem}
Different $J$ might give us different $W_{\pm}$, so below we give a condition
on $J$ such that $W_{\pm}(H_{1},H;J)$ is an isometry on $P_{ac}^{\infty
}\left( H\right) .$ Its proof is similar to that of Proposition 2.1.3 in
\cite{Y1}, so we omit it.
\begin{theorem}
\label{I4}Let $H$, $H_{1}\in\mathcal{A}\left( \mathcal{M}\right) $ be a pair
of self-adjoint operators and $J$ be an operator in $\mathcal{M}$. If $W_{\pm
}(H_{1},H;J)$ exists, then $W_{\pm}(H_{1},H;J)$ is isometric on
$P_{ac}^{\infty}\left( H\right) $ if the strong operator limit
\[
s.o.t\text{-}\lim_{t\rightarrow\pm\infty}\left( J^{\ast}J-I\right)
e^{-itH}P_{ac}^{\infty}\left( H\right) =0.
\]
\end{theorem}
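In particular, if $J$ is an isometry (for instance, $J=I$), then $J^{\ast
}J-I=0$ and the condition in Theorem \ref{I4} holds trivially, so $W_{\pm
}(H_{1},H;J)$ is isometric on $P_{ac}^{\infty}\left( H\right) $ whenever it
exists.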
\begin{lemma}
\label{M6}(Lemma 5.2.3 \cite{Li}) Suppose $H$ is a self-adjoint element in
$\mathcal{A}\left( \mathcal{M}\right) .$ Let $\left\{ \left( E_{H}\left(
\lambda\right) \right) \right\} _{\lambda\in\mathbb{R}}$ be the spectral
resolution of the identity for $H$ in $\mathcal{M}.$ If $S\in\mathcal{M}$
satisfies that the mapping $\lambda\longmapsto S^{\ast}E_{H}\left(
\lambda\right) S$ from $\mathbb{R}$ into $\mathcal{M}$ is locally absolutely
continuous, then $R\left( S\right) ,$ the range projection of $S$ in
$\mathcal{M},$ is a subprojection of $P_{ac}^{\infty}\left( H\right) .$
\end{lemma}
Then we can get the following result.
\begin{proposition}
Let $H$, $H_{1}\in\mathcal{A}\left( \mathcal{M}\right) $ be a pair of
self-adjoint operators and $J$ be an operator in $\mathcal{M}$. If $W_{\pm
}\overset{\triangle}{=}W_{\pm}(H_{1},H;J)$ exists and the strong operator
limit
\[
s.o.t\text{-}\lim_{t\rightarrow\pm\infty}\left( J^{\ast}J-I\right)
e^{-itH}P_{ac}^{\infty}\left( H\right) =0,
\]
then $W_{\pm}W_{\pm}^{\ast}\leq P_{ac}^{\infty}\left( H_{1}\right) .$
\end{proposition}
\begin{proof}
If
\[
s.o.t\text{-}\lim_{t\rightarrow\pm\infty}\left( J^{\ast}J-I\right)
e^{-itH}P_{ac}^{\infty}\left( H\right) =0,
\]
then $W_{\pm}^{\ast}W_{\pm}=P_{ac}^{\infty}\left( H\right) $ by Theorem
\ref{I4}. From Theorem \ref{I3}, for any $P\in\mathcal{P}_{ac}^{\mathcal{1
}\left( H\right) $ and any Borel set $\Lambda\subseteq\mathbb{R},$
\[
\left( W_{\pm}P\right) ^{\ast}E_{H_{1}}\left( \Lambda\right) \left(
W_{\pm}P\right) =PW_{\pm}^{\ast}E_{H_{1}}\left( \Lambda\right) W_{\pm
}P=PW_{\pm}^{\ast}W_{\pm}E_{H}\left( \Lambda\right) P=PE_{H}\left(
\Lambda\right) P.
\]
It implies that the mapping $\lambda\rightarrow\left( W_{\pm}P\right)
^{\ast}E_{H_{1}}\left( \lambda\right) \left( W_{\pm}P\right) $ from
$\mathbb{R}$ into $\mathcal{M}$ is locally absolutely continuous. Hence the
range projection $R\left( W_{\pm}P\right) \leq P_{ac}^{\infty}\left(
H_{1}\right) $ by Lemma \ref{M6}. Therefore $R\left( W_{\pm}\right) \leq
P_{ac}^{\infty}\left( H_{1}\right) $ by the fact that $W_{\pm}
P_{ac}^{\infty}\left( H\right) =W_{\pm}.$ Hence $W_{\pm}W_{\pm}^{\ast
}\leq P_{ac}^{\infty}\left( H_{1}\right) .$
\end{proof}
\section{\bigskip Main Results}
\subsection{Characterization of norm absolutely continuous projections in
$\mathcal{M}$}
The cut-off function $\omega_{n}$ is introduced in \cite{Li}; we refer the
reader to \cite{Li} for its definition. Here we only recall its key property.
\begin{lemma}
\label{M5}(Lemma 4.2.2 in \cite{Li}) Suppose $H$ is a self-adjoint element in
$\mathcal{A}\left( \mathcal{M}\right) .$ For each $n\in\mathbb{N}$ and
cut-off function $\omega_{n},$ let
\[
\omega_{n}\left( H\right) =\int_{\mathbb{R}}\omega_{n}\left( t\right)
dE_{H}\left( t\right) .
\]
Then $\omega_{n}\left( H\right) \in\mathcal{M}$ and
\[
\omega_{n}\left( H\right) \rightarrow I\text{ in strong operator topology,
as }n\rightarrow\infty.
\]
\end{lemma}
\begin{remark}
\label{M26} Let $H$ be a self-adjoint element in $\mathcal{A}\left(
\mathcal{M}\right) $ and $P\in\mathcal{P}_{ac}^{\infty}\left( H\right) .$
From Lemma 4.2.3 (vi) in \cite{Li},
\begin{equation}
\int_{\mathbb{R}}\left\Vert P\omega_{n}\left( H\right) e^{-itH}f\right\Vert
^{2}dt\leq\frac{n}{2\pi}\left\Vert f\right\Vert ^{2} \label{g70}
\end{equation}
for any $f\in\mathcal{H}$ and $n\in\mathbb{N}.$ Then
\[
\sup_{\left\Vert f\right\Vert =1}\int_{\mathbb{R}}\left\Vert P\omega
_{n}\left( H\right) e^{-itH}f\right\Vert ^{2}dt\leq\frac{n}{2\pi}.
\]
By Theorem \ref{K1} and Definition \ref{K2}, we have $G=P\omega_{n}\left(
H\right) $ is $H$-smooth for $n\in\mathbb{N}.$
\end{remark}
The next theorem is the main result of this subsection.
\begin{theorem}
\label{M7}Suppose $H$ is a self-adjoint element in $\mathcal{A}\left(
\mathcal{M}\right) .$ Then
\[
P_{ac}^{\infty}\left( H\right) =\vee\left\{ R\left( G^{\ast}\right)
:G\in\mathcal{M}\text{ is }H\text{-smooth}\right\}
\]
where $R\left( G^{\ast}\right) $ is the range projection of $G^{\ast}.$
\end{theorem}
\begin{proof}
By Remark \ref{M26}, we have $G=P\omega_{n}\left( H\right) $ is $H$-smooth.
Hence from Theorem \ref{K1},
\[
\sup_{\left\Vert x\right\Vert =1}\frac{1}{2\pi}\int_{\mathbb{R}}\left\Vert
P\omega_{n}\left( H\right) e^{-itH}x\right\Vert ^{2}dt=\sup_{\Lambda
\subseteq\mathbb{R}}\frac{\left\Vert P\omega_{n}\left( H\right) E_{H}\left(
\Lambda\right) \omega_{n}\left( H\right) P\right\Vert }{\left\vert
\Lambda\right\vert }\leq\frac{n}{\left( 2\pi\right) ^{2}}.
\]
Therefore $\lambda\rightarrow P\omega_{n}\left( H\right) E_{H}\left(
\lambda\right) \omega_{n}\left( H\right) P$ is locally absolutely
continuous. Then by Lemma \ref{M6}, we have $R\left( \omega_{n}\left(
H\right) P\right) \leq P_{ac}^{\infty}\left( H\right) $ for every
$n\in\mathbb{N}$ and $P\in\mathcal{P}_{ac}^{\infty}\left( H\right) .$ Hence
\[
P=\vee_{n}R\left( \omega_{n}\left( H\right) P\right) =\vee_{n}\left\{
R\left( G^{\ast}\right) :G=P\omega_{n}\left( H\right) \right\}
\]
by Lemma \ref{M5}. Now we conclude that $P_{ac}^{\infty}\left( H\right)
\leq\vee\left\{ R\left( G^{\ast}\right) :G\in\mathcal{M}\text{ is
}H\text{-smooth}\right\} .$
On the other hand, if $G\in\mathcal{M}$ is $H$-smooth, then by Theorem
\ref{K1} (5) we have that $\lambda\rightarrow GE_{H}\left( \lambda\right) G^{\ast}$ is locally
absolutely continuous. Therefore $R\left( G^{\ast}\right) \leq
P_{ac}^{\infty}\left( H\right) $ by Lemma \ref{M6}. Hence
\[
\vee\left\{ R\left( G^{\ast}\right) :G\text{ is }H\text{-smooth in
}\mathcal{M}\right\} \leq P_{ac}^{\infty}\left( H\right) .
\]
This completes the proof.
\end{proof}
Let $P_{ac}\left( H\right) $ be the projection from $\mathcal{H}$ onto
$\mathcal{H}_{ac}\left( H\right) .$ In \cite{Li}, it has been shown that
$P_{ac}\left( H\right) =P_{ac}^{\infty}\left( H\right) $ for a densely
defined self-adjoint operator $H$ (therefore $H\in\mathcal{A}\left(
\mathcal{B}\left( \mathcal{H}\right) \right) $). Then we can get the
following corollary.
\begin{corollary}
\label{M8}Let $H$ be a densely defined self-adjoint operator on $\mathcal{H}.$
Then
\begin{align*}
P_{ac}\left( H\right) & =P_{ac}^{\infty}\left( H\right) \\
& =\vee\left\{ R\left( G^{\ast}\right) :G\in\mathcal{B}\left(
\mathcal{H}\right) \text{ is }H\text{-smooth}\right\} .
\end{align*}
\end{corollary}
\begin{corollary}
\label{M9}Suppose $H$ is a self-adjoint operator affiliated with $\mathcal{M}.$ Then
$P_{ac}^{\infty}\left( H\right) \neq0$ if and only if there is at least one
$H$-smooth operator in $\mathcal{M}.$
\end{corollary}
\begin{proof}
If $P_{ac}^{\infty}\left( H\right) \neq0,$ then there is a projection
$P\in\mathcal{P}_{ac}^{\infty}\left( H\right) .$ By the argument in the
proof of Theorem \ref{M7}, we know that $P\omega_{n}\left( H\right) $ is
an $H$-smooth operator in $\mathcal{M}.$ The other direction is clear by Theorem
\ref{M7}.
\end{proof}
\subsection{A Stationary approach in $\mathcal{M}$}
The stationary approach in a Hilbert space $\mathcal{H}$ is based
on several variations of wave operators, such as weak wave operators and
stationary wave operators (see \cite{Y1}). So we will list the definitions of
these variations in $\mathcal{M}.$ For the details, we refer the reader to
\cite{ZRHL}.
\begin{definition}
\label{R1}(\cite{ZRHL}) Let $H$, $H_{1}\in\mathcal{A}\left( \mathcal{M}
\right) $ be a pair of self-adjoint operators and $J$ be an operator in
$\mathcal{M}$. The generalized weak wave operator for a pair of self-adjoint
operators $H$, $H_{1}$ and $J$ is the operator
\[
\widetilde{W}_{\pm}(H_{1},H;J)=w.o.t\text{-}\lim_{t\rightarrow\pm\infty}
P_{ac}^{\infty}\left( H_{1}\right) e^{itH_{1}}Je^{-itH}P_{ac}^{\infty
}\left( H\right)
\]
provided that the $w.o.t$ (weak operator topology) limit exists.
\end{definition}
Furthermore, we also have
\begin{equation}
\widetilde{W}_{\pm}(H,H_{1};J^{\ast})=\widetilde{W}_{\pm}^{\ast}(H_{1},H;J)
\label{g33}
\end{equation}
if $\widetilde{W}_{\pm}(H_{1},H;J)$ exists.
\begin{definition}
\label{R5}(\cite{ZRHL})Let $J$ be an operator in $\mathcal{M}$, $H$, $H_{1}$
be self-adjoint operators in $\mathcal{A}\left( \mathcal{M}\right) .$ If for
any pair of elements $f$ and $f_{1}$ in $\mathcal{H},$
\[
\lim_{\varepsilon\rightarrow0}\frac{\varepsilon}{\pi}\left\langle
JR_{H}\left( \lambda\pm i\varepsilon\right) P_{ac}^{\infty}\left( H\right)
f,R_{H_{1}}\left( \lambda\pm i\varepsilon\right) P_{ac}^{\infty}\left(
H_{1}\right) f_{1}\right\rangle
\]
exists for a.e. $\lambda\in\mathbb{R},$ then the generalized stationary wave
operator is defined as
\[
\left\langle \mathcal{U}_{\pm}\left( H_{1},H;J\right) f,f_{1}\right\rangle
=\int_{-\infty}^{\infty}\lim_{\varepsilon\rightarrow0}\frac{\varepsilon}{\pi
}\left\langle JR_{H}\left( \lambda\pm i\varepsilon\right) P_{ac}^{\infty
}\left( H\right) f,R_{H_{1}}\left( \lambda\pm i\varepsilon\right)
P_{ac}^{\infty}\left( H_{1}\right) f_{1}\right\rangle d\lambda.
\]
\end{definition}
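For orientation, it is worth recording the scalar computation behind the
weight $\varepsilon/\pi$ in Definition \ref{R5} (a standard fact, included
here only as an illustration): if $H$ acts as multiplication by a real number
$h,$ then
\[
\frac{\varepsilon}{\pi}\left\Vert R_{H}\left( \lambda\pm i\varepsilon\right)
f\right\Vert ^{2}=\frac{1}{\pi}\frac{\varepsilon}{\left( \lambda-h\right)
^{2}+\varepsilon^{2}}\left\Vert f\right\Vert ^{2},
\]
which is the Poisson kernel in $\lambda$ and converges weakly to the point
mass at $h$ as $\varepsilon\rightarrow0.$ This is the scalar counterpart of
the relation between $\delta_{H}\left( \lambda,\varepsilon\right) $ and the
derivative of the spectral measure that is used repeatedly below.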
In \cite{ZRHL}, the generalized stationary wave operator $\mathcal{U}_{\pm}$
was introduced based on the concept of the generalized weak abelian wave
operator $\widetilde{\mathfrak{U}}_{\pm}.$ For their definitions and the relations
among $\mathcal{U}_{\pm}$, $\widetilde{\mathfrak{U}}_{\pm}$, $\widetilde
{W}_{\pm}$ and $W_{\pm}$ in $\mathcal{M},$ we refer the reader to \cite{ZRHL}
for more details.
From the definition of $\mathcal{U}_{\pm}\left( H_{1},H;J\right) $, it is
clear that
\begin{equation}
P_{ac}^{\infty}\left( H_{1}\right) \mathcal{U}_{\pm}\left( H_{1}
,H;J\right) =\mathcal{U}_{\pm}\left( H_{1},H;J\right) \label{G70}
\end{equation}
if $\mathcal{U}_{\pm}\left( H_{1},H;J\right) $ exists.
Note that $\mathcal{U}_{\pm}\left( H_{1},H;J\right) $ is given in terms of the
resolvents of the operators $H$ and $H_{1}$, which have nothing to do
with the time variable $t$. So $\mathcal{U}_{\pm}\left(
H_{1},H;J\right) $ is a key concept in the stationary approach in
$\mathcal{M}.$ Actually, to check the existence of $\mathcal{U}_{\pm}\left(
H_{1},H;J\right) $ is one of the key steps to show the existence of $W_{\pm}$
in a stationary method.
\begin{corollary}
\label{R8}(\cite{ZRHL}) Let $H$, $H_{1}\in\mathcal{A}\left( \mathcal{M}
\right) $ be a pair of self-adjoint operators and $J$ be an operator in
$\mathcal{M}$. If $\mathcal{U}_{\pm}\left( H_{1},H;J\right) $ exists, then
\[
\mathcal{U}_{\pm}\left( H,H_{1};J^{\ast}\right) =\mathcal{U}_{\pm}^{\ast
}\left( H_{1},H;J\right) .
\]
\end{corollary}
The next result gives us the relation among $\mathcal{U}_{\pm}\left(
H_{1},H;J\right) ,$ $\widetilde{W}_{\pm}\left( H_{1},H;J\right) $ and
$W_{\pm}(H_{1},H;J)$.
\begin{theorem}
\label{R10}(\cite{ZRHL}) Let $H$, $H_{1}\in\mathcal{A}\left( \mathcal{M}
\right) $ be a pair of self-adjoint operators and $J$ be an operator in
$\mathcal{M}$. If $\mathcal{U}_{\pm}\left( H_{1},H;J\right) $,
$\mathcal{U}_{\pm}\left( H,H;J^{\ast}J\right) $, $\widetilde{W}_{\pm}\left(
H_{1},H;J\right) $ and $\widetilde{W}_{\pm}\left( H,H;J^{\ast}J\right) $
exist as well as
\[
\mathcal{U}_{\pm}^{\ast}\left( H_{1},H;J\right) \mathcal{U}_{\pm}\left(
H_{1},H;J\right) =\mathcal{U}_{\pm}\left( H,H;J^{\ast}J\right) ,
\]
then $W_{\pm}(H_{1},H;J)$ exists and
\[
\mathcal{U}_{\pm}\left( H_{1},H;J\right) =W_{\pm}(H_{1},H;J).
\]
\end{theorem}
Theorem \ref{R10} gives us a strategy to show the existence of $W_{\pm}$ by a
stationary approach.
\begin{corollary}
\label{R9}Let $H$, $H_{1}\in\mathcal{A}\left( \mathcal{M}\right) $ be a pair
of self-adjoint operators and $J$ be an operator in $\mathcal{M}$. If
$\mathcal{U}_{\pm}\left( H_{1},H;J\right) $ exists, then for any pair of
elements $f$ and $f_{1}$ in $\mathcal{H}\ $and any Borel sets $\Lambda$,
$\Lambda_{1}\subset\mathbb{R},$
\begin{align*}
& \left\langle \mathcal{U}_{\pm}\left( H_{1},H;J\right) E_{H}\left(
\Lambda\right) f,E_{H_{1}}\left( \Lambda_{1}\right) f_{1}\right\rangle \\
& =\int_{\Lambda_{1}\cap\Lambda}\lim_{\varepsilon\rightarrow0}\frac
{\varepsilon}{\pi}\left\langle JR_{H}\left( \lambda\pm i\varepsilon\right)
P_{ac}^{\infty}\left( H\right) f,R_{H_{1}}\left( \lambda\pm i\varepsilon
\right) P_{ac}^{\infty}\left( H_{1}\right) f_{1}\right\rangle d\lambda.
\end{align*}
\end{corollary}
\begin{proof}
Set
\[
\alpha_{\pm}\left( f,f_{1};\lambda\right) =\lim_{\varepsilon\rightarrow
0}\frac{\varepsilon}{\pi}\left\langle JR_{H}\left( \lambda\pm i\varepsilon
\right) P_{ac}^{\infty}\left( H\right) f,R_{H_{1}}\left( \lambda\pm
i\varepsilon\right) P_{ac}^{\infty}\left( H_{1}\right) f_{1}\right\rangle
.
\]
Since
\[
P_{ac}^{\infty}\left( H\right) E_{H}\left( \Lambda\right) =E_{H}\left(
\Lambda\right) P_{ac}^{\infty}\left( H\right)
\]
and
\[
P_{ac}^{\infty}\left( H_{1}\right) E_{H_{1}}\left( \Lambda_{1}\right)
=E_{H_{1}}\left( \Lambda_{1}\right) P_{ac}^{\infty}\left( H_{1}\right)
\]
by Remark \ref{I1}, we get that
\begin{align*}
& \left\vert \alpha_{\pm}\left( E_{H}\left( \Lambda\right) f,E_{H_{1}
}\left( \Lambda_{1}\right) f_{1};\lambda\right) \right\vert ^{2}\\
& \leq\frac{\varepsilon^{2}}{\pi^{2}}\left\Vert J\right\Vert ^{2}
\lim_{\varepsilon\rightarrow0}\left\Vert R_{H}\left( \lambda\pm
i\varepsilon\right) E_{H}\left( \Lambda\right) P_{ac}^{\infty}\left(
H\right) f\right\Vert ^{2}\lim_{\varepsilon\rightarrow0}\left\Vert R_{H_{1}
}\left( \lambda\pm i\varepsilon\right) E_{H_{1}}\left( \Lambda_{1}\right)
P_{ac}^{\infty}\left( H_{1}\right) f_{1}\right\Vert ^{2}\\
& =\left\Vert J\right\Vert ^{2}\lim_{\varepsilon\rightarrow0}\left\langle
\delta_{H}\left( \lambda,\varepsilon\right) E_{H}\left( \Lambda\right)
P_{ac}^{\infty}\left( H\right) f,f\right\rangle \cdot\lim_{\varepsilon
\rightarrow0}\left\langle \delta_{H_{1}}\left( \lambda,\varepsilon\right)
E_{H_{1}}\left( \Lambda_{1}\right) P_{ac}^{\infty}\left( H_{1}\right)
f_{1},f_{1}\right\rangle \\
& =\left\Vert J\right\Vert ^{2}\mathcal{X}_{\Lambda\cap\Lambda_{1}}
\frac{d\left\langle E_{H}\left( \lambda\right) P_{ac}^{\infty}\left(
H\right) f,f\right\rangle }{d\lambda}\frac{d\left\langle E_{H_{1}}\left(
\lambda\right) P_{ac}^{\infty}\left( H_{1}\right) f_{1},f_{1}\right\rangle
}{d\lambda}
\end{align*}
by (\ref{g6}) and (\ref{g30}) where $\mathcal{X}_{\Lambda\cap\Lambda_{1}}$ is
the characteristic function of $\Lambda\cap\Lambda_{1}.$ Therefore
\[
\mathcal{X}_{\mathbb{R}\backslash\Lambda\cap\Lambda_{1}}\alpha_{\pm}\left(
E_{H}\left( \Lambda\right) f,E_{H_{1}}\left( \Lambda_{1}\right)
f_{1};\lambda\right) =0,
\]
where $\mathcal{X}_{\mathbb{R}\backslash\Lambda\cap\Lambda_{1}}$ is the
characteristic function of $\mathbb{R}\backslash\Lambda\cap\Lambda_{1}.$ It
implies that
\[
\mathcal{X}_{\Lambda\cap\Lambda_{1}}\alpha_{\pm}\left( E_{H}\left(
\Lambda\right) f,E_{H_{1}}\left( \Lambda_{1}\right) f_{1};\lambda\right)
=\alpha_{\pm}\left( E_{H}\left( \Lambda\right) f,E_{H_{1}}\left(
\Lambda_{1}\right) f_{1};\lambda\right) .
\]
Hence
\begin{align*}
& \mathcal{X}_{\Lambda\cap\Lambda_{1}}\alpha_{\pm}\left( f,f_{1}
;\lambda\right) \\
& =\mathcal{X}_{\Lambda\cap\Lambda_{1}}\alpha_{\pm}\left( E_{H}\left(
\Lambda\right) f,E_{H_{1}}\left( \Lambda_{1}\right) f_{1};\lambda\right)
+\mathcal{X}_{\Lambda\cap\Lambda_{1}}\alpha_{\pm}\left( E_{H}\left(
\mathbb{R}\backslash\left( \Lambda\right) \right) f,E_{H_{1}}\left(
\Lambda_{1}\right) f_{1};\lambda\right) \\
& +\mathcal{X}_{\Lambda\cap\Lambda_{1}}\alpha_{\pm}\left( f,E_{H_{1}}\left(
\mathbb{R}\backslash\left( \Lambda_{1}\right) \right) f_{1};\lambda\right)
\\
& =\alpha_{\pm}\left( E_{H}\left( \Lambda\right) f,E_{H_{1}}\left(
\Lambda_{1}\right) f_{1};\lambda\right) .
\end{align*}
It follows that
\begin{align*}
& \left\langle \mathcal{U}_{\pm}\left( H_{1},H;J\right) E_{H}\left(
\Lambda\right) f,E_{H_{1}}\left( \Lambda_{1}\right) f_{1}\right\rangle \\
& =\int_{\mathbb{R}}\alpha_{\pm}\left( E_{H}\left( \Lambda\right)
f,E_{H_{1}}\left( \Lambda_{1}\right) f_{1};\lambda\right) d\lambda=\int
_{\mathbb{R}}\mathcal{X}_{\Lambda\cap\Lambda_{1}}\alpha_{\pm}\left(
f,f_{1};\lambda\right) d\lambda\\
& =\int_{\Lambda_{1}\cap\Lambda}\lim_{\varepsilon\rightarrow0}\frac
{\varepsilon}{\pi}\left\langle JR_{H}\left( \lambda\pm i\varepsilon\right)
P_{ac}^{\infty}\left( H\right) f,R_{H_{1}}\left( \lambda\pm i\varepsilon
\right) P_{ac}^{\infty}\left( H_{1}\right) f_{1}\right\rangle d\lambda.
\end{align*}
The proof is completed.
\end{proof}
\begin{remark}
\label{M10}Let $H$, $H_{1}\in\mathcal{A}\left( \mathcal{M}\right) $ be a
pair of self-adjoint operators and $J$ be an operator in $\mathcal{M}$ with
$J\mathcal{D}\left( H\right) \subseteq\mathcal{D}\left( H_{1}\right) $.
Then
\begin{align}
H_{1}J-JH & =\left( H_{1}-z\right) J-J\left( H-z\right) \nonumber\\
& =\left( H_{1}-z\right) (JR_{H}\left( z\right) -R_{H_{1}}\left(
z\right) J)\left( H-z\right) . \label{g71}
\end{align}
Hence
\begin{equation}
JR_{H}\left( z\right) -R_{H_{1}}\left( z\right) J=R_{H_{1}}\left(
z\right) \left( H_{1}J-JH\right) R_{H}\left( z\right) . \label{g28}
\end{equation}
From (\ref{g21}) and (\ref{g28}), we have
\begin{align}
& \frac{\varepsilon}{\pi}\left\langle JR_{H}\left( \lambda\pm i\varepsilon
\right) P_{ac}^{\infty}\left( H\right) f,R_{H_{1}}\left( \lambda\pm
i\varepsilon\right) P_{ac}^{\infty}\left( H_{1}\right) f_{1}\right\rangle
\nonumber\\
& =\frac{\varepsilon}{\pi}\left\langle \left( R_{H_{1}}\left( \lambda\pm
i\varepsilon\right) J+R_{H_{1}}\left( \lambda\pm i\varepsilon\right)
\left( H_{1}J-JH\right) R_{H}\left( \lambda\pm i\varepsilon\right)
\right) P_{ac}^{\infty}\left( H\right) f,R_{H_{1}}\left( \lambda\pm
i\varepsilon\right) P_{ac}^{\infty}\left( H_{1}\right) f_{1}\right\rangle
\nonumber\\
& =\left\langle \left( J+\left( H_{1}J-JH\right) R_{H}\left( \lambda\pm
i\varepsilon\right) \right) P_{ac}^{\infty}\left( H\right) f,\delta
_{H_{1}}\left( \lambda,\varepsilon\right) P_{ac}^{\infty}\left(
H_{1}\right) f_{1}\right\rangle . \label{g26}
\end{align}
\end{remark}
\begin{lemma}
\label{M11}Let $H$, $H_{1}\in\mathcal{A}\left( \mathcal{M}\right) $ be a
pair of self-adjoint operators and $J$ be an operator in $\mathcal{M}$ with
$J\mathcal{D}\left( H\right) \subseteq\mathcal{D}\left( H_{1}\right) $.
Suppose there are an $H$-bounded operator $G$ and an $H_{1}$-bounded operator
$G_{1}$ in $\mathcal{A}\left( \mathcal{M}\right) $ satisfying $H_{1}
J-JH=G_{1}^{\ast}G$. If
\[
\lim_{\varepsilon\rightarrow0}GR_{H}\left( \lambda\pm i\varepsilon\right)
P_{ac}^{\infty}\left( H\right) f
\]
exist a.e. $\lambda\in\mathbb{R}$ for every $f\in\mathcal{H}$ and
\[
\lim_{\varepsilon\rightarrow0}\left\langle G_{1}\delta_{H_{1}}\left(
\lambda,\varepsilon\right) P_{ac}^{\infty}\left( H_{1}\right)
f_{1},g\right\rangle
\]
exist a.e. $\lambda\in\mathbb{R}$ for any $f_{1}$ and $g\in\mathcal{H}$, then
$\mathcal{U}_{\pm}\left( H_{1},H;J\right) $ exists.
\end{lemma}
\begin{proof}
Since $H_{1}J-JH=G_{1}^{\ast}G,$ by (\ref{g26})
\begin{align}
& \frac{\varepsilon}{\pi}\left\langle JR_{H}\left( \lambda\pm i\varepsilon
\right) P_{ac}^{\infty}\left( H\right) f,R_{H_{1}}\left( \lambda\pm
i\varepsilon\right) P_{ac}^{\infty}\left( H_{1}\right) f_{1}\right\rangle
\nonumber\\
& =\left\langle \left( J+\left( H_{1}J-JH\right) R_{H}\left( \lambda\pm
i\varepsilon\right) \right) P_{ac}^{\infty}\left( H\right) f,\delta
_{H_{1}}\left( \lambda,\varepsilon\right) P_{ac}^{\infty}\left(
H_{1}\right) f_{1}\right\rangle \nonumber\\
& =\left\langle \left( J+G_{1}^{\ast}GR_{H}\left( \lambda\pm i\varepsilon
\right) \right) P_{ac}^{\infty}\left( H\right) f,\delta_{H_{1}}\left(
\lambda,\varepsilon\right) P_{ac}^{\infty}\left( H_{1}\right)
f_{1}\right\rangle \nonumber\\
& =\left\langle JP_{ac}^{\infty}\left( H\right) f,\delta_{H_{1}}\left(
\lambda,\varepsilon\right) P_{ac}^{\infty}\left( H_{1}\right)
f_{1}\right\rangle +\left\langle \left( GR_{H}\left( \lambda\pm
i\varepsilon\right) \right) P_{ac}^{\infty}\left( H\right) f,G_{1}
\delta_{H_{1}}\left( \lambda,\varepsilon\right) P_{ac}^{\infty}\left(
H_{1}\right) f_{1}\right\rangle . \label{g43}
\end{align}
Then from (\ref{g6}),
\[
\lim_{\varepsilon\rightarrow0}\left\langle JP_{ac}^{\infty}\left( H\right)
f,\delta_{H_{1}}\left( \lambda,\varepsilon\right) P_{ac}^{\infty}\left(
H_{1}\right) f_{1}\right\rangle \text{ exists a.e. }\lambda\in\mathbb{R}.
\]
Since
\[
\lim_{\varepsilon\rightarrow0}GR_{H}\left( \lambda\pm i\varepsilon\right)
P_{ac}^{\infty}\left( H\right) f
\]
and
\[
\lim_{\varepsilon\rightarrow0}\left\langle G_{1}\delta_{H_{1}}\left(
\lambda,\varepsilon\right) P_{ac}^{\infty}\left( H_{1}\right)
f_{1},g\right\rangle
\]
exist a.e. $\lambda\in\mathbb{R}$ for every $f,f_{1}$ and $g\in\mathcal{H},$
we can easily check that
\[
\lim_{\varepsilon\rightarrow0}\left\langle \left( GR_{H}\left( \lambda\pm
i\varepsilon\right) \right) P_{ac}^{\infty}\left( H\right) f,G_{1}
\delta_{H_{1}}\left( \lambda,\varepsilon\right) P_{ac}^{\infty}\left(
H_{1}\right) f_{1}\right\rangle
\]
exists a.e. $\lambda\in\mathbb{R}$. Hence $\mathcal{U}_{\pm}\left(
H_{1},H;J\right) $ is well-defined.
\end{proof}
\begin{lemma}
\label{M12}Let $H$, $H_{1}\in\mathcal{A}\left( \mathcal{M}\right) $ be a
pair of self-adjoint operators and $J$ be an operator in $\mathcal{M}$ with
$J\mathcal{D}\left( H\right) \subseteq\mathcal{D}\left( H_{1}\right) $.
Suppose there are an $H$-bounded operator $G$ and an $H_{1}$-bounded operator
$G_{1}$ in $\mathcal{A}\left( \mathcal{M}\right) $ satisfying $H_{1}
J-JH=G_{1}^{\ast}G$. If
\begin{equation}
\lim_{\varepsilon\rightarrow0}GR_{H}\left( \lambda\pm i\varepsilon\right)
P_{ac}^{\infty}\left( H\right) f \label{g44}
\end{equation}
exist a.e. $\lambda\in\mathbb{R}$ for every $f\in\mathcal{H}$ and
\begin{equation}
\lim_{\varepsilon\rightarrow0}\left\langle G_{1}\delta_{H_{1}}\left(
\lambda,\varepsilon\right) P_{ac}^{\infty}\left( H_{1}\right)
f_{1},g\right\rangle \label{g45}
\end{equation}
exist a.e. $\lambda\in\mathbb{R}$ for any $f_{1}$ and $g\in\mathcal{H},$ then
$\mathcal{U}_{\pm}\left( H,H;J^{\ast}J\right) $ exists and
\[
\mathcal{U}_{\pm}^{\ast}\left( H_{1},H;J\right) \mathcal{U}_{\pm}\left(
H_{1},H;J\right) =\mathcal{U}_{\pm}\left( H,H;J^{\ast}J\right) .
\]
\end{lemma}
\begin{proof}
By Lemma \ref{M11}, $\mathcal{U}_{\pm}\left( H_{1},H;J\right) $ exists. For
any Borel set $\Lambda$
\begin{equation}
\left\langle E_{H_{1}}\left( \Lambda\right) \cdot\mathcal{U}_{\pm}\left(
H_{1},H;J\right) f,f_{1}\right\rangle =\int_{\Lambda}\lim_{\varepsilon
\rightarrow0}\left\langle \delta_{H_{1}}\left( \lambda,\varepsilon\right)
\mathcal{U}_{\pm}\left( H_{1},H;J\right) f,f_{1}\right\rangle d\lambda
\label{G50}
\end{equation}
where
\[
\lim_{\varepsilon\rightarrow0}\left\langle \delta_{H_{1}}\left(
\lambda,\varepsilon\right) \mathcal{U}_{\pm}\left( H_{1},H;J\right)
f,f_{1}\right\rangle \text{ exists}
\]
by (\ref{g6}) and
\[
P_{ac}^{\infty}\left( H_{1}\right) \mathcal{U}_{\pm}\left( H_{1}
,H;J\right) =\mathcal{U}_{\pm}\left( H_{1},H;J\right) .
\]
By Corollary \ref{R9}, we also have
\begin{align}
& \left\langle E_{H_{1}}\left( \Lambda\right) \cdot\mathcal{U}_{\pm}\left(
H_{1},H;J\right) f,f_{1}\right\rangle \nonumber\\
& =\int_{\Lambda}\lim_{\varepsilon\rightarrow0}\frac{\varepsilon}{\pi
}\left\langle JR_{H}\left( \lambda\pm i\varepsilon\right) P_{ac}^{\infty
}\left( H\right) f,R_{H_{1}}\left( \lambda\pm i\varepsilon\right)
P_{ac}^{\infty}\left( H_{1}\right) f_{1}\right\rangle d\lambda. \label{G52}
\end{align}
Comparing two integrands of (\ref{G50}) and (\ref{G52}), we have
\begin{align}
& \lim_{\varepsilon\rightarrow0}\left\langle \delta_{H_{1}}\left(
\lambda,\varepsilon\right) \mathcal{U}_{\pm}\left( H_{1},H;J\right)
f,f_{1}\right\rangle \nonumber\\
& =\lim_{\varepsilon\rightarrow0}\frac{\varepsilon}{\pi}\left\langle
JR_{H}\left( \lambda\pm i\varepsilon\right) P_{ac}^{\infty}\left( H\right)
f,R_{H_{1}}\left( \lambda\pm i\varepsilon\right) P_{ac}^{\infty}\left(
H_{1}\right) f_{1}\right\rangle \text{ a.e. }\lambda\in\mathbb{R}.
\label{g27}
\end{align}
Since the limit on the left-hand side of (\ref{g27}) exists, this equality implies that
\[
\lim_{\varepsilon\rightarrow0}\frac{\varepsilon}{\pi}\left\langle
JR_{H}\left( \lambda\pm i\varepsilon\right) P_{ac}^{\infty}\left( H\right)
f,R_{H_{1}}\left( \lambda\pm i\varepsilon\right) P_{ac}^{\infty}\left(
H_{1}\right) f_{1}\right\rangle
\]
exists a.e. $\lambda\in\mathbb{R}$. By (\ref{g26}) and the fact that
$H_{1}J-JH=G_{1}^{\ast}G$, we have
\begin{align*}
& \frac{\varepsilon}{\pi}\left\langle JR_{H}\left( \lambda\pm i\varepsilon
\right) P_{ac}^{\infty}\left( H\right) f,R_{H_{1}}\left( \lambda\pm
i\varepsilon\right) P_{ac}^{\infty}\left( H_{1}\right) f_{1}\right\rangle
\\
& =\left\langle JP_{ac}^{\infty}\left( H\right) f,\delta_{H_{1}}\left(
\lambda,\varepsilon\right) P_{ac}^{\infty}\left( H_{1}\right)
f_{1}\right\rangle +\left\langle \left( GR_{H}\left( \lambda\pm
i\varepsilon\right) \right) P_{ac}^{\infty}\left( H\right) f,G_{1}
\delta_{H_{1}}\left( \lambda,\varepsilon\right) P_{ac}^{\infty}\left(
H_{1}\right) f_{1}\right\rangle .
\end{align*}
Then by (\ref{g44}), (\ref{g45}) and (\ref{g6}), we can conclude that
\begin{equation}
\lim_{\varepsilon\rightarrow0}\frac{\varepsilon}{\pi}\left\langle
JR_{H}\left( \lambda\pm i\varepsilon\right) P_{ac}^{\infty}\left( H\right)
f,R_{H_{1}}\left( \lambda\pm i\varepsilon\right) P_{ac}^{\infty}\left(
H_{1}\right) f_{1}\right\rangle \label{g68}
\end{equation}
exists a.e. $\lambda\in\mathbb{R}$.
In (\ref{g68}), replace $f_{1}$ by $\mathcal{U}_{\pm}\left( H_{1},H;J\right)
g,$ so from (\ref{g71}), (\ref{g26}) and (\ref{g27}) we have
\begin{align*}
& \lim_{\varepsilon\rightarrow0}\frac{\varepsilon}{\pi}\left\langle
JR_{H}\left( \lambda\pm i\varepsilon\right) P_{ac}^{\infty}\left( H\right)
f,R_{H_{1}}\left( \lambda\pm i\varepsilon\right) \mathcal{U}_{\pm}\left(
H_{1},H;J\right) P_{ac}^{\infty}\left( H\right) g\right\rangle \\
& =\lim_{\varepsilon\rightarrow0}\left\langle \left( J+\left(
H_{1}J-JH\right) R_{H}\left( \lambda\pm i\varepsilon\right) \right)
P_{ac}^{\infty}\left( H\right) f,\delta_{H_{1}}\left( \lambda
,\varepsilon\right) \mathcal{U}_{\pm}\left( H_{1},H;J\right) P_{ac}
^{\infty}\left( H\right) g\right\rangle \\
& =\lim_{\varepsilon\rightarrow0}\frac{\varepsilon}{\pi}\left\langle \left(
J+\left( H_{1}J-JH\right) R_{H}\left( \lambda\pm i\varepsilon\right)
\right) P_{ac}^{\infty}\left( H\right) f,R_{H_{1}}\left( \lambda\mp
i\varepsilon\right) JR_{H}\left( \lambda\pm i\varepsilon\right)
P_{ac}^{\infty}\left( H\right) g\right\rangle \\
& =\lim_{\varepsilon\rightarrow0}\left\langle \left( J+\left(
H_{1}J-JH\right) R_{H}\left( \lambda\pm i\varepsilon\right) \right)
P_{ac}^{\infty}\left( H\right) f,\delta_{H_{1}}\left( \lambda,\varepsilon\right)
R_{H_{1}}^{-1}\left( \lambda\pm i\varepsilon\right) JR_{H}\left( \lambda\pm
i\varepsilon\right) P_{ac}^{\infty}\left( H\right) g\right\rangle \\
& =\lim_{\varepsilon\rightarrow0}\left\langle JR_{H}\left( \lambda\pm
i\varepsilon\right) P_{ac}^{\infty}\left( H\right) f,JR_{H}\left(
\lambda\pm i\varepsilon\right) P_{ac}^{\infty}\left( H\right)
g\right\rangle \\
& =\lim_{\varepsilon\rightarrow0}\frac{\varepsilon}{\pi}\left\langle
R_{H}\left( \lambda\mp i\varepsilon\right) J^{\ast}JR_{H}\left( \lambda\pm
i\varepsilon\right) P_{ac}^{\infty}\left( H\right) f,P_{ac}^{\infty}\left(
H\right) g\right\rangle \text{ a.e. }\lambda\in\mathbb{R}.
\end{align*}
Hence applying the definition of $\mathcal{U}_{\pm}\left( H_{1},H;J\right)
,$ we have
\begin{align*}
& \left\langle \mathcal{U}_{\pm}\left( H_{1},H;J\right) f,\mathcal{U}_{\pm
}\left( H_{1},H;J\right) g\right\rangle \\
& =\int_{-\infty}^{\infty}\lim_{\varepsilon\rightarrow0}\frac{\varepsilon
}{\pi}\left\langle JR_{H}\left( \lambda\pm i\varepsilon\right)
P_{ac}^{\infty}\left( H\right) f,R_{H_{1}}\left( \lambda\pm i\varepsilon
\right) \mathcal{U}_{\pm}\left( H_{1},H;J\right) P_{ac}^{\infty}\left(
H\right) g\right\rangle d\lambda\\
& =\int_{-\infty}^{\infty}\lim_{\varepsilon\rightarrow0}\frac{\varepsilon
}{\pi}\left\langle R_{H}\left( \lambda\mp i\varepsilon\right) J^{\ast
JR_{H}\left( \lambda\pm i\varepsilon\right) P_{ac}^{\infty}\left( H\right)
f,P_{ac}^{\infty}\left( H\right) g\right\rangle d\lambda\\
& =\left\langle \mathcal{U}_{\pm}\left( H,H;J^{\ast}J\right)
f,g\right\rangle .
\end{align*}
It implies that
\[
\mathcal{U}_{\pm}^{\ast}\left( H_{1},H;J\right) \mathcal{U}_{\pm}\left(
H_{1},H;J\right) =\mathcal{U}_{\pm}\left( H,H;J^{\ast}J\right) .
\]
\end{proof}
According to Theorem \ref{R10}, to show the existence of $W_{\pm}
(H_{1},H;J)$ by a stationary method, we also need to prove the existence of
$\widetilde{W}_{\pm}\left( H_{1},H;J\right) $ and $\widetilde{W}_{\pm
}\left( H,H;J^{\ast}J\right) $ without using time variable $t$ explicitly.
\subsection{The Kato-Rosenblum theorem in $\mathcal{M}$ by a stationary
approach}
The results below involve noncommutative $\mathcal{L}^{p}$-spaces associated
to a semifinite von Neumann algebra $\mathcal{M},$ so we refer the reader to
\cite{Pisier} for more details about it.
\begin{remark}
\label{M13}For a separable Hilbert space $\mathcal{H},$ we denote by $H_{\pm
}^{2}\left( \mathcal{H}\right) $ the class of functions with values in
$\mathcal{H},$ holomorphic on upper (lower) half-plane and such that
\[
\sup_{\varepsilon>0}\int_{\mathbb{R}}\left\Vert u\left( \lambda\pm
i\varepsilon\right) \right\Vert ^{2}d\lambda<+\infty.
\]
Then by the result in Section 1 of Chapter V in \cite{SF}, we know that the
radial limit exists almost everywhere, i.e., $\lim_{\varepsilon\rightarrow
0}u\left( \lambda\pm i\varepsilon\right) $ exists a.e. $\lambda\in
\mathbb{R}$.
\end{remark}
\begin{lemma}
\label{M14}Let $H\in\mathcal{A}\left( \mathcal{M}\right) $ be a self-adjoint
operator and $A\in\mathcal{L}^{2}\left( \mathcal{M},\tau\right)
\cap\mathcal{M}.$ Then
\[
s.o.t\text{-}\lim_{\varepsilon\rightarrow0}AR_{H}\left( \lambda\pm
i\varepsilon\right) P_{ac}^{\infty}\left( H\right)
\]
and
\[
s.o.t\text{-}\lim_{\varepsilon\rightarrow0}A\delta_{H}\left( \lambda
,\varepsilon\right) P_{ac}^{\infty}\left( H\right)
\]
exist in the strong operator topology a.e. $\lambda\in\mathbb{R}$.
\end{lemma}
\begin{proof}
By Remark \ref{M26} and Theorem \ref{K1}, we get
\[
\sup_{\left\Vert f\right\Vert =1}\frac{1}{2\pi}\int_{\mathbb{R}}\left\Vert
P\omega_{n}\left( H\right) e^{-itH}f\right\Vert ^{2}dt=\sup_{\Lambda
\subseteq\mathbb{R}}\frac{\left\Vert P\omega_{n}\left( H\right) E_{H}\left(
\Lambda\right) \omega_{n}\left( H\right) P\right\Vert }{\left\vert
\Lambda\right\vert }\leq\frac{n}{\left( 2\pi\right) ^{2}}.
\]
Hence by (\ref{g7}), we have
\begin{align*}
& \sup_{\left\Vert f\right\Vert =1}\frac{1}{2\pi}\int_{\mathbb{R}}\left\Vert
P\omega_{n}\left( H\right) e^{-itH}f\right\Vert ^{2}dt\\
& =\frac{1}{\left( 2\pi\right) ^{2}}\sup_{\left\Vert f\right\Vert
=1,\varepsilon>0}\int_{\mathbb{R}}\left( \left\Vert P\omega_{n}\left(
H\right) R_{H}(\lambda\pm i\varepsilon)f\right\Vert ^{2}\right) d\lambda
\leq\frac{n}{\left( 2\pi\right) ^{2}}.
\end{align*}
From Lemma 2.1.1 in \cite{Li}, there is a sequence $\left\{ x_{m}\right\}
_{m\in\mathbb{N}}$ of $\mathcal{H}$ such that
\[
\left\Vert A\right\Vert _{2}^{2}=\tau\left( A^{\ast}A\right) =\sum
\left\langle A^{\ast}Ax_{m},x_{m}\right\rangle
\]
and
\[
\vee\left\{ A^{\prime}x_{m}:A^{\prime}\in\mathcal{M}^{\prime}\text{ and }
m\in\mathbb{N}\right\} =\mathcal{H}
\]
where $\mathcal{M}^{\prime}$ is the commutant of $\mathcal{M}.$ Then for these
$\left\{ x_{m}\right\} _{m\in\mathbb{N}}$ , we have
\[
\int_{\mathbb{R}}\left( \left\Vert P\omega_{n}\left( H\right) R_{H}
(\lambda\pm i\varepsilon)Ax_{m}\right\Vert ^{2}\right) d\lambda\leq\frac
{n}{\left( 2\pi\right) ^{2}}\left\Vert Ax_{m}\right\Vert ^{2}\leq\frac
{n}{\left( 2\pi\right) ^{2}}\left\Vert A\right\Vert _{2}^{2}
\]
for $P\in\mathcal{P}_{ac}^{\infty}\left( H\right) .$ We further note that
for every $A\in\mathcal{L}^{2}\left( \mathcal{M},\tau\right) \cap
\mathcal{M},$
\begin{align*}
\int_{\mathbb{R}}\left\Vert P\omega_{n}\left( H\right) R_{H}(\lambda\pm
i\varepsilon)A\right\Vert _{2}^{2}d\lambda & =\int_{\mathbb{R}}\sum
_{m}\left\Vert P\omega_{n}\left( H\right) R_{H}(\lambda\pm i\varepsilon
)Ax_{m}\right\Vert ^{2}d\lambda\\
& =\sum_{m}\int_{\mathbb{R}}\left\Vert P\omega_{n}\left( H\right)
R_{H}(\lambda\pm i\varepsilon)Ax_{m}\right\Vert ^{2}d\lambda\\
& \leq\frac{n}{2\pi}\sum_{m}\left\Vert Ax_{m}\right\Vert ^{2}\leq\frac{n}{2\pi
}\left\Vert A\right\Vert _{2}^{2}.
\end{align*}
Combining it with the equality
\[
\left\Vert X\right\Vert _{2}^{2}=\tau\left( X^{\ast}X\right) =\tau\left(
XX^{\ast}\right) =\left\Vert X^{\ast}\right\Vert _{2}^{2}\text{ for every
}X\in\mathcal{M},
\]
we get the following inequality
\begin{align*}
\int_{\mathbb{R}}\left\Vert AR_{H}(\lambda\pm i\varepsilon)\omega_{n}\left(
H\right) Px_{m}\right\Vert ^{2}d\lambda & \leq\int_{\mathbb{R}}\left\Vert
AR_{H}(\lambda\pm i\varepsilon)\omega_{n}\left( H\right) P\right\Vert
_{2}^{2}d\lambda\\
& =\int_{\mathbb{R}}\left\Vert P\omega_{n}\left( H\right) R_{H}(\lambda\mp
i\varepsilon)A\right\Vert _{2}^{2}d\lambda\\
& \leq\frac{n}{2\pi}\left\Vert A\right\Vert _{2}^{2}.
\end{align*}
It implies that the vector-valued function $AR_{H}(\lambda\pm i\varepsilon
)\omega_{n}\left( H\right) Px_{m}$ belongs to the Hardy classes $H_{\pm}
^{2}\left( \mathcal{H}\right) $ in the upper and lower half planes. By
Remark \ref{M13}, the radial limit values of functions in $H_{\pm}^{2}\left(
\mathcal{H}\right) $ exist a.e. $\lambda\in\mathbb{R},$ therefore
\[
\lim_{\varepsilon\rightarrow0}AR_{H}\left( \lambda\pm i\varepsilon\right)
\omega_{n}\left( H\right) Px_{m}\text{ exists a.e. }\lambda\in
\mathbb{R}\text{ for every }x_{m}.
\]
Since the linear span of the set $\left\{ A^{\prime}x_{m}:A^{\prime}
\in\mathcal{M}^{\prime}\text{ and }m\in\mathbb{N}\right\} $ is dense in
$\mathcal{H},$ we have
\[
\lim_{\varepsilon\rightarrow0}AR_{H}\left( \lambda\pm i\varepsilon\right)
\omega_{n}\left( H\right) PA^{\prime}x_{m}=A^{\prime}\lim_{\varepsilon
\rightarrow0}AR_{H}\left( \lambda\pm i\varepsilon\right) \omega_{n}\left(
H\right) Px_{m}\text{ exists}
\]
and then this indicates that
\[
s.o.t\text{-}\lim_{\varepsilon\rightarrow0}AR_{H}\left( \lambda\pm
i\varepsilon\right) \omega_{n}\left( H\right) P\text{ exists}
\]
in strong operator topology. From the fact that $\omega_{n}\left( H\right)
\rightarrow I$ $(n\rightarrow\infty)$ in Lemma \ref{M5}, we can conclude that
\[
s.o.t\text{-}\lim_{\varepsilon\rightarrow0}AR_{H}\left( \lambda\pm
i\varepsilon\right) P\text{ exists for }A\in\mathcal{L}^{2}\left(
\mathcal{M},\tau\right) \cap\mathcal{M}\text{ and }P\in\mathcal{P}
_{ac}^{\infty}\left( H\right) .
\]
Since $P_{ac}^{\infty}\left( H\right) =\vee\left\{ P:P\in\mathcal{P}
_{ac}^{\infty}\left( H\right) \right\} ,$ we get
\[
s.o.t\text{-}\lim_{\varepsilon\rightarrow0}AR_{H}\left( \lambda\pm
i\varepsilon\right) P_{ac}^{\infty}\left( H\right) \text{ exists for }
A\in\mathcal{L}^{2}\left( \mathcal{M},\tau\right) \cap\mathcal{M}.
\]
Note that $\delta_{H}\left( \lambda,\varepsilon\right) =\frac{1}{2\pi
i}\left[ R_{H}\left( \lambda+i\varepsilon\right) -R_{H}\left(
\lambda-i\varepsilon\right) \right] ,$ so we can conclude that
\[
\lim_{\varepsilon\rightarrow0}A\delta_{H}\left( \lambda,\varepsilon\right)
P_{ac}^{\infty}\left( H\right) x=\frac{1}{2\pi i}(\lim_{\varepsilon
\rightarrow0}AR_{H}\left( \lambda+i\varepsilon\right) P_{ac}^{\infty}\left(
H\right) x-\lim_{\varepsilon\rightarrow0}AR_{H}\left( \lambda-i\varepsilon
\right) P_{ac}^{\infty}\left( H\right) x)
\]
exists for $A\in\mathcal{L}^{2}\left( \mathcal{M},\tau\right) \cap
\mathcal{M}.$ The proof is completed.
\end{proof}
\begin{remark}
By Lemma 2.1.6 in \cite{Li2}, we know that $\mathcal{L}^{p}\left(
\mathcal{M},\tau\right) \cap\mathcal{M}$ is a two-sided ideal of
$\mathcal{M}$ for $1\leq p<\infty.$
\end{remark}
\begin{remark}
\label{R30}If $G_{1}\in\mathcal{M},$ then by (\ref{g6})
\[
\lim_{\varepsilon\rightarrow0}\left\langle G_{1}\delta_{H_{1}}\left(
\lambda,\varepsilon\right) P_{ac}^{\infty}\left( H_{1}\right)
f_{1},g\right\rangle =\lim_{\varepsilon\rightarrow0}\left\langle \delta
_{H_{1}}\left( \lambda,\varepsilon\right) P_{ac}^{\infty}\left(
H_{1}\right) f_{1},G_{1}^{\ast}g\right\rangle
\]
exists a.e. $\lambda\in\mathbb{R}$ for any $f_{1}$ and $g\in\mathcal{H}.$
\end{remark}
\begin{theorem}
\label{M15}Let $H$, $H_{1}\in\mathcal{A}\left( \mathcal{M}\right) $ be a
pair of self-adjoint operators and $J$ be an operator in $\mathcal{M}$ with
$J\mathcal{D}\left( H\right) \subseteq\mathcal{D}\left( H_{1}\right) $.
Assume $H_{1}J-JH\in\mathcal{L}^{1}\left( \mathcal{M},\tau\right)
\cap\mathcal{M}$, then $\mathcal{U}_{\pm}\left( H_{1},H;J\right) $
and $\mathcal{U}_{\pm}\left( H,H;J^{\ast}J\right) $ both exist and
\[
\mathcal{U}_{\pm}^{\ast}\left( H_{1},H;J\right) \mathcal{U}_{\pm}\left(
H_{1},H;J\right) =\mathcal{U}_{\pm}\left( H,H;J^{\ast}J\right) .
\]
\end{theorem}
\begin{proof}
Let $G=\left\vert H_{1}J-JH\right\vert ^{1/2}\in\mathcal{L}^{2}\left(
\mathcal{M},\tau\right) \cap\mathcal{M}$ and $G_{1}^{\ast}=V\left\vert
H_{1}J-JH\right\vert ^{1/2}\in\mathcal{L}^{2}\left( \mathcal{M},\tau\right)
\cap\mathcal{M}$ for some partial isometry $V$ in $\mathcal{M}$. Then the
proof is completed by Remark \ref{R30}, Lemma \ref{M14}, Lemma \ref{M11} and
Lemma \ref{M12}.
\end{proof}
According to Theorem \ref{R10}, to show the existence of $W_{\pm}(H_{1},H;J)$
by the stationary approach, we first need to show the existence of
$\widetilde{W}_{\pm}\left( H_{1},H;J\right) $ without depending on the time
variable $t$ explicitly. To do this, we need several lemmas.
\begin{lemma}
\label{M16}(Lemma 2.5.1 in \cite{Li})Let $H$, $H_{1}\in\mathcal{A}\left(
\mathcal{M}\right) $ be a pair of self-adjoint operators, $J$ be an operator
in $\mathcal{M}$ with $J\mathcal{D}\left( H\right) \subseteq\mathcal{D}
\left( H_{1}\right) $ and $H_{1}J-JH\in\mathcal{M}.$ Let $W_{J}\left(
t\right) =e^{itH_{1}}Je^{-itH},$ for $t\in\mathbb{R}.$ Then, for all
$f\in\mathcal{H}$ and $s,w\in\mathbb{R},$ the mapping $t\longmapsto
e^{itH_{1}}\left( H_{1}J-JH\right) e^{-itH}f$ from $\left[ s,w\right] $
into $\mathcal{H}$ is Bochner integrable with
\[
\left( W_{J}\left( w\right) -W_{J}\left( s\right) \right) f=i\int
_{s}^{w}e^{itH_{1}}\left( H_{1}J-JH\right) e^{-itH}fdt.
\]
\end{lemma}
\begin{lemma}
\label{M17} Let $H\in\mathcal{A}\left( \mathcal{M}\right) $ be a
self-adjoint operator and $G$ be an operator in $\mathcal{M}$. Then there is a
linear manifold $\mathcal{D}$ in $\mathcal{H}_{ac}^{\infty}\left( H\right) $
with $\overline{\mathcal{D}}=\mathcal{H}_{ac}^{\infty}\left( H\right) $ such
that
\[
\int_{-\infty}^{\infty}\left\Vert Ge^{-itH}g\right\Vert ^{2}dt<\infty,\text{
}g\in\mathcal{D}.
\]
\end{lemma}
\begin{proof}
For any $f\in\mathcal{H}_{ac}^{\infty}\left( H\right) ,$ by (\ref{g6}), we
have
\[
\lim_{\varepsilon\rightarrow0}\left\langle G\delta(\lambda,\varepsilon
)f,h\right\rangle =\frac{d\left\langle GE_{H}\left( \lambda\right)
f,h\right\rangle }{d\lambda}\text{ a.e. }\lambda\in\mathbb{R}\text{ for any
}h\in\mathcal{H}.
\]
Let $F_{f}\left( \lambda\right) \in\mathcal{H}$ be the weak limit of
$G\delta(\lambda,\varepsilon)f,$ i.e. $\lim_{\varepsilon\rightarrow
0}\left\langle G\delta(\lambda,\varepsilon)f,h\right\rangle =\left\langle
F_{f}\left( \lambda\right) ,h\right\rangle $ a.e. $\lambda\in\mathbb{R}$ for
every $h\in\mathcal{H}.$ We set
\[
X_{N,n}(f)=\left\{ \lambda:\left\vert \lambda\right\vert \leq n,\left\Vert
F_{f}\left( \lambda\right) \right\Vert \leq N\right\}
\]
and $\mathcal{D}$ to be the set of linear combinations of all elements of the
form $g=E\left( X_{N,n}\right) f$ for $f\in\mathcal{H}_{ac}^{\infty}\left(
H\right) $ and $n,N\in\mathbb{N}.$ Since for $f\in\mathcal{H}_{ac}^{\infty
}\left( H\right) $ and $n,N\in\mathbb{N},$
\[
E\left( X_{N,n}\right) f=E\left( X_{N,n}\right) P_{ac}^{\infty}\left(
H\right) f=P_{ac}^{\infty}\left( H\right) E\left( X_{N,n}\right) f
\]
by Remark \ref{I1}, we have $\mathcal{D\subset H}_{ac}^{\infty}\left(
H\right) .$ Note
\[
\lim_{N\rightarrow\infty}\left\vert \left( -n,n\right) \backslash
X_{N,n}\right\vert =0,
\]
then $f$ can be approximated by the elements $E\left( X_{N,n}\right) f$ for
$f\in\mathcal{H}_{ac}^{\infty}\left( H\right) .$ Hence $\overline
{\mathcal{D}}=\mathcal{H}_{ac}^{\infty}\left( H\right) .$
Let $\left\{ e_{i}\right\} _{i\in\mathbb{Z}}$ be an orthonormal basis in
$\mathcal{H}.$ By (\ref{g1}), for $g=E\left( X_{N,n}\right) f$,
\begin{align*}
\left\langle Ge^{-itH}g,e_{i}\right\rangle & =\int_{\mathbb{R}}e^{-i\lambda
t}d\left\langle GE_{H}\left( \lambda\right) g,e_{i}\right\rangle \\
& =\int_{X_{N,n}\left( g\right) }e^{-i\lambda t}\frac{d\left\langle
GE_{H}\left( \lambda\right) g,e_{i}\right\rangle }{d\lambda}d\lambda\\
& =\int_{X_{N,n}\left( g\right) }e^{-i\lambda t}\;\left\langle F_{g}\left(
\lambda\right) ,e_{i}\right\rangle d\lambda.
\end{align*}
Then by the Parseval equality, for each $i\in\mathbb{Z}$
\[
\int_{\mathbb{R}}\left\vert \left\langle Ge^{-itH}g,e_{i}\right\rangle
\right\vert ^{2}dt=2\pi\int_{X_{N,n}\left( g\right) }\left\vert \left\langle
F_{g}\left( \lambda\right) ,e_{i}\right\rangle \right\vert ^{2}d\lambda.
\]
Hence
\[
\int_{\mathbb{R}}\left\Vert Ge^{-itH}g\right\Vert ^{2}dt=2\pi\int
_{X_{N,n}\left( g\right) }\left\Vert F_{g}\left( \lambda\right)
\right\Vert ^{2}d\lambda\leq4\pi N^{2}n.
\]
Therefore we have
\[
\int_{-\infty}^{\infty}\left\Vert Ge^{-itH}g\right\Vert ^{2}dt<\infty,\text{
}g\in\mathcal{D}.
\]
\end{proof}
\begin{theorem}
\label{M18}Let the operators $H,H_{1}\in\mathcal{A}\left( \mathcal{M}\right)
$ be a pair of self-adjoint operators and $J$ be an operator in $\mathcal{M}$
with $J\mathcal{D}\left( H\right) \subseteq\mathcal{D}\left( H_{1}\right)
.$ Let $W_{J}\left( t\right) =e^{itH_{1}}Je^{-itH},$ for $t\in\mathbb{R}.$
If $H_{1}J-JH=G_{1}^{\ast}G$ for $G_{1}$ and $G$ in $\mathcal{M}$, then the
generalized weak wave operator $\widetilde{W}_{\pm}\left( H_{1},H;J\right) $ exists.
\end{theorem}
\begin{proof}
By Lemma \ref{M17}, there are linear subspaces $\mathcal{D\subseteq H}
_{ac}^{\infty}\left( H\right) $ and $\mathcal{D}_{1}\mathcal{\subseteq
H}_{ac}^{\infty}\left( H_{1}\right) $ with $\overline{\mathcal{D
}=\mathcal{H}_{ac}^{\infty}\left( H\right) $ and $\overline{\mathcal{D}_{1
}=\mathcal{H}_{ac}^{\infty}\left( H_{1}\right) .$ Then for $f\in\mathcal{D}$
and $g\in\mathcal{D}_{1}$
\begin{align*}
\left\vert \left\langle \left( W_{J}\left( w\right) -W_{J}\left( s\right)
\right) f,g\right\rangle \right\vert & =\left\vert \int_{s}^{w}\left\langle
e^{itH_{1}}\left( H_{1}J-JH\right) e^{-itH}f,g\right\rangle dt\right\vert \\
& \leq\int_{s}^{w}\left\vert \left\langle Ge^{-itH}f,G_{1}e^{-itH_{1}
}g\right\rangle \right\vert dt\\
& \leq\left( \int_{s}^{w}\left\Vert Ge^{-itH}f\right\Vert ^{2}dt\cdot
\int_{s}^{w}\left\Vert G_{1}e^{-itH_{1}}g\right\Vert ^{2}dt\right) ^{1/2}
\end{align*}
and
\[
\int_{s}^{w}\left\Vert Ge^{-itH}f\right\Vert ^{2}dt\rightarrow0,\int_{s}
^{w}\left\Vert G_{1}e^{-itH_{1}}g\right\Vert ^{2}dt\rightarrow0
\]
as $s,w\rightarrow\pm\infty.$ Hence
\[
\lim_{t\rightarrow\pm\infty}\left\langle W_{J}\left( t\right)
f,g\right\rangle =\lim_{t\rightarrow\pm\infty}\left\langle W_{J}\left(
t\right) P_{ac}^{\infty}\left( H\right) f,P_{ac}^{\infty}\left(
H_{1}\right) g\right\rangle
\]
exists for $f\in\mathcal{D}$ and $g\in\mathcal{D}_{1}.$ Since
\[
\overline{\mathcal{D}}=\mathcal{H}_{ac}^{\infty}\left( H\right) \text{ and
}\overline{\mathcal{D}_{1}}=\mathcal{H}_{ac}^{\infty}\left( H_{1}\right) ,
\]
we have
\[
\lim_{t\rightarrow\pm\infty}\left\langle P_{ac}^{\infty}\left( H_{1}\right)
W_{J}\left( t\right) P_{ac}^{\infty}\left( H\right) f,g\right\rangle
\]
exists for any $f,g\in\mathcal{H}.$ Therefore $\widetilde{W}_{\pm}\left(
H_{1},H;J\right) $ exists.
\end{proof}
\begin{corollary}
\label{M19}Let the operators $H,H_{1}\in\mathcal{A}\left( \mathcal{M}\right)
$ be a pair of self-adjoint operators and $J$ be an operator in $\mathcal{M}$
with $J\mathcal{D}\left( H\right) \subseteq\mathcal{D}\left( H_{1}\right)
.$ If $H_{1}J-JH$ $\in\mathcal{L}^{1}\left( \mathcal{M},\tau\right)
\cap\mathcal{M},$ then the generalized weak wave operator $\widetilde{W}_{\pm
}=\widetilde{W}_{\pm}\left( H,H;J^{\ast}J\right) $ exists.
\end{corollary}
\begin{proof}
Since
\[
HJ^{\ast}J-J^{\ast}JH=J^{\ast}\left( H_{1}J-JH\right) -\left( J^{\ast}
H_{1}-HJ^{\ast}\right) J\in\mathcal{L}^{1}\left( \mathcal{M},\tau\right)
\cap\mathcal{M},
\]
we have $\widetilde{W}_{\pm}=\widetilde{W}_{\pm}\left( H,H;J^{\ast}J\right)
$ exists by Theorem \ref{M18}.
\end{proof}
The next result is the analogue of the Kato-Rosenblum theorem in a semifinite
von Neumann algebra $\mathcal{M}$, which was first proved in \cite{Li} by a
time-dependent approach. The proof here can be regarded as a stationary approach.
\begin{theorem}
\label{T1}(Theorem 5.2.5 in \cite{Li}) Let $H$, $H_{1}\in\mathcal{A}\left(
\mathcal{M}\right) $ be a pair of self-adjoint operators and $J$ be an
operator in $\mathcal{M}$ with $J\mathcal{D}\left( H\right) \subseteq
\mathcal{D}\left( H_{1}\right) $. Assume $H_{1}-H\in\mathcal{L}^{1}\left(
\mathcal{M},\tau\right) \cap\mathcal{M}$, then
\[
W_{\pm}\overset{\triangle}{=}W_{\pm}\left( H_{1},H\right) \text{ exists in
}\mathcal{M}.
\]
Moreover, $W_{\pm}^{\ast}W_{\pm}=P_{ac}^{\infty}\left( H\right) $, $W_{\pm
}W_{\pm}^{\ast}=P_{ac}^{\infty}\left( H_{1}\right) $ and $W_{\pm}HW_{\pm
}^{\ast}=H_{1}P_{ac}^{\infty}\left( H_{1}\right) .$
\end{theorem}
\begin{proof}
Let $J=I.$ Combining Theorem \ref{M18}, Theorem \ref{M15} and Theorem \ref{R10},
we know that $W_{\pm}\left( H_{1},H\right) $ and $W_{\pm}\left(
H,H_{1}\right) $ both exist. By Theorem \ref{I4}, we also have
\[
W_{\pm}^{\ast}W_{\pm}=W_{\pm}^{\ast}\left( H_{1},H\right) W_{\pm}\left(
H_{1},H\right) =P_{ac}^{\infty}\left( H\right)
\]
and
\[
W_{\pm}^{\ast}\left( H,H_{1}\right) W_{\pm}\left( H,H_{1}\right)
=P_{ac}^{\infty}\left( H_{1}\right) .
\]
Since $H_{1}-H$ $\in\mathcal{L}^{1}\left( \mathcal{M},\tau\right)
\cap\mathcal{M},$ we have $\widetilde{W}_{\pm}\left( H_{1},H\right) $ and
$\widetilde{W}_{\pm}\left( H,H_{1}\right) $ both exist and
\begin{align*}
W_{\pm}^{\ast} & =W_{\pm}^{\ast}\left( H_{1},H\right) =\widetilde{W}_{\pm
}^{\ast}\left( H_{1},H\right) =\widetilde{W}_{\pm}\left( H,H_{1}\right)
=W_{\pm}\left( H,H_{1}\right) ;\\
W_{\pm}^{\ast}\left( H,H_{1}\right) & =\widetilde{W}_{\pm}^{\ast}\left(
H,H_{1}\right) =\widetilde{W}_{\pm}\left( H_{1},H\right) =W_{\pm}\left(
H_{1},H\right) =W_{\pm}
\end{align*}
by equality (\ref{g33}). Thus
\[
W_{\pm}^{\ast}W_{\pm}=P_{ac}^{\infty}\left( H\right) ,\text{ }W_{\pm
}W_{\pm}^{\ast}=P_{ac}^{\infty}\left( H_{1}\right) .
\]
Meanwhile, by Theorem \ref{I3},
\[
W_{\pm}HW_{\pm}^{\ast}=H_{1}W_{\pm}W_{\pm}^{\ast}=H_{1}P_{ac}^{\infty}\left(
H_{1}\right) .
\]
So the proof is completed.
\end{proof}
\begin{remark}
For a self-adjoint $H\in\mathcal{A}\left( \mathcal{M}\right) ,$ if there is
an $H$-smooth operator in $\mathcal{M},$ then $H$ is not the sum of a diagonal
operator in $\mathcal{M}$ and an operator in $\mathcal{M}\cap\mathcal{L}^{1}\left(
\mathcal{M},\tau\right) $ by Theorem \ref{T1} and Theorem \ref{M7}.
\end{remark}
\section{Introduction}
Deep neural networks (DNNs) are in increasing demand in many practical applications \cite{chen2015deepdriving}\cite{hinton2012deep}\cite{krizhevsky2012imagenet}. However, studies over the past few years have also revealed an intriguing issue: DNN models are very sensitive and vulnerable to adversarial samples \cite{szegedy2013intriguing}\cite{biggio2013evasion}, implying potential security threats to their applications.
One of the widely studied adversarial attacks is the evasion attack, where the main aim of the attacker is to cause misclassification in the DNN model.
Black-box evasion attacks have attracted increasing research interest recently, where black-box means that the attacker does not know the DNN model but can query the model to get the DNN inference outputs, either the detailed confidence score or just a classification label \cite{alzantot2019genattack}\cite{brendel2017decision}\cite{chen2017zoo}\cite{tu2019autozoom}\cite{ilyas2018black}\cite{cheng2019improving}\cite{cheng2018query}\cite{cheng2019sign}\cite{li2019nattack}\cite{chen2020hopskipjumpattack}\cite{guo2019simple}. If the attacker has access to the full output logit values, they can apply soft-label attack algorithms such as \cite{chen2017zoo}\cite{tu2019autozoom}\cite{ilyas2018black}\cite{alzantot2019genattack}\cite{guo2019simple}. On the other hand, if the attacker has access to only the classification label, they can apply hard-label attack algorithms such as \cite{brendel2017decision}\cite{cheng2019sign}\cite{chen2020hopskipjumpattack}.
Along with the surge of attack algorithms, there has been an increase in the development of defense algorithms such as Adversarial Training (AT) \cite{tramer2017ensemble}, input transformation \cite{buckman2018thermometer}\cite{samangouei2018defense}, gradient obfuscation \cite{papernot2016distillation}, and stochastic defense via randomization \cite{he2019parametric}\cite{wang2019protecting}\cite{qin2021random}\cite{nesti2021detecting}\cite{liang2018detecting}\cite{fan2019integration}\cite{li2018certified}. However, limitations of existing defense techniques have also been observed \cite{athalye2018obfuscated}\cite{carlini2017adversarial}\cite{carlini2017towards}.
Stochastic defenses have been shown to suffer either large degradation of DNN performance or limited defense performance.
The gradient obfuscation method has also been proven to be ineffective.
In this work, we develop an efficient and more effective method to defend the DNN against black-box attacks.
During the adversarial attack's optimization process, there is a stage in which the adversarial samples lie on the DNN's classification boundary. Our method, Boundary Defense BD$(\theta, \sigma)$, detects these boundary samples as those whose classification confidence score falls below the threshold $\theta$ and adds white Gaussian noise with standard deviation $\sigma$ to their logits. This prevents the attackers from optimizing their adversarial samples while keeping the DNN performance degradation low.
Major contributions of this work are:
\begin{itemize}
\item A new boundary defense algorithm BD$(\theta, \sigma)$ is developed, which can be implemented efficiently and reliably mitigates both soft-label and hard-label black-box attacks.
\item Theoretical analysis is conducted to study the impact of the parameters $\theta$ and $\sigma$ on the classification accuracy.
\item Extensive experiments are conducted, which demonstrate that BD(0.3, 0.1) (or BD(0.7, 0.1)) reduces attack success rate to almost 0 with around 1\% (or negligible) classification accuracy degradation over the IMAGENET (or MNIST/CIFAR10) models. The defense performance is shown superior over a list of existing defense algorithms.
\end{itemize}
The organization of this paper is as follows. In Section \ref{relatedwork}, related works are introduced. In Section \ref{analysis}, the BD method is explained. In Section \ref{experiment}, experiment results are presented. Finally, conclusions are given in Section \ref{conclusion}.
\section{Related Work} \label{relatedwork}
Black-box adversarial attacks can be classified into soft-label and hard-label attacks. In soft-label attacks like AutoZOOM \cite{chen2017zoo}\cite{cheng2018query} and NES-QL \cite{ilyas2018black}, the attacker generates adversarial samples using the gradients estimated from queried DNN outputs. In contrast, SimBA \cite{guo2019simple}, GenAttack \cite{alzantot2019genattack} and Square Attack \cite{andriushchenko2020square} resort to direct random search to obtain the adversarial sample.
Hard-label attacks like NES-HL \cite{ilyas2018black}, BA (Boundary Attack) \cite{brendel2017decision}, Sign-OPT \cite{cheng2018query}\cite{cheng2019sign}, and HopSkipJump \cite{chen2020hopskipjumpattack} start from an initial adversarial sample and iteratively reduce the distance between the adversarial sample and original sample based on the query results.
For the defense against black-box attacks, many methods are derived directly from defenses against white-box attacks, such as input transformation \cite{dziugaite2016study}, network randomization \cite{xie2018mitigating} and adversarial training \cite{tramer2020adaptive}. The defenses designed specifically for black-box attacks are denoised smoothing \cite{salman2020denoised}, malicious query detection \cite{chen2020stateful}\cite{li2020blacklight}\cite{pang2020advmind}, and random smoothing \cite{cohen2019certified}\cite{salman2019provably}. Nevertheless, their defense performance is not reliable and their defense cost or complexity is too high.
Adding random noise to defend against black-box attacks has been studied recently as a low-cost approach, where \cite{byun2021small}\cite{qin2021theoretical}\cite{xie2018mitigating} add noise to the input, and \cite{lecuyer2019certified} \cite{liu2018towards}\cite{he2019parametric} add noise to the input or the weights of each layer. Unfortunately, heavy noise is needed to defend against hard-label attacks (in order to change hard labels), but heavy noise leads to severe degradation of DNN accuracy. Our proposed BD method follows a similar approach, but we add noise only to the DNN outputs of the boundary samples, which makes it possible to apply heavy noise without significant degradation in DNN accuracy.
\section{Boundary Defense} \label{analysis}
\subsection{Black-box attack model}
Consider a DNN that classifies an image ${\bf X}_0$ into class label $c$ within $N$ classes. The DNN output is the softmax logit (or confidence score) tensor $F({\bf X}_0)$. The classification result is $c=\arg\max_{i} {F}_i({\bf X}_0)$, where $F_i$ denotes the $i$th element function of $F$, $i=0, \cdots, N-1$. The attacker does not know the DNN model but can send samples ${\bf X}$ to query the DNN and get either $F({\bf X})$ or just $c$. The objective of the attacker is to generate an adversarial sample ${\bf X}={\bf X}_0+\Delta {\bf x}$ such that the output of the classifier is $t = \arg\max_{i} F_i({\bf X}) \neq c$, where the adversarial perturbation $\Delta {\bf x}$ should be as small as possible.
\textbf{Soft-Label Black-box Attack:} The attacker queries the DNN to obtain the softmax logit output tensor ${F}({\bf X})$. With this information, the attacker minimizes the loss function $f_{{\rm SL}}({\bf X})$ for generating the adversarial sample ${\bf X}$ \cite{tu2019autozoom},
\begin{equation}
f_{{\rm SL}}({\bf X}) = {\cal D}({\bf X}, {\bf X}_0) + \lambda {\cal L}(F({\bf X}), t), \label{eq1.10}
\end{equation}
where ${\cal D}(\cdot, \cdot)$ is a distance function, e.g., $\| {\bf X}-{\bf X}_0 \|_p$, and ${\cal L}(\cdot, t)$ is the loss function, e.g., cross-entropy \cite{ilyas2018black} and C\&W loss \cite{carlini2017adversarial}.
\textbf{Hard-Label Black-box Attack}: The attacker does not use $F({\bf X})$ but instead uses the class label $\arg \max_i F_i({\bf X})$ to optimize the adversarial sample ${\bf X}$. A common approach for the attacker is to first find an initial sample ${\bf X}_{t,0}$ in the class $t \neq c$, i.e., $\arg\max_i F_i({\bf X}_{t,0})=t$. Then, starting from ${\bf X}_{t,0}$, the attacker iteratively estimates new adversarial samples ${\bf X}$ in the class $t$ so as to minimize the loss function $f_{{\rm HL}}({\bf X}) = {\cal D}({\bf X}, {\bf X}_0)$.
The above model is valid for both targeted and untargeted attacks. The attacker's objective is to increase attack success rate (ASR), reduce query counts (QC), and reduce sample distortion ${\cal D}({\bf X}, {\bf X}_0)$. In this paper, we assume that the attacker has a large enough QC budget and can adopt either soft-label or hard-label black-box attack algorithms. Thus, our proposed defense's main objective is to reduce the ASR to 0.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{figures/intro_BD.png}
\caption{Schematic representation of the black-box attack and the Boundary Defense BD$(\theta, \sigma)$ (highlighted region).}
\label{fig:model}
\end{figure}
\subsection{Boundary Defense Algorithm}
In this work, we propose a Boundary Defense method that defends the DNN against black-box (both soft and hard label, both targeted and untargeted) attacks by preventing the attacker's optimization of $f_{{\rm SL}}({\bf X})$ or $f_{{\rm HL}}({\bf X})$. As illustrated in Fig. \ref{fig:model}, for each query of ${\bf X}$, once the defender finds that the classification confidence $\max F({\bf X})$ is less than a certain threshold $\theta$, the defender adds zero-mean white Gaussian noise ${\cal N}(0, \sigma^2)$ with standard deviation $\sigma$ to all the elements of $F({\bf X})$. The DNN softmax logits thus become
\begin{equation}
F_{{\rm BD}}({\bf X}) = \left\{ \begin{array}{ll} F({\bf X}), & {\rm if} \; \max F({\bf X}) > \theta \\
F({\bf X}) + V, & {\rm otherwise} \end{array} \right. \label{eq2.1}
\end{equation}
where $V \sim {\cal N}({\bf 0}, \sigma^2{\bf I})$ and ${\bf I}$ is an identity matrix. The DNN outputs softmax logits clip\{$F_{{\rm BD}} ({\bf X})$, 0, 1\} when outputting soft labels or its classification label $\arg \max_i F_{{\rm BD}, i}({\bf X})$ when outputting hard labels.
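For concreteness, the defense (\ref{eq2.1}) can be wrapped around any classifier's prediction call in a few lines. The sketch below is our own Python/NumPy illustration (not tied to any particular framework) and assumes the defender already holds the softmax logit vector of a single query:
\begin{verbatim}
import numpy as np

def bd_defense(logits, theta=0.3, sigma=0.1, rng=None):
    # Boundary Defense BD(theta, sigma) on one softmax logit vector F(X).
    # Queries whose top confidence does not exceed theta are treated as
    # boundary samples and all of their logits are scrambled.
    rng = np.random.default_rng() if rng is None else rng
    f = np.asarray(logits, dtype=float)
    if f.max() <= theta:                       # boundary sample detected
        f = f + rng.normal(0.0, sigma, f.shape)
    return np.clip(f, 0.0, 1.0)                # soft-label output

def bd_hard_label(logits, theta=0.3, sigma=0.1, rng=None):
    # Hard-label output under the same defense.
    return int(np.argmax(bd_defense(logits, theta, sigma, rng)))
\end{verbatim}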
We call it the BD$(\theta, \sigma)$ algorithm because samples with low confidence scores are usually on the classification boundary.
For a well-designed DNN, the clean (non-adversarial) samples can usually be classified accurately with high confidence scores. Samples with low confidence scores occur rarely and have low classification accuracy. In contrast, when the attacker optimizes $f_{{\rm SL}}({\bf X})$ or $f_{{\rm HL}}({\bf X})$, there is always a stage where the adversarial samples ${\bf X}$ have small $\max F({\bf X})$ values.
For example, in soft-label black-box targeted attacks, the attacker needs to maximize the $t$th logit value $F_t({\bf X})$ by minimizing the cross-entropy loss ${\cal L}(F({\bf X}), t)= -\log F_t({\bf X})$. Initially $F_t({\bf X}_0)$ is very small and $F_c({\bf X}_0)$ is large. The optimization increases $F_t({\bf X})$ while reducing $F_c({\bf X})$. There is a stage in which all logit values are small, which means ${\bf X}$ is lying on the classification boundary.
As another example, a typical hard-label black-box targeted attack algorithm first finds an initial sample inside the target class $t$, which we denote as ${\bf X}_{t, 0}$. The algorithm often uses line search to find a boundary sample ${\bf X} = \alpha {\bf X}_{t, 0} + (1-\alpha) {\bf X}_0$ that maintains label $t$, where $\alpha$ is the optimization parameter. Then the algorithm randomly perturbs ${\bf X}$, queries the DNN, and uses the query results to find the direction to optimize $f_{{\rm HL}}({\bf X})$. Obviously, ${\bf X}$ must be on the decision boundary so that the randomly perturbed ${\bf X}$ will lead to changing DNN hard-label outputs. Otherwise, all the query results will lead to a constant output $t$, which is useless to the attacker's optimization process.
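To make the role of boundary samples concrete, the following sketch (our own simplified illustration with a hypothetical \texttt{predict\_label} query oracle, not the reference code of any cited attack) shows the binary line search that such hard-label attacks typically use to land on the decision boundary:
\begin{verbatim}
def boundary_search(x0, xt0, predict_label, t, tol=1e-3):
    # Binary search over alpha for X = alpha*xt0 + (1-alpha)*x0 so that
    # X keeps the adversarial label t while moving as close as possible
    # to the original sample x0.
    lo, hi = 0.0, 1.0       # alpha = 0 gives x0, alpha = 1 gives xt0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if predict_label(mid * xt0 + (1.0 - mid) * x0) == t:
            hi = mid        # still labeled t: move toward x0
        else:
            lo = mid        # label lost: back off toward xt0
    return hi * xt0 + (1.0 - hi) * x0
\end{verbatim}
The sample returned by this search, and the randomly perturbed queries issued around it, sit on the decision boundary, which is exactly the region that BD$(\theta, \sigma)$ scrambles.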
Therefore, for soft-label attacks there is an unavoidable stage of having boundary samples and for hard-label attacks the boundary samples are essential. Our BD method exploits this weakness of black-box attacks by detecting these samples and scrambling their query results to prevent the attacker from optimizing its objective.
One of the advantages of the BD$(\theta, \sigma)$ algorithm is that it can be implemented efficiently and inserted into DNN models conveniently with minimal coding. Another advantage is that the two parameters $(\theta, \sigma)$ make it flexible to adjust the BD method to work reliably. Large $\theta$ and $\sigma$ lead to small ASR but significant DNN performance degradation. Some attacks are immune to small noise (small $\sigma$), such as the HopSkipJump hard-label attack \cite{chen2020hopskipjumpattack}. Some other attacks such as SimBA \cite{guo2019simple} are surprisingly immune to large noise in boundary samples, which means that simply removing boundary samples or adding extra large noise to boundary samples as suggested in \cite{chen2020hopskipjumpattack} does not work. The flexibility of $(\theta, \sigma)$ makes it possible for the BD method to deal with such complicated issues and to be superior over other defense methods.
\subsection{Properties of Boundary Samples} \label{acc_degradation}
In this section, we study BD$(\theta, \sigma)$'s impact on the DNN's classification accuracy (ACC) when there is no attack, which provides useful guidance to the selection of $\theta$ and $\sigma$.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\linewidth]{figures/accanalysisfig1.png}
\includegraphics[width=0.48\linewidth]{figures/accanalysisfig5.png}
\centerline{(a) \hspace{0.4\linewidth} (b)}
\caption{Impact of the parameters $\theta$ and $\sigma$ on the classification accuracy (ACC). (a) ACC as a function of the true logit value $s = F_c({\bf X})$. (b) ACC when boundary defense BD$(\theta, \sigma)$ is applied. CleanACC is the DNN's ACC without attack/defense.}
\label{fig:accanalysis}
\end{figure}
Consider a clean sample ${\bf X}$ with true label $c$ and confidence $s = F_c({\bf X})$. Since the DNN is trained with the objective of maximizing $F_c({\bf X})$, we can assume that all the other logit values $F_i({\bf X})$, $i=0, \cdots, N-1$, $i \neq c$, are independent and identically distributed uniform random variables with values within $0$ to $a=(1-s)/(N-1)$, i.e., $F_i({\bf X}) \sim U(0, a)$. Without loss of generality, let $F_0({\bf X})$ be the maximum among these $N-1$ values. Then $Y = \sum_{i=1, i\neq c}^{N-1} F_i({\bf X})$ follows Irwin-Hall distribution with cumulative distribution function (CDF)
\begin{equation}
P_Y[y < x] = \frac{1}{(N-2)!} \sum_{k=0}^{\lfloor x/a \rfloor} (-1)^k \left(\begin{array}{c} N-2 \\ k \end{array} \right) \left( \frac{x}{a} - k \right)^{N-2}. \label{eq3.9}
\end{equation}
When $N$ is large, the distribution of $Y$ can be approximated as normal ${\cal N}\left( \frac{(1-s)(N-2)}{2(N-1)}, \left( \frac{1-s}{N-1} \right)^2 \frac{N-2}{12} \right)$. We denote its CDF as $\Phi(x)$. Since the sample ${\bf X}$ is classified accurately if and only if $s > F_0({\bf X})$, the classification accuracy $P[{\rm ACC}|s]$ can be derived as
\begin{equation}
P[{\rm ACC}|s] = P_Y[y > 1-2s] = 1 - \Phi(1-2s). \label{eq3.10}
\end{equation}
Using (\ref{eq3.10}), we can calculate $P[{\rm ACC}|s]$ for each $s$, as shown in Fig. \ref{fig:accanalysis}(a) for $N=10$ and $1000$ classes. It can be seen that for $N=1000$, if $s < 0.32$, then the sample's classification ACC is almost 0. This means that we can set $\theta \leq 0.32$ to safely scramble all those queries whose maximum logit value is less than 0.32 without noticeable ACC degradation.
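The numbers quoted above can be reproduced with a few lines of SciPy. The snippet below is our own sketch of (\ref{eq3.10}) under the stated normal approximation (it is not code from the experiments of this paper):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def p_acc_given_s(s, N):
    # P[ACC|s] = 1 - Phi(1-2s), where Y is approximated as normal with
    # mean (1-s)(N-2)/(2(N-1)) and variance ((1-s)/(N-1))^2 (N-2)/12.
    a = (1.0 - s) / (N - 1)
    mean = 0.5 * a * (N - 2)
    std = a * np.sqrt((N - 2) / 12.0)
    return 1.0 - norm.cdf(1.0 - 2.0 * s, loc=mean, scale=std)

print(p_acc_given_s(0.32, 1000))  # ~5e-4: such samples are misclassified
\end{verbatim}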
Next, to evaluate the ACC when BD$(\theta, \sigma)$ is applied, we assume that the true label $c$'s logit value $s$ follows approximately a half-normal distribution, whose probability density function is
\begin{equation}
f_S(s) = \left\{ \begin{array}{ll}
\frac{\sqrt{2}}{\nu \sqrt{\pi}} e^{-\frac{(1-s)^2}{2\nu^2}}, & s \leq 1 \\
0, & {\rm otherwise} \end{array} \right. \label{eq3.14}
\end{equation}
with the parameter $\nu$.
The ACC of the DNN without attack or defense (which we call cleanACC) is then
\begin{equation}
{\rm cleanACC} = \int_0^1 P[{\rm ACC}|s]f_S(s) ds. \label{eq3.15}
\end{equation}
Using (\ref{eq3.14})-(\ref{eq3.15}), we can find the parameter $\nu$ for each clean ACC. For example, for $N=1000$ and a DNN with clean ACC 90\%, the distribution of true logit $s$ follows (\ref{eq3.14}) with $\nu=0.41$.
When noise is added, each $F_i(X)$ becomes $F_i(X)+v_i$ for noise $v_i \sim {\cal N}(0, \sigma^2)$. Following a derivation similar to (\ref{eq3.9})-(\ref{eq3.10}), we can obtain the ACC for the noise-perturbed logit $F_c(X)+v_c$ as
\begin{equation}
P[{\rm ACC}|s,\sigma] = 1 - \tilde{\Phi}(1-2s),
\end{equation}
where $\tilde{\Phi}$ is the CDF of the new normal distribution
${\cal N}\left( \frac{(1-s)(N-2)}{2(N-1)}, \left( \frac{1-s}{N-1} \right)^2 \frac{N-2}{12} + (N+2)\sigma^2 \right)$. The ACC under the defense is then
\begin{equation}
{\rm ACC} = \int_0^{\theta} P[{\rm ACC}|s,\sigma]f_S(s)ds + \int_{\theta}^1 P[{\rm ACC}|s] f_S(s) ds.
\end{equation}
Fig. \ref{fig:accanalysis}(b) shows how the defense ACC degrades with the increase of $\theta$ and $\sigma$. We can see that with $\sigma=0.1$, there is almost no ACC degradation for $N=10$. For $N=1000$, ACC degradation is very small when $\theta<0.4$ but grows to 5\% when $\theta>0.6$. Importantly, under $\theta<0.4$ we can apply larger noise $\sigma=0.3$ safely without obvious ACC degradation. This shows the importance of scrambling boundary samples only. Existing defenses scramble all the samples, which corresponds to $\theta=1$, and thus suffer from significant ACC degradation.
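The curves in Fig. \ref{fig:accanalysis}(b) can likewise be reproduced by numerically integrating the defended-accuracy expressions above. A minimal self-contained sketch (again our own illustration, using the variance inflation $(N+2)\sigma^{2}$ stated above) is:
\begin{verbatim}
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def p_acc(s, N, sigma=0.0):
    # P[ACC|s, sigma]; sigma = 0 recovers the clean case.
    a = (1.0 - s) / (N - 1)
    var = a * a * (N - 2) / 12.0 + (N + 2) * sigma ** 2
    return 1.0 - norm.cdf(1.0 - 2.0 * s,
                          loc=0.5 * a * (N - 2), scale=np.sqrt(var))

def f_s(s, nu):
    # Half-normal density of the true-class logit s (defined for s <= 1).
    return np.sqrt(2.0 / np.pi) / nu * np.exp(-(1.0 - s)**2 / (2.0 * nu**2))

def acc_under_bd(theta, sigma, N, nu):
    # Noisy logits below the threshold theta, clean logits above it.
    noisy, _ = quad(lambda s: p_acc(s, N, sigma) * f_s(s, nu), 0.0, theta)
    clean, _ = quad(lambda s: p_acc(s, N) * f_s(s, nu), theta, 1.0)
    return noisy + clean

print(acc_under_bd(theta=0.4, sigma=0.1, N=1000, nu=0.41))
\end{verbatim}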
\section{Experiments}
\label{experiment}
\subsection{Experiment Setup}\label{setup}
In the first experiment, with the full validation datasets of MNIST (10,000 images), CIFAR10 (10,000 images), and IMAGENET (50,000 images),
we evaluated the degradation of the classification accuracy of a list of popular DNN models when our proposed BD method is applied.
In the second experiment, with $1000$ validation images of MNIST/CIFAR10 and $100$ validation images of IMAGENET, we evaluated the defense performance of our BD method against several state-of-the-art black-box attack methods, including soft-label attacks {\bf AZ} (AutoZOOM) \cite{tu2019autozoom}, {\bf NES-QL} (query limited) \cite{ilyas2018black}, {\bf SimBA} (SimBA-DCT) \cite{guo2019simple}, and {\bf GA} (GenAttack) \cite{alzantot2019genattack}, as well as hard-label attacks {\bf NES-HL} (hard label) \cite{ilyas2018black}, {\bf BA} (Boundary Attack) \cite{brendel2017decision}, {\bf HSJA} (HopSkipJump Attack) \cite{chen2020hopskipjumpattack}, and {\bf Sign-OPT} \cite{cheng2019sign}.
We adopted their original source codes with the default hyper-parameters and just inserted our BD$(\theta, \sigma)$ as a subroutine to process $F({\bf X})$ after each model prediction call. These algorithms used the InceptionV3 or ResNet50 IMAGENET models. To maintain uniformity and fair comparison, we considered the $l_{2}$ norm setting throughout the experiment.
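For clarity, the BD$(\theta, \sigma)$ subroutine itself amounts to only a few lines; the following sketch (ours; the exact post-processing used in the experiments may differ in minor details) shows the idea:
\begin{verbatim}
# Sketch of BD(theta, sigma): scramble the returned scores only when
# the query is a boundary sample, i.e., max F(X) < theta.
import numpy as np

def boundary_defense(probs, theta=0.3, sigma=0.1,
                     hard_label=False, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    if probs.max() < theta:                 # boundary sample detected
        probs = probs + rng.normal(0.0, sigma, size=probs.shape)
    if hard_label:                          # hard-label API: top-1 only
        return int(np.argmax(probs))
    return probs
\end{verbatim}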
We also compared our BD method with some representative black-box defense methods, including {\bf NP} (noise perturbation), {\bf JPEG} compression, {\bf Bit-Depth}, and {\bf TVM} (Total Variation Minimization), whose data were obtained from \cite{guo2017countering}, for soft-label attacks, and {\bf DD} (Defensive Distillation) \cite{papernot2016distillation}, {\bf Region-based} classification \cite{cao2017mitigating}, and {\bf AT} (Adversarial Training) \cite{goodfellow2014explaining} for hard-label attacks.
In order to have a more persuasive and comprehensive study of the robustness of the proposed BD method, we also performed experiments using Robust Benchmark models \cite{croce2021robustbench}, such as {\bf RMC} (Runtime Masking and Cleaning) \cite{wu2020adversarial}, {\bf RATIO} (Robustness via Adversarial Training on In- and Out-distribution)\cite{augustin2020adversarial}, {\bf RO} (Robust Overfitting)\cite{rice2020overfitting}, {\bf MMA} (Max-Margin Adversarial)\cite{ding2018mma}, {\bf ER} (Engstrom Robustness)\cite{engstrom2019adversarial}, {\bf RD} (Rony Decoupling)\cite{rony2019decoupling}, and {\bf PD} (Proxy Distribution)\cite{sehwag2021improving} models, over the CIFAR10 dataset for various attack methods.
As the primary performance metrics, we considered {\bf ACC} (DNN's classification accuracy) and {\bf ASR} (attacker's attack success rate). The ASR is defined as the ratio of samples with $\arg \max_i F_i({\bf X}) = t \neq c$. Without defense, the hard-label attack algorithms always output adversarial samples successfully with the label $t$ (which means ASR = 100\%). Under our defense the ASR will be reduced due to the added noise, so ASR is still a valid performance measure. On the other hand, since most hard-label attack/defense papers use the ASR defined as the ratio of samples satisfying both $\arg \max_i F_i({\bf X}) = t$ and median $l_2$ distortion ($\sqrt{\| {\bf X}-{\bf X}_0 \|^2/M}$ when ${\bf X}_0$ has $M$ elements) less than a certain threshold,
we will also report our results using this ASR, which we call {\bf ASR2}.
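In code, the two metrics can be computed as in the sketch below (ours; here we read the ASR2 condition per sample, i.e., a sample counts only if its normalized $l_2$ distortion is below the threshold):
\begin{verbatim}
# Sketch: ASR counts targeted successes; ASR2 additionally requires
# sqrt(||X - X_0||^2 / M) below a preset distortion threshold.
import numpy as np

def asr(pred_labels, targets):
    return float(np.mean(np.asarray(pred_labels) == np.asarray(targets)))

def asr2(pred_labels, targets, X_adv, X_0, threshold):
    diff = (np.asarray(X_adv) - np.asarray(X_0)).reshape(len(X_0), -1)
    dist = np.sqrt(np.sum(diff**2, axis=1) / diff.shape[1])
    hit = np.asarray(pred_labels) == np.asarray(targets)
    return float(np.mean(hit & (dist < threshold)))
\end{verbatim}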
We show only the results of targeted attacks in this section. Experiments of untargeted attacks as well as extra experiment data and result discussions are provided in supplementary material.
\subsection{ACC Degradation Caused by Boundary Defense}\label{accevaluation}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{figures/ACC4fig2s05.png}
\caption{Top-1 classification accuracy degradation (Defense ACC $-$ Clean ACC) versus $\theta$. $\sigma=0.1$.}
\label{fig:accdegradation}
\end{figure}
For MNIST, we trained a 5-layer convolutional neural network (CNN) with clean ACC 99\%. For CIFAR10, we trained a 6-layer CNN with clean ACC 83\% and also applied the pre-trained model of \cite{xu2019pc} with ACC 97\%, which are called {\bf CIFAR10-s} and {\bf CIFAR10}, respectively. For IMAGENET, we used standard pre-trained models from the official TensorFlow library ({\bf ResNet50}, {\bf InceptionV3}, {\bf EfficientNet-B7}) and the official PyTorch library ({\bf ResNet50tor}, {\bf InceptionV3tor}), where ``-tor" indicates their PyTorch source.
We used the validation images to query the DNN models and applied our BD algorithm to modify the DNN outputs before evaluating classification ACC.
It can be observed from Fig. \ref{fig:accdegradation} that with $\theta \leq 0.3$ we can keep the loss of ACC around $1\%$ for IMAGENET models (from 0.5\% for ResNet50 to 1.5\% for InceptionV3), while $\theta > 0.6$ leads to nearly 5\% ACC degradation. For MNIST and CIFAR10, the ACC has almost no degradation, but CIFAR10-s shows a limited 1.5\% ACC degradation for large $\theta$.
This fits well with the analysis results shown in Fig. \ref{fig:accanalysis}(b). In particular, most existing noise-based defense methods, which do not exploit the boundary (equivalent to $\theta = 1$), would result in up to 5\% ACC degradation for IMAGENET models.
\subsection{Performance of BD Defense against Attacks}
\subsubsection{ASR of soft-label black-box attacks}\label{softlabel section}
Table \ref{tbl:ASR1} shows the ASR of soft-label black-box attack algorithms under our proposed BD method. To save space, we show only the data for $\sigma=0.1$; results for varying $\theta$ and $\sigma$ are shown in Fig. \ref{fig:soft-label asr}.
\begin{table}[t]
\caption{ASR (\%) of Targeted Soft-Label Attacks. $\sigma$ = 0.1.}
\label{tbl:ASR1}
\centering
\begin{tabular}{c|c|ccc}
\toprule
Dataset & Attacks
& No defense & $\theta_1$ = 0.5 & $\theta_2$ = 0.7 \\
\midrule
& AZ & 100 & 8 & 8 \\
MNIST & GA & 100 & 0 & 0\\
& SimBA & 97 & 3 & 0 \\
\midrule
& AZ & 100 & 9 & 9 \\
CIFAR10 & GA & 98.76 & 0 & 0 \\
& SimBA & 97.14 & 23 & 15\\
\midrule \midrule
Dataset & Attacks
& No Defense & $\theta_1$ = 0.1 & $\theta_2$ = 0.3 \\
\midrule
& AZ & 100 & 0 & 0 \\
IMAGENET & NES-QL & 100 & 69 & 8 \\
& GA & 100 & 0 & 0\\
& SimBA & 96.5 & 6 & 2 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[width=1\linewidth]{figures/softlabel_graph_1.png}
\caption{ASR(\%) vs noise level $\sigma$ for various boundary threshold $\theta$. The top row is for MNIST, and the bottom row is for CIFAR10.}
\label{fig:soft-label asr}
\end{figure}
From Table \ref{tbl:ASR1} we can see that with the increase in $\theta$, the ASR of all the attack algorithms decreased drastically.
Over the IMAGENET dataset, the BD method reduced the ASR of all the attack algorithms to almost $0$ with $(\theta, \sigma) = (0.3, 0.1)$. For MNIST/CIFAR10 datasets, the BD method with $(\theta, \sigma)=(0.5, 0.1)$ was enough. Fig. \ref{fig:soft-label asr} shows a consistent decline of ASR over the increase in noise level $\sigma$. This steady decline indicates robust defense performance of the BD method against the soft-label attacks.
\subsubsection{ASR of hard-label black-box attacks}\label{hard-label section}
We summarize the ASR and median $l_2$ distortion of hard-label attacks in the presence of our proposed BD method in Table \ref{tb2:ASR2}.
\begin{table}[t]
\caption{ASR (\%) and Median $l_2$ Distortion of Targeted Hard-label Attacks. $\sigma =0.1$. ``-" Means no $l_2$ Distortion Data due to Absence of Adversarial Samples.}
\label{tb2:ASR2}
\centering
\begin{tabular}{c|c|ccc}
\toprule
Dataset & Attacks & & ASR/$l_2$& \\
& & No defense & $\theta_1$ = 0.5 & $\theta_2$ = 0.7\\
\midrule
& Sign-OPT & 100/0.059 & 4/0.12 & 0/-\\
MNIST & BA & 100/0.16 & 17/0.55 & 9/0.56\\
& HSJA & 100/0.15 & 38/0.14 & 7/0.15\\
\midrule
CIFAR10
& Sign-OPT & 100/0.004 & 4/0.08 & 0/-\\
& HSJA & 100/0.05 & 18/0.05 & 7/0.05\\
\midrule \midrule
Dataset & Attacks & &ASR/$l_2$&\\
& & No Defense & $\theta_1$ = 0.1 & $\theta_2$ = 0.3\\
\midrule
& NES-HL & 90/0.12 & 0/- & 0/- \\
IMAGE- & Sign-OPT & 100/0.05 & 14/0.4 & 0/- \\
NET & BA & 100/0.08 & 0/- & 0/- \\
& HSJA & 100/0.03 & 34/0.11 & 0/- \\
\bottomrule
\end{tabular}
\end{table}
Surprisingly, the BD method performed extremely well against the hard-label attacks, which are usually challenging for conventional defense methods. In general, BD(0.3, 0.1) was able to reduce the ASR to 0\% over the IMAGENET dataset, and BD(0.7, 0.1) was enough to reduce the ASR to near 0 over MNIST and CIFAR10.
For ASR2, Figure \ref{fig:hard-label asr} shows how ASR2 varies with the pre-set $l_2$ distortion threshold when the BD method was used to defend against the Sign-OPT attack. We can see that the ASR2 reduced with the increase of either $\theta$ or $\sigma$, or both. BD(0.7, 0.1) and BD(0.3, 0.1) successfully defended against the Sign-OPT attack over the MNIST/CIFAR10 and IMAGENET datasets, respectively.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{figures/hardlabel_graph.png}
\caption{ASR (\%) versus median $l_2$ distortion of the Sign-OPT attack under the proposed BD method. }
\label{fig:hard-label asr}
\end{figure}
\subsubsection{Robust defense performance against adaptive attacks} \label{adaptive_attack}
To evaluate the robustness of the defense, it is crucial to evaluate the defense performance against adaptive attacks \cite{tramer2020adaptive}. For example, the attacker may change the query limit or the optimization step size. In this subsection, we show the effectiveness of our BD defense against two major adaptive attack techniques: 1) adaptive query count (QC) budget; and 2) adaptive step size.
First, we increased the attack QC budget; the results obtained are summarized in Table \ref{tb3:eot}.
We observe that when the attacker increased QC from $10^4$ to $10^{10}$, there was no significant increase in ASR.
Next, we adjusted the optimization (or gradient estimation) step size of the attack algorithms (such as $\beta$ of the Sign-OPT algorithm) and evaluated the performance of BD. The ASR data are shown in Fig. \ref{fig:step_size}. We can see that there was no significant change of ASR when the attack algorithms adopted different optimization step sizes. For GenAttack \& Sign-OPT, the ASR was almost the same under various step sizes. For SimBA, the ASR slightly increased, but at the expense of heavily distorted adversarial outputs.
\begin{table}[t]
\caption{ASR(\%) of Adaptive Black-Box Attacks under the Proposed BD method}
\label{tb3:eot}
\centering
\begin{tabular}{c|c|ccc}
\toprule
Dataset & Attack & query & budget &\\
& & $10^4$ (preset) & $10^8$ & $10^{10}$\\
\midrule
& GA & 0 & 0 & 0 \\
CIFAR10 & HSJA & 3 & 4 & 0 \\
& Sign-OPT & 5 & 8 & 8 \\
\midrule
& NES-QL & 2 & 12 & 8 \\
IMAGENET & Boundary & 0 & 0 & 0\\
& HSJA & 0 & 0 & 0 \\
& Sign-OPT & 0 & 0 & 0\\
\bottomrule
\end{tabular}
\end{table}
As a result, we can assert the robustness of the BD method against the black-box adversarial attacks.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{figures/adaptive_step_1.png}
\caption{ASR (\%) versus step size of the adaptive attacks. Note that for GenAttack we considered $\sigma=0.01$, since the ASR for $\sigma=0.1$ was always 0 for all threshold values. For Sign-OPT \& SimBA we considered $\sigma=0.1$.}
\label{fig:step_size}
\end{figure}
\begin{table}[h]
\caption{Comparison of BD method with other defense methods against targeted hard-label attacks in terms of ASR (\%).}
\label{tb5:other_defense}
\centering
\begin{tabular}{c|c|ccc}
\toprule
Dataset & Defense & HSJA & BA & SimBA-DCT \\
\midrule
& DD \cite{papernot2016distillation} & 98 & 80 & - \\
& AT \cite{goodfellow2014explaining} & 100 & 50 & 4 \\
& Region-based \cite{cao2017mitigating} & 100 & 85 & - \\
MNIST & BD ($\theta = 0.5, \sigma = 0.1$)
& \textbf{38}& \textbf{17} & \textbf{3} \\
& BD ($\theta = 0.7, \sigma = 0.1$)& \textbf{7} & \textbf{9}& \textbf{0} \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[t]
\caption{Comparison of BD method ($\theta=0.5, \sigma=0.1$) with other defense methods against targeted GenAttack (soft-label) in terms of ASR (\%).}
\label{tb6:defenses}
\centering
\begin{tabular}{c|c|ccccc}
\toprule
Dataset & Attack & Bit-Depth & JPEG & TVM & NP & BD\\
\midrule
MNIST & GenAttack & 95 & 89 & - & 5 & \textbf{0}\\
CIFAR10 & GenAttack & 95 & 89 & 73 & 6 & \textbf{0}\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[t]
\caption{Compare ASR (\%) of proposed BD method with the Robust Bench Defense Models. CIFAR10 Dataset.}
\label{tb4:robustbench}
\centering
\begin{tabular}{c|ccc}
\toprule
RobustBench Defense & Sign-OPT & SimBA & HSJA \\
\midrule
RMC \cite{wu2020adversarial} & 100 & 83 & 100 \\
RATIO \cite{augustin2020adversarial} & 100 & 59 & 100 \\
RO \cite{rice2020overfitting} & 100 & 85 & 100 \\
MMA \cite{ding2018mma} & 100 & 83 & 100 \\
ER \cite{engstrom2019adversarial} & 100 & 92 & 100\\
RD \cite{rony2019decoupling} & - & 80 & - \\
PD \cite{sehwag2021improving}& 100 & 71 & 100\\
BD ($\theta = 0.5, \sigma = 0.1$) & \textbf{4} & \textbf{23} & \textbf{18}\\
BD ($\theta = 0.7, \sigma = 0.1$) & \textbf{0} & \textbf{15}& \textbf{7}\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Comparison with Other Defense Methods}
For defending against hard-label attacks, Table \ref{tb5:other_defense} compares the BD method with the DD, AT, and Region-based defense methods over the MNIST dataset. We obtained these other methods' defense ASR data from \cite{chen2020hopskipjumpattack} for HopSkipJump and BA attack methods, and obtained the defense performance data against SimBA through our experiments. It can be seen that our BD method outperformed all these defense methods with lower ASR. For soft-label attacks, Table \ref{tb6:defenses} shows that our BD method also outperformed a list of existing defense methods.
We also ran experiments using RobustBench models. The defense performance over the CIFAR10 dataset is reported in Table \ref{tb4:robustbench}. ASR is used as our preliminary evaluation criterion because, for an attacker, the higher the ASR, the more robust the attack method is against the defenses. From Table \ref{tb4:robustbench}, we can see that our method had the best defense performance.
\section{Conclusions} \label{conclusion}
In this paper, we propose an efficient and effective boundary defense method BD$(\theta, \sigma)$ to defend against black-box attacks. This method detects boundary samples by examining classification confidence scores and adds random noise to the query results of these boundary samples. BD$(0.3, 0.1)$ is shown to reduce the attack success rate to almost 0 with only about 1\% classification accuracy degradation for IMAGENET models. Analysis and experiments were conducted to demonstrate that this simple and practical defense method could effectively defend the DNN models against state-of-the-art black-box attacks.
\bibliographystyle{IEEEtran}
\section{Introduction}
Since 1998, when Riess et al. used SNIa data to statistically indicate that the expansion of the universe is accelerating \cite{riess1998}, physicists have been providing various theories to explain this acceleration, including $f(R)$ theory \cite{Capo}, Brans-Dicke theory \cite{Brans}, and dark energy theory \cite{Peebles}. At present, the dark energy theory can be used to effectively explain the cosmic microwave background (CMB) anisotropies \cite{Hu}. However, this study mainly focuses on the physical nature of dark energy. Dark energy can be studied using two main approaches. The first is to focus on the properties of dark energy, investigating whether or not its density evolves with time; this can be verified by reconstructing the equation of state $w(z)$ for dark energy, which is independent of physical models. The reconstruction of the equation of state involves parametric and non-parametric methods \cite{Linder}, the latter including Principal Component Analysis \cite{Huterer,Clarkson}, Gaussian Processes \cite{Holsclaw,Shafieloo,Seikel}, PCA with the smoothness prior method \cite{Crittenden2009,Crittenden2012,Zhao}, and PCA based on the Ridge Regression Approach \cite{Huang}. The second involves dark energy physical models that are proposed from the physical origin of its density and pressure, including scalar field models \cite{Ratra}, pseudo-Nambu-Goldstone bosons for cosmology \cite{Frieman}, holographic dark energy \cite{Li}, age dark energy \cite{Mazia2007}, and so on.
Currently, it may be difficult to judge which model, method, or result is more persuasive; however, a model of dark energy that concerns its physical nature is essential. From the point of view of the models, Maziashvili presented a method that uses the Krolyhazy relation and the time-energy uncertainty relation to estimate the density of dark energy \cite{Mazia2007,Mazia20076}, and the result is consistent with astronomical data if the unique numerical parameter in the dark energy model is taken to be on the order of one.
Based on this, to further explore the origin of the dark energy density and pressure, we present the possibility that the dark energy density is derived from massless scalar bosons in a vacuum. If the scalar boson field were a radiation field satisfying the Bose-Einstein distribution, positive pressure would be generated; we first exclude this possibility based on the negative pressure of dark energy. Therefore, we deduce the uncertainty in the position of scalar bosons from the quantum fluctuation of space-time and the assumption that scalar bosons satisfy P-symmetry under a parity transformation, which can be used to estimate the scalar boson number density and the dark energy density.
\section{The quantum fluctuation of space-time and P-symmetry }
The quantum fluctuation of space-time relates to the quantum properties of objects. Using the Heisenberg position-momentum uncertainty relations, Wigner derived a quantum limit on the measurability of a certain length \cite{Wigner}. If $t$ is the time required by the measurement procedure, the uncertainty in the length measurement is
\begin{equation}\label{eq1}
\delta t \sim \sqrt {\frac{t}{{{M_c}}}}
\end{equation}
where ${M_c}$ is the mass of the clock, we take the units $\hbar = c = 1$ throughout this study.
As a result of the above relation, when ${M_c} \to \infty $, $\delta t \to 0$. To resolve this issue, the quantum fluctuation of space-time itself was proposed \cite{Karo,Jack,Amelino1994,Amelino1999}. It also results in uncertainty in distance measurements. Thus, at very short distance scales, space-time is foamy, and the limitation of space-time distance measurements can be given by
\begin{equation}\label{eq3}
\delta t \sim \left\{ \begin{array}{ll}
{({t_p}t)^{1/2}}, & r > {t_p}\\
{t_p}, & r \le {t_p}
\end{array} \right.
\end{equation}
where ${t_p}$ is the Planck time and ${t_p} = \sqrt {\hbar G} $. This limitation of space-time measurements can be interpreted as the result of quantum fluctuations of space-time. Meanwhile, Eq. (\ref{eq3}) can be derived for massless particles in the framework of $\kappa $-deformed Poincare symmetries \cite{Amelino1997,Amelino1998}.
Krolyhazy derived another method for describing the quantum fluctuations of space-time, known as the Krolyhazy relation
\begin{equation}\label{eq4}
\delta t \sim \;{t_p}^{2/3}{t^{1/3}}.
\end{equation}
We consider that this quantum fluctuation effect is also applicable to the expanding universe. If the scalar boson field were a radiation field obeying the Bose-Einstein distribution, positive pressure would be generated; we first exclude this possibility based on the negative pressure of dark energy. By assuming that massless scalar bosons fall into the horizon boundary of the cosmos with the expansion of the universe, the scalar bosons satisfy P-symmetry under the parity transformation, which can be expressed as
\begin{equation}\label{eq5}
{P}\varphi ({r}) = - \varphi ({r}),
\end{equation}
where $\varphi$ is the wave function of the scalar bosons. If $r$ and $t$ are the horizon size and age of the universe, then P-symmetry together with the space-time quantum fluctuation relation (\ref{eq3}) or (\ref{eq4}) relates the uncertainty in the relative position of two scalar bosons to the horizon size or age of the universe; hence, the uncertainty in the position of a scalar boson is
\begin{equation}\label{eq6}
\delta {t_{scalar}} \sim \delta t.
\end{equation}
We hypothesise that the mean distance between scalar bosons is no less than the uncertainty $\delta {t_{scalar}}$; as a result, we obtain the number density of massless scalar bosons
\begin{equation}\label{eq7}
N = \delta {t^{ - 3}}.
\end{equation}
If the quantum fluctuation of space-time provides scalar bosons with nonzero energy, we can consider the wave function ${\Psi _0}$ of the massless scalar bosons to be a non-stationary state. Its wavelength can be determined by the uncertainty relation $\delta t$; hence, ${\Psi _0}$ can be written as the superposition of at least $n$ stationary states ${\varphi _1},{\varphi _2},...,{\varphi _n}$
\begin{equation}\label{eq8}
{\Psi _0} = \sum\limits_k^n {{\alpha _k}} {\varphi _k}{e^{ - i{w_k}t'}},
\end{equation}
where ${{\rm{w}}_k} = 2\pi k/\delta t.$ It is easy to see that the state components of $\varphi ,{e^{ - i\omega {t'}}}$ are orthogonal in space and time, respectively.
Alternatively, the wavelength of the scalar bosons can be determined by the age of the universe $t$; hence, the wave function of the scalar bosons ${\Psi _0}$ can be written as
\begin{equation}\label{eq9}
{\Psi _0} = \sum\limits_k^n {{{\bar \alpha }_k}} {\bar \varphi _k}{e^{ - i{{\bar w}_k}t'}},
\end{equation}
where ${{\rm{\bar w}}_k} = 2\pi k/t.$ The wave function ${\Psi _0}$ will be further discussed using the Higgs potential or the Yukawa interaction.
By assuming scalar bosons or Goldstone bosons exist in a vacuum, Higgs proposed the existence of the Higgs scalar boson field which can be described by the field equation \cite{Higgs}
\begin{equation} \label{eq10}
{\nabla _\mu }{\nabla ^\mu }\Phi {\rm{ + }}{{\rm{\bar V}}^\prime }(\Phi {\Phi ^*})\Phi = 0,
\end{equation}
and when
\begin{equation}\label{eq11}
\bar V'({\Phi _0}{\Phi _0}^*) = 0,
\end{equation}
the massless-zero spin scalar boson field equation becomes
\begin{equation}\label{eq12}
\Box {\Phi _0} = 0.
\end{equation}
We consider this massless scalar field to contain zero-spin and zero-charge bosons; the scalar boson couples to, or undergoes a Yukawa interaction with, itself, which can be quantified by the simple potential energy form $ - {f^2}{\Phi _0}{\Phi _0}^*$, resulting in their massless state.
If the Higgs singlet potential ${\Phi _0}$ is substituted with the Yukawa potential $U$, the massless scalar bosons can interact with each other through propagator bosons, provided they have weak isospins, which can be described by the Yukawa potential. Using Eq. (\ref{eq8}) or (\ref{eq9}), we incorporate a separation of variables for $U$
\begin{equation}\label{eq13}
U = {{U'}(r)}{e^{ - i({w_k} - {w_l})t}},
\end{equation}
and the Yukawa potential ${U'}(r)$ satisfies
\begin{equation}\label{eq14}
\left\{ {\Delta - ({m_0}^2 - {w_{kl}}^2)} \right\}U' = - 4\pi g{\varphi _k}{\varphi _l},
\end{equation}
where ${w_{kl}} = {w_k} - {w_l}$, ${m_0}$ is the mass of the Higgs or scalar boson, and $g$ is the coupling constant. Solving the above equation yields
\begin{equation}\label{eq15}
{U'}(r) = g\int {\frac{{{e^{ - \mu \left| {r - {r'}} \right|}}}}{{\left| {r - {r'}} \right|}}{\varphi _k}({r'}){\varphi _l}({r'})} d{r'},
\end{equation}
where $\mu = \sqrt {{m_0}^2 - {w_{kl}}^2} $.
\section{The density and negative pressure for the Boltzmann system}
When all scalar bosons are in the ground state ${\varepsilon _0}$, that is
\begin{equation}\label{eq16}
{\varepsilon _0} = \delta {t^{ - 1}}\quad {\rm or}\quad {t^{ - 1}},
\end{equation}
we can use Eq. (\ref{eq3}) or (\ref{eq4}), (\ref{eq7}) and (\ref{eq16}) to obtain the internal energy per unit volume for massless scalar bosons
\begin{equation}\label{eq17}
{\rm{u = }}{\varepsilon _0}N \sim {({t_p}t)^{ - 2}}.
\end{equation}
Next we use the increase in entropy to derive the existence of negative pressure.
We consider the Boltzmann system to be composed of massless scalar bosons in vacuum; the bosons have a quantum state number ${W_p}$ for an energy level at the ground state ${\varepsilon _0}$, where ${W_p}$ is considered the momentum degree of freedom. The momentum is ${p_{0}} = \sqrt {\sum\limits_{i = 1}^{{W_p}} {p_i^2} } $. When ${W_p} = 3$, ${p_{0}} = \sqrt {p_x^2 + p_y^2 + p_z^2} $, and it is clear that the number of microstates of the Boltzmann system is ${3^N}$; hence, the system has the statistical entropy per unit volume
\begin{equation}\label{eq18}
s = {k_B}\ln {3^N},
\end{equation}
where ${k_B}$ is the Boltzmann constant.
Using the thermodynamic entropy definition $TdS = dU + pdV$, where $T$ is the temperature, together with the relations $dS = sdV$ and $dU = udV$ and Eqs. (\ref{eq3}), (\ref{eq7}), (\ref{eq17}) and (\ref{eq18}), we get
\begin{equation}\label{eq19}
T = {(\ln {T_c})^{ - 1}}{k_B}^{ - 1}{({t_p}t)^{ - 1/2}}.
\end{equation}
Because $u{({t_p}t)^2}$ is constant, ${T_c}$ is also constant.
We assume that the degeneracy of energy ${W_p}$ increases gradually as the universe expands, which can be expressed as
\begin{equation}\label{eq20}
{W_p}:1 \to 2 \to 3,
\end{equation}
Hence, as ${W_p}$ rises, the entropy density increases, which can generate the negative pressure. From Eqs. (\ref{eq17}), (\ref{eq18}) and (\ref{eq20}), it is easy to find that the negative pressure ${p_u} = {\omega _u}u$ satisfies
\begin{equation}\label{eq21}
T\ln {W_p}^N = (1 + {\omega _u})u,
\end{equation}
solving the above equation yields
\begin{equation}\label{eq22}
{\omega _u} = - 1 + \frac{{\ln {W_p}}}{{\ln {T_c}}}.
\end{equation}
This is the equation of state for the scalar field in the vacuum, and it has an interesting feature. Setting ${T_c} = 3$: when ${W_p} \to 1$, one has ${\omega _u} \to - 1$; when ${W_p} \to 2$, ${\omega _u} \to -1 + \ln 2/\ln 3 \approx - 1/3$. This relation implies that the vacuum energy evolves from an ordered to a disordered state over time, so it cannot violate the principle of entropy increase. Namely, the negative pressure can be derived from the increasing entropy, which means the universe need not keep expanding in the future.
\section{Dark energy density and pressure}
\subsection{ The dark energy density and the evolution of ground state energy}
We assume that dark energy is composed of massless scalar bosons in vacuum; using Eq. (\ref{eq17}), we obtain the dark energy density ${\rho _{de}} = {({t_p}t)^{ - 2}}$. We introduce a numerical constant so that the proportionality in Eq. (\ref{eq3}) or (\ref{eq4}) becomes an equality, and choose the conformal time $\eta $ as the time scale $t$ \cite{Cai,Wei}. Hence, the energy density can be written as
\begin{equation}\label{eq23}
{\rho _{de}} = \frac{{3{n^2}{M_p}^2}}{{{\eta ^2}}},
\end{equation}
where ${M_p}^{ - 2} = 8\pi G$, ${n^2}$ is an introduced numerical constant, $\eta $ is the conformal time, which is given by
\begin{equation}\label{eq24}
\eta = \int_0^t {\frac{{dt}}{a}} .
\end{equation}
where $a$ is the scale factor of our universe; we take the present scale factor ${a_0} = 1$, and $t$ is the age of the universe.
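Numerically, since $dt = da/(aH)$, Eq. (\ref{eq24}) can be rewritten as $\eta(a) = \int_0^a da'/({a'}^2 H(a'))$; a quadrature sketch (ours, assuming SciPy and an arbitrary illustrative $E(a) = H/H_0$) is:
\begin{verbatim}
# Sketch: conformal time eta(a) = int_0^a da'/(a'^2 H(a')).
import numpy as np
from scipy.integrate import quad

def conformal_time(a, E, H0=1.0, a_min=1e-8):
    return quad(lambda x: 1.0/(x*x*H0*E(x)), a_min, a)[0]

# Matter-dominated check: E(a) = a^{-3/2} gives eta proportional
# to sqrt(a), so eta(1)/eta(1/4) should be close to 2.
E_m = lambda x: x**(-1.5)
print(conformal_time(1.0, E_m) / conformal_time(0.25, E_m))
\end{verbatim}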
Suppose the universe is spatially flat; defining the fractional energy densities ${\Omega _m} = {\rho _m}/3{M_p}^2{H^2}$ and ${\Omega _{de}} = {\rho _{de}}/3{M_p}^2{H^2}$, one has ${\Omega _m} = 1 - {\Omega _{de}}$ from the Friedmann equation. With Eq. (\ref{eq23}), ${\Omega _{de}}$ can be written as
\begin{equation}\label{eq25}
{\Omega _{de}} = \frac{{{n^2}}}{{{H^2}{\eta ^2}}} ,
\end{equation}
where $H = \dot a/a$ is the Hubble parameter.
Using the energy conservation equation ${\dot \rho _{de}} + 3H({\rho _{de}} + {p_{de}}) = 0$ together with Eqs. (\ref{eq23}) and (\ref{eq25}), one can obtain the equation of state ${\omega _{de}} = {p_{de}}/{\rho _{de}}$ as
\begin{equation}\label{eq26}
{\omega _{de}} = - 1 + \frac{2}{{3n}}\frac{{\sqrt {{\Omega _{de}}} }}{a} .
\end{equation}
Meanwhile, using the Friedmann equations with Eq. (\ref{eq26}), as well as ${\rho _m} \propto {a^{ - 3}}$, one finds that ${\Omega _{de}}$ satisfies \cite{Wei}
\begin{equation}\label{eq27}
{\Omega '_{de}} = \frac{{{\Omega _{de}}}}{a}(1 - {\Omega _{de}})(3 - \frac{2}{n}\frac{{\sqrt {{\Omega _{de}}} }}{a}) ,
\end{equation}
where the prime represents the derivative with respect to the scale factor $a$.
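Eq. (\ref{eq27}) has no closed-form solution, but it is straightforward to integrate numerically; the sketch below (ours; the values of $n$ and ${\Omega _{de}}$ today are purely illustrative) evolves ${\Omega _{de}}(a)$ backwards from today and then evaluates Eq. (\ref{eq26}):
\begin{verbatim}
# Sketch: integrate d(Omega_de)/da of the evolution equation above
# from a = 1 backwards, then evaluate the equation of state w_de.
import numpy as np
from scipy.integrate import solve_ivp

n, om0 = 3.0, 0.7                         # illustrative values
rhs = lambda a, y: (y[0]/a)*(1.0 - y[0])*(3.0 - 2.0*np.sqrt(y[0])/(n*a))
sol = solve_ivp(rhs, (1.0, 1e-3), [om0], dense_output=True, rtol=1e-8)

a = 0.5
om = float(sol.sol(a)[0])
w = -1.0 + 2.0*np.sqrt(om)/(3.0*n*a)      # equation of state
print(om, w)
\end{verbatim}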
The equation of state of the dark energy has some interesting features. In the dark-energy-dominated phase, the energy density can drive the universe to accelerated expansion if ${\omega _{de}} < - 1/3$. From Eq. (\ref{eq26}), it is easy to see that when $a \to \infty $, ${\Omega _{de}} \to 1$, and thus ${\omega _{de}} \to - 1$ at late times. Moreover, in the matter-dominated epoch, $a \propto {t^{2/3}}$, and from Eqs. (\ref{eq23}) and (\ref{eq24}) one has ${\rho _{de}} \propto {a^{ - 1}}$. Then, from the dark energy conservation equation, one obtains ${\omega _{de}} = - 2/3$.
For constant ${\omega _{de}}$, the deceleration factor ${q_0}$ is given by ${q_0} = 0.5 + 1.5(1 - {\Omega _m}){\omega _{de}}$; we fix ${\Omega _m} = 0.3$ for the current universe, taken from $\Lambda CDM$ cosmology with the SNIa Pantheon sample \cite{Betoule}. It is easy to see that ${q_0} < 0$ when ${\omega _{de}} \lesssim - 1/2$, which implies that the energy density can drive the current universe to accelerated expansion if ${\omega _{de}} \lesssim - 1/2$.
Meanwhile, we also consider the case in which the time scale $t$ is replaced by the future event horizon ${R_h}$, as proposed by Li \cite{Li}; the future event horizon is
\begin{equation}\label{eq28}
{R_{\rm{h}}} = a\int_t^\infty {\frac{{dt}}{a}} ,
\end{equation}
so the dark energy density is
\begin{equation}\label{eq29}
{\rho _{de}} = \frac{{3{n^2}{M_p}^2}}{{{R_h}^2}}.
\end{equation}
Then, combining with the dark energy conservation equation, the equation of state can be given by \cite{Li}
\begin{equation}\label{eq30}
{\omega _{de}} = - \frac{1}{3} - \frac{2}{{3n}}\sqrt {{\Omega _{de}}} .
\end{equation}
In the same way, at early times, where ${\Omega _{de}} \to 0$, one has ${\omega _{de}} \to - 1/3$; at late times, where ${\Omega _{de}} \to 1$, ${\omega _{de}} \to - 1$ for $n = 1$.
\subsection{The invariable ground state energy for the scalar bosons and energy density}
We also consider the other case, in which the expected energy ${\varepsilon _c}$ of a scalar boson does not change with time; with ${\varepsilon _c}$ constant, Eq. (\ref{eq17}) becomes
\begin{equation}\label{eq31}
u = {\varepsilon _c}\delta {t^{ - 3}}.
\end{equation}
When one considers that the dark energy density is constrained by the expected value of the Higgs potential, Eq. (\ref{eq31}) seems more persuasive.
From Eqs. (\ref{eq4}) and (\ref{eq31}), introducing the numerical constant ${n^2}$, we have
\begin{equation}\label{eq32}
{\Omega _{de}} = \frac{{{n^2}}}{{t{\kern 1pt} {H^2}}}.
\end{equation}
Then we can use the SNIa Pantheon sample and the Planck 2018 CMB angular power spectra to constrain the parameters of the dark energy models based on Eqs. (\ref{eq25}) and (\ref{eq32}).
\section{The observational data used}
\subsection{SNIa Pantheon sample and the Planck 2018 CMB angular power spectra}
For the SNIa data, the Pantheon sample is the combination of SNe Ia from Pan-STARRS1 (PS1), the Sloan Digital Sky Survey (SDSS), SNLS, and various low-z and Hubble Space Telescope samples. The Panoramic Survey Telescope $\&$ Rapid Response System (Pan-STARRS or PS1) is a wide-field imaging facility built by the University of Hawaii's Institute for Astronomy, which is used for a variety of scientific studies from the nearby to the very distant Universe, and it has provided 279 SNe Ia for the Pantheon sample \cite{Scolnic}. The Supernova Legacy Survey program detected approximately 2000 high-redshift supernovae between 2003 and 2008, and the Pantheon sample contains about 236 SNe Ia based on its first three years of data, which can be used to investigate the expansion history of the universe, improve the constraints on cosmological parameters, and study dark energy \cite{Conley}.
In 2014, the SDSS survey released a large catalogue containing light curves, spectra, classifications, and ancillary data of 10,258 variable and transient sources \cite{Gunn1998,York,Gunn2006,Sako2007,Betoule,Sako2014}. The release generated the largest sample of supernova candidates, 500 of which have been confirmed as SNe Ia by spectroscopic follow-up; 335 SNe Ia in the Pantheon sample are taken from this spectroscopic sample. The rest of the Pantheon sample is from the CfA$1-4$, CSP, and Hubble Space Telescope (HST) SN surveys \cite{Conley}. This extended sample of 1048 SNe Ia is called the Pantheon sample.
The Planck 2018 CMB angular power spectra data are based on observations obtained with Planck (http://www.esa.int/Planck), an ESA science mission with instruments and contributions directly funded by ESA Member States, NASA, and Canada.
\subsection{SALT2 calibration for Pantheon sample}
When correcting the apparent magnitudes of the Pantheon sample, because the dark energy equation of state is a priori unknown, we use SALT2 and a Taylor expansion of the ${d_H} - z$ relation to calibrate the distance moduli directly, which simplifies the problem.
The Taylor expansion of ${{\rm{d}}_H} - z$ relation can be given by
\begin{equation}\label{eq36}
{d_{H,th}} = \frac{1}{1 - y}\left\{ y - \frac{{q_0} - 1}{2}{y^2} + \left[ \frac{3{q_0}^2 - 2{q_0} - {j_0}}{6} + \frac{2 - {\Omega _{{k_0}}}}{6} \right]{y^3} \right\},
\end{equation}
where $y = z/(1 + z)$; we adopt this variable substitution to reduce the calculation error for high-redshift data. ${q_0}$ is the deceleration parameter, ${j_0}$ is the jerk parameter, and ${\Omega _{{k_0}}}$ is the curvature term.
Meanwhile, the relation of distance modulus $\mu $ and luminosity distance ${d_H}$ can be written as
\begin{equation}\label{eq35}
\mu = 5{\log _{10}}{d_H} + 25 - 5{\log _{10}}{H_0}.
\end{equation}
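For reference, a direct implementation of Eqs. (\ref{eq36}) and (\ref{eq35}) is given below (a sketch of ours; the speed-of-light factor is taken to be absorbed into the units, following the form of Eq. (\ref{eq35}), and the example parameter values are illustrative):
\begin{verbatim}
# Sketch: theoretical distance modulus from the y = z/(1+z) Taylor
# expansion of the Hubble-free luminosity distance.
import numpy as np

def d_H(z, q0, j0, omega_k0=0.0):
    y = z/(1.0 + z)
    cubic = (3.0*q0**2 - 2.0*q0 - j0)/6.0 + (2.0 - omega_k0)/6.0
    return (y - (q0 - 1.0)/2.0*y**2 + cubic*y**3)/(1.0 - y)

def mu_th(z, q0, j0, H0):
    return 5.0*np.log10(d_H(z, q0, j0)) + 25.0 - 5.0*np.log10(H0)

print(mu_th(0.5, q0=-0.57, j0=-0.2, H0=70.0))
\end{verbatim}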
We use SALT2 and the Taylor expansion of the ${d_H} - z$ relation to calibrate the Pantheon sample directly. The distance modulus ${\mu _{ob}}$ correction formula is given by the SALT2 model \cite{Guy2005,Guy2007}
\begin{equation}\label{eq33}
{\mu _{B,ob}} = {m_B} - {M_B}{\rm{ + }}\alpha \times {x_1}{\rm{ + }}\beta \times c{\rm{ + }}\Delta B,
\end{equation}
where ${m_B}$ corresponds to the observed peak magnitude in the rest-frame $B$ band, ${x_1}$ describes the time stretching of the light curve, $c$ describes the SN colour at maximum brightness, $\Delta B$ is a bias correction based on previous simulations, and $\alpha $, $\beta $ are nuisance parameters in the distance estimate. ${M_B}$ is the absolute $B$-band magnitude, which depends on the host galaxy properties \cite{Betoule}. Notice that ${M_B}$ is related to the host stellar mass (${M_{stellar}}$) by a simple step function
\begin{equation}\label{eq34}
{M_B} = \left\{ \begin{array}{ll}
M_B^1 & {\rm if}\ {M_{stellar}} < {10^{10}}{M_ \odot }\\
M_B^1 + {\Delta _M} & {\rm otherwise}
\end{array} \right.
\end{equation}
Here ${M_ \odot }$ is the mass of the Sun.
From Eq. (\ref{eq35}) and (\ref{eq33}), the ${\chi ^2}$ of Pantheon data can be calculated as
\begin{equation}\label{eq37}
{\chi ^2} = \Delta {\mu ^T}C_{{\mu _{ob}}}^{ - 1}\Delta \mu ,
\end{equation}
where $\Delta \mu = {\mu} - {\mu _{th}}$, and ${C_\mu }$ is the covariance matrix of the distance modulus $\mu$; we only consider the statistical error, with
\begin{equation}\label{eq38}
{C_{\mu ,stat}} = {V_{{m_B}}} + {\alpha ^2}{V_{{x_1}}} + {\beta ^2}{V_c} + 2\alpha {V_{{m_B},{x_1}}} - 2\beta {V_{{m_B},c}} - 2\alpha \beta {V_{{x_1},c}}.
\end{equation}
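A sketch (ours) of how Eqs. (\ref{eq33}), (\ref{eq37}) and (\ref{eq38}) fit together, treating ${C_\mu}$ as diagonal with the per-supernova statistical variance:
\begin{verbatim}
# Sketch: SALT2-corrected distance moduli and the statistical
# chi-square for the sample; all inputs are per-supernova arrays.
import numpy as np

def mu_obs(mB, MB, x1, c, alpha, beta, dB):
    return mB - MB + alpha*x1 + beta*c + dB

def chi2_stat(mu, mu_th, v_mB, v_x1, v_c,
              v_mB_x1, v_mB_c, v_x1_c, alpha, beta):
    var = (v_mB + alpha**2*v_x1 + beta**2*v_c
           + 2.0*alpha*v_mB_x1 - 2.0*beta*v_mB_c
           - 2.0*alpha*beta*v_x1_c)
    r = np.asarray(mu) - np.asarray(mu_th)
    return float(np.sum(r**2/var))
\end{verbatim}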
From Eq. (\ref{eq37}), in combination with the Pantheon sample, we obtain the statistical averages and errors of the distance modulus $\mu $ and the parameters ${q_0}$ and ${j_0}$. Then we use the calibrated Pantheon sample to constrain the dark energy model parameters.
\section{Using the Pantheon sample to fit the model parameters}
\subsection{The fitting of model \uppercase\expandafter{\romannumeral1} parameters}
For model \uppercase\expandafter{\romannumeral1}, ${\Omega _{de}} = {n^2}{H^{ - 2}}/t$, we first choose the age of the universe $T$ as the time scale $t$ and consider the Taylor expansion of the $T-z$ relation in nearly flat space, which is
\begin{equation}\label{eq39}
T - {T_0} = - \frac{1}{{{H_0}}}(y - \frac{{{q_0}}}{2}{y^2} + \frac{{2{q_0}^2 - {j_0}}}{6}{y^3}),
\end{equation}
where $y = z/(1 + z)$; we adopt this variable substitution to reduce the calculation error for high-redshift data. ${T_0}$ is the present age of the universe.
Then the Hubble parameter in nearly flat space can be written as
\begin{equation}\label{eq40}
E(z) = \sqrt {{\Omega _m}{{(1 + z)}^3} + {\Omega _R}{{(1 + z)}^4} + \frac{{{n^2}}}{{T{\kern 1pt} {H_0}^2}}} .
\end{equation}
where $E(z) = H(z)/{H_0}$. From Eqs. (\ref{eq39}) and (\ref{eq40}), one has ${T_0} = {n^2}{H_0}^{ - 2}/(1 - {\Omega _m})$ when ${\Omega _R} \ll {\Omega _m}$.
We use the ${\chi ^2}$ statistical fitting method to constrain the parameters
\begin{equation}\label{eq41}
{\chi ^2}_{Pantheon} = \Delta {\mu ^T}{C_\mu }^{ - 1}\Delta \mu + \Delta {q_0}^2{\sigma _{{q_0}}}^{ - 2} + \Delta {j_0}^2{\sigma _{{j_0}}}^{ - 2},
\end{equation}
where $\Delta \mu = {\mu} - {\mu _{th}}$, $\Delta {q_0} = {q_0} - {q_{0,prior}}$, and $\Delta {j_0} = {j_0} - {j_{0,prior}}$; ${q_{0,prior}}$ and ${j_{0,prior}}$ are given by the SALT2 calibration method in Section 5.2.
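A sketch (ours; the parameter names are illustrative) of the Model \uppercase\expandafter{\romannumeral1} likelihood ingredients, Eqs. (\ref{eq39})--(\ref{eq41}):
\begin{verbatim}
# Sketch: E(z) for Model I with the age T(z) from the Taylor
# expansion, plus Gaussian priors on q0 and j0 in the chi-square.
import numpy as np

def T_of_z(z, T0, H0, q0, j0):
    y = z/(1.0 + z)
    return T0 - (y - q0/2.0*y**2 + (2.0*q0**2 - j0)/6.0*y**3)/H0

def E_model1(z, Om, n2_over_H0, H0, T0, q0, j0, OR=0.0):
    T = T_of_z(z, T0, H0, q0, j0)          # n^2/(T H0^2) = (n^2/H0)/(T H0)
    return np.sqrt(Om*(1.0 + z)**3 + OR*(1.0 + z)**4
                   + n2_over_H0/(T*H0))

def chi2_total(dmu, C_inv, q0, j0, q0p, j0p, s_q0, s_j0):
    return (dmu @ C_inv @ dmu
            + (q0 - q0p)**2/s_q0**2 + (j0 - j0p)**2/s_j0**2)
\end{verbatim}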
Following this, we combine the Pantheon data with Eq. (\ref{eq40}) to constrain the parameters, and then use MCMC techniques and the ${\chi ^2}$ statistical fitting method to obtain the statistical mean values and the minimum chi-square (without systematic errors) of the parameters ${\Omega _m},{n^2}{H_0}^{ - 1},{q_0},{j_0},{t_0}{H_0}$, which are shown in Table~\ref{tab1}. The 68.3\%, 95.4\% and 99.7\% confidence regions in the $({\Omega _m},{n^2}{H_0}^{ - 1})$ plane are shown in Fig.~\ref{fig1}.
\begin{table*}
\footnotesize
\centering
\caption{ Statistical mean values of the cosmological parameters from SN Ia Pantheon sample observation data combined with Model \uppercase\expandafter{\romannumeral1}.}
\label{tab1}
\vspace{0.3cm}
\begin{tabular}{@{}ccccccc@{}}
\hline
${\Omega _{de}} = {n^2}{H^{ - 2}}/t$&${\Omega _{\rm{m}}}$&${n^2}{H_0}^{ - 1}$&$q_0$&$j_0$&${t_0}{H_0}\ ({t_0} = {\eta _{{T_0}}},\ {T_0})$&$\chi _{\min }^2/d.o.f.$\\ \hline
$t = {\eta _T}$&0.23$\pm$0.013 & 3.3$\pm$0.5 & -0.57$\pm$0.28 & -0.2$\pm$2.7 &4.3$\pm$0.7 &1040$/$1050 \\
$t = T$ &0.236$\pm$0.012 & 2.84$\pm$0.56 & -0.44$\pm$0.31&-0.95$\pm$2.8 &3.7$\pm$0.74 &1044$/$1050 \\
\hline
\end{tabular}
\end{table*}
\begin{figure*}[tbp]
\begin{center}
\includegraphics[scale=0.5]{age_h0_pantheon.pdf}
\caption{68.3\%, 95.4\%, and 99.7\% confidence region of the (${\Omega _m}$, ${n^2}{H_0}^{ - 1}$ ) plane from Pantheon observation data combined with Model \uppercase\expandafter{\romannumeral1}, the + dots in responding color represent the best fitting values for ${\Omega _m}$, ${n^2}{H_0}^{ - 1}$.}
\label{fig1}
\end{center}
\end{figure*}
Moreover, we can also choose the conformal age ${\eta _T}$ as the time scale $t$ and consider the Taylor expansion of the ${\eta _T} - z$ relation in nearly flat space, which is
\begin{equation}\label{eq42}
{\eta _T} - {\eta _{{T_0}}} = \frac{1}{{H_0}}\left\{ y - \frac{{q_0} - 1}{2}{y^2} + \left[ \frac{3{q_0}^2 - 2{q_0} - {j_0}}{6} \right]{y^3} \right\},
\end{equation}
where ${\eta _{{T_0}}}$ is the current conformal age of the universe.
The Hubble parameter in nearly flat space can be expressed as
\begin{equation}\label{eq43}
E(z) = \sqrt {{\Omega _m}{{(1 + z)}^3} + {\Omega _R}{{(1 + z)}^4} + \frac{{{n^2}}}{{{\eta _T}{\kern 1pt} {H_0}^2}}} .
\end{equation}
Then, from Eqs. (\ref{eq42}) and (\ref{eq43}), ${\eta _{{T_0}}}$ satisfies ${\eta _{{T_0}}} = {n^2}{H_0}^{ - 2}/(1 - {\Omega _m})$.
In the same way, we use Eq. (\ref{eq43}) with the Pantheon sample to fit the model parameters; the statistical results are shown in Table~\ref{tab1}, and Fig.~\ref{fig1} shows the 68.3\%, 95.4\% and 99.7\% confidence regions in the $({\Omega _m},{n^2}{H_0}^{ - 1})$ plane.
\subsection{The fitting of model \uppercase\expandafter{\romannumeral2} parameters}
When considering model \uppercase\expandafter{\romannumeral2}, in which ${\Omega _{de}} = {n^2}{H^{ - 2}}/{t^2}$, we can also select the age of the universe $T$ as the time scale $t$, and the Hubble parameter in nearly flat space can be written as
\begin{equation}\label{eq44}
E(z) = \sqrt {{\Omega _m}{{(1 + z)}^3} + {\Omega _R}{{(1 + z)}^4} + \frac{{{n^2}}}{{T{{\kern 1pt} ^2}{H_0}^2}}} .
\end{equation}
Alternatively, if we select the conformal age ${\eta _T}$ as the time scale $t$, the Hubble parameter in near flat space satisfies
\begin{equation}\label{eq45}
E(z) = \sqrt {{\Omega _m}{{(1 + z)}^3} + {\Omega _R}{{(1 + z)}^4} + \frac{{{n^2}}}{{{\eta _T}{{\kern 1pt} ^2}{H_0}^2}}} .
\end{equation}
Using the specified models of Eqs. (\ref{eq44}) and (\ref{eq45}), we obtain the statistical results for the parameters, which are shown in Table~\ref{tab2}; Fig.~\ref{fig2} shows the 68.3\%, 95.4\% and 99.7\% confidence regions in the $({\Omega _m},n)$ plane.
\begin{table*}
\footnotesize
\centering
\caption{ Statistical mean values of the cosmological parameters from SN Ia Pantheon sample observation data combined with Model \uppercase\expandafter{\romannumeral2}.}
\label{tab2}
\vspace{0.3cm}
\begin{tabular}{@{}ccccccc@{}}
\hline
${\Omega _{de}} = {n^2}{H^{ - 2}}/{t^2}$&${\Omega _m}$&$n$&$q_0$&$j_0$&${t_0}{H_0}\ ({t_0} = {\eta _{{T_0}}},\ {T_0})$&$\chi _{\min }^2/d.o.f.$\\ \hline
$t = {\eta _T}$&0.19$\pm$0.016 & 4.4$\pm$0.44 & -0.64$\pm$0.27&-1.1$\pm$2.6 &5.5$\pm$0.62 &1042$/$1050 \\
$t = T$ &0.22$\pm$0.01 & 4.5$\pm$0.4 & -0.48$\pm$0.3&-0.7$\pm$2.8 &5.7$\pm$0.7 &1044$/$1050 \\
\hline
\end{tabular}
\end{table*}
\begin{figure*}
\begin{center}
\includegraphics[scale=0.5]{age_pantheon.pdf}
\caption{68.3\%, 95.4\%, and 99.7\% confidence region of the (${\Omega _m}$, $n$ ) plane from Pantheon observation data combined with Model \uppercase\expandafter{\romannumeral2}. The + dots in responding color represent the best fitting values for ${\Omega _m}$, $n$.}
\label{fig2}
\end{center}
\end{figure*}
If we take the current age of the universe to be ${T_0} = 13.5 \pm 0.5\,Gyr$, as inferred from globular clusters \cite{Valcin}, and the Hubble constant to be ${H_0} = 73.5 \pm 1.4\,km\,{s^{ - 1}}\,Mp{c^{ - 1}}$ from the NGC 4258 distance measurement \cite{Reid}, then, combining these with the fitting results obtained from the Pantheon sample in Tables~\ref{tab1} and \ref{tab2}, we discover that the choice of ${\eta _T}$ as the time scale may be more persuasive. Furthermore, for the mean value of ${\Omega _m}$, Conley et al. provided the statistical result ${\Omega _m} = 0.19_{ - 0.10}^{ + 0.08}$ for the constant $wCDM$ model from the combination of SNLS, HST, low-z, and SDSS data \cite{Conley}; our result is in agreement with it.
If only the Pantheon data are used to investigate dark energy, the statistical results indicate that the age dark energy models, including ${\Omega _{de}} = {n^2}{H^{ - 2}}/{\eta _T}$ and ${\Omega _{de}} = {n^2}{H^{ - 2}}/{\eta _T}^2$, have no evident superiority over $\Lambda CDM$ in terms of the minimum chi-square.
\section{Using CMB angular power spectra to constrain the model parameters}
In addition, we can use the Planck 2018 CMB angular power spectra data to constrain the model parameters. When $z \ge 2.5$, ${\Omega _m}{(1 + z)^3}/{\Omega _{de}}(z) \gg 1$, so we consider the Taylor expansion of the ${\eta _T} - z$ relation to be applicable to the calculation of the CMB angular power spectra as well.
For the calculation of $C_{TT,l}^s$ that ignores the Sunyaev-Zeldovich (SZ) effect, we refer to Weinberg 2008 \cite{Weinberg}; for the SZ effect, we refer to Bond et al. 2005 \cite{Bond}. We use the Planck 2018 data together with Models \uppercase\expandafter{\romannumeral1} and \uppercase\expandafter{\romannumeral2} to constrain the cosmological parameters; the statistical mean values of the parameters are shown in Table~\ref{tab3}, and Fig.~\ref{fig3} shows the theoretical CMB TT power spectra computed with the best fitting values constrained by the Planck 2018 data.
In Table~\ref{tab3}, we provide the statistical mean value of the Hubble constant ${H_0} = 73.2 \pm 1.3\,{\rm{km}}\,{s^{ - 1}}\,Mp{c^{ - 1}}$, which is consistent with the result obtained using the NGC 4258 distance measurement \cite{Reid}.
For the dark energy density, when using the Planck 2018 CMB data, the statistical results indicate that the age dark energy models, including ${\Omega _{de}} = {n^2}{H^{ - 2}}/{\eta _T}$ and ${\Omega _{de}} = {n^2}{H^{ - 2}}/{\eta _T}^2$, have evident superiority over $\Lambda CDM$ in terms of the minimum chi-square of $l(l + 1)C_{TT,l}^s\ (l = 30 \sim 1500)$.
\begin{table*}
\normalsize
\centering
\caption{The statistical mean values of the cosmological parameters from The Planck 2018 TT power spectra data combined with the Model \uppercase\expandafter{\romannumeral1} and \uppercase\expandafter{\romannumeral2}.}
\label{tab3}
\begin{tabular}{@{}ccc@{}}
\hline
&${\Omega _{de}} = {n^2}{H^{ - 2}}/{\eta _T}$&${\Omega _{de}} = {n^2}{H^{ - 2}}/{\eta _T}^2$ \\ \hline
${\Omega _b}{h^2}$&0.0214$\pm$0.00012&0.023$\pm$0.0002\\
${\Omega _c}{h^2}$&0.11$\pm$0.0014& 0.107$\pm$0.0014\\
${10^{10}}{A_s}{e^{ - 2\tau }}$& 1.29$\pm$0.016& 1.316$\pm$0.027\\
${n_s}$ &0.93$\pm$0.004&0.97$\pm$0.0025\\
${N_{eff}}$&3.45$\pm$0.13&3.5$\pm$0.35\\
${h_0}$ &0.742$\pm$0.012&0.732$\pm$0.013\\
${\Omega _m}$&0.237$\pm$0.018&0.243$\pm$0.022\\
${n^2}{H_0}^{ - 1},\ n$&5.5$\pm$0.33&4.85$\pm$0.54\\
$q_0$ &-0.38$\pm$0.24&-0.7$\pm$0.59\\
$j_0$&1.3$\pm$0.22 &4.7$\pm$1.2 \\
$\sigma _8^{sz}$&1.1$\pm$0.011 &1.06$\pm$0.012 \\ \hline
\end{tabular}
\end{table*}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[scale=0.7]{cmb1.pdf}
\caption{ Blue dots in responding color represent Planck 2018 CMB TT angular power spectra. The red, yellow and cyan lines are the CMB theoretical values of angular power spectra from $\Lambda CDM$, Model \uppercase\expandafter{\romannumeral1} and \uppercase\expandafter{\romannumeral2}, respectively, using the best fitting values, which are constrained by Planck 2018 data. }
\label{fig3}
\end{center}
\end{figure*}
\section{CONCLUSIONS}
Understanding the physical nature of dark energy is important for our universe. In addition to the study of particle physics, dark energy may also enable us to further explore the nature of the vacuum. Whether the dark energy is derived from scalar bosons, from other particles, or from neither still needs to be further verified.
We explore a theoretical possibility that the dark energy density is derived from massless scalar bosons in vacuum. Assuming massless scalar bosons fall into the horizon boundary with the expansion of the universe, the scalar bosons satisfy P-symmetry under the parity transformation. P-symmetry, together with the quantum fluctuation of space-time, enables us to estimate the dark energy density. Meanwhile, to explain the physical nature of the negative pressure, we deduce it from the increase of the entropy density with the rise of degeneracy in the Boltzmann system.
Next, we used the SNIa Pantheon sample and the Planck 2018 CMB angular power spectra to constrain the specified models. The statistical results indicate that the age dark energy models have evident superiority over $\Lambda CDM$ in terms of the minimum chi-square when only the CMB data are used. Furthermore, we obtain the statistical mean value of the Hubble constant ${H_0} = 73.2 \pm 1.3\,{\rm{km}}\,{s^{ - 1}}\,Mp{c^{ - 1}}$, which is consistent with the result obtained using the NGC 4258 distance measurement.
Finally, we extend our discussion to the future of the universe. From Eq. (\ref{eq22}), if ${W_p}$ satisfies ${W_p}:3 \to 2 \to 1$, the universe may continue expanding in the future; however, if ${W_p}:1 \to 2 \to 3$, it will change from expansion to contraction. Thus, the properties of dark energy will dominate the future of the universe, and they will similarly determine the future of humanity.
\section{Acknowledgments}
This work was supported by Xiaofeng Yang's Xinjiang Tianchi Bairen project and the CAS Pioneer Hundred Talents Program. This work was also partly supported by the National Key R\&D Program of China under grant No.\,2018YFA0404602.
\section{Introduction}\label{sec:intro}
Analyses by \cite{Schelhaas2003} showed that storms were responsible for more than half of the total damage in European forests for the period 1950--2000. The majority of this forest damage occurred in the Alpine zone of mountainous regions. In Austria, over the period 2002--2010, storms damaged 3.1 million m\textsuperscript{3} of timber annually \citep{Thom2013}, representing 0.26\,\% of the total growing stock as well as 12\,\% of the total annual fellings.
In Switzerland, storm damage in the period 1985--2007 was 17 and 22 times higher than in the two preceding 50-year periods \citep{Usbeck2010}.
According to \cite{Usbeck2010}, the possible explanations for such an increasing trend are manifold and include increased growing stocks, an enlarged forested area, milder winters with a tendency for wet and unfrozen soils, and higher recorded maximum gust wind speeds.
Since 1990, 85\,\% of all primary damage in the Western, Central and Northern European regions was caused by catastrophic storms with maximum gust wind speed between 50--60 ms\textsuperscript{-1} \citep{Gregow2017}. The relevance of storm damage is likely to increase, as simulations with climate scenario data and forest stand projections suggested a higher probability of exceeding the critical wind speed and, hence, wind throw events \citep{Blennow2008,Blennow2010}. Such a trend could negatively impact the carbon balance, especially in Western and Central European regions \citep{Lindroth2009}.
Wind throw events have direct and indirect impacts on long-term sustained yield planning. A common indirect cost stems from the fact that storm-felled trees provide a surplus of breeding material for bark beetles and promote their rapid population increase, causing extra timber losses in the subsequent years \citep{Marini2017}. Hence, it is recommended that fallen trees be removed within two years of the disturbance \citep{DeGroot2018}. Following a wind throw event and prior to harvesting the storm-felled trees, strategic salvage harvest planning is needed \citep{Jirikowski2003}. Implementing such plans requires unplanned and unbudgeted logging road and site development efforts as well as interaction with the external harvesting, transportation, and processing industry.
To inform strategic salvage harvest planning, a rapid and accurate estimate of the spatial extent and local severity of damage is needed. The task is to provide such estimates for individual and collections of discrete blowdowns. In most situations, a paucity of existing field inventory data within or adjacent to blowdowns precludes design-based estimation. Rather, field data from an existing set of sparsely sampled inventory plots or a small purposively selected set of sample plots, can be coupled with auxiliary data using a model to yield viable estimates. In such settings, the forest response variable measured on sample plots is regressed against meaningful auxiliary data. Commonly, such auxiliary data come as remotely sensed variables collected via satellite, aircraft, or unmanned aerial vehicle (UAV) based sensors. Remotely sensed data are routinely used to support forest inventories and have been shown to increase accuracy and precision of the estimates, and reduce field data collection efforts; see \cite{Koehl2006} and other references herein.
Increasingly, laser imaging detection and ranging (LiDAR) data are being incorporated into forest inventory and mapping efforts. LiDAR measurements offer high-resolution and 3-dimensional (3D) representation of forest canopy structure metrics that are often related to forest variables of interest. For example, LiDAR height metrics have been successfully used to estimate average tree height \citep{MagnussenBoudewyn1998}, stem density \citep{NaessetBjerknes2001}, basal area \citep{Naesset2002,Magnussen2010}, biomass \citep{Finley2017, Babcock2018} and growing stock volume \citep{Nelson1988,Naesset1997,Maltamo2006a}.
In practice, different techniques are used to predict forest variables using field measurements and remote sensing data. Nonparametric imputation methods, using some flavor of nearest neighbor (NN) interpolation \citep{MoeurStage1995}, achieved robust prediction for possibly multivariate response variable vectors \citep{TomppoHalme2004,LeMayTemesgen2005,Maltamo2006b,PackalenMaltamo2007}. \cite{Hudak2008} demonstrated the NN prediction accuracy can be enhanced through resampling and classifications with ``random forests'' \citep{Breiman2001}.
A key shortcoming of NN imputation methods, however, is the lack of statistically robust variance estimates, with the exception of some approximations presented and assessed in \cite{McRoberts2007,McRoberts2011} and \cite{Magnussen2013}.
Similar to NN imputation methods, geostatistical methods yield accurate and precise spatial prediction of forest variables. Additionally, geostatistical methods provide a solid theoretical foundation for probability-based uncertainty quantification, see, e.g., \cite{VerHoef2013}. In such settings, point-referenced observations are indexed by spatial locations, e.g., latitude and longitude, and predictive models build on classical kriging methods \citep{Cressie1993,Schabenberger2004,Chiles2013}. Additional flexibility for specifying and estimating regression mean and covariance components comes from recasting these predictive models within a Bayesian inferential framework \citep{Banerjee2014}. Similar extensions and benefits have been demonstrated for data with observations indexed by areal units, particularly in small-area estimation settings, many of which build on the Fay-Herriot model \citep{FayHerriot1979}, see, e.g., \cite{VerPlanck2017, VerPlanck2018} and references therein.
Following the study region description and data in Section~\ref{sec:data}, a general model that subsumes various sub-models is developed in Section~\ref{sec:model} along with implementation details for parameter estimation, prediction, and model selection in Section~\ref{sec:implementation}. Here too, we offer two different approaches for prediction over the areal blowdown units. Results presented in Section~\ref{sec:results} focus first on model selection then comparison among blowdown predictions, which is followed by discussion in Section~\ref{sec:discussion}. Some final remarks and future direction are provided in Section~\ref{sec:conclusion}.
\section{Materials and methods}\label{sec:methods}
\subsection{Study region and model data}\label{sec:data}
The study region was located in the southern region of the Austrian federal state of Carinthia, within the upper Gail valley and near the Dellach forest research and training center, which is jointly operated by the Institute of Forest Growth and the Institute of Forest Engineering of the University of Natural Resources and Life Sciences Vienna (Figure~\ref{fig:austria}). On October 28, 2018, the storm Adrian formed over the western Mediterranean Sea and achieved wind gust speeds of 130\,km/h throughout Carinthia. Within 72 hours, the storm produced 627\,l/m$^2$ of precipitation at the Pl{\"o}ckenpass meteorological station, located near the study region, and inflicted heavy damage on the region's forests \citep{Zimmermann2018}.
\begin{figure}[!ht]
\begin{center}
\subfigure[]{\includegraphics[width=10cm,trim={0cm 0cm 0cm 0cm},clip]{Map_Austria.pdf}\label{fig:austria}}\\
\subfigure[]{\includegraphics[width=10cm,trim={0cm 0cm 0cm 0cm},clip]{Map_SamplePlots.pdf}\label{fig:samplePlots}}
\caption{\subref{fig:austria} Location of the study region in Southern Carinthia, Austria. \subref{fig:samplePlots} Storm damage areas (polygons) and sample plots (red dots) in the study region.} \label{fig:StudySite1}
\end{center}
\end{figure}
Forest blowdown, caused by Adrian, was widely distributed across the Hermagor administrative district (Bezirk), which covers the study region. Using high-resolution aerial images provided by the Carinthian Forest Service, blowdown occurrences in the district were identified and delineated with high accuracy. As illustrated in Figure~\ref{fig:samplePlots}, the blowdowns were concentrated in five distinct sub-regions labeled Frohn, Laas, Liesing, Mauthen, and Ploecken. A total of 564 blowdowns were delineated, totaling 212.3\,ha of affected forest. Table~\ref{tab:SumStatAreaPlot} provides the number of blowdown occurrences across the sub-regions and affected area characteristics.
\begin{table}
\caption{Summary statistics of the digitized blowdowns and collected sample plot data by sub-region.}\label{tab:SumStatAreaPlot}
\small
\setlength{\tabcolsep}{0.5pt}
\begin{tabularx}{1.0\linewidth}{
L{0.075\linewidth}
R{0.070\linewidth}
R{0.075\linewidth}
R{0.075\linewidth}
R{0.075\linewidth}
R{0.075\linewidth}
R{0.075\linewidth}
R{0.070\linewidth}
R{0.075\linewidth}
R{0.075\linewidth}
R{0.075\linewidth}
R{0.075\linewidth}
R{0.075\linewidth}
}
& \multicolumn{6}{c}{Blowdown areas} & \multicolumn{6}{c}{Sample plots}\\
\cmidrule(lr){2-7} \cmidrule(lr){8-13}
& & \multicolumn{5}{c}{Area (ha)} & & \multicolumn{5}{c}{Growing stock volume (m$^3$/ha)}\\
\cmidrule(lr){3-7} \cmidrule(lr){9-13}
& n & mean & med & sd & min & max & N & mean & med & sd & min & max \\
\midrule
Frohn & 273 & 0.230 & 0.065 & 0.600 & 0.004 & 7.231 & 21 & 593 & 579 & 227 & 188 & 979 \\
Laas & 152 & 0.362 & 0.138 & 0.565 & 0.004 & 2.727 & 17 & 785 & 772 & 333 & 142 & 1382 \\
Liesing & 5 & 0.581 & 0.348 & 0.621 & 0.115 & 1.607 & 7 & 689 & 698 & 67 & 563 & 757 \\
Mauthen & 62 & 0.621 & 0.190 & 1.075 & 0.013 & 4.969 & 6 & 762 & 748 & 323 & 403 & 1220 \\
Ploecken & 72 & 0.738 & 0.242 & 1.724 & 0.008 & 12.322 & 11 & 773 & 773 & 136 & 528 & 968 \\
\midrule
Total & 564 & 0.376 & 0.110 & 0.892 & 0.004 & 12.322 & 62 & 705 & 730 & 256 & 142 & 1382 \\
\end{tabularx}
\end{table}
Forest inventory data were not available for the study region; therefore, a field measurement campaign was initiated in May 2020 (post-Adrian) to collect growing stock timber volume measurements suitable for estimating volume loss due to blowdown. A total of $n$=62 sample plots were installed in unaffected forest adjacent to blowdowns (Figure~\ref{fig:samplePlots}). Plot locations were chosen to characterize forest similar in structure and composition to the pre-blowdown forest. No plots were located within blowdowns. Plot measurements were conducted using the terrestrial laser scanning (TLS) GeoSLAM ZEB HORIZON system (GeoSLAM Ltd., Nottingham, UK). Position and diameter at breast height (DBH) for the approximately 5586 measured trees were derived from 3D point clouds on a 20\,m radius plot using fully automated routines demonstrated in \cite{Gollob2019,Gollob2020} and in \cite{Ritter2017,Ritter2020}. Tree height was estimated using a N\"{a}slund function formulated as a mixed-effects model with plot-level random effects. Stem volume was calculated using a traditional stem-form function \citep{Pollanschuetz1965}. For each plot, growing stock timber volume was expressed as m$^3$/ha (i.e., computed as the sum of tree volume scaled by the 7.958 fixed-area plot tree expansion factor). Table~\ref{tab:SumStatAreaPlot} provides a summary of growing stock timber volume by sub-region, the values of which serve as the response variable observations in subsequent regression models.
The Carinthian Forest Service provided a comprehensive set of aerial laser scanning (ALS) variables summarized on a 1\,m $\times$ 1\,m resolution grid collected in 2012 over the study region. Within each grid cell, ALS variables comprised the point cloud height distribution's mean, median, minimum, maximum, and standard deviation. Values for each variable in this fine grid were averaged over each plot to yield a set of plot-level predictor variables to pair with the growing stock volume response variable.
\subsection{Model construction} \label{sec:model}
As stated in Section~\ref{sec:intro}, the study goal was to predict what 2020 growing stock timber volume in blowdowns would have been if not destroyed by Adrian in late October 2018. Sample plot data, which included 2020 response variable measurements and 2012 ALS predictor variables, were used to develop models to predict growing stock timber volume for blowdowns where only 2012 ALS measurements were available.
Due to the relatively small number of sample plots within any one of the five spatially disjoint sub-regions (Figure~\ref{fig:StudySite1}), we aimed to pool the 62 sample plot measurements. An ideal pooled model would allow for intra- and inter-location specific relationships between the response and predictor variables. Such spatially varying relationships are particularly attractive because they accommodate potential impact of unobserved (and for the most part unobservable) spatially-explicit mediating factors such as disturbance history, genetics, and local growth environments. When cast within a regression framework, a spatially varying coefficients (SVC) model uses a smoothly-varying spatial process to pool information and avoid overfitting \citep{Gelfand2003, Finley2011}.
Given that prediction is our primary goal, a preferred model would also estimate residual spatial correlation (i.e., spatial dependence among measurements not explained by the predictor variables) and use it to improve prediction performance. We anticipated the residual spatial correlation would decrease as the distance between measurements increased. Again, following \cite{Gelfand2003} and the broader geostatistical literature cited in Section~\ref{sec:intro}, residual spatial dependence is effectively estimated using a spatial process.
To accommodate the anticipated data features, assess varying levels of model complexity, and deliver statistically valid probabilistic uncertainty quantification we model response $y( {\boldsymbol s} )$ at generic spatial location $ {\boldsymbol s} $ as
\begin{linenomath*}
\begin{equation}\label{eq: spatially_varying_regression}
y( {\boldsymbol s} ) = (\beta_0 + \delta_0w_0( {\boldsymbol s} )) + \sum_{j=1}^{p} x_j( {\boldsymbol s} )\left\{\beta_j + \delta_jw_j( {\boldsymbol s} )\right\} + \epsilon( {\boldsymbol s} ),
\end{equation}
\end{linenomath*}
where $x_j( {\boldsymbol s} )$, for each $j=1,\ldots,p$, is the known value of a predictor variable at location $ {\boldsymbol s} $, $\beta_j$ is the regression coefficient corresponding to $x_j( {\boldsymbol s} )$, $\beta_0$ is an intercept, and $\epsilon( {\boldsymbol s} )$ follows a normal distribution with mean zero and variance $\tau^2$. Here, $\tau^2$ is viewed as the measurement error variance. The quantities $w_0( {\boldsymbol s} )$ and $w_j( {\boldsymbol s} )$ are spatial random effects corresponding to the intercept and predictor variables, respectively, thereby yielding a spatially varying regression model. To allow for varying levels of model complexity, the $\delta$s in Equation~(\ref{eq: spatially_varying_regression}) are binary indicator variables used to turn on and off the spatial random effects (i.e., a value of $1$ means the given spatial random effect is included in the model and $0$ otherwise). When $\delta = 1$ the associated space-varying regression coefficient can be seen as having a global effect, i.e., $\beta_0$ or $\beta_j$s, with local adjustments, i.e., $w_0( {\boldsymbol s} )$ or $w_j( {\boldsymbol s} )$s.
Over $n$ locations, a given spatial random effect $ {\boldsymbol w} = (w( {\boldsymbol s} _1), w( {\boldsymbol s} _2), w( {\boldsymbol s} _3), \ldots, w( {\boldsymbol s} _n))^\top$ follows a multivariate normal distribution with a length $n$ zero mean vector and $n\times n$ covariance matrix $ {\boldsymbol \Sigma} $ with $(i,j)^{th}$ element given by $C( {\boldsymbol s} _i, {\boldsymbol s} _j; {\boldsymbol \theta} )$. Clearly, for any two generic locations $ {\boldsymbol s} $ and $ {\boldsymbol s} ^\ast$ within the study region the function used for $C( {\boldsymbol s} , {\boldsymbol s} ^\ast; {\boldsymbol \theta} )$ must result in a symmetric and positive definite matrix $ {\boldsymbol \Sigma} $. Such functions are known as positive definite functions, details of which can be found in \cite{Cressie1993}, \cite{Chiles2013}, and \cite{Banerjee2014}, among others. Here we specify $C( {\boldsymbol s} , {\boldsymbol s} ^\ast; {\boldsymbol \theta} ) = \sigma^2\rho( {\boldsymbol s} , {\boldsymbol s} ^\ast; {\boldsymbol \phi} )$ where $ {\boldsymbol \theta} = \{\sigma^2, {\boldsymbol \phi} \}$ and $\rho(\cdot ; {\boldsymbol \phi} )$ is a positive support correlation function with $ {\boldsymbol \phi} $ comprising one or more parameters that control the rate of correlation decay and smoothness of the process. The spatial process variance is given by $\sigma^2$, \textit{i.e.}, $\text{Var}(w( {\boldsymbol s} ))=\sigma^2$. This covariance function yields a \emph{stationary} and \emph{isotropic} process, \textit{i.e.}, a process with a constant variance and a correlation depending only on the Euclidean distance separating locations. The Mat{\'e}rn correlation function is a flexible class of correlation functions with desirable theoretical properties \citep{Stein1999} and is given by
\begin{linenomath*}
\begin{equation}\label{matern}
\rho(|| {\boldsymbol s} - {\boldsymbol s} ^\ast||; {\boldsymbol \phi} ) = \frac{1}{2^{\nu - 1}\Gamma(\nu)}(\phi|| {\boldsymbol s} - {\boldsymbol s} ^\ast||)^\nu \mathcal{K}_\nu(\phi|| {\boldsymbol s} - {\boldsymbol s} ^{\ast}||);\; \phi > 0, \; \nu > 0,
\end{equation}
\end{linenomath*}
where $|| {\boldsymbol s} - {\boldsymbol s} ^\ast||$ is the Euclidean distance between $ {\boldsymbol s} $ and $ {\boldsymbol s} ^\ast$, $ {\boldsymbol \phi} = \{\phi, \nu\}$ with $\phi$ controlling the rate of correlation decay and $\nu$ controlling the process smoothness, $\Gamma$ is the Gamma function, and $\mathcal{K}_\nu$ is a modified Bessel function of the third kind with order $\nu$. While it is theoretically ideal to estimate both $\phi$ and $\nu$, it is often useful from a computational standpoint to fix $\nu$ and estimate only $\phi$. For our current analysis, such a concession is reasonable given there is likely little information gain in estimating both parameters. Conveniently, when $\nu=0.5$ the Mat{\'e}rn correlation reduces to the exponential correlation function, i.e., $\rho(|| {\boldsymbol s} - {\boldsymbol s} ^\ast||; \phi) = \exp(-\phi || {\boldsymbol s} - {\boldsymbol s} ^\ast||)$. Therefore, the only two process parameters estimated for any given random effect are $ {\boldsymbol \theta} =\{\sigma^2, \phi\}$.
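As a concrete check of these definitions, the short Python sketch below (illustrative only, not part of the paper's supplementary code; the tested $\phi$ value is arbitrary) evaluates the Mat{\'e}rn correlation of Equation~(\ref{matern}) and verifies that $\nu=0.5$ recovers the exponential correlation function.
\begin{verbatim}
# Illustrative sketch: Matern correlation and its nu = 0.5
# exponential special case (the phi value below is arbitrary).
import numpy as np
from scipy.special import gamma, kv

def matern(d, phi, nu):
    """rho(d) = (phi*d)^nu K_nu(phi*d) / (2^(nu-1) Gamma(nu))."""
    d = np.asarray(d, dtype=float)
    rho = np.ones_like(d)          # rho(0) = 1 by definition
    pos = d > 0
    x = phi * d[pos]
    rho[pos] = x**nu * kv(nu, x) / (2**(nu - 1) * gamma(nu))
    return rho

def exponential(d, phi):
    return np.exp(-phi * np.asarray(d, dtype=float))

d = np.linspace(0.0, 1.0, 11)      # distances (km)
print(np.allclose(matern(d, 11.6, 0.5), exponential(d, 11.6)))
\end{verbatim}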
In the subsequent analysis, we consider the following candidate models defined using the general model (\ref{eq: spatially_varying_regression}):
\begin{enumerate}
\item Non-spatial---all $\delta$ are set to zero. This is simply a multiple regression model.
\item Space-varying intercept (SVI)---$\delta_0$ equals $1$ and all other $\delta$s are set to zero. This model estimates a spatial process and associated parameters $ {\boldsymbol \theta} _0$ for the intercept, but the impact of the covariates is assumed to be the same across the study region.
\item Space-varying coefficients (SVC)---all $\delta$s are set to 1. This model allows all regression coefficients to vary spatially over the study region. Each spatial process has its own parameters $ {\boldsymbol \theta} _0$ and $ {\boldsymbol \theta} _j$ for $j = 1, 2, \ldots, p$.
\end{enumerate}
\subsection{Implementation and analysis}\label{sec:implementation}
To facilitate uncertainty quantification for model parameters and subsequent prediction, the Non-spatial, SVI, and SVC candidate models defined in Section~\ref{sec:model} were fit within a Bayesian inferential framework, see, e.g., \cite{Gelman2013} for a general description of Bayesian model fitting methods. The candidate models' Bayesian specification is completed by assigning prior distributions to all parameters. Then, parameter inference follows from posterior distributions that are sampled using Markov chain Monte Carlo (MCMC) algorithms. \cite{Finley2020} provide open source software, available in \texttt{R} \citep{R}, that implement efficient MCMC sampling algorithms for Equation~(\ref{eq: spatially_varying_regression}) and associated sub-models. More specifically, the \texttt{spSVC} function within the \texttt{spBayes} package \citep{spBayes} provides MCMC-based parameter posterior summaries, fit diagnostics for model comparison, and prediction at unobserved locations.
\emph{Data and annotated \texttt{R} code needed to fully reproduce the analysis and results will be provided as Supplementary Material upon publication or if requested by reviewers.}
\subsubsection{Prediction}\label{sec:prediction}
Our interest is in predicting the 2020 growing stock timber volume response variable $\tilde{ {\boldsymbol y} } = (\tilde{y}( {\boldsymbol s} _1), \tilde{y}( {\boldsymbol s} _2), \ldots, \tilde{y}( {\boldsymbol s} _{n_0}))^\top$ at a set of $n_0$ locations where it is not observed but ALS predictors are available (a tilde indicates a prediction). Following \cite{Gelman2013} and \cite{Banerjee2014}, given MCMC samples from the posterior distributions of the posited model's parameters, composition sampling is used to sample one-for-one from $\tilde{ {\boldsymbol y} }$'s posterior predictive distribution (PPD). For example, the Non-spatial model's $l$-th PPD sample $\tilde{ {\boldsymbol y} }^{(l)}$ is drawn from the multivariate Normal distribution $MVN\left(\beta_0^{(l)} + {\boldsymbol X} {\boldsymbol \beta} ^{(l)}, \tau^{2(l)} {\boldsymbol I} \right)$, where $\beta_0^{(l)}$, $ {\boldsymbol \beta} ^{(l)} = \left(\beta_1^{(l)}, \beta_2^{(l)}, \ldots, \beta_p^{(l)}\right)^\top$, and $\tau^{2(l)}$ are the $l$-th joint samples from the parameters' posterior distribution, $ {\boldsymbol X} $ is the $n_0\times p$ matrix of predictors at the $n_0$ prediction locations, and $ {\boldsymbol I} $ is the $n_0\times n_0$ identity matrix. The multivariate Normal PPDs for the SVI and SVC models are given in \cite{Finley2020, spBayes}. Importantly, the SVI and SVC candidate models use joint composition sampling to acknowledge the spatial correlation among prediction locations. A given candidate model's PPD is evaluated using each of its parameters' $M$ posterior samples, i.e., $l = 1, 2, \ldots, M$, to generate $M$ PPD samples for each of the $n_0$ prediction locations. These PPD samples are summarized analogously to the parameters' posterior samples. Prediction point estimates could include the PPD mean or median, and interval estimates can be built off the PPD standard deviation or directly expressed using a set of lower and upper percentiles in the form of a credible interval. For example, a point and dispersion estimate for the growing stock timber volume at a prediction location could be the mean and standard deviation over that location's $M$ PPD samples.
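To make the composition-sampling step concrete, a minimal Python sketch for the Non-spatial model's PPD follows; it is not the code used in this study, and the posterior samples and design matrix are random placeholders standing in for MCMC output and ALS predictors.
\begin{verbatim}
# Illustrative sketch: composition sampling from the Non-spatial
# model's posterior predictive distribution. beta0, beta, tau2 are
# placeholders for M joint posterior samples; X holds predictors at
# the n0 prediction locations.
import numpy as np

rng = np.random.default_rng(1)
M, n0, p = 1000, 50, 1
beta0 = rng.normal(160.0, 50.0, M)          # placeholder samples
beta  = rng.normal(33.0, 3.0, (M, p))
tau2  = rng.gamma(10.0, 2000.0, M)
X     = rng.uniform(5.0, 25.0, (n0, p))     # e.g., mean canopy height

ppd = np.empty((M, n0))
for l in range(M):                          # one PPD draw per sample
    mu = beta0[l] + X @ beta[l]
    ppd[l] = rng.normal(mu, np.sqrt(tau2[l]))

point = ppd.mean(axis=0)                    # PPD mean per location
lo, hi = np.percentile(ppd, [2.5, 97.5], axis=0)   # 95% interval
\end{verbatim}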
\subsubsection{Model selection}\label{sec:selection}
Our analysis had two separate model selection steps. First, in an effort to build parsimonious models and reduce possible issues arising from collinearity among the often highly correlated ALS predictor variables, we identified a common set of predictors to use in the three candidate models. Given our focus on prediction, predictor variable selection aimed to minimize prediction error using leave-one-out (LOO) cross-validation among the 62 sample plot measurements. Second, given the common set of predictors, model fit and prediction performance were used to select the ``best'' candidate model for subsequent blowdown area prediction. Here, again, candidate model prediction performance was assessed using LOO cross-validation among the 62 sample plot measurements.
For the first model selection step, the Non-spatial model was used to select the set of ALS predictor variables that minimized LOO mean squared prediction error (MSPE). As described in Section~\ref{sec:data}, candidate variables were summaries of the ALS point cloud height distribution and included its mean, median, minimum, maximum, and standard deviation. A backward variable selection algorithm, implemented using the \texttt{leaps} \citep{leaps} and \texttt{caret} \citep{caret} \texttt{R} packages, identified the predictor set that minimized LOO MSPE.
The second model selection step used model fit and LOO prediction measures to identify the ``best'' model for subsequent blowdown area prediction. The deviance information criterion (DIC) \citep{spiegelhalter2002} and widely applicable information criterion (WAIC) \citep{Watanabe2010} model fit criteria were computed for each candidate model. DIC equals $-2(\text{L}-\text{p}_D)$ where L is goodness of fit and p$_D$ is a model penalty term viewed as the effective number of parameters. Two WAIC criteria were computed based on the log pointwise predictive density (LPPD), with $\text{WAIC}_1 = -2(\text{LPPD} - \text{p}_1)$ and $\text{WAIC}_2 = -2(\text{LPPD} - \text{p}_2)$, where the penalty terms p$_1$ and p$_2$ are defined in \cite{Gelman2014}. Models with lower values of DIC, WAIC$_1$, and WAIC$_2$ have better fit to the observed data and should yield better out-of-sample prediction, see \cite{Gelman2014}, \cite{Vehtari2017}, or \cite{Green2020} for more details on these model fit criteria.
The three candidate models were also assessed based on LOO cross-validation prediction MSPE and continuous rank probability score (CRPS) \citep{Gneiting2007}. Because MSPE measures only predictive accuracy, CRPS is the preferable measure of predictive skill: it favors models with both high accuracy and precision. Models with lower MSPE and CRPS should yield better blowdown area predictions. In addition to MSPE and CRPS, we computed the percent of holdout observations covered by their corresponding PPD 95\% credible interval. Models with an empirical coverage percent close to the chosen PPD credible interval percent are favored.
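These prediction diagnostics can be computed directly from PPD samples. The Python sketch below is illustrative (not the study's code) and uses the sample-based CRPS estimator of \cite{Gneiting2007}, given an $M \times n$ matrix \texttt{ppd} of holdout PPD samples and the $n$ observed holdout values \texttt{y}.
\begin{verbatim}
# Illustrative sketch: MSPE, sample-based CRPS, and empirical 95%
# credible interval coverage from holdout PPD samples.
import numpy as np

def crps_one(samples, y):
    """CRPS estimate for one holdout: E|X - y| - 0.5 E|X - X'|."""
    t1 = np.mean(np.abs(samples - y))
    t2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return t1 - t2

def loo_summaries(ppd, y):
    point = ppd.mean(axis=0)
    mspe  = np.mean((point - y)**2)
    crps  = np.mean([crps_one(ppd[:, i], y[i])
                     for i in range(len(y))])
    lo, hi = np.percentile(ppd, [2.5, 97.5], axis=0)
    cover = 100.0 * np.mean((y >= lo) & (y <= hi))
    return mspe, crps, cover
\end{verbatim}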
\subsubsection{Blowdown prediction}\label{sec:blowdownPrediction}
Given the sample plot dataset and posited model identified using methods in Section~\ref{sec:selection}, two approaches were used to predict growing stock timber volume for blowdown areas. We refer to the approaches as \emph{areal} and \emph{block} prediction.
\begin{figure}[!ht]
\begin{center}
\subfigure[Areal mean canopy height.]{\includegraphics[width=7cm,trim={0cm 3.75cm 0cm 2.75cm},clip]{figures/mean_height_over_poly.pdf}\label{fig:computeXAreal}}
\subfigure[Grid cell mean canopy height.]{\includegraphics[width=7cm,trim={0cm 3.75cm 0cm 2.75cm},clip]{figures/mean_height_over_grd.pdf}\label{fig:computeXBlock}}
\caption{Illustration of the mean canopy height ALS predictor variable used for areal \subref{fig:computeXAreal} and \subref{fig:computeXBlock} block prediction for a subset of blowdowns in a small section of the Laas sub-region. The polygons are blowdowns and circles are forest sample plots. The gray scale basemap depicts the 1\,m $\times$ 1\,m ALS canopy height grid used to compute the mean canopy height values over plots and blowdown prediction units.} \label{fig:computeX}
\end{center}
\end{figure}
The \emph{areal} approach views each blowdown as a single prediction unit indexed by its polygon centroid and predictor variables computed as an average of the 1\,m $\times$ 1\,m resolution ALS values over its extent. Figure~\ref{fig:computeXAreal} illustrates the mean canopy height ALS variable summarized for blowdowns within a small portion of the Laas sub-region. \cite{VerPlanck2018} had a somewhat similar setting; however, their dataset and inferential goals differed from ours in a few key ways. First, because their sample data came from variable radius plots, it was not possible to spatially align ALS predictors with response variable measurements at the plot-level. As a result, they made the simplifying assumption that, when pooled, response variable measurements were representative of the stand areal unit within which they were observed. This assumption allowed them to spatially align the response and ALS predictor variables at the stand-level. Second, all stands within the population held at least two plots and the study goal was to improve stand-level point and interval estimates through a smoothing conditional autoregressive (CAR) model. In our current setting, the sample data comprise response variable measurements collected on fixed-area plots with clearly defined spatial support over which the ALS predictor variables were also computed---the response and predictor variable measurements are spatially aligned at the plot-level. Also, no sample plots fall within the blowdown prediction units; hence, our inferential goal is squarely on out-of-sample prediction. Despite these differences, we pursue the \emph{areal} prediction and clearly acknowledge the change-of-support issue between the discrete fixed-area plot data used to fit the model and the different spatial support of the prediction units (see, e.g., \cite{Schabenberger2004} for a thorough description of spatial change-of-support problems). While not appropriate from a statistical standpoint, we do see this approach used in practice and therefore include it here for comparison with the \emph{block} approach that mitigates change-of-support issues by better aligning the spatial support of the measurement and prediction units.
For the areal approach, the $l$-th PPD sample of total growing stock volume (m$^3$) for a given blowdown is $\tilde{y}^{(l)}_{\mathcal{A}} = A\tilde{y}( {\boldsymbol s} )^{(l)}$, where $A$ is the blowdown's area (ha) and $\tilde{y}( {\boldsymbol s} )^{(l)}$ is the growing stock volume (m$^3$/ha) predicted at the blowdown's centroid $ {\boldsymbol s} $.
The \emph{block} approach partitions each blowdown into grid cells with the same area as the fixed-area sample plots (i.e., 0.126 ha). Each cell is indexed using its centroid, and predictor variables are computed as an average of the 1\,m $\times$ 1\,m resolution ALS values over its extent; see illustration in Figure~\ref{fig:computeXBlock}. Akin to block kriging \citep{Wackernagel2003}, the response variable prediction for a given blowdown is an aggregate of multiple point predictions within the blowdown extent. More specifically, given a blowdown divided into $n_0$ cells and corresponding vector holding the $l$-th joint PPD sample $\tilde{ {\boldsymbol y} }^{(l)}$ (m$^3$/ha), the PPD sample of total growing stock volume (m$^3$) for the blowdown is $\tilde{y}^{(l)}_{\mathcal{B}} = \sum^{n_0}_{i=1}a( {\boldsymbol s} _i)\tilde{y}( {\boldsymbol s} _i)^{(l)}$, where $a( {\boldsymbol s} _i)$ is the area of the $i$-th cell that falls within the blowdown ($a( {\boldsymbol s} _i)$ is at most 0.126 ha, attained when the cell's extent is completely within the blowdown).
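A compact sketch of the block aggregation just described follows (illustrative only; the joint PPD draws are random placeholders): each joint draw over a blowdown's grid cells, in m$^3$/ha, is weighted by the clipped cell areas $a( {\boldsymbol s} _i)$ to give one total-volume draw in m$^3$.
\begin{verbatim}
# Illustrative sketch: block-approach total for one blowdown.
import numpy as np

def block_total_ppd(cell_ppd, cell_area_ha):
    """cell_ppd: (M, n0) joint PPD draws in m^3/ha;
    cell_area_ha: (n0,) cell areas clipped to the polygon."""
    return cell_ppd @ cell_area_ha          # (M,) totals in m^3

rng = np.random.default_rng(7)
cell_ppd = rng.normal(600.0, 150.0, (1000, 12))  # placeholder draws
areas = np.full(12, 0.126)
areas[-3:] = 0.05                # edge cells only partly inside
totals = block_total_ppd(cell_ppd, areas)
print(totals.mean(), totals.std())   # PPD mean and sd (m^3)
\end{verbatim}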
Composition sampling was again used to generate $M$ samples from $\tilde{y}_{\mathcal{A}}$'s and $\tilde{y}_{\mathcal{B}}$'s PPD for each blowdown. These PPD samples were summarized analogously to the parameters' posterior samples to yield prediction point, dispersion, and interval estimates for the 564 blowdowns. In addition to blowdown-specific PPD summaries, extra composition sampling was used to estimate the total volume PPD by sub-region and region.
\section{Results}\label{sec:results}
\subsection{Candidate models}
Following Section~\ref{sec:selection}, the model that yielded minimum LOO cross-validation MSPE included only the ALS point cloud height distribution mean predictor variable. We refer to this predictor variable as \emph{mean canopy height} and set it as $x_1$ in Equation~(\ref{eq: spatially_varying_regression}) for all candidate models.
\begin{table}[ht!]
\caption{Parameter posterior distribution median and 95\% credible interval for candidate models.}\label{tab:estimates}
\begin{center}
\begin{tabular}{lccc}
\toprule
& Non-spatial & SVI & SVC\\
\midrule
$\beta_0$ & 159.8 (59.6, 264.1) & 172.9 (70.4, 277.5) & 176.3 (69.3, 277.8)\\
$\beta_1$ & 33.7 (27.9, 39.8) & 33.2 (27.2, 39.0) & 33.0 (2.2, 63.9)\\
$\sigma^2_0$ & & 14677.8 (5491.3, 25740.3) & 9139.5 (3044.4, 18879.9)\\
$\sigma^2_1$ & & & 275.1 (102.1, 1164.2)\\
$\phi_0$ & & 11.6 (1.9, 29.2) & 18.7 (4.2, 29.4)\\
$\phi_1$ & & & 0.1 (0.03, 3.4)\\
$\tau^2$ & 22806.6 (16330.1, 32899.5) & 7105.5 (1999, 18985.2) & 5721.6 (1927.2, 15421.3)\\
\bottomrule
\end{tabular}
\end{center}
\end{table}
Parameter estimates for candidate models are given in Table~\ref{tab:estimates}. As expected, the $\beta_1$ estimates indicate a strong positive relationship between 2012 mean canopy height and 2020 growing stock volume. The addition of spatial random effects decreased the non-spatial residual variance $\tau^2$. The reapportionment of $\tau^2$ to $\sigma^2_0$ and non-negligible $\sigma^2_1$ suggests a substantial portion of variance---not explained by the ALS predictor---had a spatial structure and the ALS predictor had space-varying impact.
The SVI model spatial decay parameter estimates suggest a fairly localized spatial structure. Given the exponential spatial correlation function and km map projection units, the distance $d_0$ at which the spatial correlation drops to 0.05 (an arbitrary, but commonly used, cutoff) is estimated by solving $0.05 = \exp(-\phi d_0)$ for $d_0$ providing $d_0=-\log(0.05)/\phi$. The distance $d_0$ is commonly referred to as the effective spatial range. Using the SVI model $\phi_0$ estimates, the corresponding spatial range is 0.26 (0.10, 1.58) km.
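The effective-range calculation is a one-liner over samples of $\phi$; the snippet below (illustrative, using the three reported $\phi_0$ posterior summaries as placeholder inputs) reproduces the values quoted above.
\begin{verbatim}
# Illustrative sketch: effective spatial range d0 = -log(0.05)/phi.
import numpy as np

phi0 = np.array([11.6, 1.9, 29.2])   # median and 95% CI bounds
print(np.round(-np.log(0.05) / phi0, 2))   # -> 0.26, 1.58, 0.10 km
\end{verbatim}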
Compared with the SVI model, the SVC model further reduced the non-spatial residual variance by taking into account the space-varying impact of $x_1$. Relative to the effective spatial range of the intercept process, the process on $x_1$ had a long spatial range, i.e., 29.96 (0.88, 99.86) km. That is, the spatially varying slope coefficient for $x_1$ represented sub-region differences in the relationship between growing stock volume and mean canopy height that were probably caused by unmeasured species, genetic, or environmental factors.
\begin{table}[!th]
\caption{Candidate model fit and leave-one-out (LOO) cross-validation prediction diagnostics. The last three rows were calculated using prediction LOO on the observed data. The row labeled CI Cover is the percent of 95\% posterior predictive distribution credible intervals that cover the observed LOO value. Where appropriate, the ``best'' metric in the row is bolded. }\label{tab:fitPredict}
\begin{center}
\begin{tabular}{lccc}
\toprule
Model & Non-spatial & SVI & SVC\\
\midrule
DIC & 801.1 & 752.8 & \textbf{748.7}\\
p$_D$ & 3.1 & 30.4 & 33.3\\
L & -397.5 & -346.0 & -341.0\\
\midrule
WAIC$_1$ & 800.7 & 746.2 & \textbf{738.3}\\
WAIC$_2$ & 801.1 & 763.8 & \textbf{756.9}\\
p$_1$ & 2.8 & 23.8 & 22.9\\
p$_2$ & 2.9 & 32.6 & 32.2\\
LPPD & -397.6 & -349.3 & \textbf{-346.2}\\
\midrule
MSPE & 23378.4 & 23134.4 & \textbf{21491.6}\\
CRPS & 88.2 & 87.0 & \textbf{82.9}\\
CI cover & 98.4 & 96.8 & 98.4\\
\bottomrule
\end{tabular}
\end{center}
\end{table}
Candidate model fit and LOO cross-validation prediction diagnostics are given in Table~\ref{tab:fitPredict}. Following from Section~\ref{sec:selection}, the lower values of DIC, WAIC$_1$, and WAIC$_2$ indicate that the addition of spatial random effects to the model intercept and regression slope coefficient improved fit to the observed data. Similarly, LOO cross-validation MSPE and CRPS favored the SVC model over the Non-spatial and SVI models. All models achieved empirical coverage close to the nominal 95\% credible interval level.
\subsection{Blowdown prediction}
The SVC model provided the ``best'' fit and LOO predictive performance (Table~\ref{tab:fitPredict}) and therefore served as the prediction model for the blowdowns. Following methods in Section~\ref{sec:blowdownPrediction}, areal and block growing stock volume PPD mean and standard deviation were computed for each blowdown, the results of which are plotted in Figure~\ref{fig:SVCTotalPPD}. Figure~\ref{fig:TotalPPDMean} shows negligible difference between areal and block PPD means. However, as shown in Figure~\ref{fig:TotalPPDSD}, compared with the block approach, the areal prediction resulted in a consistently larger PPD standard deviation. Additionally, supplemental Figure~\ref{fig:TotalPPDCV} shows the areal PPD coefficient of variation (CV) is generally larger than the block PPD CV.
\begin{figure}[!ht]
\begin{center}
\subfigure[]{\includegraphics[width=7.75cm,trim={0cm 0cm 0cm 0cm},clip]{figures/candidate_models_total.pdf}\label{fig:TotalPPDMean}}
\subfigure[]{\includegraphics[width=7.75cm,trim={0cm 0cm 0cm 0cm},clip]{figures/candidate_models_SD_total.pdf}\label{fig:TotalPPDSD}}
\caption{Summaries of each blowdown's total growing stock volume (m$^3$) posterior predictive distribution computed using areal and block prediction approach. Points represent blowdowns, colored by area, and broken down by sub-region with a one-to-one line.} \label{fig:SVCTotalPPD}
\end{center}
\end{figure}
Table~\ref{tab:predTotals} provides sub-region growing stock volume loss totals and corresponding 95\,\% credible intervals. Although the total blowdown area in Frohn was larger than in other sub-regions, its per unit area growing stock loss of 510.68\,m$^3$/ha (32,086.1\,m$^3$/62.83\,ha) was less than that in Laas (637.26\,m$^3$/ha), Liesing (818.21\,m$^3$/ha), Mauthen (784.76\,m$^3$/ha), and Ploecken (638.50\,m$^3$/ha). This disparity between Frohn and the other sub-regions is because the blowdowns in the Frohn sub-region were concentrated in relatively unproductive forests close to the alpine treeline zone.
\begin{table}[ht!]
\caption{Growing stock volume loss by sub-region and study region posterior predictive distribution median and 95\% credible interval.}\label{tab:predTotals}
\begin{center}
\begin{tabular}{lcc}
\toprule
& Area (ha) & Volume (m$^3$)\\
\midrule
Frohn & 62.83 & 32086 (28290, 35921)\\
Laas & 54.95 & 35017 (31293, 38930)\\
Liesing & 2.90 & 2373 (1941, 2849)\\
Mauthen & 38.49 & 30205 (25676, 34758)\\
Ploecken & 53.15 & 33936 (28998, 39361)\\
\midrule
Total & 212.32 & 133775 (122935, 144308)\\
\bottomrule
\end{tabular}
\end{center}
\end{table}
Maps of PPD summaries per blowdown were prepared for each sub-region. For example, Figure~\ref{fig:FrohnPred} shows blowdowns in the Frohn sub-region colored by growing stock volume PPD estimate mean, standard deviation, and coefficient of variation. Similar maps for the other sub-regions are provided in the Supplemental Material.
\begin{figure}[!ht]
\begin{center}
\subfigure[PPD mean]{\includegraphics[width=5cm,trim={2.25cm 0cm 2.75cm 0cm},clip]{figures/Frohn_total_m3_blk_posterior_mean.pdf}}
\subfigure[PPD standard deviation]{\includegraphics[width=5cm,trim={2.25cm 0cm 2.75cm 0cm},clip]{figures/Frohn_total_m3_blk_posterior_sd.pdf}}
\subfigure[PPD CV]{\includegraphics[width=5cm,trim={2.25cm 0cm 2.75cm 0cm},clip]{figures/Frohn_cv_from_total_m3_blk_posterior.pdf}}
\end{center}
\caption{SVC block prediction approach posterior predictive distribution (PPD) mean, standard deviation, and coefficient of variation (CV) of total growing stock volume (m$^3$) for blowdowns in Frohn sub-region.} \label{fig:FrohnPred}
\end{figure}
Figure~\ref{fig:blkCV} shows the distribution of PPD CV by blowdown area. The average predicted growing stock volume CV over all blowdowns was 24.6\,\%, with individual CV predictions ranging from 9.2\,\% to 70.3\,\%. Approximately 90\,\% of blowdown predictions had CVs lower than 33.9\,\%. Relatively large prediction errors (i.e., large CVs) occurred for the smaller blowdown areas. Blowdowns for which the CV was less than 20\,\% accounted for 73.2\,\% of the total damage in the study region, and areas with a CV less than 25\,\% represented 93.2\,\% of the total damage.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=10cm,trim={0cm 0cm 0cm 0cm},clip]{figures/SVC_blk_prediction_cv.pdf}
\end{center}
\caption{SVC block prediction approach coefficient of variation (CV) posterior predictive distribution versus blowdowns area.} \label{fig:blkCV}
\end{figure}
\section{Discussion}\label{sec:discussion}
Among the set of ALS predictors, mean canopy height alone explained a substantial portion of the response variable's variance and yielded the lowest LOO cross-validation MSPE. This finding is similar to those in other related studies. For example, \cite{BreidenbachAstrup2012} selected mean canopy height to predict above-ground biomass using Norwegian national forest inventory data, and \cite{Magnussen2014} selected mean canopy height to predict growing stock density in a study that considered Swiss and Norwegian national forest inventory data. Further, \cite{Mauro2016} identified maximum vegetation height as the only significant predictor in small area estimation models used to improve quadratic mean diameter estimates in Central Spain.
Given the mean canopy height predictor, the SVC model showed consistently better fit and predictive performance compared to the Non-spatial and SVI models (Table~\ref{tab:fitPredict}), and was therefore selected as the model to generate areal and block prediction for the blowdowns.
In regression, and most other modeling contexts, it is assumed that the predictor variables used to estimate model parameters are the same as those used in subsequent prediction. This assumption does not hold for the areal prediction approach described in Section~\ref{sec:blowdownPrediction}, because the ALS variables computed over the 62 0.126 ha sample plots could have a different distribution than the ALS variables computed over the variable area blowdowns. Although it is reasonable to expect the mean of a given ALS variable to be similar between the sample plot and blowdown distributions, the dispersion of the two distributions could be quite different. That is, a change-of-support problem is inherent between the sample plots (model data) and the prediction units' areal extent. This fact alone should preclude application of the areal approach; however, our analysis results further underscore the approach's shortcomings.
In the areal approach each blowdown was considered as a single prediction point, and its predicted volume per unit area was scaled by the blowdown's area to arrive at the blowdown's total volume. Such an approach might seem intuitive, but is problematic in the current setting. The issue is akin to how variance scales with forest inventory plot size. It is often the case that a forest variable's variance is inversely related to the area over which the variable is measured \citep{freese1962}. Consequently, a timber volume variable shows more variability when measured on smaller plots than on larger plots, especially when applied to structurally complex forests. This is because the larger plots average over local scale structural variability. When using a single prediction at the blowdown's centroid, the areal prediction variance does not scale with blowdown area. In contrast, the block approach more accurately scales prediction variance with blowdown area as reflected in Figure~\ref{fig:TotalPPDSD}. This is because its PPD is the result of an average over possibly multiple spatially correlated prediction units (grid cells) within the blowdown's boundary---the number of prediction units contributing to this average scales with blowdown area.
Finally, the PPDs rely on the spatial correlation between observed and prediction locations, e.g., using Equation~(\ref{matern}), to appropriately weight the contribution of observed data for prediction. For the areal approach, this correlation is computed between observed locations and a blowdown's polygon centroid, which is the average location relative to the polygon's vertices. If the polygon's shape is highly irregular, with a large boundary length to area ratio, then its centroid might not represent well the distance between observed locations and the polygon's extent. As an extreme example, the centroid of a polygon in the shape of a letter ``C'' or ``L'' will fall outside the polygon boundary. Again, such issues are circumvented using the block prediction approach.
A particularly useful quality of the Bayesian inferential paradigm is PPDs can be generated for any arbitrary set of prediction units from a single grid cell to sets of blowdowns that comprise a sub-region or entire study region. Inference proceeds from desired joint PPD samples. These PPD samples can be summarized using measures of central tendency, dispersion, intervals, and transformations, with results presented as maps or tables, e.g., Figure~\ref{fig:FrohnPred} and Table~\ref{tab:predTotals}.
\section{Conclusions}\label{sec:conclusion}
The study goal, to estimate growing stock timber volume loss due to storm Adrian in the Austrian upper Gail valley in Carinthia, was met using ALS and TLS measurements coupled through a flexible spatial regression model cast within a Bayesian inferential framework. Limited data availability and its configuration in space and time presented several inferential constraints on how statistically robust estimates could be pursued. Our proposed regression model was designed to leverage information from a small set of sample plots and aerial ALS data, as well as spatial autocorrelation among forest measurements and nonstationary relationships between response and predictor variables.
Three candidate models of varying complexity were assessed using model fit and out-of-sample prediction. Performance metrics supported the SVC model which was then used to make areal and block predictions over the blowdowns. Our results showed that in contrast to the areal approach, the block approach mitigates issues with change-of-support by matching the prediction unit to the sample plot extent. Using this finer spatial scale prediction unit and a joint prediction algorithm that acknowledges spatial correlation among prediction units, the total growing stock volume PPD captures the correlation among prediction units within a given blowdown and scales appropriately with blowdown area. The block prediction approach facilitated statistically sound inference at various spatial scales, i.e., blowdown, sub-region, and region levels.
The proposed methodology and annotated code that yields fully reproducible results can be adapted to deliver damage assessment for other forest disturbance events in future periods and different geographic regions. While additional sample plot data would improve estimates in our current study, we were able to demonstrate that a fairly high level of accuracy and precision is achievable using a limited sample size. This small sample, i.e., $n$=62 plots, was collected using a TLS which, compared with traditional individual tree measurements, allowed for time- and effort-efficient data collection. Based on our experience from this and other efforts, a field crew can collect $\sim$20-25 plots per day, even in difficult alpine terrain.
Our study illustrated a methodology to efficiently deliver information required for strategic salvage harvesting following storm and other disturbances. Future work focuses on augmenting the SVC model to incorporate large spatial datasets using Nearest Neighbor Gaussian Processes \citep{Finley2019}, automate remote sensing predictor variable selection \citep{Franco-Villoria2019}, and accommodate high-dimensional \citep{Finley2017, Taylor-Rodriguez2019} and distributional regression \citep{Umlauf2018, Stasinopoulos2018} response vectors to predict forest disturbance induced change in species composition and diameter/size distributions.
\section{Acknowledgments}
This study was supported by the project Digi4+ and was financed by the Austrian Federal Ministry of Agriculture, Regions and Tourism under project number 101470. Finley was supported by the United States National Science Foundation DMS-1916395 and by NASA Carbon Monitoring System grants. The authors appreciate the support during the fieldwork that was given by the forest owners, Clemens Wassermann, G{\"u}nter Kronawetter and the team of the Carinthian Forest Service.
|
2,869,038,154,053 | arxiv | \section{Introduction}
The physics of the equilibrium phase transition in the Ising model is
well understood \cite{st}. However, the mechanism behind the
nonequilibrium phase transition has not yet been explored rigorously and the
basic phenomenology is still undeveloped. It is quite interesting to study
how the system behaves if it is driven out of equilibrium.
The simplest prototype example is the kinetic Ising model in an oscillating
magnetic field.
In this context,
the dynamic response of the Ising system in the presence of an oscillating
magnetic field has been studied
extensively \cite{rkp,dd,smp,tom,lo,ac} in the last few years.
The dynamic hysteresis
\cite{rkp,dd,smp} and the nonequilibrium
dynamic phase transition \cite{tom,lo,ac} are two important aspects of
the dynamic response of the kinetic Ising model in the presence of an
oscillating magnetic field.
The nonequilibrium dynamic phase transition in the kinetic Ising model
in the presence of an oscillating magnetic field was first studied by
Tome and Oliveira \cite{tom}.
They solved the mean-field (MF)
dynamic equation of motion (for the average magnetization) of the kinetic
Ising model in the presence of a sinusoidally oscillating magnetic field.
By defining the dynamic order parameter as the time-averaged
magnetization over a full cycle of the
oscillating magnetic field, they showed that, depending upon
the value of the field amplitude and the temperature, the
dynamic order parameter changes from zero to a nonzero value.
In the field amplitude and temperature plane there
exists a distinct phase boundary separating the dynamically ordered
(nonzero order parameter) and disordered (vanishing order
parameter) phases.
A tricritical point (TCP), separating the continuous and discontinuous
nature of the transition, was also observed by them on the phase
boundary line \cite{tom}. However,
one may argue that such a mean-field transition
is not truly dynamic in origin since it exists even in the
quasi-static (or zero
frequency) limit. This is because, if the field amplitude is less than the
coercive field (at a temperature less than the transition temperature
without any field), then the response magnetization varies periodically
but asymmetrically even in the zero frequency limit; the system remains locked
to one well of the free energy and cannot go to the other one, in the absence
of fluctuation.
The true dynamic nature of this kind of phase transition (in the presence
of fluctuations) was first investigated
by Lo and Pelcovits \cite{lo}. They studied
the dynamic phase transition in the kinetic Ising model in the presence of
an oscillating magnetic field by Monte Carlo (MC) simulation, which
allows for microscopic fluctuations.
Here, the transition disappears in the
zero frequency limit; due to the fluctuations, the magnetization flips to the
direction of the magnetic field and the dynamic order parameter (time
averaged magnetization) vanishes.
However, they \cite{lo} did not report any precise phase boundary.
Acharyya and Chakrabarti \cite{ac} studied the nonequilibrium dynamic phase
transition in the kinetic Ising model
in the presence of an oscillating magnetic field by
extensive MC simulation.
They \cite{ac} have also identified
that this dynamic phase transition (at a particular
nonzero frequency of the oscillating magnetic field) is associated with the
breaking of the symmetry of
the dynamic hysteresis ($m-h$) loop.
In the dynamically disordered phase
(where the order parameter vanishes)
the corresponding hysteresis loop is
symmetric, and it loses its symmetry in the ordered phase (giving
a nonzero value of the dynamic order parameter).
They \cite{ac} also studied the temperature variation of the ac susceptibility
components near the dynamic transition point.
The major observation was
that the imaginary (real) part of the ac susceptibility gives a
peak (dip) near the dynamic transition point (where the dynamic
order parameter vanishes). The
important conclusions were: (i) this is a distinct
signal of the phase transition and (ii) this is an indication of the
thermodynamic nature of the phase transition.
The Debye relaxation of the dynamic order parameter
and the critical slowing down have been studied very recently
\cite{ma} both by MC simulation and by solving the dynamic MF equation
\cite{tom} of
motion for the average magnetization.
The specific-heat singularity \cite{ma}
near the dynamic transition point is also
an indication of the thermodynamic nature of this dynamic phase transition.
It is worth mentioning here that the statistical distribution of the dynamic
order parameter has been studied by Sides et al. \cite{rik}. The nature of the
distribution changes
(from bimodal to unimodal) near the dynamic transition point. They have
also observed
\cite{rik} that the fluctuation of
the hysteresis loop area becomes considerably large near the dynamic
transition point.
In the case of equilibrium phase transitions, the fluctuation-dissipation
theorem (FDT) states
(due to the applicability of the Gibbs formalism) that the
mean square fluctuations
of some intrinsic physical quantities (say, energy, magnetization, etc.) are
directly related to the corresponding
responses (specific heat, susceptibility,
etc.) of the system. Consequently, near the
ferro-para transition point, both the fluctuation of
magnetization and the susceptibility show the same singular behavior. If it
is of power-law type, the singular behavior will be characterised by
the same exponent. This is also true for the fluctuation of energy and
the specific heat. These are the consequences of the fluctuation-dissipation
theorem \cite{st}. Here, the main motivation is to study the fluctuations
and corresponding responses near the dynamic transition temperature.
In this paper, the fluctuations of dynamic order parameter and the energy
are studied as a function of temperature near the dynamic transition point.
The temperature variations of `susceptibility' and the `specific-heat' are
also studied near the transition point.
The temperature variation of the fluctuation of dynamic order
parameter and
that of the `susceptibility' are compared. Similarly, the
temperature variation of the fluctuation
of energy and
that of the `specific-heat' are compared. The paper is organised as
follows: the model and the simulation scheme are discussed in Section II,
the results are reported in Section III, and Section IV contains a summary
of the work.
\section{Model and simulation}
\noindent The Ising model with nearest neighbor ferromagnetic coupling
in the presence
of a time-varying magnetic field can be represented by the Hamiltonian
\begin{equation}
H = -\sum_{<ij>} J_{ij} s_i^z s_j^z - h(t) \sum_i s_i^z
\label{hm}
\end{equation}
Here, $s_i^z (=\pm 1)$ is Ising spin variable, $J_{ij}$ is the interaction
strength and $h(t) = h_0 \cos(\omega t)$
represents the oscillating magnetic field, where
$h_0$ and $\omega$ are the amplitude and the frequency
respectively of the oscillating field. The system
is in contact with an isothermal heat bath at temperature $T$. For simplicity
all $J_{ij} (= J > 0)$ are taken equal to
unity and the boundary condition is chosen to be periodic. The temperature
($T$) is measured in units of $J/K$, where $K$ is the Boltzmann constant
(here $K$ is set to unity).
A square lattice of linear size $L (=100)$ has been considered.
At any finite
temperature $T$ and for a fixed frequency ($\omega$)
and amplitude ($h_0$) of the
field, the microscopic
dynamics of this system has been studied here by Monte Carlo
simulation using Glauber single spin-flip dynamics
with a particular choice of the Metropolis rate of single spin-flip \cite{mc}.
Starting from an initial condition where all spins are up, each lattice site is
updated here sequentially and one such full scan over the entire lattice is
defined as the unit time step (Monte Carlo step or MCS).
The instantaneous magnetization
(per site),
$m(t) = (1/L^2) \sum_i s_i^z$, has been calculated. From the instantaneous
magnetization, the dynamic order parameter $Q = {\omega \over {2\pi}}
\oint m(t) dt$ (time averaged magnetization over a full cycle of the
oscillating field) is calculated.
Some of the transient loops have been discarded
to get the stable value of the dynamical quantities.
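A minimal Python sketch of this simulation scheme is given below; it is
illustrative rather than the code used here, with a smaller lattice and an
assumed field period of 100 MCS per cycle.
\begin{verbatim}
# Illustrative sketch: Metropolis single-spin-flip dynamics in an
# oscillating field h(t) = h0*cos(2*pi*t/P); one sequential sweep of
# the L x L lattice is one Monte Carlo step (MCS). Q is m(t)
# averaged over a full field cycle; early cycles are discarded.
import numpy as np

rng = np.random.default_rng(0)

def sweep(s, T, h):
    L = s.shape[0]
    for i in range(L):
        for j in range(L):
            nn = (s[(i+1) % L, j] + s[(i-1) % L, j]
                  + s[i, (j+1) % L] + s[i, (j-1) % L])
            dE = 2.0 * s[i, j] * (nn + h)        # J = 1
            if dE <= 0.0 or rng.random() < np.exp(-dE / T):
                s[i, j] *= -1
    return s.mean()                              # m(t) per site

def order_parameter(L=20, T=1.9, h0=0.2, P=100, cycles=12, skip=2):
    s = np.ones((L, L), dtype=int)               # all spins up
    q = []
    for _ in range(cycles):
        m = [sweep(s, T, h0 * np.cos(2*np.pi*t/P)) for t in range(P)]
        q.append(np.mean(m))
    return np.mean(q[skip:])                     # drop transients

print(order_parameter())
\end{verbatim}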
\section{Results}
\subsection{Temperature variations of susceptibility and
fluctuation of dynamic order
parameter }
The fluctuation of the dynamic order parameter is
$$\delta Q^2 = \left(<Q^2> - <Q>^2\right),$$
\noindent where the $< >$ stands for the averaging over various Monte Carlo
samples.
The `susceptibility' is defined as
$$\chi = -{d<Q> \over dh_0}.$$
Here, a square lattice of linear size $L$ (=100) has been
considered. $<Q^2>$ and $<Q>$ are calculated using MC simulation.
The averaging has been done over 100 different (uncorrelated) MC
samples.
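In code, these two quantities reduce to a sample variance and a
finite-difference derivative; the sketch below is illustrative (the step
\texttt{dh} is an assumed value, and \texttt{mean\_Q} stands for a routine
returning the sample-averaged $<Q>$ at a given $h_0$).
\begin{verbatim}
# Illustrative sketch: delta_Q2 as a sample variance over
# uncorrelated MC samples, and chi = -d<Q>/dh0 by central
# difference (dh is an assumed step).
import numpy as np

def delta_Q2(Q_samples):
    return np.asarray(Q_samples).var()   # <Q^2> - <Q>^2

def chi(mean_Q, h0, dh=0.01):
    """mean_Q(h0): sample-averaged <Q> at field amplitude h0."""
    return -(mean_Q(h0 + dh) - mean_Q(h0 - dh)) / (2.0 * dh)
\end{verbatim}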
The temperature variations of the fluctuation of $Q$, i.e.,
$\delta Q^2$, and of the `susceptibility' $\chi$ have
been studied here, and both are plotted in Fig. 1. From the figure
it is observed that both $\delta Q^2$ and $\chi$
diverge near the dynamic
transition point (where $Q$ vanishes).
This has been studied for two different values of field amplitude $h_0$
(Fig. 1a is for $h_0$ = 0.2 and Fig. 1b is for $h_0$ =0.1).
The dynamic transition temperatures $T_d(h_0)$, at which $\chi$ and
$\delta Q^2$ diverge, are 1.91$\pm0.01$ for $h_0$ = 0.2 and $2.15\pm0.01$
for $h_0$ = 0.1. These values of $T_d(h_0)$ agree with the phase
diagram estimated from the vanishing of $Q$.
The $\log_e(\chi)$ versus $\log_e(T_d - T)$ and
$\log_e(\delta Q^2)$ versus $\log_e(T_d - T)$
plots show (insets of Fig. 1) that
$\chi \sim (T_d-T)^{-\alpha}$ and $\delta Q^2 \sim (T_d-T)^{-\alpha}$.
For $h_0$ = 0.2, $\alpha \sim 0.53$
(inset of Fig. 1a) and
for $h_0$ = 0.1, $\alpha \sim 2.5$
(inset of Fig. 1b).
Results show that both $\chi$ and $\delta Q^2$
diverge near $T_d$ as a power law with the same exponent $\alpha$,
though there is a crossover region (where the effective exponent values are
different).
\subsection{Temperature variations of specific-heat and
fluctuation of energy}
The time averaged (over a full cycle) cooperative energy of the system
is
$$E = -(\omega/{2 {\pi} L^2}) \oint
\left(\sum_{<ij>} s_i^z s_j^z \right) dt,$$
\noindent and the fluctuation of the cooperative energy is
$$\delta E^2 =\left(<E^2> - <E>^2\right).$$
\noindent The `specific-heat' $C$ \cite{ma} is defined as the derivative of
the energy (defined above) with respect to the temperature, and
$$C = {d<E> \over dT}.$$
Here, also a square lattice of linear size $L$ (=100) has been
considered. $<E^2>$ and $<E>$ are calculated using MC simulation.
The averaging has been done over 100 different (uncorrelated) MC
samples.
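Analogously to the order-parameter case, the sketch below (illustrative;
\texttt{dT} is an assumed step and \texttt{mean\_E} stands for a routine
returning the sample-averaged $<E>$ at temperature $T$) computes the energy
fluctuation and the `specific-heat'.
\begin{verbatim}
# Illustrative sketch: delta_E2 as a sample variance, and the
# 'specific-heat' C = d<E>/dT by central difference in T.
import numpy as np

def delta_E2(E_samples):
    return np.asarray(E_samples).var()   # <E^2> - <E>^2

def specific_heat(mean_E, T, dT=0.01):
    """mean_E(T): sample-averaged <E> at temperature T."""
    return (mean_E(T + dT) - mean_E(T - dT)) / (2.0 * dT)
\end{verbatim}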
The temperature variation of the `specific-heat' has been studied \cite{ma},
and a prominent divergent behavior was found near the dynamic transition
point (where $<Q>$ vanishes). The temperature variation of the fluctuation
of energy, $\delta E^2$, has been studied and plotted in Fig. 2. From the figure
it is clear that the mean square fluctuation of energy ($\delta E^2$) and
the specific heat ($C$) both diverge near the dynamic transition point
(where the dynamic order parameter $Q$ vanishes).
This has been studied for two different values of field amplitude $h_0$
(Fig. 2a is for $h_0$ = 0.2 and Fig. 2b is for $h_0$ =0.1).
Here also (like the earlier case) the specific heat $C$ and $\delta E^2$
are observed to diverge at temperatures $T_d$.
The temperatures $T_d(h_0)$, at which $C$ and $\delta E^2$ diverge, are
1.91$\pm0.01$ for $h_0$ = 0.2 (Fig. 2a)
and 2.15$\pm$0.01 for $h_0$ = 0.1 (Fig. 2b).
These values also agree with the phase diagram estimated from the vanishing
of $Q$.
The $\log_e(C)$ vs. $\log_e(T_d - T)$ and
$\log_e(\delta E^2)$ vs. $\log_e(T_d - T)$ plots show (insets of Fig. 2) that
$C \sim (T_d-T)^{-\gamma}$ and $\delta E^2 \sim (T_d-T)^{-\gamma}$.
For $h_0$ = 0.2, $\gamma \sim 0.35$
(inset of Fig. 2a) and
for $h_0$ = 0.1, $\gamma \sim 0.43$
(inset of Fig. 2b).
As in the earlier case, the results show that both $C$ and $\delta E^2$
diverge near $T_d$ as a power law with the same exponent $\gamma$,
though there is a crossover region (where the effective exponent values are
different).
\bigskip
\section{Summary}
The nonequilibrium dynamic phase transition, in the kinetic Ising model
in the presence of an oscillating magnetic field, is studied by Monte
Carlo simulation.
Acharyya and Chakrabarti \cite{ac} observed that the complex susceptibility
components have peaks (or dips) at the dynamic transition point. Sides et
al. \cite{rik} observed that the fluctuation in the hysteresis loop area grows
(seems to diverge) near the dynamic transition point.
It has been observed \cite{ma} that the `relaxation time' and the
appropriately defined `specific-heat' diverge near the dynamic transition
point.
The mean square fluctuation of dynamic order parameter
and the `susceptibility' are
studied as a function of temperature, near the dynamic transition point.
Both show a power-law variation with respect to the reduced temperature
near the dynamic transition point, with
the same exponent values.
A similar observation has been made for the
mean square fluctuation of energy and the
`specific-heat'.
It appears that although the effective exponent values for the fluctuation
and the appropriate linear response differ considerably away from
the dynamic transition point $T_d$, they eventually converge and give
identical values as the temperature interval $|T_d-T|$ decreases and
falls within a narrow crossover region.
These numerical observations indicate that the fluctuation-
dissipation relation \cite{st}
holds good in this case of the nonequilibrium
phase transition in the kinetic Ising model.
However, at this stage there is no analytic support of the FDT
in this case.
Finally, it should be mentioned, in this context,
that experiments \cite{rpi1} on ultrathin
ferromagnetic Fe/Au(001) films have
been performed to study the frequency dependence
of hysteresis loop areas.
Recently, attempts have been made \cite{rpi2} to measure
the dynamic order parameter $Q$ experimentally,
in the same material, by extending their previous study \cite{rpi1}.
The dynamic phase transition
has been studied from the observed temperature variation of $Q$.
However, the detailed investigation
of the dynamic phase transitions by measuring
variations of associated response functions (like the ac susceptibility,
specific-heat, correlations, relaxations etc) have not yet been performed
experimentally.
\vspace {0.3 cm}
\section*{Acknowledgments}
The author would like to thank B. K. Chakrabarti for critical remarks and
for a careful reading of the manuscript.
The JNCASR, Bangalore, is gratefully acknowledged for financial support and
for computational facilities.
\section{Introduction}
The short scale size (50-100 km) electromagnetic fields in the ELF range are generated below Earth's ionosphere by a variety of phenomena such as lightning discharges \citep{helliwell73,uman87,inan85,berthelier08,master83,milikh95}, impulsive fields created by seismic events \citep{hayakawa06} or by ground based horizontal electric dipole antennas \citep{papadopoulos08}. These fields can interact with the E-region (90-120 km) of the Earth's ionosphere and generate currents and associated electromagnetic fields which will radiate in the presence of the conducting Earth. In fact, inspired by the physics of the quasi-static equatorial electrojet (EEJ), a novel concept of the conversion of an ELF ground based horizontal electric dipole antenna (HED) into a vertical electric dipole (VED) antenna in the E-region was proposed \citep{papadopoulos08}. At ELF frequencies, the VED radiates $10^5$ times more efficiently than an HED of the same dipole current moment \citep{field89}. In the case of the quasi-static EEJ, tidal
motions drive and maintain horizontal (zonal) electric fields
of the order .5-1 mV/m perpendicular to the ambient
magnetic field over long times (several minutes to hours)
and over a 600 km strip in the E-region of the
dip equatorial ionosphere.
This electric field drives a downward Hall current.
At steady state and to
zero order, current continuity requires that a vertical polarization electric field be built to prevent the downward Hall
current from flowing. This electric field is larger than the
zonal electric field by the ratio of the Hall-to-Pedersen
conductivity, approximately a factor 30, resulting in vertical electric fields in excess of 10 mV/m and associated
predominantly with eastern electrojet currents of more than
10 A/km. As noted by \citet{forbes81}, current continuity
requires the presence of a vertical current, with current
closure established by field aligned currents. These currents
result in ground based quasistationary magnetic fields of
100 nT or more.
The interaction of quasistatic electric fields with the
equatorial E-region leading to the generation of EEJ has been studied extensively both experimentally and theoretically
\citep{kelleybook,forbes81,rastogi89,onwumechilibook,rishbeth97}. However, the interaction of ELF range electromagnetic fields with $E$-region is relatively less understood. \citet{eliasson09} studied generation and penetration of the ELF current and electromagnetic fields into the equatorial E-region. They found that the interaction of the ELF pulsed and continuous wave fields with the equatorial E-region leads to the generation of both vertical and horizontal currents which penetrate into ionospheric layers as Helicon waves. It is the objective of this paper to study the physics of the interaction of the ELF electromagnetic field with off-equatorial E-region where the Earth's magnetic field makes finite angle $\theta$ with the horizontal. It is found that as the angle between Earth's magnetic field and horizontal is increased (going away from the equator), the currents and fields penetrate deeper into the ionospheric layers with increased wavelength, the magnitudes of horizontal (east-west) and vertical currents decrease near the boundary and the vertical electric field decreases drastically.
The paper is organized as follows. The next section contains the physics of the E-region, the numerical model, the boundary conditions and the simulation setup. Numerical fits for the real ionospheric conductivities used in the simulations are provided in this section. Section \ref{results} presents modelling results for various simulation parameters, viz., day-night time conditions, pulsed and continuous wave antenna fields and various values of the angle between the Earth's magnetic field and the horizontal. In section \ref{summary}, we summarize our results.
\section{Interaction Model and Simulation Setup}
The E-region of the Earth's ionosphere (90 to 120 km above the Earth surface) consists of partially ionized plasma magnetized by the Earth's magnetic field. In this region, ions are viscously coupled to neutrals through collisions ($\nu_{in} >> \omega_{ci}$) while electrons are strongly magnetized ($\nu_{en} << \omega_{ce}$). Here $\nu_{in}$ and $\nu_{en}$ represent ion-neutral and electron-neutral collision frequencies, and $\omega_{ci}$ and $\omega_{ce}$ represent ion and electron cyclotron frequencies respectively. The presence of the background Earth's magnetic field $\mathbf{B_0}$ drives currents perpendicular to the magnetic field, in addition to the parallel currents. This gives rise to the tensor conductivity of the ionosphere. The current is related to the electric field by generalized Ohm's law.
\begin{equation}
\mathbf{J}=\sigma_{||}\mathbf{E}_{||}+\sigma_P\mathbf{E}_{\perp}-\sigma_H\frac{\mathbf{E}\times\mathbf{B_0}}{B_0},
\end{equation}
where $\sigma_{||}$, $\sigma_P$ and $\sigma_H$ are the parallel, Pedersen and Hall conductivities respectively. The current parallel to the magnetic field $\mathbf{B_0}$ is controlled by $\sigma_{||}$, the current along the perpendicular electric field $\mathbf{E}_{\perp}$ (Pedersen current) by $\sigma_P$, and the current perpendicular to both $\mathbf{E}_{\perp}$ and $\mathbf{B_0}$ (Hall current) by $\sigma_H$. These conductivities depend on density, collision frequency and magnetic field, and are given by,
\begin{eqnarray}
\sigma_{||}&=&\epsilon_0\left(\frac{\omega_{pe}^2}{\nu_{en}}+\frac{\omega_{pi}^2}{\nu_{in}}\right)\\
\sigma_{P}&=&\epsilon_0\frac{\omega_{pe}^2}{\omega_{ce}}\left(\frac{\nu_{en}\omega_{ce}}{\omega_{ce}^2+\nu_{en}^2}+\frac{\nu_{in}\omega_{ci}}{\omega_{ci}^2+\nu_{in}^2}\right)\\
\sigma_{H}&=&\epsilon_0\frac{\omega_{pe}^2}{\omega_{ce}}\left(\frac{\omega_{ce}^2}{\omega_{ce}^2+\nu_{en}^2}-\frac{\omega_{ci}^2}{\omega_{ci}^2+\nu_{in}^2}\right)
\end{eqnarray}
Here it has been assumed that the plasma consists of electrons and a single ion species. In the E-region, where $\omega_{ce} >> \nu_{en}$ and $\omega_{ci} << \nu_{in}$, the dominant Hall conductivity gives rise to the helicon waves which mainly govern the plasma dynamics. In the F-region (above 120 km), where $\omega_{ce} >> \nu_{en}$ and $\omega_{ci} >> \nu_{in}$, the Hall conductivity decreases and the plasma dynamics is mainly diffusive. In our numerical modeling, we have approximated the real vertical profiles of day time conductivities \citep{forbes76} by numerical fits given by the following functions.
\begin{eqnarray}
\sigma_{||}&=&\frac{1}{a_{1,||}\exp(-z/L_{1,||})+a_{2,||}\exp(-z/L_{2,||})}\label{sigmapar_fit}\\
\sigma_{P}&=&\frac{1}{a_{1,P}\exp(-z/L_{1,P})+a_{2,P}\exp(z/L_{2,P})}\label{sigmaP_fit}\\
\sigma_{H}&=&\frac{1}{a_{1,H}\exp(-z/L_{1,H})+a_{2,H}\exp(z/L_{2,H})}\label{sigmaH_fit}
\end{eqnarray}
The values of the parameters $a$'s and $L$'s are listed in Table \ref{table1} and the profiles for day time conditions are plotted in Fig. \ref{geometry_sigma}b. For night time conditions we assume that the conductivities are decreased by a factor of five due to the decreased electron and ion densities. It can be seen that in the E-region, the Hall and Pedersen conductivities increase with altitude and peak around 110 km and 130 km respectively. The Hall conductivity dominates the Pedersen conductivity in the E-region while the Pedersen conductivity is dominant above 120 km.
\begin{figure}
\begin{center}$
\begin{array}{c}
(\mathrm{a}) \\
\includegraphics[width=.6\textwidth]{geometry1.eps}\\
(\mathrm{b})\\
\includegraphics[width=.6\textwidth]{conductivity.eps}
\end{array}$
\end{center}
\caption{(a) The geometry of the simulation. The ionospheric layer is above $z=90$ km and free space is below $z=90$ km. The simulation box extends horizontally from $x=-3000$ km to $x=3000$ km and vertically from $z=90$ km to $z=160$ km. The constant geomagnetic field $\mathbf{B}_0$ makes an angle $\theta$ with the positive $x$ direction. The ELF magnetic field $\mathbf{B}_{ant}$ which couples to the ionosphere at $z=90$ km is generated by an antenna placed at $x=z=0$. (b) Numerical fits of $\sigma_{||}$, $\sigma_{P}$ and $\sigma_H$ as given by \citet{forbes76} for day time conditions. For night time conditions, the conductivities are decreased by a factor of 5. (After \cite{eliasson09}.)}
\label{geometry_sigma}
\end{figure}
\begin{center}
\begin{table}[h]
\begin{tabular}{|l|l|l|l|}\hline
$a_{1,||}=3.07\times 10^9 \Omega$m & $L_{1,||}=5.36$ km & $a_{2,||}=0.63\times 10^3 \Omega$m & $L_{2,||}=18$ km\\\hline
$a_{1,P}=7.67\times 10^{12} \Omega$m & $L_{1,P}=5.36$ km & $a_{2,P}=0.99 \Omega$m & $L_{2,P}=18.8$ km\\\hline
$a_{1,H}=1.53\times 10^{11} \Omega$m & $L_{1,H}=5.36$ km & $a_{2,H}=0.0157\Omega$m & $L_{2,H}=10.67$ km\\\hline
\end{tabular}
\caption{Parameter values used in the numerical fits (\ref{sigmapar_fit})-(\ref{sigmaH_fit}) of the day time conductivity profiles.}
\label{table1}
\end{table}
\end{center}
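For reference, the fits (\ref{sigmapar_fit})-(\ref{sigmaH_fit}) with the parameters of Table \ref{table1} translate directly into the following sketch ($z$ in km, conductivities in S/m; night time values are obtained by dividing by 5):
\begin{verbatim}
import numpy as np

def sigma_par(z):
    return 1.0/(3.07e9*np.exp(-z/5.36) + 0.63e3*np.exp(-z/18.0))

def sigma_P(z):
    return 1.0/(7.67e12*np.exp(-z/5.36) + 0.99*np.exp(z/18.8))

def sigma_H(z):
    return 1.0/(1.53e11*np.exp(-z/5.36) + 0.0157*np.exp(z/10.67))

z = np.linspace(90.0, 160.0, 701)   # altitude grid of the simulation box
day_profiles = sigma_par(z), sigma_P(z), sigma_H(z)
\end{verbatim}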
The geometry of the simulation is shown in Fig. \ref{geometry_sigma}a. The ionospheric layer is treated as a two-dimensional layer
varying in the horizontal direction $x$ and the
vertical direction $z$, with no variation along $y$. In this model,
free space is below $z =z_0= 90$ km, while the ionospheric layer
extends vertically above $z = 90$ km. The simulation box covers the region from $z=90$ km to $z=160$ km and from $x=-3000$ km to $x=3000$ km. The ionospheric layer
and free space are magnetized by a constant external
geomagnetic field $\mathbf{B}_0$ which makes an angle $\theta$ with the positive
$x$ direction.
Similar to the previous work \citep{eliasson09}, we assume a very simple model for
the antenna, that of an equivalent infinite length wire in
the $y$ direction located at $x = z = 0$, so that,
\begin{equation}
\mathbf{B}_{ant}(x,z,t)=B_{ant}(t)\frac{\hat{x}\bar{z}-\hat{z}\bar{x}}{(\bar{x}^2+\bar{z}^2)}.\label{bant}
\end{equation}
where $B_{ant}(t)$ is the value of the antenna magnetic field at
the bottom of the E-region ($z = z_0$) and $x=0$, and the
normalized coordinates are $\bar{x} = x/z_0, \bar{y} = y/z_0$, and $\bar{z} = z/z_0$.
We have chosen two kinds of time dependence for $B_{ant}(t)$, namely, a pulsed antenna field and a continuous wave antenna field. The time dependence of
the antenna field for the two cases is shown in Fig. \ref{empulse}.
For the pulsed case we use an antenna field of the
form $B_{ant} = B_{0,ant} \exp[-(t-t_0)^2/2D_t^2]$ for $t < t_0$, $B_{ant} = B_{0,ant}$ for $t_0 \leq t < t_1$ and $B_{ant} = B_{0,ant} \exp[-(t-t_1)^2/2D_t^2]$ for $t \geq t_1$,
where the maximum amplitude $B_{0,ant} = 1$ nT is reached at
time $t_0 = 0.05$ s, and the pulse is switched off smoothly at $t_1 =
0.15$ s, using the pulse rise and decay time $D_t = 0.01$ s.
For the
continuous wave case we choose a 10 Hz antenna
field that is ramped up smoothly so that $B_{ant} = B_{0,ant}
\exp[-(t-t_0)^2/2 D_t^2]\sin(20\pi t)$ for $t < t_0$ and $B_{ant} = B_{0,ant}
\sin(20\pi t)$ for $t \geq t_0$, where the pulse reaches its maximum
amplitude $B_{0,ant} = 1$ nT at $t_0 = 0.5$ s, and the rise time is $D_t =
0.15$ s.
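These two drive signals translate directly into the following sketch ($t$ in seconds, field in nT):
\begin{verbatim}
import numpy as np

B0 = 1.0  # nT, maximum antenna field amplitude

def B_ant_pulse(t, t0=0.05, t1=0.15, Dt=0.01):
    # Gaussian rise, flat top between t0 and t1, Gaussian decay
    if t < t0:
        return B0*np.exp(-(t - t0)**2/(2*Dt**2))
    if t < t1:
        return B0
    return B0*np.exp(-(t - t1)**2/(2*Dt**2))

def B_ant_cw(t, t0=0.5, Dt=0.15):
    # 10 Hz continuous wave, ramped up smoothly until t0
    ramp = np.exp(-(t - t0)**2/(2*Dt**2)) if t < t0 else 1.0
    return B0*ramp*np.sin(20*np.pi*t)
\end{verbatim}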
It is convenient for numerical purpose to express the generalized Ohm's law in terms of the impedance tensor.
\begin{equation}
\mathbf{E}=\bar{\bar{\rho}}\mathbf{J}
\end{equation}
where $\bar{\bar{\rho}}$ is the impedance tensor which is obtained by inverting the conductivity tensor. For the background magnetic field making an angle $\theta$ with positive $x$-direction, it is given by,\\
\[
\bar{\bar{\rho}}=\left[\begin{array}{ccc}
\rho_{||}\cos^2\theta+\rho_P\sin^2\theta&\rho_H\sin\theta&(\rho_{||}-\rho_P)\sin\theta\cos\theta\\
-\rho_H\sin\theta&\rho_P&\rho_H\cos\theta\\
(\rho_{||}-\rho_P)\sin\theta\cos\theta&-\rho_H\cos\theta& \rho_{||}\sin^2\theta+\rho_P\cos^2\theta
\end{array}\right]
=\left[\begin{array}{ccc}
\rho_{11}&\rho_{12}&\rho_{13}\\
-\rho_{12}&\rho_{22}&\rho_{23}\\
\rho_{13}&-\rho_{23}&\rho_{33}
\end{array}
\right]\]\\
where $\rho_{||}=1/\sigma_{||}$, $\rho_{P}=\sigma_P/(\sigma_P^2+\sigma_H^2)$ and $\rho_{H}=\sigma_H/(\sigma_P^2+\sigma_H^2)$. The other equations governing the dynamics in the E-layer are Ampere's law and Farady's law.
\begin{eqnarray}
\nabla\times\mathbf{B}&=&\mu_0\mathbf{J}\\
\frac{\partial \mathbf{B}}{\partial t}&=&-\nabla\times\mathbf{E}
\end{eqnarray}
Combining Ampere's and Faraday's law with Ohm's law, we obtain an evolution equation for $\mathbf{B}$.
\begin{equation}
\frac{\partial \mathbf{B}}{\partial t}=-\frac{1}{\mu_0}\nabla\times[\bar{\bar{\rho}}.\nabla\times\mathbf{B}]
\label{evol_eq}
\end{equation}
Equation (\ref{evol_eq}) describes the Helicon wave dynamics when impedance tensor is nondiagonal and diffusive phenomena when impedance tensor is diagonal. Separating equation (\ref{evol_eq}) into components and using the condition $\nabla.\mathbf{B}=0$, we get three coupled equations in $B_x$, $B_y$ and $B_z$.
\begin{eqnarray}
\frac{\partial B_y}{\partial t}&=&-\frac{1}{\mu_0}
\frac{\partial}{\partial z}\left\{ -\rho_{11}\frac{\partial B_y}{\partial z}+\rho_{12}\left(\frac{\partial B_x}{\partial z}-\frac{\partial B_z}{\partial x}\right)+\rho_{13}\frac{\partial B_y}{\partial x}\right\}\nonumber\\
&&+\frac{1}{\mu_0}\frac{\partial}{\partial x}\left\{ -\rho_{13}\frac{\partial B_y}{\partial z}-\rho_{23}\left(\frac{\partial B_x}{\partial z}-\frac{\partial B_z}{\partial x}\right)+\rho_{33}\frac{\partial B_y}{\partial x}\right\}\label{evol_by}\\
\frac{\partial B_z}{\partial t}&=&-\frac{1}{\mu_0}\frac{\partial}{\partial x}\left\{ \rho_{12}\frac{\partial B_y}{\partial z}+\rho_{22}\left(\frac{\partial B_x}{\partial z}-\frac{\partial B_z}{\partial x}\right)+\rho_{23}\frac{\partial B_y}{\partial x}\right\}\label{evol_bz}\\
\frac{\partial B_x}{\partial x}&=&-\frac{\partial B_z}{\partial z}\label{divb}
\end{eqnarray}
Here $\nabla.\mathbf{B}=0$ is used in place of the $x$-component of the evolution equation (\ref{evol_eq}) to calculate $B_x$. For $\theta=0^o$, equations (\ref{evol_by})-(\ref{divb}) and the impedance tensor $\bar{\bar{\rho}}$ reduce to the forms which were studied earlier by \citet{eliasson09}.
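As a sketch of how the impedance tensor entering Eqs. (\ref{evol_by})-(\ref{divb}) can be assembled numerically at each grid point ($\theta$ in radians, conductivities in S/m):
\begin{verbatim}
import numpy as np

def impedance_tensor(sig_par, sig_P, sig_H, theta):
    # resistivities rho_||, rho_P, rho_H from the conductivities
    rho0 = 1.0/sig_par
    rhoP = sig_P/(sig_P**2 + sig_H**2)
    rhoH = sig_H/(sig_P**2 + sig_H**2)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([
        [rho0*c**2 + rhoP*s**2,  rhoH*s,  (rho0 - rhoP)*s*c],
        [-rhoH*s,                rhoP,     rhoH*c],
        [(rho0 - rhoP)*s*c,     -rhoH*c,   rho0*s**2 + rhoP*c**2]])
\end{verbatim}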
As boundary conditions at the plasma-free space boundary $z=z_0$, we use the continuity of the magnetic field $B_z$ and its $z$-derivative \citep{eliasson09}.
This also gives continuity of the parallel (to the boundary surface in the $x$-$y$ plane) electric field. A detailed discussion about the boundary conditions is given by \cite{eliasson09}.
\section{Results\label{results}}
\begin{figure}
\includegraphics{em_pulse.eps}
\caption{The time dependence of the antenna magnetic field $B_{ant}(t)$ used in the simulations. (a) pulsed antenna field with pulse width 0.1 sec. (b) continuous wave antenna field with frequency 10 Hz.}
\label{empulse}
\end{figure}
We have conducted a series of numerical studies of
the system (\ref{evol_by})-(\ref{divb}) with various parameters of interest, viz., day and night time conductivities, pulsed and continuous wave antenna field and different values of $\theta$.
In what follows we first present results for a fixed value of $\theta=5^o$ and vary the other parameters. Then we change the value of $\theta$ for all other parameters in order to see the $\theta$-dependence of the results. Finally, a distribution will be fitted to the $z$-integrated $J_y$ profile, which is useful for the calculation of the radiation from $J_y$.
\begin{figure} \includegraphics[width=\textwidth,height=.5\textheight]{varnew_theta5_day_pulse.1sec.eps}
\caption{The magnetic field components $B_x$, $B_y$ and $B_z$ (nT), the current density components $j_x$, $j_y$ and $j_z$ (nA/m$^2$) and the electric field components $E_x$, $E_y$ and $E_z$ (mV/m) for day time conditions, pulsed antenna field and $\theta=5^o$ at $t=0.1$ sec. (left column) and $t=1$ sec. (right column)\label{var_theta5_pulse_day}}.
\end{figure}
\begin{figure} \includegraphics[width=\textwidth,height=.5\textheight]{varnew_theta5_night_pulse.1sec.eps}
\caption{The magnetic field components $B_x$, $B_y$ and $B_z$ (nT), the current density components $j_x$, $j_y$ and $j_z$ (nA/m$^2$) and the electric field components $E_x$, $E_y$ and $E_z$ (mV/m) for night time conditions, pulsed antenna field and $\theta=5^o$ at $t=0.1$ sec. (left column) and $t=1$ sec. (right column)\label{var_theta5_pulse_night}}.
\end{figure}
Figs. \ref{var_theta5_pulse_day} and \ref{var_theta5_pulse_night} show spatial profiles of magnetic field, currents and electric fields for pulsed antenna field for the case $\theta=5^o$ for day and night time conductivities, respectively. The left column shows results at $t=0.1$ s when the antenna field is at its maximum (1 nT), and right column in the relaxation phase at $t=1$ s when the antenna field is zero.
It can be seen at $t=0.1$ s that the interaction of the antenna field with the ionospheric plasma at the lower boundary at $z=90$ km generates currents and associated electromagnetic field structures. The oblique wave penetration of these structures, with decreasing wavelength, into the deeper ionospheric layers up to $z \sim 110$ km can also be seen. The $x$-scale $\sim 100$ km of the structures is larger than the $z$-scale $\sim 10$ km. The amplitude of the vertical electric field $E_z \sim .6$ mV/m is one order of magnitude larger than those of the horizontal components $E_x$ and $E_y$. The vertical current $J_z$ has an upward and downward flow structure and its amplitude is 1-2 orders of magnitude smaller than those of the horizontal currents $J_x$ and $J_y$. The disparity between the magnitudes of $J_x$ and $J_z$, and the $x$- and $z$-scales, is consistent with the current continuity condition $\partial J_x/\partial x+\partial J_z/\partial z=0$. Another interesting feature to note here is that at $x=0$ and just below $z=90$ km the horizontal magnetic field component $B_x\sim 2$ nT, which is almost double the antenna magnetic field (1 nT), while the vertical magnetic field component $B_z$ is much smaller than the antenna field. This happens due to the shielding of the plasma interior from the incident magnetic field.
At the lower boundary, the antenna electric field $E_y$ drives the Pedersen current $J_{py}=\sigma_PE_y$ along $E_y$ and the Hall current $\mathbf{J}_{H\perp}$
in the $x$-$z$ plane perpendicular to the background oblique magnetic field $\mathbf{B_0}=B_0(\hat{x}\cos\theta+\hat{z}\sin\theta)$. A perpendicular electric field $\mathbf{E}_{\perp}$
develops to limit the magnitude of $\mathbf{J}_{H\perp}$ to the value required by the current continuity condition.
This $\mathbf{E}_{\perp}$ drives Hall current $J_{hy}=\sigma_HE_{\perp}$ in the negative $y$-direction.
The total current $J_y=J_{hy}+J_{py}$ generates the magnetic field component $B_x$ which cancels the antenna field inside the plasma while at the ionospheric boundary at $z=90$ km and $x=0$, it adds to the antenna field to almost double its value. The $z$-component of the antenna magnetic field is partially cancelled by the magnetic field produced by the induced current $J_y$.
Due to the current continuity, the perpendicular current is shunted parallel to $\mathbf{B_0}$. The $z$-component of this parallel current and of the Hall current $\mathbf{J}_{H\perp}$ give rise to the upward-downward flow structure of $J_z$. The current components in terms of electric field components can be written as,
\begin{eqnarray}
J_x&=&(\sigma_{||}\cos^2\theta+\sigma_P\sin^2\theta)E_x-\sigma_H\sin\theta E_y+(\sigma_{||}-\sigma_P)\sin\theta\cos\theta E_z\\
J_y&=&\sigma_H \sin\theta E_x+\sigma_P E_y -\sigma_H \cos\theta E_z\\ J_z&=&(\sigma_{||}-\sigma_P)\sin\theta\cos\theta E_x+\sigma_H\cos\theta E_y+(\sigma_{||}\sin^2\theta+\sigma_P\cos^2\theta)E_z
\end{eqnarray}
In the region 90 km $<z<$ 110 km, $\sigma_{||}/\sigma_H \sim 50-150 >> 1$ and $\sigma_{||}/\sigma_P \sim 2400-1900 >> 1$. Therefore, for small values of $\theta$ (for example, for $\theta=5^o$, $\cos\theta \sim 1$ and $\sin\theta \sim 0.08$) $J_x$ is mainly dominated by the parallel current as $E_x \sim E_y \sim 0.1 E_z$, while $J_z$ has contributions from both the parallel and Hall currents in the $x$-$z$ plane. For large values of $\theta$ close to $90^o$, $J_x$ and $J_z$ exchange their roles. The out-of-plane current $J_y$ is mainly Hall dominated as $\sigma_H >> \sigma_P$ for $z<120$ km and $E_z >> E_y$.
The phase speed of the wave penetration can be estimated from the whistler dispersion relation $v_{\phi}=\omega/k=k_{||}\rho_H/\mu_0$. This gives $v_{\phi}= 100$ km/s for an average value of $\rho_H=4000\ \Omega$m in the region $90$ km $<z<$ $110$ km and a half wavelength of 100 km giving $k_{||}=\pi/100=0.0314$ km$^{-1}$. This gives a penetration distance $\sim 10$ km by $t=0.1$ sec., as is seen in Fig. \ref{var_theta5_pulse_day}. From the whistler dispersion relation, $\lambda \propto \rho_H \sim 1/\sigma_H$ for given values of $\omega$ and $k_{||}$. Since $\sigma_H$ increases from $z=90$ km to $z=110$ km, the wavelength decreases, which is consistent with the simulations.
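This estimate can be checked with a few lines of arithmetic (SI units):
\begin{verbatim}
import numpy as np

mu0 = 4e-7*np.pi        # H/m
rho_H = 4000.0          # Ohm m, average for 90 km < z < 110 km
k_par = np.pi/100e3     # 1/m, from a half wavelength of 100 km

v_phi = k_par*rho_H/mu0       # whistler phase speed
print(v_phi/1e3)              # -> ~100 km/s
print(v_phi*0.1/1e3)          # -> ~10 km penetration by t = 0.1 s
\end{verbatim}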
In the relaxation phase ($t=1$ sec.), when the antenna field has decayed to zero, the sign of the structures has reversed and the magnitudes have decreased. The sign is reversed because in the decay phase of the incident pulse the antenna electric field changes its sign, which generates currents and electromagnetic fields in the opposite direction. The waves penetrate up to $z \approx 120$ km, beyond which the Pedersen conductivity dominates the Hall conductivity and the penetration becomes mainly diffusive.
For night time conditions (Fig. \ref{var_theta5_pulse_night}), the currents and electromagnetic fields penetrate deeper, up to $z \sim$ 120 km by $t=0.1$ sec., and the vertical scale length of the structures increases. This is due to the decreased (by a factor of 5) conductivities which increase the wavelength and phase speed of the waves as $\lambda,v_{\phi}\propto 1/\sigma_H$. The amplitudes of $J_x$ and $J_y$ decrease but not by a factor of 5, as the electric field amplitudes increase. In the relaxation phase, the structures have penetrated into the diffusive F-layer (above 120 km) up to the upper boundary of the simulation domain. The helicon waves, with a typical wavelength of $\sim 300$ km, propagate laterally in both directions. These waves are concentrated around the Hall conductivity maximum at $z=110$ km, which seems to guide the helicon waves.
\begin{figure}
\includegraphics[width=\textwidth,height=.5\textheight]{varnew_theta5_DayNight_freq10.eps}
\caption{The magnetic field components $B_x$, $B_y$ and $B_z$ (nT), the current density components $j_x$, $j_y$ and $j_z$ (nA/m$^2$) and the electric field components $E_x$, $E_y$ and $E_z$ (mV/m) for continuous wave antenna field and $\theta=5^o$ in the steady oscillatory state ($t=1.5$ sec.). The left and right columns are for day and night time conditions respectively. \label{var_theta5_freq_daynight}}.
\end{figure}
Fig. \ref{var_theta5_freq_daynight} shows results for a continuous wave antenna field of frequency 10 Hz and $\theta=5^o$ in the steady oscillatory state at $t=1.5$ sec. This figure shows features similar to the case of the pulsed antenna field, viz., vertical wave penetration with decreasing wavelength, and deeper penetration with larger wavelength for night time conductivities as compared to day time conductivities. However, the vertical scale length is smaller than in the case of the pulsed antenna field. The reason is that the Fourier spectrum of the pulsed antenna field contains frequencies lower than the frequency of the continuous wave antenna field. It can be seen from the wave dispersion relation that $\lambda \propto 1/\omega$, which gives larger wavelengths corresponding to the lower frequencies in the pulsed antenna field case.
\begin{figure}
\includegraphics[width=\textwidth,height=.5\textheight]{J_zprofiles_pulse.eps}
\caption{Vertical profiles of the current density components along the line $x=0$ at $t=0.1$ sec. for pulsed antenna field and various values of $\theta$. The left and right columns are for the day and night time conditions respectively. \label{J_pulse}}.
\end{figure}
\begin{figure}
\includegraphics[width=\textwidth,height=.5\textheight]{E_zprofiles_pulse.eps}
\caption{Vertical profiles of the electric field components along the line $x=0$ at $t=0.1$ sec. for pulsed antenna field and various values of $\theta$. The left and right columns are for the day and night time conditions respectively. \label{E_pulse}}.
\end{figure}
\begin{figure}
\includegraphics[width=\textwidth,height=.5\textheight]{J_zprofiles_freq.eps}
\caption{Vertical profiles of the current density components along the line $x=0$ at $t=1.5$ sec. for continuous wave antenna field and various values of $\theta$. The left and right columns are for the day and night time conditions respectively. \label{J_freq}}.
\end{figure}
\begin{figure}
\includegraphics[width=\textwidth,height=.5\textheight]{E_zprofiles_freq.eps}
\caption{Vertical profiles of the electric field components along the line $x=0$ at $t=1.5$ sec. for continuous wave antenna field and various values of $\theta$. The left and right columns are for the day and night time conditions respectively. \label{E_freq}}.
\end{figure}
\subsection{$\theta$-dependence}
Now we present results by varying $\theta$ in order to see how the features observed for a single value of $\theta=5^o$ change. The vertical profiles (along the line $x=0$) of the current density and electric field components are shown for various values of $\theta$ in Figs. \ref{J_pulse} and \ref{E_pulse} (pulsed antenna field), and in Figs. \ref{J_freq} and \ref{E_freq} (continuous wave antenna field). The first thing that can immediately be noted from these figures is that the vertical scale length of the structures increases with $\theta$; the structures become broader for larger values of $\theta$. Consequently, the peaks of the structures shift away from the boundary ($z=90$ km) and deeper penetration takes place with increasing values of $\theta$. The magnitudes of $J_y$ and $J_z$ decrease with $\theta$ near the lower boundary. The vertical scale increases rapidly up to $\theta=20^o$, but it is similar for $\theta=60^o$ and $90^o$, indicating an asymptotic scale for large values of $\theta$. For $\theta=60^o$ and $90^o$, the profiles of $J_x$ and $J_y$ almost follow each other while the profiles of $J_z$ have a similar scale but differ in magnitude. The profiles of $E_x$ and $E_y$ also almost follow each other for $\theta=60^o$ and $90^o$.
The amplitudes of the spatial oscillations of $E_x$ and $E_y$ increase with $\theta$, while that of $E_z$ decreases very rapidly. For example, near the lower boundary $z=90$ km, for the pulsed antenna field and day time conditions $E_y$ increases from $\sim -.05$ mV/m (for $\theta=0^o$) to $\sim -0.1$ mV/m (for $\theta=90^o$), and for the continuous wave antenna field and day time conditions it increases from $\sim -.05$ mV/m (for $\theta=0^o$) to $\sim -.5$ mV/m (for $\theta=90^o$). On the other hand, for the pulsed antenna field and day time conditions $E_z$ decreases drastically from $\sim 1.5$ mV/m for $\theta=0^o$ to very small values $\sim 10^{-4}$ mV/m for $\theta=90^o$, and for the continuous wave antenna field and day time conditions from $\sim 3$ mV/m for $\theta=0^o$ to $\sim 10^{-4}$ mV/m for $\theta=90^o$.
In order to understand the $\theta$-dependence, we have derived a local dispersion relation from Eqs. (\ref{evol_by})-(\ref{divb}) assuming uniform conductivities. Many of the observed features can be explained qualitatively from such a dispersion relation displayed below.
\begin{eqnarray}
\mu_0^2\omega^2+i\omega\mu_0[k_{\perp}^2(\rho_0+\rho_P)+2k_{||}^2\rho_P]&=&(k_{\perp}^2+k_{||}^2)[k_{\perp}^2\rho_P\rho_0+k_{||}^2(\rho_P^2+\rho_H^2)],\label{dispersion}
\end{eqnarray}
where $\omega$ is the frequency, $k_{\perp}=k_z\cos\theta-k_x\sin\theta$ and $k_{||}=k_x\cos\theta+k_z\sin\theta$ ($k_x$ and $k_z$ are the wave numbers along the $x$ and $z$ directions respectively). In the collisionless limit ($\rho_0\rightarrow 0, \rho_P \rightarrow 0$), this dispersion relation reduces to the whistler dispersion relation $\mu_0\omega=kk_{||}\rho_H$. We have solved the dispersion relation (\ref{dispersion}) numerically for $k_z$ for the given values of $\omega=20\pi$ sec.$^{-1}$, $k_x=\pi/200$ km$^{-1}$ and the values of $\rho_0,\rho_P$ and $\rho_H$ at $z=110$ km where the Hall conductivity is maximum. The dispersion relation (\ref{dispersion}) is a fourth order polynomial in $k_z$ and hence its numerical solution gives four roots for $k_z$. Two of the four roots are the complex conjugates of the other two. We discard the roots with negative imaginary parts as they give solutions exponentially growing in $z$. The real and imaginary parts of the other two roots as functions of $\theta$ are plotted in the top (root 1) and bottom (root 2) panels of Fig. \ref{dispfig}. As $\theta$ increases, the real and imaginary parts of both roots approach asymptotic values for large values of $\theta$. For root 1, the real part of $k_z$ drops down to very small values while the imaginary part is $\sim 0.5$ km$^{-1}$ for large values of $\theta$. This root has a very long wavelength but decays over a small distance $\sim 2$ km.
For root 2, the imaginary part of $k_z$ attains very small values while the real part is $\sim 0.4$ km$^{-1}$ for large values of $\theta$, giving a large decay distance and a wavelength $\sim 15$ km. This root will be visible in the simulations. The real part for root 2 decreases rapidly up to $\theta\sim 20^o$ and then slowly up to $\theta=90^o$. This is consistent with the simulations, in which the vertical scale length increases significantly up to $\theta=20^o$ but there is not much difference in the vertical scales for large values of $\theta$, e.g., the scale lengths for $\theta=60^o$ and $90^o$ in Figs. \ref{J_pulse}-\ref{E_freq}.
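A minimal sketch of this root finding (expanding the quartic symbolically with sympy and solving it with numpy; inputs in SI units, with the parameter values quoted above):
\begin{verbatim}
import numpy as np
import sympy as sp

def kz_roots(omega, kx, theta, rho0, rhoP, rhoH, mu0=4e-7*np.pi):
    # local dispersion relation, quartic in k_z; keep the two roots
    # with positive imaginary part (solutions decaying in z)
    kz = sp.symbols('kz')
    kperp = kz*sp.cos(theta) - kx*sp.sin(theta)
    kpar = kx*sp.cos(theta) + kz*sp.sin(theta)
    lhs = mu0**2*omega**2 + sp.I*omega*mu0*(
        kperp**2*(rho0 + rhoP) + 2*kpar**2*rhoP)
    rhs = (kperp**2 + kpar**2)*(
        kperp**2*rhoP*rho0 + kpar**2*(rhoP**2 + rhoH**2))
    coeffs = sp.Poly(sp.expand(lhs - rhs), kz).all_coeffs()
    roots = np.roots([complex(c) for c in coeffs])
    return roots[roots.imag > 0]
\end{verbatim}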
\begin{figure}
\includegraphics[width=\textwidth,height=.5\textheight]{kz_z110_freq10.eps}
\caption{$\theta$-dependence of the real and imaginary parts of the vertical wave number $k_z$ obtained from the solution of the dispersion relation (\ref{dispersion}) for given values of $\omega=20\pi$ s$^{-1}$, $k_{x}=\pi/200$ km$^{-1}$ and values of $\rho_P,\rho_H$ and $\rho_0$ at $z=110$ km. \label{dispfig}}.
\end{figure}
\begin{figure}
\includegraphics[width=\textwidth,height=.5\textheight]{jy_time_day_freq10.eps}
\caption{(a) $z$-integrated current $\bar{J}_y(x,t)$, (b) time variation of $\bar{J}_y(x=0,t)$ and its functional fit $p(t)$ defined in the text, (c) $\bar{J}_y(x,t)$ as a function of $x$ and the fitting distribution $\bar{J}_y^{fit}(x,t)=a^2p(t)/(x^2+a^2)$ (represented by circles) with $a=100$ km at different times and (d) total currents along $y$ obtained by integrating $\bar{J}_y(x,t)$ and $\bar{J}_y^{fit}(x,t)$ along $x$. These results are for $\theta=0^o$, day time and continuous wave antenna field.\label{jy_fit}}.
\end{figure}
\subsection{A current distribution fit for $J_y$}
It has been shown by \citet{park73} that for the purpose of the calculation of the electromagnetic fields produced by a current distribution $J_y\sim a/(x^2+a^2)$ above the conducting Earth, $J_y$ can be replaced by a line current along $y$ raised above its actual height by the half width ($a$) of the distribution. For this purpose, we integrate along $z$ the two dimensional current distribution $J_y$ obtained from the simulations and fit a distribution $\sim a/(x^2+a^2)$ to the integrated current. The $z$-integrated current $\bar{J}_y(x,t)=\int J_y(x,z,t) dz$ (mA/m) is shown in Fig. \ref{jy_fit}a for $\theta=0^o$, day time conditions and a continuous wave antenna field. For a given time, $\bar{J}_y$ has an $x$-profile which has its peak at $x=0$. This peak value $\bar{J}_y(x=0,t)$ oscillates in time with amplitude 1.4 mA/m in the steady oscillatory state, shown in Fig.~\ref{jy_fit}b, which also shows the fit $p(t)$ for $\bar{J}_y(x=0,t)$. This fit $p(t)$ has the same functional form as the continuous wave antenna field except for the amplitude in the steady oscillatory state, which is 1.4 mA/m for $p(t)$. Using $p(t)$, $\bar{J}_y(x,t)$ is fitted with the function,
\begin{equation}
\bar{J}_y^{fit}(x,t)=\frac{a^2p(t)}{a^2+x^2},
\end{equation}
where $a=100$ km. The plots of $\bar{J}_y(x,t)$ and $\bar{J}_y^{fit}(x,t)$ at two different times (when $p(t)$ is at its positive and negative peaks) are shown in Fig. \ref{jy_fit}c. The total current $I_y$ flowing along $y$, obtained by integrating $\bar{J}_y(x,t)$ and $\bar{J}_y^{fit}(x,t)$ along $x$ and shown in Fig. \ref{jy_fit}d, oscillates with amplitudes 310 Amps and 440 Amps respectively. The mismatch between the total currents is due to the slight mismatch between $\bar{J}_y(x,t)$ and $\bar{J}_y^{fit}(x,t)$ for $x>100$ km.
The $z$-integrated current shown in Fig. \ref{jy_fit} is for $\theta=0^o$, day time conditions and continuous wave antenna field. However the peak value of $p(t)$ (1.4 mA/m), the fit $\bar{J}_y^{fit}$ and the total current $I_y$ do not change at all (except the time dependence of $p(t)$ which is different for pulsed and continuous wave antenna field) on changing the simulation parameters, viz., day and night time conductivities, pulsed and continuous wave antenna field, and values of $\theta$. This can be understood as follows. Since $\partial/\partial x << \partial/\partial z$ and $B_z$ is small as compared to $B_x$, $J_y$ can be approximated as,
\begin{equation*}
J_y\approx\frac{1}{\mu_0}\frac{\partial B_x}{\partial z}.
\end{equation*}
On integrating from lower boundary at $z=90$ km to the upper boundary we get,
\begin{equation*}
\bar{J}_y(x,t)=\int J_y(x,z,t) dz =\frac{1}{\mu_0}B_x(x,z=90 km,t),
\end{equation*}
using $B_x=0$ at the upper boundary. Due to the magnetic shielding, the value of $B_x \approx$ 1.8 nT at $x=0$ and $z=90$ km (when the antenna field is at its maximum) irrespective of the simulation parameters. This gives the peak value of $\bar{J}_y(x=0,t)\approx 1.4$ mA/m, which is the same as in Fig. \ref{jy_fit}b. The fit of $J_y$ along $x$ is basically related to the $x$-dependence of the $x$-component of the antenna field ($B_{x,ant}$) for a given $z$ (see Eq. \ref{bant}). Since the peak value 1.4 mA/m and the $x$-dependence of $B_{x,ant}$ remain the same for all the simulation parameters, the total current $I_y$ also remains the same.
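Both numbers can be reproduced with elementary arithmetic, using $\int_{-\infty}^{\infty} a^2\,dx/(a^2+x^2)=\pi a$ (SI units):
\begin{verbatim}
import numpy as np

mu0 = 4e-7*np.pi
a = 100e3             # m, half width of the fitted distribution
p_max = 1.8e-9/mu0    # A/m, peak of J_y bar from B_x ~ 1.8 nT
print(p_max)          # -> ~1.4e-3 A/m = 1.4 mA/m
print(np.pi*a*p_max)  # -> ~440 A, the fitted-current amplitude
\end{verbatim}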
\section{Summary\label{summary}}
We have investigated the interaction of the ELF electromagnetic fields generated by an antenna placed at the Earth's surface with the E-region of the Earth's ionosphere for various parameters, viz., day and night time conductivities, pulsed and continuous wave antenna field, and different values of $\theta$. The interaction leads to the generation of the horizontal currents
$J_x$ and $J_y$ with magnitudes up to a few hundred nA/m$^2$ and vertical currents $J_z$ with magnitudes up to 7 nA/m$^2$ in the E-region depending upon the various simulation parameters. The associated electric fields have magnitudes up to $E_x \sim 0.6$ mV/m, $E_y \sim$ 1 mV/m and $E_z \sim$ 8 mV/m. The wave penetration of the currents and fields, with a typical wavelength of the order of $\sim 10$ km, into the deeper ionospheric layers up to $z\sim120$ km takes place due to the dominance of the Hall conductivity over the Pedersen conductivity in the region $90$ km $<z<120$ km, and the penetration becomes diffusive for $z>120$ km. The scale length $\lambda$ of the vertical penetration decreases with $z$ up to $z=110$ km, as $\lambda \propto 1/\sigma_H$ and $\sigma_H$ increases up to $z=110$ km. The increase in the wave speed due to the reduced conductivities during night time leads to deeper penetration. For the continuous wave antenna field, the vertical scale length is smaller than in the case of the pulsed antenna field because of the presence of lower frequencies in the latter case. The vertical scale length increases rapidly with $\theta$ up to $\theta=20^o$ and then slowly approaches an asymptotic value for large values of $\theta$. This leads to deeper penetration with increasing values of $\theta$. The magnitudes of $J_y$ and $J_z$ near the lower boundary decrease with $\theta$. The vertical electric field $E_z$ decreases drastically with $\theta$, e.g., from 1.5 mV/m ($\theta=0^o$) to $10^{-4}$ mV/m ($\theta=90^o$) for the pulsed antenna field and day time conditions.
The $z$-integrated current along $y$ is fitted with a current distribution $a^2p(t)/(x^2+a^2)$, where $a=100$ km and $p(t)$ is the peak value of the distribution. Such a current distribution can be replaced by a line current raised above its actual height by the half width ($a$) of the current distribution for the purpose of the calculation of radiation \citep{park73}. The maximum total current along $y$ (310 Amps) remains the same for the various simulation parameters due to the magnetic shielding, which gives $B_x \sim 1.8$ nT at $x=0$ and $z=90$ km for all the simulation parameters.
\section{Temperature Parameter}\vspace{-12pt}
We explored the effects of the temperature parameter (Fig.~\ref{fig:temp_vs_time}).
Temperature is a value between 0 and 100 that users specify to control how strongly frame-to-frame differences should be encouraged. Temperature controls the weight of the video stability loss ($w_c$ in Eq.~\ref{eq:img_consistence}) and the standard deviation of the noise added to frames when initializing the subsequent frame with the previous one.
The exact relationship between temperature and these parameters was hand-tuned based on observations of the effects of the parameters.
Objects and settings with low temperature barely move frame-to-frame, while with high temperature the scene completely changes (Fig.~\ref{fig:temp_vs_time}).
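Since the hand-tuned mapping itself is not reproduced here, the following is only an illustrative sketch of the monotone trade-off it implements (the functional forms and constants are hypothetical):
\begin{verbatim}
import numpy as np

def temperature_to_params(temp):
    # temp in [0, 100]; returns (stability-loss weight w_c,
    # std of the noise added when initializing the next frame)
    t = np.clip(temp, 0.0, 100.0)/100.0
    w_c = 1.0 - t        # hot -> weak stability loss, scene changes
    noise_std = 0.1*t    # hot -> noisier frame initialization
    return w_c, noise_std
\end{verbatim}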
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/temp_vs_time.png}
\caption{\small Sampling 6 (out of 60) frames from generated videos using two input text prompts and varying the temperature parameter.}
\label{fig:temp_vs_time}
\end{figure}
\section{Resolution and Aspect Ratio}\vspace{-12pt}
A user specifies the resolution of the generated video a priori. CLIP requires $224\times224$ resolution input images; however, with the augmentation step, video frames can be cropped and resized to fulfill this requirement. Our approach operates with arbitrary resolutions, but the resolution does impact both the content and the appearance of the generated video frames. Lower resolutions produce simpler and smoother video frames, while high resolutions are noisy, sparse, and do not align to the text prompt as well as medium resolutions (Fig.~\ref{fig:resolutions}). To investigate our post-processing model's involvement in altering the appearance of generated frames at different resolutions, we resized the same image to multiple resolutions prior to post-processing (Fig.~\ref{fig:diff_res_cyclegan}). At low resolutions, our post-processing model, CycleGAN, smooths the images greatly but has little denoising effect at high resolutions.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/res.png}
\caption{\small Generating video frames with varying resolutions.
The vertical pixel resolution is shown above each generated frame with the post-processed frame displayed above the pixels that were directly optimized. The language prompts used: (top) ``A cat swimming in the ocean" and (bottom) ``A koala playing the piano on mars".}
\label{fig:resolutions}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/cyclegan_v_res.png}
\caption{\small A video frame with a vertical height of 256 pixels was generated with the language description ``Albert Einstein dancing in the club". The image, prior to post-processing with CycleGAN, was scaled to different resolutions to investigate CycleGAN's denoising abilities on different sized images.}
\label{fig:diff_res_cyclegan}
\end{figure}
\end{document}
\section{Introduction}
Superstring theories enjoy a number of interesting
properties at first sight not shared by ordinary
four-dimensional field theories.
For a superstring theory,
(1) the basic entity is
a string of size $O(10^{-32}$) cm, with massless levels and excitation
energies $O(10^{19}$) GeV;
(2) loop corrections are ultraviolet
finite and quantum gravity is well defined;
(3) the fundamental dynamical
variables consist of the spacetime
$x^\mu(\sigma,\tau)$ and the
internal $\psi^i(\sigma,\tau)$ fields, all as functions
of the worldsheet coordinates $\sigma$ and $\tau$; (4)
these variables propagate as independent
free fields throughout the worldsheet, in a
manner dependent on the topology but not on
the geometry of the worldsheet
(reparametrization and conformal invariance);
(5) an external photon of momentum
$p$ and wave function $\epsilon_\mu(p)$ is inserted into
the string through a vertex operator $\epsilon(p)\!\cdot\!
[\partial_\tau x (\sigma,\tau)]\exp[ip\!\cdot\!x(\sigma,\tau)]$, a form which
is
fixed by conformal invariance; (6) conformal invariance leads to local
(Veneziano) duality of the scattering amplitude.
A scattering process is described by
one or very few string diagrams. In particular, for elastic
scattering in the tree approximation, one string
diagram (the Veneziano amplitude)
gives rise simultaneously to all
the $s$-channel and the
$u$-channel exchanges.
In contrast, in ordinary four-dimensional field theories (QFT),
($1'$) the basic entities are point particles;
($2'$) loop corrections
contain ultraviolet divergences and quantum gravity is
non-renormalizable;
($3'$) the fundamental dynamical
variables are fields $\psi^i(x)$ of the four-dimensional
spacetime coordinates $x^\mu$; ($4'$) these fields propagate
freely only between vertices, where interactions take place; ($5'$)
external photons are inserted into a Feynman diagram through the
photon operator $ \epsilon(p)\!\cdot\!A (x)\exp[ip\!\cdot\!x]$;
($6'$) there are
many distinct Feynman diagrams contributing to a scattering
amplitude. For elastic scattering, $s$- and $u$-channel exchanges
are given by different diagrams that must be added up together.
In the `low-energy' limit when $E\ll 10^{19}$ GeV, a string
of dimension $10^{-32}$ cm is indistinguishable from a point,
energy levels $O(10^{19}$) GeV are too high to matter,
so one would expect a string theory to reduce to an ordinary
field theory of zero masses. In fact, explicit calculations
have been carried out \cite{1,2} to show that a one-loop $n$-gluon amplitude
in a string theory does reduce to corresponding results in field theory.
In order to go beyond one loop or the $n$-gluon amplitude, where no
simple string
expression is available, it is better to proceed in a
different way. Since the string properties (3)--(6)
are quite different from the field-theory properties ($3'$)--($6'$),
the equivalence of string and QFT at low energies
is not immediately obvious. The
purpose of this series of papers is, among other things, to make these
connections.
There are two motivations for doing so. At a theoretical
level, one can hope to gain new insights into gauge and
gravitational theories from the string arrangement.
For example, according to (3) and ($3'$), spacetime and internal
coordinates
are treated on an equal footing in a string, but not so
in QFT. This asymmetry makes {\it local}
gauge transformations in QFT awkward to deal with and practical
calculations difficult to obtain. It would therefore be nice
to be able to reformulate gauge and gravitational theories
in the symmetrical way of a string.
Conversely, one can hope that the successful multiloop treatment of
QFT, when written in a string-like manner,
can give hints useful for multiloop
string calculations.
On the practical side,
string-like organization of a field-theoretical amplitude allows
the spinor helicity technique \cite{3,4}
to be used on multiloop diagrams as easily as for tree diagrams \cite{5};
it also enables color, spin,
and momentum degrees of freedom to propagate independently, a separation
that
leads to efficient simplifications in actual calculations,
so much so that amplitudes not computable by ordinary means
can be obtained
when organized in this novel way.
Examples of this include the Parke-Taylor $n$-gluon amplitude \cite{6},
the factorization of identical-helicity photons produced
in the $e^+e^-$ annihilation into $\mu $-pairs \cite{4},
the computation of the one-loop $n$-photon amplitude
of identical helicities \cite{7}, and the calculation of the $n$-gluon
one-loop amplitudes \cite{1,2,8}.
In the low energy limit, the variable $\sigma$ along the string is
frozen, so the dynamical fields in (3) are effectively functions of
$\tau$ alone. To convert ($3'$) to (3), one must be able to free
the dependence of $\psi^i$ on $x$, to make them both independent
functions of some {\it proper time} $\tau$. This can be accomplished by
using the Schwinger-parameter representation for the field-theoretic
scattering amplitude \cite{5}. In this representation, every vertex $i$ in
a Feynman diagram
is assigned a proper time $\tau_i$, and each propagator $r$ is assigned
a
Schwinger parameter $\alpha_r$. If $r=(ij)$ is a line connecting
vertices $i$ to $j$, then $\alpha_r=|\tau_j-\tau_i|$.
If we regard the Feynman diagram as an electric circuit
with resistances $\alpha_r$, then $x(\tau_i)$ can be interpreted as
(the four-dimensional) voltage at the vertex $i$ \cite{9}, thus freeing
it to be an independent function of $\tau$. Spacetime flow is thus
analogous to
the
current flow of an electric network, or equivalently the change of the
electrical potential from point to point.
Color flows can be isolated by creating color-oriented vertices in
Feynman
diagrams \cite{5}. One color-oriented vertex may be related to another by
twising, thus creating twisted Feynman diagrams in much the same
way like twisted open strings. The color subamplitudes so isolated
with these color-flow factors are gauge invariant.
With massless fermions,
chirality and helicities are conserved and this
allows free and unobstructed spin flows.
This is the essence of the spinor helicity technique which is applicable
to photons and gluons as well, for a spin-1 particle can kinematically
be regarded as the composite of two spin-${1\over 2}$ particles. The Schwinger
proper-time formalism allows the spinor helicity technique to continue
to work in loop diagrams \cite{5}; otherwise the loop momentum would be there
to
break chirality conservation and to obstruct the free flow of spins.
The spacetime, color, and spin flows thus obtained approximate the
properties
(3) and (4) of the string.
Moreover, using {\it differential circuit
identities} \cite{5}, external electromagnetic vertices can be converted
from ($5'$) to (5). What remains to be considered in completing the
string organization of field theory
is then the property of
{\it local duality}. Historically it was this unusual feature, first
seen in the Veneziano model \cite{10}, that marked the beginning of
string theory. To what extent this interesting property is
retained in field theory is therefore an interesting topic to study.
We propose to make a first attempt in that direction in the present
paper.
Duality for tree diagrams
is considered in Sec.~2, where its exact meaning
in field theory is also discussed.
For the sake of simplicity we shall confine that section
to a scalar theory where
a scalar photon field $A$ is coupled to a charged scalar meson field
$\phi$ via the Lagrangian $e\phi^*\phi A$, but it turns out
that once we solve the duality problem here it is solved in
quantum electrodynamics as well. To prepare for discussions of
QED and multiloop amplitudes, we review in Sec.~3 the
Schwinger-parameter representation for a field-theoretic scattering
amplitude which provides the main tool for further discussions.
Duality for the multiloop scalar theory is discussed in Sec.~4,
and duality for QED is discussed in Sec.~5. The problem of QCD
is much harder and we shall defer that to a future publication.
Finally, a concluding
section appears in Sec.~6.
\section{Duality for Tree Diagrams in a Scalar-Photon Theory}
Duality in a string theory follows from its conformal invariance,
which in the special case of a four-point amplitude can be used to
fix the (complex) worldsheet positions of three of the external lines to
be
$0,1$ and $\infty$, and the fourth one
to be $x\in[0,1]$. A particularly
simple example is the Veneziano amplitude \cite{10}
in the Mandelstam variables $s$
and $u$,
\begin{eqnarray}
A(s,u)&&=-B(-u,-s)=-\int_0^1x^{-u-1}
(1-x)^{-s-1}dx \nonumber\\
&&=-{\Gamma(-s)\Gamma(-u)\over \Gamma(-s-u)}\ .
\end{eqnarray}
This amplitude has poles
when either $s$ or $u$ is a
non-negative integer (in units of $[M_P\sim O(10^{19})$ GeV]$^2$),
but there are no
simultaneous $s$- and $u$-channel poles. The amplitude can be
expanded either as a sum of $s$-channel poles,
represented purely by $s$-channel exchange Feynman diagrams, {\it or} a
sum
of $u$-channel poles, represented purely by $u$-channel
exchange diagrams.
There is no need to {\it add} both the $s$-channel and the
$u$-channel diagrams, as is necessary in ordinary quantum field
theories.
The $u$-channel amplitude is obviously equal to the $s$-channel
amplitude,
and both are equal to the single integral in (2.1).
This is {\it duality}.
These results can be obtained directly from
a pole expansion of the
Euler Gamma functions, or from the integral representation for the
Beta function.
In the latter case, an expansion of the integrand about $x=0$ gives rise
to the $u$-channel poles, and an expansion of the integrand about $x=1$
gives rise to the $s$-channel poles.
At present energies, $|s|, |u|\ll 1$ (in units of $M_P^2$), so only
the massless poles contribute, giving rise to
\begin{equation}
A(s,u)\simeq {1\over s} +{1\over u}\ .
\end{equation}
The $u$-channel pole comes from the divergence of the integral near
$x=0$
when $u=0$, and the $s$-channel pole comes from the divergence of the
integral near $x=1$ when $s=0$.
In this form, the amplitude does not {\it appear} to be `dual' anymore,
because
{\it both} the $s$-channel and the $u$-channel poles are {\it summed},
instead of having a {\it single} expression like (2.1), where only a sum
of the $s$-channel {\it or} a sum of the $u$-channel poles are present.
Nevertheless, appearances are deceiving, because (2.2) follows
mathematically from (2.1), which is dual.
In other words, the $u$-channel poles in (2.2)
can be formally obtained by an infinite sum of massive $s$-channel
poles,
and (2.2) is as dual as it can be at low energies.
It is instructive for later discussions to
obtain (2.2) directly from the integral representation of (2.1).
For that purpose, divide the integral in $x$ into two
halves at the midpoint
$x={1\over 2}$. Since $|s|,|u|\ll 1$, the contribution of the integral comes
mainly from $x=0$, so we can put $(1-x)^{-s-1}\simeq 1$ there.
Similarly the
term $x^{-u-1}$ can be ignored in the second integral.
Next, make the transformation
$y=\ln(2x)$ in the first integral, so that $y\in[-\infty,0]$ there, and
the
transformation $y=-\ln[2(1-x)]$ in the second integral, so that
$y\in[0,\infty]$ there.
Then for $|s|,|u|\ll 1$, the integral in (2.1) becomes
\begin{eqnarray}
A(s,u)&&\simeq-\int_{-\infty}^0 dy\exp(-uy)-\int_0^\infty dy\exp(sy)
\nonumber\\
&&={1\over u}+{1\over s}\ ,
\end{eqnarray}
which is the same as (2.2).
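This low-energy limit is easy to verify numerically; for spacelike $s,u<0$ the integral in (2.1) converges and can be compared directly with (2.2) (a sketch):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

s, u = -0.01, -0.02   # |s|, |u| << 1 in units of M_P^2

A = -quad(lambda x: x**(-u - 1)*(1 - x)**(-s - 1), 0.0, 1.0)[0]
print(A, 1.0/s + 1.0/u)   # -> both close to -150
\end{verbatim}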
\begin{figure}
\vskip -2 cm
\centerline{\epsfxsize 4.7 truein \epsfbox {diagram1.ps}}
\nobreak
\vskip -7.5 cm\nobreak
\vskip .1 cm
\caption{Compton scattering diagrams,
in which $s=(p_1+p_2)^2$ and $u=(p_2-p_3)^2$}
\label{fig1}
\end{figure}
If one were to start directly from a massless scalar field theory
$\phi^*\phi
A$ (where all fields are scalar and massless), then the `Compton
scattering'
amplitude given by Fig.~1 is identical to (2.2). In that sense the
field-theoretic amplitude is already dual, or as dual as it can be
at the present energy range.
One might still be unsatisfied with this remark
about duality, and point out that
the original dual amplitude in (2.1) is given by a {\it single}
integral, whereas in (2.3) this is given by the sum of {\it two}
integrals.
Since (2.3) comes from (2.1), it must be possible to write it as a single
integral as well. All that we have to do is to
define a function $P=\theta(-y)(-uy)+\theta(y)(sy)$, then
\begin{equation}
A(s,u)=-\int_{-\infty}^\infty dy\exp(P)\ ,
\end{equation}
which is of course equivalent to the Veneziano integral (2.1) at the
present energy range. We
shall refer to expressions of this type, where
the sum of a {\it number of diagrams} is represented
by a {\it single} (possibly multi-dimensional) integral,
as {\it dual expressions}.
What allows the dual expression (2.4) to be written is not so much the
explicit
form of the integrands in (2.3), but that the two integrals there have
non-overlapping ranges in $y$. Given that, it is always possible to
define a
common integrand $\exp(P)$ so that the two integrals can be combined
into one.
Similar reasoning shows that dual expressions can be written for other
processes in a scalar-photon theory.
Since we are imitating the low energy limit of strings, we may simplify
the writing by assuming all particles to be
massless. In that case,
the Schwinger-parameter representation for a scalar propagator is
\begin{equation}
{1\over q^2+i\epsilon}=-i\int_0^\infty d\alpha\exp(i\alpha q^2)\ ,
\end{equation}
and the variable $\alpha$ is called a Schwinger proper-time parameter.
Using this, we can obtain (2.3) from (2.5) simply by letting
$i\alpha=-y$, $q^2=u$ in the first term, and $i\alpha=y$,
$q^2=s$ in the second term. The fact that we
can get the Veneziano integral representation
(2.3) from the Schwinger representation
confirms the claim that the Schwinger-parameter representations
are string-like.
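The representation (2.5) itself is easily checked numerically by shifting $q^2$ slightly into the upper half plane, which implements the $i\epsilon$ prescription (a sketch):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

q2 = -2.5 + 0.5j   # q^2 + i*epsilon (modest epsilon for convergence)

f = lambda a: -1j*np.exp(1j*a*q2)
re = quad(lambda a: f(a).real, 0, np.inf)[0]
im = quad(lambda a: f(a).imag, 0, np.inf)[0]
print(re + 1j*im, 1.0/q2)   # -> the two values agree
\end{verbatim}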
Consider now a more complicated example, Fig.~2, in
which charged particles scatter to produce $m$ (scalar) photons from one
charged line and $n$ photons from another.
We have drawn only one of the $(m+1)!(n+1)!$ possible diagrams; others
are obtained from it by permuting the photon lines.
\begin{figure}
\vskip .5 cm
\centerline{\epsfxsize 4.7 truein \epsfbox {diagram2.ps}}
\nobreak
\vskip -10.5 cm\nobreak
\caption{A tree diagram for multiphoton
emission from charged-particle scattering}
\label{fig2}
\end{figure}
Let us first establish some common notations.
Assign to each vertex of a Feynman diagram a {\it proper time} $\tau$,
as illustrated in Fig.~2. The
Schwinger proper-time parameters $\alpha$ are then given by differences
of the proper times. Specifically,
if $r=(ij)$ is an internal line between vertices $i$ and $j$, then
$\alpha_r=|\tau_i-\tau_j|$. In this way,
all the proper-time differences are determined but translational
invariance
prevents the origin of proper time from being fixed. Let
\begin{eqnarray}
\int_a^b d\tau_{[12\cdots n]}&&\equiv
\int_a^bd\tau_1\int_{\tau_1}^bd\tau_2
\cdots\int_{\tau_{n-1}}^bd\tau_n\ ,\nonumber\\
\int_a^b d\tau_{[12(345)678]}&&\equiv \int_a^b d\tau_{[12]}
\int_{\tau_2}^bd\tau_{[678]}
\left(\prod_{i=3}^5\int_{\tau_2}^{\tau _6}d\tau_i\right)\ ,\nonumber\\
\langle\int d\tau_{[12\cdots n]}\rangle&&\equiv\lim_{T\to\infty}{1\over 2T}
\int_{-T}^{T}d\tau_{[12\cdots n]}\ ,\nonumber\\
\langle\int d\tau_{[12(345)678]}\rangle&&\equiv\lim_{T\to\infty}{1\over 2T}
\int_{-T}^T d\tau_{[12(345)678]}\ ,\nonumber\\
\langle\int d\tau_{(12\cdots n)}\rangle&&\equiv \langle\int d\tau_{[(12\cdots
n)]}\rangle
\ .
\end{eqnarray}
In short, the integration variables enclosed between square
brackets are ordered, and those between round brackets are unordered.
We can now return to Fig.~2. Its amplitude is
\begin{equation}
A=(-i)^{n+m+2}\langle\int d\tau_{[12\cdots 0\cdots m]}\rangle\langle\int
d\tau'_{[12\cdots 0\cdots n]}\rangle\exp(iP)\ ,
\end{equation}
for some quadratic function $P$ of the external momenta obtained by
using (2.5)
on each propagator. The detailed form of $P$ does not concern us at the
moment.
The other diagrams are obtained by permuting the photon lines
in Fig.~2, so their amplitudes are all given by something like (2.7),
but with
the $\tau$ and $\tau'$ integration regions permuted separately. The
detailed form of the quadratic function $P$ may change from region to
region
but again we do not have to worry about it.
Since the integration regions of these different diagrams do not
overlap,
it is possible to define a common $P(\tau,\tau')$
equal to the individual $P$'s
in their respective regions. In this way, all the $(m+1)!(n+1)!$
diagrams can be summed up to get a single dual expression
\begin{equation}
A_{sum}
=(-i)^{n+m+2}\langle\int d\tau_{(012\cdots m)}\rangle\langle\int
d\tau'_{(012\cdots
n)}
\rangle\exp(iP)\ ,
\end{equation}
in which the integration regions are completely unordered.
The reasoning can obviously be extended to any tree diagram in a
scalar-photon
theory. In every case, each charged line provides a platform for
ordering
the photon lines attached to it. Different diagrams correspond to
different
permutations of these photon lines, so they correspond to different
integration regions in the proper times. A sum into a single
dual expression in which the proper time integration regions are
unordered is
clearly possible. Furthermore, with minor modifications to be discussed
in Sec.~4, essentially the same consideration works
for multiloop diagrams as well.
There are three remarks to be made. First of all, it is
interesting to note that the relation between
a dual expression and a Feynman diagram
is very much like the relation between
a Feynman diagram and an old-fashioned diagram.
Recall that a Feynman diagram with $n$ vertices is made up
of a sum of $n!$ old-fashioned diagrams, corresponding to the $n!$
possible (real) time orderings of its vertices.
Similarly, a dual `diagram' consists of
a sum of Feynman diagrams, and they
differ from one another by the {\it proper-time} orderings of their
vertices.
Secondly, proper-time orderings in QED diagrams are very simple and
natural,
because there are conserved charged lines
along which the photon vertices can be proper-time ordered. In contrast,
in a
neutral scalar $\phi^3$ theory, or in a pure gluon QCD, all lines
are equivalent and there are no obvious ways to proper-time order
\begin{figure}
\vskip -.5cm
\centerline{\epsfxsize 4.7 truein \epsfbox {diagram3.ps}}
\nobreak
\vskip -13 cm\nobreak
\caption{ Multiphoton emission from a
charged line}
\vskip .1 cm
\end{figure}
\noindent the vertices,
especially in multiloop diagrams. This is one of the difficulties
one encounters in QCD.
Thirdly, there is the question of how useful these dual expressions are.
That naturally depends on the details of the diagrams one tries to sum,
and how simple the resulting integrand $\exp(iP)$ is. For example,
consider the emission of scalar photons shown in Fig.~3, where
$p$ and $k_i$ are massless but $p'$ may be off-shell.
Using (2.5), and expressing the Schwinger parameters $\alpha $
as differences of the proper-time parameters $\tau _i$,
the exponent $P$ in the integrand $\exp(iP)$ becomes
\begin{eqnarray}
P&&=\sum_{i=1}^{n-1} \alpha _i(p+\sum_{j=1}^ik_j)^2 \nonumber\\
&&=2\sum_{i=1}^{n}(\tau _i-\tau _{n})p\!\cdot\!k_i
+2\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}(\tau _j-\tau _{n})k_i\!\cdot\!k_j\ .
\end{eqnarray}
By using momentum conservation, this can also be written in
a more symmetric form:
\begin{eqnarray}
P=&&(\tau_1-\tau_{n})p'\!\cdot\!p+\sum_{i=1}^{n}(\tau_i-\tau_{n})p'\!\cdot\!k_i
- \nonumber\\
&&\sum_{i=1}^{n}(\tau_1-\tau_i)p\!\cdot\!k_i-{1\over 2}\sum_{i\not=j=1}^{n}
|\tau_i-\tau_j|k_i\!\cdot\!k_j\ .
\end{eqnarray}
The function $P$ for a permuted diagram can be obtained
from these expressions by permuting the photon momenta.
In general,
it is quite impossible to obtain a closed analytic expression for
its dual sum $A_{sum}$.
However, in the eikonal approximation where the
photon momenta are considered
small, $O(k_i\!\cdot\!k_j)$ terms can be neglected from (2.9), then
\begin{equation}
P\simeq 2\sum_{i=1}^n(\tau _i-\tau _{n})p\!\cdot\!k_i\ .
\end{equation}
The amplitude for Fig.~3 is then proportional to
\begin{equation}
A=(-i)^{n-1}\int_{\tau_n}^\infty d\tau_{[n-1,n-2,\cdots,2,1]}\exp(iP)\
,
\end{equation}
where $\tau_n$ is completely arbitrary. This freedom can be exploited
to render the amplitude
$A$ more symmetrical, if we multiply and divide it by
\begin{equation}
\left[2p\!\cdot\!\left(\sum_{i=1}^nk_i\right)\right]^{-1}=-i\int_0^\infty
d\tau_n\exp\left(i\tau_n
2p\!\cdot\!\sum_{j=1}^n
k_j\right)\ .
\end{equation}
Then
\begin{equation}
A=2p\!\cdot\!\left(\sum_{i=1}^nk_i\right)(-i)^n\int_0^\infty
d\tau_{[n,n-1,\cdots,1]}\exp(i\tilde P)\ ,
\end{equation}
with
\begin{equation}
\tilde P=2\sum_{i=1}^n \tau_ip\!\cdot\!k_i\ .
\end{equation}
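(To pass from (2.12) to (2.14), multiply (2.12) by
$2p\!\cdot\!\sum_ik_i$ times its inverse, represent the inverse by
(2.13), and note that the exponents combine as
$P+2\tau_np\!\cdot\!\sum_jk_j=2\sum_i\tau_ip\!\cdot\!k_i=\tilde P$,
while the extra $\tau_n$ integration extends the ordered measure from
$\int_{\tau_n}^\infty d\tau_{[n-1,\cdots,1]}$ to
$\int_0^\infty d\tau_{[n,n-1,\cdots,1]}$.)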
Since $\tilde P$ is completely symmetrical in all the $k_i$, it is
identical
in all the permuted diagrams, so the dual sum of the $n!$
permuted diagram is
\begin{eqnarray}
A_{sum}&&=2p\!\cdot\!\left(\sum_{i=1}^nk_i\right)(-i)^n\int_0^\infty
d\tau_{(n,n-1,\cdots,1)}\exp(i\tilde P)\nonumber\\
&&=2p\!\cdot\!\left(\sum_{i=1}^nk_i\right)\prod_{j=1}^n{1\over 2
p\!\cdot\!k_j}\ ,
\end{eqnarray}
which is the well-known eikonal expression.
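The last step is a quick check using only the unordered measure and the
usual $i\epsilon$ prescription: since round brackets denote unordered
integrations, the $\tau_j$ integrals factorize,
$$(-i)^n\int_0^\infty d\tau_{(n,n-1,\cdots,1)}\exp(i\tilde P)
=\prod_{j=1}^n\left[-i\int_0^\infty d\tau_j
\exp(2i\tau_jp\!\cdot\!k_j)\right]
=\prod_{j=1}^n{1\over 2p\!\cdot\!k_j}\ .$$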
Looking at this example, we see that there are two important ingredients
to make it successful.
The first is that $\tilde P$ has an identical functional form in every
one of the $n!$ integration regions. We shall refer to an integrand
of this kind, one that has the same functional form in all integration
regions, as {\it symmetrical}.
The second ingredient is that the
final integrals in (2.16) are simple enough to be computed analytically.
It turns out that the first ingredient is relatively easy to come by.
This is because the functional form of $P$ can be altered, either by
using momentum conservation to
substitute one external momentum by the negative sum of all others,
or by changing something like $\tau_i-\tau_j$ to $(\tau_i-\tau_k)+
(\tau_k-\tau_j)$. Quite often by making these changes one can manipulate
$P$ into a symmetric form. For example, eq.~(2.10) is completely
symmetrical in the indices $i=2$ to $n-1$ although (2.9)
is not. Yet, without the second
ingredient, there is really not much point in achieving the first.
This is clearly seen by comparing (2.10) with (2.9). Without the
eikonal approximation, it is impossible to obtain $A_{sum}$
in either form, in spite of the symmetry of (2.10).
Even numerically it is not clear that
a symmetric $P$ is easier
to compute, especially if its functional form is forced to
be very complicated when it is made symmetric.
\section{Schwinger-parameter Representation}
Every scattering amplitude can be written in the form
\begin{equation}
A=\left[{-i\mu^\epsilon\over
(2\pi )^{d}}\right]^{\ell }\int
\prod _{a=1}^{\ell }(d^{d}k_{a}){S_{0}(q,p)\over
\prod _{r=1}^{N}(-q_{r}^{2}+m_{r}^{2}-i\epsilon )}\ ,
\end{equation}
where $d=4-\epsilon$ is the dimension of spacetime,
$k_{a}\ (1\le a\le\ell)$ are the loop momenta,
$q_{r}, m_{r}\ (1\le r\le N)$
are the momenta and masses of the internal lines, and $p_{i}\ (1\le i\le
n)$ are the outgoing external momenta.
Since we will be mainly interested in massless field theories, all
$m_r^2$
will be set equal to 0 in the following.
The numerator function
$S_{0}(q,p)$ contains everything except the denominators of the
propagators. Specifically, it is the product
of the vertex factors, numerators of propagators,
wave functions of the external lines,
symmetry factor, and the signs associated with closed fermion loops.
All the $i$'s and $(2\pi )$'s have been included in the factor before
the
integral.
By introducing a Schwinger proper-time
parameter $\alpha_{r}$ for each internal line to represent its scalar
propagator
as in (2.5),
the loop integrations in (3.1) can be
explicitly carried out to obtain the Schwinger-parameter representation
\cite{9}
\begin{equation}
A=\int [D \alpha ]\Delta (\alpha )^{-d/2}S(q,p)\exp[iP]\
,\end{equation}
where
\begin{eqnarray}
\int [D \alpha ]\equiv&&\left[{(- i)^{d/2}\mu^\epsilon
\over (4 \pi)^{d/2}}\right]^{\ell }i^{N}
\int _{0}^{\infty }(\prod _{r=1}^{N}d\alpha _{r})\ ,\nonumber\\
P=&&\sum_{r=1}^N\alpha_rq_r^2\equiv\sum_{i,j=1}^nZ_{ij}(\alpha)p_i\!\cdot\!p_j\
,\end{eqnarray}
\begin{equation}
S(q,p)\equiv \sum _{k\ge 0}S_{k}(q,p)\ .\end{equation}
In spite of the same notation, the momenta $q_r$ in (3.2)
to (3.4) cannot be the same as those in (3.1), since the
loop-momentum integrations have now been carried out. Instead, $q_r$
is to be interpreted
as the current flowing through the $r$th line of an electric circuit
given
by the Feynman diagram, where $p_i$ are the outgoing currents and
$\alpha_r$
is
the resistance of the $r$th line. With this interpretation, $P$ in
(3.2) and
(3.3) becomes the power consumed by the circuit, and $Z_{ij}$ is then
the
impedance
matrix. On account of current conservation, $\sum_{i=1}^np_i=0$, $q_r$,
$P$,
and hence the amplitude $A$
are invariant under the {\it level transformation} $Z_{ij}\to
Z_{ij}+\xi_i+\xi_j$
for any arbitrary $\xi_i$. This enables us to choose an impedance matrix
with
$Z_{ii}=0$. Some of the formulas below, including (3.7), (3.11), and
(3.16),
are not valid without this condition.
Unless otherwise specified, this is
a choice we will adopt throughout.
Note that these and the following formulas are equally valid for tree
diagrams, or a combination of trees and loops.
Suppose the numerator function $S_{0}(q,p)$ is a
polynomial in $q$ of degree
$j$. Then $S_{k}(q,p)$ is defined to be
a polynomial in $q$ of degree $j-2k$, obtained from $S_{0}(q,p)$ by
contracting $k$ pairs of $q$'s in all possible ways and
summing over all the contracted results. The rule for contracting a
pair of $q$'s is:
\begin{equation}
q^{\mu }_{r}q^{\nu }_{s}\to -{i\over 2}H_{rs}(\alpha)g^{\mu \nu }
\equiv q^{\mu }_{r}\sqcup q^{\nu }_{s}\ .\end{equation}
The circuit quantities in (3.2) to (3.5), including
$\Delta,\ q_r,\ P,\ Z_{ij}$ and $ H_{rs}$, can all be
obtained directly from the Feynman diagram \cite{9}. For example,
the formula for the impedance matrix and the function $\Delta$ are
\begin{equation}
\Delta=\sum_{T_1}\prod^\ell \alpha \ ,
\end{equation}
\begin{equation}
Z_{ij}=-{1\over 2}\Delta^{-1}\sum_{T_2^{(ij)}}\prod^{\ell+1}\alpha\ .\end{equation}
These formulas have the following meaning. An $\ell$-loop diagram can
be turned into a tree by cutting $\ell$ appropriate lines.
$\Delta$ in (3.6) is obtained by summing over the set $T_1$
of all such cuts, with
the summand being the product of the $\alpha $'s of the cut lines
in each case. For tree diagrams, where $\ell=0$, by definition $\Delta=1$.
Similarly, let $T_2^{(ij)}$ be the set of all cuts
of $\ell+1$ lines so that the diagram turns into two disconnected
trees, with vertex $i$ in one tree and vertex $j$ in another.
Then $-2\Delta\!\cdot\!Z_{ij}$ in (3.7) is given by the sum over $T_2^{(ij)}$,
with
the summand being the product of $\alpha $'s of the cut lines.
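As a simple illustration of (3.6), anticipating the one-loop example of
Sec.~4: for a diagram consisting of a single charged loop with $n$
internal lines, cutting any one line turns the loop into a tree, so
$T_1$ consists of the $n$ single-line cuts and
$$\Delta=\sum_{r=1}^n\alpha_r\ ,$$
in agreement with the value $\Delta=T$ quoted for Fig.~5 in Sec.~4.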
Besides satisfying Kirchhoff's law, the electric-circuit
quantities obey a number of {\it differential circuit identities} \cite{5}:
\begin{equation}
{\partial \over\partial \alpha _{r}}P(\alpha ,p)=q_{r}^{2}\ ,\end{equation}
\begin{equation}
{\partial \over\partial \alpha _{s}}q_{r}(\alpha ,p)
=H_{rs}q_{s}\ ,\end{equation}
\begin{equation}
{\partial \over\partial \alpha
_{t}}H_{rs}(\alpha)=H_{rt}(\alpha)H_{ts}(\alpha)\
.\end{equation}
Moreover, the {\it contraction function} $H_{rs}=H_{sr}$
is `conserved' at each vertex as if the external
currents were absent, {\it i.e.,}
if $\sum_{r\in
V}q_r=p$ is obeyed at some vertex, then $\sum_{r\in V}H_{rs}=0$ for all
$s$.
In particular, if $q_r$ does not involve $\alpha$'s as is the case when
it
is a branch of a tree, then $H_{rs}=0$ for all $s$.
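A minimal check of these identities: for a single internal line carrying
a fixed momentum $q$ (a branch of a tree), $P=\alpha q^2$, so (3.8) is
immediate, while $q$ has no $\alpha$-dependence and (3.9) then requires
$H_{rs}=0$, consistent with the statement just made.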
So far all quantities are expressed as functions of $p_i$ and
$\alpha_r$.
As in the case for tree diagrams, we can assign each vertex with a
proper time $\tau_i$ and consider $\alpha_r=|\tau_i-\tau_j|$ if
$r=(ij)$.
We can then convert all $\alpha$-integrations into $\tau$-integrations.
\begin{figure}
\vskip -0 cm
\centerline{\epsfxsize 4.7 truein \epsfbox {diagram4.ps}}
\nobreak
\vskip -13 cm\nobreak
\caption{Cubic vertex $C_a$ and seagull
vertex $Q_a$ of scalar electrodynamics}
\vskip .1 cm
\end{figure}
In scalar QED, there are two kinds of vertices: the cubic vertex
$C_a=e \epsilon (p_a)\!\cdot\!(q_{a'}+q_{a''})$ in Fig.~4(a) and the seagull
vertex
$Q_a=2e^2\epsilon(p_1)\!\cdot\!\epsilon(p_3)$ in Fig.~4(b). We shall call a
cubic
vertex `external', and perhaps less confusingly of {\it type-a}, if
it consists of one external photon line and two internal charged-scalar
lines. A type-a vertex $a$ has a string-like representation \cite{5}
\begin{equation}
C_a=-ie \epsilon (p_a)\!\cdot\!D_a(iP)\ ,\end{equation}
where
\begin{equation}
D_aP\equiv \partial_a{\partial P\over \partial p_a}\ , \quad
\partial_a\equiv{\partial\over\partial
\tau_a}\ .\end{equation}
Unfortunately, the same representation is not true for internal
cubic vertices.
To make (3.12) useful, we shall define $D_a$ to operate on
functions of the form $f(\tau)\!\cdot\!\prod_rq_r\exp(iP)$ like a
derivation,
{\it i.e.,} like a first derivative satisfying the product rules but
not like a second derivative. To complete the definition, we must also
define $D_a$ when it operates on $f(\tau ), P$ and $q_r$. For each of
these three elementary operations, it will be $D_a=\partial_a(\partial/\partial
p_a)$
as in (3.12). This leads to $D_a f(\tau )=0$ and it can be shown that
\cite{5}
\begin{equation}
D_aq_r=H_{ar}\ , \quad D_a^\mu D_b^\nu P
=2g^{\mu \nu }H_{a'b'}\ ,\end{equation}
where $b$ is another type-a vertex, and $a'\not= r\not= a''$.
Eq.~(3.11) makes it possible to replace a vertex $C_a$ by an operation
involving $D_a$. Eq.~(3.13) shows that the necessary contractions
(3.5) can automatically be accommodated as well.
Consequently, a scalar QED amplitude with $n_a$ vertices of type-a
can be written as \cite{5}
\begin{eqnarray}
A=&&\int [D\alpha]\Delta^{-d/2}(-ie)^{n_a}[\epsilon (p_1)\!\cdot\!D_1]
[\epsilon (p_2)\!\cdot\!D_2]\cdots\nonumber\\
&&[\epsilon (p_{n_a})\!\cdot\!D_{n_a}]S^{int}(q,p)\exp(iP),\end{eqnarray}
where $S^{int}(q,p)$ is given by the product of the non-type-a
vertices and terms generated from their mutual contractions.
Moreover, it is true that \cite{5}
\begin{equation}
\partial_a\Delta=0
\end{equation}
for a type-a vertex,
so it does not matter whether $\Delta$ in (3.14) is put before or after
the $D_a$'s. Another useful relation to know is
\begin{equation}
\partial_a Z_{ij}=0\end{equation}
provided $i\not=a\not=j$. This relation can be used to show how the
string-like vertex changes under a gauge transformation,
when $\epsilon (p_a)$ is replaced by $p_a$. In that case, remembering
that $Z_{aa}=0$, (3.3) and (3.16) give
\begin{eqnarray}
C_a&\to -ie\partial_a\left(p_a\!\cdot\!{\partial P\over\partial p_a}\right)=
-2ie\partial_a\left(p_a\!\cdot\!\sum_iZ_{ai}p_i\right)&\nonumber\\
&=-ie\partial_a\sum_{i,j}Z_{ij}p_i\!\cdot\!p_j=-ie\partial_aP\ .&
\end{eqnarray}
\section{Multiloop Duality for the Scalar Theory}
In the scalar-photon theory with interaction $\phi ^* \phi A$, the most
general amplitude is given by (3.2), with $S=1$ and $P$
given by (3.3) and (3.7). The Schwinger parameters $\alpha $
will be expressed as differences of the proper-time parameters $\tau $,
and the proper times will again be ordered along the charged
lines as in the tree cases. The only new problem here is where to
begin the ordering in the case of a charged-scalar loop.
\begin{figure}
\vskip -0 cm
\centerline{\epsfxsize 3.0 truein \epsfbox {diagram5.ps}}
\nobreak
\vskip -5 cm\nobreak
\caption{An $n$-photon one-loop diagram}
\end{figure}
Since the origin of the proper time is never determined by the
$\alpha $'s, it can be chosen arbitrarily, say at the position
marked `0' in Figs.~5 and 6. We must now insert into (3.2) a factor
\begin{equation}
1=\int_{0}^\infty dT \delta (\sum_{loop} \alpha -T)\end{equation}
for every charged loop, where the sum is taken over
all the $\alpha $'s in the loop.
So for Fig.~5, the $\alpha $-integrations can be replaced by
\begin{equation}
\prod_{i=1}^n\left(\int_0^\infty d\alpha _i\right)=\int_0^\infty {dT\over T}
\int_0^T d \tau _{[12\cdots n]}\equiv
\langle\int d \tau _{[12\cdots n]}\rangle\ ,\end{equation}
\begin{figure}
\vskip -0 cm
\centerline{\epsfxsize 4.7 truein \epsfbox {diagram6.ps}}
\nobreak
\vskip -9.5 cm\nobreak
\caption{A complicated multiloop scattering
diagram}
\vskip .1 cm
\end{figure}
\noindent
where $\alpha _i=\tau _{i+1}-\tau _i$ for $i\le n-1$, and
$\alpha _n=T-(\tau _n-\tau _1)$.
Strictly speaking, there is an inconsistency in the definition
of $\langle\int d \tau _{[\cdots]}\rangle$ between (4.2)
and (2.6), but
in practice (4.2) is always used for closed charged loops and (2.6)
is always used for open charged lines.
One can now obtain dual amplitudes by summing over all photon
permutations
in exactly the same way as before.
For example, for Fig.~5 and its permuted diagrams, the sum is
proportional to
\begin{equation}
A_{sum}=\langle\int d \tau _{(12\cdots n)}\rangle
\Delta^{-d/2}\exp(iP)\ .\end{equation}
For Fig.~6 and its permuted diagrams, the sum is proportional to
\begin{eqnarray}
A_{sum}=&&\langle\int d \tau _{(12\cdots m+a)}\rangle
\langle\int d \tau' _{(12\cdots n+b)}\rangle\langle\int d \tau'' _{(12\cdots
a+b+c)}\rangle\nonumber\\
&&\Delta^{-d/2}\exp(iP)
\ .\end{eqnarray}
As in tree amplitudes, how useful such dual expressions are depends
on the complexity of $P$ and $\Delta$ in each case. In the
eikonal approximation (2.11)---(2.16), the integrals can be
carried out because $\tilde P$ is common for all integration regions
{\it and}
because it has a simple dependence on the proper times. Now
the first condition is not hard to meet, if the diagrams to be summed
have a high degree of symmetry. For example, for the well-studied
case of Fig.~5 \cite{8,11,5}, eqs.~(3.6) and (3.7) show that
$\Delta=\sum_{i=1}^n
\alpha _i=T$, and
\begin{equation}
Z_{ij}=-|\tau _i-\tau _j|\left(T-|\tau _i-\tau _j|\right)/2T
\equiv G_B(\tau _i,\tau _j)
\ ,\end{equation}
which has a symmetric form in the sense that it has the same functional
form in
all integration regions. However, one is still unable to evaluate the
integral (4.3) analytically because of its relatively complicated
$\tau$-dependences.
For the reason discussed at the end of Sec.~2, $P$ can be made symmetric
or partially symmetric in many cases.
One way of seeing this is the following.
If we modify Fig.~3 by adding on a charged-scalar
propagator at each end, then analogous to (2.10) one can produce a form
of $P$ which is completely symmetrical in all the
photon lines. We have already seen that the $P$ in Fig.~5 is completely
symmetrical in all its photon lines as well. Now every Feynman diagram
can be built up from a number of open charged lines with
its attached photons, and a number of one-charged-loop diagrams with
its attached photons, by joining together pairs of photon lines.
Mathematically, one obtains the resulting amplitudes by multiplying
these $P$'s of the components,
together with the propagators of the joined photon lines
in the form of (2.5), then carries out the momentum integrations
of the joined photon lines. Since the dependences on these joined
momenta are
Gaussian, such integrations can be carried out, and one again obtains
a result of the form (3.2), with $S(q,p)=1$, and with $P$ of (3.2)
a function of the component $P$'s. Since the component $P$'s are
symmetric in all the photon lines, they will still be symmetric
in the remaining, unjoined, external photon lines, so in this way
one can obtain a symmetric form for the final
$P$. This mechanism for obtaining
a symmetric form has
been discussed recently \cite{12} in a slightly different language.
However, the symmetric form obtained this way is usually much more
complicated than those obtained directly from (3.3), (3.6), and (3.7).
It is so complicated that it is unlikely to be integrated analytically,
nor will it lead to simpler numerical evaluations in most cases.
See the end of Sec.~2 for more discussions along these lines.
Though one can generally produce other simpler symmetric forms, they are
still not simple enough for the integrations to be carried out
explicitly.
For these reasons there seems to be no particular advantage to having
a symmetric form, and we will not use one most of the time.
\section{QED}
The most important ingredient for obtaining a dual expression
is the presence of a conserved charged line along which to order
the interaction vertices. This is independent
of the spin of the particles involved, hence one can
obtain dual amplitudes in QED just as easily as those in the
scalar-photon
theory.
The factor $S(q,p)$ in (3.2) is no longer 1 for QED. For scalar
QED, for example, it is made up of the product of the vertex factors
$C_i$ and $Q_i$ and their contractions. See eq.~(3.11) and the
paragraph above it. This however makes no difference to the
construction of the dual amplitude. Another minor complication
is the presence of the seagull vertex $Q_i$. This simply adds
a Dirac-$\delta $ function contribution to the integrand.
For example, in the Compton scattering diagram Fig.~7,
all that we have to do to accommodate the seagull vertex in Fig.~7(c)
is to define the integrand of the dual amplitude to be
\begin{eqnarray}
S\exp(iP)&&=\theta (\tau _3-\tau _1)S_a\exp(iP_a)
+\theta (\tau _1-\tau _3)S_b\exp(iP_b)\nonumber\\
&&+\delta (\tau _3-\tau
_1)S_c\exp(iP_c)
\ ,
\end{eqnarray}
where $S_i,P_i\ (i=a,b,c)$ are the respective factors for the
three diagrams Figs.~7(a), 7(b), and 7(c).
\begin{figure}
\vskip -2.5 cm
\centerline{\epsfxsize 4.7 truein \epsfbox {diagram7.ps}}
\nobreak
\vskip -5.5 cm\nobreak
\caption{Lowest order Compton scattering
diagrams in scalar QED}
\vskip .1 cm
\end{figure}
What distinguishes the dual expressions of QED from those of the
scalar theory is gauge invariance: the diagrams
to be summed in QED are related to one another by gauge invariance.
This means that the gauge-dependent parts of each diagram should
no longer be there in the dual sum. How the dual expression can
be mathematically manipulated to achieve this purpose unfortunately
depends on the details. At this moment two general techniques are
available to aid us. One is the spinor helicity technique \cite{3,4,5}, where
the reference momenta of the photons can be chosen appropriately
to reduce the amount of gauge-dependent contributions. The other
is the integration-by-parts technique \cite{1,2,11} used in connection
with the string-like operators appearing in (3.11) and (3.14).
We mentioned previously that the relation between a dual amplitude
and a Feynman amplitude is analogous
to the relation between
a Feynman amplitude and an old-fashioned amplitude. In each case
the former is not time-ordered, and the latter is; the only difference
being that it is proper-time ordering for the first pair and real-time
ordering for the second. Now with gauge theories, there is another
parallel between these two cases: an old-fashioned diagram
is not relativistically invariant but a Feynman diagram is. Similarly,
a Feynman diagram is not gauge invariant but a dual diagram is.
Let us consider two simple examples to illustrate these points.
First, consider the Compton amplitude Fig.~7 in scalar QED.
A propagator is added to each external charged line so that
the vertices 1 and 3 in Figs.~7(a) and 7(b) are of type-a,
to enable the string-like vertex (3.11) to be used.
Without the seagull term Fig.~7(c), the amplitude is not gauge
invariant, so it definitely contains a non-trivial gauge-dependent
part. As we shall see below, one can actually manipulate the dual
expression so that the seagull vertex seems to disappear,
and the gauge-dependent parts from these three diagrams are no longer
present.
The expression $P$ for diagrams (a), (b), and (c) are respectively
\begin{eqnarray}
P_a&&=(\tau_3-\tau_1)(p_1+p_2)\!\cdot\!(p_3+p_4)
+(\tau_1-\tau_2)p_2\!\cdot\!(p_3+p_4-p_1)\nonumber\\
&&+(\tau_4-\tau_3)p_4\!\cdot\!(p_1+p_2-p_3)\ ,\nonumber\\
P_b&&=(\tau_1-\tau_3)(p_2-p_3)\!\cdot\!(p_4-p_1)
+(\tau_3-\tau_2)p_2\!\cdot\!(p_3+p_4-p_1)\nonumber\\
&&+(\tau_4-\tau_1)p_4\!\cdot\!(p_1+p_2-p_3)\ ,\nonumber\\
P_c&&=\left(P_a\right)_{\tau _3=\tau _1}=\left(P_b\right)_{\tau _3=\tau _1}
\ .\end{eqnarray}
Using (3.14), the vertex factors in both cases become
\begin{equation}
S_a=S_b=(-ie)^2\left[\epsilon (p_1)\!\cdot\!D_1\right]\left[\epsilon
(p_3)\!\cdot\!D_3\right]
\equiv S\ .\end{equation}
In their respective regions, one can replace $P_a$ and $P_b$ by
\begin{eqnarray}
P'_a&=\theta(\tau _3-\tau _1)P_a&\nonumber\\
P'_b&=\theta(\tau _1-\tau _3)P_b
\ .&\end{eqnarray}
However, since $\tau $-differentiations are involved in $S$ of (5.3),
$S_a\exp(iP'_a)$ and $S_b\exp(iP'_b)$ are not identical to
$S_a\exp(iP_a)$ and $S_b\exp(iP_b)$. It can be checked by explicit
calculation that the former already contains the seagull vertex
in Fig.~7(c). Hence the dual expression for Fig.~7 is
\begin{equation}
A_{sum}=-ie^2\langle\int_{\tau _2}^{\tau _4}d \tau _{(13)}\rangle
\left[\epsilon(p_1)\!\cdot\!D_1\right]\left[ \epsilon(p_3)\!\cdot\!D_3
\right]\exp(iP')\ ,\end{equation}
where
\begin{equation}
P'=\theta(\tau_3-\tau_1)P_a+\theta(\tau_1-\tau_3)P_b=P'_a+P'_b\
.\end{equation}
The fact that the seagull vertex {\it seems} to have disappeared
suggests that we have eliminated the gauge-dependent contributions
altogether. To see that explicitly, use (3.17). Then under a gauge
transformation, $\epsilon (p_a)\!\cdot\!D_a$ ($a=1,3$) is changed into
something
proportional to $\partial_a$. The integral over $\tau _a$ in (5.5)
can then be carried out to yield the boundary contributions
at $\tau _2$ and $\tau _4$, and hence the Ward-Takahashi identity.
Since there is no trace of explicit cancellations needed
at $\tau _1=\tau _3$, it must mean that the gauge-dependent
terms in the individual diagrams have now been eliminated.
Another simple example one can mention is QED in the eikonal
approximation, Fig.~3. In the soft-photon limit of scalar
QED, diagrams involving the seagull vertex are not dominant,
because they contain one less propagator and hence one less
factor of $O(k^{-1})$ in the amplitude. The cubic vertex factor is trivial in the
soft photon limit, yielding $S=\prod_{i=1}^n(2e \epsilon (k_i)\!\cdot\!p)$.
The dual amplitude can therefore be read off from (2.17) to be
\begin{equation}
A_{sum}=2p\!\cdot\!\left(\sum_{i=1}^nk_i\right)\prod_{j=1}^n
\left({e \epsilon (k_j)\!\cdot\!p\over p\!\cdot\!k_j}\right)\ .
\end{equation}
It is gauge invariant to leading order in the photon momenta.
\section{Conclusion}
In the Schwinger-parameter representation, QED diagrams differing
from one another by the permutation of photon lines correspond to
different proper-time ordering of the vertices, and can be formally
summed into a single integral over a hypercubic region.
This sum is referred to as a dual sum because it is the field-theoretic
counterpart of a dual amplitude in string theory.
The relation between individual Feynman diagrams and their
dual sum is analogous to the relation between individual old-fashioned
diagrams and their sum into a single Feynman diagram. Among other
things,
individual Feynman diagrams in QED are not gauge invariant but the
dual sum is. Similarly, the individual old-fashioned diagrams are not
Lorentz invariant but their sum is. The dual sum allows formal
manipulations
between different Feynman diagrams to be carried out, {\it e.g.,} by
the integration-by-parts technique on string-like vertices. With
appropriate approximations, such as the eikonal approximation, explicit
results may sometimes be obtained from the dual expression as well.
Dual expressions for QCD are much more complicated to deal with and are
not
discussed in the present paper.
\section{Acknowledgement}
This research is supported in part by the Natural
Sciences and Engineering Research Council of Canada and the Qu\'ebec
Department of Education.
\section*{Appendix A: Finite Temperature Calculations}
To calculate the differential conductance and noise correlations above zero temperature we use the relations of Ref. \cite{Anantram-1996}, written in our notation as
$$I_\alpha=\frac{e}{h}\sum_{\beta,i,j}\mathrm{sgn}(i)\int dE[\delta_{\alpha\beta}\delta_{ij}-|S_{\alpha,\beta,ij}|^2]f_{\beta,j}(E)$$
\begin{align*}
C_{\alpha\beta}=& \frac{2e^2}{h}\sum_{\gamma,\delta,i,j,k,l}\mathrm{sgn}(i)\mathrm{sgn}(j)\\
&\int dE A_{\gamma k;\delta l}(\alpha i,E)A_{\delta l;\gamma k}(\beta j,E)f_{\gamma,k}(E)[1-f_{\delta, l}(E)]
\end{align*}
where $\alpha,\beta,\gamma,\delta$ label the leads or reservoirs and $i,j,k,l\in\{e,h\}$ label electron and hole channels.
The functions $f$, $\mathrm{sgn}$, and $A$ are given by $$f_{\alpha,j}(E)=\left[1+\mathrm{exp}\left(\frac{E-\mu_\alpha\mathrm{sgn}(j)}{k_BT}\right)\right]^{-1},$$ $\mathrm{sgn}(e)=1$ and $\mathrm{sgn}(h)=-1$, and
$$A_{\gamma k;\delta l}(\alpha i,E)=\delta_{\alpha\gamma}\delta_{\alpha\delta}\delta_{ik}\delta_{il}-S_{\alpha,\gamma,ik}^*S_{\alpha,\delta,il},$$ where $\mu_\alpha$ is the chemical potential of the lead or reservoir measured relative to that of the superconductor. The only dependence on the voltage resides in the Fermi functions $f$, so the differential quantities are given by
$$G_\alpha=\frac{e^2}{h}\sum_{\beta,i,j}\mathrm{sgn}(i)\int dE[\delta_{\alpha\beta}\delta_{ij}-|S_{\alpha,\beta,ij}|^2]\frac{df_{\beta,j}}{dV}(E)$$ and
\begin{align*}
P_{\alpha\beta}=& \frac{2e^2}{h}\sum_{\gamma,\delta,i,j,k,l}\mathrm{sgn}(i)\mathrm{sgn}(j)\\
&\times\int dE A_{\gamma k;\delta l}(\alpha i,E)A_{\delta l;\gamma k}(\beta j,E)\\
&\times\left(\frac{df_{\gamma,k}}{dV}(E)[1-f_{\delta,l}(E)]-f_{\gamma,k}(E)\frac{df_{\delta,l}}{dV}(E)\right)
\end{align*}
The temperature samples the differential conductance from a window of width $\sim k_BT$. Away from resonance this sampling has little effect, since there the differential conductance and noise vary slowly in energy; near resonance they change on the scale of $\Gamma$.
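For orientation, the thermal sampling kernel can be made explicit. Assuming the bias convention $\mu_\alpha=eV$ for the lead whose voltage is varied (the text leaves this implicit), differentiating the Fermi function gives
$$\frac{df_{\alpha,j}}{dV}(E)=\frac{e\,\mathrm{sgn}(j)}{4k_BT}\cosh^{-2}\left(\frac{E-\mu_\alpha\mathrm{sgn}(j)}{2k_BT}\right),$$
a peak of width $\sim k_BT$ centered at $E=\mu_\alpha\mathrm{sgn}(j)$.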
\section*{Appendix B: Calculation of the Poisoning Rate}
The measurements given above do not by themselves provide enough information to determine the poisoning rate but, when supplemented by the ratio of the resonance heights of the two leads, they do allow its determination. Expanding the differential cross-correlation at low bias we get
\begin{equation}
P_{LR}=\frac{2e^2}{h}\frac{\Gamma_t^L\Gamma_t^R}{E_M^2}
\end{equation}
to lowest order in $\Gamma^2/E_M^2$ and $E^2/E_M^2$. The value of the Majorana splitting, $E_M$, is easily determined by the location of the resonance peak. From this we can calculate the product
\begin{equation}
\Gamma_t^L\Gamma_t^R=\frac{E_M^2h}{2e^2}P_{LR}
\end{equation}
From Eq. \ref{eq:LowBiasConductance} above we can write the product
\begin{equation}
\begin{aligned}
\Gamma_t^L\Gamma_p^R =& \frac{E_M^2h}{2e^2}G_L(eV=0)-\Gamma_t^L\Gamma_t^R \\
&= \frac{E_M^2h}{2e^2}\big(G_L(eV=0)-P_{LR}(eV=0)\big)
\end{aligned}
\label{eq:Product}
\end{equation}
and similarly for $\Gamma_t^R\Gamma_p^L$.
To measure the poisoning rate, $\Gamma_p^L+\Gamma_p^R$, we also need to know the ratio between tunneling on the two sides, $r=\Gamma_t^R/\Gamma_t^L$. $r$ can be measured by comparing the height of the differential conductance peak of each lead. The height of the resonance for $G_\alpha$ at zero temperature is given by $$\frac{2e^2}{h}\frac{\Gamma_t^\alpha}{\Gamma_t^L+\Gamma_t^R+\Gamma_p^L+\Gamma_p^R}$$ so taking the ratio of the conductance for each lead gives
\begin{equation}
r=\frac{G_R(eV=E_M)}{G_L(eV=E_M)}
\end{equation}
Even at temperatures $k_BT>\Gamma_\alpha$ this result holds, because the thermal sampling of the differential conductance is dominated by the region near resonance.
Now in terms of these measurements we can write the sum of the poisoning rate as:
\begin{widetext}
\begin{equation}
\begin{aligned}
\Gamma_p^L+\Gamma_p^R=& \frac{1}{\sqrt{\Gamma_t^L\Gamma_t^R}} \left( \Gamma_p^L\Gamma_t^R\sqrt{\frac{1}{r}}+\Gamma_p^R\Gamma_t^L\sqrt{r} \right) \\
=&\sqrt{\frac{h}{2e^2}}\frac{1}{\sqrt{P_{LR}}}\left((G_R-P_{LR})\sqrt{\frac{1}{r}}+(G_L-P_{LR})\sqrt{r} \right)E_M
\end{aligned}
\label{eq:Rate}
\end{equation}
\end{widetext}
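As a consistency check, the first line of Eq. \ref{eq:Rate} is an algebraic identity: with $r=\Gamma_t^R/\Gamma_t^L$ one has $\Gamma_p^L\Gamma_t^R\sqrt{1/r}=\Gamma_p^L\sqrt{\Gamma_t^L\Gamma_t^R}$ and $\Gamma_p^R\Gamma_t^L\sqrt{r}=\Gamma_p^R\sqrt{\Gamma_t^L\Gamma_t^R}$, so the prefactor cancels. The second line then follows by inserting Eq. \ref{eq:Product}, its $L\leftrightarrow R$ counterpart, and $\Gamma_t^L\Gamma_t^R=\frac{E_M^2h}{2e^2}P_{LR}$.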
Multiplying Eq. \ref{eq:Rate} by $\Gamma_t^L$ and combining with Eq. \ref{eq:Product} allows us to determine $\Gamma_t^L\Gamma_p^L$. Combining with Eq. \ref{eq:Product} gives us the ratio $\Gamma_p^R/\Gamma_p^L$. Together with Eq. \ref{eq:Rate}, $\Gamma_p^R$ and $\Gamma_p^L$ are determined separately.
\end{document}
\section{Introduction, summary and conclusions}
Results coming from RHIC have raised the issue of how to
calculate transport properties of ultra-relativistic partons in a
strongly coupled gauge theory plasma. For example, one would like
to calculate the friction coefficient and jet quenching parameter,
which are measures of the rate at which partons lose energy to the
surrounding plasma \cite{baier,bsz0002,w0005,kw0106,gvwz0302,kw0304,
s0312,s0405,jw0405}. With conventional quantum field theoretic tools,
one can calculate these parameters only when the partons are interacting
perturbatively with the surrounding plasma. The AdS/CFT
correspondence \cite{agmoo9905} may be a suitable
framework in which to study strongly coupled QCD-like plasmas.
Attempts to use the AdS/CFT correspondence to
calculate these quantities have been made in
\cite{lrw0605,hkkky0605,gub0605,st0605}
and were generalized in various ways in
\cite{b0605,h0605,cg0605,fgm0605,vp0605,sz0606,cg0606,
lm0606,as0606,psz0606,aem0606,gxz0606,lrw0607,fgmp0607,cgg0607,mtw0607,
cno0607,aev0608,nsw0608,asz0609,fgmp0609,t0610,fgmp0611,cg0611,gxz0611,
a0611,g0611,lrw0612,gubser0612}.
The most-studied example of the AdS/CFT correspondence is that of
the large $N$, large 't Hooft coupling limit of
four-dimensional ${\cal N}=4$ $SU(N)$ super Yang-Mills (SYM) theory and
type IIB supergravity on $AdS_5\times S^5$.
At finite temperature, this SYM theory is equivalent to
type IIB supergravity on the background of the near-horizon region of
a large number $N$ of non-extremal D3-branes. From
the perspective of five-dimensional gauged supergravity, this is the
background of a neutral AdS black hole whose Hawking temperature equals the
temperature of the gauge theory \cite{wit9802}.
Since at finite temperature the superconformal invariance
of this theory is broken, and since fundamental matter can be
added by introducing D7-branes \cite{kk0205}, it is thought that
this model may shed light on certain aspects of strongly coupled
QCD plasmas.
According to the AdS/CFT dictionary, the endpoints of open strings
on this background can correspond to quarks and antiquarks in the
SYM thermal bath \cite{ry9803,m9803,rty9803,bisy9803}. For example,
a stationary single quark can be described by a string that stretches
from the probe D7-brane to the black hole horizon. A semi-infinite
string which drags behind a steadily-moving endpoint and asymptotically
approaches the horizon has been proposed as the configuration dual to
a steadily-moving quark in the ${\cal N}=4$ plasma, and was used to
calculate the drag force on the quark \cite{hkkky0605,gub0605}. A
quark-antiquark pair or ``meson", on the other hand, corresponds to a
string with both endpoints ending on the D7-brane. The static limit
of this string solution has been used to calculate the inter-quark
potential in SYM plasmas \cite{rty9803,bisy9803}. Smooth, stationary
solutions for steadily-moving quark-antiquark pairs exist
\cite{lrw0607,cgg0607,cno0607,aev0608,asz0609,fgmp0609,lrw0612} but are
not unique and do not ``drag" behind the string endpoints as in the
single quark configuration. This lack of drag has been interpreted
to mean that color-singlet states such as mesons are invisible to the
SYM plasma and experience no drag (to leading order in large $N$) even
though the string shape is dependent on the velocity of the meson with
respect to the plasma. Nevertheless, a particular no-drag string
configuration with spacelike worldsheet \cite{lrw0607,lrw0612} has
been used to evaluate a lightlike Wilson loop in the field theory
\cite{lrw0605,b0605,cg0605,vp0605,cg0606,lm0606,as0606,aem0606,
cgg0607,nsw0608,gxz0611}. It has been proposed that this Wilson loop can
be used for a non-perturbative definition of the jet quenching parameter
$\hat q$ \cite{lrw0605}.
The purpose of this paper is to do a detailed analysis of the
evaluation of this Wilson loop using no-drag spacelike string
configurations in the simplest case of finite-temperature ${\cal N}=4$
$SU(N)$ SYM theory.
\paragraph{Summary.}
We use the Nambu-Goto action to describe the classical dynamics
of a smooth stationary string in the background of a five-dimensional
AdS black hole. We put the endpoints of the string on a probe D7-brane
with boundary conditions which describe a quark-antiquark pair
with constant separation moving with constant velocity either
perpendicular or parallel to their separation.
In section 2, we present the string embeddings describing smooth and
stationary quark-antiquark configurations, and we derive their equations
of motion.
In section 3, we discuss spacelike solutions of these equations.
We find that there can be an infinite number of spacelike
solutions for given boundary conditions, although there is always
a minimum-length solution.
In section 4, we apply these solutions to the calculation of the
lightlike Wilson loop observable proposed by \cite{lrw0605} to
calculate the jet quenching parameter $\hat q$, by taking the
lightlike limit of spacelike string worldsheets \cite{lrw0607,lrw0612}.
We discuss the ambiguities in the evaluation of this Wilson
loop engendered by how the lightlike limit is taken, and by
how self-energy subtractions are performed. Technical aspects
of the calculations needed in section 4 are collected in two
appendices. We also do the calculation for Euclidean-signature
strings for the purpose of comparison.
\paragraph{Conclusions.}
We find that the lightlike limit of the spacelike string configuration
used in \cite{lrw0605,lrw0612} to calculate the jet quenching parameter
$\hat q$ is not the solution with minimum action for given boundary
conditions, and therefore gives an exponentially suppressed contribution
to the path integral. Regardless of how the lightlike limit is taken,
the minimum-action solution giving the dominant contribution to the
Wilson loop has a leading behavior that is linear in its width, $L$.
Quadratic behavior in $L$ is associated with radiative energy loss by
gluons in perturbative QCD, and the coefficient of the $L^2$ term is
taken as the definition of the jet-quenching parameter $\hat q$
\cite{lrw0605}. In the strongly coupled ${\cal N}=4$ SYM theory in
which we are computing, we find $\hat q=0$.
We now discuss a few technical issues related to the validity of
the dominant spacelike string solution which gives rise to the linear
behavior in $L$.
Depending on whether the velocity parameter approaches unity from above
or below, the minimum-action string lies below (``down string") or above
(``up string") the probe D7-brane, respectively. The down string
worldsheet is spacelike regardless of the region of the bulk space in
which it lies. On the other hand, in order for the up string worldsheet
to be spacelike, it must lie within a region bounded by a certain maximum
radius which is related to the position of the black hole horizon. The
lightlike limit of the up string involves taking the maximal radius and
the radius of the string endpoints to infinity simultaneously, such that
the string always lies within the maximal radius. Therefore, even though
the string is getting far from the black hole, its dynamics are still
sensitive to the black hole through this maximal radius.
In the lightlike limit, the up and down strings with minimal action
both approach a straight string connecting the two endpoints. This is
the ``trivial" solution discarded in \cite{lrw0605}, though we do not
find a compelling physical or mathematical reason for doing so. If
the D7-brane radius were regarded as a UV cut-off, then one might
presume that the dominant up string solution should be discarded, since
it probes the region above the cut-off. However, this is unconvincing
for two reasons. First, if one approaches the lightlike limit from
$v>1$, then the dominant solution is a down string, and so evades this
objection. Second, and more fundamentally, in a model which treats the
D7-brane radius as a cut-off one does not know how to compute accurately
in the AdS/CFT correspondence. For this reason we deal only with the
${\cal N}=4$ SYM theory and a probe brane D7-brane, for which the
AdS/CFT correspondence is precise.
A spacelike string lying straight along a constant radius is discussed
briefly in \cite{lrw0612}. This string also approaches the ``trivial"
lightlike solution in \cite{lrw0605} as the radius is taken to infinity.
As pointed out in \cite{lrw0612}, this straight string at finite radius
is not a solution of the (full, second order) equations of motion, and
should be rejected. We emphasize that our dominant string solutions
are \emph{not} this straight string, even though they approach the
straight string as the D7-brane radius goes to infinity, and are
genuine solutions to the full equations of motion.
To conclude, the results in this paper show these solutions to be robust,
in the sense that they give the same contribution to the path integral
independently of how the lightlike limit is taken. Furthermore, though
not a compelling argument, the fact that these solutions do not exhibit
any drag is consistent with the fact that they give $\hat q=0$. Therefore,
for the non-perturbative definition of $\hat q$ given in \cite{lrw0605},
direct computation of $\hat q\neq0$ within the AdS/CFT correspondence for
${\cal N}=4$ $SU(N)$ SYM would require either a compelling argument for
discarding the leading contribution to the path integral, or a different
class of string solutions giving the dominant contribution. On the other
hand, this computation may simply imply that at large $N$ and strong 't
Hooft coupling, the mechanism for relativistic parton energy loss in the
SYM thermal bath gives a linear rather than quadratic dependence on the
Wilson loop width $L$.
\section{String embeddings and equations of motion}
We consider a smooth and stationary string in the background
of a five-dimensional AdS black hole with the metric
\begin{equation}\label{metric}
ds_5^2=h_{\mu\nu} dx^{\mu} dx^{\nu}
=-{r^4-r_0^4\over r^2 R^2}\,dx_0^2+\frac{r^2}{R^2}
(dx_1^2+dx_2^2+dx_3^2)+ {r^2 R^2\over r^4-r_0^4} \,dr^2.
\end{equation}
$R$ is the curvature radius of the AdS space, and the black hole
horizon is located at $r=r_0$. We put the endpoints of the string
at the minimal radius $r_7$ that is reached by a probe D7-brane.
The classical dynamics of the string in this background is described
by the Nambu-Goto action
\begin{equation}\label{ngaction}
S = -\frac{1}{2\pi{\alpha}'} \int\! d\sigma d\tau\, \sqrt{-G},
\end{equation}
with
\begin{eqnarray}
G &=& \det[h_{\mu\nu}(\partial X^\mu/\partial\xi^{\alpha}) (\partial X^\nu/
\partial\xi^{\beta})],
\end{eqnarray}
where $\xi^{\alpha}= \{\tau, \sigma\}$ and $X^\mu=\{x_0,x_1,x_2,x_3,r\}$.
The steady state of a quark-antiquark pair with constant separation
and moving with constant velocity either perpendicular or parallel
to the separation of the quarks can be described (up to worldsheet
reparametrizations), respectively, by the worldsheet embeddings
\begin{eqnarray}\label{shape}
{}[v_\perp]: &\quad&
x_0 = \tau, \quad
x_1= v \tau, \qquad\ \,\,\,
x_2= \sigma, \quad
x_3= 0,\quad
r = r(\sigma), \nonumber\\ [1mm]
{}[v_{||}]: &\quad&
x_0 = \tau, \quad
x_1= v \tau + \sigma, \quad
x_2= 0, \quad
x_3= 0,\quad
r = r(\sigma).
\end{eqnarray}
For both cases, we take boundary conditions
\begin{equation}\label{bcs}
0\le \tau\le T,\qquad
-L/2 \le\sigma\le L/2, \qquad
r(\pm L/2)= r_7,
\end{equation}
where $r(\sigma)$ is a smooth embedding.
The endpoints of strings on D-branes satisfy Neumann boundary conditions
in the directions along the D-brane, whereas the above boundary conditions
are Dirichlet, constraining the string endpoints to lie along fixed
worldlines a distance $L$ apart on the D7-brane. The correct way to
impose these boundary conditions is to turn on a worldvolume background
$U(1)$ field strength on the D7-brane \cite{hkkky0605} to keep the string endpoints a distance $L$ apart. Thus at finite $r_7$, it is physically
more sensible to describe string solutions for a fixed force on the
endpoints instead of a fixed endpoint separation $L$.\footnote{We thank
A. Karch for discussions on this point.} Our discussion of spacelike
string solutions in the next section will describe both the
force-dependence and the $L$-dependence of our solutions. In the
application to evaluating a Wilson loop in section 4, though, we are
interested in string solutions (in the $r_7\to\infty$ limit) with
endpoints lying along the given loop, {\it i.e.}, at fixed $L$.
According to the AdS/CFT correspondence, strings ending on the
D7-brane are equivalent to quarks in a thermal bath in
four-dimensional finite-temperature ${\cal N}{=}4$ $SU(N)$ super
Yang-Mills (SYM) theory. The standard gauge/gravity dictionary is
that $N=R^4/(4\pi{\alpha}'^2 g_s)$ and ${\lambda} = R^4/{\alpha}'^2$
where $g_s$ is the string coupling, ${\lambda}:=g_{\rm YM}^2 N$ is the
't Hooft coupling of the SYM theory.
In the semiclassical string limit, {\it i.e.},
$g_s\to0$ and $N\to\infty$, the supergravity approximation in the
gauge/gravity correspondence holds when the curvature radii are much
greater than the string length $\ell_s := \sqrt{{\alpha}'}$.
Furthermore, in this limit, one identifies
\begin{equation}\label{dictionary}
{\beta} = \pi R^2/r_0, \qquad
m_0 = r_7/(2\pi{\alpha}') ,
\end{equation}
where ${\beta}$ is the (inverse) temperature of the SYM thermal bath, and
$m_0$ is the quark mass at zero temperature.
It will be important to note that the velocity parameter $v$ entering
in the string worldsheet embeddings (\ref{shape}) is \emph{not}
the proper velocity of the string endpoints. Indeed, from (\ref{metric})
it is easy to compute that the string endpoints at $r=r_7$
move with proper velocity
\begin{equation}\label{realV}
V = {r_7^2\over\sqrt{r_7^4-r_0^4}}\, v.
\end{equation}
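A short derivation: for an endpoint at fixed $r=r_7$ with coordinate
velocity $dx_1/dx_0=v$, the metric (\ref{metric}) gives the velocity
measured in proper distance per proper time of a static observer,
$$V^2={h_{11}\over -h_{00}}\,v^2={r_7^4\over r_7^4-r_0^4}\,v^2\ ,$$
which is (\ref{realV}).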
We will see shortly that real string solutions must have the
same signature everywhere on the worldsheet. Thus a string
worldsheet will be timelike or spacelike depending on whether
$V$, rather than $v$, is greater or less than 1. Thus, translating
$V\lessgtr 1$ into corresponding inequalities for the velocity
parameter $v$, we have
\begin{eqnarray}\label{spacetime}
\begin{array}{c}
\mbox{timelike}\\
\mbox{string worldsheet}\\
\end{array}
&\Leftrightarrow &
\quad\, \mbox{both}\quad\ v<1\ \,({\gamma}^2>1) \ \mbox{and}\
z_7 > \sqrt{\gamma} ,\nonumber\\
&&\\
\begin{array}{c}
\mbox{spacelike}\\
\mbox{string worldsheet}\\
\end{array}
&\Leftrightarrow &
\left\{
\begin{array}{l}
\mbox{either}\ \ \,v\ge1\ \,({\gamma}^2<0)\ \mbox{and any}\ z_7, \\ [2mm]
\mbox{or}\qquad\ \, v< 1\ \,({\gamma}^2>1)\ \mbox{and}\
z_7 < \sqrt{\gamma} .\\
\end{array}
\right.\nonumber
\end{eqnarray}
Here we have defined the dimensionless ratio of the D7-brane
radial position to the horizon radius,
\begin{equation}
z_7 := {r_7\over r_0},\qquad\mbox{and}\qquad
{\gamma}^2 := {1\over 1-v^2 } .
\end{equation}
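In these variables the signature criterion is elementary algebra:
by (\ref{realV}), $V<1$ is equivalent to $v^2z_7^4<z_7^4-1$, {\it i.e.},
$z_7^4(1-v^2)>1$; for $v<1$ this is $z_7>\sqrt{{\gamma}}$, while for
$v\ge1$ the left-hand side is non-positive and the condition fails for
every $z_7$, reproducing (\ref{spacetime}).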
Furthermore, since the worldsheet has the same signature everywhere,
this implies that timelike strings can only exist for $r>\sqrt{\gamma}\,
r_0$, but spacelike strings may exist at all $r$, as illustrated in
figure \ref{fig6}. In this respect, $r=\sqrt{\gamma}\,r_0$ plays a role
analogous to that of the ergosphere of a Kerr black hole, although
in this case it is not actually an intrinsic feature of the background
geometry but instead a property of certain string configurations
(\ref{shape}) in the background geometry (\ref{metric}).
\begin{figure}[t]
\epsfxsize=3.0in \centerline{\epsffile{spacelikefig6.eps}}
\caption[FIG. \arabic{figure}.]{\footnotesize{Both timelike and
spacelike worldsheets can exist above the radius $r=\sqrt{\gamma} r_0$
(blue line) for $v<1$ and $v>1$, respectively. On the other hand,
only spacelike worldsheets exist in the region between the blue
line and the event horizon, given by $r_0<r<\sqrt{\gamma} r_0$.}}
\label{fig6}
\end{figure}
With the embeddings (\ref{shape}) and boundary conditions
(\ref{bcs}), the string action becomes\footnote{These expressions
for the string action are good only when there
is a single turning point around which the string is symmetric.
We will later see that for $[v_\perp]$ there exist solutions
with multiple turns. For such solutions the limits of integration
in (\ref{singleturnaction}) are changed, and appropriate terms for
each turn of the string are summed.}
\begin{eqnarray}\label{singleturnaction}
{}[v_\perp]: &\quad&
S=\frac{- T}{{\gamma}\,\pi{\alpha}'}
\int_0^{L/2} d\sigma \sqrt{{r^4-{\gamma}^2 r_0^4\over R^4}
+ {r^4-{\gamma}^2 r_0^4\over r^4-r_0^4} r'^2}, \nonumber\\
{}[v_{||}]: &\quad&
S=\frac{- T}{{\gamma}\,\pi{\alpha}'}
\int_0^{L/2} d\sigma \sqrt{{\gamma}^2 {r^4-r_0^4\over R^4}
+ {r^4-{\gamma}^2 r_0^4\over r^4-r_0^4} r'^2},
\end{eqnarray}
where $r' := \partial r/\partial\sigma$. The resulting equations of motion are
\begin{eqnarray}\label{E}
{}[v_\perp]: &\quad&
r'^2=\frac{1}{{\gamma}^2\,a^2 r_0^4 R^4}
(r^4-r_0^4) (r^4-{\gamma}^2 [1+a^2] r_0^4) , \nonumber\\
{}[v_{||}]: &\quad&
r'^2=\frac{{\gamma}^2}{a^2 r_0^4 R^4}
(r^4-r_0^4)^2 {(r^4-[1+a^2]r_0^4) \over (r^4-{\gamma}^2 r_0^4)} ,
\end{eqnarray}
where $a^2$ is a real integration constant. Here we have taken
the first integral of the second order equations of motion
which follows from the existence of a conserved momentum in
the direction along the separation of the string endpoints.
Since $a$ is associated with this conserved momentum, $|a|$ is
proportional to the force applied (via a constant background
$U(1)$ field strength on the D7 brane) to the string endpoints
in this direction \cite{hkkky0605}.
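To sketch how the first integral arises for $[v_\perp]$: writing the
integrand of (\ref{singleturnaction}) as ${\cal L}=\sqrt{A+Br'^2}$, with
$A=(r^4-{\gamma}^2r_0^4)/R^4$ and $B=(r^4-{\gamma}^2r_0^4)/(r^4-r_0^4)$,
the absence of explicit $\sigma$-dependence makes
$${\cal L}-r'{\partial{\cal L}\over\partial r'}={A\over\sqrt{A+Br'^2}}$$
constant along the string. Parametrizing the square of this constant as
${\gamma}^2a^2r_0^4/R^4$ and solving for $r'^2$ reproduces the first
line of (\ref{E}); the $[v_{||}]$ case works in the same way.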
Although we have written $a^2$ as a square, it can be either
positive or negative. Using (\ref{E}), the determinant of the induced
worldsheet metric can be written as
\begin{eqnarray}\label{G}
{}[v_\perp]: &\quad&
G=-\frac{1}{{\gamma}^4\,a^2 r_0^4 R^4}
(r^4-{\gamma}^2 r_0^4)^2, \nonumber\\
{}[v_{||}]: &\quad&
G=-\frac{1}{a^2 r_0^4 R^4} (r^4-r_0^4)^2 .
\end{eqnarray}
Thus, the sign of $G$ is the same as that of $-a^2$ (since
the other factors are squares of real quantities).
In particular, the worldsheet is timelike ($G<0$) for $a^2>0$ and
spacelike ($G>0$) for $a^2<0$.
The reality of $r'$ implies that the right sides of (\ref{E})
must be positive in all these different cases, which implies
certain allowed ranges of $r$. Therefore, there can only be
real string solutions when the ends of the string, at $r=r_7$,
are within this range. The edges of this range are (typically)
the possible turning points $r_t$ for the string, whose possible
values will be analyzed in the next
section.
Given these turning points, (\ref{E}) can be integrated to give
\begin{eqnarray}\label{L}
{}[v_\perp]: &\quad&
{L\over{\beta}}=\frac{2|a{\gamma}|}{\pi} \left|\int_{z_t}^{z_7}
{dz\over\sqrt{(z^4-1)(z^4-{\gamma}^2[1+a^2])}}\right|, \nonumber\\
{}[v_{||}]: &\quad&
{L\over{\beta}}=\frac{2|a|}{\pi|{\gamma}|} \left|\int_{z_t}^{z_7}
{dz\sqrt{z^4-{\gamma}^2}\over(z^4-1)\sqrt{z^4-[1+a^2]}}\right|,
\end{eqnarray}
where we have used $r_0=\pi R^2/{\beta}$. Also, in (\ref{L}) we have
rescaled $z=r/r_0$ and likewise $z_t:=r_t/r_0$ and $z_7:=r_7/r_0$.
(The absolute value takes care of cases where $z_7<z_t$.)
These integral expressions determine the integration constant
$a^2$ in terms of $L/{\beta}$ and $v$.
Also, we can evaluate the action for the solutions of (\ref{E}):
\begin{eqnarray}\label{S}
{}[v_\perp]: &\quad&
S= \pm\frac{T\sqrt{\lambda}}{{\gamma}{\beta}}
\int_{z_t}^{z_7} {(z^4-{\gamma}^2)\, dz \over
\sqrt{(z^4-1)(z^4-{\gamma}^2[1+a^2])}}
, \nonumber\\
{}[v_{||}]: &\quad&
S= \pm\frac{T\sqrt{\lambda}}{{\gamma}{\beta}}
\int_{z_t}^{z_7} dz \sqrt{z^4-{\gamma}^2 \over
z^4-[1+a^2]} ,
\end{eqnarray}
where we have used $R^2/{\alpha}'=\sqrt{\lambda}$.
The plus or minus signs are to be chosen depending on the
relative sizes of $z_7$, $z_t$, and ${\gamma}^2$, and will
be discussed in specific cases below.
For finite $z_7$, these integrals are convergent. They diverge
when $z_7\to\infty$ and need to be regularized by subtracting
the self-energy of the quark and the antiquark \cite{ry9803,m9803},
which will be discussed in more detail in section 4.
Note that, in writing (\ref{L}) and (\ref{S}), we have
assumed that the string goes from $z_7$ to the turning
point $z_t$ and back only once. We will see that more
complicated solutions with multiple turning points are
possible. For these cases, one must simply add an appropriate
term, as in (\ref{L}) and (\ref{S}), for each turn of the string.
\section{Spacelike solutions}
Positivity of the determinant of the induced worldsheet
metric (\ref{G}) implies that the integration constant $a^2<0$
for spacelike configurations. It is convenient to define
a real integration constant ${\alpha}$ by
\begin{equation}
{\alpha}^2 := -a^2 >0.
\end{equation}
As remorked above, ${\alpha}$ is proportional to the magnitude of a
background $U(1)$ field strength on the D7 brane.
We will now classify the allowed ranges of $r$ for which $r'^2$
is positive in the equations of motion (\ref{E}). These ranges,
as well as the associated possible turning points of the string
depend on the relative values of ${\alpha}$, $v$ and 1.
\subsection{Perpendicular velocity}
The configurations of main interest to us are those for which
the string endpoints move in a direction perpendicular to their
separation. As we will now see, the resulting solutions have
markedly different behavior depending on whether the velocity
parameter is greater or less than 1.
\subsubsection{$\sqrt{1-z_7^{-4}}<v<1$}
If $v<1$, we have seen that the string worldsheet can
be spacelike as long as $v>\sqrt{1-z_7^{-4}}$.
A case-by-case classification of the possible
turning points of the $v_\perp$ equation in (\ref{E}) gives the
following table of possibilities:
\medskip
\centerline{{\footnotesize
\begin{tabular}{l|rlrl}
\normalsize parameters&\multicolumn{4}{c}{\normalsize allowed ranges}\\ \hline
$0<{\alpha}<v<1$\ \ &&&$1\le$&$\!\!\!\!z^4\le{\gamma}^2(1-{\alpha}^2)$\\
$0<v<{\alpha}<1$&${\gamma}^2(1-{\alpha}^2)\le$&$\!\!\!\!z^4\le1$&&\\
$0<v<1<{\alpha}$&$0\le$&$\!\!\!\!z^4\le1$&&\\
\end{tabular}
}}\medskip
\noindent The left column
of allowed ranges lists those that lie inside the horizon, and
the right column those outside the horizon.
At the horizon, $r'=0$ and the string becomes tangent to $z=1$
at finite transverse distance giving a smooth turning point for
the string. In the last entry in the above table, ``$0\le z^4$"
indicates that there is no turning point before meeting the
singularity at $z=0$ and the string necessarily meets the
singularity.
Since only string solutions that extend into the $z>1$ region can
reliably describe quarks, this eliminates the left column of allowed
ranges. Thus, the only viable configurations are those in the right
column with ${\alpha}<v$, which all have turning points at $1$ or
${\gamma}^2(1-{\alpha}^2)$. This restricts the D7-brane minimum radius to lie
between these two turning points which, in turn, gives rise to string
configurations with multiple turns.
\begin{figure}[t]
\epsfxsize=5.4in \centerline{\epsffile{spacelikefig2.eps}}
\caption[FIG. \arabic{figure}.]{\footnotesize{Spacelike
string solutions with fixed $L/{\beta}=0.25$, ${\gamma}=20$
($v\approx0.99875$), $z_7=2$, and with low values of $n$
(the number of turns at the horizon). The horizon is the
solid line at $z=1$, and the minimum radius of the D7-brane
is the dashed line at $z=2$.}}\label{fig2}
\end{figure}
In order for the D7-brane to be within the allowed
range $1<z_7^4<{\gamma}^2(1-{\alpha}^2)$,
the parameter $v$ must satisfy $v^2 > 1-z_7^{-4}$.
For a given $z_7$, $v$, and ${\alpha}$ satisfying
these inequalities, we integrate (\ref{L}) to obtain $L/{\beta}$.
There are two choices for the range of integration: $[1,z_7]$
and $[z_7,{\gamma}^2(1-{\alpha}^2)]$. The first one is appropriate for
a string which decends down to the horizon and then turns
back up to the D7-brane; we will call this a ``down string".
The second range describes a string which ascends to larger
radius and then turns back down to the D7-brane; we will call
this an ``up string". Given these two behaviors, it is clear
that we can equally well construct infinitely many other
solutions by simply alternating segments of up and down
strings. In particular, there are three possible series
of string configurations, which we will call the (a)$_n$,
(b)$_n$, and (c)$_n$ series. An (a)$_n$ string starts with
a down string then adds $n-1$ pairs of up and down strings,
thus ending with a down string; a (b)$_n$ string concatenates $n$
pairs of up and down strings---that is, it starts with an up string
and ends with a down string; and a (c)$_n$ string starts with
an up string and then adds $n$ pairs of down and up strings, thus
ending with an up string. Here $n$ counts the number of turns the string makes at the
horizon, $z=1$. In particular, for the (a)$_n$ and
(b)$_n$ series, $n$ is an integer $n\ge1$, while for the (c)$_n$ series,
$n\ge0$. Examples of these string configurations appear
in Figure \ref{fig2}. If the separation of the ends of the up and
down strings are $L_{\rm up}$ and $L_{\rm down}$, respectively,
then the possible total separations of the strings fall
into three classes of lengths
\begin{eqnarray}
L_{{\rm (a)},n} &=& n L_{\rm down} + (n-1) L_{\rm up} ,\nonumber\\
L_{{\rm (b)},n} &=& n L_{\rm down} + n L_{\rm up} ,\nonumber\\
L_{{\rm (c)},n} &=& n L_{\rm down} + (n+1) L_{\rm up} .
\end{eqnarray}
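For example, the lowest members are $L_{{\rm (a)},1}=L_{\rm down}$
(a single down string), $L_{{\rm (b)},1}=L_{\rm down}+L_{\rm up}$
(one up-down pair), and $L_{{\rm (c)},0}=L_{\rm up}$ (a single up string).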
Figure \ref{fig1} illustrates the systematics of the $L_{{\rm (a,b,c)},n}$
dependence on ${\alpha}$. Here we have chosen $z_7=2$, so the
minimum value of $v$ for the solutions to exist has ${\gamma}=4$.
The leftmost plot illustrates that, for small ${\gamma}$,
$L_{{\rm (c)},0}=L_{\rm up} \ll L_{\rm down}$ for all ${\alpha}$.
Thus, for each $n\ge1$, $L_{{\rm (a)},n}\approx L_{{\rm (b)},n}
\approx L_{{\rm (c)},n}$, and are virtually indistinguishable
in the figure. As ${\gamma}$ increases, $L_{\rm up}$ and
$L_{\rm down}$ begin to approach each other for most ${\alpha}$,
except for ${\alpha}$ near ${\alpha}_{\rm max}:=\sqrt{1-z_7^4/{\gamma}^2}$,
where $L_{\rm up}$ decreases sharply back to zero.
\begin{figure}[t]
\epsfxsize=5.8in \centerline{\epsffile{spacelikefig1.eps}}
\caption[FIG. \arabic{figure}.]{\footnotesize{$L/{\beta}$ as a
function of ${\alpha}$ for spacelike string configurations
with perpendicular velocity, $z_7=2$ and ${\gamma}=4.2$, $7$, and $10$.
Green curves correspond to the (a)--series, blue to (b)--series,
and red to (c)--series. Only the series up to $n=20$ are shown;
the rest would fill the empty wedge near the $L/{\beta}$ axis.
Note that the scale of the ${\gamma}=4.2$ plot is half that of the
other two.}}
\label{fig1}
\end{figure}
This behavior implies that, for every fixed $L$ and $v$, there
is a very large number of solutions\footnote{Although $n$ does not
formally have an upper bound, as $n$ increases the turns of the
string become sharper and denser. Therefore, for large enough $n$
one can no longer ignore the backreaction of the string on the
background.} in each series but that the minimum value of $n$
that occurs decreases as $v$ increases. In detail, it is not too
hard to show that the pattern of appearance of solutions as $v$
increases for fixed $L$ is as follows: if for a given $v$ there
is one solution ({\it i.e.}, value of ${\alpha}$) for each (a,b,c)$_n$--string
with $n> n_0$, then as $v$ increases first {\it two} (c)$_{n_0}$
solutions will appear, then the (c)$_{n_0}$ solution with the
greater ${\alpha}$ will disappear just as a (b)$_{n_0}$ and an (a)$_{n_0}$
solution appear. Also, ${\alpha}({\rm (a)}_n)<{\alpha}({\rm (b)}_n)<{\alpha}({\rm (c)}_n)$.
For example, in Figure \ref{fig1}, when $L/{\beta}=0.25$ and ${\gamma}=4.2$,
there are (a,b,c)$_n$ solutions for $n\ge2$. Increasing $v$ to
${\gamma}=7$ (for the same $L$), there are now (a,b,c)$_n$ solutions for
$n\ge1$. Increasing $v$ further to ${\gamma}=20$, there are now in
addition two (c)$_0$ ({\it i.e.}, up string) solutions.
Figure \ref{fig2} plots the string solutions when $z_7=2$, $L/{\beta}=0.25$,
and ${\gamma}=20$, for low values of $n$.
Note that if one keeps the D7-brane $U(1)$ field strength, ${\alpha}$,
constant instead of the endpoint separation, $L$, then there
will still be an infinite sequence of string solutions
qualitatively similar to that shown in Figure \ref{fig2}. In this case
the endpoint separation $L$ increases with the number of
turns.
The action for spacelike configurations is
imaginary because the Nambu-Goto Lagrangian is $\sqrt{-G}
=\pm i \sqrt{G}$. Ignoring the $\pm i$ factor (which
we will return to in the next section), the integral
of $\sqrt{G}$ just gives the area of the worldsheet.
Dividing by the ``time" parameter $T$ in (\ref{S}) then
gives the length of the string: $\ell=\pm i S/T$.
Figure \ref{fig3} plots the lengths of the various series of
string configurations for increasing values of
the velocity parameter. There are negative lengths
because the length of a pair of straight strings stretched
between the D7-brane and the horizon has been subtracted,
for comparison purposes.
It is clear from the figure that the (c)$_0$ up strings
are the shortest for any given $L$ less than a
velocity-dependent critical value. Furthermore,
for $L$ small enough, they are also shorter
than the straight strings.
In particular, the shorter (larger ${\alpha}$) of the two up
strings has the smallest $\ell$ of all. As $v\to1$, the
critical value of $L$ below which the up string is the
solution with the minimum action increases without bound.
In this case, any of the other spacelike strings will
decay to this minimum-action configuration. Therefore,
it is this configuration which must be used for any
calculations of physical quantities, such as the jet
quenching parameter $\hat q$.
\begin{figure}[t]
\epsfxsize=5.7in \centerline{\epsffile{spacelikefig3.eps}}
\caption[FIG. \arabic{figure}.]{\footnotesize{Spacelike
string lengths $\ell$ in units of
$\sqrt{\lambda}/{\beta}$ as a function of endpoint separation
$L/{\beta}$, with $z_7=2$, for ${\gamma}=6,15$ and $100$.
The gray line along the $L/{\beta}$ axis is the (subtracted)
length of a pair of straight strings stretched between the
D7-brane and the horizon. Note that the scale of the
${\gamma}=6$ plot is half that of the others.}}\label{fig3}
\end{figure}
\subsubsection{$v>1$}
A case-by-case classification of the possible
turning points of the $v_\perp$ equation in (\ref{E})
when $v>1$ gives the
following table of possibilities:
\medskip
\centerline{{\footnotesize
\begin{tabular}{l|rlrl}
\normalsize parameters&\multicolumn{4}{c}{\normalsize allowed ranges}\\ \hline
$0<{\alpha}<1<v$&&&$1\le$&$\!\!\!\!z^4<\infty$\\
$0<1<{\alpha}<v$&$0\le$&$\!\!\!\!z^4\le{\gamma}^2(1-{\alpha}^2)$&$1\le$&$\!\!\!\!z^4<\infty$\\
$0<1<v<{\alpha}$&$0\le$&$\!\!\!\!z^4\le1$&${\gamma}^2(1-{\alpha}^2)\le$&$\!\!\!\!z^4<\infty$\\
\end{tabular}
}}\medskip
\noindent The left column
of allowed ranges are those that lie inside the horizon, while
the right one lists the allowed ranges which are outside the horizon.
At the horizon, $r'=0$ and the string becomes tangent
to $z=1$ at finite transverse distance. This can be either
a smooth turning point for the string or, if there is an allowed
region on the other side of the horizon, then the string can have
an inflection point at the horizon and continue through it.
From the table, we see that this can only happen at the crossover between
the last two lines---in other words, when ${\alpha}=v>1$.
In the above table we have written
``$0\le z^4$" when there is no turning point in an
allowed region before the singularity at $z=0$.
In these cases, a string extending towards smaller $z$
will necessarily meet the singularity.
As before, we are only interested in string solutions that extend
into the $z>1$ region. This eliminates
the left column of allowed ranges, with the possible
exception of the $v={\alpha}>1$ crossover case, for which the string might
inflect at the horizon and then extend inside. However, if it does
extend inside, then it will hit the singularity. Therefore, we can also discard this
possibility as being outside the regime of validity of our
approximation. Thus, the only viable configurations
are those given in the right column, which all have turning
points at either $1$ or ${\gamma}^2(1-{\alpha}^2)$, or else go off to infinity.
Since we want to identify the quarks with the ends of the
strings on the D7-brane, we are only interested in
string configurations that begin and end at $z_7>1$, and
so discard configurations which go off to $z\rightarrow\infty$ instead
of turning.
Thus the $v>1$ ranges compatible with these conditions all
have only one turning point, describing strings dipping
down from the D7-brane and either turning
at the horizon or above it, depending on ${\alpha}$ versus $v$.
\begin{figure}[t]
\epsfxsize=3.0in \centerline{\epsffile{spacelikefig4.eps}}
\caption[FIG. \arabic{figure}.]{\footnotesize{$Lv^8/{\beta}$ as a
function of ${\alpha}/v$ for spacelike string configurations
with perpendicular velocity, $z_7=2$, and $v=1.005$ (red), $1.05$
(green), and $1.2$ (blue).}}
\label{fig4}
\end{figure}
Indeed, it is straightforward to check
that for any $L$ there are two $v>1$ solutions, one with
${\alpha}>v$ and one with ${\alpha}<v$.
$L/{\beta}$ as a function of ${\alpha}$ is plotted in Figure \ref{fig4}.
(The rescalings by powers of $v$ are just so the curves
will nest nicely in the figure.)
The ${\alpha}<v$ solutions are long strings which
turn at the horizon, while the ${\alpha}>v$ solutions are
short strings with turning point $z_t^4 = ({\alpha}^2-1)/(v^2-1)$.
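(With ${\gamma}^2=1/(1-v^2)$, this is just the allowed-range boundary
${\gamma}^2(1-{\alpha}^2)$ of the table above, rewritten in a form that is
manifestly positive for ${\alpha}>v>1$.)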
The norm of the action for these configurations (which is proportional
to the length of the strings) is likewise greater for the
${\alpha}<v$ solutions than for the ${\alpha}>v$ ones.
If, instead, one keeps the D7-brane $U(1)$ field strength ${\alpha}$
constant, then there is at most a single string solution with a
given velocity $v$.
\subsection{Parallel velocity}
For completeness, though we will not be using these configurations in
the rest of the paper, we briefly outline the set of solutions
for suspended spacelike
string configurations with velocity parallel to the endpoint
separation. A case-by-case analysis of the
equations of motion (\ref{E}) gives the following
table of allowed ranges for string solutions:
\medskip\centerline{{\footnotesize
\begin{tabular}{l|rl}
\normalsize parameters &\multicolumn{2}{c}{\normalsize allowed
ranges}\\ \hline
$0<({\alpha},v)<1$\ \ & $1-{\alpha}^2 \le z^4<1$ & $1<z^4\le{\gamma}^2$ \\
$0<v<1<{\alpha}$ & $0\le z^4<1$ & $1<z^4\le{\gamma}^2$ \\ \hline
$0<{\alpha}<1<v$ & $1-{\alpha}^2 \le z^4<1$ & $1<z^4<\infty$ \\
$0<1<({\alpha},v)$ & $0\le z^4<1$ & $1<z^4<\infty$ \\
\end{tabular}}}\medskip
\noindent Although $z=1$ is always included in the allowed ranges,
in the table we have split each range into
two regions: one inside the horizon and one outside. The reason
is that the string equation of motion (\ref{E}) near $r=r_0$
is $r'^2 \sim (r-r_0)^2$, whose solutions are
of the form $r-r_0 \sim \pm e^{\pm \sigma}$. This implies that these solutions
asymptote to the horizon and never turn. Thus, the parallel
spacelike strings can never cross the horizon.
As always, we only look at solutions that extend into the $z>1$
region, since that is where we can reliably put D7-branes.
This eliminates the left-hand column of configurations.
Recall that the signature of the worldsheet metric for a string
with $v<1$ changes at $z=\sqrt{{\gamma}}$. A string that reaches
this radius will have a cusp there. This is qualitatively
similar to the timelike parallel solutions with cusps described
in \cite{aev0608}, except that in the spacelike case the strings
extend away from the horizon (towards greater $z$). Thus,
the string solutions corresponding to the ranges in the
right-hand column all either asymptote to $z=1$, go off to
infinity or have a cusp at $z=\sqrt{{\gamma}}$. The first two
cases do not give strings with two endpoints on the D7-brane
at $z=z_7$. Therefore, the only potentially interesting
configurations for our purposes are those with $1<z_7 < z<\sqrt{{\gamma}}$,
which occur for $v<1$ and any ${\alpha}$. However, since these
configurations have cusps, their description in terms of
the Nambu-Goto action is no longer complete. That is,
there must be additional boundary conditions specified,
which govern discontinuities in the first derivatives of
the string shape. As discussed in \cite{aev0608} for the
analogous timelike strings, these cusps cannot be avoided
by extending the string to include a smooth but self-intersecting
closed loop, since real string solutions cannot change
their worldsheet signature.
\section{Application to jet quenching}
We will now apply the results of the last two sections
to the computation of the expectation value of a certain
Wilson loop $W[{\cal C}]$ in the SYM theory.
The interest of this Wilson loop is that it has been
proposed \cite{lrw0605} as a non-perturbative definition
of the jet quenching parameter $\hat q$. This medium-dependent
quantity measures the rate per unit distance traveled at which
the average transverse momentum-squared is acquired by a parton
moving in the plasma \cite{baier}.
In particular, \cite{lrw0605} considered a rectangular loop
$\cal C$ with parallel lightlike edges a distance $L$ apart
which extend for a time duration $T$. Motivated by a
weak-coupling argument, the leading behavior of $W[{\cal C}]$
(after self-energy subtractions) for large $T$ and $L/{\beta}\ll 1$
is claimed to be
\begin{equation}\label{WL}
\langle W^{A}({\cal C})\rangle =\hbox{exp}[-\frac{1}{4}~{\hat q}~TL^2],
\end{equation}
where $\langle W^{A}({\cal C})\rangle$ is the thermal expectation
value of the Wilson loop in the adjoint representation. We will
simply view this as a definition of $\hat q$.\footnote{This differs
by a constant factor from the definition written in \cite{lrw0605}
since here it is expressed in the reference frame of the plasma
rather than that of the parton.} Note that exponentiating the
Nambu-Goto action gives rise to the thermal expectation value of
the Wilson loop in the fundamental representation $\langle {W^{F}
({\cal C})}\rangle$. Therefore, we will make use of the relation
$\langle W^{A}({\cal C})\rangle \approx \langle W^{F}({\cal C})\rangle^2$,
which is valid at large $N$.
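(This is because the adjoint representation is, up to a singlet, the
product of the fundamental and antifundamental representations, so the
corrections are suppressed by $1/N^2$.)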
Self-energy contributions are expected to contribute
on the order of $TL^0$ and, since this is independent of $L$, their
subtraction does not affect the $L$-dependence of the
results. The subtraction is chosen to remove infinite constant
contributions, but is ambiguous up to finite terms.\footnote{Note
that \cite{drukker} shows that the correct treatment of the Wilson
loop boundary conditions should automatically and uniquely subtract
divergent contributions; it would be interesting to evaluate
our Wilson loop using this prescription instead of the more
{\it ad hoc} one used here and throughout the literature.}
However, there may be other leading contributions of order $TL^{-1}$
or $TL$. For example, as we will discuss in the conclusions, a term
linear in $L$ would be consistent with energy loss by elastic
scattering. Therefore, one requires a subtraction prescription.
We will assume the following one: extract $2^{-3/2}\hat q$ as the
coefficient of $L^2$ in a Laurent expansion of the action around
$L=0$. Thus, concretely,
\begin{equation}\label{qhat}
W[{\cal C}] \sim \exp\left\{-T
\left( \cdots + {{\alpha}_{-1}\over L} + {\alpha}_0 + {\alpha}_1 L
+ {\hat q\over4}L^2 +\cdots\right)\right\}.
\end{equation}
Implicit in this is a choice of finite parts of
leading terms to be subtracted, which could affect the
value of $\hat q$; we have no justification for this
prescription beyond its simplicity.
We will see that this issue of $L$-dependent leading
terms indeed arises in the computation of $\hat q$ using the AdS/CFT
correspondence.
There is a second subtlety in the definition of
$\hat q$ given in (\ref{qhat}), which involves how the lightlike
limit of $\cal C$ is approached. In the AdS/CFT correspondence,
we evaluate the expectation value of the Wilson loop as the
exponential $\exp\{iS\}$ of the Nambu-Goto action for a string
with boundary conditions corresponding to the Wilson loop $\cal C$.
If we treat $\cal C$ as the lightlike limit of a
sequence of timelike loops, then the string worldsheet
will be timelike and the exponential will
be oscillatory, instead of exponentially suppressed in $T$
as in (\ref{qhat}). The exponential suppression requires
either an imaginary action (of the correct sign) or
a Wick rotation to Euclidean signature.
The authors of \cite{lrw0605} advocate the use of the
lightlike limit of spacelike strings to evaluate
the Wilson loop \cite{lrw0607,lrw0612}.
Below, we will evaluate the Wilson loop using both the
spacelike prescription and the Euclidean one. Our interest
in the Euclidean Wilson loop is mainly for comparison
purposes and to help elucidate some subtleties in the
calculation; we emphasize that it is \emph{not} the one
proposed by the authors of \cite{lrw0605} to evaluate $\hat q$.
(Though the Euclidean prescription is the usual one for evaluating
static thermodynamic quantities, we are here evaluating
a non-static property of the SYM plasma and so the usual
prescription may not apply.)
In both cases we will find that, regardless of the manner
in which the above ambiguities are resolved, the computed value
of $\hat q$ is zero.
\subsection{Euclidean Wilson loop}
Euclidean string solutions \cite{aev0608} are reviewed in appendix A.
Here we just note their salient properties. In Euclidean signature,
nothing special happens in the limit $V\to1$ ($v\to\sqrt{1-z_7^{-4}}$).
When $V=1$ there are always only two Euclidean string solutions: the
``long string", with turning point at the horizon $z=1$, and the ``short
string", with turning point above the horizon. The one which gives the
dominant contribution to the path integral is the one with smallest
Euclidean action. For endpoint separation $L$ less than a critical value,
the dominant solution is the short string. This is the string configuration
that remains the furthest from the black hole horizon \cite{aev0608}.
We are interested in evaluating the Euclidean string action for the short
string in the small $L$ limit (the so-called ``dipole approximation").
However, there is a subtlety associated with taking this limit since it
does not commute with taking the $z_7\to\infty$ limit, which corresponds to
infinite quark mass. Recall that the quark mass scales as $r_7$ in string
units; introduce a rescaled length parameter
\begin{equation}
{\epsilon} := {1\over z_7} = {r_0\over r_7}
\end{equation}
associated with the Compton wavelength of the quark. Then the behavior of
the Wilson loop depends on how we parametrically take the $L\to0$ and
${\epsilon}\to0$ limits. For instance, if one keeps the mass (${\epsilon}^{-1}$) fixed
and takes $L\to0$ first, then the Wilson loop will reflect the overlap of
the quark wave functions. On the other hand, if one takes ${\epsilon}\to0$ before
$L$, then the Wilson loop should reflect the response of the plasma to
classical sources. The second limit is presumably the more physically
relevant one for extracting the $\hat q$ parameter. We perform the
calculation in both limits in appendix A to verify this intuition.
In the $L\to0$ limit at fixed (small) ${\epsilon}$, the action of the short string
as a function of $L$ and ${\epsilon}$ is found in appendix A to be
\begin{equation}\label{euc1}
S= {\pi T\sqrt{\lambda}\over\sqrt2{\beta}^2}
\left\{ {L\over{\epsilon}^2} \left[1 +\textstyle{1\over4}{\epsilon}^4+{\cal O}({\epsilon}^8)\right]
- {\pi^2 L^3\over{\beta}^2{\epsilon}^4} \left[ \textstyle{1\over3}
-{1\over6}{\epsilon}^4 +{\cal O}({\epsilon}^8)\right]
+{\cal O}\left(\textstyle{L^5\over{\beta}^4{\epsilon}^6}\right)\right\}.
\end{equation}
(In fact, this result is valid as long as $L\to0$ as $L\propto{\epsilon}$ or faster.)
The main thing to note about this expression is that it is divergent as
${\epsilon}\to0$. This is \emph{not} a self-energy divergence that we failed to
subtract, since any self-energy subtraction ({\it e.g.}, subtracting the
action of two straight strings extending radially from $z=z_7$ to $z=1$)
will be independent of the quark separation and so cannot cancel the
divergences in (\ref{euc1}). (In fact, it inevitably adds an ${\epsilon}^{-1}L^0$
divergent piece.) This divergence as the quark mass is taken infinite is
a signal of the unphysical nature of this order of limits.
The other order of limits, in which ${\epsilon}\to0$ at fixed (small) $L$, is
expected to reflect more physical behavior. Indeed, appendix A gives
\begin{equation}\label{euc2}
{{\beta}\hat S\over T \sqrt{\lambda}} =
- {0.32}\, {{\beta}\over L} + {1.08}
- {0.76}\, {L^3\over{\beta}^3} + {\cal O}(L^7) .
\end{equation}
Here $\hat S$ is the action with self-energy subtractions.
This result (which is in the large mass, or ${\epsilon}\to0$,
limit) is finite for finite quark separation $L$. The $L^{-1}$ term
recovers the expected Coulombic interaction.
Since there is no $L^2$ term in (\ref{euc2}), the subtraction prescription
(\ref{qhat}) implies that the Euclidean analog of the jet quenching
parameter vanishes.
For the sake of comparison, we also compute the long string action in
this limit with the same regularization in Appendix A, giving
\begin{equation}\label{euc3}
{\hat S_{\rm long}\over T \sqrt{\lambda}} =
+2.39\, {L^2\over{\beta}^3} + {\cal O}(L^4).
\end{equation}
This does have the leading $L^2$ dependence, giving rise to an unambiguous
nonzero $\hat q$. But it is exponentially suppressed compared to the short
string contribution (\ref{euc2}), and so gives no contribution to the
effective $\hat q$ in the $T\to\infty$ limit.
\subsection{Spacelike Wilson loop}
We now turn to the spacelike prescription for calculating the Wilson loop.
We will show that a similar qualitative behavior to that of the Euclidean
path integral shown in (\ref{euc2}) and (\ref{euc3}) also holds for
spacelike strings. In particular, the leading contribution is dominated
by a confining-like ($L$) behavior with no jet quenching-like ($L^2$)
subleading term, and only an exponentially suppressed longer-string
contribution has a leading jet quenching-like behavior. The analogous
results are recorded in (\ref{spacelikeshort}) and (\ref{spacelikelong}),
below.
Since $-G<0$ for spacelike worldsheets, the Nambu-Goto action is imaginary
and so $\exp\{i S\} = \exp\{\pm A\}$, where $A$ is the positive real area
of the string worldsheet. The sign ambiguity comes from the square root in
the Nambu-Goto action. For our stationary string solutions, the worldsheet
area is the time of propagation $T$ times the length of the string. Thus,
with the choice of the plus sign in the exponent, the longest string length
exponentially dominates the path integral, while for the minus sign, the
shortest string length dominates. Only the minus sign is physically sensible,
though, since we have seen in section 3 that the length of the spacelike
string solutions is unbounded from above (since there are solutions with
arbitrarily many turns). Thus we must pick the minus sign, and, as in the
Euclidean case, the solution with shortest string length exponentially
dominates the path integral.
As we illustrated in our discussion of the Euclidean Wilson loop, the
physically sensible limit is to take the quarks infinitely massive
($z_7\to\infty$) at fixed quark separation $L$. In the spacelike case,
however, there is a new subtlety: {\it a priori}\/ it is not obvious that
the lightlike limit $V\to1$ will
commute with the $z_7\to\infty$ limit. Since
$V= v (1-z_7^{-4})^{-1/2}$, the lightlike limit is $v\to1$ when
$z_7\to\infty$. We will examine four different approaches to this
limit, shown in figure \ref{fig5}.\footnote{See \cite{cgg0607}
for a related discussion of the lightlike limit.}
\begin{figure}[h]
\begin{center}
$\begin{array}{c@{\hspace{.75in}}c}
\epsfxsize=1.75in \epsffile{spacelikefig5.eps} &
\begin{array}{c@{\hspace{.25in}}c} \\ [-1.75in]
{\bf (a)} & \lim_{z_7\to\infty}\lim_{V\to1^+} \\ [.3cm]
{\bf (b)} & \lim_{z_7\to\infty} \lim_{v\to1^-} \\ [.3cm]
{\bf (c)} & \lim_{z_7\to\infty} \lim_{v\to1^+} \\ [.3cm]
{\bf (d)} & \lim_{v\to1^+} \lim_{z_7\to\infty}
\end{array} \\ [-.3cm]
\end{array}$
\end{center}
\caption[FIG. \arabic{figure}.]{\footnotesize{The shaded
region is the set of $(v,z_7)$ for which the string worldsheet
is spacelike and outside the horizon. The curved boundary
corresponds to lightlike worldsheets. The various approaches to the
lightlike $z_7=\infty$ limit discussed in the text are shown.}}
\label{fig5}
\end{figure}
\subsubsection*{Limit (a):\ lim$\bf _{z_7\to\infty}$\,lim$\bf_{V\to1^+}$.}
This is the limit in which we take the lightlike limit at fixed $z_7$,
then take the mass to infinity. Recall from (\ref{spacetime}) that a
spacelike worldsheet requires either $v\ge1$ (${\gamma}^2<0$) for any $z_7$,
or $v<1$ (${\gamma}^2>1$) and $z_7 <\sqrt{\gamma}$. Since, at fixed $z_7$, $V=1$
corresponds to ${\gamma}=z_7^2$, we necessarily have $v<1$. Thus, only the
$v<1$ spacelike solutions discussed in section 3.1.1 will contribute.
Recall that for these solutions $1\le z^4\le {\gamma}^2(1-{\alpha}^2)$, where the
integration constant is in the range $0<{\alpha}<v$. In particular, we must
keep ${\gamma}^2(1-{\alpha}^2)\ge z_7^4$ and ${\gamma}^2\ge z_7^4$ while taking the ${\gamma}^2
\to z_7^4$ limit. This implies that we must take solutions with ${\alpha}\to0$.
However, such solutions necessarily have $L\to0$ (see figure \ref{fig1}),
which contradicts our prescription of keeping $L$ fixed. Therefore, this
limit is not interesting.
\subsubsection*{Limit (b):\ lim$\bf_{z_7\to\infty}$\,lim$\bf_{v\to1^-}$.}
Another approach to the lightlike limit takes $v\to1$ from below and then
takes $z_7\to\infty$. Then the conditions for a spacelike worldsheet are
automatically satisfied. Again, only the $v<1$ spacelike solutions
discussed in section 3.1.1 contribute, but now the ${\gamma}^2(1-{\alpha}^2)\ge z_7^4$
condition places no restrictions on ${\alpha}$. In particular, this limit will
exist at fixed $L$. The behavior of figure \ref{fig1} as $v\to1$ suggests
that, at any given (small) $L$, all the series of string solutions
illustrated in figure \ref{fig2} occur. The lengths of these strings
follow the pattern plotted in figure \ref{fig3}. Actually, the analysis
given in appendix B shows that the short (c)$_0$ ``up" string does not exist
in the limit with fixed $L$. Thus the long (c)$_0$ ``up" string dominates
the path integral, with the (a)$_1$ ``down" string and all longer strings
relatively exponentially suppressed.
The result from appendix B.1 for the action of the (c)$_0$ string
as a function of $L$ is
\begin{equation}\label{spacelikeshort}
{{\beta}\hat S\over T\sqrt{\lambda}} = -1.31 + {\pi\over2}\, {L\over{\beta}} .
\end{equation}
This result is exact, in the sense that no higher powers of
$L$ enter. The constant term is from the straight string
subtraction. The linear term is consistent with energy loss by elastic scattering.
For comparison, the next shortest string is the (a)$_1$
down string solution. Appendix A computes its action to
be
\begin{equation}\label{spacelikelong}
{{\beta} \hat S_{\rm long}\over T\sqrt{\lambda}} =
0.941\, {L^2\over{\beta}^2} + {\cal O}(L^4) ,
\end{equation}
which shows the jet-quenching behavior found in \cite{lrw0605}. However, since the contribution from this configuration to the path integral is exponentially suppressed, the actual jet quenching parameter is zero.
\subsubsection*{Limit (c):\ lim$\bf _{z_7\to\infty}$\,lim$\bf_{v\to1^+}$.}
When $v>1$, the string worldsheet is spacelike regardless of the value of
$z_7$. Thus, we are free to take the order of limits in many ways. Limit
(c) takes $v\to1$ from above at fixed $z_7$ and then takes $z_7\to\infty$.
We saw in section 3.1.2 that there are always two string solutions for $v>1$:
a short one with ${\alpha}>v$, which turns at $z_t^4:={\gamma}^2(1-{\alpha}^2)$, and a long
one with ${\alpha}<v$, which turns at the horizon $z=1$. Appendix B.2 shows
that in the (c) limit, the short string gives precisely the same contribution
as the (c)$_0$ up string did in the (b) limit. Similarly, the long string
contribution coincides with the (a)$_1$ down string. This agreement is
reassuring, showing that the path integral does not jump discontinuously
between the (b) and (c) limits even though they are evaluated on qualitatively
different string configurations. (The (b) and (c) limits approach the
lightlike limit in the same way, see figure \ref{fig5}.)
\subsubsection*{Limit (d):\ lim$\bf_{v\to1^+}$\,lim$\bf_{z_7\to\infty}$.}
Limit (d) approaches the lightlike limit in the opposite order to the (c)
limit. Somewhat unexpectedly, the results for the string action in the
(d) limit are numerically the same as those found in the (b) and (c)
limits. This is unexpected since the details of evaluating the integrals
in the (c) and (d) limits are substantially different. We take this
agreement as evidence that the result is independent of how the lightlike
limit is taken. (Note that there are, in principle, many different
lightlike limits intermediate between the (c) and (d) limits.)
\section{Introduction}
The leading structures of the nucleon in deeply inelastic scattering
processes are described in terms of three twist-2 parton distribution
functions -- the unpolarized $f_1^a(x)$, helicity $g_1^a(x)$, and
transversity $h_1^a(x)$ distribution. Owing to its chirally odd nature $%
h_1^a(x)$ escapes measurement in deeply inelastic scattering experiments
which are the main source of information on the chirally even $f_1^a(x)$ and
$g_1^a(x)$. The transversity distribution function was originally introduced
in the description of the process of dimuon production in high energy
collisions of transversely polarized protons \cite{Ralston:ys}.
Alternative processes have been discussed. Let us mention here the Collins
effect \cite{Collins:1992kk} which, in principle, allows one to access $%
h_{1}^{a}(x)$ in connection with a fragmentation function describing a
possible spin dependence of the fragmentation process, see also \cite%
{Mulders:1995dh} and references therein. Recent and/or future data from
semi-inclusive deeply inelastic scattering (SIDIS) experiments at HERMES %
\cite{Airapetian:1999tv}, CLAS \cite{Avakian:2003pk} and COMPASS \cite%
{LeGoff:qn} could be (partly) understood in terms of this effect \cite%
{DeSanctis:2000fh,Ma:2002ns,Efremov:2001cz}. Other processes to access $%
h_{1}^{a}(x)$ have been suggested as well, see the review \cite%
{Barone:2001sp}. However, in all these processes $h_{1}^{a}(x)$ enters in
connection with some unknown fragmentation function. Moreover these
processes involve the introduction of transverse parton momenta, and for
none of them could a strict factorization theorem be formulated so far. The
Drell-Yan process remains up to now the theoretically cleanest and safest
way to access $h_{1}^{a}(x)$.
The first attempt to study $h_1^a(x)$ by means of the Drell-Yan process is
planned at RHIC \cite{Bland:2002sd}. Dedicated estimates, however, indicate
that at RHIC the access of $h_1^a(x)$ by means of the Drell-Yan process is
very difficult \cite{Bunce:2000uv,Bourrely:1994sc}. This is partly due to
the kinematics of the experiment. The main reason, however, is that the
observable double spin asymmetry $A_{TT}$ is proportional to a product of
transversity quark and antiquark distributions. The latter are small, even
if they were large enough to saturate the Soffer inequality \cite%
{Soffer:1994ww} which puts a bound on $h_1^a(x)$ in terms of the better
known $f_1^a(x)$ and $g_1^a(x)$.
This problem can be circumvented by using an antiproton beam instead of a
proton beam. Then $A_{TT}$ is proportional to a product of transversity
quark distributions from the proton and transversity antiquark distributions
from the antiproton (which are connected by charge conjugation). Thus in
this case $A_{TT}$ is due to valence quark distributions, and one can expect
sizeable counting rates. The challenging program of how to polarize an
antiproton beam has recently been suggested in the {\bf P}olarized {\bf A}%
ntiproton e{\bf X}periment (PAX) at GSI \cite{PAX}. The technically
realizable polarization of the antiproton beam of about $(5-10)\%$ and the
large counting rates -- due to the use of antiprotons -- make the program
promising.
In this note we shall make quantitative estimates for the Drell-Yan double
spin asymmetry $A_{TT}$ in the kinematics of the PAX experiment. For that we
shall stick to the description of the process at LO QCD. NLO corrections for
$A_{TT}$ have been shown to be small \cite{Bunce:2000uv,NLO,Shimizu:2005fp}.
Similar estimations were done earlier \cite{Anselmino:2004ki,Efremov:2004qs}
using different models for the transversity distribution. Here for
the transversity distribution we shall use the result of the covariant
probabilistic model developed earlier \cite{zav,tra}. In this model
the quarks are represented by quasifree fermions on mass shell and their
intrinsic motion, which has spherical symmetry and is related to the orbital
momentum, is consistently taken into account. It was shown that the model
nicely reproduces some well-known sum rules. The calculation was done from
the input on unpolarized valence quark distributions $q_{V}$, and it was
shown that, assuming $SU(6)$ symmetry, a very good agreement with
experimental data on the proton spin structure functions $g_{1}$ and $g_{2}$
can be obtained.
\section{Transversity and the dilepton\\ transverse spin asymmetry}
In the paper \cite{tra} we discussed the transversity distribution in the
mentioned quark-parton (QPM) model. This model, in the limit of massless
quarks, implies the relation between the transversity and the corresponding
valence quark distribution:
\begin{equation}
\delta q(x)=\varkappa \cos \eta _{q}\left( q_{V}(x)-x^{2}\int_{x}^{1}\frac{%
q_{V}(y)}{y^{3}}dy\right) . \label{dy1}
\end{equation}%
The factors $\cos \eta _{q}$ represent relative contributions to the proton
spin from different quark flavors, which for the assumed $SU(6)$ symmetry
means that $\cos \eta _{u}=2/3$ and $\cos \eta _{d}=-1/3$. The factor $%
\varkappa $ depends on the way in which the transversity is calculated:
{\it i)} Interference effects are attributed to the quark level only, then $%
\varkappa =1$. In this approach the relation between the transversity and
the usual polarized distribution is obtained%
\begin{equation}
\delta q(x)=\Delta q(x)+\int_{x}^{1}\frac{\Delta q(y)}{y}dy, \label{dy2}
\end{equation}%
which means that the resulting transversity distribution is roughly twice
as large as the usual $\Delta q$. The signs of both distributions are
simply correlated. The Soffer inequality in this approach is violated for the
case of large negative quark polarization, when $\cos \eta _{q}<-1/3$, which
means that the proton $d$-quarks in the $SU(6)$ scheme are just on the
threshold of violation.
{\it ii)} Interference effects at the parton-hadron transition stage are
included in addition, but the result represents only an upper bound for the
transversity. This bound is stricter than the Soffer one; roughly
speaking, our bound is more restrictive for quarks with a greater
proportion of intrinsic motion and/or a smaller (or negative) portion of the
resulting polarization. No simple correspondence between the signs of the actual
transversity and $\Delta q$ follows from this approach. In this scenario: $%
\varkappa =\cos ^{2}(\eta _{q}/2)/\cos \eta _{q}$.
Following the papers \cite{Anselmino:2004ki,Efremov:2004qs}, the
transversity can be measured from the Drell-Yan process $q\bar{q}\rightarrow
l^{+}l^{-}$ in the transversely polarized $p\bar{p}$ collisions in the
proposed PAX experiment. The transversity can be extracted from the double
transverse spin asymmetry%
\begin{equation}
A_{TT}(y,Q^{2})=\frac{\sum_{q}e_{q}^{2}\delta q(x_{1},Q^{2})\delta
q(x_{2},Q^{2})}{\sum_{q}e_{q}^{2}q(x_{1},Q^{2})q(x_{2},Q^{2})};\qquad
x_{1/2}=\sqrt{\frac{Q^{2}}{s}}\exp (\pm y), \label{dy3}
\end{equation}%
where, using the momenta $P_{1},P_{2}$ of the incoming proton--antiproton pair
and the momenta $k_{1},k_{2}$ of the outgoing lepton pair, one defines the
physical observables
\begin{equation}
s=\left( P_{1}+P_{2}\right) ^{2},\qquad Q^{2}=\left( k_{1}+k_{2}\right)
^{2},\qquad y=\frac{1}{2}\ln \frac{P_{1}(k_{1}+k_{2})}{P_{2}(k_{1}+k_{2})}.
\label{dy4}
\end{equation}%
The variable $y$ can be interpreted as the rapidity of the lepton pair. The
asymmetry $A_{TT}$ is obtained from the cross sections corresponding to the
different combinations of transverse polarizations in the incoming $p\bar{p}$
pair%
\begin{equation}
A_{TT}(y,Q^{2})=\frac{1}{\hat{a}_{TT}}\frac{d\sigma ^{\uparrow \uparrow
}-d\sigma ^{\uparrow \downarrow }}{d\sigma ^{\uparrow \uparrow }+d\sigma
^{\uparrow \downarrow }};\qquad \hat{a}_{TT}=\frac{\sin ^{2}\theta }{1+\cos
^{2}\theta }\cos (2\varphi ), \label{dy5}
\end{equation}%
\begin{wrapfigure}[19]{RT!}{.5\textwidth}
\begin{center}
\vspace{-18mm}
\epsfig{file=atta.eps, width=.5\textwidth}
\end{center}
\vspace{-6mm}
\caption{\small
Double spin asymmetry at $Q^{2}=4GeV^{2}$ is calculated using two
transversity approaches: Interference effects are attributed to quark level
only {\it (solid line)}. Interference effects at parton-hadron transition
stage are included in addition {\it (dashed line)}, this curve represents
upper bound only. Dotted curve corresponds to the calculation based on
chiral quark-soliton model \cite{Efremov:2004qs}.}
\label{fi1}
\end{wrapfigure}
where the last expression corresponds to the double spin asymmetry in the
QED elementary process, $q\bar{q}\rightarrow l^{+}l^{-}$.
So using the above formulas, one can calculate the double spin asymmetry (%
\ref{dy3}) from the valence quark distribution according to the relation (%
\ref{dy1}). In Fig. \ref{fi1} the result of the calculation is shown.
The normalized input on the proton valence quark distribution was taken from
Ref. \cite{msr}, which corresponds to $Q^{2}=4GeV^{2}$, and the energy
squared of the $p\bar{p}$ system is taken to be $45GeV^{2}$ in accordance with the
assumed PAX kinematics. In the same figure the curve obtained at $%
Q^{2}=5GeV^{2}$ from the calculation \cite{Efremov:2004qs} based on the
chiral quark-soliton model \cite{bochum} is shown for comparison.
All curves in
this figure are based on the same parameterization \cite{grv} of the
distribution functions $q(x,Q^{2})$ appearing in the denominator in Eq. (\ref%
{dy3}). Obviously, our calculation gives a lower estimate of the $A_{TT}$,
and one of the possible reasons can be the effect of quark intrinsic motion,
which, as we have shown, can play a role not only for the spin function $g_{1}$%
\cite{zav}, but also for the transversity $\delta q$ \cite{tra}. In
accordance with \cite{Anselmino:2004ki}, the motion of the lepton pair can
be described alternatively using the variable
\begin{equation}
x_{F}=\frac{2q_{L}}{\sqrt{s}}=x_{1}-x_{2}=2\sqrt{\frac{Q^{2}}{s}}\sinh y.
\label{dy6}
\end{equation}%
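This follows directly from the definitions in (\ref{dy3}), since $x_{1}-x_{2}=%
\sqrt{Q^{2}/s}\,(e^{y}-e^{-y})=2\sqrt{Q^{2}/s}\sinh y$.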
\begin{wrapfigure}[16]{RH}{.5\textwidth}
\vspace{-15mm}
\begin{center}
\epsfig{file=attb.eps, width=.5\textwidth}
\end{center}
\vspace{-8mm}
\caption{\small
Double spin asymmetry: The solid and dashed lines are the same as in
the previous figure, but here their dependence on $x_F$ is displayed. The dotted
line is the corresponding estimate from \protect\cite{Anselmino:2004ki}.}
\label{fi2}
\end{wrapfigure}
In Fig. \ref{fi2} the estimate of the asymmetry obtained in the cited
paper for $Q^{2}=4GeV^{2}$ and $s=45GeV^{2}$ is compared with our curves
from Fig. \ref{fi1}, in which the variable $y$ is replaced by $x_{F}$;
the two variables are related by the transformation (\ref{dy6}).
Apparently, for $x_{F}\leq 0.5$ the curve from \cite{Anselmino:2004ki} is
quite compatible with our results.
So, in both figures we have a set of curves resulting from different
assumptions, and the experiment should decide which one gives the best fit
to the data. How many events are necessary to discriminate among the
displayed curves? After integrating over the angular variables one gets%
\begin{equation}
A_{TT}=\frac{n_{+}-n_{-}}{n_{+}+n_{-}}, \label{dy7}
\end{equation}%
then, assuming Poisson-distributed counts with $\Delta n_{\pm}=\sqrt{n_{\pm}}$,%
\begin{equation}
\Delta A_{TT}=\sqrt{\left( \frac{\partial A_{TT}}{\partial n_{+}}\Delta
n_{+}\right) ^{2}+\left( \frac{\partial A_{TT}}{\partial n_{-}}\Delta
n_{-}\right) ^{2}}=\frac{2}{n_{+}+n_{-}}\sqrt{\frac{n_{+}n_{-}}{n_{+}+n_{-}}},
\label{dy8}
\end{equation}%
which, using $1-A_{TT}^{2}=4n_{+}n_{-}/(n_{+}+n_{-})^{2}$, implies%
\begin{equation}
\Delta A_{TT}=\sqrt{\frac{1-A_{TT}^{2}}{N_{ev}}};\qquad N_{ev}=n_{+}+n_{-}.
\label{dy9}
\end{equation}%
So for an approximate estimate of the statistical error we obtain%
\begin{equation}
\Delta A_{TT}\lesssim \frac{1}{\sqrt{N_{ev}}}, \label{dy10}
\end{equation}%
where $N_{ev}$ is the number of events in the bin or interval of $y$ or $%
x_{F}$ in which the curves are compared. For example, if one requires $%
\Delta A_{TT}\leq 1\%$, which is an error small enough to separate the curves in
the presented figures, then roughly $10^{4}$ events are needed in
the considered bin or interval. Of course, this estimate assumes full
polarization of the colliding proton and antiproton. Since the expected
polarization of antiprotons at PAX will hardly be better than $(5-10)\%$%
, the minimum number of events will be correspondingly higher.
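Since the measured raw asymmetry is diluted by the product of the beam
polarizations, the required statistics grows roughly as
$1/(P_{p}P_{\bar{p}})^{2}$.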
\section{Summary}
The covariant probabilistic QPM, which takes into account intrinsic quark
motion, was applied to the calculation of the transverse spin asymmetry of
dileptons produced in $p\bar{p}$ collisions under the conditions
expected for the recently proposed PAX experiment. This asymmetry is
directly related to the transversity distributions of quarks inside the
proton. In our asymmetry calculation the two approaches to the
transversity, which differ in the accounting for interference effects, were
applied. The obtained results are compared with the prediction based on the
chiral quark-soliton model. One can observe that quite different approaches give
similar results, but both our curves are lower than that obtained from
the quark-soliton model. Our results for $x_{F}\leq 0.5$ are also well
compatible with the recent estimate \cite{Anselmino:2004ki}.
\paragraph{Acknowledgement.}
This work is supported by the Votruba-Blokhintsev Program of JINR; A.E. and
O.T. are partially supported by grant RFBR 03-02-16816. Further, this work has
been supported in part by the Academy of Sciences of the
Czech Republic under the project AV0-Z10100502.
\section{Introduction}
Recognizing Textual Entailment~(RTE) is one of the basic tasks of Natural Language Understanding~(NLU), and NLU is a subclass of Natural Language Processing~(NLP). Textual entailment is the relationship between two texts where one text fragment, referred to as the \textit{`Hypothesis~(H)'}, can be inferred from another text fragment, referred to as the \textit{`Text~(T)'}~\citep{dagan2005pascal,sharma2015recognizing}. In other words, a text $T$ entails a hypothesis $H$ if $H$ is considered to be true according to the context of the corresponding text $T$~\citep{dagan2005pascal}. Let us consider a text-hypothesis pair to illustrate an example of an entailment relationship. Suppose \textit{``A mother is feeding milk to a baby''} is a particular text $T$ and \textit{``A baby is drinking milk''} is a hypothesis $H$. We see that the hypothesis $H$ is a true statement that can easily be inferred from the corresponding text $T$. Now consider another hypothesis $H$, \textit{``A man is eating rice''}. For the same text fragment $T$, we can see that there is no entailment relationship between $T$ and $H$; hence this text-hypothesis pair does not hold any entailment relationship, meaning it is neutral. The identification of the entailment relationship has a significant impact on different NLP applications that include question answering, text summarization, machine translation, information extraction, and information retrieval~\citep{almarwani2017arabic,sharma2015recognizing}.
Since the first PASCAL challenge~\citep{dagan2005pascal} for recognizing textual entailment, different machine learning approaches have been proposed by the research community. The proposed approaches employ supervised machine learning~(ML) techniques using different underlying lexical, syntactic, and semantic features of the text-hypothesis pair. Recently, deep learning-based approaches including LSTM (Long Short-Term Memory), CNN (Convolutional Neural Network), and transfer learning have been applied to detect the entailment relationship between the text-hypothesis pair~\citep{kiros2015skip,vaswani2017attention,devlin2018bert,conneau2017supervised}. Almost all methods utilize the semantic information of the text-hypothesis pair by representing the sentences as semantic vectors. For doing so, they consider all the values of the words' vectors returned by the word-embedding model; classical approaches then apply the average of the real-valued word vectors as the sentence representation. We hypothesize that some values of a particular word's vector might have a negative impact since they are passed through an arithmetic average function. Following this intuition, we observed that elements of a word's vector can be eliminated when their relevant counterparts are already present in the semantic vector of the text or hypothesis, yielding a better semantic representation. Based on this observation, we propose a threshold-based representation technique considering the mean and standard deviation of the words' vectors.
Applying the threshold-based semantic sentence representation, the text and the hypothesis are represented by two real-valued high-dimensional vectors. Then we introduce an element-wise Manhattan distance vector (EMDV) between the vectors for text and hypothesis to obtain a semantic representation of the text-hypothesis pair. This EMDV is directly employed as a feature vector for ML algorithms to identify the entailment relationship of the text-hypothesis pair. In addition, we introduce another feature by calculating the absolute average of the element-wise Manhattan distance vector of the text-hypothesis pair. In turn, we extract several handcrafted lexical and semantic features including the Bag-of-Words (BoW) based similarity score, the Jaccard similarity score (JAC), and the BERT-based semantic textual similarity score (STS) for the corresponding text-hypothesis pair. To classify the text-hypothesis pair, we apply multiple machine learning classifiers that use different textual features including our introduced ones. Then an ensemble of the ML algorithms with the majority voting technique provides the final entailment relationship for the corresponding text-hypothesis pair. To validate the performance of our method, a wide range of experiments is carried out on the benchmark SICK-RTE dataset. The experimental results on this benchmark textual entailment classification dataset demonstrate efficient performance in recognizing different textual entailment relations. The results also demonstrate that our approach outperforms some state-of-the-art methods.
The rest of the paper is organized as follows:~\Cref{related work} presents some related works on RTE. Then our method is discussed in~\Cref{pa}. The details of the experiments with their results are presented in~\Cref{experiment and result}. Finally,~\Cref{conclusion} presents the conclusion with the future direction.
\section{Related Work} \label{related work}
With the first PASCAL challenge, textual entailment recognition gained considerable attention from the research community~\citep{dagan2005pascal}. Several research groups participated in this challenge, but most of the methods applied lexical features~(i.e., word overlapping) with ML algorithms to recognize the entailment relation~\citep{dagan2005pascal}. Several RTE challenges have been organized, and some methods with promising performance on different downstream tasks have been proposed~\citep{haim2006second,giampiccolo2007third,giampiccolo2008fourth,bentivogli2009fifth,bentivogli2011seventh,dzikovska2013semeval,paramasivam2021survey}. Malakasiotis et al.~\citep{malakasiotis2007learning} proposed a method employing string matching-based lexical and shallow syntactic features with a support vector machine~(SVM). Four distance-based features with SVM are also employed in~\citep{castillo2008approach}. The features include edit distance, distance in WordNet, and the longest common substring between texts.
Similarly, Pakray et al.~\citep{pakray2009lexical} applied multiple lexical features including WordNet-based unigram match, bigram match, longest common sub-sequence, skip-gram, stemming, and named entity matching. Finally, they applied SVM classifiers after introducing lexical and syntactic similarity features. Basak et al.~\citep{basak2015recognizing} visualized the text and hypothesis leveraging directed networks (dependency graphs), with nodes denoting words or phrases and edges denoting connections between nodes. The entailment relationship is then identified by matching the graphs with vertex and edge substitution. Some other methods made use of bag-of-words, word overlapping, logic-based reasoning, lexical entailment, ML-based methods, and graph matching to recognize textual entailment~\citep{ghuge2014survey,renjit2022feature,liu2016classification}.
Bowman et al.~\citep{bowman2015large} introduced the Stanford Natural Language Inference (SNLI) corpus, a dataset consisting of labeled sentence pairs that can be used as a benchmark in NLP tasks. This very large entailment (inference) dataset provides the opportunity for researchers to apply deep learning-based approaches to identify the entailment relation between text and hypothesis. Therefore, different deep learning-based approaches including LSTM (Long Short-Term Memory), CNN (Convolutional Neural Network), BERT, and transfer learning are being applied to RTE~\citep{kiros2015skip,vaswani2017attention,devlin2018bert,conneau2017supervised}. All these methods used either lexical or semantic features, whereas our proposed method uses both lexical and semantic features, including the element-wise Manhattan distance vector~(EMDV), the average of EMDV, BoW, Jaccard similarity, and semantic textual similarity, to recognize entailment.
\begin{figure*}[!h]
\centering
\includegraphics[width=0.95\textwidth]{overviewofextended.pdf}
\caption{Overview diagram for recognising textual entailment} \label{overview}
\end{figure*}
\section{Proposed Approach} \label{pa}
This section describes the proposed framework to recognize textual entailment (RTE) using the semantic information of the Text-Hypothesis (T-H) pair. The overview of our method is presented in~\Cref{overview}. First, we apply different preprocessing techniques to obtain a better textual representation, which eventually boosts the performance. In this phase, punctuation marks are removed, and the stopwords are also eliminated (except for negative words such as no, not, etc.). Here, a stopword is a word that has very little influence on the meaning of the sentence (e.g., a, an, the, and, or). After that, a tokenizer is utilized to split the sentence into a list of words. Then, we apply a lemmatizer to get the base form of the words.
\subsection{Empirical text representation}
The semantic information of a word is represented as a vector. The elements of the vector are real numbers that represent the contextual meaning of that word. Using word embeddings, the semantic vectors of the words can be obtained. Almost all classical approaches apply an arithmetic average over the words' semantic vectors to get the semantic information of the sentences. However, not all the values of a word's semantic vector may be important to express the meaning of the text-hypothesis pair in the form of vectors. We hypothesize that some values of a particular word's vector might have a negative impact since they will be passed through an arithmetic average function.
Let $T$ and $H$ be the input text and hypothesis, respectively. We apply our sentence representation (\Cref{algo}), which provides better semantic information. In the first two statements, the function $preprocess(\cdot)$ returns the lists of preprocessed words $T_p$ and $H_p$ for the corresponding text $T$ and hypothesis $H$, respectively. After that, a vector of size $K$ initialized with zeros is taken (statement 3). Then the function $get_-Semantic_-Info()$ returns the semantic representation applying our threshold-based empirical representation (statements 4--21).
\begin{algorithm*}[h!]
\caption{Semantic information of T-H pair based on an automated threshold of words semantic representation:} \label{algo}
\begin{algorithmic}[1]
\Require{Text (T) and Hypothesis (H) pair and Word-embedding models (word2vec)}
\Ensure{Semantic information of Text (T) and Hypothesis (H)}
\State $T_p \Leftarrow preprocess(T)$ \Comment{List of words of Text}
\State $H_p \Leftarrow preprocess(H)$ \Comment{List of words of Hypothesis}
\State $vec_-S \Leftarrow [0,0,...,0]$
\Function{$get_-Semantic_-Info$}{$Sent$}
\For{each $word \in Sent$}
\If {$word \in word2vec.vocab$}
\State $x \Leftarrow word2vec[word]$ \Comment{Vector representation of the word}
\State $\overline{x} \Leftarrow Mean(x) $
\State $\sigma \Leftarrow Standard\_Deviation (x) $
\State $\alpha \Leftarrow \overline{x}+\sigma $
\State $k \Leftarrow 0$
\While {$k < length(x)$}
\If {$abs(vec_-S[k]-x[k]) \geq \alpha$}
\State $vec_-S[k] \Leftarrow add(vec_-S[k],x[k])$
\EndIf
\State $k++$
\EndWhile
\EndIf
\EndFor
\State $return$ $vec_-S$
\EndFunction
\State $v_-T \Leftarrow get_-Semantic_-Info(T_p)$ \Comment{Semantic information of Text}
\State $v_-H \Leftarrow get_-Semantic_-Info(H_p)$ \Comment{Semantic information of Hypothesis}
\end{algorithmic}
\end{algorithm*}
In the function, only the words available in the word2vec model vocabulary are considered for further action. The first word's representation is added to $vec_-S$ without any condition or automated threshold. We found through empirical experiments that some elements of a word's vector might give a negative bias in the arithmetic average. By ignoring those, the sentence can be represented as a semantic vector with less sampling fluctuation. Therefore, we attempt to capture the sentence's intent by employing an automated threshold ($\alpha$) on the semantic elements of the words. We hypothesize that, if a particular element at index $i$ does not have a significant absolute difference from the accumulated feature value at the same index $i$ for the sentence $S$, then the representation might not capture the correct contextual information. We therefore introduce the empirical threshold $\alpha$ using the mean and standard deviation as $\alpha = \overline{x} + \sigma$. The elements of the word are added only after passing the threshold, yielding the semantic information of the sentence.
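For concreteness, a minimal Python sketch of~\Cref{algo} is shown below; the \texttt{w2v} lookup interface (any mapping from words to vectors, e.g., a loaded word2vec model) and the function names are our illustrative assumptions rather than the exact original implementation.
\begin{verbatim}
import numpy as np

def semantic_info(words, w2v, dim=300):
    # Threshold-based sentence vector, following Algorithm 1.
    vec_s = np.zeros(dim)
    for word in words:
        if word not in w2v:          # skip out-of-vocabulary words
            continue
        x = np.asarray(w2v[word], dtype=float)
        alpha = x.mean() + x.std()   # automated threshold: mean + std
        # add only elements differing from the running vector by >= alpha
        mask = np.abs(vec_s - x) >= alpha
        vec_s[mask] += x[mask]
    return vec_s
\end{verbatim}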
\subsection{Feature extraction of text-hypothesis Pair}
\subsubsection{Element-wise Manhattan distance vector~(EMDV)}
The empirical threshold-based text representation returns the semantic real-valued vectors $v_T$ and $v_H$ for the text and hypothesis, respectively. Our primary intuition for recognizing the entailment relationship is that the smaller the difference between text and hypothesis, the larger the chance of entailment between them. Therefore, we apply the Manhattan distance function to compute the element-wise Manhattan distance vector $EMDV = v_T - v_H$, where each element is the difference between the corresponding elements of the vectors for $T$ and $H$.
\subsubsection{Average of EMDV} The EMDV provides a real-valued Manhattan distance vector for the text-hypothesis pair. Applying the average over the summation of the absolute differences between the text representation $v_T$ and the hypothesis representation $v_H$, we can calculate the average of EMDV, which is a scalar value corresponding to the text-hypothesis pair. This can be calculated as follows:
\begin{equation}\label{avgemdv}
Sum_{EMDV} = \frac{1}{k}\sum_{i=1}^{k}\left|v_{T_i} - v_{H_i}\right|,
\end{equation}
where $k$ is the dimension of the vector. $v_{T_i}$ and $v_{H_i}$ are the $i$-th elements of the text and hypothesis, respectively.
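The two features can then be sketched in Python as follows (with \texttt{v\_t} and \texttt{v\_h} the NumPy vectors produced by the representation above; the function names are ours):
\begin{verbatim}
import numpy as np

def emdv(v_t, v_h):
    # Element-wise Manhattan distance vector, used directly as features.
    return v_t - v_h

def avg_emdv(v_t, v_h):
    # Scalar feature: mean absolute element-wise difference.
    return float(np.abs(v_t - v_h).mean())
\end{verbatim}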
\subsubsection{Jaccard similarity score (JAC)} The Jaccard similarity assesses the similarity of the text-hypothesis pair (T-H) by determining which words are common and which are unique. It is calculated as the number of common words present in the pair divided by the total number of distinct words present in the sentence pair. This can be represented in set notation as the ratio of the intersection ($T \cap H$) and union ($T \cup H$) of the two sentences.
\begin{equation*}
JAC(T,H) = \frac{|T \cap H|}{|T \cup H|},
\end{equation*}
where $|T \cap H|$ indicates the number of words shared between both sentences and $|T \cup H|$ is the total number of distinct words in both sentences (shared and un-shared). The Jaccard similarity is 0 if the two sentences share no words and 1 if the two sentences are identical.
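A one-function Python sketch of this score (assuming the inputs are the preprocessed word lists):
\begin{verbatim}
def jaccard(t_words, h_words):
    # Ratio of shared words to all distinct words in the pair.
    t, h = set(t_words), set(h_words)
    return len(t & h) / len(t | h) if (t | h) else 0.0
\end{verbatim}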
\subsubsection{Bag-of-Words based similarity (BoW)} BoW is the vector representation whose dimension is the number of unique words in the text and whose entries are the word frequencies. Suppose the Bag-of-Words vectors obtained for the text and hypothesis are $[1,0,2,0,4,0,0,0,1,1]$ and $[0,2,0,1,4,3,0,1,2,1]$, respectively. Cosine similarity~\citep{atabuzzaman2021semantic} is then applied to these vectors to compute the similarity score.
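Both lexical scores can be sketched as follows (a minimal illustration over tokenized sentences; the helper names are our own):
\begin{verbatim}
import numpy as np
from collections import Counter

def jaccard(t_tokens, h_tokens):
    """|T intersect H| / |T union H| over the word sets."""
    t, h = set(t_tokens), set(h_tokens)
    return len(t & h) / len(t | h)

def bow_cosine(t_tokens, h_tokens):
    """Cosine similarity between word-frequency (BoW) vectors."""
    vocab = sorted(set(t_tokens) | set(h_tokens))
    ct, ch = Counter(t_tokens), Counter(h_tokens)
    vt = np.array([ct[w] for w in vocab], dtype=float)
    vh = np.array([ch[w] for w in vocab], dtype=float)
    return float(vt @ vh / (np.linalg.norm(vt) * np.linalg.norm(vh)))
\end{verbatim}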
\subsubsection{BERT-based semantic similarity score (STS)} Inspired by one of the prior works~\citep{shajalal2019semantic} on semantic textual similarity, we applied several semantic similarity methods. To compute the semantic textual similarity (STS) score, pre-trained BERT word embeddings are employed. Using the BERT word embeddings, T and H are represented as semantic vectors by adding the constituent words' vectors one by one. The cosine similarity between the text and hypothesis vectors is then taken as the STS score.
\section{Experiments Results} \label{experiment and result}
This section presents the details about the dataset, evaluation metrics, experimental setup, and performance analysis compared with known related works.
\subsection{Dataset}
We applied our method to a benchmark entailment recognition dataset named SICK-RTE~\citep{marelli2014sick}. This English dataset consists of almost 10K Text-Hypothesis (T-H) pairs exhibiting a variety of lexical, syntactic, and semantic phenomena. Each text-hypothesis pair is annotated as either \textbf{Neutral}, \textbf{Entailment}, or \textbf{Contradiction}, and these labels are used as ground truth. Among the 10K pairs, 5595 are annotated as Neutral, 2821 as Entailment, and 1424 as Contradiction. \Cref{dataset} presents some text-hypothesis pairs with their corresponding entailment relations.
\begin{table*}[h!]
\centering
\caption{Examples of Text-Hypothesis pair from SICK-RTE dataset} \label{dataset}
\begin{tabular}{|p{4.5 cm}|p{4.5 cm}|p{2 cm}|} \hline
{\textbf{Text (T)}} &{\textbf{Hypothesis (H)}} &{\textbf{Relationship}}\\ \hline
Two dogs are fighting. &{Two dogs are wrestling and hugging.} &{Neutral}\\ \hline
A person in a black jacket is doing tricks on a motorbike. &{A man in a black jacket is doing tricks on a motorbike.} &{Entailment}\\ \hline
Two dogs are wrestling and hugging. &{There is no dog wrestling and hugging.} &{Contradiction}\\ \hline
A woman selling bamboo sticks talking to two men on a loading dock. &{There are at least three people on a loading dock.} &{Entailment}\\ \hline
A woman selling bamboo sticks talking to two men on a loading dock. &{A woman is selling bamboo sticks to help provide for her family.} &{Neutral}\\ \hline
A woman selling bamboo sticks talking to two men on a loading dock. &{A woman is not taking money for any of her sticks} &{Contradiction}\\ \hline
\end{tabular}
\end{table*}
We make use of the pre-trained BERT word-embedding model and the pre-trained word2vec model trained on the Google News corpus. The dimension of each word vector is $k=300$ for word2vec and $k=768$ for BERT. We evaluate the performance of our methods in terms of classification accuracy.
\subsection{Experimental settings}
To evaluate the performance of our approach, several experiments were carried out on the SICK-RTE dataset. First, we fed the element-wise Manhattan distance vector (EMDV)-based $k$-dimensional feature vector to the ML classifiers; this setting is denoted $RTE\_EMDV$. The setting in which the sentence representation does not apply the threshold-based algorithm is denoted $RTE\_without\_thr\_EMDV$. We then computed the feature vector for the text-hypothesis pair from the average of EMDV, JAC, BoW, and STS measures. To isolate the contribution of the element-wise Manhattan distance vector, we applied the average of EMDV (\Cref{avgemdv}) in two variations: with the threshold-based algorithm and with a plain vector from the embedding. For all settings, we employed several classification algorithms, including a support vector machine with RBF kernel, $K$-nearest neighbors, random forest, and naive Bayes. Finally, an ensemble result based on majority voting over the ML algorithms is also reported. For all experiments, 75\% of the data are used for training and the remainder for testing.
\subsection{Performance analysis of entailment recognition}
\Cref{entailmentmd} shows the performance of different ML algorithms in recognizing the entailment relation with element-wise Manhattan distance vector-based features ($RTE\_EMDV$). We also report the performance of the EMDV feature vector without the representation algorithm (\Cref{algo}), denoted $RTE\_without\_thr\_EMDV$. The table shows that the KNN classifier outperformed the other ML algorithms in detecting the entailment relationship using the representational $RTE\_EMDV$ features. It also demonstrates how the element-wise distance-based feature vector derived from the threshold-based semantic representation helps the ML models recognize the different entailment labels.
For a better understanding of the impact of the introduced features, \Cref{Conf1} presents the confusion matrices of the ensemble method using the element-wise EMDV vector with and without the proposed sentence representation algorithm. \Cref{cmwiththrmd} shows that the ensemble method can detect neutral, entailment, and contradiction T-H pairs. In contrast, \Cref{cmwithoutthrmd} reveals that without the proposed representation the ML algorithms cannot recognize the contradiction relationship between text and hypothesis and detect only a few entailment relations. This again underlines the impact of the proposed feature with threshold-based sentence representation.
\begin{table*}[h!]
\centering
\caption{Performance of ML models based on EMDV}
\begin{tabular}{|c|c|c|} \hline
\textbf{Algorithm} &\textbf{Features} &\textbf{Accuracy} \\ \hline
\multirow{2}{*}{SVM\_rbf} &$RTE\_EMDV$ &0.66 \\
&$RTE\_without\_thr\_EMDV$ &0.58 \\ \hline
\multirow{2}{*}{KNN} &$RTE\_EMDV$ &\textbf{0.67} \\
&$RTE\_without\_thr\_EMDV$ &\textbf{0.57} \\ \hline
\multirow{2}{*}{R.Forest} &$RTE\_EMDV$ &0.62 \\
&$RTE\_without\_thr\_EMDV$ &0.58 \\ \hline
\multirow{2}{*}{Naive Bayes} &$RTE\_EMDV$ &0.66 \\
&$RTE\_without\_thr\_EMDV$ &0.58 \\ \hline
\multirow{2}{*}{Ensemble} &$RTE\_EMDV$ &0.66 \\
&$RTE\_without\_thr\_EMDV$ &0.58 \\ \hline
\end{tabular}
\label{entailmentmd}
\end{table*}
\begin{table*}[!htb]
\caption{Confusion matrix for ensemble learning with EMDV}
\begin{minipage}{.5\linewidth}
\caption{With threshold}
\label{cmwiththrmd}
\centering
\begin{tabular}{|c|c|c|c|} \hline
&Neutral &Entail &Contradict \\ \hline
Neutral &\textbf{1135} &193 &17 \\ \hline
Entail &328 &\textbf{370} &37\\ \hline
Contradict &96 &155 &\textbf{129}\\ \hline
\end{tabular}
\end{minipage}%
\begin{minipage}{.5\linewidth}
\centering
\caption{Without threshold}
\label{cmwithoutthrmd}
\begin{tabular}{|c|c|c|c|} \hline
&Neutral &Entail &Contradict \\ \hline
Neutral &\textbf{1407} &15 &0 \\ \hline
Entail &674 &\textbf{13} &0\\ \hline
Contradict &344 &7 &0\\ \hline
\end{tabular}
\end{minipage}
\label{Conf1}
\end{table*}
\Cref{rm} presents the performance of different ML algorithms in recognizing entailment with semantic and lexical features, including our proposed average-of-EMDV feature (\Cref{avgemdv}). The table shows that when all the features are combined, every classifier performs better with the threshold-based average-of-EMDV feature (\Cref{avgemdv}) than without the threshold-based text representation. This consistently demonstrates that the proposed average-of-EMDV feature captures the entailment relationship better than the classical features alone. \Cref{conf2} presents the confusion matrices of the ensemble models with the different feature combinations. Both tables show that with all features computed from the proposed semantic representation, the ensemble method classifies text-hypothesis pairs more accurately than with the classical semantic representation, confirming the consistency of the performance.
\begin{table}[h!]
\centering
\caption{Performance of ML models based on handcrafted features}
\begin{tabular}{|c|c|c|} \hline
\textbf{Algorithm} &\textbf{Features} &\textbf{Accuracy} \\ \hline
\multirow{2}{*}{SVM\_rbf} &$Avg\_Sum\_EMDV$+BoW+JAC+STS &0.80 \\
&$Avg\_Sum\_without\_thr$+BoW+JAC+STS &0.78 \\ \hline
\multirow{2}{*}{KNN} &$Avg\_Sum\_EMDV$+BoW+JAC+STS &0.81 \\
&$Avg\_Sum\_without\_thr$+BoW+JAC+STS &0.79 \\ \hline
\multirow{2}{*}{R.Forest} &$Avg\_Sum\_EMDV$+BoW+JAC+STS &0.81 \\
&$Avg\_Sum\_without\_thr$+BoW+JAC+STS &0.78 \\ \hline
\multirow{2}{*}{Naive Bayes} &$Avg\_Sum\_EMDV$+BoW+JAC+STS &0.74 \\
&$Avg\_Sum\_without\_thr$+BoW+JAC+STS &0.73 \\ \hline
\multirow{2}{*}{Ensemble} &$Avg\_Sum\_EMDV$+BoW+JAC+STS &\textbf{0.81} \\
&$Avg\_Sum\_without\_thr$+BoW+JAC+STS &\textbf{0.79} \\ \hline
\end{tabular}
\label{rm}
\end{table}
\begin{table}[!htb]
\caption{Confusion matrix for ensemble learning with handcrafted features}
\begin{minipage}{.5\linewidth}
\caption{With threshold}
\centering
\begin{tabular}{|c|c|c|c|} \hline
&Neutral &Entail &Contradict \\ \hline
Neutral &\textbf{1225} &138 &16 \\ \hline
Entail &177 &\textbf{523} &8\\ \hline
Contradict &108 &23 &\textbf{242}\\ \hline
\end{tabular}
\label{cmwiththr}
\end{minipage}%
\begin{minipage}{.5\linewidth}
\centering
\caption{Without threshold}\label{cmwithoutthr}
\begin{tabular}{|c|c|c|c|} \hline
&Neutral &Entail &Contradict \\ \hline
Neutral &\textbf{1252} &149 &25 \\ \hline
Entail &191 &\textbf{479} &14\\ \hline
Contradict &124 &15 &\textbf{211}\\ \hline
\end{tabular}
\end{minipage}
\label{conf2}
\end{table}
\begin{figure*}[!h]
\centering
\includegraphics[width=0.75\linewidth]{entailment_comparison.pdf}
\caption{Performance comparison of our proposed method in terms of Accuracy on the SICK-RTE dataset.} \label{compare_entailment}
\end{figure*}
\subsection{Comparative analysis}
\Cref{compare_entailment} shows the comparison of our method with known prior related works on the SICK-RTE dataset. BUAP~\citep{bentivogli2016sick} employed a language model with different features, including syntactic and negation features, to classify text-hypothesis pairs as entailment, neutral, or contradiction. Utexas~\citep{bentivogli2016sick} used sentence-composition and phrase-composition features with negation and a vector semantic model to recognize the entailment relationship. Although these two models employed rich feature sets and neural network models, our method outperformed both: our proposed features, combined with the empirical threshold-based sentence representation algorithm, capture the semantic entailment relationship better. ECNU~\citep{shin2020autoprompt} employed feature engineering with deep learning for the RTE task, and BERT~(fine-tuned)~\citep{shin2020autoprompt} applied a bidirectional encoder representation of the sentence pair. Although our method does not outperform ECNU and fine-tuned BERT, the performance difference is small, so our method remains competitive.
\section{Conclusion with future direction} \label{conclusion}
This paper presents a novel method for recognizing textual entailment that introduces new features based on the element-wise Manhattan distance vector, computed from an empirical semantic sentence representation. To extract the semantic representation of the text-hypothesis pair, an empirical threshold-based algorithm is employed; the algorithm eliminates insignificant elements of the words' vectors and extracts the semantic information of the text-hypothesis pair. Various ML algorithms are then trained on the extracted semantic information along with several lexical and semantic features. The experimental results demonstrate the effectiveness of the approach in identifying textual entailment relationships between text-hypothesis pairs. In summary, the performance across the different experimental settings and multiple classifiers was consistent and outperformed several known related works.
In the future, it would be interesting to apply deep learning-based methods that use the element-wise Manhattan distance vector to recognize textual entailment.
\bibliographystyle{unsrtnat}
\section{Introduction}
Since the unanticipated discovery of high-temperature superconductivity in the cuprates, the single-band Hubbard model \cite{Hubbard_PRSLA_1993} has been the focus of an unparalleled level of theoretical scrutiny and associated algorithmic development.\cite{LeBlanc_Gull_PRX_2015,Lieb_Wu_RPL_1968,Scalapino_Brooks_NY_2007,Gukelberger_Werner_PRB_2015} Nevertheless, most materials exhibiting strong correlation, including most transition metal oxides\cite{KuneA_Pickett_NM_2008,Laad_Hartmann_PRB_2006,Maeno_Ikeda_MSE_1999} as well as the pnictides,\cite{Si_Abrahams_NRM_2016,Yin_Kotliar_NM_2011,Haule_Kotliar_NP_2009} fullerides,\cite{Nomura_JPCM_2016,Nomura_Science_2015} and chalcogenides\cite{Sun_Zhao_Nature_2012,Si_Abrahams_NRM_2016,Yin_Kotliar_NM_2011} possess multiple bands that cross their Fermi levels and are therefore fundamentally multi-band in nature.\cite{Georges_AnnRev_2013} In recent years, it has become increasingly evident that some of the most significant effects in such multi-band materials stem from Hund's coupling.\cite{Yin_Kotliar_NM_2011,Haule_Kotliar_NP_2009,Johannes_Mazin_PRB_2009} According to Hund's rules, electrons favor maximizing their total spin by first occupying different, degenerate bands in the same shell with parallel spins; only after they fill all available bands do they then doubly occupy the same bands.\cite{Georges_AnnRev_2013} As such, the effective Coulomb repulsion among electrons in a half-filled shell is increased due to Hund's rules, while that at any other filling is decreased. Hund's effects therefore drive half-filled $d$- and $f$-electron materials closer to a Mott transition for a given Coulomb repulsion, yet drive non-half-filled materials away from a Mott transition while also increasing the correlation within their metallic phases. The consequences of these effects are perhaps best illustrated in 4$d$ transition metal oxides that have more than a single electron or hole in their 4$d$ shells. \cite{Koster_Beasley_RMP_2012,Frandkin_Mackenzie_ARCMP_2010,Dang_Millis_PRB_2015, Han_Millis_PRB_2016} Unlike their rhodate counterparts, which possess a single hole in their shells, many ruthenates and molybdenates exhibit substantial mass enhancements,\cite{Ikeda_JPSJ_2000} unexpected Mott Insulator transitions,\cite{Liebsch_PRL_2007,Gorelov_PRL_2010,Sutter_Chang_NC_2017} novel quantum phase transitions,\cite{Grigera_Science_2001} and even superconducting phases\cite{Mackenzie_RMP_2003} -- all of which may be attributed to Hund's physics.
Despite both the prevalence and importance of Hund's effects, they remain a challenge to describe. Most analytical and numerical treatments revolve around solving a multi-band Hubbard model, most often the Hubbard-Kanamori (HK) model,\cite{Kanamori_PTP_1963} containing a mixture of kinetic, Coulomb $U$, and Hund's $J$ terms. Although analytical studies have been performed,\cite{Roth_PR_1966,LyonCaen_Cyrot_JPC_1975,Khomskii_SolidStateComm_1973,Kugel_Khomskii_Sov_1982,Ishihara_PRB_1997} just as in the case of the single-band Hubbard model containing a repulsive $U$ term,\cite{LeBlanc_Gull_PRX_2015,Zheng_Science_2017} accurate treatments of these models necessitate methods capable of treating strong correlation non-perturbatively. However, because these models possess significantly larger state spaces and involve additional pair-hopping and Hund's exchange terms, they are often even more difficult to treat than the Hubbard model.
Due to the complicated interactions involved, there is no general analytical solution for these problems, and numerical treatments are therefore in high demand. To date, most numerical studies of multi-band models have employed Dynamical Mean Field Theory (DMFT)\cite{Georges_RevModPhys_1996,Metzner_Vollhardt_PRL_1989} either on its own or in combination with Density Functional Theory (DFT)\cite{Anisimov_JPhysCondMat_1997} because of DMFT's ability to treat band and atomic effects on equal footing by self-consistently solving an impurity problem within a larger bath. DMFT has been very successful at mapping out multi-band phase diagrams at finite temperatures.\cite{Gorelov_PRL_2010,Han_Millis_PRL_2018,Han_Millis_PRB_2016,Dang_Millis_PRB_2015,Dang_PRL_2015,Werner_Andrew_PRL_2008,Inaba_PRB_2005} Nevertheless, DMFT is fundamentally limited by the accuracy and scaling of its impurity model solver. Some DMFT studies rely upon exact diagonalization (ED) to solve their impurity models, yet the computational cost of ED grows exponentially with the number of bands involved, thwarting its application to many-band models. Other DMFT algorithms employ continuous-time quantum Monte Carlo (CTQMC)\cite{Rubtsov_PRB_2005} to solve their impurity models. CTQMC can solve larger impurity models than ED, but in certain parameter regimes it is still hampered by the sign problem, an exponential decrease in the signal-to-noise ratio observed in stochastic simulations,\cite{Loh_PRB_1990} and low-temperature calculations remain difficult.\cite{Gull_RMP_2011} A method that can accurately simulate larger system sizes at lower temperatures is thus needed.
One suite of techniques particularly well-suited for studying the large state spaces inherent to multi-band models are quantum Monte Carlo (QMC) techniques.\cite{Foulkes_RMP_2001,Motta_Wiley_2018} Both finite temperature QMC methods, including CTQMC\cite{Gull_RMP_2011} and Hirsch-Fye QMC\cite{Hirsch_PRL_1986} algorithms that have been employed as impurity solvers within DMFT, and ground state\cite{Motome_Imada_JPSJ_1997,Motome_JPSJ_1998} QMC algorithms have been developed and applied to the multi-band Hubbard model. Nonetheless, the Hund's terms of the HK Hamiltonian have posed challenges for all of these methods. This is because Hund's terms are not readily expressed as products of density operators and are therefore not readily amenable to standard QMC transformations. Straightforward decoupling of the exchange and pair hopping terms leads to a severe sign problem.\cite{Held_Vollhardt_EPJB_1998} Attempts have therefore been made to simplify the Hund's contribution to the Hamiltonian to make it more palatable to QMC methods by constraining its direction to the z-axis,\cite{Held_Vollhardt_EPJB_1998,Han_PRB_1998} but such treatments sometimes fail to properly capture the model's expected physics. Several Hund's-specific transformations have been proposed, including a discrete transformation by Aoki \cite{Sakai_Aoki_PRB_2004,Sakai_Aoki_PRB_2006} and a continuous transformation by Imada. \cite{Motome_Imada_JPSJ_1997,Motome_JPSJ_1998} Nevertheless, these transformations ultimately do not eliminate the sign problem and are limited to parameter regimes with only high signal to noise ratios. These parameter constraints obscure our fundamental understanding of multi-band physics.
In this paper, we present an Auxiliary Field Quantum Monte Carlo (AFQMC) framework especially suited for the study of ground state multi-band Hubbard models and demonstrate its accuracy over a range of realistic parameters using different signal-preserving approximations and trial wave functions. Key to our approach is the strategic use of two forms of both the continuous and discrete Hubbard-Stratonovich (HS) Transformations to decouple the Hund's term: a charge decomposition for negative values of the Hund's coupling parameter, and a spin decomposition for positive values of the Hund's coupling parameter. We also employ an unconventional form of importance sampling in which we shift propagators instead of auxiliary fields so as to enable importance sampling of discrete transformed propagators. Unlike previous works, we furthermore utilize flexible Generalized Hartree-Fock (GHF) trial wave functions combined with the constrained path and phaseless approximations to tame the sign and phase problems, respectively. Altogether, we find that these improvements yield promising results for a variety of HK model benchmarks. Although the algorithm presented is designed for the ground state, it can easily be adapted for use in finite temperature methods.\cite{Liu_JCTC_2018,Zhang_PRL_1999} Our algorithm therefore paves the way to the high accuracy modeling of the low temperature physics of a wide range of multi-band models and materials over a dramatically larger portion of the phase diagram.
The remainder of the paper is organized as follows. In Section \ref{method}, we outline the HK model, summarize the key features of the AFQMC method, and describe how the conventional AFQMC technique may be modified to best accommodate the HK Hamiltonian. In Section \ref{results}, we then present benchmarks of our method's performance within different parameter regimes, using different trial wave functions, and employing different approximations on two- and three-band HK models for which ED results may be obtained. Towards the end of this section, we also demonstrate the accuracy with which our techniques can predict the charge gaps and magnetic ordering of two-dimensional lattice models far beyond the reach of most other techniques. We conclude with a discussion of the broader implications of this work and future directions in Section \ref{conclusions}.
\section{Methods}
\label{method}
\subsection{Hubbard Kanamori Model Hamiltonian}
The HK model is a multi-band version of the Hubbard model designed to account for the competition between the spin and band degrees of freedom observed in the physics of $d$- and $f$-electron materials.\cite{Kanamori_PTP_1963,Georges_AnnRev_2013} In order to accomplish this, the model includes not only standard Hubbard on-site density-density interactions, but also inter-band density, exchange, and pair hopping terms. The full HK Hamiltonian, written as generally as possible, reads
\begin{equation}
\hat{H} \equiv \hat{H}_{1} + \hat{H}_{2} \equiv \hat{H}_{1} + \hat{H}_{U} + \hat{H}_{J},
\label{Hamiltonian}
\end{equation}
where
\begin{equation}
\hat{H}_{1} = \sum_{i m \sigma}\sum_{j m^\prime \sigma^\prime} t_{im,jm^\prime}^{\sigma\sigma^\prime}\hat{c}_{im\sigma}^{\dagger}\hat{c}_{jm^\prime \sigma^\prime},
\label{HOne_Term}
\end{equation}
\begin{eqnarray}
\hat{H}_{U} &=&
\sum_{i,m} U_{im} \hat{n}_{im\uparrow}\hat{n}_{im\downarrow} \nonumber + \sum_{i, m \neq m^\prime} U^{\prime}_{imm'} \hat{n}_{im \uparrow} \hat{n}_{im^\prime \downarrow } \nonumber \\
&+& \sum_{i,m<m^\prime, \sigma } (U^{\prime}_{imm'} - J_{imm'}) \hat{n}_{im \sigma}\hat{n}_{im^\prime \sigma},
\label{HU_Term}
\end{eqnarray}
and
\begin{equation}
\begin{split}
\hat{H}_{J}
= \sum_{i, m \neq m^\prime} J_{imm^\prime} &(\hat{c}_{im\uparrow}^{\dagger} \hat{c}_{im^\prime\downarrow}^{\dagger} \hat{c}_{im\downarrow} \hat{c}_{im^\prime\uparrow} \\
&+\hat{c}_{im\uparrow}^{\dagger}\hat{c}_{im\downarrow}^{\dagger}\hat{c}_{im^\prime\downarrow}\hat{c}_{im^\prime\uparrow} + H.c.).
\end{split}
\label{HJ_Term}
\end{equation}
In the above, $\hat{c}_{im\sigma}^{\dagger}$($\hat{c}_{im\sigma}$) creates (annihilates) an electron with spin $\sigma$ in band $m$ at site $i$. $\hat{n}$ denotes the number operator and $\hat{n}_{im\uparrow}$, for example, represents the number of spin-up electrons at site $i$ in band $m$. $\hat{H}_{1}$ contains all one-body contributions to the Hamiltonian, including terms parameterized by the constants $t_{im,jm^\prime}^{\sigma \sigma'}$ that describe spin-orbit coupling and the hopping of electrons in different bands between sites $i$ and $j$. $\hat{H}_{2}$ denotes the collection of all two-body operators. $\hat{H}_{U}$ contains all density-density interactions, including the intraband ($U$) and interband ($U^\prime$) Coulomb interactions, and the $z$- (or Ising) component of the Hund's coupling. In contrast, $\hat{H}_{J}$ contains all of the terms that cannot be written as density-density interactions, which consist of the $x$- and $y$- components (spin-exchange) of the Hund's coupling, ($\hat{c}_{im\uparrow}^{\dagger} \hat{c}_{im^\prime\downarrow}^{\dagger} \hat{c}_{im\downarrow} \hat{c}_{im^\prime\uparrow} + H.c. $),
as well as the pair-hopping interaction ($\hat{c}_{im\uparrow}^{\dagger}\hat{c}_{im\downarrow}^{\dagger}\hat{c}_{im^\prime\downarrow}\hat{c}_{im^\prime\uparrow} +H.c.$), in which two electrons in a given band transfer as a pair to other bands. $J$ denotes the Hund's coupling constant. Note that our formalism is general and allows for band- and site-dependent $U$, $U'$, and $J$ constants.
\subsection{Modified Hubbard Kanamori Model Hamiltonian}
In order to facilitate programming and to generalize this HK Hamiltonian into a form in which all coupling constants are independent, we map the Hamiltonian given by Equations \eqref{Hamiltonian}-\eqref{HJ_Term} onto an effective one-band model in which the lattice site and band indices are combined into a single superindex. If we let $i$ and $j$ denote superindices that combine both lattice site and band information, then
\begin{equation}
\begin{split}
\hat{H}
&= \hat{H}_{1} + \hat{H}_{2} \\
&= \sum_{ij,\sigma\sigma^\prime} t_{ij}^{\sigma\sigma^\prime} \hat{c}_{i\sigma}^{\dagger}\hat{c}_{j\sigma^\prime}\\
&+ \sum_{i} U^{i} \hat{n}_{i\uparrow} \hat{n}_{i\downarrow}\\
&+ \sum_{i<j} U_{1}^{ij} (\hat{n}_{i\uparrow} \hat{n}_{j\downarrow}+\hat{n}_{i\downarrow}\hat{n}_{j\uparrow})\\
&+ \sum_{i<j} U_{2}^{ij} (\hat{n}_{i\uparrow}\hat{n}_{j\uparrow}+\hat{n}_{i\downarrow}\hat{n}_{j\downarrow})\\
&+ \sum_{i<j} J^{ij} (\hat{c}_{i\uparrow}^{\dagger}\hat{c}_{j\downarrow}^{\dagger}\hat{c}_{i\downarrow}\hat{c}_{j\uparrow}
+\hat{c}_{i\uparrow}^{\dagger}\hat{c}_{i\downarrow}^{\dagger}\hat{c}_{j\downarrow}\hat{c}_{j\uparrow} \\
&+\hat{c}_{j\uparrow}^{\dagger}\hat{c}_{i\downarrow}^{\dagger}\hat{c}_{j\downarrow}\hat{c}_{i\uparrow}
+\hat{c}_{j\uparrow}^{\dagger}\hat{c}_{j\downarrow}^{\dagger}\hat{c}_{i\downarrow}\hat{c}_{i\uparrow}).
\end{split}
\label{Reformed_Hamiltonian}
\end{equation}
$t_{ij}^{\sigma\sigma^\prime}$ describes the hopping and spin-orbit coupling between different sites and bands. In keeping with the $\sum_{i, m<m'}$ and $\sum_{i, m\neq m'}$ summations in Equations \eqref{HU_Term} and \eqref{HJ_Term}, $\sum_{i<j}$ only sums over index combinations that reference different bands on the same site. In this modified HK model, the $U$ term describes density-density interactions only between electrons with opposite spins in the same band, the $U_{1}$ term describes interactions between electrons with opposite spins in different bands on the same site, the $U_{2}$ term describes interactions between electrons with parallel spins in different bands on the same site, and the $J$ term describes spin-exchange and pair-hopping interactions on the same site. Thus, in going from Equations \eqref{HU_Term} and \eqref{HJ_Term} to Equation \eqref{Reformed_Hamiltonian}, the original $U'$ term has become the $U_{1}$ term, the original $(U'-J)$ term has become the $U_{2}$ term, and the $J$ term has been re-expressed. Using Equation \eqref{Reformed_Hamiltonian}, we map a multi-band model onto a single-band model in which the number of effective lattice sites is the number of physical sites multiplied by the number of bands. Since no explicit band index remains in the model, we can treat any number of bands as long as the mapping is performed correctly.
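As an illustration, this bookkeeping can be sketched as follows (a minimal Python sketch for site-invariant couplings; the function name and data layout are our own illustrative choices):
\begin{verbatim}
def build_hk_couplings(L, M, U, U1, U2, J):
    """Map an L-site, M-band HK model onto L*M superindices
    i = site*M + band and list the nonzero on-site couplings."""
    idx = lambda site, band: site * M + band
    U_terms, U1_terms, U2_terms, J_terms = [], [], [], []
    for site in range(L):
        for m in range(M):
            U_terms.append((idx(site, m), U))        # intraband U
            for mp in range(m + 1, M):
                i, j = idx(site, m), idx(site, mp)
                U1_terms.append((i, j, U1))          # opposite spins
                U2_terms.append((i, j, U2))          # parallel spins
                J_terms.append((i, j, J))            # exchange + pair hopping
    return U_terms, U1_terms, U2_terms, J_terms
\end{verbatim}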
\subsection{Overview of AFQMC}
In the remainder of this work, AFQMC will be employed to obtain accurate numerical solutions of the HK model. AFQMC is a quantum many-body method that solves the ground state Schr\"{o}dinger equation by randomly sampling an overcomplete space of non-orthogonal Slater determinants\cite{Zhang_Gubernatis_PRB_1997,Zhang_Book_2003,Zhang_Book_2013} and has consistently been demonstrated to be among the most accurate of modern many-body methods for modeling the Hubbard model over a wide range of parameter regimes.\cite{LeBlanc_Gull_PRX_2015,Zheng_Science_2017,Chang_PRL_2010,Chang_PRB_2008,Shi_PRB_2013} At its heart, AFQMC is an imaginary-time projection quantum Monte Carlo technique that applies a projection operator, $e^{-\beta \hat{H}}$, to an initial wave function, $|\Psi_{I} \rangle$,
\begin{equation}
|\Psi_{0} \rangle \propto \lim_{\beta \rightarrow \infty} \left(e^{-\beta \hat{H}}\right) | \Psi_{I} \rangle.
\end{equation}
In the limit of infinite imaginary projection time $(\beta \rightarrow \infty)$, it converges to the ground state wave function, $|\Psi_{0}\rangle$, as long as the initial wave function is not orthogonal to the ground state wave function. Because the projection operator cannot be evaluated for large values of $\beta$, it is discretized into $n = \beta/\Delta\tau$ smaller time slices for which it can be evaluated
\begin{equation}
|\Psi_{0} \rangle \propto \lim_{n \rightarrow \infty} \left(e^{-\Delta \tau \hat{H}}\right) ^{n} | \Psi_{I} \rangle,
\end{equation}
and the projection is carried out iteratively as follows
\begin{equation}
|\Psi^{(n+1)} \rangle = e^{-\Delta \tau \hat{H}} | \Psi^{(n)} \rangle.
\label{iteration}
\end{equation}
For sufficiently small $\Delta \tau$, the projection operator may be factored into one- and two-body pieces via Suzuki-Trotter Factorization\cite{Suzuki_ProgTheorPhys_1976,Trotter_PAMS_1959}
\begin{equation}
e^{-\Delta\tau \hat{H}} \approx e^{-\Delta\tau \hat{H}_{1}/2} e^{-\Delta\tau \hat{H}_{2}} e^{-\Delta\tau \hat{H}_{1}/2}.
\label{Projection_Equation}
\end{equation}
The two-body propagator may be further decomposed into the four terms given in Equation \eqref{Reformed_Hamiltonian}
\begin{equation}
\begin{split}
e^{-\Delta\tau \hat{H}_{2}}
&\approx e^{-\Delta\tau \hat{H}_{U}}
e^{-\Delta\tau \hat{H}_{U_1}}e^{-\Delta\tau \hat{H}_{U_2}}e^{-\Delta\tau \hat{H}_{J}}\\
&=e^{-\Delta\tau \sum\limits_{i} U_i \hat{n}_{i\uparrow} \hat{n}_{i\downarrow}}
e^{-\Delta\tau \sum\limits_{i<j} U_1^{ij} (\hat{n}_{i\uparrow} \hat{n}_{j\downarrow}+\hat{n}_{i\downarrow} \hat{n}_{j\uparrow})}\\
&e^{-\Delta\tau \sum\limits_{i<j} U_2^{ij} (\hat{n}_{i\uparrow} \hat{n}_{j\uparrow}+\hat{n}_{i\downarrow} \hat{n}_{j\downarrow})}\\
&e^{-\Delta\tau \sum\limits_{i<j}J^{ij}(\hat{c}_{i\uparrow}^{\dagger}\hat{c}_{j\downarrow}^{\dagger}\hat{c}_{i\downarrow}\hat{c}_{j\uparrow}
+\hat{c}_{i\uparrow}^{\dagger}\hat{c}_{i\downarrow}^{\dagger}\hat{c}_{j\downarrow}\hat{c}_{j\uparrow}
+H.c.)}.
\end{split}
\label{two-body propagator}
\end{equation}
A time step extrapolation is needed to make sure the Trotter error is negligible in the Monte Carlo simulation.
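As an illustration, a single Suzuki-Trotter projection step of Equation \eqref{Projection_Equation} can be sketched as follows (a minimal Python sketch; here \texttt{H1} denotes the one-body Hamiltonian matrix in the single-particle basis, and \texttt{apply\_two\_body} stands in for the HS-sampled one-body propagators constructed in Section \ref{HSTransform}):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def trotter_step(psi, H1, apply_two_body, dtau):
    """One Suzuki-Trotter projection step:
    exp(-dtau*H1/2) exp(-dtau*H2) exp(-dtau*H1/2) |psi>.
    psi is an (N_sites x N_particles) Slater-determinant matrix."""
    half_kinetic = expm(-0.5 * dtau * H1)   # in practice precomputed once
    psi = half_kinetic @ psi
    psi = apply_two_body(psi)               # HS-sampled one-body propagators
    return half_kinetic @ psi
\end{verbatim}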
\subsection{Hubbard-Stratonovich Transformation of the Modified Hubbard Kanamori Hamiltonian \label{HSTransform}}
According to Thouless's theorem,\cite{Thouless_NP_1960} applying the exponential of a one-body operator to a determinant yields another determinant, which reduces the projection of a one-body propagator onto the wave function to standard matrix multiplication. Nevertheless, no such theorem applies to exponentials of two-body operators, which necessitates re-expressing these operators as integrals over one-body operators using the so-called Hubbard-Stratonovich transformation.\cite{Hubbard_PRL_1959}
In order to transform the two-body propagator given by Equation \eqref{two-body propagator}, both discrete\cite{Hirsch_PRB_1983,Gubernatis_Werner} and continuous\cite{Buendia_PRB_1986} HS transformations need to be performed. The $U$, $U_{1}$, and $U_{2}$ terms are products of density operators, much like the conventional Hubbard $U$ term, and may therefore be decomposed using discrete transformations. For $\alpha < 0$, where $\alpha$ may denote $U$, $U_{1}$, or $U_{2}$, it is usually better to use the discrete charge decomposition
\begin{equation}
e^{-\Delta\tau\alpha \hat{n}_{1}\hat{n}_{2}}=e^{-\Delta\tau\alpha (\hat{n}_{1}+\hat{n}_{2}-1)/2}\sum_{x=\pm{1}}\frac{1}{2}e^{\gamma x (\hat{n}_{1}+ \hat{n}_{2}-1)},
\label{density_charge}
\end{equation}
where
$\cosh(\gamma)=e^{-\Delta\tau\alpha/2}$, while for $\alpha > 0 $, it is usually better to use the spin decomposition
\begin{equation}
e^{-\Delta\tau\alpha \hat{n}_{1}\hat{n}_{2}}=e^{-\Delta\tau\alpha (\hat{n}_{1}+\hat{n}_{2})/2}\sum_{x=\pm{1}}\frac{1}{2}e^{\gamma x (\hat{n}_{1}-\hat{n}_{2})},
\label{density_spin}
\end{equation}
where $\cosh(\gamma)=e^{\Delta\tau\alpha/2}$. In both Equations \eqref{density_charge} and \eqref{density_spin}, $x$ represents the namesake auxiliary field that may assume the discrete values of $+1$ or $-1$. For the subsequent discussion, note that the charge decomposition is so named because it produces a one-body propagator involving the sum of $\hat{n}_{1}+\hat{n}_{2}$, which would be equivalent to the charge on a site if $1$ represented an up and $2$ a down spin on that site. Along similar lines, the spin decomposition is so named because it involves the difference between $\hat{n}_{1}$ and $\hat{n}_{2}$, which would represent the spin on a site under the same assumptions.
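For concreteness, the two discrete decompositions of Equations \eqref{density_charge} and \eqref{density_spin} can be sketched as follows (a minimal Python illustration, not our production implementation; it returns the field-dependent one-body exponents and the accompanying scalar weight):
\begin{verbatim}
import numpy as np

def discrete_hs_factors(alpha, dtau, x):
    """Exponents a1, a2 and scalar prefactor w such that, summed over
    x = +/-1, exp(-dtau*alpha*n1*n2) = sum_x w*exp(a1*n1)*exp(a2*n2)."""
    if alpha < 0.0:
        # charge decomposition: cosh(gamma) = exp(-dtau*alpha/2)
        gamma = np.arccosh(np.exp(-dtau * alpha / 2.0))
        a1 = a2 = -dtau * alpha / 2.0 + gamma * x
        w = 0.5 * np.exp(dtau * alpha / 2.0 - gamma * x)
    else:
        # spin decomposition: cosh(gamma) = exp(+dtau*alpha/2)
        gamma = np.arccosh(np.exp(dtau * alpha / 2.0))
        a1 = -dtau * alpha / 2.0 + gamma * x
        a2 = -dtau * alpha / 2.0 - gamma * x
        w = 0.5
    return a1, a2, w
\end{verbatim}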
Because $\hat{H}_{J}$ contains terms that are not simple products of density operators, decomposing it is a much more challenging task. Past attempts have either neglected or simplified $\hat{H}_{J}$.\cite{Held_Vollhardt_EPJB_1998,Han_PRB_1998} Several techniques have employed exact decompositions,\cite{Motome_Imada_JPSJ_1997,Sakai_Aoki_PRB_2006,Sakai_Aoki_PRB_2004} but all such decompositions are accompanied by a sign problem that thwarts explorations of wide swaths of the phase diagram. Unlike these past attempts, in the following, we define a unique decomposition that can be employed in both continuous and discrete transformations, and we accompany it with importance sampling, which mitigates the sign and phase problems, and with the constrained path and phaseless approximations, which eliminate them. As part of our decomposition of $e^{-\Delta \tau \hat{H}_{J}}$, we first re-express $\hat{H}_{J}$ in terms of squares of one-body operators. Let \begin{equation}
\hat{\rho}_{ij} \equiv \sum_{\sigma}(\hat{c}_{i\sigma}^{\dagger}\hat{c}_{j\sigma} + \hat{c}_{j\sigma}^{\dagger}\hat{c}_{i\sigma}).
\end{equation}
Then,
\begin{equation}
\hat{\rho}_{ij}^{2} = \sum_{\sigma\sigma^\prime}(\hat{c}_{i\sigma}^{\dagger}\hat{c}_{j\sigma} + \hat{c}_{j\sigma}^{\dagger}\hat{c}_{i\sigma}) (\hat{c}_{i\sigma^\prime}^{\dagger}\hat{c}_{j\sigma^\prime} + \hat{c}_{j\sigma^\prime}^{\dagger}\hat{c}_{i\sigma^\prime}),
\end{equation}
and $\hat{H}_{J}$ may be re-expressed as (see the Supplemental Materials for more details)
\begin{equation}
\begin{split}
\hat{H}_{J}
&=\sum_{i<j} J^{ij}(\hat{c}_{i\uparrow}^{\dagger}\hat{c}_{j\downarrow}^{\dagger}\hat{c}_{i\downarrow}\hat{c}_{j\uparrow}+\hat{c}_{i\uparrow}^{\dagger}\hat{c}_{i\downarrow}^{\dagger}\hat{c}_{j\downarrow}\hat{c}_{j\uparrow}+ H.c.) \\
&=\sum_{i<j} \frac{J^{ij}}{2} [\hat{\rho}_{ij}^{2} - \sum_{\sigma} (\hat{n}_{i\sigma}+\hat{n}_{j\sigma} - \hat{n}_{i\sigma}\hat{n}_{j\sigma}-\hat{n}_{j\sigma}\hat{n}_{i\sigma})] \\
&=\sum_{i<j} \frac{J^{ij}}{2}\hat{\rho}_{ij}^{2} - \sum_{i<j, \sigma} \frac{J^{ij}}{2} (\hat{n}_{i\sigma}+\hat{n}_{j\sigma}) +\sum_{i<j, \sigma} J^{ij} \hat{n}_{i\sigma}\hat{n}_{j\sigma}.
\end{split}
\label{transformation_1}
\end{equation}
The second term of Equation \eqref{transformation_1} consists of one-body operators and can be combined with the other one-body operators into $\hat{H}_{1}$. The third term consists of a product of density operators and can therefore be transformed according to either Equations \eqref{density_charge} or \eqref{density_spin}. The first term, however, consists of a square that cannot be resolved into products of density operators. In order to decouple this two-body term, a continuous HS transformation must be employed. In general, the continuous HS transformation may be written as
\begin{equation}
e^{-\Delta \tau \hat{A}^{2}/2} = \int dx \frac{1}{\sqrt{2\pi}} e^{-x^{2}/2} e^{x\sqrt{-\Delta \tau}\hat{A}}
\label{continuous_HS_transformation},
\end{equation}
where $\hat{A}$ represents any one-body operator and $x$ denotes an auxiliary field, as before. Letting $\hat{A} \equiv \hat{\rho}_{ij}$, it follows that the most obvious way to transform the exponential formed from the first term of Equation \eqref{transformation_1} is using the charge decomposition
\begin{equation}
\begin{split}
& e^{-\Delta\tau \sum\limits_{i<j}
\frac{J^{ij} }{2}[\sum\limits_{\sigma}(\hat{c}_{i\sigma}^{\dagger}\hat{c}_{j\sigma}+\hat{c}_{j\sigma}^{\dagger}\hat{c}_{i\sigma})]^{2}} \\
&=
\prod_{i<j}\int dx_{ij} \frac{1}{\sqrt{2\pi}} e^{- x_{ij}^{2}/2} e^{x_{ij}\sqrt{-\Delta\tau J^{ij}} [\sum\limits_{\sigma}(\hat{c}_{i\sigma}^{\dagger}\hat{c}_{j\sigma}+\hat{c}_{j\sigma}^{\dagger}\hat{c}_{i\sigma})]}.
\end{split}
\label{transformation_charge}
\end{equation}
As long as $J^{ij}<0$ for all $i,j$, all of the propagators produced by this transformation will be real, as is desirable within AFQMC simulations. However, if any of the $J^{ij}$ are greater than 0, $\sqrt{-\Delta\tau J^{ij}}$ will be complex, resulting in a complex propagator that immediately introduces a complex phase into the simulation. To keep the operators real in such cases, we take a cue from the discrete case and define a continuous spin decomposition that involves the difference between spin-up and spin-down operators. Let
\begin{equation}
\hat{\rho}_{ij} = \sum_{\sigma} \delta _{\sigma}(\hat{c}_{i\sigma}^{\dagger}\hat{c}_{j\sigma} + \hat{c}_{j\sigma}^{\dagger}\hat{c}_{i\sigma}),
\label{Rho_Equation}
\end{equation}
where $\delta _{\uparrow}=1$ and $\delta _{\downarrow}=-1$, then (see the Supplemental Materials for further details)
\begin{equation}
\hat{\rho}_{ij}^{2} = \sum_{\sigma\sigma^\prime}
\delta _{\sigma}\delta _{\sigma^\prime}(\hat{c}_{i\sigma}^{\dagger}\hat{c}_{j\sigma} + \hat{c}_{j\sigma}^{\dagger}\hat{c}_{i\sigma}) (\hat{c}_{i\sigma^\prime}^{\dagger}\hat{c}_{j\sigma^\prime} + \hat{c}_{j\sigma^\prime}^{\dagger}\hat{c}_{i\sigma^\prime}).
\label{Rho_Equation_Squared}
\end{equation}
Using this to re-express $\hat{H}_{J}$, we have
\begin{equation}
\begin{split}
\hat{H}_{J}
&=\sum_{i<j} J^{ij}(\hat{c}_{i\uparrow}^{\dagger}\hat{c}_{j\downarrow}^{\dagger}\hat{c}_{i\downarrow}\hat{c}_{j\uparrow}+\hat{c}_{i\uparrow}^{\dagger}\hat{c}_{i\downarrow}^{\dagger}\hat{c}_{j\downarrow}\hat{c}_{j\uparrow}+ H .c.) \\
&=\sum_{i<j} - \frac{J^{ij}}{2} [\hat{\rho}_{ij}^{2} - \sum_{\sigma} (\hat{n}_{i\sigma}+\hat{n}_{j\sigma} - \hat{n}_{i\sigma}\hat{n}_{j\sigma}-\hat{n}_{j\sigma}\hat{n}_{i\sigma})] \\
&=\sum_{i<j} -\frac{J^{ij}}{2}\hat{\rho}_{ij}^{2} +\sum_{i<j, \sigma} \frac{J^{ij}}{2} (\hat{n}_{i\sigma}+\hat{n}_{j\sigma}) -\sum_{i<j, \sigma} J^{ij} \hat{n}_{i\sigma}\hat{n}_{j\sigma}.
\end{split}
\label{transformation_3}
\end{equation}
Employing this form for the decomposition, the exponential that stems from the first term of Equation \eqref{transformation_3} may now be transformed to yield
\begin{equation}
\begin{split}
& e^{\Delta\tau \sum\limits_{i<j}
\frac{J^{ij} }{2}[\sum\limits_{\sigma}\delta _{\sigma}(\hat{c}_{i\sigma}^{\dagger}\hat{c}_{j\sigma}+\hat{c}_{j\sigma}^{\dagger}\hat{c}_{i\sigma})]^{2}} \\
&=
\prod_{i<j}\int dx_{ij} \frac{1}{\sqrt{2\pi}} e^{- x_{ij}^{2}/2} e^{x_{ij}\sqrt{\Delta\tau J^{ij}} [\sum\limits_{\sigma}\delta _{\sigma}(\hat{c}_{i\sigma}^{\dagger}\hat{c}_{j\sigma}+\hat{c}_{j\sigma}^{\dagger}\hat{c}_{i\sigma})]},
\end{split}
\label{transformation_spin}
\end{equation}
which is real for $J^{ij} > 0$. Using the charge decomposition (Equation \eqref{transformation_charge}) when $J^{ij}<0$ and the spin decomposition (Equation \eqref{transformation_spin}) when $J^{ij}>0$ thus completely eliminates complex propagators, easing simulation. In Section \ref{two-band}, we compare the accuracy obtained with this mixed decomposition approach to that obtained by relying exclusively upon the complex charge decomposition.
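This sign-dependent choice can be summarized compactly (a minimal Python sketch; the returned coefficient multiplies the one-body operator $\hat{\rho}_{ij}$ in the sampled exponent, and the function name is our own):
\begin{verbatim}
import numpy as np

def hund_rho_coefficient(J_ij, dtau):
    """Real coefficient c in the sampled exponent exp(x*c*rho_ij):
    charge form for J<0, spin form for J>0, so c is always real."""
    if J_ij < 0.0:
        return np.sqrt(-dtau * J_ij)   # charge decomposition
    return np.sqrt(dtau * J_ij)        # spin decomposition
\end{verbatim}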
Inserting the HS transformations defined by Equations \eqref{density_charge}, \eqref{density_spin}, \eqref{transformation_charge}, and \eqref{transformation_spin} into Equations \eqref{Projection_Equation} and \eqref{two-body propagator} and combining terms, one arrives at the final AFQMC expression for the projection operator
\begin{equation}
e^{-\Delta \tau \hat{H}} = \int d{\bf x} p({\bf x}) \hat{B}({\bf x}),
\label{effective_propagation}
\end{equation}
where ${\bf x} = \{x_1, x_2, \ldots, x_{N_{F}}\}$ denotes the set of $N_{F}$ auxiliary fields sampled at a given time slice, $\hat{B}({\bf x})$ represents the amalgamation of all one-body propagators, and $p({\bf x})$ is a combination of all scalar functions of the fields. Example expressions for $\hat{B}({\bf x})$ and $p({\bf x})$ are given in the Supplemental Materials. As is clear from Equation \eqref{effective_propagation}, the series of HS transformations described above ultimately maps the original two-body propagator into a weighted integral over one-body propagators that are functions of external auxiliary fields.
\subsection{Sampling in AFQMC}
\subsubsection{The Sampling Process}
One of the most computationally efficient ways of evaluating many dimensional integrals such as that given by Equation \eqref{effective_propagation} is to use Monte Carlo sampling techniques. As described in more detail in previous publications,\cite{Hirsch_PRB_1985, Zhang_Gubernatis_PRB_1997,Zhang_Book_2003,Zhang_Book_2013} if $|\Psi_{I}\rangle$ is represented by a single Slater determinant, after each application of the projection operator, a new Slater determinant will be produced. Thus, if $k$ instances (so-called ``walkers'') are initialized to $|\Psi_{I}\rangle$ and the projection operation given by Equation \eqref{effective_propagation} is applied to each of them by independently sampling sets of fields, then a random walk through the space of non-orthogonal determinants is realized in which the overall wave function at time slice $n$, $|\Psi^{(n)}\rangle$, is represented by an ensemble of $k$ wave functions $|\psi_{k}^{(n)}\rangle$ with weights $w_{k}^{(n)}$
\begin{equation}
| \Psi^{(n)}\rangle = \sum_{k} w_k^{(n)} | \psi_k^{(n)} \rangle .
\label{wave function}
\end{equation}
Here, the $w_{k}^{(n)}$ consist of the products of the scalar factors accumulated over all time slices by walker $k$ and may be complex numbers.
Ground state observables at each time slice, such as the energies reported below, may then be computed by evaluating the mixed estimator\cite{Foulkes_RMP_2001} over the ensemble
\begin{eqnarray}
\langle \hat{A} \rangle_{mix} &=& \frac{\langle \Psi_{T} | \hat{A} | \Psi^{(n)} \rangle}{\langle \Psi_{T} | \Psi^{(n)} \rangle} \nonumber \\
&=& \frac{ \sum_{k} w_{k}^{(n)} \langle \Psi_{T} | \hat{A} | \psi_{k}^{(n)} \rangle}{ \sum_{k} w_{k}^{(n)} \langle \Psi_{T} | \psi_{k}^{(n)} \rangle} ,
\label{Mixed}
\end{eqnarray}
where $|\Psi_{T} \rangle$ denotes a trial wave function that approximates the true ground state wave function. To facilitate the evaluation of the mixed estimator, it is common to introduce the local energy
\begin{equation}
E_{L}[\Psi_{T}, \Phi] \equiv \frac{\langle \Psi_{T}| \hat{H} | \Phi \rangle}{\langle \Psi_{T} | \Phi \rangle},
\end{equation}
such that Equation \eqref{Mixed} may be simplified to
\begin{equation}
\langle \hat{A} \rangle_{mix} = \frac{ \sum_{k} w_{k}^{(n)} \langle \Psi_{T} | \psi_{k}^{(n)} \rangle E_{L}[\Psi_{T}, \psi_{k}^{(n)}]}{ \sum_{k} w_{k}^{(n)} \langle \Psi_{T} | \psi_{k}^{(n)} \rangle}.
\label{Mixed_Again}
\end{equation}
After a sufficiently large number of time slices, such that $|\Psi^{(n)}\rangle$ approaches the ground state, final estimates of $\langle \hat{A}\rangle$ may be obtained by averaging the expectation values over time slices.
A population control procedure \cite{Calandra_Sorella_PRB_1998} is needed during the random walk. During this procedure, walkers with larger weights are replicated and those with smaller weights are eliminated probabilistically. The weight used in population control is
\begin{equation}
W^{(n)}_{k} = w_{k}^{(n)} \langle \Psi_{T} | \psi_{k}^{(n)} \rangle .
\label{weight}
\end{equation}
When there is a sign or phase problem, $ W^{(n)}_{k} $ may become negative or complex. As described in Sections \ref{cpa} and \ref{pha}, $ W^{(n)}_{k} $ is always positive or zero if the constrained path or phaseless approximations are employed.
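A minimal sketch of such a population control step is given below (our own illustration, assuming systematic comb resampling, one of several standard choices, and walkers stored as NumPy matrices; the cited procedure\cite{Calandra_Sorella_PRB_1998} may differ in details):
\begin{verbatim}
import numpy as np

def population_control(walkers, weights, rng):
    """Resample walkers in proportion to their weights: heavy walkers
    are replicated, light walkers eliminated probabilistically."""
    n = len(walkers)
    w = np.asarray(weights, dtype=float)
    cumulative = np.cumsum(w)
    total = cumulative[-1]
    # one uniform offset, then n equally spaced "teeth"
    positions = (rng.random() + np.arange(n)) * total / n
    indices = np.searchsorted(cumulative, positions)
    new_walkers = [walkers[i].copy() for i in indices]
    new_weights = np.full(n, total / n)   # weights reset to the mean
    return new_walkers, new_weights
\end{verbatim}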
\subsubsection{The Sign and Phase Problems}
Unfortunately, the ``free'' projection process just described is typically beset by either the sign\cite{Loh_PRB_1990,Ceperley1991} or phase problems.\cite{Zhang_Krakauer_PRL_2003} These problems fundamentally stem from the fact that observables computed using a single Slater determinant, $|\Psi\rangle$, remain invariant to arbitrary rotations, $e^{i\theta} |\Psi \rangle$, of that determinant, where $\theta$ is a phase angle. Consequently, during the course of an AFQMC simulation involving complex propagators, walkers may accumulate infinitely many possible phases (as there are infinitely many possible phase angles, $\theta \in [0, 2\pi)$), resulting in infinitely many possible determinants. Since these phases are directly multiplied into the walker weights of Equations \eqref{Mixed} and \eqref{Mixed_Again}, after many iterations, the walker weights end up populating the entire complex plane and many of the terms summed to compute weighted averages of observables cancel one another out. This cancellation leads to an exponential decline in observable signal to noise ratios that manifests as infinite variances\cite{Shi_PRB_2013} called the phase problem. If transformations that preclude propagators from becoming complex are employed as described above, positive and negative versions of each determinant may still be generated, resulting in a somewhat less pernicious cancellation of positive and negative weights termed the sign problem. If left unchecked, the sign and phase problems render obtaining meaningful observable averages nearly impossible, thwarting AFQMC simulations. We therefore mitigate these problems using a combination of background subtraction, importance sampling, and either the constrained path (for the sign problem) or phaseless (for the phase problem) approximations.
\subsubsection{Background Subtraction}
One of the simplest ways of reducing variances within AFQMC is via background subtraction.\cite{Purwanto_PRA_2005} As part of background subtraction, the two-body portion of a Hamiltonian is rewritten so that a mean field average is subtracted from each one-body operator. Thus, if the original two-body operator may be written as a square such that $\hat{V} = -\frac{1}{2} \sum_{i} \hat{v}_{i}^{2}$ to make it amenable to a HS Transformation, as part of background subtraction, it would be re-expressed as
\begin{equation}
\hat{V} = -\frac{1}{2} \sum_{i} \left(\hat{v}_{i} - \langle \hat{v}_{i} \rangle \right)^{2} - \sum_{i} \hat{v}_{i} \langle \hat{v}_{i} \rangle + \frac{1}{2} \sum_{i} \langle \hat{v}_{i} \rangle^{2},
\end{equation}
where $\langle \hat{v}_{i} \rangle$ denotes the mean field average of the operator $\hat{v}_{i}$ (see the Supplemental Materials for more details on how this mean field average is obtained). Because the modified $\hat{v}_{i} - \langle \hat{v}_{i} \rangle$ operator will be smaller in magnitude than the bare $\hat{v}_{i}$ operator, background subtraction reduces the variance involved in AFQMC simulations. In this work, we perform background subtraction on the only term in the Hamiltonian that is not a product of on-site densities, the $\frac{J^{ij}}{2}\hat{\rho}_{ij}^{2}$ term of Equation \eqref{transformation_1} or the $-\frac{J^{ij}}{2}\hat{\rho}_{ij}^{2}$ term of Equation \eqref{transformation_3}, yielding
\begin{eqnarray}
\sum_{i<j} \frac{J^{ij}}{2} \hat{\rho}_{ij}^{2}
&=& \sum_{i<j} \frac{J^{ij}}{2} (\hat{\rho}_{ij}-\langle \hat{\rho}_{ij} \rangle)^{2}
-\sum_{i<j} \frac{J^{ij}}{2} \langle \hat{\rho}_{ij} \rangle^{2} \nonumber \\
&+& \sum_{i<j}J^{ij}\langle \hat{\rho}_{ij} \rangle \hat{\rho}_{ij}
\end{eqnarray}
and
\begin{eqnarray}
\sum_{i<j} -\frac{J^{ij}}{2} \hat{\rho}_{ij}^{2}
&=&\sum_{i<j} -\frac{J^{ij}}{2} (\hat{\rho}_{ij}-\langle \hat{\rho}_{ij} \rangle)^{2} +\sum_{i<j} \frac{J^{ij}}{2} \langle \hat{\rho}_{ij} \rangle^{2} \nonumber \\
&-& \sum_{i<j}J^{ij} \langle \hat{\rho}_{ij} \rangle \hat{\rho}_{ij},
\end{eqnarray}
respectively.
\subsubsection{Importance Sampling}
In order to further reduce the variance of walker weights and to make our simulations more amenable to the constrained path and phaseless approximations, we additionally perform importance sampling, which aims to shift the center of the distribution from which we sample our auxiliary fields so that the most important fields are sampled more frequently. The conventional way of performing importance sampling in AFQMC simulations is by introducing a force bias that shifts each sampled field by an amount dependent upon the operator being transformed and the current walker wave function.\cite{Purwanto_PRE_2004,Zhang_Krakauer_PRL_2003,Rom_Neuhauser_CPL_1997,Shi_Zhang_PRA_2015} Because we utilize a mixture of discrete and continuous transformations and force bias importance sampling is only applicable to continuous transformations, in this work, we employ a formally equivalent strategy in which we shift \emph{the propagators instead of the auxiliary fields}.
For continuous HS Transformations, this may be accomplished by shifting the operator $\hat{A}$ by $\langle \hat{A} \rangle$ in Equation \eqref{continuous_HS_transformation}
\begin{equation}
\begin{split}
e^{-\Delta \tau \hat{A}^{2}/2}
&= \int dx \frac{1}{\sqrt{2\pi}} e^{-x^{2}/2} e^{x\sqrt{-\Delta \tau}\hat{A}} \\
&= \int dx \frac{1}{\sqrt{2\pi}} e^{-x^{2}/2} e^{x\sqrt{-\Delta \tau}\langle \hat{A} \rangle} e^{x\sqrt{-\Delta \tau}(\hat{A}-\langle \hat{A} \rangle)},
\label{importance_sampling_continuous}
\end{split}
\end{equation}
where $\langle \hat{A} \rangle$ is the mixed estimator of $\hat{A}$
\begin{equation}
\langle \hat{A} \rangle
\equiv \frac{\langle \Psi_T | \hat{A} | \psi_k^{(n)} \rangle }{\langle \Psi_T | \psi_k^{(n)} \rangle }.
\end{equation}
If we define the dynamic force as $F \equiv \sqrt{-\Delta \tau}\langle \hat{A} \rangle$, then Equation \eqref{importance_sampling_continuous} may be re-expressed as
\begin{equation}
\begin{split}
e^{-\Delta \tau \hat{A}^{2}/2}
&= \int dx \frac{1}{\sqrt{2\pi}} e^{-x^{2}/2} e^{xF} e^{x\sqrt{-\Delta \tau}\hat{A}-xF} \\
&= \int dx \frac{1}{\sqrt{2\pi}} e^{-(x-F)^{2}/2} e^{\frac{1}{2}F^2}e^{x\sqrt{-\Delta \tau}\hat{A}-xF} \\
&= \int dx \frac{1}{\sqrt{2\pi}} e^{-(x-F)^{2}/2} e^{\frac{1}{2}F^2-xF}e^{x\sqrt{-\Delta \tau}\hat{A}}
\label{importance_sampling_continuous-2}.
\end{split}
\end{equation}
In order to realize this transformation, fields are sampled from the shifted Gaussian probability density function, $\frac{1}{\sqrt{2\pi}} e^{-(x-F)^{2}/2}$, and the propagator $e^{x\sqrt{-\Delta \tau} \hat{A}}$ is applied with weight $e^{\frac{1}{2}F^{2}-xF}$. The field distribution is now centered around the dynamic force, which can be shown to minimize the variance. If the dynamic force $F$ is complex, the auxiliary fields are given the same imaginary part so that $x-F$ is real; the probability density $\frac{1}{\sqrt{2\pi}} e^{-(x-F)^{2}/2}$ then remains a real Gaussian that can be sampled by Monte Carlo.
Shifting the propagator within a discrete transformation proceeds in exactly the same fashion. Comparing Equations \eqref{continuous_HS_transformation} and \eqref{density_spin}, the dynamic force needed to shift the propagator in Equation \eqref{density_spin}, for example, would be $F\equiv \gamma(\langle \hat{n}_{1} \rangle - \langle \hat{n}_{2} \rangle)$, resulting in the transformation
\begin{equation}
\begin{split}
e^{-\Delta\tau\alpha \hat{n}_{1}\hat{n}_{2}}
&=e^{-\Delta\tau\alpha (\hat{n}_{1}+\hat{n}_{2})/2}\sum_{x=\pm{1}}\frac{1}{2}e^{\gamma x (\hat{n}_{1}-\hat{n}_{2})} \\
&= e^{-\Delta\tau\alpha (\hat{n}_{1}+\hat{n}_{2})/2} \sum_{x=\pm{1}}\frac{1}{2} \left( \frac{e^{xF}}{W} \right) W e^{\gamma x (\hat{n}_{1}-\hat{n}_{2})-xF}. \\
\label{importance_sampling-discrete}
\end{split}
\end{equation}
As in the continuous case, in order to realize this transformation, fields are now sampled from a shifted probability density function, $e^{xF}/W$, where $W$ is the normalization factor, $W=e^{xF} + e^{-xF}$, and the propagator $e^{(-\Delta\tau\alpha/2 + \gamma x)\hat{n}_1}e^{(-\Delta\tau\alpha/2 - \gamma x)\hat{n}_2}$ is applied with weight $\frac{1}{2}We^{-xF}$. A shifted transformation may similarly be constructed for the discrete charge decomposition given by Equation \eqref{density_charge}. Propagators that include background subtraction may be shifted by simply replacing $\hat{A}$ with $\hat{A}-\langle \hat{A} \rangle$ in Equations \eqref{importance_sampling_continuous} and \eqref{importance_sampling_continuous-2} above (see the Supplemental Materials).
It can readily be proven that shifting auxiliary fields is equivalent to shifting propagators.\cite{Rom_Neuhauser_CPL_1997,Rom_JCP_1998,Shi_Zhang_PRA_2015} Shifting propagators therefore provides a convenient way of introducing importance sampling when discrete transformations are involved. Overall, the importance sampled propagation produces the same observable averages as free propagation, but it favors the sampling of determinants with larger overlaps with the trial wave function and suppresses the sampling of determinants with no overlap.
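For the discrete case, the shifted sampling of Equation \eqref{importance_sampling-discrete} amounts to the following (a minimal Python sketch assuming a real dynamic force $F$, as in the sign-problem-free setting; the function name is our own):
\begin{verbatim}
import numpy as np

def sample_shifted_discrete_field(F, rng):
    """Sample x = +/-1 from p(x) = exp(x*F)/W with W = exp(F)+exp(-F),
    and return the weight factor (W/2)*exp(-x*F) that accompanies
    the shifted propagator."""
    W = np.exp(F) + np.exp(-F)
    p_plus = np.exp(F) / W
    x = 1 if rng.random() < p_plus else -1
    weight_factor = 0.5 * W * np.exp(-x * F)
    return x, weight_factor
\end{verbatim}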
\subsubsection{Constrained Path Approximation}
\label{cpa}
In order to address the sign problem that may emerge when our propagators, $\hat{B}(\vec{x})$, are real, we employ the constrained path approximation. \cite{Zhang_Gubernatis_PRB_1997} Here, we impose this approximation by requiring that all walkers maintain a positive overlap with the trial wave function after each propagation step
\begin{equation}
w_k^{(n)}\langle \Psi_T | \psi_k^{(n)} \rangle >0.
\label{overlap}
\end{equation}
As in typical constrained path implementations, walkers with negative overlaps with the trial wave function are killed (their weights are set equal to zero), preventing them from being propagated further. This condition retains only walkers with positive overlaps, eliminating the sign problem. It can be shown that if the trial wave function is the exact ground state wave function, this condition is exact;\cite{Carlson_PRB_1999} however, since the exact ground state is typically unknown, constraining the propagation path in this way results in a small, but consequential, approximation.\cite{Chang_PRB_2016,Shi_PRB_2013}
\subsubsection{Phaseless Approximation}
\label{pha}
In cases in which our propagators are complex, instead of employing the constrained path approximation, we employ the more general phaseless approximation.\cite{Zhang_Krakauer_PRL_2003,Purwanto_PRA_2005} The phaseless approximation controls the phase problem by projecting complex walker weights onto the positive real axis according to the equation
\begin{equation}
\label{cosProjection}
W^{(n)}_{k} = |W^{(n)}_{k}| \times \max (0, \cos(\Delta \theta)),
\end{equation}
where $W^{(n)}_{k}$ is defined in Equation \eqref{weight} and $\Delta \theta$, the phase angle, is defined as
\begin{equation}
\begin{split}
&\Delta \theta =Arg \left[ \frac{\langle \Psi_T | \hat{B}(x) | \psi_k^{(n)} \rangle }{\langle \Psi_T | \psi_k^{(n)} \rangle } \right] \approx O(Im(xF)).
\end{split}
\end{equation}
The use of the cosine function to project also ensures that the density of the walkers will vanish at the origin. Because this cosine projection does not affect walkers with real weights, in practical implementations, we apply Equation \eqref{cosProjection} to realize both the constrained path and phaseless approximations.
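In code, a minimal sketch of this projection might read as follows (the overlap arguments are hypothetical names for the overlaps after and before the propagation step):
\begin{verbatim}
import numpy as np

def project_weight(weight, overlap_new, overlap_old):
    # Phaseless projection W <- |W| * max(0, cos(dtheta)), where
    # dtheta is the phase of the overlap ratio. A real negative
    # ratio gives cos(pi) = -1 and the walker is killed, so the
    # same routine realizes the constrained path approximation.
    dtheta = np.angle(overlap_new / overlap_old)
    return np.abs(weight) * max(0.0, np.cos(dtheta))
\end{verbatim}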
\subsection{Trial and Initial Wave Functions}
Although AFQMC can readily accommodate multi-determinant trial wave functions, we restrict ourselves to employing single determinant trial wave functions that satisfy certain symmetries\cite{Shi_PRB_2013} such as the free electron (FE), restricted Hartree-Fock (RHF), unrestricted Hartree-Fock (UHF), and generalized Hartree-Fock (GHF) wave functions. RHF wave functions preserve spin symmetry. While RHF and UHF wave functions separately conserve the numbers of spin-up and spin-down electrons, GHF wave functions only fix the total number of electrons. Details about how these wave functions are generated may be found in the Supplemental Materials.
As illustrated in what follows, because GHF wave functions do not impose any spin symmetries and are therefore the most flexible of these wave function ansatzes, they enable the fastest AFQMC wave function relaxation to the global energy minimum. Nevertheless, when the numbers of up and down electrons must be fixed, UHF/RHF wave functions are employed instead. Even though our formalism permits our initial wave functions to differ from our trial wave functions, we take our initial and trial wave functions to be the same, except where otherwise noted.
\section{Results and Discussion}
\label{results}
\subsection{Two-Band Hubbard Kanamori Model Benchmarks}
\label{two-band}
In order to test the accuracy of our theoretical framework, we began by benchmarking our method against ED results for the one-dimensional, two-band HK model on $5 \times 1$ and $6 \times 1$ lattices with periodic boundary conditions, which are small enough to diagonalize. For these benchmarks, we simplify the Hamiltonian given by Equations \eqref{Hamiltonian} and \eqref{HOne_Term} so that hopping can only occur between adjacent sites within the same bands and may be described by a single site- and spin-invariant constant $t$, such that
\begin{equation}
\hat{H}_{1}^{'} = -t \sum_{\langle ij \rangle,\sigma }\sum_{m=1}^{2} \hat{c}_{im\sigma}^{\dagger}\hat{c}_{jm\sigma}.
\end{equation}
We moreover assume that the parameters are site-invariant, such that U$^{i}$=U, U$_1^{ij}$ = U$_1$, U$_2^{ij}$ = U$_2$, and J$^{ij}$ = J.
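For concreteness, a minimal sketch of the one-body matrix of $\hat{H}_{1}^{'}$ for a single spin sector on an $L$-site periodic chain is given below (the band-blocked orbital ordering is an illustrative convention, not a requirement of our method):
\begin{verbatim}
import numpy as np

def hopping_matrix(L, t=1.0, n_bands=2):
    # -t hopping between adjacent sites within the same band,
    # with periodic boundary conditions; orbitals ordered band
    # by band, so orbital (m, i) has index m*L + i.
    H1 = np.zeros((n_bands * L, n_bands * L))
    for m in range(n_bands):
        for i in range(L):
            j = (i + 1) % L
            a, b = m * L + i, m * L + j
            H1[a, b] = H1[b, a] = -t
    return H1
\end{verbatim}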
\begin{table}[htbp]
\footnotesize
\caption{The ground state energy of the two-band, 6$\times$1 HK model with $N_{\uparrow}=N_{\downarrow}=6$ over a range of parameters using ED and AFQMC. All energies and parameters are reported in units of $t$.}
\label{table2}
\begin{ruledtabular}
\begin{tabular}{cccccc}
U & U$_{1}$ & U$_{2}$ & J & ED & AFQMC \\
\hline
2.0 & 1.5 & 1.0 & 0.5 & -3.773268 & -3.774(3) \\
2.0 & 1.5 & 1.0 & 1.0 & -4.234037 & -4.230(6) \\
2.0 & 1.5 & 3.0 & 0.5 & 0.758540 & 0.755(4) \\
3.0 & 5.0 & 1.0 & 0.5 & 2.460374 & 2.466(5) \\
6.0 & 1.5 & 1.0 & 0.5 & 1.496509 & 1.503(6) \\
\end{tabular}
\end{ruledtabular}
\end{table}
Table \ref{table2} presents our results for a $6\times 1$ HK model over a representative sampling of parameters at half filling. All of the calculations presented were initialized using 560 walkers and employed FE trial and initial wave functions, except for the $U$=3.0, $U_{1}$=5.0, $U_{2}$=1.0, $J$=0.5 case. In this case, it was found that an RHF trial wave function yielded a lower trial energy and manifested a different spin order (antiferromagnetic (AFM) order between two bands) than the FE solution. Thus, an RHF trial wave function was employed instead. This demonstrates that trial wave functions should first be analyzed to determine whether their global minima exhibit the correct order before using them to guide propagation within AFQMC. Unless otherwise noted, all of the results presented in this section were obtained using a charge decomposition for $J$ and the phaseless approximation to tame the related phase problem that emerges.
As is clear from the table, AFQMC results are within $0.01t$ or less of exact results, with the smallest discrepancy occurring for the $U=2.0$, $U_{1}=1.5$ case and the largest occurring for the $U=6.0$ case. In all of these cases, exact results are within two standard deviations of the Monte Carlo results, despite the use of the phaseless approximation.
To pinpoint AFQMC systematic bias, as well as to better understand which regions of the phase diagram are the most challenging for AFQMC, we independently scanned through each of the $U$, $U_{1}$, $U_{2}$, and $J$ parameters holding the others fixed for a $5 \times 1$ HK model. In Figures \ref{f1} and \ref{f2}, we present our scans over $U$ and $J$; figures of our $U_{1}$ and $U_{2}$ scans are presented in the Supplemental Materials.
\begin{center}
\begin{figure}[htbp]
\includegraphics[width=10.5cm]{two_band_U.pdf}
\captionsetup{font=footnotesize, justification=raggedright, singlelinecheck=false}
\caption{AFQMC ground state energy vs. the density-density parameter $U$ for the two-band, 5$\times$1 HK model using the charge decomposition and FE trial wave functions. Here, all of the other Hamiltonian parameters are held fixed at $t=1$, $U_1$=0, $U_2$=0, and $J=0$ with $N_{\uparrow}=N_{\downarrow}=6$. Relative errors, $\Delta E$, taken with respect to ED results are plotted in the inset for clarity.}
\label{f1}
\end{figure}
\end{center}
As shown in Figure \ref{f1}, although the magnitude of the error bars grows with $U$, the relative error remains within 0.1\% to 1\% throughout this range. Similar trends are observed for $U_{1}$ and $U_{2}$. This gives us reason to believe that our method can readily accommodate some of the even larger $U$ values used in studies of strongly correlated materials. Nevertheless, much larger relative errors are observed as $J$ is varied, as depicted in Figure \ref{f2}. This is consistent with previous work, which also implicates the $J$ terms as being most conducive to QMC errors.\cite{Held_Vollhardt_EPJB_1998} Fortunately, for most real materials, $J$ is usually a small fraction of $U$. For small $J$ values, the relative errors are observed to remain less than 1\% and are therefore controllable.
\begin{center}
\begin{figure}[htbp]
\includegraphics[width=8.5cm]{two_band_J.pdf}
\captionsetup{font=footnotesize, justification=raggedright, singlelinecheck=false}
\caption{AFQMC ground state energy vs. the Hund's coupling parameter $J$ for the two-band, 5$\times$1 HK model using the charge decomposition and FE/RHF trial wave functions (WF). Here, all of the other Hamiltonian parameters are held fixed at $t=1$, $U=0$, $U_1$=0, and $U_2$=0 with $N_{\uparrow}=N_{\downarrow}=6$. Relative errors, $\Delta E$, taken with respect to ED results are plotted in the inset for clarity.}
\label{f2}
\end{figure}
\end{center}
What may also be gleaned from Figure \ref{f2} is that the quality of the $J>1.5$ energies depends upon the type of trial wave function employed. While free propagation calculations yield results that are independent of the trial wave function, the quality of the constrained path and phaseless approximations fundamentally depend on the accuracy of the trial wave function employed. As depicted in Figure \ref{f2}, the relative errors in the energies produced by FE trial wave functions surpass 10\% and increase with increasing $J$; in contrast, the relative errors produced by RHF trial wave functions not only remain less than 10\%, but plateau as a function of $J$. As $J$ increases, the RHF electron density becomes non-uniform, yielding a lower variational energy than the FE wave function. Figure \ref{f2} thus demonstrates that AFQMC becomes more accurate as trial wave functions better describe the ground state. Note that we also tested UHF and GHF wave functions, which all converged to the same states as RHF wave functions.
\begin{figure}[htp]
\includegraphics[width=9cm]{two_band_decomp_J.pdf}
\captionsetup{font=footnotesize, justification=raggedright, singlelinecheck=false}
\caption{Comparison of phaseless and constrained path AFQMC energy errors as a function of $J$ for a two-band, 5$\times$1 HK model. Open circles denote parameters at which the constrained path approximation was employed, while closed circles denote parameters at which the phaseless approximation was employed. Here, we set N$_{\uparrow}$= N$_{\downarrow}$=6, $t=1$, $U=0$, $U_{1}=0$, and $U_{2}=0$. FE trial wave functions were used for both the initial and trial wave functions, and 560 walkers were employed in each calculation.}
\label{f3}
\end{figure}
The accuracy of AFQMC predictions is also influenced by the constrained path and phaseless approximations employed. In Figure \ref{f3}, we compare the errors produced by these approximations. As discussed in Section \ref{HSTransform}, for $J>0$, the spin decomposition will yield real propagators that we constrain using the constrained path approximation, while for $J<0$, the spin decomposition will yield complex propagators that we constrain using the phaseless approximation. The charge decomposition behaves in the opposite fashion with respect to $J$. As shown in Figure \ref{f3}, the constrained path approximation behaves significantly better than the phaseless approximation, which appreciably differs from the exact results for $|J|>1.5$. Indeed, the constrained path approximation nearly reproduces the exact results for $J<0$, only manifesting a slight deviation for larger positive values of $J$. These results attest to the fact that using the transformations we describe to prevent the phase problem from emerging is key to maintaining AFQMC accuracy. They also underscore that our method is capable of simulating -$J$ values, which have been unattainable in previous QMC simulations. We expect these trends in accuracy to generalize to models with more bands and higher dimensionality.
\subsection{Application to Three-Band Hubbard Kanamori Models}
\label{three_band}
In order to understand how our techniques generalize to models that approximate more realistic materials and their magnetic phase transitions, we constructed a three-band model with an adjustable band gap. As illustrated in Figure \ref{three_band_structure}, in this model, three bands are located at each site, one band of which is lower in energy by a `band gap' parameter, $\Delta$, than the other two degenerate bands. When $\Delta = 0$, all three bands are completely degenerate. Similar to the two-band model, the hopping occurs between adjacent sites within the same bands, with hopping constant $t_{ij}=1$. While the band gap would be fixed in any given material, creating a separate $\Delta$ parameter enables us to sample a range of band gaps and, by extension, to drive magnetic ordering transitions. We moreover assume that $U^{i}=U$ and $J^{ij}=J=0.15 U$ with $U_1^{ij}=U_1=U-2J$ and $U_2^{ij}=U_2=U-3J$, which are appropriate for the description of transition-metal oxides with a partially occupied $t_{2g}$ shell.\cite{sugano_tanabe_kamimura_1970} In the following discussion, we fix our filling such that an average of four electrons occupy the three bands at each lattice site.
\begin{center}
\begin{figure}[htbp]
\includegraphics[width=7.5cm]{three_band_structure.pdf}
\captionsetup{font=footnotesize, justification=raggedright, singlelinecheck=false}
\caption{Schematic of our three-band model on a 4$\times$4 lattice. At each site, there is one atom with three bands, one of which is lower in energy by $\Delta$ than the other two degenerate bands. The top right box illustrates a situation in which AFM order is present between adjacent lattice sites.}
\label{three_band_structure}
\end{figure}
\end{center}
\begin{center}
\begin{figure}[htbp]
\includegraphics[width=10.5cm]{three_band_mu.pdf}
\captionsetup{font=footnotesize, justification=raggedright, singlelinecheck=false}
\caption{AFQMC ground state energy as a function of the band gap magnitude, $\Delta$, for the three-band, 2$\times$2 HK model using the charge decomposition and GHF trial wave functions. Here, all of the other Hamiltonian parameters are held fixed at $t=1$, $U=6$, $U_1=U-2J$, $U_2=U-3J$, and $J=0.15U$ with $N_{\uparrow}=N_{\downarrow}=8$. Relative errors, $\Delta E$, taken with respect to ED results are plotted in the inset for clarity.}
\label{three_band_mu}
\end{figure}
\end{center}
As an initial step, we benchmarked our AFQMC method against ED results. Diverging from our previous two-band analysis, as part of our three-band benchmarks, we studied our model on two-dimensional lattices with periodic boundary conditions, only varying $\Delta$ and $U$ while keeping the other parameter relationships fixed in order to preserve realism. Our simulations were initialized with 560 walkers and GHF initial and trial wave functions for all of the benchmarks described below. The charge decomposition with the phaseless approximation was employed throughout this section.
In Figure \ref{three_band_mu}, we illustrate how the energy and relative errors change as $\Delta$ is varied from 0 to 1 with $U=6$ on a 2$\times$2 lattice. At fixed $U$, the relative error remains fairly stable and less than 0.1\% throughout this range. This may be anticipated since the band gap only modifies the magnitude of the one-body terms, which do not directly contribute to our method's stochastic errors, and does not change the phase of the model.
\begin{center}
\begin{figure}[htbp]
\includegraphics[width=10.5cm]{three_band_U.pdf}
\captionsetup{font=footnotesize, justification=raggedright, singlelinecheck=false}
\caption{AFQMC ground state energy vs. $U$ for the three-band, 2$\times$2 HK model using the charge decomposition and GHF trial wave functions. Here, all of the other Hamiltonian parameters are held fixed at $t=1$, $\Delta =0.8$, $U_1=U-2J$, $U_2=U-3J$, and $J=0.15U$ with $N_{\uparrow}=N_{\downarrow}=8$. Relative errors, $\Delta E$, taken with respect to ED results are plotted in the inset for clarity.}
\label{three_band_U}
\end{figure}
\end{center}
In Figure \ref{three_band_U}, instead of scanning $\Delta$, we scan $U$ with $\Delta = 0.8$. As shown in Figure \ref{three_band_U}, the relative errors are larger in this case, but still range from 0.1\% for $U<6$ to 1\% for $U>6$. Errors would be expected to grow in this manner as the system becomes more correlated. Overall, the magnitudes of these relative errors suggest that AFQMC's performance is promising.
The rationale for introducing the band gap $\Delta$ parameter is to enable tuning of the magnetic order of the model system. Intuitively, when the band gap is small, the three bands are nearly degenerate and the four electrons have the largest freedom to move among the bands. Such a situation would favor ferromagnetic (FM) order. However, when the band gap becomes sufficiently large, two electrons will populate the lower band, forcing the other two electrons to reside on the higher energy bands. Such a situation would favor AFM order.
This intuition was confirmed by comparing the AFQMC energies attained using trial wave functions with FM and AFM order, respectively (see Figure~\ref{three_band_phase}). Typically, GHF calculations converge to the lowest state with the same magnetic order as the initial state. Thus, in order to construct wave functions with FM order, a randomly initialized density matrix was supplied to the GHF self-consistent equations; to construct wave functions with AFM order, an AFM-ordered initial density matrix was supplied. Several independent GHF calculations were conducted for each system studied to guarantee that the final GHF wave functions produced attained their global minima. At large $\Delta$ ($\Delta \gtrsim 1.1$) values at which ferromagnetic order is disfavored, GHF calculations initialized with random density matrices often developed AFM order. In these situations, FM wave functions produced at smaller values of $\Delta$ were used as trial wave functions in ``FM'' AFQMC calculations performed at larger $\Delta$ values. Figure \ref{three_band_phase} depicts the energies of AFQMC simulations performed with AFM and FM trial wave functions, respectively, as a function of band gap. All of the AFQMC energies presented here are the lowest energies we can obtain at each $\Delta$. At smaller $\Delta$s, trial wave functions with FM order led to the lowest AFQMC energies, while at larger $\Delta$s, AFM trial wave functions did so. This confirms that our model undergoes a ferromagnetic to antiferromagnetic transition at roughly $\Delta = 1.1$. In contrast, Hartree-Fock theory predicts the transition at $\Delta = 0.5$, which is reasonable since Hartree-Fock theory tends to fall into the AFM sector. An illustration of the AFM order exhibited by our model is depicted in Figure~\ref{three_band_structure}.
\begin{center}
\begin{figure}[htbp]
\includegraphics[width=10.5cm]{three_band_phase_trans.pdf}
\captionsetup{font=footnotesize, justification=raggedright, singlelinecheck=false}
\caption{AFQMC ground state energy vs. band gap magnitude, $\Delta$, for the three-band, 4$\times$4 HK model using the charge decomposition. GHF trial wave functions with both FM order and AFM order are used. The QMC-predicted phase transition occurs at around $\Delta=1.1$; the Hartree-Fock-predicted transition point, $\Delta = 0.5$, is illustrated by the green dotted line. Here, all of the other Hamiltonian parameters are held fixed at $t=1$, $U=6$, $U_1=U-2J$, $U_2=U-3J$, and $J=0.15U$ with $N_{\uparrow}=N_{\downarrow}=32$.}
\label{three_band_phase}
\end{figure}
\end{center}
To further corroborate the phase transition we observe, we extrapolated the magnitude of the charge gap at $\Delta=0.2$ and $\Delta = 1.5$. To do so, we computed the ground state AFQMC energies of 4$\times$4, 6$\times$6, and 8$\times$8-site systems, with three bands filled with four electrons situated at each site. The charge gap may be determined by computing $E_{N-1}+E_{N+1}-2E_{N}$, where $N$ denotes the total number of electrons in the system. To determine the charge gap in the thermodynamic limit, we fit a $1/L$ form, where $L$ denotes the total number of lattice sites, to the energies and extrapolated to the infinite $L$ limit (see the Supplemental Materials for more details). The energies produced using FM initial trial wave functions were used to ascertain the $\Delta = 0.2$ charge gap, while those produced using AFM wave functions were employed to ascertain the $\Delta = 1.5$ charge gap. The charge gaps obtained are presented in Table \ref{charge_gap}. After extrapolations, the $\Delta=0.2$ charge gap converged to -0.006(47) and the $\Delta=1.5$ charge gap converged to 1.201(41). As one would expect antiferromagnetic, not ferromagnetic, order to be accompanied by a charge gap, these extrapolations support our previous conclusions.
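A minimal sketch of this extrapolation, using the $\Delta = 1.5$ gaps tabulated below and a simple unweighted least-squares fit (in practice the fit would account for the quoted error bars), is:
\begin{verbatim}
import numpy as np

L   = np.array([16.0, 36.0, 64.0])     # 4x4, 6x6, 8x8 lattices
gap = np.array([1.311, 1.268, 1.225])  # charge gaps at Delta = 1.5

slope, intercept = np.polyfit(1.0 / L, gap, 1)  # fit gap = a + b/L
print(intercept)  # ~1.20: the extrapolated L -> infinity charge gap
\end{verbatim}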
\begin{table}[htbp]
\footnotesize
\captionsetup{font=footnotesize, justification=raggedright, singlelinecheck=false}
\caption{The charge gaps of the three-band model at $\Delta = 0.2$ and $\Delta=1.5$ for different system sizes calculated using AFQMC. GHF trial wave functions with FM order and AFM order are used at $\Delta = 0.2$ and $\Delta = 1.5$, respectively. All of the other Hamiltonian parameters are held fixed at $t=1$, $U=6$, $U_1=U-2J$, $U_2=U-3J$, and $J=0.15U$. The electron density per band is $4/3$.}
\label{charge_gap}
\begin{ruledtabular}
\begin{tabular}{ccc}
Lattice size $\times$ bands & Charge Gap ($\Delta = 0.2 $) & Charge Gap ($\Delta = 1.5 $) \\
\hline
4$\times$4$\times$3 & 0.222(29) & 1.311(32) \\
6$\times$6$\times$3 & 0.103(27) & 1.268(35) \\
8$\times$8$\times$3 & 0.015(72) & 1.225(36) \\
$\infty$ & -0.006(47) & 1.201(41) \\
\end{tabular}
\end{ruledtabular}
\end{table}
The successful determination of the magnetic order and charge gaps in this model system illustrates our method's promise for accurately modeling realistic materials.
\section{Conclusions}
\label{conclusions}
In summary, we have presented a ground state AFQMC framework suited for the study of the HK model, a multi-band model designed to capture the Hund's physics of many $d$- and $f$-electron materials. Diverging from past QMC studies of the HK model, we employ a novel set of HS transformations to decouple the Hund's coupling term while preserving the term's essential physics. We find that by carefully combining these transformations with a form of importance sampling that shifts our propagators, well-optimized GHF wave functions, and the constrained path and phaseless approximations, we can accurately predict the energetics of benchmark lattice models and the magnetic order of much larger models that approximate realistic materials. Overall, we find that the phaseless version of our method produces nearly exact energies for small models for $-3<J<3$, a range of $J$ values which contains those commonly observed in experiment. This bodes well for the generalization of our method to other systems.
Our method may readily be extended to include spin-orbit coupling effects and negative $J$ values, which opens the doors to the highly accurate study of exotic, -$J$ fulleride physics.\cite{Nomura_JPCM_2016,Nomura_Science_2015} In order to describe superconducting physics, our method can be adapted to use superconducting trial wave function forms, including Bardeen-Cooper-Schrieffer\cite{Shi_Zhang_PRA_2015,Carlson_Zhang_PRA_2011} and Hartree-Fock-Bogoliubov\cite{Shi_Zhang_PRB_2017} wave functions. We foresee our method having the most immediate impact as a way to delineate low-temperature phase diagrams currently beyond the reach of DMFT methods. As the same transformations and importance sampling techniques may readily be adapted into finite temperature AFQMC formalisms,\cite{Hirsch_PRL_1986,Liu_JCTC_2018,Zhang_PRL_1999} the same methods may be used to develop lower scaling, sign- and phase-problem-free impurity solvers. We look forward to employing our methods to more accurately elucidate the complex many body physics of 4$d$ transition metal oxides such as the ruthenates, rhodates, and molybdenates in the near future.
\begin{acknowledgments}
H.H., B.R., and H.S. thank Andrew Millis, Antoine Georges, and Shiwei Zhang for stimulating discussions, and Qiang Han for providing data and insight. H.H. and B.R. acknowledge support from NSF grant DMR-1726213 and DOE grant DE-SC0019441. H.S. thanks the Flatiron Institute for research support. The Flatiron Institute is a division of the Simons Foundation. This work was conducted using computational
resources and services at the Brown University Center for Computation and Visualization, the Flatiron Institute, and the Extreme Science and Engineering Discovery Environment (XSEDE).
\end{acknowledgments}
\section{\textbf{Introduction}}
\indent In this work we consider global existence of smooth
solutions of the Cauchy problem
\begin{equation}\label{eq1}
\left \{
\begin{aligned}
&\partial_{tt}\phi-\partial_{x_i}\big(g^{ij}(t,x)\partial_{x_j}\phi\big)+\phi^5=0,~~\mathbb{R}_t
\times
\mathbb{R}_x^3,\\
&\phi(t_0,x)=f_1(x)\in
C_{0}^{\infty}(\mathbb{R}_x^3),~~~\phi_{t}(t_0,x)=f_2(x)\in
C_{0}^{\infty}(\mathbb{R}_x^3),\\
\end{aligned} \right.
\end{equation}
here~$\{g^{ij}(t,x)\}_{i,j=1}^3$~denotes a matrix valued smooth
function
of the variables~$(t,x)\in \mathbb{R} \times \mathbb{R}^3$,~
which takes values in the real, symmetric,~$3\times3$~matrices,
such that for some~$C >0$,~
\begin{equation}\label{eq101}
C|\xi|^2\leq g^{ij}(t, x)\xi_i\xi_j \leq
C^{-1}|\xi|^2,~~~~\forall~\xi \in
\mathbb{R}^3,~(t, x)\in \mathbb{R} \times \mathbb{R}^3 .\\
\end{equation}
Obviously it is a critical wave equation on a curved spacetime.
First let us survey existence and regularity results for critical
nonlinear wave equations briefly. If ~$g^{ij}=\delta^{ij}$,~which
denotes the Kronecker delta function, we say
problem~$\eqref{eq1}$~is of constant coefficients. In the case of
critical nonlinear wave equation with constant coefficients, a
wealth of results are available in the literature. For cauchy
problem, global existence of~$C^2$-solutions in dimension~$n=3$~was
first obtained by Rauch \cite{Rauch}, assuming the initial energy to
be small. In 1988, also for "large" data global~$C^2$-solutions in
dimension~$n=3$~were shown to exist by Struwe \cite{Struwe} in the
radially symmetric case. Grillakis \cite{Grillakis1} in 1990 was
able to remove the latter symmetry assumption and obtained the same
result. Not much later, Kapitanskii \cite{Ka} established the
existence of a unique, partially regular solution for all
dimensions. Combining Strichartz inequality and Morawetz estimates,
Grillakis \cite{Grillakis2} in 1992 established global existence and
regularity for dimensions~$3\leq n\leq 5$~and announced the
corresponding results in the radial case for dimensions~$n\leq 7$.~
Then Shatah and Struwe \cite{Shatah} obtained global existence and
regularity for dimensions~$3\leq n\leq 7$.~They also proved the
global well-posedness in the energy space in \cite{Shatah2} 1994.
For the critical exterior problem in dimension 3, Smith and Sogge
\cite{Sogge} in 1995 proved global existence of smooth solutions. In
2008, Burq et al. \cite{Burq} obtained the same result in 3-D
bounded domain. \\
\indent For the critical Cauchy problem with time-independent
variable coefficients, Ibrahim and Majdoub \cite{Ibrahim} in 2003
studied the existence of both global smooth solutions for dimensions~$3\leq n<
6$~and Shatah-Struwe's solutions for dimensions~$n\geq 3$.~Recently,
we showed global existence and regularity in \cite{Zhou} for
the critical exterior problem with time-independent
variable coefficients in dimension~$n=3$.~\\
\indent In this paper we are interested in the critical case with
coefficients depending on the time and space variables. Our result
concerns global existence and regularity, stated as follows: \\
{\bf Theorem 1.1.} Problem~$\eqref{eq1}$~admits a unique global
solution~$\phi \in C^\infty(\mathbb{R}\times \mathbb{R}^3)$.~\\
\indent The proof of theorem 1.1 proceeds by contradiction,
showing that~$\phi$~is uniformly bounded. For that purpose, the key step
is to show the non-concentration of the~$L^6$~part of the energy
(and hence the energy), and to do this the idea is to work in
geodesic cone just like
light cone in constant coefficients case. Thus we have\\
{\bf Lemma 1.2.}~(\textbf{Non-concentration lemma}) If~$\phi \in
C^{\infty}([t_0, 0)\times \mathbb{R}^3)$~solves \eqref{eq1}, then
\begin{equation}\nonumber\\
\lim_{t\rightarrow 0}\int_{Q(t)}\phi^6\mathrm{d}v=0,\\
\end{equation}
where~$Q(t)$~is the intersection of the~$t$~time slice with the
backward solid
characteristic cone from the origin.\\
\indent In order to prove the non-concentration lemma, in the
constant coefficients caes the Morawetz
multiplier~$t\partial_t+r\partial_r+1$~is used, where~$r=|x|$;~while
in the time-independent variable coefficients case the geometric
multiplier~$t\partial_t+\rho\nabla \rho+1$~is used instead,
where~$\rho$~is the associated distance function. The time-dependent
variable coefficients case considered in this work is much more
difficult, and the simple-minded generalization to use
multiplier~$t\partial_t+(\underline{u}-t)\nabla(\underline{u}-t)+1$~will
not work, where~$\underline{u}$~is an optical function (close
to~$t+|x|$~). Following Christodoulou and Klainerman \cite{Chris} we
use a null frame. However, the emphasis in their work is the
asymptotic behavior of the null frame at infinity, and here we
emphasize its asymptotic behavior locally at a possible blow up
point. We derive the asymptotic behavior of the null frame by using
a comparison theorem originating
from Riemannian geometry. \\
\indent To prove our result, we also need Strichartz inequality, stated as \\
{\bf Lemma 1.3.}~(\textbf{Strichartz inequality})
Assuming~$g^{ij}(t,x)$~satisfies the conditions of the
introduction, and~$\phi$~solves the following Cauchy problem in the half-open
strip~$[t_0, 0)\times \mathbb{R}^3$:~
\begin{equation}\nonumber\\
\left \{
\begin{aligned}
&\partial_{tt}\phi-\partial_{x_i}\big(g^{ij}(t,x)\partial_{x_j}\phi\big)=F(t,x),\\
&\phi(t_0,x)=f_1(x)\in
C_{0}^{\infty}(\mathbb{R}_x^3),~~~\phi_{t}(t_0,x)=f_2(x)\in
C_{0}^{\infty}(\mathbb{R}_x^3),\\
\end{aligned} \right.
\end{equation}
then we have
\begin{eqnarray}\label{104}
\|\phi\|_{L_{t}^{\frac{2q}{q-6}}L_{x}^q([t_0,~0)\times~\mathbb{R}^3)}\leq
C\big(\|f_1\|_{H^1(\mathbb{R}^3)}+\|f_2\|_{L^2(\mathbb{R}^3)}+
\|F\|_{L_{t}^1L_{x}^2([t_0,~0)\times~\mathbb{R}^3)}\big)\nonumber \\
6\leq q <\infty.
\end{eqnarray}
\indent For the proof see Smith \cite{Smith}.\\
\indent Then, combining these two lemmas, we can establish uniform
bounds on the local solution~$\phi$,~which implies our result; this
step is completely parallel to Ibrahim and Majdoub \cite{Ibrahim} and
we omit it.\\
\indent Our results can be extended to a more general variable
coefficient second order partial differential equation with an operator of the form\\
\begin{equation}\nonumber\\
\begin{aligned}
L\equiv
\partial_t^2+2b^i(t,x)\partial_{ti}^2-a^{ij}(t,x)\partial_{ij}^2+L_1,~~a^{ij}=a^{ji},\\
\end{aligned}
\end{equation}
where all coefficients are real and~$C^\infty$,~and~$L_1$~is a first
order operator. In general, we can get rid of cross terms (that is,
terms like~$
b^i\partial_{ti}^2$~) by the following procedure: let us write (with new first order terms~$L_1'$~)\\
\begin{equation}\nonumber\\
\begin{aligned}
L\equiv
(\partial_t+b^i\partial_i)^2-(a^{ij}+b^ib^j)\partial_{ij}^2+L_1'.\\
\end{aligned}
\end{equation}
If, in the region under consideration, we can perform a change of
variables
\begin{equation}\nonumber\\
\begin{aligned}
X_1=\varphi_1(t,x),\cdots, X_n=\varphi_n(t,x), T=t,\\
\end{aligned}
\end{equation}
and set
\begin{equation}\nonumber\\
\begin{aligned}
\frac{\partial \varphi_j}{\partial t}+b^i\frac{\partial
\varphi_j}{\partial x_i}=0,~~j=1,\cdots,n,\\
\end{aligned}
\end{equation}
in such a way that the vector
field~$\partial_t+b^i\partial_i$~becomes~$\partial_T$,~then the
operator~$L$~takes the form
\begin{equation}\nonumber\\
\begin{aligned}
\overline{L}=
\partial_T^2-\overline{a}^{ij}\partial_{X_iX_j}^2+\overline{L}_1,\\
\end{aligned}
\end{equation}
for some new coefficients~$\overline{a}^{ij}$~and lower order
terms~$\overline{L}_1$.~\\
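For instance, in the simple illustrative case where the~$b^i$~are
constants, the functions~$\varphi_j(t,x)=x_j-b^jt$~satisfy
\begin{equation}\nonumber\\
\begin{aligned}
\frac{\partial \varphi_j}{\partial t}+b^i\frac{\partial
\varphi_j}{\partial x_i}=-b^j+b^j=0,~~j=1,\cdots,n,\\
\end{aligned}
\end{equation}
so in the coordinates~$X_j=x_j-b^jt, ~T=t$~the vector
field~$\partial_t+b^i\partial_i$~indeed becomes~$\partial_T$.~\\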
\indent As an application of our result, we can consider the
critical wave equation in the Schwarzschild spacetime~$(\mathcal
{M}, g)$~with parameter~$M > 0$,~where~$g$~is the Schwarzschild
metric whose line
element is\\
\begin{equation}\nonumber\\
\begin{aligned}
ds^2=-\Big(1-\frac{2M}{r}\Big)dt^2+\Big(1-\frac{2M}{r}\Big)^{-1}dr^2+r^2d\omega^2,\\
\end{aligned}
\end{equation}
where~$d\omega^2$~is the measure on the
sphere~$\mathbb{S}^2$.~While the singularity at~$r=0$~is a true
metric singularity, we note that the apparent singularity
at~$r=2M$~is merely a coordinate singularity. Indeed, define the
Regge-Wheeler tortoise
coordinate~$r^*$~by\\
\begin{equation}\nonumber\\
\begin{aligned}
r^*=r+2M\log(r-2M)-3M-2M\log M,\\
\end{aligned}
\end{equation}
and set~$v=t+r^*$.~Then in the~$(r^*, t, \omega)$~coordinates the
Schwarzschild metric~$g$~
is expressed in the form\\
\begin{equation}\nonumber\\
\begin{aligned}
ds^2=-\Big(1-\frac{2M}{r}\Big)dv^2+2dvdr+r^2d\omega^2.\\
\end{aligned}
\end{equation}
Let~$\Sigma$~be an arbitrary Cauchy surface for the (maximally
extended) Schwarzschild spacetime~$(\mathcal {M},
g)$~stated as above and consider the Cauchy problem of the wave equation\\
\begin{equation}\label{1000}\\
\left \{
\begin{aligned}
&\Box_g \phi-\phi^5=0,\\
&(\phi ,\phi_t )|_{\Sigma}=(\psi_0,\psi_1), \\
\end{aligned} \right.
\end{equation}
for this problem Marzuola et al. \cite{Mar} proved global existence
and uniqueness of finite energy solutions under the assumption of
small initial energy, and
according to our result we can remove the small energy assumption, that is\\
{\bf Theorem 1.4.} For smooth initial data prescribed on~$\Sigma$,
equation \eqref{1000} admits a unique
global smooth solution in the~$(r^*, t, \omega)$~coordinates.\\
\indent Now we sketch the plan of this article. In the next
section
we recall some geometric concepts which are necessary for our proofs.
Section 3 is devoted to the proof of lemma 1.2: the fundamental lemma
expressing the non-concentration of
~$L^6$~part of the energy.\\
\indent Finally, we remark that although our non-concentration
lemma is stated only in dimension~$n=3$,~the proof works in any
dimension for the critical wave equations.\\
\indent In this paper, the letter~$C$~denotes a constant which may
change from line to line.\\
\section{\textbf{Null frame}}
\indent Let~$\{g_{ij}(t,x)\}_{i,j=1}^3$~denotes the inverse matrix
of~$\{g^{ij}(t,x)\}_{i,j=1}^3$,~and consider the spilt
metric~$g=-dt^2+g_{ij}(t,x)dx^idx^j=g_{\alpha\beta}dx^\alpha
dx^\beta$~on~$\mathbb{R}_{x,t}^4$~(close to the Minkowski metric).
So we will work in the spacetime, a 4-dimensional
manifold~$M.$~Local coordinates on~$M$~are denoted by~$x_\alpha,
\alpha=1,2,3,4.$~The convention is used that Latin indices run from
1 to 3 while Greek indices relate to the spacetime manifold~$M$~and
run from 1 to 4. The index 4 corresponds to the time coordinate,
while~$(x_1,x_2,x_3)$~are the spatial coordinates. The corresponding
partial derivatives are~$\partial_\alpha=\frac{\partial}{\partial
x_\alpha}.$~We introduce an optical function~$\underline{u}$~(close
to~$t+|x|$)~for~$g$:~a~$C^1$~function which satisfies the eikonal
equation\\
\begin{equation}\label{eq105}
\left \{
\begin{aligned}
&g^{\alpha\beta}\partial_\alpha\underline{u}\partial_\beta\underline{u}
=g_{\alpha\beta}\partial^\alpha\underline{u}\partial^\beta\underline{u}
=\langle\nabla\underline{u},
\nabla\underline{u}\rangle=|\nabla\underline{u}|^2=0,\\
&\underline{u}(t,0)=t,\\
\end{aligned} \right.
\end{equation}
where~$\langle ~,~ \rangle$~denotes the inner product about the
given metric. In PDE terms, this means that the level
surfaces~$\{\underline{u}=C\}$~are characteristic surfaces for any
operator with principal
symbol~$g^{\alpha\beta}\xi_\alpha\xi_\beta$.~From this construction
it is easy to see that the first
order derivatives of~$\underline{u}$~are locally bounded.\\
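For instance, in the flat case~$g_{ij}=\delta_{ij}$~one checks
directly that~$\underline{u}=t+|x|$~satisfies \eqref{eq105}:
\begin{equation}\nonumber\\
\begin{aligned}
\langle\nabla\underline{u},
\nabla\underline{u}\rangle=-(\partial_t\underline{u})^2+\delta^{ij}\partial_i\underline{u}\partial_j\underline{u}
=-1+\Big|\frac{x}{|x|}\Big|^2=0,~~~\underline{u}(t,0)=t.\\
\end{aligned}
\end{equation}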
\indent Then we set
\begin{equation}\label{eq0}
\begin{aligned}
&\underline{L}=-\nabla
\underline{u}=(\partial_t\underline{u})\partial_t-(g^{ij}\partial_i\underline{u})\partial_j=
m^{-1}(\partial_t+N),\\
&L=\frac{\partial_t}{\partial_t\underline{u}}+\frac
{(g^{ij}\partial_i\underline{u})\partial_j}{(\partial_t\underline{u})^2}=m(\partial_t-N),\\
\end{aligned}
\end{equation}\\
where~$\nabla$~is the gradient about the given
metric,~$m=(\partial_t\underline{u})^{-1},$~~$N=-\frac{(g^{ij}\partial_i\underline{u})\partial_j}
{\partial_t\underline{u}}=-(mg^{ij}\partial_i\underline{u})\partial_j.$
~It is easy to see that they are close
to~$\partial_t-\partial_r$~and~$\partial_t+\partial_r$~respectively.
And~$D_{\underline{L}}\underline{L}=0$,~showing that an integral
curve of~$\underline{L}$~is a geodesic. This follows from the
symmetry of the Hessian, since for any vector field~$X$,~we have
\begin{equation}\nonumber\\
\begin{aligned}
&<D_{\underline{L}}\underline{L},X>=-<D_{\underline{L}}\nabla\underline{u},X>
=-<D_X\nabla\underline{u},\underline{L}>\\
&=<D_X\underline{L},\underline{L}>=\frac{1}{2}X<\underline{L},\underline{L}>
=\frac{1}{2}X<\nabla\underline{u},\nabla\underline{u}>=0.\\
\end{aligned}
\end{equation}\\
So the integral curves of the field~$\underline{L}$~generate a
backward geodesic cone with vertices on the~$t$-axis.
Using the coordinate~$t$,~we define the
foliation~$\sum_{t_1}=\{(x,t),t=t_1\}$,~and
using~$\underline{u}$,~we
define the foliation by nonstandard 2-spheres as\\
\begin{equation}\nonumber\\
S_{t_1,\underline{u}_1}=\{(x,t),t=t_1,
\underline{u}(x,t)=\underline{u}_1\}.\\
\end{equation}
Since~$\nabla\underline{u}$~is orthogonal
to~$\{\underline{u}=\underline{u}_1\}$~and~$\partial_t$~is
orthogonal to~$\sum_{t_1}$,~the field~$\underline{L}$~is a null
vector orthogonal to the geodesic cone and~$N$~is an horizontal
field
orthogonal to~$S_{t_1,\underline{u}_1}$.~Moreover,\\
\begin{equation}\nonumber\\
\langle N,
N\rangle=\frac{1}{(\partial_t\underline{u})^2}g_{ij}(g^{ik}\partial_k\underline{u})(g^{jl}\partial_l
\underline{u})=\frac{1}{(\partial_t\underline{u})^2}g^{kl}\partial_k\underline{u}\partial_l\underline{u}
=1.\\
\end{equation}
Then, if~$(e_1, e_2)$~form an orthonormal basis on the nonstandard
spheres, the frame\\
\begin{equation}\nonumber\\
e_1, e_2, e_3\equiv L=m(\partial_t-N), e_4\equiv
\underline{L}=m^{-1}(\partial_t+N)\\
\end{equation}
is a null frame with
\begin{equation}\nonumber\\
\left \{
\begin{aligned}
&\langle e_1, e_1\rangle=\langle e_2, e_2\rangle=1, \langle e_1,
e_2\rangle=0,\\
&\langle e_1, L\rangle=\langle e_1, \underline{L}\rangle=\langle
e_2, L\rangle=\langle e_2, \underline{L}\rangle=0,\\
&\langle L, L\rangle=\langle \underline{L}, \underline{L}\rangle=0,
\langle L, \underline{L}\rangle=-2.\\
\end{aligned} \right.
\end{equation}
\indent We will work in the null frame above, which requires that
we know the vectors~$D_{\alpha}e_{\beta}$,~that is, the frame
coefficients~$<D_{\alpha}e_{\beta},e_{\gamma}>$.~\\
\indent We define the frame coefficients by
\begin{equation}\label{eq600}
\begin{aligned}
&<D_a\underline{L},e_b>=\underline{\chi}_{ab}=\underline{\chi}_{ba},~~
<D_aL,e_b>=\chi_{ab}=\chi_{ba},\\
&<D_{\underline{L}}\underline{L},e_a>=0, ~~~~~~~~~~~~~<D_LL,e_a>=2\xi_a,\\
&<D_{\underline{L}}L,e_a>=2\eta_a,
~~~~~~~~~~<D_L\underline{L},e_a>=2\underline{\eta}_a,\\
&<D_{\underline{L}}\underline{L},L>=0,
~~~~~~~~~~~~~~<D_LL,\underline{L}>=4\underline{\omega}=-<D_L\underline{L},L>,\\
\end{aligned}
\end{equation}
where~$a,b=1,2$.~\\
\indent If we call~$k$~the second fundamental form of~$\Sigma_t$~by
\begin{equation}\nonumber\\
k(X,Y)=-<D_X\partial_t,Y>, k_{ij}=-\frac{1}{2}\partial_tg_{ij},\\
\end{equation}
then~$k$~involves nothing but the first order derivatives of~$g$~and is
therefore bounded. By a simple computation, we also have
\begin{equation}\label{eq116}
\begin{aligned}
2\eta_a&=-2k_{Na},\\
2\underline{\eta}_a&=-2me_a(\underline{u}_t)+2k_{Na},\\
2\xi_a&=-2m^2\underline{\eta}_a+2m^2k_{Na},\\
\chi_{ab}&=-m^2\underline{\chi}_{ab}-2mk_{ab},\\
\underline{\omega}&=-\partial_tm=\frac{\partial_{tt}\underline{u}}{(\partial_t\underline{u})^2}.\\
\end{aligned}
\end{equation}
For the details, one can read Alinhac \cite{Alinhac}. And we are
interested in the asymptotic behavior of these frame coefficients
near the origin, thus we have\\
{\bf Theorem 2.1.} Assuming~$\xi_a, ~\eta_a, ~\underline{\eta}_a,~
\underline{\omega},~ \underline{\chi}_{ab}, ~\chi_{ab}$~are frame
coefficients as above,
then\\
\begin{align}
&|\eta_a|\leq C,\\
&\frac{ct}{2}\leq
\underline{\omega}=\frac{\partial_{tt}\underline{u}}{(\partial_t\underline{u})^2}
=m^2\partial_{tt}\underline{u}\leq -\frac{ct}{2},\\
&\frac{1}{t-\underline{u}}+ct\leq \underline{\chi}_{aa}\leq
\frac{1}{t-\underline{u}}-ct,\\
&|\underline{\eta}_{a}|\leq -Ct,\\
&|\xi_a|\leq C-Ct,\\
&|1-m||\underline{\chi}_{aa}|\leq C,\\
&4+Ct\leq \chi_{aa}\underline{u}+\chi_{bb}\underline{u}+
\underline{\chi}_{aa}u+\underline{\chi}_{bb}u\leq 4-Ct,\\
&(2+Ct)|\overline{\nabla}\phi|^2\leq
\sum_{a,b=1}^{2}(\chi_{ab}\underline{u}+\underline{\chi}_{ab}u)e_a(\phi)e_b(\phi)
\leq (2-Ct)|\overline{\nabla}\phi|^2,
\end{align}
where~$a=1, 2;~c,~C$~are positive constants;~$t< 0$~as we work in
the backward geodesic cone starting from the origin
and~$u=2t-\underline{u}$.~\\
\indent The first inequality follows immediately from \eqref{eq116}; to
prove the other inequalities of this theorem, we need several
lemmas stated below. First let us introduce the comparison theorem.\\
\indent Assuming~$C,D$~take values in the real,
symmetric,~$(n-1)\times(n-1)$~matrices. If for
any~$(\alpha_1,\cdots,\alpha_{n-1}),(\beta_1,\cdots,\beta_{n-1})\in
\mathbb{R}^{n-1}$~and~$\sum\alpha_i^2=\sum\beta_i^2$,~we have
\begin{equation}\nonumber\\
\left(
\begin{array}{ccc}
\alpha_1 , \cdots , \alpha_{n-1} \\
\end{array}
\right)
C\left(
\begin{array}{c}
\alpha_1 \\
\vdots \\
\alpha_{n-1} \\
\end{array}
\right)\geq \left(
\begin{array}{ccc}
\beta_1 , \cdots , \beta_{n-1} \\
\end{array}
\right)
D\left(
\begin{array}{c}
\beta_1 \\
\vdots \\
\beta_{n-1} \\
\end{array}
\right),\\
\\
\end{equation}
then we say~$C\succ D$.~\\
{\bf Lemma 2.2.} Let~$gl(n-1, \mathbb{R})$~be the set of real
symmetric matrices of order~$n-1$,~and let~$K,~\widetilde{K}:[0, b)\rightarrow gl(n-1,
\mathbb{R})$.~Suppose~$A:[0, b)\rightarrow gl(n-1,
\mathbb{R})$~satisfies the ordinary differential equation
\begin{equation}\nonumber\\
\left \{
\begin{aligned}
&\frac{d^2A}{ds^2}+AK=0,\\
&A(0)=0, ~\frac{dA}{ds}(0)=I~(\text{the unit matrix}),\\
\end{aligned} \right.
\end{equation}
and~$\widetilde{A}:[0, b)\rightarrow gl(n-1,
\mathbb{R})$~satisfies
\begin{equation}\nonumber\\
\left \{
\begin{aligned}
&\frac{d^2\widetilde{A}}{ds^2}+\widetilde{A}\widetilde{K}=0,\\
&\widetilde{A}(0)=0, ~\frac{d\widetilde{A}}{ds}(0)=I~(\text{the unit matrix}),\\
\end{aligned} \right.
\end{equation}
where~$s\in [0, b)$.~Also we assume~$A,\widetilde{A}$~are invertible
in~$(0, b)$~and~$K\prec \widetilde{K}$;~then
\begin{equation}\label{eq106}
A^{-1}\frac{dA}{ds}\succ \widetilde{A}^{-1}\frac{d\widetilde{A}}{ds}.\\
\end{equation}
\indent For the proof see \cite{Wu}.\\
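\indent As a simple illustration in the scalar case~$n-1=1$:~for
constant curvatures~$K=c_1$~and~$\widetilde{K}=c_2$~with~$0<c_1<c_2$,~the
solutions are~$A(s)=\sin(\sqrt{c_1}s)/\sqrt{c_1}$~and~$\widetilde{A}(s)=\sin(\sqrt{c_2}s)/\sqrt{c_2}$,~and
\eqref{eq106} reduces to the classical Sturm comparison
\begin{equation}\nonumber\\
\begin{aligned}
A^{-1}\frac{dA}{ds}=\sqrt{c_1}\cot(\sqrt{c_1}s)\geq
\sqrt{c_2}\cot(\sqrt{c_2}s)=\widetilde{A}^{-1}\frac{d\widetilde{A}}{ds}.\\
\end{aligned}
\end{equation}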
\indent If the
metric~$\widetilde{g}=-dt^2+\widetilde{g}_{ij}(x)dx^idx^j$,~where~$\widetilde{g}_{ij}(x)$~depends
only on the spatial coordinates,
then~$\widetilde{\underline{u}}=t+\widetilde{\rho}$~is an optical
function
for~$\widetilde{g}$~satisfies~$\widetilde{\underline{u}}(t,0)=t$,~where~$\widetilde{\rho}$~is
the Riemannian distance function on the Riemannian
manifold~$(\mathbb{R}^3, ~\widetilde{g}_{ij}(x))$.~The corresponding
null frame related to~$\widetilde{\underline{u}}$~is
\begin{equation}\nonumber\\
\begin{aligned}
\widetilde{e}_1,~ \widetilde{e}_2,
~\widetilde{e}_3=\partial_t+\partial_{\widetilde{\rho}}=\partial_t+\widetilde{g}^{ij}\widetilde{\rho}_i\partial_j,
~\widetilde{e}_4=\partial_t-\partial_{\widetilde{\rho}}=\partial_t-\widetilde{g}^{ij}\widetilde{\rho}_i\partial_j,\\
\end{aligned}
\end{equation}
where~$\{\widetilde{g}^{ij}(x)\}_{i,j=1}^3$~denotes the inverse
matrix of~$\{\widetilde{g}_{ij}(x)\}_{i,j=1}^3$.~And
then~\underline{u}~can be compared
with~$\widetilde{\underline{u}}$~through lemma 2.2.\\
\indent Let~$\gamma: [0, b)\rightarrow (\mathbb{R}_{x,t}^4, g)$~be
an integral curve of~$\underline{L}=-\nabla \underline{u}$;~we call it a null
geodesic, since~$<\underline{L},
\underline{L}>=0$,~and~$\dot{\gamma}=\underline{L}=-\nabla
\underline{u}$.~Let~$\{e_1, e_2, e_3=L, e_4=\underline{L}\}$~be a parallel null
frame along~$\gamma$,~and let~$J_i(s)$~be the Jacobi fields
along~$\gamma$~satisfying~$J_i(0)=0, \dot{J_i}(0)=e_i(0),
(i=1,2,3)$.~So we have
\begin{equation}\nonumber\\
\left(
\begin{array}{c}
J_1(s) \\
J_2(s) \\
J_3(s) \\
\end{array}
\right)=A(s)\left(
\begin{array}{c}
e_1(s) \\
e_2(s) \\
e_3(s) \\
\end{array}
\right),\\
\end{equation}
where~$A(s)$~denotes an invertible matrix valued function of the
parameter~$s\in [0, b)$.~Then the Jacobi equation becomes
\begin{equation}\nonumber\\
\frac{d^2A}{ds^2}+AK=0,\\
\end{equation}
where~$K=(K_{ij})_{i,j=1}^3, K_{ij}=<R(\dot{\gamma},
e_i)\dot{\gamma}, e_j>$~denotes a~$3\times3$~symmetric matrix. We
then easily get\\
\begin{equation}\label{eq109}
H_{\underline{u}}(e_i, e_j)=D^2\underline{u}(e_i, e_j)=-(A^{-1}\frac{dA}{ds})_{ij}=-\underline{\chi}_{ij},\\
\end{equation}
where~$H_{\underline{u}}$~denotes the Hessian form
of~$\underline{u}$.~And then \eqref{eq600} yields
\begin{equation}\label{eq601}
(\underline{\chi}_{ij})_{i,j=1}^3=\left(
\begin{array}{ccc}
\underline{\chi}_{11} & \underline{\chi}_{12} & 2\underline{\eta}_1 \\
\underline{\chi}_{12} & \underline{\chi}_{22} & 2\underline{\eta}_2 \\
2\underline{\eta}_1 & 2\underline{\eta}_2 & -4\underline{\omega} \\
\end{array}
\right).\\
\end{equation}
Correspondingly for optical
function~$\widetilde{\underline{u}}=t+\widetilde{\rho}$,~we
have~$\widetilde{A}, \widetilde{K}$~and
\begin{equation}\label{eq151}
H_{\widetilde{\underline{u}}}(\widetilde{e}_i,
\widetilde{e}_j)=D^2\widetilde{\underline{u}}(\widetilde{e}_i,
\widetilde{e}_j)=D^2(t+\widetilde{\rho})(\widetilde{e}_i,
\widetilde{e}_j)=
-(\widetilde{A}^{-1}\frac{d\widetilde{A}}{ds})_{ij}=-\widetilde{\underline{\chi}}_{ij}.\\
\end{equation}
\indent Note that one assumption of lemma 2.2 is~$K\prec
\widetilde{K}$,~but~$K_{33}=<R(\underline{L},~ L)\underline{L},~
L>\neq0$,~while
~$\widetilde{K}_{33}=<R(\partial_t-\partial_{\widetilde{\rho}},~
\partial_t+\partial_{\widetilde{\rho}})\partial_t-\partial_{\widetilde{\rho}},~
\partial_t+\partial_{\widetilde{\rho}}>=0$,~so we have to introduce a
conformally related metric tensor to~$\widetilde{g}$~to ensure the
condition~$K\prec \widetilde{K}$.~Let~$(\widetilde{g}_{ij}(x),
\mathbb{R}^3)$~be a space form with positive constant sectional
curvature~$c$.~We set then the conformally related
metric~$\widetilde{g}_c=e^{ct^2}\widetilde{g}$.~\\
{\bf Lemma 2.3.} Let~(M, g)~be a semi-Riemannian manifold of
dimension~$n$~and let~$g_c=\varphi g$~be a conformally related
metric tensor to~$g$,~where~$\varphi: M\rightarrow (0, \infty)$~is a
map. Then \\
(1) ~$\mathop \nabla
\limits^c=\frac{1}{\varphi}\nabla$,~where~$\nabla$~and~$\mathop
\nabla \limits^c$~are the gradients on~$(M, g)$~and~$(M,
g_c)$,~respectively.\\
(2) ~For~$X, Y\in \Gamma TM$,~
\begin{equation}\nonumber\\
{\mathop
\nabla\limits^c}_XY=\nabla_XY+\frac{1}{2\varphi}X(\varphi)Y+\frac{1}{2\varphi}Y(\varphi)X
-\frac{1}{2\varphi}g(X, Y)\nabla \varphi,\\
\end{equation}
where~$\nabla$~and~$\mathop \nabla \limits^c$~are the Levi-Civita
connections of~$(M,g)$~and~$(M,
g_c)$,~respectively.\\
(3) ~If~$f: M\rightarrow \mathbb{R}$~is a map then, for~$X, Y\in
\Gamma TM$,\\
\begin{equation}\nonumber\\
\begin{aligned}
H_f^c(X, Y)=&H_f(X, Y)-\frac{1}{2\varphi}[g(\nabla\varphi,
X)g(\nabla f, Y)\\
&+g(\nabla f, X)g(\nabla\varphi, Y)-g(\nabla\varphi, \nabla
f)g(X, Y)],\\
\end{aligned}
\end{equation}
where~$H_f$~and~$H_f^c$~are the Hessian forms of~$f$~on~$(M,g)$~and~$(M,
g_c)$,~respectively.\\
\indent For the proof see \cite{Eduardo}.\\
\indent From lemma 2.3, we have
\begin{equation}\nonumber\\
\begin{aligned}
<\mathop \nabla\limits^c(t+\widetilde{\rho}), \mathop
\nabla\limits^c(t+\widetilde{\rho})>_{\widetilde{g}_c}=e^{ct^2}<\frac{1}{e^{ct^2}}\nabla(t+\widetilde{\rho}),
\frac{1}{e^{ct^2}}\nabla(t+\widetilde{\rho})>_{\widetilde{g}}=0,\\
\end{aligned}
\end{equation}
then~$t+\widetilde{\rho}$~is also an optical function
for~$\widetilde{g}_c$.~As above, we define~$\widetilde{A}_c,
\widetilde{K}_c,
\widetilde{\chi}_{cij}$~associated to~$\widetilde{g}_c$.~\\
\indent It is well known that for a manifold~$(M, g_M)$~with constant
curvature~$c$~
\begin{equation}\label{eq405}
\begin{aligned}
&R(X, Y, Z, W)\\
&=c\big[g_M(X, Z)g_M(Y, W)-g_M(X, W)g_M(Y,
Z)\big],~\forall~ X, Y, W, Z\in \Gamma TM.\\
\end{aligned}
\end{equation}
Then for the space form~$(\widetilde{g}_{ij}(x), \mathbb{R}^3)$~with
positive constant sectional curvature~$c$~a computation according to
\eqref{eq405} gives
\begin{equation}\nonumber\\
\begin{aligned}
\widetilde{K}&=\left(
\begin{array}{ccc}
c & 0 & 0 \\
0 & c & 0 \\
0 & 0 & 0 \\
\end{array}
\right),\\
\widetilde{\underline{\chi}}_{11}&=-H_{\widetilde{\underline{u}}}(\widetilde{e}_1,~
\widetilde{e}_1)=-<D_{\widetilde{e}_1}\nabla(t+\widetilde{\rho}),~
\widetilde{e}_1>_{\widetilde{g}}=-<D_{\widetilde{e}_1}\nabla t,~
\widetilde{e}_1>_{\widetilde{g}}-<D_{\widetilde{e}_1}\nabla_g\widetilde{\rho},~
\widetilde{e}_1>_{\widetilde{g}}\\
&=-D^2\widetilde{\rho}(\widetilde{e}_1,~
\widetilde{e}_1)=-\sqrt{c}\cot(\sqrt{c}\widetilde{\rho})<\widetilde{e}_1,~
\widetilde{e}_1>_{\widetilde{g}}=-\sqrt{c}\cot(\sqrt{c}\widetilde{\rho}),\\
\widetilde{\underline{\chi}}_{22}&=-H_{\widetilde{\underline{u}}}(\widetilde{e}_2,~
\widetilde{e}_2)=-\sqrt{c}\cot(\sqrt{c}\widetilde{\rho})<\widetilde{e}_2,~
\widetilde{e}_2>_{\widetilde{g}}=-\sqrt{c}\cot(\sqrt{c}\widetilde{\rho}),\\
\widetilde{\underline{\chi}}_{12}&=-H_{\widetilde{\underline{u}}}(\widetilde{e}_1,~
\widetilde{e}_2)=-\sqrt{c}\cot(\sqrt{c}\widetilde{\rho})<\widetilde{e}_1,~
\widetilde{e}_2>_{\widetilde{g}}=0,\\
\widetilde{\underline{\chi}}_{33}&=-H_{\widetilde{\underline{u}}}(\widetilde{e}_3,~
\widetilde{e}_3)=-<D_{\widetilde{e}_3}\nabla(t+\widetilde{\rho}),~
\widetilde{e}_3>_{\widetilde{g}}=<D_{\partial_t+\partial\widetilde{\rho}}\partial_t-\partial\widetilde{\rho},~
\partial_t+\partial\widetilde{\rho}>_{\widetilde{g}}=0,\\
\widetilde{\underline{\chi}}_{13}&=-H_{\widetilde{\underline{u}}}(\widetilde{e}_1,~
\widetilde{e}_3)=-<D_{\widetilde{e}_3}\nabla(t+\widetilde{\rho}),~
\widetilde{e}_1>_{\widetilde{g}}=<D_{\partial_t+\partial\widetilde{\rho}}\partial_t-\partial\widetilde{\rho},~
\widetilde{e}_1>_{\widetilde{g}}=0,\\
\widetilde{\underline{\chi}}_{23}&=-H_{\widetilde{\underline{u}}}(\widetilde{e}_2,~
\widetilde{e}_3)=-<D_{\widetilde{e}_3}\nabla(t+\widetilde{\rho}),~
\widetilde{e}_2>_{\widetilde{g}}=<D_{\partial_t+\partial\widetilde{\rho}}\partial_t-\partial\widetilde{\rho},~
\widetilde{e}_2>_{\widetilde{g}}=0,\\
\end{aligned}
\end{equation}
where~$\nabla_g$~is the gradient on the space
form~$(\widetilde{g}_{ij}(x), \mathbb{R}^3)$.~Thanks to lemma 2.3,
after the conformal change of metric they become
\begin{equation}\label{eq113}
\begin{aligned}
\widetilde{K}_c&=\left(
\begin{array}{ccc}
2ce^{ct^2}-c^2t^2e^{ct^2} & 0 & 0 \\
0 & 2ce^{ct^2}-c^2t^2e^{ct^2} & 0 \\
0 & 0 & 4ce^{ct^2} \\
\end{array}
\right),\\
\widetilde{\underline{\chi}}_{c11}&=-H_{\widetilde{\underline{u}}}^c(\widetilde{e}_1,~
\widetilde{e}_1)=-\big(H_{\widetilde{\underline{u}}}(\widetilde{e}_1,~
\widetilde{e}_1)-ct\big)=-\sqrt{c}\cot(\sqrt{c}\widetilde{\rho})+ct,\\
\widetilde{\underline{\chi}}_{c22}&=-H_{\widetilde{\underline{u}}}^c(\widetilde{e}_2,~
\widetilde{e}_2)=-\big(H_{\widetilde{\underline{u}}}(\widetilde{e}_2,~
\widetilde{e}_2)-ct\big)=-\sqrt{c}\cot(\sqrt{c}\widetilde{\rho})+ct,\\
\widetilde{\underline{\chi}}_{c12}&=-H_{\widetilde{\underline{u}}}^c(\widetilde{e}_1,~
\widetilde{e}_2)=-H_{\widetilde{\underline{u}}}(\widetilde{e}_1,~
\widetilde{e}_2)=0,\\
\widetilde{\underline{\chi}}_{c33}&=-H_{\widetilde{\underline{u}}}^c(\widetilde{e}_3,~
\widetilde{e}_3)=-\big(H_{\widetilde{\underline{u}}}(\widetilde{e}_3,~
\widetilde{e}_3)-2ct\big)=2ct,\\
\widetilde{\underline{\chi}}_{c13}&=-H_{\widetilde{\underline{u}}}^c(\widetilde{e}_1,~
\widetilde{e}_3)=-H_{\widetilde{\underline{u}}}(\widetilde{e}_1,~
\widetilde{e}_3)=0,\\
\widetilde{\underline{\chi}}_{c23}&=-H_{\widetilde{\underline{u}}}^c(\widetilde{e}_2,~
\widetilde{e}_3)=-H_{\widetilde{\underline{u}}}(\widetilde{e}_2,~
\widetilde{e}_3)=0.\\
\end{aligned}
\end{equation}
So we can ensure~$K\prec \widetilde{K}_c$~when~$t$~is close to~$0$~and~$c$~is
large enough.\\
\indent Similarly, if we let~$(\widetilde{\widetilde{g}}_{ij}(x),
\mathbb{R}^3)$~be a space form with negative constant sectional
curvature~$-c$~and set the conformally related
metric~$\widetilde{\widetilde{g}}_c=e^{-ct^2}\widetilde{g}$,~then we
have
\begin{equation}\label{eq128}
\begin{aligned}
\widetilde{\widetilde{K}}_c&=\left(
\begin{array}{ccc}
-2ce^{-ct^2}-c^2t^2e^{-ct^2} & 0 & 0 \\
0 & -2ce^{-ct^2}-c^2t^2e^{-ct^2} & 0 \\
0 & 0 & -4ce^{-ct^2} \\
\end{array}
\right),\\
\widetilde{\widetilde{\underline{\chi}}}_{c11}&=-H_{\widetilde{\widetilde{\underline{u}}}}^c(\widetilde{\widetilde{e}}_1,~
\widetilde{\widetilde{e}}_1)=-\big(H_{\widetilde{\widetilde{\underline{u}}}}(\widetilde{\widetilde{e}}_1,~
\widetilde{\widetilde{e}}_1)+ct\big)=-\sqrt{c}\coth(\sqrt{c}\widetilde{\widetilde{\rho}})-ct,\\
\widetilde{\widetilde{\underline{\chi}}}_{c22}&=-H_{\widetilde{\widetilde{\underline{u}}}}^c(\widetilde{\widetilde{e}}_2,~
\widetilde{\widetilde{e}_2})=-\big(H_{\widetilde{\widetilde{\underline{u}}}}(\widetilde{\widetilde{e}}_2,~
\widetilde{\widetilde{e}}_2)+ct\big)=-\sqrt{c}\coth(\sqrt{c}\widetilde{\widetilde{\rho}})-ct,\\
\widetilde{\widetilde{\underline{\chi}}}_{c12}&=-H_{\widetilde{\widetilde{\underline{u}}}}^c(\widetilde{\widetilde{e}}_1,~
\widetilde{\widetilde{e}}_2)=-H_{\widetilde{\widetilde{\underline{u}}}}(\widetilde{\widetilde{e}}_1,~
\widetilde{\widetilde{e}}_2)=0,\\
\widetilde{\widetilde{\underline{\chi}}}_{c33}&=-H_{\widetilde{\widetilde{\underline{u}}}}^c(\widetilde{\widetilde{e}}_3,~
\widetilde{\widetilde{e}}_3)=-\big(H_{\widetilde{\widetilde{\underline{u}}}}(\widetilde{\widetilde{e}}_3,~
\widetilde{\widetilde{e}}_3)+2ct\big)=-2ct,\\
\widetilde{\widetilde{\underline{\chi}}}_{c13}&=-H_{\widetilde{\widetilde{\underline{u}}}}^c(\widetilde{\widetilde{e}}_1,~
\widetilde{\widetilde{e}}_3)=-H_{\widetilde{\widetilde{\underline{u}}}}(\widetilde{\widetilde{e}}_1,~
\widetilde{\widetilde{e}}_3)=0,\\
\widetilde{\widetilde{\underline{\chi}}}_{c23}&=-H_{\widetilde{\widetilde{\underline{u}}}}^c(\widetilde{\widetilde{e}}_2,~
\widetilde{\widetilde{e}}_3)=-H_{\widetilde{\widetilde{\underline{u}}}}(\widetilde{\widetilde{e}}_2,~
\widetilde{\widetilde{e}}_3)=0.\\
\end{aligned}
\end{equation}
Again we can ensure~$\widetilde{\widetilde{K}}_c\prec K$.~\\
\indent Using lemma 2.2, we get
\begin{equation}\label{eq108}
A^{-1}\frac{dA}{ds}\succ \widetilde{A}_c^{-1}\frac{d\widetilde{A}_c}{ds},\\
\end{equation}
together with~$\eqref{eq109}$,~we have
\begin{equation}\label{eq111}
\big(H_{\underline{u}}(e_i, e_j\big )\prec
\big(H_{\widetilde{\underline{u}}}^c(\widetilde{e}_i,
\widetilde{e}_j)\big).\\
\end{equation}
So
\begin{equation}\label{eq112}
4\underline{\omega}=-<D_L\underline{L}, L>=H_{\underline{u}}(e_3,
e_3)\leq H_{\widetilde{\underline{u}}}^c(\widetilde{e}_3,
\widetilde{e}_3),\\
\end{equation}
combining \eqref{eq113} and \eqref{eq112}, we obtain
\begin{equation}\label{eq114}
\underline{\omega}\leq -\frac{ct}{2}.\\
\end{equation}
For the same reason we have
\begin{equation}\label{eq115}
\underline{\omega}\geq \frac{ct}{2},~\\
\end{equation}
and the inequality (2.6) of theorem 2.1 follows.\\
\indent Combining lemma 2.2, \eqref{eq113} and \eqref{eq128}, we
obtain
\begin{equation}\label{eq186}
\begin{aligned}
-\sqrt{c}\cot(\sqrt{c}\widetilde{\rho})+ct=\widetilde{\underline{\chi}}_{caa}
\leq \underline{\chi}_{aa}\leq
\widetilde{\widetilde{\underline{\chi}}}_{caa}=
-\sqrt{c}\coth(\sqrt{c}\widetilde{\widetilde{\rho}})-ct,~~a=1,2,\\
\end{aligned}
\end{equation}
since we apply the comparison theorem along integral curves
of~$\underline{L}=-\nabla
\underline{u}$,~$\widetilde{\underline{L}}=-\nabla
\widetilde{\underline{u}}$~and~$\widetilde{\widetilde{\underline{L}}}=-\nabla
\widetilde{\widetilde{\underline{u}}}$~we
set~$\underline{u}=\widetilde{\underline{u}}=t+\widetilde{\rho}=\widetilde{\widetilde{\underline{u}}}
=t+\widetilde{\widetilde{\rho}}$,~so when~$t,
~\widetilde{\rho},~\widetilde{\widetilde{\rho}}$~are small (close to
0), we have
\begin{equation}\label{eq612}
\begin{aligned}
\frac{1}{t-\underline{u}}+ct\leq \underline{\chi}_{aa}\leq
\frac{1}{t-\underline{u}}-ct
,~~a=1,2,\\
\end{aligned}
\end{equation}
which is the desired inequality (2.7).\\
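Here we used~$\widetilde{\rho}=\widetilde{\widetilde{\rho}}=\underline{u}-t>0$~together
with the elementary expansions
\begin{equation}\nonumber\\
\begin{aligned}
\sqrt{c}\cot(\sqrt{c}\widetilde{\rho})=\frac{1}{\widetilde{\rho}}-\frac{c\widetilde{\rho}}{3}+O(\widetilde{\rho}^3),~~~
\sqrt{c}\coth(\sqrt{c}\widetilde{\widetilde{\rho}})=\frac{1}{\widetilde{\widetilde{\rho}}}
+\frac{c\widetilde{\widetilde{\rho}}}{3}+O(\widetilde{\widetilde{\rho}}^3),\\
\end{aligned}
\end{equation}
which show that the lower bound in \eqref{eq186} is at
least~$\frac{1}{t-\underline{u}}+ct$~and the upper bound is at
most~$\frac{1}{t-\underline{u}}-ct$~for small~$\widetilde{\rho}$.~\\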
\indent Using lemma 2.2, together with \eqref{eq109}, we have
\begin{equation}\nonumber\\
\big(\widetilde{\underline{\chi}}_{cij}\big)_{i,j=1}^3 \prec
\big(\underline{\chi}_{ij}\big)_{i,j=1}^3\prec
\big(\widetilde{\widetilde{\underline{\chi}}}_{cij}\big)_{i,j=1}^3,\\
\end{equation}
so
\begin{equation}\nonumber\\
\begin{aligned}
&\left(
\begin{array}{ccc}
1 & 0 & 1 \\
\end{array}
\right)\big(\widetilde{\underline{\chi}}_{cij}\big)\left(
\begin{array}{c}
1 \\
0 \\
1 \\
\end{array}
\right)\\
&\leq \left(
\begin{array}{ccc}
1 & 0 & 1 \\
\end{array}
\right)\big(\underline{\chi}_{ij}\big)\left(
\begin{array}{c}
1 \\
0 \\
1 \\
\end{array}
\right)\\
&\leq \left(
\begin{array}{ccc}
1 & 0 & 1 \\
\end{array}
\right)\big(\widetilde{\widetilde{\underline{\chi}}}_{cij}\big)\left(
\begin{array}{c}
1 \\
0 \\
1 \\
\end{array}
\right),\\
\end{aligned}
\end{equation}
thus
\begin{equation}\label{eq609}
\begin{aligned}
\widetilde{\underline{\chi}}_{c11}+2\widetilde{\underline{\chi}}_{c13}+\widetilde{\underline{\chi}}_{c33}
\leq
\underline{\chi}_{11}+2\underline{\chi}_{13}+\underline{\chi}_{33}\leq
\widetilde{\widetilde{\underline{\chi}}}_{c11}+2\widetilde{\widetilde{\underline{\chi}}}_{c13}+\widetilde{\widetilde{\underline{\chi}}}_{c33}.\\
\end{aligned}
\end{equation}
As from lemma 2.2 we have
\begin{equation}\nonumber\\
\begin{aligned}
\widetilde{\underline{\chi}}_{cii}\leq \underline{\chi}_{ii}\leq
\widetilde{\widetilde{\underline{\chi}}}_{cii},~~i=1,2,3,
\end{aligned}
\end{equation}
then \eqref{eq609} means
\begin{equation}\label{eq610}
\begin{aligned}
\widetilde{\underline{\chi}}_{c11}+2\widetilde{\underline{\chi}}_{c13}+\widetilde{\underline{\chi}}_{c33}
-\widetilde{\widetilde{\underline{\chi}}}_{c11}-\widetilde{\widetilde{\underline{\chi}}}_{c33}\leq
2\underline{\chi}_{13}\leq
\widetilde{\widetilde{\underline{\chi}}}_{c11}+2\widetilde{\widetilde{\underline{\chi}}}_{c13}+\widetilde{\widetilde{\underline{\chi}}}_{c33}
-\widetilde{\underline{\chi}}_{c11}-\widetilde{\underline{\chi}}_{c33}.\\
\end{aligned}
\end{equation}
Combining \eqref{eq601}, \eqref{eq113}, \eqref{eq128} and
\eqref{eq612}, a calculation gives
\begin{equation}\nonumber\\
\begin{aligned}
\big|\underline{\eta}_1\big|\leq -Ct.
\end{aligned}
\end{equation}
For~$\big|\underline{\eta}_2\big|$~we have the same result, and then we obtain the inequality (2.8) in theorem 2.1.\\
\indent After that, inequality (2.9) can be easily obtained from
\eqref{eq116}.\\
\indent To prove the inequality (2.10) of theorem 2.1 we need the following two steps:\\
\indent First, show~$\mathop \nabla\limits^0\underline{u}_t$~is
bounded, where~$\mathop \nabla\limits^0$~is the gradient on
Euclidean space. From equation \eqref{eq105}, we have
\begin{equation}\nonumber\\
\begin{aligned}
&\partial_t(g^{ij}\partial_i\underline{u}\partial_j\underline{u})=
\partial_tg^{ij}\partial_i\underline{u}\partial_j\underline{u}+2g^{ij}
\partial_i\underline{u}\partial_t\partial_j\underline{u}\\
&=\partial_tg^{ij}\partial_i\underline{u}\partial_j\underline{u}+2g^{ij}
\partial_i\underline{u}\partial_j\underline{u}_t\\
&=\partial_t(\partial_t\underline{u})^2\\
&=2\partial_t\underline{u}\partial_{tt}\underline{u}.\\
\end{aligned}
\end{equation}
Since the first-order derivatives of~$\underline{u}$~are bounded,
together with (2.6) we have
\begin{equation}\nonumber\\
\begin{aligned}
|N(\underline{u}_t)|\leq C.\\
\end{aligned}
\end{equation}
Also from \eqref{eq116}, we get
\begin{equation}\nonumber\\
\begin{aligned}
me_a(\underline{u}_t)=-\underline{\eta}_a+k_{Na},\\
\end{aligned}
\end{equation}
thanks to inequality (2.8), it implies
\begin{equation}\nonumber\\
\begin{aligned}
|e_a(\underline{u}_t)|\leq C,~~a=1,2,\\
\end{aligned}
\end{equation}
so we finish the first step.\\
\indent Second, as a result of the first step,\\
\begin{equation}\label{eq125}
\begin{aligned}
|1-m|=\big|\frac{\partial_t{\underline{u}}-1}{\partial_t{\underline{u}}}\big|
=|m||\underline{u}_t(t, x)-\underline{u}_t(t, 0)|\leq
C\sup_x|\mathop
\nabla\limits^0\underline{u}_t||x|\leq C|x|.\\
\end{aligned}
\end{equation}
Set~$\underline{v}=t+\delta|x|$~with~$\delta>0$;~then if we
choose~$\delta$~small enough,
\begin{equation}\nonumber\\
\begin{aligned}
\dot{\gamma}(\underline{v})=\underline{L}(\underline{v})=\partial_t{\underline{u}}
\partial_t{\underline{v}}-g^{ij}\partial_i{\underline{u}}\partial_i{\underline{v}}
=\partial_t{\underline{u}}-\delta
g^{ij}\partial_i{\underline{u}}\frac{x_j}{|x|}\geq
1-C\delta >0,\\
\end{aligned}
\end{equation}
while
\begin{equation}\nonumber\\
\begin{aligned}
\dot{\gamma}(\underline{u})=\underline{L}(\underline{u})=0.\\
\end{aligned}
\end{equation}
As~$\gamma$~is a backwards integral curve of~$\underline{L}$,~along the curve~$\gamma$~we
conclude
\begin{equation}\nonumber\\
\begin{aligned}
\underline{u}\geq \underline{v}=t+\delta|x|,\\
\end{aligned}
\end{equation}
thus
\begin{equation}\label{eq126}
\begin{aligned}
\underline{u}-t\geq \delta|x|.\\
\end{aligned}
\end{equation}
Combining \eqref{eq125}, \eqref{eq126} and (2.7), we have
\begin{equation}\nonumber\\
\begin{aligned}
&|1-m||\underline{\chi}_{aa}|\leq C_1,\\
\end{aligned}
\end{equation}
which means inequality (2.10).\\
{\bf Lemma 2.4.} Inside the geodesic
cone where~$\underline{u}\leq 0$~, we have
\begin{equation}\label{eq196}
\begin{aligned}
|\underline{u}|\leq C|t|,~~|u|\leq C|t|.\\
\end{aligned}
\end{equation}
{\bf Proof.} By \eqref{eq126}, along the integral curve
of~$\underline{L}$~starting from the origin, we have
\begin{equation}\label{eq305}
\begin{aligned}
t< t+\delta|x|\leq \underline{u}\leq 0,\\
\end{aligned}
\end{equation}
then
\begin{equation}\nonumber\\
\begin{aligned}
2t\leq u=2t-\underline{u}\leq t,\\
\end{aligned}
\end{equation}
so we complete the proof of lemma 2.4.\\
\indent By \eqref{eq116}, we have
\begin{equation}\nonumber\\
\begin{aligned}
&\chi_{aa}\underline{u}+\chi_{bb}\underline{u}+
\underline{\chi}_{aa}u+\underline{\chi}_{bb}u \\
&=2(\underline{\chi}_{aa}+\underline{\chi}_{bb})(t-\underline{u})+
(1-m^2)(\underline{\chi}_{aa}+\underline{\chi}_{bb})\underline{u}-2m(k_{aa}+k_{bb})\underline{u},\\
\end{aligned}
\end{equation}
then (2.7), (2.10) and lemma 2.4 yield the inequality (2.11).\\
\indent Now we prove the last inequality of theorem 2.1. Using lemma
2.2 again, we have
\begin{equation}\nonumber\\
\begin{aligned}
&\left(
\begin{array}{ccc}
e_1(\phi) & e_2(\phi) & 0 \\
\end{array}
\right)\big(\widetilde{\underline{\chi}}_{cij}\big)\left(
\begin{array}{c}
e_1(\phi) \\
e_2(\phi) \\
0 \\
\end{array}
\right)\\
&\leq \left(
\begin{array}{ccc}
e_1(\phi) & e_2(\phi) & 0 \\
\end{array}
\right)\big(\underline{\chi}_{ij}\big)\left(
\begin{array}{c}
e_1(\phi) \\
e_2(\phi) \\
0 \\
\end{array}
\right)\\
&\leq \left(
\begin{array}{ccc}
e_1(\phi) & e_2(\phi) & 0 \\
\end{array}
\right)\big(\widetilde{\widetilde{\underline{\chi}}}_{cij}\big)\left(
\begin{array}{c}
e_1(\phi) \\
e_2(\phi) \\
0 \\
\end{array}
\right),\\
\end{aligned}
\end{equation}
together with \eqref{eq113} and \eqref{eq128}, we arrive at
\begin{equation}\nonumber\\
\begin{aligned}
&\big(-\sqrt{c}\cot(\sqrt{c}\widetilde{\rho})+ct\big)\big((e_1(\phi))^2+(e_2(\phi))^2\big)\\
&\leq
\sum_{a,b=1}^{2}\underline{\chi}_{ab}e_a(\phi)e_b(\phi)\\
&\leq
\big(-\sqrt{c}\coth(\sqrt{c}\widetilde{\widetilde{\rho}})-ct\big)\big((e_1(\phi))^2+(e_2(\phi))^2\big),\\
\end{aligned}
\end{equation}
which implies (for~$t, \widetilde{\rho}, \widetilde{\widetilde{\rho}}
$~small)
\begin{equation}\label{eq350}
\begin{aligned}
\big(\frac{1}{t-\underline{u}}+ct\big)|\overline{\nabla}\phi|^2\leq
\sum_{a,b=1}^{2}\underline{\chi}_{ab}e_a(\phi)e_b(\phi)\leq
\big(\frac{1}{t-\underline{u}}-ct\big)|\overline{\nabla}\phi|^2,\\
\end{aligned}
\end{equation}
and \eqref{eq116} yields
\begin{equation}\label{eq155}
\begin{aligned}
&\sum_{a,b=1}^{2}(\chi_{ab}\underline{u}+\underline{\chi}_{ab}u)e_a(\phi)e_b(\phi)\\
&=2(t-\underline{u})\sum_{a,b=1}^{2}\underline{\chi}_{ab}e_a(\phi)e_b(\phi)+(1-m^2)\underline{u}
\sum_{a,b=1}^{2}\underline{\chi}_{ab}e_a(\phi)e_b(\phi)\\
&-2m\underline{u}\sum_{a,b=1}^{2}k_{ab}e_a(\phi)e_b(\phi).\\
\end{aligned}
\end{equation}
As~$k_{ab}$~is bounded, combining \eqref{eq350}, (2.10) and lemma
2.4 we conclude
\begin{equation}\label{eq351}
\begin{aligned}
(2+Ct)|\overline{\nabla}\phi|^2\leq \sum_{a,b=1}^{2}(\chi_{ab}\underline{u}+\underline{\chi}_{ab}u)e_a(\phi)e_b(\phi)
\leq(2-Ct)|\overline{\nabla}\phi|^2.\\
\end{aligned}
\end{equation}
So we finish the proof of theorem 2.1.\\
\section{\textbf{Non-concentration of the~$L^6$~part of the energy}}
\indent In this section, we will prove lemma 1.2, which is essential
to prove global existence and regularity. First we introduce some notations.\\
\indent Let~$z_0=(0, 0)$~be the vertex of the backward geodesic
cone; then
\begin{equation}\nonumber\\
\begin{aligned}
Q(z_0)=\{(t, x)\in [t_0, 0)\times \mathbb{R}^3:~\underline{u}\leq
0,~~t_0<0\},\\
\end{aligned}
\end{equation}
denotes the backward geodesic cone. If~$t_0\leq s_1 < s_2 <0$,~set
\begin{equation}\nonumber\\
\begin{aligned}
Q_{s_1}^{s_2}=Q(z_0)\cap([s_1, s_2]),\\
\end{aligned}
\end{equation}
and
\begin{equation}\nonumber\\
\begin{aligned}
M_{s_1}^{s_2}=\partial Q_{s_1}^{s_2}=\{(t, x)\in Q_{s_1}^{s_2}:~\underline{u}=0\},\\
\end{aligned}
\end{equation}
denotes the mantle associated with the truncated cone~$Q_{s_1}^{s_2}$.~\\
\begin{equation}\nonumber\\
\begin{aligned}
Q(s)=\{x\in \mathbb{R}^3:~\underline{u}\leq 0,~t=s\}\\
\end{aligned}
\end{equation}
denotes the spatial cross-section of the backward
cone~$Q(z_0)$~at time~$s$.~\\
\indent Define the energy of problem~$\eqref{eq1}$~
\begin{equation}\label{eq2}
E_1(t)=\frac{1}{2}\int_{\mathbb{R}^3}
\Big(\phi_{t}^2+g^{ij}(t,x)\partial_i\phi\partial_j\phi+\frac{\phi^6}{3}\Big)\,\mathrm{d}x.\\
\end{equation}
As we have shown in section 2, $\partial_{tt}\underline{u}$ and
$\mathop \nabla\limits^0\underline{u}_t$~are locally bounded;
hence~$\underline{u}_t$~is continuous, and together with \eqref{eq105}
we have
\begin{equation}\label{eq131}
\lim_{t, x\rightarrow 0}m(t,x)=\frac{1}{\lim_{t, x\rightarrow
0}\partial_t\underline{u}(t,x)}=\frac{1}{\lim_{t, x\rightarrow
0}\partial_t\underline{u}(t,0)}=1,\\
\end{equation}
that is:~$m=1+\mathcal {O}(t).$~So when~$t$~is small,~$E_1(t)$~has an
equivalent form
\begin{equation}\label{eq3}
E(t)=\frac{1}{4}\int_{\mathbb{R}^3}
\Big(m^{-1}(L(\phi))^2+m(\underline{L}(\phi))^2
+(m+m^{-1})|\overline{\nabla}\phi|^2+\frac{m+m^{-1}}{3}\phi^6\Big)\,\mathrm{d}v,\\
\end{equation}
where~$|\overline{\nabla}\phi|^2=\big(e_1(\phi)\big)^2+\big(e_2(\phi)\big)^2,$~and~$dv=\sqrt{|g|}dx$~is
the volume element corresponding to the metric~$g$.~Denoting the
energy density
\begin{equation}\nonumber\\
e(t)=\frac{1}{4}\Big(m^{-1}(L(\phi))^2+m(\underline{L}(\phi))^2
+(m+m^{-1})|\overline{\nabla}\phi|^2+\frac{m+m^{-1}}{3}\phi^6\Big).\\
\end{equation}
\indent We then define the energy flux across~$M_s^t$:~
\begin{equation}\label{eq320}
Flux_1(\phi,
M_s^t)=\int_{M_s^t}\frac{\frac{\partial_t\underline{u}}{2}\big(\phi_t^2+g^{ij}\partial_i\phi
\partial_j\phi+\frac{\phi^6}{3}\big)-\phi_tg^{ij}\partial_i\underline{u}\partial_j\phi}
{\sqrt{(\partial_t\underline{u})^2+\sum_{j=1}^3(
\partial_j\underline{u})^2}}\mathrm{d}\nu,\\
\end{equation}
where~$d\nu$~denotes the induced Lebesgue measure
on~$M_s^t.$~Similar to the energy, it has an equivalent form
when~$t$~is small
\begin{equation}\label{eq152}
Flux(\phi,
M_s^t)=\int_{M_s^t}\frac{|\overline{\nabla}\phi|^2+\big(\underline{L}(\phi)\big)^2+\frac{\phi^6}{3}}
{2\sqrt{(\partial_t\underline{u})^2+(g^{ij}\partial_i\underline{u})^2}}\mathrm{d}\sigma,\\
\end{equation}
where~$d\sigma=\sqrt{|g|}d\nu$~denotes the volume element
corresponding to the metric~$g$~on~$M_s^t$,~and it implies
\begin{equation}\nonumber\\
Flux(\phi,
M_s^t)\geq 0.\\
\end{equation}
{\bf Lemma 3.1.} When~$t$~is small,~$E_1(t)$~and~$Flux_1(\phi,
M_s^t)$~are equivalent
to~$E(t)$~and~$Flux(\phi,
M_s^t)$~respectively, that is:~$E_1(t)\backsimeq E(t), Flux_1(\phi,
M_s^t)\backsimeq Flux(\phi,
M_s^t)$.~\\
{\bf Proof.} Since
\begin{equation}\nonumber\\
\underline{L}= m^{-1}(\partial_t+N), L=m(\partial_t-N),\\
\end{equation}
we get
\begin{equation}\label{eq500}
\partial_t=\frac{1}{2}(m^{-1}L+m\underline{L}),\\
\end{equation}
so
\begin{equation}
\begin{aligned}
(\partial_t\phi)^2 &=\big[\frac{1}{2}\big(m^{-1}L(\phi)
+m\underline{L}(\phi)\big)\big]^2\\
&=\frac{1}{4}\big(m^{-2}\big(L(\phi)\big)^2+2L(\phi)\underline{L}(\phi)
+m^2\big(\underline{L}(\phi)\big)^2\big).\nonumber\\
\end{aligned}
\end{equation}\\
And
\begin{equation}\nonumber\\
\langle\nabla\phi,\nabla\phi\rangle=-(\partial_t\phi)^2+g^{ij}\partial_i\phi\partial_j\phi
=\big(e_1(\phi)\big)^2+\big(e_2(\phi)\big)^2-L(\phi)\underline{L}(\phi),\\
\end{equation}
which yield
\begin{equation}\label{eq175}
g^{ij}\partial_i\phi\partial_j\phi=|\overline{\nabla}\phi|^2-L(\phi)\underline{L}(\phi)+
(\partial_t\phi)^2,\\
\end{equation}
then we get
\begin{equation}\nonumber\\
\begin{aligned}
&E_1(t)=\frac{1}{4}\int_{\mathbb{R}^3}
\Big(m^{-2}(L(\phi))^2+m^2(\underline{L}(\phi))^2
+2|\overline{\nabla}\phi|^2+\frac{2}{3}\phi^6\Big)\,\mathrm{d}x,\\
&Flux_1(\phi,
M_s^t)=\int_{M_s^t}\frac{1}{\sqrt{(\partial_t\underline{u})^2+\sum_{j=1}^3(
\partial_j\underline{u})^2}}\Big[\frac{1}{2\partial_t\underline{u}}\big((\partial_t\underline{u}\phi_t)^2
-2\partial_t\underline{u}\phi_tg^{ij}\partial_i\underline{u}\partial_j\phi\\
&+ (g^{ij}\partial_i\underline{u}\partial_j\phi)^2\big)
+\frac{\partial_t\underline{u}}{2}\big(g^{ij}\partial_i\phi\partial_j\phi+\frac{\phi^6}
{3}\big)-\frac{1}{2\partial_t\underline{u}}\big(g^{ij}\partial_i\underline{u}\partial_j\phi\big)^2\Big]\mathrm{d}\nu\\
&=\int_{M_s^t}\frac{1}{\sqrt{(\partial_t\underline{u})^2+\sum_{j=1}^3(
\partial_j\underline{u})^2}}\Big[\frac{m}{2}\big(\underline{L}(\phi)\big)^2+\frac{1}{2m}\big(
|\overline{\nabla}\phi|^2-L(\phi)\underline{L}(\phi)+
(\partial_t\phi)^2\big)\\
&+\frac{\phi^6}{6m}
-\frac{\partial_t\underline{u}}{2}\Big(\frac{g^{ij}\partial_i\underline{u}\partial_j\phi}{\partial_t\underline{u}
}\Big)^2\Big]\mathrm{d}\nu\\
&=\int_{M_s^t}\frac{1}{\sqrt{(\partial_t\underline{u})^2+\sum_{j=1}^3(
\partial_j\underline{u})^2}}\Big[\frac{m}{2}\big(\underline{L}(\phi)\big)^2+\frac{1}{2m}\big(
|\overline{\nabla}\phi|^2\\
&-m^{-1}(\partial_t\phi+N(\phi))m(\partial_t\phi-N(\phi))+
(\partial_t\phi)^2\big) +\frac{\phi^6}{6m}
-\frac{1}{2m}\big(N(\phi)\big)^2\Big]\mathrm{d}\nu\\
&=\int_{M_s^t}\frac{
\frac{1}{m}|\overline{\nabla}\phi|^2+m\big(\underline{L}(\phi)\big)^2
+\frac{1}{3m}\phi^6}{2\sqrt{(\partial_t\underline{u})^2+\sum_{j=1}^3(
\partial_j\underline{u})^2}}\mathrm{d}\nu,\\
\end{aligned}
\end{equation}
together with~$\eqref{eq131}$,~we obtain the result.\\
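\indent As a purely illustrative aside (not part of the proof), the algebra in the computation above can be checked symbolically. The sketch below assumes a Python environment with SymPy and treats~$L(\phi)$,~$\underline{L}(\phi)$,~$|\overline{\nabla}\phi|^2$~and~$\phi^6$~as independent scalar symbols; all variable names are ours.\\
\begin{verbatim}
import sympy as sp

m, L, Lb, g2, p6 = sp.symbols('m L Lb g2 p6')  # L = L(phi), Lb = Lbar(phi),
                                               # g2 = |bar-nabla phi|^2, p6 = phi^6
dt = sp.Rational(1, 2) * (L / m + m * Lb)      # eq. (eq500)
gij = g2 - L * Lb + dt**2                      # eq. (eq175)

# Density of E_1(t), rewritten as in the proof of Lemma 3.1:
e1 = sp.Rational(1, 2) * (dt**2 + gij + p6 / 3)
target = sp.Rational(1, 4) * (L**2 / m**2 + m**2 * Lb**2
                              + 2 * g2 + sp.Rational(2, 3) * p6)
assert sp.expand(e1 - target) == 0             # the two densities agree
\end{verbatim}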
\indent To finish the proof, we shall require several other lemmas.
The first is standard and says that the energy associated with our
equation is
bounded.\\
{\bf Lemma 3.2.} If ~$\phi\in C^\infty([t_0, 0)\times
\mathbb{R}^3)$~is a solution to (1.1), then~$E_1(t)$~or~$E(t)$~is
bounded for all~$t_0\leq t <0$.~Additionally, if ~$t_0\leq s< t
<0$,~then
\begin{equation}\label{eq135}
Flux(\phi, M_s^t)\rightarrow 0\quad\mbox{as }s, t\rightarrow 0.\\
\end{equation}
{\bf Proof.} To prove the boundedness of energy one multiplies both
sides of the equation
$\phi_{tt}-\frac{\mathrm{\partial}}{\mathrm{\partial}x_{i}}(g^{ij}(t,x)\phi_j)+\phi^5=0$
by $\partial_{t}\phi$ to obtain the identity
\begin{equation}\label{eq138}
\frac{\partial}{\partial{t}}\Big(\frac{\phi_{t}^2+g^{ij}(t,x)\phi_i\phi_j}{2}+\frac{\phi^6}{6}\Big)
-\frac{1}{2}\partial_tg^{ij}(t,x)\phi_i\phi_j-\frac{\mathrm{\partial}}
{\mathrm{\partial}x_{i}}\Big(\phi_{t}g^{ij}(t,x)\phi_j\Big)=0.
\end{equation}\\
Thus,
\begin{equation}\label{eq140}
\begin{aligned}
&\frac{\partial}{\partial{t}}\int_{\mathbb{R}^3}\Big(\frac{\phi_{t}^2+g^{ij}(t,x)
\phi_i\phi_j}{2}+\frac{\phi^6}{6}\Big)\mathrm{d}x
-\int_{\mathbb{R}^3}\frac{1}{2}\partial_tg^{ij}(t,x)\phi_i\phi_j\mathrm{d}x\\
&-\int_{\mathbb{R}^3} \frac{\mathrm{\partial}}
{\mathrm{\partial}x_{i}}\Big(\phi_{t}g^{ij}(t,x)\phi_j\Big)\mathrm{d}x=0.\\
\end{aligned}
\end{equation}
Since the last term vanishes by the divergence theorem (using the
fact that~$\phi(t,x)=0$~for~$|x|> C+t$), \eqref{eq140}
implies
\begin{equation}\nonumber\\
\begin{aligned}
\partial_tE_1(t)\leq CE_1(t),\\
\end{aligned}
\end{equation}
which means
\begin{equation}\nonumber\\
\begin{aligned}
E_1(t)\leq E_1(t_0)e^{C(t-t_0)},\\
\end{aligned}
\end{equation}
so~$E_1(t)$~or~$E(t)$~is bounded, as desired.\\
\indent To prove the other half of lemma 3.2, we integrate
\eqref{eq138} over~$Q_s^t$~and arrive at the ``flux identity'':
\begin{equation}\nonumber\\
\begin{aligned}
&\frac{1}{2}\int_{Q(t)}\Big(\phi_t^2(t, x)+g^{ij}(t,
x)\partial_i\phi(t, x)\partial_j\phi(t, x)+\frac{\phi^6(t,
x)}{3}\Big)\mathrm{d}x+Flux_1(\phi, M_s^t)\\
&-\frac{1}{2}\int_{Q(s)}\Big(\phi_t^2(s, x)+g^{ij}(s,
x)\partial_i\phi(s, x)\partial_j\phi(s, x)+\frac{\phi^6(s,
x)}{3}\Big)\mathrm{d}x\\
&=\frac{1}{2}\int_{Q_s^t}\partial_tg^{ij}(\tau,x)\partial_i\phi(\tau, x)
\partial_j\phi(\tau, x)\mathrm{d}x\mathrm{d}\tau,\\
\end{aligned}
\end{equation}
that is
\begin{equation}\label{eq330}
\begin{aligned}
E_1(\phi, Q(t))+Flux_1(\phi, M_s^t)-E_1(\phi, Q(s))\leq C(t_0)\int_s^t E_1(\phi, Q(\tau))\mathrm{d}\tau,\\
\end{aligned}
\end{equation}
where~$C(t_0)$~is a constant depending on~$t_0$.~This implies
\begin{equation}\label{eq331}
\begin{aligned}
&E_1(\phi, Q(t))-C(t_0)\int_{t_0}^tE_1(\phi,
Q(\tau))\mathrm{d}\tau+Flux_1(\phi,
M_s^t)\\
&\leq E_1(\phi, Q(s))-C(t_0)\int_{t_0}^sE_1(\phi,
Q(\tau))\mathrm{d}\tau,\\
\end{aligned}
\end{equation}
which implies~$E_1(\phi, Q(t))-C(t_0)\int_{t_0}^tE_1(\phi,
Q(\tau))\mathrm{d}\tau$~is a non-increasing function on~$[t_0,
0)$.~It is also bounded, as we have shown above; hence~$E_1(\phi,
Q(t))-C(t_0)\int_{t_0}^tE_1(\phi, Q(\tau))\mathrm{d}\tau$~\\
and~$E_1(\phi, Q(s))-C(t_0)\int_{t_0}^sE_1(\phi,
Q(\tau))\mathrm{d}\tau$~in \eqref{eq330} must approach a
common limit. This in turn gives the important fact that\\
\begin{equation}\nonumber\\
Flux_1(\phi, M_s^t)\rightarrow 0\quad\mbox{as }s, t\rightarrow 0,\\
\end{equation}
thanks to lemma 3.1, we complete the proof of lemma 3.2.\\
\indent To prove lemma 1.2, we need to introduce the energy-momentum
tensor~$\Pi$~as a symmetric 2-tensor by
\begin{equation}\nonumber\\
\begin{aligned}
\Pi(X, Y)&=X(\phi)Y(\phi)-\frac{1}{2}<X, Y>|\nabla\phi|^2,\\
\Pi_{\alpha\beta}&=\partial_\alpha \phi\partial_\beta
\phi-\frac{1}{2}g_{\alpha\beta}|\nabla\phi|^2,\\
\end{aligned}
\end{equation}
where~$X, Y$~are vector fields and~$\phi$~ a fixed~$C^1$~function.
Then we have
\begin{equation}\label{eq144}
\begin{aligned}
\Pi(\underline{L},
\underline{L})&=\big(\underline{L}(\phi)\big)^2,~~
\Pi(L, L)=\big(L(\phi)\big)^2,\\
\Pi(\underline{L}, e_a)&=\underline{L}(\phi)e_a(\phi), ~\Pi(L,
e_a)=L(\phi)e_a(\phi),\\
\Pi(\underline{L}, L)&=L(\phi)\underline{L}(\phi)-\frac{1}{2}<L,
\underline{L}>|\nabla \phi|^2=L(\phi)\underline{L}(\phi)+|\nabla
\phi|^2=|\overline{\nabla} \phi|^2,\\
\Pi(e_a, e_b)&=e_a(\phi)e_b(\phi)-\frac{1}{2}<e_a, e_b>|\nabla
\phi|^2\\
&=e_a(\phi)e_b(\phi)-\frac{1}{2}\delta_{ab}(|\overline{\nabla}
\phi|^2-L(\phi)\underline{L}(\phi)),\\
\end{aligned}
\end{equation}
where~$\delta_{ab}$~denotes the Kronecker delta function.\\
\indent We also need a key formula, stated as a lemma below.\\
{\bf Lemma 3.3.} Let~$\phi$~be a~$C^1$~function and~$\Pi$~be
the associated energy-momentum tensor. Let~$X$~be a vector field,
and set~$P_\alpha=\Pi_{\alpha\beta}X^\beta$,~then
\begin{equation}\label{eq141}
\begin{aligned}
div P\equiv D_\alpha P^\alpha=\Box_g\phi
X(\phi)+\frac{1}{2}\Pi^{\alpha\beta} {^{(X)}\!\pi}_{\alpha\beta},\\
\end{aligned}
\end{equation}
where~$\Box_g$~is the wave operator associated with the given
metric~$g$,~given by the following formula:\\
\begin{equation}\label{eq142}
\begin{aligned}
&\Box_g\phi=|g|^{-1/2}\partial_{\alpha}(g^{\alpha\beta}|
g|^{1/2}\partial_{\beta}\phi)\\
&=-\partial_{tt}\phi+\partial_i\big(g^{ij}(t,
x)\phi_j\big)+\frac{1}{2}g^{ij}g^{lm}\partial
_mg_{ij}\partial_l\phi-\frac{1}{2}g^{ij}\partial_tg_{ij}\partial_t\phi,\\
\end{aligned}
\end{equation}
where~$|g|$~is the absolute value of the determinant of the
matrix~$(g_{\alpha\beta})$~and~$(g^{\alpha\beta})$~its inverse
matrix.\\
\indent For the proof, see \cite{Alinhac}.\\
\indent We then construct a
multiplier~$\frac{1}{2}(\underline{u}L+u\underline{L})+1$,~which is
close to the Morawetz multiplier~$t\partial_t+r\partial_r+1$,~and
set~$Y=\frac{1}{2}(\underline{u}L+u\underline{L})$.~\\
\indent
Following Christodoulou and Klainerman \cite{Chris}, the deformation
tensor of a given vector field~$X$~is the symmetric
2-tensor~$^{(X)}\!\pi$~defined by
\begin{equation}\nonumber\\
^{(X)}\!\pi(Y,Z)\equiv\pi(Y,Z)=<D_YX,Z>+<D_ZX,Y>.\\
\end{equation}
In local coordinates
\begin{equation}\nonumber\\
\pi_{\alpha\beta}=D_{\alpha}X_{\beta}+D_{\beta}X_{\alpha},\\
\end{equation}
as
\begin{equation}\nonumber\\
\begin{aligned}
\nabla u&=2\nabla t-\nabla \underline{u}=-2\partial_t+\underline{L}\\
&=-(m^{-1}L+m\underline{L})+\underline{L}=-m^{-1}L+(1-m)\underline{L},\\
\end{aligned}
\end{equation}
then we can compute the deformation tensor
of~$Y=\frac{1}{2}(\underline{u}L+u\underline{L})$~as follows
\begin{equation}\nonumber\\
\begin{aligned}
&^{(Y)}\!\pi_{\underline{L}\underline{L}}=0, ~~~~^{(Y)}\!\pi_{L
\underline{L}}=-2-\frac{2}{m}+2\underline{\omega}\underline{u},\\
&^{(Y)}\!\pi_{LL}=4(1-m)-4\underline{\omega}u,~~~
^{(Y)}\!\pi_{\underline{L}e_a}=(\eta_a-\underline{\eta}_a)\underline{u},\\
&^{(Y)}\!\pi_{Le_a}=\xi_a\underline{u}+2\underline{\eta}_au,~~~
^{(Y)}\!\pi_{e_ae_b}=\underline{\chi}_{ab}u+\chi_{ab}\underline{u}.\\
\end{aligned}
\end{equation}
Also
\begin{equation}\label{eq183}
\begin{aligned}
div Y&=g^{\alpha\beta}<D_\alpha Y, e_\beta>=g^{\alpha\beta}<D_\alpha
\frac{1}{2}(\underline{u}L+u\underline{L}), e_\beta>\\
&=\frac{1}{2}(\chi_{aa}\underline{u}+\chi_{bb}\underline{u}+
\underline{\chi}_{aa}u+\underline{\chi}_{bb}u)+1+m^{-1}-\underline{u}\underline{\omega}.\\
\end{aligned}
\end{equation}
Combining \eqref{eq1} and \eqref{eq142}, we get
\begin{equation}\label{eq143}
\begin{aligned}
\Box_g\phi=\phi^5+\frac{1}{2}g^{ij}g^{lm}\partial
_mg_{ij}\partial_l\phi-\frac{1}{2}g^{ij}\partial_tg_{ij}\partial_t\phi,\\
\end{aligned}
\end{equation}
together with \eqref{eq141}, substituting~$Y$~for~$X$~we arrive at
\begin{equation}\label{eq170}
\begin{aligned}
&div P\equiv D_\alpha P^\alpha=\Box_g\phi
Y(\phi)+\frac{1}{2}\Pi^{\alpha\beta} {^{(Y)}\!\pi}_{\alpha\beta}\\
&=\big(\phi^5+\frac{1}{2}g^{ij}g^{lm}\partial
_mg_{ij}\partial_l\phi-\frac{1}{2}g^{ij}\partial_tg_{ij}\partial_t\phi\big)Y(\phi)+\frac{1}{2}\Pi^{\alpha\beta}
{^{(Y)}\!\pi}_{\alpha\beta}\\
&=Y(\frac{\phi^6}{6})+\big(\frac{1}{2}g^{ij}g^{lm}\partial
_mg_{ij}\partial_l\phi-\frac{1}{2}g^{ij}\partial_tg_{ij}\partial_t\phi\big)Y(\phi)+\frac{1}{2}\Pi^{\alpha\beta}
{^{(Y)}\!\pi}_{\alpha\beta}\\
&=div(\frac{\phi^6Y}{6})-\frac{\phi^6}{6}divY+\big(\frac{1}{2}g^{ij}g^{lm}\partial
_mg_{ij}\partial_l\phi-\frac{1}{2}g^{ij}\partial_tg_{ij}\partial_t\phi\big)Y(\phi)+\frac{1}{2}\Pi^{\alpha\beta}
{^{(Y)}\!\pi}_{\alpha\beta}\\
\end{aligned}
\end{equation}
where~$P_\alpha=\Pi_{\alpha\beta}Y^\beta,$~and it means
\begin{equation}\label{eq171}
\begin{aligned}
-div(P-\frac{1}{6}\phi^6Y)&=\frac{1}{6}\phi^6div
Y-Y(\phi)\big(\frac{1}{2}g^{ij}g^{lm}\partial
_mg_{ij}\partial_l\phi-\frac{1}{2}g^{ij}\partial_tg_{ij}\partial_t\phi\big)\\
&-\frac{1}{2}\Pi^{\alpha\beta}
{^{(Y)}\!\pi}_{\alpha\beta}\mathop =\limits^{\vartriangle} \widetilde{R}(t, x).\\
\end{aligned}
\end{equation}
By \eqref{eq175} and \eqref{eq143}, we have
\begin{equation}\nonumber\\
\begin{aligned}
\Box_g(\frac{1}{2}\phi^2)&=div\big(\nabla(\frac{1}{2}\phi^2)\big)=div(\phi\nabla
\phi)=<D_\alpha \phi\nabla \phi, \partial^\alpha>\\
&=\partial_\alpha(\phi)<\nabla \phi,
\partial^\alpha>+\phi\Box_g\phi\\
&=g^{\alpha\beta}\partial_\alpha(\phi)<\nabla \phi,
\partial_\beta>+\phi\Box_g\phi\\
&=-(\partial_t\phi)^2+g^{ij}\phi_i\phi_j+\phi\Box_g\phi\\
&=|\overline{\nabla}\phi|^2-L(\phi)\underline{L}(\phi)+\phi^6+\phi(\frac{1}{2}g^{ij}g^{lm}\partial
_mg_{ij}\partial_l\phi-\frac{1}{2}g^{ij}\partial_tg_{ij}\partial_t\phi),\\
\end{aligned}
\end{equation}
so
\begin{equation}\label{eq178}
\begin{aligned}
-div(\phi\nabla
\phi)=-|\overline{\nabla}\phi|^2+L(\phi)\underline{L}(\phi)-\phi^6-\phi(\frac{1}{2}g^{ij}g^{lm}\partial
_mg_{ij}\partial_l\phi-\frac{1}{2}g^{ij}\partial_tg_{ij}\partial_t\phi).\\
\end{aligned}
\end{equation}
Adding \eqref{eq171} and \eqref{eq178}, we get
\begin{equation}\label{eq179}
\begin{aligned}
&-div(P-\frac{1}{6}\phi^6Y+\phi\nabla \phi)=\widetilde{R}(t, x)\\
&-|\overline{\nabla}\phi|^2+L(\phi)\underline{L}(\phi)-\phi^6-\phi(\frac{1}{2}g^{ij}g^{lm}\partial
_mg_{ij}\partial_l\phi-\frac{1}{2}g^{ij}\partial_tg_{ij}\partial_t\phi)\mathop
=\limits^{\vartriangle} R(t, x).\\
\end{aligned}
\end{equation}
Integrating the identity \eqref{eq179} over the truncated geodesic
cone~$Q_S^T,~S< T<0$,~we arrive at
\begin{equation}\nonumber\\
\begin{aligned}
&-\int_{Q(T)}<P-\frac{1}{6}\phi^6Y+\phi\nabla \phi,
-\partial_t>\mathrm{d}v-\int_{M_S^T}\frac{<P-\frac{1}{6}\phi^6Y+\phi\nabla
\phi, \nabla
\underline{u}>}{\sqrt{(\partial_t\underline{u})^2+(g^{ij}\partial_i\underline{u})^2}}\mathrm{d}\sigma\\
&+\int_{Q(S)}<P-\frac{1}{6}\phi^6Y+\phi\nabla \phi,
-\partial_t>\mathrm{d}v=\int_{Q_S^T}R(t, x)\mathrm{d}v\mathrm{d}t,\\
\end{aligned}
\end{equation}
that is
\begin{equation}\label{eq149}
\begin{aligned}
&\int_{Q(T)}\Pi(Y, \partial_t)-<\frac{1}{6}\phi^6Y-\phi\nabla \phi,
\partial_t>\mathrm{d}v-\int_{Q(S)}\Pi(Y, \partial_t)-<\frac{1}{6}\phi^6Y-\phi\nabla \phi,
\partial_t>\mathrm{d}v\\
&+\int_{M_S^T}\frac{<P-\frac{1}{6}\phi^6Y+\phi\nabla \phi,
\underline{L}>}
{\sqrt{(\partial_t\underline{u})^2+(g^{ij}\partial_i\underline{u})^2}}\mathrm{d}\sigma
=\int_{Q_S^T}R(t, x)\mathrm{d}v\mathrm{d}t.\\
\end{aligned}
\end{equation}
By \eqref{eq500}, we have
\begin{equation}\label{eq150}
\begin{aligned}
&\Pi(Y, \partial_t)-<\frac{1}{6}\phi^6Y-\phi\nabla \phi,
\partial_t>=\Pi\big(\frac{1}{2}(\underline{u}L+u\underline{L}),~
\frac{1}{2}(m^{-1}L+m\underline{L})\big)\\
&-<\frac{1}{6}\phi^6\frac{1}{2}(\underline{u}L+u\underline{L})-\phi\nabla
\phi,~
\frac{1}{2}(m^{-1}L+m\underline{L})>\\
&=\frac{mu}{4}\big(\underline{L}(\phi)\big)^2+\frac{\underline{u}}{4m}\big(L(\phi)\big)^2
+\big(\frac{u}{4m}+\frac{m\underline{u}}{4}\big)|\overline{\nabla}\phi|^2
+\big(\frac{u}{12m}+\frac{m\underline{u}}{12}\big)\phi^6\\
&+\frac{1}{2m}\phi L(\phi)+\frac{m}{2}\phi\underline{L}(\phi),\\
\end{aligned}
\end{equation}
and
\begin{equation}\label{eq151}
\begin{aligned}
&<P-\frac{1}{6}\phi^6Y+\phi\nabla \phi, ~\underline{L}>=\Pi(Y,~
\underline{L})-<\frac{1}{6}\phi^6Y-\phi\nabla \phi, ~\underline{L}>\\
&=\Pi\big(\frac{1}{2}(m^{-1}L+m\underline{L}),~
\underline{L}\big)-<\frac{1}{6}\phi^6\frac{1}{2}(\underline{u}L+u\underline{L})-\phi\nabla
\phi,~
\underline{L}>\\
&=\frac{1}{2}\underline{u}|\overline{\nabla}\phi|^2+\frac{1}{2}u\big(\underline{L}(\phi)\big)^2+
\frac{\underline{u}\phi^6}{6}+\phi\underline{L}(\phi),\\
\end{aligned}
\end{equation}
then \eqref{eq149} becomes
\begin{equation}\label{180}
\begin{aligned}
&\int_{Q(T)}\big[\frac{mu}{4}\big(\underline{L}(\phi)\big)^2+\frac{\underline{u}}{4m}\big(L(\phi)\big)^2
+\big(\frac{u}{4m}+\frac{m\underline{u}}{4}\big)|\overline{\nabla}\phi|^2
+\big(\frac{u}{12m}+\frac{m\underline{u}}{12}\big)\phi^6\\
&+\frac{1}{2m}\phi
L(\phi)+\frac{m}{2}\phi\underline{L}(\phi)\big]\mathrm{d}v+\int_{M_S^T}\frac{\frac{1}{2}u\big(\underline{L}(\phi)\big)^2
+\frac{1}{2}\underline{u}|\overline{\nabla}\phi|^2+\frac{\underline{u}}{6}\phi^6+
\phi\underline{L}(\phi)}{\sqrt{(\partial_t\underline{u})^2+(g^{ij}\partial_i\underline{u})^2}}\mathrm{d}\sigma\\
&-\int_{Q(S)}\big[\frac{mu}{4}\big(\underline{L}(\phi)\big)^2+\frac{\underline{u}}{4m}\big(L(\phi)\big)^2
+\big(\frac{u}{4m}+\frac{m\underline{u}}{4}\big)|\overline{\nabla}\phi|^2
+\big(\frac{u}{12m}+\frac{m\underline{u}}{12}\big)\phi^6\\
&+\frac{1}{2m}\phi
L(\phi)+\frac{m}{2}\phi\underline{L}(\phi)\big]\mathrm{d}v=\int_{Q_S^T}R(t,
x)\mathrm{d}v\mathrm{d}t,\\
\end{aligned}
\end{equation}
where~$Q(S)=\{x\in \mathbb{R}^3:~\underline{u}\leq 0, t=S\}$.~Noting
that~$\underline{u}=0$~on the mantle~$M_S^T$,~and that when~$S, T$~are
small enough we may set~$m=1$~(the resulting error is only
$\mathcal {O}(t^2)E(t)$),~\eqref{180} takes the simpler
form
\begin{equation}\label{eq181}
\begin{aligned}
&\int_{Q(T)}\big[\frac{u}{4}\big(\underline{L}(\phi)\big)^2+\frac{\underline{u}}{4}\big(L(\phi)\big)^2
+\frac{T}{2}|\overline{\nabla}\phi|^2
+\frac{T}{6}\phi^6+\frac{1}{2}\phi
L(\phi)+\frac{1}{2}\phi\underline{L}(\phi)\big]\mathrm{d}v\\
&+\int_{M_S^T}\frac{t\big(\underline{L}(\phi)\big)^2
+\phi\underline{L}(\phi)}{\sqrt{(\partial_t\underline{u})^2+(g^{ij}\partial_i\underline{u})^2}}\mathrm{d}\sigma\\
&-\int_{Q(S)}\big[\frac{u}{4}\big(\underline{L}(\phi)\big)^2+\frac{\underline{u}}{4}\big(L(\phi)\big)^2
+\frac{S}{2}|\overline{\nabla}\phi|^2
+\frac{S}{6}\phi^6+\frac{1}{2}\phi
L(\phi)+\frac{1}{2}\phi\underline{L}(\phi)\big]\mathrm{d}v\\
&=\int_{Q_S^T}R(t,
x)\mathrm{d}v\mathrm{d}t.\\
\end{aligned}
\end{equation}
Denote
\begin{equation}\nonumber\\
\begin{aligned}
&I=\int_{Q(T)}\big[\frac{u}{4}\big(\underline{L}(\phi)\big)^2+\frac{\underline{u}}{4}\big(L(\phi)\big)^2
+\frac{T}{2}|\overline{\nabla}\phi|^2
+\frac{T}{6}\phi^6+\frac{1}{2}\phi
L(\phi)+\frac{1}{2}\phi\underline{L}(\phi)\big]\mathrm{d}v,\\
&II=\int_{M_S^T}\frac{t\big(\underline{L}(\phi)\big)^2
+\phi\underline{L}(\phi)}{\sqrt{(\partial_t\underline{u})^2+(g^{ij}\partial_i\underline{u})^2}}\mathrm{d}\sigma
=\int_{M_S^0}\frac{t\big(\underline{L}(\phi)\big)^2
+\phi\underline{L}(\phi)}{\sqrt{(\partial_t\underline{u})^2+(g^{ij}\partial_i\underline{u})^2}}\mathrm{d}\sigma\\
&-\int_{M_T^0}\frac{t\big(\underline{L}(\phi)\big)^2
+\phi\underline{L}(\phi)}{\sqrt{(\partial_t\underline{u})^2+(g^{ij}\partial_i\underline{u})^2}}\mathrm{d}\sigma
=II_1-II_2,\\
&III=-\int_{Q(S)}\big[\frac{u}{4}\big(\underline{L}(\phi)\big)^2+\frac{\underline{u}}{4}\big(L(\phi)\big)^2
+\frac{S}{2}|\overline{\nabla}\phi|^2
+\frac{S}{6}\phi^6+\frac{1}{2}\phi
L(\phi)+\frac{1}{2}\phi\underline{L}(\phi)\big]\mathrm{d}v,\\
\end{aligned}
\end{equation}
then~$\eqref{eq181}$~becomes
\begin{equation}\label{eq315}
\begin{aligned}
I+II_1-II_2+III=\int_{Q_S^T}R(t,
x)\mathrm{d}v\mathrm{d}t.\\
\end{aligned}
\end{equation}
\indent Let us first estimate the right-hand side of \eqref{eq315}.
\begin{equation}\label{eq182}
\begin{aligned}
&\Pi^{\alpha\beta}
{^{(Y)}\!\pi}_{\alpha\beta}=g^{\alpha\alpha'}g^{\beta\beta'}\Pi_{\alpha'\beta'}{^{(Y)}\!\pi}_{\alpha\beta}\\
&=(\underline{\omega}\underline{u}-1-\frac{1}{m})|\overline{\nabla}\phi|^2+(1-m-
\underline{\omega}u)\big(\underline{L}(\phi)\big)^2\\
&-\sum_{a=1}^{2}(\eta_a-\underline{\eta}_a)\underline{u}L(\phi)e_a(\phi)+
\sum_{a,b=1}^{2}(\chi_{ab}\underline{u}+\underline{\chi}_{ab}u)e_a(\phi)e_b(\phi)\\
&-\frac{1}{2}(\chi_{aa}\underline{u}+\chi_{bb}\underline{u}+
\underline{\chi}_{aa}u+\underline{\chi}_{bb}u)|\overline{\nabla}\phi|^2\\
&+\frac{1}{2}(\chi_{aa}\underline{u}+\chi_{bb}\underline{u}+
\underline{\chi}_{aa}u+\underline{\chi}_{bb}u)L(\phi)\underline{L}(\phi)\\
&-\sum_{a=1}^{2}(\xi_a\underline{u}+
2\underline{\eta}_au)\underline{L}(\phi)e_a(\phi).\\
\end{aligned}
\end{equation}
Combining \eqref{eq183}, \eqref{eq171} and \eqref{eq179} with
\eqref{eq182}, and setting~$m=1$~(which does not affect the result), we get
\begin{equation}\label{eq188}
\begin{aligned}
\int_{Q_S^T}R(t,
x)\mathrm{d}v\mathrm{d}t&=\int_{Q_S^T}\Big[\big(\frac{\frac{1}{2}(\chi_{aa}\underline{u}+\chi_{bb}\underline{u}+
\underline{\chi}_{aa}u+\underline{\chi}_{bb}u)+2-\underline{u}\underline{\omega}}{6}-\frac{2}{3}\big)\frac{\phi^6}{6}\\
&+\big(\frac{1}{4}(\chi_{aa}\underline{u}+\chi_{bb}\underline{u}+
\underline{\chi}_{aa}u+\underline{\chi}_{bb}u)-1-\frac{1}{2}(\underline{u}\underline{\omega}-2)\big)
|\overline{\nabla}\phi|^2\\
&-\frac{1}{2}\sum_{a,b=1}^{2}(\chi_{ab}\underline{u}+\underline{\chi}_{ab}u)e_a(\phi)e_b(\phi)\\
&+\big(1-\frac{1}{4}(\chi_{aa}\underline{u}+\chi_{bb}\underline{u}+
\underline{\chi}_{aa}u+\underline{\chi}_{bb}u)\big)L(\phi)\underline{L}(\phi)-\frac{1}{2}\underline{\omega}u\big(\underline{L}(\phi)\big)^2\\
&+
\frac{1}{2}\sum_{a=1}^{2}(\eta_a-\underline{\eta}_a)\underline{u}L(\phi)e_a(\phi)+
\sum_{a=1}^{2}(\xi_a\underline{u}+
2\underline{\eta}_au)\underline{L}(\phi)e_a(\phi)\\
&-\phi(\frac{1}{2}g^{ij}g^{lm}\partial
_mg_{ij}\partial_l\phi-\frac{1}{2}g^{ij}\partial_tg_{ij}\partial_t\phi)\\
&-\frac{1}{2}\big(\underline{u}
L(\phi)+u\underline{L}(\phi)\big)(\frac{1}{2}g^{ij}g^{lm}\partial
_mg_{ij}\partial_l\phi-\frac{1}{2}g^{ij}\partial_tg_{ij}\partial_t\phi)\\
&-\frac{\phi^6}{3}\Big]\mathrm{d}v\mathrm{d}t.\\
\end{aligned}
\end{equation}
\indent Also we have
\begin{equation}\label{eq192}
\begin{aligned}
&\int_{Q_S^T}\big(-\phi(\frac{1}{2}g^{ij}g^{lm}\partial
_mg_{ij}\partial_l\phi-\frac{1}{2}g^{ij}\partial_tg_{ij}\partial_t\phi)\big)\mathrm{d}v\mathrm{d}t\\
&\leq
C(T-S)\big(\int_{Q(S)}\phi^6\mathrm{d}v\big)^{\frac{1}{6}}\big(\int_{Q(S)}\mathrm{d}v\big)^{\frac{1}{3}}
\big[\big(\int_{Q(S)}(\partial_t\phi)^2\mathrm{d}v\big)^{\frac{1}{2}}+
\big(\int_{Q(S)}(\partial_j\phi)^2\mathrm{d}v\big)^{\frac{1}{2}}\big]\\
&\leq C(T-S)|S|\big(E(\phi, Q(S))\big)^{\frac{2}{3}},\\
&\int_{Q_S^T}-\frac{1}{2}\big(\underline{u}
L(\phi)+u\underline{L}(\phi)\big)(\frac{1}{2}g^{ij}g^{lm}\partial
_mg_{ij}\partial_l\phi-\frac{1}{2}g^{ij}\partial_tg_{ij}\partial_t\phi)\mathrm{d}v\mathrm{d}t\\
&\leq C|S|(T-S)\big(E(\phi, Q(S))\big).\\
\end{aligned}
\end{equation}
Combining (2.5), (2.6), (2.8), (2.9), (2.11), (2.12), \eqref{eq196},
\eqref{eq188} and \eqref{eq192}, we get
\begin{equation}\label{eq300}
\begin{aligned}
\int_{Q_S^T}R(t, x)\mathrm{d}v\mathrm{d}t\leq C|S|(T-S)\big(E(\phi,
Q(S))\big)+C(T-S)|S|\big(E(\phi, Q(S))\big)^{\frac{2}{3}}.\\
\end{aligned}
\end{equation}
\indent On the surface~$M_S^T$~where~$\underline{u}=0$,~we have
\begin{equation}\nonumber\\
\begin{aligned}
&t\big(\underline{L}(\phi)\big)^2 +\phi\underline{L}(\phi)\\
&=t\big(m^{-1}\partial_t\phi-g^{ij}\partial_i\underline{u}\partial_j\phi\big)^2+
\phi\big(m^{-1}\partial_t\phi-g^{ij}\partial_i\underline{u}\partial_j\phi\big)\\
&=-(\underline{u}-t)\big(g^{ij}\partial_i\underline{u}\partial_j\phi-m^{-1}\partial_t\phi\big)^2
-\phi\big(g^{ij}\partial_i\underline{u}\partial_j\phi-m^{-1}\partial_t\phi\big).\\
\end{aligned}
\end{equation}
If we parameterize~$M_S^0$~by
\begin{equation}\nonumber\\
y \rightarrow \big(f(y), y\big),~~y\in Q(S),
\end{equation}
then by~$\underline{u}\big(f(y), y\big)=0$~on~$M_S^0$,~we have
\begin{equation}\nonumber\\
\begin{aligned}
&\underline{u}_tf_i+\underline{u}_i=0,\\
&f_i=-\frac{\underline{u}_i}{\underline{u}_t}=-m\underline{u}_i,\\
\end{aligned}
\end{equation}
and let~$\psi(y)=\phi\big(f(y),
y\big)$,~then~$d\sigma=\sqrt{(\partial_t\underline{u})^2+(g^{ij}\partial_i\underline{u})^2}dy$~and
\begin{equation}\nonumber\\
\begin{aligned}
\psi_j=\phi_tf_j+\phi_j,\\
\end{aligned}
\end{equation}
which implies
\begin{equation}\nonumber\\
\begin{aligned}
g^{ij}\partial_i\underline{u}\partial_j\psi&=\phi_tg^{ij}\partial_i\underline{u}
f_j+g^{ij}\partial_i\underline{u}\partial_j\phi\\
&=-m\phi_tg^{ij}\partial_i\underline{u}
\partial_j\underline{u}+g^{ij}\partial_i\underline{u}\partial_j\phi\\
&=-m\phi_t(\partial_t\underline{u})^2+g^{ij}\partial_i\underline{u}\partial_j\phi\\
&=-m^{-1}\phi_t+g^{ij}\partial_i\underline{u}\partial_j\phi.\\
\end{aligned}
\end{equation}
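\indent The key step above is the eikonal identity~$g^{ij}\partial_i\underline{u}\partial_j\underline{u}=(\partial_t\underline{u})^2=m^{-2}$,~since~$\partial_t\underline{u}=1/m$.~As a purely illustrative symbolic check (assuming SymPy; the symbol \verb|g_ub_phi| stands for~$g^{ij}\partial_i\underline{u}\partial_j\phi$,~and all names are ours):\\
\begin{verbatim}
import sympy as sp

m, phi_t, g_ub_phi = sp.symbols('m phi_t g_ub_phi')

# g^{ij} d_i ub d_j ub = (d_t ub)^2 = 1/m^2, since d_t ub = 1/m
step = -m * phi_t * (1 / m**2) + g_ub_phi
assert sp.simplify(step - (-phi_t / m + g_ub_phi)) == 0
\end{verbatim}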
Thus, a calculation gives
\begin{equation}\label{eq306}
\begin{aligned}
II_1&=-\int_{Q(S)}\big[(\underline{u}-S)(g^{ij}\partial_i\underline{u}\partial_j\psi)^2+\psi
g^{ij}\partial_i\underline{u}\partial_j\psi \big]\mathrm{d}v\\
&=-\int_{Q(S)}\frac{\big((\underline{u}-S)g^{ij}\partial_i\underline{u}\partial_j\psi+\psi\big)^2}
{\underline{u}-S}\mathrm{d}v+\int_{Q(S)}\frac{\psi^2}{\underline{u}-S}+\psi
g^{ij}\partial_i\underline{u}\partial_j\psi \mathrm{d}v.\\
\end{aligned}
\end{equation}
Integrating by parts we see
\begin{equation}\label{eq307}
\begin{aligned}
&\int_{Q(S)}\psi g^{ij}\partial_i\underline{u}\partial_j\psi
\mathrm{d}v\\
=&\int_{Q(S)}g^{ij}\partial_i\underline{u}\partial_j(\frac{1}{2}\psi^2)\mathrm{d}v\\
=&\int_{Q(S)}\big[\partial_j(\frac{1}{2}g^{ij}\partial_i\underline{u}\psi^2)-\frac{1}{2}\psi^2\partial_j
(g^{ij}\partial_i\underline{u})\big]\mathrm{d}v\\
=&\int_{\partial
Q(S)}\frac{g^{ij}\partial_i\underline{u}\partial_j\underline{u}}{2\sqrt{\sum_{j=1}^3(
\partial_j\underline{u})^2}}\psi^2\mathrm{d}\sigma-\int_{Q(S)}\frac{1}{2}\psi^2\partial_j
(g^{ij}\partial_i\underline{u})\mathrm{d}v.\\
\end{aligned}
\end{equation}
Note that
\begin{equation}\nonumber\\
\begin{aligned}
\Box_g\underline{u}&=div(\nabla
\underline{u})=-div(\underline{L})=-\underline{\chi}_{11}-\underline{\chi}_{22}\\
&=-\partial_{tt}\underline{u}+\partial_j\big(g^{ij}(t,
x)\partial_i\underline{u}\big)+\frac{1}{2}g^{ij}g^{lm}\partial
_mg_{ij}\partial_l\underline{u}-\frac{1}{2}g^{ij}\partial_tg_{ij}\partial_t\underline{u},\\
\end{aligned}
\end{equation}
which yields\\
\begin{equation}\nonumber\\
\begin{aligned}
&\partial_j\big(g^{ij}(t,
x)\partial_i\underline{u}\big)=-\underline{\chi}_{11}-\underline{\chi}_{22}+
\partial_{tt}\underline{u}-\frac{1}{2}g^{ij}g^{lm}\partial
_mg_{ij}\partial_l\underline{u}+\frac{1}{2}g^{ij}\partial_tg_{ij}\partial_t\underline{u},\\
\end{aligned}
\end{equation}
then from (2.6) and (2.7), we have
\begin{equation}\label{eq308}
\begin{aligned}
\frac{2}{\underline{u}-t}+Ct+C\leq \partial_j(g^{ij}\partial_i\underline{u})\leq \frac{2}{\underline{u}-t}-Ct+C.\\
\end{aligned}
\end{equation}
Combining \eqref{eq306}, \eqref{eq307} and \eqref{eq308} we get
\begin{equation}\label{eq310}
\begin{aligned}
II_1&=-\int_{Q(S)}\frac{\big((\underline{u}-S)g^{ij}\partial_i\underline{u}\partial_j\psi+\psi\big)^2}
{\underline{u}-S}\mathrm{d}v+\int_{\partial
Q(S)}\frac{g^{ij}\partial_i\underline{u}\partial_j\underline{u}}{2\sqrt{\sum_{j=1}^3(
\partial_j\underline{u})^2}}\psi^2\mathrm{d}\nu\\
&-(CS+C)\int_{Q(S)}\psi^2\mathrm{d}v\\
&=-\int_{M_S^0}\frac{(\underline{u}-S)\big(-m^{-1}\phi_t+g^{ij}\partial_i\underline{u}\partial_j\phi
+\frac{\phi}{\underline{u}-S}\big)^2+(CS+C)\phi^2}
{\sqrt{(\partial_t\underline{u})^2+(g^{ij}\partial_i\underline{u})^2}}\mathrm{d}\sigma\\
&+\int_{\partial
Q(S)}\frac{g^{ij}\partial_i\underline{u}\partial_j\underline{u}}{2\sqrt{\sum_{j=1}^3(
\partial_j\underline{u})^2}}\psi^2\mathrm{d}\nu\\
&=\int_{M_S^0}\frac{S\big(\underline{L}(\phi)
+\frac{\phi}{S}\big)^2+(CS+C)\phi^2}
{\sqrt{(\partial_t\underline{u})^2+(g^{ij}\partial_i\underline{u})^2}}\mathrm{d}\sigma
+\int_{\partial
Q(S)}\frac{g^{ij}\partial_i\underline{u}\partial_j\underline{u}}{2\sqrt{\sum_{j=1}^3(
\partial_j\underline{u})^2}}\phi^2\mathrm{d}\sigma\\
&\leq
C|S|\int_{M_S^0}\big(\underline{L}(\phi))^2\mathrm{d}\sigma+C\int_{M_S^0}(\frac{1}{|S|}
+1+|S|)\phi^2\mathrm{d}\sigma\\
&+\int_{\partial
Q(S)}\frac{g^{ij}\partial_i\underline{u}\partial_j\underline{u}}{2\sqrt{\sum_{j=1}^3(
\partial_j\underline{u})^2}}\phi^2\mathrm{d}\nu\\
&\leq C|S|Flux(\phi,
M_S^0)+C(|S|+|S|^2+|S|^3)\big(\int_{M_S^0}\phi^6\mathrm{d}\sigma\big)^{\frac{1}{3}}\\
&+\int_{\partial
Q(S)}\frac{g^{ij}\partial_i\underline{u}\partial_j\underline{u}}{2\sqrt{\sum_{j=1}^3(
\partial_j\underline{u})^2}}\psi^2\mathrm{d}\nu\\
&\leq C|S|(Flux(\phi, M_S^0)+Flux(\phi,
M_S^0)^{\frac{1}{3}})+\int_{\partial
Q(S)}\frac{g^{ij}\partial_i\underline{u}\partial_j\underline{u}}{2\sqrt{\sum_{j=1}^3(
\partial_j\underline{u})^2}}\phi^2\mathrm{d}\nu.\\
\end{aligned}
\end{equation}
For~$III$,~a computation gives (again setting~$m=1$)
\begin{equation}\nonumber\\
\begin{aligned}
&\frac{u}{4}\big(\underline{L}(\phi)\big)^2+\frac{\underline{u}}{4}\big(L(\phi)\big)^2
+\frac{S}{2}|\overline{\nabla}\phi|^2
+\frac{S}{6}\phi^6+\frac{1}{2}\phi
L(\phi)+\frac{1}{2}\phi\underline{L}(\phi)\\
&=\frac{2S-\underline{u}}{4}\big(m^{-1}\partial_t\phi-g^{ij}\partial_i\underline{u}\partial_j\phi\big)^2
+\frac{\underline{u}}{4}\big(m\partial_t\phi+m^2g^{ij}\partial_i\underline{u}\partial_j\phi\big)^2\\
&+\frac{S}{2}\big[-(\partial_t\phi)^2+g^{ij}\partial_i\phi\partial_j\phi
+(m^{-1}\partial_t\phi-g^{ij}\partial_i\underline{u}\partial_j\phi\big)(m\partial_t\phi
+m^2g^{ij}\partial_i\underline{u}\partial_j\phi\big)\big]\\
&+\frac{S}{6}\phi^6+\phi\partial_t\phi\\
&=\frac{S}{2}\big(\phi_t^2+g^{ij}\partial_i\phi\partial_j\phi+\frac{\phi^6}{3}\big)+\phi_t\big(\phi+
(\underline{u}-S)g^{ij}\partial_i\underline{u}\partial_j\phi\big).\\
\end{aligned}
\end{equation}
For the second term on the right-hand side, using the Cauchy--Schwarz
inequality we have
\begin{equation}\nonumber\\
\begin{aligned}
&\phi_t\big(\phi+
(\underline{u}-S)g^{ij}\partial_i\underline{u}\partial_j\phi\big)\\
&\leq |S|\big[\frac{\phi_t^2}{2}+\frac{\big(\phi+
(\underline{u}-S)g^{ij}\partial_i\underline{u}\partial_j\phi\big)^2}{2|S|^2}\big]\\
&\leq |S|\big[\frac{\phi_t^2}{2}+\frac{\big(\phi+
(\underline{u}-S)g^{ij}\partial_i\underline{u}\partial_j\phi\big)^2}{2(\underline{u}-S)^2}\big]\\
&=|S|\frac{\phi_t^2}{2}+\frac{|S|}{2}\big[\frac{\phi^2}{(\underline{u}-S)^2}
+(g^{ij}\partial_i\underline{u}\partial_j\phi)^2+\frac{2\phi
g^{ij}\partial_i\underline{u}\partial_j\phi}{\underline{u}-S}\big]\\
&\leq|S|\frac{\phi_t^2}{2}+\frac{|S|}{2}\big[\frac{\phi^2}{(\underline{u}-S)^2}
+g^{ij}\partial_i\underline{u}\partial_j\underline{u}g^{ij}\partial_i\phi\partial_j\phi+\frac{2\phi
g^{ij}\partial_i\underline{u}\partial_j\phi}{\underline{u}-S}\big]\\
&\leq|S|\frac{\phi_t^2}{2}+\frac{|S|}{2}\big[\frac{\phi^2}{(\underline{u}-S)^2}
+m^{-2}g^{ij}\partial_i\phi\partial_j\phi+\frac{2\phi
g^{ij}\partial_i\underline{u}\partial_j\phi}{\underline{u}-S}\big].\\
\end{aligned}
\end{equation}
As~$S<0$,~we get
\begin{equation}\nonumber\\
\begin{aligned}
&\frac{S}{2}\big(\phi_t^2+g^{ij}\partial_i\phi\partial_j\phi+\frac{\phi^6}{3}\big)+\phi_t\big(\phi+
(\underline{u}-S)g^{ij}\partial_i\underline{u}\partial_j\phi\big)\\
&\leq
\frac{S\phi^6}{6}-\frac{S\phi^2}{2(\underline{u}-S)^2}-\frac{S\phi
g^{ij}\partial_i\underline{u}\partial_j\phi}{\underline{u}-S},\\
\end{aligned}
\end{equation}
so
\begin{equation}\label{eq318}
\begin{aligned}
III&=-\int_{Q(S)}\big[\frac{S}{2}\big(\phi_t^2+g^{ij}\partial_i\phi\partial_j\phi+\frac{\phi^6}{3}\big)+\phi_t\big(\phi+
(\underline{u}-S)g^{ij}\partial_i\underline{u}\partial_j\phi\big)\big]\mathrm{d}v\\
&\geq
|S|\int_{Q(S)}\frac{\phi^6}{6}\mathrm{d}v+S\Big(\frac{1}{2}\int_{Q(S)}\frac{\phi^2}{(\underline{u}-S)^2}\mathrm{d}v
+\int_{Q(S)}\frac{\phi
g^{ij}\partial_i\underline{u}\partial_j\phi}{\underline{u}-S}\mathrm{d}v\Big).\\
\end{aligned}
\end{equation}
Together with \eqref{eq308}, a similar computation gives
\begin{equation}\label{eq311}
\begin{aligned}
&\int_{Q(S)}\frac{\phi
g^{ij}\partial_i\underline{u}\partial_j\phi}{\underline{u}-S}\mathrm{d}v\\
&=\int_{Q(S)}\frac{
g^{ij}\partial_i\underline{u}\partial_j(\frac{\phi^2}{2})}{\underline{u}-S}\mathrm{d}v\\
&=\int_{Q(S)}\partial_j\Big(\frac{
g^{ij}\partial_i\underline{u}(\frac{\phi^2}{2})}{\underline{u}-S}\Big)\mathrm{d}v-
\int_{Q(S)}\frac{\phi^2}{2}\partial_j\Big(\frac{
g^{ij}\partial_i\underline{u}}{\underline{u}-S}\Big)\mathrm{d}v\\
&=\int_{\partial
Q(S)}\frac{g^{ij}\partial_i\underline{u}\partial_j\underline{u}\phi^2}{2(\underline{u}-S)\sqrt{\sum_{j=1}^3(
\partial_j\underline{u})^2}}\mathrm{d}\nu-\int_{Q(S)}\frac{\phi^2}{2(\underline{u}-S)}
\partial_j(g^{ij}\partial_i\underline{u})\mathrm{d}v\\
&+\int_{Q(S)}\frac{\phi^2g^{ij}\partial_i\underline{u}
\partial_j\underline{u}}{2(\underline{u}-S)^2}\mathrm{d}v\\
&=\int_{\partial
Q(S)}\frac{g^{ij}\partial_i\underline{u}\partial_j\underline{u}\phi^2}{-2S\sqrt{\sum_{j=1}^3(
\partial_j\underline{u})^2}}\mathrm{d}\nu-\int_{Q(S)}\frac{\phi^2}{2(\underline{u}-S)}
(\frac{2}{\underline{u}-S}+CS+C)\mathrm{d}v\\
&+\int_{Q(S)}\frac{m^{-2}\phi^2}{2(\underline{u}-S)^2}\mathrm{d}v.\\
\end{aligned}
\end{equation}
Combining \eqref{eq318} and \eqref{eq311}, we get
\begin{equation}\label{eq312}
\begin{aligned}
III\geq |S|\int_{Q(S)}\frac{\phi^6}{6}\mathrm{d}v-\int_{\partial
Q(S)}\frac{g^{ij}\partial_i\underline{u}\partial_j\underline{u}\phi^2}{2\sqrt{\sum_{j=1}^3(
\partial_j\underline{u})^2}}\mathrm{d}\sigma-(CS+CS^2)\int_{Q(S)}\frac{\phi^2}{2(\underline{u}-S)}
\mathrm{d}v.\\
\end{aligned}
\end{equation}
Using H\"{o}lder's inequality it is easy to see that
\begin{equation}\label{eq313}
\begin{aligned}
I&=\int_{Q(T)}\big[\frac{u}{4}\big(\underline{L}(\phi)\big)^2+\frac{\underline{u}}{4}\big(L(\phi)\big)^2
+\frac{T}{2}|\overline{\nabla}\phi|^2
+\frac{T}{6}\phi^6+\frac{1}{2}\phi
L(\phi)+\frac{1}{2}\phi\underline{L}(\phi)\big]\mathrm{d}v\\
&\leq C|T|E(\phi,
Q(T))+C|T|\Big[\Big(\int_{Q(T)}\phi^6\mathrm{d}v\Big)^{\frac{1}{6}}
\Big(\big(\int_{Q(T)}(L(\phi))^2\mathrm{d}v\big)^{\frac{1}{2}}+
\big(\int_{Q(T)}(\underline{L}(\phi))^2\mathrm{d}v\big)^{\frac{1}{2}}\Big)\Big]\\
&\leq C|T|E(\phi, Q(T))+C|T|E(\phi, Q(T))^{\frac{2}{3}},\\
II_2&=\int_{M_T^0}\frac{t\big(\underline{L}(\phi)\big)^2
+\phi\underline{L}(\phi)}{\sqrt{(\partial_t\underline{u})^2+(g^{ij}\partial_i\underline{u})^2}}\mathrm{d}\sigma
\leq C|T|Flux(\phi, M_T^0)+C|T|Flux(\phi, M_T^0)^{\frac{2}{3}}.\\
\end{aligned}
\end{equation}
Now, we combine \eqref{eq315}, \eqref{eq300}, \eqref{eq310},
\eqref{eq312} and \eqref{eq313} to obtain
\begin{equation}\nonumber\\
\begin{aligned}
|S|\int_{Q(S)}\frac{\phi^6}{6}\mathrm{d}v&\leq III+\int_{\partial
Q(S)}\frac{g^{ij}\partial_i\underline{u}\partial_j\underline{u}\phi^2}{2\sqrt{\sum_{j=1}^3(
\partial_j\underline{u})^2}}\mathrm{d}\sigma+(CS+CS^2)\int_{Q(S)}\frac{\phi^2}{2(\underline{u}-S)}\mathrm{d}v\\
&=-I-II_1+II_2+\int_{Q_S^T}R(t,
x)\mathrm{d}v\mathrm{d}t+\int_{\partial
Q(S)}\frac{g^{ij}\partial_i\underline{u}\partial_j\underline{u}\phi^2}{2\sqrt{\sum_{j=1}^3(
\partial_j\underline{u})^2}}\mathrm{d}\sigma\\
&+(CS+CS^2)\int_{Q(S)}\frac{\phi^2}{2(\underline{u}-S)}\mathrm{d}v\\
&\leq C|T|\big(E(\phi, Q(T))+E(\phi,
Q(T))^{\frac{2}{3}}\big)\\
&+C|S|\big(Flux(\phi, M_S^0)+Flux(\phi, M_S^0)^{\frac{1}{3}}\big)\\
&+C|T|\big(Flux(\phi, M_T^0)+Flux(\phi, M_T^0)^{\frac{2}{3}}\big)\\
&+C|S|(T-S)\big(E(\phi, Q(S))+\big(E(\phi,
Q(S))\big)^{\frac{2}{3}}\big)\\
&+(CS^2+CS^3)\big(E(\phi, Q(S))\big)^{\frac{1}{3}},\\
\end{aligned}
\end{equation}
and then the result of lemma 1.2 follows as we can
choose~$T=-S^2$.~\\
\section*{\textbf{Acknowledgement}} \indent We are very grateful
to Professor Alinhac for giving a series of lectures at Fudan
University, introducing the idea of the null frame of Christodoulou and
Klainerman to us, and for many helpful discussions. We also thank
Professor Yuxin Dong and Professor Yuanlong Xin for helping us
understand some aspects of
Riemannian geometry.\\
\indent The authors are supported by the National Natural Science
Foundation of China under grant 10728101, the 973 Project of the
Ministry of Science and Technology of China, the doctoral program
foundation of the Ministry of Education of China, the ``111'' project and
SGST 09DZ2272900, the outstanding doctoral science foundation
program of Fudan University.
In this paper, we construct a reference score for online peer assessments based on HodgeRank~\cite{jiang2011statistical}. Peer assessment is a process in which students grade their peers’ assignments~\cite{falchikov2000student, topping1998peer}.
A peer assessment system is used to enhance students’ learning process, especially in higher education. Through such a system, students are given the opportunity not only to learn from textbooks and instructors, but also from the process of making judgements on assignments completed by their peers. This process helps them understand the weaknesses and strengths in the work of others, and then to review their own.
However, there are some practical issues associated with a peer assessment system. For example, students tend to give significantly higher grades than senior graders or professionals (see~\cite{freeman2010accurate} for more details). Also, students tend to give grades within a narrow range, with the center of such a range often based on the first grade they gave. Therefore, bias and heterogeneity can occur in a peer assessment system.
There are various ranking methods for the peer assessment problem, such as PeerRank~\cite{walsh2014} and the Borda-like aggregation algorithm~\cite{caragiannis2015}. PeerRank is an iterative method that solves a fixed-point equation, and it has many interesting properties from the viewpoint of linear algebra. The Borda-like aggregation algorithm is a randomized method based on the theory of random graphs and voting theory, which provides a probabilistic interpretation of the peer assessment problem.
In this paper, we propose another ranking scheme for peer assessment problems that uses HodgeRank, a statistical preference aggregation method based on pairwise comparison data. The purpose of HodgeRank is to find a global ranking from pairwise comparison data. HodgeRank can not only generate a ranking order, but also highlight inconsistencies in the comparisons (see~\cite{jiang2011statistical} for more detail). We apply HodgeRank to problems in online assessment and display the ranking results from HodgeRank and PeerRank in turn.
We briefly introduce HodgeRank and its useful properties in the next section.
\section{HodgeRank}
HodgeRank is a statistical ranking method based on combinatorial Hodge theory that finds a consistent global ranking. Rigorously speaking, the HodgeRank score is the minimum Euclidean norm solution of a graph Laplacian problem.
We start with some notation borrowed from graph theory.
Consider a connected graph $\mathcal{G} = (V, E)$, where $V=\{1, 2, \cdots, n\}$ is the set of alternatives to be ranked, and $E\subseteq V\times V$ consists of some unordered pairs from $V$.
In this paper, $V$ represents the set of students to be ranked by their peers, and $E$ collects the pairwise comparison information, i.e., $(i, j)\in E$ if students $i$ and $j$ are compared at least once.
Let $\Lambda$ denote the set of assignments. For each assignment $\alpha\in\Lambda$, the pairwise comparison data of assignment $\alpha$ on the graph $\mathcal{G}$ is given by a skew-symmetric $Y^\alpha:E\to\mathbb{R}$, i.e., $Y^\alpha_{ij} = - Y^\alpha_{ji}$ for all $i,j\in V$. Here $Y^\alpha_{ij}>0$ if the grade of student $j$ is higher than that of student $i$ by $Y^\alpha_{ij}$ credits. For example, $Y^\alpha_{ij}\in[-100, 100]$ on a hundred-mark system.
For each $\alpha\in\Lambda$, a weight matrix $W^\alpha = [w_{ij}^\alpha]$ is associated as follows: $w_{ij}^\alpha>0$ if $Y_{ij}^\alpha\neq0$, and $0$ otherwise. Set $W = \sum\limits_{\alpha\in\Lambda}W^\alpha$.
Let $Y = \sum\limits_{\alpha\in\Lambda}Y^\alpha$ be an $n$-by-$n$ matrix. The goal of HodgeRank is to find a ranking $s:V\to\mathbb{R}$ such that
\begin{equation} \label{e1.1}
Y_{ij} = s_j - s_i\mbox{ for all }i,j\in V.
\end{equation}
However, equation (\ref{e1.1}) need not be solvable in general. Consider the following example:
\[
Y = \begin{bmatrix}
0 & 1 & -1\\
-1 & 0 & -1\\
1 & 1 & 0
\end{bmatrix}
\]
If there existed $s:V\to\mathbb{R}$ such that (\ref{e1.1}) held, then
\[
1 = Y_{12} = s_2-s_1 = (s_2-s_3)+(s_3-s_1) = Y_{32} + Y_{13} = 0
\]
which is a contradiction. That is, (\ref{e1.1}) cannot be solved for an arbitrary skew-symmetric matrix $Y$. Therefore, we consider the least-squares solution of (\ref{e1.1}) instead; a quick numerical check of this cyclic obstruction is sketched below. Before we rewrite the problem, we introduce some notation.
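A minimal numerical illustration of the obstruction (a sketch assuming NumPy; the matrix is the example above):
\begin{verbatim}
import numpy as np

Y = np.array([[ 0.,  1., -1.],
              [-1.,  0., -1.],
              [ 1.,  1.,  0.]])

# If Y_ij = s_j - s_i held for some potential s, every 3-cycle
# sum would vanish; here it does not:
print(Y[0, 1] + Y[1, 2] + Y[2, 0])   # 1.0
\end{verbatim}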
\begin{Definition}{\rm ~\cite{jiang2011statistical} Denote
\[\mathcal{M}_G = \{X\in\mathbb{R}^{n\times n}~|~X_{ij} = s_j-s_i\mbox{ for some }s:V\to\mathbb{R}\},\] the space of global rankings,
and the combinatorial gradient operator
\[
\mbox{grad}: \mathcal{F}(V, \mathbb{R})\to \mathcal{M}_G
\]
is an operator defined from $\mathcal{F}(V, \mathbb{R})$, the set of all function from $V$ to $\mathbb{R}$ (or the space of all potential functions), to $\mathcal{M}_G$, as follows
\[
\big(\mbox{grad}s\big)(i, j) = s_j - s_i.
\]
}
\end{Definition}
From the example above, it is easy to see that if $X = \mbox{grad}(s)$ for some $s\in\mathcal{F}(V, \mathbb{R})$, then $X_{ij}+X_{jk}+X_{ki} = 0$ for any $(i, j), (j, k), (k, i)\in E$. However, the converse need not be true in general. That is, denote
\[
\mathcal{A}=\{X\in\mathbb{R}^{n\times n}~|~X^T=-X\},
\] the set of all skew-symmetric matrices, and let
\[
\mathcal{M}_T=\{X\in\mathcal{A}~|~X_{ij}+X_{jk}+X_{ki}=0\},
\]
then $\mathcal{M}_G\subseteq\mathcal{M}_T$.
With the notation above, the problem becomes the following optimization problem:
\[
\min\limits_{X\in\mathcal{M}_G}|| X - Y||^2_{2, w}
=
\min\limits_{X\in\mathcal{M}_G}\sum\limits_{(i, j)\in E}w_{ij}(X_{ij}-Y_{ij})^2
\]
That is, once a graph is given, the weights on the edges determine an optimization problem. Conversely, a graph arises naturally from the ranking data.
Let $\{Y^\alpha~|~\alpha\in\Lambda\}$ be a set of $n$-by-$n$ skew-symmetric matrices, and let $\{W^\alpha~|~\alpha\in\Lambda\}$ be associated as above.
Then an undirected graph $\mathcal{G}=(V, E)$ can be defined by $V = \{1, 2, \cdots, n\}$ and
\[
E = \{(i, j)\in V\times V~|~W_{ij}>0\}.
\]
In this case, we can treat $X$ as an edge flow on $\mathcal{G}$ in the sense of combinatorial vector calculus.
In conclusion, we have the following relation between the graph and the aggregated comparison data:
\[
\begin{tikzcd}
\mathcal{G}=(V, E)\arrow[rr, Leftrightarrow] & & \left\{\begin{tabular}{l}
$X^T = -X$\\
$W = \sum\limits_{\alpha\in\Lambda}W^{\alpha}$.
\end{tabular}\right.
\end{tikzcd}
\]
Hence, the skew-symmetric least-squares problem can be viewed as an optimization problem for edge flows on a graph.
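To make this correspondence concrete, the aggregation step can be sketched as follows (assuming NumPy; \verb|Y_list| and \verb|W_list| hold the per-assignment $n\times n$ arrays $Y^\alpha$ and $W^\alpha$, and the function name is ours):
\begin{verbatim}
import numpy as np

def aggregate(Y_list, W_list):
    """Form Y = sum_a Y^a, W = sum_a W^a and the edge set E of G = (V, E)."""
    Y = sum(Y_list)
    W = sum(W_list)
    n = W.shape[0]
    E = [(i, j) for i in range(n) for j in range(i + 1, n) if W[i, j] > 0]
    return Y, W, E
\end{verbatim}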
\begin{Definition}(Consistency){\rm ~\cite{jiang2011statistical}
Let $X:V\times V\to\mathbb{R}$ be a pairwise ranking edge flow on a graph $\mathcal{G}=(V, E)$.
\begin{itemize}
\item $X$ is called consistent on $\{i, j, k\}$ if
$(i,j), (j,k), (k,i)\in E$ and $X_{ij}+X_{jk}+X_{ki}=0$
\item $X$ is called globally consistent if $X=\mbox{grad}(s)$ for some $s\in\mathcal{F}(V,\mathbb{R})$
\end{itemize}
}
\end{Definition}
Note that if $X$ is globally consistent, then $X$ is consistent on every 3-clique $\{i, j, k\}$ with $(i,j), (j,k), (k,i)\in E$.
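Consistency on 3-cliques is also easy to test numerically; a minimal sketch (assuming $X$ is given as a NumPy array and \verb|E| is the edge list from the aggregation sketch above, with pairs stored as $(i,j)$, $i<j$):
\begin{verbatim}
from itertools import combinations

def curl_residuals(X, E):
    """X_ij + X_jk + X_ki over all 3-cliques of (V, E); these residuals
    vanish exactly when X is consistent on every 3-clique."""
    Eset = set(E)
    n = X.shape[0]
    out = {}
    for i, j, k in combinations(range(n), 3):
        if {(i, j), (j, k), (i, k)} <= Eset:
            out[(i, j, k)] = X[i, j] + X[j, k] + X[k, i]
    return out
\end{verbatim}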
Now consider the weighted inner product induced by $W$, i.e.,
\[
<X, Y>=\mbox{tr}\big(X^T(W\odot Y)\big)=\sum\limits_{(i,j)\in E}W_{ij}X_{ij}Y_{ij}
\] for $X,Y\in\mathcal{A}$, where $\odot$ represents the Hadamard product or elementwise product.
With this weighted inner product, we obtain two orthogonal decompositions of $\mathcal{A}$:
\[
\mathcal{A} = \mathcal{M}_G\oplus \mathcal{M}_G^{\perp}
= \mathcal{M}_T\oplus \mathcal{M}_T^{\perp}
\]
Since $\mathcal{M}_G\subseteq\mathcal{M}_T$, we have $\mathcal{M}_G^{\perp}\supseteq\mathcal{M}_T^{\perp}$, and we obtain a further orthogonal direct sum decomposition of $\mathcal{A}$ as follows:
\[
\mathcal{A} = \mathcal{M}_G\oplus \mathcal{M}_H\oplus \mathcal{M}_T^{\perp},
\]
where $\mathcal{M}_H=\mathcal{M}_T\cap\mathcal{M}_G^{\perp}$.
This decomposition is called the combinatorial Hodge decomposition. For more detail on the theory of the combinatorial Hodge decomposition, we refer to~\cite{jiang2011statistical}.
We now state a useful theorem from~\cite{jiang2011statistical}.
\begin{theorem}{\rm ~\cite{jiang2011statistical}\label{t2.1}
\begin{enumerate}
\item The minimum norm least-squares solution $s$ of (\ref{e1.1}) satisfies the normal equation:
\[
\Delta_0 s = -\mbox{div}~Y,
\]
where $[\Delta_0]_{ij}=\left\{\begin{tabular}{ll}
$\sum\limits_{k:(i,k)\in E}w_{ik}$ & if $i = j$\\
$-w_{ij}$ & if $i\neq j$ and $(i,j)\in E$\\
$0$ & otherwise
\end{tabular}\right.$, and
\[
\mbox{div}(Y)(i)=\sum\limits_{j:(i,j)\in E}w_{ij}Y_{ij}
\]
is the combinatorial divergence of $Y$.
\item The minimum norm least-squares solution $s$ of (\ref{e1.1}) is given by
\[
s^*=-\Delta_0^\dagger~\mbox{div}Y,
\]
where $\Delta_0^\dagger$ denotes the Moore--Penrose pseudoinverse of the matrix $\Delta_0$.
\end{enumerate}
}
\end{theorem}
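Theorem~\ref{t2.1} translates directly into a few lines of linear algebra. The following sketch (assuming NumPy and that $W$ has zero diagonal; names are ours) computes the minimum norm ranking from the aggregated data $(Y, W)$:
\begin{verbatim}
import numpy as np

def hodgerank(Y, W):
    """Minimum norm ranking s* = -pinv(Delta_0) @ div(Y), as in Theorem 2.1."""
    Delta0 = np.diag(W.sum(axis=1)) - W   # graph Laplacian of (V, E, W)
    divY = (W * Y).sum(axis=1)            # div(Y)(i) = sum_{j:(i,j) in E} w_ij Y_ij
    return -np.linalg.pinv(Delta0) @ divY

# For the 3-by-3 example above with unit weights on all edges,
# hodgerank(Y, W) returns s* = [0, 2/3, -2/3].
\end{verbatim}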
The Hodge decomposition characterizes the solution of $(\ref{e1.1})$, while Theorem~\ref{t2.1} shows how to compute the minimum norm solution by solving the normal equation. In the next section, we show how to apply HodgeRank to the online peer assessment problem.
\section{Online peer assessment problem}
As previously mentioned, bias and heterogeneity can lead to unfair scoring in online peer assessments. Students usually grade other students based on the first score they gave, which causes bias. However, since scores are usually compared with one another, we can use this comparison behavior to reconstruct the true ranking.
The data we used in this section were collected from an undergraduate calculus course. In this course, 133 students were asked to upload their GeoGebra~\cite{hohenwarter2002geogebra} assignments. Each student was then asked to review five randomly chosen assignments completed by their peers in order to receive partial credit. There were 13 assignments during the semester.
Note that one key point of HodgeRank is the connectedness of the graph generated by the pairwise comparison data. From Table~\ref{table1.1} below, we can see that after half the semester had passed, the comparison data between students formed a connected graph. Hence, we can apply HodgeRank to calculate the ranking of all the students after assignment 7.
\begin{table}[h]
\caption{Number of components with respect to the number of assignments}
\centering
\begin{tabular}{|c|ccccccc|}\hline
Assignment \# & 1 & 2 & 3 & 4 & 5 & 6 & $7\sim13$\\\hline
\# of components & 21 & 5 & 4 & 3 & 2 & 2 & 1\\\hline
\end{tabular}\label{table1.1}
\end{table}
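The connectivity check behind Table~\ref{table1.1} can be reproduced in a few lines; the sketch below assumes SciPy and that \verb|W_list| holds the per-assignment weight matrices in chronological order (the function name is ours).
\begin{verbatim}
import numpy as np
from scipy.sparse.csgraph import connected_components

def components_per_assignment(W_list):
    """Number of connected components as assignments accumulate."""
    counts, W = [], np.zeros_like(W_list[0])
    for W_alpha in W_list:
        W = W + W_alpha                    # aggregate the weights seen so far
        n, _ = connected_components(W > 0, directed=False)
        counts.append(n)
    return counts
\end{verbatim}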
The traditional method for finalizing peer assessment consists of using either an average cumulative score or a truncated average score. Although these approaches might have some statistical meaning, they cannot avoid bias and heterogeneity in peer assessment.
Figure~\ref{fig1.1} displays the cumulative score, PeerRank and HodgeRank results, respectively. Here, $(\alpha, \beta) = (0.5, 0)$ in the setting of PeerRank; for what these parameters represent in PeerRank, see~\cite{walsh2014}.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{results.png}
\caption{Final results using different ranking methods}\label{fig1.1}
\end{figure}
To compare these results, the ranking results were normalized linearly into the interval $[0, 1]$ and sorted in ascending order. In addition, to reveal the tendency of each ranking method, a straight reference line was plotted on the graph. Several interesting implications can be observed from this figure.
First, the cumulative score yields a ranking that lies above the reference line, which reflects the existence of bias and heterogeneity in the cumulative average method. Second, PeerRank can be viewed as a modification of average scoring. Third, the sorted ranking result from HodgeRank resembles a normally distributed curve. This normality may explain why HodgeRank can eliminate bias and heterogeneity.
Note that the reason why HodgeRank and PeerRank show different results is their conclusion base are totally different, while former method relies on the pairwise comparison data and latter one is applied on the average score as an initial ranking. Hence, HodgeRank provides instructors with an objective scoring reference using score difference rather than cumulative or average score.
In conclusion, this is the first time HodgeRank has been applied in the field of education. While the numerical results in this study were obtained from real-world data, certain issues, such as how to integrate the HodgeRank method into a peer assessment system, remain unsolved. We will address these tasks in future work.
|
2,869,038,154,064 | arxiv | \section{Introduction}
An abundance of astrophysical observations indicate that the majority (85\%~\cite{Planck}) of the mass of the universe exists in some unidentified form, called `dark matter'. The Lambda cold dark matter ($\Lambda$CDM) model of the universe ascribes the following characteristics to the dark matter: that it is feebly interacting, non-relativistic, and non-baryonic~\cite{PDG}. One dark matter candidate, known as the axion, solves the so-called strong CP (Charge-Parity) problem via a global chiral symmetry introduced by Peccei and Quinn~\cite{Peccei1977June,Weinberg:1977ma,Wilczek:1977pj}. Assuming a typical post-inflationary scenario, QCD (Quantum Chromodynamics) axions in a mass range of 1--100 $\si\micro$eV may account for the entirety of dark matter, if they exist~\cite{PhysRevD.96.095001,PhysRevLett.118.071802,Borsanyi2016}. Two models, the KSVZ (Kim-Shifman-Vainshtein-Zakharov) model~\cite{Kim:1979if,Shifman:1979if} and the DFSZ (Dine-Fischler-Srednicki-Zhitnisky) model~\cite{Dine:1981rt,Zhitnitsky:1980tq}, are benchmarks for axion experiments and can be described by their coupling strengths of the axion to photons. The dimensionless axion-photon coupling parameter, known as $g_{\gamma}$, is smaller for DFSZ axions than KSVZ axions by a factor of approximately 2.7, making DFSZ axions more challenging to detect. In both models, the strength of the axion coupling to photons is further suppressed by the very high energy scale associated with the Peccei-Quinn (PQ) symmetry breaking. The dimensionless coupling, $g_{\gamma}$ is related to the axion coupling to two photons via $g_{a\gamma\gamma}={\alpha}g_{\gamma}/{{\pi}f_a}$, where $\alpha$ is the fine structure constant, and $f_a$ is the PQ symmetry breaking scale. The DFSZ axion couples directly to both hadrons and leptons, whereas the KSVZ axion couples directly only to hadrons. In all grand unified theories, the coupling strength of the axion to two photons is that of the DFSZ model~\cite{PhysRevLett.47.402}.
Although a number of experimental efforts to detect axions are now underway, the Sikivie microwave cavity detector~\cite{Sikivie:1983ip,PhysRevLett.52.695.2}, marked the first feasible means of detecting the so-called `invisible' axion. This paper described the first axion haloscope, in which a static magnetic field provided a new channel for the axion to decay into a photon. The process, known as inverse Primakoff conversion~\cite{PhysRevD.84.121302}, follows from the equations of axion electrodynamics. The resulting excess power from the photon could then be resonantly enhanced and detected in a microwave cavity. A few years ago, the Axion Dark Matter eXperiment, ADMX, became the first experiment to reach DFSZ sensitivity. Defined as `Run 1A', this run resulted in the reporting of a limit on $g_{a\gamma\gamma}$ over axion masses of 2.68--2.7 $\si\micro$eV~\cite{Du_2018}. The experiment recently extended this limit to cover the range from 2.81--3.31 $\si\micro$eV, corresponding to a frequency range from about 680 to 790 MHz. The resulting data, acquired over a period between January and October of 2018, are referred to as `Run 1B'~\cite{PhysRevLett.124.101303}. This paper gives complete details of the analysis for Run 1B,
assuming a fully virialized dark matter halo. While the foundation of the analysis is unchanged from previous runs, improvements have been made, and the details specific to this run are explained.
There are two key components to a haloscope analysis worth emphasizing: axion search data and noise characterization data. The former is acquired by digitizing power from the cavity, in series with a number of other processes (described as the `run cadence'), whereas the latter is acquired periodically by halting axion search operations and performing a noise temperature measurement. Both are essential to the final analysis.
Ultimately, the analysis hinges not only on these two distinct sets of data, but on a number of other factors, which are described in the course of this paper, and outlined below.
\begin{enumerate}
\item The experimental configuration is described for Run 1B (Section~\ref{sec:experiment}), with particular emphasis on the aspects of the receiver chain that were updated for this run. For the purposes of this paper, the receiver chain is defined as all RF components that are used in both axion search and noise characterization modes, as described in Section~\ref{sec:experiment}. The design of the receiver chain directly motivates particular choices for the analysis.
\item Section~\ref{sec:data_taking} undertakes a discussion of the run cadence and means of data acquisition. This section includes the acquisition of sensor data as well as radio frequency (RF) data. The specifics of the data pre-processing are elaborated.
\item The techniques that were used to characterize the system noise temperature, which is critical to quantifying our sensitivity, are explained in Section~\ref{section:noise_measurement}. This section also enumerates and motivates data quality cuts. Systematic uncertainties are quantified and discussed.
\item Section~\ref{sec:analysis_procedure} explains the analysis of the raw power spectra, beginning with removal of the warm electronics baseline, followed by the filtering and combining of data to form the \emph{grand spectrum} via an optimal weighting procedure.
\item Section~\ref{sec:synthetics} describes both hardware and software synthetic axion injections.
\item Section~\ref{sec:mode_crossings} describes the handling of mode crossings.
\item Section~\ref{sec:rescan_procedure} explains the rescan procedure.
\item The final section of this paper (Section~\ref{sec:limit}) explains the limit-setting procedure and interpretation.\\
\end{enumerate}
\noindent Barring the existence of any persistent candidates, the limit setting process marks the final step in the data-processing sequence, resulting in a statement of exclusion over the Run 1B frequency range.
\vspace{0.5cm}
\section{Experimental Setup}
\label{sec:experiment}
\subsection{Detector}
The Axion Dark Matter eXperiment uses the haloscope approach to search for dark matter axions~\cite{Sikivie1985,Sikivie:1983ip}. A cavity haloscope is a high-$Q$, cryogenic, microwave cavity immersed in a high-field solenoid. The ADMX solenoid can be operated at fields as high as 8.5 T, but, in the interest of safety and reliability, was operated at 7.6 T throughout the course of Run 1B. The Run 1B cavity was a 140-liter copper-plated stainless steel cavity (136 liters when the tuning rod volume is subtracted). Two 50.8-mm diameter copper tuning rods ran the length of the cavity parallel to its axis. Each rod could be translated from near the wall to near the center of the cavity. To detect the axion signal, the microwave cavity must be tuned to match the signal frequency defined by $f_a\,{\approx}\,m_a$ (not accounting for its small kinetic energy). The axion mass is unknown over a broad range, so the cavity was tuned by moving the metallic rods to scan a range of frequencies. Power from the cavity was extracted by an antenna consisting of the exposed center conductor of a semi-rigid coaxial cable. The antenna was inserted into the top of the cavity and connected to the receiver chain. Assuming their existence, axions would deposit excess power in the cavity when the cavity was tuned to the axion mass equivalent frequency. This excess power would be detected as a small narrowband excess in the digitized spectrum. The detected axion power is given by
\begin{widetext}
\begin{multline}
P_{\text{axion}}=2.2{\times}10^{-23}\mathrm{W}\left(\frac{\beta}{1+\beta}\right)\left(\frac{V}{136~\mathrm{\ell}}\right)\left(\frac{B}{7.6~ \mathrm{T}}\right)^2\left(\frac{C_{010}}{0.4}\right)\left(\frac{g_{\gamma}}{0.36}\right)^2\left(\frac{\rho}{0.45~\mathrm{GeV cm^{-3}}}\right) \\ \left(\frac{Q_{\text{axion}}}{10^6}\right)\left(\frac{f}{740~ \mathrm{MHz}}\right)\left(\frac{Q_{\text{L}}}{30{,}000}\right)\left(\frac{1}{1+(2{\delta}f_{a}/{{\Delta}f})^2}\right),
\label{eqn:axion_pwr}
\end{multline}
\end{widetext}
\noindent where $V$ is the volume of the cavity, $B$ is the static magnetic field from the solenoid, $\rho$ is the dark matter density, $f$ is the frequency of the photon, $Q_{\text{L}}$ is the loaded quality factor, $Q_{\text{axion}}$ is the axion quality factor, and $C_{010}$ is the form factor. The form factor describes the overlap of the electric field of the cavity mode and magnetic field generated by the solenoid~\cite{Sikivie1985}. The indices denote the usage of the $\mathrm{TM_{010}}$ mode, which maximizes the form factor. The cavity mode linewidth is given by ${\Delta}f=f/Q_{\text{L}}$. The detuning factor, ${\delta}f_{a}$, is some frequency offset from the cavity resonance. The cavity coupling parameter, which describes how much power is picked up by the strongly coupled antenna, is given by $\beta=\left(Q_0/Q_{\text{L}}-1\right)$, where $Q_0$ is the unloaded cavity quality factor. The dark matter density of 0.45 $\mathrm{GeV/cm^{3}}$~\cite{Read_2014} has previously been assumed by ADMX in presenting its sensitivity. Of note is that the deposited power is on the order of tens of yoctowatts, a level that is just barely detectable using state-of-the-art technology. Typically, the experimentalist has control over the cavity coupling parameter, volume, magnetic field, form factor and quality factor, whereas the remaining parameters are set by nature. Optimizing for signal-to-noise (SNR) means maximizing the former, while minimizing the system noise.
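For orientation, the expected signal power can be evaluated numerically from Eq.~\ref{eqn:axion_pwr}. The short Python sketch below simply codes the equation at its fiducial Run 1B values; the choice $\beta=2$ is an illustrative assumption, not a measured run parameter.
\begin{verbatim}
def axion_power(beta=2.0, V=136.0, B=7.6, C=0.4, g_gamma=0.36,
                rho=0.45, Q_axion=1e6, f=740.0, Q_L=30_000.0,
                delta_f_a=0.0):
    # f in MHz, detuning delta_f_a in Hz; returns watts.
    delta_f = f * 1e6 / Q_L                  # cavity linewidth in Hz
    lorentz = 1.0 / (1.0 + (2.0 * delta_f_a / delta_f) ** 2)
    return (2.2e-23 * (beta / (1 + beta)) * (V / 136.0)
            * (B / 7.6) ** 2 * (C / 0.4) * (g_gamma / 0.36) ** 2
            * (rho / 0.45) * (Q_axion / 1e6) * (f / 740.0)
            * (Q_L / 30_000.0) * lorentz)

print(axion_power())   # ~1.5e-23 W on resonance for beta = 2
\end{verbatim}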
ADMX Run 1B relied on two key components to achieve DFSZ sensitivity: the use of a quantum amplifier, and a dilution refrigerator. The quantum amplifier afforded the experiment a low amplifier noise, whereas the dilution refrigerator reduced the physical temperature of the microwave cavity and the quantum amplifier. Combined, the two advances reduced the system noise compared to earlier ADMX experiments~\cite{PhysRevLett.80.2043,PhysRevD.69.011101,Asztalos:2009yp}.
ADMX has evolved and been improved since its first run at DFSZ sensitivity~\cite{PhysRevLett.120.151301}. Each run presents its own unique set of challenges, motivating unique choices for the analysis. Challenges pertaining to the Run 1B receiver chain will be described in the following sections.
\subsection{ADMX Run 1B Receiver Chain}
The receiver chain for ADMX varies between runs, as the system is continuously optimized for the frequency range covered. For Run 1B, the part of the receiver chain that was contained in the cold space (defined as everything that is colder than room temperature) is shown in Fig.~\ref{fig:run1b_receiver_chain}. The receiver chain was designed with two goals in mind: first, to read out power from the cavity (`axion search mode') and second, to characterize the noise of the receiver chain (`noise characterization mode'). There were a few factors which motivated the design of the operating modes, each accessible by flipping an RF switch (indicated by $\mathrm{S}$ in Fig.~\ref{fig:run1b_receiver_chain}) that allowed the JPA to be connected to either the cavity (axion search mode) or the hot load (noise characterization mode). The design of the axion search mode was driven by the desire to minimize attenuation along the output line and reduce the amplifier and physical noise as much as possible. Likewise, the design of the noise characterization mode was motivated by the need to have a reliable means of heating the 50-ohm terminator (`hot load') at the end of the output line, as described in Section~\ref{section:noise_measurement}.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Graphics/Run1B_Receiver_v6.png}
\caption{ADMX Run 1B receiver chain. $C_{1}$, $C_{2}$ and $C_{3}$ are circulators. The temperature stages for all components are shown on the right-hand side. Attenuator labelled $\mathrm{A}$ plays an important role in noise calibration.}
\label{fig:run1b_receiver_chain}
\end{figure}
With the switch configured to connect the output line to the cavity, there were three critical RF paths. First, a swept RF signal from the vector network analyzer (VNA) could be routed through the cavity via the weak port (2) and up through the cavity and output line (1), back to the VNA. The weak port is aptly named to describe the fact that it connects to a weakly coupled antenna at the base of the cavity. Such measurements were referred to as \emph{transmission measurements}. Next, a swept RF signal could be injected via the bypass line (3), reflected off the cavity and emerge via the output line (1). Because this setup was used to measure power reflected off the cavity, this is referred to colloquially as a \emph{reflection measurement}, even though the signal path technically followed that of an $\mathrm{S_{21}}$ measurement. While the axion search data were being acquired, connections to network analyzer input and output were disabled and power coming out of the cavity via the output line (1) was amplified, mixed to an IF frequency, filtered, and further amplified before reaching the digitizer (Signatec PX1500~\cite{Signatec}). The other two setups (reflection and transmission routes) were used to characterize the detector and receiver chain. Reflection measurements were used to determine and adjust the antenna coupling, and transmission measurements were used to determine the cavity quality factor and resonant frequency. Broadly speaking, both measurements were used throughout data-taking operations to check the integrity of the receiver chain, as abnormal transmission or reflection measurements could be indicative of problems along the signal path.
In the cold space, signals from the cavity on the output line were amplified by a Josephson Parametric Amplifier (JPA)~\cite{PhysRevB.83.134501,PhysRevApplied.8.054030} followed by a Heterostructure Field-Effect Transistor (HFET), model number LNF-LNC03\_14A from Low Noise Factory~\cite{LNF}. In general, the noise contribution from the first stage amplifier was the dominant source of noise coming from the electronics~\cite{friis1944}, motivating the decision to place the JPA, with its exceedingly low amplifier noise, as close to the strongly coupled antenna as possible. The JPA was highly sensitive to magnetic fields, and was therefore strategically placed in a low-field region, accomplished via a bucking coil that partially cancels the main magnetic field about a meter above the cavity. The JPA was also encased in passive magnetic shielding consisting of a mu-metal cylinder. For the purposes of this paper, all RF electronics from the HFET to the warm electronics are defined as the `downstream' electronics. Further, all components from the first circulator, $C_{1}$, to the third circulator, $C_{3}$, including the JPA, are defined as the `quantum electronics package'. The quantum electronics package was contained within a metal framework that is thermally sunk to the top of the cavity. This package was contained in the 250 mK temperature space shown in Fig.~\ref{fig:run1b_receiver_chain}.
Upon exiting the insert, signals on the output line entered the warm electronics. First, the signal was amplified by a post-amplifier located immediately outside the insert. The signal then proceeded to the receiver box. The chain of components inside the receiver box can be seen in Fig.~\ref{fig:receiver_components}. The signal from the cavity output was first amplified, then mixed with a local oscillator, before being filtered via a low pass filter, amplified and further filtered, first by a 2-MHz bandpass filter, and later by a 150-kHz bandpass filter.
\begin{figure*}
\includegraphics[width=5in]{Graphics/admx_receiver_box_5.png}
\caption{Components within the ADMX Run 1B receiver box. From left to right: DC amplifiers (Minicircuits ZX60-3018G+), directional coupler (Minicircuits ZX30-17-5-S+), Polyphase Microwave image-reject mixer, low pass filter (Minicircuits ZX75LP-50+), directional coupler (Minicircuits ZX30-17-5-S+), 2-MHz bandpass filter (Minicircuits SBP-10.7+), DC amplifier (Minicircuits ZFL-500+), 2-MHz bandpass filter (Minicircuits SBP-10.7+), DC amplifier (Minicircuits ZFL-500+), 150-kHz wide custom made filter. The center frequency of the two filters was 10.7 MHz. The intent of these filters is to reduce wide band noise that would cause the digitizer to clip. The directional couplers enable trouble-shooting before and after the mixing stage.}
\label{fig:receiver_components}
\end{figure*}
Upon exiting the receiver box, the signal was digitized with a Nyquist sampling time of 10 ms, yielding a 48.8-kHz wide spectrum centered at the cavity frequency with bins 95-Hz wide. The native digitizer sampling rate itself was 200 Megasamples per second, which was downsampled to 25 Megasamples per second. For each bin, 10,000 of the 10-ms subspectra were co-added to produce the power spectrum from the cavity averaged over 100 s. The noise in each spectrum bin can be reliably approximated as Gaussian. Further instrumentation details can be found in Ref.~\cite{khatiwada2020axion}. There were two data output paths: one for the medium-resolution analysis (this paper) and another for the high-resolution analysis, which is currently in preparation. For the medium-resolution analysis, the 100 s of data were averaged, resulting in a 512-point power spectrum with 95-Hz bin widths. For the high-resolution analysis, an inverse FFT was performed with sufficient phase coherence to be able to reconstruct the characteristics of the time series. The 100-s digitization time was a prerequisite for performing a high-resolution search~\cite{PhysRevD.94.082001}. The high-resolution analysis would be able to detect annual and diurnal shifts in the frequency of an axion signal if detected, something unresolvable with the medium-resolution.
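The co-adding step can be illustrated with a toy calculation. In the sketch below, white noise stands in for the digitized cavity output; only the numbers (10,000 subspectra of 512 bins) are taken from the description above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_sub, n_bins = 10_000, 512
# Each row stands in for one 10-ms power subspectrum: exponentially
# distributed bin powers (chi-squared with 2 d.o.f.), mean 1.
sub_spectra = rng.chisquare(df=2, size=(n_sub, n_bins)) / 2.0
medium_res = sub_spectra.mean(axis=0)  # one 512-point, 95-Hz-bin spectrum
# Radiometer behavior: fractional scatter shrinks like 1/sqrt(n_sub).
print(medium_res.std())                # ~0.01 = 1/sqrt(10,000)
\end{verbatim}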
\section{Run Cadence}
\label{sec:data_taking}
The goal of an axion haloscope analysis is to search for power fluctuations above an average noise background that could constitute an axion signal. Rescans are used to identify persistent candidates and rule out candidates that arise from statistical fluctuations. For an axion signal to trigger a rescan, it must be flagged as a candidate in the analysis. In ADMX Run 1B, there were three distinct types of candidates, which are explained in Section~\ref{sec:rescan_procedure}, but, in general, a candidate can be thought of as a power fluctuation above the average noise background. With this in mind, the raw data were processed in such a way that accounted for variations in the individual spectra both at a single frequency and across a range of frequencies.
An axion haloscope search must incorporate mechanisms for discerning false signals from a true signal. Possible false signals include statistical fluctuations, RF interference, and intentionally injected synthetic axion signals. For ADMX Run 1B, such false signals were rejected via both data quality cuts as well as the rescan procedure, described in Sections~\ref{sec:analysis_procedure} and \ref{sec:rescan_procedure}.
The haloscope technique is established as an effective means to search for axions, as evidenced by the fact that it is currently one of only a few types of experiment that have reached DFSZ sensitivity. Nevertheless, a well-known shortcoming of the haloscope technique is its inability to search over a wide range of axion masses \emph{quickly}. Therefore, a critical figure of merit for the axion haloscope is the \emph{scan rate}, which can be written as
\begin{widetext}
\begin{multline}
\frac{df}{dt}~{\approx}~157~\frac{\mathrm{MHz}}{\mathrm{yr}}\left(\frac{g_{\gamma}}{0.36}\right)^4\left(\frac{f}{740~ \mathrm{MHz}}\right)^2\left(\frac{\rho}{0.45\mathrm{~\mathrm{GeV/cm^3}}}\right)^2\left(\frac{3.5}{\text{SNR}}\right)^2\left(\frac{B}{7.6~\mathrm{T}}\right)^4\left(\frac{V}{136~\mathrm{\ell}}\right)^2\left(\frac{Q_{\text{L}}}{30{,}000}\right) \\ \left(\frac{C}{0.4}\right)^2\left(\frac{0.2~\mathrm{K}}{T_{\text{sys}}}\right)^2,
\end{multline}
\end{widetext}
\noindent where $T_{\text{sys}}$ is the system noise temperature~\cite{Sikivie1985,PhysRevD.36.974}. This equation represents the instantaneous scan rate; in other words, it does not account for ancillary measurements and amplifier biasing procedures. Data-taking operations involved tuning, with the scan rate set according to the parameters above. One advantage of a haloscope experiment, however, is that it possesses a robust means of confirming the existence of a dark matter candidate. The data-taking strategy for the run took the form of a decision tree such that advancement to each new step signified a higher probability of axion detection. The strategy is illustrated in Fig.~\ref{fig:decisiontree}.
The first step was tuning the cavity at a fixed rate over a pre-defined frequency range, called a \emph{nibble}, which was typically about 10-MHz wide, but varied depending on run conditions. Ideally, the first pass through a nibble would occur at a rate commensurate with achieving DFSZ sensitivity, although, due to fluctuating noise levels, that was not always the case. The center frequencies of spectra acquired under ideal operating conditions were typically spaced 2 kHz apart. The scan rate varied depending on the achievable operating conditions, including quality factor and system noise temperature.
Data-taking under these circumstances advanced as follows. Each 100-s digitization was accompanied by a series of measurements and procedures needed to characterize and optimize the receiver chain (Table~\ref{tab:cadence}). Every pass through this sequence was referred to as a single data-taking cycle and lasted approximately 2 minutes without JPA optimization.
\begin{table}
\setlength{\tabcolsep}{1pt}
\renewcommand{\cellalign}{lc}
\renewcommand{\arraystretch}{2.}
\begin{savenotes}
\setlength{\tabcolsep}{9pt}
\centering
\begin{tabular}{|l|l|l|}
\toprule[0.1ex]
\hline
Process & Frequency & \makecell{{Fraction of Time}\\{ per Iteration}} \\
\hline
\makecell{{Transmission}\\{Measurement}} & \makecell{{Every}\\{Iteration}} & \makecell{~$<$ 1\%} \\
\hline
\makecell{{Reflection}\\{Measurement}} & \makecell{{Every}\\{Iteration}} & \makecell{~$<$ 1\%} \\
\hline
\makecell{{JPA}\\{Rebias}} & \makecell{{Every 5-7}\\{Iterations}} & \makecell{~~25\%} \\
\hline
\makecell{{Check for}\\{SAG Injection}} & \makecell{{Every}\\{Iteration}} & \makecell{~$<$ 1\%} \\
\hline
\makecell{Digitize} & \makecell{{Every}\\{Iteration}} & \makecell{~~98\%\footnotemark} \\
\hline
\makecell{Move Rods} & \makecell{{Every}\\{Iteration}} & \makecell{~$<$ 1\%} \\
\hline
\bottomrule[0.1ex]
\end{tabular}
\end{savenotes}
\footnotetext[1]{73\% for cycles when the JPA rebias procedure runs.}
\caption{Data-taking cadence. Ancillary procedures were used to characterize and optimize the RF system in real time. Axion search data were acquired only during the digitization process. SAG stands for synthetic axion generator, which was programmed to inject synthetic axions at specific frequencies.}
\label{tab:cadence}
\end{table}
An additional step of recoupling the antenna was also performed on occasion. This adjustment required user intervention and was done manually. Under ideal operating conditions, this cadence continued for the duration of a data `nibble', after which a rescan procedure was implemented. Rescans acquired more data in regions where axion candidates were flagged. The precise definition of what constitutes a candidate is described in Section~\ref{sec:rescan_procedure}. The rescan procedure used the same run cadence, but with significantly increased tuning rate, slowing down only at axion candidate frequencies. After rescan, all the data were examined to see if the candidate was persistent, followed by other tests to evaluate the axionic nature of the signal. The analysis was run continually throughout data-taking so that the scan rate could be adjusted in real time, to reflect changes in the experiment's sensitivity to axions. A detailed discussion of rescan procedure and data-taking decision tree can be found in Section~\ref{sec:rescan_procedure}.
\begin{figure*}
\begin{adjustwidth}{}{-8em}
\centering
\dimendef\prevdepth=0
\tikzset{
every node/.style={
font=\scriptsize
},
decision/.style={
shape=rectangle,
minimum height=0.5cm,
text width=2.cm,
text centered,
rounded corners=1ex,
draw,
label={[rotate=90,xshift=0.2cm,yshift=0.45cm,color=red]left:NRR},
label={[rotate=-90,yshift=0.5cm,xshift=-0.1cm,color=blue]right:RR},
},
outcome/.style={
shape=rectangle,
rounded corners=1ex,
text width=2.cm,
minimum height=0.5cm,
fill=gray!15,
draw,
text centered
},
found/.style={
shape=circle,
fill=blue!15,
draw,
text width=1.0cm,
text centered
},
decision tree/.style={
edge from parent path={[-latex] (\tikzparentnode) -| (\tikzchildnode)},
sibling distance=2.9cm,
level distance=1.4cm
}
}
\begin{tikzpicture}
\node [decision] { First pass through nibble at fixed tuning rate.}
[decision tree]
child { node [outcome] { Continue to next nibble. } }
child { node [decision] { Rescan at variable tuning rate. $\sim$2-5x}
child { node [outcome] { Continue to next nibble. } }
child { node [decision] { Persistence check. }
child { node [outcome] { Continue to next nibble. } }
child { node [decision] { Turn off primary synthetic axion injections. Rescan. }
child { node [outcome] { Continue to next nibble. } }
child { node [decision] { Persistence check. }
child { node [outcome] { Continue to next nibble. } }
child { node [decision] { Make RFI checks. }
child { node [outcome] { Continue to next nibble. } }
child { node [decision] { Turn off secondary synthetic axion injections. }
child { node [outcome] { Continue to next nibble. } }
child { node [decision] { Check for signal suppression in $\mathrm{TM_{010}}$ mode. }
child { node [outcome] { Continue to next nibble. } }
child { node [decision] { Check signal $\propto\,\mathrm{B^2}$. }
child { node [outcome] { Continue to next nibble. } }
child { node [found] { Axion found. } } }
} }}}}}
};
\node[draw,align=left,rounded corners=1ex] at (0,-10.7) {\textcolor{blue} {RR}: Rescan Regions identified\\ \textcolor{red} {NRR}: No Rescan Regions identified};
\end{tikzpicture}
\caption{Data-taking decision tree. After a first scan through a 10-MHz nibble, the grand spectrum is checked for rescan triggers. If found, further scans are then acquired to assess if any of the rescan triggers are axion candidates. Typically, there are always some rescan triggers on a first pass through the nibble due to the statistics associated with the chosen tuning rate. Non-axionic rescan regions vanish with increasing statistics. Nevertheless, there are usually some rescan regions remaining. If so-called `persistent candidates' still remain, they are evaluated using two tests: persistence checks and on-off resonance tests. A persistence check verifies that a signal appears in every spectrum (i.e. is not intermittent). An on-off resonance test verifies that the signal maximizes on resonance. Some of these may be intentionally injected synthetic axions. As such, the blind injection team is asked to disable injections, after which, further rescans follow. Should candidates remain, a spectrum analyzer is used to eliminate the possibility that it is an ambient (external) signal, such as a radio station. If the candidate is still viable, the blind injection team is asked to reveal all secondary synthetic injections. If the candidate is not synthetic, a magnet ramp ensues to verify that the signal power is proportional to the magnetic field squared. Candidates that passed this step would be determined as axionic in nature. When no candidates were uncovered at the DFSZ level, a limit was set.}
\label{fig:decisiontree}
\end{adjustwidth}
\end{figure*}
\section{Analysis Inputs}
\label{section:noise_measurement}
\subsection{System Noise Characterization}
Central to any haloscope search is the ability to achieve a large SNR for axions. Given that ADMX operates in the high-temperature limit, where $hf \ll k_{\text{B}}T$, the system noise temperature, $T_{\text{sys}}$, can be written as
\begin{equation}
T_{\text{sys}}=T_{\text{cav}}+T_{\text{amp}},
\label{eqn:tsys}
\end{equation}
\noindent where $T_{\text{amp}}$ is the noise temperature of the amplifiers and $T_{\text{cav}}$ is the physical temperature of the cavity. The amplifier noise can be written as
\begin{align}
\begin{split}
T_{\text{amp}}&=T_{\text{quantum}}+T_{\mathrm{HFET}}/G_{\text{quantum}}\\
&+T_{\text{post}}/(G_{\text{quantum}}G_{\text{HFET}}),
\end{split}
\end{align}
where $T_{\text{quantum}}$ is the noise temperature of the JPA, $T_{\mathrm{HFET}}$ is the noise temperature of the HFET, and $T_{\text{post}}$ is the noise temperature of the post-amplifier. The gain of the first stage amplifier (the JPA) is given by $G_{\text{quantum}}$, and the gain of the HFET is given by $G_{\mathrm{HFET}}$.
This means that the noise power, $P_{n}$ can be written as
\begin{equation}
P_{n}=k_{\text{B}}T_{\text{sys}}b,
\end{equation}
\noindent where $k_{\text{B}}$ is the Boltzmann constant and $b$ is the bandwidth over which the noise power is measured.
The Dicke radiometer equation~\cite{doi:10.1063/1.1770483} in the high temperature limit provides the signal-to-noise ratio as
\begin{equation}
\text{SNR}=\frac{P_{\text{axion}}}{k_{\text{B}}T_{\text{sys}}}\sqrt{\frac{t}{b}},
\end{equation}
\noindent where $P_{\text{axion}}$ is the signal power of the axion.
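Rearranged for the integration time, $t=b\left(\mathrm{SNR}\,k_{\text{B}}T_{\text{sys}}/P_{\text{axion}}\right)^2$. The back-of-the-envelope sketch below uses illustrative values for the signal power, system noise, and linewidth, not measured run parameters.
\begin{verbatim}
from scipy.constants import k as k_B

P_axion = 1.5e-23   # W, on-resonance DFSZ-scale signal (illustrative)
T_sys = 0.35        # K, representative system noise temperature
b = 600.0           # Hz, approximate axion linewidth near 740 MHz
SNR = 3.5           # target signal-to-noise ratio

t = b * (SNR * k_B * T_sys / P_axion) ** 2
print(f"required integration time ~ {t:.0f} s")
\end{verbatim}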
Critical to quantifying the system noise temperature were measurements of the receiver temperature, which were acquired periodically throughout the course of the run. During Run 1B, four noise temperature measurements were made: one in February, one in July, one in September, and one in October of 2018.
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{Graphics/new_hotload.png}
\caption{Heating the quantum amplifier package. The plot on the left shows the increase in quantum amplifier package temperature with time and power detected by the digitizer as a function of time during a $Y$-factor measurement of type 2. The plot on the right shows the power per unit bandwidth, measured off-resonance, as a function of temperature. The resulting fit, using Eq.~\ref{eqn:power_fit_equation_simple}, is shown in orange.}
\label{fig:squidadel_heating}
\end{figure*}
In Run 1B, the receiver temperature had to be measured by halting ordinary data-taking operations and performing a $Y$-factor measurement~\cite{wilson2011techniques}. Although the goal of a $Y$-factor measurement was to quantify the receiver noise temperature, two other unknowns had to be handled in the process: the attenuation between the cavity and the HFET, and the receiver gain. This information can be extracted via two different $Y$-factor techniques.
\subsection{$Y$-factor Method 1}
The first noise temperature measurement involved using the `hot load', labeled in Fig.~\ref{fig:run1b_receiver_chain}. Physically, the hot load consisted of an attenuator thermally sunk to a resistive heater. The hot load was connected to the switch via a superconducting NbTi coax line to minimize thermal conduction and attenuation. An excessive heat leak to the 4-K temperature stage limited the range over which the hot load temperature could be varied during this run.
An ideal noise temperature measurement would be performed with the JPA pump enabled, allowing the characterization of the noise along the entire receiver chain. However, the JPA does not maintain stable gain performance under changing temperatures and can saturate with small amounts of input thermal noise. Therefore, noise temperature measurements were performed with the pump disabled. The JPA, when turned off, was a passive mirror that allowed signals to propagate down the output line with minimal attenuation.
Once the JPA was powered off, the RF switch was actuated so that the output RF line was connected to the hot load instead of the cavity. A heater and thermometer were attached to the hot load, enabling its temperature to be adjusted and measured.
As the hot load was heated, a wide bandwidth power measurement was acquired by appending separate scans with roughly 5-MHz spacing. Under these conditions, the expected output power per unit bandwidth can be written as
\begin{equation}
P=G_{\text{HFET}}k_{\text{B}}\left[T_{\mathrm{JPA}}(1-\epsilon)+T_{\mathrm{load}}\epsilon+T_{\mathrm{HFET}}\right],
\label{eqn:power_fit_equation_load}
\end{equation}
\noindent where $G_{\text{HFET}}$ is the HFET gain, $T_{\mathrm{JPA}}$ is the physical temperature of the quantum amplifier package and $T_{\mathrm{load}}$ is the physical temperature of the hot load. $T_{\mathrm{HFET}}$ is the noise attributed to the HFET and all downstream electronics, henceforth referred to as the \emph{receiver temperature}. The emissivity of the quantum amplifier package is given by $\epsilon$, which can be written as a function of the attenuation in the quantum amplifier package, $\alpha$:
\begin{equation}
\epsilon=10^{-\alpha/10}.
\end{equation}
Loss from the hot load to the JPA was quantified in two ways. First, it was quantified \emph{ex-situ} by measuring the losses in the two circulators. Next, it was quantified \emph{in-situ} using two different methods: first, by inferring it from a multi-component fit of a $Y$-factor measurement, and second, by using the JPA signal-to-noise ratio improvement (SNRI) and assuming that the JPA noise performance is independent of frequency. This was a reasonable assumption because variations in the noise performance are subdominant to other effects, such as variations in the circulator loss. The determination of the SNRI is discussed in the following section. A linear interpolation was used to estimate the expected loss in the quantum amplifier package across the frequency range.
Equation~\ref{eqn:power_fit_equation_load} was then used on the $Y$-factor data to perform a two-component fit, where $G_{\text{HFET}}$ and $T_{\mathrm{HFET}}$ are fit parameters, whereas $T_{\mathrm{JPA}}$, $T_{\mathrm{load}}$, and $\epsilon$ were independently measured quantities. A hot load measurement of this type was performed twice throughout the course of Run 1B, on February 13 and October 9, 2018.
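The structure of this two-component fit can be sketched as follows. The `data' here are synthetic, generated under assumed values of the gain, receiver temperature, and loss; the point is only the form of the fit of Eq.~\ref{eqn:power_fit_equation_load} with $G_{\text{HFET}}$ and $T_{\mathrm{HFET}}$ free.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

k_B = 1.380649e-23
eps = 10 ** (-1.5 / 10)              # assumed 1.5 dB of loss
T_jpa = 0.25                         # K, measured package temperature
T_load = np.linspace(0.3, 4.0, 40)   # K, swept hot-load temperature

def model(T_load, G, T_hfet):
    return G * k_B * (T_jpa * (1 - eps) + T_load * eps + T_hfet)

# Synthetic measurement with 1% scatter, from assumed true values.
rng = np.random.default_rng(2)
P_meas = model(T_load, 1.0e9, 11.3) * (1 + 0.01 * rng.normal(size=40))

(G_fit, T_hfet_fit), cov = curve_fit(model, T_load, P_meas, p0=(1e8, 5.0))
print(T_hfet_fit)   # recovers ~11.3 K
\end{verbatim}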
\subsection{$Y$-Factor Method 2}
The other means of acquiring a receiver noise temperature measurement was to apply a low enough voltage across the RF switch such that it would heat without flipping, thus, warming the quantum amplifier package. Noise temperature measurements of this type were performed on July 18 and September 12, 2018.
The power per unit bandwidth measured off-resonance by the digitizer in these configurations can be modeled with
\begin{equation}
P=G_{\text{HFET}}k_{\text{B}}\left[T_{\text{JPA}}(1-\epsilon)+T_{\text{cav}}\epsilon+T_{\text{HFET}}\right].
\label{eqn:power_fit_equation}
\end{equation}
\noindent Under these conditions, $T_{\mathrm{JPA}}$ is approximately equal to $T_{\mathrm{cav}}$, due to the location of the final stage attenuator, so that Eq.~\ref{eqn:power_fit_equation} simplifies to
\begin{equation}
P=G_{\text{HFET}}k_{\text{B}}\left[T_{\text{JPA}}+T_{\text{HFET}}\right].
\label{eqn:power_fit_equation_simple}
\end{equation}
This enabled a separate confirmation of $T_{\mathrm{HFET}}$ that was independent of $\epsilon$. In this case, the fit parameters were the gain and $T_{\mathrm{HFET}}$, whereas $T_{\mathrm{JPA}}$ and $T_{\mathrm{cav}}$ were measured quantities.
An example of such a measurement can be seen in Fig.~\ref{fig:squidadel_heating}. The left-hand side of Fig.~\ref{fig:squidadel_heating} shows the JPA temperature and power detected by the digitizer as a function of time. Over the course of the first 3 hours, a small voltage applied across the RF switch was incrementally increased to heat the quantum amplifier package. The right-hand side shows the digitized power as a function of the JPA temperature, with the fit, using Eq.~\ref{eqn:power_fit_equation_simple}, shown in orange. There was no indication of any significant changes in the HFET over time, so the assumption was made that the HFET was stable throughout the course of the run.
\subsection{Combined Noise Temperature}
Both type 1 and type 2 $Y$-factor measurements were used to characterize the receiver noise temperature throughout the course of the run. The final analysis, however, relied on a combined receiver noise temperature to set a limit. For Run 1B, the receiver temperatures measured with the JPA off did not vary significantly over the frequency range 680-760 MHz. This motivated the decision to generate a single noise temperature value combining the results of all four measurements. The combination was achieved by calculating the expected residuals and the gain for each noise temperature measurement and performing a least-squares fit on the combined result.
A plot of the combined receiver noise across the frequency range for Run 1B can be seen in Fig.~\ref{fig:receiver_temp}. The average value for the noise in the frequency range from 680 to 760 MHz was 11.3 K\,$\pm$\,0.11 K, where the error comes from the square root of the covariance from the fit. The receiver noise was higher at the upper end of the frequency range because of larger losses in the circulators near the end of the circulator band.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Graphics/combined_receiver_temp_gray.png}
\caption{Combined receiver temperature over the frequency range for Run 1B. A noise temperature of 11.3 K\,$\pm$\,0.11 K was used from 680-760 MHz (highlighted in gray). The rise in the equivalent receiver temperature at the upper end of the frequency range is attributable to this range being the end of the circulator band.}
\label{fig:receiver_temp}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{Graphics/jpa_bias_map_example.png}
\caption{Sample SNRI calculated for several different bias and pump parameters during a single rebias procedure. \emph{Left:} Gain difference as measured by the network analyzer. \emph{Middle:} Increase in power as measured by the digitizer. \emph{Right:} The resultant noise temperature.}
\label{fig:nice_snri_plot}
\end{figure*}
\subsection{SNRI Measurement}
The signal-to-noise ratio improvement (SNRI), commonly used to characterize quantum amplifiers, is defined as
\begin{equation}
\text{SNRI}=\frac{G_{\text{on}}}{G_{\text{off}}}\frac{P_{\text{off}}}{P_{\text{on}}},
\end{equation}
\noindent where $G_{\text{on}}$ is the gain with the JPA on, $G_{\text{off}}$ is the gain with the JPA off, $P_{\text{on}}$ is the measured noise power with the JPA on, and $P_{\text{off}}$ is the measured noise power with the JPA off.
The SNRI was monitored approximately every 10 min throughout the course of the run by measuring the gain and power coming from the receiver with the JPA on versus with the JPA off. This measurement occurred about once every 5-7 iterations through the full data-taking cycle. The SNRI typically did not vary more than 1 dB over this time frame. The SNRI changed throughout the course of the run because the JPA gain was not stable under changing temperatures; temperature variations on the order of 300-400 mK proved too large to guarantee gain stability. The HFET amplifier and upstream electronics were stable throughout the course of the run, so any SNRI changes could be attributed to the JPA. To mitigate any instability of the JPA, the SNRI was continuously optimized by searching over a range of pump powers and currents. A chart showing how the gain, power increase, and noise temperature vary with pump power and bias current at a given frequency is shown in Fig.~\ref{fig:nice_snri_plot}. Throughout data-taking, the JPA pump was offset by 375 kHz above the digitization region so as not to overwhelm the digitizer dynamic range.
\subsection{Total System Noise}
\label{sec:sys_noise}
The total system noise at the JPA input, given by Eq.~\ref{eqn:tsys}, can also be calculated from
\begin{equation}
T_{\mathrm{sys}}=T_{\mathrm{HFET}}/\mathrm{SNRI}.
\label{eqn:snr_improvement}
\end{equation}
A plot of the system noise at the JPA input over the full frequency range of Run 1B is shown in Fig.~\ref{fig:tsys_vs_freq}. To calculate the total system noise, one must account for the loss between the cavity and JPA as well.
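A minimal sketch of this bookkeeping, using Eq.~\ref{eqn:snr_improvement} with hypothetical gain and power readings in dB (the numbers are placeholders, not logged values):
\begin{verbatim}
def snri_linear(g_on_db, g_off_db, p_on_db, p_off_db):
    # SNRI = (G_on / G_off) * (P_off / P_on), evaluated in dB.
    snri_db = (g_on_db - g_off_db) - (p_on_db - p_off_db)
    return 10 ** (snri_db / 10)

T_hfet = 11.3                            # K, combined receiver noise
snri = snri_linear(20.0, 0.0, 5.0, 0.0)  # hypothetical readings
T_sys = T_hfet / snri                    # T_sys = T_HFET / SNRI ~ 0.36 K
print(T_sys)
\end{verbatim}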
\subsection{Parameter Extraction}
Throughout the course of data-taking, ADMX tracked and monitored a number of system state data via various sensors and RF measurements. Data from temperature sensors were not tethered to the run cadence, whereas RF measurements typically occurred once per data-taking cycle (see Table~\ref{tab:cadence}).
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Graphics/tsys_vs_freq.png}
\caption{System noise as a function of frequency for the duration of the run.}
\label{fig:tsys_vs_freq}
\end{figure}
The sensors were read out by numerous instruments, and the logging rate was a function of the capabilities and settings of each instrument. These instruments were queried every minute for their latest reading. To save memory, not every sensor reading was logged: each sensor had a custom `deadband', or tolerance. A reading that fell outside the deadband relative to the last logged value was logged; if no reading fell outside the deadband for 10 minutes, the latest reading was logged regardless.
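A toy version of the deadband rule is sketched below; the per-sensor tolerance and the 10-minute forced log follow the description above, while the function interface is hypothetical.
\begin{verbatim}
def should_log(value, last_logged, last_time, now,
               deadband, max_interval=600.0):
    # Log if the reading left the tolerance band around the last
    # logged value, or if 10 minutes passed with nothing logged.
    if last_logged is None:
        return True
    if abs(value - last_logged) > deadband:
        return True
    return (now - last_time) >= max_interval
\end{verbatim}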
Aside from the SNRI rebiasing procedure, RF measurements occurred once every data-taking cycle. The following parameters were extracted from these measurements to be used in the analysis:
\begin{enumerate}
\item Quality factor as measured by transmission scans.
\item Resonant frequency as measured via transmission scans.
\item Coupling coefficient (which can be thought of as the ratio of the impedance of the cavity and the impedance of the 50-ohm transmission line connected to the cavity), as measured via reflection scans.
\end{enumerate}
\noindent The cavity coupling coefficient, $\beta$, was given by
\begin{equation}
\Gamma=\frac{\beta-1-(2iQ_0\delta\omega/\omega_0)}{\beta+1+(2iQ_0\delta\omega/\omega_0)},
\end{equation}
where $\Gamma$ is the reflection coefficient of the cavity, $Q_0$ is the unloaded quality factor, and $\delta\omega=\omega-\omega_0$ is the detuning of the frequency $\omega$ from the resonant frequency $\omega_0$. Using this equation, a fit to the coupling constant, $\beta$, was performed on the real and imaginary parts of the data obtained from a reflection measurement~\cite{brubaker2018results,pozar2009microwave}.
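The fit can be sketched as a complex least-squares problem. In the sketch below, the reflection `data' are synthetic, generated from assumed cavity parameters, and the residuals stack the real and imaginary parts as described above.
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

f0, Q0, beta_true = 740.0e6, 60_000.0, 2.0   # assumed cavity values
f = np.linspace(f0 - 50e3, f0 + 50e3, 201)

def gamma(f, beta, f0, Q0):
    x = 2j * Q0 * (f - f0) / f0
    return (beta - 1 - x) / (beta + 1 + x)

data = gamma(f, beta_true, f0, Q0)           # stand-in measurement

def residuals(p):
    g = gamma(f, *p)
    return np.concatenate([(g - data).real, (g - data).imag])

fit = least_squares(residuals, x0=[1.0, f0 + 1e3, 50_000.0])
print(fit.x[0])   # recovered beta ~ 2.0
\end{verbatim}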
Since the quality factor, resonant frequency and coupling were expected to change very slowly with frequency, more accurate measurements could be obtained by smoothing. The coupling coefficient was smoothed over a period of 30 min, whereas the quality factor was smoothed over a period of 15 min. Neither the quality factor nor the coupling parameters varied significantly over these time scales.
The form factors were simulated and read in from a separate file. The simulation used the Computer Simulation Technology (CST) software~\cite{CST}. The output of the simulation was the form factor at a few select frequencies; to acquire a form factor at every point in frequency space, the simulated data were interpolated.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Graphics/newformfactor.png}
\caption{Form factor as a function of frequency. The dip near 750 MHz is at the location of mode crossings.}
\label{fig:formfactors}
\end{figure}
The system noise across the full frequency range for Run 1B, as described in Section~\ref{section:noise_measurement}, was also provided as an input to the analysis. The system noise was composed of the receiver temperature divided by the SNRI and the loss between the cavity and the HFET amplifier. While the SNRI was interpolated in time, the receiver temperature was interpolated at each point in frequency space.
\subsection{Systematics}
The systematic uncertainty was quantified for the following parameters that were used in the analysis. A summary of all systematics can be seen in Table~\ref{tab:uncertainties}.
First, the uncertainty in the quality factor was quantified by repeatedly measuring the quality factor in a narrow range of frequencies, 739-741 MHz, where the quality factor was not expected to change much as a function of frequency, according to models. The fractional uncertainty in the quality factor in this range was determined to be $\pm$\,1.1\%. The fractional uncertainty in the coupling was also computed over the same frequency range, and determined to be $\pm$\,0.5\%. The fractional uncertainty from the $Y$-factor measurements is cited as `RF model fit', and accounts for uncertainty in the receiver temperature as well as the uncertainty in the attenuation. Uncertainty on our temperature sensors came from the values stated on their datasheets. This factored into the uncertainty of the receiver noise temperature, and therefore the system noise.
The uncertainty in the SNRI measurement was evaluated using the following method. It was observed that the measured SNRI varied as a function of the JPA gain, with the worst uncertainty occurring at high JPA gain. The largest observed uncertainty was $\pm$\,0.18 dB, corresponding to a linear uncertainty of $\pm$\,0.042 in the power measured in each bin of the grand spectrum.
\begin{table}
\centering
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{@{}lc@{}}
\toprule[0.1ex]
\hline
Source & Fractional Uncertainty \\
\toprule[0.1ex]
\hline
$B^2VC_{\text{010}}$ & 0.05 \\
$Q$ & 0.011 \\
Coupling & 0.0055 \\
RF model fit & 0.029 \\
Temperature Sensors & 0.05 \\
SNRI measurement & 0.042 \\
\hline
Total on power & 0.088 \\
\bottomrule[0.1ex]
\end{tabular}
\caption{Dominant sources of systematic uncertainty. For the first entry, $B$ is the magnetic field, $V$ is the volume, and $C_{\text{010}}$ is the form factor. The uncertainties were added in quadrature to obtain the total uncertainty on the axion power from the cavity, shown in the bottom row.}
\label{tab:uncertainties}
\end{table}
The total systematic uncertainty of $\pm$\,0.088, shown in Table~\ref{tab:uncertainties}, was computed simply by adding all listed uncertainties in quadrature.
\section{Axion Search Data-Processing}
\label{sec:analysis_procedure}
\subsection{Baseline Removal}
The first step in processing the raw spectra was to remove the fixed baseline imposed on the spectra from the warm electronics. A nonflat power spectrum had three possible underlying causes:
\begin{enumerate}
\item Frequency dependent gain variations after mixing.
\item Frequency dependent gain variations before mixing.
\item Frequency dependent noise variations.
\end{enumerate}
The last of these was subdominant because most noise sources had approximately the same temperature. Gain variations before mixing, attributable to interactions of RF devices in the cold space, were evident, but small compared to gain variations after mixing. Gain variations after mixing were primarily determined by filters in the receiver chain. The characteristic shape of these gain variations, also known as the spectrum's \emph{baseline}, can be seen in Fig.~\ref{fig:bg}. The upward trends at the far right and left resulted from digitizing between the two poles of the final two-pole 150-kHz bandpass filter. The baseline was averaged and smoothed using a Savitzky-Golay software filter~\cite{SavGol,Malagon:2014nba}.
The average baseline is shown in blue and the filtered background is shown in orange. The y-axis was normalized because the original scale is arbitrary, being a combination of the gain and attenuation of the output line.
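The smoothing step itself is standard. Below is a sketch with a synthetic 512-bin baseline standing in for the averaged warm-electronics shape; the window length and polynomial order are placeholder choices, not the run settings.
\begin{verbatim}
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(3)
bins = np.arange(512)
# Smooth shape plus scatter stands in for the average baseline.
baseline = (1.0 + 0.2 * np.cos(2 * np.pi * bins / 512)
            + 0.01 * rng.normal(size=512))
smooth = savgol_filter(baseline, window_length=101, polyorder=3)
# Each raw spectrum is subsequently divided by this smoothed baseline.
\end{verbatim}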
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Graphics/background_large.png}
\caption{Filtered background shape (`Filtered BG' and orange) and the average baseline (`Average' and blue) from the warm electronics.}
\label{fig:bg}
\end{figure}
\subsection{Spectrum Processing}
An example spectrum after the baseline removal procedure is shown in Fig.~\ref{fig:raw_spectrum}. Each raw spectrum consisted of 512 bins, with bin widths of 95 Hz, for a total spectrum width of 48.8 kHz. A single spectrum represents axion search data acquired over an integration time of 100 s, a combination of $\mathrm{10^4}$ Fourier transforms of 10 ms of cavity output signal. In the following discussion, the smallest discretization of measured power is denoted $P^{j}_{i}$, where $j$ identifies an individual spectrum, and $i$ identifies an individual bin. Each raw spectrum was processed individually as follows. First, the raw power was divided by the baseline and convolved with a sixth-order Padé filter to remove the residual shape from the cryogenic receiver transfer function. The use of a Padé filter was motivated by deriving the shape of the power spectrum at the output of the last-stage cold amplifier~\cite{Daw:2018tru}.
The power in each bin was then divided by the mean for the entire spectrum to create a normalized spectrum. In the absence of an axion signal, the power in each bin could then be represented as a random sample from a Gaussian distribution with a mean of $\mu=1$. Evidence that this was indeed the case can be seen in Fig.~\ref{fig:whitenoise}, where a Gaussian fit to the data is shown in orange. Subtracting 1 from each bin shifted the mean of the normalized spectrum to $\mu=0$, which gave a more intuitive meaning to the data, enabling us to search for power fluctuations above zero. An example of such a filtered spectrum is shown in Fig.~\ref{fig:filter_spectrum}. The gray band highlights the 1$\sigma$ error bar, which implies 68\% of the data falls within this region.
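The normalization just described amounts to two lines of array arithmetic; the following is a sketch of the step, not the production code.
\begin{verbatim}
import numpy as np

def normalize(filtered_power):
    # Divide by the mean so noise bins scatter about 1, then subtract 1
    # so an axion would appear as an excess above zero.
    spec = np.asarray(filtered_power)
    return spec / spec.mean() - 1.0
\end{verbatim}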
\begin{figure}
\begin{minipage}[t]{0.5\textwidth}
\includegraphics[width=\textwidth]{Graphics/raw_spectrum_large.png}
\caption{Raw spectrum, or single digitizer scan. All the raw scans have a distinct shape imposed by the receiver chain.}
\label{fig:raw_spectrum}
\end{minipage}
\end{figure}
Another feature of the raw data that must be considered is inherited from the microwave cavity itself: the Lorentzian shape. Power measured closer to cavity resonance is enhanced by the full $Q$ of the cavity, whereas power measured further from resonance is not. The enhancement follows the Lorentzian shape of the cavity, which varies depending on the coupling and frequency at the time of the scan. The filtered spectra were therefore scaled by their respective Lorentzian shapes. The result of this step can be seen in Fig.~\ref{fig:lor_spectrum}, where the error bars are indicative of the distance from the cavity resonance peak.
\begin{figure}
\includegraphics[width=0.5\textwidth]{Graphics/bin_dev_mean.png}
\caption{Histogram (blue) of individual bin deviations about the mean for the first nibble of Run 1B. The orange curve is a Gaussian fit to the data.}
\label{fig:whitenoise}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{Graphics/excess_snr_filtered_spectrum_large.png}
\caption{Filtered Spectrum.}
\label{fig:filter_spectrum}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{Graphics/lor_weighted_spectrum_large.png}
\caption{Lorentzian-weighted spectrum divided by the noise power.}
\label{fig:lor_spectrum}
\end{figure}
\subsection{Implementation of Analysis Cuts}
Five analysis cuts, shown in Table~\ref{tab:analysis_cuts}, were applied for quality control of the data. The original Run 1B data consisted of 197,680 raw spectra. After implementing the analysis cuts shown in Table~\ref{tab:analysis_cuts}, 185,188 raw spectra remained. Motivation for these cuts proceeded as follows. First, quality factors lower than 10,000 and greater than 120,000 were omitted from the data because they were likely unphysical and the result of a poor fit to a noisy transmission measurement. System noises below 0.1 K and above 2.0 K were excluded, as these were likely the result of an incorrectly measured SNRI; temperatures below 0.1 K, in particular, were lower than any physical temperature in the experiment and would violate the Standard Quantum Limit. Additionally, the sixth-order Padé fit to the background was required to have a ${\chi}^2$ per degree of freedom less than 2, which proved sufficient to reject poor fits while retaining potential axion signals.
In addition to these parameter cuts, cuts were also made over various time stamps as a result of aberrant run conditions. Reasons included uncoupling of the antenna, digitizer failures, software malfunctions, excursions of the SNR that required better background fitting, scans containing pervasive and obvious RFI, unexpected mode crossings, a poorly biased JPA, and various engineering studies. These studies ranged from manual rebiasing of the JPA, to heating or cooling of the dilution refrigerator, to ramping the main magnet.
\begin{center}
\begin{table}
\centering
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{@{}lcr@{}}
\toprule[0.1ex]
Cut Parameter & Scans Removed & Constraint \\
\hline
Timestamp cuts & 7,189 & N/A \\
\hline
Quality Factor & 316 & 10,000
\textless\,$Q$\,\textless 120,000 \\
\hline
System Noise & 4,514 & 0.1 \textless\,$T_{\text{sys}}$\,\textless 2.0 \\
\hline
Max Std. Dev. Increase & 224 & 2.0 \\
\hline
Error in filter shape & 249 & N/A \\
\hline
\bottomrule[0.1ex]
\hline
\end{tabular}
\caption{Table of analysis cuts made to spectra.}
\label{tab:analysis_cuts}
\end{table}
\end{center}
\subsection{Grand Spectrum Preparation}
The final step of the analysis was to merge all the power spectra into a single grand spectrum. This presents a challenging problem: how does one combine overlapping spectra into a single RF bin? The conditions under which each scan is acquired change, and so each must be weighted accordingly.
The primary priority of such an endeavor is to control for these varying conditions throughout the run. As in previous analyses, the way this was accomplished was to scale the power excess in each bin of the normalized spectrum by the power that would be generated by a DFSZ axion signal (see Eq.~\ref{eqn:axion_pwr}) under the conditions present during that particular scan acquisition (inputting the measured $Q$, $f$ and $C(f)$ for that scan).
Another condition that must be controlled between spectra is the system noise. All else considered, axion peaks of identical signal power but different noise temperatures lead to different peak heights. By scaling each bin in the normalized spectrum by the noise power, $k_{\text{B}}T_{\text{sys}}$, the effects of varying system noise are mitigated.
Scaling by the axion signal power parameters and accounting for the differences in system noise requires computing
\begin{equation}
P^{j}_{i,\text{scaled}}=P^{j}_{i,\text{lor}}\left(\frac{1}{C_{010}}\right)\left(\frac{1~\mathrm{m^3}}{V}\right)\left(\frac{1}{Q}\right)\left(\frac{1~\mathrm{T^2}}{B^2}\right)
\end{equation}
\noindent on a bin-by-bin basis, where $P^{j}_{i,\text{lor}}$ is the filtered power from an individual frequency bin and spectrum, scaled by the Lorentzian shape of the cavity. The effect of all this processing is to remove all possible discrepancies between scans, enabling apples-to-apples comparisons of the power between bins, resulting in $P^{j}_{i,\text{scaled}}$.
To further increase sensitivity to a potential axion signal, one final step is performed before combining the data into a grand spectrum: filtering in accordance with the axion lineshape. It is well known that an axion signal would have a characteristic lineshape reflective of the axion kinetic energy distribution~\cite{PhysRevD.42.3572}. The velocity of axions in the case of an isothermal, virialized halo would follow a Maxwell-Boltzmann distribution. This distribution derives from the assumption that dark matter obeys the standard halo model (SHM), which describes the Milky Way Halo as thermalized, with isotropic velocity distribution. The Maxwell-Boltzmann lineshape is
\begin{equation}
g(f)=\frac{2}{\sqrt{\pi}}\sqrt{f-f_{a}}\left(\frac{3}{f_{a}}\frac{c^2}{\braket{v^2}}\right)^{3/2}e^{-\frac{3(f-f_a)}{f_a}\frac{c^2}{\braket{v^2}}},
\label{eqn:MaxBoltz}
\end{equation}
\noindent where $f$ is the measured frequency and $f_a$ is the axion rest mass frequency. The rms velocity of the dark matter halo is given by $\braket{v^2}=(270\;\mathrm{km/s})^2$~\cite{PhysRevD.42.3572}.
The measured power in each bin is then convolved with a Maxwell-Boltzmann filter which uses this distribution. Note that the effects from the orbital motion of Earth around the Sun and the rotational motion of the detector about the axis of the Earth have been averaged out in this equation. The medium-resolution analysis does not have the required spectral resolution to observe the Doppler effect of such motion, which would result in a frequency shift that can be attributed to daily and yearly modulation. A separate, `high-resolution' analysis is underway which would be capable of detecting this shift. Additionally, at this stage of the analysis, an alternative axion velocity distribution, known as an N-body lineshape, was also used as a filter. This filter emerged from developments in galaxy formation simulations for the Milky Way. The simulation describes galaxies using the N-body+smooth-particle-hydrodynamics (N-Body+SPH) method, in lieu of the assumption that the dark matter obeys the SHM. The N-body signal shape keeps a Maxwellian-like form
\begin{equation}
g({f})\approx\left(\frac{(f-f_{a})}{m_{a}\kappa}\right)^{\alpha}e^{-\left(\frac{(f-f_{a})}{m_{a}\kappa}\right)^{\beta}}.
\end{equation}
\noindent The best fit parameters were computed via simulation, and found to be $\alpha = 0.36~{\pm}~0.13$, $\beta = 1.39~{\pm}~0.28$, and $\kappa=(4.7~{\pm}~1.9){\times}10^{-7}$~\cite{0004-637X-845-2-121}. The medium resolution analysis results were also computed separately with this filter which produced different limits on the axion coupling relative to the assumption of a Maxwell-Boltzmann distribution.
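Both lineshapes are straightforward to evaluate; a schematic implementation follows (our own sketch; in
particular, reading $m_{a}\kappa$ as $f_{a}\kappa$ in frequency units is an assumption):
\begin{verbatim}
import numpy as np

C = 2.998e8            # speed of light [m/s]
V_RMS2 = (270.0e3)**2  # halo <v^2> [m^2/s^2]

def maxwell_boltzmann(f, f_a):
    """Maxwell-Boltzmann lineshape; f, f_a in Hz (zero below f_a)."""
    df = np.clip(f - f_a, 0.0, None)
    x = 3.0 * C**2 / (f_a * V_RMS2)   # inverse width parameter
    return (2.0 / np.sqrt(np.pi)) * np.sqrt(df) * x**1.5 * np.exp(-df * x)

def n_body(f, f_a, alpha=0.36, beta=1.39, kappa=4.7e-7):
    """N-body lineshape with the quoted best-fit parameters."""
    u = np.clip(f - f_a, 0.0, None) / (f_a * kappa)  # assumes m_a*kappa -> f_a*kappa
    return u**alpha * np.exp(-u**beta)
\end{verbatim}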
Combining individual spectra into a grand spectrum involves the use of a well-established `optimal weighting procedure'~\cite{Brubaker:2017rna,PhysRevD.64.092003}. The optimal weighting procedure finds weights for the individual power excesses that result in the optimal SNR for the grand spectrum~\cite{Brubaker:2017rna}. In this procedure, the weights are chosen such that the maximum likelihood estimation of the true mean value, $\mu$, is the same for all contributing bins.
More rigorously, the grand spectrum power excesses can be computed on a bin-by-bin basis using the following equation
\begin{equation}
P_{w}=\frac{\sum\limits_{j=1}^{N} \frac{P^{j}_{\text{scaled}}}{(\sigma^{j})^2}}{\sum\limits_{j=1}^{N}\frac{1}{(\sigma^{j})^2}},
\end{equation}
\noindent where $N$ is the total number of spectra for a given frequency bin, and $P_{w}$ is the weighted power for an individual RF bin of the grand spectrum. The standard deviation for each bin in the grand spectrum is calculated via
\begin{equation}
\sigma_{w}=\sqrt{\frac{1}{\sum\limits_{j=1}^{N}\frac{1}{(\sigma^{j})^2}}}.
\end{equation}
The grand spectrum is completely defined, bin-by-bin, by these two values: the measured excess power, $P_{w}$, and the standard deviation, $\sigma_{w}$.
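In code, this optimal weighting reduces to a standard inverse-variance average applied per RF bin (a
minimal sketch under the same notation; the helper itself is hypothetical):
\begin{verbatim}
import numpy as np

def combine_bin(powers, sigmas):
    """Combine the scaled excesses and deviations of all spectra
    covering one RF bin into the grand-spectrum pair (P_w, sigma_w)."""
    w = 1.0 / np.asarray(sigmas)**2        # inverse-variance weights
    p_w = np.sum(w * np.asarray(powers)) / np.sum(w)
    sigma_w = np.sqrt(1.0 / np.sum(w))
    return p_w, sigma_w
\end{verbatim}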
Searches for excess signals above the noise in the grand spectrum that would correspond to an axion are further delineated in the following sections.
\section{Synthetic Axions}
\label{sec:synthetics}
There were two types of synthetic axion signals used in Run 1B: software and hardware injections. Synthetics were used to build confidence in our analysis.
\subsection{Software Synthetics}
Software synthetics serve the purpose of better understanding the analysis---in particular, the detection efficiency. Software synthetics reflected the axion lineshape as described by the Maxwell-Boltzmann distribution, and their power levels could be adjusted relative to the KSVZ axion power. By injecting 1,773 evenly-spaced software signals at DFSZ power with a density of 20 per MHz into the real data and checking what fraction were flagged as candidates, the detection efficiency was calculated. This process was performed for all data that were collected at DFSZ sensitivity. Of these 1,773 injections, 1,684 were detected, corresponding to a $95\pm2\%$ detection efficiency of DFSZ signals.
It was also discovered that the background fit can reduce the significance of axion signals. This effect arises from the fact that the background fit is designed to accurately describe wide features and ignore narrow peaks so as not to accidentally fit out a potential axion candidate. This effect was quantified by calculating the ratio of the measured power to the injected synthetic signal power. This ratio was computed across the relevant frequency range and can be seen in Fig.~\ref{fig:fudgecalc}. The average ratio was $0.818\,\pm\,0.008$. Power measurements from the grand spectrum were therefore corrected by dividing by this ratio to account for sensitivity loss from the background fit or other analysis steps.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Graphics/synthetics_power.png}
\caption{Ratio of measured power to injected software synthetic power over the full frequency range for Run 1B (the gap at 750-760 MHz was a large set of mode crossings).}
\label{fig:fudgecalc}
\end{figure}
\subsection{Hardware Synthetics}
The hardware synthetic axions were a novel addition to ADMX for Run 1B and were used for better understanding of the receiver chain and sensitivity. The Synthetic Axion Generator (SAG) was located in a separate rack, away from ordinary data-acquisition. The SAG consisted of an arbitrary waveform generator (Agilent 33220A) that created a low frequency Maxwell-Boltzmann-like signal, about 500-Hz wide. This signal was mixed up to a specific RF frequency and injected into the cavity via the weak port as it was tuned through that frequency. The attenuation was calibrated by intentionally injecting synthetic axions of known attenuation and measuring their output power, so that signals could be sent in during the run as fractions of DFSZ signal power. Hardware synthetics were injected into the weak port of the receiver chain via a blind injection scheme throughout the course of the run. These synthetics were successfully detected, confirming our understanding of the receiver chain and analysis. An example of such a synthetic axion that was detected and flagged as a candidate via the analysis is shown in Fig.~\ref{fig:hardware_synth}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Graphics/run1b_synthetic_candidate_example.png}
\caption{Hardware synthetic injection. Blue shows the results from the initial set of scans over this frequency interval, and orange shows the results after rescans (with the synthetic candidate still present).}
\label{fig:hardware_synth}
\end{figure}
\section{Mode Crossings}
\label{sec:mode_crossings}
The original axion search for Run 1B proceeded with the tuning rods operating in what is known as the `symmetric configuration'. That is to say, the rods, starting at the same position opposite each other next to the walls, were rotated in the same direction, at the same rate. The first pass through the Run 1B frequency band included 8 mode crossings of the $\mathrm{TM_{010}}$ cavity mode with other modes, mostly $\mathrm{TE}$ modes. These mode crossings were predicted via simulation and verified on-site via wide network analyzer scans. There are two major challenges associated with mode crossings. The first is that the form factor diminishes as the cavity mode approaches the crossing. The second is that tracking the cavity mode becomes difficult as the other mode appears in the transmission and reflection scans. These issues were circumvented by maneuvering the rods in an anti-symmetric configuration; in other words, moving the rods in opposite directions simultaneously. Moving the rods anti-symmetrically shifted several weakly tuning modes, and therefore mode crossings, on the order of a few MHz. This configuration provided form factors around 0.35, sufficient for axion data-acquisition in the previously inaccessible frequency range. Data were acquired in three mode crossing regions using this technique after the initial axion search. An example of anti-symmetric motion as compared to standard symmetric motion can be seen in Fig.~\ref{fig:rod_motion}. The five remaining mode crossings either proved intractable to the changed rod configuration or were too wide to be realistically filled in with this approach. These can be seen in Table~\ref{tab:regions}.
\begin{table}
\centering
\begin{tabular}{|c|c|}
\hline
Mode Crossing Frequency (MHz) & Width (MHz) \\
\hline
704.659 & 0.350 \\
715.064 & 0.140 \\
717.025 & 0.140 \\
726.624 & 0.701 \\
753.844 & 12.682 \\
\hline
\end{tabular}
\caption{Mode crossing locations where an exclusion limit could not be set.}
\label{tab:regions}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{Graphics/largefont_rods.png}
\caption{Positioning of the rods in symmetric vs. anti-symmetric configurations. Normal data-taking operations used the symmetric mode (both rods moving counter-clockwise to bring the rods to the center and increase frequency), whereas the anti-symmetric mode (left rod moving counter-clockwise with right rod moving clockwise to bring both rods to the center and increase frequency) was used to navigate mode crossings. The color scale shows the electric field strength (V/m) as modeled by CST Microwave Simulation~
\cite{CST}.}
\label{fig:rod_motion}
\end{figure*}
\section{Rescan Procedure}
\label{sec:rescan_procedure}
A well-defined rescan protocol is critical to the success of any resonant haloscope experiment insofar as it minimizes the chances of missing a potential axion signal. Conditions change throughout the course of the run, and decisions must be made so that a thorough search is conducted regardless. ADMX Run 1B proceeded as follows. The full run frequency range of approximately 125 MHz was scanned in 10-MHz nibbles. This approach enabled us to perform rescans under operating conditions that were similar to the initial scan and kept rescans at a manageable size. After data were acquired from the first pass, the rods were moved in the opposite direction to perform the first rescan. During a rescan, rod motion was slowed and digitization turned on when passing over frequencies flagged as candidates. The following criteria were used to define rescan regions in a grand spectrum:
\begin{enumerate}
\item The power at that frequency is in excess of 3$\sigma$.
\item The expected signal-to-noise for a DFSZ axion at that frequency is too low.
\item Limits set at that frequency do not meet DFSZ sensitivity requirement. In other words, the measured power plus some fraction of sigma (called the candidate threshold power) exceeds the DFSZ axion power.
\end{enumerate}
Regions with SNR less than 2.4 were considered to have insufficient data, triggering a rescan. This particular value was selected because it resulted in a reasonable amount of candidates after a first pass through a nibble. A rescan was also triggered if a candidate's power exceeded that of a DFSZ axion by 0.5$\sigma$.
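Collecting these thresholds, a schematic candidate test reads as follows (our own distillation, not ADMX
code; the exact form of the third criterion is an assumption):
\begin{verbatim}
def needs_rescan(power, sigma, snr, p_dfsz, snr_min=2.4, frac=0.5):
    """True if a grand-spectrum bin must be rescanned."""
    large_excess = power > 3.0 * sigma            # criterion 1
    low_snr      = snr < snr_min                  # criterion 2
    near_dfsz    = power + frac * sigma > p_dfsz  # criterion 3
    return large_excess or low_snr or near_dfsz
\end{verbatim}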
A persistent candidate is one which does not average to zero with increasing rescans. A true axion signal would not only fulfill this requirement, but its power would maximize on the cavity $\mathrm{TM_{010}}$ mode, with the power scaling as $B^2$. Thus, should a persistent signal maximize on-resonance, the next step in confirming an axion signal would be to switch to the $\mathrm{TM_{011}}$ mode or change the magnetic field and verify the power scaling. The three persistent candidates found in Run 1B are shown in Table~\ref{tab:candidates}. Of the three, one was verified as an initially blinded hardware synthetic, and the other two maximized off-resonance with the $\mathrm{TM_{010}}$ mode, and therefore could not be axions. These other signals were not confirmed to exist independently in the ambient lab setting, although this is perhaps not surprising as the ADMX receiver chain is more sensitive than any ordinary lab equipment. The hardware synthetic maximized on-resonance, but before a magnet ramp could be performed the injection team notified the collaboration that it was in fact a synthetic signal.
\begin{table}
\centering
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{@{}lcr@{}}
\toprule[0.1ex]
Frequency (MHz) & Notes & Power (DFSZ) \\
\hline
780.255 & Maximized off-resonance & 1.49\\
730.195 & Synthetic blind & 1.51\\
686.310 & Maximized off-resonance & 2.36\\
\bottomrule[0.1ex]
\end{tabular}
\caption{Candidates that persisted past rescan. The signal power of the candidate is shown on the right-hand side, in units of DFSZ signal power.}
\label{tab:candidates}
\end{table}
\section{Run 1B Limit}
\label{sec:limit}
At the end of all data-taking for Run 1B, the final limit was computed. An RF bin containing an axion signal, scanned multiple times, would result in a Gaussian distribution centered about some mean, $\mu=g_{\gamma}^2\eta$, where $\eta$ is the SNR for the given measurement. An RF bin containing no axion signal, scanned multiple times, would result in a Gaussian distribution centered about a mean $\mu=0$. A limit was set by computing $\mu$ for a given RF frequency bin that gave a 90\% confidence limit that our measurement did not contain an axion.
It is not obvious from this procedure how to convert a negative power to a limit on $g_{\gamma}$. Thus, in determining the value of $\mu$ that gives the desired confidence level, the cumulative distribution function for a truncated normal distribution was used. This gave a confidence level that covered only physical values of $g_{\gamma}$~\cite{Feldman1998}.
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{Graphics/run1b_sensitivity_prelim_updated.png}
\caption{Exclusion plot for Run 1B, shown in green. Dark green represents the region excluded using a standard Maxwell-Boltzmann filter, whereas light green represents the region excluded by an N-body filter~\cite{0004-637X-845-2-121}. }
\label{fig:exclusion}
\end{figure*}
Because this technique also results in a jagged, 300-Hz bin-wide limit, the following approach was used to smooth the result into an exclusion plot. A small number of bins (200, the number of bins in one plot pixel) were combined into a single limit as follows. For each bin, a normal distribution was generated using the measured power as the mean and the measured uncertainty as the standard deviation. This distribution was then randomly sampled 100 times per bin, and any negative values were clipped to zero. The full list of randomly sampled values was then sorted, and the 90\% confidence limit was taken to be the generated power that was 90\% of the way to the top of the sorted list.
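A compact sketch of this smoothing step (schematic only; array and function names are ours):
\begin{verbatim}
import numpy as np

def smoothed_limit(powers, sigmas, n_samples=100, cl=0.90, seed=0):
    """Combine ~200 narrow bins (one plot pixel) into a single limit."""
    rng = np.random.default_rng(seed)
    samples = []
    for mu, sd in zip(powers, sigmas):
        draw = rng.normal(mu, sd, size=n_samples)
        samples.append(np.clip(draw, 0.0, None))  # negative power -> 0
    samples = np.sort(np.concatenate(samples))
    return samples[int(cl * (len(samples) - 1))]  # 90% up the list
\end{verbatim}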
With Run 1B, ADMX was able to exclude the regions shown in green in Fig.~\ref{fig:exclusion}. Dark green shows the region excluded by using the standard Maxwell-Boltzmann filter, whereas light green shows the region excluded by using an N-body filter (see Ref.~\cite{0004-637X-845-2-121}). The Maxwell-Boltzmann exclusion limit used a local dark matter density of 0.45 $\mathrm{GeV/{cm^3}}$, whereas the N-body filter used a local dark matter density of 0.63 $\mathrm{GeV/{cm}^3}$. Regions where there are gaps in the data are due to mode crossings. Under the assumption that QCD axions constitute 100\% of the dark matter, the frequency range 680--790 MHz was excluded at the 90\% confidence level, except for the few regions where there were mode crossings. The total mass range covered in Run 1B is a factor of four larger than in the previous Run 1A~\cite{PhysRevLett.120.151301}.
\section{Conclusion}
In conclusion, the ADMX collaboration did not observe any persistent candidates which fulfilled the requirements for an axion signal throughout the course of Run 1B. This implies the 90\% confidence limit exclusion of DFSZ axions for 100\% dark matter density over the frequency range 680-790 MHz (2.81--3.31 $\si\micro$eV), omitting the five regions with mode crossings. Notably, the ADMX collaboration is the only collaboration to have achieved sensitivity to DFSZ axions in this frequency range, and has refined its approach, covering a wider portion of the expected DFSZ axion frequency space than ever before.
\section{Acknowledgements}
This work was supported by the U.S. Department of Energy through Grants No. DE-SC0009800, No. DESC0009723, No. DE-SC0010296, No. DE-SC0010280, No. DE-SC0011665, No. DEFG02-97ER41029, No. DEFG02-96ER40956, No. DEAC52-07NA27344, No. DEC03-76SF00098, and No. DE-SC0017987. Fermilab is a U.S. Department of Energy, Office of Science, HEP User Facility. Fermilab is managed by Fermi Research Alliance, LLC (FRA), acting under Contract No. DE-AC02-07CH11359. Additional support was provided by the Heising-Simons Foundation and by the Lawrence Livermore National Laboratory and Pacific Northwest National Laboratory LDRD offices. LLNL Release No. LLNL-JRNL-813347.
\newpage
\bibliographystyle{apsrev4-1}
\raggedright
The concept of Spontaneous Symmetry Breaking (SSB) is a central one in quantum physics, both in statistical
mechanics and quantum field theory and particle physics.
The definition of SSB has been well known since the mid-sixties, see
\cite{Ru}, Ch.6.5.2, as well as \cite{BR87}, Ch.4.3.4.
Recall that one starts from a state (ground or thermal), assumed to be invariant under a symmetry group
$G$, but which has a nontrivial decomposition into extremal states, which may be physically interpreted as
pure thermodynamic phases (states). The latter, however, do not exhibit invariance under $G$, but only under
a proper subgroup $H$ of $G$.
There are basically two ways of constructing extremal states: \\
(1) by a choice of boundary conditions (b.c.) for Hamiltonians $H_{\Lambda}$ in finite regions
$\Lambda \subset \mathbb{R}^d$ and then take the thermodynamic limit ($\Lambda \uparrow \mathbb{Z}^{d}$
or $\Lambda \uparrow \mathbb{R}^{d}$) of expectations over the corresponding local states; \\
(2) by replacing: $H_{\Lambda} \rightarrow H_{\Lambda} + h \, B_{\Lambda}$, where $B_{\Lambda}$ is a
suitable extensive operator and $h$ a real parameter, then by taking first $\Lambda \uparrow \mathbb{Z}^{d}$
or $\Lambda \uparrow \mathbb{R}^{d}$, and second, $h \to +0$ (or $h \to -0$). Here one assumes that the
states considered are locally normal or locally finite, see e.g. \cite{Sewell1} and references there.
The method (2) is known as Bogoliubov's \textit{quasi-averages} (q-a) method \cite{Bog07}-\cite{Bog70}.
We comment that although quite transparent, e.g. for classical lattice systems, the method of boundary
conditions is unsatisfactory for continuous systems and even more so for quantum systems.
In this paper we advocate the Bogoliubov method of quasi-averages for quantum systems.
First, we elucidate its applications to study the phase transitions with SSB, see Section \ref{sec:Bosons}.
To this aim we consider first the quantum phase transition, which is the conventional one-mode
Bose-Einstein condensation (BEC) of the \textit{perfect} Bose-gas. In this simplest case the condensation occurs
in the \textit{single} zero mode, which implies a spontaneous breaking of $G$: the gauge group of
transformations (GSB). After that, we consider the case when the condensation is dispersed over
\textit{infinitely} many modes. Our analysis of different \textit{types} of
this \textit{generalised} condensation (gBEC) demonstrates that the only physically \textit{reliable} quantities,
are those that defined by the Bogoliubov method of q-a, see Remark \ref{rem:3.3} and Theorem \ref{theo:3.4}.
We extend this analysis to \textit{imperfect} Bose-gas. As a consequence of our results, a general question
posed by Lieb, Seiringer and Yngvason \cite{LSYng} concerning the equivalence between Bose-Einstein condensation
$\rm{(BEC)}_{q-a}$ and Gauge Symmetry Breaking $\rm{(GSB)}_{q-a}$, both defined via the one-mode Bogoliubov
quasi-average, is elucidated for any {type} of {generalised} BEC (gBEC) \textit{\`{a} la} van
den Berg-Lewis-Pul\`{e} \cite{vdBLP} and \cite{BZ}, see Remark \ref{rem:3.5},
where it is also pointed out that the fact that quasi-averages lead to ergodic states clarifies an
important conceptual aspect of the quasi-average \textit{trick}.
Second, using the Bogoliubov method of q-a and taking the \textit{structural} quantum phase transition
as a basic example, we scrutinise a relation between SSB and the critical \textit{quantum fluctuations}, see
Section \ref{BQ-A-QFl}. Our analysis in Section \ref{Q-A-Cr-Q-Fl} shows that again the Bogoliubov quasi-averages
provide an adequate tool for the description of the algebra of \textit{fluctuation operators} on the critical line of
transitions. There we study both the commutative and noncommutative cases of this algebra, see
Theorem \ref{thm:5.1} and Theorem \ref{thm:5.2}.
We note here that it was Dmitry Nikolaevich Zubarev \cite{Zu70} who first indicated the
relevance of the Bogoliubov quasi-averages in the theory of non-equilibrium processes. In this case
the infinitesimal external sources serve to break the time-invariance of the Liouville equation for the
statistical operator. Although well-known in the mathematical physics as the limit-absorption principle this
approach was developed in \cite{Zu70} to many-body problems. This elegant extension is now called the
Zubarev method of a Non-equilibrium Statistical Operator \cite{Zu71}, \cite{ZMP}.
This interesting aspect of the Bogoliubov quasi-average method is out of the scope of the present paper.
\section{{Continuous boson systems}}\label{sec:Bosons}
\subsection{Conventional or generalised condensations and ODLRO} \label{sec:gBEC}
We note that the existence of \textit{generalised} Bose-condensations (gBEC) makes boson
systems more relevant for demonstrating the efficiency of the Bogoliubov quasi-averages than, e.g., spin lattice
systems. This becomes clear already at the level of the Perfect Bose-gas (PBG).
To this aim we consider first the Bose-condensation of PBG in a three-dimensional anisotropic parallelepiped
$\Lambda:= V^{\alpha_1}\times V^{\alpha_2}\times V^{\alpha_3}$, with \textit{periodic} boundary
condition (p.b.c.) and $\alpha_1 \geq \alpha_2 \geq \alpha_3$, $\alpha_1 + \alpha_2 + \alpha_3 = 1$, i.e.
the volume $|\Lambda| = V$. In the boson Fock space $\mathcal{F}_{\Lambda}:=
\mathcal{F}_{boson}(\mathcal{L}^2 (\Lambda))$ the Hamiltonian of this system for the grand-canonical ensemble
with chemical potential $\mu < 0$ is defined by :
\begin{eqnarray}\label{G-C-PBG}
H_{0,\Lambda,\mu} = T_{\Lambda} - \mu \, N_{\Lambda} =
\sum_{k \in \Lambda^{*}} (\varepsilon_{k} - \mu)\, b^{*}_{k} b_{k} \ , \ \ \ \
{\rm{dom}}(H_{0,\Lambda,\mu})= {\rm{dom}}(T_{\Lambda}) \ .
\end{eqnarray}
Here the one-particle kinetic-energy spectrum is $\{\varepsilon_{k} = k^2\}_{k \in \Lambda^{*}}$, where
the set $\Lambda^{*}$ is dual to $\Lambda$:
\begin{equation}\label{dual-Lambda}
\Lambda ^{\ast }:= \{k_{j}= \frac{2\pi }{V^{{\alpha_{j}}}}n_{j} : n_{j}
\in \mathbb{Z} \}_{j=1}^{d=3}
\ \ \ {\rm{then}} \ \ \ \varepsilon _{k}= \sum_{j=1}^{d} {k_{j}^2} \ .
\end{equation}
We denote by $b_{k}:=b(\varphi_{k}^{\Lambda})$ and $b^{*}_{k}= (b(\varphi_{k}^{\Lambda}))^{*}$ the $k$-mode
boson annihilation and creation operators in the Fock space $\mathcal{F}_{\Lambda}$. They are indexed by
the ortho-normal basis $\{\varphi_{k}^{\Lambda}(x) = e^{i k x}/\sqrt{V}\}_{k \in \Lambda^{*}}$ in
$\mathcal{L}^2 (\Lambda)$ generated by
the eigenfunctions of the self-adjoint one-particle kinetic-energy operator $(- \Delta)_{p.b.c.}$ in
$\mathcal{L}^2 (\Lambda)$. Formally these operators satisfy the Canonical Commutation Relations (CCR):
$[b_{k},b^{*}_{k'}]=\delta_{k,k'}$, for $k, k' \in \Lambda^{*}$. Then $N_k = b^{*}_{k} b_{k}$ is the
occupation-number operator of the one-particle state $\varphi_{k}^{\Lambda}$ and $N_{\Lambda} =
\sum_{k \in \Lambda^{*}} N_k$ denotes the total-number operator in $\Lambda$.
For temperature $\beta^{-1} := k_B \, T$ and chemical potential $\mu$ we denote by
$\omega_{\beta,\mu,\Lambda}^{0}(\cdot)$ the grand-canonical Gibbs state of the PBG generated by (\ref{G-C-PBG}):
\begin{equation}\label{0-state}
\omega_{\beta,\mu,\Lambda}^{0}(\cdot) = \frac{{\rm{Tr}}_{{\mathcal{F}}_{\Lambda}}
(\exp(-\beta H_{0,\Lambda,\mu})\ \cdot \ )}
{{\rm{Tr}}_{{\mathcal{F}}_{\Lambda}} \exp(-\beta H_{0,\Lambda,\mu})} \ \ .
\end{equation}
Then the problem of existence of a Bose-condensation is related to the solution of the equation
\begin{equation}\label{BEC-eq}
\rho = \frac{1}{V} \sum_{k \in \Lambda^{*}} \omega_{\beta,\mu,\Lambda}^{0}(N_k) =
\frac{1}{V} \sum_{k\in \Lambda ^{\ast}}\frac{1}{e^{\beta \left(\varepsilon_{k}-\mu \right)}-1} \ ,
\end{equation}
for a given total particle density $\rho$ in $\Lambda$. Note that by (\ref{dual-Lambda}) the thermodynamic limit
$\Lambda \uparrow \mathbb{R}^3$ in the right-hand side of (\ref{BEC-eq})
\begin{equation}\label{I}
\mathcal{I}(\beta,\mu) = \lim_{\Lambda} \frac{1}{V} \sum_{k \in \Lambda^{*}} \omega_{\beta,\mu,\Lambda}^{0}(N_k)
= \frac{1}{(2\pi)^3}\int_{\mathbb{R}^3} d^3 k \ \frac{1}{e^{\beta \left(\varepsilon_{k}-\mu \right)}-1} \ ,
\end{equation}
exists for any $\mu <0$. It reaches its (finite) maximal value $\mathcal{I}(\beta,\mu =0) = \rho_c(\beta)$,
which is called the \textit{critical} particle density for a given temperature.
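In the units of (\ref{dual-Lambda}), where $\varepsilon_{k}=k^{2}$, the integral (\ref{I}) at $\mu=0$ has the
closed form $\rho_{c}(\beta)=\zeta(3/2)/(4\pi\beta)^{3/2}$, which is easy to check numerically (a sketch in
Python):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

def rho_c(beta):
    """Closed form of I(beta, mu=0) for epsilon_k = k^2."""
    return zeta(1.5) / (4.0 * np.pi * beta)**1.5

def I_numeric(beta, mu):
    """Direct quadrature of the integral I(beta, mu), mu <= 0."""
    f = lambda k: k**2 / (np.exp(beta * (k**2 - mu)) - 1.0)
    return quad(f, 0.0, np.inf)[0] / (2.0 * np.pi**2)

# I_numeric(1.0, 0.0) and rho_c(1.0) both give ~ 0.0586
\end{verbatim}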
Recall that existence of the finite critical density $\rho_c(\beta)$ triggers (via the \textit{saturation}
mechanism) a \textit{zero-mode} {Bose-Einstein condensation} (BEC): $\rho_0(\beta) := \rho - \rho_c(\beta)$,
when the total particle density $\rho > \rho_c(\beta)$.
We note that indeed for $\alpha_1 < 1/2$ the whole condensate $\rho_0(\beta)$ sits in the
one-particle ground-state mode $k=0$:
\begin{eqnarray} \label{BEC}
&&{\rho_0} (\beta)= {\rho} - \rho _{c}(\beta) = \lim_{\Lambda} \frac{1}{V}
\omega_{\beta,\mu_{\Lambda}(\beta,\rho),\Lambda}^{0}(b^{*}_{0} b_{0})
= \lim_{\Lambda} \frac{1}{V} \frac{1}{e^{-\beta \, {\mu_{\Lambda}(\beta,\rho\geq
\rho _{c}(\beta))} }-1} \ , \\
&&{\mu_{\Lambda}(\beta,\rho\geq \rho _{c}(\beta))}= {- \, \frac{1}{V}} \ \frac{1}
{\beta(\rho -\rho _{c}(\beta))} + {o}({1}/{V}) \ , \\
&& \lim_{\Lambda} \frac{1}{V} \omega_{\beta,\mu,\Lambda}^{0}(b^{*}_{k} b_{k}) = 0 \ , \ \ k \neq 0 \ ,
\end{eqnarray}
where $\mu_{\Lambda}(\beta,\rho)$ is a unique solution of equation (\ref{BEC-eq}).
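Indeed, inserting this asymptotics into (\ref{BEC}) one checks the consistency:
$e^{-\beta \mu_{\Lambda}}-1 \simeq -\beta \mu_{\Lambda} = ({V(\rho -\rho _{c}(\beta))})^{-1}$, so that
$V^{-1}(e^{-\beta \mu_{\Lambda}}-1)^{-1} \rightarrow \rho -\rho _{c}(\beta)$, as it should be.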
Following van den Berg-Lewis-Pul\`{e} \cite{vdBLP} we introduce \textit{generalised} BEC (gBEC):
\begin{defi} \label{def:gBEC}
The total amount $\rho_{gBEC}(\beta, \mu)$ of gBEC is defined by the
\textit{double-limit}:
\begin{equation} \label{gBEC}
\rho_{gBEC}(\beta, \mu) := \lim_{\delta \rightarrow +0}\lim_{\Lambda }
\frac{1}{V}\sum_{\left\{ k\in \Lambda^{\ast }, \, \left\| k\right\|\leq \delta \right\}} \,
\omega_{\beta,\mu,\Lambda}(b^{*}_{k} b_{k}) \ .
\end{equation}
Here $\omega_{\beta,\mu,\Lambda}(\cdot)$ denotes the corresponding finite-volume grand-canonical Gibbs state.
\end{defi}
Then according to the nomenclature proposed in \cite{vdBLP}, the {zero-mode} BEC in PBG is nothing but
the generalised Bose-Einstein condensation of the \textit{type} I. Indeed, by (\ref{BEC}) and (\ref{gBEC})
a non-vanishing BEC \textit{implies} a nontrivial gBEC: $\rho_{0, gBEC}(\beta, \rho) > 0$. We denote this
{relation} as
\begin{equation}\label{BEC-gBEC}
{\rm{BEC}} \ \Rightarrow \ {\rm{gBEC}} \ .
\end{equation}
Moreover, (\ref{BEC}) and (\ref{gBEC}) yield: $\rho_{0}(\beta, \rho) = \rho_{0, gBEC}(\beta, \rho)$.
Recall that one also has BEC $\Rightarrow$ ODLRO, which is the \textit{Off-Diagonal-Long-Range-Order} for the
boson field
\begin{equation}\label{b-field}
b(x) = \sum_{k \in \Lambda^{*}} b_{k} {\varphi_{k}^{\Lambda}}(x) \ .
\end{equation}
Indeed, by definition of ODLRO \cite{Ver} the value of the off-diagonal \textit{spatial} correlation
$LRO(\beta,\rho)$ of the Bose-field is:
\begin{equation}\label{PBG-ODLRO}
LRO(\beta,\rho) = \lim_{\|x-y\|\rightarrow\infty}\lim_{\Lambda} \omega_{\beta,\mu_{\Lambda},\Lambda}^{0}
(b^*(x) \ b(y))=
\lim_{\Lambda} \omega_{\beta,\mu_{\Lambda},\Lambda}^{0}(\frac{b^{*}_{0}}{\sqrt{V}}\frac{b_{0}}{\sqrt{V}}) =
\rho_{0}(\beta,\rho) \ .
\end{equation}
Hence, (\ref{PBG-ODLRO}) coincides with the correlation of the \textit{zero-mode} spatial \textit{averages}
of the local observable (\ref{b-field}).
We recall that the $p$-mode \textit{spatial average} $\eta_{\Lambda,p}(b)$ of (\ref{b-field})
is equal to
\begin{equation}\label{b-p-mode-av}
\eta_{\Lambda,p}(b):= \frac{1}{V}\int_{\Lambda}dx \, b(x) \, e^{- i \, p x} = \frac{b_p}{\sqrt{V}} \ ,
\ p \in \Lambda^{*} \ .
\end{equation}
As is known, for the PBG the value $LRO(\beta,\rho)$ of ODLRO {coincides} with the BEC (i.e., also with the
\textit{type} I gBEC) condensate density $\rho_{0}(\beta,\rho)$ \cite{vdBLP}.
To appreciate the relevance of gBEC versus quasi-averages we study a more anisotropic thermodynamic limit:
$\alpha_1 = 1/2$, known as the \textit{Casimir} box. Then one observes a macroscopic occupation of
infinitely many states, which is known as the gBEC of \textit{type} II defined by (\ref{gBEC}).
The total amount $\rho_{0}(\beta,\rho)$ of this condensate is asymptotically distributed among
infinitely many low-energy microscopic states $\{\varphi_{k}^{\Lambda}\}_{k \in \Lambda^{*}}$ in such a
way that
\begin{eqnarray} \label{cond-II}
\rho_{0}(\beta,\rho)={\rho} - \rho _{c}(\beta) &=& \lim_{\delta \rightarrow +0}\lim_{\Lambda }
\frac{1}{V}\sum_{\left\{ k\in \Lambda^{\ast }, \, \left\| k\right\|
\leq \delta \right\}}\left\{e^{\beta(\varepsilon_{k}- {\mu_{\Lambda}(\beta,\rho)})}- 1
\right\}^{-1} \\
&=& \sum_{n_1\in \mathbb{Z}}\frac{1}{(2\pi n_1)^2/2 + A} \ , \ \ {\rho} > \rho _{c}(\beta) \ . \nonumber
\end{eqnarray}
Here the parameter $A = A(\beta, \rho)\geq0$ is a {\textit{unique} root} of equation (\ref{cond-II}).
Then the amount of the zero-mode condensate BEC is:
\begin{eqnarray*}
\lim_{\Lambda} \frac{1}{V}\omega_{\beta,\mu,\Lambda}^{0}({b^{*}_{0} b_{0}}) = (A(\beta, \rho))^{-1} \ .
\end{eqnarray*}
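The root $A(\beta,\rho)$ of (\ref{cond-II}) is readily computed from the closed form
$\sum_{n\in\mathbb{Z}}\left(2\pi^{2}n^{2}+A\right)^{-1}= \coth(\pi x)/(2\pi x)$ with
$x=\sqrt{A}/(\sqrt{2}\,\pi)$, e.g. (a numerical sketch; the root bracket is an ad hoc choice):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def solve_A(rho0):
    """Unique root A of the type II condensate equation,
    rho0 = rho - rho_c(beta) > 0."""
    def lhs(A):
        x = np.sqrt(A / (2.0 * np.pi**2))
        return 1.0 / (2.0 * np.pi * x * np.tanh(np.pi * x))
    return brentq(lambda A: lhs(A) - rho0, 1e-10, 1e10)

# zero-mode fraction of the condensate: 1.0 / solve_A(rho0)
\end{verbatim}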
Note that in contrast to the case of \textit{type} I, the \textit{zero-mode} BEC $(A(\beta, \rho))^{-1}$ is
\textit{smaller} than gBEC of the \textit{type} II (\ref{cond-II}). Therefore, the relation between BEC and
gBEC is nontrivial.
To elucidate this point, we consider $\alpha_1 > 1/2$ (the \textit{van den Berg-Lewis-Pul\`{e}} box \cite{vdBLP}).
Then one obtains
\begin{equation}\label{BEC=0}
\lim_{\Lambda} \omega_{\beta,\mu,\Lambda}^{0}(\frac{b^{*}_{k} b_{k}}{V}) =
\lim_{\Lambda}\frac{1}{V}\left\{e^{\beta(\varepsilon_{k}-{\mu_{\Lambda}(\beta,\rho)})}-
1 \right\}^{-1} = 0 \ , \ \forall k \in \Lambda^{*} \ ,
\end{equation}
i.e., there is no macroscopic occupation of \textit{any} mode $k \in \Lambda^{*}$ for any value of particle
density $\rho$. So, the density of the zero-mode BEC is zero, but the gBEC (called the \textit{type} III) does
exist in the same sense as it is defined by (\ref{gBEC}):
\begin{equation}\label{cond-III}
\rho -\rho_{c}(\beta)= \lim_{\delta \rightarrow +0}\lim_{\Lambda }
\frac{1}{V}\sum_{\left\{ k\in \Lambda^{\ast }, \left\| k\right\|
\leq \delta \right\}}\left\{e^{\beta(\varepsilon_{k}- {\mu_{\Lambda}(\beta,\rho)})}- 1
\right\}^{-1} > 0 , \ {\rm{for}} \ \ \rho > \rho_{c}(\beta) \ ,
\end{equation}
with the \textit{same} amount of the total density as that for types I and II.
We note that even for PBG the calculation of the ODLRO for the case of \textit{type} II and \textit{type} III
gBEC is a nontrivial problem. This concerns, in particular, a regime when
there exists the \textit{second} critical density $\rho_{m}(\beta)>\rho_{c}(\beta)$ separating different
types of gBEC, see \cite{vdBLL} and \cite{BZ}. It is also clear that the zero-mode BEC is a more
\textit{restrictive} concept than the gBEC.
We comment that the fact that gBEC is different from BEC is \textit{not} exclusively due to a special
anisotropy: $\alpha_1 > 1/2$,
or other geometries for the PBG, see \cite{BZ}. In fact the same phenomenon of the (\textit{type} III) gBEC
occurs due to repulsive \textit{interaction}. A simple example is the model with Hamiltonian \cite{ZBru}:
\begin{equation}\label{Int-TypeIII}
H_{\Lambda }= {\sum_{k\in \Lambda^{*}} }\varepsilon_{k}b_{k}^{*}b_{k}+
\frac{a}{2V}{\sum_{k\in\Lambda^{*}}} b_{k}^{*}b_{k}^{*}b_{k}b_{k}\ , \ \text{ } a>0 \ .
\end{equation}
Summarising, we note that the concept of gBEC (\ref{gBEC}) covers the cases (e.g. (\ref{Int-TypeIII}))
in which the calculation of conventional BEC gives a trivial value: gBEC $\nRightarrow$ BEC, cf. (\ref{BEC-gBEC}).
We also conclude that relations between BEC, gBEC and ODLRO are a subtle matter. This
motivates and bolsters the relevance of the Bogoliubov {quasi-average method} \cite{Bog07}-\cite{Bog70},
which we also consider in connection with the Spontaneous Symmetry Breaking (SSB) of
gauge invariance for the Gibbs states. We refer to the SSB of gauge invariance as Gauge Symmetry Breaking
(GSB).
\subsection{Condensates, Bogoliubov quasi-averages and pure states} \label{sec:gBEC-BQ-A}
We now study the states of boson systems and, to that end, assume (see \cite{Ver}, Ch.4.3.2) that they
are analytic in the sense of \cite{BR97}, Ch.5.2.3.
We start with the Hamiltonian for Bosons in a cubic box $\Lambda\subset \mathbb{R}^3$ of side $L$ with p.b.c.
and volume $V=L^{3}$:
\begin{equation}\label{4.1}
H_{\Lambda,\mu} = H_{0,\Lambda,\mu} + V_{\Lambda} \ ,
\end{equation}
where the interaction term has the form
\begin{equation}\label{4.2}
V_{\Lambda} = \frac{1}{2V} \sum_{{k},{p},{q}\in \Lambda^*}\,
\nu({p})b_{{k}+{p}}^{*}b_{{q}-{p}}^{*}b_{{q}} b_{{k}} \ ,
\end{equation}
where $\nu$ is the Fourier transform in $\mathbb{R}^3$ of the two-body potential
$v({x})$, with bound
\begin{equation}\label{4.3}
|\nu({k})| \le \nu({0}) < \infty \ .
\end{equation}
We define the group $G$ of (global) \textit{gauge} transformations $\{\tau_{s}\}_{s \in [0,2\pi)}$
by the Bogoliubov canonical mappings of CCR:
\begin{eqnarray}\label{4.17}
&& \tau_{s}(b^{*}(f)) = b^{*}( \exp(i \, s) f) = \exp(i \, s)b^{*}(f) \ , \\
&& \tau_{s}(b(f)) = b( \exp(i \, s) f) = \exp(-i \, s)b(f) \ , \nonumber
\end{eqnarray}
where $b^{*}(f)$ and $b(f)$ are the creation and annihilation operators smeared over test-functions $f$
from the Schwartz space. Note that for $f=\varphi_{k}^{\Lambda}$ they coincide with $b^{*}_{k}, b_{k}$, cf
(\ref{b-field}), and $\tau_{s} (\cdot) = \exp(i \, s N_{\Lambda}) (\cdot) \exp(- i \, s N_{\Lambda})$,
see (\ref{G-C-PBG}). By definition (\ref{4.17}) and by virtue of (\ref{G-C-PBG}), (\ref{4.2})
the Hamiltonian (\ref{4.1}) is gauge-invariant:
\begin{equation}\label{4.31}
H_{\Lambda,\mu} = e^{i \, s N_{\Lambda}} H_{\Lambda,\mu} e^{- i \, s N_{\Lambda}} \ .
\end{equation}
Note that the property (\ref{4.31}) evidently implies the \textit{gauge-invariance} of the Gibbs state
(\ref{0-state}) as well as that for Hamiltonian (\ref{4.1}) of imperfect Bose-gas:
\begin{equation}\label{4.32}
\omega_{\beta,\mu,\Lambda}(\cdot) = \omega_{\beta,\mu,\Lambda}(\tau_{s}(\cdot)) =
\frac{{\rm{Tr}}_{{\mathcal{F}}_{\Lambda}}
(\exp(-\beta H_{\Lambda,\mu})\ \tau_{s}(\cdot) \ )}
{{\rm{Tr}}_{{\mathcal{F}}_{\Lambda}} \exp(-\beta H_{\Lambda,\mu})} \ \ .
\end{equation}
Symmetry (\ref{4.32}) is a source of \textit{selection} rules. For example:
\begin{equation}\label{4.33}
\omega_{\beta,\mu,\Lambda}(A_{n,m}) = 0 \ , \ {\rm{for}} \ \
A_{n,m}=\prod_{i=1,\, j=1}^{n,\, m} b^{*}_{k_i} b_{k_j} \ , \ \ {\rm{if}} \ \ n \neq m \ .
\end{equation}
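Indeed, combining (\ref{4.32}) with (\ref{4.17}) one finds
\begin{equation*}
\omega_{\beta,\mu,\Lambda}(A_{n,m}) = \omega_{\beta,\mu,\Lambda}(\tau_{s}(A_{n,m})) =
e^{i \, s (n-m)} \, \omega_{\beta,\mu,\Lambda}(A_{n,m}) \ , \ \ s \in [0,2\pi) \ ,
\end{equation*}
which forces $\omega_{\beta,\mu,\Lambda}(A_{n,m}) = 0$ whenever $n \neq m$.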
The \textit{quasi}-Hamiltonian corresponding to (\ref{4.1}) with \textit{gauge symmetry} breaking
sources is taken to be
\begin{equation}\label{4.6}
H_{\Lambda,\mu,\lambda_{\phi}} = H_{\Lambda,\mu} + H_{\Lambda}^{\lambda_{\phi}} \ .
\end{equation}
Here the sources are switched on only in zero mode ($k=0$):
\begin{equation}\label{4.7}
H_{\Lambda}^{\lambda_{\phi}} = \sqrt{V}(\bar{\lambda}_{\phi} b_{{0}}+\lambda_{\phi} b_{{0}}^{*}) \ ,
\end{equation}
for
\begin{equation}\label{4.8}
\lambda_{\phi} = \lambda \exp(i\phi) \ \mbox{ with } \lambda \geq 0 \, , \,
\mbox{ where } \, {\rm{arg}}(\lambda_{\phi}) = \phi \in [0,2\pi) \ .
\end{equation}
In this case the corresponding Gibbs state is \textit{not} gauge-invariant (\ref{4.33}) since, for example
\begin{equation}\label{4.34}
\omega_{\beta,\mu,\Lambda,\lambda_{\phi}}(b_{{k}}) =
\frac{{\rm{Tr}}_{{\mathcal{F}}_{\Lambda}}
(\exp(-\beta H_{\Lambda,\mu,\lambda_{\phi}})\ b_{{k}} \ )}
{{\rm{Tr}}_{{\mathcal{F}}_{\Lambda}} \exp(-\beta H_{\Lambda,\mu,\lambda_{\phi}})} \neq 0
\ \ {\rm{for}} \ \ k=0 \ .
\end{equation}
The GSB of the state (\ref{4.34}), which is induced by the sources in (\ref{4.6}), persists in the thermodynamic
limit for the state $\omega_{\beta,\mu,\lambda_{\phi}}(\cdot):=
\lim_{V \rightarrow \infty}\omega_{\beta,\mu,\Lambda,\lambda_{\phi}}(\cdot)$. But it may also occur in this
limit spontaneously, without external sources. Let us denote
\begin{equation}\label{4.431}
\omega_{\beta,\mu}(\cdot):=
\lim_{V \rightarrow \infty}\omega_{\beta,\mu,\Lambda,\lambda_{\phi}=0}(\cdot) \ .
\end{equation}
\begin{defi} \label{SSB}
We say that the state $\omega_{\beta,\mu}$ undergoes a spontaneous breaking
of the $G$-invariance (\textit{spontaneous} Gauge Symmetry Breaking (GSB)), if:\\
(i) $\omega_{\beta,\mu}$ is $G$-invariant,\\
(ii) $\omega_{\beta,\mu}$ has a nontrivial decomposition into ergodic states $\omega_{\beta,\mu}^{'}$,
which means that at least two such distinct states occur in representation
\begin{equation*}
\omega_{\beta,\mu}(\cdot) = \int_{0}^{2\pi} d\nu(s) \ \omega_{\beta,\mu}^{'}(\tau_{s} \ \cdot)\ ,
\end{equation*}
and for some $s$
\begin{equation*}
\omega_{\beta,\mu}^{'}(\tau_{s}\cdot) \ne \omega_{\beta,\mu}^{'}(\cdot) \ .
\end{equation*}
Note that ergodic states are characterized by the \textit{clustering} property,
which implies a decorrelation of the zero-mode spatial averages (\ref{b-p-mode-av}) for the PBG,
as well as in general for the imperfect Bose gas.
\end{defi}
We take initially $\lambda \geq 0$ and consider first the perfect Bose-gas (\ref{G-C-PBG}) to define
the Hamiltonian
\begin{equation}\label{4.9.1}
H_{0,\Lambda, \mu, \lambda_{\phi}} = H_{0,\Lambda,\mu} + H_{\Lambda}^{\lambda_{\phi}} \ ,
\end{equation}
which is \textit{not} globally gauge-invariant. To separate the symmetry-breaking term $H_{{0}}$ we rewrite
(\ref{4.9.1}) as
\begin{equation*}
H_{0,\Lambda,\mu,\lambda_{\phi}}= H_{{0}}+H_{{k}\ne{0}} \ ,
\end{equation*}
where $H_{{0}} = -\mu \ b_{{0}}^{*}b_{{0}}+\sqrt{V}(\bar{\lambda}_{\phi} b_{{0}}+
\lambda_{\phi} b_{{0}}^{*}) = -\mu (b_{{0}} - \sqrt{V}{\lambda}_{\phi}/\mu)^{*}
(b_{{0}} - \sqrt{V}{\lambda}_{\phi}/\mu) + V |{\lambda}_{\phi}|^2/\mu $.
Recall that for the perfect Bose-gas the grand-canonical partition function $\Xi_{0,\Lambda}$ splits into
a product over the zero mode and the remaining modes. We introduce the canonical shift transformation
\begin{equation}\label{4.9.2}
\widehat{b}_{{0}} := b_{{0}} - \frac{\lambda_{\phi} \sqrt{V}}{\mu} \ ,
\end{equation}
without altering the nonzero modes. Since $\mu < 0$, we thus obtain for the grand-canonical partition
function $\Xi_{0,\Lambda}$,
\begin{equation}\label{4.9.3}
\Xi_{0,\Lambda}(\beta,\mu,\lambda_{\phi}) = (1-\exp(\beta \mu))^{-1}
\exp(-\frac{\beta |\lambda_{\phi}|^{2}}{\mu} V)\ \Xi^{\prime}_{0,\Lambda}(\beta,\mu) \ ,
\end{equation}
where
\begin{equation}\label{4.9.4}
\Xi^{\prime}_{0,\Lambda}(\beta,\mu) := \prod_{{k} \ne {0}} (1-\exp(-\beta(\epsilon_{{k}}-\mu)))^{-1} \ ,
\end{equation}
with $\epsilon_{{k}}={k}^{2}$. Recall that the grand-canonical state for the perfect Bose-gas is
\begin{equation}\label{4.9.5}
\omega^{0}_{\beta,\mu,\Lambda,\lambda_{\phi}}(\cdot):= {\frac{1}{\Xi_{0,\Lambda}(\beta,\mu,\lambda_{\phi})}} \
{\rm{Tr}}_{{\mathcal{F}}_{\Lambda}}[e^{-\beta H_{0,\Lambda,\mu,\lambda_{\phi}}} \ (\cdot )] \ .
\end{equation}
see Section \ref{sec:gBEC}. Then it follows from (\ref{4.9.3})-(\ref{4.9.5}) that the mean density ${\rho}$
is equal to
\begin{equation}\label{4.9.6}
{\rho}=\omega^{0}_{\beta,\mu,\Lambda,\lambda_{\phi}}(\frac{N_{\Lambda}}{V})=
\frac{|\lambda_{\phi}|^{2}}{\mu^{2}} + \frac{1}{V}\frac{1}{\exp(-\beta \mu)-1} +
\frac{1}{V} \sum_{{k} \ne {0}} \frac{1}{\exp(\beta(\epsilon_{{k}}-\mu))-1} \ .
\end{equation}
Equation (\ref{4.9.6}) is the starting point of our analysis. Since the critical density
$\rho_{c}(\beta)= \mathcal{I}(\beta,\mu =0)$ is finite (\ref{I}), we have the following statement.
\begin{proposition}\label{prop:4.1}
Let $0 < \beta <\infty$ be fixed. Then, for each
\begin{equation}\label{4.11}
{\rho_{c}}(\beta) < {\rho} < \infty \ ,
\end{equation}
and for each $\lambda >0$, $V <\infty$, there exists a unique solution of (\ref{4.9.6}) of the form
\begin{equation}
\mu_{\Lambda}({\rho},|\lambda_{\phi}|) = -\frac{|\lambda_{\phi}|}{\sqrt{{\rho}-{\rho_{c}}(\beta)}}
+ \alpha(|\lambda_{\phi}|,V) \ ,
\label{4.12.1}
\end{equation}
with
\begin{equation}\label{4.12.2}
\alpha(|\lambda_{\phi}|,V) \ge 0 \ \ \forall \ |\lambda_{\phi}|, V \ ,
\end{equation}
and such that
\begin{equation}\label{4.13}
\lim_{|\lambda_{\phi}| \to 0} \lim_{V \to \infty} \frac{\alpha(|\lambda_{\phi}|,V)}{|\lambda_{\phi}|} = 0 \ .
\end{equation}
\end{proposition}
\begin{rem}\label{rem:4.2} The proof of this statement is straightforward and follows from equation
(\ref{4.9.6}). {We also note that besides the cube $\Lambda$, the Proposition \ref{prop:4.1} is also true
for the case of three-dimensional anisotropic parallelepiped
$\Lambda:= V^{\alpha_1}\times V^{\alpha_2}\times V^{\alpha_3}$, with p.b.c. and
$\alpha_1 \geq \alpha_2 \geq \alpha_3$, $\alpha_1 + \alpha_2 + \alpha_3 = 1$, i.e. when for $\lambda =0$
one has \textit{type} II or \textit{type} III condensation.}
\end{rem}
Since $|\lambda_{\phi}|^{2} = \lambda_{\phi}\bar{\lambda}_{\phi} = \lambda^{2}$, we obtain that the limit of the
expectation
\begin{equation}\label{4.14.1}
\lim_{\lambda \to +0} \lim_{V \to \infty}\omega^{0}_{\beta,\mu,\Lambda,\lambda_{\phi}}(b_{{0}}^{*}/\sqrt{V})=
- \lim_{\lambda \to +0} \lim_{V \to \infty} \frac{\partial}{\partial \lambda_{\phi}}
p_{\beta,\mu,\Lambda,\lambda_{\phi}} \ ,
\end{equation}
is related to the derivative of the \textit{grand-canonical pressure} with respect to the breaking-symmetry sources
(\ref{4.6}):
\begin{equation}\label{4.14.2}
p_{\beta,\mu,\Lambda,\lambda_{\phi}}:=\frac{1}{\beta V}\ln \Xi_{0,\Lambda}(\beta,\mu,\lambda_{\phi}) \ .
\end{equation}
Recall that the left-hand side of (\ref{4.14.1}) is in fact the Bogoliubov quasi-average of
$b_{{0}}^{*}/\sqrt{V}$.
By (\ref{4.9.3}) and (\ref{4.14.2})
we obtain that
\begin{equation}\label{4.14.3}
\frac{\partial}{\partial \lambda_{\phi}} p_{\beta,\mu,\Lambda,\lambda_{\phi}}=
-\frac{\bar{\lambda}_{\phi}}{\mu} \ .
\end{equation}
Since for a given $\rho$ the asymptotic of the chemical potential is (\ref{4.12.1}),
by (\ref{4.14.1}) and (\ref{4.14.3}) one gets
\begin{equation}\label{4.15.1}
\lim_{\lambda \to +0} \lim_{V \to \infty}\omega^{0}_{\beta,\mu,\Lambda,\lambda_{\phi}}(b_{{0}}^{*}/\sqrt{V})
= \sqrt{\rho_{{0}}(\beta,\rho)} \exp(- i\phi) \ ,
\end{equation}
where according to (\ref{4.9.6}) and (\ref{4.14.3})
\begin{equation*}
\rho_{{0}}(\beta,\rho) = {\rho}-{\rho_{c}}(\beta, \rho) \ ,
\end{equation*}
is the perfect Bose-gas condensation in \textit{zero mode}. We see therefore that the phase in (\ref{4.14.1})
remains in (\ref{4.15.1}) even after the limit $\lambda \to +0$.
In \cite{LSYng} the following definition of $\rm{(GSB)}_{q-a}$ was suggested in the more general framework
of the imperfect Bose gas that we consider later:
\begin{defi} \label{SSBq-a}
We say that the state $\omega_{\beta,\mu,\Lambda,\lambda_{\phi}}$ undergoes a \textit{spontaneous} Gauge
Symmetry Breaking $\rm{(GSB)}_{q-a}$ in the Bogoliubov q-a sense if the limit state (\ref{4.431})
remains gauge-invariant, whereas the state
\begin{equation}\label{GSB-q-a}
\omega_{\beta,\mu,\phi} (\cdot) := \lim_{\lambda \to +0}
\lim_{V \to \infty}\omega_{\beta,\mu,\Lambda,\lambda_{\phi}} (\cdot) \ ,
\end{equation}
is not gauge-invariant and $\omega_{\beta,\mu,\phi} \neq \omega_{\beta,\mu,\phi^{\prime}}$, when
$\phi \neq \phi^{\prime}$.
\end{defi}
We note that $\rm{(GSB)}_{q-a}$ is {equivalent} to $\rm{(GSB)}$, i.e. to
Definition \ref{SSB}, where the ergodic states $\omega_{\beta,\mu}^{'}$ in (ii) coincide with the set
of $\omega_{\beta,\mu,\phi}$ in (\ref{GSB-q-a}), see Theorem \ref{theo:3.4} below.
The notion $\rm{(GSB)}_{q-a}$ is, however, useful for purposes of comparison with \cite{LSYng}.
\begin{rem}\label{rem:4.3} Note that by (\ref{4.9.6}) together with Proposition \ref{prop:4.1} and
(\ref{4.15.1}) one gets
\begin{eqnarray}\label{4.15.11}
&&\rho_{{0}}(\beta,\rho) = \lim_{\lambda \to +0} \lim_{V \to \infty}
\omega^{0}_{\beta,\mu,\Lambda,\lambda_{\phi}}(\frac{b_{{0}}^{*}}{\sqrt{V}}\frac{b_{{0}}}{\sqrt{V}}) = \\
&&\lim_{\lambda \to +0} \lim_{V \to \infty}\omega^{0}_{\beta,\mu,\Lambda,\lambda_{\phi}}(b_{{0}}^{*}/\sqrt{V})
\lim_{\lambda \to +0} \lim_{V \to \infty}\omega^{0}_{\beta,\mu,\Lambda,\lambda_{\phi}}(b_{{0}}/\sqrt{V}) \ .
\nonumber
\end{eqnarray}
Besides \textit{decorrelation} of the zero-mode \textit{spatial} averages
$\eta_{\Lambda, 0}(b^*) = b_{{0}}^{*}/\sqrt{V}$ and $\eta_{\Lambda, 0}(b) = b_{{0}}/\sqrt{V}$,
(\ref{b-p-mode-av}), for the Bogoliubov q-a, equation (\ref{4.15.11}) establishes \textit{also}
the identity between zero-mode condensation fraction $\rho_{{0}}(\beta,\rho)$ and
$LRO(\beta,\rho)$ (\ref{PBG-ODLRO}), that we denote by $\rm{(ODLRO)}_{q-a}$.
Decorrelation in the right-hand side of (\ref{4.15.11}) indicates for the Bogoliubov q-a \textit{a nontrivial}
$\rm{(GSB)}_{q-a}$ in the presence of condensate, see (\ref{4.15.1}) and Definition \ref{SSBq-a}.
\end{rem}
Remarks \ref{rem:4.2} and \ref{rem:4.3} motivate the definition of the q-a \textit{states} for the perfect Bose-gas
as follows:
\begin{equation}\label{PBG-LimSt-phi}
\omega^{0}_{\beta,\mu,\phi} := \lim_{\lambda \to +0}
\lim_{V \to \infty}\omega^{0}_{\beta,\mu,\Lambda,\lambda_{\phi}} \ ,
\end{equation}
where the double limit along a subnet $\Lambda \uparrow \mathbb{R}^3$ exists by weak* compactness of the
set of states \cite{BR87}.
Below we use notation $\omega$ for the Gibbs state in general case (\ref{4.1})-(\ref{4.3}) and we keep
$\omega^{0}$ for the perfect Bose-gas.
\begin{defi}\label{defi:2.1}
We recall that a Bose-gas undergoes the \textit{zero}-mode BEC if
\begin{equation}\label{4.16}
\lim_{V \to \infty} \frac{1}{V}\omega_{\beta,\mu,\Lambda}({b_{{0}}^{*}b_{{0}}}) =
\lim_{V \to \infty} \frac{1}{V^2} \int_{\Lambda} \int_{\Lambda} dx \, dy \,
\omega_{\beta,\mu,\Lambda}(b^*(x)b(y)) > 0 \ .
\end{equation}
Simultaneously, this means a non-trivial correlation (\ref{PBG-ODLRO})
\begin{equation}\label{4.161}
\lim_{\|x-y\| \to \infty} \lim_{V \to \infty} \omega_{\beta,\mu,\Lambda}(b^*(x)b(y)) > 0 \ ,
\end{equation}
of zero-mode spatial averages (\ref{b-p-mode-av}), which we denoted by ODLRO.
\end{defi}
As we demonstrated in Section \ref{sec:gBEC}, even for the PBG this definition is too restrictive, since
(\ref{4.16}) might be trivial, although condensation does exist because of a finite critical density
$\rho_{c}(\beta,\mu)$.
We say that a Bose-gas undergoes gBEC (Definition \ref{def:gBEC}) if
\begin{equation}\label{4.17g}
\lim_{\delta \rightarrow +0}\lim_{\Lambda } \frac{1}{V}\sum_{\left\{ k\in \Lambda^{\ast }, \,
\left\| k\right\| \leq \delta \right\}} \omega_{\beta,\mu,\Lambda}({b_{{k}}^{*}b_{{k}}}) =
\rho -\rho_{c}(\beta,\mu) > 0 \ .
\end{equation}
To classify different \textit{types} of the gBEC one has to consider the value of the limits:
\begin{equation}\label{4.18}
\lim_{\Lambda } \frac{1}{V}\ \omega_{\beta,\mu,\Lambda}({b_{{k}}^{*}b_{{k}}}) =: \rho_k \ , \
k\in \Lambda^{\ast } \ .
\end{equation}
Then according to Section \ref{sec:gBEC}, one has $\rho_{k=0} = \rho -\rho_{c}$
for the type I gBEC, $\rho_{k=0} < \rho -\rho_{c}$ for the type II gBEC. If one has
$\{\rho_k = 0\}_{k\in \Lambda^{\ast }}$ and non-trivial (\ref{4.17g}), then the gBEC is of the type III.
\begin{defi}\label{defi:2.3}
We say that Bose-gas undergoes Bogoliubov quasi-average {condensation} $\rm{(BEC)}_{q-a}$ if
\begin{equation}\label{4.19}
\lim_{\lambda \to +0} \lim_{V \to \infty}
\omega_{\beta,\mu,\Lambda,\lambda_{\phi}}(\frac{b_{{0}}^{*}}{\sqrt{V}}\frac{b_{{0}}}{\sqrt{V}}) > 0 \ .
\end{equation}
\end{defi}
\begin{rem}\label{rem:3.3}
First, the results of Remark \ref{rem:4.3} are \textit{independent} of the anisotropy, i.e. of whether the
condensation for $\lambda =0$ is in single mode ($k=0$) (i.e. BEC) or it is extended as the gBEC-type III,
Section \ref{sec:gBEC}. We comment that the condensate in the mode $k=0$ is due to the one-particle Hamiltonian
\textit{spectral property} that implies $\varepsilon_{k=0}=0$ (\ref{dual-Lambda}).
Second, these results yield that the Bogoliubov quasi-average method solves for PBG the
question about \textit{equivalence} between $\rm{(BEC)}_{q-a}$, $\rm{(GSB)}_{q-a}$ and $\rm{(ODLRO)}_{q-a}$:
\begin{equation}\label{PBG-qa-equiv}
\rm{(BEC)}_{q-a} \Leftrightarrow \rm{(ODLRO)}_{q-a} \Leftrightarrow \rm{(GSB)}_{q-a} \ ,
\end{equation}
which holds if they are defined via the \textit{one-mode} quasi-average for $k=0$.
Here equivalence $\Leftrightarrow$ means implications in both sense.
\end{rem}
Then the quasi-average for $k\neq 0$, i.e. for $\varepsilon_{k} > 0$, needs a certain elucidation.
To this aim we revisit the perfect Bose-gas (\ref{G-C-PBG}) with symmetry breaking sources (\ref{4.7})
in a single mode $q \in \Lambda^{*}$, which is in general not a zero-mode:
\begin{eqnarray}\label{freeQE}
H^{0}_{\Lambda} (\mu; h) \, := \, H^{0}_{\Lambda}(\mu) \, + \, \sqrt{V} \ \big( \overline{h} \ b_{{q}} +
h \ b^{*}_{{q}} \big) \ , \ \mu \leq 0.
\end{eqnarray}
Then for a fixed density ${\rho}$, the grand-canonical condensate equation (\ref{BEC-eq}) for (\ref{freeQE})
takes the following form:
\begin{eqnarray}\label{perfect-gas-with-source-density-equation-finite-volume}
&&{\rho} = \rho_{\Lambda}(\beta, \mu, h) \, := \, \frac{1}{V} \sum_{k \in \Lambda^{*}}
\omega_{\beta,\mu,\Lambda,h}^{0}(b^{*}_{k}b_{k}) = \\
&&\frac{1}{V} (e^{\beta(\varepsilon_{{q}} - \mu)}-1)^{-1} \, + \, \frac{1}{V}
\sum_{k\in \Lambda^{*}\setminus\{{q}\}} \frac{1}{e^{\beta(\varepsilon_{k} - \mu)}-1} \, + \,
\frac{\vert h \vert\, ^{2}}
{(\varepsilon_{{q}} - \mu)\, ^{2}} \ . \nonumber
\end{eqnarray}
According to the quasi-average method, to investigate a possible condensation, one must first take the thermodynamic
limit in the right-hand side of (\ref{perfect-gas-with-source-density-equation-finite-volume}), and then switch
off the symmetry breaking source: $h \rightarrow 0$. Recall that the critical density, which defines the
threshold of boson saturation is equal to $\rho_c(\beta) = \mathcal{I}(\beta,\mu=0)$ (\ref{I}), where
$\mathcal{I}(\beta,\mu)=\lim_{\Lambda} \rho_{\Lambda}(\beta, \mu , h = 0)$.
Since $\mu \leq 0$, we now have to distinguish two cases:\\
(i) Let the mode ${q}\in \Lambda^{*}$ be such that $\lim_{\Lambda} \varepsilon_{{q}} > 0$. Then we obtain from
(\ref{perfect-gas-with-source-density-equation-finite-volume}) for the condensate equation and for the simplest
$q$-mode gauge-symmetry breaking expectation:
\begin{eqnarray*}
{\rho} \, = \, \lim_{h \rightarrow 0}
\lim_{\Lambda} \rho_{\Lambda}(\beta, \mu, h) \, = \, \mathcal{I}(\beta, \mu) \ , \ \
\lim_{h \to 0} \lim_{V \to \infty}
\omega_{\beta,\mu,\Lambda,h}^{0}(\frac{b_{{q}}^{*}}{\sqrt{V}}) =
\lim_{h \to 0} \frac{ \overline{h} } {(\varepsilon_{{q}} - \mu)\,} = 0
\ .
\end{eqnarray*}
This means that the quasi-average coincides with the average. Hence, we return to the analysis of the condensate
equation (\ref{perfect-gas-with-source-density-equation-finite-volume}) for $h =0$. This leads to finite-volume
solutions $\mu_{\Lambda}(\beta,\rho)$ and consequently to all possible types of condensation as a function of
anisotropy $\alpha_1$, see Section \ref{sec:gBEC} for details.\\
(ii) On the other hand, if ${q}\in \Lambda^{*}$ is such that $\lim_{\Lambda} \varepsilon_{{q}} = 0$, then
thermodynamic limit in the right-hand side of the condensate equation
(\ref{perfect-gas-with-source-density-equation-finite-volume}) and the
$q$-mode gauge-symmetry breaking expectation yield:
\begin{eqnarray}\label{perfect-gas-with-source-density-equation-infinite-volume}
{\rho} = \lim_{\Lambda} \rho_{\Lambda}(\beta, \mu, h)
\, = \, \mathcal{I}(\beta, \mu) + \frac{\vert h \vert\, ^{2}}{\mu\, ^{2}} \ , \ \
\lim_{V \to \infty}
\omega_{\beta,\mu,\Lambda,h}^{0}(\frac{b_{{q}}^{*}}{\sqrt{V}}) =
\frac{ \overline{h} } {(- \mu)\,} \ .
\end{eqnarray}
If ${\rho} \leq \rho_{c}(\beta)$, then the limit of the solution of
(\ref{perfect-gas-with-source-density-equation-infinite-volume}):
$\lim_{h \rightarrow 0}{\mu}(\beta, {\rho}, h) = {\mu}_{0} (\beta, {\rho}) <0$,
where ${\mu}(\beta,{\rho}, h)= \lim_{\Lambda}{\mu}_{\Lambda} (\beta,{\rho}, h)<0 $ is the thermodynamic limit of
the finite-volume solution of the condensate equation (\ref{perfect-gas-with-source-density-equation-finite-volume}).
Therefore, there is no condensation in any mode and according to
(\ref{perfect-gas-with-source-density-equation-infinite-volume}) the corresponding $q$-mode gauge-symmetry
breaking expectation for $h \to 0$ (Bogoliubov quasi-average) is again equal to zero.
But if ${\rho} > \rho_{c}(\beta)$, then (\ref{perfect-gas-with-source-density-equation-finite-volume})
yields that $\lim_{h \rightarrow 0}{\mu}(\beta, {\rho},h) =0$. Therefore, by
(\ref{perfect-gas-with-source-density-equation-infinite-volume}) the density of condensate and the
Bogoliubov quasi-average are
\begin{eqnarray}\label{BEC-qa}
&& \rho_{0}(\beta) = {\rho} - \rho_{c}(\beta) =
\lim_{h \rightarrow 0}\frac{\vert h \vert\, ^{2}}{\mu(\beta, {\rho},h)\, ^{2}} \ \ , \\
&& \lim_{h \rightarrow 0} \lim_{V \to \infty} \omega_{\beta,{\mu}_{\Lambda}(\beta,{\rho}, h),\Lambda,h}^{0}
(\frac{b_{{q}}^{*}}{\sqrt{V}}) =
\lim_{h \rightarrow 0} \lim_{V \to \infty}
\omega_{\beta,{\mu}_{\Lambda}(\beta,{\rho}, h),\Lambda,h}^{0}(\frac{b_{{0}}^{*}}{\sqrt{V}}) =
\sqrt{\rho_{0}(\beta)} e^{- i \, {\rm{arg}} (h)} \nonumber \ .
\end{eqnarray}
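The mechanism (\ref{BEC-qa}) is easy to check numerically: solving the limiting condensate equation for
${\mu}(\beta,{\rho},h)$ at decreasing $h$ reproduces $\vert h \vert^{2}/\mu^{2} \rightarrow
{\rho}-\rho_{c}(\beta)$ (a sketch; the root bracket is an ad hoc choice):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def I_int(beta, mu):
    """Density of thermally excited particles, mu < 0."""
    f = lambda k: k**2 / (np.exp(beta * (k**2 - mu)) - 1.0)
    return quad(f, 0.0, np.inf)[0] / (2.0 * np.pi**2)

def mu_of_h(beta, rho, h):
    """Solve rho = I(beta, mu) + h^2/mu^2 for mu < 0."""
    g = lambda mu: I_int(beta, mu) + h**2 / mu**2 - rho
    return brentq(g, -50.0, -1e-12)

beta, rho = 1.0, 0.10       # here rho_c(1.0) ~ 0.0586
for h in (1e-2, 1e-3, 1e-4):
    mu = mu_of_h(beta, rho, h)
    print(h, h**2 / mu**2)  # tends to rho - rho_c ~ 0.0414
\end{verbatim}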
Consider now the \textit{case} (i) in more detail. Let $\lim_{\Lambda} \varepsilon_{{q}} =: \varepsilon_{{q}} > 0$.
Then by (\ref{perfect-gas-with-source-density-equation-finite-volume})
the finite-volume expectation of the particle density in the $q$-mode is
\begin{equation}\label{BEC-q-posit}
\omega_{\beta,\mu,\Lambda,h}^{0}({b^{*}_{q}b_{q}}/{V}) = \frac{1}{V} (e^{\beta(\varepsilon_{{q}} - \mu)}-1)^{-1}
+ \frac{\vert h \vert\, ^{2}} {(\varepsilon_{{q}} - \mu)\, ^{2}} \ .
\end{equation}
Since the one-particle spectrum $\{\varepsilon_{{k}}\geq 0\}_{k\in\Lambda^*}$ and $\varepsilon_{{k=0}} = 0$
(\ref{dual-Lambda}), the solution of equation (\ref{perfect-gas-with-source-density-equation-finite-volume})
is unique and negative: ${\mu}_{\Lambda} (\beta,{\rho}, h)<0$.
Then the Bogoliubov quasi-average of ${b^{*}_{q}b_{q}}/{V}$ is equal to
\begin{eqnarray}\label{Bog-qa}
&&\lim_{h \rightarrow 0}\lim_{\Lambda}\omega_{\beta,{\mu}_{\Lambda}
(\beta,{\rho}, h),\Lambda,h}^{0}({b^{*}_{q}b_{q}}/{V}) = \\
&&\lim_{h \rightarrow 0}\lim_{\Lambda}\frac{1}{V} (e^{\beta(\varepsilon_{{q}} - {\mu}_{\Lambda}
(\beta,{\rho}, h))}-1)^{-1} +
\lim_{h \rightarrow 0}\lim_{\Lambda} \frac{\vert h \vert\, ^{2}} {(\varepsilon_{{q}} -
{\mu}_{\Lambda} (\beta,{\rho}, h))\, ^{2}} = 0 \ , \nonumber
\end{eqnarray}
for any particle density including the case ${\rho} > \rho_{c}(\beta)$.
Now the condensate equation (\ref{perfect-gas-with-source-density-equation-infinite-volume}) and the
$q$-mode gauge-symmetry breaking expectation take the form:
\begin{eqnarray}\label{perfect-gas-with-source-density-equation-infinite-volume-q}
&&{\rho} = \lim_{\Lambda} \rho_{\Lambda}(\beta, \mu, h)
\, = \, \mathcal{I}(\beta, \mu) + \frac{\vert h \vert\, ^{2}}{(\varepsilon_{{q}} - \mu )^{2}}
=: \rho(\beta, \mu, h) \ , \\
&&\lim_{V \to \infty}
\omega_{\beta,\mu,\Lambda,h}^{0}(\frac{b_{{q}}^{*}}{\sqrt{V}}) =
\frac{ \overline{h} } {(\varepsilon_{{q}} - \mu)\,} \label{infinite-volume-q} \ .
\end{eqnarray}
\begin{rem}\label{rem:3.31}
Note that (\ref{freeQE}) gives an example of a model of condensation that
depends on an \textit{external source} in a non-zero mode. Indeed, for the perfect Bose-gas with the one-particle
spectrum (\ref{dual-Lambda}) the solution ${\mu}(\beta,{\rho}, h)$ of the
condensate equation (\ref{perfect-gas-with-source-density-equation-infinite-volume-q}) is such that
\begin{equation*}
\lim_{\rho \rightarrow \rho_{c}(\beta, h)}{\mu}(\beta,{\rho}, h) = 0 \ \ \ {\rm{and}} \ \ \
\rho_{c}(\beta, h) := \sup_{\mu \leq 0} \rho(\beta, \mu, h) = \rho(\beta, \mu =0, h) \ .
\end{equation*}
Since $\varepsilon_{{q}} > 0$ and $\varepsilon_{{0}} = 0$, the finite saturation density $\rho_{c}(\beta, h)$
triggers BEC in the zero mode of the perfect Bose-gas (\ref{freeQE}) if $\rho > \rho_{c}(\beta, h)$. To this end
we observe that by (\ref{perfect-gas-with-source-density-equation-finite-volume}), (\ref{BEC-q-posit}) and
(\ref{perfect-gas-with-source-density-equation-infinite-volume-q}) one finds
\begin{eqnarray}\label{BEC-q}
{\rho} - \rho_{c}(\beta, h) =
\lim_{\Lambda} \frac{1}{V} \omega_{\beta,{\mu}_{\Lambda} (\beta,{\rho}, h),\Lambda,h}^{0}(b^{*}_{0}b_{0}) \ ,
\end{eqnarray}
where the solution of equation (\ref{perfect-gas-with-source-density-equation-finite-volume}) has, for
$V \rightarrow \infty$, the asymptotics:
\begin{equation*}
{\mu}_{\Lambda} (\beta,{\rho}, h) = - ({\rho} - \rho_{c}(\beta, h))V^{-1} + {o} (V^{-1}) \ .
\end{equation*}
Therefore, the model (\ref{freeQE}) is the ideal Bose-gas with external sources, whose behaviour
is almost identical to that of the Bose-gas with $h=0$, Section \ref{sec:gBEC}. The differences are the higher critical density:
$\rho(\beta, \mu =0, h) \geq \rho_{c}(\beta)$ (\ref{perfect-gas-with-source-density-equation-infinite-volume-q})
and the non-trivial expectation of the particle density (\ref{BEC-q-posit}) in a non-zero $q$-mode.
\end{rem}
\textit{Summarising the case} (i). The non-zero mode sources for the ideal Bose-gas and the corresponding
Bogoliubov quasi-averages give the \textit{same} results as for the ideal Bose-gas \textit{without} external
sources. Hence, the quasi-averages in this case have \textit{no impact} and lead to the same conclusions
(and problems) as the generalised BEC in Section \ref{sec:gBEC}. If one keeps the non-zero mode source, then
this generalised BEC has a \textit{source-dependent} critical density as in Remark \ref{rem:3.31}.
\textit{Summarising the case} (ii). First we note that by virtue of
(\ref{perfect-gas-with-source-density-equation-finite-volume}),
(\ref{perfect-gas-with-source-density-equation-infinite-volume}) one has
${\mu}(\beta,{\rho}, h \neq 0)<0$ and that for any $k \neq q \, $, even when
$\lim_{\Lambda} \varepsilon_{{k}} = 0 \, $,
\begin{equation}\label{zero-non-zero-modes}
\lim_{h \rightarrow 0}\lim_{\Lambda}\omega_{\beta,{\mu}_{\Lambda}
(\beta,{\rho}, h),\Lambda,h}^{0}({b^{*}_{k}b_{k}}/{V}) =
\lim_{h \rightarrow 0}\lim_{\Lambda} \frac{1}{V}
\frac{1}{e^{\beta(\varepsilon_{{k}}- {\mu}_{\Lambda} (\beta,{\rho}, h))}-1} = 0 \ .
\end{equation}
This means that for any anisotropy $\alpha_1$ the \textit{quasi-average} condensation $\rm{(BEC)}_{q-a}$ occurs
only in one zero-mode (BEC type I), whereas the gBEC for $\alpha_1 >1/2$ is of the type III,
see Section \ref{sec:gBEC}.
The diagonalisation (\ref{4.9.2}) for $b_{{q}} \rightarrow \widehat{b}_{{q}}$, together with (\ref{BEC-qa}), allows one to
apply the quasi-average method to calculate the gauge-symmetry
breaking $\rm{(GSB)}_{q-a}$, which is nonvanishing for ${\rho} > \rho_{c}(\beta)$:
\begin{equation}\label{GSB-qa}
\lim_{h \rightarrow 0}\lim_{\Lambda}\omega_{\beta,{\mu}_{\Lambda}
(\beta,{\rho}, h),\Lambda,h}^{0}({b_{q}}/\sqrt{V}) =
\lim_{h \rightarrow 0}\frac{h}{\mu(\beta,{\rho}, h)} =
e^{i \, {\rm{arg}}(h)} \, \sqrt{{\rho} - \rho_{c}(\beta)} \ ,
\end{equation}
along $\{h = |h| e^{i \, {\rm{arg}}(h)} \wedge |h|\rightarrow 0\}$.
Then by inspection of (\ref{Bog-qa}) and (\ref{GSB-qa}) we find that $\rm{(GSB)}_{q-a}$ and $\rm{(BEC)}_{q-a}$
are \textit{equivalent}:
\begin{eqnarray}\label{Bog=GSB-qa}
&&\lim_{h \rightarrow 0}\lim_{\Lambda} \ \omega_{\beta,{\mu}_{\Lambda}
(\beta,{\rho}, h),\Lambda,h}^{0}({b^{*}_{q}}/\sqrt{V}) \ \omega_{\beta,{\mu}_{\Lambda}
(\beta,{\rho}, h),\Lambda,h}^{0}({b_{q}}/\sqrt{V}) = \\
&& = \lim_{h \rightarrow 0}\lim_{\Lambda} \ \omega_{\beta,{\mu}_{\Lambda}
(\beta,{\rho}, h),\Lambda,h}^{0}({b^{*}_{q}b_{q}}/{V}) = {\rho} - \rho_{c}(\beta) \ . \nonumber
\end{eqnarray}
Note that by (\ref{PBG-ODLRO}) the $\rm{(GSB)}_{q-a}$ and $\rm{(BEC)}_{q-a}$ are in turn \textit{equivalent} to
$\rm{(ODLRO)}_{q-a}$. In \textit{contrast} to $\rm{(BEC)}_{q-a}$ for the one-mode BEC one gets
\begin{equation*}
\lim_{\Lambda} \ \omega_{\beta,{\mu}_{\Lambda}
(\beta,{\rho}, 0),\Lambda,0}^{0}({b^{*}_{q}b_{q}}/{V}) =
\lim_{\Lambda} \ \omega_{\beta,{\mu}_{\Lambda}
(\beta,{\rho}, 0),\Lambda, 0}^{0}({b^{*}_{q}}/\sqrt{V}) \ \omega_{\beta,{\mu}_{\Lambda}
(\beta,{\rho}, 0),\Lambda, 0}^{0}({b_{q}}/\sqrt{V})= 0 \ ,
\end{equation*}
for any $\rho$ and $q\in \Lambda^{*}$ as soon as $\alpha_1 > 1/2$, see Section \ref{sec:gBEC}.
On the other hand, the value of gBEC coincides with $\rm{(BEC)}_{q-a}$.
\begin{rem}\label{rem:3.311}
Therefore, the zero-mode conventional BEC and the zero-mode quasi-average $\rm{(BEC)}_{q-a}$ for the perfect
Bose-gas are \textit{not} equivalent: $\rm{(BEC)}_{q-a}$ $\nRightarrow$ BEC,
but the zero-mode $\rm{(BEC)}_{q-a}$ is \textit{equivalent} to gBEC: $\rm{(BEC)}_{q-a}$ $\Leftrightarrow$ gBEC.
The equivalence (\ref{PBG-qa-equiv}) shows that the Bogoliubov quasi-average method is definitely appropriate
for the case of the PBG.
\end{rem}
We comment that (\ref{4.15.1}), (\ref{PBG-LimSt-phi}) show that the states $\omega_{\beta,\mu,\phi}$ are not
gauge invariant. Assuming that they are the ergodic states in the ergodic decomposition of $\omega_{\beta,\mu}$,
it follows that for the \textit{interacting} Bose-gas one has: $\rm{(BEC)}_{q-a}$ $\Leftrightarrow$ $\rm{(GSB)}_{q-a}$,
which is similar to the equivalence for the PBG. It is illuminating
to observe the explicit mechanism for the appearance of the symmetry-breaking phase $\phi$, connected with
(\ref{4.12.1}) of Proposition \ref{prop:4.1} in the PBG case. Note that in this case the chemical potential
remains proportional to $|\lambda|$ even after the thermodynamic limit (\ref{4.14.3}). This property
persists also for the \textit{interacting} Bose-gas, see Section \ref{sec:BogAppr-Q-A}.
\subsection{Interaction, quasi-averages and the Bogoliubov $c$-number approximation} \label{sec:BogAppr-Q-A}
We now consider the imperfect Bose-gas with interaction (\ref{4.1})-(\ref{4.3}). The famous
\textit{Bogoliubov approximation}, which replaces $\eta_{\Lambda, 0}(b), \eta_{\Lambda, 0}(b^{*})$
(\ref{b-p-mode-av}) by $c$-numbers \cite{Bog07} (see also \cite{ZBru}, \cite{JaZ10}, \cite{Za14}), will be
instrumental. The exactness of this procedure was proved by Ginibre \cite{Gin} on the level of thermodynamics.
Later Lieb, Seiringer and Yngvason (\cite{LSYng1}, \cite{LSYng}) and independently
S\"{u}t\"{o} \cite{Suto1} improved the arguments in \cite{Gin} and elucidated the \textit{exactness}
of the Bogoliubov approximation. In our analysis we shall rely on the method of \cite{LSYng}, which
uses the Berezin-Lieb inequality \cite{Lieb1}.
Recall that the Fock space ${\cal F}_{\Lambda} \simeq {\cal F}_{{0}} \otimes {\cal F}^{\prime}$, where
${\cal F}_{{0}}$ denotes the zero-mode subspace and ${\cal F}^{\prime} := {\cal F}_{{k} \ne {0}}$, see
Section \ref{sec:gBEC}.
Let $z\in \mathbb{C}$ be a complex number and $|z\rangle = \exp(-|z|^{2}/2 +z b_{{0}}^{*})\ |0\rangle$ be
the Glauber coherent vector in ${\cal F}_{{0}}$. As in \cite{LSYng}, let the operator
$(H_{\Lambda,\mu,\lambda})^{'}(z)$ be the \textit{lower symbol} of the operator
$H_{\Lambda,\mu,\lambda}$ (\ref{4.6}).
Then the pressure $p_{\beta,\Lambda,\mu,\lambda}^{'}$ corresponding to this symbol is defined by
\begin{equation}
\exp(\beta V p_{\beta,\Lambda,\mu,\lambda}^{'})=\Xi_{\Lambda}(\beta,\mu,\lambda)^{'} =
\int_{\mathbb{C}} d^{2}z {\rm{Tr}}_{{\cal F}^{'}} \exp(-\beta (H_{\Lambda,\mu,\lambda})^{'}(z)) \ .
\label{2.4.18}
\end{equation}
Consider the probability density:
\begin{equation}
{\cal W}_{\mu,\Lambda, \lambda}(z) := \Xi_{\Lambda}(\beta,\mu,\lambda)^{-1}
{\rm{Tr}}_{{\cal F}^{'}}\langle z| \exp(-\beta H_{\Lambda,\mu,\lambda})|z \rangle \ .
\label{2.4.19}
\end{equation}
As proved in \cite{LSYng}, for almost all $\lambda >0$ the density
${\cal W}_{\mu,\Lambda, \lambda} (\zeta \sqrt{V})$ converges, as $V \to \infty$, to $\delta$-density at
the point $\zeta_{max}(\lambda)=\lim_{V \to \infty} {z_{max}(\lambda)}/{\sqrt{V}}$, where $ z_{max}(\lambda)$
maximises the partition function ${\rm{Tr}}_{{\cal F}^{'}} \exp(-\beta (H_{\Lambda,\mu,\lambda})^{'}(z))$.
Although \cite{LSYng} took $\phi=0$ in (\ref{4.8}), their results in the general case (\ref{4.8}) may be obtained
by the trivial substitution $b_{{0}}\to b_{{0}}\exp(-i\phi)$, $b_{{0}}^{*} \to b_{{0}}^{*} \exp(i\phi)$ motivated
by (\ref{4.6}). Note that expression (34) in \cite{LSYng} may be thus re-written as
\begin{eqnarray}
&& \lim_{V \to \infty} \omega_{\beta,\mu,\Lambda,\lambda}(\eta_{\Lambda, 0}(b^{*}\exp(i\phi)))=
\lim_{V \to \infty} \omega_{\beta,\mu,\Lambda,\lambda}(\eta_{\Lambda,0}(b\exp(-i\phi))) \nonumber \\
&& = \zeta_{max}(\lambda)=\frac{\partial p(\beta, \mu,\lambda)}{\partial \lambda} \ , \label{2.4.20}
\end{eqnarray}
and consequently yields
\begin{equation}
\lim_{V \to \infty} \omega_{\beta,\mu,\Lambda,\lambda}(\eta_{\Lambda, 0}(b^{*})\eta_{\Lambda, 0}(b))
= |\zeta_{max}(\lambda)|^{2} \ .
\label{2.4.21}
\end{equation}
Here we denote by
\begin{equation}\label{4.22}
p(\beta,\mu,\lambda) = \lim_{V \to \infty} p_{\beta,\mu,\Lambda,\lambda} \ ,
\end{equation}
the grand-canonical pressure of the imperfect Bose-gas (\ref{4.1})-(\ref{4.3}) in the thermodynamic limit.
Equality (\ref{2.4.20}) follows from the convexity of $p_{\beta,\mu,\Lambda,\lambda}$ in
$\lambda = |\lambda_{\phi}|$ by the Griffiths lemma \cite{Gri66}.
In \cite{LSYng} it is shown that the pressure $p(\beta,\mu,\lambda)$ is equal to
\begin{equation}\label{4.23}
p(\beta,\mu,\lambda)^{'} = \lim_{V \to \infty} p_{\beta,\mu,\Lambda,\lambda}^{'} \ .
\end{equation}
Moreover, (\ref{4.22}) is also equal to the pressure $p(\beta,\mu,\lambda)^{''}$, which is the thermodynamic limit
of the pressure associated to the \textit{upper symbol} of the operator $H_{\Lambda,\mu,\lambda}$.
The crucial point is the proof in \cite{LSYng} that all of these \textit{three} pressures $p', p, p''$ coincide with
$p_{max}(\beta,\mu,\lambda)$, which is the pressure associated with
${\rm{max}}_{z} {\rm{Tr}}_{{\cal F}^{'}} \exp(-\beta (H_{\Lambda,\mu,\lambda})^{'}(z))$:
\begin{equation}\label{4.231}
p_{max}(\beta,\mu,\lambda)= \lim_{V \to \infty}
\frac{1}{\beta V}\ln \{{\rm{max}}_{z} {\rm{Tr}}_{{\cal F}^{'}} \exp(-\beta (H_{\Lambda,\mu,\lambda})^{'}(z))\} \ .
\end{equation}
Now we are in a position to prove one of the main statements of this paper.
\begin{theo}\label{theo:3.4}
Consider the system of interacting Bosons (\ref{4.1})-(\ref{4.8}). If this system displays
$\rm{(ODLRO)}_{q-a}$/$\rm{(BEC)}_{q-a}$, then the limit $\omega_{\beta,\mu,\phi}:=
\lim_{\lambda \to +0} \lim_{V \to \infty}\omega_{\beta,\mu,\Lambda,\lambda_{\phi}} $, on the
set of monomials $\{\eta_{0}(b^{*})^{m}\eta_{0}(b)^{n}\}_{m,n \in \mathbb{N}\cup \{0\}}$ exists and satisfies
\begin{equation}\label{4.24.1}
\omega_{\beta,\mu,\phi} (\eta_{0}(b^{*})) = \sqrt{\rho_{0}} \exp(i\phi) \ ,
\end{equation}
\begin{equation}\label{4.24.2}
\omega_{\beta,\mu,\phi} (\eta_{0}(b)) = \sqrt{\rho_{0}} \exp(-i\phi) \ ,
\end{equation}
together with $\rm{(GSB)}_{q-a}$:
\begin{equation}\label{3.4.24.3}
\omega_{\beta,\mu,\phi} (\eta_{0}(b^{*})\eta_{0}(b)) =
\omega_{\beta,\mu,\phi} (\eta_{0}(b^{*})) \ \omega_{\beta,\mu,\phi}(\eta_{0}(b))
= \rho_{{0}} \ , \ \forall \phi \in [0,2\pi) \ ,
\end{equation}
and
\begin{equation}\label{4.24.4}
\omega_{\beta,\mu} = \frac{1}{2\pi} \int_{0}^{2\pi} d\phi \ \omega_{\beta,\mu,\phi} \ .
\end{equation}
On the Weyl algebra the limit that defines $\omega_{\beta,\mu,\phi}, \ \phi \in [0,2\pi)$
exists along the nets in variables $(\lambda,V)$. The corresponding states are ergodic,
and coincide with the states obtained in Proposition \ref{prop:A.1}.
Conversely, if the $\rm{(GSB)}_{q-a}$ occurs in the sense that (\ref{4.24.1}), (\ref{4.24.2}) hold with
$\rho_{0} \ne 0$, then one gets that $\rm{(ODLRO)}_{q-a}$/$\rm{(BEC)}_{q-a}$ take place.
\end{theo}
\begin{proof} We only have to prove the direct statement, because the converse follows by applying the Schwarz
inequality to the states $\omega_{\beta,\mu,\phi}$, together with the forthcoming (\ref{3.4.27}).
We thus prove $\rm{(ODLRO)}_{q-a}$ $\Rightarrow$ $\rm{(GSB)}_{q-a}$. We first assume that some state
$ \omega_{\beta,\mu,\phi_{0}},\phi_{0}
\in [0,2\pi)$ satisfies $\rm{(ODLRO)}_{q-a}$. Then by (\ref{2.4.21}),
\begin{equation}\label{2.4.25.1}
\lim_{\lambda \to +0} \lim_{V \to \infty}
\omega_{\beta,\mu,\Lambda,\lambda}(\eta_{\Lambda,0}(b^{*})\eta_{\Lambda,0}(b))
= \lim_{\lambda \to +0} |\zeta_{max}(\lambda)|^{2} =: \rho_{{0}} > 0 \ .
\end{equation}
The above limit exists by the convexity of $p(\beta,\mu,\lambda)$ in $\lambda$; moreover, by (\ref{4.14.1}) and
by virtue of (\ref{2.4.25.1}),
\begin{equation}\label{4.25.2}
\lim_{\lambda \to +0} \frac{\partial p(\beta,\mu,\lambda)}{\partial \lambda} \ne 0 \ .
\end{equation}
At the same time, (\ref{2.4.20}) shows that all states $\omega_{\beta,\mu,\phi}$ satisfy (\ref{2.4.25.1}).
Thus, the gauge symmetry is broken, in the sense of $\rm{(GSB)}_{q-a}$, in the states $\omega_{\beta,\mu,\phi}, \phi \in [0,2\pi)$.\,
We now prove that the original assumption (\ref{4.16}) implies that all states
$\omega_{\beta,\mu,\phi}, \phi \in [0,2\pi)$ exhibit $\rm{(ODLRO)}_{q-a}$.
Gauge invariance of $\omega_{\beta,\mu,\Lambda}$ (or equivalently $H_{\Lambda,\mu}$) yields, by (\ref{4.7}),
(\ref{4.17}),
\begin{equation}
\omega_{\beta,\mu,\Lambda,\lambda}(\eta_{\Lambda,0}(b^{*})\eta_{\Lambda,0}(b))
=\omega_{\beta,\mu,\Lambda,-\lambda}(\eta_{\Lambda,0}(b^{*})\eta_{\Lambda,0}(b)) \ .
\label{2.4.26.1}
\end{equation}
Again by (\ref{4.7}), (\ref{4.12.1}) and gauge invariance of $H_{\Lambda,\mu}$,
\begin{equation*}
\lim_{\lambda \to -0} \frac{\partial p(\beta,\mu,\lambda)}{\partial \lambda}=
-\lim_{\lambda \to +0} \frac{\partial p(\beta,\mu,\lambda)}{\partial \lambda} \ ,
\end{equation*}
and, since by convexity the derivative ${\partial p(\beta,\mu,\lambda)}/{\partial \lambda}$ is monotone
increasing, we find
\begin{equation}
\lim_{\lambda \to +0} \frac{\partial p(\beta,\mu,\lambda)}{\partial \lambda}
= \lim_{\lambda \to +0} \zeta_{max}(\lambda) = \sqrt{\rho_{0}} \ ,
\label{2.4.26.2}
\end{equation}
\begin{equation}
\lim_{\lambda \to -0} \frac{\partial p(\beta,\mu,\lambda)}{\partial \lambda}
= -\lim_{\lambda \to +0} \zeta_{max}(\lambda)= -\sqrt{\rho_{0}} \ .
\label{2.4.26.3}
\end{equation}
Again by (\ref{2.4.26.1}),
\begin{equation}
\lim_{\lambda \to -0}\lim_{V \to \infty}
\omega_{\beta,\mu,\Lambda,\lambda}(\eta_{\Lambda,0}(b^{*})\eta_{\Lambda,0}(b))
= \lim_{\lambda \to +0}\lim_{V \to \infty}
\omega_{\beta,\mu,\Lambda,\lambda}(\eta_{\Lambda,0}(b^{*})\eta_{\Lambda,0}(b)) \ .
\label{2.4.26.4}
\end{equation}
By \cite{LSYng}, the weight ${\cal W}_{\mu,\lambda}$ is, for $\lambda=0$, supported on a disc with radius
equal to the right-derivative (\ref{4.25.2}). Convexity of the pressure as a function of $\lambda$ implies
\begin{eqnarray*}
\frac{\partial p(\beta,\mu,\lambda_{0}^{-})}{\partial \lambda_{0}^{-}} \le \lim_{\lambda \to -0}\frac{\partial
p(\beta,\mu,\lambda)}{\partial \lambda}
\le \lim_{\lambda \to +0}\frac{\partial p(\beta,\mu,\lambda)}{\partial \lambda} \le \frac{\partial p(\beta,\mu,
\lambda_{0}^{+})}{\partial \lambda_{0}^{+}} \ ,
\end{eqnarray*}
for any $\lambda_{0}^{-}<0<\lambda_{0}^{+}$. Therefore, by the Griffiths lemma (see e.g. \cite{Gri66},
\cite{LSYng}) one gets
\begin{eqnarray}
\lim_{\lambda \to -0}\lim_{V \to \infty}
\omega_{\beta,\mu,\Lambda,\lambda}(\eta_{\Lambda,0}(b^{*})\eta_{\Lambda,0}(b))
&\le& \lim_{V \to \infty} \omega_{\beta,\mu,\Lambda}(\frac{b_{{0}}^{*}b_{{0}}}{V}) \nonumber \\
&\le& \lim_{\lambda \to +0}\lim_{V \to \infty}
\omega_{\beta,\mu,\Lambda,\lambda}(\eta_{\Lambda,0}(b^{*})\eta_{\Lambda,0}(b)) \ .
\label{3.4.27}
\end{eqnarray}
Then (\ref{2.4.26.4}) and (\ref{3.4.27}) yield
\begin{equation}
\lim_{V \to \infty} \omega_{\beta,\mu,\Lambda}(\frac{b_{{0}}^{*}b_{{0}}}{V})=
\lim_{\lambda \to +0}\lim_{V \to \infty}
\omega_{\beta,\mu,\Lambda,\lambda}(\eta_{\Lambda,0}(b^{*})\eta_{\Lambda,0}(b)) \ , \ \
\forall \phi \in [0,2\pi) \ .
\label{3.4.28}
\end{equation}
This proves that all $\omega_{\beta,\mu,\phi}, \phi \in [0,2\pi)$ satisfy $\rm{(ODLRO)}_{q-a}$, as asserted.
By (\ref{2.4.20}) and (\ref{2.4.26.2}) one gets (\ref{4.24.1}) and (\ref{4.24.2}). Then (\ref{4.24.4}) is a
consequence of the gauge-invariance of $\omega_{\beta,\mu}$. Ergodicity of the states
$\omega_{\beta,\mu,\phi}, \phi \in [0,2\pi)$ follows from
(\ref{4.24.1}), (\ref{4.24.2}), and (\ref{3.4.28}), see Definition \ref{SSB}(ii).
An equivalent construction is possible using the \textit{Weyl algebra} instead of the \textit{polynomial algebra},
see \cite{Ver}, Ch.4.3.2, and references given there for Proposition \ref{prop:A.1}.
The limit along a subnet in the $(\lambda,V)$ variables exists by weak*-compactness, and,
by asymptotic abelianness of the Weyl algebra for space translations (see, e.g., \cite{BR97}, Example 5.2.19),
the ergodic decomposition (\ref{4.24.4}), which is also a central decomposition, is unique. Thus, the
$\omega_{\beta,\mu,\phi}, \phi \in [0,2\pi)$ coincide with the states constructed in Proposition \ref{prop:A.1}.
\end{proof}
\begin{rem}\label{rem:3.5}
Our Remark \ref{rem:3.3} and Theorem \ref{theo:3.4} elucidate a problem discussed in \cite{LSYng}.
In that paper the authors defined a generalised Gauge Symmetry Breaking via the quasi-average $\rm{(GSB)}_{q-a} \ $,
i.e. by
$\lim_{\lambda \to +0} \lim_{V \to \infty} \omega_{\beta,\mu,\Lambda,\lambda}(\eta_{\Lambda, 0}(b)) \ne 0 \ $.
(If it involves other than the gauge group, we denote this by $\rm{(SSB)}_{q-a}$.)
Similarly, they modified the definition of the one-mode condensation, denoted by $\rm{(BEC)}_{q-a}$ (\ref{2.4.25.1}),
and established the equivalence: $\rm{(GSB)}_{q-a} \Leftrightarrow \rm{(BEC)}_{q-a}$.
They also posed a problem: whether $\rm{(BEC)}_{q-a} \Leftrightarrow \rm{BEC}\ $?
In Theorem \ref{theo:3.4} we show that $\rm{(GSB)}_{q-a}$ (\ref{4.24.1}) implies $\rm{(ODLRO)}_{q-a}$, or
$\rm{(BEC)}_{q-a}$.
Note that for the \textit{zero-mode} BEC (Definition \ref{defi:2.1}), the same theorem shows that their
question is answered in the \textit{affirmative}. This is due to the crucial fact that the state
$\omega_{\beta,\mu}$ is gauge-invariant, which is consistent with the decomposition (\ref{4.24.4}) and leads to
the inequalities (\ref{3.4.27}).
On the other hand, for other (but nonetheless equally important, as argued in (\ref{Int-TypeIII}))
types of condensation the comparison (implication "$\Rightarrow$", or equivalence "$\Leftrightarrow$", see
Remark \ref{rem:3.3}) between q-a and \textit{non} q-a values may \textit{fail}.
For example, we note that for the PBG the value of $\rm{(BEC)}_{q-a}$ is strictly \textit{larger} than that of
the zero-mode BEC for anisotropy $\alpha_1 \geq 1/2$, and $\rm{(BEC)}_{q-a} \nRightarrow $ BEC for $\alpha_1 > 1/2 $,
see Section \ref{sec:gBEC}. One observes a similar phenomenon,
$\rm{(BEC)}_{q-a} \nRightarrow $ BEC, for the interacting Bose-gas (\ref{Int-TypeIII}). Although for
both cases (PBG and (\ref{Int-TypeIII})) we get $\rm{(BEC)}_{q-a} \Leftrightarrow $ gBEC, see
Section \ref{sec:gBEC-BQ-A}. Therefore, in the general case the answer to the above question
is \textit{negative}.
Note that the fact established in Theorem \ref{theo:3.4}, that the quasi-averages lead to ergodic states,
clarifies an important conceptual aspect of the quasi-average \textit{trick}.
\end{rem}
\begin{rem}\label{rem:4.21}
The states $\omega_{\beta,\mu,\phi}$ in Theorem \ref{theo:3.4} have the property ii) of Proposition \ref{prop:A.1},
i.e., if $\phi_{1} \ne \phi_{2}$, then
$\omega_{\beta,\mu,\phi_{1}} \ne \omega_{\beta,\mu,\phi_{2}}$. By a theorem of Kadison \cite{Kadison}, two factor
states are either disjoint or quasi-equivalent,
and thus the states $\omega_{\beta,\mu,\phi}$ for different $\phi$ are mutually disjoint.
This phenomenon also occurs for spontaneous magnetisation in quantum spin systems.
It is in this sense that the word ``degeneracy'' must be understood, compare with the discussion in \cite{Bog70}.
\end{rem}
\section{Bogoliubov quasi-averages and critical quantum fluctuations} \label{BQ-A-QFl}
The aim of this section is to show that the scaled breaking symmetry external sources may have a nontrivial
impact on critical quantum fluctuations. This demonstrates that quasi-averages are helpful not only to study
phase transitions via $\rm{(SSB)}_{q-a}$, but also to analyse the corresponding \textit{critical} and, in
particular, commutative and noncommutative \textit{quantum} fluctuations. To this end we use for illustration
an example of a concrete model that manifests quantum phase transition with discrete $\rm{(SSB)}_{q-a}$
\cite{VZ1}.
\subsection{Algebra of fluctuation operators.}\label{Alg-Fl-Op}
We start this section with a general setup to recall the concept of \textit{quantum fluctuations} via the
noncommutative Central Limit Theorem (CLT) and the corresponding Canonical Commutation Relations (CCR).
To describe any ($\mathbb{Z}^d$-lattice) quantum statistical model, one has to
start from a {\it microscopic} dynamical system, which is a triplet ($\mathcal{A},\omega,\alpha_t$) where:\\
\hspace*{1 cm}(a) $\mathcal{A}=\cup_\Lambda \mathcal{A}_\Lambda$ is the quasi-local algebra of observables,
here $\Lambda$ are bounded subsets of $\mathbb{Z}^d$ and $[\mathcal{A}_{\Lambda'},\mathcal{A}_{\Lambda''}]=0$ if
$\Lambda'\cap\Lambda''=\emptyset $.\\
\hspace*{1 cm}(b) $\omega$ is a {\it state} on $\mathcal{A}$. Let $\tau_x$ be the space-translation
automorphism over the distance $x\in\mathbb{Z}^d$,
i.e., the map $\tau_x:A\in \mathcal{A}_\Lambda\rightarrow\tau_x(A)\in \mathcal{A}_{\Lambda+x}$. Then the
state $\omega$ is {\it translation-invariant} if $\omega\circ\tau_x(A)\equiv\omega(\tau_x(A))=\omega(A)$ and
{\it space-clustering} if $\lim_{\vert x\vert\rightarrow\infty}\omega(A\tau_x(B))= \omega(A)\omega(B)$ for
$A,B\in\mathcal{A}$.\\
\hspace*{1 cm}(c) $\alpha_t$ is the dynamics described by the family of local Hamiltonians
$\{{H}_\Lambda\}_{\Lambda \subset \mathbb{Z}^d}$. Usually, $\alpha_t$ is defined as a norm limit of
the local dynamics:
$\alpha_t(A):= \lim_\Lambda\exp(it{H}_\Lambda)A\exp(-it{H}_\Lambda)$, i.e.,
$\alpha_t:\mathcal{A}\rightarrow\overline{\cal A}$-norm-closure
of $\mathcal{A}$. For {\it equilibrium states} one assumes that $\omega\circ\alpha_t=\omega$
({\it time invariance}).
Note that usually one assumes also that the space and time translations {\it commute}:
$\tau_x(\alpha_t(A))=\alpha_t(\tau_x(A))$, where $A\in \mathcal{A}_{\Lambda}$
and $\Lambda \subset \mathbb{Z}^d$.
On the way from the {\it micro system} $(\mathcal{A}, \, \omega,\alpha_t)$ to a {\it macro
system} of physical observables, one has to distinguish \textit{two} essentially different classes.
The first one (\textit{macro} I) corresponds to the \textit{Weak Law of Large Numbers} (WLLN).
It is well-suited for the description of {\it order parameters} in the system.
Formally this class of observables is defined as follows:
for any $A\in\mathcal{A}$ the local space \textit{mean} mapping $m_\Lambda:A\rightarrow
m_\Lambda(A):= \vert\Lambda\vert^{-1} \sum_{x\in\Lambda}\tau_x(A)$.
Then, the limiting map $m:A\rightarrow {\cal C}$
\begin{equation}\label{1.1}
m(A)= w\!-\!\lim_\Lambda m_\Lambda(A) \ , \ \forall A\in\mathcal{A} \ ,
\end{equation}
exists in the $\omega$-\textit{weak} topology, induced by the \textit{ergodic} state $\omega$, see (b).
Let $m(\mathcal{A})=\{m(A): A\in\mathcal{A}\}$. Then the {\it macro system} I has the
following properties:\\
\hspace*{1 cm}(Ia) $m(\mathcal{A})$ is a set of observables {\it at infinity} because
$[m(\mathcal{A}), \mathcal{A}]=0$.\\
\hspace*{1cm}(Ib) $m(\mathcal{A})$ is an {\it abelian} algebra and $m(A)=\omega(A) \cdot {1\hskip -4,5 pt 1\hskip 0,2 pt} $. Hence
the states on $m(\mathcal{A})$ are probability measures.\\
\hspace*{1 cm}(Ic) Since $m(\tau_x(A)) = m(A)$, the map $m$: $\mathcal{A}\rightarrow m(\mathcal{A})$ is
\textit{not} injective. This is a mathematical expression of the {\it coarse graining} under the WLLN.\\
\hspace*{1 cm}(Id) The {\it macro-dynamics} $\tilde{\alpha}_t (m(A)):= m(\alpha_t (A))$ induced by the
micro-dynamics (c) on $m(\mathcal{A})$ is {\it trivial} since
$m(\alpha_t (A))=\omega(\alpha_t(A))\cdot {1\hskip -4,5 pt 1\hskip 0,2 pt} =\omega(A)\cdot {1\hskip -4,5 pt 1\hskip 0,2 pt} =m(A)$.
The second class of {\it macro-observables} (\textit{macro} II) corresponds to the {\it
Quantum Central Limit} (QCL), which is well-suited for the description of
(quantum) fluctuations and, in particular, of collective and
elementary excitations (\textit{phonons}, \textit{plasmons}, \textit{excitons}, etc) in many-body
quantum systems \cite{Ver}.
To proceed with the construction of \textit{macro} II one has to be more precise.
Let $A\in\mathcal{A}_{sa}:=\{B\in\mathcal{A}: B=B^\ast\}$
be self-adjoint operators in a Hilbert
space $\mathfrak{H}$. Then one can define the local
mapping $F^{\delta_A}_{k,\Lambda}: A\rightarrow F^{\delta_A}_{k,\Lambda}(A)$, where
\begin{equation}\label{1.2}
F^{\delta_A}_{k,\Lambda}(A): =\frac{1}{\vert\Lambda\vert^{\frac{1}{2}+\delta_A}}\sum_{x\in\Lambda}(\tau_x(A)-
\omega(A))e^{ikx} \, , \ \ k \, , \ \delta_A \in \mathbb{R} \ .
\end{equation}
This is nothing but the {\it local fluctuation operator} for the mode $k$. If $\delta_A =0$, this fluctuation
operator is called {\it normal}. The next important concept is due to \cite{GVV1}-\cite{GV} and a further
development in \cite{Re}:\\
\textbf{Quantum Central Limit Theorem.} Let
\begin{equation*}
\gamma_\omega(r):=\sup_{\Lambda,\Lambda'}\sup_{A\in {\cal
A}_\Lambda\atop B\in {\cal A}_{\Lambda'}}\left\{\frac
{\omega(AB)-\omega(A)\omega(B)}{\parallel A\parallel \parallel
B\parallel}:\;r\leq {\rm{dist}}( \Lambda,\Lambda')\right\} \ \ {\rm{and}} \ \
\sum_{x\in\mathbb{Z}^d}\gamma_\omega(\vert x\vert)<\infty \ .
\end{equation*}
Then, for any $A\in\mathcal{A}_{sa}$, the corresponding limiting \textit{characteristic} function exists
for the normal fluctuation operator ($\delta_A=0$) for the \textit{zero-mode} $k=0$:
\begin{equation}\label{1.3}
\lim_\Lambda\omega(e^{i\, u \, F_\Lambda(A)})=e^{- {u^2}S_\omega(A,A)/{2}} \ \ ,\ u\in\mathbb{R} \ ,
\end{equation}
where the sesquilinear form
$S_\omega(A,B):={\rm{Re}}\sum_{x\in\mathbb{Z}^d}\omega((A-\omega(A)) \, \tau_x(B-\omega(B)))$, for
$A,B \in\mathcal{A}_{sa}$.
\\
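As a purely \textit{classical} toy illustration of (\ref{1.3}), where all observables commute and the clustering
condition above holds trivially, one may sample i.i.d. $\pm 1$ ``spins'' playing the role of $\tau_x(A)$ (so that
$\omega(A)=0$ and $S_\omega(A,A)=1$) and compare the empirical characteristic function of the normal
($\delta_A=0$) zero-mode fluctuation with the Gaussian $e^{-u^2 S_\omega(A,A)/2}$. The following sketch in Python
uses hypothetical sample sizes and is, of course, no substitute for the quantum statement:
\begin{verbatim}
# Classical toy check of the CLT (1.3): i.i.d. "spins" play the role of
# tau_x(A); the empirical characteristic function of the normal (delta=0)
# zero-mode fluctuation should approach exp(-u^2 S/2) with S = Var(A) = 1.
import numpy as np

rng = np.random.default_rng(0)
V, samples = 1024, 2000
A = rng.choice([-1.0, 1.0], size=(samples, V))  # omega(A) = 0, S = 1
F = A.sum(axis=1) / np.sqrt(V)                  # F_Lambda(A) with delta = 0

for u in (0.5, 1.0, 2.0):
    emp = np.exp(1j * u * F).mean().real        # empirical omega(e^{iuF})
    print(u, round(emp, 3), round(np.exp(-u**2 / 2), 3))
\end{verbatim}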
\hspace*{1 cm}(IIa) The result (\ref{1.3}) establishes the meaning of the QCL
for normal fluctuation operators. If (\ref{1.3}) exists for $\delta_{A,B} \not= 0$ with the
modified sesquilinear form
\begin{equation}\label{1.4}
S_{\omega,\delta_{A,B}}(A,B)=\lim_\Lambda {\rm{Re}}\frac{1}{\vert \Lambda\vert^{\delta_A+\delta_B}}
\sum_{x\in\mathbb{Z}^d}\omega((A-\omega(A))\ \tau_x(B-\omega(B))) \ ,
\end{equation}
we say that QCL exists for the zero-mode {\it abnormal fluctuations}:
\begin{equation}\label{1.5}
\lim_\Lambda F^{\delta_A}_\Lambda(A)=F^{\delta_A}(A) \ .
\end{equation}
The fluctuation operators $\{F^{\delta_A}(A)\}_{A \in\mathcal{A}_{sa}}$ act in a Hilbert space $\mathcal{H}$,
which is defined by the Reconstruction Theorem corresponding to (\ref{1.3}) and (\ref{1.4}).\\
\hspace*{1 cm}(IIb) To this end we consider $\mathcal{A}_{sa}$ as a {\it vector-space} with the {\it symplectic} form
$\sigma_\omega(\cdot,\cdot)$, which is correctly defined for the case $\delta_A+\delta_B=0$ by the WLLN:
\begin{equation}\label{1.6}
i\sigma_\omega(A,B)\cdot {1\hskip -4,5 pt 1\hskip 0,2 pt} =\lim_\Lambda[F^{\delta_A}_\Lambda(A),F^{\delta_B}_\Lambda(B)]=
2\, i \, {\rm{Im}}\sum_{y\in\mathbb{Z}^d}(\omega(A\tau_y(B))-\omega(A)\omega(B)) \ .
\end{equation}
Suppose that $W(\mathcal{A}_{sa},\sigma_\omega)$ is the {\it Weyl algebra}, i.e., the family of the Weyl operators
$W\, : \, \mathcal{A}_{sa}\ni A \mapsto W(A)$ such that
\begin{equation}\label{1.7}
W(A)W(B)=W(A+B)e^{- i \, \sigma_\omega(A,B)/2} \ ,
\end{equation}
where the operators $A,B\in\mathcal{A}_{sa}$ act in the Hilbert space $\mathfrak{H}$.\\
\textbf{Reconstruction Theorem.} Let $\tilde{\omega}$ be a {\it quasi-free} state on the Weyl algebra
$W(\mathcal{A}_{sa},\sigma_\omega)$, which is defined by the sesquilinear form $S_\omega(\cdot,\cdot)$:
\begin{equation}\label{1.8}
\tilde{\omega}(W(A)):=e^{- S_\omega(A,A)/2} \ .
\end{equation}
Since (\ref{1.7}) implies that $W(A):=e^{i\Phi (A)}$, where $\Phi : A \mapsto \Phi (A)$ are {\it boson
field} operators acting in the Canonical Commutation Relations (CCR) representation Hilbert space
${\cal H}_{\tilde{\omega}}$ corresponding to the state $\tilde{\omega}$,
the relations (\ref{1.2})-(\ref{1.8}) yield {\it identifications} of the spaces:
${\cal H}={\cal H}_{\tilde{\omega}}$, and of the operators:
\begin{equation}\label{1.9}
\lim_\Lambda F^{\delta_A}_{\Lambda}(A) =: F^{\delta_A}(A)=\Phi (A) \ .
\end{equation}
\hspace*{1cm}(IIc) The {\it Reconstruction Theorem} gives a transition from the
micro-system ($\mathcal{A}_{sa},\omega$) to the macro-system of {\it fluctuation
operators} ($F(\mathcal{A}_{sa},\sigma_\omega),\tilde{\omega}$). We note that
$F(\mathcal{A}_{sa},\sigma_\omega)=\{F^{\delta_A}(A)\}_{A\in \mathcal{A}_{sa}}$ is the
{\it CCR-algebra} on the symplectic space $(\mathcal{A}_{sa},\sigma_\omega)$, see
(\ref{1.7})-(\ref{1.9}).\\
\hspace*{1 cm}(IId) The map $F:\mathcal{A}_{sa}\rightarrow
F(\mathcal{A}_{sa},\sigma_\omega)$ is not injective (the zero-mode coarse graining).
For example, $\tilde{\tau}_x(F(A)):= F(\tau_x(A))=F(A)$, but it has a non-trivial
{\it macro-dynamics} $\tilde{\alpha}_t(F(A)):= F(\alpha_t(A))$.
Therefore, the {\it macro-system} II defined by the algebra of fluctuation
operators is the {\it triplet}
($F(\mathcal{A}_{sa},\sigma_\omega),\tilde{\omega},\tilde{\alpha}_t)$.
Identification of the algebra of the fluctuation operators $F(\mathcal{A}_{sa},\sigma_\omega)$ for a given
micro-system ($\cal A$,$\omega,\alpha_t$) with the CCR-algebra of the boson field operators
supplies a mathematical description of so-called {\it collective
excitations} (phonons, plasmons, excitons etc) in the \textit{pure} state $\omega$.
The same approach also provides an entry into the mathematical foundation of
another physical concept: the {\it Linear Response Theory} \cite{GVV2}.
In the latter case, it became clear that the {\it algebra of fluctuations} is more
sensitive to "gentle" perturbations of the microscopic
Hamiltonian by \textit{external sources} than, e.g., the {\it algebra at infinity} $m(\mathcal{A})$. This property
becomes even more pronounced if the equilibrium state $\omega$ (being pure) belongs
to the critical domain \cite{VZ1}. In this case, perturbations of the microscopic
Hamiltonian which do not change the equilibrium state $\omega$ ("gentle" perturbations) can produce
{\it different} algebras of fluctuations, independent of the quantum or classical nature of the micro-system.
As we learned in Section \ref{sec:gBEC-BQ-A}, the idea of perturbing the Hamiltonian to produce
{\it pure} equilibrium states goes back to the Bogoliubov quasi-averages.
Later this method was generalised to include the construction of the {\it mixed states} \cite{BZT}.
We recall that it can be formulated as follows:
(i) Let $\{B_l=\tau_l \,(B)\}_{l\in\mathbb{Z}}$ be operators breaking the symmetry of the initial system
\begin{equation*}
H_\Lambda(h) := H_\Lambda-\sum_{l\in\Lambda}h_lB_l\;\;,\;\;\; h_l\in\reel^1 \ .
\end{equation*}
(ii) Then the limiting states for $h_l=h$
\begin{equation}\label{eq:1.10}
\langle-\rangle=\lim_{h\rightarrow 0}\lim_{\Lambda}\langle-\rangle_{\Lambda,h} \ ,
\end{equation}
pick out pure states with respect to the decomposition corresponding to the
symmetry (Bogoliubov's quasi-averages).
(iii) If the external field $h=\widehat{h}/\vert \Lambda\vert^\alpha$, then the obvious generalisation
of (\ref{eq:1.10}) either coincides with \textit{pure} states $(\alpha<\alpha_c)$ or gives a
family of \textit{mixed} states enumerated by $\widehat{h}$ for $\alpha\geq\alpha_c$, see \cite{BZT}.
As was found in \cite{VZ1}, the algebra of fluctuations for a
quantum model of a ferroelectric (\textit{structural} phase transitions) depends on the
parameter $\alpha$ in the critical domain (below the critical line) even for
the pure states, i.e., for $\alpha<\alpha_c=1$ one obtains for the correlation critical exponents (\ref{1.2}):
$\delta_Q=\alpha/2$, while $\delta_P=0$ (for $T\not= 0$, $T$ is the temperature). Here $A:=Q$ and
$B:=P$ are respectively the atomic displacement and momentum operators in the
site ($l=0$) of $\mathbb{Z}$. The second observation of \cite{VZ1}
concerns the {\it quantum nature} of the critical fluctuations $F^{\delta}(\cdot)$, i.e.
fluctuations in the \textit{pure} state $\omega$, which belongs to the critical line.
It was shown that the expected abelian properties of critical fluctuations
can change into non-abelian commutation relations between $F^{\delta_Q}(Q)$ and
$F^{\delta_P}(P)$ with $\delta_Q=-\delta_P>0$, at the quantum critical point $(T=0, \lambda =\lambda_c)$. Here,
$\lambda :=\hbar/\sqrt{m}$ is the \textit{quantum parameter} of the model, where $m$ is the
mass of atoms in the nodes of lattice $\mathbb{Z}$.
Since usually one has long-range correlations on the critical line, the {\it critical} fluctuations are
anticipated to be sensitive with respect to the above "gentle" perturbations
$\ h=\widehat{h}/\vert \Lambda\vert^\alpha$. On the other hand, they should also be sensitive to the
decay of the \textit{direct} interaction between particles: in our model, the decay of
the harmonic force-matrix elements is given by
\begin{equation}\label{eq:1.11}
\phi_{l,l'}\sim\vert l-l'\vert^{-(d+\sigma)}\;\; {\rm{for}} \;\;\vert l-l'\vert \longrightarrow \infty \ .
\end{equation}
If $\sigma\geq 2$, then one classifies the interaction (\ref{eq:1.11}) as {\it short-range}, whereas the
case $0<\sigma<2$ as {\it long-range}, because the corresponding lattice Fourier-transform has the
following two types of asymptotics for $k\rightarrow 0$:
\begin{equation}\label{eq:1.12}
\tilde{\phi}(k)\sim\left\{\begin{array}{ll} a^\sigma k^\sigma+o(k^\sigma) \ ,
& 0<\sigma<2 \ , \\ a^2k^2+o(k^2) \ , & \sigma\geq 2 \ . \end{array} \right.
\end{equation}
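The dichotomy (\ref{eq:1.12}) is easy to check numerically. In $d=1$, for the model decay
$\phi_{l,0}=\vert l\vert^{-(1+\sigma)}$, the log-log slope of $k\mapsto\tilde{\phi}(0)-\tilde{\phi}(k)$ at small
$k$ is $\approx\sigma$ for $0<\sigma<2$ and $\approx 2$ for $\sigma> 2$. A minimal sketch, where the cut-off $L$
and the probe momenta are hypothetical choices:
\begin{verbatim}
# Numerical check of (1.12) in d = 1 for phi(l) = |l|^{-(1+sigma)}:
# the small-k slope of log[phi~(0) - phi~(k)] vs log k is ~ sigma
# for 0 < sigma < 2 and ~ 2 for sigma > 2.
import numpy as np

def gap(k, sigma, L=200000):
    l = np.arange(1, L + 1, dtype=float)
    return np.sum(2.0 * l**(-(1.0 + sigma)) * (1.0 - np.cos(k * l)))

for sigma in (1.0, 1.5, 3.0):
    k1, k2 = 1e-3, 2e-3
    slope = np.log(gap(k2, sigma) / gap(k1, sigma)) / np.log(k2 / k1)
    print(sigma, round(slope, 2))
\end{verbatim}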
Therefore, our purpose is to find the exponents $\delta_A$ as functions of the
parameters $\alpha$ and $\sigma$ for a quantum ferroelectric model. Note that $\delta_Q=\delta_Q(\alpha,\sigma)$ is
directly related to the critical exponent $\eta$ describing the decay of the two-point correlation function
for displacements on the critical line: $\eta=2-2d\,\delta_Q$, \cite{APS}.
\subsection{Quantum phase transition, fluctuations and quasi-averages} \label{QPT-Fluct-Q-A}
Let $\mathbb{Z}$ be the $d$-dimensional square lattice. With each lattice site $l$,
occupied by a particle of mass $m$, we associate the position operator
$Q_l\in\reel^1$ and the momentum operator $P_l=(\hbar/i)(\partial/\partial
Q_l)$ in the Hilbert space $\mathcal{H}_l = L^2(\reel^1, dx)$. Let $\Lambda$ be a finite cubic
subset of $\mathbb{Z}$, $V=\vert\Lambda\vert$ and the set $\Lambda^\ast$ is \textit{dual} to $\Lambda$
with respect to \textit{periodic} boundary conditions. The local Hamiltonian $H_\Lambda$ of the model is
a self-adjoint operator on domain ${\rm{dom}}(H_\Lambda) \subset \mathcal{H}_\Lambda$, given by
\begin{equation}\label{2.1}
H_\Lambda=\sum_{l\in\Lambda}\frac{P_l^2}{2m}+\frac{1}{4}\sum_{l, \, l' \, \in \Lambda}\phi_{l,l'}(Q_l-
Q_{l'})^2+\sum_{l\in \Lambda}U(Q_l)-h\sum_{l\in \Lambda}Q_l \ .
\end{equation}
Here the local Hilbert space $\mathcal{H}_\Lambda := \otimes_{l \in \Lambda} \mathcal{H}_l $.
Note that the second term of (\ref{2.1}) represents the harmonic interaction between particles, the last term
represents the action of an external field and the third one is the
anharmonic on-site potential acting at each $l \in \mathbb{Z}$. Recall that the potential $U$ must have a
\textit{double-well} form to describe a \textit{displacive} structural phase transition
attributed to the \textit{one}-component ferroelectric \cite{APS}. For example:
$U(Q_l)=\frac{a}{2}Q_l^2+W(Q_l^2)$, $a<0$, with $W(x)=\frac{1}{2}bx^2$, $b>0$. Another example
is a nonpolynomial $U$, such that $a>0$ and $W(x)=\frac{1}{2}b\exp{(-\eta x)}\;,\;\eta>0$ for $b>0$.
Then (\ref{2.1}) becomes
\begin{equation}\label{2.2}
H_\Lambda=\sum_{l\in \Lambda}\frac{P_l^2}{2m}+\frac{1}{4}\sum_{l, \, l' \, \in \Lambda}\phi_{l,l'}
(Q_l-Q_{l'})^2+\frac{a}{2}\sum_{l\in \Lambda}Q_l^2
+ \sum_{l\in \Lambda}W(Q_l^2)-h\sum_{l\in \Lambda}Q_l \ .
\end{equation}
Recall that the model (\ref{2.2}) manifests a structural phase transition, breaking the $Z_2$-symmetry
$\{Q_l \rightarrow - Q_l\}_{l\in \mathbb{Z}}$ at low temperature, if the quantum parameter satisfies
$\lambda < \lambda_c$, \cite{MPZ}, \cite{AKKR}.
We comment that a modified model (\ref{2.2}) can be solved exactly if one applies the following
approximation:
\begin{equation*}
\sum_{l\in\Lambda} W(Q_l^2)\longrightarrow V \, W(\frac{1}{V}\sum_{l\in\Lambda}Q_l^2) \ ,
\end{equation*}
known as the concept of \textit{self-consistent} phonons (SCP), see \cite{APS}. This yields a model with
Hamiltonian
\begin{equation}\label{2.3}
H_\Lambda^{SCP}=\sum_{l\in
\Lambda}\frac{P_l^2}{2m}+\frac{1}{4}\sum_{l,\, l' \, \in \Lambda}\phi_{l,l'}
(Q_l-Q_{l'})^2+\frac{a}{2}\sum_{l\in \Lambda}Q_l^2
+ V \, W(\frac{1}{V}\sum_{l\in \Lambda}Q_l^2) - h \sum_{l\in \Lambda}Q_l \ ,
\end{equation}
that can be solved by the Approximating Hamiltonian Method (AHM) \cite{BBZKT}, see \cite{PT} and \cite{VZ1}.
Then the \textit{free-energy} density for the Hamiltonian $H_\Lambda(c)$, which approximates
$H_\Lambda^{SCP}$ (\ref{2.3}), is
\begin{equation}\label{2.31}
f_{\Lambda}[H_\Lambda(c)] : =
- \frac{1}{\beta V} \ln {\rm{Tr}}_{{\mathcal{H}}_{\Lambda}} e^{-\beta H_{\Lambda}(c)} \ , \ \ \
\beta := \frac{1}{k_B T} \ ,
\end{equation}
where $k_B$ is the Boltzmann constant. Since the AHM yields that
\begin{equation}\label{2.32}
H_\Lambda(c) := \sum_{l\in
\Lambda}\frac{P_l^2}{2m}+\frac{1}{4}\sum_{l,\, l' \, \in \Lambda}\phi_{l,l'}
(Q_l-Q_{l'})^2+\frac{a}{2}\sum_{l\in \Lambda}Q_l^2 \ +$$
$$ \ + V \, \left[W(c) + W'(c)\left(\frac{1}{V}\sum_{l\in \Lambda}Q_l^2 - c\right)\right] -
h \sum_{l\in \Lambda}Q_l \ ,
\end{equation}
the \textit{free-energy} density (\ref{2.31}) takes the explicit form
\begin{eqnarray*}
f_{\Lambda}[H_\Lambda (c_{\Lambda,h}(T,\lambda))]&=&\frac{1}{\beta V}\sum_{q\in\Lambda^*}
\ln{\left[2\sinh{\frac{\beta\lambda\Omega_q(c_{\Lambda,h}(T,\lambda))}{2}}\right]}-
\frac{1}{2}\frac{h^2}{\Delta(c_{\Lambda,h}(T,\lambda))} \\
&&+ \ W(c_{\Lambda,h}(T,\lambda))-c_{\Lambda,h}(T,\lambda)\, W'(c_{\Lambda,h}(T,\lambda)) \ .
\end{eqnarray*}
Here $c =c_{\Lambda,h}(T,\lambda)$ is a solution of the self-consistency equation:
\begin{equation}\label{2.5}
c=\frac{h^2}{\Delta^2(c)}+\frac{1}{V}\sum_{q\in\Lambda^*}\frac{\lambda}{2\Omega_q(c)}
\coth{\frac{\beta\lambda}{2}\Omega_q(c)} \ .
\end{equation}
The spectrum $\Omega_q(c_{\Lambda,h}(T,\lambda))$, $q\in\Lambda^*$, of $H_\Lambda (c_{\Lambda,h}(T,\lambda))$ is defined by the harmonic spectrum $\omega_q$ and
by the gap $\Delta(c_{\Lambda,h}(T,\lambda))$:
$$\Omega_q^2(c_{\Lambda,h}(T,\lambda)):=\Delta(c_{\Lambda,h}(T,\lambda))+\omega_q^2 \ , $$
$$\Delta(c_{\Lambda,h}(T,\lambda)):=a+2W'(c_{\Lambda,h}(T,\lambda)) \ , $$
$$\omega_q^2:=\tilde{\phi}(0)-\tilde{\phi}(q) \ , \
\tilde{\phi}(q) := \sum_{l\in\Lambda}\phi_{l,0}\exp{(-iql)} \ . $$
Finally, $\lambda = {\hbar}/{\sqrt{m}}$ is the quantum parameter of the model and $\beta=(k_B T)^{-1}$,
where $T$ is the temperature.
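A minimal numerical sketch may orient the reader here: on a $1$-$d$ chain with nearest-neighbour coupling, so
that $\omega_q^2 = 2J(1-\cos q)$, and with the nonpolynomial choice $W(c)=\frac{1}{2}b\exp(-\eta c)$ from above,
equation (\ref{2.5}) can be solved by a damped fixed-point iteration. All numerical parameters below are
hypothetical and are chosen inside the disordered phase, where $\Delta(c)>0$:
\begin{verbatim}
# Damped fixed-point iteration for the self-consistency equation (2.5)
# on a 1-d chain: omega_q^2 = 2J(1 - cos q), W(c) = (b/2) exp(-eta c),
# hence Delta(c) = a + 2 W'(c) = a - b*eta*exp(-eta c).
import numpy as np

a, b, eta, J = 1.0, 2.0, 1.0, 1.0          # model constants (assumed)
lam, beta, h, V = 1.0, 0.5, 0.01, 256      # quantum parameter, 1/k_B T, source
q = 2.0 * np.pi * np.arange(V) / V
omega2 = 2.0 * J * (1.0 - np.cos(q))       # harmonic spectrum omega_q^2

def Delta(c):
    return a - b * eta * np.exp(-eta * c)

c = 1.5                                    # start inside the stability domain
for _ in range(200):
    Om = np.sqrt(Delta(c) + omega2)        # Omega_q(c)^2 = Delta(c) + omega_q^2
    rhs = h**2 / Delta(c)**2 + np.mean(
        lam / (2.0 * Om) / np.tanh(beta * lam * Om / 2.0))
    c = 0.5 * c + 0.5 * rhs                # damping keeps Delta(c) > 0
print("c =", c, " Delta(c) =", Delta(c))
\end{verbatim}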
The approximating Hamiltonian method gives, for $H_\Lambda (c_{\Lambda,h}(T,\lambda)) \geq 0$, the following condition of \textit{stability}
in the thermodynamic limit $\Lambda\rightarrow\mathbb{Z}$:
\begin{equation}\label{2.12}
\Delta(c_{h}(T,\lambda))=\lim_\Lambda \Delta(c_{\Lambda, h}(T,\lambda)) \geq 0 \ , \
c_{h}(T,\lambda):= \lim_\Lambda c_{\Lambda, h}(T,\lambda) \ .
\end{equation}
Let $a>0$ and let $W:\mathbb{R}_{+}^1\rightarrow \mathbb{R}_{+}^1$ be a monotonically decreasing function with
$W''(c)\geq w>0$. Then by the definition of the gap $\Delta(c_{\Lambda,h}(T,\lambda))$ and by (\ref{2.12}) one gets for the
\textit{stability domain}: $D=[c^*,\infty)$, where
$c^*=\inf\{{c\;: c\geq0\;,\;\Delta(c)\geq 0\}}$ and $\Delta(c^*)= a+2W'(c^*) = 0 $.
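For instance, for the nonpolynomial example $W(c)=\frac{1}{2}b\exp{(-\eta c)}$ above one has
$W'(c)=-\frac{1}{2}b\,\eta\, e^{-\eta c}$, hence
\begin{equation*}
\Delta(c)= a - b\,\eta\, e^{-\eta c} \ , \qquad
c^* = \frac{1}{\eta}\,\ln\frac{b\,\eta}{a} \ ,
\end{equation*}
provided $b\,\eta > a$; otherwise $\Delta(c)\geq 0$ for all $c\geq 0$ and $c^*=0$.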
\begin{theo} \label{thm:3.1}
\begin{equation}\label{AHM}
\lim_\Lambda f_{\Lambda}[H_\Lambda^{SCP}] = \lim_\Lambda \sup_{c \, \geq \, c^*} f_{\Lambda}[H_\Lambda (c)]
=: f(\beta, h).
\end{equation}
\end{theo}
By this theorem, which is the central statement of the AHM \cite{VZ1}, the thermodynamics of the system $H_\Lambda^{SCP}$ and of
$H_\Lambda (c)$ for $c =c_{\Lambda,h}(T,\lambda)$ (\ref{2.5}) are \textit{equivalent}.
Therefore, to study the phase diagram of the model (\ref{2.3}) we have to consider equation
(\ref{2.5}) in the thermodynamic limit $\Lambda\rightarrow\mathbb{Z}$:
\begin{equation}\label{2.14}
c_{h}(T,\lambda)=\rho(T,\lambda, h)+I_d(c_{h}(T,\lambda),T,\lambda) \ .
\end{equation}
Here we split the thermodynamic limit of the integral sum (\ref{2.5}) into the zero-mode term plus the $h$-term
and the rest:
\begin{eqnarray}\label{2.15}
&&\rho(T,\lambda, h) = \lim_{\Lambda} \rho_{\Lambda}(T,\lambda, h) :=\\
&&\lim_{\Lambda}\left\{\frac{h^2}{\Delta^2(c_{\Lambda,h}(T,\lambda))} + \frac{1}{V}\frac{\lambda}{2\sqrt{\Delta(c_{\Lambda,h}(T,\lambda))}}
\coth\frac{\beta\lambda}{2}\sqrt{\Delta(c_{\Lambda,h}(T,\lambda))}\right\} , \nonumber \\
&&I_d(c_{h}(T,\lambda),T,\lambda):= \frac{\lambda}{(2\pi)^d}\int_{q\in B_d}d^dq
\frac{1}{2\Omega_q (c_{h}(T,\lambda))}\coth\frac{\beta\lambda}{2}\Omega_q(c_{h}(T,\lambda)) \ . \nonumber
\end{eqnarray}
Here, $B_d=\{q\in\reel^d;\vert q\vert\leq\pi\}$ is the first Brillouin zone.
To analyse the solution of (\ref{2.14}) we consider below two cases: (a) $h=0$ and (b) $h\not = 0$.
\smallskip
\noindent (a) $h=0$: From (\ref{2.14}), (\ref{2.15}), one easily gets that for $T=0$, there is
$\lambda_c$ such that $c^\ast\leq I_d(c^\ast,0,\lambda)$ for
$\lambda\geq\lambda_c$ and $c^\ast\ = I_d(c^\ast,0,\lambda_c)$ defines the critical value of the
\textit{quantum parameter} $\lambda$. Then the line $(\lambda, T_c(\lambda))$ of critical temperatures:
$\lambda \mapsto T_c(\lambda)$, which separates the \textit{phase diagram} $(\lambda, T)$ into two domains
(A)-(B), verifies the identity:
\begin{equation}\label{2.17}
c^*=I_d(c^*,T_c(\lambda),\lambda)\, , \ \ \lambda\leq\lambda_c \ \ {\rm{and}} \ \ T_c(\lambda_c ) = 0 \ .
\end{equation}
Taking into account (\ref{2.14}) and (\ref{2.15}) one can express the conditions (\ref{2.17}) as the
\textit{critical-line} equation:
\begin{equation}\label{2.16}
\rho_{c^*}(T_c(\lambda),\lambda):= \rho(T,\lambda, h)\big|_{c_{h}(T,\lambda) = c^*} =
c^*-I_d(c^*,T_c(\lambda),\lambda) = 0 \ .
\end{equation}
Therefore, we obtain two solutions of (\ref{2.14}) distinguished by the value of the gap (\ref{2.12}):
\begin{itemize}
\item (A) $ \ \rho(T,\lambda, 0) = 0 \ , \ \ c_{0}(T,\lambda) > c^\ast \ \ {\rm{or}} \ \
\Delta(c_{0}(T,\lambda)) > 0 : \, T > T_c(\lambda) \vee \lambda > \lambda_c$ \ ,
\item (B) $ \ \rho(T,\lambda, 0) \geq 0 \ , \ \ c_{0}(T,\lambda)=c^\ast \ \ {\rm{or}} \ \
\Delta(c_{0}(T,\lambda)) = 0 : \, 0 \leq T \leq T_c(\lambda) \wedge \lambda \leq \lambda_c$ \ .
\end{itemize}
For fixed $\lambda < \lambda_c$, looking along the vertical ($\lambda= \, $ const) line, we observe
the well-known temperature-driven phase transition at $T_c(\lambda)>0$ with the \textit{order} parameter,
which can be identified with $\rho$. On the other hand, for a fixed $T < T_c(0)$, looking along the horizontal
($T = \, $ const) line
one observes a phase transition at $\{\lambda: T_c(\lambda) = T\}$, which is driven by the quantum parameter
$\lambda = \hbar/\sqrt{m}$.
Note that for $\lambda>\lambda_c$, i.e. for \textit{light} atoms, the temperature-driven phase
transition is suppressed by quantum tunneling or quantum fluctuations. The decrease of $T_c(\lambda)$ for
light atoms is well-known as the \textit{isotopic effect} in ferroelectrics \cite{APS}. Since by
Theorem \ref{thm:3.1} the thermodynamics of the model (\ref{2.3}) and of the approximating Hamiltonian $H_\Lambda (c_{\Lambda,h}(T,\lambda))$
are equivalent, the proof that one has the same effect in the model (\ref{2.3}), including the existence of
$\lambda_c$, follows from the solution of equation (\ref{2.5}) and the monotonicity of
$\lambda \mapsto I_d(c^\ast, 0,\lambda)$. The proof of the isotopic effect for the original model (\ref{2.1})
was obtained in \cite{VZ2}, see also \cite{MPZ}, \cite{AKKR}.
To proceed we introduce for Hamiltonians (\ref{2.2}), (\ref{2.3}), and (\ref{2.32}) the canonical Gibbs states:
\begin{equation}\label{2.171}
\omega_{\beta,\Lambda, \ast}(\cdot) = \frac{{\rm{Tr}}_{{\mathcal{H}}_{\Lambda}}
[\exp(-\beta H_{\Lambda, \ast})\ (\cdot) \ ]}
{{\rm{Tr}}_{{\mathcal{H}}_{\Lambda}} \exp(-\beta H_{\Lambda, \ast})} \ \ , \ \
H_{\Lambda, \ast} = H_{\Lambda} \vee H_{\Lambda}^{SCP} \vee H_{\Lambda}(c) \ .
\end{equation}
Note that by (\ref{2.171}) these states inherit for $h=0$ the $Z_2$-symmetry of the Hamiltonians
(\ref{2.2}), (\ref{2.3}), and (\ref{2.32}): $Q_l\rightarrow -Q_l$, i.e. one has
\begin{equation}\label{2.18}
\omega_{\beta,\Lambda, \ast}(Q_l) = \omega_{\beta,\Lambda, \ast}(- Q_l) = 0 \ .
\end{equation}
\noindent (b) $h\not = 0$: Then we obtain
\begin{equation}\label{2.19}
\omega_{\beta, c_h}(Q_l)=\frac{h}{\Delta(c_h(T,\lambda))} \ .
\end{equation}
For the \textit{disordered} phase (A), we have $\lim_{h\rightarrow 0} c_h(T,\lambda)=c(T,\lambda)> c^*$.
So, $\Delta(c)>0$ and
\begin{equation}\label{2.20}
\lim_{h\rightarrow 0}\omega_{\beta, c_h}(Q_l)=0 \ .
\end{equation}
For the \textit{ordered} phase (B), we have $\lim_{h\rightarrow 0} c_h(T,\lambda)=c^*$, then by (\ref{2.15})
\begin{equation}\label{2.21}
\rho_{c^*}(T,\lambda)=c^*-I_d(c^*,T,\lambda)=\lim_{h\rightarrow 0}\frac{h^2}{\Delta^2(c_h)}>0 \ .
\end{equation}
Finally, (\ref{2.19}) and (\ref{2.21}) yield the values of the physical order parameters
\begin{equation}\label{2.22}
\omega_{\beta, \pm}(Q_l):= \lim_{h\rightarrow\pm 0}\omega_{\beta, c_h}(Q_l)=
\pm\sqrt{\rho_{c^*}(T,\lambda)}\not=0 \ .
\end{equation}
Therefore, using the Bogoliubov quasi-average (\ref{2.22}), and Section \ref{sec:gBEC-BQ-A}, we obtain
two extremal translation-invariant equilibrium states $\omega_{\beta, +}$ and $\omega_{\beta, -}$
such that
\begin{equation}\label{2.23}
\omega_{\beta, +}(Q_l) = - \ \omega_{\beta, -}(Q_l)=[\rho_{c^*}(T,\lambda)]^{{1}/{2}}\not=0 \ , \ \ \
l\in \mathbb{Z} \ .
\end{equation}
In this case, one can easily check that positions and momenta have \textit{normal}
fluctuations $\delta_Q = \delta_P = 0$ (\ref{1.2}), \cite{VZ1}. We return to this observation below in the
framework of a more general approach: the \textit{scaled} Bogoliubov quasi-averages \cite{VZ1}.
\begin{defi} \label{def:3.11}
We say that the external source in (\ref{2.2}), (\ref{2.3}), and (\ref{2.32}) corresponds to the \textit{scaled}
Bogoliubov quasi-average $h\rightarrow 0$, if it is coupled with the thermodynamic limit
$\Lambda\uparrow\mathbb{Z}$ by the relation:
\begin{equation}\label{2.24}
h_{\alpha}:=\frac{\widehat{h}}{V^\alpha} \ , \ \alpha>0 \ .
\end{equation}
\end{defi}
This choice of quasi-average is flexible enough to scan between weak and strong external sources as a function of
$\alpha > 0$. This is the message of the following proposition, see \cite{VZ1}.
\begin{proposition}\label{prop:3.12}
If $\alpha<1$, then the limiting equilibrium states remain \textit{pure}: $\lim_\Lambda
\omega_{\beta, c_h}(Q_l)=({\rm{sign}} \, \widehat{h})[\rho_{c^*}(T,\lambda)]^{{1}/{2}}$, similarly to the case of
the standard Bogoliubov quasi-average $h\rightarrow \pm 0$, (\ref{2.23}).\\
If $\alpha\geq1$, then the limiting state $\omega_{\beta, \widehat{h}}(Q_l)$ becomes a mixture of pure states:
\begin{equation*}
\omega_{\beta, \widehat{h}}(Q_l)= a \ \omega_{\beta, +}(Q_l) + (1-a) \ \omega_{\beta, -}(Q_l) \ ,
\end{equation*}
where $a := a(\widehat{h}, \alpha, \rho_{c^*}(T,\lambda)) \in [0,1]$ is
\begin{equation}\label{2.24a}
a(\widehat{h}, \alpha, \rho) = \frac{1}{2} \left(1 + \frac{\widehat{h}}{\xi \sqrt{\rho}}\right) \ , \ {\rm{for}} \
\alpha = 1\ , \ \ \ {\rm{and}} \ \ \ \ a(\widehat{h}, \alpha, \rho) = 1/2 \ , \ {\rm{for}} \ \alpha > 1 \ .
\end{equation}
Here $\xi: = \lim_{\Lambda} [\Delta(c_{\Lambda,h}(T,\lambda)) \, V] = (2\beta \rho)^{-1} + \sqrt{(2\beta \rho)^{-2} +
\widehat{h}^2/\rho}\ $.
\end{proposition}
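As a consistency check of (\ref{2.24a}): for $\widehat{h}\to 0$ one gets $\xi \to (\beta\rho)^{-1}$, hence
$a \to 1/2$, i.e. the symmetric mixture, whereas for $|\widehat{h}|\to\infty$ one has
$\xi \simeq |\widehat{h}|/\sqrt{\rho}$ and $a \to (1+{\rm{sign}}\,\widehat{h})/2$, which recovers the pure states
$\omega_{\beta,\pm}$ of the case $\alpha<1$.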
Our next step is to study the impact of the scaled quasi-average sources on the quantum fluctuation operators.
Consider now the zero-mode ($k=0$, (\ref{1.2})) fluctuation operators of position and momentum given by
\begin{equation}\label{2.25}
F_{\delta_Q}(Q)=\lim_\L\frac{1}{V^{\frac{1}{2}+\delta_Q}}\sum_{i\in\L}(Q_i-\omega_{\beta, \Lambda, c_h}(Q_i)) \ ,
\end{equation}
and
\begin{equation}\label{2.26}
F_{\delta_P}(P)=\lim_\L\frac{1}{V^{\frac{1}{2}+\delta_P}}\sum_{i\in\L}(P_i-\omega_{\beta, \Lambda, c_h}(P_i)) \ .
\end{equation}
Since the approximating Hamiltonian (\ref{2.32}) is a quadratic operator form, one can calculate the variances
of the fluctuation operators (\ref{2.25}) and (\ref{2.26}) explicitly:
\begin{equation*}
\lim_\L\omega_{\beta, \Lambda, c_h}\left(\{\frac{1}{V^{\frac{1}{2}+\delta_Q}}
\sum_{i\in\L}(Q_i-\omega_{\beta, \Lambda, c_h}(Q_i))\}^2\right)=
\end{equation*}
\begin{equation}\label{2.27}
= \lim_\L\frac{1}{V^{2\delta_Q}}\frac{\lambda}{2\sqrt{\Delta(c_{\Lambda, h}(T,\lambda))}}
\coth\frac{\beta\lambda}{2}\sqrt{\Delta(c_{\Lambda, h}(T,\lambda))} \ ,
\end{equation}
\begin{equation*}
\lim_\L\omega_{\beta, \Lambda, c_h}\left(\{\frac{1}{V^{\frac{1}{2}+\delta_P}}
\sum_{i\in\L}(P_i-\omega_{\beta, \Lambda, c_h}(P_i))\}^2\right)=
\end{equation*}
\begin{equation}\label{2.28}
= \lim_\L\frac{1}{V^{2\delta_P}}\frac{\lambda
m\sqrt{\Delta(c_{\Lambda, h}(T,\lambda))}}{2}\coth\frac{\beta\lambda}{2}\sqrt{\Delta(c_{\Lambda, h}(T,\lambda))} \ .
\end{equation}
Here $c_h : = c_{\Lambda,h}(T,\lambda)$ is a solution of the self-consistency equation (\ref{2.5}) and by the $Z_2$-symmetry
of the Hamiltonian (\ref{2.3}): $P_l\rightarrow - P_l$, one has $\omega_{\beta,\Lambda, c_h}(P_l) = 0$ in
(\ref{2.28}) for all $l \in \Lambda$ and for any values of $\beta , h$.
We note that the existence of the \textit{nontrivial} variances (\ref{2.27}) and (\ref{2.28}) is sufficient for the
proof of existence of the characteristic function (\ref{1.3}) with the sesquilinear form
$S_\omega(\cdot,\cdot)$. The next ingredient is the {symplectic} form $\sigma_\omega(\cdot,\cdot)$ corresponding
to the fluctuation-operator algebra; to obtain it one has to calculate the limit of the commutator (\ref{1.6}). By
(\ref{2.25}) and (\ref{2.26}) we get
\begin{equation}\label{1.61}
\lim_\Lambda[F^{\delta_P}_\Lambda(P),F^{\delta_Q}_\Lambda(Q)]=
\lim_\Lambda \frac{1}{V^{1+\delta_P + \delta_Q}}\sum_{l,l'\in\L} [P_{l} , Q_{l'}] =
\lim_\Lambda \frac{1}{V^{\delta_P + \delta_Q}} \ \frac{\hbar}{i} \ .
\end{equation}
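Explicitly, since $[P_{l} , Q_{l'}] = ({\hbar}/{i})\,\delta_{l,l'}$, the double sum in (\ref{1.61}) contains
exactly $V$ nonvanishing terms, and
\begin{equation*}
\lim_\Lambda \frac{1}{V^{\delta_P + \delta_Q}} \ \frac{\hbar}{i} \ = \
\begin{cases}
0 \ , & \delta_P + \delta_Q > 0 \ ,\\
{\hbar}/{i} \ , & \delta_P + \delta_Q = 0 \ ,
\end{cases}
\end{equation*}
while for $\delta_P + \delta_Q < 0$ the limit does not exist. Hence the fluctuation algebra is abelian precisely
when $\delta_P + \delta_Q > 0$.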
\begin{rem}\label{rem:3.12} We summarise this subsection by the following list of comments and remarks.
(a)-(A) Let $h = 0$ and let $[0, \lambda_c] \ni \lambda \mapsto T_c (\lambda)$.
If the point $(\lambda, T)$ on the phase diagram is above the critical line $(\lambda, T_c (\lambda))$:
$T > T_c(\lambda)$, or if $\lambda > \lambda_c$, see (\ref{2.17}), then this is the case (A), when
$\Delta(c_{h=0}(T,\lambda)) > 0$. Consequently (\ref{2.27}) and (\ref{2.28}) yield $\delta_Q=\delta_P = 0$,
to ensure non-triviality of the variances, i.e. of the central limit
both for momentum and for displacement fluctuation operators. They are called \textit{normal}, or
\textit{noncritical} fluctuation operators. Since in this case the commutator (\ref{1.61}) is nontrivial,
the operators $F_{0}(P)$ and $F_{0}(Q)$ are generators of a \textit{non-abelian} algebra of normal
fluctuations. Since in this domain of the phase diagram the order parameter $\rho(T,\lambda, h=0) =0$
(\ref{2.15}), we call this pure phase disordered. Note that $\rho(T,\lambda, h=0) =0$ implies
$\omega_{\beta,\Lambda, c_{h=0}}(Q_l) = 0$ even without reference to the $Z_2$-symmetry (\ref{2.18}).
(a)-(B) Let $h = 0$. If the point $(\lambda, T)$ on the phase diagram is below the critical line
$(\lambda, T_c (\lambda))$: $T < T_c(\lambda)$ and $\lambda < \lambda_c$, then
$\lim_\Lambda c_{\Lambda, h=0}(T,\lambda)= c^*$, i.e. the gap
$\lim_\Lambda \Delta(c_{\Lambda, h=0}(T,\lambda)) = 0$ (\ref{2.12}) and the order parameter
$\rho(T,\lambda, h=0) > 0$ (\ref{2.15}). Therefore, by (\ref{2.27}) one gets $\delta_Q = 1/2$ and by
(\ref{2.28}) one gets $\delta_P = 0$ to ensure a nontrivial central limit. Hence, the
displacement fluctuation operator $F_{1/2}(Q)$ is \textit{abnormal}, whereas the momentum fluctuation operator
$F_{0}(P)$ is normal. By (\ref{1.61}) the operators $F_{1/2}(Q), F_{0}(P)$ (\ref{2.25}), (\ref{2.26}),
commute, i.e. they generate an \textit{abelian} algebra of fluctuations. We comment that although the
order parameter $\rho(T,\lambda, h=0) > 0$, the $Z_2$-symmetry (\ref{2.18}) implies that the \textit{displacement}
order parameter $\omega_{\beta, c^*}(Q_l) = 0$. The Bogoliubov quasi-average (\ref{2.22})
gives a non-zero value for the \textit{displacement} order parameter. This means that $\omega_{\beta, c^*}$
is the \textit{one-half} mixture of the pure states $\omega_{\beta,\pm}$ (\ref{2.23}), which explains the
abnormal fluctuation of the displacement.
Now let $h \neq 0$ and consider the standard Bogoliubov quasi-averages, see (b).
(b)-(A) Since $\Delta(c_{h}(T,\lambda)) > 0$, by (\ref{2.27}), (\ref{2.28}) one gets the finite
quasi-averages
\begin{eqnarray*}
&&\lim_{h \rightarrow 0} \lim_\L\omega_{\beta, \Lambda, c_h}\left(\{\frac{1}{V^{\frac{1}{2}}}
\sum_{i\in\L}(Q_i-\omega_{\beta, \Lambda, c_h}(Q_i))\}^2\right) \ , \\
&&\lim_{h \rightarrow 0} \lim_\L\omega_{\beta, \Lambda, c_h}\left(\{\frac{1}{V^{\frac{1}{2}}}
\sum_{i\in\L}(P_i-\omega_{\beta, \Lambda, c_h}(P_i))\}^2\right)\ . \nonumber
\end{eqnarray*}
They yield the same result on the normal fluctuations as in (a)-(A).
(b)-(B) Since $h \neq 0$, the difference with the case (a)-(B) comes from
$\lim_\Lambda \Delta(c_{\Lambda, h}(T,\lambda)) > 0$ (\ref{2.12}) and from (\ref{2.21}), which is valid in
the ordered phase. Then the quasi-average for the displacement variance (\ref{2.27}):
\begin{equation}\label{1.62}
\lim_{h \rightarrow 0}
\lim_\L\frac{1}{V^{2\delta_Q}}\frac{\lambda}{2\sqrt{\Delta(c_{\Lambda, h}(T,\lambda))}}
\coth\frac{\beta\lambda}{2}\sqrt{\Delta(c_{\Lambda, h}(T,\lambda))} \ ,
\end{equation}
has \textit{no} nontrivial limit for any $\delta_Q$, whereas the quasi-average for the momentum variance
(\ref{2.28}) is nontrivial only when $\delta_P = 0$.
(b*)-(B) This difficulty is one of the motivations to consider, instead of (\ref{1.62}), the \textit{scaled}
Bogoliubov quasi-average (\ref{2.24}) for $h_{\alpha}$.
(a)-(B*) We conclude this remark with the case when the point $(\lambda, T)$ belongs to the
\textit{critical line}: $(\lambda, T_c(\lambda))$, where $\lambda \leq \lambda_c$. Therefore, the gap
$\lim_\Lambda \Delta(c_{\Lambda, h=0}(T_c(\lambda),\lambda)) = 0$ (\ref{2.12}) and the order parameter
$\rho(T_c(\lambda),\lambda, h=0) = 0$ (\ref{2.15}).
(i) If $\lambda < \lambda_c$, then $T_c(\lambda) > 0$. Hence, by (\ref{2.28}) the momentum fluctuation operator
is normal, $\delta_P = 0$, whereas the displacement fluctuation operator is \textit{abnormal} with a power
$\delta_Q >0 $, which depends on the asymptotics $\mathcal{O}(V^{-\gamma}), \gamma > 0$, of the gap
$\Delta(c_{\Lambda, h=0}(T_c(\lambda),\lambda))$ in the thermodynamic limit.
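A heuristic power count makes the role of $\gamma$ explicit: for $T_c(\lambda)>0$ the argument of the $\coth$
in (\ref{2.27}) is small, so $\coth x \simeq 1/x$ and the finite-volume variance behaves as
$V^{-2\delta_Q}\,(\beta\Delta)^{-1} = \mathcal{O}(V^{\gamma - 2\delta_Q})$, suggesting $\delta_Q = \gamma/2$;
at $T=0$ the $\coth$ is equal to one and the same count, cf. (\ref{2.271}), gives $\delta_Q = \gamma/4$.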
Note that in the \textit{scaled} limit $\lim_\Lambda \Delta(c_{\Lambda, h_{\alpha}}(T_c(\lambda),\lambda)) = 0$
the asymptotics $\mathcal{O}(V^{-\gamma})$, and consequently $\delta_Q >0 $, may be modified by the power
$\alpha$, although it leaves $\delta_P = 0$ stable. We study this phenomenon in the next section. By (\ref{1.61})
the corresponding algebra of fluctuations is abelian.
(ii) If $\lambda = \lambda_c$, then $T_c(\lambda_c) = 0$ (\ref{2.17}) and one observes a zero-temperature
\textit{quantum} phase transition at the critical point $(0,\lambda_c)$ by varying the quantum parameter
$\lambda$. In this case the variances (\ref{2.27}), (\ref{2.28}) take the form
\begin{equation}\label{2.271}
\lim_\L\frac{1}{V^{2\delta_Q}}\frac{\lambda_c}{2\sqrt{\Delta(c_{\Lambda, h}(0,\lambda_c))}} \ ,
\end{equation}
\begin{equation}\label{2.281}
\lim_\L\frac{1}{V^{2\delta_P}}\frac{\lambda_c m\sqrt{\Delta(c_{\Lambda, h}(0,\lambda_c))}}{2}\ .
\end{equation}
Since $\Delta(c_{\Lambda, h_{\alpha}}(0,\lambda_c)) = \mathcal{O}(V^{-\gamma})$, (\ref{2.271})
implies that the displacement fluctuation operator is abnormal with the power $\delta_Q = \gamma/4 >0 $,
which may be modified by the power $\alpha$. The momentum fluctuation operator is also abnormal, but
\textit{squeezed} since by (\ref{2.281}) one gets $\delta_P = - \gamma/4 < 0$. Note that $\delta_Q + \delta_P = 0$
yields a nontrivial commutator (\ref{1.61}). Therefore, the algebra of abnormal fluctuations generated by
$F_{\delta_Q}(Q), F_{\delta_P}(P)$ is non-abelian and possibly $\alpha$-dependent.
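To make the exponent count explicit: since $\Delta(c_{\Lambda, h_{\alpha}}(0,\lambda_c)) = \mathcal{O}(V^{-\gamma})$,
the right-hand sides of (\ref{2.271}) and (\ref{2.281}) behave as
\begin{equation*}
\frac{\lambda_c}{2}\, V^{\frac{\gamma}{2} - 2\delta_Q} \qquad \mbox{and} \qquad
\frac{\lambda_c m}{2}\, V^{-\frac{\gamma}{2} - 2\delta_P} \ ,
\end{equation*}
so both limits are finite and nonzero exactly for $\delta_Q = \gamma/4$ and $\delta_P = -\gamma/4$.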
\end{rem}
In the next Sections we elucidate the relation between the definition of quantum fluctuation
operators and the scaled Bogoliubov quasi-averages (\ref{2.24}) indicated in Remark \ref{rem:3.12}.
\section{Quasi-averages for critical quantum fluctuations} \label{Q-A-Cr-Q-Fl}
\subsection{Quantum fluctuations below the critical line} \label{QF-below}
We consider here the case (b*)-(B). We show that the \textit{scaled} Bogoliubov quasi-average (\ref{2.24})
for $h_{\alpha}$ is relevant for analysis of fluctuations below the critical line.
\begin{proposition}\label{prop:3.13}
Let $0 \leq T < T_c(\lambda) \wedge \lambda < \lambda_c$. Then the momentum fluctuation operator
is normal, $\delta_P = 0$, whereas the displacement fluctuation operator is \textit{abnormal} with the power
$0 < \delta_Q \leq 1/2$, which depends on the scaled Bogoliubov quasi-average parameter ${\alpha}$ (\ref{2.24}).
The fluctuation algebra is abelian.
\end{proposition}
\begin{proof}
(1) Let $0<\alpha<1$. Then by (\ref{2.15}), (\ref{2.21}), and (\ref{2.24}) we obtain that
\begin{equation}\label{2.272-1}
\omega_{\beta, {\rm{sign}}(\widehat{h})}(Q_l) = \lim_{\Lambda}\omega_{\beta,\Lambda, c_h}(Q_l)= \lim_{\Lambda}
\frac{\widehat{h}}{V^{\alpha}\Delta(c_{\Lambda,h}(T,\lambda))} =
{\rm{sign}}(\widehat{h})\sqrt{\rho_{c^*}(T,\lambda)} \ .
\end{equation}
This indicates that this scaled quasi-average limit gives pure states (\ref{2.23}) and that by (\ref{2.27})
the variance of displacement fluctuation operator has a finite value
\begin{equation}\label{2.272}
0 < \lim_\L\omega_{\beta, \Lambda, c_h}\left(\{\frac{1}{V^{\frac{1}{2}+\delta_Q}}
\sum_{i\in\L}(Q_i-\omega_{\beta, \Lambda, c_h}(Q_i))\}^2\right)=
\lim_\L\frac{1}{V^{2\delta_Q - \alpha}}\frac{\sqrt{\rho_{c^*}(T,\lambda)}}
{2 \beta |\widehat{h}|} < \infty ,
\end{equation}
if $\delta_Q = \alpha/2$. On the other hand, the finiteness of (\ref{2.28}) implies that $\delta_P = 0$,
i.e. the momentum fluctuation operator is normal. Since $0<\alpha$, by (\ref{1.61}) the fluctuation algebra is
abelian.
(2) Let $\alpha = 1$. Then by (\ref{2.14}), (\ref{2.15}), and (\ref{2.24}) we obtain that
\begin{eqnarray}\label{2.273}
\rho_{c^*}(T,\lambda) &=& \lim_{\Lambda} \rho_{\Lambda}(T,\lambda, h) =
\lim_{\Lambda} \left\{\frac{{\widehat{h}}^2}{[V \Delta(c_{\Lambda,h}(T,\lambda))]^2} +
\frac{1}{V \Delta(c_{\Lambda,h}(T,\lambda))}\right\} \\
&=& c^* - I_d(c^* ,T,\lambda) > 0 \ , \nonumber
\end{eqnarray}
yields a bounded $w_{\widehat{h}}(T,\lambda) := \lim_{\Lambda} [V \Delta(c_{\Lambda,h}(T,\lambda))] > 0$ for
$h = \widehat{h}/V$. Then the displacement order parameter
\begin{equation*}
- \sqrt{\rho_{c^*}(T,\lambda)} < \lim_{\Lambda}\omega_{\beta,\Lambda, c_h}(Q_l)= \lim_{\Lambda}
\frac{\widehat{h}}{V \Delta(c_{\Lambda,h}(T,\lambda))} = \frac{\widehat{h}}{w_{\widehat{h}}(T,\lambda)}
< \sqrt{\rho_{c^*}(T,\lambda)} \ .
\end{equation*}
This means that the equilibrium Gibbs state
\begin{equation}\label{2.274}
\omega_{\beta, \widehat{h}}(\cdot) = \xi \ \omega_{\beta, +}(\cdot) +
(1 - \xi) \ \omega_{\beta, -}(\cdot) \ , \ \ \xi = \frac{1}{2}[1 +
{\widehat{h}}/{(w_{\widehat{h}}(T,\lambda)\sqrt{\rho_{c^*}(T,\lambda)})}] \in (0,1) \ ,
\end{equation}
is a convex combination of the pure states (\ref{2.272-1}). Note that (\ref{2.27}) and the boundedness of
$w_{\widehat{h}}(T,\lambda)$ imply $\delta_Q = 1/2$, whereas (\ref{2.28}) gives $\delta_P = 0$.
So, in the mixed state $\omega_{\beta, \widehat{h}}(\cdot)$ the displacement fluctuations are abnormal,
but the momentum fluctuation operator remains normal, and the fluctuation algebra is abelian as in the
case (1).
(3) Let $\alpha > 1$. Then again by (\ref{2.14}), (\ref{2.15}), and (\ref{2.24}) we obtain that
\begin{equation}\label{2.275}
\rho_{c^*}(T,\lambda) =
\lim_{\Lambda} \frac{1}{V \Delta(c_{\Lambda,h}(T,\lambda))} = c^* - I_d(c^* ,T,\lambda) > 0 \ ,
\end{equation}
which by (\ref{2.19}) yields for the displacement order parameter
\begin{equation}\label{2.276}
\lim_{\Lambda}\omega_{\beta,\Lambda, c_h}(Q_l)= \lim_{\Lambda}
\frac{\widehat{h}}{V^{\alpha} \Delta(c_{\Lambda,h}(T,\lambda))} = 0 \ .
\end{equation}
Note that these scaled quasi-averages (\ref{2.275}), (\ref{2.276}) in the ordered phase (B) are completely
different from the standard quasi-average (\ref{2.21}), (\ref{2.22}). By (\ref{2.27}) and by (\ref{2.28})
one gets $\delta_Q = 1/2$ and $\delta_P = 0$, which are the same as in the case (2), including the abelian
fluctuation algebra.
We comment that the case $\alpha > 1$ is formally equivalent to the case (2) for $\widehat{h} \rightarrow 0$,
which implies $\xi \rightarrow 1/2$, see (\ref{2.274}). The same can be deduced from (\ref{2.276}).
\end{proof}
\subsection{Abelian algebra of fluctuations on the critical line} \label{Ab-alg-fl}
In this section we are going to characterise the exponents $\delta_Q$ and $\delta_P$ on the critical line as
functions of the parameters $d,\sigma$ and (where relevant) of the parameter $\alpha$. To this
end, we proceed as follows. Note that the critical line is defined by equation (\ref{2.16}).
Hence, $\rho_{c^\ast}(T_c(\lambda),\lambda)=0$, and (\ref{2.15}) for $\lim_\Lambda c_{\Lambda,h}(T_c(\lambda),\lambda) = c^\ast$ takes
the form
\begin{equation}\label{3.4}
\lim_{\Lambda}\left\{\frac{1}{V}\frac{\lambda}{2\sqrt{\Delta(c_{\Lambda,h}(T_c(\lambda),\lambda))}}\coth\frac{\beta_{c}\lambda}{2}\sqrt{\Delta(c_{\Lambda,h}(T_c(\lambda),\lambda))}+
\frac{\widehat{h}^2}{V^{2\alpha}\Delta(c_{\Lambda,h}(T_c(\lambda),\lambda))^2}\right\}=0 \ ,
\end{equation}
where $\beta_{c} := (k_B T_c(\lambda))^{-1}$.
Since for the scaled quasi-average (\ref{2.24}) we choose $h=\widehat{h}/V^\alpha$, by (\ref{2.15}) and
(\ref{2.16}) the limit $\lim_\Lambda c_{\Lambda,h}(T_c(\lambda),\lambda) = c^\ast$. Hence, $\lim_\Lambda\Delta(c_{\Lambda,h}(T_c(\lambda),\lambda)) =0$.
Now one has to distinguish two cases: \\ \\
\hspace*{2 cm}(a)
$T_c(\lambda)>0$ (\ref{2.17}), then (\ref{3.4}) is equivalent to
\begin{equation}\label{3.6}
\lim_{\Lambda}\left\{\frac{1}{V\Delta(c_{\Lambda,h}(T_c(\lambda),\lambda))\beta_c}+
\frac{\widehat{h}^2}{V^{2\alpha}\Delta(c_{\Lambda,h}(T_c(\lambda),\lambda))^2}\right\}=0 \ ,
\end{equation}
\hspace*{2cm}(b) $T_c(\lambda_c)=0$ (\ref{2.17}), then (\ref{3.4}) is equivalent to
\begin{equation}\label{3.7}
\lim_{\Lambda}\left\{\frac{\lambda}{2V\sqrt{\Delta(c_{\Lambda,h}(0,\lambda))}}+
\frac{\widehat{h}^2}{V^{2\alpha}\Delta(c_{\Lambda,h}(0,\lambda))^2}\right\}=0 \ .
\end{equation}
Both cases imply that for $V\rightarrow\infty$ the gap $\Delta$ in (\ref{3.4}) has the asymptotic
behaviour $\Delta\simeq V^{-\gamma}$ with $(0<\gamma<1) \wedge (0<\gamma<\alpha)$ for (\ref{3.6}) or
$(0<\gamma<2) \wedge (0<\gamma<\alpha)$ for (\ref{3.7}), respectively.
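These constraints follow from an elementary power count: with $\Delta\simeq V^{-\gamma}$ one has
\begin{equation*}
\frac{1}{V\Delta\,\beta_c} \simeq V^{\gamma - 1} \ , \qquad
\frac{\widehat{h}^2}{V^{2\alpha}\Delta^{2}} \simeq V^{2\gamma - 2\alpha} \ , \qquad
\frac{\lambda}{2V\sqrt{\Delta}} \simeq V^{\frac{\gamma}{2} - 1} \ ,
\end{equation*}
so the vanishing of (\ref{3.6}) requires $\gamma<1$ and $\gamma<\alpha$, whereas the vanishing of (\ref{3.7})
requires $\gamma<2$ and $\gamma<\alpha$.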
Note that it is equation (\ref{2.5}) that is the key to calculating these asymptotics. To make this argument
evident, we rewrite (\ref{2.5}) identically as
\begin{eqnarray}
&&(c_\Lambda-c^\ast)+[c^\ast-I_d(c_\Lambda,T_c(\lambda),\lambda)]+\left[I_d(c_\Lambda,T_c(\lambda),\lambda)-
\frac{1}{V} \sum_{q\in\Lambda^\ast,q\not=0}\frac{\lambda}
{2\Omega_q(c_\Lambda)}\coth\frac{\beta_c \lambda\Omega_q(c_\Lambda)}{2}\right] \nonumber \\
&&= \left(\frac{\widehat{h}}{V^\alpha\Delta}\right)^2+\frac{1}{V}\frac{\lambda}{2\sqrt{\Delta}}
\coth\left(\frac{\beta_c\lambda \sqrt{\Delta}}{2}\right) \ , \label{3.8}
\end{eqnarray}
here we denote $c_\Lambda := c_{\Lambda,h}(T_c(\lambda),\lambda)$ and $\Delta := \Delta(c_{\Lambda,h}(T_c(\lambda),\lambda))$ .
The asymptotic behaviour of the left-hand side of equation (\ref{3.8}) results from the hypothesis
$h=\widehat{h}/V^\alpha$ and from the convergence rate of the Darboux--Riemann sum to the limiting integral
$I_d(c_\Lambda,T_c(\lambda),\lambda)$. Together with the asymptotics of the right-hand side this gives the power
$\gamma$.
\begin{proposition}\label{proposition:5.1} If ($T,\lambda$) belongs to the
critical line ($T_c(\lambda),\lambda$) with $T_c(\lambda)>0$, then the
asymptotic volume behaviour of the gap $\Delta(c_{\Lambda,h}(T_c(\lambda),\lambda))$ is defined by
\begin{equation*}
\gamma=\left\{\begin{array}{lll}
\mbox{if}\ d>2\sigma: & \frac{2}{3}\alpha & \mbox{for}\ \alpha<\frac{3}{4}=\alpha_c\\[2pt]
 & \frac{1}{2} & \mbox{for}\ \alpha\geq\frac{3}{4}\\[6pt]
\mbox{if}\ d=2\sigma: & \frac{2}{3}\alpha+0 & \mbox{for}\ \alpha<\frac{3}{4}=\alpha_c\\[2pt]
 & \frac{1}{2}+0 & \mbox{for}\ \alpha\geq\frac{3}{4}\\[6pt]
\mbox{if}\ \sigma<d<2\sigma: & 2\alpha\frac{\sigma}{d+\sigma} & \mbox{for}\ \alpha<\frac{1}{2}+\frac{\sigma}{2d}=\alpha_c\\[2pt]
 & \frac{\sigma}{d} & \mbox{for}\ \alpha\geq\frac{1}{2}+\frac{\sigma}{2d}
\end{array}\right.
\end{equation*}
\end{proposition}
Since $T_c(\lambda)>0$, the right side of (\ref{3.8}) has asymptotics (\ref{3.6}) or
\begin{equation}\label{3.9}
O [(V \Delta)^{-1} + (V^\alpha \Delta)^{-2} ] \ .
\end{equation}
Let us define $\alpha_c$ such that $O[(V \Delta)^{-1}] = O[(V^{\alpha_c}\Delta)^{-2}]$.
Then for the asymptotics (\ref{3.9}) one obviously gets
$O [(V \Delta)^{-1} + (V^{\alpha_c} \Delta)^{-2}] = O [(V \Delta)^{-1}]$,
i.e. for $\alpha = \alpha_c$ the gap $\Delta$ has the same asymptotic behaviour as for
$\widehat{h}=0$. The three regimes of the decay rate $\sigma$ of the potential indicated in Proposition
\ref{proposition:5.1} are considered in detail in \cite{JZ98}.
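As an elementary check of the matching condition defining $\alpha_c$: with $\Delta \simeq V^{-\gamma}$ one has
$O[(V\Delta)^{-1}] = O(V^{\gamma-1})$ and $O[(V^{\alpha_c}\Delta)^{-2}] = O(V^{2\gamma-2\alpha_c})$, whence
\begin{equation*}
\gamma - 1 = 2\gamma - 2\alpha_c \quad \Longleftrightarrow \quad \alpha_c = \frac{1+\gamma}{2} \ .
\end{equation*}
For $d>2\sigma$, where $\gamma = 1/2$, this gives $\alpha_c = 3/4$, and for $\sigma<d<2\sigma$, where
$\gamma = \sigma/d$, it gives $\alpha_c = 1/2 + \sigma/(2d)$, in agreement with Proposition \ref{proposition:5.1}.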
\begin{theo} \label{thm:5.1} If ($T,\lambda$) belongs to the critical line
($T_c(\lambda),\lambda$) with $T_c(\lambda)>0$, then the algebra of
fluctuation operators is abelian. The momentum fluctuation operator
$F_{\delta_P}(P)$ is normal ($\delta_P=0$) while the position fluctuation operator
$F_{\delta_Q}(Q)$ is abnormal with a critical exponent given by
\begin{equation*}
\delta_Q=\left\{\begin{array}{lll}
\mbox{if}\ d>2\sigma: & \frac{1}{3}\alpha & \mbox{for}\ \alpha<\frac{3}{4}=\alpha_c\\[2pt]
 & \frac{1}{4} & \mbox{for}\ \alpha\geq\frac{3}{4}\\[6pt]
\mbox{if}\ d=2\sigma: & \frac{1}{3}\alpha+0 & \mbox{for}\ \alpha<\frac{3}{4}=\alpha_c\\[2pt]
 & \frac{1}{4}+0 & \mbox{for}\ \alpha\geq\frac{3}{4}\\[6pt]
\mbox{if}\ \sigma<d<2\sigma: & \alpha\frac{\sigma}{d+\sigma} & \mbox{for}\ \alpha<\frac{1}{2}+\frac{\sigma}{2d}=\alpha_c\\[2pt]
 & \frac{\sigma}{2d} & \mbox{for}\ \alpha\geq\frac{1}{2}+\frac{\sigma}{2d}
\end{array}\right.
\end{equation*}
\end{theo}
\begin{proof}
To check the abelian character of the algebra of
fluctuation operators generated by $F_{\delta_Q}(Q)$ and $F_{\delta_P}(P)$, it is enough to note that the limit of the
commutator vanishes:
\begin{equation*}
\lim_\Lambda\left[ F_\Lambda^{\delta_P},F_\Lambda^{\delta_Q}\right]=
\lim_\Lambda\frac{1}{\vert\Lambda\vert^{1+\delta_P+\delta_Q}}
\sum_{l,l'\in\Lambda}[P_l,Q_{l'}]=0 \ .
\end{equation*}
The second part of the theorem results from (\ref{2.27}) and (\ref{2.28}), which on the critical line for
$h=\widehat{h}/V^\alpha$ take the form:
\begin{equation}\label{3.22}
\lim_\L\omega_{\beta, \Lambda, c_h}\left(\{\frac{1}{V^{\frac{1}{2}+\delta_Q}}
\sum_{i\in\L}(Q_i-\omega_{\beta, \Lambda, c_h}(Q_i))\}^2\right)=
\lim_\L\frac{1}{V^{2\delta_Q}}\frac{k_B T_c(\lambda)}{\Delta(c_{\Lambda,h}(T_c(\lambda),\lambda))} \ ,
\end{equation}
and
\begin{equation}\label{3.23}
\lim_\L\omega_{\beta, \Lambda, c_h}\left(\{\frac{1}{V^{\frac{1}{2}+\delta_P}}
\sum_{i\in\L}(P_i-\omega_{\beta, \Lambda, c_h}(P_i))\}^2\right)=
\lim_\L\frac{1}{V^{2\delta_P}}\, m k_B T_c(\lambda) \ .
\end{equation}
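Since $\Delta(c_{\Lambda,h}(T_c(\lambda),\lambda)) \simeq V^{-\gamma}$, the right-hand sides of (\ref{3.22}) and
(\ref{3.23}) behave as $k_B T_c(\lambda)\, V^{\gamma - 2\delta_Q}$ and $m k_B T_c(\lambda)\, V^{-2\delta_P}$,
respectively.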
So the variance (\ref{3.22}) is not trivial if and only if $\delta_Q=\gamma/2$ and (\ref{3.23}) is not trivial if
and only if $\delta_P=0$. Here the value of $\delta_Q$ is defined by Proposition \ref{proposition:5.1}.
\end{proof}
We comment that if one sets $\sigma = 2$ in the preceding theorem, then the statement corresponds to
short-range interactions, for which $\sigma\geq2$, see (\ref{eq:1.12}). This result coincides with that in
\cite{VZ1} if one puts $\alpha = \infty$, i.e. when there are no quasi-average sources.
\subsection{Non-abelian algebra of fluctuations on the critical line} \label{Non-abel-fl}
\begin{proposition}\label{proposition:5.2} If ($T,\lambda$) coincides with the critical point $(0,\lambda_c)$,
then the asymptotic volume behaviour of the gap $\Delta(c_{\L,h}(0,\lambda_c))$ is given by
\begin{equation*}
\gamma=\left\{\begin{array}{lll}
\mbox{if}\ d>\frac{3\sigma}{2}: & \frac{2}{3}\alpha & \mbox{for}\ \alpha<1=\alpha_c\\[2pt]
 & \frac{2}{3} & \mbox{for}\ \alpha\geq 1\\[6pt]
\mbox{if}\ d=\frac{3\sigma}{2}: & \frac{2}{3}\alpha+0 & \mbox{for}\ \alpha<1=\alpha_c\\[2pt]
 & \frac{1}{2}+0 & \mbox{for}\ \alpha\geq 1\\[6pt]
\mbox{if}\ \frac{\sigma}{2}<d<\frac{3\sigma}{2}: & 2\alpha\frac{2\sigma}{2d+3\sigma} & \mbox{for}\ \alpha<\frac{1}{2}+\frac{3\sigma}{4d}=\alpha_c\\[2pt]
 & \frac{\sigma}{d} & \mbox{for}\ \alpha\geq\frac{1}{2}+\frac{3\sigma}{4d}
\end{array}\right.
\end{equation*}
\end{proposition}
At the point $(0,\lambda_c)$ of the critical line, we obtain the limit (\ref{3.7}), i.e. the gap has the
asymptotic behaviour $\Delta \simeq V^{-\gamma}$. Then the right-hand side of (\ref{3.8}) takes the following
asymptotic form:
$O[({V^\alpha\Delta})^{-2}+ (V\Delta^{\frac{1}{2}})^{-1}]$.
Similar to Proposition \ref{proposition:5.1}, we define $\alpha_c$ in such a way that
$O[(V^{\alpha_c} \Delta)^{-2}] = O[(V\Delta^{\frac{1}{2}})^{-1}]$. Again one has to consider three
regimes for the value of $\sigma$, as indicated in Proposition \ref{proposition:5.2} \cite{JZ98}.
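The corresponding power count reads: with $\Delta \simeq V^{-\gamma}$ one has
$O[(V^{\alpha_c}\Delta)^{-2}] = O(V^{2\gamma - 2\alpha_c})$ and
$O[(V\Delta^{\frac{1}{2}})^{-1}] = O(V^{\frac{\gamma}{2}-1})$, whence
\begin{equation*}
2\gamma - 2\alpha_c = \frac{\gamma}{2} - 1 \quad \Longleftrightarrow \quad
\alpha_c = \frac{1}{2} + \frac{3\gamma}{4} \ .
\end{equation*}
For $d>3\sigma/2$, where $\gamma = 2/3$, this yields $\alpha_c = 1$, and for $\sigma/2<d<3\sigma/2$, where
$\gamma = \sigma/d$, it yields $\alpha_c = 1/2 + 3\sigma/(4d)$, in agreement with Proposition \ref{proposition:5.2}.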
\begin{theo} \label{thm:5.2} If $(T,\lambda)$ coincides with the critical
point $(0,\lambda_c)$, then the algebra of fluctuation operators is
non-abelian because the position fluctuation operator $F_{\delta_Q}(Q)$ is
abnormal ($\delta_Q>0$), while the momentum fluctuation operator $F_{\delta_P}(P)$ is
supernormal (squeezed) with $\delta_P=-\delta_Q$ and
\begin{equation*}
\delta_Q=\left\{\begin{array}{lll}
\mbox{if}\ d>\frac{3\sigma}{2}: & \frac{1}{6}\alpha & \mbox{for}\ \alpha<1=\alpha_c\\[2pt]
 & \frac{1}{6} & \mbox{for}\ \alpha\geq 1\\[6pt]
\mbox{if}\ d=\frac{3\sigma}{2}: & \frac{1}{6}\alpha+0 & \mbox{for}\ \alpha<1=\alpha_c\\[2pt]
 & \frac{1}{8}+0 & \mbox{for}\ \alpha\geq 1\\[6pt]
\mbox{if}\ \frac{\sigma}{2}<d<\frac{3\sigma}{2}: & \alpha\frac{\sigma}{2d+3\sigma} & \mbox{for}\ \alpha<\frac{1}{2}+\frac{3\sigma}{4d}=\alpha_c\\[2pt]
 & \frac{\sigma}{4d} & \mbox{for}\ \alpha\geq\frac{1}{2}+\frac{3\sigma}{4d}
\end{array}\right.
\end{equation*}
\end{theo}
\begin{proof}
By (\ref{2.17}) the limit $\lim_{\lambda \rightarrow \lambda_c -0} (T_c (\lambda),\lambda) = (0,\lambda_c)$
yields: $\beta_c =(k_B T_c (\lambda))^{-1}\rightarrow\infty$. Then the variances (\ref{2.27}) and (\ref{2.28})
become
\begin{equation}\label{3.37}
\lim_\L\omega_{\infty, \Lambda, c_h}\left(\{\frac{1}{V^{\frac{1}{2}+\delta_Q}}
\sum_{i\in\L}(Q_i-\omega_{\beta, \Lambda, c_h}(Q_i))\}^2\right)=
\lim_\L\frac{1}{V^{2\delta_Q}}\frac{\lambda}{\sqrt{\Delta(c_{\Lambda,h}(0,\lambda_c))}}\ ,
\end{equation}
and
\begin{equation}\label{3.38}
\lim_\L\omega_{\infty, \Lambda, c_h}\left(\{\frac{1}{V^{\frac{1}{2}+\delta_P}}
\sum_{i\in\L}(P_i-\omega_{\beta, \Lambda, c_h}(P_i))\}^2\right)=
\lim_\L\frac{1}{V^{2\delta_P}}\frac{\lambda m}{2}\sqrt{\Delta(c_{\Lambda,h}(0,\lambda_c))} \ .
\end{equation}
Since $\Delta \simeq V^{-\gamma}$, the right-hand side of (\ref{3.37}) behaves as
$\lambda\, V^{\frac{\gamma}{2} - 2\delta_Q}$ and that of (\ref{3.38}) as
$\frac{\lambda m}{2}\, V^{-\frac{\gamma}{2} - 2\delta_P}$, so one has just to apply Proposition
\ref{proposition:5.2} with $\delta_Q=\gamma/4=-\delta_P$ to get the possible values of $\delta_Q$. Since
$\delta_Q+\delta_P=0$, the non-abelian nature of the algebra of fluctuation operators follows from the
commutator (\ref{1.61}).
\end{proof}
Note that the same remark about the case $\sigma = 2$ as at the end of Section \ref{Ab-alg-fl} is also valid
for the quantum critical fluctuations at the point $(0,\lambda_c)$.
\section{Concluding remarks}
In this paper we scrutinise the Bogoliubov method of quasi-averages for quantum systems that manifest
phase transitions.
First, we re-examine a possible application of this method to the analysis of phase transitions with
Spontaneous Symmetry Breaking (SSB). To this aim we consider examples of the Bose-Einstein condensation in
continuous perfect and interacting systems. The existence of different types of generalised condensations leads
to the conclusion (see Sections \ref{sec:gBEC-BQ-A} and \ref{sec:BogAppr-Q-A}) that the only
physically reliable quantities are those defined by the Bogoliubov quasi-averages.
In the second part of the paper we advocate the Bogoliubov method of the \textit{scaled} quasi-averages.
By taking the structural quantum phase transition as a basic example, we scrutinise a relation between SSB
and the critical quantum fluctuations. Our analysis in Section \ref{BQ-A-QFl} shows that again the
scaled quasi-averages give an adequate tool for description of the algebra of quantum fluctuation operators.
The subtlety of quantum fluctuations is already visible on the level of existence-non-existence of the order
parameter that can be \textit{destroyed} by quantum fluctuations even for the zero temperature,
Section \ref{QPT-Fluct-Q-A}. The standard Bogoliubov method is sufficient for this analysis.
The relevance of the scaled Bogoliubov quasi-averages becomes evident for (\textit{mesoscopic}) quantum
fluctuation operators defined by the Quantum Central Limit, since this limit becomes \textit{sensitive} to
the value of the scaling rate $\alpha$. In contrast to the non-abelian algebra of \textit{normal}
quantum fluctuation operators in the disordered phase, the \textit{critical} quantum fluctuations in the
ordered phase and on the critical line do depend on the parameter $\alpha$, see Section \ref{Q-A-Cr-Q-Fl}.
This concerns \textit{abnormal} and \textit{supernormal} (\textit{squeezed}) quantum fluctuations. They
manifest a variety of abelian and non-abelian algebras of fluctuation operators, Sections
\ref{QF-below}--\ref{Non-abel-fl}, which are all $\alpha$-dependent.
\section{Appendix A}
In this Appendix we reproduce, for the reader's convenience, the statement of the basic theorem of Fannes,
Pul\`{e} and Verbeure \cite{FPV1}, see also \cite{PVZ} for the extension to nonzero momentum, and Verbeure's
book \cite{Ver}. Unfortunately, neither \cite{FPV1} nor \cite{PVZ} show that the states
$\omega_{\beta,\mu,\phi},\phi \in [0,2\pi)$ in the theorem below are ergodic. The simple, but instructive
proof of this fact was given by Verbeure in his book \cite{Ver}.
\begin{proposition}\label{prop:A.1}
Let $\omega_{\beta,\mu}$ be an analytic, gauge-invariant equilibrium state. If
$\omega_{\beta,\mu}$ exhibits ODLRO (\ref{4.16}), then there exist ergodic states
$\omega_{\beta,\mu,\phi}$, $\phi \in [0,2\pi)$, not gauge invariant, satisfying:
(i) $\forall \theta,\phi \in [0,2\pi)$ such that $\theta \ne \phi$, $\omega_{\beta,\mu,\phi} \ne
\omega_{\beta,\mu,\theta}$;
(ii) the state $\omega_{\beta,\mu}$ has the decomposition
\begin{equation*}
\omega_{\beta,\mu} = \frac{1}{2\pi} \int_{0}^{2\pi} d\phi\omega_{\beta,\mu,\phi} \ .
\end{equation*}
(iii) For each polynomial $Q$ in the operators $\eta(b_{{0}})$, $\eta(b_{{0}}^{*})$, and for each
$\phi \in [0,2\pi)$,
\begin{eqnarray*}
\omega_{\beta,\mu,\phi}(Q(\eta(b_{{0}}^{*}),\eta(b_{{0}}))X)
= \omega_{\beta,\mu,\phi}(Q(\sqrt{\rho_{0}} \exp(-i\phi),\sqrt{\rho_{0}} \exp(i \phi))X)\ \
\forall X \in {\cal A} \ .
\end{eqnarray*}
\end{proposition}
We remark, with Verbeure \cite{Ver}, that the proof of Proposition \ref{prop:A.1} is \emph{constructive}.
One essential ingredient is the separating character (or faithfulness) of the state $\omega_{\beta,\mu}$, i.e.,
$\omega_{\beta,\mu}(A) = 0$ implies $A=0$. This property, which depends on the extension of $\omega_{\beta,\mu}$
to the von Neumann algebra $\pi_{\omega}({\cal A})^{''}$ (see \cite{BR97}, \cite{Hug}) is true for thermal
states, but is not true for ground states, even without this extension: in fact, a ground state (or vacuum)
is non-faithful on ${\cal A}$ (see Proposition 3 in \cite{Wrep}). We see, therefore, that thermal states and
ground states might differ with regard to the ergodic decomposition (ii). Compare also with our discussion
in the Concluding remarks.
\bigskip
\noindent
\textbf{Acknowledgements}
\noindent
Certain issues dealt with in this manuscript are developed in our paper \cite{WZ16}. Other topics are
originated from the open problems posed in Sec.3 of \cite{SeW} and in \cite{JaZ10} as well as from discussions
around \cite{WZ16}. One of us (W.F.W.) would like to thank G. L. Sewell for sharing with him his views on ODLRO
along several years. He would also like to thank the organisers of the Satellite conference "Operator Algebras
and Quantum Physics" of the XVIII conference of the IAMP (Santiago de Chile) in S\~{a}o Paulo,
July 17th-23rd 2015, for the opportunity to present a talk on these topics.
We are thankful to Bruno Nachtergaele for useful remarks and suggestions concerning the
problems that we discussed in \cite{WZ16} and in the present paper.
\label{sec:introduction}
Broad deployment of artificial intelligence (AI) systems in safety-critical domains, such as autonomous driving and aircraft landing, necessitates the development of approaches for trustworthy AI. One key ingredient for trustworthiness is \emph{explainability}: the ability for an AI system to communicate the reasons for its behaviour in terms that humans can understand.
Previous work on explainable AI includes well-known model-agnostic explainers which produce explanations that remain valid for nearby inputs in feature space. In particular, LIME~\cite{lime} and SHAP~\cite{shap} learn simple, and thus interpretable, models locally around a given input. Building on this work, Anchors~\cite{anchors} attempts to identify a subset of such input explanations that are sufficient to ensure the corresponding output value. However, such approaches are heuristic and do not provide any formal guarantees. They are thus inappropriate for use in high-risk scenarios.
For instance, if a loan application model used by a bank has an explanation claiming that it depends only on a user's ``$\mathsf{age}$'', ``$\mathsf{employment \ type}$'' and ``$\mathsf{salary \ range}$'', yet in actuality applicants with the same such attributes but different ``$\mathsf{gender}$'' or ``$\mathsf{ethnicity}$'' receive dissimilar loan decisions, then the explanation is not only wrong, but may mask the actual bias in the model. Another drawback of model-agnostic approaches is that they depend on access to training data, which may not always be available (perhaps due to privacy concerns). And even if available, distribution shift can compromise the results.
Recent efforts towards \emph{formal} explainable AI~\cite{formalXAI} aim to compute rigorously defined explanations that can guarantee \emph{soundness}, in the sense that fixing certain input features is sufficient to ensure the invariance of a model's prediction (see Section~\ref{sec:related} for a more detailed discussion). Unfortunately, these approaches are unable to tackle state-of-the-art deep neural networks and may extract unnecessarily complicated explanations, hindering the ability of humans to understand the model's behaviour.
\begin{figure}[t]
\centering
\begin{subfigure}{0.32\linewidth}
\includegraphics[width=\linewidth]{figures/motivation/mnist-72.png}
\caption{Original ``$\mathtt{2}$''}
\label{fig:motivation-org}
\end{subfigure}
\begin{subfigure}{0.32\linewidth}
\includegraphics[width=\linewidth]{figures/motivation/mnist-72-kmeans.png}
\caption{Segmentation}
\label{fig:motivation-segmentation}
\end{subfigure}
\begin{subfigure}{0.32\linewidth}
\includegraphics[width=\linewidth]{figures/motivation/mnist-72-verix.png}
\caption{$\textsc{VeriX}$}
\label{fig:motivation-explanation}
\end{subfigure}
\hfill
\begin{subfigure}{0.32\linewidth}
\includegraphics[width=\linewidth]{figures/motivation/mnist-72-cover1.png}
\caption{``$\mathtt{2}$'' into ``$\mathtt{7}$''}
\label{fig:motivation-7}
\end{subfigure}
\begin{subfigure}{0.32\linewidth}
\includegraphics[width=\linewidth]{figures/motivation/mnist-72-cover2.png}
\caption{``$\mathtt{2}$'' into ``$\mathtt{0}$''}
\label{fig:motivation-0}
\end{subfigure}
\begin{subfigure}{0.32\linewidth}
\includegraphics[width=\linewidth]{figures/motivation/mnist-72-cover3.png}
\caption{``$\mathtt{2}$'' into ``$\mathtt{3}$''}
\label{fig:motivation-3}
\end{subfigure}
\caption{Intuition for our $\textsc{VeriX}$ approach: (a) An MNIST handwritten ``$\mathtt{2}$'' digit; (b) Segmentation of ``$\mathtt{2}$'' into 3 partitions; (c) $\textsc{VeriX}$ explanation (green pixels) of ``$\mathtt{2}$''; (d)(e)(f) Masking white pixels or whitening black pixels may turn ``$\mathtt{2}$'' into ``$\mathtt{7}$'', ``$\mathtt{0}$'', or ``$\mathtt{3}$''.}
\label{fig:motivation}
\end{figure}
In this paper, we present $\textsc{VeriX}$ (\textsc{veri}fied e\textsc{x}plainability) as a step towards computing sound and reliable explanations for deep neural networks. Our explanations guarantee prediction invariance against bounded perturbations imposed upon irrelevant input features.
We provide intuition for our $\textsc{VeriX}$ approach by analysing an example explanation for the MNIST digit ``$\mathtt{2}$'' in Figure~\ref{fig:motivation}. The original digit is shown in Figure~\ref{fig:motivation-org}. Anchors, mentioned above, relies on partitioning an image into a disjoint set of segments and then selecting the most prominent segment(s). Figure~\ref{fig:motivation-segmentation} shows ``$\mathtt{2}$'' divided into 3 parts using k-means clustering~\cite{k-means}. Based on this segmentation, the purple and yellow parts would be chosen for the explanation, suggesting that the model largely relies on these pixels to make its decision. This also matches our intuition, as a human would immediately identify these pixels as containing information and disregard the background. However, does this mean it is enough to focus on the salient features when explaining a classifier's prediction? Not necessarily. $\textsc{VeriX}$'s explanation is highlighted in green in Figure~\ref{fig:motivation-explanation}. It demonstrates that \emph{whatever is prominent is important but what is absent in the background also matters}. We observe that $\textsc{VeriX}$ not only marks those white pixels forming the silhouette of ``$\mathtt{2}$'' but also includes some background pixels that might affect the prediction if changed. For instance, neglecting the bottom white pixels may lead to a misclassification as a ``$\mathtt{7}$''; meanwhile, the classifier also needs to check if the pixels along the left and in the middle are not white to make sure it is not ``$\mathtt{0}$'' or ``$\mathtt{3}$''.
While Figures~\ref{fig:motivation-7}, \ref{fig:motivation-0}, and \ref{fig:motivation-3} are simply illustrative examples to provide intuition about why different parts of the explanation may be present, we remark that all the $\textsc{VeriX}$ explanations are produced automatically and deterministically.
\section{$\textsc{VeriX}$: \underline{Veri}fied E\underline{x}plainability}
Let $\mathcal{N}$ be a neural network and $\mathbf{x}$ be a $d$-dimensional input vector of features $\langle \chi^1, \dots, \chi^d \rangle$. We use $\Theta(\mathbf{x})$, or simply $\Theta$, when the context is clear, to denote its set of feature indices $\{1, \dots, d \}$. We write $\mathbf{x}^\mathbf{A}$ where $\mathbf{A} \subseteq \Theta(\mathbf{x})$ to denote only those features indexed by indices in $\mathbf{A}$.
We denote model prediction as $\mathcal{N}(\mathbf{x}) = c$, where $c$ is a single quantity in regression or a label among others ($c \in C$) in classification. For the latter, we use $\mathcal{N}_c(\mathbf{x})$ to denote the confidence value (pre- or post-softmax) of classifying as $c$, i.e., $\mathcal{N}(\mathbf{x}) = \arg \max \mathcal{N}_c(\mathbf{x})$.
Though we illustrate our $\textsc{VeriX}$ framework using image classification networks, where $\mathbf{x}$ is an image consisting of $d$ pixels, it can also generalise to other machine learning domains such as natural language processing, where $\mathbf{x}$ is a text of $d$ words and each $\chi$ denotes a word embedding. Our motivation for focusing on images in this paper is that their explanations are self-illustrative and thus easier to understand.
\subsection{Guaranteed Explanations}
We define an \emph{explanation} as a subset of the features in an input, representing those features responsible for a model's prediction. Formally, an explanation is a set of features $\mathbf{x}^\mathbf{A}$ with $\mathbf{A} \subseteq \Theta(\mathbf{x})$ that is sufficient to ensure that a model $\mathcal{N}$ makes a specific prediction even if the remaining features $\mathbf{x}^\mathbf{B}$ (with $\mathbf{B} = \Theta(\mathbf{x}) \setminus \mathbf{A}$) are perturbed. We use $\epsilon$ to bound the perturbation on $\mathbf{x}^\mathbf{B}$ so as to be able to explore explanations of varying strength.
\begin{definition}[Guaranteed Explanation] \label{dfn:explanation}
Given a network $\mathcal{N}$, an input $\mathbf{x}$, a manipulation magnitude $\epsilon$, and a discrepancy $\delta$, a \emph{guaranteed explanation with respect to norm $p$} is a set of input features $\mathbf{x}^\mathbf{A}$ such that if $\mathbf{B} = \Theta(\mathbf{x})\setminus\mathbf{A}$, then
\begin{equation}
\forall \ \mathbf{x}^{\mathbf{B}'}. \norm{\mathbf{x}^\mathbf{B} - \mathbf{x}^{\mathbf{B}'}}_p \leq \epsilon \Rightarrow \abs{\mathcal{N}(\mathbf{x}) - \mathcal{N}(\mathbf{x}')} \leq \delta,
\end{equation}
where $\mathbf{x}^{\mathbf{B}'}$ is some perturbation on features $\mathbf{x}^\mathbf{B}$, $\mathbf{x}'$ is the input variant combining $\mathbf{x}^\mathbf{A}$ and $\mathbf{x}^{\mathbf{B}'}$, and $p \in \{ 1, 2, \infty \}$ is Manhattan, Euclidean, or Chebyshev distance, respectively.
\end{definition}
\noindent
The role of $\delta$ is to measure the prediction discrepancy. For classification models, we set $\delta$ to $0$. For regression models, $\delta$ could be some pre-defined hyper-parameter quantifying allowable output change. We refer to
$\mathbf{x}^\mathbf{B}$ as the \emph{irrelevant} features. Intuitively, for classification models, a perturbation bounded by $\epsilon$ imposed upon the irrelevant features $\mathbf{x}^\mathbf{B}$ will \emph{never} change the prediction. Figure~\ref{fig:verix} illustrates a guaranteed explanation, showing an original input $\mathbf{x}$ and several variants. Those that do not perturb the explanation features $\mathbf{A}$ are guaranteed to have the same prediction as $\mathbf{x}$.
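For instance, writing $\chi'^j$ for the $j$-th feature of the variant $\mathbf{x}'$, the condition of Definition~\ref{dfn:explanation} for a classification network ($\delta = 0$) with $p = \infty$ specialises to
\begin{equation*}
\forall \ \mathbf{x}^{\mathbf{B}'}. \ \max_{j \in \mathbf{B}} \abs{\chi^j - \chi'^j} \leq \epsilon \ \Rightarrow \ \mathcal{N}(\mathbf{x}') = \mathcal{N}(\mathbf{x}) \ ,
\end{equation*}
i.e., the predicted label is invariant under any per-feature perturbation of magnitude at most $\epsilon$ on the irrelevant features.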
\begin{figure}[t]
\centering
\includegraphics[width=0.85\linewidth]{figures/verix/VeriX.png}
\caption{Graphical illustration of $\textsc{VeriX}$. The grey square at the left represents the features of an input $\mathbf{x}$, shown also as a big blue ``+'' in input space. Variations of $\mathbf{x}$ that do not change the explanation $\mathbf{A}$, shown as a green circle, are guaranteed to lie on the same side of the decision boundary as $\mathbf{x}$. On the other hand, if some feature in the green circle is changed (i.e., $\mathbf{A}' \not= \mathbf{A}$), then the result may be different.}
\label{fig:verix}
\end{figure}
\subsection{Optimal Towards Model Decision Boundary}
While there are infinitely many variants in the input space, we are particularly interested in those that lie along the decision boundary of a model. Figure~\ref{fig:verix} shows several pairs of variants (blue and orange ``$+$'') connected by red dotted lines. Each pair has the property that the blue variant produces the same result as the original input $\mathbf{x}$, whereas the orange variant, obtained by perturbing only one feature $\chi$ in the blue variant, produces a different result. Those key features are thus \emph{indispensable} when explaining the decision-making process. With this insight, we define an \emph{optimal} guaranteed explanation $\mathbf{x}^\mathbf{A}$ such that for each feature present in $\mathbf{x}^{\mathbf{A}}$, a change can be made to produce a different result.
\begin{definition}[Optimal Explanation] \label{dfn:optimal}
Given a guaranteed explanation $\mathbf{x}^\mathbf{A}$ for a network $\mathcal{N}$, with irrelevant features $\mathbf{x}^\mathbf{B}$, input $\mathbf{x}$, magnitude $\epsilon$, and discrepancy $\delta$, we say that $\mathbf{x}^\mathbf{A}$ is \emph{optimal} if
\begin{multline}
\forall \chi\in \mathbf{x}^\mathbf{A}. \exists \ \mathbf{x}^{\mathbf{B}'}, \chi'. \norm{(\mathbf{x}^\mathbf{B} \oplus \chi) - (\mathbf{x}^{\mathbf{B}'} \oplus \chi')}_p \leq \epsilon \\
\wedge \abs{\mathcal{N}(\mathbf{x}) - \mathcal{N}(\mathbf{x}')} > \delta,
\end{multline}
where $\mathbf{x}^{\mathbf{B}'}$ and $\chi'$ denote \emph{some} perturbations of $\mathbf{x}^\mathbf{B}$ and $\chi$ and $\oplus$ denotes concatenation of two features.
\end{definition}
\noindent
Intuitively, each feature in the optimal explanation can be perturbed (together with the irrelevant features) to change the prediction.
We mention two special cases: (1) if $\mathbf{x}$ is $\epsilon$-robust, then all features are irrelevant, i.e., $\mathbf{A} = \varnothing$, meaning there is no valid explanation as any $\epsilon$-perturbation does not affect the prediction at all (in other words, a larger $\epsilon$ is required to get a meaningful explanation); (2) if perturbing any feature in input $\mathbf{x}$ can change the prediction, then $\mathbf{A} = \Theta(\mathbf{x})$, meaning the entire input is an explanation.
We remark that our definition of optimal is \emph{local} in that it is defined with respect to a specific set of features. This type of explanations is also referred to as abductive explanations~\cite{ignatiev2019abduction} or PI-explanations~\cite{PI-explanations}. An interesting problem would be to find a \emph{globally optimal} explanation, that is, the one that is the smallest (i.e., fewest features) among all possible optimal explanations. A naive approach for computing such globally optimal explanations is extremely computationally difficult. In this paper, we propose an approximation that works well in practice and leave further improvements towards computing globally optimal explanations to future work.
\section{Computing Guaranteed Explanations by Constraint Solving}
\input{figures/verix-example}
\input{figures/verix-table}
The $\textsc{VeriX}$ algorithm is shown as Algorithm~\ref{alg:verix}. Before presenting it in detail, we first illustrate it via a simple example.
\begin{example}[$\textsc{VeriX}$ Computation] \label{xmp:verix}
Suppose $\mathbf{x}$ is an input with $9$ features $\langle \chi^1, \dots, \chi^9 \rangle$, and we have classification network $\mathcal{N}$, a perturbation magnitude $\epsilon$, and are using $p=\infty$. The outer loop of the algorithm traverses the input features. For simplicity, assume the order of the traversal is from $\chi^1$ to $\chi^9$.
Both the explanation index set $\mathbf{A}$ and the irrelevant set $\mathbf{B}$ are initialised to $\varnothing$. At each iteration, $\textsc{VeriX}$ decides whether to add the index $i$ to $\mathbf{A}$ or $\mathbf{B}$.
The evolution of the index sets is shown in Table~\ref{tab:methodology}. Concretely, when $i=1$, $\textsc{VeriX}$ formulates a \emph{pre-condition} which specifies that $\chi^1$ can be perturbed by $\epsilon$ while the other features remain unchanged. An automated reasoner is then invoked to check whether the pre-condition logically implies the \emph{post-condition} (in this case, $\mathcal{N}(\mathbf{x}) = \mathcal{N}(\hat{\image})$, meaning the prediction is the same after perturbation). Suppose the reasoner returns $\mathtt{True}$; then, no $\epsilon$-perturbation on $\chi^1$ can alter the prediction. Following Definition~\ref{dfn:explanation}, we thus add $\chi^1$ to the irrelevant features $\mathbf{x}^\mathbf{B}$.
Figure~\ref{fig:methodology}, top left, shows a visualisation of this. $\textsc{VeriX}$ next moves on to $\chi^2$.
This time the pre-condition allows $\epsilon$-perturbations on both $\chi^1$ and $\chi^2$ while keeping the other features unchanged. The post-condition remains the same. Suppose the reasoner returns $\mathtt{True}$ again -- we then add $\chi^2$ to $\mathbf{x}^\mathbf{B}$ (Figure~\ref{fig:methodology}, top middle). Following similar steps, we add $\chi^3$ to $\mathbf{x}^\mathbf{B}$ (Figure~\ref{fig:methodology}, top right).
When it comes to $\chi^4$, we allow $\epsilon$-perturbations for $\langle \chi^1, \chi^2, \chi^3, \chi^4 \rangle$ while the other features are fixed. Suppose this time the reasoner returns $\mathtt{False}$ -- there exists a counterexample that violates $\mathcal{N}(\mathbf{x}) = \mathcal{N}(\hat{\image})$, i.e., the prediction can be different. Then, according to Definition~\ref{dfn:optimal}, we add $\chi^4$ to the optimal explanation $\mathbf{x}^\mathbf{A}$ (shown as green in Figure~\ref{fig:methodology}, middle left).
The computation continues until all the input features are visited. Eventually, we have $\mathbf{x}^\mathbf{A} = \langle \chi^4, \chi^5, \chi^8 \rangle$ (Figure~\ref{fig:methodology}, bottom right), which means that, if the features in the explanation are fixed, the model's prediction is invariant to any $\epsilon$-perturbation on the other features. Additionally, if any of the features in $\mathbf{x}^\mathbf{A}$ is added to the irrelevant set, the model can find a way to change the prediction.
\end{example}
\subsection{Building Optimal Explanations Iteratively}
\label{subsec:verix}
\begin{algorithm}[t]
\caption{$\textsc{VeriX}$ (\textsc{veri}fied e\textsc{x}plainability)}
\label{alg:verix}
\textbf{Input}: neural network $\mathcal{N}$ and input $\mathbf{x} = \langle \chi^1, \dots, \chi^d \rangle$ \\
\textbf{Parameter}: $\epsilon$-perturbation, norm $p$, and discrepancy $\delta$ \\
\textbf{Output}: optimal explanation $\mathbf{x}^\mathbf{A}$
\begin{algorithmic}[1]
\Function{\textsc{VeriX}}{$\mathcal{N}, \mathbf{x}$}
\State $\mathbf{A}, \mathbf{B} \mapsto \varnothing, \varnothing$ \label{line:initialize}
\State $c \mapsto \mathcal{N}(\mathbf{x})$
\State $\hat{\class} \mapsto \mathcal{N}(\hat{\image})$ \label{line:predition}
\State $\perm \mapsto \getTranversalOrder(\mathbf{x})$ \label{line:traversal}
\For{$i \text{ in } \perm$} \label{line:for-loop}
\State{$\mathbf{B}' \mapsto \mathbf{B} \cup \{i\}$} \label{line:update-irrelevant}
\State{$\phi \mapsto (\norm{\hat{\pixel}^{\mathbf{B}'} -\chi^{\mathbf{B}'}}_p \leq \epsilon)$} \label{line:perturb}
\State{$\phi \mapsto \phi \land (\hat{\pixel}^{\Theta\backslash \mathbf{B}'} = \chi^{\Theta\backslash \mathbf{B}'})$} \label{line:fix}
\State $\mathtt{HOLD} \mapsto \checkValid(\mathcal{N}, \phi \Rightarrow |\hat{\class} - c| \leq \delta)$ \label{line:solver}
\If{$\mathtt{HOLD}$} {$\mathbf{B} \mapsto \mathbf{B}'$} \label{line:notExplanation}
\Else{\ $\mathbf{A} \mapsto \mathbf{A} \cup \{i\}$} \label{line:explanation}
\EndIf
\EndFor
\State{\textbf{return} $\mathbf{x}^\mathbf{A}$}
\EndFunction
\end{algorithmic}
\end{algorithm}
We now formally describe our $\textsc{VeriX}$ approach, which exploits an automated reasoning engine for neural network verification as a black-box sub-procedure. We assume the reasoner takes as inputs a network $\mathcal{N}$ and a specification
\begin{equation} \label{eq:spec}
\phi_{in}(\hat{\image}) \Rightarrow \phi_{out}(\hat{\class})
\end{equation}
where $\hat{\image}$ are variables representing the network inputs and $\hat{\class}$ are expressions representing the network outputs. $\phi_{in}(\hat{\image})$ and $\phi_{out}(\hat{\class})$ are formulas. We use $\hat{\pixel}^i$ to denote the variable corresponding to the $i^{th}$ feature. The reasoner checks whether a specification holds on a network.
As shown in Algorithm~\ref{alg:verix}, the $\textsc{VeriX}$ procedure takes as input a network $\mathcal{N}$ and an input $\mathbf{x} = \langle \chi^1, \dots, \chi^d \rangle$. It outputs an optimal explanation $\mathbf{x}^\mathbf{A}$ with respect to perturbation magnitude $\epsilon$, distance metric $p$, and discrepancy $\delta$. The procedure maintains two sets, $\mathbf{A}$ and $\mathbf{B}$, throughout: $\mathbf{A}$ comprises feature indices forming the explanation, whereas $\mathbf{B}$ includes feature indices that can be excluded from the explanation.
Recall that $\mathbf{x}^\mathbf{B}$ denotes the \emph{irrelevant} features (i.e., perturbing $\mathbf{x}^\mathbf{B}$ while leaving $\mathbf{x}^\mathbf{A}$ unchanged never changes the prediction). To start with, these two sets are initialised as $\varnothing$ (Line~\ref{line:initialize}), and input $\mathbf{x}$ is predicted as $c$, for which we remark that $c$ may or may not be an \emph{accurate} prediction according to the ground truth -- $\textsc{VeriX}$ generates an explanation regardless. Overall, the procedure examines every feature $\chi^i$ in $\mathbf{x}$ according to $\getTranversalOrder$ (Line~\ref{line:traversal}) to determine whether $i$ can be added to $\mathbf{B}$ or must belong to $\mathbf{A}$. The traversal order can significantly affect the size and shape of the explanation.
We propose a heuristic for computing a traversal order that aims to produce small explanations in Section~\ref{subsec:traverse} (in Example~\ref{xmp:verix}, a sequential order is used for ease of explanation).
For each $i$, we compute $\phi$, a formula that encodes two conditions: (i) the current $\chi^i$ and $\mathbf{x}^\mathbf{B}$ are allowed to be perturbed by at most $\epsilon$ (Line~\ref{line:perturb}); and (ii) the remaining features are fixed (Line~\ref{line:fix}).
The property that we check is that $\phi$ implies $\abs{\hat{\class} - c} \leq \delta$ (Line~\ref{line:solver}) denoting prediction invariance.
An automated reasoning sub-procedure $\checkValid$ is deployed to check whether on network $\mathcal{N}$ the specification $\phi \Rightarrow \abs{\hat{\class} - c} \leq \delta$ holds (Line~\ref{line:solver}) -- i.e., whether perturbing the current $\chi^i$ and irrelevant features while fixing the rest ensures a consistent prediction -- it returns $\mathtt{True}$ if this is the case and $\mathtt{False}$ if not. In practice, this can be instantiated with an off-the-shelf neural network verification tool~\cite{deeppoly,prima,marabou,bcrown,verinet}.
If $\checkValid$ returns $\mathtt{True}$, $i$ is added to the irrelevant set $\mathbf{B}$ (Line~\ref{line:notExplanation}). Otherwise, $i$ is added to the explanation index set $\mathbf{A}$ (Line~\ref{line:explanation}), which conceptually indicates that $\chi^i$ contributes to the explanation of the prediction (since feature indices in $\mathbf{B}$ have already been proven to not affect prediction). In other words, an $\epsilon$-perturbation that includes the irrelevant features as well as the current $\chi^i$ can breach the decision boundary of $\mathcal{N}$. The procedure continues until all feature indices in $\mathbf{x}$ are traversed and placed into one of the two disjoint sets $\mathbf{A}$ and $\mathbf{B}$. At the end, $\mathbf{x}^\mathbf{A}$ is returned as the optimal explanation.
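To make the iterative structure concrete, we give a minimal Python sketch of the main loop of Algorithm~\ref{alg:verix}. The sketch is deliberately solver-agnostic and illustrative: the callback \texttt{check\_valid} and the other identifiers are assumptions of this sketch rather than the API of our released implementation; in practice, \texttt{check\_valid} would encode the specification $\phi \Rightarrow \abs{\hat{\class} - c} \leq \delta$ and dispatch it to a sound (and ideally complete) verifier such as $\mathsf{Marabou}$.
\begin{verbatim}
def verix(x, order, check_valid):
    # Sketch of Algorithm 1. check_valid(x, B_prime) stands in for the
    # verifier call on Line 10: it returns True iff every epsilon-bounded
    # perturbation of the features indexed by B_prime (all other features
    # fixed to their values in x) keeps the prediction within delta.
    A, B = [], []                    # explanation / irrelevant index sets
    for i in order:                  # traversal order (Line 5)
        B_prime = B + [i]            # tentatively treat i as irrelevant
        if check_valid(x, B_prime):
            B = B_prime              # i is provably irrelevant (Line 11)
        else:
            A.append(i)              # i joins the explanation (Line 12)
    return A                         # indices of the optimal explanation
\end{verbatim}
For $p = \infty$, one natural encoding of $\phi$ inside \texttt{check\_valid} constrains $\hat{\pixel}^j \in [\chi^j - \epsilon, \chi^j + \epsilon]$ for $j \in \mathbf{B}'$ and $\hat{\pixel}^j = \chi^j$ otherwise.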
\begin{conference}
To ensure the procedure returns a guaranteed explanation, we require that $\checkValid$ is \emph{sound}, i.e., the solver returns $\mathtt{True}$ only if the specification actually holds. For the guaranteed explanation to be optimal, $\checkValid$ also needs to be \emph{complete}, i.e., the solver always returns $\mathtt{True}$ if the specification holds. We can incorporate various existing reasoners as the $\checkValid$ sub-routine. We note that an incomplete reasoner (the solver may return $\mathtt{Unknown}$) does \emph{not} undermine the soundness of our approach, though it does affect optimality (the produced explanations may be larger than necessary).
Below we state these two properties of the $\textsc{VeriX}$ procedure.
Rigorous proofs are in Appendix~A of \cite{verix}.
\end{conference}
\begin{arxiv}
To ensure the procedure returns a guaranteed explanation, we require that $\checkValid$ is \emph{sound}, i.e., the solver returns $\mathtt{True}$ only if the specification actually holds. For the guaranteed explanation to be optimal, $\checkValid$ also needs to be \emph{complete}, i.e., the solver always returns $\mathtt{True}$ if the specification holds. We can incorporate various existing reasoners as the $\checkValid$ sub-routine. We note that an incomplete reasoner (the solver may return $\mathtt{Unknown}$) does \emph{not} undermine the soundness of our approach, though it does affect optimality (the produced explanations may be larger than necessary).
Below we state these two properties of the $\textsc{VeriX}$ procedure.
Rigorous proofs are in Appendix~\ref{app:proof}.
\end{arxiv}
\begin{lemma} \label{lemma:irrelevant}
If $\checkValid$ is sound, at the end of each iteration in Algorithm~\ref{alg:verix}, the \emph{irrelevant} set of indices $\mathbf{B}$ satisfies
\begin{equation*}
(\norm{\hat{\pixel}^{\mathbf{B}'} -\chi^{\mathbf{B}'}}_p \leq \epsilon)
\land (\hat{\pixel}^{\Theta\backslash \mathbf{B}'} = \chi^{\Theta\backslash \mathbf{B}'})\\
\Rightarrow \abs{\hat{\class} - c} \leq \delta.
\end{equation*}
\end{lemma}
\noindent
Intuitively, any $\epsilon$-perturbation imposed upon all irrelevant features while fixing the others will always keep the prediction consistent, i.e., the infinitely many input variants (small blue ``$+$'' in Figure~\ref{fig:verix}) will always remain on the same side of the decision boundary.
This can be proven by induction on the number of iterations. Soundness directly derives from Lemma~\ref{lemma:irrelevant}.
\begin{theorem}[Soundness] \label{thm:sound}
If $\checkValid$ is sound, then the value $\mathbf{x}^\mathbf{A}$ returned by Algorithm~\ref{alg:verix} is a \emph{guaranteed} explanation.
\end{theorem}
\begin{theorem}[Optimality] \label{thm:optimal}
If $\checkValid$ is sound and complete, then the guaranteed explanation $\mathbf{x}^\mathbf{A}$ returned by Algorithm~\ref{alg:verix} is \emph{optimal}.
\end{theorem}
\noindent
Intuitively, optimality holds because if it is not possible for an $\epsilon$-perturbation on some feature $\chi^i$ in explanation $\mathbf{x}^\mathbf{A}$ to change the prediction, then it will be added to the irrelevant features $\mathbf{x}^\mathbf{B}$ when feature $\chi^i$ is considered during the execution of Algorithm~\ref{alg:verix}.
\begin{proposition}[Complexity]
Given a $d$-dimensional input $\mathbf{x}$ and a network $\mathcal{N}$, the \emph{complexity} of computing an optimal explanation is $O(d \cdot P(\mathcal{N}))$, where $P(\mathcal{N})$ is the cost of checking a specification (as in Equation~\ref{eq:spec}) over $\mathcal{N}$.
\end{proposition}
\noindent
Note that an optimal explanation can be obtained from a single traversal of the input features. If $\mathcal{N}$ is piecewise-linear, checking a specification over $\mathcal{N}$ is NP-complete~\cite{reluplex}.
\subsection{Feature-Level Sensitivity Traversal}
\label{subsec:traverse}
Example~\ref{xmp:verix} used a straightforward left-to-right and top-to-bottom traversal order. Here we introduce a heuristic based on \emph{feature-level sensitivity}, inspired by the occlusion method \cite{occlusion}.
\begin{definition}[Sensitivity]
Given an input $\mathbf{x} = \langle \chi^1, \dots, \chi^d \rangle $ and a network $\mathcal{N}$, the feature-level \emph{sensitivity} (in classification for a label $c$ or in regression for a single quantity) for a feature $\chi^i$ with respect to a transformation $\mathcal{T}$ is
\begin{equation}
\mathsf{sensitivity}(\chi^i) = \mathcal{N}_{(c)}(\mathbf{x}) - \mathcal{N}_{(c)}(\mathbf{x}'),
\end{equation}
where $\mathbf{x}'$ is $\mathbf{x}$ with $\chi^i$ replaced by $\mathcal{T}(\chi^i)$.
\end{definition}
\noindent
Typical transformations include deletion ($\mathcal{T}(\chi)=0$) and reversal ($\mathcal{T}(\chi)= \overline{\chi} -\chi$, where $\overline{\chi}$ is the maximum value for feature $\chi$).
Intuitively, we measure how sensitive (in terms of an increase or decrease) a model's confidence is to each individual feature. Given sensitivity values with respect to some transformation, we rank the feature indices into a traversal order from least sensitive to most sensitive.
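As an illustration, the following is a minimal Python sketch of this ranking; the callback \texttt{confidence} (standing for $\mathcal{N}_{c}(\cdot)$ on the predicted label $c$) and the choice of ordering by the magnitude of the sensitivity are assumptions of the sketch, not a specification of our implementation.
\begin{verbatim}
import numpy as np

def sensitivity_traversal(x, confidence, transform=lambda chi: 0.0):
    # Rank feature indices from least to most sensitive with respect
    # to a transformation T; the default is deletion, T(chi) = 0, and
    # reversal would be transform = lambda chi: chi_max - chi.
    base = confidence(x)
    sens = np.empty(len(x))
    for i in range(len(x)):
        x_t = np.array(x, dtype=float)
        x_t[i] = transform(x_t[i])        # perturb a single feature
        sens[i] = base - confidence(x_t)  # feature-level sensitivity
    return np.argsort(np.abs(sens))       # least sensitive first
\end{verbatim}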
\section{Experimental Results}
\label{sec:experiments}
\begin{conference}
We have implemented the $\textsc{VeriX}$ algorithm in Python\footnote{The $\textsc{VeriX}$ code is available at \url{https://github.com/NeuralNetworkVerification/VeriX}.}, using the $\mathsf{Marabou}$~\cite{marabou} neural network verification tool to implement $\checkValid$ (Algorithm~\ref{alg:verix}, Line~\ref{line:solver}). $\mathsf{Marabou}$'s Python API supports specification encoding and incremental solving, making it a good fit. We trained fully-connected and convolutional networks on the MNIST~\cite{mnist}, GTSRB~\cite{gtsrb}, and TaxiNet~\cite{taxinet} datasets for classification and regression tasks.
Model specifications are in Appendix~B of \cite{verix}.
Experiments were performed on a cluster equipped with Intel Xeon E5-2637 v4 CPUs running Ubuntu 16.04. We set a time limit of $300$ seconds for each $\checkValid$ call.
\end{conference}
\begin{arxiv}
We have implemented the $\textsc{VeriX}$ algorithm in Python\footnote{The $\textsc{VeriX}$ code is available at \url{https://github.com/NeuralNetworkVerification/VeriX}.}, using the $\mathsf{Marabou}$~\cite{marabou} neural network verification tool to implement $\checkValid$ (Algorithm~\ref{alg:verix}, Line~\ref{line:solver}). $\mathsf{Marabou}$'s Python API supports specification encoding and incremental solving, making it a good fit. We trained fully-connected and convolutional networks on the MNIST~\cite{mnist}, GTSRB~\cite{gtsrb}, and TaxiNet~\cite{taxinet} datasets for classification and regression tasks.
Model specifications are in Appendix~\ref{app:model}.
Experiments were performed on a cluster equipped with Intel Xeon E5-2637 v4 CPUs running Ubuntu 16.04. We set a time limit of $300$ seconds for each $\checkValid$ call.
\end{arxiv}
\begin{figure}[t]
\centering
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=0.24\linewidth]{figures/explanations/gtsrb-2-explanation.png}
\includegraphics[width=0.24\linewidth]{figures/explanations/gtsrb-37-explanation.png}
\includegraphics[width=0.24\linewidth]{figures/explanations/gtsrb-39-explanation.png}
\includegraphics[width=0.24\linewidth]{figures/explanations/gtsrb-23-explanation.png}
\caption{``$\mathsf{keep \ right}$'', ``$\mathsf{50 \ mph}$'', ``$\mathsf{road \ work}$'', ``$\mathsf{no \ passing}$''}
\label{}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=0.24\linewidth]{figures/explanations/mnist-32-explanation.png}
\includegraphics[width=0.24\linewidth]{figures/explanations/mnist-21-explanation.png}
\includegraphics[width=0.24\linewidth]{figures/explanations/mnist-94-explanation.png}
\includegraphics[width=0.24\linewidth]{figures/explanations/mnist-26-explanation.png}
\caption{``$\mathtt{3}$'', ``$\mathtt{6}$'', ``$\mathtt{1}$'', and ``$\mathtt{7}$''}
\label{fig:explanations-mnist}
\end{subfigure}
\caption{Optimal explanations (green) from $\textsc{VeriX}$ on GTSRB (top) and MNIST (bottom) images.}
\label{fig:explanations}
\end{figure}
\subsection{Example Explanations}
\begin{conference}
Figure~\ref{fig:explanations} shows examples of $\textsc{VeriX}$ explanations for GTSRB and MNIST images. The convolutional model trained on GTSRB and fully-connected model on MNIST are in Appendix~B of \cite{verix}, Tables~7 and 5. Aligning with our intuition, $\textsc{VeriX}$ can distinguish the traffic signs (whether a circle, a triangle, or a square; see Figure~\ref{fig:sensitivity-random-gtsrb}) from their surroundings well; the explanations focus on the actual contents within the signs, e.g., the right arrow denoting ``$\mathsf{keep \ right}$'' and the number $50$ as in ``$\mathsf{50 \ mph}$''. Interestingly, for traffic signs consisting of irregular dark shapes on a white background such as ``$\mathsf{road \ work}$'' and ``$\mathsf{no \ passing}$'', $\textsc{VeriX}$ discovers that the white background contains the essential features. We notice that MNIST explanations are in general more scattered around the background because the network relies on the non-existence of white pixels to recognise certain digits (e.g., the classifier requires an empty top region to predict a ``$\mathtt{7}$'' instead of a ``$\mathtt{9}$'' as shown in the last column of Figure~\ref{fig:explanations-mnist}), whereas GTSRB explanations can safely disregard the surrounding pixels outside the traffic signs.
\end{conference}
\begin{arxiv}
Figure~\ref{fig:explanations} shows examples of $\textsc{VeriX}$ explanations for GTSRB and MNIST images. The convolutional model trained on GTSRB and fully-connected model on MNIST are in Appendix~\ref{app:model}, Tables~\ref{tab:gtsrb-arch} and \ref{tab:mnist-arch}. Aligning with our intuition, $\textsc{VeriX}$ can distinguish the traffic signs (whether a circle, a triangle, or a square; see Figure~\ref{fig:sensitivity-random-gtsrb}) from their surroundings well; the explanations focus on the actual contents within the signs, e.g., the right arrow denoting ``$\mathsf{keep \ right}$'' and the number $50$ as in ``$\mathsf{50 \ mph}$''. Interestingly, for traffic signs consisting of irregular dark shapes on a white background such as ``$\mathsf{road \ work}$'' and ``$\mathsf{no \ passing}$'', $\textsc{VeriX}$ discovers that the white background contains the essential features. We notice that MNIST explanations are in general more scattered around the background because the network relies on the non-existence of white pixels to recognise certain digits (e.g., the classifier requires an empty top region to predict a ``$\mathtt{7}$'' instead of a ``$\mathtt{9}$'' as shown in the last column of Figure~\ref{fig:explanations-mnist}), whereas GTSRB explanations can safely disregard the surrounding pixels outside the traffic signs.
\end{arxiv}
\begin{figure}[t]
\centering
\begin{subfigure}{0.32\linewidth}
\includegraphics[width=\linewidth]{figures/evolving/index-10.png}
\caption{MNIST ``$\mathtt{0}$''}
\label{}
\end{subfigure}
\begin{subfigure}{0.32\linewidth}
\includegraphics[width=\linewidth]{figures/evolving/index-10-unsat-linf0.1.png}
\caption{$\epsilon: 100\% \rightarrow 10\%$}
\label{}
\end{subfigure}
\begin{subfigure}{0.32\linewidth}
\includegraphics[width=\linewidth]{figures/evolving/index-10-unsat-linf0.05.png}
\caption{$\epsilon: 100\% \rightarrow 5\%$}
\label{}
\end{subfigure}
\caption{Visualisation of the expansion of the irrelevant pixels when perturbation magnitude $\epsilon$ decreases from $100\%$ to $10\%$ and further to $5\%$ (from deep blue to light yellow). Each brighter colour denotes the pixels added when moving to the next smaller $\epsilon$, e.g., $100\%, 90\%, 80\%$ and so on.}
\label{fig:evolving}
\end{figure}
\subsection{Effect of Varying $\epsilon$-Perturbations}
A key parameter of $\textsc{VeriX}$ is the perturbation magnitude $\epsilon$. When $\epsilon$ is varied, the irrelevant features change accordingly. Figure~\ref{fig:evolving} visualises this, showing how the irrelevant features change as $\epsilon$ is varied from $100\%$ to $10\%$ and further to $5\%$. As $\epsilon$ decreases, more pixels become irrelevant. Intuitively, the $\textsc{VeriX}$ explanation helps reveal how the network classifies this image as ``$\mathsf{0}$''. The deep blue pixels are those that are irrelevant with $\epsilon=100\%$. Light blue pixels are more sensitive, allowing perturbations of only $10\%$. The light yellow pixels represent $5\%$, and bright yellow are pixels that cannot even be perturbed $5\%$ without changing the prediction. The resulting pattern is roughly consistent with our intuition, as the shape of ``$\mathsf{0}$'' can be seen embedded in the explanation.
We remark that determining an appropriate perturbation magnitude $\epsilon$ is non-trivial because if $\epsilon$ is too loose, explanations may be too conservative, allowing very few pixels to change. On the other hand, if $\epsilon$ is too small, nearly the whole set of pixels could become irrelevant. For instance, in Figure~\ref{fig:evolving}, if we set $\epsilon$ to $1\%$ then all pixels become irrelevant -- the classifier's prediction is robust to perturbations of $1\%$. The ``colour map'' we propose makes it possible to visualise not only the explanation but also how it varies with $\epsilon$. The user then has the freedom to pick a specific $\epsilon$ depending on their application.
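One simple way to compute such a colour map is sketched below; this is illustrative only, and \texttt{check\_valid\_at} is the hypothetical verifier callback of the earlier sketch extended with an explicit $\epsilon$ argument. Each feature is assigned the largest $\epsilon$ at which it is still irrelevant.
\begin{verbatim}
def irrelevance_map(x, order, check_valid_at, epsilons):
    # For each feature, record the largest epsilon at which it is
    # provably irrelevant; 0.0 marks features that remain in the
    # explanation even at the smallest epsilon tested.
    level = [0.0] * len(x)
    for eps in sorted(epsilons, reverse=True):  # e.g. 1.0, 0.9, ..., 0.05
        B = []
        for i in order:
            if check_valid_at(x, B + [i], eps):
                B.append(i)
        for i in B:
            level[i] = max(level[i], eps)
    return level
\end{verbatim}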
\begin{figure}[t]
\centering
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=0.24\linewidth]{figures/sensitivity/gtsrb-46.png}
\includegraphics[width=0.24\linewidth]{figures/sensitivity/gtsrb-46-sensitivity.png}
\includegraphics[width=0.24\linewidth]{figures/sensitivity/gtsrb-46-explanation.png}
\includegraphics[width=0.24\linewidth]{figures/sensitivity/gtsrb-46-explanation-random.png}
\caption{``$\mathsf{priority \ road}$'', sensitivity, explanations (sensitivity/random)}
\label{fig:sensitivity-random-gtsrb}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=0.24\linewidth]{figures/sensitivity/mnist-25.png}
\includegraphics[width=0.24\linewidth]{figures/sensitivity/mnist-25-sensitivity.png}
\includegraphics[width=0.24\linewidth]{figures/sensitivity/mnist-25-explanation.png}
\includegraphics[width=0.24\linewidth]{figures/sensitivity/mnist-25-explanation-random.png}
\caption{``$\mathtt{0}$'', sensitivity, explanations (sensitivity/random)}
\label{}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/sensitivity/mnist-1-10-random.png}
\caption{Sensitivity vs. random in explanation size}
\label{fig:sensitivity-random-size}
\end{subfigure}
\caption{Comparing $\textsc{VeriX}$ explanations, when using \emph{sensitivity} (green) and random (red) traversals, on GTSRB and MNIST. (c) Each blue triangle denotes $1$ deterministic explanation from sensitivity ranking, and each cluster of circles represents $100$ random traversals.}
\label{fig:sensitivity-random}
\end{figure}
\subsection{Sensitivity vs. Random Traversal}
\label{sec:sensitivity-random}
To show the advantage of the \emph{sensitivity} traversal (described in Section~\ref{subsec:traverse}), Figure~\ref{fig:sensitivity-random} compares $\textsc{VeriX}$ explanations using sensitivity-based and random traversal orders. The first column shows the original image; the second a heatmap of the sensitivity (with $\mathcal{T}(\chi)=0$); and the third and fourth columns show explanations using the sensitivity and random traversal orders, respectively.
Sensitivity, as shown in the heatmaps, prioritises pixels that have less influence on the network's prediction. In contrast, a random ranking is simply a shuffling of all the pixels. We observe that the sensitivity traversal generates smaller and more sensible explanations. In Figure~\ref{fig:sensitivity-random-size}, we compare explanation sizes for the first $10$ images (to avoid potential selection bias) of the MNIST test set. For each image, we show $100$ random traversal explanations compared to the sensitivity traversal explanation. We notice that the latter is almost always smaller, often significantly so, suggesting that sensitivity-based traversals are a reasonable heuristic for attempting to approach globally optimal explanations.
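For concreteness, a minimal sketch of such a sensitivity-based ranking is given below (in Python); here each pixel is individually ``deleted'' (set to $0$, i.e., $\mathcal{T}(\chi)=0$), the change in the confidence of the predicted class is recorded, and pixels are traversed from least to most influential. The $\mathtt{model}$ callable, mapping a batch of images to class probabilities, is an assumption of the sketch.
\begin{verbatim}
import numpy as np

def sensitivity_traversal(model, image):
    probs = model(image[None])[0]
    label = int(np.argmax(probs))
    base = probs[label]                    # original confidence
    flat = image.reshape(-1)
    scores = np.empty(flat.size)
    for i in range(flat.size):
        perturbed = flat.copy()
        perturbed[i] = 0.0                 # the T(x) = 0 transformation
        p = model(perturbed.reshape(image.shape)[None])[0]
        scores[i] = abs(base - p[label])   # influence of pixel i
    return np.argsort(scores)              # least influential first
\end{verbatim}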
\begin{figure}[t]
\centering
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=0.24\linewidth]{figures/anchor/gtsrb-8-original-4.png}
\includegraphics[width=0.24\linewidth]{figures/anchor/gtsrb-8-explanation.png}
\includegraphics[width=0.24\linewidth]{figures/anchor/gtsrb-8-anchor.png}
\includegraphics[width=0.24\linewidth]{figures/anchor/gtsrb-8-segments.png}
\caption{``$\mathsf{keep \ right}$'', $\textsc{VeriX}$, Anchors, segmentation}
\label{fig:verix-anchor-segments}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=0.24\linewidth]{figures/anchor/gtsrb-2-original-4.png}
\includegraphics[width=0.24\linewidth]{figures/anchor/gtsrb-2-explanation.png}
\includegraphics[width=0.24\linewidth]{figures/anchor/gtsrb-2-anchor.png}
\includegraphics[width=0.24\linewidth]{figures/anchor/gtsrb-2-counterexample.png}
\caption{``$\mathsf{keep \ right}$'', $\textsc{VeriX}$, Anchors, misclassified example}
\label{fig:verix-anchor-counterexample}
\end{subfigure}
\caption{Comparing $\textsc{VeriX}$ (green) to Anchors (red) on two versions of a ``keep right'' traffic sign, one with strong light in the background and one without.}
\label{fig:verix-anchor}
\end{figure}
\begin{table}[t]
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{l|cc|cc}
\hline \hline
\multirow{2}{*}{} & \multicolumn{2}{c|}{MNIST} & \multicolumn{2}{c}{GTSRB} \\
& size & time & size & time \\ \hline
$\textsc{VeriX}$ (sensitivity) & 180.6 & 174.77 & 357.0 & 853.91 \\
$\textsc{VeriX}$ (random) & 294.2 & 157.47 & 383.5 & 814.18 \\
Anchors & 494.9 & 13.46 & 557.7 & 26.15 \\
\hline \hline
\end{tabular}
\caption{$\textsc{VeriX}$ vs. Anchors regarding average explanation size (number of pixels) and generation time (seconds). In $\textsc{VeriX}$, $\epsilon$ is set to $5\%$ for MNIST and $0.5\%$ for GTSRB.}
\label{tab:verix-anchors}
\end{table}
\input{figures/taxi-explanations}
\begin{figure}[t]
\centering
\begin{subfigure}{0.57\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/taxi/airplane.png}
\caption{X-Plane 11 aircraft taxiing.}
\end{subfigure}
\begin{subfigure}{0.42\linewidth}
\centering
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/taxi/crop-before.png}
\caption{View from right wing.}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/taxi/crop-after.png}
\caption{A downsampled image.}
\end{subfigure}
\end{subfigure}
\caption{An autonomous aircraft taxiing scenario~\cite{taxinet}. Pictures taken from the camera on the right wing of the aircraft are cropped (red box) and downsampled to obtain the TaxiNet dataset.}
\label{fig:taxiing}
\end{figure}
\subsection{$\textsc{VeriX}$ vs. Anchors}
We compare our $\textsc{VeriX}$ approach with Anchors~\cite{anchors}. Figure~\ref{fig:verix-anchor} shows both approaches applied to two different versions of a ``$\mathsf{keep \ right}$'' traffic sign. Anchors performs image segmentation and selects a set of the segments as the explanation, making its explanations heavily dependent on the quality of the segmentation. For instance, distractions such as strong light in the background may compromise the segments (Figure~\ref{fig:verix-anchor-segments}, last column) thus resulting in less-than-ideal explanations, e.g., the top right region of the anchor (red) is outside the actual traffic sign. As described above, $\textsc{VeriX}$ utilises the model to compute the sensitivity traversal, often leading to more reasonable explanations. Anchors is also not designed to provide \emph{formal} guarantees. In fact, replacing the background of an anchor explanation can change the classification. For example, the last column of Figure~\ref{fig:verix-anchor-counterexample} is classified as ``$\mathsf{yield}$'' with confidence $99.92\%$.
Two key metrics for evaluating the quality of an explanation are the \emph{size} and the \emph{generation time}. Table~\ref{tab:verix-anchors} shows that overall, $\textsc{VeriX}$ produces much smaller explanations than Anchors. On the other hand, it takes much longer to perform the computation necessary to ensure formal guarantees. The two techniques thus provide a trade-off between time and explanation quality.
Table~\ref{tab:verix-anchors} also shows that sensitivity traversals produce significantly smaller sizes with only a modest overhead in time.
\subsection{Vision-Based Autonomous Aircraft Taxiing}
\begin{conference}
We also applied $\textsc{VeriX}$ to the real-world safety-critical aircraft taxiing scenario~\cite{taxinet} shown in Figure~\ref{fig:taxiing}. The vision-based autonomous taxiing system needs to make sure the aircraft stays on the taxiway utilising only pictures taken from the camera on the right wing. The task is to evaluate the cross-track position of the aircraft so that a controller can adjust its position accordingly. To achieve this, a regression model is used that takes a picture as input and produces an estimate of the current position. A preprocessing step crops out the sky and aircraft nose, keeping the crucial taxiway region (in the red box). This is then downsampled into a grey-scale image of size $27\cross 54$ pixels. We label each image with its corresponding lateral distance to the runway centerline as well as the taxiway heading angle. We trained a fully-connected feed-forward network on this dataset, referred to as TaxiNet (Appendix~B.3 of \cite{verix}, Table~9), to predict the aircraft's cross-track distance.
\end{conference}
\begin{arxiv}
We also applied $\textsc{VeriX}$ to the real-world safety-critical aircraft taxiing scenario~\cite{taxinet} shown in Figure~\ref{fig:taxiing}. The vision-based autonomous taxiing system needs to make sure the aircraft stays on the taxiway utilising only pictures taken from the camera on the right wing. The task is to evaluate the cross-track position of the aircraft so that a controller can adjust its position accordingly. To achieve this, a regression model is used that takes a picture as input and produces an estimate of the current position. A preprocessing step crops out the sky and aircraft nose, keeping the crucial taxiway region (in the red box). This is then downsampled into a grey-scale image of size $27\cross 54$ pixels. We label each image with its corresponding lateral distance to the runway centerline as well as the taxiway heading angle. We trained a fully-connected feed-forward network on this dataset, referred to as TaxiNet (Appendix~\ref{app:model-taxinet}, Table~\ref{tab:taxi-arch}), to predict the aircraft's cross-track distance.
\end{arxiv}
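A sketch of this preprocessing step is given below; the crop-box coordinates are placeholders, as the actual values depend on the camera geometry.
\begin{verbatim}
import numpy as np
from PIL import Image

def preprocess(frame, crop_box=(0, 150, 1280, 470)):
    # crop out sky and aircraft nose (the red box), then
    # downsample to a 27 x 54 grey-scale image in [0, 1]
    taxiway = frame.crop(crop_box)
    small = taxiway.convert("L").resize((54, 27))  # PIL uses (w, h)
    return np.asarray(small, dtype=np.float32) / 255.0
\end{verbatim}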
Figure~\ref{fig:taxi-explanation} shows $\textsc{VeriX}$ applied to TaxiNet, including a variety of taxiway images with different heading angles and numbers of lanes. For each taxiway, we show its $\textsc{VeriX}$ explanation accompanied by a sensitivity heatmap and the cross-track estimate. We observe that the model is capable of detecting the more remote line -- its contour is clearly marked in green. Meanwhile, the model is mainly focused on the centerline (especially in Figures~\ref{fig:taxi-explanation-2}, \ref{fig:taxi-explanation-4}, \ref{fig:taxi-explanation-5}, and \ref{fig:taxi-explanation-6}), which makes sense as it needs to measure how far the aircraft has deviated from the center. Interestingly, while we intuitively might assume that the model would focus on the white lanes and discard the rest, $\textsc{VeriX}$ shows that the bottom middle region is also crucial to the explanation (e.g., as shown in Figures~\ref{fig:taxi-explanation-1} and \ref{fig:taxi-explanation-3}). This is because the model must take into account the presence and absence of the centerline. This is in fact consistent with our observations about the black background in MNIST images (Figure~\ref{fig:motivation}). We used $\epsilon=5\%$ for these explanations, which suggests that for modest perturbations (e.g., brightness change due to different weather conditions) the predicted cross-track estimate will remain within an acceptable discrepancy, and taxiing will not be compromised.
\subsection{Runtime Performance}
\label{subsec:scalability}
\begin{table*}[t]
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{l|cc|cc|cc}
\hline \hline
\multirow{2}{*}{} & \multicolumn{2}{c|}{$\mathtt{Dense}$} & \multicolumn{2}{c|}{$\mathtt{Dense\ (large)}$} & \multicolumn{2}{c}{$\mathtt{CNN}$} \\
& $\checkValid$ & $\textsc{VeriX}$ & $\checkValid$ & $\textsc{VeriX}$ & $\checkValid$ & $\textsc{VeriX}$ \\ \hline
MNIST ($28 \times 28$) & 0.013 & 160.59 & 0.055 & 615.85 & 0.484 & 4956.91 \\
TaxiNet ($27 \times 54$) & 0.020 & 114.69 & 0.085 & 386.62 & 2.609 & 8814.85 \\
GTSRB ($32 \times 32 \times 3$) & 0.091 & 675.04 & 0.257 & 1829.91 & 1.574 & 12935.27 \\
\hline \hline
\end{tabular}
\caption{Average execution time (seconds) of $\checkValid$ and $\textsc{VeriX}$ for \emph{complete} verification. In particular, $\epsilon$ is set to $3\%$ across the $\mathtt{Dense}$, $\mathtt{Dense\ (large)}$, $\mathtt{CNN}$ models and the MNIST, TaxiNet, GTSRB datasets for sensible comparison.}
\label{tab:complete}
\end{table*}
\begin{table}[t]
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{c|c|c|c|c}
\hline \hline
& \# ReLU & \# MaxPool & $\checkValid$ & $\textsc{VeriX}$ \\ \hline
MNIST-$\mathtt{sota}$ & 50960 & 5632 & 2.31 & 1841.25 \\
GTSRB-$\mathtt{sota}$ & 106416 & 5632 & 8.54 & 8770.15 \\
\hline \hline
\end{tabular}
\caption{Average execution time (seconds) of $\checkValid$ and $\textsc{VeriX}$ for \emph{incomplete} verification. $\epsilon$ is $3\%$ for both MNIST-$\mathtt{sota}$ and GTSRB-$\mathtt{sota}$ models.}
\label{tab:incomplete}
\end{table}
\begin{conference}
We analyse the empirical time \emph{complexity} of our $\textsc{VeriX}$ approach in Table~\ref{tab:complete}.
The model structures are described in Appendix~B.4 of \cite{verix}.
Typically, the individual pixel checks ($\checkValid$) return a definitive answer ($\mathtt{True}$ or $\mathtt{False}$) within a second on dense models and in a few seconds on convolutional networks. For image benchmarks such as MNIST and GTSRB, larger inputs or more complicated models result in longer (pixel- and image-level) execution times for generating explanations. As for TaxiNet, a regression task, while its pixel-level check takes longer than that of MNIST, it is actually faster in total time on dense models because TaxiNet does not need to check against other labels.
\end{conference}
\begin{arxiv}
We analyse the empirical time \emph{complexity} of our $\textsc{VeriX}$ approach in Table~\ref{tab:complete}.
The model structures are described in Appendix~\ref{app:model-compare}.
Typically, the individual pixel checks ($\checkValid$) return a definitive answer ($\mathtt{True}$ or $\mathtt{False}$) within a second on dense models and in a few seconds on convolutional networks. For image benchmarks such as MNIST and GTSRB, larger inputs or more complicated models result in longer (pixel- and image-level) execution times for generating explanations. As for TaxiNet, a regression task, while its pixel-level check takes longer than that of MNIST, it is actually faster in total time on dense models because TaxiNet does not need to check against other labels.
\end{arxiv}
\begin{conference}
The \emph{scalability} of $\textsc{VeriX}$ can be improved if we perform incomplete verification, for which we re-emphasise that the soundness of the resulting explanations is not undermined. To illustrate, we deploy the incomplete $\mathsf{CROWN}$~\cite{crown} analysis (implemented in $\mathsf{Marabou}$) to perform the $\checkValid$ sub-procedure. Table~\ref{tab:incomplete} reports the runtime performance of $\textsc{VeriX}$ when using incomplete verification on state-of-the-art network architectures with hundreds of thousands of neurons.
See models in Appendix~B of \cite{verix}, Tables~6 and 8.
In general, the scalability of $\textsc{VeriX}$ will grow with that of verification tools, which has improved significantly in the past several years as demonstrated by the results from the Verification of Neural Networks Competitions (VNN-COMP)~\cite{vnncomp22}.
\end{conference}
\begin{arxiv}
The \emph{scalability} of $\textsc{VeriX}$ can be improved if we perform incomplete verification, for which we re-emphasise that the soundness of the resulting explanations is not undermined. To illustrate, we deploy the incomplete $\mathsf{CROWN}$~\cite{crown} analysis (implemented in $\mathsf{Marabou}$) to perform the $\checkValid$ sub-procedure. Table~\ref{tab:incomplete} reports the runtime performance of $\textsc{VeriX}$ when using incomplete verification on state-of-the-art network architectures with hundreds of thousands of neurons.
See models in Appendix~\ref{app:model}, Tables~\ref{tab:mnist-sota} and \ref{tab:gtsrb-sota}.
In general, the scalability of $\textsc{VeriX}$ will grow with that of verification tools, which has improved significantly in the past several years as demonstrated by the results from the Verification of Neural Networks Competitions (VNN-COMP)~\cite{vnncomp22}.
\end{arxiv}
\section{Related Work}
\label{sec:related}
\subsection{Formal Explanations}
Existing work on formal explanations has significant limitations~\cite{formalXAI}.
First, in terms of \emph{scalability}, they can only handle simple machine learning models such as naive Bayes classifiers~\cite{marques2020explaining}, random forests~\cite{izza2021explaining,boumazouza2021asteryx}, decision trees~\cite{izza2022tackling}, and boosted trees~\cite{ignatiev2020towards,ignatiev2022using}. In particular, \cite{ignatiev2019abduction} addresses networks with very simple structure (e.g., one hidden layer of $15$ or $20$ neurons) and reports only preliminary results. In contrast, $\textsc{VeriX}$ works with state-of-the-art deep neural networks applicable to real-world safety-critical scenarios.
Second, the \emph{size} of explanations in existing work can be unnecessarily conservative. As a workaround, approximate explanations~\cite{waeldchen2021computational,wang2021probabilistic} are proposed as a generalisation to provide probabilistic (thus compromised) guarantees of prediction invariance. $\textsc{VeriX}$, by utilising feature-level sensitivity ranking, produces reasonably-sized and sensible explanations (see our advantage over random traversal in Section~\ref{sec:sensitivity-random}) with rigorous guarantees.
Third, current formal explanations allow \emph{any possible input} in feature space, which is not necessary or even realistic. For instance, a lane perception model deployed in self-driving cars should be able to correctly recognise traffic signs all day and night, with brightness changes in a practical range, e.g., the road signs will not turn completely white (RGB values $(255, 255, 255)$). $\textsc{VeriX}$'s perturbation parameter $\epsilon$ allows us the flexibility to imitate physically plausible distortions.
\subsection{Verification of Neural Networks}
Researchers have investigated how automated reasoning can aid verification of neural networks with respect to formally specified properties~\cite{liu2021algorithms,huang2020survey}, by utilising reasoners based on abstraction~\cite{crown,deepz,kpoly,deeppoly,AI2,nnv,prima,neurify,reluval,optAndAbs,zelazny2022optimizing,vegas,wu2022toward} and search~\cite{planet,reluplex,marabou,dlv,bcrown,mipverify,verinet,babsr,scaling,deeptre,deepgame,deepvideo,dependency,peregrinn,mnbab,wu2020parallelization,soi}. Those approaches mainly focus on verifying whether a network satisfies a certain pre-defined property (e.g., robustness), i.e., either prove the property holds or disprove it with a counterexample. However, this does not shed light on \emph{why} a network makes a specific prediction.
We take a step further, repurposing those verification engines as sub-routines to inspect the decision-making process of a model, thereby explaining its behaviour (through the presence or absence of certain input features). The hope is that these explanations can help humans better interpret machine learning models and thus facilitate appropriate deployment.
\section{Conclusions and Future Work}
We have presented the $\textsc{VeriX}$ framework for computing \emph{sound} and \emph{optimal} explanations for state-of-the-art neural networks, facilitating explainable and trustworthy AI in safety-critical domains.
A possible future direction is generalising to other crucial properties such as \emph{fairness}. Recall the loan application model in Section~\ref{sec:introduction}; our approach can discover potential bias (if it exists) by including those ``$\mathsf{gender}$'' and ``$\mathsf{ethnicity}$'' attributes in the produced explanations; then a human decision-maker can better interpret the loan prediction outcomes to promote a fair and unbiased model.
\section{Proofs for Theorems}
\label{app:proof}
We present rigorous proofs for Lemma~\ref{lemma:irrelevant}, Theorems~\ref{thm:sound} and \ref{thm:optimal} in Section~\ref{subsec:verix}, justifying the \emph{soundness} and \emph{optimality} of our $\textsc{VeriX}$ approach. For better readability, we repeat each lemma and theorem before their corresponding proofs.
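Since the proofs below repeatedly refer to individual lines of Algorithm~\ref{alg:verix}, we include a compact sketch of its main loop, reconstructed from the description in the text; the variable names are illustrative only, and $\mathtt{check\_valid}$ abstracts the automated reasoner.
\begin{verbatim}
def verix(network, x, epsilon, delta, traversal, check_valid):
    c = network(x)              # c <- N(x)
    A, B = set(), set()         # explanation / irrelevant index sets
    for i in traversal:         # one ordered traversal pi of Theta(x)
        B_prime = B | {i}       # tentatively mark feature i irrelevant
        # check_valid decides:  (|x'^B' - x^B'|_p <= eps  and
        #   x'^(Theta\B') = x^(Theta\B'))  =>  |N(x') - c| <= delta
        if check_valid(network, x, B_prime, epsilon, c, delta):
            B = B_prime         # HOLD is True: i is provably irrelevant
        else:
            A = A | {i}         # i joins the explanation
    return A, B
\end{verbatim}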
\subsection{Proof for Lemma~\ref{lemma:irrelevant}}
\begin{replemma}{lemma:irrelevant}
If $\checkValid$ is sound, at the end of each iteration in Algorithm~\ref{alg:verix}, the \emph{irrelevant} set of indices $\mathbf{B}$ satisfies
\begin{equation*}
(\norm{\hat{\pixel}^{\mathbf{B}'} -\chi^{\mathbf{B}'}}_p \leq \epsilon)
\land (\hat{\pixel}^{\Theta\backslash \mathbf{B}'} = \chi^{\Theta\backslash \mathbf{B}'})\\
\Rightarrow \abs{\hat{\class} - c} \leq \delta.
\end{equation*}
\end{replemma}
\begin{proof}
Recall that the sub-procedure $\checkValid$ being sound means that the deployed automated reasoner returns $\mathtt{True}$ only if the specification actually holds. That is, from Line~\ref{line:solver} we have
\begin{equation*}
\phi \Rightarrow \abs{\hat{\class} - c} \leq \delta
\end{equation*}
holds on network $\mathcal{N}$. Simultaneously, from Lines~\ref{line:perturb} and \ref{line:fix} we know that, to check the current feature $\chi^i$ of the traversing order $\perm$, the pre-condition $\phi$ contains
\begin{equation*}
\phi \mapsto
(\norm{\hat{\pixel}^{\mathbf{B}'} -\chi^{\mathbf{B}'}}_p \leq \epsilon)
\land (\hat{\pixel}^{\Theta\backslash \mathbf{B}'} = \chi^{\Theta\backslash \mathbf{B}'}).
\end{equation*}
Specifically, we prove this by induction on the iteration number $i$. When $i$ is $0$, the pre-condition $\phi$ is initialised as $\top$ and the specification holds trivially. In the inductive case, suppose $\checkValid$ returns $\mathtt{False}$; then the set $\mathbf{B}$ is unchanged as in Line~\ref{line:explanation}. Otherwise, if $\checkValid$ returns $\mathtt{True}$, which makes $\mathtt{HOLD}$ become $\mathtt{True}$, then the current feature index $i$ is added to the irrelevant set of feature indices $\mathbf{B}$ as in Line~\ref{line:notExplanation}, since the following specification is satisfied:
\begin{equation*}
(\norm{\hat{\pixel}^{\mathbf{B}'} -\chi^{\mathbf{B}'}}_p \leq \epsilon)
\land (\hat{\pixel}^{\Theta\backslash \mathbf{B}'} = \chi^{\Theta\backslash \mathbf{B}'})\\
\Rightarrow \abs{\hat{\class} - c} \leq \delta.
\end{equation*}
As the iteration proceeds, each time $\checkValid$ returns $\mathtt{True}$, the irrelevant set $\mathbf{B}$ is augmented with the current feature index $i$, and the specification always holds as it is explicitly checked by the $\checkValid$ reasoner.
\end{proof}
\subsection{Proof for Theorem~\ref{thm:sound}}
\begin{reptheorem}{thm:sound}[Soundness]
If $\checkValid$ is sound, Algorithm~\ref{alg:verix} returns a \emph{guaranteed} explanation $\mathbf{x}^\mathbf{A}$ (as in Definition~\ref{dfn:explanation}) with respect to network $\mathcal{N}$ and input $\mathbf{x}$.
\end{reptheorem}
\begin{proof}
The for-loop from Line~\ref{line:for-loop} indicates that Algorithm~\ref{alg:verix} goes through each feature $\mathbf{x}^i$ in input $\mathbf{x}$ by traversing the set of indices $\Theta(\mathbf{x})$. Line~\ref{line:traversal} means that $\perm$ is one such ordered traversal. When the iteration ends, all the indices in $\Theta(\mathbf{x})$ are either put into the irrelevant set of indices by $\mathbf{B} \mapsto \mathbf{B}'$ as in Line~\ref{line:notExplanation} or into the explanation index set by $\mathbf{A} \mapsto \mathbf{A} \cup \{i\}$ as in Line~\ref{line:explanation}. That is, $\mathbf{A}$ and $\mathbf{B}$ are two disjoint index sets forming $\Theta(\mathbf{x})$; in other words, $\mathbf{B} = \Theta(\mathbf{x}) \setminus \mathbf{A}$. Therefore, combined with Lemma~\ref{lemma:irrelevant}, when the reasoner $\checkValid$ is sound, once the iteration finishes the following specification
\begin{equation*}
(\norm{\hat{\pixel}^{\mathbf{B}} -\chi^{\mathbf{B}}}_p \leq \epsilon)
\land (\hat{\pixel}^{\Theta \setminus \mathbf{B}} = \chi^{\Theta \setminus \mathbf{B}})\\
\Rightarrow \abs{\hat{\class} - c} \leq \delta
\end{equation*}
holds on network $\mathcal{N}$, where $\hat{\pixel}^{\mathbf{B}}$ is the variable representing all the possible assignments of irrelevant features $\mathbf{x}^\mathbf{B}$, i.e., $\forall \ \mathbf{x}^{\mathbf{B}'}$, and the pre-condition $\hat{\pixel}^{\Theta \setminus \mathbf{B}} = \chi^{\Theta \setminus \mathbf{B}}$ fixes the values of the explanation features of an instantiated input $\mathbf{x}$. Meanwhile, the post-condition $\abs{\hat{\class} - c} \leq \delta$, where $c \mapsto \mathcal{N}(\mathbf{x})$ as in Line~\ref{line:predition}, ensures prediction invariance, with $\delta$ being $0$ for classification and a pre-defined allowable discrepancy for regression. Therefore, for the specific input $\mathbf{x}$, the following property
\begin{equation*}
\forall \ \mathbf{x}^{\mathbf{B}'},
(\norm{\mathbf{x}^{\mathbf{B}'} - \mathbf{x}^{\mathbf{B}}}_p \leq \epsilon)
\Rightarrow \abs{\mathcal{N}(\mathbf{x}') - \mathcal{N}(\mathbf{x})} \leq \delta
\end{equation*}
holds. The proof is thus by construction: according to Definition~\ref{dfn:explanation}, if the irrelevant features $\mathbf{x}^\mathbf{B}$ satisfy the above property, then the remaining features $\mathbf{x}^\mathbf{A}$ form a \emph{guaranteed} explanation with respect to network $\mathcal{N}$ and input $\mathbf{x}$.
\end{proof}
\subsection{Proof for Theorem~\ref{thm:optimal}}
\begin{reptheorem}{thm:optimal}[Optimality]
If $\checkValid$ is sound and complete, the guaranteed explanation $\mathbf{x}^\mathbf{A}$ returned by Algorithm~\ref{alg:verix} is \emph{optimal} (as in Definition~\ref{dfn:optimal}).
\end{reptheorem}
\begin{proof}
We prove this by contradiction. From Definition~\ref{dfn:optimal} of an optimal explanation, we know that explanation $\mathbf{x}^\mathbf{A}$ is optimal if, for any feature $\chi$ in the explanation, there always exists an $\epsilon$-perturbation on $\chi$ and the irrelevant features $\mathbf{x}^\mathbf{B}$ such that the prediction alters. Let us suppose $\mathbf{x}^\mathbf{A}$ is not optimal; then there exists a feature $\chi$ in $\mathbf{x}^\mathbf{A}$ such that, no matter how this feature (as $\chi'$) and the irrelevant features $\mathbf{x}^{\mathbf{B}'}$ are manipulated, the prediction always remains the same. That is,
\begin{multline*}
\forall \ \mathbf{x}^{\mathbf{B}'}, \chi', \norm{(\mathbf{x}^\mathbf{B} \oplus \chi) - (\mathbf{x}^{\mathbf{B}'} \oplus \chi')}_p \leq \epsilon \\
\Rightarrow \abs{\mathcal{N}(\mathbf{x}) - \mathcal{N}(\mathbf{x}')} \leq \delta,
\end{multline*}
where $\oplus$ denotes concatenation of two features. When we pass this input $\mathbf{x}$ and network $\mathcal{N}$ into the $\textsc{VeriX}$ framework, suppose Algorithm~\ref{alg:verix} examines feature $\chi$ at the $i$th iteration, then as in Line~\ref{line:update-irrelevant}, the current irrelevant set of indices is $\mathbf{B}' \mapsto \mathbf{B} \cup \{i\}$, and accordingly the pre-conditions are $\phi \mapsto$
\begin{equation*}
(\norm{\hat{\pixel}^{\mathbf{B} \cup \{i\}} - \chi^{\mathbf{B} \cup \{i\}}}_p \leq \epsilon)
\land (\hat{\pixel}^{\Theta \setminus (\mathbf{B} \cup \{i\})} = \chi^{\Theta \setminus (\mathbf{B} \cup \{i\})}).
\end{equation*}
Because $\hat{\pixel}^{\mathbf{B} \cup \{i\}}$ is the variable representing all the possible assignments of irrelevant features $\mathbf{x}^\mathbf{B}$ and the $i$th feature $\chi$, i.e., $\forall \ \mathbf{x}^{\mathbf{B}'}, \chi'$, and meanwhile
$$\hat{\pixel}^{\Theta \setminus (\mathbf{B} \cup \{i\})} = \chi^{\Theta \setminus (\mathbf{B} \cup \{i\})}$$
indicates that the other features are fixed with specific values of this $\mathbf{x}$. Thus, with $c \mapsto \mathcal{N}(\mathbf{x})$ in Line~\ref{line:predition}, we have the specification $\phi \Rightarrow \abs{\hat{\class} - c} \leq \delta$
holds on input $\mathbf{x}$ and network $\mathcal{N}$. Therefore, if the reasoner $\checkValid$ is sound and complete,
\begin{equation*}
\checkValid(\mathcal{N}, \phi \Rightarrow \abs{\hat{\class} - c} \leq \delta)
\end{equation*}
will always return $\mathtt{True}$. Line~\ref{line:solver} assigns $\mathtt{True}$ to $\mathtt{HOLD}$, and index $i$ is then put into the irrelevant set $\mathbf{B}$; thus feature $\chi$ ends up among the irrelevant features $\mathbf{x}^\mathbf{B}$. However, by assumption, feature $\chi$ is in the explanation $\mathbf{x}^\mathbf{A}$, so $\chi$ is in both $\mathbf{x}^\mathbf{A}$ and $\mathbf{x}^\mathbf{B}$ simultaneously -- a contradiction. Therefore, Theorem~\ref{thm:optimal} holds.
\end{proof}
\section{Model Specifications}
\label{app:model}
Apart from those experimental settings in Section~\ref{sec:experiments}, we include detailed model specifications for reproducibility and reference purposes. Although evaluated on the Modified National Institute of Standards and Technology (MNIST)~\cite{mnist}, German Traffic Sign Recognition Benchmark (GTSRB)~\cite{gtsrb}, and TaxiNet~\cite{taxinet} image datasets -- MNIST and GTSRB in classification and TaxiNet in regression, our $\textsc{VeriX}$ framework can be generalised to other machine learning applications such as natural language processing.
As for the sub-procedure $\checkValid$ of Algorithm~\ref{alg:verix}, while $\textsc{VeriX}$ can potentially incorporate existing automated reasoners, we deploy the neural network verification tool $\mathsf{Marabou}$~\cite{marabou}. While it supports various model formats such as $\mathsf{.pb}$ from
TensorFlow and $\mathsf{.h5}$ from Keras,
we employ the cross-platform $\mathsf{.onnx}$ format for better Python API support.
When importing a model with $\mathsf{softmax}$ as the final activation function, we remark that, for the problem to be \emph{decidable}, one needs to specify the $\mathtt{outputName}$ parameter of the $\mathtt{read\_onnx}$ function as the pre-$\mathsf{softmax}$ logits. As a workaround for this, one can also train the model without $\mathsf{softmax}$ in the last layer and instead use the $\mathsf{SoftmaxLoss}$ loss function from the $\mathsf{tensorflow\_ranking}$ package. Either way, $\textsc{VeriX}$ produces consistent results.
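For illustration, loading a classifier along these lines might look as follows; the exact argument names may differ between $\mathsf{Marabou}$ versions, so this is only a sketch of the remark above.
\begin{verbatim}
from maraboupy import Marabou

# point outputName at the pre-softmax logits for decidability
network = Marabou.read_onnx("classifier.onnx",
                            outputName="logits")
\end{verbatim}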
\subsection{MNIST}
For MNIST, we train a fully-connected feed-forward neural network with $3$ dense layers activated with $\mathsf{ReLU}$ (first $2$ layers) and $\mathsf{softmax}$ (last classification layer) functions as in Table~\ref{tab:mnist-arch}, achieving $92.26\%$ accuracy. While MNIST classifiers can easily be trained to accuracy as high as $99.99\%$, we are more interested in whether such a very simple model can extract sensible explanations -- the answer is yes. Meanwhile, we also train several more complicated MNIST models, and observe that their optimal explanations share a common phenomenon: they are relatively more scattered around the background compared to those for the other datasets. This cross-model observation indicates that MNIST models need to check both the presence and absence of white pixels to recognise the handwritten digits correctly. Besides, to show the scalability of $\textsc{VeriX}$, we also deploy incomplete verification on the state-of-the-art model structure in Table~\ref{tab:mnist-sota}.
\subsection{GTSRB}
As for the GTSRB dataset, since it is not as identically distributed as MNIST, to avoid potential distribution shift, instead of training a model on the original $43$ categories, we focus on the $10$ categories with the highest occurrence in the training set. This allows us to obtain an appropriate model with high accuracy -- the convolutional model we train, shown in Table~\ref{tab:gtsrb-arch}, achieves a test accuracy of $93.83\%$. It is worth mentioning that our convolutional model is much more complicated than the simple dense model in \cite{ignatiev2019abduction}, which only contains one hidden layer of $15$ or $20$ neurons trained to distinguish two MNIST digits.
Also, as shown in Table~\ref{tab:incomplete}, we report results on the state-of-the-art GTSRB classifier in Table~\ref{tab:gtsrb-sota}.
\subsection{TaxiNet}
\label{app:model-taxinet}
Apart from the classification tasks performed on those standard image recognition benchmarks, our $\textsc{VeriX}$ approach can also tackle regression models, applicable to real-world safety-critical domains. In this vision-based autonomous aircraft taxiing scenario~\cite{taxinet} of Figure~\ref{fig:taxiing}, we train the regression model in Table~\ref{tab:taxi-arch} to produce an estimate of the cross-track distance (in meters) from the ownship to the taxiway centerline. The TaxiNet model has a mean absolute error of $0.824$ on the test set, with no activation function in the last output layer.
\subsection{Dense, Dense (large), and CNN}
\label{app:model-compare}
In Section~\ref{subsec:scalability}, we analyse the execution time of $\textsc{VeriX}$ on three models with increasing complexity: $\mathtt{Dense}$, $\mathtt{Dense\ (large)}$, and $\mathtt{CNN}$ as in Tables~\ref{tab:dense}, \ref{tab:dense-large}, and \ref{tab:cnn}, respectively. To enable a fair and sensible comparison, those three models are used across the MNIST, TaxiNet, and GTSRB datasets with only necessary adjustments to accommodate each task. For example, in all three models $\mathtt{h} \times \mathtt{w} \times \mathtt{c}$ denotes different input size $\mathtt{height} \times \mathtt{width} \times \mathtt{channel}$ for each dataset. For the activation function of the last layer, $\mathsf{softmax}$ is used for MNIST and GTSRB while TaxiNet as a regression task needs no such activation. Finally, TaxiNet deploys $\mathsf{he\_uniform}$ as the $\mathtt{kernel\_initializer}$ parameter in the intermediate dense and convolutional layers for task-specific reasons.
\begin{table}[h]
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{c|c|c}
\hline \hline
Layer Type & Parameter & Activation \\
\hline
Input & $28 \times 28 \times 1$ & -- \\
Flatten & -- & -- \\
Fully Connected & $10$ & $\mathsf{ReLU}$ \\
Fully Connected & $10$ & $\mathsf{ReLU}$ \\
Fully Connected & $10$ & $\mathsf{softmax}$ \\
\hline \hline
\end{tabular}
\caption{Architecture for the MNIST classifier.}
\label{tab:mnist-arch}
\end{table}
\begin{table}[h]
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{c|c|c}
\hline \hline
Type & Parameter & Activation \\
\hline
Input & $28 \times 28 \times 1$ & -- \\
Convolution & $3 \times 3 \times 32$ & $\mathsf{ReLU}$ \\
Convolution & $3 \times 3 \times 32$ & $\mathsf{ReLU}$ \\
MaxPooling & $2 \times 2$ & -- \\
Convolution & $3 \times 3 \times 64$ & $\mathsf{ReLU}$ \\
Convolution & $3 \times 3 \times 64$ & $\mathsf{ReLU}$ \\
MaxPooling & $2 \times 2$ & -- \\
Flatten & -- & -- \\
Fully Connected & $200$ & $\mathsf{ReLU}$ \\
Dropout & 0.5 & -- \\
Fully Connected & $200$ & $\mathsf{ReLU}$ \\
Fully Connected & $10$ & $\mathsf{softmax}$ \\
\hline \hline
\end{tabular}
\caption{Architecture for the MNIST-$\mathtt{sota}$ classifier.}
\label{tab:mnist-sota}
\end{table}
\newpage
\begin{table}[h]
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{c|c|c}
\hline \hline
Type & Parameter & Activation \\
\hline
Input & $32 \times 32 \times 3$ & -- \\
Convolution & $3\times 3 \times 4$ ($1$) & -- \\
Convolution & $2\times 2 \times 4$ ($2$) & -- \\
Fully Connected & $20$ & $\mathsf{ReLU}$ \\
Fully Connected & $10$ & $\mathsf{softmax}$ \\
\hline \hline
\end{tabular}
\caption{Architecture for the GTSRB classifier.}
\label{tab:gtsrb-arch}
\end{table}
\begin{table}[h]
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{c|c|c}
\hline \hline
Type & Parameter & Activation \\
\hline
Input & $28 \times 28 \times 1$ & -- \\
Convolution & $3 \times 3 \times 32$ & $\mathsf{ReLU}$ \\
Convolution & $3 \times 3 \times 32$ & $\mathsf{ReLU}$ \\
Convolution & $3 \times 3 \times 64$ & $\mathsf{ReLU}$ \\
MaxPooling & $2 \times 2$ & -- \\
Convolution & $3 \times 3 \times 64$ & $\mathsf{ReLU}$ \\
Convolution & $3 \times 3 \times 64$ & $\mathsf{ReLU}$ \\
MaxPooling & $2 \times 2$ & -- \\
Flatten & -- & -- \\
Fully Connected & $200$ & $\mathsf{ReLU}$ \\
Dropout & 0.5 & -- \\
Fully Connected & $200$ & $\mathsf{ReLU}$ \\
Fully Connected & $10$ & $\mathsf{softmax}$ \\
\hline \hline
\end{tabular}
\caption{Architecture for the GTSRB-$\mathtt{sota}$ classifier.}
\label{tab:gtsrb-sota}
\end{table}
\begin{table}[h]
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{c|c|c}
\hline \hline
Type & Parameter & Activation \\
\hline
Input & $27 \times 54 \times 1$ & -- \\
Flatten & -- & -- \\
Fully Connected & $20$ & $\mathsf{ReLU}$ \\
Fully Connected & $10$ & $\mathsf{ReLU}$ \\
Fully Connected & $1$ & -- \\
\hline \hline
\end{tabular}
\caption{Architecture for the TaxiNet regression model.}
\label{tab:taxi-arch}
\end{table}
\newpage
\begin{table}[h]
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{c|c|c}
\hline \hline
Layer Type & Parameter & Activation \\
\hline
Input & $\mathtt{h} \times \mathtt{w} \times \mathtt{c}$ & -- \\
Flatten & -- & -- \\
Fully Connected & $10$ & $\mathsf{ReLU}$ \\
Fully Connected & $10$ & $\mathsf{ReLU}$ \\
Fully Connected & $10$ / $1$ & $\mathsf{softmax}$ / -- \\
\hline \hline
\end{tabular}
\caption{Architecture for the $\mathtt{Dense}$ model.}
\label{tab:dense}
\end{table}
\begin{table}[h]
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{c|c|c}
\hline \hline
Layer Type & Parameter & Activation \\
\hline
Input & $\mathtt{h} \times \mathtt{w} \times \mathtt{c}$ & -- \\
Flatten & -- & -- \\
Fully Connected & $30$ & $\mathsf{ReLU}$ \\
Fully Connected & $30$ & $\mathsf{ReLU}$ \\
Fully Connected & $10$ / $1$ & $\mathsf{softmax}$ / -- \\
\hline \hline
\end{tabular}
\caption{Architecture for the $\mathtt{Dense\ (large)}$ model.}
\label{tab:dense-large}
\end{table}
\begin{table}[h]
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{c|c|c}
\hline \hline
Layer Type & Parameter & Activation \\
\hline
Input & $\mathtt{h} \times \mathtt{w} \times \mathtt{c}$ & -- \\
Convolution & $3 \times 3 \times 4$ & -- \\
Convolution & $3 \times 3 \times 4$ & -- \\
Fully Connected & $20$ & $\mathsf{ReLU}$ \\
Fully Connected & $10$ / $1$ & $\mathsf{softmax}$ / -- \\
\hline \hline
\end{tabular}
\caption{Architecture for the $\mathtt{CNN}$ model.}
\label{tab:cnn}
\end{table}
\section{Introduction}
Shape fluctuation in nuclei is an important aspect of quantum many-body physics
and is known to appear in transitional regions of the nuclear chart,
for example, as shape coexistence and $\gamma$-soft nuclei\cite{heyde11}.
To investigate such shape fluctuation phenomena,
we need a theory to go beyond
the mean field.
One of the promising methods is a five-dimensional quadrupole collective
Hamiltonian method.
Recently, the collective Hamiltonian method has been developed with
modern energy density functionals (EDF)\cite{prochniak04,niksic09,delaroche10},
where the potential term is obtained by the constrained EDF
and the collective inertial functions
in the kinetic terms are estimated by the Inglis-Belyaev cranking formula
at each ($\beta,\gamma$) quadrupole deformation parameter.
Another progress has been made by deriving collective inertial functions
by the local quasiparticle random-phase approximation (QRPA)
including the dynamical residual interaction with
the pairing plus quadrupole (P+Q) force\cite{hinohara10, hinohara11,hinohara12}
based on the adiabatic self-consistent collective coordinate
method\cite{matsuo00, nakatsukasa12, nakatsukasa16}.
Our goal is to combine these two approaches, that is,
to derive collective inertial functions by the local QRPA
with Skyrme EDF toward microscopic and non-empirical description of
the collective Hamiltonian.
Solving the local QRPA requires constructing a huge QRPA matrix
(with a dimension of about $10^6$ for deformed nuclei)
and diagonalizing it
at each point of the ($\beta$,$\gamma$) plane.
This is computationally very demanding.
With the help of the finite amplitude method (FAM)\cite{nakatsukasa07,
inakura09, avogadro11, stoitsov11,liang13,avogadro13,kortelainen15},
we have constructed a FAM-QRPA code
for triaxial nuclear shapes.
Below, we will report quadrupole strength functions
for a triaxially deformed superfluid nucleus $^{188}$Os.
Then, we will show applications of local FAM-QRPA to rotational moment
of inertia on the ($\beta,\gamma$) plane in a transitional nucleus $^{106}$Pd.
\section{Finite amplitude method}
The basic equation of FAM is the following linear-response equation
with an external field,
\begin{subequations}\label{eq:FAM}
\begin{align}
(E_\mu + E_{\nu} - \omega) X_{\mu\nu}(\omega) + \delta H^{20}_{\mu\nu}(\omega) &= -F^{20}_{\mu\nu}, \\
(E_\mu + E_{\nu} + \omega) Y_{\mu\nu}(\omega) + \delta H^{02}_{\mu\nu}(\omega) &= -F^{02}_{\mu\nu},
\end{align}\end{subequations}
where $X_{\mu\nu}(\omega)$ and $Y_{\mu\nu}(\omega)$ are
FAM amplitudes at an external frequency $\omega$,
$E_{\mu(\nu)}$ are one-quasiparticle energies, and $\delta H^{20,02}$
$(F^{20,02})$ are two-quasiparticle components of
an induced Hamiltonian (external field).
Details of the FAM formulation
can be found in Refs.\cite{nakatsukasa07, avogadro11}.
The important aspect of FAM is to use the linear response equation
and to replace a functional derivative
in the residual interaction with a finite difference form.
This avoids the most time-consuming computations of QRPA, namely
constructing the QRPA matrix and diagonalizing it.
The solution of the FAM amplitudes in Eq.~(\ref{eq:FAM}) can be obtained by an
iterative procedure
(the modified Broyden method\cite{baran08} for our case).
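As an illustration of this iterative procedure, the following toy sketch (in Python) solves Eq.~(\ref{eq:FAM}) for a schematic separable residual interaction, $\delta H^{20}=\delta H^{02}=\kappa q\rho$ with $\rho=\sum_k q_k(X_k+Y_k)$, standing in for the full Skyrme induced fields; plain linear mixing replaces the modified Broyden scheme used in the actual code, and all arrays run over two-quasiparticle pairs $k=(\mu\nu)$.
\begin{verbatim}
import numpy as np

def solve_fam(E2, q, f20, f02, omega, kappa=-0.5,
              mix=0.3, tol=1e-8, max_iter=1000):
    # E2[k] = E_mu + E_nu ; omega may be complex (smearing)
    X = np.zeros_like(f20, dtype=complex)
    Y = np.zeros_like(f02, dtype=complex)
    for _ in range(max_iter):
        rho = q @ (X + Y)                # induced density (toy ansatz)
        dH20 = kappa * q * rho           # induced fields
        dH02 = kappa * q * rho
        X_new = -(f20 + dH20) / (E2 - omega)
        Y_new = -(f02 + dH02) / (E2 + omega)
        if max(abs(X_new - X).max(), abs(Y_new - Y).max()) < tol:
            return X_new, Y_new
        X = (1 - mix) * X + mix * X_new  # linear mixing
        Y = (1 - mix) * Y + mix * Y_new
    raise RuntimeError("FAM iteration did not converge")
\end{verbatim}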
We construct a FAM-QRPA code for iteratively solving the FAM equation (\ref{eq:FAM})
in three-dimensional (3D) Cartesian coordinate mesh.
More details of our 3D FAM-QRPA code can be found in Ref.\cite{washiyama17}.
Because of a specific reflection symmetry\cite{bonche85,bonche87}
in the basis states,
the FAM equation is solved only in the $x>0,y>0,z>0$ octant.
We define the external isoscalar quadrupole operators with $K$ quantum numbers
as $Q^{(\pm)}_{2K}=(f_{2K}\pm f_{2-K})/\sqrt{2}$,
where $f_{2K} = (eZ/A)\sum^A_{i=1} r^2_i Y_{2K}(\hat{\boldsymbol{r}}_i)$
for $K>0$ and $Q^{(+)}_{20}=f_{20}$.
In Ref.\cite{washiyama17}, we have confirmed the validity of
our 3D FAM-QRPA code by comparing our results
for axially symmetric nuclei with those in Refs.\cite{stoitsov11,kortelainen15}.
\section{Results}
\begin{figure}[tb]
\centering
\includegraphics[width=0.64\linewidth]{FAM_strength.skm.Os188.680.19.ISQ.Ew20.180221.eps}
\caption{Strength function $S(\omega)$ of different quadrupole modes
as a function of frequency $\omega$ for $^{188}$Os.
}
\label{fig:strength}
\end{figure}
We have reported the quadrupole strength functions of
triaxially deformed superfluid nuclei, $^{110}$Ru and $^{190}$Pt,
in Ref.\cite{washiyama17}.
Here, we show another example of triaxially deformed
superfluid nucleus, $^{188}$Os. The ground state is found
to be at $\beta=0.216,\gamma=20.7^\circ$ and superfluid in neutrons
when we use $19^3$ mesh, 1360 basis states, SkM$^*$ functionals,
and volume pairing with the pairing strength
that reproduces the neutron pairing gap of 1.25\,MeV in $^{120}$Sn.
We calculate the strength functions of the quadrupole modes as
\begin{align}
S(\omega) = -\frac{1}{\pi} \text{Im} \left(\sum_{\mu<\nu} F^{20*}_{\mu\nu} X_{\mu\nu}(\omega)+F^{02*}_{\mu\nu} Y_{\mu\nu}(\omega)\right),
\end{align}
obtained from converged FAM amplitudes
at 200 $\omega$ points between $\omega=0$ and 50\,MeV
with $\Delta \omega =0.25$\,MeV.
The imaginary part of the frequency (of 0.5\,MeV)
is added as smearing width.
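Given converged amplitudes, the scan over the frequency grid is then straightforward; the sketch below reuses $\mathtt{solve\_fam}$ from the toy example above, with the smearing entering through the imaginary part of the frequency.
\begin{verbatim}
import numpy as np

def strength_function(E2, q, f20, f02, omegas, gamma=0.5):
    S = []
    for w in omegas:                       # e.g. np.arange(0, 50, 0.25)
        X, Y = solve_fam(E2, q, f20, f02, w + 1j * gamma)
        s = np.sum(np.conj(f20) * X + np.conj(f02) * Y)
        S.append(-s.imag / np.pi)          # S(w) = -(1/pi) Im(...)
    return np.array(S)
\end{verbatim}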
Figure \ref{fig:strength} shows the strength functions $S(\omega)$ of
quadrupole modes for different $K$ quantum numbers
as a function of frequency $\omega$ for $^{188}$Os.
We observe five $K$ splittings in the strength and three peaks
near zero energy (two in $K=1$ and one in $K=2$) associated with
the spurious modes of nuclear rotation around the $x$, $y$, and $z$ axes.
These are seen only in triaxial nuclei.
The energy-weighted sum rule
of the $K=0,2$ modes is well satisfied:
98.6\% for both modes when the strength is summed up to $\omega=50$\,MeV.
Next, we consider the evaluation of the collective
inertial functions from the local QRPA with our 3D FAM-QRPA framework.
We start from the evaluation of the rotational moment of inertia.
Recently, the relation between the Thouless-Valatin inertia $M_{\textrm{NG}}$ of
a Nambu-Goldstone (NG) mode (spurious mode)
and the FAM strength function for the momentum operator
$\hat{\mathcal{P}}_{\textrm{NG}}$ of this NG mode as an external field
at zero frequency was obtained in Ref.\cite{hinohara15}. This is expressed as
\begin{align}\label{eq:NGmode}
S(F=\hat{\mathcal{P}}_{\textrm{NG}}, \omega=0) &= \sum_{\mu<\nu}[F^{20*}_{\mu\nu} X_{\mu\nu}(\omega=0)+F^{02*}_{\mu\nu} Y_{\mu\nu}(\omega=0)] \notag \\
&=-M_{\textrm{NG}}\, .
\end{align}
For the case of nuclear rotation as an NG mode,
the corresponding momentum operator is the angular momentum operator,
and its Thouless-Valatin inertia is the rotational moment of inertia.
In Ref.\cite{petrik17},
this relation was used to obtain the Thouless-Valatin rotational
moment of inertia with an axially symmetric FAM-QRPA.
We use this relation to evaluate
the Thouless-Valatin rotational moment of inertia.
We perform the constrained HFB at each ($\beta,\gamma$) point,
and then perform a FAM-QRPA calculation
on top of these constrained HFB states.
This is the constrained HFB + local FAM-QRPA calculation.
In this case, we need to compute the FAM strength function at only $\omega=0$
without smearing width.
We take $^{106}$Pd as an example of such calculations.
To perform the constrained HFB + local FAM-QRPA at each ($\beta,\gamma$) point,
we use $17^3$ mesh, 1120 basis states, SkM$^*$ functionals,
and volume pairing with the pairing strength
that reproduces odd-even mass staggering in $^{106}$Pd.
\begin{figure}[tb]
\includegraphics[width=0.495\linewidth]{FAM_Ew20_p280n240_moi_x.skm.Pd106.560.17.interp.180222.eps}
\hspace{0.1em}
\includegraphics[width=0.495\linewidth]{FAM_Ew20_p280n240_moi_x_TVbyIB.skm.Pd106.560.17.interp.180222.eps}
\caption{(Left) Thouless-Valatin rotational moment of inertia about the $x$ axis
$\mathcal{J}^{\textrm{TV}}_1(\beta,\gamma)$
as functions of $\beta$ and $\gamma$ for $^{106}$Pd.
(Right) Ratio of Thouless-Valatin rotational moment of inertia
$\mathcal{J}^{\textrm{TV}}_1(\beta,\gamma)$
to Inglis-Belyaev one $\mathcal{J}^{\textrm{IB}}_1(\beta,\gamma)$
as functions of $\beta$ and $\gamma$ for $^{106}$Pd.
White circles on the ($\beta,\gamma$) plane represent the points at which
we perform constrained HFB + local FAM-QRPA.
}\label{fig:moi}
\end{figure}
The left panel of Fig.~\ref{fig:moi} shows the
Thouless-Valatin rotational moment of inertia $\mathcal{J}_1(\beta,\gamma)$
about the $x$ axis for $^{106}$Pd.
The value of the moment of inertia becomes larger with larger $\beta$,
as it should. We find some structure, such as a local maximum of the
moment of inertia at $\beta\sim 0.4,\gamma\sim 5^\circ$,
which is due to a change in the microscopic structure, especially
in the pairing correlations.
The right panel of Fig.~\ref{fig:moi} shows the ratio of the Thouless-Valatin
moment of inertia to the Inglis-Belyaev one,
the latter calculated by neglecting the residual interaction in FAM.
At most of the calculated ($\beta,\gamma$) points, this ratio exceeds unity.
This indicates the enhancement of the moment of inertia
by the residual interaction in FAM-QRPA calculations.
This is consistent with previous investigations\cite{hinohara10,hinohara12}.
\section{Summary}
Our goal is to construct the 5D quadrupole collective
Hamiltonian based on Skyrme EDF.
Toward this goal, we have first developed a 3D Skyrme FAM-QRPA framework
for triaxial nuclear shapes.
We have obtained strength functions of quadrupole modes in triaxial superfluid
nucleus $^{188}$Os. These strengths show five $K$ splittings and
three spurious peaks of nuclear rotations at zero energy as specific features
of triaxial nuclei.
Then, we have extended our 3D Skyrme FAM-QRPA into the local one to estimate
the collective inertial functions for the 5D quadrupole collective Hamiltonian.
We have calculated the Thouless-Valatin moment of inertia in $^{106}$Pd
on the ($\beta,\gamma$) plane with the constrained HFB + local FAM-QRPA.
We have observed a significant enhancement of Thouless-Valatin
moment of inertia from the Inglis-Belyaev one.
An important purpose of this work is to obtain FAM-QRPA results
at a small computational cost.
We obtained the strength function at one $\omega$ point
in Fig.~\ref{fig:strength}
in about 8 minutes on average,
and one value of the moment of inertia at one ($\beta,\gamma$) point in
Fig.~\ref{fig:moi} in about 2 minutes,
with parallelization over 16 threads.
The estimation of the collective inertial functions in the vibrational
kinetic term in the quadrupole collective Hamiltonian with the 3D FAM-QRPA is in progress.
Then, the quadrupole collective Hamiltonian with the collective
inertial functions by the constrained HFB + local FAM-QRPA
based on Skyrme EDF will be constructed in the near future.
\vspace{1em}
This work was funded by ImPACT Program of Council for Science,
Technology and Innovation (Cabinet Office, Government of Japan).
Numerical calculations were performed in part using the COMA (PACS-IX)
at the Center for Computational Sciences, University of Tsukuba, Japan.
\section{Introduction}
We start off by recalling the definition of Hall-Littlewood functions in the context of a general symmetrizable Kac-Moody algebra (see, for example,~\cite{carter}). Let $\mathfrak g$ be such an algebra with Cartan subalgebra $\mathfrak h$. Let $\Phi\subset \mathfrak h^*$ be its root system and $\Phi^+$ be the subset of positive roots, $\alpha\in\Phi$ having multiplicity $m_\alpha$. Finally, let $\lambda\in\mathfrak h^*$ be an integral dominant weight and $W$ be the Weyl group with length function $l$. The corresponding Hall-Littlewood function is then defined as
\begin{equation}\label{def}
P_\lambda=\frac 1{W_\lambda(t)}\sum\limits_{w\in W} w\left(e^\lambda \prod\limits_{\alpha\in\Phi^+}\left(\frac{1-t e^{-\alpha}}{1-e^{-\alpha}}\right)^{m_\alpha}\right).
\end{equation}
Here $W_\lambda(t)$ is the Poincaré series of the stabilizer $W_\lambda\subset W$, i.e. $$W_\lambda(t)=\sum\limits_{w\in W_\lambda}t^{l(w)}$$ (in particular, $W_\lambda(t)=1$ for regular $\lambda$).
Both sides of~(\ref{def}) should be viewed as elements of $\mathfrak R_t=\mathfrak R\otimes\mathbb{Z}[t]$, where $\mathfrak R$ is the ring of characters the support of which is contained in the union of a finite number of lower sets with respect to the standard ordering on $\mathfrak h^*$. It is easy to show that $P_\lambda$ is indeed a well-defined element of $\mathfrak R_t$ (see, for instance,~\cite{vis1}).
The definition~(\ref{def}) could be given only in terms of the corresponding root system eliminating any mention of Lie algebras and thus giving Hall-Littlewood functions a purely combinatorial flavor. The language of Kac-Moody algebras and their representations is, however, very natural when dealing with these objects.
It is worth noting that $P_\lambda$ specializes to the Kac-Weyl formula for the character of the irreducible representation $L_\lambda$ with highest weight $\lambda$ when $t=0$ and to $\sum_{w\in W}e^{w\lambda}$ when $t=1$. Thus it can be viewed as an interpolation between the two.
Another important observation is that once we've chosen a basis $\gamma_1,\ldots,\gamma_n$ in the lattice of integral weights, the $P_\lambda$ turn into formal Laurent series in corresponding variables $x_1,\ldots,x_n$ with coefficients in $\mathbb{Z}[t]$ (as does any other element of $\mathfrak R_t$). In the case of $\mathfrak{g}$ having finite type these Laurent series are, in fact, Laurent polynomials (the characters $P_\lambda$ have finite support) and are often referred to as ``Hall-Littlewood polynomials''. We are, however, primarily interested in the affine case.
Our main result is a new combinatorial formula for the functions $P_\lambda$ in the case of $\mathfrak g=\mathfrak{\widehat{sl}_n}$ (root system of type $\tilde A_{n-1}$). One geometrical motivation for considering these expressions is as follows. Consider the group $\widehat{G}=\widetilde{SL}_n(\mathbb{C}[t,t^{-1}]),$ the central extension of the loop group of $SL_n(\mathbb{C})$ defined in the standard way. Next, consider the flag variety $F=\widehat{G}/B_+$, where $B_+$ is the Borel subgroup of $\widehat{G}$. On $F$ we have the sheaf of differentials $\Omega^*$ as well as the equivariant linear bundle $\mathcal L_\lambda$. It can be shown that the equivariant Euler characteristic of the sheaf $\Omega^*\otimes\mathcal L_\lambda$, namely $$\sum\limits_{i,j\ge 0}(-1)^i t^j\charac(H^i(F,\Omega^j\otimes\mathcal L_\lambda))$$ is precisely $W_\lambda(-t)P_\lambda(-t)$. In fact, in order to get rid of the factor $W_\lambda(-t)$ for singular $\lambda$, one may consider the corresponding parabolic flag variety with its sheaf of differential forms twisted by the corresponding equivariant linear bundle. In this context affine Hall-Littlewood functions appear, for example, in~\cite{gro}.
Another topic in which Hall-Littlewood functions of type $\tilde A$ appear is the representation theory of the double affine Hecke algebra (see~\cite{che}).
Our formula turns out to be similar in spirit to the combinatorial formula for classic Hall-Littlewood functions, that is of type $A$. The latter formula, found already in~\cite{mac}, is a sum over Gelfand-Tsetlin patterns, combinatorial objects enumerating a basis in $L_\lambda$. The formula we present is the sum over a basis in an irreducible integrable representation of the affine algebra $\widehat{\mathfrak{sl}}_n$, which was obtained in the works~\cite{fs,fjlmm1,fjlmm2}. Moreover, although at first the combinatorial set enumerating the latter basis seems to be very different from the set of Gelfand-Tsetlin patterns, a certain correspondence may be constructed which lets one then define the summands in the formula similarly to the classic case.
We consider it essential to review the finite case as well as the affine one in order to both illustrate the more complicated affine case and to emphasize the deep analogies between the two cases. For this last reason we will deliberately introduce certain conflicting notations, i.e. analogous objects in the finite and affine cases may be denoted by the same symbol. However, which case is being considered should always be clear from the context.
Our approach to proving the formula is based on Brion's theorem for convex polyhedra, originally due to~\cite{bri}. This formula expresses the sum of exponentials of integer points inside a rational polyhedron as a sum over the polyhedron's vertices.
Let us first explain how the approach works in the classic case.
The set of Gelfand-Tsetlin patterns associated with weight $\lambda$ may be viewed as the integer points of the Gelfand-Tsetlin polytope. This means that the character of $L_\lambda$ is in fact a sum of certain exponentials of these integer points and may thus be computed via Brion's theorem. It turns out that the contributions of most vertices are zero, while the remaining vertices provide the summands in the classic formula for the Schur polynomial. This scenario is discussed in the paper~\cite{me1}.
Further, the mentioned combinatorial formula for Hall-Littlewood polynomials of type $A$ implies that, in this case, $P_\lambda$ is the sum of these same exponentials but this time with coefficients which are polynomials in $t$. We derive and employ a generalization of Brion's theorem which expresses weighted sums of exponentials of a certain type as, again, a sum over the vertices. Our weights turn out to be of this very type and we may thus apply our weighted version of Brion's theorem. Once again most vertices contribute zero, while the remaining contributions add up to give formula~(\ref{def}).
Now it can be said that in the case of $\widehat{\mathfrak{sl}}_n$ the situation is similar. The set parametrizing the basis vectors can be, again, viewed as the set of integer points of a ``convex polyhedron'', this time, however, infinite-dimensional. Moreover, the summand corresponding to each point is once more a certain exponential. One can prove a Brion-type formula for this infinite-dimensional polyhedron, which expresses the sum of exponentials as a sum over the vertices. Again, the contributions of most vertices are zero and the remaining contributions add up to the Kac-Weyl formula for $\charac{L_\lambda}$. This scenario is presented in~\cite{me2}. (To be accurate,~\cite{me2} deals with the Feigin-Stoyanovsky subspace and its character but the transition to the whole representation can be carried out rather simply, as shown in~\cite{fjlmm2}.)
Finally, our formula for affine Hall-Littlewood functions is, just like in the classic case, a sum of the same exponentials of integer points of the same polyhedron but with coefficients which are polynomials in $t$. We show that using our weighted version of Brion's theorem we may decompose this sum as a sum over the vertices. The same distinguished set of vertices will provide nonzero contributions which add up to formula~(\ref{def}), proving the result.
The text below is structured as follows. In Part~\ref{part1} we recall the preliminary results mentioned in the introduction and give the statement of our main result. Then we introduce our generalization of Brion's theorem and explain in more detail how it can be applied to proving our formula. In Part~\ref{tools} we develop the combinatorial arsenal needed to implement our proof. We introduce a family of polyhedra naturally generalizing Gelfand-Tsetlin polytopes and prove two key facts concerning those polyhedra. From the author's viewpoint, the topics discussed in Part~\ref{tools} are of some interest in their own right. In the last part we show how to obtain the weighted Brion-type formula for the infinite-dimensional polyhedron and then prove our central theorem concerning the contributions of vertices.
\part{Preliminaries, The Result and Idea of Proof}\label{part1}
\section{The Combinatorial Formula for Finite Type $A$}\label{gtcomb}
Let $\mathfrak{g}=\mathfrak{sl}_n$ and $\lambda\in\mathfrak h^*$ be an integral dominant nonzero weight. Let $$\lambda=(a_1,\ldots,a_{n-1})$$ with respect to a chosen basis of fundamental weights. In the appropriate basis $\lambda$ has coordinates $$\lambda_i=a_i+\ldots+a_{n-1}.$$ This will be our basis of choice in the lattice of integral weights, and we will view characters as Laurent polynomials in the corresponding variables $x_1,\ldots,x_{n-1}$.
The Gelfand-Tsetlin basis in $L_\lambda$ is parametrized by the following objects known as Gelfand-Tsetlin patterns (abbreviated as GT-patterns). Each such pattern is a number triangle $\{s_{i,j}\}$ with $0\le i\le n-1$ and $1\le j\le n-i$. The top row is given by $s_{0,j}=\lambda_j$ (with $s_{0,n}=0$) while the other elements are arbitrary integers satisfying the inequalities
\begin{equation}\label{gt}
s_{i,j}\ge s_{i+1,j}\ge s_{i,j+1}.
\end{equation}
The standard way to visualize these patterns is the following:
\begin{center}
\begin{tabular}{ccccccc}
$s_{0,1}$ &&$ s_{0,2}$&& $\ldots$ && $s_{0,n}$\\
&$s_{1,1}$ &&$ \ldots$&& $s_{1,n-1}$ &\\
&&$\ldots$ &&$ \ldots$& &\\
&&&$s_{n-1,1}$ &&&
\end{tabular}
\end{center}
Thus each number is no greater than the one immediately to its upper-left and no less than the one immediately to its upper-right, except for the numbers in row $0$ (row $i$ is comprised of the numbers $s_{i,*}$).
Let us denote the set of GT-patterns by $\mathbf{GT}_\lambda$. For $A\in \mathbf{GT}_\lambda$ let $v_A$ be the corresponding basis vector; $v_A$ is a weight vector with weight $\mu_A$. If $A=(s_{i,j})$, then in the chosen basis $\mu_A$ has coordinates $$(\mu_A)_i=\sum\limits_j s_{i-1,j}-\sum\limits_j s_{i,j}.$$
Each $A\in \mathbf{GT}_\lambda$ also determines a polynomial in $t$ denoted $p_A$. We have
\begin{equation}\label{hlcoeff}
p_A=\prod\limits_{l=1}^{n-1} (1-t^l)^{d_l}
\end{equation}
where $d_l$ is the following statistic: it is the number of pairs $(i,a)$ with $1\le i\le n-1$ and $a\in\mathbb{Z}$ such that the integer $a$ occurs $l$ times in row $i$ of $A$ and $l-1$ times in row $i-1$. The combinatorial formula is then as follows.
\begin{theorem}\label{combfin}
$$P_\lambda=\sum\limits_{A\in \mathbf{GT}_\lambda} p_A e^{\mu_A}.$$
\end{theorem}
This theorem is a direct consequence of the branching rule for classic Hall-Littlewood polynomials, which can be found in~\cite{mac}. One needs to note, however, that the definition in~\cite{mac} corresponds to the case of $\mathfrak{gl}_n$ rather than $\mathfrak{sl}_n$. Fortunately, this adjustment is fairly simple to make: the polynomial we have obtained is, in the notations of~\cite{mac}, just $$P_{(\lambda_1,\ldots,\lambda_{n-1},0)}(x_1,\ldots,x_{n-1},1;t).$$
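For example, let $n=2$ and $\lambda=(2)$, so that $\lambda_1=2$. There are three GT-patterns: the top row is $(2,0)$ and $s_{1,1}\in\{0,1,2\}$. For $s_{1,1}=1$ the integer $1$ occurs once in row $1$ and does not occur in row $0$, whence $d_1=1$ and $p_A=1-t$; for $s_{1,1}=0$ and $s_{1,1}=2$ every value occurring in row $1$ occurs in row $0$ as well, whence all the $d_l$ vanish and $p_A=1$. Theorem~\ref{combfin} thus yields $$P_\lambda=x_1^2+(1-t)x_1+1,$$ which is indeed $P_{(2,0)}(x_1,x_2;t)=x_1^2+x_2^2+(1-t)x_1x_2$ specialized at $x_2=1$.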
\section{The Monomial Basis}\label{monbasis}
Just like the combinatorial formula for type $A$ is a sum over the elements of a basis in the irreducible $\mathfrak{sl}_n$-module, our formula for type $\tilde{A}$ is a sum over the elements of a certain basis in the irreducible $\widehat{\mathfrak{sl}}_n$-module. This basis was constructed by Feigin, Jimbo, Loktev, Miwa and Mukhin in the papers~\cite{fjlmm1,fjlmm2}, in this section we give a concise review of its properties.
Let $\lambda$ be an integral dominant $\widehat{\mathfrak{sl}}_n$-weight with coordinates $$(a_0,\ldots,a_{n-1})$$ with respect to a chosen basis of fundamental weights. The level of $\lambda$ is $k=\sum a_i$.
The basis in $L_\lambda$ is parametrized by the elements of the following set $\mathbf{\Pi}_\lambda$. Each element $A$ of $\mathbf{\Pi}_\lambda$ is a sequence of integers $(A_i)$ infinite in both directions which satisfies the following three conditions.
\begin{enumerate}[label=\roman*)]
\item For $i\gg 0$ we have $A_i=0$.
\item For $i\ll 0$ we have $A_{i}=a_{i\bmod n}$.
\item For all $i$ we have $A_i\ge 0$ and $A_{i-n+1}+A_{i-n+2}+\ldots+A_i\le k$ (sum of any $n$ consecutive terms).
\end{enumerate}
The basis vector corresponding to $A\in\mathbf{\Pi}_\lambda$ is a weight vector with weight $\mu_A$. We will need an explicit description of $\mu_A$. First, observe that, since $\mu_A$ is in the support of $\charac(L_\lambda)$, the weight $\mu_A-\lambda$ is in the root lattice. In other words, we may fix a basis in the root lattice and describe $\mu_A-\lambda$ with respect to this basis. If $\alpha_0,\ldots,\alpha_{n-1}$ are the simple roots and $\delta$ is the imaginary root, then the basis consists of the roots $$\gamma_i=\alpha_1+\ldots+\alpha_i$$ for $1\le i\le n-1$ and the root $-\delta$.
Now consider $T^0\in\mathbf{\Pi}_\lambda$ given by $T^0_i=0$ when $i>0$ and $T^0_i=a_{i\bmod n}$ when $i\le 0$. The coordinates of $\mu_A-\lambda$ are determined by the termwise difference $A-T^0$ in the following way. The coordinate corresponding to $\gamma_i$ is equal to
\begin{equation}\label{zweight}
\sum\limits_{q\in\mathbb{Z}}(A_{q(n-1)+i}-T^0_{q(n-1)+i}),
\end{equation}
while the coordinate corresponding to $-\delta$ is
\begin{equation}\label{qweight}
\sum\limits_{i\in\mathbb{Z}}\left\lceil\frac{i}{n-1}\right\rceil(A_i-T^0_i).
\end{equation}
For example, one may now check that $\mu_{T^0}=\lambda$, i.e. $v_{T^0}$ is the highest weight vector.
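More interestingly, let $n=2$ and $\lambda=(1,0)$, so that $k=1$ and $T^0$ is the sequence with $T^0_i=1$ for even $i\le 0$ and $T^0_i=0$ otherwise. Let $A$ coincide with $T^0$ except that $A_0=0$ and $A_1=1$, i.e. the rightmost $1$ is moved one position to the right; conditions i)--iii) are verified immediately. The difference $A-T^0$ has a single $-1$ at position $0$ and a single $1$ at position $1$, so the sum~(\ref{zweight}) equals $-1+1=0$, while~(\ref{qweight}) equals $0\cdot(-1)+1\cdot1=1$. Hence $\mu_A=\lambda-\delta$.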
We will refrain from giving an explicit definition of the vectors $v_A$ themselves, only pointing out that the basis is monomial. That means that every $v_A$ is obtained from the highest weight vector by the action of a monomial in the root spaces of the algebra $\widehat{\mathfrak{sl}}_n$. Thus this basis is of a completely different nature than the Gelfand-Tsetlin basis, which makes the deep similarities between the affine and finite cases presented below, in a way, surprising.
\section{The Main Result}
One of the keys to our main result is the transition from infinite sequences comprising $\mathbf{\Pi}_\lambda$ to Gelfand-Tsetlin patterns (of sorts), which we mentioned in the introduction. It should be noted that analogues of GT patterns for the (quantum) affine scenario have been considered in the papers~\cite{jimbo, ffnl, tsym}; our construction, however, differs substantially.
The object we associate with every $A\in\mathbf{\Pi}_\lambda$ is an infinite array of numbers $s_{i,j}(A)$, indexed by arbitrary integers $i$ and $j$ and satisfying the inequalities~(\ref{gt}) for all $i,j$. In general, we will refer to arrays of real numbers $(s_{i,j})$ satisfying~(\ref{gt}) as ``plane-filling GT-patterns". Similarly to classic GT-patterns, we visualize them as follows.
\begin{center}
\begin{tabular}{ccccccccc}
&$\ldots$ &&$\ldots$ && $\ldots$ && $\ldots$&\\
$\ldots$ &&$ s_{-1,-1}$&& $s_{-1,0}$ && $s_{-1,1}$&&\ldots\\
&$\ldots$ &&$ s_{0,-1}$&& $s_{0,0}$ &&\ldots&\\
$\ldots$ &&$ s_{1,-2}$&& $s_{1,-1}$ && $s_{1,0}$&&\ldots\\
&$\ldots$ &&$\ldots$ && $\ldots$ && $\ldots$&
\end{tabular}
\end{center}
To generalize the definition of $T^0$, for any $m\in\mathbb{Z}$ let $T^m$ be given by $T^m_i=0$ when $i>mn$ and $T^m_i=a_{i\bmod n}$ when $i\le mn$. Then, by definition,
\begin{equation}\label{gtdef}
s_{i,j}(A)=\sum\limits_{l\le in+j(n-1)}(A_l-T_l^{i+j})-\sum_{l=in+j(n-1)+1}^{(i+j)n}T^{i+j}_l.
\end{equation}
Note that the first sum on the right has a finite number of nonzero summands and the second sum is nonzero only when $j>0$. A way to rephrase this definition is to say that we take the sum of all terms of the sequence obtained from $A$ by setting all terms with number greater than $in+j(n-1)$ to zero and then subtracting $T^{i+j}$.
We will also use definition~(\ref{gtdef}) in the more general context of $A$ being any sequence satisfying i) and ii) from the definition in the previous section but not necessarily iii).
\begin{proposition}
If $A\in\mathbf{\Pi}_\lambda$, then the array $(s_{i,j}(A))$ constitutes a plane-filling GT-pattern.
\end{proposition}
\begin{proof}
One observes that $$s_{i,j}(A)-s_{i-1,j+1}(A)=A_{in+j(n-1)}\ge 0$$ and
\begin{multline*}
s_{i,j}(A)-s_{i-1,j}(A)=A_{(i-1)n+j(n-1)+1}+\ldots+A_{in+j(n-1)}+\\-T^{i+j}_{(i+j-1)n+1}-\ldots-T^{i+j}_{(i+j)n}=A_{(i-1)n+j(n-1)+1}+\ldots+A_{in+j(n-1)}-k\le 0.
\end{multline*}
In other words, the numbers $s_{i,j}(A)$ satisfy the inequalities~(\ref{gt}).
\end{proof}
The proof shows that $s_{i,j}(A)=s_{i-1,j+1}(A)$ if and only if $A_{in+j(n-1)}=0$ and $s_{i,j}(A)=s_{i-1,j}(A)$ if and only if $A_{in+(j-1)(n-1)}+\ldots+A_{in+j(n-1)}=k$ (sum of $n$ consecutive terms). This observation should be kept in mind when dealing with plane-filling GT-patterns $(s_{i,j}(A))$.
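As an example, consider again $n=2$ and $\lambda=(1,0)$, so that $k=1$, and take $A=T^0$. Definition~(\ref{gtdef}) produces the following fragment of the plane-filling GT-pattern (rows $0$ and $1$, $-4\le j\le 2$):
$$\begin{array}{c|ccccccc}
j&-4&-3&-2&-1&0&1&2\\
\hline
s_{0,j}(T^0)&2&1&1&0&0&-1&-2\\
s_{1,j}(T^0)&2&1&1&0&-1&-2&-3
\end{array}$$
In accordance with~(\ref{shift}) one has $s_{0,j}=s_{1,j-2}-1$. Moreover, the equality $s_{1,-1}=s_{0,0}$ reflects $T^0_1=0$, while $s_{1,-2}=s_{0,-2}$ reflects $T^0_{-1}+T^0_0=k$.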
Now to give the statement of our main theorem we associate with every sequence $A$ satisfying i) and ii) a weight $p(A)$ of the form $\prod_{l=1}^n (1-t^l)^{d_l}$. The integers $d_l$ are defined in terms of the associated plane-filling GT-pattern. Once again, to define $d_l$ we consider the set of pairs of integers $(x,i)$ such that the number $x$ appears $l-1$ times in row $i-1$ and $l$ times in row $i$ of $(s_{i,j}(A))$. The set of such pairs is, however, likely to be infinite and $d_l$ is, in fact, the size of a factor set with respect to a certain equivalence relation which we now describe.
One of the key features of the array $(s_{i,j})=(s_{i,j}(A))$ is the easily verified equality
\begin{equation}\label{shift}
s_{i-n+1,j+n}=s_{i,j}-k
\end{equation}
holding for any $i,j$. Now consider the set $X_l$ of pairs $(i,j)$ for which $$s_{i-1,j}\neq s_{i-1,j+1}=\ldots=s_{i-1,j+l-1}\neq s_{i-1,j+l}$$ and $$s_{i,j-1}\neq s_{i,j}=\ldots=s_{i,j+l-1}\neq s_{i,j+l}.$$ $X_l$ is in an obvious bijection with the set of pairs $(x,i)$ defined above. Equality~(\ref{shift}) shows that if $(i,j)\in X_l$, then $(i-\alpha(n-1),j+\alpha n)\in X_l$ for any integer $\alpha$. Our relation is defined by $(i,j)\sim (i-n+1,j+n)$.
\begin{proposition}\label{finite}
The set $X_l/\sim$ is finite.
\end{proposition}
\begin{proof}
First, every equivalence class in $X_l$ contains exactly one representative $(i,j)$ with $1\le i\le n-1$. Therefore, it suffices to show that the number of $(i,j)\in X_l$ with $i$ within these bounds is finite. Further, the following two facts follow straightforwardly from definition~(\ref{gtdef}) and the fact that $A$ satisfies i) and ii).
\begin{enumerate}[label=\arabic*)]
\item If $1\le i\le n-1$, then for $j\gg 0$ one has $s_{i,j+1}=s_{i,j}-k$.
\item If $1\le i\le n-1$, then for $j\ll 0$ one has $s_{i,j+1}=s_{i,j}$ if and only if $a_{j\bmod n}=0$ and thus if and only if $s_{i-1,j+1}=s_{i-1,j}$ holds as well.
\end{enumerate}
However, 1) shows that if $(i,j)\in X_l$ and $i\in[1,n-1]$, then $j$ cannot be arbitrarily large, while 2) shows that $-j$ cannot be arbitrarily large.
\end{proof}
We can now define $d_l=|X_l/\sim|$ and state our main result.
\begin{theorem}\label{main}
For an integral dominant nonzero $\widehat{\mathfrak{sl}}_n$-weight $\lambda$ one has
\begin{equation}\label{mainformula}
P_\lambda=\sum\limits_{A\in\mathbf{\Pi}_\lambda}p(A) e^{\mu_A}.
\end{equation}
\end{theorem}
{\bf Remark.} As one can see, in the case of $\lambda=0$ the set $\mathbf\Pi_\lambda$ consists of a single zero sequence. The corresponding plane-filling GT-pattern is also identically zero and our definition of $p(A)$ breaks down. In a sense, the case of $\lambda=0$ being exceptional is caused by the fact that for an affine root system the stabilizer of $0$ is infinite, unlike that of any other integral weight. This ultimately causes definition~(\ref{def}) to render $P_0$ unequal to $1$, in contrast with any root system of finite type.
\section{Brion's Theorem and its Generalization}
In this section we give a concise introduction to Brion's theorem and then present our generalization. After that we will elaborate on the connection between these subjects and our formula.
Consider a vector space $\mathbb{R}^m$ with a fixed basis and corresponding lattice of integer points $\mathbb{Z}^m\subset\mathbb{R}^m$. For any set $P\subset\mathbb{R}^m$ one may consider its characteristic series $$S(P)=\sum\limits_{a\in P\cap\mathbb{Z}^m} e^a,$$ a formal Laurent series in the variables $x_1,\ldots,x_m$. (Once we have assigned a formal variable to each basis vector we may define the monomial $e^a=x_1^{a_1}\ldots x_m^{a_m}$.)
If $P$ is a rational convex polyhedron (a set defined by a finite number of non-strict linear inequalities with integer coefficients, not necessarily bounded) it can be shown that there exists a Laurent polynomial $q\in\mathbb{Z}[x_1^{\pm 1},\ldots,x_m^{\pm 1}]$ such that $qS(P)$ is also some Laurent polynomial. Moreover, the rational function $\tfrac{qS(P)}q$ does not depend on the choice of $q$ and is denoted $\sigma(P)$. This function is known as the integer point transform (IPT) of $P$.
For a vertex $v$ of $P$ let $C_v$ be the tangent cone to $P$ at $v$. Brion's theorem is then the following identity.
\begin{theorem}[\cite{bri,kp}]\label{brion}
In the field of rational functions we have $$\sigma(P)=\sum\limits_{v\text{ vertex of }P}\sigma(C_v).$$
\end{theorem}
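For example, if $P=[0,N]\subset\mathbb{R}^1$ for a positive integer $N$, then $\sigma(P)=1+x+\ldots+x^N$, while the tangent cones at the two vertices are $C_0=[0,+\infty)$ and $C_N=(-\infty,N]$ with $\sigma(C_0)=\frac1{1-x}$ and $\sigma(C_N)=\frac{x^N}{1-x^{-1}}=-\frac{x^{N+1}}{1-x}$. These two rational functions indeed add up to $\frac{1-x^{N+1}}{1-x}=\sigma(P)$.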
A nice presentation of these topics can be found in the books~\cite{barv,beckrob}.
Our generalization of Theorem~\ref{brion} is stated in the following setting. Suppose we have a convex rational polyhedron $P\subset\mathbb{R}^m$. Let $R$ be an arbitrary commutative ring, and consider any map $\varphi:\mathcal{F}_P\rightarrow R$, where we use $\mathcal F_P$ to denote the set of faces of $P$. The map $\varphi$ defines a function $g:P\rightarrow R$, where for $x\in P$ we have $g(x)=\varphi(f)$ with $f$ being the face of minimal dimension containing $x$.
Next, consider the weighted generating function $$S_\varphi(P)=\sum\limits_{a\in P\cap\mathbb{Z}^m}g(a)e^a\in R[[x_1^{\pm1},\ldots,x_m^{\pm 1}]].$$
\begin{proposition}
There exists a polynomial $Q\in R[x_1,\ldots,x_m]$ such that $QS_\varphi(P)\in R[x_1^{\pm1},\ldots,x_m^{\pm1}]$.
\end{proposition}
\begin{proof}
This follows from the fact that we may present a finite set of nonintersecting subpolyhedra of $P$, the union of which contains any lattice point in $P$ and on each of which $g$ is constant.
Namely, for each face we may consider its image under a dilation centered at an interior point of that face, with a rational coefficient $0<\alpha<1$ large enough for the image to contain all of the face's interior lattice points. The set $\{P_i\}$, the union of the obtained collection of polyhedra with the set of all of $P$'s vertices, has the desired properties. Thus $Q$ may be taken equal to the product of all the denominators of the rational functions $\sigma(P_i)$.
An important observation is that we may, therefore, actually take $Q$ to equal the product of $1-e^\varepsilon$ over all minimal integer direction vectors $\varepsilon$ of edges of $P$, just like in the unweighted case.
\end{proof}
Thus we obtain a (well-defined) weighted integer point transform $$\sigma_\varphi(P)=\frac{QS_\varphi(P)}Q\in R(x_1,\ldots,x_m).$$
Now note that if $v$ is a vertex of $P$ with tangent cone $C_v$, then there is a natural embedding $\mathcal F_{C_v}\hookrightarrow \mathcal F_{P}$. If we allow ourselves to also use $\varphi$ to denote the restriction of $\varphi$ to $\mathcal F_{C_v}$ then our weighted Brion theorem can be stated as follows.
\begin{theorem}\label{wbrion}
$\sigma_\varphi(P)=\sum\limits_{v \text{ vertex of } P} \sigma_\varphi(C_v).$
\end{theorem}
\begin{proof}
Consider once again the set $\{P_i\}$ of polyhedra from the proof of the proposition. These polyhedra are in one-to-one correspondence with $P$'s faces. Evidently, if we write down the regular Brion theorem for each of these polyhedra and then add these identities up with coefficients equal to the values of $\varphi$ at the corresponding faces, we end up with precisely the statement of our theorem.
\end{proof}
With the necessary adjustments, $R$ could actually be any abelian group. We, however, are interested in the specific case of $R=\mathbb{Z}[t]$.
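To illustrate the theorem, let again $P=[0,N]$ and let $\varphi$ take values $\varphi_0$ and $\varphi_N$ at the two vertices and $\varphi_P$ at $P$ itself, so that $$S_\varphi(P)=\varphi_0+\varphi_P(x+\ldots+x^{N-1})+\varphi_Nx^N.$$ The two tangent cones contribute $\sigma_\varphi(C_0)=\varphi_0+\varphi_P\frac{x}{1-x}$ and $\sigma_\varphi(C_N)=\varphi_Nx^N+\varphi_P\frac{x^{N-1}}{1-x^{-1}}$, and since $$\frac{x}{1-x}+\frac{x^{N-1}}{1-x^{-1}}=\frac{x-x^N}{1-x}=x+\ldots+x^{N-1},$$ the two contributions indeed add up to $\sigma_\varphi(P)$.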
\section{Employing the Weighted Brion Theorem in the Finite Case}\label{gtbrion}
First, we explain how this works in the classic case.
Consider the ${n+1}\choose 2$-dimensional real space with its coordinates labeled by pairs of integers $(i,j)$ such that $i\in[0,n-1]$ and $j\in[1,n-i]$. We then may view the elements of $\mathbf{GT}_\lambda$ as the integer points of the Gelfand-Tsetlin polytope, which we denote $GT_\lambda$. This polytope consists of points with coordinates $s_{i,j}$ satisfying~(\ref{gt}). (Visibly, $GT_\lambda$ is contained in an ${n\choose2}$-dimensional affine subspace obtained by fixing the coordinates in row 0.)
With each GT-pattern we now associate two Laurent monomials. One is $e^{\mu_A}$, a monomial in ${x_1,\ldots,x_{n-1}}$ as explained in Section~\ref{gtcomb}. The other one is $e^A$, a monomial in ${n+1} \choose 2$ variables, the exponential of a point in $\mathbb{R}^{{n+1} \choose 2}$. We denote these variables $\{t_{i,j}\}$.
Now it is easily seen that $e^{\mu_A}$ is obtained from $e^A$ by the specialization
\begin{equation}\label{specfin}
t_{i,j}\longrightarrow x_i^{-1} x_{i+1}
\end{equation}
(within this section we set $x_0=x_n=1$).
In general, for a rational function $$Q\in\mathbb{Z}[t](\{t_{i,j}\})$$ we denote the result of applying~(\ref{specfin}) to $Q$ by $F(Q)$ which, when well-defined, is an element of $\mathbb{Z}[t](x_1,\ldots,x_{n-1})$.
To make use of Theorem~\ref{wbrion} we need one more simple observation. For a GT-pattern $A$, the weight $p_A$ depends only on which of the inequalities~(\ref{gt}) are actually equalities for this specific pattern. These inequalities, however, define our polytope and therefore $p_A$ only depends on the minimal face of $GT_\lambda$ containing $A$. Therefore we have a weight function $$\varphi:\mathcal F_{GT_\lambda}\rightarrow\mathbb{Z}[t]$$ as discussed in the previous section.
We now see that the right-hand side in Theorem~\ref{combfin} can be expressed by applying our weighted Brion theorem to $GT_\lambda$ and $\varphi$ and then applying specialization $F$. The result of this procedure is described by the following theorem, which visibly implies Theorem~\ref{combfin}.
\begin{theorem}\label{contribfin}
There is a distinguished subset of vertices of $GT_\lambda$ parametrized by elements of the orbit $W\lambda$. For the vertex $v$ corresponding to some $\mu\in W\lambda$ we have $$F(\sigma_{\varphi}(C_v))=\sum\limits_{w\lambda=\mu} w\left(e^\lambda \prod\limits_{\alpha\in\Phi^+}\left(\frac{1-t e^{-\alpha}}{1-e^{-\alpha}}\right)^{m_\alpha}\right).$$ For any $v$ outside this distinguished subset we have $F(\sigma_{\varphi}(C_v))=0$.
\end{theorem}
Interestingly enough, for a regular weight $\lambda$ this distinguished subset of vertices is precisely the set of simple vertices. As mentioned in the introduction, how and why this works out in the case of $t=0$ is shown in the preprint~\cite{me1}.
Since Theorem~\ref{combfin} itself is a well known result, we will not give a detailed proof of Theorem~\ref{contribfin}. However, it is rather easily deduced from the statements we do prove, as will be briefly explained at the end of Part~\ref{tools}.
\section{Employing the Weighted Brion Theorem in the Affine Case}\label{affbrion}
We now move on to the main affine case which is ideologically very similar but, of course, infinite-dimensional and thus technically more complicated.
Consider the real countable dimensional space $\Omega$ of sequences $x$ infinite in both directions for which one has $x_i=0$ when $i\gg 0$ and $x_{i}=x_{i-n}$ when $i\ll 0$ (for $x\in\Omega$ we denote by $x_i$ the terms of this sequence). We denote the lattice of integer sequences $\mathbb{Z}^{{}^\infty}\subset\Omega$. In $\Omega$ we also have the affine subspace $V$ of sequences $x$ for which $x_i=a_{i\bmod n}$ when $i\ll 0$. Note that the functions $s_{i,j}(x)$ and $p(x)$ are defined precisely for $x\in V$.
Define the functionals $\chi_i$ on $\Omega$ taking $x$ to $x_{i-n+1}+\ldots+x_i$. In these notations, the set $\mathbf{\Pi}_\lambda$ is precisely $\Pi\cap\mathbb{Z}^{{}^\infty}$, where $\Pi\subset V$ is the ``polyhedron"{} defined by the inequalities $x_i\ge 0$ and $\chi_i(x)\le k$ for all $i$.
It will often be more convenient to consider the translated polyhedron $$\bar{\Pi}=\Pi-T^0.$$ Geometrically and combinatorially the two polyhedra are identical; the advantage of $\bar{\Pi}$ is that it is contained in the linear subspace $\bar V\subset\Omega$ of sequences with a finite number of nonzero terms. For compactness we use the $\ \bar{}\ $ notation to denote the $-T^0$ translation in general in the following two ways. If $X$ is a point or subset in $V$ we denote $\bar X=X-T^0$. If $\Phi$ is a map the domain of which consists of points or subsets in $V$, we define $\bar\Phi(\bar X)=\Phi(X)$.
To any integer sequence $x\in \bar V$ we may assign its formal exponent $e^x$, a (finite!) monomial in the infinite set of variables $\{t_i,i\in\mathbb{Z}\}$. Also, for $A\in\mathbf{\Pi}_\lambda$ the weight $\mu_A-\lambda$ is an integral linear combination of $\gamma_1,\ldots,\gamma_{n-1},-\delta$. Consequently, we may view $e^{\mu_A-\lambda}$ as a monomial in the corresponding variables $z_1,\ldots,z_{n-1},q$. Formulas~(\ref{zweight}) and~(\ref{qweight}) show that $e^{\mu_A-\lambda}$ is obtained from $e^{\bar A}$ by the specialization
\begin{equation}\label{affspec}
t_i\longrightarrow z_{i\bmod (n-1)}q^{\left\lceil\tfrac{i}{n-1}\right\rceil},
\end{equation}
where the remainder is taken from $[1,n-1]$. In general we will denote the above specialization $G$, it being applicable to (some) expressions in the $t_i$.
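For instance, for $n=2$ the specialization simply reads $t_i\longrightarrow z_1q^i$. If, as in the example of Section~\ref{monbasis}, $\lambda=(1,0)$ and $A$ is obtained from $T^0$ by moving the rightmost $1$ from position $0$ to position $1$, then $e^{\bar A}=t_0^{-1}t_1$ and $G(e^{\bar A})=q$, in agreement with $e^{\mu_A-\lambda}=e^{-\delta}=q$.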
Now we present a (weighted) Brion-type formula for $\bar\Pi$. One may define the faces of $\Pi$ and $\bar\Pi$ in a natural way (which will be done below). Of course, $f\subset\Pi$ is a face if and only if $\bar f\subset\bar\Pi$ is a face. One will see that $p(x)$ depends only on the minimal face of $\Pi$ containing $x$. In other words there is a map $$\varphi:\mathcal F_{\Pi}\rightarrow\mathbb{Z}[t]$$ such that $p(x)=\varphi(f)$ for the minimal face $f$ containing $x$. Denote $$S_{\bar\varphi}(\bar\Pi)=\sum_{x\in\bar\Pi\cap\mathbb{Z}^{{}^\infty}}\bar p(x)e^x.$$
Our formula will be an identity in the ring $\mathfrak S$ of those Laurent series in $q$ with coefficients in the field $\mathbb{Z}[t](z_1,\ldots,z_{n-1})$ which contain only a finite number of negative powers of $q$. This ring is convenient for the following reason. Consider a sequence of monomials $y_1,y_2,\ldots$ in variables $z_1,\ldots,z_{n-1},q$. If only a finite number of the $y_i$ contain a non-positive power of $q$ and none of them are equal to 1, then the product
\begin{equation}\label{invprod}
(1-y_1)(1-y_2)\ldots
\end{equation}
is a well-defined element of $\mathfrak S$ and, most importantly, is invertible therein.
With each vertex $\bar v$ of $\bar\Pi$ we will associate a series $\tau_{\bar v}\in\mathfrak S$. This series will, in a certain sense, be the result of applying $G$ to an ``integer point transform" of the tangent cone $C_{\bar v}$ (also defined below). Our formula will then simply read as follows.
\begin{theorem}\label{infbrion}
In $\mathfrak S$ one has the identity $$G(S_{\bar\varphi}(\bar\Pi))=\sum_{\substack{\bar v\text{ vertex}\\\text{of }\bar\Pi}}\tau_{\bar v}.$$
\end{theorem}
Now, Theorem~\ref{main} may be rewritten as
\begin{equation}\label{rewrmain}
P_\lambda=e^{\lambda}G(S_{\bar\varphi}(\bar\Pi)).
\end{equation}
In view of this, Theorem~\ref{main} now follows from the following statement which is the affine analogue of Theorem~\ref{contribfin}.
\begin{theorem}\label{contrib}
There is a distinguished subset of vertices of $\Pi$ parametrized by elements of the orbit $W\lambda$ with the following two properties.
\begin{enumerate}[label=\alph*)]
\item For $v$ from this distinguished subset corresponding to $\mu\in W\lambda$ one has $$\tau_{\bar v}=\frac1{W_\lambda(t)}\sum\limits_{w\lambda=\mu}\frac{e^{w\lambda-\lambda} w\left(\prod\limits_{\alpha\in\Phi^+}\left(1-t e^{-\alpha}\right)^{m_\alpha}\right)}{w\left(\prod\limits_{\alpha\in\Phi^+}\left(1-e^{-\alpha}\right)^{m_\alpha}\right)}.$$
\item For any other vertex $v$ of $\Pi$ one has $\tau_{\bar v}=0$.
\end{enumerate}
\end{theorem}
The expression on the right-hand side in part a) is an element of $\mathfrak S$ because its denominator is a product of the type concerned in~(\ref{invprod}).
\part{Combinatorial Tools: Generalized Gelfand-Tsetlin Polyhedra}\label{tools}
In this Part we discuss certain finite-dimensional polyhedra which are seen to generalize Gelfand-Tsetlin polytopes. The acquired tools will be applied to the proof of Theorem~\ref{contrib} in the next part.
\section{Ordinary Subgraphs and Associated Polyhedra}
Consider an infinite square lattice as a graph $\mathcal R$, the vertices being the vertices of the lattice and the edges being the segments joining adjacent vertices. We visualize this lattice as being rotated by $45^\circ$, i.e. with the segments forming a $45^\circ$ angle with the horizontal.
We enumerate the vertices in accordance with our numbering of the elements of plane-filling GT-patterns. That is, the vertices are enumerated by pairs of integers $(i,j)$. The vertices $(i,\cdot)$ form a row: they are the vertices situated on the same horizontal line. Within a row the second index increases from left to right, and the two vertices directly above $(i,j)$ are $(i-1,j)$ and $(i-1,j+1)$.
We term a subgraph $\Gamma$ of $\mathcal R$ ``ordinary"{} if it has the following properties.
\begin{enumerate}
\item $\Gamma$ is a finite connected full subgraph.
\item Whenever both $(i,j)\in\Gamma$ (short for $(i,j)$ is a vertex of $\Gamma$) and $(i,j+1)\in\Gamma$ we also have $(i+1,j)\in\Gamma$.
\item Let $a_\Gamma$ be the number of the top row containing vertices of $\Gamma$. If $i>a_\Gamma$, then whenever both $(i,j)\in\Gamma$ and $(i,j+1)\in\Gamma$ we also have $(i-1,j+1)\in\Gamma$.
\end{enumerate}
Note that $(i-1,j+1)$ and $(i+1,j)$ are the two common neighbors of $(i,j)$ and $(i,j+1)$. Below are some examples of what such a subgraph may look like.
\setlength{\unitlength}{1mm}
\begin{picture}(95,50)
\put(5,45){\line(1,-1){5}} \put(15,45){\line(-1,-1){5}} \put(15,45){\line(1,-1){5}} \put(25,45){\line(-1,-1){5}}
\put(10,40){\line(1,-1){5}} \put(20,40){\line(-1,-1){5}} \put(20,40){\line(1,-1){5}}
\put(15,35){\line(-1,-1){5}} \put(15,35){\line(1,-1){5}} \put(25,35){\line(-1,-1){5}} \put(25,35){\line(1,-1){5}}
\put(10,30){\line(1,-1){5}} \put(20,30){\line(-1,-1){5}} \put(20,30){\line(1,-1){5}} \put(30,30){\line(-1,-1){5}}
\put(15,25){\line(1,-1){5}} \put(25,25){\line(-1,-1){5}}
\put(20,20){\line(-1,-1){5}}
\pic{ex1}
\put(15,5){Figure \ref{ex1}}
\put(55,40){\line(-1,-1){5}} \put(55,40){\line(1,-1){5}}
\put(50,35){\line(-1,-1){5}} \put(50,35){\line(1,-1){5}} \put(60,35){\line(-1,-1){5}}
\put(45,30){\line(1,-1){5}} \put(55,30){\line(-1,-1){5}}
\put(50,25){\line(1,-1){5}}
\pic{ex2}
\put(45,5){Figure \ref{ex2}}
\put(80,45){\line(1,-1){5}} \put(90,45){\line(-1,-1){5}}
\put(85,40){\line(-1,-1){5}}
\put(80,35){\line(-1,-1){5}} \put(80,35){\line(1,-1){5}}
\put(75,30){\line(-1,-1){5}} \put(75,30){\line(1,-1){5}} \put(85,30){\line(-1,-1){5}} \put(85,30){\line(1,-1){5}}
\put(70,25){\line(1,-1){5}} \put(80,25){\line(-1,-1){5}} \put(80,25){\line(1,-1){5}} \put(90,25){\line(-1,-1){5}}
\put(75,20){\line(1,-1){5}} \put(85,20){\line(-1,-1){5}}
\pic{ex3}
\put(75,5){Figure \ref{ex3}}
\end{picture}
Note that every ordinary graph has one vertex in its last nonempty row.
Suppose $\Gamma$ has $l_\Gamma$ vertices in its top row. With each $\Gamma$ and nonincreasing sequence of integers $b_1,\ldots,b_{l_\Gamma}$ we associate a finite-dimensional rational polyhedron $D_\Gamma(b_1,\ldots,b_{l_\Gamma})$ in the countable-dimensional real space with coordinates enumerated by the vertices of $\mathcal R$. Consider a point $s$ in this space with its $(i,j)$-coordinate equal to $s_{i,j}$. By definition, $s\in D_\Gamma(b_1,\ldots,b_{l_\Gamma})$ if it satisfies the following requirements.
\begin{enumerate}
\item If $(i,j)\not\in\Gamma$, then $s_{i,j}=0$.
\item The $l_\Gamma$ coordinates in row $a_\Gamma$ are equal to $b_1,\ldots,b_{l_\Gamma}$ in that order from left to right.
\item\label{facets} For any $(i,j)\in\Gamma$ we have $s_{i-1,j}\ge s_{i,j}$ whenever $(i-1,j)\in\Gamma$ and $s_{i,j}\ge s_{i-1,j+1}$ whenever $(i-1,j+1)\in\Gamma$. In other words, for any two adjacent vertices of $\Gamma$ the corresponding inequality of type~(\ref{gt}) holds.
\end{enumerate}
Such polyhedra are a natural generalization of Gelfand-Tsetlin polytopes, the latter being $D_{\mathcal T}(b_1,\ldots,b_n)$, where $\mathcal T\subset\mathcal R$ is the ordinary subgraph with vertices $(i,j)$ for $0\le i\le n-1$ and $1\le j\le n-i$.
Any $s\in D_\Gamma(b_1,\ldots,b_{l_\Gamma})$ defines a subgraph of $\Gamma$ whose vertices are the vertices of $\Gamma$ and whose edges are the edges of $\Gamma$ for which the two corresponding coordinates of $s$ are equal. Since the polyhedron $D_\Gamma(b_1,\ldots,b_{l_\Gamma})$ is defined by the inequalities in correspondence with the edges of $\Gamma$, one sees that two points define the same subgraph if and only if the minimal faces containing them coincide. For this reason we have the following description of the faces of $D_\Gamma(b_1,\ldots,b_{l_\Gamma})$.
\begin{proposition}\label{faces}
The faces of $D_\Gamma(b_1,\ldots,b_{l_\Gamma})$ are in bijection with subgraphs of $\Gamma$ containing all vertices of $\Gamma$ and with the following properties.
\begin{enumerate}
\item Whenever two adjacent vertices of $\Gamma$ are in the same connected component of the subgraph they are also adjacent in the subgraph.
\item Whenever $(i,j)$ and $(i,j+1)$ are in the same component of the subgraph so are $(i+1,j)$ and $(i-1,j+1)$ (the latter when $i>a_\Gamma$).
\item The $i$-th and $j$-th vertex in row $a_\Gamma$ (counting from left to right) are in the same component of the subgraph if and only if $b_i=b_j$.
\end{enumerate}
The face corresponding to subgraph $\Delta$ consists of the points for which any two coordinates corresponding to adjacent vertices of $\Delta$ are equal. The dimension of the face is the number of those connected components of $\Delta$ which do not contain a vertex from row $a_\Gamma$.
\end{proposition}
\begin{proof}
If subgraph $\Delta$ has these properties it is straightforward to define a point $(s_{i,j})\in D_\Gamma(b_1,\ldots,b_{l_\Gamma})$ such that for two vertices $(i_1,j_1)$ and $(i_2,j_2)$ one has $s_{i_1,j_1}=s_{i_2,j_2}$ if and only if these vertices are in the same connected component of $\Delta$.
The statement concerning the dimension follows from the following observation. If $\Delta$ corresponds to face $f$, then for any point in $f$ all its coordinates in a component of $\Delta$ containing the $i$-th vertex from the top row are fixed and equal to $b_i$. Thus, when choosing a point in $f$, the degree of freedom is the number of components without a vertex from the top row.
\end{proof}
If $f$ is a face of some $D_\Gamma(b_1,\ldots,b_{l_\Gamma})$ we denote the corresponding subgraph simply by $\Delta_f$, the graph $\Gamma$ and values $b_1,\ldots,b_{l_\Gamma}$ being implicit. Note that, in particular, any connected component of $\Delta_f$ is itself an ordinary graph.
We now define a weight function $$\varphi_\Gamma(b_1,\ldots,b_{l_\Gamma}):\mathcal F_{D_\Gamma(b_1,\ldots,b_{l_\Gamma})}\rightarrow \mathbb{Z}[t].$$ The value of $\varphi_\Gamma(b_1,\ldots,b_{l_\Gamma})(f)$ is defined in terms of the graph $\Delta_f$. Namely, it is the product $\prod(1-t^l)^{d_l}$, where $d_l$ is the following statistic. It is the number of pairs $(E,i)$ where $E$ is a connected component of $\Delta_f$ and $i>a_\Gamma$ is an integer, such that $E$ contains exactly $l-1$ vertices from row $i-1$ and $l$ vertices from row $i$.
Here are three subgraphs of the three examples above accompanied by the dimension and weight of the corresponding face.
\begin{picture}(95,50)
\put(5,45){\line(1,-1){5}} \put(15,45){\line(-1,-1){5}} \put(15,45){\line(1,-1){5}} \put(25,45){\line(-1,-1){5}}
\put(10,40){\line(1,-1){5}} \put(20,40){\line(-1,-1){5}} \multiput(20,40)(1,-1){5}{\line(1,-1){0.2}}
\put(15,35){\line(-1,-1){5}} \multiput(15,35)(1,-1){5}{\line(1,-1){0.2}} \multiput(25,35)(-1,-1){5}{\line(-1,-1){0.2}} \put(25,35){\line(1,-1){5}}
\multiput(10,30)(1,-1){5}{\line(1,-1){0.2}} \put(20,30){\line(-1,-1){5}} \put(20,30){\line(1,-1){5}} \multiput(30,30)(-1,-1){5}{\line(-1,-1){0.2}}
\put(15,25){\line(1,-1){5}} \put(25,25){\line(-1,-1){5}}
\put(20,20){\line(-1,-1){5}}
\put(15,10){$\dim=2$}
\put(10,5){$(1-t)^2(1-t^2)$}
\multiput(55,40)(-1,-1){5}{\line(-1,-1){0.2}} \put(55,40){\line(1,-1){5}}
\multiput(50,35)(-1,-1){5}{\line(-1,-1){0.2}} \put(50,35){\line(1,-1){5}} \multiput(60,35)(-1,-1){5}{\line(-1,-1){0.2}}
\put(45,30){\line(1,-1){5}} \multiput(55,30)(-1,-1){5}{\line(-1,-1){0.2}}
\put(50,25){\line(1,-1){5}}
\put(45,10){$\dim=2$}
\put(45,5){$(1-t)^2$}
\put(80,45){\line(1,-1){5}} \put(90,45){\line(-1,-1){5}}
\multiput(85,40)(-1,-1){5}{\line(-1,-1){0.2}}
\put(80,35){\line(-1,-1){5}} \put(80,35){\line(1,-1){5}}
\put(75,30){\line(-1,-1){5}} \put(75,30){\line(1,-1){5}} \put(85,30){\line(-1,-1){5}} \put(85,30){\line(1,-1){5}}
\put(70,25){\line(1,-1){5}} \put(80,25){\line(-1,-1){5}} \put(80,25){\line(1,-1){5}} \put(90,25){\line(-1,-1){5}}
\put(75,20){\line(1,-1){5}} \put(85,20){\line(-1,-1){5}}
\put(75,10){$\dim=1$}
\put(65,5){$(1-t)(1-t^2)(1-t^3)$}
\end{picture}
For integers $b_1\ge\ldots\ge b_{l_\Gamma}$ the expression
\begin{equation}\label{ipt}
\sigma_{\varphi_\Gamma(b_1,\ldots,b_{l_\Gamma})}(D_\Gamma(b_1,\ldots,b_{l_\Gamma}))
\end{equation}
is a rational function in the variables $\{t_{i,j}\}$ which are in correspondence with the vertices of $\mathcal R$. However, we are interested in the result of applying the specialization $$t_{i,j}\longrightarrow x_i^{-1}x_{i+1}$$ to~(\ref{ipt}). We denote this specialization $F$ as well, since it formally coincides with the specialization $F$ defined above when $i\in[1,n-1]$ and $j\in[1,n-i]$. We denote the obtained rational function in the variables $\{x_i\}$ simply by $\psi_\Gamma(b_1,\ldots,b_{l_\Gamma})$.
(Note that for any array $s=(s_{i,j})$ with a finite number of nonzero elements the power of $x_i$ in the monomial $F(e^s)$ is the sum of the elements of $s$ in row $i-1$ minus the sum of its elements in row $i$.)
First of all, it is worth mentioning that the functions $\psi_\Gamma(b_1,\ldots,b_{l_\Gamma})$ are well-defined, i.e. the reduced denominator of~(\ref{ipt}) does not vanish under $F$. To see this, for any edge $e$ of $D_\Gamma(b_1,\ldots,b_{l_\Gamma})$ consider the subgraph $\Delta_e$ and let $\varepsilon$ be the direction vector of $e$. Proposition~\ref{faces} shows that $\Delta_e$ contains exactly one component without a vertex in the top row; let $r$ be the row containing the single top vertex of that component. One may easily deduce that $F(e^\varepsilon)$ contains a nonzero power of $x_r$ and then invoke the remark at the end of the proof of the proposition preceding Theorem~\ref{wbrion}.
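As a simple illustration, let $\Gamma$ be the path graph with the two vertices $(a,j)$ and $(a+1,j)$, so that $l_\Gamma=1$ and $a_\Gamma=a$. Here $D_\Gamma(b)$ is the ray given by $s_{a,j}=b\ge s_{a+1,j}$, its vertex carrying weight $1$ and the ray itself carrying weight $1-t$. The generator $\varepsilon$ of the single edge has one nonzero coordinate $-1$ at $(a+1,j)$, whence $F(e^\varepsilon)=x_{a+1}x_{a+2}^{-1}$ and $$\psi_\Gamma(b)=x_a^{-b}x_{a+2}^b\left(1+(1-t)\frac{x_{a+1}x_{a+2}^{-1}}{1-x_{a+1}x_{a+2}^{-1}}\right)=x_a^{-b}x_{a+2}^b\,\frac{1-tx_{a+1}x_{a+2}^{-1}}{1-x_{a+1}x_{a+2}^{-1}},$$ a factor of the very kind appearing in Theorem~\ref{contribfin}.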
Now we are ready to present the statement which will turn out to be the key to the proof of part b) of Theorem~\ref{contrib}.
\begin{theorem}\label{zero}
If $\Gamma$ is an ordinary subgraph and for some $i\ge a_\Gamma$ the number of its vertices in row $i+1$ is greater than in row $i$, then $\psi_\Gamma(b_1,\ldots,b_{l_\Gamma})=0$ for any integers $b_1\ge\ldots\ge b_{l_\Gamma}$.
\end{theorem}
Our proof of this theorem requires an identity which relates the singular case of the $b_i$ all being the same to the regular case of them being pairwise distinct. Note that $D_\Gamma(b,\ldots,b)$ is a cone; we denote the vertex of this cone by $v_\Gamma(b)$.
\begin{lemma}\label{singular}
For pairwise distinct $b_1>\ldots>b_{l_\Gamma}$ let $v_1,\ldots,v_m$ be the vertices of $D_\Gamma(b_1,\ldots,b_{l_\Gamma})$ with tangent cones $C_1,\ldots,C_m$. Then we have $$[l_\Gamma]_t!\sigma_{\varphi_\Gamma(b,\ldots,b)}(D_\Gamma(b,\ldots,b))=\sum\limits_{i=1}^m e^{v_\Gamma(b)-v_i}\sigma_{\varphi_\Gamma(b_1,\ldots,b_{l_\Gamma})}(C_i)$$ (the summands on the right are simply IPT's of the cones $C_i$ shifted by $v_\Gamma(b)-v_i$).
\end{lemma}
This identity is obtained as the weighted Brion theorem applied to $D_\Gamma(b,\ldots,b)$ viewed as a degeneration $D_\Gamma(b_1,\ldots,b_{l_\Gamma})$. We thus postpone the proof of the lemma until we have discussed these topics in detail.
\begin{proof}[Proof of Theorem~\ref{zero}.] We proceed by induction on the number of vertices in $\Gamma$ considering three cases.
{\it Case 1.} No row in $\Gamma$ contains more than two vertices. This will include the base of our induction. Unfortunately, this case is the most computational part of the paper, although, in its essence, the argument is pretty straightforward.
First of all, if we have an $i>a_\Gamma$ such that $\Gamma$ has one vertex in row $i$ and two vertices in row $i+1$, we may apply the induction hypothesis. To do this, denote by $\Gamma'$ the graph obtained from $\Gamma$ by removing all vertices in rows above $i$. Now consider a section of $D_\Gamma(b_1,\ldots,b_{l_\Gamma})$ obtained by fixing all coordinates in rows $i$ and above. The contribution of any such section to $\psi_\Gamma(b_1,\ldots,b_{l_\Gamma})$ is zero by the induction hypothesis applied to $\Gamma'$.
Thus we may assume that $\Gamma$ has one vertex in row $a_\Gamma$ and two vertices in row $a_\Gamma+1$. Figure~\ref{ex2} provides an example of such a graph.
We may also assume that $b_1=0$ since any $\psi_\Gamma(b)$ is obtained from $\psi_\Gamma(0)$ by multiplication by a monomial. We will compute $\psi_\Gamma(0)$ by considering the sections of $D_\Gamma(0)$ obtained by fixing the two coordinates in row $a_\Gamma+1$. If $\Gamma'$ is $\Gamma$ with the top vertex removed we have
\begin{equation}\label{sectsum}
\psi_\Gamma(0)=\sum\limits_{b_1\ge0,b_2\le0}c_{b_1,b_2}\psi_{\Gamma'}(b_1,b_2),
\end{equation}
where
$$
c_{b_1,b_2}=
\begin{cases}
(1-t)^2& \text{if }b_1>0>b_2,\\
(1-t)& \text{if }b_1>0=b_2\text{ or }b_1=0>b_2,\\
(1-t^2)& \text{if }b_1=b_2=0.
\end{cases}
$$
Of course,~(\ref{sectsum}) needs to be formalized in order to make sense. This is done routinely, so we will not go into detail. The idea is to observe that all the functions $\psi_{\Gamma'}(b_1,b_2)$ together with $\psi_\Gamma(0)$ have a common finite denominator (this is shown below). Multiplying~(\ref{sectsum}) by that common denominator yields an identity of formal Laurent series.
Now, for some $b_1>b_2$ consider the vertices of $D_{\Gamma'}(b_1,b_2)$. It is easy to see that among the corresponding subgraphs $\Delta_v$ there are exactly two consisting of two path graph components. Here are these two subgraphs for the example in Figure~\ref{ex2}.
\begin{picture}(95,35)
\put(35,30){\circle*{0.4}}
\put(30,25){\line(-1,-1){5}} \multiput(30,25)(1,-1){5}{\line(1,-1){0.2}} \put(40,25){\line(-1,-1){5}}
\put(25,20){\line(1,-1){5}} \multiput(35,20)(-1,-1){5}{\line(-1,-1){0.2}}
\put(30,15){\line(1,-1){5}}
\put(30,5){$\Delta_{v_1}$}
\put(80,30){\circle*{0.4}}
\put(75,25){\line(-1,-1){5}} \multiput(75,25)(1,-1){5}{\line(1,-1){0.2}} \put(85,25){\line(-1,-1){5}}
\multiput(70,20)(1,-1){5}{\line(1,-1){0.2}} \put(80,20){\line(-1,-1){5}}
\put(75,15){\line(1,-1){5}}
\put(75,5){$\Delta_{v_2}$}
\end{picture}
Denote by $r$ the least number such that $\Gamma$ has one vertex in row $r$ but two vertices in row $r-1$. The difference between the two graphs is then that in one case (vertex $v_1$) the vertex in row $r$ is connected to its upper-left neighbor and in the other case (vertex $v_2$) to its upper-right neighbor.
\begin{proposition}
Consider a vertex $v$ of $D_{\Gamma'}(b_1,b_2)$ other than $v_1$ and $v_2$. Let $C_v$ be the tangent cone. The induction hypothesis then implies $$F(\sigma_{\varphi_{\Gamma'}(b_1,b_2)}(C_v))=0.$$
\end{proposition}
\begin{proof}
Consider the two components $\Gamma_1$ and $\Gamma_2$ of $\Delta_v$. We have $$F(\sigma_{\varphi_{\Gamma'}(b_1,b_2)}(C_v))=\psi_{\Gamma_1}(b_1)\psi_{\Gamma_2}(b_2).$$ The fact that at least one of $\Gamma_1$ and $\Gamma_2$ is not a path graph translates into that component containing one vertex in some row $i$ and two vertices in row $i+1$. The induction hypothesis then shows that the corresponding factor in the right-hand side above is zero.
\end{proof}
The weighted Brion theorem for $D_{\Gamma'}(b_1,b_2)$ is now seen to provide $$\psi_{\Gamma'}(b_1,b_2)=F(\sigma_{\varphi_{\Gamma'}(b_1,b_2)}(C_1))+F(\sigma_{\varphi_{\Gamma'}(b_1,b_2)}(C_2)),$$ where $C_1$ and $C_2$ are the corresponding tangent cones. It is not too hard to compute the two summands on the right explicitly, which is exactly what we do.
Both cones $C_1$ and $C_2$ are simplicial and unimodular. This is seen by considering the minimal integer direction vectors (generators) of their edges. If $d_\Gamma$ is the last row containing vertices of $\Gamma$, then the set of generators for each of $C_1$ and $C_2$ satisfies the following description.
\begin{proposition}
The coordinates of any such generator take only two values: $0$ and either $-1$ or $1$. For any $i\in[a_\Gamma+2,r-1]$ there is a single generator with exactly one nonzero coordinate in each of the rows in $[i,r-1]$ and all other coordinates $0$. Also, for any $i\in[a_\Gamma+2,d_\Gamma]$ there is a single generator with exactly one nonzero coordinate in each of the rows in $[i,d_\Gamma]$ and all other coordinates $0$.
\end{proposition}
\begin{proof}
In accordance with Proposition~\ref{faces}, for an edge $e$ of $C_1$ or $C_2$ the graph $\Delta_e$ is obtained from respectively $\Delta_{v_1}$ or $\Delta_{v_2}$ by deleting a single edge. This leaves $\Delta_e$ with exactly one component (of three) not containing a vertex in row $a_\Gamma+1$. The corresponding direction vector is obtained by setting the coordinates in this component to either $-1$ or 1 depending on the orientation of the deleted edge. All the other coordinates are zero.
The proposition now follows if we consider such vectors for each of the edges of $\Delta_{v_1}$ and $\Delta_{v_2}$ being deleted.
\end{proof}
We denote the generators described in the second sentence of the Proposition by $\varepsilon_i^1$ or $\varepsilon_i^2$ respectively. For the generators described in the third sentence we use the notations $\xi_i^1$ and $\xi_i^2$. Here are some of these vectors for our example with the cross marking the edge being deleted.
\begin{picture}(105,30)
\put(9.25,24){\footnotesize 0} \put(19.25,24){\footnotesize 0}
\put(9,24){\line(-1,-1){3}} \multiput(11,24)(1,-1){3}{\line(1,-1){0.2}} \put(19,24){\line(-1,-1){3}}
\put(4.25,19){\footnotesize 0} \put(14.25,19){\footnotesize 0}
\put(6,19){\line(1,-1){3}} \multiput(14,19)(-1,-1){3}{\line(-1,-1){0.2}}
\put(6.5,17.5){\line(1,0){2}} \put(7.5,18.5){\line(0,-1){2}}
\put(8.75,14){\footnotesize $-1$}
\put(11,14){\line(1,-1){3}}
\put(13.75,9){\footnotesize $-1$}
\put(5,5){$\xi_{a_\Gamma+3}^1(=\xi_r^1)$}
\put(34.25,24){\footnotesize 0} \put(44.25,24){\footnotesize 0}
\put(34,24){\line(-1,-1){3}} \multiput(36,24)(1,-1){3}{\line(1,-1){0.2}} \put(44,24){\line(-1,-1){3}}
\put(41.5,22.5){\line(1,0){2}} \put(42.5,23.5){\line(0,-1){2}}
\put(29.25,19){\footnotesize 0} \put(39.25,19){\footnotesize 1}
\put(31,19){\line(1,-1){3}} \multiput(39,19)(-1,-1){3}{\line(-1,-1){0.2}}
\put(34.25,14){\footnotesize 0}
\put(36,14){\line(1,-1){3}}
\put(39.25,9){\footnotesize 0}
\put(35,5){$\varepsilon_{a_\Gamma+2}^1$}
\put(64.25,24){\footnotesize 0} \put(74.25,24){\footnotesize 0}
\put(64,24){\line(-1,-1){3}} \multiput(66,24)(1,-1){3}{\line(1,-1){0.2}} \put(74,24){\line(-1,-1){3}}
\put(71.5,22.5){\line(1,0){2}} \put(72.5,23.5){\line(0,-1){2}}
\put(59.25,19){\footnotesize 0} \put(69.25,19){\footnotesize 1}
\multiput(61,19)(1,-1){3}{\line(1,-1){0.2}} \put(69,19){\line(-1,-1){3}}
\put(64.25,14){\footnotesize 1}
\put(66,14){\line(1,-1){3}}
\put(69.25,9){\footnotesize 1}
\put(65,5){$\xi_{a_\Gamma+2}^2$}
\put(89.25,24){\footnotesize 0} \put(99.25,24){\footnotesize 0}
\put(89,24){\line(-1,-1){3}} \multiput(91,24)(1,-1){3}{\line(1,-1){0.2}} \put(99,24){\line(-1,-1){3}}
\put(86.5,22.5){\line(1,0){2}} \put(87.5,23.5){\line(0,-1){2}}
\put(84.25,19){\footnotesize 1} \put(94.25,19){\footnotesize 0}
\multiput(86,19)(1,-1){3}{\line(1,-1){0.2}} \put(94,19){\line(-1,-1){3}}
\put(89.25,14){\footnotesize 0}
\put(91,14){\line(1,-1){3}}
\put(94.25,9){\footnotesize 0}
\put(90,5){$\varepsilon_{a_\Gamma+2}^2$}
\end{picture}
It is easily seen that for all $i\in[a_\Gamma+2,r-1]$ we have $F(e^{\varepsilon_i^1})=F(e^{\varepsilon_i^2})$ and for all $i\in[a_\Gamma+2,d_\Gamma],i\neq r$ we have $F(e^{\xi_i^1})=F(e^{\xi_i^2})$. However, $F(e^{\xi_r^1})=x_r x_{d_\Gamma+1}^{-1}$ while $F(e^{\xi_r^2})=x_r^{-1} x_{d_\Gamma+1}$.
The last nuance we need to discuss to write out $\psi_{\Gamma'}(b_1,b_2)$ is how $\varphi_{\Gamma'}(b_1,b_2)$ behaves on faces of $C_1$ and $C_2$. This behavior is rather simple.
\begin{proposition}
For a face $f$ of either cone we have $$\varphi_{\Gamma'}(b_1,b_2)(f)=(1-t)^{\dim f}.$$
\end{proposition}
\begin{proof}
Since the graph $\Delta_f$ has $\dim f+2$ connected components, it is obtained from respectively $\Delta_{v_1}$ or $\Delta_{v_2}$ by deleting $\dim f$ edges. The definition of $\varphi_{\Gamma'}(b_1,b_2)$ then immediately provides the weight $(1-t)^{\dim f}$.
\end{proof}
The above facts give us $$F(\sigma_{\varphi_{\Gamma'}(b_1,b_2)}(C_1))=F(e^{v_1})\frac{1-t x_r x_{d_\Gamma+1}^{-1}}{1-x_r x_{d_\Gamma+1}^{-1}}Z$$ and $$F(\sigma_{\varphi_{\Gamma'}(b_1,b_2)}(C_2))=F(e^{v_2})\frac{1-t x_r^{-1} x_{d_\Gamma+1}}{1-x_r^{-1} x_{d_\Gamma+1}}Z,$$ where $$Z=F\left(\prod\limits_{i=a_\Gamma+2}^{r-1}\frac{1-te^{\varepsilon_i^1}}{1-e^{\varepsilon_i^1}} \prod\limits_{\substack{i\in[a_\Gamma+2,d_\Gamma]\\i\neq r}} \frac{1-te^{\xi_i^1}}{1-e^{\xi_i^1}} \right).$$
Also, we can now employ Lemma~\ref{singular} to derive
\begin{multline*}
\psi_{\Gamma'}(0,0)=F(e^{v_{\Gamma'}(0)})\frac1{1+t}\left(\frac{1-t x_r x_{d_\Gamma+1}^{-1}}{1-x_r x_{d_\Gamma+1}^{-1}}+\frac{1-t x_r^{-1} x_{d_\Gamma+1}}{1-x_r^{-1} x_{d_\Gamma+1}}\right)Z=\\F(e^{v_{\Gamma'}(0)})Z.
\end{multline*}
Since $F(e^{v_1})=x_{a_\Gamma+1}^{-b_1-b_2}x_r^{b_2}x_{d_\Gamma+1}^{b_1}$ and $F(e^{v_2})=x_{a_\Gamma+1}^{-b_1-b_2}x_r^{b_1}x_{d_\Gamma+1}^{b_2}$ and \\$e^{v_{\Gamma'}(0)}=1$ we conclude that
\begin{multline*}
\frac1Z\sum\limits_{b_1\ge0,b_2\le0}c_{b_1,b_2}\psi_{\Gamma'}(b_1,b_2)=\\
\frac{1-t x_r x_{d_\Gamma+1}^{-1}}{1-x_r x_{d_\Gamma+1}^{-1}}\left((1-t)^2\sum\limits_{b_1>0>b_2}x_{a_\Gamma+1}^{-b_1-b_2}x_r^{b_2}x_{d_\Gamma+1}^{b_1}+\right.\\
\left.(1-t)\sum\limits_{b_1>0}x_{a_\Gamma+1}^{-b_1}x_{d_\Gamma+1}^{b_1}+(1-t)\sum\limits_{b_2<0}x_{a_\Gamma+1}^{-b_2}x_r^{b_2}\right)+\\
\frac{1-t x_r^{-1} x_{d_\Gamma+1}}{1-x_r^{-1}x_{d_\Gamma+1}}\left((1-t)^2\sum\limits_{b_1>0>b_2}x_{a_\Gamma+1}^{-b_1-b_2}x_r^{b_1}x_{d_\Gamma+1}^{b_2}+\right.\\
\left.(1-t)\sum\limits_{b_1>0}x_{a_\Gamma+1}^{-b_1}x_r^{b_1}+(1-t)\sum\limits_{b_2<0}x_{a_\Gamma+1}^{-b_2}x_{d_\Gamma+1}^{b_2}\right)+1-t^2=\\
\frac{1-t x_r x_{d_\Gamma+1}^{-1}}{1-x_r x_{d_\Gamma+1}^{-1}}\left((1-t)^2\frac{x_r^{-1}x_{d_\Gamma+1}}{(1-x_{a_\Gamma+1}^{-1}x_{d_\Gamma+1})(1-x_{a_\Gamma+1}x_r^{-1})}+\right.\\
\left.(1-t)\frac{x_{a_\Gamma+1}^{-1}x_{d_\Gamma+1}}{1-x_{a_\Gamma+1}^{-1}x_{d_\Gamma+1}}+(1-t)\frac{x_{a_\Gamma+1}x_r^{-1}}{1-x_{a_\Gamma+1}x_r^{-1}}\right)+\\
\frac{1-t x_r^{-1} x_{d_\Gamma+1}}{1-x_r^{-1}x_{d_\Gamma+1}}\left((1-t)^2\frac{x_r x_{d_\Gamma+1}^{-1}}{(1-x_{a_\Gamma+1}^{-1}x_r)(1-x_{a_\Gamma+1}x_{d_\Gamma+1}^{-1})}+\right.\\
\left.(1-t)\frac{x_{a_\Gamma+1}^{-1}x_r}{1-x_{a_\Gamma+1}^{-1}x_r}+(1-t)\frac{x_{a_\Gamma+1}x_{d_\Gamma+1}^{-1}}{1-x_{a_\Gamma+1}x_{d_\Gamma+1}^{-1}}\right)+1-t^2\mathbf{=0}.
\end{multline*}
The last equality is verified directly, best on a machine.
\\
{\it Case 2.} There exist at least two distinct $b_i$ (and we are not within case 1).
It suffices to show that for any vertex $v$ of $D_\Gamma(b_1,\ldots,b_{l_\Gamma})$ with tangent cone $C_v$ the contribution $F(\sigma_{\varphi_\Gamma(b_1,\ldots,b_{l_\Gamma})}(C_v))$ is zero.
By Proposition~\ref{faces} the number of connected components in $\Delta_v$ is the number of distinct $b_i$. Let $G_1,\ldots,G_m$ be these components, with $G_i$ containing the $l_i$-th through $r_i$-th vertex from the top row of $\Gamma$. We have the decomposition $$F(\sigma_{\varphi_\Gamma(b_1,\ldots,b_{l_\Gamma})}(C_v))=\prod\limits_{i=1}^m \psi_{G_i}(b_{l_i},\ldots,b_{r_i}).$$ That is because the cone $C_v$ is (a translate of) the direct sum of the cones $D_{G_i}(b_{l_i},\ldots,b_{r_i})$ and for a face $f=\bigoplus_{i=1}^m f_i$ of $C_v$ we have $$\varphi_\Gamma(b_1,\ldots,b_{l_\Gamma})=\prod\limits_{i=1}^m \varphi_{G_i}(b_{l_i},\ldots,b_{r_i})(f_i).$$ However, with the induction hypothesis taken into account, it is clear that at least one of the factors $\psi_{G_i}(b_{l_i},\ldots,b_{r_i})$ is zero.
\\
{\it Case 3.} We have $b_1=b_{l_\Gamma}$ (and we are not within case 1).
Consider any integers $b'_1>\ldots>b'_{l_\Gamma}$ and let $v_1,\ldots,v_m$ be the vertices of $D_\Gamma(b'_1,\ldots,b'_{l_\Gamma})$ with the tangent cones being $C'_1,\ldots,C'_m$. Now Lemma~\ref{singular} in combination with the argument for case 2 show that $$\psi_\Gamma(b_1,\ldots,b_{l_\Gamma})=\frac1{[l_\Gamma]_t!}F\left(\sum\limits_{i=1}^m e^{v_\Gamma(b_1)-v_i}\sigma_{\varphi_\Gamma(b'_1,\ldots,b'_{l_\Gamma})}(C'_i)\right)=0.$$
\end{proof}
\section{Weighted Brion's Theorem for Degenerated Polyhedra}
In order to prove Lemma~\ref{singular} it turns out necessary to occupy ourselves with the following question: how does our weighted version of Brion's theorem behave when we degenerate a polyhedron by shifting some of its facets? Let us elaborate.
We start with the following definition: two polytopes in $\mathbb{R}^d$ are said to be {\it analogous} if their normal fans coincide. The definition of the normal fan of a polytope (also referred to as the ``polar fan"{} or the ``dual fan"{}) may, for example, be found in any textbook on toric geometry. In other words, two polytopes are analogous if there is a combinatorial equivalence between them such that the tangent cones at two corresponding faces may be obtained from one another by a translation.
We then say that a polytope $\Sigma'\subset\mathbb{R}^d$ is a degeneration of a polytope $\Sigma\subset\mathbb{R}^d$ if there is a continuous deformation $\Sigma(\alpha), \alpha\in[0,1]$ such that
\begin{enumerate}
\item $\Sigma(0)=\Sigma$,
\item $\Sigma(1)=\Sigma'$ and
\item $\Sigma(\alpha)$ is a polytope analogous to $\Sigma$ for $0\le\alpha<1$.
\end{enumerate}
One may thus say that we deform $\Sigma$ by continuously shifting its facets (or, rather, the hyperplanes containing its facets) in such a way that the combinatorial structure does not change until we reach time $\alpha=1$.
It is easy to show that if $\Sigma'$ is a degeneration of $\Sigma$, then the normal fan of $\Sigma$ is a refinement of the normal fan of $\Sigma'$. This gives us a map $\pi:\mathcal F_\Sigma\rightarrow\mathcal F_{\Sigma'}$ sending $f\in\mathcal F_\Sigma$ to the minimal face $\pi(f)$ of $\Sigma'$ such that the cone corresponding to $\pi(f)$ in the normal fan of $\Sigma'$ contains the cone corresponding to $f$ in the normal fan of $\Sigma$. The map $\pi$ is surjective and has the property $\dim f\ge\dim \pi(f)$.
A useful fact is that, in terms of Brion's theorem, we may ignore the combinatorial structure having changed as a result of the degeneration. That is to say, the following identity holds: $$\sigma(\Sigma')=\sum\limits_{v\text{ vertex of }\Sigma}e^{\pi(v)-v}\sigma(C_v),$$ where $C_v$ is the corresponding tangent cone and we abuse the notations somewhat, knowing that $\pi(v)$ is a vertex.
We now demonstrate how and why this can be generalized to the weighted setting.
\begin{lemma}\label{wdegen}
In the above setting consider a weight function $\varphi:\mathcal F_\Sigma\rightarrow R$ for some commutative ring $R$. Next define $\varphi':\mathcal F_{\Sigma'}\rightarrow R$ by $$\varphi'(f')=\sum\limits_{f\in\pi^{-1}(f')}(-1)^{\dim f-\dim f'}\varphi(f).$$ Then the identity $$\sigma_{\varphi'}(\Sigma')=\sum\limits_{v\text{ vertex of }\Sigma}e^{\pi(v)-v}\sigma_\varphi(C_v)$$ holds.
\end{lemma}
\begin{proof}
It suffices to show that for any vertex $v'$ of $\Sigma'$ with tangent cone $C_{v'}$ we have $$\sigma_{\varphi'}(C_{v'})=\sum\limits_{\substack{\text{vertex }v\\\pi(v)=v'}}e^{v'-v}\sigma_\varphi(C_v).$$
Consider a face $f$ of $\Sigma$ such that $\pi(f)$ contains $v'$. Let $v_1,\ldots,v_m$ be the vertices of $f$ with $\pi(v_i)=v'$. Let $C_i$ denote the face of $C_{v_i}$ corresponding to (containing) $f$ and let $C'$ be the face of $C_{v'}$ corresponding to $\pi(f)$. We have
\begin{multline*}
\sum\limits_{i=1}^m\sigma(\mathrm{Int}(C_i-v_i+v'))=(-1)^{\dim f}\sum\limits_{i=1}^m\sigma(-(C_i-v_i)+v')=\\(-1)^{\dim f}\sigma(-(C'-v')+v')=(-1)^{\dim f-\dim\pi(f)}\sigma(\mathrm{Int}(C')),
\end{multline*}
where $\mathrm{Int}$ denotes the relative interior of a polyhedron (the polyhedron minus its boundary), $X+a$ is the set $X$ translated by the vector $a$ and $-X$ is $X$ reflected in the origin. The first and third equalities are due to Stanley reciprocity (see~\cite{beckrob}) while the second one is Brion's theorem for the cone $-(C'-v')+v'$ viewed as a degeneration of the polyhedron $-\bigcap_{i=1}^m C_i$. We understand the relative interior of a single point to be itself (rather than the empty set).
Now it remains to point out that adding up the above equalities with coefficients $\varphi(f)$ yields the desired identity.
\end{proof}
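As an illustration, let the segment $\Sigma=[0,N]$ degenerate into the point $\Sigma'=\{0\}$ and let $\varphi$ take values $\varphi_0,\varphi_N$ at the vertices and $\varphi_\Sigma$ at $\Sigma$ itself. All three faces of $\Sigma$ are mapped by $\pi$ to the single vertex of $\Sigma'$, so $\varphi'=\varphi_0+\varphi_N-\varphi_\Sigma$. On the other hand, the right-hand side of the lemma equals $$\varphi_0+\varphi_\Sigma\frac{x}{1-x}+x^{-N}\left(\varphi_Nx^N+\varphi_\Sigma\frac{x^{N-1}}{1-x^{-1}}\right)=\varphi_0+\varphi_N-\varphi_\Sigma,$$ since $\frac{x}{1-x}+\frac{x^{-1}}{1-x^{-1}}=-1$. With the values $\varphi_0=\varphi_N=1$ and $\varphi_\Sigma=1-t$ this computation will reappear as the base of the induction in the proof of Lemma~\ref{graphsum}.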
\section{Proof of Lemma~\ref{singular}}
It is actually somewhat more convenient to prove a generalization of Lemma~\ref{singular}.
For an ordinary graph $\Gamma$ let $b_1,\ldots,b_{l_\Gamma}$ be a strictly decreasing sequence of integers and $b'_1,\ldots,b'_{l_\Gamma}$ be decreasing but not strictly, i.e. at least two of the $b'_i$ coincide. Specifically, let there be $m$ distinct values among the $b'_i$, with the $j$-th largest of those $m$ values occurring $l_j$ times.
The polyhedron $D_\Gamma(b'_1,\ldots,b'_{l_\Gamma})$ is a degeneration of the polyhedron $D_\Gamma(b_1,\ldots,b_{l_\Gamma})$ in the above sense. To see this simply consider continuous functions $b_i(\alpha)$ with $b_i(0)=b_i$, $b_i(1)=b'_i$ and $b_1(\alpha)<\ldots<b_{l_\Gamma}(\alpha)$ for $\alpha<1$. Let $$\pi:\mathcal F_{D_\Gamma(b_1,\ldots,b_{l_\Gamma})}\rightarrow \mathcal F_{D_\Gamma(b'_1,\ldots,b'_{l_\Gamma})}$$ be the corresponding map. Also, let $v_1,\ldots,v_N$ be the vertices of $D_\Gamma(b_1,\ldots,b_{l_\Gamma})$ with respective tangent cones $C_1,\ldots,C_N$. The mentioned generalization is then as follows.
\begin{lemma}\label{gensingular}
$$[l_1]_t!\ldots[l_m]_t!\sigma_{\varphi_\Gamma(b'_1,\ldots,b'_{l_\Gamma})}\left(D_\Gamma(b'_1,\ldots,b'_{l_\Gamma})\right)=\sum\limits_{i=1}^N e^{\pi(v_i)-v_i}\sigma_{\varphi_\Gamma(b_1,\ldots,b_{l_\Gamma})}(C_i).$$
\end{lemma}
However, with Lemma~\ref{wdegen} taken into account, Lemma~\ref{gensingular} is an immediate consequence of the below fact.
\begin{lemma}\label{graphsum}
For a face $f$ of $D_\Gamma(b'_1,\ldots,b'_{l_\Gamma})$ we have $$[l_1]_t!\ldots[l_m]_t!\varphi_\Gamma(b'_1,\ldots,b'_{l_\Gamma})(f)=\sum\limits_{g\in\pi^{-1}(f)}(-1)^{\dim g-\dim f}\varphi_\Gamma(b_1,\ldots,b_{l_\Gamma})(g).$$
\end{lemma}
\begin{proof}
First of all, let us describe the map $\pi$ in terms of corresponding subgraphs. Consider a face $g$ of $D_\Gamma(b_1,\ldots,b_{l_\Gamma})$.
\begin{proposition}
The subgraph $\Delta_{\pi(g)}$ is the smallest subgraph containing all edges of $\Delta_g$ and indeed corresponding to some face of $D_\Gamma(b'_1,\ldots,b'_{l_\Gamma})$, i.e. satisfying the respective three conditions from Proposition~\ref{faces}.
\end{proposition}
\begin{proof}
The tangent cone $C_g$ at $g$ consists of the points $x$ for which all coordinates outside of $\Gamma$ are 0, the coordinates in row $a_\Gamma$ are equal to $b_1,\ldots,b_{l_\Gamma}$ and for any edge of $\Delta_g$ the corresponding inequality between coordinates of $x$ holds. For a face $h$ of $D_\Gamma(b'_1,\ldots,b'_{l_\Gamma})$ the tangent cone $C_h$ is described analogously.
The cone in the normal fan corresponding to $h$ containing the cone in the normal fan corresponding to $g$ is equivalent to $C_g-x_g$ containing $C_h-x_h$, where $x_g$ is an arbitrary point in $g$ and $x_h$ is an arbitrary point in $h$. Let $e$ be an edge of $\Delta_g$ not in $\Delta_h$. For any point $x\in C_g-x_g$ the inequality corresponding to $e$ holds (since the corresponding two coordinates of $x_g$ are equal). However, we may find a point $y\in C_h-x_h$ for which the inequality corresponding to the edge does not hold.
This shows that any edge of $\Delta_g$ must be an edge of $\Delta_{\pi(g)}$, and the minimality of $\Delta_{\pi(g)}$ follows from the minimality of the normal fan cone corresponding to $\pi(g)$.
\end{proof}
We prove the Lemma by induction on the number of vertices in $\Gamma$. The base of the induction is the case of $\Gamma$ having three vertices, two in row $a_\Gamma$ and one in row $a_\Gamma+1$. In this case we are dealing with a segment degenerating into a point and the Lemma simply states that $1+1-(1-t)=[2]_t!\cdot 1=1+t$. We turn to the induction step, which is broken up into two cases.
{\it Case 1.} The graph $\Delta_f$ is not connected.
Let $G_1,\ldots,G_m$ be the connected components of $\Delta_f$ which contain vertices from the top row $a_\Gamma$. Recall that the weight $\varphi(b'_1,\ldots,b'_{l_\Gamma})(f)$ is a product over the components in $\Delta_f$. Let $R$ be the product over the components other than these $G_j$.
The above characterization of $\pi$ shows that any component of $\Delta_f$ not amongst the $G_j$ is also a connected component of $\Delta_g$ for any $g\in\pi^{-1}(f)$. Thus $\varphi(b_1,\ldots,b_{l_\Gamma})(g)$ is a product of $R$ and factors corresponding to components of $\Delta_g$ which are contained in one of the $G_j$.
Write out the induction hypotheses for each of the degenerations of $D_{G_j}(b_1,\ldots,b_{l_j})$ into $D_{G_j}(b,\ldots,b)$ for some integer $b$. The observation in the previous paragraph shows that the product of these $m$ identities with an additional factor of $(-1)^{\dim f}R$ is precisely the desired identity. The induction hypothesis applies since in this case all the $G_j$ have fewer vertices than $\Gamma$.
\\
{\it Case 2.} The graph $\Delta_f$ is connected, i.e.\ $\Delta_f=\Gamma$. This means that $D_\Gamma(b'_1,\ldots,b'_{l_\Gamma})$ is a cone (all of the $b'_i$ are the same) and $f$ is the vertex of that cone.
Denote the common value of all the $b'_i$ by $b$. The preimage $\pi^{-1}(f)$ consists precisely of the bounded faces of $D_\Gamma(b_1,\ldots,b_{l_\Gamma})$ because, for any degeneration, a face $g$ is bounded if and only if $\pi(g)$ is.
\begin{proposition}
A face $g$ of $D_\Gamma(b_1,\ldots,b_{l_\Gamma})$ is bounded if and only if $\Delta_g$ possesses the following two properties.
Whenever both vertices $(i,j)$ and $(i+1,j-1)$ are the leftmost within $\Gamma$ in their respective rows, then $\Delta_g$ includes the edge joining them.
Similarly, if both vertices $(i,j)$ and $(i+1,j)$ are the rightmost within $\Gamma$ in their respective rows, then $\Delta_g$ includes the edge joining them.
\end{proposition}
\begin{proof}
If the conditions are satisfied, then, visibly, every coordinate of any point in $g$ is between $b_1$ and $b_{l_\Gamma}$. Conversely, if the first condition is violated, then $g$ contains points for which the coordinate $(i+1,j-1)$ is arbitrarily large. Similarly, if the second condition is violated, then $g$ contains points for which the coordinate $(i+1,j)$ is arbitrarily small (its negative is arbitrarily large).
\end{proof}
Let $\Gamma'$ be $\Gamma$ with its top row removed. Choose $g$, a bounded face of $D_\Gamma(b_1,\ldots,b_{l_\Gamma})$, and let $\Delta'$ be obtained from $\Delta_g$ by removing the vertices in row $a_\Gamma$. The graph $\Delta'$ is a subgraph of $\Gamma'$.
Since all the vertices in the top row of $\Delta_g$ are in different components, every component of $\Delta'$ contains no more than two vertices from the top row of $\Delta'$. We introduce a nonincreasing sequence of integers $c'_1,\ldots,c'_{l_{\Gamma'}}$ such that $c'_i=c'_{i+1}$ if and only if the $i$-th and $(i+1)$-th vertices from the left in the top row of $\Delta'$ are in the same component.
On top of that, let $c_1,\ldots,c_{l_{\Gamma'}}$ be a strictly decreasing sequence of integers. We have three polyhedra: one is $D_{\Gamma'}(c_1,\ldots,c_{l_{\Gamma'}})$, the second is $D_{\Gamma'}(c'_1,\ldots,c'_{l_{\Gamma'}})$ and the third is the cone $D_{\Gamma'}(c,\ldots,c)$ (where $c$ is an arbitrary integer).
The second polyhedron is a degeneration of the first, while the cone is a degeneration of both others. We have the three corresponding maps of faces:
$$\pi':\mathcal F_{D_{\Gamma'}(c_1,\ldots,c_{l_{\Gamma'}})}\rightarrow\mathcal F_{D_{\Gamma'}(c,\ldots,c)},$$
$$\rho:\mathcal F_{D_{\Gamma'}(c_1,\ldots,c_{l_{\Gamma'}})}\rightarrow\mathcal F_{D_{\Gamma'}(c'_1,\ldots,c'_{l_{\Gamma'}})}\text{ and }$$
$$\upsilon:\mathcal F_{D_{\Gamma'}(c'_1,\ldots,c'_{l_{\Gamma'}})}\rightarrow\mathcal F_{D_{\Gamma'}(c,\ldots,c)}.$$
The induction hypothesis for the vertex $f'$ of $D_{\Gamma'}(c,\ldots,c)$ reads
\begin{equation}\label{hypoth1}
[l_{\Gamma'}]_t!\varphi_{\Gamma'}(c,\ldots,c)(f')=\sum\limits_{h\in(\pi')^{-1}(f')}(-1)^{\dim h}\varphi_{\Gamma'}(c_1,\ldots,c_{l_{\Gamma'}})(h).
\end{equation}
Further, $\Delta'$ corresponds to some face of $D_{\Gamma'}(c'_1,\ldots,c'_{l_{\Gamma'}})$ which we denote $g'$ (so $\Delta'=\Delta_{g'}$). Let $d$ denote the number of pairs $c'_i=c'_{i+1}$. The induction hypothesis applied to $g'$ states that
\begin{equation}\label{hypoth2}
(1+t)^d\varphi_{\Gamma'}(c'_1,\ldots,c'_{l_{\Gamma'}})(g')=\sum\limits_{h\in\rho^{-1}(g')}(-1)^{\dim h-\dim g'}\varphi_{\Gamma'}(c_1,\ldots,c_{l_{\Gamma'}})(h).
\end{equation}
Now denote by $I_g$ the graph obtained from $\Delta_g$ by removing all vertices below row $a_\Gamma+1$, that is, leaving only the top two rows. For bounded faces of $D_\Gamma(b_1,\ldots,b_{l_\Gamma})$ write $g_1\sim g_2$ if and only if $I_{g_1}=I_{g_2}$. The faces in the equivalence class of $g$ are in bijection with the bounded faces of $D_{\Gamma'}(c'_1,\ldots,c'_{l_{\Gamma'}})$, a face $g_1$ corresponding to the face $g'_1$ (defined analogously to $g'$).
The previous paragraph shows that adding up identities~(\ref{hypoth2}) for all $g_1\sim g$ gives
\begin{multline}\label{total}
(1+t)^d\sum\limits_{g'_1\in\upsilon^{-1}(f')}(-1)^{\dim g'_1}\varphi_{\Gamma'}(c'_1,\ldots,c'_{l_{\Gamma'}})(g'_1)=\\
\sum\limits_{h\in(\pi')^{-1}(f')}(-1)^{\dim h}\varphi_{\Gamma'}(c_1,\ldots,c_{l_{\Gamma'}})(h),
\end{multline}
$\upsilon^{-1}(f')$ being precisely the set of bounded faces of $D_{\Gamma'}(c'_1,\ldots,c'_{l_{\Gamma'}})$. The sum in the right-hand side ranges over all of $(\pi')^{-1}(f')$ because $\pi'=\upsilon\rho$.
Denote by $e$ the number of vertices in row $a_\Gamma+1$ in $\Delta_g$ which are not connected to any vertex from the top row $a_\Gamma$. For any $g_1\sim g$ we have
\begin{equation}\label{gviag}
\varphi_\Gamma(b_1,\ldots,b_{l_\Gamma})(g_1)=(1-t)^e(1-t^2)^d\varphi_{\Gamma'}(c'_1,\ldots,c'_{l_{\Gamma'}})(g'_1).
\end{equation}
What we do now is substitute~(\ref{hypoth1}) into~(\ref{total}) and then substitute~(\ref{gviag}) into the result. Taking into account that $\dim g_1=\dim g'_1+e$, we obtain
\begin{equation}\label{subst}
\sum\limits_{g_1\sim g}(-1)^{\dim g_1}\varphi_\Gamma(b_1,\ldots,b_{l_\Gamma})(g_1)=(-1)^e(1-t)^{d+e}[l_{\Gamma'}]_t!\varphi_{\Gamma'}(c,\ldots,c)(f').
\end{equation}
We denote by $\nu(I_g)$ the coefficient $(-1)^e(1-t)^{d+e}$, both $d$ and $e$ being determined by $I_g$.
On the other hand, we have $$[l_\Gamma]_t!\varphi_\Gamma(b,\ldots,b)(f)=\kappa_\Gamma[l_{\Gamma'}]_t!\varphi_{\Gamma'}(c,\ldots,c)(f'),$$ where
\begin{equation}\label{fviaf}
\kappa_\Gamma=
\begin{cases}
\frac{1-t^{l_\Gamma}}{1-t}&\text{if } l_{\Gamma'}=l_\Gamma-1,\\
1& \text{if } l_{\Gamma'}=l_\Gamma,\\
1-t& \text{if } l_{\Gamma'}=l_\Gamma+1.
\end{cases}
\end{equation}
Therefore, if we sum up identity~(\ref{subst}) with $g$ ranging over a set $S$ of representatives for relation $\sim$, all that will be left to prove is the following proposition.
\begin{proposition}
$\sum\limits_{g\in S} \nu(I_g)=\kappa_\Gamma.$
\end{proposition}
\begin{proof}
$I_g$ is a graph with two rows. Each vertex from the lower row is either connected to one of the (one or two) vertices directly above it or is isolated. The fact that $g$ is bounded translates into the following two additional requirements. If the leftmost vertex in the lower row has no upper-left neighbor, it is necessarily connected to its upper-right neighbor (i.e.\ it is not isolated). Similarly, if the rightmost vertex in the lower row has no upper-right neighbor, it is necessarily connected to its upper-left neighbor. Here are examples for each of the three cases from definition~(\ref{fviaf}).
\begin{picture}(105,20)
\put(15,15){\line(-1,-1){5}} \put(15,15){\line(1,-1){5}}
\put(5,15){\circle*{0.4}} \put(25,15){\circle*{0.4}} \put(20,10){\circle*{0.4}}
\put(7,3){$d=1$, $e=0$}
\put(60,15){\line(-1,-1){5}} \put(60,15){\line(1,-1){5}}
\put(50,15){\circle*{0.4}} \put(40,15){\circle*{0.4}} \put(45,10){\circle*{0.4}}
\put(44,3){$d=1$, $e=1$}
\put(85,15){\line(-1,-1){5}} \put(95,15){\line(1,-1){5}} \put(105,15){\line(1,-1){5}}
\put(90,10){\circle*{0.4}}
\put(87,3){$d=0$, $e=1$}
\end{picture}
Coincidentally, in each of the three cases the number of different possible $I_g$ is $3^{l_\Gamma-1}$.
We denote
$$
\sum\limits_{g\in S} \nu(I_g)=
\begin{cases}
\Sigma_{l_\Gamma}^-&\text{if } l_{\Gamma'}=l_\Gamma-1,\\
\Sigma_{l_\Gamma}^0& \text{if } l_{\Gamma'}=l_\Gamma,\\
\Sigma_{l_\Gamma}^+& \text{if } l_{\Gamma'}=l_\Gamma+1.
\end{cases}
$$
The Proposition follows directly from the recurrence relations
$$\Sigma_{l+1}^-=-(1-t)\Sigma_{l}^-+\Sigma_{l}^-+\Sigma_{l}^0,$$
$$\Sigma_{l+1}^0=-(1-t)\Sigma_{l}^-+(1-t)\Sigma_{l}^-+\Sigma_{l}^0,$$
$$\Sigma_{l+1}^+=-(1-t)\Sigma_{l}^0+(1-t)\Sigma_{l}^0+\Sigma_{l}^+.$$
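Indeed, after cancellation the recurrences reduce to $$\Sigma_{l+1}^-=t\Sigma_{l}^-+\Sigma_{l}^0,\qquad\Sigma_{l+1}^0=\Sigma_{l}^0,\qquad\Sigma_{l+1}^+=\Sigma_{l}^+.$$ Together with the base values $\Sigma_1^-=\Sigma_1^0=1$ and $\Sigma_1^+=1-t$ (which agree with $\kappa_\Gamma$ for $l_\Gamma=1$), induction on $l$ gives $$\Sigma_{l}^-=1+t+\ldots+t^{l-1}=\frac{1-t^{l}}{1-t},\qquad\Sigma_{l}^0=1,\qquad\Sigma_{l}^+=1-t,$$ which is precisely $\kappa_\Gamma$ in each of the three cases.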
\end{proof}
We have completed the consideration of Case 2 and thereby the induction step.
\end{proof}
\section{Application to the Finite Case}
With the above machinery at hand, little more effort is needed to prove Theorem~\ref{contribfin}. We give an outline of this argument, the details being filled in straightforwardly.
Let $\lambda$ be an integral dominant $\mathfrak{sl}_n$-weight. As mentioned above, the polytope $GT_\lambda$ is in a natural bijection with the polytope $D_{\mathcal T}(\lambda_1,\ldots,\lambda_{n-1},0)$. Moreover, for an integer point $A$ in $GT_\lambda$ we visibly have $$p_A=\varphi_{\mathcal T}(\lambda_1,\ldots,\lambda_{n-1},0)(f),$$ where $f$ is the minimal face containing $A$. We obtain $$\sum_{A\in\mathbf{GT}_\lambda}p_A e^{\mu_A}=\psi_{\mathcal T}(\lambda_1,\ldots,\lambda_{n-1},0)|_{x_0=x_n=1}.$$
Now, the distinguished set of vertices of $GT_\lambda$, mentioned in Theorem~\ref{contribfin}, can be described in terms of $D_{\mathcal T}(\lambda_1,\ldots,\lambda_{n-1},0)$ as follows. They are those vertices $v$ for which $\Delta_v$ contains no component which has more vertices in some row $i$ than in row $i-1$ (with $i>0$). We term those vertices ``relevant'', the rest being ``non-relevant''.
Indeed, let the partition $(\lambda_1,\ldots,\lambda_{n-1},0)$ have type $l_1,\ldots,l_m$, i.e. $r$-th largest part occurs $l_r$ times. Choose a vertex $v$. We see that $\Delta_v$ has $m$ connected components, denote them $\Gamma_1,\ldots,\Gamma_m$. If $C_v$ is the tangent cone to $D_{\mathcal T}(\lambda_1,\ldots,\lambda_{n-1},0)$, we have $$C_v=D_{\Gamma_1}(\lambda_1,\ldots,\lambda_{l_1})\times D_{\Gamma_2}(\lambda_{l_1+1},\ldots,\lambda_{l_1+l_2})\times\ldots$$ and, consequently, $$F(\sigma_{\varphi_{\mathcal T}(\lambda_1,\ldots,\lambda_{n-1},0)}(C_v))=\psi_{\Gamma_1}(\lambda_1,\ldots,\lambda_{l_1}) \psi_{\Gamma_2}(\lambda_{l_1+1},\ldots,\lambda_{l_1+l_2})\ldots$$
The contributions of non-relevant vertices being zero now follows from Theorem~\ref{zero}.
Now, let us consider the relevant vertices. In the case of $\lambda$ being regular the following facts can be easily deduced (and are found in~\cite{me1}). There are exactly $n!$ relevant vertices of $GT_\lambda$. For each $w\in W$ we have exactly one relevant vertex with $\mu_v=w\lambda$, denote this vertex $v_w$. The tangent cone at $v_w$ is simplicial and unimodular. Let $\varepsilon_{w,1},\ldots,\varepsilon_{w,{n\choose 2}}$ be the generators of edges of tangent cone $C_{v_w}$. The set $$\left\{F(e^{\varepsilon_{w,1}}),\ldots,F\left(e^{\varepsilon_{w,{n\choose 2}}}\right)\right\}$$ coincides with the set $\{e^{-w\alpha},\alpha\in\Phi^+\}$. Finally, for a face $f$ of $C_{v_w}$ we have $\varphi(f)=(1-t)^{\dim f}$ (in the notations of Section~\ref{gtbrion}).
All of this together translates into the formula for the contribution of a relevant vertex provided by Theorem~\ref{contribfin}.
If $\lambda$ is singular, we reduce to the regular case. Indeed, let $\lambda^1$ be some regular integral dominant weight. We see that $GT_{\lambda}$ is a degeneration of $GT_{\lambda^1}$. If $\pi$ is the corresponding map between face sets, we have $\pi(v_{w_1}^1)=\pi(v_{w_2}^1)$ (relevant vertices of $GT_{\lambda^1}$) if and only if $w_1\lambda=w_2\lambda$. Since our degeneration coincides with the degeneration of $D_{\mathcal T}(\lambda_1^1,\ldots,\lambda_{n-1}^1,0)$ into $D_{\mathcal T}(\lambda_1,\ldots,\lambda_{n-1},0)$, we may apply Lemma~\ref{gensingular} to show that for a vertex $v$ of $GT_\lambda$ we have $$F(\sigma_\varphi(C_v))=\frac 1{[l_1]_t!\ldots[l_m]_t!}\sum_{\pi(v_w^1)=v}F(e^{v-v_w^1}\sigma_{\varphi^1}(C_{v_w^1})).$$
With the regular case taken into account, the above identity proves Theorem~\ref{contribfin} for the case of singular $\lambda$.
The structure of the above argument is, in its essence, the same as that of the argument we give in Section~\ref{last} to prove Theorem~\ref{contrib}. However, significant care is needed to deal with the infinite dimension of the polyhedra; the tools necessary for that will be developed in the first two sections of Part~\ref{proof}.
We finish this part off by showing how applying Lemma~\ref{graphsum} in the above situation provides some identities in $\mathbb{Z}[t]$ which we find to be fascinating. Indeed, let $\lambda$ be some integral dominant $\mathfrak{sl}_n$-weight and let $\lambda^1$ be such a weight which is also regular. Apply Lemma~\ref{graphsum} to the degeneration of $D_{\mathcal T}(\lambda_1^1,\ldots,\lambda_{n-1}^1,0)$ into $D_{\mathcal T}(\lambda_1,\ldots,\lambda_{n-1},0)$ and then to the degeneration of $D_{\mathcal T}(\lambda_1^1,\ldots,\lambda_{n-1}^1,0)$ into $D_{\mathcal T}(0,\ldots,0)$ (which is a point). Combining the results provides
\begin{theorem}
$$\sum_{\substack{f\text{ face}\\\text{of }GT_\lambda}}(-1)^{\dim f}\varphi(f)={{n\choose{l_1,\ldots,l_m}}\!}_t\,,$$ where $l_1,\ldots,l_m$ is the type of partition $(\lambda_1,\ldots,\lambda_{n-1},0)$ and we refer to the $t$-multinomial coefficient.
\end{theorem}
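Here ${{n\choose{l_1,\ldots,l_m}}\!}_t$ is the usual $t$-analog of the multinomial coefficient, $${{n\choose{l_1,\ldots,l_m}}\!}_t=\frac{[n]_t!}{[l_1]_t!\ldots[l_m]_t!}.$$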
In particular, when $\lambda$ is regular, the right-hand side is simply $[n]_t!$.
\part{Structure of $\Pi$ and Proof of Theorem~\ref{contrib}}\label{proof}
For the entirety of this Part we consider $\lambda$ to be a fixed nonzero integral dominant $\widehat{\mathfrak sl}_n$-weight, $n$ also being fixed. All the definitions from the Part~\ref{part1} should be understood with respect to these values.
\section{The Brion-type Theorem for $\bar\Pi$}
Recall the infinite-dimensional polyhedron $\Pi$ introduced in Section~\ref{affbrion}.
We call a nonempty $f\subset\Pi$ a face of $\Pi$ if it is the intersection of $\Pi$ and some of the spaces $$E_i=\{x|x_i=0\}\cap V$$ and $$H_i=\{x|\chi_i(x)=k\}\cap V.$$ The faces form a lattice with respect to inclusion; a vertex is any minimal element in this lattice.
\begin{proposition}\label{vert}
The vertices of $\Pi$ are precisely those points $x\in\Pi$ for which, for any $i$, at least one of $x_i=0$ or $\chi_i(x)=k$ holds.
\end{proposition}
\begin{proof}
Consider the face $$\Pi^l=\Pi\cap\bigcap_{i\le l}H_i.$$ In~\cite{me2} the statement of the Proposition was proved for vertices contained in $\Pi^0$. Since any $\Pi^l$ is obtained from $\Pi^0$ by the operator $(x_i)\rightarrow(x_{i+l})$, the Proposition also holds for vertices contained in any $\Pi^l$, but the $\Pi^l$ exhaust $\Pi$.
\end{proof}
We see that for $x\in V$ we have $s_{i,j}(x)=s_{i-1,j}(x)$ if and only if $x\in H_{in+j(n-1)}$ and $s_{i,j}(x)=s_{i-1,j+1}(x)$ if and only if $x\in E_{in+j(n-1)}$. This shows that the weight $p(x)$ depends only on the minimal face of $\Pi$ containing $x$. Every point is contained in some finite-dimensional face and every finite-dimensional face is a finite-dimensional polyhedron. Thus for any finite-dimensional face $f$ of positive dimension we may take a point $x$ in its relative interior and see that $f$ is the minimal face containing $x$. If we then define $\varphi(f)=p(x)$, we obtain a function $$\varphi:\mathcal F_\Pi\rightarrow\mathbb{Z}[t],$$ where $\mathcal F_\Pi$ is the set of all finite-dimensional faces of $\Pi$.
We now set out to define the series $\tau_{\bar v}$ mentioned in Section~\ref{affbrion}.
At any vertex $v$ of $\Pi$ we have the tangent cone $$C_v=\{v+\alpha(x-v),x\in\Pi,\alpha\ge 0\}.$$ For any face $f$ containing $v$ we have the corresponding face of $C_v$: $$f_v=\{v+\alpha(x-v),x\in f,\alpha\ge 0\}.$$
For any edge (one-dimensional face) $e$ of $\Pi$ containing $v$ we have its generator, the minimal integer vector $\varepsilon$ such that $v+\varepsilon\in e$. Let $\{\varepsilon_{v,i},i>0\}$ be the set of generating vectors of all edges containing $v$. Any point of $C_v$ is obtained from $v$ by adding a non-negative linear combination of these $\varepsilon_{v,i}$.
We will make use of the following propositions.
For a Laurent monomial $y$ in $z_1,\ldots,z_{n-1},q$ denote by $\deg y$ the power in which $y$ contains $q$. Also, recall the specialization $G$ given by~(\ref{affspec}).
\begin{proposition}\label{qbig}
For any vertex $v$ and any $N\in\mathbb{Z}$ there is only a finite number of $i$ such that $\deg G(e^{\varepsilon_{v,i}})<N$.
\end{proposition}
\begin{proof}
We use the following fact. For any $M$ there is only a finite number of vertices $u$ with $\deg G(e^{\bar u})<M$. This, for example, follows from the fact that the sum of these monomials over all integer points in $\Pi$ (including all vertices) is $e^{-\lambda}\charac{L_\lambda}$.
Every edge of $\Pi$ is a segment joining two vertices. In other words, for every $\varepsilon_{v,i}$ there is a positive integer $K$ such that $v+K\varepsilon_{v,i}$ is some other vertex $u_i$. Let $l$ be the number of the first nonzero coordinate in $\varepsilon_{v,i}$ and let that coordinate be equal to $c$. We have $\chi_l(u_i)=\chi_l(v)+Kc$, which shows that $K\le k$.
However, $$\deg G(e^{\bar u_i})-\deg G(e^{\bar v})=K\deg G(e^{\varepsilon_{v,i}}).$$ Now we see that an infinite number of $\varepsilon_{v,i}$ with $\deg G(e^{\varepsilon_{v,i}})<N$ would contradict the fact in the beginning of the proof.
\end{proof}
\begin{proposition}\label{fundpar}
Consider any finite-dimensional rational cone $C$ and a map $\psi:\mathcal F_C\rightarrow R$ for some commutative ring $R$. Let $\varepsilon_1,\ldots,\varepsilon_m$ be the generators of the edges of $C$. Then
\begin{equation}\label{numer}
(1-e^{\varepsilon_1})\ldots(1-e^{\varepsilon_m})S_\psi(C)
\end{equation}
is a linear combination of exponentials of points of the form $$v+\alpha_1\varepsilon_1+\ldots+\alpha_m\varepsilon_m$$ with all $\alpha_i\in[0,1]$, where $v$ is the vertex of $C$.
\end{proposition}
\begin{proof}
Consider a triangulation of $C$ by simplicial cones, each cone being generated by some of the $\varepsilon_i$. Let $T$ be a face of one of the cones. We may assume that $T$ is generated by $\varepsilon_1,\ldots,\varepsilon_l$. The expression $$(1-e^{\varepsilon_1})\ldots(1-e^{\varepsilon_l})S(\mathrm{Int}(T))$$ is precisely the sum of exponentials of all integer points within the parallelepiped $$\{v+\alpha_1\varepsilon_1+\ldots+\alpha_l\varepsilon_l,\alpha_i\in(0,1]\}.$$
Let $f$ be the minimal face of $C$ containing $T$. We see that $$\psi(f)(1-e^{\varepsilon_1})\ldots(1-e^{\varepsilon_m})S(\mathrm{Int}(T))$$ is a sum of exponentials of the desired type. However, expression~(\ref{numer}) is the sum of the above expressions over all $T$ plus $\psi(v)e^{v}$, where $v$ is the vertex of $C$.
\end{proof}
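As a simple illustration of the Proposition, consider a one-dimensional cone $C=\{v+\alpha\varepsilon,\alpha\ge 0\}$ with the weights $\psi(\{v\})$ and $\psi(C)$ attached to its two faces. Then $$(1-e^{\varepsilon})S_\psi(C)=(1-e^{\varepsilon})\left(\psi(\{v\})e^{v}+\psi(C)\frac{e^{v+\varepsilon}}{1-e^{\varepsilon}}\right)=\psi(\{v\})e^{v}+\left(\psi(C)-\psi(\{v\})\right)e^{v+\varepsilon},$$ a combination of exponentials of the points $v$ and $v+\varepsilon$ only, both of the required form.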
Let us denote $C_{\bar v}=\widebar{C_v}$. Note that the generators of edges of $C_{\bar v}$ comprise the same set $\{\varepsilon_{v,i}\}$. In the below arguments we switch somewhat freely between $C_{\bar v}$ and $C_v$ and their attributes. The reader should be attentive not to miss the $\bar{}$ and keep in mind that in most ways the structure of these cones is the same.
For any point $x\in C_v$ we may now define the function $$p_v(x)=\varphi\left(\min_{x\in f_v}f\right).$$ We have the formal Laurent series in the variables $t_i$ $$S_{\bar\varphi}(C_{\bar v})=\sum_{x\in C_{\bar v}\cap\mathbb{Z}^{{}^\infty}}\widebar{p_v}(x)e^x.$$
In what follows we implicitly use the fact that $G(e^{\varepsilon_{v,i}})\neq 1$. This will be proved in the next section.
Consider the cone $C_v-v=C_{\bar v}-\bar v$ with vertex at the origin. Just like for a finite dimensional cone, Laurent series that are sums of monomials $e^x$ with $x\in C_v-v$ comprise a ring. Both $e^{-v}S_{\bar\varphi}(C_{\bar v})$ and the product $$(1-e^{\varepsilon_{v,1}})(1-e^{\varepsilon_{v,2}})\ldots$$ are elements of that ring and thus the product $$Q_v=S_{\bar\varphi}(C_{\bar v})(1-e^{\varepsilon_{v,1}})(1-e^{\varepsilon_{v,2}})\ldots$$ is well-defined.
\begin{lemma}\label{welldef}
$G(Q_v)$ is a well-defined element of $\mathfrak S$.
\end{lemma}
\begin{proof}
We are to show that for any integer $N$ among those monomials $e^x$ that occur in $Q_v$ with a nonzero coefficient there is only a finite number for which $\deg G(e^x)<N$.
For $l\gg 0$ the intersection $$C_{v,l}=C_{\bar v} \cap\bigcap_{i<-l}\widebar{H_i}\cap\bigcap_{i>l}\widebar{E_i}$$ is a finite-dimensional cone with vertex $\bar v$ and is a face of $C_{\bar v}$. We thus have an increasing sequence of faces that exhausts $C_{\bar v}$. Every edge of cone $C_{v,l}$ is an edge of $C_{\bar v}$. Choose some cone $C_{v,l}$ and suppose that its edges are generated by $\varepsilon_{v,1},\ldots,\varepsilon_{v,m}$. We then denote $$Q_{v,l}=(1-e^{\varepsilon_{v,1}})\ldots(1-e^{\varepsilon_{v,m}})S_{\bar\varphi}(C_{v,l}),$$ where $\bar\varphi$ is evaluated in faces of $C_{v,l}$ in the natural way. Evidently, the coefficient of $e^x$ in $Q_{v,l}$ stabilizes onto the coefficient of $e^x$ in $Q_v$ as $l$ approaches infinity. We prove the lemma by showing that for $l\gg 0$ the difference $Q_{v,l}-Q_{v,l-1}$ has a zero coefficient at any monomial $e^x$ with $\deg G(e^x)<N$.
Let $S$ be the set of those $\varepsilon_{v,i}$ for which $\deg G(e^{\varepsilon_{v,i}})<0$ and let $$K=\deg\left(\prod_{\varepsilon_{v,i}\in S}G(e^{\varepsilon_{v,i}})\right).$$ We show that $Q_{v,l}-Q_{v,l-1}$ has a zero coefficient at any monomial $e^x$ with $\deg G(e^x)<N$ whenever the following holds. For every $\varepsilon_{v,i}$ which generates an edge contained in $C_{v,l}$ but not $C_{v,l-1}$ one has $\deg G(e^{\varepsilon_{v,i}})\ge N-K$. This visibly holds for all $l\gg 0$, fix some $l$ for which it does.
Proposition~\ref{fundpar} shows that for every $e^x$ which appears in $Q_{v,l}$ the vector $x$ is of the form $$\bar v+\alpha_1\varepsilon_{v,1}+\ldots+\alpha_m\varepsilon_{v,m},\alpha_i\in[0,1].$$ If, however, $e^x$ appears in $Q_{v,l}-Q_{v,l-1}$, then we must have $\alpha_i>0$ for some $\varepsilon_{v,i}$ which generates an edge contained in $C_{v,l}$ but not $C_{v,l-1}$. This is since $C_{v,l-1}$ is a face of $C_{v,l}$.
We fix $x$ such that $e^x$ appears in $Q_{v,l}-Q_{v,l-1}$ and $$x=\bar v+\alpha_1\varepsilon_{v,1}+\ldots+\alpha_m\varepsilon_{v,m},\alpha_i\in[0,1]$$ and show that $$\sum_{\substack{i\in[1,m],\\\bar v+\varepsilon_{v,i}\not\in C_{v,l-1}}}\alpha_i\ge 1.$$ This completes the proof since we have $$\deg G(e^x)\ge K+(N-K)\sum_{\substack{i\in[1,m],\\\bar v+\varepsilon_{v,i}\not\in C_{v,l-1}}}\alpha_i.$$
To prove this last assertion we use the following fact about the $\varepsilon_{v,i}$, which may be extracted from~\cite{me2}. All the nonzero coordinates (terms) of $\varepsilon_{v,i}$ are either $-1$ or 1.
From the fact that $\bar v\in C_{v,l}$ we see that $\bar v_i=0$ whenever $i<-l$ or $i>l$. Further we see that if $\varepsilon_{v,i}$ generates an edge contained in $C_{v,l}$ but not in $C_{v,l-1}$, then it has a nonzero coordinate with number either $-l-1$ or $l+1$. Moreover, the fact that $v\in H_{-l-1}$ and $v\in E_{l+1}$ implies the following. If the coordinate of $\varepsilon_{v,i}$ with number $-l-1$ is nonzero, then this coordinate must be $-1$ in order to have $\chi_{-l-1}(v+\varepsilon_{v,i})\le k$. Also, if the coordinate of $\varepsilon_{v,i}$ with number $l+1$ is nonzero, then this coordinate must be 1 in order for the corresponding coordinate of $v+\varepsilon_{v,i}$ to be nonnegative. Herefrom we deduce that if $$\sum_{\substack{i\in[1,m],\\\bar v+\varepsilon_{v,i}\not\in C_{v,l-1}}}\alpha_i<1,$$ then the coordinate of $x$ with number either $-l-1$ or $l+1$ turns out to be non-integral.
\end{proof}
Now we can finally define
\begin{equation}\label{tau}
\tau_{\bar v}=\frac{G(Q_v)}{(1-G(e^{\varepsilon_{v,1}}))(1-G(e^{\varepsilon_{v,2}}))\ldots}.
\end{equation}
Proposition~\ref{qbig} shows that the denominator is indeed an invertible element of $\mathfrak S$.
The proof above shows that $G(Q_v)$ contains no monomials with powers of $q$ less than $\deg G(e^{\bar v})+K$ (the number $K$ is defined in the proof). Also, it is obvious that the denominator of~(\ref{tau}) contains no monomials with powers of $q$ less than $K$. This shows that $\tau_{\bar v}$ only contains powers of $q$ no less than $\deg G(e^{\bar v})$.
Furthermore, consider a cone $C_{v,l}$ from the proof and let it be generated by $\varepsilon_{v,1},\ldots,\varepsilon_{v,m}$. One also sees that $G(Q_{v,l})$ contains no monomials with powers of $q$ less than $\deg G(e^{\bar v})+K$ and $$G((1-e^{\varepsilon_{v,1}})\ldots(1-e^{\varepsilon_{v,m}}))$$ contains no monomials with powers of $q$ less than $K$. Consequently, we may view the quotient of $G(Q_{v,l})$ by the above product as $\tau_{\bar v,l}\in\mathfrak S$ which only contains powers of $q$ no less than $\deg G(e^{\bar v})$. (As a rational function this quotient is, of course, $G(\sigma_{\bar\varphi}(C_{v,l}))$.)
These observations are necessary to obtain the goal of this section.
\begin{proof}[Proof of Theorem~\ref{infbrion}.]
Let $$\bar\Pi_l=\Pi\cap\bigcap_{i<-l}\widebar{H_i}\cap\bigcap_{i>l}\widebar{E_i}.$$ Theorem~\ref{wbrion} shows that
\begin{equation}\label{finbrion}
G(S_{\bar\varphi}(\Pi_l))=\sum_{\substack{\bar v\text{ vertex}\\\text{of }\Pi_l}}\tau_{\bar v,l}.
\end{equation}
Obviously, the coefficients of the series in $q$ on the left stabilize onto the coefficients of $G(S_{\bar\varphi}(\bar\Pi))$. Also, for any $v$ the coefficients of the series $\tau_{\bar v,l}$ stabilize onto the coefficients of $\tau_{\bar v}$.
The remarks preceding the proof show that for any integer $N$ there is only a finite number of vertices $v$ for which $\tau_{\bar v,l}$ may contain a power of $q$ less than $N$. This shows that the infinite sum $$\sum_{\substack{\bar v\text{ vertex}\\\text{of }\Pi}}\tau_{\bar v}$$ is well-defined and that the coefficients of the right-hand side of~(\ref{finbrion}) stabilize onto this infinite sum's coefficients.
\end{proof}
\section{Assigning Lattice Subgraphs to Faces of $\Pi$}\label{proofintro}
First we define a subgraph $\Theta(x)\subset\mathcal R$ for any point $x\in\Pi$. The vertices of $\Theta(x)$ are all the vertices of $\mathcal R$. An edge of $\mathcal R$ connecting $(i_1,j_1)$ and $(i_2,j_2)$ is in $\Theta(x)$ if and only if $s_{i_1,j_1}(x)=s_{i_2,j_2}(x)$.
Now, for a finite-dimensional face $f$ we take a point $x$ such that $f$ is the minimal face containing $x$. We see that the subgraph $\Theta(x)$ does not depend on the choice of $x$ and we define $\Theta_f=\Theta(x)$. Visibly, whenever $f\subset g$ the graph $\Theta_g$ is a subgraph of $\Theta_f$.
Relation~(\ref{shift}) shows that the graph $\Theta_f$ is invariant under the shift $(i,j)\rightarrow(i-n+1,j+n)$. This means that its connected components are divided into equivalence classes, with two components being equivalent if and only if they can be identified by an iteration of this shift. We choose a set of representatives and denote the union of these components $\Delta_f\subset \Theta_f$.
Moreover, relation~(\ref{shift}) shows that $(i,j)$ and $(i-n+1,j+n)$ are never in one component of $\Theta_f$. This means that for every integer $l$ there is exactly one vertex $(i,j)\in\Delta_f$ with $in+j(n-1)=l$. We denote this vertex $(\eta_f(l),\theta_f(l))$. We also see that the edges of $\Delta_f$ are in one-to-one correspondence with those hyperplanes $E_l$ and $H_l$ which contain $f$.
Now consider a vertex $v$ of $\Pi$. We can define a change of coordinates on $V$ in terms of the graph $\Delta_v$. The new coordinates will be labeled by pairs $(i,j)$ such that $(i,j)$ is a vertex of $\Delta_v$. The corresponding coordinate of $x$ is simply $s_{i,j}(x)$. Definition~(\ref{gtdef}) together with the previous paragraph show that this is indeed a nondegenerate change of coordinates and the new coordinates of a point are integral if and only if this point was integral.
\begin{proposition}\label{imcv}
For a point $x\in V$ we have $x\in C_v$ if and only if for any edge of $\Delta_v$ joining vertices $(i_1,j_1)$ and $(i_2,j_2)$ the coordinates $s_{i_1,j_1}(x)$ and $s_{i_2,j_2}(x)$ satisfy the corresponding inequality.
\end{proposition}
\begin{proof}
This is evident from the fact that $x\in C_v$ if and only if $x_l\ge 0$ whenever $v\in E_l$ and $\chi_l(x)\le k$ whenever $v\in H_l$.
\end{proof}
We proceed to give an extensive list of properties of the introduced objects.
\begin{proposition}\label{vertgraph}
If $v$ is a vertex of $\Pi$, then every vertex of $\Delta_v$ is connected to one of its two upper neighbors.
\end{proposition}
\begin{proof}
This is evident from Proposition~\ref{vert}.
\end{proof}
\begin{proposition}
Whenever $(i,j)$ and $(i,j+1)$ are in the same connected component of $\Delta_v$ the vertices $(i-1,j+1)$ and $(i+1,j)$ (i.e. the two common neighbors of $(i,j)$ and $(i,j+1)$) are also in that same component of $\Delta_v$.
\end{proposition}
\begin{proof}
This is evident from the fact that the array $s_{i,j}(v)$ is a plane-filling GT-pattern.
\end{proof}
Next visualize a cycle graph with $n$ vertices labeled $0,\ldots,n-1$ and its subgraph determined by the following rule. Vertices $i$ and $i+1$ are adjacent in the subgraph whenever $a_{i+1}=0$ (all indices are to be read modulo $n$). Since $\lambda\neq 0$ this subgraph is a disjoint union of $m(\lambda)$ path graphs of sizes $l_1,\ldots,l_{m(\lambda)}$. The numbers $m(\lambda)$ and $l_1,\ldots,l_{m(\lambda)}$ are important characteristics of $\lambda$. We point out straight away that, as is well known, the stabilizer $$W_\lambda\simeq S_{l_1}\times\ldots\times S_{l_{m(\lambda)}}$$ and $$W_\lambda(t)=[l_1]_t!\ldots[l_{m(\lambda)}]_t!.$$
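For example, suppose $n=4$ and exactly $a_1=a_2=0$ (a hypothetical configuration chosen only for illustration). The subgraph then has the edges $\{0,1\}$ and $\{1,2\}$, so it is the disjoint union of a path on the vertices $0,1,2$ and the single vertex $3$. Hence $m(\lambda)=2$, $(l_1,l_2)=(3,1)$, $W_\lambda\simeq S_3\times S_1$ and $W_\lambda(t)=[3]_t!\,[1]_t!=(1+t)(1+t+t^2)$.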
\begin{proposition}\label{uplimit}
For any vertex $v$ of $\Pi$ the number of connected components in $\Delta_v$ is $m(\lambda)$. Moreover, they can be labeled $\Gamma_1,\ldots,\Gamma_{m(\lambda)}$ in such a way that for $i\ll 0$ component $\Gamma_r$ contains exactly $l_r$ vertices from row $i$.
\end{proposition}
\begin{proof}
Consider some $r\in[1,m(\lambda)]$. Due to the definition of the integers $l_r$ we can specify an integer ${I_r}$ with the following properties.
\begin{enumerate}
\item For $l<{I_r}+n^2$ one has $v_l=a_{l\bmod n}$.
\item One has $v_{I_r}=v_{{I_r}-1}=\ldots=v_{{I_r}-l_r+2}=0.$ \\($l_r-1$ consecutive terms.)
\item One has $v_{{I_r}+1}\neq 0$ and $v_{{I_r}-l_r+1}\neq 0$.
\end{enumerate}
The above translates into the following statement about the plane-filling GT-pattern associated with $v$. Each of the elements \[s_{\eta_v({I_r}),\theta_v({I_r})}(v),\ldots,s_{\eta_v({I_r}),\theta_v({I_r})+l_r-1}(v)\] (\(l_r\) consecutive elements in row \(\eta_v({I_r})\)) is equal to its respective upper-left neighbor. That is due to Property 1 of ${I_r}$ above. Also, $s_{\eta_v({I_r}),\theta_v({I_r})}(v)$ and the $l_r-2$ elements to its right are equal to their respective upper-right neighbors. That is due to Property 2. However, $s_{\eta_v({I_r}),\theta_v({I_r})-1}(v)$ is not equal to its upper-right neighbor $s_{\eta_v({I_r})-1,\theta_v({I_r})}(v)$ and $s_{\eta_v({I_r}),\theta_v({I_r})+l_r-1}(v)$ is not equal to its upper-right neighbor $s_{\eta_v({I_r})-1,\theta_v({I_r})+l_r}(v)$. That is by Property 3.
We have established that vertex $(\eta_v({I_r}),\theta_v({I_r}))$ is in one component with the $l_r-1$ vertices to its right, as well as its upper-left neighbor and the $l_r-1$ vertices to that neighbor's right. We have also seen that this component has no other vertices in rows $\eta_v({I_r})$ and $\eta_v({I_r})-1$.
We see that there are indeed $m(\lambda)$ components $\Gamma_1,\ldots,\Gamma_{m(\lambda)}$ such that for $$i\le \min_r(\eta_v(I_r))$$ component $\Gamma_r$ contains exactly $l_r$ vertices from row $i$. It remains to observe that for all $l\le\min_r(I_r)$ the vertex $(\eta_v(l),\theta_v(l))$ is contained in one of those $m(\lambda)$ components. This shows that there are no other components since Proposition~\ref{vertgraph} implies that every component of $\Delta_v$ has vertices in row $i$ for $i\ll 0$.
\end{proof}
\begin{proposition}\label{upsame}
For a point $x\in C_v$ one has $s_{i,j}(x)=s_{i,j}(v)$ when $(i,j)\in\Delta_v$ and $i\ll 0$.
\end{proposition}
\begin{proof}
Obviously, there exists an integer $M$ such that $s_{\eta_v(l),\theta_v(l)}(x)=s_{\eta_v(l),\theta_v(l)}(v)$ whenever $l<M$. However, Proposition~\ref{uplimit} shows that for $i\ll 0$ we have $l<M$ whenever $\eta_v(l)=i$.
\end{proof}
We have another proposition describing $\Delta_v$ in rows $i\gg 0$.
\begin{proposition}\label{downlimit}
Only one of the $m(\lambda)$ components of $\Delta_v$ contains vertices $(i,j)$ with arbitrarily large $i$. For $i\gg 0$ this component contains a single vertex in row $i$.
\end{proposition}
\begin{proof}
For $l\gg 0$ we have $v_l=0$ which shows that $$s_{\eta_v(l),\theta_v(l)}(v)=s_{\eta_v(l)-1,\theta_v(l)+1}(v).$$ Consequently, for any $l>0$ we have $$(\eta_v(l),\theta_v(l))=(\eta_v(l-1)+1,\theta_v(l-1)-1)$$ and the two vertices are adjacent in $\Delta_v$. Since this holds for all $l\gg 0$, the proposition is proved.
\end{proof}
\begin{proposition}\label{downsame}
For a point $x\in C_v$ all the coordinates $s_{i,j}(x)$ with $(i,j)\in\Delta_v$ and $i\gg 0$ are the same.
\end{proposition}
\begin{proof}
For $l\gg 0$ we have $x_l=0$ which entails $s_{\eta_v(l),\theta_v(l)}(x)=s_{\eta_v(l-1),\theta_v(l-1)}(x).$ We then apply Proposition~\ref{downlimit}.
\end{proof}
For a point $x\in C_v$ how do we express the monomial $G(e^{\bar x})$ via the coordinates $s_{i,j}(x)$? This question is best answered in terms of the array $$s_{i,j}(x,v)=s_{i,j}(x)-s_{i,j}(v).$$
\begin{proposition}\label{zpow}
For an integer point $x\in C_v$ the power in which $G(e^{\bar x-\bar v})$ contains $z_r$ is equal to $$\sum\limits_{i\equiv r\bmod(n-1)} \left(\sum\limits_{(i,j)\in\Delta_v}s_{i,j}(x,v)-\sum\limits_{(i-1,j)\in\Delta_v}s_{i-1,j}(x,v)\right).$$
\end{proposition}
\begin{proof}
Formula~(\ref{zweight}) shows that $G(e^{\bar x-\bar v})$ contains $z_r$ in the power
$$\sum\limits_{l\equiv r\bmod(n-1)}(x_l-v_l)=\\\sum\limits_{l\equiv r\bmod(n-1)} (s_{\eta_v(l),\theta_v(l)}(x,v)-s_{\eta_v(l-1),\theta_v(l-1)}(x,v)).$$ Now it remains to apply $$l=n\eta_v(l)+(n-1)\theta_v(l)\equiv\eta_v(l)\bmod(n-1).$$
Propositions~\ref{upsame} and~\ref{downsame} show that all the sums in consideration have a finite number of nonzero summands.
\end{proof}
\begin{proposition}\label{qpow}
For an integer point $x\in C_v$ we have $$\deg G(e^{\bar x-\bar v})=\sum\limits_{i\equiv 0\bmod(n-1)} \sum\limits_{(i,j)\in\Delta_v}(-s_{i,j}(x,v)+S_{i,j}),$$ where $S_{i,j}=0$ when $in+j(n-1)<0$ and $S_{i,j}=\sum_l(x_l-v_l)$ if $in+j(n-1)\ge 0$.
\end{proposition}
\begin{proof}
Via~(\ref{qweight}) we have
\begin{multline*}
\deg G(e^{\bar x-\bar v})=-\sum\limits_{r<0}\sum\limits_{l\le r(n-1)}(x_l-v_l)+\sum\limits_{r\ge0}\left(\sum_{l=-\infty}^\infty(x_l-v_l)-\sum\limits_{l\le r(n-1)}(x_l-v_l)\right)=\\\sum\limits_{r\in\mathbb{Z}}\left(S_{\eta_v(r(n-1)),\theta_v(r(n-1))}-s_{\eta_v(r(n-1)),\theta_v(r(n-1))}(x,v)\right).
\end{multline*}
We then apply $\eta_v(r(n-1))\equiv 0\bmod(n-1)$. Note that we again have a finite number of nonzero summands in every sum.
\end{proof}
Further, the weight $\varphi(f)$ has a nice interpretation in terms of the graph $\Delta_f$.
\begin{proposition}\label{phigraph}
For a face $f$ and integer $l>0$ let $d_l$ be the number of pairs $(\Gamma,i)$ with $\Gamma$ a connected component of $\Delta_f$ and $i$ an integer such that $\Gamma$ has $l$ vertices in row $i$ and $l-1$ vertices in row $i-1$. Then $\varphi(f)=\prod(1-t^l)^{d_l}$.
\end{proposition}
\begin{proof}
Straightforward from the definitions.
\end{proof}
\begin{proposition}\label{dimf}
For a finite-dimensional face $f$ we have $$\dim f=|\{\text{components of }\Delta_f\}|-m(\lambda).$$
\end{proposition}
\begin{proof}
If $f$ is a vertex this follows from Proposition~\ref{uplimit}. If $f$ is not a vertex it has a nonempty interior with the same dimension.
Choose a point $x\in V$ from the interior of $f$. For any two vertices $(i_1,j_1)$ and $(i_2,j_2)$ of $\Delta_f$ that are adjacent in $\mathcal R$ we have $s_{i_1,j_1}(x)=s_{i_2,j_2}(x)$ if and only if the two vertices are adjacent in $\Delta_f$.
Consider a vertex $v$ of $f$. Since $\Theta_f\subset\Theta_v$, we may assume that $\Delta_f\subset\Delta_v$. Proposition~\ref{upsame} together with Proposition~\ref{uplimit} then shows that there are $m(\lambda)$ components of $\Delta_f$ that meet arbitrarily high rows. If $(i,j)$ is a vertex in one of these components, then $s_{i,j}(x)=s_{i,j}(v)$. Thus, from the previous paragraph we see that we have exactly $|\{\text{components of }\Delta_f\}|-m(\lambda)$ degrees of freedom when choosing the coordinates $s_{i,j}(x)$ (with respect to vertex $v$).
\end{proof}
We finish this section off by showing, as promised, that $G(e^{\varepsilon_{v,l}})\neq 1$.
Consider a vertex $v$ and an edge $e$ containing $v$. We may assume that $\Delta_e$ is a subgraph of $\Delta_v$. According to Proposition~\ref{dimf} the graph $\Delta_e$ has $m(\lambda)+1$ connected components of which only one does not meet arbitrarily high rows. Denote that component $\Gamma_e\subset\Delta_e$. Let $\varepsilon_{v,l}$ be the generator of $e$.
\begin{proposition}\label{edges}
In the above notations the array $$s_{i,j}(\varepsilon_{v,l})=s_{i,j}(v+\varepsilon_{v,l})-s_{i,j}(v)$$ with $(i,j)$ ranging over the vertices of $\Delta_e$ has the following description. If $(i,j)$ is outside of $\Gamma_e$, then $s_{i,j}(\varepsilon_{v,l})=0$. For all $(i,j)$ in $\Gamma_e$ the value $s_{i,j}(\varepsilon_{v,l})$ is the same and equal to either $-1$ or 1.
\end{proposition}
\begin{proof}
Proposition~\ref{upsame} shows that for any point $x$ of $e$ for $(i,j)$ outside of $\Gamma_e$ we indeed have $s_{i,j}(x)=s_{i,j}(v)$. Moreover, by definition for $x\in e$ all of its coordinates $s_{i,j}(x)$ with $(i,j)$ within $\Gamma_e$ must be the same. By taking $x=v$ and $x=v+\varepsilon_{v,l}$ we obtain the Proposition.
\end{proof}
\begin{proposition}
For any vertex $v$ and generator $\varepsilon_{v,l}$ we have $G(e^{\varepsilon_{v,l}})\neq 1$.
\end{proposition}
\begin{proof}
$$G(e^{\varepsilon_{v,l}})=G(e^{(\bar v+\varepsilon_{v,l})-\bar v}).$$ This monomial may be calculated via Propositions~\ref{zpow} and~\ref{qpow}. More specifically, we see that there are three possible cases.
\begin{enumerate}
\item $\Gamma_e$ is finite and does not intersect any row $i$ with $i\equiv 0\bmod (n-1)$. From Proposition~\ref{zpow} we then see that $z_{i_0\bmod(n-1)}$ occurs in a nonzero power, where $i_0$ is the highest row containing vertices from $\Gamma_e$.
\item $\Gamma_e$ is finite and intersects some row with $i\equiv 0\bmod (n-1)$. Then $\deg G(e^{\varepsilon_{v,l}})\neq 0$ since for vertices $(i,j)$ of $\Delta_e$ with $i\gg 0$ we have $s_{i,j}(\varepsilon_{v,l})=0$ and thus all the values $S_{i,j}$ from Proposition~\ref{qpow} are zero.
\item $\Gamma_e$ is infinite. This means that for $i\gg 0$ there is a single vertex of $\Gamma_e$ in row $i$. Proposition~\ref{zpow} then shows that the sum of powers in which the $z_r$ occur is 1.
\end{enumerate}
\end{proof}
\section{Proof of Theorem~\ref{contrib}}\label{last}
In this Section we finally apply the tools developed in Part~\ref{tools} combining them with the Propositions from the previous section.
The vertex $v$ of $\Pi$ is fixed throughout this section. Denote $\Gamma_1,\ldots,\Gamma_{m(\lambda)}$ the connected components of $\Delta_v$. For $(i,j)\in\Gamma_r$ all the numbers $s_{i,j}(v)$ are the same, let them be equal to $b_r$.
Choose an integer $M_v$ such that for $i\ge M_v$ row $i$ meets $\Delta_v$ in exactly one vertex, while for $i\le -M_v$ row $i$ meets component $\Gamma_r$ of $\Delta_v$ in $l_r$ vertices and, furthermore, one has $in+j(n-1)<0$ for any vertex $(i,j)$ of $\Delta_v$ in row $-M_v$ or above. Propositions~\ref{uplimit} and~\ref{downlimit} show that such an $M_v$ exists.
For $l\ge M_v$ denote by $D_l$ the section of $C_v$ comprised of points $x\in C_v$ with the following properties.
\begin{enumerate}
\item For $i\le -l$ we have $s_{i,j}(x)=s_{i,j}(v)$ for all vertices $(i,j)$ of $\Delta_v$.
\item For $i\ge l$ all the coordinates $s_{i,j}(x)$ with $(i,j)$ a vertex of $\Delta_v$ are the same.
\end{enumerate}
An important observation is that $D_l$ is a finite dimensional face of $C_v$. Indeed, $D_l$ is defined as the intersection of $C_v$ and all the hyperplanes $E_i\ni v$ and $H_i\ni v$ except for a finite number.
Now, the rational function $G(\sigma_{\bar\varphi}(\widebar{D_l}))$ may be viewed as an element of $\mathfrak S$ which we denote $\sigma_l$.
\begin{lemma}\label{limit}
The series $\sigma_l$ converge coefficient-wise to the series $\tau_{\bar v}$.
\end{lemma}
\begin{proof}
We consider $G(\sigma_{\bar\varphi}(\widebar{D_l}))$ to be a fraction whose denominator is the product of $1-G(e^\varepsilon)$ over all generators $\varepsilon$ of edges of $D_l$. The coefficients of these denominators visibly converge to the coefficients of $$(1-G(e^{\varepsilon_{v,1}}))(1-G(e^{\varepsilon_{v,2}}))\ldots$$ We are thus left to prove that the numerators converge coefficient-wise to $Q_v$.
This is done in complete analogy with the argument proving Lemma~\ref{welldef}. The only difference is that in the last paragraph we use the characterization of the generators given by Proposition~\ref{edges} rather than the one taken from~\cite{me2}.
\end{proof}
On the other hand, let $\Delta_{v,l}$ be the full subgraph of $\Delta_v$ obtained by removing all rows with number less than $-l$ or greater than $l$. Such a $\Delta_{v,l}$ has $m(\lambda)$ connected components each of which is an ordinary graph. We denote these components $\Gamma^l_r\subset \Gamma_r$.
We can now see that we have a natural bijection $$\xi_l:D_{\Gamma^l_1}(b_1,\ldots,b_1)\times\ldots\times D_{\Gamma^l_{m(\lambda)}}(b_{m(\lambda)},\ldots,b_{m(\lambda)})\rightarrow D_l.$$ (Recall that every factor in this product is a cone with vertex $v_{\Gamma_r^l}(b_r)$.) The coordinate
\begin{equation}\label{imcoord}
s_{i,j}(\xi_l(x_1\times\ldots\times x_{m(\lambda)}))
\end{equation}
is equal to the corresponding coordinate of $x_r$ when $(i,j)\in\Gamma_r^l$. When $(i,j)$ is a vertex of $\Delta_v$ with $i<-l$ the coordinate~(\ref{imcoord}) is equal to $s_{i,j}(v)$ and for $i\ge l$ those coordinates are all the same. Proposition~\ref{imcv} shows that this is indeed a bijection. We also have the corresponding bijection $\widebar{\xi_l}$ with image $\widebar{D_l}$.
Propositions~\ref{zpow} and~\ref{qpow} together with our choice of $l$ show that for a certain specialization $\Psi_l$ substituting each $x_i$ with a monomial in $z_1,\ldots,z_{n-1},q$ the following holds. For any tuple of integer points $x_r\in D_{\Gamma^l_r}(b_r,\ldots,b_r)$ we have
\begin{equation}\label{expprod}
G\left(e^{\widebar{\xi_l}(x_1\times\ldots\times x_{m(\lambda)})}\right)=G(e^{\bar v})\Psi_l\left(\prod_{r=1}^{m(\lambda)}F\left(e^{\left(x_r-v_{\Gamma_r^l}(b_r)\right)}\right)\right).
\end{equation}
It is straightforward to describe $\Psi_l$ explicitly; however, we will not make use of such a description and therefore omit it.
Proposition~\ref{phigraph} together with $l\ge M_v$ shows that for a face of $D_l$ $$f=\xi_l(f_1\times\ldots\times f_{m(\lambda)})$$ with $f_r$ being a face of $D_{\Gamma_r^l}(b_r,\ldots,b_r)$ we have the following identity.
\begin{equation}\label{weightprod}
\varphi(f)=\prod_{r=1}^{m(\lambda)}\varphi_{\Gamma_r^l}(b_r,\ldots,b_r)(f_r).
\end{equation}
Combining~(\ref{expprod}) and~(\ref{weightprod}) we, finally, obtain
\begin{equation}\label{psiprod}
G\left(\sigma_{\bar\varphi}\left(\widebar{D_l}\right)\right)=G(e^{\bar v})\Psi_l\left(\prod_{r=1}^{m(\lambda)}F\left(e^{-v_{\Gamma_r^l}(b_r)}\right)\psi_{\Gamma_r^l}(b_r,\ldots,b_r)\right).
\end{equation}
Now it is time to define the distinguished set of vertices from Theorem~\ref{contrib}, which we again refer to as ``relevant''. A vertex $v$ is non-relevant if and only if the graph $\Delta_v$ has a connected component $E$ with the following property: $E$ has more vertices in row $i+1$ than in row $i$ for some integer $i$.
This definition together with $l\ge M_v$ immediately implies that if $v$ is non-relevant, then one of the components $\Gamma_r^l$ of $\Delta_{v,l}$ contains more vertices in some row than in the row above. Combining~(\ref{psiprod}) with Theorem~\ref{zero} and then employing Lemma~\ref{limit} now proves part b) of Theorem~\ref{contrib}.
We move on to considering a relevant $v$. We first discuss the case of a regular $\lambda$, i.e. all $a_i$ being positive. In this case $\Delta_v$ has $n$ components each of which contains a single vertex in row $i$ for $i\ll 0$ (Proposition~\ref{uplimit}).
\begin{proposition}\label{relvert}
For a regular $\lambda$, a vertex $v$ of $\Pi$ is non-relevant if and only if there is an $l$ for which $v_l=0$ and $v_{l+n-1}\neq 0$.
\end{proposition}
\begin{proof}
If $v$ is non-relevant, we have a component of $\Delta_v$ which contains one vertex $(i-1,j)$ in row $i-1$ and two vertices $(i,j-1)$ and $(i,j)$ in row $i$. This, in particular, shows that $v_{in+(j-1)(n-1)}=0$ while $v_{in+j(n-1)}\neq 0$, which proves the ``only if'' part.
Conversely, if $v_l=0$ and $v_{l+n-1}\neq 0$, then in $\Delta_v$ the vertex $(\eta_v(l),\theta_v(l))$ is connected to its upper-right neighbor, while $(\eta_v(l+n-1),\theta_v(l+n-1))$ is not. This, however, means that $(\eta_v(l+n-1),\theta_v(l+n-1))$ is connected to its upper-left neighbor (Proposition~\ref{vertgraph}). This upper-left neighbor is then $(\eta_v(l-1),\theta_v(l-1))$, which is also the upper-right neighbor of $(\eta_v(l),\theta_v(l))$. We therefore see that the corresponding component contains two vertices in row $\eta_v(l)$, which leads to $v$ being non-relevant.
\end{proof}
Such an interpretation of relevant vertices for the case of regular $\lambda$ is in accordance with the one found in~\cite{fjlmm2}. The following information can then be extracted from papers~\cite{fjlmm2} and~\cite{me2}.
\begin{proposition}\label{simpvert}
\begin{enumerate}[label=\alph*)]
\item For regular $\lambda$ the relevant vertices are enumerated by elements of the Weyl group $W$. If $v_w$ is the vertex corresponding to $w\in W$, then $\mu_{v_w}=w\lambda$.
\item All the cones $D_l$ are simplicial and unimodular.
\item The multiset $\{G(e^{\varepsilon_{v_w,i}})\}$ coincides with the multiset $\{e^{-w\alpha},\alpha\in\Phi^+\}$, where each $\alpha$ is counted $m_\alpha$ times.
\end{enumerate}
\end{proposition}
Essentially, all we are left to prove is the following.
\begin{proposition}
Let $\lambda$ be regular and $v$ be a relevant vertex. For any face $f$ of cone $D_l$ we have $$\varphi(f)=(1-t)^{\dim f}.$$
\end{proposition}
\begin{proof}
We may assume that $\Delta_f$ is a subgraph of $\Delta_v$.
All $n$ connected components of $\Delta_v$ are infinite path graphs, $n-1$ of them infinite in one direction (``up'') and one infinite in both directions. This together with Proposition~\ref{phigraph} shows that $\varphi(f)=(1-t)^d$, where $d$ is the number of vertices in $\Delta_f$ not adjacent to any vertex in the row above.
However, Proposition~\ref{dimf} shows that $\dim f=d$ as well.
\end{proof}
The above Proposition together with part b) of Proposition~\ref{simpvert} shows that $G(\sigma_{\bar\varphi}(\widebar{D_l}))$ is the product of $F(e^{\bar v})=e^{\mu_v-\lambda}$ and the quotients $$\frac{1-tF(e^\varepsilon)}{1-F(e^\varepsilon)}$$ over all generators $\varepsilon$ of edges of $D_l$. Applying parts a) and c) of Proposition~\ref{simpvert} and then Lemma~\ref{limit} now proves part a) of Theorem~\ref{contrib} in the case of regular $\lambda$.
On to the case of $\lambda$ being singular, i.e. having at least one $a_i=0$. This case will be deduced from the regular case, so we introduce $\lambda^1$, an arbitrary integral dominant regular weight. We denote the objects corresponding to $\lambda^1$ by adding a $^1$ superscript, e.g. $\Pi^1$, $\varphi^1$, $E_l^1$ etc.
Due to Proposition~\ref{relvert} the relevant vertices of $\Pi^1$ are parametrized by sequences $y=(y_i)$ infinite in both directions with $y_i\in\{0,1\}$ and having the following properties.
\begin{enumerate}
\item For $l\gg 0$ one has $y_l=0$.
\item For $l\ll 0$ one has $y_l=1$.
\item One has $y_{l+n-1}=0$ whenever $y_l=0$.
\end{enumerate}
The vertex $v_y^1$ corresponding to such a sequence is uniquely defined by $v_y^1\in E_l^1$ whenever $y_l=0$ and $v_y^1\in H_l^1$ whenever $y_l=1$. The fact that the $a_i^1$ are all positive implies that different $y$ define different $v_y^1$. Since the relevant vertices of $\Pi^1$ are also parametrized by the affine Weyl group $W$, for each $y$ we may define $w_y\in W$ such that $v_y^1=v_{w_y}^1$. Clearly, $w_y$ does not depend on $\lambda^1$ but only on $y$.
Each sequence $y$ also defines a vertex $v_y$ of $\Pi$ by the same rule. However, some of these $v_y$ may coincide.
\begin{proposition}
The vertices $v_y$ are precisely the relevant vertices of $\Pi$. For any $y$ we have $\mu_{v_y}=w_y\lambda$.
\end{proposition}
\begin{proof}
For any $v_y^1$ we see that if a hyperplane $H_l^1$ or $E_l^1$ contains $v_y^1$, then the corresponding hyperplane $H_l$ or $E_l$ must contain $v_y$. This means that we may assume that the graph $\Delta_{v_y^1}$ is a subgraph of $\Delta_{v_y}$ (with the same set of vertices).
However, visibly, if a component $E$ of $\Delta_{v_y}$ contained more vertices in some row $i$ than in row $i-1$, then so would one of the components of $\Delta_{v^1_y}$ contained in $E$. This would contradict $v^1_y$ being relevant.
Conversely, since every component of $\Delta_{v^1_y}$ contains no fewer vertices in any row than in the row below, the same holds for every component of $\Delta_{v_y}$. That is because every component of $\Delta_{v_y}$ is obtained by joining components of $\Delta_{v^1_y}$.
For the second part, note that whether $\lambda$ is regular or not, the point $v_y$ depends linearly on $\lambda$ and $\mu_{v_y}$ depends linearly on $v_y$. Thus $\mu_{v_y}$ depends linearly on $\lambda$.
\end{proof}
We now see that the relevant vertices of $\Pi$ do indeed correspond to elements of the orbit $W\lambda$.
To prove part a) of Theorem~\ref{contrib} for singular $\lambda$ it now suffices to show that
\begin{equation}\label{decomp1}
\tau_{\bar v}=\frac 1{[l_1]_t!\ldots[l_{m(\lambda)}]_t!}\sum_{v_y=v}G\left(e^{\bar v-\bar v_y^1}\right)\tau_{\bar v_y^1}.
\end{equation}
For a vertex $v_y^1$ with $v_y=v$ and integer $l\ge M_{v_y^1}$ denote $D_{y,l}^1$ the corresponding face of $C_{v_y^1}$. Choose an $l$ greater than $M_v$ and all of the $M_{v_y^1}$. Due to Lemma~\ref{limit}, identity~(\ref{decomp1}) will follow from
\begin{equation}\label{decomp2}
G\left(\sigma_{\bar\varphi}\left(\widebar{D_l}\right)\right)=\frac 1{[l_1]_t!\ldots[l_{m(\lambda)}]_t!}\sum_{v_y=v} G\left(e^{\bar v-\bar v_y^1}\right)G\left(\sigma_{\bar\varphi^1}\left(\widebar{D_{y,l}^1}\right)\right).
\end{equation}
For all $v_y^1$ with $v_y=v$ the coordinate $s_{i,j}^1(v_y^1)$ is the same when $(i,j)\in\Delta_v$ and $i\le-l$. Denote by $c_1^r,\ldots,c_{l_r}^r$ the sequence of numbers $s_{-l,j}^1(v_y^1)$ with $(-l,j)\in\Gamma_r$. Also, for any such $v_y^1$ the coordinates $s_{i,j}^1(v_y^1)$ with $(i,j)\in\Delta_v$ and $i\ge l$ are all the same. Now consider the polyhedron $D_l^1\subset V^1$ consisting of such $x^1$ that
\begin{enumerate}
\item For any $i\le -l$ the $l_r$ coordinates $s_{i,j}^1(x^1)$ with $(i,j)\in\Gamma_r$ are equal to $c_1^r,\ldots,c_{l_r}^r$ from left to right.
\item All the coordinates $s_{i,j}^1(x^1)$ in rows $i\ge l$ are the same.
\item The coordinates $s_{i,j}^1(x^1)$ satisfy all the inequalities corresponding to edges of $\Delta_v$.
\end{enumerate}
Any vertex of $D_l^1$ is a vertex of $\Pi^1$ and the faces of $D_l^1$ correspond naturally to faces of $\Pi^1$, which allows one to define $\varphi^1$ on faces of $D_l^1$. The vertices of $D_l^1$ that are relevant vertices of $\Pi^1$ are precisely the $v_y^1$ with $v_y=v$. The weighted Brion Theorem for $D_l^1$ (after application of $G$) reads
$$
G\left(\sigma_{\bar\varphi^1}\left(\widebar{D_l^1}\right)\right)=\sum_{v_y=v} G\left(\sigma_{\bar\varphi^1}\left(\widebar{D_{y,l}^1}\right)\right).
$$
We know that the contributions of other vertices are zero, having already discussed non-relevant vertices.
Clearly, $D_l$ is a degeneration of $D_l^1$; let $\pi$ be the corresponding map between face sets. With the previous paragraph taken into account, Lemma~\ref{wdegen} provides
$$
G\left(\sigma_{\bar\varphi'}\left(\widebar{D_l}\right)\right)=\sum_{v_y=v} G\left(e^{\bar v-\bar v_y^1}\right)G\left(\sigma_{\bar\varphi^1}\left(\widebar{D_{y,l}^1}\right)\right),
$$
where for a face $f$ of $D_l$
$$
\varphi'(f)=\sum_{g\in\pi^{-1}(f)}(-1)^{\dim g-\dim f}\varphi^1(g).
$$
All that remains to be shown is that for any $f$ we have
\begin{equation}\label{phirel}
\varphi'(f)=[l_1]_t!\ldots[l_{m(\lambda)}]_t!\varphi(f).
\end{equation}
Now, visibly, we have a bijection
$$
\xi^1_l:D_{\Gamma_1^l}\left(c_1^1,\ldots,c_{l_1}^1\right)\times\ldots\times D_{\Gamma_{m\left(\lambda\right)}^l}\left(c_1^{m\left(\lambda\right)},\ldots,c_{l_{m\left(\lambda\right)}}^{m\left(\lambda\right)}\right)\rightarrow D_l^1.
$$
Moreover, for a face $g$ of $D_l^1$ we have $$\varphi^1(g)=\prod_{r=1}^{m(\lambda)}\varphi_{\Gamma_r^l}(c_1^r,\ldots,c_{l_r}^r)(g_r),$$
where $g=\xi_l^1(g_1\times\ldots\times g_{m(\lambda)})$.
Recall that $D_{\Gamma_r^l}(b_r,\ldots,b_r)$ is a degeneration of $D_{\Gamma_r^l}(c_1^r,\ldots,c_{l_r}^r)$, let $\pi_r$ be the corresponding map between face sets. Visibly, for $g=\xi_l^1(g_1\times\ldots\times g_{m(\lambda)})$ a face of $D_l^1$ we have $$\pi(g)=\xi_l(\pi_1(g_1)\times\ldots\times\pi_{m(\lambda)}(g_{m(\lambda)})).$$ This shows that~(\ref{phirel}) for $f=\xi_l(f_1\times\ldots\times f_{m(\lambda)})$ may be obtained by multiplying together the identities provided by Lemma~\ref{graphsum} for degenerations $\pi_r$ and faces $f_r$ respectively.
We have proved Theorem~\ref{contrib} and, via Theorem~\ref{infbrion}, the main Theorem~\ref{main} follows.
\addtocontents{toc}{\vspace{4pt}}
The instance segmentation problem deals with the pixel-wise delineation of multiple objects, combining segment-level localization and per-pixel object category classification.
This task is more challenging than semantic segmentation; for example, the number of object instances is not fixed, unlike the number of object categories.
Additionally, separating instances that share similar local appearances is highly challenging.
Instance segmentation, in particular person instance segmentation, is a promising research frontier for a range of applications such as human-robot interaction, sports performance analysis, and action recognition.
Deep convolutional neural networks are the current state of the art for instance-level segmentation. For example, the entrants to the 2016 COCO segmentation challenge \cite{coco_challenge_16_GRMI_seg} achieve excellent performance on instance segmentation for the $80$ object categories considered in the COCO dataset.
Although these methods work extremely well for \emph{any} category of objects, there is potential for human-specific domain knowledge to boost person segmentation performance.
\begin{figure*}
\begin{center}
\includegraphics[width=6.0in]{pose2Instance_learn.png}
\end{center}
\caption{Pose2Instance model for People Instance Segmentation. The model incorporates a \emph{learnable} component conditioned on the human ``pose''. It generates keypoint heatmaps and segmentations at instance level by sharing CNN parameters up to the penultimate layer and uses the keypoint heatmap output as an additional input to the segmentation. (best viewed in color)
}
\label{fig:Pose2Instance Model1}
\end{figure*}
In this paper, we investigate the importance of human keypoints as a prior for the task of instance-level person segmentation.
With the availability of image datasets that include both segmentation masks and keypoints annotations, we consider a methodical approach to quantify the importance of keypoints for people instance segmentation.
We explore what happens if an oracle provides all the keypoints, or only bounding boxes, and how people instance segmentation can be improved in each case.
Our motivation is two-fold. First and foremost, we wish to develop a thorough understanding of whether person-specific domain knowledge is useful for person instance segmentation.
Second, we wish to quantify the importance of
human keypoints as a useful domain knowledge for
improving segmentation over the baseline of best performing deep learning models trained only for segmentation.
In order to evaluate the segmentation conditioned on human pose, we consider all instances of people\footnote{We do not include COCO person instances that are marked as ``crowd''.}
from the COCO segmentation dataset \cite{COCO_eccv14} where the instances also have keypoints ground truth. By comparing the image and instance identifiers from the COCO segmentation and COCO keypoints datasets, we find that there exist $45,174$ images in the training dataset and $21,634$ images in the validation dataset.
This amounts to $185,316$ and $88,153$ ground truth person instances with both segmentation and keypoints annotations in the training and validation split respectively.
Throughout this paper, we refer to this intersection of the COCO instance segmentation dataset and the COCO person keypoints dataset simply as the COCO dataset.
We first explore a human pose prior represented as the distance transform of a skeleton and show how this prior can directly yield instance-level human segmentation when combined with an existing semantic segmentation model such as DeepLab \cite{deeplab_chen14semantic} trained for human segmentation.
This analysis also validates the idea of combining two existing models, one for pixel-level (non-instance) person segmentation and another for detecting keypoints, to improve instance segmentation.
Next, we propose an approach to directly generate the per-pixel probability of people instances conditioned on human poses
using a deep convolutional neural network (CNN).
We call this pose-conditioned segmentation model \emph{Pose2Instance}.
Figure \ref{fig:Pose2Instance Model1} outlines the approach.
Person instance bounding boxes are either provided by an
oracle or they come from a person detector.
The model is trained for generating keypoint heatmaps and segmentation at instance level by sharing CNN parameters up to the penultimate layer and using the keypoint heatmap as an additional input channel for the segmentation output.
\noindent
In summary, we contribute the following.
\vspace{-2.5mm}
\begin{itemize}
\item We show that human pose prior represented as the distance transform of the human skeleton yields significant performance gain for the deep people instance segmentation during inference without any training.
\item{We show how the \emph{learned} segmentation can be conditioned on the keypoints by learning additional parameters specifically for mapping shape to segmentation while training a DCNN \emph{jointly} for keypoints and segmentation.}
\item We perform extensive empirical investigation of the proposed Pose2Instance method on the intersection of COCO instance segmentation and COCO keypoints dataset. We show the effectiveness of the pose conditioned deep instance segmentation model by qualitative and quantitative analysis.
\end{itemize}
\section{Related Work}
\label{sec:prior-art}
Our work builds upon a rich literature in both semantic segmentation using convolutional neural networks and joint pose-segmentation modeling.\\
\textbf{Semantic and Instance Segmentation}\\
DeepLab \cite{deeplab_chen14semantic} and FCN \cite{fcn_cvpr_15} achieved significant breakthroughs for the challenging task of semantic segmentation using deep convolutional neural networks.
Subsequently, a set of instance segmentation methods
\cite{rev_inst_seg_cvpr16,mrf_ZhangFU15, itr_inst_seg_LiHM15, propo_free_inst_LiangWSYLY15, Inst_sens_DaiHLRS16, rec_inst_seg_Romera_ParedesT16, ICLR_WS_ParkB15a, Bottom_up_inst_BMVC_2016}
were proposed, which begin with pixel-wise semantic segmentation and generate instance-level segmentation from them.
Recently, \cite{TA-FCN_coco_16} achieved the state-of-the-art
performance on the $80$-category instance segmentation using a fully convolutional end-to-end solution.
Except \cite{itr_inst_seg_LiHM15}, none of these methods look into learning implicit or explicit shapes of different object categories.
\textbf{Human Pose Estimation}\\
Human pose estimation from static images \cite{gram_seg_for_pose_RothrockPZ13, Kohli2008, we_are_family_eccv_EichnerF10, pose_est_review_Liu_2015} or videos \cite{grabcutSensors_2012} with hand-crafted features and explicit modeling gained considerable interest in the last decade.
Human pose estimation using an articulated grammar model is proposed in \cite{gram_seg_for_pose_RothrockPZ13}.
Hern\'andez-Vela \etal \cite{grabcutSensors_2012} proposed Spatio-Temporal GrabCut-based human segmentation that combines tracking and segmentation with hand-crafted initialization.
In \cite{we_are_family_eccv_EichnerF10}, Eichner and
Ferrari proposed a multi-person pose estimator framework that extends pictorial structures for explicitly modeling interaction between people. A detailed review on pose estimation literature survey is available in \cite{pose_est_review_Liu_2015}.
Recently, convolutional neural networks have been successfully applied for pose estimation from videos \cite{pose_video_LinnaKR16}, human body parts segmentation \cite{part_discovery_gabriel}, and multi-person pose estimation \cite{deepcut_cvpr16,deepercut_eccv16,coco_challenge_16_cmu_pose,coco_challenge_16_GRMI_pose}. Additionally, among the most accurate results are those shown by \emph{chained prediction} \cite{Gkioxar_eccv16}.
\textbf{Joint Pose Estimation and Segmentation}\\
The most closely related works to this one are those that also seek to jointly estimate human pose and segmentation in static images or videos \cite{Kohli2008,Lim_2013_ICCV,pose_INRIA_Alahari13,Pose_Inria_Seguin15}.
Kohli \etal \cite{Kohli2008} proposed \emph{PoseCut}, a conditional random field (CRF) framework to tackle segmentation and pose estimation together for a \emph{single} person. The CRF model explicitly combines hand-crafted image features and a prior on shape and pose in a Bayesian framework. The prior is represented by the distance transform of a human \emph{skeleton}.
\textit{Inference} in PoseCut finds the MAP solution of the energy of the pose-specific CRF: the test-time prediction obtains the MAP by optimizing over different configurations of the latent shape prior.
Even with a \emph{good} initialization, the inference step requires roughly $50$ seconds per frame.
Similar inference strategies are computationally prohibitive for deep models.
Among other significant efforts towards joint pixel-wise segmentation and pose estimation of multiple people, Alahari \etal and Seguin \etal \cite{pose_INRIA_Alahari13,Pose_Inria_Seguin15} use additional motion and disparity cues from stereo videos. The appearance and disparity cues are generated using HOG features. The pose estimation model \cite{pose_INRIA_Alahari13} is represented as a set of parts, where a part
refers to a patch centered on a body-joint or on an interpolated
point on a line connecting two joints. They learn up to eight mixture components for each part and an articulated pose mask for the mixture components.
We propose a different and effective framework for incorporating a pose prior into deep segmentation models.
The proposed DCNN model includes additional parameters that are trained specifically for mapping shape to segmentation.
The Pose2Instance \emph{inference} does not require optimization such as finding the MAP solution.
The prediction task involves only
\textit{one} forward pass
through the trained network.
\section{Methods}
\label{sec:methods}
Our Pose2Instance approach looks at the problem of incorporating a pose prior into segmentation in two ways. We begin with a constrained-environment study where the keypoints are provided by an oracle, and investigate a way of improving the instance segmentation inference given a state-of-the-art pixel-level person classifier \cite{deeplab_chen14semantic}. Next, we move to a more realistic case where oracle keypoints are not available and propose a framework to train the segmentation model directly while benefiting from a pose estimator.
\begin{figure*}
\begin{center}
\includegraphics[width=2.1in]{frame_inria.png}
\includegraphics[width=2.1in]{sobel_edge_inria.png}
\includegraphics[width=2.1in]{rag_inria_color_coded.png}
\end{center}
\begin{center}
\includegraphics[width=2.1in]{deeplab_out.png}
\includegraphics[width=0.7in]{inria1_stickmen.png}
\includegraphics[width=0.7in]{masks_for_dt.png}
\includegraphics[width=0.7in]{dt_exponentiated.png}
\includegraphics[width=2.1in]{posetoinstance.png}
\end{center}
\caption{Instance Segmentation with oracle skeleton. Top row: an image from the Inria stereo people segmentation dataset; Sobel edge responses on the image; RAG with edge strength from Sobel responses, where warmer color means higher weights. Bottom row: DeepLab \emph{person} segmentation; oracle skeleton, masks for the distance transform in RAG space, \emph{pose-instance maps} and the final instance segmentation, which is an argmax on the point-wise multiplication of the pose-instance map and the DeepLab-people score.}
\label{fig:Pose2instance_oracle_stickmen1}
\end{figure*}
\subsection{Pose2Instance Inference Only}
We first present Pose2Instance within a constrained environment that assumes that the keypoints are provided by an oracle. This allows us to investigate the contribution of the pose prior independent of the other components of the whole system.
In the COCO dataset, $17$ person keypoints along with their corresponding visibility flags are annotated. We will handle these as part of a \emph{skeleton} that links joint keypoints by the corresponding body parts.
In the investigation of the prior alone, with oracle keypoints, we address the inference stage of instance segmentation without any training.
The sole task-specific training is done on the already existing DeepLab \cite{deeplab_chen14semantic} network.
In Section \ref{oracle_keypoints} below,
we first fine-tune this network for person-specific segmentation on COCO, with all other labels discarded. This model directly predicts the per-pixel probability of the \emph{person} class label for the whole image.
We call this model \emph{DeepLab-people} in this paper.
\subsubsection{Person Instances from Oracle Keypoints}
\label{oracle_keypoints}
We use the notion of a distance transform of the person skeleton \cite{Kohli2008}, generated from the oracle keypoints, as a prior for the instance segmentation task.
For this proof of concept, we follow the steps below.
We create a Region Adjacency Graph (RAG), \mbox{\textbf{G} = (\textbf{V}, \textbf{E})}, where the nodes \textbf{V} are superpixels and the weights of the edges \textbf{E} between the nodes depend on the strength of image edges. We obtain superpixels using SLIC \cite{slic12}, and the image edge responses using the Sobel operator.
We can define the pose prior as a distribution over the labels of this graph.
Given a superpixel $p \in \textbf{V}$, we can compute the conditional probability that it belongs to a given instance.
For each instance, we color those nodes where the corresponding superpixel contains a part of the human skeleton line that is generated from the oracle keypoints with valid visibility flags. The colored nodes in the RAG represent a foreground binary mask, and are assigned the highest probability of belonging to this instance.
For each such binary mask corresponding to each person, we apply a distance transform in the RAG using the Floyd-Warshall \cite{Floyd-Warshall} shortest-paths algorithm. A point-wise softmax of this distance transform then represents the likelihood of each person's gross shape. We call this shape likelihood the \emph{pose-instance map}. For an image of height $h$ and width $w$, with $n$ oracle instances, the shape of the pose-instance map is $h\times w \times n$.
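The construction above can be summarized in a short sketch. The snippet below is illustrative only: the superpixel count, the edge-weight aggregation, the softening constant \texttt{sigma} and the helper name \texttt{pose\_instance\_map} are our assumptions rather than values used in the experiments, and multi-source Dijkstra stands in for the Floyd-Warshall computation.
\begin{verbatim}
import numpy as np
import networkx as nx
from skimage.segmentation import slic
from skimage.filters import sobel
from skimage.color import rgb2gray

def pose_instance_map(image, skeleton_masks, n_segments=400, sigma=5.0):
    # skeleton_masks: one (H, W) boolean mask per instance, marking
    # pixels covered by that instance's oracle skeleton.
    labels = slic(image, n_segments=n_segments, compactness=10)
    edges = sobel(rgb2gray(image))

    # Region adjacency graph: edge weight = strongest Sobel response
    # seen along the boundary shared by two superpixels.
    g = nx.Graph()
    h, w = labels.shape
    for y in range(h - 1):
        for x in range(w - 1):
            for dy, dx in ((0, 1), (1, 0)):
                a, b = labels[y, x], labels[y + dy, x + dx]
                if a != b:
                    if g.has_edge(a, b):
                        g[a][b]["weight"] = max(g[a][b]["weight"],
                                                edges[y, x])
                    else:
                        g.add_edge(a, b, weight=edges[y, x])

    maps = []
    for mask in skeleton_masks:
        seeds = set(np.unique(labels[mask]).tolist())
        # Distance transform in the RAG via shortest paths.
        dist = nx.multi_source_dijkstra_path_length(g, seeds)
        dmap = np.full(labels.shape, np.inf, dtype=np.float32)
        for node, d in dist.items():
            dmap[labels == node] = d
        maps.append(np.exp(-dmap / sigma))  # per-instance shape likelihood

    scores = np.stack(maps, axis=-1)          # (H, W, n)
    return scores / (scores.sum(axis=-1, keepdims=True) + 1e-8)
\end{verbatim}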
Figure \ref{fig:Pose2instance_oracle_stickmen1} shows these intermediate steps of generating the RAG, its nodes and the weights of its edges, the oracle skeletons and the instance segmentations. \\
\textbf{Instance-level to Image-level inference}:
Element-wise multiplication of this pose-instance map and the DeepLab-people score generates an instance heatmap of size $h \times w \times n$, where $h$, $w$ and $n$ denote the height and width of the image and the oracle-provided number of person instances respectively. An argmax on the instance heatmap produces the final instance segmentation of the image.
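In array terms, this fusion step amounts to the following (a sketch continuing the function above; \texttt{person\_prob} denotes the DeepLab-people probability map):
\begin{verbatim}
# scores: (H, W, n) pose-instance map; person_prob: (H, W)
instance_heatmap = scores * person_prob[..., None]
instance_labels = instance_heatmap.argmax(axis=-1)  # final segmentation
\end{verbatim}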
Figure \ref{fig:Pose2instance_oracle_stickmen2} shows intermediate results for the inference step in this constrained setup. There are $9$ persons in this image. Combining DeepLab-people score with pose-instance map improves the instance segmentation quality over the pose-instance map.
Quantitative results in section \ref{sec:inference_with_oracle} show that person keypoints represented as the distance transform can be an excellent source of additional domain knowledge for improving people instance segmentation.
\begin{figure}
\begin{center}
\includegraphics[width=1.6in]{inria2_part1.png}
\includegraphics[width=1.6in]{inria2_part2.png}
\end{center}
\caption{Pose2Instance inference with oracle skeleton. Top row: An image from Inria stereo person segmentation containing 9 persons; Instance classification by argmax on pose-instance map;
Bottom row: DeepLab-people score, and the \emph{Pose2Instance} inference output which is an argmax on \emph{instance heatmap} generated by fusing DeepLab-people and \emph{pose-instance map}.}
\label{fig:Pose2instance_oracle_stickmen2}
\end{figure}
\subsubsection{Person Instances from Oracle Bounding Boxes}
As a baseline, we take the approach of snapping the pixel-level DeepLab-people score at oracle bounding boxes for the
COCO validation images.
Though this bounding box approach does not account for the relative depth ordering or visibility of one instance over another, the method can still be used as a reasonable baseline against which to compare the performance of Pose2Instance inference. \\
We also performed similar experiments with a fast-sweeping \cite{fast_sweep_Weber_2008} distance transform computed on the pixel grid, using a single-pixel-width skeleton as the binary mask, in order to reduce complexity. However, this grid-based distance transform produces worse results than the non-grid RAG approach described above.
\subsection{Learning Pose2Instance} \label{shape likelihood}
After this proof of concept at the inference stage in a controlled setup with oracle keypoints, we move to a more realistic scenario where ground-truth keypoint annotations are unavailable, and we learn a segmentation model by jointly optimizing for segmentation and pose.
Our proposed network has a DeepLab-style architecture \cite{deeplab_chen14semantic}. This is a modified VGG network \cite{vgg_SimonyanZ14a} that uses atrous convolution with hole filling \cite{deeplab_chen14semantic} and replaces fully-connected layers by fully-convolutional layers. The baseline model is a $2$-class \emph{DeepLab-people} model.
To construct this model, we start with the publicly available Deeplab model trained on the PASCAL VOC dataset, and fine-tune it for predicting only people on the COCO training instances.
The second and third exploratory architectures each involve two output layers: a segmentation output and a $17$-channel heatmap for pose estimation.
The first of them, \emph{Pose and Seg}, is a multitask model, where the two parallel output layers share the parameters of all preceding convolutional layers. The $2$-class segmentation layer and the $17$-class pose estimation output layer use cross-entropy loss after softmax and sigmoid activations respectively.
The latter, \emph{Pose2Seg}, is a cascaded model, where the $17$-channel keypoint heatmap is followed by a $1\times1$ convolution that generates the shape likelihood. The segmentation feature maps from the last layer are combined with this shape likelihood, and the softmax segmentation is trained.
Compared with the segmentation-only model, the cascaded model has only $18$ extra parameters for learning the $1\times1$ convolutional kernel: $17$ parameters for the keypoint heatmaps and $1$ for the shape likelihood.
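A minimal TF-Slim style sketch of this cascaded head is given below. The fusion by channel concatenation and all layer names are our assumptions; the text above only specifies the $1\times1$ convolution producing the shape likelihood.
\begin{verbatim}
import tensorflow as tf
import tensorflow.contrib.slim as slim  # TF 1.x / TF-Slim era

def cascaded_head(features):
    # 17-channel keypoint heatmaps (trained with sigmoid cross-entropy).
    pose = slim.conv2d(features, 17, [1, 1], activation_fn=None,
                       scope='pose_heatmaps')
    # 1x1 conv collapsing the heatmaps into one latent shape-likelihood
    # channel: 17 weights + 1 bias = 18 extra parameters.
    shape = slim.conv2d(tf.sigmoid(pose), 1, [1, 1], activation_fn=None,
                        scope='shape_likelihood')
    # Segmentation logits conditioned on the latent shape (fusion by
    # concatenation is an assumption, not specified in the text).
    seg = slim.conv2d(tf.concat([features, shape], axis=3), 2, [1, 1],
                      activation_fn=None, scope='seg_logits')
    return pose, seg
\end{verbatim}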
Figure \ref{fig:Learning Pose2Instance} shows the two above-mentioned architectures. The \emph{stack} operation is a $1\times 1$ convolution on the estimated $17$-channel pose heatmap. Its output can be used as the gross shape likelihood of a person based on the estimated keypoints.
In the cascaded model, the segmentation output is directly conditioned on the pose heatmap. As Figure \ref{fig:Pose2instance_learn results2} also shows, the $1\times 1$ convolution on the pose heatmap preserves the general notion of a person's shape from its keypoints; the segmentation model can thus be thought of as conditioned on the latent shape of a person.
\begin{figure*}
\begin{center}
\fbox{\includegraphics[width=3.2in]{multitask_3-eps-converted-to.pdf}}
\fbox{\includegraphics[width=3.2in]{cascaded_3-eps-converted-to.pdf}}
\end{center}
\caption{Pose2Instance: architecture variation for joint pose and segmentation learning. Left: Multitask model where pose estimation and segmentation are two parallel output paths. Right: Cascaded Model where pose dedicated parameters are \emph{learned} for mapping pose to segmentation.}
\label{fig:Learning Pose2Instance}
\end{figure*}
In the Pose2Instance framework, we try to improve the segmentation accuracy from both keypoints and segmentation supervision. In particular, a model learned with one supervisory signal \emph{pose-estimation} acts as a prior to the model learned with another supervisory signal \emph{segmentation}.
Different sources of supervision have proven to be useful for learning segmentation.
For example, ScribbleSup \cite{scribble_sup_LinDJHS16} performs semantic segmentation from additional scribble-based supervision, broadly in a GrabCut-like framework \cite{grabcut_MSR}. In \cite{Bearman16}, Bearman \etal discussed various levels of supervision, such as pixel-level strong supervision and sparse point-level supervision, for semantic segmentation.
Our method is substantially different from these, since none of the above specifically addresses the (instance) segmentation problem with one supervisory signal acting as a prior on the other.
\section{Results}
\label{sec:results}
We implement the Pose2Instance model using TensorFlow-Slim. We train the model on the specified COCO training instances. We initialize the model from DeepLab-people and continue training for $200{,}000$ iterations using stochastic gradient descent with a mini-batch size of $16$ and momentum $0.9$.
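For concreteness, the quoted schedule corresponds to a TF-Slim training loop of the following form (a sketch; the learning rate, \texttt{total\_loss} and \texttt{logdir} are placeholders, as they are not specified above):
\begin{verbatim}
optimizer = tf.train.MomentumOptimizer(learning_rate=1e-3, momentum=0.9)
train_op = slim.learning.create_train_op(total_loss, optimizer)
slim.learning.train(train_op, logdir, number_of_steps=200000)
\end{verbatim}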
\subsection{Pose2Instance in a Constrained Setup} \label{sec:inference_with_oracle}
In order to analyze the Pose2Instance inference with oracle keypoints, we use COCO validation images.
Table \ref{table:oracle_eval_baseline} shows the performance of Pose2Instance inference with oracle keypoints, while
Figure \ref{fig:Pose2instance_oracle_stickmen_eval} shows some qualitative results
compared with the oracle bounding box baseline. The figures show that for overlapping person instances, the proposed pose prior significantly outperforms a baseline using the bounding box as an ad-hoc prior. \\
\begin{figure}
\begin{center}
\includegraphics[width=3.1in]{coco_gt1.png}
\end{center}
\begin{center}
\includegraphics[width=3.12in]{coco_gt2.png}
\end{center}
\caption{
From left to right: Ground truth instance segmentations; Corresponding image from COCO keypoints dataset; Pose2Instance inference with oracle keypoints.
Colored boxes show errors in segmentation ground truth that are corrected using our keypoint conditioned model.
}
\label{fig:coco_gt_analysis}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=6in]{oracle2_baseline.png}
\end{center}
\begin{center}
\includegraphics[width=6in]{oracle1_baseline.png}
\end{center}
\caption{Pose2Instance with oracle bounding boxes vs oracle keypoints. From left to right: A frame from COCO keypoints dataset; Ground truth instance segmentations; Baseline instance segmentation from oracle bounding boxes and Pose2Instance inference from oracle keypoints.}
\label{fig:Pose2instance_oracle_stickmen_eval}
\end{figure*}
\begin{table}[h]
\centering
\begin{tabular}{lll}
\multicolumn{3}{c}{\textbf{Instance Segmentations on COCO validation Images}} \\
\toprule
Methods & $AP^r$ & $AP^r$ \\
 & IoU=0.5 & IoU=[0.5 to 0.95] \\
\midrule
DeepLab+Oracle BB & 0.437 & 0.252\\
DeepLab+Oracle keypoints & $\mathbf{0.533}$ & $\mathbf{0.283}$\\
\midrule
FAIRCNN\cite{FAIRCNN_ZagoruykoLLPGCD16} & 0.504 & 0.206 \\
CUHK\cite{coco_leaderboard_1026} & 0.478 & 0.214 \\
\bottomrule
\end{tabular}
\caption{
Oracle keypoints provide a $10\%$ to $12\%$ relative improvement over the oracle bounding box case at various IoU thresholds when applied on the DeepLab-people segmentation model. Results are shown from the COCO Leaderboard for FAIRCNN \cite{FAIRCNN_ZagoruykoLLPGCD16} and CUHK \cite{coco_leaderboard_1026}, which also use VGG as the base network.}
\label{table:oracle_eval_baseline}
\end{table}
The inference stage which consists of combining the existing semantic segmentation model and the oracle keypoints outperforms the oracle bounding box case by $10\%$ relative improvement.
FAIRCNN \cite{FAIRCNN_ZagoruykoLLPGCD16} and CUHK \cite{coco_leaderboard_1026} are instance segmentation models that also use VGG as the base network. We include their instance segmentation results only on the \emph{person} category from the COCO detection challenge Leaderboard as references.
Newer models on the Leaderboard use the more powerful ResNet as their backbone, so they are not directly comparable.
Figure \ref{fig:Pose2instance_oracle_qualitative results} shows qualitative results of instance segmentation on COCO validation dataset in such constrained environments. We note that this method only applies during the inference step with oracle keypoints and does not involve any training.
Human keypoints ground truth is easier to collect than precise segmentation ground truth. Figure \ref{fig:coco_gt_analysis} shows how the errors in the segmentation ground truth can be corrected with our pose-conditioned segmentation model.
\begin{figure*}
\begin{center}
\includegraphics[width=1.24in]{result2.jpg}
\includegraphics[width=1.1in]{result4.jpg}
\includegraphics[width=1.47in]{result6.jpg}
\includegraphics[width=1.15in]{result8.jpg}
\includegraphics[width=1.11in]{result10.jpg}
\end{center}
\caption{Pose2Instance in a constrained setup. Top: DeepLab person segmentation. Bottom: Pose2Instance inference from oracle keypoints on the COCO evaluation dataset. (best viewed in color)}
\label{fig:Pose2instance_oracle_qualitative results}
\end{figure*}
\subsection{Pose2Instance in Realistic Environments}
After validating the effectiveness of the inference with keypoint-specific distance transform, we evaluate the proposed Pose2Instance model on COCO validation instances in a more realistic environment where oracle keypoints are unavailable. We assume the availability of oracle bounding boxes. The model estimates the keypoints and segmentations at all instances.
\begin{figure*}
\begin{center}
\includegraphics[width=5.5in]{learning_dist_transform.jpg}
\end{center}
\caption{Qualitative results for shape likelihood from pose estimation. Top to bottom: instances from the COCO validation dataset; visualization of the intermediate latent shape likelihood for (i) the \emph{Pose Only} model, (ii) the \emph{Multitask} model and (iii) the \emph{Cascaded} model respectively. The \emph{Pose Only} model produces high likelihood only around the keypoints, whereas the two joint models learn to capture the overall person contour shape.}
\label{fig:distance_transform_qualitative results}
\end{figure*}
\begin{table}[h]
\centering
\begin{tabular}{lll}
\multicolumn{3}{c}{\textbf{Instance Segmentations without Oracle Keypoints}} \\
\toprule
Methods & $AP^r$ & $AP^r$ \\
 & IoU=0.5 & IoU=[0.5 to 0.95] \\
\midrule
DeepLab Seg only & 0.79 & 0.38\\
Multitask: Pose and Seg & 0.80 & 0.40\\
Cascaded: Pose2Seg & $\mathbf{0.82}$ & $\mathbf{0.42}$\\
\bottomrule
\end{tabular}
\caption{Evaluation of segmentation accuracy on instances from the COCO validation dataset.
Joint pose estimation and segmentation outperforms the \emph{segmentation only} model. The Pose2Instance \emph{cascaded} model achieves improved accuracy over the \emph{multitask} model. Overall, the relative improvement over the \emph{segmentation only} model is $3.8\%$ to $10.5\%$ at various IoU thresholds.}
\label{table:pose2instance_learn}
\end{table}
In Table \ref{table:pose2instance_learn}, we show the comparative segmentation performance evaluation for the proposed Pose2Instance method without oracle keypoints. Average precision at $0.5$ IoU improves by $3\%$ over the \emph{segmentation only} model and $2\%$ over the multitask model.
The corresponding improvements for the $[0.5, 0.95]$ IoU range are $4\%$ and $2\%$ respectively.
In terms of relative improvements, at $0.5$ IoU and $[0.5, 0.95]$ IoU,
the pose-conditioned segmentation model improves the $AP^r$ by $3.8\%$ and $10.5\%$ respectively over the \emph{segmentation only} model (Table \ref{table:pose2instance_learn}).
This demonstrates the proof-of-concept of how to incorporate pose prior effectively into deep segmentation model.
\begin{figure*}
\begin{center}
\includegraphics[width=6.5in]{posetoinstance_learn.png}
\end{center}
\caption{Pose2Instance without oracle keypoints. Top row: instance bounding boxes of COCO validation images. Middle row: ground truth segmentation at instance level. Bottom row: predicted segmentation masks for the instance bounding boxes. Bounding boxes contain multiple full or partial person instances. While the first three columns show successful instance segmentation results, the last two examples show segmentation results that still need improvement, owing to failures of the pose estimator output in the VGG-based Pose2Instance model.
Improving the pose estimator can improve the accuracy of the pose-conditioned segmentation model.
}
\label{fig:Pose2instance_learn results2}
\end{figure*}
Figure \ref{fig:Pose2instance_learn results2} shows qualitative results on some challenging examples. These rectangular regions contain one or more partial person instances in addition to the primary person instance. We see that the Pose2Instance model learns to produce the instance segmentation only for the intended one. The last two figures are examples of the most difficult cases, with many people in close proximity, and the Pose2Instance predictions are far from ideal due to the current limitations of the VGG-based pose estimator output.
In this work, we assessed the effectiveness of pose-conditioned segmentation and did not evaluate the parallel keypoint estimation output. We performed some qualitative analysis of the pose estimation output from the described multitask and cascaded models. Additionally, we implemented another vanilla pose estimator model with the same network except for the segmentation output. We call this the \emph{PoseOnly} model, optimized only for the $17$-class pose estimation problem. Our subjective analysis of the latent shape likelihood of a person covers the \emph{PoseOnly}, \emph{multitask} and \emph{cascaded} models.
Figure \ref{fig:distance_transform_qualitative results} shows some visualizations of the latent shape likelihood (section \ref{shape likelihood}) on some COCO validation images.
\section{Discussions}
Our experiments suggest that human pose is a useful domain knowledge even atop state-of-the-art deep person segmentation models.
We show that in a constrained environment with oracle keypoints, at various IOU thresholds,
the instance segmentation accuracy achieves $10\%$ to $12\%$ relative improvement
over a strong baseline with oracle bounding boxes without any training.
In a more realistic environment, without oracle keypoints, the proposed Pose2Instance deep model achieves a $3.8\%$ to $10.5\%$ relative improvement in segmentation accuracy over the strongest baseline of a
deep network trained only for segmentation.
Our proposed method
is applicable to \emph{any} such architecture that shares
the necessary properties of the Deeplab model.
Models optimized for the segmentation task, including the one covered in our
experiments and future better-performing segmentation models, could
potentially incorporate the same methodology to utilize pose information.
While at present we show results on images, similar dynamics are likely embedded in videos.
Human keypoints ground truth is easier to collect than precise segmentation masks. Thus, a pose-conditioned segmentation model can be more powerful for person instance segmentation in natural scenes, where people tend to appear in groups, interact dynamically, and occlude one another partially.
This work represents a first step towards embedding pose into segmentation in complex scenes. Exploratory follow-up work could investigate incorporating a keypoint-based dynamic person model into video segmentation.
\section*{Acknowledgments}
The authors would like to thank George Papandreou and Tyler Zhu for their extensive assistance and insightful comments.
{\small
\bibliographystyle{ieee}
\section{Science}
Millimeter-wave astronomy offers a variety of important scientific targets: from line emission of interstellar molecules to continuum emission of the CMB.
The W-band lies at an interesting transition between frequencies where Galactic emission is dominated by free-free and synchrotron radiation, and frequencies where it is dominated by dust (see Fig.~\ref{fig:fig1}, [2]). In this band, the atmosphere is quite transparent, so ground-based observations can be performed, avoiding the costs, complications and size limitations of balloon-borne and space-based missions.
\begin{figure}[!h]
\vspace{-0.4cm}
\begin{center}
\includegraphics[scale=0.15]{spectra_color}
\end{center}
\vspace{-0.4cm}
\caption{Brightness temperature rms as a function of frequency for the astrophysical components. (Color figure online)}
\vspace{-0.4cm}
\label{fig:fig1}
\end{figure}
The measurement of the B-modes of CMB polarization would provide the final confirmation of the cosmic inflation hypothesis [3]. To date this measurement has not been possible for two reasons: the sensitivity of the detectors and the polarized foreground emissions. The former can be improved with arrays of thousands of independent ultra-sensitive low-temperature detectors. The latter can be carefully separated from the signal of interest with multi-band experiments, where the W-band plays a key role. The same is true at smaller angular scales, where the Sunyaev-Zeldovich effect of CMB photons crossing clusters of galaxies allows the study of the intracluster gas and even the use of clusters as cosmology probes [4].
In Fig.~\ref{fig:fig2} we show two simulations of the performance of a W-band Fourier Transform Spectrometer (FTS) equipped with a 100-pixel array at the focus of the SRT.
\begin{figure}[!h]
\vspace{-0.6cm}
\begin{center}
\includegraphics[scale=0.30]{simulations}
\end{center}
\vspace{-0.4cm}
\caption{Simulations of the performance of a W-band FTS equipped with a 100 pixels array at the focus of the SRT. The {\it Left} panel shows a simulation of the measurement of the kinetic SZ effect, in the W-band, of a cluster with Comptonization parameter $y=0.005$ and peculiar velocity $v=480$ km/s, observed with a 1 GHz resolution FTS in 10 hours of integration. The {\it Right} panel shows a simulation of the molecular lines, in the W-band, for the merger galaxy ARP220.}
\label{fig:fig2}
\end{figure}
\section{Experimental Setup}
Our cryogenic system is a four-stage cryostat [5], composed of a pulse-tube (PT) cryocooler, two absorption cryo-pumps (a $^{4}$He and a $^{3}$He
fridge) and a $^{3}$He$/^{4}$He dilution refrigerator. This single-shot cryostat is able to cool the detectors down to 180 mK for 7 hours.
The optical system is composed of a window, two lenses, and a chain of four filters: one thermal shader, two low-pass filters and one band-pass filter (Fig.~\ref{fig:fig3}).
\begin{figure}[!h]
\begin{center}
\vspace{-0.5cm}
\includegraphics[scale=0.135]{cryostat_color}\hspace{-0.2cm}
\includegraphics[scale=0.15]{optics_color}\hspace{-0.35cm}
\includegraphics[scale=0.34]{filter_proceeding_color}
\end{center}
\vspace{-0.5cm}
\caption{The {\it Left} panel is a photo of the cryostat. The {\it Center} panel shows the Zemax design of the optical system. The {\it Right} panel shows the filter transmission spectra. (Color figure online.)}
\vspace{-1cm}
\label{fig:fig3}
\end{figure}
\section{Design}
We tested three different detector arrays, consisting of three different types of superconductive film: 40 nm thick Al, 80 nm thick Al and $10/25$ nm thick Ti-Al bi-layer. Each array has 2 pixels. The pixel geometry is inspired by the LEKID architecture [6], consisting of a meandered inductor and a broader interdigitated capacitor coupled to a feedline, Fig.~\ref{fig:fig4}.
Our samples are fabricated in an ISO5$-$ISO6 clean room at Consiglio Nazionale delle Ricerche Istituto di Fotonica e Nanotecnologie (CNR-IFN), on a high-quality (FZ method) 2"$\times300$ $\mu$m intrinsic Si(100) substrate, with high resistivity ($\rho>10$ k$\Omega$cm), double-side polished [7].
\begin{figure}[!h]
\begin{center}
\includegraphics[scale=0.29]{photos_color}
\end{center}
\caption{The {\it Left} panel is a photo of the 2" diameter wafer, populated with 12 2-pixel arrays. The {\it Center} panel shows the design of a single pixel. The {\it Right} panel is a photo of 1 array in the holder. (Color figure online.)}
\vspace{-1cm}
\label{fig:fig4}
\end{figure}
\section{Measurements}
The critical temperature $T_{c}$ of the film is linked to the minimum frequency $\nu_{m}$ able to break Cooper pairs (CPs) by
\begin{equation}
h\nu_{m}=2\Delta\left(T_{c}\right)\;.
\end{equation}
Since the W-band starts at 75 GHz, $T_{c}$ has to be lower than 1.03 K.
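For reference, assuming the standard BCS relation $\Delta\left(T_{c}\right)\simeq1.764\,k_{B}T_{c}$ (the exact gap ratio depends on the film), this bound follows from
$$
T_{c}<\frac{h\nu_{m}}{3.528\,k_{B}}\simeq\frac{75\ \mathrm{GHz}}{73.5\ \mathrm{GHz/K}}\simeq1.02\ \mathrm{K},
$$
in agreement, within the assumption on the gap ratio, with the bound quoted above; the same conversion links the measured $\nu_{m}$ values reported below to the corresponding estimates of $T_{c}$.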
We measured $T_{c}$ using a 4-wire reading of the resistance of the feedline at different temperatures, obtaining $T_{c}=\left(1.352\pm0.041\right)$ K and $T_{c}=\left(1.284\pm0.039\right)$ K for Aluminum 40 nm and 80 nm thick, respectively. The uncertainties on the temperature are due to the thermometer calibration and amount to about 3\% of the measured value.
Independent evidence of $T_{c}$ was obtained by illuminating the chip with a millimeter-wave source, Fig.~\ref{fig:fig5}. We monitored the phase, amplitude and resonant frequency of the resonators while increasing the frequency of the source from 75 to 110 GHz. The minimum source frequencies producing significant response variations of the resonators are $\nu_{m}=95.50$ GHz and $\nu_{m}=93.20$ GHz for Aluminum 40 nm and 80 nm thick respectively. These values correspond to $T_{c}=1.308$ K and $T_{c}=1.277$ K, in full agreement with the 4-wire measurements, Fig.~\ref{fig:fig6}.
\begin{figure}[!h]
\begin{center}
\includegraphics[scale=0.44]{measurement_setup_color}
\vspace{-0.3cm}
\end{center}
\caption{Setup for the measurement of the critical temperature illuminating the chip with a Millimeter-Wave source (VNA). (Color figure online.)}
\label{fig:fig5}
\end{figure}
The 80 nm critical temperature is still too high to operate the KIDs in the lowest half of the W-band. In Fig.~\ref{fig:fig7} we show the 4-wire measurement of $T_{c}$ for two samples of the 10 nm thick Ti $+$ 25 nm thick Al bi-layer: we find $T_c=\left(0.820\pm0.025\right)$ K, which allows operation of the LEKID in the entire W-band. Using a VNA, we measured the minimum frequency at which the KID sensor responds to incoming radiation. This is about 65 GHz, as shown in Fig.~\ref{fig:fig8} (\emph{Left} panel), where the relative minima in the phase response, like those at 65.5 GHz and 67.5 GHz, are due to the large shift of the resonance frequency (\emph{Right} panel of Fig.~\ref{fig:fig8}). In this measurement setup, a constant excitation power over frequency is guaranteed by the signal generator of the mm-wave source, so the measured response shown in Fig.~\ref{fig:fig8} has to be attributed entirely to the detector.
\begin{figure}[!h]
\begin{center}
\includegraphics[scale=0.29]{t_c_40nm_color_new}\hspace{-0.17cm}
\includegraphics[scale=0.29]{t_c_80nm_color_new}
\end{center}
\vspace{-0.3cm}
\caption{Measurements of the critical temperature for Aluminum 40 nm thick (\emph{Left} panel) and 80 nm thick (\emph{Right} panel). For each panel, the top plot reports the 4-wires resistance measurement result, while the center and bottom plots display the response measurements with a millimeter-wave source (VNA). (Color figure online.)}
\label{fig:fig6}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[scale=0.29]{T_c_Ti-Al_color_new}
\end{center}
\vspace{-0.3cm}
\caption{Direct measure of the critical temperature of 10 nm thick Ti $+$ 25 nm thick Al bi-layer. (Color figure online.)}
\label{fig:fig7}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[scale=0.29]{bilayer_freq_c}\hspace{-0.2cm}
\includegraphics[scale=0.29]{bilayer_span1MHz_color}
\end{center}
\vspace{-0.3cm}
\caption{\emph{Left} panel: measurement of the cut-on frequency with a millimeter-wave source (VNA). \emph{Right} panel: resonance responses over 1 MHz span around the dark resonance frequency ($\nu_{res}$), for different source frequencies. $\nu_{res}$ (Dark) is our operation point. (Color figure online.)}
\label{fig:fig8}
\end{figure}
\section{Conclusion}
Sensitive detector arrays working in the W-band are very interesting for operation with large radiotelescopes (like the SRT) and in future space missions. Among the targets are the observation of the SZ effect, surveys of dust and the interstellar medium, and the study of AGNs and CMB polarization.
For this purpose, we have designed, fabricated, and tested KIDs covering the whole W-band. The use of Ti-Al bi-layers extends the operation of the detector down to frequencies as low as 65 GHz, thus covering the entire W-band.
We are now performing a full characterization of Ti-Al LEKID, including optical and Noise Equivalent Power measurements. The first results confirm promising performance.
\begin{acknowledgements}
We would like to thank Roberto Diana and Anritsu: the measurements between 58 and 70 GHz were made possible thanks to the loan of Anritsu MS4647B VNA.
This research has been supported by Sapienza University of Rome and the Italian Space Agency.
\end{acknowledgements}
\section{Introduction}
\label{intro}
\noindent
Several implementations have been considered for circuital quantum information processing (QIP) \cite{QIP_DiVincenzo,Review_Ladd,Review_BulutaNori,Review_SiQIP_Morton}.
Nevertheless, scalability is still an issue for many architectures.
In this framework, a Complementary Metal-Oxide-Semiconductor (CMOS) architecture based on silicon would take full advantage of the well-known physical properties of the material and of the mature technological improvements driven by the semiconductor industry.
Here we provide a CMOS-compatible architecture for QIP in silicon and evaluate the performance of the fundamental logic gates, as well as the physical constraints on their scalability to multi-qubit logic circuits.
As a result, two important parameters are determined for this implementation: the maximum surface density of quantum information and the characteristic time for quantum communication between two logic qubits.
Implementations of charge and spin qubits in silicon have been explored both in quantum dot systems \cite{Review_SiQIP_Morton,MDM_APEX,Prati_Nanotech,Morello_Nature2013_QD}
and in single dopant atom transistors \cite{Kane,Yablonovich_SiGe_Qubit,Hollenberg_Charge,KoillerSaraiva_SiQIP,Leti_APL,Mazzeo_APL,Varenna,Prati_Nature,Morello_NanoLett}. Coherent manipulation of quantum states has been demonstrated in atomic systems \cite{Morello_T1,Morello_T2}, as well as in semiconductor quantum dots (QD), which benefit from more relaxed bounds on the device dimensions \cite{LossDiVinc,Spin_Qubits_Kloeffel}.
Besides single spin and charge qubits \cite{Koppens,LZS_Charge_Qubit}, that make use of a single electron in a double quantum dot (DQD), in the last decade several architectures have been explored by employing two electron spins (S-T$_0$ $i.e.$ singlet-triplet qubit) \cite{Petta,Yacoby_2qubit} or three spins in double \cite{Shi_Hybrid} and triple QDs \cite{DiVinc_Marcus_3QD}.
Although satisfactory results have been achieved mainly in III-V heterostructures, spin-orbit coupling and hyperfine interactions are weaker in silicon \cite{Tyryshkin_T2_Seconds,Review_SiQIP_Morton,Morello_Nature2013_QD}, suggesting that silicon itself could be a promising platform for QIP.
Focusing on silicon DQD qubits, the controlled manipulation of qubit states has been obtained for a S-T$_0$ qubit \cite{HRL} and a hybrid qubit \cite{Shi_Nature_Hybrid,Hybrid_2014} in SiGe heterostructures.
In particular, the hybrid qubit is an attractive candidate for a large scale integration of QIP, as it allows a fast and all-electrical manipulation of the qubit states, with no need for either a strong magnetic field gradient, like in S-T$_0$ qubits, or microwave antennas, required in single spin qubits.
Besides studying the electronic properties of Si-MOS QDs for QIP \cite{Pierre_APL,Prati_Nanotech,MDM_APEX}, we also derived the effective Hamiltonian for hybrid qubits, defining the pulse sequences to perform universal quantum computation with such an architecture \cite{Ferraro_QIP,LavoroLungo,Universal_Set}.
Here we calculate the maximum quantum information storage density allowed by a CMOS-compatible implementation of silicon hybrid qubits.
In Section \ref{sec:Hybrid qubits} the logic basis of the hybrid qubit is presented, as well as the operation of 1-qubit and 2-qubit logic gates for universal QIP.
In Section \ref{Technology} their feasibility is discussed in a state of the art CMOS process: the physical requirements for the manipulation of quantum states are compared with the constraints imposed by the existing technologies and realistic devices are designed to implement data and communication qubits.
Finally, in Section \ref{LSI} the large scale integration of silicon hybrid qubits is considered in multi-qubit networks capable of fault tolerant computation and quantum error correction.
The maximum surface density of logic qubits per unit area is estimated, as well as the time load for quantum communication between two logic qubits.
\section{Fundamental logic gates in the hybrid qubit architecture}
\label{sec:Hybrid qubits}
\noindent
This Section is devoted to the description of the main concepts underlying the hybrid qubit architecture.
The main building blocks for such an architecture are defined in terms of data and communication qubits.
Data qubits perform quantum information processing, that is, one- and two-qubit logic operations as well as initialization and read-out of individual qubit states.
Communication qubits, conversely, are devoted to the transfer of quantum information between distant data qubits.
In Subsection \ref{subsec:data_qubits} the fundamentals of hybrid architecture are introduced, as well as the schematic design and the operation of the quantum logic gates with one and two interacting qubits. In Subsection \ref{subsec:comm_qubits} we show how quantum information can be transmitted between distant hybrid qubits through the sequential repetition of SWAP logic gates between adjacent communication qubits.
\subsection{Data qubits: one and two qubit logic gates}
\label{subsec:data_qubits}
\noindent
An architecture that promises the best compromise among fabrication, fast gate operations, manipulation and scalability is the hybrid qubit proposed in Refs. \cite{Shi_Hybrid,Shi_Nature_Hybrid,Hybrid_2014,Ferraro_QIP,LavoroLungo,Universal_Set,Koh_PNAS}. It consists of two quantum states based on three electrons electrostatically confined in two QDs, with at least one electron in each. The appeal of such an architecture stems from the possibility of obtaining fast gate operations with purely electrical manipulations. The exchange interaction, which is the dominant mechanism of interaction between adjacent spins, suffices for all the one- and two-qubit operations. In addition, the three-electron spin system removes the need for oscillating magnetic or electric fields or a quasi-static Zeeman field gradient to realize full qubit control, as required, for instance, in singlet-triplet qubits \cite{Petta}. Starting from a Hubbard-like model, we derived in Ref. \cite{Ferraro_QIP} a general effective Hamiltonian for the hybrid qubit in terms of only exchange interactions among the three electrons.
To define the logic basis for the hybrid qubit, let us first introduce some preliminary notions. The total Hilbert space of three electron spins has dimension 8, and the total spin eigenstates form a quadruplet with $S=3/2$ and $S_z=-3/2;-1/2;+1/2;+3/2$ and two doublets, each with $S=1/2$ and $S_z=\pm1/2$, where the square of the total spin is $\hbar^2S(S + 1)$ and the z-component of the total spin is $\hbar S_z$. The qubit is encoded in the restricted two-dimensional subspace with spin quantum numbers $S=1/2$ and $S_z=-1/2$, as in Ref. \cite{Shi_Hybrid}. We point out that only states with the same $S$ and $S_z$ can be coupled by spin-independent terms in the Hamiltonian. The logic basis $\{|0\rangle,|1\rangle\}$ is constituted by singlet and triplet states of a pair of electrons combined with the angular momentum of the third spin, that is:
\begin{equation}\label{01}
|0\rangle\equiv|S\rangle|\downarrow\rangle, \qquad |1\rangle\equiv\sqrt{\frac{1}{3}}|T_0\rangle|\downarrow\rangle-\sqrt{\frac{2}{3}}|T_-\rangle|\uparrow\rangle
\end{equation}
where $|S\rangle$, $|T_0\rangle$ and $|T_-\rangle$ are respectively the singlet and triplet states
\begin{equation}
|S\rangle=\frac{|\uparrow\downarrow\rangle-|\downarrow\uparrow\rangle}{\sqrt{2}}, \quad |T_0\rangle=\frac{|\uparrow\downarrow\rangle+|\downarrow\uparrow\rangle}{\sqrt{2}}, \quad |T_-\rangle=|\downarrow\downarrow\rangle
\end{equation}
in the left dot, and $|\uparrow\rangle$ and $|\downarrow\rangle$ respectively denote a spin-up and spin-down electron in the right dot.
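As a sanity check, the orthonormality of this logic basis can be verified numerically; the following sketch (an illustration, not part of the original derivation) builds the $8$-dimensional states by tensor products:
\begin{verbatim}
from functools import reduce
import numpy as np

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
kron = lambda *vs: reduce(np.kron, vs)

S  = (kron(up, dn) - kron(dn, up)) / np.sqrt(2)  # singlet
T0 = (kron(up, dn) + kron(dn, up)) / np.sqrt(2)  # triplet, S_z = 0
Tm = kron(dn, dn)                                # triplet, S_z = -1

q0 = np.kron(S, dn)                              # |0>
q1 = (np.sqrt(1 / 3) * np.kron(T0, dn)
      - np.sqrt(2 / 3) * np.kron(Tm, up))        # |1>

assert abs(q0 @ q0 - 1) < 1e-12 and abs(q1 @ q1 - 1) < 1e-12
assert abs(q0 @ q1) < 1e-12  # the two logic states are orthonormal
\end{verbatim}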
Every logic operation starts with the initialization process, in which all the variables are regulated through appropriate external electric and magnetic fields \cite{Shi_Hybrid}. During this procedure, all the qubits composing the system are moved into the state corresponding to the logic 0. Starting from this condition, it is possible to proceed with the operations, which are generally described by unitary matrices and finally lead to the desired logic gates.
\begin{figure}[ht]
\centerline{\includegraphics[width=1.0\textwidth]{1qubit.png}}
\vspace*{13pt} \fcaption{Left: schematic of the configuration for the hybrid qubit; electrons are denoted by 1, 2 and 3; dotted lines indicate the main interactions. Right: qualitative design of the device holding a single hybrid qubit with the related \emph{reservoir} and SET. The three electrons, highlighted as red circles, are electrostatically confined in the double QD by means of metallic gates (well and barrier). The electron \emph{reservoir} is added to allow the read-out of the qubit state through the SET.}\label{Fig:device}
\end{figure}
In the following, an implementation of the hybrid qubit is presented. A sketch of the device is reported in Figure \ref{Fig:device}, where metal gates form two electrostatic QDs and control the energy barrier between them. However, additional structures are needed to inject electrons in the QDs. This can be achieved by fabricating a \emph{reservoir} as source of electrons near the double QD and by controlling the height of an energy barrier between the \emph{reservoir} itself and the double QD through an electrostatic gate. In addition, the fabrication of a charge sensor is needed for the readout of the qubit state which coincides with the read out of the spin state of electrons confined in the doubly occupied QD. To serve this purpose, a Single Electron Transistor (SET), which is a MOSFET where a QD is formed by placing additional lateral gates orthogonal to the channel, can be used to electrostatically sense the spin state of the electrons in the doubly occupied QD.
Once the operations on the qubits are concluded, the next step is the read-out process, described in the following. When read-out of the qubit starts, tunneling is allowed from the doubly occupied QD to the \emph{reservoir} by a reduction of the interposed electrostatic barrier. When the electron pair is in a singlet state, the corresponding wavefunction is more confined and the tunneling rate to the \emph{reservoir} is lower than that of the triplet state, which has a broader wavefunction. When the electron tunnels, the electrostatic potential landscape changes and so does the current passing through the electrostatically coupled SET. The measurement of the time interval between the read-out signal and the current variation in the SET is expected to reveal the spin state of the electron pair \cite{Shi_Hybrid}.
\begin{figure}[ht]
\centerline{\includegraphics[width=1.0\textwidth]{2qubit.png}}
\vspace*{13pt} \fcaption{Left: schematic of the configuration for the couple of interacting hybrid spin qubits denoted by a and b; electrons are denoted by 1, 2 and 3; dotted lines indicate the main interactions. Right: qualitative design for a couple of interacting hybrid qubits.}
\label{Fig:deviceCNOTs}
\end{figure}
By adopting the same approach as for the single-qubit logic gates, the extended effective Hamiltonian model for two interacting qubits is derived \cite{LavoroLungo}. The total system, composed of six electron spins, is described within the subspace with total angular momentum $S=1$ and $S_z=-1$, adopting the basis $\{|00\rangle, |01\rangle, |10\rangle,|11\rangle\}$, where the logic states $|0\rangle$ and $|1\rangle$ are defined in Eq.(\ref{01}).
A possible layout for two hybrid qubit gates is sketched in Figure \ref{Fig:deviceCNOTs}, where two data qubits are put in close connection by a controllable electrostatic barrier.
The Controlled-NOT (CNOT) gate, for example, is obtained by using the sequence reported in Refs. \cite{LavoroLungo,Universal_Set}.
The gates in Figure \ref{Fig:device} and \ref{Fig:deviceCNOTs} are sufficient to carry out arbitrary qubit rotations as well as general two qubit operations, providing a complete set of quantum gates for universal quantum computing.
\subsection{Communication qubits: the SWAP chain}
\label{subsec:comm_qubits}
\noindent
In this paragraph the problem of communication among data qubits within an interconnecting circuit is analysed and an efficient strategy for an optimal transfer is presented. The model is based on hybrid qubit chains where the exchange interaction is exploited to transfer the logic states from end to end through SWAP operations, where a SWAP gate exchanges the states of two adjacent qubits.
In Figure \ref{Fig:deviceSWAPChain} a scheme and a qualitative design of the hybrid qubit chain is shown where an even number of hybrid qubits are put into direct connection.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.6\textwidth]{scheme.png}
\includegraphics[width=0.6\textwidth]{channel.png}
\end{center}
\vspace*{13pt} \fcaption{Top: schematic of a chain of an even number ($2n$) of hybrid qubits. Bottom: example of qualitative design for the implementation of quantum dot chains. The electrons confined in the QDs are highlighted in red.
$d_\text{iD}$ is the inter-dots distance, $l_\text{CQ}$ = $2d_\text{iD}$ is the length of the communication qubit and $d_\text{DQ}$ is the distance between the interconnected data qubits which corresponds to the distance between the head and the tail of the chain.}\label{Fig:deviceSWAPChain}
\end{figure}
From a practical point of view, it is first necessary to initialize all the qubits, i.e., the $3\times2n$ electrons composing the chain ($3$ electrons for each qubit and $2n$ hybrid qubits in total).
The transfer begins when the state of the qubit at the head of the chain is exchanged through a SWAP operation with the state of the adjacent qubit. In this way the first qubit receives the state of the second qubit and vice versa. At the second step the same mechanism involves the second and the third qubit. At the end of the process, the information has been completely transferred to the last qubit.
In this scheme the exchange is operated sequentially. An optimized control, as depicted in Figure \ref{Fig:SWAPchain}, allows the exchanges to be operated in parallel instead, optimizing the bidirectional transfer of the states.
\begin{figure}[h]
\centerline{\includegraphics[width=0.7\textwidth]{SWAPchain.eps}}
\vspace*{13pt} \fcaption{State representation of the hybrid qubit chain as a function of time. A sequence of $2n-1$ SWAP steps, each with time duration $t_{SWAP}$, is applied when a chain of $2n$ hybrid qubits is considered. SWAP operations between qubits 1-2, 3-4, 5-6, etc. are required in odd time steps, whereas qubits 2-3, 4-5, 6-7, etc. are swapped in even time steps.
As a result, an independent control of all the gates in each SWAP step is not required. Gates can be grouped in two sets and driven alternatively, making the chain control easier. After $2n-1$ SWAP steps, a bidirectional transfer of the states initially localized at the extremities of the chain is obtained.
}\label{Fig:SWAPchain}
\end{figure}
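The alternating control of Figure \ref{Fig:SWAPchain} can be written down compactly; the sketch below (names are illustrative) returns, for each of the $2n-1$ time steps, the list of adjacent pairs to be swapped in parallel:
\begin{verbatim}
def swap_schedule(n_qubits):
    # Brick-pattern schedule: odd steps swap pairs (1-2), (3-4), ...,
    # even steps swap pairs (2-3), (4-5), ... (0-indexed below).
    steps = []
    for t in range(n_qubits - 1):  # 2n - 1 steps for n_qubits = 2n
        start = t % 2
        steps.append([(i, i + 1)
                      for i in range(start, n_qubits - 1, 2)])
    return steps

# Example: a chain of 4 hybrid qubits needs 3 SWAP steps:
# swap_schedule(4) -> [[(0,1),(2,3)], [(1,2)], [(0,1),(2,3)]]
\end{verbatim}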
In order to find the gate sequences necessary to generate a SWAP operation between two qubits, we employed the same search algorithm used in \cite{LavoroLungo}. The resulting pulse sequence implementing the SWAP gate is reported in Figure \ref{Fig:SeqSWAPv1}, where the different $J$ terms represent the exchange parameters \cite{LavoroLungo}.
\begin{figure}[h!]
\centerline{\includegraphics[width=0.8\textwidth]{SeqSWAPv1.eps}}
\vspace*{13pt} \fcaption{Waveforms of the effective exchange variables implementing a SWAP gate (up to a global phase) with fixed $J_{12}$ = $J$/2 in both qubits. Times are in unit of $h/J$.}\label{Fig:SeqSWAPv1}
\end{figure}
Only one pair of electrons at a time interacts, after tuning the tunneling parameters between the dots belonging to the same qubit or to different ones.
Interactions between $1_{a}$ and $2_{a}$ and between $1_{b}$ and $2_{b}$ have been set to a constant value in the search algorithm, as they are not effectively manipulable from the outside \cite{LavoroLungo}.
More in detail, $J_{1a2a} = J_{1b2b} = J/2$, where $J$ is the maximum effective exchange interaction between the two dots.
In order to quantitatively design the qubit chain, the simulation results on the single qubit reported in Ref. \cite{LavoroLungo} are used.
The SWAP time $t_\text{SWAP}$ between the states of two adjacent hybrid qubits depends on the exchange interaction $J$, which in turn depends on the tunneling rate $t_r$ between the energy levels in the two QDs forming the qubit.
$t_r$ depends on the inter-dot distance $d_\text{iD}$, which is linked to the length of the communication qubit by $l_\text{CQ}=2d_\text{iD}$. The number of qubits $2n$ forming the qubit chain depends on the ratio between the head-to-tail distance between data qubits $d_\text{DQ}$ and $d_\text{iD}$. The total time $t_\text{TOT}$ to transfer the information from one extremity to the other by successive SWAP operations is:
\begin{equation}
t_\text{TOT} = (2n-1) \cdot t_\text{SWAP}= \Big(\frac{d_\text{DQ}}{l_\text{CQ}} -1 \Big) \cdot t_\text{SWAP}(d_\text{iD}) = \Big( \frac{d_\text{DQ}}{2d_\text{iD}} -1 \Big) \cdot t_\text{SWAP}(d_\text{iD})
\end{equation}
where $t_\text{SWAP}=t_\text{seq} \cdot h/J$ and $t_\text{seq}$ is the duration of the sequence in units of $h/J$.
$J$ is estimated with $J=t_r^{2}/\Delta E_\text{ST}$ where $\Delta E_\text{ST}$ is the singlet-triplet energy splitting \cite{Shi_Hybrid}.
In Figure \ref{Fig:t-dint} the total chain time $t_\text{TOT}$ is reported as a function of $d_\text{iD}$ for three different values of $d_\text{DQ}$.
The total time $t_\text{TOT}$ increases exponentially as $d_\text{iD}$ increases.
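The estimate is straightforward to evaluate numerically. The following OCaml fragment encodes the relations above; since only the qualitative dependence of $t_r$ on $d_\text{iD}$ is given here, the exponential decay used below and all parameter values are placeholder assumptions for illustration, not a fitted model.
\begin{verbatim}
(* Chain transfer time (a sketch). Energies in eV, times in s.
   The exponential form of t_r(d_iD) is an assumed placeholder. *)
let h = 4.135667e-15                (* Planck constant, eV s *)

let t_r t_r0 lambda d_id = t_r0 *. exp (-. d_id /. lambda)

let t_tot t_seq t_r0 lambda de_st d_dq d_id =
  let j = (t_r t_r0 lambda d_id) ** 2. /. de_st in (* J = t_r^2 / dE_ST *)
  let t_swap = t_seq *. h /. j in                  (* t_SWAP = t_seq h / J *)
  (d_dq /. (2. *. d_id) -. 1.) *. t_swap           (* (2n - 1) t_SWAP *)
\end{verbatim}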
\begin{figure}[ht]
\centerline{\includegraphics[width=0.75\textwidth]{tTOT-diQD.png}}
\vspace*{13pt} \fcaption{Graph of the total time SWAP chain $t_\text{TOT}$ as a function of the inter QD distance $d_\text{iD}$ for three different distances $d_\text{DQ}$ between head and tail data qubits.}\label{Fig:t-dint}
\end{figure}
\section{CMOS implementation of the hybrid qubit architecture}
\label{Technology}
\noindent
In this section we explore the limits that a CMOS-compatible fabrication process imposes on the size of the discrete components implementing the hybrid qubit architecture.
The standard in semiconductor industry is set by silicon CMOS manufacturing, due to the capability to fabricate p- and n-channel devices on the same chip and to build devices with a low power consumption \cite{Nishi}.
Hence, the technological constraints set by the 22 nm technology node of CMOS nanoelectronics are examined in Section \ref{subsec:process}.
According to such physical constraints, realistic data and communication qubits are designed and reported in Section \ref{subsec:masks}, providing the building blocks for a complete implementation of the hybrid qubit architecture in silicon.
\subsection{Technological constraints of CMOS manufacturing at the 22 nm node}
\label{subsec:process}
Figure \ref{fig:process} shows a schematic process flow for the realization of Si-MOS hybrid qubits on a Silicon-On-Insulator (SOI) platform.
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\columnwidth]{Process2.png}
\vspace*{13pt} \fcaption{Schematic process flow for a CMOS-compatible realization of semiconductor hybrid qubits on SOI wafers.
A schematic top view of the resulting device is shown on the right bottom.}
\label{fig:process}
\end{figure}
Quantum gates are defined by selective etching of the SOI device layer after a lithographic patterning exposure.
Then a gate insulator, for example Al$_2$O$_3$ or other high-$k$ oxides, is deposited on the silicon islands \cite{Nishi,ITRS}. Finally, the metal electrodes for source-drain leads and the electrostatic gates are deposited and patterned by a further lithographic exposure.
The mentioned process flow can be completely adapted to an industrial one: all the steps can be performed with the main deposition and etching techniques employed in a common industrial production line, such as Chemical Vapor Deposition (CVD), Atomic Layer Deposition (ALD) and Reactive Ion Etching (RIE) \cite{Nishi}.
The main critical issues concern the lithographic steps, because a few nanometer resolution is required. %
At present, most of the patterning processes in microelectronics and micromachining for Ultra Large Scale Integration (ULSI) are carried out through Deep Ultra-Violet (DUV) lithography, which makes use of ArF laser sources ($\lambda$ = 193 nm) and is capable of high resolution as well as high throughput (100 wafers per hour) \cite{Nishi,ITRS}.
According to the last International Technology Roadmap for Semiconductors (ITRS) Update, 22 nm is the minimum half-pitch of un-contacted Poly-Si in flash memories and it represents a significant benchmark of the ultimate resolution of DUV lithography at the present node \cite{ITRS}.
Further improvements are expected in the very next years to reach the next technological node set at 16 nm \cite{ITRS}.
In this perspective, alternative lithographic techniques, like Extreme Ultra-Violet (EUV) lithography and multi-beam Electron Beam Lithography (EBL), are examined to push technology to the forthcoming nodes \cite{ITRS}.
A minimum feature size of 20 nm is a reasonable design rule for a realistic implementation of the hybrid qubit architecture proposed in Section \ref{sec:Hybrid qubits}.
All the masks for data and communication qubits have been designed accordingly and are compatible with the 22 nm node technology.
\subsection{Realistic design of data and communication qubits}
\label{subsec:masks}
\noindent
The mask for a single hybrid qubit is reported on the left of Figure \ref{fig:qubit}.
One lithographic level, that defines the SOI islands, is in blue, while the other levels correspond to four superposed metal levels for the electrical contacts, high doping for the electron \emph{reservoirs} and vias to Back End Of Line (BEOL) levels.
In Figure \ref{fig:qubit} we highlighted the silicon regions where the DQD qubit and SET charge sensor are defined.
Grey and green electrodes (level 1 and 3) are used as inter-dot barriers, while red and light blue ones (level 2 and 4) act as plunger gates defining the potential wells and controlling the chemical potential in the quantum dots.
The minimum feature size for structures on the same level is 20 nm.
As a result, the one-qubit gate in Figure \ref{fig:qubit} can be realized on a submicrometer area (300 x 500 nm$^2$).
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{Qubit_copia}
\vspace*{13pt} \fcaption{Lithographic masks for data qubits.
Left: 1-qubit gate is composed of a DQD contacted with an electronic \emph{reservoir} (2DEG) for the system initialization and read-out and a SET for charge sensing of the DQD.
Right: 2-qubit gate. The mask is designed analogously to the 1-qubit gate to obtain full control over two independent qubits coupled by a tunable electrostatic barrier.
A color code on the right side identifies the lithographic levels required for the fabrication process, corresponding respectively to the definition of silicon islands, degenerate doping at the source and drain contacts, four levels for metal gates and vias for electrical connections.
The minimum feature size is 20 nm for both masks, whereas the total area is 300 x 500 nm$^2$ for one qubit and 380 x 500 nm$^2$ for two qubits.}
\label{fig:qubit}
\end{figure}
A two-qubit gate can be obtained as a replica of the one-qubit mask with a few adjustments.
The mask on the right of Figure \ref{fig:qubit} covers an active area of 380 x 500 nm$^2$ and it is composed of two DQDs with separated electron \emph{reservoirs} and SET charge sensors for independent initialization and read-out of the two qubits.
The doubly occupied dot is closer to the charge sensor in both qubits and can directly communicate with the electron \emph{reservoir} to facilitate the read-out procedure.
The gates in Figure \ref{fig:qubit} are sufficient to carry out arbitrary qubit rotations as well as general two qubit operations, like the Controlled-NOT (CNOT).
As a result, such devices provide a complete set of logic gates and represent the starting point to perform universal quantum computation with the hybrid qubit architecture.
According to the discussion in Section \ref{sec:Hybrid qubits}, quantum communication can be accomplished by sequential SWAP operations across a qubit chain.
We designed two modular structures, namely the "chain" and the "T" module, to be composed in arbitrary 2-dimensional arrays of communication qubits (Figure \ref{fig:chain}).
The chain module consists of two qubits controlled by independent plunger and barrier gates, whose only purpose is to carry out a SWAP operation between the states of the two qubits, with no need to initialize and read out the logic states.
Analogously, the T module is a modified and rearranged version of a multi-chain module and acts as a crossroads for flying qubits: orthogonal qubit chains are brought into contact through this module, creating the conditions for 2-dimensional arrays of qubits.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\columnwidth]{Multi_Chain}
\vspace*{13pt} \fcaption{Communication qubits for the coherent transfer of quantum information in a 2-dimensional array of qubits.
Chain modules enable quantum communication between adjacent qubits through a SWAP logic operation. The T module, on the right side, is a modified version of the chain module that makes it possible to bring orthogonal qubit chains into contact.
The lithographic levels are depicted according to the color code in Figure \ref{fig:qubit}.
The shaded squares indicate the active area of the chain module (160 x 460 nm$^2$) and of the T module (1300 x 700 nm$^2$).}
\label{fig:chain}
\end{figure}
\section{Quantum computing on a large scale}
\label{LSI}
\noindent
In this Section the large-scale integration of silicon hybrid qubits is evaluated and the maximum quantum information density per unit surface is estimated.
The occurrence of faulty logic gates and memory errors is taken into account and a Quantum Error Correction scheme is proposed to improve the effective gate fidelity in multi-qubit circuits.
In this framework, two important figures of merit are estimated: the maximum density of logic qubits per unit area and the time for quantum communication between logic qubits.
Gate fidelity is an important metric in a QIP architecture, since errors are much more frequent in quantum computers than in their classical counterparts.
In fact, information in qubits is rapidly corrupted by decoherence, $i. e.$ the interaction with the surrounding environment.
In hybrid qubits charge and spin noise are the main sources of decoherence and they induce unwanted rotations in the Bloch sphere.
As a result, charge and spin noise are responsible for an error probability of about $10^{-3}$ per logic gate \cite{Koh_PNAS}.
Generally, protection against errors in QIP is achieved through bit encoding and fault-tolerant computation \cite{QEC_Beginners}.
A multi-qubit circuit is considered to this extent in Figure \ref{fig:qubyte}, where many data qubits are connected by T-modules on a bus structure.
Here a logic qubit can be encoded by 7 physical qubits according to the $[[7,1,3]]$ Steane code, allowing for fault-tolerant computation and quantum error correction (QEC) \cite{QEC_Beginners,QEC_Steane}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\columnwidth]{Qubyte}
\vspace*{13pt} \fcaption{Design of a logic qubit encoded by the $[[7,1,3]]$ Steane code in 7 physical qubits.
Lithographic levels are represented according to the color code in Figure \ref {fig:qubit}.
This multi-qubit circuit is composed of 8 data qubit gates (the 2-qubit device reported on the right of Figure \ref{fig:qubit}) and 8 T modules for quantum communication (presented in Figure \ref{fig:chain}).
The minimum feature size is 20 nm, while the total area is 11.642 $\mu$m$^2$.}
\label{fig:qubyte}
\end{figure}
In this framework, the information of a logic qubit is stored in a 2-dimensional subspace of the $2^7$-dimensional Hilbert space defined by 7 physical qubits.
As a result, a logic qubit is less susceptible to single physical qubit failures, since a faulty gate can be revealed and corrected with standard QEC techniques maintaining the coherence of the logic qubit.
Actually, QEC algorithms require some supplementary qubits for the measurement of the syndrome of the logic qubit.
The number of such extra-qubits, or \emph{ancillae}, generally depends on the quantum code, the QIP architecture and the specific procedure of error detection and correction.
In particular, 12 auxiliary qubits are sufficient for a complete QEC algorithm with the $[[7,1,3]]$ code \cite{QEC_Beginners,IEEE_SiP_Arch}.
In the bus structure of Figure \ref{fig:Log_qubyte}, each of the 8 branches hosts a logic qubit according to the $[[7,1,3]]$ code and is composed of 20 physical data qubits, including \emph{ancillae} qubits.
Tab. \ref{Tab:Area} reports the dimensions and compositions of the principal quantum gates and multi-qubit circuits presented in this work.
In particular, the 8-qubit block in Figure \ref{fig:Log_qubyte} is the quantum analog of the classical unit of information (the byte) and can be taken as a first benchmark to estimate the maximum density of logic qubits per unit area.
Such a register covers an active area of 25.54 x 12.04 = 307.502 $\mu$m$^2$, which corresponds to an information density of 2.6 Mqubit per cm$^2$.
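The quoted density follows directly from the register area; the short check below assumes only the unit conversion 1 cm$^2$ = $10^8$ $\mu$m$^2$.
\begin{verbatim}
(* 8 logic qubits on a 25.54 um x 12.04 um register. *)
let area_um2 = 25.54 *. 12.04                 (* = 307.502 um^2 *)
let density_per_cm2 = 8.0 /. area_um2 *. 1e8  (* = 2.6e6 qubit/cm^2 *)
\end{verbatim}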
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{Qubyte_Logic_final.png}
\vspace*{13pt} \fcaption{Lithographic mask of a quantum register made of 8 logic qubits (A-H).
Every logical qubit is composed of 20 double data qubits (see the right side of Figure \ref{fig:qubit}) depicted as blue boxes and 20 T blocks (see Figure \ref{fig:chain}) colored in red.
Connections between logic qubits are provided by chain modules (see Figure \ref{fig:chain}) colored in green and 8 additional T modules.
Such logic quantum byte is composed of 1720 data qubits and 1400 communication qubits and covers an area of 307.502 $\mu$m$^2$.}
\label{fig:Log_qubyte}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\columnwidth]{Funzionamento}
\vspace*{13pt} \fcaption{Operation of a two-qubit logic gate between encoded qubits. The reported mask refers to the box highlighted in Figure \ref{fig:Log_qubyte}.
A logical qubit (E) is transferred in proximity of a second logical qubit (F) by sequential SWAP operations through communication qubits.
Then a two-qubit logic gate ($e.g.$ CNOT) is operated between all the couples of physical qubits ($E_i\oplus F_i$) within data qubit blocks, effectively carrying out a CNOT gate between the logic qubits ($E\oplus F$).
The target qubit F is modified by the CNOT gate according to the state of the control qubit E, resulting in a new quantum state F'.
In the end the two qubits are moved to other qubit registers for further logic operations.
}
\label{fig:Operation}
\end{figure}
A remarkable property of the $[[7,1,3]]$ code is that the fundamental logic gates operate on logic qubits in complete analogy to logic gates on physical qubits.
More in detail, the operation principle of a logic gate between two encoded qubits E and F is sketched in Figure \ref{fig:Operation}.
Logic qubit E is firstly transferred through a SWAP channel in proximity to qubit F, where the logic gate is carried out on a one-by-one basis between all the couples of physical qubits.
Finally qubit E is brought back to the starting point or directed to another qubit site for the next step in the algorithm.
Notably, all the logic operations are performed fault-tolerantly within this scheme, $i. e.$ interaction between physical qubits in the same logic qubit never takes place.
As a consequence, the propagation of errors inside a logic qubit is forbidden, preserving the possibility to perform quantum error correction over single faults.
\begin{table}[h!]
\tcaption{\label{Tab:Area}
Physical dimensions and composition of data and communication qubits with $d_\text{iD}$ = 40 nm. The last two rows report the dimensions of a logic qubit and of a register of 8 logic qubits respectively.
}
\centerline{\footnotesize\smalllineskip
\begin{tabular}{@{}cccc}
\toprule
Device & Dimensions [$\mu$m x $\mu$m] & Area [$\mu$m$^2$] & Data/Comm. qubits\\
\midrule
One-qubit & 0.3 x 0.5 & 0.15 & 1/0\\
Two-qubit & 0.38 x 0.5 & 0.19 & 2/0\\
Chain & 0.16 x 0.46 & 0.0736 & 0/2\\
T & 1.3 x 0.7 & 0.91 & 0/7\\
1 Log. qubit & 11.38 x 2.52 & 28.6776 & 20/70\\
8 Log. qubits & 25.54 x 12.04 & 307.502 & 1720/1400\\
\bottomrule
\end{tabular}}
\end{table}
In order to evaluate the physical performances of this architecture, we report in Tab. \ref{Tab:Times} the path length between two adjacent physical/logic qubits to estimate the corresponding time needed for quantum communication.
According to the analysis in Subsection \ref{subsec:comm_qubits}, $t_\text{SWAP}$ = 6.47 ns for $d_\text{iD}=40$ nm.
As a result, the time needed to transfer quantum information through a SWAP chain ranges from 71.2 ns for communication between adjacent physical qubits to approximately 2 $\mu$s for coherent transfer between logic qubits within an 8-qubit register.
\begin{table}[h!]
\tcaption{\label{Tab:Times}
Time load for quantum computation and communication between distant data qubits with a 40 nm inter-QD distance.
According to the analysis in Subsection \ref{subsec:comm_qubits} $t_{ \text{SWAP} }$ is 6.47 ns.
The minimum and maximum transfer times for quantum communication have been calculated considering the minimum and maximum distance between physical qubits in the same logic qubit (made of 20 physical qubits) and between different logic qubits in a byte (8 logic qubits).
}
\centerline{\footnotesize\smalllineskip
\begin{tabular}{@{}cccc}
\toprule
Operation & Number of Qubits & Distance [$\mu$m] & Time [ns] \\
\midrule
Comm. 2 phys. qubits (min) & 12 & 1 & 71.2 \\
Comm. 2 phys. qubits (max) & 138 & 11 & 886.4 \\
Comm. 2 log. qubits (min) & 192 & 15.4 & 1235.8 \\
Comm. 2 log. qubits (max) & 311 & 24.9 & 2005.7 \\
\bottomrule
\end{tabular}}
\end{table}
The estimated characteristic times for quantum information processing should be compared to the qubit coherence time ${T_2}^*$.
Although the first experimental works reported a short ${T_2}^* \sim$ 20 ns for a silicon hybrid qubit \cite{Shi_Nature_Hybrid,Hybrid_2014}, the expected value from theoretical calculations is of the order of $\mu$s \cite{Shi_Hybrid}.
A fidelity of 99.999\% appears to be within reach for 1-qubit operations and could be further improved by tuning the singlet-triplet splitting to a good balance between operational speed and gate fidelity \cite{Shi_Hybrid,Koh_PNAS}.
Besides this, the effects of the principal sources of decoherence could be drastically reduced with several techniques, such as dynamical decoupling and complex pulse sequences \cite{Bluhm_Dephasing}.
Finally, promising alternatives could be considered to replace the SWAP channels, especially for quantum communication over long distances, such as teleportation gates and coherent transfer by adiabatic passage \cite{IEEE_SiP_Arch,CTAP_Greentree,CTAP_Platero,Bennett_Teleport}.
We also note that the bus-structure reported in Figure \ref{fig:Log_qubyte} can be easily extended to higher order ramifications in order to introduce recursive coding techniques \cite{QEC_Beginners}.
A recursive code of order $k-1$ gives an effective error rate of $(cp)^{2^k} /c$, where $p$ is the error probability of a logic operation and 1/$c$ is the error threshold, $i. e.$ the maximum error rate tolerated by a specific quantum code \cite{QEC_Beginners,IEEE_SiP_Arch}.
As a result, recursive coding suppresses the effective error rate by re-encoding logical qubits in a higher level logic qubit, provided that $p < 1/c$.
If this condition is satisfied, the error threshold is enhanced by an exponential law, whereas the circuit area and the computational times increase by a power law \cite{IEEE_SiP_Arch}.
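To make the exponential suppression concrete, the following fragment evaluates $(cp)^{2^k}/c$ for the gate error $p = 10^{-3}$ quoted above and an assumed threshold $1/c = 10^{-2}$; the value of $c$ is illustrative only.
\begin{verbatim}
(* Effective error rate after k levels of recursion: (c p)^(2^k) / c.
   c = 100. (a 1e-2 threshold) and p = 1e-3 are illustrative values. *)
let suppressed c p k = (c *. p) ** (2. ** float_of_int k) /. c

let rates = List.map (suppressed 100. 1e-3) [1; 2; 3]
(* [1e-04; 1e-06; 1e-10] *)
\end{verbatim}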
\section{Conclusions}
\noindent
A CMOS-compatible design of the semiconductor hybrid qubit architecture has been proposed.
Such architecture is suitable for large scale quantum computing, since it allows all-electrical manipulation of qubits on a nanosecond timescale.
One- and two-qubit gates have been designed for a Si-CMOS platform, complying with the technologic standards of semiconductor industry.
The fundamental building blocks for quantum computation and communication have been proposed, and the feasibility of multi-qubit networks has been discussed.
The requirements of fault-tolerant computation and the introduction of a quantum error correction scheme based on the $[[7,1,3]]$ Steane code have been taken into account.
The time and space resources for universal quantum computation are estimated accordingly in a register of 8 logical qubits.
The calculated maximum surface density of logical qubits is 2.6 Mqubit/cm$^2$.
\nonumsection{References}
\noindent
\bibliographystyle{spmpsci}
\section{Introduction}\label{sec:intro}
Sundials~\cite{HindmarshEtAl:Sundials:2005} is a suite of six numeric
solvers:
\textsc{\it CVODE}{}, for \aclp{ODE},
\textsc{\it CVODES}{}, adding support for quadrature integration and sensitivity
analysis,
\textsc{\it IDA}{}, for \aclp{DAE},
\textsc{\it IDAS}{}, adding support for quadrature integration and sensitivity analysis,
\textsc{\it ARKODE}{}, for \aclp{ODE} using adaptive-step additive Runge-Kutta methods,
and \textsc{\it KINSOL}{}, for non-linear equations.
The six solvers share data structures and operations for vectors, matrices,
and linear solver routines.
They are implemented in the C language.
In this article we describe the design and implementation of a comprehensive
OCaml~\cite{LeroyEtAl:OCamlMan:2018} interface to the Sundials library,
which we call \emph{Sundials/ML}.
The authors of Sundials, Hindmarsh \emph{et
al.}~\cite{Sundials:Cvode:3.1.0}, describe ``a general movement away from
Fortran and toward C in scientific computing'' and note both the utility of
C's pointers, structures, and dynamic memory management for writing such
software, and also the availability, efficiency, and relative ease of
interfacing to it from Fortran.
So, why bother writing an interface from OCaml?
We think that OCaml interfaces to libraries like Sundials are ideal for
\begin{inparaenum}
\item
programs that mix numeric computation with symbolic manipulation, like
interpreters and simulators for hybrid modelling languages;
\item
rapidly developing complex numerical models; and
\item
incorporating numerical approximation into general-purpose applications.
\end{inparaenum}
Compared to C, OCaml detects many mistakes through a combination of static
analyses (strong typing) and dynamic checks (for instance, array bounds
checking), manages memory automatically using garbage collection, and
propagates errors as exceptions rather than return codes.
Not only does the OCaml type and module system detect a large class of
programming errors, it also enables rich library interfaces that clarify and
enforce correct use.
We exploit this possibility in our library; for example, algebraic data
types are used to structure solver configuration as opposed to multiple
interdependent function calls, polymorphic typing ensures consistent
creation and use of sessions and linear solvers, and phantom types are
applied to enforce restrictions on vector and matrix use.
On the other hand, all such interfaces add additional code and thus runtime
overhead and the possibility of bugs---we discuss these issues in
\cref{sec:eval}.
The basic techniques for interfacing OCaml and C are well
understood~\cite[Chapter~20]{LeroyEtAl:OCamlMan:2018}%
\cite{Monnier:OcamlandC:2013}%
\cite[Chapter~12]{ChaillouxManPag:ObjCaml:2000b}%
\cite[Chapters~19--21]{MinskyMadHic:RWOCaml:2013}, and it only takes one or
two weeks to make a working interface to one of the solvers using the basic
array-based vector data structure.
But not all interfaces are created equal!
It takes much longer to treat the full range of features available in the
Sundials suite, like the two solvers with quadrature integration and
sensitivity analysis, the different linear solver modules and their
associated matrix representations, and the diverse vector data structures.
In particular, it was not initially clear how to provide all of these
features in an integrated way with minimal code duplication and good support
from the OCaml type system.
\Cref{sec:tech} presents our solution to this problem.
The intricacies of the Sundials library called for an especially careful
design, particularly in the memory layout of a central vector data structure
which took several iterations to perfect.
The key challenges are to limit copying by modifying data in place, to
interact correctly with the garbage collector to avoid memory leaks and data
corruption, and to exploit the type and module systems to express documented
constraints and provide a convenient interface.
Our interface employs higher-order functions and currying, but such
functional programming techniques are incidental; ultimately, we cannot
escape the imperative nature of the underlying library.
Similarly, in several instances static types are unable to encode
constraints that change with the solver's state, and we are obliged to add
dynamic checks to provide a safe and truly high-level interface.
Beyond this, a significant engineering effort is required to create a robust
build system able to support the various configurations and versions of
Sundials on different platforms, and to translate and debug the 100-odd
standard examples that were indispensable for designing, evaluating, and
testing our interface.
\Cref{sec:eval} summarizes the performance results produced through this
effort.
The Sundials/ML interface is used in the runtime of the Zélus programming
language~\cite{BourkePou:HSCC:2013}.
Zélus is a synchronous language~\cite{BenvenisteEtAl:12Years:2003} extended
with \acp{ODE} for programming embedded systems and modelling their
environments.
Its compiler generates OCaml code to be executed together with a simulation
runtime that orchestrates discrete computations and the numerical
approximation of continuous trajectories.
The remainder of the paper is structured as follows. First, we describe the
overall design of Sundials and Sundials/ML from a library user's
point-of-view with example programs (\cref{sec:overview}).
Then we explain the main technical challenges that we overcame and the
central design choices (\cref{sec:tech}).
Finally, we present an evaluation of the performance of our binding
(\cref{sec:eval}) before concluding (\cref{sec:concl}).
The documentation and source code---approximately \num{15000} lines of
OCaml, and \num{17000} lines of C, not counting a significant body of
examples and tests---of the interface described in this paper is available
under a 3-clause BSD licence at
\url{http://inria-parkas.github.io/sundialsml/}.
The code has been developed intermittently over eight years and adapted for
successive Sundials releases through to the current 3.1.0 version.
\section{Overview}\label{sec:overview}
In this section, we outline the \acp{API} of Sundials and Sundials/ML and
describe their key data structures.
We limit ourselves to the elements necessary to explain and justify
subsequent technical points.
A complete example is presented at the end of the section.
\subsection{Overview of Sundials}\label{sec:coverview}
A mathematical description of Sundials can be found in Hindmarsh \emph{et
al.}~\cite{HindmarshEtAl:Sundials:2005}.
The user manuals\footnote{https://computation.llnl.gov/casc/sundials/}
give a thorough overview of the library and the details of every function.
The purposes of the four basic solvers are readily summarized:
\begin{itemize}
\item
\textsc{\it CVODE}{} approximates $Y(t)$ from $\dot{Y} = f(Y)$ and $Y(t_0) = Y_0$.
\item
\textsc{\it IDA}{} approximates $Y(t)$ from $F(Y, \dot{Y}) = 0$, $Y(t_0) = Y_0$,
and $\dot{Y}(t_0) = \dot{Y}_0$.
\item
\textsc{\it ARKODE}{} approximates $Y(t)$
from $M\dot{Y} = f_E(Y) + f_I(Y)$ and $Y(t_0) = Y_0$.
\item
\textsc{\it KINSOL}{} calculates $U$ from $F(U) = 0$ and initial guess $U_0$.
\end{itemize}
The problems are stated over vectors $Y\!$, $\dot{Y}\!$, and $U\!$, and
expressed in terms of a function~$f$ from states to derivatives, a
function~$F$ from states and derivatives to a residual, or the combination
of an explicit function~$f_E$ and an implicit function~$f_I$, both from
states to derivatives, together with a mapping matrix~$M$.
The first three solvers find solutions that depend on an independent
variable~$t$, usually considered to be the simulation time.
The two solvers that are not mentioned above, \textsc{\it CVODES}{} and \textsc{\it IDAS}{},
essentially introduce a set of parameters $P$---giving models $f(Y, P)$ and
$F(Y, \dot{Y}, P)$---and permit the calculation of parameter sensitivities,
$S(t) = \frac{\partial Y(t)}{\partial P}$ using a variety of different
techniques.
Four features are most relevant to the OCaml interface: solver sessions,
vectors, matrices, and linear solvers.
\subsubsection{Solver sessions}\label{sec:coverview:sessions}
\begin{figure}
\begin{center}
\begin{NumberedVerbatim}
cvode_mem = CVodeCreate(CV_BDF, CV_NEWTON);\label{cvodeinit:create}
if(check_flag((void *)cvode_mem, "CVodeCreate", 0)) return(1);
flag = CVodeSetUserData(cvode_mem, data);\label{cvodeinit:userdata}
if(check_flag(&flag, "CVodeSetUserData", 1)) return(1);
flag = CVodeInit(cvode_mem, f, T0, u);\label{cvodeinit:init}
if(check_flag(&flag, "CVodeInit", 1)) return(1);
flag = CVodeSStolerances(cvode_mem, reltol, abstol);\label{cvodeinit:tol}
if (check_flag(&flag, "CVodeSStolerances", 1)) return(1);
LS = SUNSPGMR(u, PREC_LEFT, 0);\label{cvodeinit:spgmr}
if(check_flag((void *)LS, "SUNSPGMR", 0)) return(1);
flag = CVSpilsSetLinearSolver(cvode_mem, LS);\label{cvodeinit:setlinear}
if (check_flag(&flag, "CVSpilsSetLinearSolver", 1)) return 1;
flag = CVSpilsSetJacTimes(cvode_mem, jtv);\label{cvodeinit:jactimes}
if(check_flag(&flag, "CVSpilsSetJacTimes", 1)) return(1);
flag = CVSpilsSetPreconditioner(cvode_mem, Precond, PSolve);\label{cvodeinit:precond}
if(check_flag(&flag, "CVSpilsSetPreconditioner", 1)) return(1);
\end{NumberedVerbatim}
\end{center}
\caption{Extract from the cvDiurnal\_kry example in C, distributed with
Sundials~\cite{HindmarshEtAl:Sundials:2005}.}\label{fig:cvodeinit:c}
\end{figure}
Using any of the Sundials solvers involves the same basic pattern:
\begin{stepenum}
\item\label{step:create}
a session object is created;
\item\label{step:init}
several inter-dependent functions are called to initialize the session;
\item\label{step:set}
``set\,$\ast$'' functions are called to give parameter values;
\item\label{step:solve}
a ``solve'' or ``step'' function is called repeatedly to approximate a
solution;
\item\label{step:get}
``get$\,\ast$'' functions are called to retrieve results;
\item
the session and related data structures are freed.
\end{stepenum}
The sequence in~\cref{fig:cvodeinit:c}, extracted from an example
distributed with Sundials, is typical of
\cref*{step:create,step:init,step:set}.
Line~\ref{cvodeinit:create}
creates a session with the (\textsc{\it CVODE}{}) solver, that is, an abstract type
implemented as a pointer to a structure containing solver parameters and
state that is passed to and manipulated by all subsequent calls.
This function call, and all the others, are followed by statements that
check return codes.
Line~\ref{cvodeinit:userdata} specifies a ``user data'' pointer that is
passed by the solver to all callback functions to provide session-local
storage.
Line~\ref{cvodeinit:init} specifies a callback function~\verb"f" that
defines the problem to solve, here a function from a vector of variable
values to their derivatives ($\dot{X} = f(X)$), an initial value~\verb"T0"
for the independent variable, and a vector of initial variable
values~\verb"u" that implicitly defines the problem size (\mbox{$X_0 = u$}).
The other calls specify tolerance values (line~\ref{cvodeinit:tol}),
instantiate an iterative linear solver (line~\ref{cvodeinit:spgmr}), attach
it to the solver session (line~\ref{cvodeinit:setlinear}), and set callback
functions \verb"jtv", defining the problem Jacobian
(line~\ref{cvodeinit:jactimes}), and \verb"Precond" and \verb"PSolve",
defining preconditioning
(line~\ref{cvodeinit:precond}).
The loop that calculates values of $X$ over time, and the functions that
retrieve solver results and statistics, and free memory are not shown.
While the \textsc{\it IDA}{} and \textsc{\it KINSOL}{} solvers follow the same pattern, using the
\textsc{\it CVODES}{} and \textsc{\it IDAS}{} solvers is a bit more involved.
These solvers provide additional calls that augment a basic solver session
with features for more efficiently calculating certain integrals and for
analyzing the sensitivity of a solution to changes in model parameters using
either so called forward methods or adjoint methods.
The adjoint methods involve solving an \ac{ODE} or \ac{DAE} problem by first
integrating normally, and then initializing new ``backward'' sessions that
are integrated in the reverse direction.
The routines that initialize and configure solver sessions are subject to
rules constraining their presence, order, and parameters.
For instance, in the example, the call to \verb"CVodeSStolerances" must
follow the call to \verb"CVodeInit" and precede calls to the step function;
calling \verb"CVodeCreate" with the parameter \verb"CV_NEWTON" necessitates
calls to configure a linear solver; and the \verb"CVSpilsSetLinearSolver"
call requires an iterative linear solver such as \verb"SUNSPGMR" which, in
turn, requires a call to \verb"CVSpilsSetJacTimes" and, since the
\verb"PREC\_LEFT" argument is given, a call to
\verb"CVSpilsSetPreconditioner" with at least a \verb"PSolve" value.
\subsubsection{Vectors}\label{sec:coverview:vectors}
The manipulation of vectors is fundamental in Sundials.
Vectors of floating-point values are represented as an abstract data type
termed an \emph{nvector} which combines a data pointer and 26 function
pointers to operations that manipulate the data.
Typical of these operations are \emph{nvlinearsum}, which calculates the
scaled sum of nvectors~$X$ and~$Y$ into a third vector~$Z$, that is, $Z = aX
+ bY\!$, and \emph{nvmaxnorm}, which returns the maximum absolute value of
an nvector~$X$.
Nvectors must also provide an \emph{nvclone} operation that produces a new
nvector of the same size and with the same operations as an existing one.
Solver sessions are seeded with an initial nvector that they clone
internally and manipulate solely through the abstract operations---they are
thus defined in a data-independent manner.
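To illustrate the semantics of these operations over the serial representation described next, the following sketch re-implements \emph{nvlinearsum} and \emph{nvmaxnorm} over a big array of floats; it is purely illustrative and is not the API exposed by our interface.
\begin{Verbatim}
open Bigarray
type data = (float, float64_elt, c_layout) Array1.t

(* Z = aX + bY, updating z in place *)
let nvlinearsum a (x : data) b (y : data) (z : data) =
  for i = 0 to Array1.dim z - 1 do
    z.\{i\} <- a *. x.\{i\} +. b *. y.\{i\}
  done

(* max_i |x_i| *)
let nvmaxnorm (x : data) =
  let m = ref 0.0 in
  for i = 0 to Array1.dim x - 1 do
    m := max !m (abs_float x.\{i\})
  done;
  !m
\end{Verbatim}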
Sundials provides eight instantiations of the nvector type: serial
nvectors, parallel nvectors, OpenMP nvectors, Pthreads nvectors, Hypre
ParVector nvectors, PETSc nvectors, RAJA nvectors, and CUDA nvectors.
Serial nvectors store and manipulate arrays of floating-point values.
Parallel nvectors store a local array of floating-point values and a
\ac{MPI} communicator; some operations, like \emph{nvlinearsum},
simply loop over the local array, while others, like \emph{nvmaxnorm},
calculate locally and then synchronize via a global reduce operation.
OpenMP nvectors and Pthreads nvectors operate concurrently on arrays
of floating-point values.
Despite the lack of real multi-threading support in the OCaml runtime,
binding to these nvector routines is unproblematic since they do not call
back into user code.
The serial, OpenMP, and Pthreads nvectors provide operations for accessing
the underlying data array directly; this feature is notably exploited in the
implementation of certain linear solvers.
The Hypre ParVector, PETSc, and RAJA nvectors interface with scientific
computing libraries that have not been ported to OCaml; they are not
supported by Sundials/ML.
We have not yet attempted to support CUDA nvectors.
Finally, library users may also provide their own \emph{custom nvectors} by
implementing the set of basic operations over a custom representation.
\subsubsection{Matrices}\label{sec:coverview:matrices}
In recent versions of Sundials, matrices are treated like nvectors: they are
implemented as an abstract data type combining a data pointer with pointers
to 9 abstract operations~\cite[\textsection{}7]{Sundials:Cvode:3.1.0}.
There are, for instance, operations to clone, to destroy, to scale and add
to another matrix, and to multiply by an nvector.
Implementations are provided for two-dimensional dense, banded, and sparse
matrices, and it is possible for users to define custom matrices by
implementing the abstract operations.
The \emph{matrix content} of dense matrices consists of fields for the
matrix dimensions and data length, a data array of floating-point values,
and an array of pre-calculated column pointers into the data array.
The content of banded matrices, which only contain the main diagonal and a
certain number of diagonals above and below it, are represented similarly
but with extra fields to record the numbers of diagonals.
The content of sparse matrices includes the number of non-zero elements that
can potentially be stored, a data array of non-zero elements, and two
additional integer arrays.
The interpretation of the integer arrays depends on a storage format field.
For \ac{CSC} matrices, an \emph{indexptrs} array maps column numbers (from
zero) to indices of the data array and an \emph{indexvals} array maps
indices of the data array to row numbers (from zero).
So, for instance, the non-zero elements of the $(j-1)$th~column are stored
consecutively in the data array at indices~$\mathit{indexptrs}[j] \le k <
\mathit{indexptrs}[j+1]$, and the row of each element
is~$\mathit{indexvals}[k] + 1$.
For \ac{CSR} matrices, \emph{indexptrs} maps row numbers to data indices and
\emph{indexvals} maps data indices to column numbers.
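The \ac{CSC} indexing scheme can be made concrete with a small sketch, using 0-based rows and columns throughout; the type and function names are illustrative and do not correspond to the interface.
\begin{Verbatim}
type csc = \{
  data      : float array; (* non-zero elements *)
  indexptrs : int array;   (* column j occupies data indices
                              indexptrs.(j) to indexptrs.(j+1) - 1 *)
  indexvals : int array;   (* row of each stored element *)
\}

(* apply f to every stored element of column j *)
let iter_col m j f =
  for k = m.indexptrs.(j) to m.indexptrs.(j + 1) - 1 do
    f m.indexvals.(k) m.data.(k)   (* element at row indexvals.(k) *)
  done
\end{Verbatim}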
In the interests of calculation speed, the different Sundials routines often
violate the abstract interface and access the underlying representations
directly.
This induces compatibility constraints: for instance, dense matrices may
only be multiplied with serial, OpenMP, and Pthreads nvectors, and the KLU
linear solver only manipulates sparse matrices.
In \cref{sec:matrices}, we explain how these rules are expressed as typing
constraints in the OCaml interface.
\subsubsection{Linear solvers}\label{sec:coverview:lsolvers}
Each of the solvers must resolve non-linear algebraic systems.
For this, they use either ``functional iteration'' (\textsc{\it CVODE}{} and \textsc{\it CVODES}{}
only), or, usually, ``Newton iteration''.
Newton iteration, in turn, requires the solution of linear systems.
Several linear solvers are provided with Sundials.
They are instantiated by generic routines, like \verb"SUNSPGMR" in the
example, and attached to solver sessions.
Generic linear solvers factor out common algorithms that solver-specific
linear solvers specialize and combine with callback routines, like
\verb"jtv", \verb"Precond", and \verb"Psolve" in the example.
Internally, generic linear solvers are implemented, similarly to nvectors
and matrices, as an abstract data type combining a data pointer with 13
function pointers.
Solver-specific linear solvers are invoked through a generic interface
comprising four function pointers in a session object: \emph{linit},
\emph{lsetup}, \emph{lsolve}, and \emph{lfree}.
Users may provide their own \emph{custom} linear solvers by specifying
subsets of the 13 operations, or even their own \emph{alternate} linear
solvers by directly setting the four pointers.
Sundials includes three main linear solver families: a diagonal
approximation of system Jacobians using difference quotients, \acp{DLS} that
perform LU factorization on Jacobian matrices, and
\acp{SPILS} based on Krylov methods.
The \ac{DLS} modules require callback routines to calculate an explicit
representation of the Jacobian matrix of a system.
The \ac{SPILS} modules require callback routines to multiply vectors by an
(implicit) Jacobian matrix and to precondition.
As was the case for solver sessions, the initialization and use of linear
solvers is subject to various rules.
For instance, the \ac{DLS} modules exploit the underlying representation of
serial, OpenMP, and Pthreads nvectors, and they cannot be used with other
nvectors.
The \ac{SPILS} modules combine a method (\acs{SPGMR}, \acs{SPFGMR},
\acs{SPBCGS}, \acs{SPTFQMR}, or \acs{PCG}) with an optional preconditioner.
There are standard preconditioners (left, right, or both) for which users
supply a solve function and, optionally, a setup function, and which work
for any nvector, and also a banded matrix preconditioner that is only
compatible with serial nvectors, and a \ac{BBD} preconditioner that is only
compatible with parallel nvectors.
The encoding of these restrictions into the OCaml interface is described in
\cref{sec:linsolv}.
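As a taste of how such compatibility rules can be expressed statically, the fragment below sketches the phantom-type technique with illustrative names; the types actually used in our interface are described in \cref{sec:linsolv}.
\begin{Verbatim}
(* A phantom 'kind parameter distinguishes nvector kinds. *)
type serial_kind
type parallel_kind
type ('data, 'kind) nvector = \{ payload : 'data \}

(* Statically requires a serial nvector: applying it to a value of
   type ('d, parallel_kind) nvector is a compile-time error. *)
let requires_serial (nv : ('d, serial_kind) nvector) = nv.payload
\end{Verbatim}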
\subsection{Overview of Sundials/ML}\label{sec:mloverview}
The structure of the OCaml interface mostly follows that of the underlying
library and values are consistently and systematically renamed:
module-identifying prefixes are replaced by module paths and words beginning
with upper-case letters are separated by underscores and put into
lower-case.
For instance, the function name \verb"CVSpilsSetGSType" becomes
\verb"Cvode.Spils.set_gs_type".
This makes it easy to read the original documentation and to adapt existing
source code, like, for instance, the examples provided with Sundials.
We did, however, make several changes both for programming convenience and
to increase safety, namely:
\begin{inparaenum}
\item
solver sessions are mostly configured via algebraic data types rather than
multiple function calls;
\item
errors are signalled by exceptions rather than return codes;
\item
user data is shared between callback routines via partial function
applications (closures);
\item
vectors are checked for compatibility using a combination of static and
dynamic checks; and
\item
explicit free commands are not necessary since OCaml is a garbage-collected
language.
\end{inparaenum}
\SaveVerb{precond}"precond"
\SaveVerb{jtv}"jtv"
\SaveVerb{psolve}"psolve"
\SaveVerb{f}"f"
\SaveVerb{data}"data"
\SaveVerb{tzero}"t0"
\SaveVerb{u}"u"
\SaveVerb{reltol}"reltol"
\SaveVerb{abstol}"abstol"
\begin{figure}
\begin{center}
\begin{NumberedVerbatim}
let cvode_mem =
Cvode.(init BDF
(Newton Spils.(solver (spgmr u)
\mytilde{}jac_times_vec:(None, jtv data)
(prec_left \mytilde{}setup:(precond data) (psolve data))))
(SStolerances (reltol, abstol))
(f data) t0 u)
in
\end{NumberedVerbatim}
\end{center}
\caption{Extract from our OCaml adaptation of cvDiurnal\_kry.
The definitions of the \protect\UseVerb{precond}, \protect\UseVerb{jtv},
\protect\UseVerb{psolve}, and \protect\UseVerb{f} functions and those of the
\protect\UseVerb{data}, \protect\UseVerb{reltol}, \protect\UseVerb{abstol},
\protect\UseVerb{tzero}, and \protect\UseVerb{u} variables are not
shown.}\label{fig:cvodeinit:ocaml}
\end{figure}
The OCaml program extract in \cref{fig:cvodeinit:ocaml} is functionally
equivalent to the C code of \cref{fig:cvodeinit:c}.
But rather than specifying Newton iteration by passing a constant
(\verb"CV_NEWTON") and later calling linear solver routines (like
\verb"CVSpilsSetLinearSolver"), a solver session is configured by passing a
value that contains all the necessary parameters.
This makes it impossible to specify Newton iteration without also properly
configuring a linear solver.
The given value is translated by the interface into the correct sequence of
calls to Sundials.
The interface checks the return code of each function call and raises an
exception if necessary.
In the extract, the \verb"\mytilde{}setup" and
\verb"\mytilde{}jac_times_vec" markers denote labelled
arguments~\cite[\textsection
4.1]{LeroyEtAl:OCamlMan:2018}; we use them for optional arguments, as in
this example, and also to clarify the meaning of multiple arguments of the
same type.
The callback functions---\UseVerb{precond}, \UseVerb{jtv}, \UseVerb{psolve},
and \UseVerb{f}---are all applied to \UseVerb{data}.
This use of partial application over a shared value replaces the ``user
data'' mechanism of Sundials (\verb"CVodeSetUserData") that provides
session-local storage; it is more natural in OCaml and it frees the
underlying user data mechanism for use by the interface code.
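The pattern is ordinary OCaml: session-local state is captured in a closure by partial application. The sketch below uses hypothetical names purely for illustration.
\begin{Verbatim}
type data = \{ mutable calls : int; coeff : float \}

let make_rhs data _t y yd =
  data.calls <- data.calls + 1;      (* session-local, mutable state *)
  yd.\{0\} <- -. data.coeff *. y.\{0\}

(* make_rhs data : the closure passed as the right-hand-side callback *)
\end{Verbatim}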
As in the C~version, \UseVerb{tzero} is the initial value of the independent
variable, and \UseVerb{u} is the vector of initial values.
\Cref{fig:cvodeinit:ocaml} uses the OCaml local open
syntax~\cite[\textsection 7.7.7]{LeroyEtAl:OCamlMan:2018},
\verb"Cvode.($\cdots$)" and \verb"Spils.($\cdots$)", to access the
functions \verb"Cvode.init", \verb"Cvode.Spils.solver",
\verb"Cvode.Spils.spgmr", and
\verb"Cvode.Spils.prec_left", and also the constructors \verb"Cvode.BDF",
\verb"Cvode.Newton", and \verb"Cvode.SStolerances".
Not all options are configured at session creation, that is, by the call to
\verb"init".
Others are set via later calls; for instance, the residual tolerance value
could be fine tuned by calling:
\begin{Verbatim}
Cvode.Spils.set_eps_lin cvode_mem e
\end{Verbatim}
In choosing between the two possibilities, we strove for a balance between
enforcing correct library use and providing a simple and natural interface.
For instance, bundling the choice of Newton iteration with that of a linear
solver and its preconditioner exploits the type system both to clarify how
the library works and to avoid runtime errors.
Calls like that to \verb"set_eps_lin", on the other hand, are better made
separately since the default values usually suffice and since it may make
sense to change such settings between calls to the solver.
The example code treats a session with \textsc{\it CVODE}{}, but the \textsc{\it IDA}{} and
\textsc{\it KINSOL}{} interfaces are similar.
The \textsc{\it CVODES}{} and \textsc{\it IDAS}{} solvers function differently.
One of the guiding principles behind the C versions of these solvers is
that their extra features
be accessible simply by adding extra calls to existing programs and linking
with a different library.
We respect this choice in the design of the OCaml interface.
For example, additional ``quadrature'' equations are added to the session
created in \cref{fig:cvodeinit:ocaml} by calling:
\begin{Verbatim}
Cvodes.Quadrature.init cvode_mem fq yq
\end{Verbatim}
which specifies a function \verb"fq" to calculate the derivatives of the
additional equations, and also their initial values in \verb"yq".
While it would have been possible to signal such enhancements in the session
type, we decided that this would complicate rather than clarify library use,
especially since several enhancements---namely quadratures, forward
sensitivity, forward sensitivity with quadratures, and adjoint
sensitivity---and their combinations are possible.
We must thus sometimes revert to runtime checks to detect whether features
are used without having been initialized.
These checks are not always performed by the underlying library and misuse
of certain features can give rise to segmentation faults.
The choice of whether to link Sundials/ML with the basic solver
implementations or the enhanced ones is made during installation.
To calculate sensitivities using the adjoint method, a library user must
first ``enhance'' a solver session using \verb"Cvodes.Adjoint.init", then
calculate a solution by taking steps in the forward direction, before
attaching ``backward'' sessions, and then taking steps in the opposite
direction.
Such backward sessions are identified in Sundials by integers and the
functions dedicated to manipulating them take as arguments a forward session
and an integer.
But other more generic functions are shared between normal and backward
sessions and it is necessary to first acquire a backward session pointer
(using \verb"CVodeGetAdjCVodeBmem") before calling them.
Our interface hides these details by introducing a
\verb"Cvodes.Adjoint.bsession" type which is realized by wrapping a standard
session in a constructor (to avoid errors in the interface code) and storing
the parent session and integer code along with other backward-specific
fields as explained in \cref{sec:sessions}.
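The following sketch suggests one possible shape for this representation, with illustrative constructor and field names; the actual record contains many more fields.
\begin{Verbatim}
type cvode_mem   (* abstract: pointer into the C session structure *)

type session = \{
  mem      : cvode_mem;
  backward : (session * int) option;
    (* for backward sessions: the parent (forward) session and the
       integer identifier used to retrieve the backward memory *)
\}

(* the constructor prevents mixing forward and backward sessions *)
type bsession = Bsession of session
\end{Verbatim}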
The details of the interfaces to nvectors and linear solvers are deferred
to~\cref{sec:vectors,sec:linsolv} since the choices made and the types used
are closely tied to the technical details of their representations in
memory.
We mention only that, unlike Sundials, the interface enforces compatibility
between nvectors, sessions, and linear solvers.
Such controls become more important when sensitivity enhancements are used,
since then the vectors used are likely of differing lengths.
Sundials is a large and sophisticated library.
It is thus important to provide high-quality documentation.
For this, ocamldoc~\cite{LeroyEtAl:OCamlMan:2018} and its ability to define
new markup tags are invaluable.
For instance, we combine the existing tags for including \LaTeX{} with the
MathJax library\footnote{\url{http://www.mathjax.org/}} to render
mathematical descriptions inline, and we introduce custom tags to link to
the extensive Sundials documentation.
\subsection{Complete examples}\label{sec:example}
\begin{figure}
\hfil
\begin{minipage}{.4\textwidth}
\begin{tikzpicture}[
force/.style={gray,-latex},
detail/.style={gray},
]
\coordinate (attach) at (0,0);
\path (attach) +(-120:-1) coordinate (wall top);
\path (attach) +(-120:4.5) coordinate (wall bottom left);
\path (attach) +(-120:3.5) coordinate (contact);
\path (contact) node[detail,rotate=-4] {\EightStar};
\fill[draw,gray!30]
(wall top)
-- (wall top-|wall bottom left)
-- (wall bottom left)
-- cycle;
\draw[->]
(wall top-|wall bottom left)
-- +(6.2,0)
node[above] {$x$};
\draw[->]
(wall top-|wall bottom left)
-- ([yshift=-5]wall bottom left)
node[left] {$y$};
\draw (wall top-|attach)
node[above] {0}
++(0,.07) -- +(0,-.14);
\draw (wall top-|attach)
++(3.5,0) node[above] {1}
++(0,.07) -- +(0,-.14);
\draw (wall bottom left|-attach)
node[left] {0}
++(.07,0) -- +(-.14,0);
\draw (wall bottom left|-attach)
++(0,-3.5) node[left] {-1}
++(.07,0) -- +(-.14,0);
\draw[dotted] (0, 0) coordinate (attach) -- (attach|-wall bottom left);
\fill[draw] (attach) circle (1pt);
\draw[very thick,fill]
(attach)
-- +(-50:3.5)
coordinate (mass)
(mass) circle (3pt);
\draw[force]
([xshift=-6]mass)
-- node[below left] {$p$}
([xshift=-6]$(mass)!.3!(attach)$);
\draw[force] (mass) -- node[right] {$g=9.8$} +(0,-1.0);
\draw[detail,<-] (attach) +(-51:1.5cm) arc (-50:-90:1.5cm)
node[pos=0.4,below] {$\theta$};
\draw[detail,<-] (attach) +(-120:.9cm) arc (-120:-90:.9cm)
node[pos=0.35,below] {$\frac{\pi}{6}$};
\draw[detail,decorate,decoration={brace,raise=7pt}]
(attach) -- (mass) node[midway,anchor=south west,shift={(7pt,3pt)}]
{$r=1$};
\draw[detail,<->]
(contact) arc [start angle=-120, end angle=-110, radius=3.5cm]
node[below] {$k = -0.5$};
\end{tikzpicture}
\end{minipage}
\hfil
\begin{minipage}{.55\textwidth}
\begin{minipage}[t]{.4\textwidth}
\centering
Polar \acp{ODE}
\begin{align*}
\theta(0) &= \frac{\pi}{2} \\
\dv{\theta}{t} (0) &= 0 \\
\dv[2]{\theta}{t} &= -g \sin \theta
\end{align*}
\end{minipage}
\hfill
\begin{minipage}[t]{.4\textwidth}
\centering
Cartesian \acp{DAE}
\begin{align*}
x(0) &= 1 & y(0) &= 0 \\
\dv{x}{t} (0) &= 0 & \dv{y}{t} (0) &= 0 \\
\dv[2]{x}{t} &= px & \dv[2]{y}{t} &= py - g \\[1ex]
x^2 + y^2 &= r^2
\end{align*}
\end{minipage}
\end{minipage}
\hfil
\caption{A simple pendulum model.}\label{fig:pendulum}
\end{figure}
We now present two complete programs that use the Sundials/ML interface.
Consider the simple pendulum model shown in \cref{fig:pendulum}: a unit mass
at the end of a steel rod is attached to an inclined wall by a friction-less
hinge.
The rod and mass are raised parallel to the ground and released at time
$t_0=0$.
The only forces acting on the mass are gravity and the tension from the rod.
Our aim is to plot the position of the mass as $t$ increases.
We consider two equivalent models for the dynamics: \acp{ODE} in terms of
the angle~$\theta$ and \acp{DAE} in terms of the Cartesian coordinates~$x$
and~$y$.
When the mass hits the wall, its velocity is multiplied by a (negative)
constant $k$.
\subsubsection{Polar coordinates in \textsc{\it CVODE}{}}\label{sec:example:polar}
The \ac{ODE} model in terms of polar coordinates can be simulated with
\textsc{\it CVODE}{}.
We start by declaring constants from the model and replacing the basic
arithmetic operators with floating-point ones.
\begin{Verbatim}
let r, g, k = 1.0, 9.8, -0.5
and pi = 4. *. atan (1.)
and ( + ), ( - ), ( * ), ( / ) = ( +. ), ( -. ), ( *. ), ( /. )
\end{Verbatim}
The solver manipulates arrays of values, whose 0th elements
track~$\theta$ and whose 1st elements track~$\dv{\theta}{t}$.
We declare constants for greater readability.
\begin{Verbatim}
let theta, theta' = 0, 1
\end{Verbatim}
The dynamics are specified by defining a right-hand-side function that takes
three arguments: \verb"t", the time, \verb"y", a big array of state values,
and \verb"yd", a big array to fill with instantaneous derivative values.
\begin{Verbatim}
let rhs t y yd =
yd.\{theta\} <- y.\{theta'\};
yd.\{theta'\} <- -. g * sin y.\{theta\}
\end{Verbatim}
Apart from the imperative assignments to \verb"yd", side-effects are not
allowed in this function, since it will be called multiple times with
different estimates for the state values.
Interesting events are communicated to the solver through zero-crossing
expressions.
The solver tracks the value of these expressions and signals when they
change sign.
We define a function that takes the same inputs as the last one and that
fills a big array \verb"r" with the values of the zero-crossing expressions.
\begin{Verbatim}
let roots t y r =
r.\{0\} <- -. pi / 6 - y.\{theta\}
\end{Verbatim}
The single zero-crossing expression is negative until the mass collides with
the wall.
We create a big array,\footnote{\verb"RealArray" is a helper module for
\verb"Bigarray.Array1"s of \verb"float"s.} initialized with the initial
state values, and ``wrap'' it as an nvector.
\begin{Verbatim}
let y = RealArray.of_list [ pi/2. ; 0. ]
let nv_y = Nvector_serial.wrap y
\end{Verbatim}
The big array, \verb"y", and the nvector, \verb"nv_y", share the same
underlying storage.
We will access the values through the big array, but the Sundials functions
require an nvector.
We can now instantiate the solver, using the Adams-Moulton formulas,
functional iteration, and the default tolerance values, starting at $t_0 = 0$.
We pass the \verb"rhs" function defining the dynamics, the \verb"roots"
function defining state events, and the nvector of initial states (which the
solver uses for storage).
\begin{Verbatim}
let s = Cvode.(init Adams Functional default_tolerances
rhs ~roots:(1, roots) 0.0 nv_y)
\end{Verbatim}
Some solver settings are not configured through the \verb"init" routine, but
rather by calling functions that act imperatively on the session.
Here, we set the maximum simulation time to \SI{10}{\second} and specify
that we only care about ``rising'' zero-crossings, where a negative value
becomes zero or positive.
\begin{Verbatim}
Cvode.set_stop_time s 10.0;
Cvode.set_all_root_directions s Sundials.RootDirs.Increasing
\end{Verbatim}
There are two main reasons for not adding these settings as optional
arguments to \verb"init".
First and foremost, these settings may be changed during a simulation, for
instance, to implement mode changes, so separate functions are required in
any case.
Second, calls to \verb"init" can already become quite involved, especially
when specifying a linear solver.
Providing optional settings in separate functions divides the interface into
smaller pieces.
Unlike the features handled by \verb"init", the only ordering constraint
imposed on these functions is that they be called after session
initialization.
The simulation is advanced by repeatedly calling \verb"Cvode.solve_normal".
We define a first function to advance the simulation time~\verb"t" to a
given value~\verb"tnext".
When a zero-crossing is signalled, it updates the array element
for~$\theta'$, reinitializes the solver, and re-executes.
\begin{Verbatim}
let rec stepto tnext t =
if t >= tnext then t else
match Cvode.solve_normal s tnext nv_y with
| (tret, Cvode.RootsFound) ->
y.\{theta'\} <- k * y.\{theta'\};
Cvode.reinit s tret nv_y;
stepto tnext tret
| (tret, _) -> tret
\end{Verbatim}
A second function calls a routine to display the current state of the system
(and pause slightly) before advancing the simulation time by \verb"dt".
\begin{Verbatim}
let rec showloop t = if t < t_end then begin
show (r * sin y.\{theta\}, -. r * cos y.\{theta\});
showloop (stepto (t + dt) t)
end
\end{Verbatim}
The simulation is started by calling this function.
\begin{Verbatim}
showloop 0.0
\end{Verbatim}
Despite tail-recursive calls in \verb"stepto" and \verb"showloop", this
program is undeniably imperative: callbacks update arrays in place, mutable
memory is shared between arrays and nvectors, and sessions are progressively
updated.
This is a consequence of the structure of the underlying library and works
well in an ML language like OCaml.
We nevertheless benefit from a sophisticated type system (which infers all
types in this example automatically), abstract data types and pattern
matching, and exceptions.
\subsubsection{Cartesian coordinates in \textsc{\it IDA}{}}\label{sec:example:cartesian}
\newcommand{v_x}{v_x}
\newcommand{v_y}{v_y}
The \ac{DAE} model in terms of Cartesian coordinates can be simulated with
\textsc{\it IDA}{} once it is rearranged into the form $F(t, X, \dv{X}{t}) = 0$.
We introduce auxiliary variables~$v_x$ and~$v_y$ to represent the velocities
and arrive at the following system with five equations and five unknowns.
\begin{align*}
v_x - \dv{x}{t} &= 0 & v_y - \dv{y}{t} &= 0 &
\dv{v_x}{t} - p x &= 0 & \dv{v_y}{t} - p y + g &= 0 &
x^2 + y^2 - 1 &= 0
\end{align*}
The variable $p$ accounts for the ``pull'' of the rod.
It is the only \emph{algebraic} variable (its derivative does not appear);
all the others are \emph{differential} variables.
The system above is of index 3, whereas an index~1 system is preferred for
calculating the initial conditions.
The index is lowered by differentiating the algebraic constraint twice:
differentiating once and substituting $v_x$ for $\dv{x}{t}$ and $v_y$ for
$\dv{y}{t}$ gives $x v_x + y v_y = 0$; differentiating again gives the
following equation.
\begin{align*}
x \dv{v_x}{t} + y \dv{v_y}{t} + v_x^2 + v_y^2 &= 0
\end{align*}
Different substitutions of~$v_x$ and~$v_y$ with, respectively, $\dv{x}{t}$
and $\dv{y}{t}$ make for fourteen possible reformulations of the constraint
that are equivalent in theory but that may influence the numeric solution.
Here, we choose to implement the following form.
\begin{align*}
x \dv{v_x}{t} + y \dv{v_y}{t} + v_x \dv{x}{t} + v_y \dv{y}{t} &= 0
\end{align*}
Our second OCaml program begins with the same constant declarations as the
first one.
This time the $X$ and $\dv{X}{t}$ arrays track five variables which we
index,
\begin{Verbatim}
let x, y, vx, vy, p = 0, 1, 2, 3, 4
\end{Verbatim}
and there are also five residual equations to index:
\begin{Verbatim}
let vx_x, vy_y, acc_x, acc_y, constr = 0, 1, 2, 3, 4
\end{Verbatim}
The residual function itself is similar in principle to the \verb"rhs"
function of the previous example.
It takes four arguments: \verb"t", the time, \verb"vars", a big array of
variable values, \verb"vars'", a big array of variable derivative values,
and \verb"res", a big array to fill with calculated residuals.
We encode fairly directly the system of equations described above.
\begin{Verbatim}
let residual t vars vars' res =
res.\{vx_x\} <- vars.\{vx\} - vars'.\{x\};
res.\{vy_y\} <- vars.\{vy\} - vars'.\{y\};
res.\{acc_x\} <- vars'.\{vx\} - vars.\{p\} * vars.\{x\};
res.\{acc_y\} <- vars'.\{vy\} - vars.\{p\} * vars.\{y\} + g;
res.\{constr\} <- vars.\{x\} * vars'.\{vx\} + vars.\{y\} * vars'.\{vy\}
+ vars.\{vx\} * vars'.\{x\} + vars.\{vy\} * vars'.\{y\}
\end{Verbatim}
The previous example applied functional iteration, which does not require
solving linear systems.
This is not possible in \textsc{\it IDA}{}, so we must use a linear solver, for which
we choose to provide a function that calculates a Jacobian matrix for the
system.
The function receives a record containing variables and their derivatives, a
coefficient~\verb"c" that is inversely proportional to the step size, and
other values that we do not require.
It also receives a dense matrix~\verb"out", which we ``unwrap'' into a
two-dimensional big array and fill with the non-zero partial derivatives of
residuals relative to variables.
\begin{Verbatim}
let jac Ida.(\{ jac_y = vars ; jac_y' = vars'; jac_coef = c \}) out =
let out = Matrix.Dense.unwrap out in
out.\{x, vx_x\} <- -.c; out.\{y, vy_y\} <- -.c;
out.\{vx, vx_x\} <- 1.; out.\{vy, vy_y\} <- 1.;
out.\{x, acc_x\} <- -. vars.\{p\}; out.\{y, acc_y\} <- -.vars.\{p\};
out.\{vx, acc_x\} <- c; out.\{vy, acc_y\} <- c;
out.\{p, acc_x\} <- -.vars.\{x\}; out.\{p, acc_y\} <- -.vars.\{y\};
out.\{x, constr\} <- c * vars.\{vx\} + vars'.\{vx\};
out.\{y, constr\} <- c * vars.\{vy\} + vars'.\{vy\};
out.\{vx, constr\} <- c * vars.\{x\} + vars'.\{x\};
out.\{vy, constr\} <- c * vars.\{y\} + vars'.\{y\}
\end{Verbatim}
The zero-crossing function is now defined in terms of the Cartesian
variables.
\begin{Verbatim}
let roots t vars vars' r =
r.\{0\} <- vars.\{x\} - vars.\{y\} * (sin (-. pi / 6.) / -. cos (-. pi / 6.))
\end{Verbatim}
We create two big arrays initialized with initial values and a guess for the
initial value of $p$, and wrap them as nvectors.
\begin{Verbatim}
let vars = RealArray.of_list [x0; y0; 0.; 0.; 0.]
let vars' = RealArray.make 5 0.
let nv_vars, nv_vars' = Nvector.(wrap vars, wrap vars') in
\end{Verbatim}
We can now instantiate the solver session.
We create and pass a generic linear solver on 5-by-5 dense matrices
(\verb"nv_vars" is only provided for compatibility checks) and specialize it
for an \textsc{\it IDA}{} session with the Jacobian function defined above.
We also specify scalar relative and absolute tolerances, the residual
function, the zero-crossing function, and initial values for the time,
variables, and variable derivatives.
\begin{Verbatim}
let s = Ida.(init Dls.(solver ~jac (dense nv_vars (Matrix.dense 5)))
(SStolerances (1e-9, 1e-9))
residual ~roots:(1, roots) 0. nv_vars nv_vars')
\end{Verbatim}
If it were necessary to configure the linear solver or to extract statistics
from it, we would have to declare a distinct variable for it, for example,
\verb"let ls = Ida.Dls.dense nv_vars (Matrix.dense 5)".
Since we will be asking Sundials to calculate initial values for algebraic
variables, that is, for $p$, we declare an nvector that classifies each
variable as either differential or algebraic.
\begin{Verbatim}
let d, a = Ida.VarId.differential, Ida.VarId.algebraic in
let var_types = Nvector.wrap (RealArray.of_list [ d; d; d; d; a ])
\end{Verbatim}
\noindent
This information is given to the solver and algebraic variables are
suppressed from local error tests.
\begin{Verbatim}
Ida.set_id s var_types;
Ida.set_suppress_alg s true
\end{Verbatim}
Initial values for the algebraic variables and their derivatives are then
calculated and updated in the appropriate nvectors for a first use at time
\verb"dt".
\begin{Verbatim}
Ida.calc_ic_ya_yd' s ~y:nv_vars ~y':nv_vars' ~varid:var_types dt
\end{Verbatim}
Then, as in the first program, we define a function to advance the
simulation and check for zero-crossings.
If a zero-crossing occurs, the variables and derivatives are updated, the
solver is reinitialized, and the values of algebraic variables and their
derivatives are recalculated.
\begin{Verbatim}
let rec stepto tnext t =
if t >= tnext then t else
match Ida.solve_normal s tnext nv_vars nv_vars' with
| (tret, Ida.RootsFound) ->
vars.\{vx\} <- k * vars.\{vx\};
vars.\{vy\} <- k * vars.\{vy\};
Ida.reinit s tret nv_vars nv_vars';
Ida.calc_ic_ya_yd' s ~y:nv_vars ~y':nv_vars' ~varid:var_types (t + dt);
stepto tnext tret
| (tret, _) -> tret
\end{Verbatim}
The final function is almost the same as in the first program, except that
now the state values can be passed directly to the display routine.
\begin{Verbatim}
let rec showloop t = if t < t_end then begin
show (vars.\{x\}, vars.\{y\});
showloop (stepto (t + dt) t)
end in
showloop 0.0
\end{Verbatim}
This second program involves more technical details than the first one, even
though, in this case, it solves the same problem.
No new concepts are required from a programming point of view.
\section{Technical details}\label{sec:tech}
We now describe and justify the main typing and implementation choices made
in the Sundials/ML interface.
For the important but standard details of writing stub functions and
converting to and from OCaml values we refer readers to
Chapter~20 of the OCaml manual~\cite{LeroyEtAl:OCamlMan:2018}.
We focus here on the representation in OCaml of nvectors
(\cref{sec:vectors}), sessions (\cref{sec:sessions}), and linear solvers
(\cref{sec:linsolv}), describing some of the solutions we tried and
rejected, and presenting those we finally adopted.
We also describe the treatment of Jacobian matrices (\cref{sec:matrices}).
\subsection{Nvectors}\label{sec:vectors}
Nvectors combine an array of floating-point numbers (\verb"double"s) with
implementations of the 26 vector operations.
The OCaml big array library~\cite{LeroyEtAl:OCamlMan:2018} provides arrays
of floating-point numbers that can be shared directly between OCaml and C.
It is the natural choice for interfacing with the payloads of nvectors.
Both Sundials' nvectors and OCaml's big arrays have a flag to indicate
whether or not the payload should be freed when the nvector or big array is
destroyed.
\subsubsection{An attempted solution}\label{sec:vectors:attempted}
A first idea for an interface is to work only with big arrays on the OCaml
side of the library, and to convert them automatically \emph{to} nvectors on
calls into Sundials, and \emph{from} nvectors on callbacks from Sundials.
We did exactly this in early versions of our interface that only supported
serial nvectors.\footnote{This approach is also taken in the NMAG
library~\cite{FangohrEtAl:Nmag:2012} for interfacing with \textsc{\it CVODE}{} for both
serial and parallel nvectors.
The Modelyze implementation~\cite{BromanSie:Modelyze:2012} provides a
similar interface to \textsc{\it IDA}{}, but explicitly copies values to and from
standard OCaml arrays and nvectors.}
For calls into Sundials, like \verb"Cvode.init" from the earlier example,
the interface code creates a temporary serial nvector to pass into \textsc{\it CVODE}{}
by calling the Sundials function
\begin{Verbatim}
N_VMake_Serial(Caml_ba_array_val(b)->dim[0], (realtype *)Caml_ba_data_val(b))
\end{Verbatim}
which creates an nvector from an existing array.
The two arguments extract the array size and the address of the underlying
data from the big array data structure.
Operations on the resulting nvector directly manipulate the data stored
in the big array.
This nvector is destroyed by calling \emph{nvdestroy}---one of the 26
abstract operations defined for nvectors---before returning from the
interface code.
For callbacks into OCaml, we create a big array for each nvector argument
\verb"v" passed in from Sundials by calling the OCaml function
\begin{Verbatim}
caml_ba_alloc(CAML_BA_FLOAT64|CAML_BA_C_LAYOUT, 1, NV_DATA_S(v), &(NV_LENGTH_S(v)))
\end{Verbatim}
From left to right, the arguments request a big array of \verb"double"
values in row-major order indexed from 0, specify the number of dimensions,
pass a pointer to the nvector's payload extracted using a Sundials macro,
and give the length of the array using another Sundials macro (the last
argument is an array with one element for each dimension).
Again, operations on the big array modify the underlying array directly.
After a callback, we set the length of the big array \verb"b" to 0,
\begin{Verbatim}
Caml_ba_array_val(b)->dim[0] = 0
\end{Verbatim}
as a precaution against the possibility, albeit unlikely, that a callback
routine could keep a big array argument in a reference variable that another
part of the program later accesses after the nvector memory has been freed.
The OCaml runtime will eventually garbage collect the emptied big array.
This mechanism can, however, be circumvented by creating a subarray which
will have a distinct header, and hence dimension field, pointing to the same
underlying memory~\cite{Bunzli:Bigarrays:2005}.
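For instance, the following hypothetical callback (deliberately misbehaving)
defeats this precaution by retaining a subarray, whose separate header is
unaffected by the subsequent zeroing:
\begin{Verbatim}
let leaked = ref None
let bad_callback b =
  leaked := Some (Bigarray.Array1.sub b 0 (Bigarray.Array1.dim b))
\end{Verbatim}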
This approach has two advantages: it is simple and library users need only
work with big arrays.
As for performance, calls into Sundials require \verb"malloc" and
\verb"free" but they are outside critical solver loops, and, anyway,
``wrapper'' vectors can always be cached within the session value if
necessary.
Callbacks from Sundials do occur inside critical loops, but we can expect
\verb"caml_ba_alloc" to allocate on the relatively fast OCaml heap and a
\verb"malloc" is not required since we pass a data pointer.
This approach has two drawbacks: it does not generalize well to OpenMP,
Pthreads, parallel, and custom nvectors, and a big array header is always
allocated on the major heap which may increase the frequency and cost of
garbage collection.
We tried to generalize this approach to handle parallel nvectors by using
preprocessor macros, but the result was confusing for users, tedious to
maintain, and increasingly unwieldy as we extended the interface to treat
\textsc{\it CVODES}{} and \textsc{\it IDAS}{}.
\subsubsection{The adopted solution}\label{sec:vectors:adopted}
The solution we finally adopted exploits features of the nvector abstract
datatype and polymorphic typing to treat nvectors more generically and
without code duplication.
The idea is straightforward: we pair an OCaml representation of the contents
with a (wrapped) pointer to a C~nvector structure, and we link both to the
same underlying data array as in the previous solution.
The difference is that we maintain both the OCaml view and the C view of the
structure at all times.
The memory layout of our nvector is shown in \cref{fig:mem:nvec}.
The OCaml type for accessing this structure is defined in the
\verb"Nvector" module as:
\begin{Verbatim}
type ('data, 'kind) t = 'data * cnvec * ('data -> bool)
\end{Verbatim}
and used abstractly as \verb"('data, 'kind) Nvector.t" throughout the
interface.
The \verb"'data" component points to the OCaml view of the payload (labelled
``\verb"payload"'' in \cref{fig:mem:nvec}).
For serial nvectors, \verb"'data" is instantiated as a big array of
\verb"float"s.
The phantom type~\cite{LeijenMei:DomSpecComp:1999} argument \verb"'kind" is
justified subsequently.
The \verb"cnvec" component is a custom block pointing to the \verb"N_Vector"
structure allocated in the C heap.
The last component of the triple is a function that tests the compatibility
of the nvector with another one: for serial nvectors, this means one of the
same length, while for parallel nvectors, global sizes and \ac{MPI}
communicators are also checked.
This component is used internally in our binding to ensure that, for
instance, only compatible nvectors are added together.
Since the compatibility check only concerns the payload, we use a function
from type \verb"'data" rather than type \verb"Nvector.t".
This check together with the type arguments prevents nvectors being used in
ways that would lead to invalid memory accesses.
The two mechanisms help library users: types document how the library is
used and dynamic checks signal problems at their point of occurrence (as an
alternative to long debugging sessions).
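For serial nvectors, the compatibility check might be implemented along the
following lines (a sketch for illustration only, not the library's actual
code):
\begin{Verbatim}
let serial_check (v : RealArray.t) =
  let n = RealArray.length v in
  fun v' -> RealArray.length v' = n
\end{Verbatim}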
With this representation, the \verb"Nvector" module can provide a generic
and efficient function for accessing the data from the OCaml side of the
interface:
\begin{Verbatim}
let unwrap ((payload, _, _) : ('data, 'kind) t) = payload
\end{Verbatim}
with the type \verb"('data, 'kind) Nvector.t -> 'data".
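For instance, reusing \verb"nv_y" and the \verb"theta" index from the first
example program, the current solution can be inspected directly:
\begin{Verbatim}
let y = Nvector.unwrap nv_y in
print_float y.\{theta\}
\end{Verbatim}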
Calls from OCaml into C work similarly by obtaining a pointer to the
underlying Sundials nvector from the \verb"cnvec" field.
Callbacks from C into OCaml require another mechanism.
Stub functions are passed \verb"N_Vector" values from which they must
recover corresponding OCaml representations before invoking a callback.
While it would be possible to modify the nvector contents field to hold both
an array of data values and a pointer to an OCaml value, we wanted to use
the original nvector operations without any additional overhead.
Our solution is to allocate more memory than necessary for an
\verb"N_Vector" so as to add a ``backlink'' field that references the OCaml
representation.
The approach is summarised in \cref{fig:mem:nvec}.
At left are the values in the OCaml heap: an \verb"Nvector.t" and its
\verb"'data" payload.
The former includes a pointer into the C heap to an \verb"N_Vector"
structure extended, hence the `+', with a third field that refers back to
the data payload on the OCaml side.
Callbacks can now easily retrieve the required value:
\begin{Verbatim}
#define NVEC_BACKLINK(nvec) (((struct cnvec *)nvec)->backlink)
\end{Verbatim}
\label{def:NVEC_BACKLINK}
The backlink field must be registered as a global root with the garbage
collector to ensure that it is updated if the payload is moved and also that
the payload is not destroyed inopportunely.
Pointing this global root directly at the \verb"Nvector.t" would create a
cycle across both heaps and necessitate special treatment to avoid memory
leaks.
We thus decided to pass payload values directly to callbacks, with the added
advantage that callback functions have simpler types in terms of
\verb"'data" rather than \verb"('data, 'kind) Nvector.t".
When there are no longer any references to an \verb"Nvector.t", it is
collected and its finalizer frees the \verb"N_Vector" and associated global
root.
This permits the payload to be collected when no other references to it
exist.
We found that this choice works well in practice provided the standard
vector operations are also defined for payload values---that is, directly on
values of type \verb"'data"---since callbacks can no longer use the nvector
operations directly.
The only drawback we experienced was in implementing custom linear solvers.
Such linear solvers take nvector arguments, which thus become payload values
within OCaml, but they also work with callbacks into Sundials that take
nvector arguments.
The OCaml code must thus `rewrap' payload values in order to use the
callbacks.
\begin{figure}
\centering
\begin{tikzpicture}[
node distance=1cm,
cell/.style={draw,rectangle,thick,
minimum height=4.0ex,
minimum width=8.0em},
wcell/.style={cell,minimum width=9.1em},
serial/.style={thick,densely dotted},
links/.style={thick}
]
\draw[dash dot dot,gray] (2.2,-4.7) -- (2.2,1.4);
\node[left,gray] at (2.1,1.2) {OCaml heap};
\node[right,gray] at (2.3,1.2) {C heap};
\node[cell] (payload) {\verb"payload"};
\node[cell,below left=.6cm and -1cm of payload] (data) {\verb"'data"};
\node[cell,below=-\the\pgflinewidth of data] (cnvec) {\verb"cnvec"};
\node[cell,below=-\the\pgflinewidth of cnvec,gray]
(compat) {\verb"'data -> bool"};
\node[cell,below=-\the\pgflinewidth of data] {};
\node[above=0 of data] {\verb"Nvector.t"};
\node[wcell,below right=0 and 4.0 of cnvec] (content)
{\verb"content (void *)"};
\node[wcell,below=-\the\pgflinewidth of content] (ops) {\verb"ops"};
\node[above=0 of content]
{\hspace{1em}\verb"*N_Vector"\textsuperscript{+}};
\node[wcell,dashed,below=-\the\pgflinewidth of ops]
(backlink) {\verb"'data" (`backlink')};
\node[below left] at (backlink.north east) {\pgfuseplotmark{square*}};
\node[cell,serial,right=2.5cm of payload] (array)
{\verb"double *"};
\path (array.east) +(0.5,0) coordinate (arraylink);
\draw[serial,->] ([xshift=1cm]payload) |- (array);
\draw[serial,->] (content.east)
-| (arraylink)
|- (array.east)
;
\path ($(data.east)!.5!(payload.west)$) coordinate (hor);
\draw[->,links] (cnvec) -| ([xshift=.2cm]content.north west);
\draw[->,links] (data.east) -| (payload.south);
\draw[->,links]
(backlink.east)
-- ++(1.8,0)
-- ++(0,4.8) coordinate (ver)
-| (payload.north)
;
\node at (-2.5,-4.1) {\raisebox{.5ex}{\pgfuseplotmark{square*}}%
\hspace{.7em}GC root};
\end{tikzpicture}
\caption{Interfacing nvectors (dotted lines are for serial
nvectors).\label{fig:mem:nvec}}
\end{figure}
The ``backlink'' field is configured when an \verb"N_Vector" is created by a
function in the OCaml \verb"Nvector" module.
The operations of serial and other \verb"N_Vector"s are unchanged but for
\verb"nvclone", \verb"nvdestroy", and \verb"nvcloneempty" which are replaced
by custom versions.
The replacement \verb"nvclone" allocates and sets up the backlink field and
associated payload.
For serial nvectors, it aliases the contents field to the data field of the
big array payload which is itself allocated in the C heap, as shown in
dotted lines in \cref{fig:mem:nvec}, and registers the backlink as a global
root.
The replacement \verb"nvdestroy" removes the global root and frees memory
allocated directly in the C heap but leaves the payload array to be garbage
collected when it is no longer accessible.
There are thus two classes of nvectors: those created on the OCaml side with
a lifetime linked to the associated \verb"Nvector.t" and determined by the
garbage collector, and those cloned within Sundials, for which there is no
\verb"Nvector.t" and whose lifetime ends when Sundials explicitly destroys
them, though the payload may persist.
Overriding the clone and destroy operations is more complicated than the
first attempted solution, and does not work with nvectors created outside
the OCaml interface (which would not have the extra backlink field).
This means, in particular, that we forgo the possibility of code mixing
Fortran, C, and OCaml, but otherwise this approach is efficient and
generalizes well.
Within the OCaml interface, the serial, OpenMP, and Pthreads nvectors carry
a big array payload, but at the C~level each is represented by a different
type of struct: those for the last two, for example, track the number of
threads in use.
OpenMP and Pthreads nvectors can be used anywhere that Serial nvectors
can---since they are all manipulated through a common set of
operations---except when the underlying representation is important, as in
direct accesses to nvector data or through functions like
\verb"Nvector_pthreads.num_threads".
We enforce these rules using the \verb"'kind" type variable introduced above
and polymorphic variants~\cite{Garrigue:PolyVariants:1998}\cite[\textsection
4.2]{LeroyEtAl:OCamlMan:2018}.
We use three polymorphic variant constructors, marked with backticks, and
declare three type aliases as (closed) sets of these
constructors:\label{page:serialkinds}
\begin{Verbatim}
type Nvector_serial.kind = [`Serial]
type Nvector_pthreads.kind = [`Pthreads | Nvector_serial.kind]
type Nvector_openmp.kind = [`OpenMP | Nvector_serial.kind]
\end{Verbatim}
Here we abuse the syntax of OCaml slightly: in the real implementation, each
kind is declared in the indicated module.
The first line declares \verb"Nvector_serial.kind" as a type whose only
(variant) constructor is \verb"`Serial".
The second line declares \verb"Nvector_pthreads.kind" as a type whose only
constructors are \verb"`Pthreads" and \verb"`Serial", and likewise for the
third line.
In fact, the constructors are never used as values, since the \verb"'kind"
argument is a `phantom'~\cite{LeijenMei:DomSpecComp:1999}: it only ever
occurs on the left-hand side of type definitions.
They serve only to express typing constraints.
Functions that accept any kind of nvector are polymorphic in \verb"'kind",
and those that only accept a specific kind of nvector are constrained with a
specific \verb"kind", like one of the three listed above or others
introduced specifically for parallel or custom nvectors.
Functions that accept any of the three kinds listed above but no others,
since they exploit the underlying serial data representation, take a
polymorphic nvector whose \verb"'kind" is constrained by
\begin{Verbatim}
constraint 'kind = [>Nvector_serial.kind]
\end{Verbatim}
Such an argument can be instantiated with any type that includes at least
the \verb"`Serial" constructor.
The fact that \verb"Nvector.t" is opaque means that it can only be one of
the nvector types \verb"Nvector_serial.t", \verb"Nvector_pthreads.t", or
\verb"Nvector_openmp.t".
An example is given in \cref{sec:linsolv}.
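As a smaller, self-contained illustration, the following hypothetical helper
(not part of the library) accepts any of the three nvector types thanks to
its constrained kind:
\begin{Verbatim}
let first_element
    (nv : (RealArray.t, [>Nvector_serial.kind] as 'k) Nvector.t) =
  (Nvector.unwrap nv).\{0\}
\end{Verbatim}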
For parallel nvectors, the payload is the triple:
\begin{Verbatim}
(float, float64_elt, c_layout) Bigarray.Array1.t * int * Mpi.communicator
\end{Verbatim}
where the first element is a big array of local \verb"float"s, the second
gives the global number of elements, and the third specifies the \ac{MPI}
processes that communicate together.\footnote{We use the OCaml MPI binding:
\url{https://github.com/xavierleroy/ocamlmpi/}.}
We instantiate the \verb"'data" type argument of \verb"Nvector.t" with
this triple and provide creation and clone functions that create aliasing
for the big array and duplicate the other two elements between the OCaml and
C representations.
A specific kind is declared for parallel nvectors.
For custom nvectors, we define a record type containing a field for each
nvector operation, and a compatibility check (\verb"n_vcheck"), over an
arbitrary payload type~\verb"'d":
\begin{Verbatim}
type 'd nvector_ops = \{
n_vcheck : 'd -> 'd -> bool;
n_vclone : 'd -> 'd;
n_vlinearsum : float -> 'd -> float -> 'd -> 'd -> unit;
n_vmaxnorm : 'd -> float;
\vdots
\}
\end{Verbatim}
Such a record can then be used to create a wrapper function that turns
payload values of type~\verb"'d" into custom nvectors, by calling:
\begin{Verbatim}
val make_wrap : 'd nvector_ops -> 'd -> ('d, Nvector_custom.kind) Nvector.t
\end{Verbatim}
The resulting \verb"Nvector.t" carries the type of payload manipulated by
the given operations and a kind, \verb"Nvector_custom.kind", specific to
custom nvectors.
The kind permits distinguishing, for instance, between a custom nvector
whose payload is a big array of \verb"float"s and a standard serial nvector.
While there is little difference from within OCaml---both have the same type
of payload---the differences in the underlying representations are important
from the C~side.
In the associated \verb"N_Vector" data structure, we point the \verb"ops"
fields at generic stub code that calls back into OCaml and store the
closures defining the operations in the \verb"content" field.
This field is registered as a global root with the garbage collector.
The payload is stored using the backlink technique described earlier and
depicted in \cref{fig:mem:nvec}.
It is possible to create an OCaml-C reference loop by referring back to an
\verb"Nvector.t" value from within a custom payload or the set of custom
operations, and thus to inhibit correct garbage collection.
This is unlikely to happen in normal use and is difficult to detect, so we
simply rule it a misuse of the library.
\subsection{Sessions}\label{sec:sessions}
OCaml session values must track an underlying C session pointer and also
maintain references to callback closures and some other bookkeeping details.
We exploit the user data feature of Sundials to implement callbacks.
The main technical challenges are to avoid inter-heap loops and to smoothly
accommodate sensitivity analysis features.
\begin{figure}
\centering
\begin{tikzpicture}[
node distance=1cm,
cell/.style={draw,rectangle,thick,
minimum height=4.0ex,
minimum width=7em},
wide cell/.style={cell,minimum width=9em},
serial/.style={thick,densely dotted},
]
\draw[dash dot dot,gray] (3.615,-3.2) -- (3.615,1.4);
\node[left,gray] at (3.515,1.2) {OCaml heap};
\node[right,gray] at (3.715,1.2) {C heap};
\node[cell] (cvode) {\verb"cvode"};
\node[cell,below=-\the\pgflinewidth of cvode] (backref) {\verb"backref"};
\node[cell,below=-\the\pgflinewidth of backref] (rhsfn) {\verb"rhsfn"};
\node[cell,below=-\the\pgflinewidth of rhsfn] (session etc) {\ldots};
\node[above=0 of cvode] {\verb"session"};
\path (cvode.north west) ++(+.2,0) coordinate (cvode link)
++(-.7,.4) coordinate (cvode corner);
\node[wide cell,below right=0cm and 5.0cm of cvode] (cvode_mem)
{\verb"cv_user_data"};
\node[wide cell,below=-\the\pgflinewidth of cvode_mem] (cvode_mem etc)
{\ldots};
\node[above=0 of cvode_mem] {\verb"*cvode_mem"};
\draw[->,thick] (cvode) -| ([xshift=.2cm]cvode_mem.north west);
\node[cell,below right=2\the\pgflinewidth and -1cm of session etc] (weak)
{\verb"Weak.t"};
\node[cell,right=1cm of weak] (root)
{\verb"root"};
\node[below left] at (root.north east) {\pgfuseplotmark{square*}};
\draw[->,thick] (cvode_mem) -| ([xshift=.8em]root.north);
\draw[->,thick] (backref) -| ([xshift=-.8em]root.north);
\draw[->,thick] (root.west) -- (weak.east);
\draw[->,thick,dashed] (weak) -| (cvode corner) -| (cvode link);
\node at (8.8,-2.6) {\raisebox{.5ex}{\pgfuseplotmark{square*}}%
\hspace{.7em}GC root};
\end{tikzpicture}
\caption{Interfacing (\textsc{\it CVODE}) sessions.\label{fig:mem:session}}
\end{figure}
The solution we implemented is sketched in \cref{fig:mem:session} and
described below for \textsc{\it CVODE}{} and \textsc{\it CVODES}{}.
The treatment of \textsc{\it IDA}{}, \textsc{\it IDAS}{}, \textsc{\it ARKODE}{}, and \textsc{\it KINSOL}{} is essentially
the same.
As for nvectors, OCaml session types are parametrized by \verb"'data" and
\verb"'kind" type variables.
They are represented internally as records:
\begin{Verbatim}
type ('data, 'kind) session = \{
cvode : cvode_mem;
backref : c_weak_ref;
rhsfn : float -> 'data -> 'data -> unit;
\vdots
mutable sensext : ('data, 'kind) sensext;
\}
\end{Verbatim}
The type variables are instantiated from the nvector passed to the
\verb"init" function that creates session values.
The type variables ensure a coherent use of nvectors, which is essential in
operations that involve multiple nvectors, like \emph{nvlinearsum}, since
the code associated with one of the arguments is executed on the data of all
of the arguments.
The \verb"cvode" field of the session record contains a pointer to the
associated Sundials session value \verb"*cvode_mem".
The \verb"cv_user_data" field of \verb"*cvode_mem" is made to point at a
\verb"malloc"ed \verb"value" that is registered with the garbage collector
as a global root.
This root value cannot be stored directly in \verb"*cvode_mem" because
Sundials only provides indirect access to the \verb"cv_user_data" field
through the functions \verb"CVodeSetUserData" and \verb"CVodeGetUserData".
We would have had to violate the interface to acquire the address of the
field in order to register it as a global root.
The root value must refer back to the \verb"session" value since it is used
in C~level callback functions to obtain the appropriate OCaml closure.
The \verb"rhsfn" field shown in the record above is, for instance, the
closure for the \textsc{\it CVODE}{} `right-hand side function', the \verb"f" of
\cref{sec:overview}.
The root value stores a weak reference that is updated by the garbage
collector if \verb"session" is moved but which does not prevent the
destruction of \verb"session".
This breaks the cycle which exists across the OCaml and C~heaps: storing a
direct reference in the root value would prevent garbage collection of the
\verb"session" but the root value itself cannot be removed unless the
\verb"session" is first finalized.
The \verb"backref" field is used only by the finalizer of \verb"session" to
unregister the global root and to free the associated memory.
\begin{figure}
\begin{center}
\begin{NumberedVerbatim}
static int rhsfn(realtype t, N_Vector y, N_Vector ydot, void *user_data)
\{
CAMLparam0();\label{cbackstub:param}
CAMLlocal2(session, r);
CAMLlocalN(args, 3);\label{cbackstub:localn}
WEAK_DEREF (session, *(value*)user_data);\label{cbackstub:weakderef}
args[0] = caml_copy_double(t);\label{cbackstub:copydouble}
args[1] = NVEC_BACKLINK(y);\label{cbackstub:backlinky}
args[2] = NVEC_BACKLINK(ydot);\label{cbackstub:backlinkydot}
r = caml_callbackN_exn(Field(session, RECORD_CVODE_SESSION_RHSFN), 3, args);\label{cbackstub:rhsfn}
CAMLreturnT(int, CHECK_EXCEPTION (session, r, RECOVERABLE));\label{cbackstub:return}
\}
\end{NumberedVerbatim}
\end{center}
\caption{Typical Sundials/ML callback stub.}\label{fig:cbackstub}
\end{figure}
\Cref{fig:cbackstub} shows the callback stub for the \verb"session.rhsfn"
closure described above.
The C~function \verb"rhsfn()" is registered as the right-hand side function
for every \textsc{\it CVODE}{} session created by the interface.
Sundials calls it with the value of the independent variable, \verb"t", an
nvector containing the current value of the dependent variable, \verb"y", an
nvector for storing the calculated derivative, \verb"ydot", and the
session-specific pointer registered in \verb"cv_user_data".
Lines~\ref{cbackstub:param} to~\ref{cbackstub:localn} contain standard
boilerplate for an OCaml stub function.
Line~\ref{cbackstub:weakderef} follows the references sketched in
\cref{fig:mem:session} to retrieve a \verb"session" record: the
\verb"WEAK_DEREF" macro contains a call to \verb"caml_weak_get".
The weak reference is guaranteed to point to a value since Sundials cannot
be invoked from OCaml without passing the session value used in the
callback.
Line~\ref{cbackstub:copydouble} copies the floating-point argument into the
OCaml heap.
Lines~\ref{cbackstub:backlinky} and~\ref{cbackstub:backlinkydot} recover the
nvector payloads using the macro described in \cref{sec:vectors}.
Line~\ref{cbackstub:rhsfn} retrieves and invokes the \verb"rhsfn" closure
from the session object.
Finally, at line~\ref{cbackstub:return}, the return value is determined by
checking whether or not the callback raised an exception, and if so, whether
it was the distinguished \verb"RecoverableFailure" that signals to Sundials
that recovery is possible.
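For illustration, a right-hand-side function might signal a recoverable
failure as follows (a hypothetical callback with invented dynamics):
\begin{Verbatim}
let rhs t y ydot =
  if y.\{0\} < 0.0 then raise Sundials.RecoverableFailure;
  ydot.\{0\} <- -. sqrt y.\{0\}
\end{Verbatim}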
\begin{figure}
\centering
\begin{tikzpicture}[
node distance=1cm,
cell/.style={draw,rectangle,thick,
minimum height=4.0ex,
minimum width=7em},
wide cell/.style={cell,minimum width=9em},
serial/.style={thick,densely dotted},
]
\draw[dash dot dot,gray] ( 3.3,-3.0) -- (3.3, 1.3);
\draw[dash dot dot,gray] (-1.6,-1.625) -- (0.9,-1.625);
\node[left,gray] at (3.2, 1.1) {OCaml heap};
\node[right,gray] at (3.4, 1.1) {C heap};
\node[left,gray] at (3.2,-1.6) {OCaml stack};
\node[cell] (cvode) {\verb"cvode"};
\node[cell,below=-\the\pgflinewidth of cvode] (session etc) {\ldots};
\node[above=0 of cvode] {\verb"session"};
\path (cvode.north west) ++(+.2,0) coordinate (cvode link)
++(-.7,.4) coordinate (cvode corner);
\node[wide cell,below right=0cm and 3.4cm of cvode] (cvode_mem)
{\verb"cv_user_data"};
\node[wide cell,below=-\the\pgflinewidth of cvode_mem] (cvode_mem etc)
{\ldots};
\node[above=0 of cvode_mem] {\verb"*cvode_mem"};
\draw[->,thick] (cvode) -| ([xshift=.2cm]cvode_mem.north west);
\node[cell,below left=1.1cm and 2.5cm of cvode_mem] (stack)
{\verb"argument"};
\draw[->,thick] (cvode_mem) -| ($(cvode_mem)!.35!(stack)$)
|- (stack.east);
\draw[->,thick] (stack) -| (cvode corner) -| (cvode link);
\end{tikzpicture}
\caption{Alternative session interface (not
adopted).\label{fig:mem:altsession}}
\end{figure}
\subsubsection{An alternative session interface}\label{sec:sessions:alt}
Another approach for linking the C \verb"*cvode_mem" to an OCaml
\verb"session" value is outlined in \cref{fig:mem:altsession}.
Since for a callback to occur, control must already have passed into the
Sundials library through the interface, there will be a reference to the
\verb"session" value on the OCaml stack.
It is thus possible to pass the reference to \verb"CvodeSetUserData" before
calling into Sundials.
The reference will be updated by the garbage collector as necessary, but not
moved itself during the call.
This approach is appealing as it requires neither global roots nor weak
references.
It also requires fewer interfacing instructions in performance critical
callback loops due to fewer indirections and because there is no need to
call \verb"caml_weak_get".
Although this implementation is uncomplicated for functions like
\verb"rhsfn" that are only called during solving, it is more invasive for
the error-handling functions which can, in principle, be triggered from
nearly every call;
either updates must be inserted everywhere or care must be taken to avoid an
incorrect memory access.
When using an adjoint sensitivity solver, the user data references of all
backward sessions must be updated before solving, but the error-handling
functions do not require special treatment since they are inherited from the
parent session.
We chose the approach based on weak references to avoid having to think
through all such cases and also because our testing did not reveal
significant differences in running time between the two approaches.
\subsubsection{Quadrature and sensitivity features}\label{sec:sessions:sens}
Although the \textsc{\it CVODES}{} solver conceptually extends the \textsc{\it CVODE}{} solver, it
is implemented in a distinct code base (and similarly for \textsc{\it IDAS}{} and
\textsc{\it IDA}{}).
For the OCaml library, we wanted to maintain the idea of an extended
interface without completely duplicating the implementation.
The library thus provides two modules, \verb"Cvode" and \verb"Cvodes",
that share the \verb"session" type.
As both modules need to access the internals of \verb"session" values, we
declare this type, and all the types on which it depends, inside a third
module \verb"Cvode_impl" that the other two include.
To ensure the opacity of session types in external code, we simply avoid
installing the associated \filename{cvode\_impl.cmi} compiled OCaml
interface file.
The mutable \verb"sensext" field of the \verb"session" record tracks the
extra information needed for the sensitivity features.
It has the type:
\begin{Verbatim}
type ('data, 'kind) sensext =
NoSensExt
| FwdSensExt of ('data, 'kind) fsensext
| BwdSensExt of ('data, 'kind) bsensext
\end{Verbatim}
The \verb"NoSensExt" value is used for basic sessions without sensitivity
analysis.
The \verb"FwdSensExt" value is used to augment a session with calculations
of quadratures, forward sensitivities, and adjoint sensitivities.
It contains additional callback closures and also a list of associated
backward session values to prevent their garbage collection while they may
still be required by C-side callback stubs which only hold weak references.
The \verb"BwdSensExt" value is used in backward sessions created for adjoint
sensitivity analysis.
It contains callback closures and a link to the parent session and an
integer identifier sometimes required by Sundials functions.
Reusing the basic session interface for backward sessions mirrors the
approach taken in Sundials and simplifies the underlying implementation at
the cost of some redundant fields---the normal callback closures are never
used---and an indirection to access the \verb"bsensext" fields.
The only other complication in interfacing to \textsc{\it CVODES}{} (and \textsc{\it IDAS}{}) is
that they often work with arrays of nvectors.
For calls into Sundials, given OCaml arrays of OCaml nvectors, we allocate
short-lived C arrays in the interface code and extract the corresponding C
nvector from each element.
For callbacks from Sundials, we maintain cached OCaml \verb"array"s in the
\verb"sensext" values and populate them with OCaml payloads extracted from
the C nvectors.
\subsection{Matrices}\label{sec:matrices}
\begin{figure}
\centering
\begin{tikzpicture}[
node distance=1cm,
cell/.style={draw,rectangle,thick,
minimum height=4.0ex,
minimum width=8.0em},
wcell/.style={cell,minimum width=9.1em},
custom/.style={thick,densely dotted},
links/.style={thick}
]
\draw[dash dot dot,gray] (2.2,-6.8) -- (2.2,1.4);
\node[left,gray] at (2.1,1.2) {OCaml heap};
\node[right,gray] at (2.3,1.2) {C heap};
\node[cell] (bigarray) {\verb"bigarray(s)"};
\node[cell,below left=.6cm and -1cm of bigarray] (cdata) {\verb"'data"};
\node[cell,below=-\the\pgflinewidth of cdata] (cmat) {\verb"'cmat"};
\node[above=0 of cdata] (mcontent) {\verb"matrix_content"};
\node[cell,below left=2.0cm and -1cm of mcontent] (mdata) {\verb"'m"};
\node[cell,below=-\the\pgflinewidth of mdata] (mptr) {\verb"mptr"};
\node[cell,below=-\the\pgflinewidth of mptr] (mdots) {$\cdots$};
\node[above=0 of mdata] (matrix) {\verb"Matrix.t"};
\node[wcell,below right=0 and 4.0 of cmat] (content)
{\verb"data (void *)"};
\node[wcell,below=-\the\pgflinewidth of content] (dots) {$\cdots$};
\node[above=0 of content] {\hspace{1.4em}\verb"*SUNMatrixContent"};
\node[wcell,below right=.2 and 5.5 of mptr] (sunmat)
{\verb"content (void *)"};
\node[wcell,below=-\the\pgflinewidth of sunmat] (cops) {\verb"ops"};
\node[above=0 of sunmat]
{\hspace{1em}\verb"*SUNMatrix"\textsuperscript{+}};
\node[wcell,dashed,below=-\the\pgflinewidth of cops]
(backlink) {\verb"'data" (`backlink')};
\node[below left] at (backlink.north east) {\pgfuseplotmark{square*}};
\node[cell,below right=0 and 3.0cm of bigarray] (array)
{\verb"double/long *"};
\draw[links,->] ([xshift=1cm]bigarray) -| ([xshift=.2cm]array.north west);
\draw[links,->] (content.east) -| ([xshift=1.3cm]array.south);
\node[cell,custom,left=1.2cm of sunmat] (ops) {\verb"matrix_ops"};
\path ($(cdata.east)!.5!(bigarray.west)$) coordinate (hor);
\draw[->,links] (cmat) -| ([xshift=.2cm]content.north west);
\draw[->,links] (cdata.east) -| (bigarray.south);
\draw[->,links]
(backlink.east)
-- ++(1.8,0)
-- ++(0,7.0) coordinate (ver)
-| (mcontent.north)
;
\draw[->,links] (mptr) -| ([xshift=.2cm]sunmat.north west);
\draw[->,links] (mdata.east) -| (cmat.south);
\draw[->,links] (mdata.east) -| (cmat.south);
\draw[->,links] (sunmat.east) -| ([xshift=1.5cm]dots.south);
\draw[custom,->] (sunmat) -- (ops);
\node at (-4.0,-6.2) {\raisebox{.5ex}{\pgfuseplotmark{square*}}%
\hspace{.7em}GC root};
\end{tikzpicture}
\caption{Interfacing matrices (dotted elements are for custom matrices
only).\label{fig:mem:mat}}
\end{figure}
The interface for matrices must support callbacks from C into OCaml and
allow direct access to the underlying data through big arrays.
We adapt the solution adopted for nvectors, albeit with an extra level of
`mirroring'.
An outline of our approach is shown in \cref{fig:mem:mat}.
In C, a \verb"SUNMatrix" record pairs a pointer to content with function
pointers to matrix operations.
The content of each matrix type is represented by a specific record type
(\verb"SUNMatrixContent_Dense", \verb"SUNMatrixContent_Band", or
\verb"SUNMatrixContent_Sparse"), which is not exposed to users.
The content records contain the fields described in
\cref{sec:coverview:matrices} and pointers to underlying data arrays.
In OCaml, we implement the following types for matrices.
\begin{Verbatim}
type cmat
type ('mk, 'm, 'data, 'kind) t = \{
payload : 'm;
rawptr : cmat;
...
\}
\end{Verbatim}
The second type is used abstractly as
\verb"('mk, 'm, 'data, 'kind) Matrix.t".
The \verb"cmat" component is a custom block pointing to the \verb"SUNMatrix"
structure allocated in the C heap; when garbage collected, its finalizer
destroys the structure.
The \verb"payload" field of type \verb"'m" refers to the OCaml
representation of the matrix content.
The \verb"'mk" phantom type signifies whether the underlying representation
is the standard one provided by Sundials, in which the C-side content points
to a \verb"SUNMatrixContent_*" structure, or a special form for custom
matrices, in which the C-side content field is a global root referring to a
set of OCaml closures for matrix operations.
The other two type arguments, \verb"'data" and \verb"'kind", are used to
express restrictions on the matrix-times-vector operation.
For instance, the built-in matrix types may only be multiplied by serial,
OpenMP, and Pthreads nvectors.
In callbacks from C into OCaml, stub functions are passed \verb"SUNMatrix"
values from which they must recover a corresponding OCaml representation
before invoking OCaml code.
We reuse the mechanism described in \cref{sec:vectors:adopted} for nvectors
by adding a backlink field on the C side.
As before, this requires overriding the clone operation to recreate the
backlink and OCaml-side structures for new matrices, and the destroy
operation to unregister the global root.
We again prefer to avoid cross-heap cycles by not referring directly to the
\verb"Matrix.t" wrapper.
Referring back to a big array would work well enough for dense matrices, but
banded matrices also require tracking size and numbers of diagonals, and
sparse matrices require the \emph{indexptrs} and \emph{indexvals} arrays
described in \cref{sec:coverview:matrices}.
Our solution is to introduce an intermediate structure:
\begin{Verbatim}
type ('data, 'cmat) matrix_content = \{
mutable payload : 'data;
rawptr : 'cmat;
\}
\end{Verbatim}
which is not exposed directly by the interface, but which is rather
instantiated across submodules for dense, banded, sparse, and custom
matrices.
For instance the \verb"Matrix.Dense.t" type is implemented by the following
definitions.
\begin{Verbatim}
type data = (float, Bigarray.float64_elt, Bigarray.c_layout) Bigarray.Array2.t
type cmat
type t = (data, cmat) matrix_content
\end{Verbatim}
The \verb"cmat" represents a custom block containing a
\verb"SUNMatrixContent_Dense" pointer to the content linked (or not) from a
\verb"SUNMatrix" structure and the \verb"payload" field refers to a big
array that wraps the data underlying the content structure.
Similar instantiations are used for the \verb"Matrix.Band.t" and
\verb"'s Matrix.Sparse.t" types; the latter includes a phantom type argument
that tracks the underlying format (\ac{CSC} or \ac{CSR}).
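As a hypothetical usage sketch (assuming an \verb"unwrap" accessor on
\verb"Matrix.t" analogous to the one for nvectors), dense storage can be
created and modified through the big-array view of its content:
\begin{Verbatim}
let m = Matrix.dense 3 in
let a = Matrix.Dense.unwrap (Matrix.unwrap m) in
a.\{0, 0\} <- 1.0
\end{Verbatim}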
Unfortunately, the scheme described here is made more complicated by the
fact that certain banded and sparse operations sometimes reallocate matrix
storage---for instance, if more non-zero elements are needed to store a
result.
The only solution we found to this problem was to override those operations
by duplicating the original source code and adjusting them to create new big
arrays and link their payloads back into the C-side structures.
In a custom matrix, the matrix operations are simply overloaded by stubs
that retrieve an OCaml closure via the \verb"SUNMatrix" content field and
invoke it with content retrieved through the backlink.
\subsection{Linear solvers}\label{sec:linsolv}
Sundials provides several different linear solvers and various options for
configuring them.
One of our design goals was to clarify and ensure valid configurations using
the OCaml module and type systems.
Our interface allows both custom and alternate linear solvers written in
OCaml and cleanly accommodates parallel preconditioners without introducing
a mandatory dependency on \ac{MPI}.
\subsubsection{Generic linear solvers}
A generic linear solver essentially tries to find a vector~$x$ to satisfy an
equation~$Ax = b$, where $A$ is a matrix and~$b$~is a vector.
Sundials provides a single type for generic linear solvers that encompasses
instances from two families, \ac{DLS} and \ac{SPILS}.
The two families, however, are essentially implemented using different
operations and attached to sessions by different functions with different
supplementary arguments, for instance, \textsc{\it CVODE}{} provides
\verb"CVDlsSetLinearSolver" and \verb"CVSpilsSetLinearSolver".
We thus found it more natural to define two distinct types, each with their
own associated module.
Instances of the \ac{DLS} family manipulate explicit representations of the
matrix~$A$ and the vectors~$b$ and~$x$.
The type for a generic \ac{DLS} is exposed as (slightly abusing syntax):
\begin{Verbatim}
type ('m, 'data, 'kind, 'tag) LinearSolver.Direct.linear_solver
\end{Verbatim}
where \verb"'m" captures the type of~$A$, \verb"'data" and \verb"'kind"
constrain the nvectors for~$b$ and~$x$, and \verb"'tag" is used to restrict
the use of operations that require KLU, SuperLUMT, or custom \ac{DLS}
instances.
Internally, the type is realized by a record that contains a pointer to the
C-side structure, a reference to the matrix used within Sundials for storage
and cloning (to prevent it being prematurely garbage collected), and a
boolean to dynamically track and prevent associations with multiple
sessions.
\ac{DLS} implementations are provided for dense, banded, and sparse
matrices.
The function that creates a generic linear solver over dense matrices is
typical:
\begin{Verbatim}
val dense : 'kind Nvector_serial.any
-> 'kind Matrix.dense
-> (Matrix.Dense.t, 'kind, tag) serial_linear_solver
\end{Verbatim}
The resulting generic linear solver is restricted to serial, OpenMP, or
Pthreads nvectors:
\begin{Verbatim}
type ('m, 'kind, 'tag) serial_linear_solver
= ('m, Nvector_serial.data, [>Nvector_serial.kind] as 'kind, 'tag) linear_solver
\end{Verbatim}
The \verb"[>Nvector_serial.kind]" constraint only allows the type variable
\verb"'kind" to be instantiated by a type that includes the constructor
\verb"Nvector_serial.kind", which was presented on
\cpageref{page:serialkinds}.
A custom \ac{DLS} implementation is created by defining a record of OCaml
functions:
\begin{Verbatim}
type ('m, 'data, 's) ops = \{
init : 's -> unit;
setup : 's -> 'm -> unit;
solve : 's -> 'm -> 'data -> 'data -> unit;
get_work_space : ('s -> int * int) option;
\}
\end{Verbatim}
where \verb"'s" is the type of the internal state of an instance.
The following two functions are provided.
\begin{Verbatim}
val make : ('m, 'data, 's) ops -> 's -> ('mk, 'm, 'data, 'kind) Matrix.t
-> ('m, 'data, 'kind, [`Custom of 's]) linear_solver
val unwrap : ('m, 'data, 'kind, [`Custom of 's]) linear_solver -> 's
\end{Verbatim}
The first takes a set of operations, an initial state, and a storage matrix,
and returns a \ac{DLS} instance.
The matrix kind~\verb"'mk", indicating whether the matrix implementation is
standard or custom, need not be propagated to the result type since
callbacks only receive the matrix contents of type~\verb"'m".
The tag type argument indicates a custom linear solver whose internal state
has the given type.
This tag allows for a generic \verb"unwrap" function and thereby avoids
requiring users to maintain both a custom state---to set properties or get
statistics---and an actual instance.
Instances of the \ac{SPILS} family rely on a function that approximates a
matrix-vector product to model the (approximate) effect of the matrix~$A$
without representing it explicitly.
The success of this approach typically requires solving a preconditioned
system that results from scaling the matrix~$A$ and vectors~$b$ and~$x$, and
multiplying them by problem-specific matrices on the left, right, or both
sides.
\filbreak
The type for a generic \ac{SPILS} is exposed as:
\begin{Verbatim}
type ('data, 'kind, 'tag) LinearSolver.Iterative.linear_solver
\end{Verbatim}
where \verb"'data" and \verb"'kind" constrain the nvectors used and
\verb"'tag" is used to restrict operations for specific iterative methods.
The internal realization of this type and the creation of custom linear
solvers is essentially the same as for the \ac{DLS} module.
The following \ac{SPILS} instantiation function is typical.
\begin{Verbatim}
val spgmr : ?maxl:int
-> ?max_restarts:int
-> ?gs_type:gramschmidt_type
-> ('data, 'kind) Nvector.t
-> ('data, 'kind, [`Spgmr]) linear_solver
\end{Verbatim}
It takes three optional arguments to configure the linear solver and an
nvector to specify the problem size and compatibility constraints.
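For example, reusing \verb"nv_y" from the first example program, an instance
with a Krylov subspace dimension of five can be created as follows:
\begin{Verbatim}
let ls = LinearSolver.Iterative.spgmr ~maxl:5 nv_y
\end{Verbatim}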
\subsubsection{Associating generic linear solvers to sessions}
Generic linear solvers are associated with sessions after having created the
session and before simulating it.
In the OCaml interface, we incorporated these steps into the \verb"init"
functions that return solver sessions, as shown in
\cref{fig:cvodeinit:ocaml} and applied in \cref{sec:example:cartesian}.
This allows us to enforce required typing constraints and ensure that calls
to the underlying library are made in the correct order.
We introduce intermediate types to represent the combination of a generic
linear solver with its session-specific parameters and to group diagonal
approximation, \ac{DLS}, \ac{SPILS}, and alternate modules.
For instance, in the \textsc{\it CVODE}{} solver, we declare:
\begin{Verbatim}
type ('data, 'kind) session_linear_solver
\end{Verbatim}
which is realized internally by a function over a session value and an
nvector, and acts imperatively to configure the session.
Values of this type are provided by solver-specific submodules whose
particularities we now summarize.
For our purposes, the important thing is not what the different modules do,
but rather how the constraints on their use are expressed in Sundials/ML.
\paragraph{Diagonal linear solvers.}
The \textsc{\it CVODE}{} diagonal linear solver is interfaced by the submodule:
\begin{Verbatim}
module Diag : sig
val solver : ('data, 'kind) session_linear_solver
val get_work_space : ('data, 'kind) session -> int * int
val get_num_rhs_evals : ('data, 'kind) session -> int
end
\end{Verbatim}
A \verb"Cvode.Diag.solver" value is passed to \verb"Cvode.init" or
\verb"Cvode.reinit" where it is invoked and makes calls to the Sundials
\verb"CVodeSetIterType" and \verb"CVDiag" functions that set up the diagonal
linear solver.
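A session might then be created along the following lines (a sketch that
mirrors the earlier call to \verb"Cvode.init" and assumes that the Newton
iteration constructor takes a session linear solver as argument):
\begin{Verbatim}
let s = Cvode.(init BDF (Newton Diag.solver) default_tolerances
                    rhs 0.0 nv_y)
\end{Verbatim}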
The \verb"get_"$\ast$ functions retrieve statistics specific to the diagonal
linear solver---here the memory used by the diagonal solver in terms of real
and integer values, or the number of times that the right-hand side callback
has been invoked.
Other linear solvers also provide \verb"set_"$\ast$ functions.
As the underlying implementations of these functions sometimes typecast
memory under the assumption that the associated linear solver is in use, we
implement dynamic checks that throw an exception when a session is
configured with one linear solver and passed to a function that assumes
another.
This constraint cannot be adequately expressed using static types since a
session may be dynamically reinitialized with a different linear solver.
\paragraph{\acfp{DLS}.}
Interfacing \acp{DLS} requires treating nvector compatibility and the matrix
data structures passed to callback functions.
For instance, the \verb"Cvode.Dls" submodule contains the value:
\begin{Verbatim}
val solver : ?jac:'m jac_fn ->
('m, 'kind, 'tag) LinearSolver.Direct.serial_linear_solver ->
'kind serial_session_linear_solver
\end{Verbatim}
where \verb"serial_session_linear_solver" is another abbreviation for
restricting nvectors.
\begin{Verbatim}
type 'kind serial_session_linear_solver =
(Nvector_serial.data, [>Nvector_serial.kind] as 'kind) session_linear_solver
\end{Verbatim}
The \verb"?jac" label marks a named optional argument.
It is used to pass a function that calculates an explicit representation of
the system Jacobian matrix.
The only relevant detail here is the use of \verb"'m" to ensure that the
same matrix type is used by the callback function and the generic linear
solver.
Similarly, \verb"'kind" is propagated to the result type to ensure nvector
compatibility when a session is created.
Each solver has its own \verb"Dls" submodule into which the types and values
of \verb"LinearSolver.Direct" are imported to maximize the effect of the
local open syntax---for instance, as in the call to \verb"Ida.init" in the
example of \cref{sec:example:cartesian}.
\paragraph{\acfp{SPILS}.}
A \ac{SPILS} associates an iterative method with a preconditioner.
Iterative methods are exposed as functions that take an optional Jacobian
multiplication function and a preconditioner, for example,
\begin{Verbatim}
val solver :
('data, 'kind, 'tag) LinearSolver.Iterative.linear_solver
-> ?jac_times_vec:'data jac_times_setup_fn option * 'data jac_times_vec_fn
-> ('data, 'kind) preconditioner
-> ('data, 'kind) session_linear_solver
\end{Verbatim}
As is clear from the type signature, it is the preconditioner that
constrains nvector compatibility.
Internally the \verb"preconditioner" type pairs a preconditioning ``side''
(left, right, both, or none) with a function that configures a
preconditioner given a session and an nvector.
Functions are provided to produce elements of this type.
For instance, the \verb"Cvode.Spils" module provides:
\begin{Verbatim}
val prec_none : ('data, 'kind) preconditioner
val prec_left : ?setup:'data prec_setup_fn
-> 'data prec_solve_fn
-> ('data, 'kind) preconditioner
val prec_right : ?setup:'data prec_setup_fn
-> 'data prec_solve_fn
-> ('data, 'kind) preconditioner
val prec_both : ?setup:'data prec_setup_fn
-> 'data prec_solve_fn
-> ('data, 'kind) preconditioner
\end{Verbatim}
The last three produce preconditioners from optional setup functions and
mandatory solve functions over the nvector payload type.
These preconditioners are compatible with any type of nvector.
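For instance, combining the \verb"spgmr" instance \verb"ls" from the earlier
sketch with no preconditioning yields a session linear solver:
\begin{Verbatim}
let slsolver = Cvode.Spils.(solver ls prec_none)
\end{Verbatim}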
Banded preconditioners, on the other hand, are only compatible with serial,
OpenMP, and Pthreads nvectors.
We group them into a submodule \verb"Cvode.Spils.Banded":
\begin{Verbatim}
val prec_left : bandrange
-> (Nvector_serial.data, [> Nvector_serial.kind]) preconditioner
val prec_right : bandrange
-> (Nvector_serial.data, [> Nvector_serial.kind]) preconditioner
val prec_both : bandrange
-> (Nvector_serial.data, [> Nvector_serial.kind]) preconditioner
\end{Verbatim}
The banded preconditioners provide their own setup and solve functions.
The \acf{BBD} preconditioner is only compatible with parallel nvectors.
Its \verb"'data" type variable is instantiated to
\verb"Nvector_parallel.data", the payload of parallel nvectors, and its
\verb"'kind" type variable to \verb"Nvector_parallel.kind".
The declarations are made in a separate module \verb"Cvode_bbd", which is
simply not compiled when \ac{MPI} is not available.
Each solver has a \verb"Spils" submodule into which the types and values of
\verb"LinearSolver.Iterative" are imported to maximize the effect of the
local open syntax---as shown, for instance, in \cref{fig:cvodeinit:ocaml}.
\if0
\subsection{Linking}\label{sec:linking}
While conceptually \textsc{\it CVODES}{} and \textsc{\it IDAS}{} extend, respectively, \textsc{\it CVODE}{} and
\textsc{\it IDA}{} with new functionality, each solver is implemented as a distinct code
base.
There are thus five, counting \textsc{\it KINSOL}{}, distinct
\filename{libsundials\_$\ast$} libraries, two pairs of which export a common
subset of symbols.
To simplify as much as possible basic use of the interface, we produce a
\filename{sundials.cma} library that includes modules containing common data
types (\verb"Sundials", \verb"Dls", and \verb"Spils"), serial and custom
nvector implementations (\verb"Nvector", \verb"Nvector_serial", and
\verb"Nvector_custom"), and all of the solvers (\verb"Cvode",
\verb"Cvodes", \verb"Ida", \verb"Idas", and \verb"Kinsol").
We link it with the \filename{libsundials\_cvodes},
\filename{libsundials\_idas}, \filename{libsundials\_kinsol}, and
\filename{libsundials\_nvecserial} Sundials libraries.
A program using the library is compiled as follows:
\begin{Verbatim}
ocamlc -o myprog.byte -I +sundialsml bigarray.cma sundials.cma myprog.ml
\end{Verbatim}
We also provide an alternate \filename{sundials\_no\_sens.cma} library that
includes the same common and nvector modules, but only the \verb"Cvode",
\verb"Ida", and \verb"Kinsol" solver modules.
It is linked with the \filename{libsundials\_cvode},
\filename{libsundials\_ida}, \filename{libsundials\_kinsol}, and
\filename{libsundials\_nvecserial} Sundials libraries.
Evidently, this library only provides a subset of the solvers and it executes
different underlying code.
The difference can be seen in the results of functions like
\verb"Cvode.get_work_space" and \verb"Ida.get_work_space" that return
different results depending on which Sundials libraries are linked.
The alternate library is thus at least important for the testing described
in \cref{sec:eval} where the outputs of different implementations are
expected to match precisely.
All of the modules with dependencies on \ac{MPI} (\verb"Nvector_parallel",
\verb"Cvode_bbd", \verb"Cvodes_bbd", \verb"Ida_bbd", and
\verb"Idas_bbd") are compiled into the \filename{sundials\_mpi.cma} library
which is linked with \filename{libsundials\_nvecparallel}.
Compiling OCaml programs that use parallel nvectors requires adding
\filename{sundials\_mpi.cma} after \filename{sundials.cma} (or
\filename{sundials\_no\_sens.cma}) in calls to \filename{ocamlc}.
\fi
\section{Evaluation}\label{sec:eval}
An interface layer inevitably adds run-time overhead: there is extra code to
execute at each call to, or callback from, the library.
This section presents our attempt to quantify this overhead.
Since we are interested in the cost of using Sundials from programs written
in OCaml, rather than count the number of additional instructions per call
or callback we think it more relevant to compare the performance of programs
written in OCaml with equivalent programs written directly in C.
We consider two programs equivalent when they
produce identical sequences of output bytes using the same sequence of
solver steps in the Sundials library.
Here we compare wall clock run times, which, despite the risk of
interference from other processes and environmental factors, have the
advantages of being relatively simple to measure and directly relevant to
users.
Sundials is distributed with example programs (71 with serial, OpenMP, or
Pthreads nvectors and 21 with parallel nvectors---not counting duplicates)
that exercise a wide range of solver features in numerically interesting
ways.
We translated them all into OCaml.\footnote{The translations aim to
facilitate direct comparison with the original code, to ease debugging and
maintenance. They are not necessarily paragons of good OCaml style.}
Comparing the outputs of corresponding OCaml and C versions with
\texttt{diff} led us to correct many bugs in our interface and example
implementations, and even to discover and report several bugs in Sundials
itself.
We also used \verb"valgrind"~\cite{NethercoteSew:Valgrind:2007} and manual
invocations of the garbage collector to reveal memory-handling errors in our
code.
\SaveVerb{unsafe}"--unsafe"
\SaveVerb{buildtype}"CMAKE_BUILD_TYPE=Release"
\begin{figure*}
\begin{center}
\includegraphics[height=.71\textwidth,angle=270,clip,trim=0 0 0 0]{perf}
\end{center}
\caption{Serial examples: C (gcc 6.3.0 with -O3) versus
OCaml native code (4.07.0).
The black bars show results obtained with the \protect\UseVerb{unsafe}
option that turns off array bounds checking and other dynamic checks.
The grey lines show the results with dynamic checks.
Results were obtained under Linux 4.9.0 on an Intel i7 running at
\SI{2.60}{\GHz} with a \SI{1}{\mega\byte} L2 cache, a \SI{6}{\mega\byte} L3
cache, and \SI{8}{\giga\byte} of RAM.
Sundials was compiled with
\protect\UseVerb{buildtype}.}\label{fig:benchmarks:serial}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[height=.71\textwidth,angle=270,clip,trim=0 0 246
0]{perf-mpi}
\end{center}
\caption{Parallel examples: C (gcc 6.3.0 with -O3) versus
OCaml native code (4.07.0).
The black bars show results obtained with the \protect\UseVerb{unsafe}
option that turns off array bounds checking and other dynamic checks.
The grey lines show the results with dynamic checks.
Results were obtained under Linux 4.9.0 on an Intel i7 running at
\SI{2.60}{\GHz} with a \SI{1}{\mega\byte} L2 cache, a \SI{6}{\mega\byte} L3
cache, and \SI{8}{\giga\byte} of RAM.
Sundials was compiled with
\protect\UseVerb{buildtype}.}\label{fig:benchmarks:parallel}
\end{figure*}
After ensuring the equivalence of the example programs, we used them to
obtain and optimize performance results.
As we explain below, most optimizations were in the example programs
themselves, but we were able to validate and evaluate some design choices,
most notably the alternative proposal for sessions described in
\cref{sec:sessions}.
The bars in \cref{fig:benchmarks:serial,fig:benchmarks:parallel} show the
ratios of the execution times of the OCaml code against the C
code.\footnote{The Sundials/ML source code distribution includes all examples
along with the scripts and build targets necessary to reproduce the
experiments described in this paper.}
A value of 2.0 on the horizontal axis means that the OCaml version takes
twice as long as the C version to calculate the same result.
The extent of the bars show 99.5\% confidence intervals for the OCaml/C
ratio, calculated according to Chen~et~al.'s
technique~\cite{ChenEtAl:IEEETrans:2015}.
Formally, if $O$ and $C$ are random variables representing the running time
of a given test in OCaml and C, respectively, then the bars show the range
of all $\gamma$ for which the Mann-Whitney U-test (also known as the
Wilcoxon rank-sum test) does \emph{not} reject the null hypothesis
$
P(\gamma C > O) = P(\gamma C < O)
$
at the 99.5\% confidence level.
Intuitively, if we handicap the C code by scaling its time by $\gamma$, then
the observed measurements are consistent with the assumption that the
handicapped C code beats the non-handicapped OCaml code exactly half of the
time: the observed data may be skewed in one direction or the other, but the
deviation from parity is smaller than random noise would give 99.5\% of the
time if they really were equally competitive.
The results include customized examples: the
\filename{kinFoodWeb\_kry\_custom} example uses custom nvectors with
low-level operations implemented in OCaml on \verb"float array"s; the
\filename{$\ast$\_alt} examples use an alternate linear solver reimplemented
in OCaml using the underlying \verb"Dls" binding.
This involves calls from OCaml to C to OCaml to C.
Each custom example produces the same output as the corresponding original
(their predecessors in the graph).
Two OCaml versions, \filename{idasFoodWeb\_bnd\_omp} and
\filename{idasFoodWeb\_kry\_omp} are faster than the C versions---we explain
why below.
The black bars in \cref{fig:benchmarks:serial,fig:benchmarks:parallel} give
the ratios achieved when the OCaml versions are compiled without checks on
array access, nvector compatibility, or matrix validity (since the C code
does not perform these checks).
The grey lines show the results with dynamic checks; the overhead is
typically negligible.
The graph suggests that the OCaml versions are rarely more than 50\% slower
than the original ones and that they are often less than 20\% slower.
That said, care must be taken when extrapolating from the results of this
particular experiment to the performance of real applications, which will
not be iterated \num{1000}s of times and where the costs of garbage
collection can be minimized given sufficient memory.
The actual C run times are not given in
\cref{fig:benchmarks:serial,fig:benchmarks:parallel}.
Most of them are less than \SI{1}{\milli\s}; nearly all of them are less
than \SI{1}{\s}; the longest is on the order of \SI{4}{\s}
(\filename{ark\_diurnal\_kry\_bbd\_p}).
We were not able to profile such short run times directly: the
\filename{time} and \filename{gprof} commands simply show \SI{0}{\s}.
The figures in the graph were obtained by modifying each example (in both C
and OCaml) to repeatedly execute its \verb"main" function.
Since the C code calls \verb"free" on all of its data, we also manually
trigger a full major collection and heap compaction in OCaml at the end of the
program (the ratios are smaller otherwise).
The figure compares the median ratios over 10 such executions.
The fastest examples require \num{1000}s of iterations to produce a
measurable result, so we must vary the number of repetitions per example to
avoid the slower examples taking too long (several hours each).
Iterating the examples so many times sometimes amplifies factors other than
interface overhead.
For the two examples where OCaml apparently performs better than C, the
original C~code includes OpenMP pragmas around loops in the callback
functions.
This actually slows them down and the OCaml code does better because this
feature is not available.
In general, we were able to make the OCaml versions of the examples up to
four times faster by following three simple and unsurprising guidelines.
\begin{enumerate}
\item
We added explicit type annotations to all vector arguments.
For instance, rather than declare a callback with
\begin{Verbatim}
let f t y yd = ...
\end{Verbatim}
we follow the standard approach of adding type annotations,
\begin{Verbatim}
type rarray = (float, float64_elt, c_layout) Bigarray.Array1.t
let f t (y : rarray) (yd : rarray) = ...
\end{Verbatim}
so that the compiler need not generate polymorphic code and can optimize for
the big array layout.
\item
We avoided the functions \verb"Bigarray.Array1.sub",
\verb"Bigarray.Array2.slice_left" that allocate fresh big arrays on the
major heap and thereby increase the frequency and cost of garbage
collection.
They can usually be avoided by explicitly passing and manipulating array
offsets, as in the sketch following this list.
We found that when part of an array is to be passed to another function, as
for some \ac{MPI} calls, it can be faster to copy into and out of a
temporary array.
\item
We wrote numeric expressions and loops according to
Leroy's~\cite{Leroy:Numeric:2002} advice to avoid float `boxing'.
\end{enumerate}
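The following fragment is an illustrative sketch (not one of the translated
examples) that combines the three guidelines: the argument carries an
explicit big-array type annotation, a slice is traversed through an explicit
offset and length rather than through \verb"Bigarray.Array1.sub", and the
accumulator is a local float reference that the native-code compiler can
keep unboxed.
\begin{Verbatim}
type rarray = (float, Bigarray.float64_elt, Bigarray.c_layout)
              Bigarray.Array1.t

(* Sum v.{pos} ... v.{pos + len - 1} without allocating a view. *)
let sum_slice (v : rarray) pos len =
  let acc = ref 0.0 in
  for i = pos to pos + len - 1 do
    acc := !acc +. v.{i}
  done;
  !acc
\end{Verbatim}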
As a side benefit of the performance testing, iterating each example program
\num{1000}s of times with some control over the garbage collector revealed
several subtle memory corruption errors in our interface.
We investigated and resolved these using manual code review and a
C~debugger.
In summary, the results obtained, albeit against an admittedly small set of
examples, indicate that OCaml code using the Sundials solvers should rarely
be more than 50\% slower than equivalent code written in C, provided the
guidelines above are followed, and it may be only about 20\% slower.
One response to the question ``Should I use OCaml to solve my numeric
problem?'' is to rephrase it in terms of the cost of calculating results
(``How long must I wait?'') against the cost of producing and maintaining
programs (``How much effort will it take to write, debug, and later modify a
program?'').
This section provides insight into the former cost.
The latter cost is difficult to quantify, but it is arguably easier to write
and debug OCaml code thanks to automatic memory management,
strong static type checking,
bounds checking on arrays,
and higher-order functions.
This combined with the other features of OCaml---like algebraic data types,
pattern matching, and polymorphism---make the Sundials/ML library especially
compelling for programs that combine numeric calculation and symbolic
manipulation.
\section{Conclusion}\label{sec:concl}
We present a comprehensive OCaml interface to the Sundials suite of numeric
solvers.
We outline the main features of the underlying library, demonstrate its use
via our interface, describe the implementation in detail, and summarize
extensive benchmarking.
Sundials is an inherently imperative library.
Data structures like vectors and matrices are reused to minimize memory
allocation and deallocation, and modified in place to minimize copying.
It works well in an ML-style language where it is possible to mix features
of imperative programming---like sequencing, loops, mutable data structures,
and exceptions---with those of functional programming---like higher-order
functions, closures, and algebraic data types.
An interesting question that we did not treat is how to build efficient
numerical solvers in a more functional style.
It turns out that the abstract data types used to structure the Sundials
library are also very useful for implementing a high-level interface.
By overriding elements of these data structures, namely the clone and
destroy operations, we are able to smoothly integrate them with OCaml's
automatic memory management.
Designers of other C libraries that intend to support high-level languages
might also consider this approach.
For Sundials, some minor improvements are possible---for instance, adding
\begin{inparaenum}
\item a user data field to nvectors and matrices that could be exploited for
`backlinks',
\item a function to return the address of the session user data field so
that it can be registered directly with the garbage collector, and
\item a mechanism for overriding the reallocation mechanism within banded
and sparse matrices to eliminate the need to reimplement certain
operations,
\end{inparaenum}---but the approach works well overall.
In our interface, C-side structures are mirrored by OCaml structures that
combine low-level pointers, their associated finalize functions, and
high-level values.
This is a standard approach that we adapted for two particular requirements,
namely, to give direct access to low-level arrays and to treat callbacks
efficiently.
The first requirement is addressed by combining features of the OCaml big
array library with the ability to override the Sundials clone and destroy
operations.
The second requirement necessitates a means to obtain the OCaml ``mirror''
of a given instance of a C data structure.
The backlink fields solve this problem well provided care is taken to avoid
inter-heap cycles.
Besides the usual approach of using weak references, we demonstrate an
alternative for when the mirrored structures are ``wrappers''.
In this case, the pointer necessary to recover an OCaml structure from C
drops a level in the wrapper hierarchy.
This is a simple solution to the cycle problem that is also convenient for
library users.
We note that it engenders two kinds of instance: those created in OCaml and
those cloned in C.
For instances created in OCaml and used in C, care must be taken to maintain
a reference from OCaml until the instance is no longer needed as the
backlink itself does not prevent garbage collection and ensuing instance
destruction.
For instances created in C and passed back into OCaml, there is no such
problem provided the destruction function only unregisters the global root
and does not actually destroy the instance.
Our work relies heavily on several features of the OCaml runtime, most
notably flexible functions for creating and manipulating big arrays, the
ability to register global garbage collection roots in structures on the C
heap, features for invoking callbacks and recovering exceptions from them,
and the ability to associate finalize functions to custom blocks.
We found phantom types combined with polymorphic variants are a very
effective way to express constraints on the use and combination of data
structures that arise from the underlying library.
While \acp{GADT} are used in our implementation, they are not exposed
through the interface, probably because in this library we prefer opaque
types to variant types since they permit a better division into submodules.
The evaluation results show that the programming convenience provided by
OCaml and our library has a measurable runtime cost.
The overall results, however, are not as bad as may perhaps be feared.
We conjecture that, in most cases, the time gained in programming,
debugging, and maintenance will outweigh the cumulative time lost to
interface overhead.
Since programs that build on Sundials typically express sophisticated
numerical models, ensuring their readability through high-level features and
good style is an important consideration.
Ideally, programming languages and their libraries facilitate reasoning
about models, even if only informally, and precisely communicating
techniques, ideas, and knowledge.
We hope our library contributes to these aims.
\section*{Acknowledgements}\label{sec:ack}
We thank Kenichi Asai and Mark Shinwell for their persistence and
thoroughness, and the anonymous reviewers for their helpful suggestions.
The work described in this paper was supported by the ITEA 3 project 11004
MODRIO (Model driven physical systems operation).
\section{Introduction}\label{introduction}
Joint models of longitudinal and event-time data have been extensively studied. They take the following general form:
\begin{equation}\label{joint model}
\begin{aligned}
&\lambda\left(t,Z(t)\right)=\lambda_{0}
\left(t\right)
\exp\left(b^{\top}Z(t)\right)\\
&Z\left(t\right)=\mathcal{Z}\left(t,a\right)
\end{aligned}
\end{equation}
where $\lambda\left(t,z\right)$ is the conditional hazard function given time $t$ and the temporal covariate $Z(t)=z$; it has the Cox form, consisting of a non-parametric baseline hazard $\lambda_0(t)$ and a parametric component $\exp\left(b^{\top}z\right)$. The temporal covariate is given through a $\mathfrak{p}$-dimensional stochastic process $\mathcal{Z}$, which is parametrized by a set of parameters $a$.
Joint models of type \eqref{joint model} have many variants, derived either from changing the functional form of the hazard or from different specifications of the longitudinal process. A variety of alternative functional forms of the hazard have been proposed and widely studied in the literature \citep{rizopoulos2011dynamic,kim2013joint,
wang2001jointly,tsiatis1995modeling,
taylor1994stochastic,chen2014joint}. Although these forms are quite useful in different application settings, from the perspective of estimation strategy they do not really differ from the ordinary Cox hazard in \eqref{joint model}.
In contrast, variation in the specification of the longitudinal process is more interesting, as different specifications are associated with different protocol models in survival analysis, for which different estimation procedures have to be developed.
For instance, when
\begin{equation}\label{stationary cox}
\mathcal{Z}\left(t,a\right)\equiv z
\end{equation}
the longitudinal process reduces to a deterministic constant process ($z$), and model \eqref{joint model} reduces to the standard Cox proportional hazards model \citep{cox1972regression}, which can be estimated through the classical maximum partial likelihood (MPL) procedure \citep{cox1972regression,andersen1982cox,andersen1992repeated}.
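For completeness (this is textbook material, stated here without censoring or ties), the MPL estimate of $b$ in this special case maximizes the partial likelihood
\begin{equation*}
L(b)=\prod_{i=1}^{n}\frac{\exp\left(b^{\top}z_{i}\right)}{\sum_{j:\,T_{j}\geq T_{i}}\exp\left(b^{\top}z_{j}\right)},
\end{equation*}
which does not involve the baseline hazard $\lambda_{0}$.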
When the longitudinal process is given through a linear mixed model as below:
\begin{equation}\label{bio-stat longi}
\mathcal{Z}\left(t,a\right)=\alpha\cdot \mathcal{Z}_1(t)+\beta\cdot \mathcal{Z}_2(t)+e
\end{equation}
model \eqref{joint model} becomes a family of joint models that are extensively applied by bio-statisticians in medical-related fields where clinical-trial data is available \citep{taylor1994stochastic,tsiatis1995modeling,wang2001jointly,rizopoulos2011dynamic,kim2013joint,barrett2015joint,wu2014joint,chen2014joint}. In \eqref{bio-stat longi}, $a=(\alpha,\beta)$, where $\alpha$ is a constant parameter vector, while $\beta$ is a random vector assumed to be normally distributed with mean $\mathcal{M}_1$ and covariance $\Sigma_1$. $e$ is a white-noise observational error with mean $\mathcal{M}_2$ and covariance $\Sigma_2$. $\mathcal{Z}_1$, $\mathcal{Z}_2$ are the so-called configuration matrices, which are essentially two deterministic and known functions of the variable $t$. Due to the presence of the unobservable $\beta$, estimation of model \eqref{bio-stat longi} is based on the Expectation-Maximization (EM) algorithm \citep{wu2007joint,rizopoulos2010jm}.
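As a concrete illustration (a special case, not a restriction of the model), take $\mathcal{Z}_1(t)=\mathcal{Z}_2(t)=(1,t)^{\top}$; then \eqref{bio-stat longi} becomes the familiar random intercept and slope model
\begin{equation*}
\mathcal{Z}\left(t,a\right)=(\alpha_1+\beta_1)+(\alpha_2+\beta_2)\,t+e,
\end{equation*}
with a normally distributed random intercept $\beta_1$ and random slope $\beta_2$.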
Longitudinal processes do not have to be continuous in all dimensions; they can also take the form of a counting process with jumps, as below:
\begin{equation}\label{count longi}
\mathcal{Z}\left(t,a\right)=\#\left\{\textrm{E}_{\tau}=1:\tau\in (0,t)\right\}
\end{equation}
where $\textrm{E}_{\tau}$ is the indicator of the occurrence of some recurrent event at time $\tau$, which is assumed to follow some underlying distribution subject to the parameter $a$, and the $\#$ operation counts the cardinality of the given set. Model \eqref{count longi} is a special case of panel count models \citep{riphahn2003incentive,sun2014panel}; many procedures have been proposed for its estimation, and an MPL-based estimation is discussed in \cite{andersen1982cox,andersen1992repeated}.
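As a simple illustration (the homogeneous special case), if the recurrent events form a Poisson process with constant rate $a>0$, then
\begin{equation*}
P\left(\mathcal{Z}(t,a)=k\right)=\frac{(at)^{k}}{k!}\,e^{-at},\qquad k=0,1,2,\dots
\end{equation*}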
Although estimation of model \eqref{joint model} with different longitudinal processes has been extensively discussed, in most cases the discussion focuses merely on the situation where a sufficient amount of longitudinal observations is available. Availability of longitudinal data may not be a big issue for clinical trials, but in many other applications, such as medical cost studies and default risk forecasting, it is often the case that longitudinal observations are systematically missed \citep{laird1988missing,hogan1997model,chen2014joint,sattar2017joint}. In medical cost studies, for instance, in order to protect privacy, many publicly available medical cost databases do not release longitudinal observations of the cost accumulation process during the inpatient hospital stay, except the total charge by the discharge day. Medical databases of this type include the National Inpatient Sample (NIS) of the United States and the New York State's Statewide Planning and Research Cooperative System (SPARCS). In studies of the default rate of small business loans and/or consumer loans, all key financial indicators of borrowers are missing during the entire repayment period; this is because, for small business and individual borrowers, financial data is never collected unless some crucial event occurs as a trigger, such as overdue payment and/or default. So it is common in financial applications that the values of key variables are only available at the event time, with all intermediate observations missed.
To handle missing longitudinal data, a novel simulation-based estimation procedure is proposed in this paper. Without loss of generality, it is designed for the following form of input data, which is the typical data type in studies of default risk and medical cost:
\begin{equation}\label{partially missing data}
\left\{\left\{T_i,\left\{Z_{i,k,T_i}:k\in \mathcal{K}\right\},\left\{\left(Z_{i,k,t_{i,j}}\right)_{j=1}^{m_i}:k\in P/\mathcal{K}\right\}\right\}:i=1,\dots,n\right\}
\end{equation}
where the input data contains $n$ subjects; $m_i$ is the number of longitudinal observation times for subject $i$; and $t_{i,j}$ and $Z_{i,k,t_{i,j}}$ are the $j$th observation time for subject $i$ and the value of the $k$th variable observed at time $t_{i,j}$ for $i$, respectively. For the observation times, we assume the last observation time $t_{i,m_i}$ is always equal to the event time $T_i$ (censoring can easily be incorporated whenever it is uninformative, but it is not the major topic of this study). $P=\{1,\dots,\mathfrak{p}\}$ denotes the set of indices of the $\mathfrak{p}$ longitudinal variables. There are two subsets of variables: $\mathcal{K}$ is the set of variables whose longitudinal observations are systematically missed except for the observation at the event time, while the complement set $P/\mathcal{K}$ consists of variables for which longitudinal observations are available at all observation times.
The simulation-based procedure turns out to be capable of generating consistent estimators of all parametric and non-parametric components of model \eqref{joint model} from input data of type \eqref{partially missing data}. To the best of our knowledge, this is the first procedure that can handle \eqref{partially missing data}. Apart from feasibility, our simulation-based procedure is uniformly applicable to all three specifications of the longitudinal process, \eqref{stationary cox}, \eqref{bio-stat longi} and \eqref{count longi}, as well as their mixture. This uniformity makes our procedure attractive, as most existing procedures are designed either for the continuous longitudinal process \eqref{bio-stat longi} \citep{andersen1982cox,karr2017point,lin2000semiparametric} or for the counting process \eqref{count longi} \citep{taylor1994stochastic,kim2013joint,zeng2007maximum}; their combination is rarely discussed together with survival analysis. In addition, uniformity makes it possible to integrate the estimation of different classes of joint models into one single software package, which makes our procedure friendly to practitioners.
From the perspective of computation, the simulation-based procedure outperforms many existing procedures \citep{rizopoulos2010jm,guo2004separate} in the sense of being compatible with parallel computing frameworks. In fact, there are two major steps involved in the approach: a simulation step and an optimization step. The simulation step is carried out path-wise and so is completely parallelizable. From the simulation result, a complete version of the likelihood function is derivable without any latent variable involved, so the optimization step does not rely on the EM algorithm and can be parallelized as well if a stochastic descent algorithm is applied. Its compatibility with parallel computing makes the simulation-based procedure useful in handling massive data and in financial applications.
The paper is organized as follows. In Section 2, we present the model specification and sketch the estimation procedure in detail.
The large sample properties of the resulting estimators are stated in
Section 3. Simulation results and the application to massive consumer-loan data
are presented in Section 4. Section 5 discusses some extensions of
our model and concludes. All proofs are collected in the Appendix.
\section{Model Specification \& Estimation}\label{model specification}
\subsection{Model Specification}\label{model_specification}
In this study, we consider a $\mathfrak{p}$-dimensional mixed longitudinal process whose projection onto the $i$th dimension, $Z_i$, is either a counting process of type \eqref{count longi} or absolutely continuous in time, in which case, without loss of generality, it can be expressed as a time integral:
\begin{equation}\label{time-integral longi}
Z_i\left(t\right)=Z_{i0}+\int_{0}^{t}\epsilon_i(s)ds
\end{equation}
where $\epsilon_i(t)$ is an arbitrary stochastic process with finite first and second moments at every $t$, and $Z_{i0}$ is an arbitrary initial random variable. Representation \eqref{time-integral longi} includes both \eqref{stationary cox} and \eqref{bio-stat longi} as special cases: it reduces to \eqref{stationary cox} as long as $\epsilon_i(t)\equiv 0$ and $Z_{i0}\equiv c$, and it reduces to \eqref{bio-stat longi} if $\mathcal{Z}_1$ and $\mathcal{Z}_2$ in \eqref{bio-stat longi} are absolutely continuous with respect to $t$, which is usually assumed to hold in practice.
To be general, we allow some of the longitudinal dimensions to be discontinuous in time, say representable as counting processes. For simplicity of presentation, we consider the case where only one covariate dimension is a counting process, while our methodology extends to the multi-dimensional counting process situation without much modification. From now on, we will denote the longitudinal process as $Z(t)=(Z^{\ast}(t),Z^{-\ast}(t))$, with $Z^{\ast}$ and $Z^{-\ast}$ representing the counting dimension and the absolutely continuous dimensions, respectively.
For the counting dimension, following \cite{andersen1982cox,lin2000semiparametric}, we assume the conditional jumping intensity of the counting process is given through a Cox model:
\begin{equation}\label{jump intensity}
\lambda^{c}\left(t,Z(t)\right)=\lambda^{c}_0(t)\exp\left(b^{c\top}Z(t)\right)
\end{equation}
where the superscript $c$ distinguishes the intensity of the counting process from the intensity of the terminal event process.
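Spelled out, \eqref{jump intensity} says that, conditional on the history up to $t$, the probability of a jump of the counting dimension in $[t,t+dt)$ is
\begin{equation*}
\lambda^{c}_0(t)\exp\left(b^{c\top}Z(t)\right)dt+o(dt),
\end{equation*}
which is exactly the quantity discretized by the simulation algorithms below.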
\subsection{Simulatibility and simulation procedures}\label{simulate}
Since our estimation procedure is simulation-based, it requires the entire longitudinal process to be {\bf simulatible}, which is formally defined as below:
\begin{definition}\label{simulatible}
A longitudinal process $Z(t)$ is simulatible if for every $dt>0$, it is possible to generate a sequence of random vectors $\{\zeta_i:\,i=0,1,2,\dots\}$ such that the process
\begin{equation}\label{approximation process}
Z'(t)=\zeta_{\left\lfloor t/dt\right\rfloor}
\end{equation}
converges weakly to the true process $Z(t)$ as $dt\rightarrow 0$, where $\lfloor t/dt\rfloor$ denotes the integral part of $t/dt$.
In addition, a longitudinal process $Z(t)$ is {\bf empirically simulatible} if there exists a simulation procedure such that for every positive integer $N$, $k$ and every $dt>0$, $N$ independent and identically distributed (i.i.d.) sample sequences of the form \begin{equation}\label{empirical approximation process}
\{\zeta_{i,l}:\,l=1,\dots,k\}, \,i=1,\dots,N
\end{equation} can be generated from the procedure, such that for every $l\leq k$, the cross section $\{\zeta_{i,l}:\,i=1,\dots,N\}$ is $N$ i.i.d. samples of $\zeta_{l}$ with $\zeta_{l}$ being the $l$th element of the simulatible sequence in \eqref{approximation process} corresponding to $dt$.
\end{definition}
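As a simple illustration of definition \ref{simulatible} (this example is not used elsewhere in the paper), standard Brownian motion is empirically simulatible via the Euler--Maruyama scheme: drawing $\eta_{i,l}$ i.i.d. standard normal and setting
\begin{equation*}
\zeta_{i,0}=0,\qquad \zeta_{i,l+1}=\zeta_{i,l}+\sqrt{dt}\,\eta_{i,l},
\end{equation*}
the piecewise-constant processes \eqref{approximation process} built from these sequences converge weakly to Brownian motion as $dt\rightarrow 0$.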
Most widely-studied longitudinal processes are empirically simulatible. A process of type \eqref{stationary cox} is simulatible by computing the time integral \eqref{time-integral longi} with $\epsilon\equiv 0$. A process of type \eqref{bio-stat longi} is simulatible by the following algorithm:
\begin{minipage}{\linewidth}
\begin{algorithm} [H]
\caption{{\bf GenSim}--generate simulatible sequence for \eqref{bio-stat longi}}\label{main algorithm 0}
\begin{algorithmic} [1]
\Require \\
constant parameter, $\alpha$\\
distribution of $\beta$ in \eqref{bio-stat longi}, $F_{\beta}$\\
distribution of $\alpha\cdot\mathcal{Z}_1(0)+\beta\cdot\mathcal{Z}_2(0)+e$ in \eqref{bio-stat longi}, $F_{0}$\\
configuration matrices $\mathcal{Z}_1$ and $\mathcal{Z}_2$\\
initial time, $t$\\
interval length, $dt$\\
sample size, $N$
\Ensure \\
A sequence of random variables satisfies \eqref{empirical approximation process}, denote as $V$.
\algrule
\State $\textrm{Set } V=\emptyset$
\State $\textrm{Draw }N \textrm{ independent samples from distribution }F_{0} \textrm{, denote }\hat{Z} \textrm{ as the set of samples}$
\State $V\varB{.append}(\hat{Z})$
\State $\textrm{Draw }N \textrm{ independent samples from distribution }F_{\beta} \textrm{, denote }\hat{\beta} \textrm{ as the set of samples}$
\For {$\textbf{each } i \textbf{ in } \mathbb{N}$}
\State $\textrm{Set }\hat{Z}=V\varB{.last}$
\State $\textrm{Set }\hat{Z}_1=\emptyset$
\For {$ j=1 \textbf{ to } N$}
\State $\textrm{Set }z=\hat{Z}[j]$
\State $\textrm{Set }\beta=\hat{\beta}[j]$
\State $\textrm{ReSet }z+=\alpha\cdot (\mathcal{Z}_1(t[j]+i\cdot dt)-\mathcal{Z}_1(t[j]+(i-1)\cdot dt))+\beta\cdot (\mathcal{Z}_2(t[j]+i\cdot dt)-\mathcal{Z}_2(t[j]+(i-1)\cdot dt))$
\State $\hat{Z}_1\varB{.append}\left(z\right)$
\EndFor
\State $V\varB{.append}(\hat{Z}_1)$
\EndFor\\
\Return $V$
\end{algorithmic}
\end{algorithm}
\end{minipage}
\vspace{0.3cm}
In algorithm \ref{main algorithm 0}, $V\varB{.last}$ is the last element of the sequence $V$, $x[j]$ denotes the $j$th element of a sequence $x$, $\mathbb{N}$ is the set of natural numbers starting from $1$, and the $append$ operation appends an element to the given sequence. Note that the initial time $t$ is an $N$-dimensional vector and is considered an input, whose entries are constantly $0$ in most cases. However, algorithms \ref{main algorithm 0} and \ref{main algorithm 1} can be used as intermediate steps in the simulation of a longitudinal process with a counting component; in that case, the initial time $t$ may not always be zero, so we leave $t$ as a free parameter.
In general, denote by $\epsilon(t)$ the vector $(\epsilon_1(t),\dots,\epsilon_{\mathfrak{p}}(t))$, with $\epsilon_i$ the variational rate of the $i$th-dimension time integral in \eqref{time-integral longi}. When $\epsilon(t)$ is Markovian in the following sense:
\begin{equation}\label{markovian}
p\left(\epsilon(t)\in x+dx\mid Z(s),\,s\leq t\right)=p\left(\epsilon(t)\in x+dx\mid Z(t),t\right),
\end{equation}
the absolutely continuous process \eqref{time-integral longi} is simulatible via the following Euler-type algorithm, where $p(\cdot\mid Z(s),\,s\leq t)$ is the conditional density of $\epsilon(t)$ given the full information of the longitudinal process up to time $t$, and $p(\cdot\mid Z(t),t)$ is the conditional density given only the longitudinal observation at time $t$:
\begin{minipage}{\linewidth}
\begin{algorithm} [H]
\caption{{\bf GenSim1}--generate simulatible sequence for \eqref{time-integral longi}}\label{main algorithm 1}
\begin{algorithmic} [1]
\Require \\
longitudinal parameter, $a$\\
Distribution of $Z_0$ for fixed $a$, $F_{0}(\cdot,a)$, or a set of $N$ samples of $Z_0$, $S_N$\\
conditional pdf of the form \eqref{markovian}, $p$\\
initial time, $t$\\
interval length, $dt$\\
sample size, $N$
\Ensure \\
A sequence of random variables satisfies \eqref{empirical approximation process}, denote as $V$.
\algrule
\State $\textrm{Set } V=\emptyset$
\State $\textrm{Set }\hat{Z}=S_N \textrm{ or the set of }N \textrm{ independent samples drawn from distribution }F_{0}(\cdot,a)$
\State $V\varB{.append}(\hat{Z})$
\For {$\textbf{each } i \textbf{ in } \mathbb{N}$}
\State $\textrm{Set }\hat{Z}=V\varB{.last}$
\State $\textrm{Set }\hat{Z}_1=\emptyset$
\For {$j=1\textbf{ to } N$}
\State $\textrm{Draw a random sample, }z' \textrm{, from conditional density }p(\cdot \mid z,t)$
\State $\hat{Z}_1\varB{.append}(z+z'\cdot dt)$
\EndFor
\State $V\varB{.append}(\hat{Z}_1)$
\EndFor\\
\Return $V$
\end{algorithmic}
\end{algorithm}
\end{minipage}\\
A longitudinal process with one counting dimension is also empirically simulatible, although the simulation algorithm becomes trickier. For simplicity of presentation, in the following pseudo-code (algorithm \ref{main algorithm 2}) we assume that the first dimension (indexed by $0$) of the longitudinal process represents the counting component. In addition, we require that at the initial time the counting dimension puts all mass at $0$; this restriction can easily be relaxed, but the pseudo-code would become too redundant.
\begin{algorithm
\small
\caption{{\bf GenSim2}--generate simulatible sequence with one counting dimension}\label{main algorithm 2}
\begin{algorithmic} [1]
\Require \\
longitudinal setup, $\Omega=(a, b^c,\lambda_0^c)$\\
distribution of $Z_0$ for fixed longitudinal setup, $F_{0}(0,\cdot,\Omega)$; or a set of $N$ samples of $Z_0$, $S_N$\\
conditional pdf of form \eqref{markovian}, $p$\\
interval length, $dt$\\
sample size, $N$
\Ensure \\
A sequence of random variables satisfies \eqref{empirical approximation process}, denote as $V$.
\algrule
\State $\textrm{Set }\hat{Z}=S_N \textrm{ or the set of }N \textrm{ independent samples drawn from distribution }F_{0}(0,\cdot,\Omega)$
\State $\textrm{Set }V=\emptyset$
\State $\textrm{Set }jump\_t \textrm{ as an }N\textrm{ dimensional vector with all entries being }0$
\For {$\textbf{each } i \textbf{ in } \mathbb{N}$}
\State $\textrm{ReSet }p(j,x\mid z,t)=\begin{cases}
p(x\mid z,t)&\textrm{ if }j=z^{\ast}\\
0 &\textrm{ else}
\end{cases}$
\State $\textrm{Set }V' = \varB{GenSim1}(\Omega,\hat{Z},p,jump\_t,dt,N)$
\If {$i=1$}
\State $\textrm{ReSet }V=V'$
\Else
\For {$j=1 \textrm{ to }N \textrm{ and } k\geq jump\_t[j]$}
\State $\textrm{ReSet }V[k][j]=V'[k-jump\_t[j]][j]$
\EndFor
\EndIf
\State $\textrm{Set }U=V'$
\For {$\textbf{each } k \textbf{ in } \mathbb{N}$}
\State $\textrm{ReSet }V'[k]=\exp\left(-\sum_{j'=1}^{k}\exp\left(\varB{dot}(U[j'],b^c)\right)\cdot \lambda_0^c(j'\cdot dt)\cdot dt\right)$
\EndFor
\For {$ j=1 \textbf{ to } N$}
\State $\textrm{Set }\omega=uni(0,1)$
\State $\textrm{Set }k=\min\{k'\in \mathbb{N}:\,V'[k'][j]\leq \omega\}$
\State $\textrm{ReSet }jump\_t[j]=k$
\For {$k'\geq k$}
\State $\textrm{ReSet }V[k'][j][0]+=1$
\EndFor
\State $\textrm{ReSet }\hat{Z}[j]=V[jump\_t[j]][j]$
\EndFor
\EndFor\\
\Return $V$
\end{algorithmic}
\end{algorithm}
In algorithm \ref{main algorithm 2}, for every $k\in\mathbb{N}$, $V[k]$ is considered an $N\times \mathfrak{p}$ matrix whose $j$th row $V[k][j]$ is the $\mathfrak{p}$-dimensional longitudinal process simulated at time $k\cdot dt$, and $V[k][j][0]$ denotes the corresponding value of the counting dimension. $\varB{dot}(.,.)$ is the inner product between matrices with appropriate row and column dimensions; the auxiliary copy $U$ preserves the simulated paths so that the reset of $V'[k]$, which turns $V'$ into the discretized survivor function of the next jump time, does not read entries that have already been overwritten, and the factor $dt$ there is the length of each discretization interval. $uni(0,1)$ is a random number generator that draws a random number from the uniform distribution on the interval $[0,1]$.
It is critical to notice that the counting component can be considered time-invariant between two consecutive jump times (reflected in algorithm \ref{main algorithm 2}, where the conditional density is reconstructed and the probability that the counting component jumps out of its current stage $z^{\ast}$ is set to $0$); the design of algorithm \ref{main algorithm 2} then becomes quite simple.
It makes full use of the local stationarity of the counting component in the sense that, for every fixed subject, simulation of the longitudinal process with one counting dimension is decomposed into a sequential simulation of longitudinal processes whose dimensions are all absolutely continuous in time. This sequential construction is crucial not only for the algorithm design but is also the key to verifying the identifiability of model \eqref{joint model} under specifications \eqref{time-integral longi} and \eqref{jump intensity}; the details will be discussed in section \ref{large sample property all}.
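To connect the pseudo-code with the model, note that the reset of $V'[k]$ discretizes the survivor function of the next jump time: writing $W_{j'}$ for the simulated longitudinal value at time $j'\cdot dt$,
\begin{equation*}
P\left(\textrm{no jump in }(0,k\cdot dt]\right)\approx\exp\left(-\sum_{j'=1}^{k}\exp\left(b^{c\top}W_{j'}\right)\lambda_0^{c}(j'\cdot dt)\,dt\right),
\end{equation*}
so drawing $\omega\sim uni(0,1)$ and taking the smallest $k$ with $V'[k][j]\leq\omega$ is standard inverse-transform sampling of the jump time.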
It turns out that algorithms \ref{main algorithm 0}--\ref{main algorithm 2} can generate the simulatible sequences required in definition \ref{simulatible} for all of the longitudinal processes \eqref{stationary cox}, \eqref{bio-stat longi}, \eqref{count longi} and their mixture. The proof for algorithm \ref{main algorithm 0} is quite trivial, while the proofs for algorithms \ref{main algorithm 1} and \ref{main algorithm 2} are part of the proof of the consistency of our estimators; they are therefore combined with the proof of theorem \ref{theorem consistency} and presented in the Appendix.
Notice that algorithm \ref{main algorithm 2} is extendible to handle the occurrence of the terminal event. In fact, algorithm \ref{main algorithm 2} can be generalized to the following algorithm \ref{main algorithm 3}, which returns a set of i.i.d. samples of $(Z_T,T)$, the pair consisting of the longitudinal observation at the event time and the event time itself. These i.i.d. samples are the key to constructing the joint probability density function (pdf) and the likelihood function in our estimation procedure.
\begin{algorithm}
\caption{{\bf GenSim3}--generate joint samples of longitudinal and event time data }\label{main algorithm 3}
\begin{algorithmic} [1]
\Require \\
longitudinal setup, $\Omega=(a,b,b^c,\lambda_0,\lambda_0^c)$\\
distribution of $Z_0$ for fixed longitudinal setup, $F_{0}(0,\cdot,\Omega)$; or a set of $N$ samples of $Z_0$, $S_N$\\
conditional pdf of the form \eqref{markovian}, $p$\\
initial time, $t$\\
interval length, $dt$\\
sample size, $N$ \\
censor bound, $C$
\Ensure \\
$N$ samples of the pair $(Z_{T},T)$ of longitudinal variables at event time.
\algrule
\State $\textrm{Set }V=\emptyset$
\State $\textrm{Set }\hat{Z}=S_N \textrm{ or the set of }N \textrm{ independent samples drawn from distribution }F_{0}(0,\cdot,\Omega)$
\State $\textrm{Set }jump\_t \textrm{ as an }N\textrm{ dimensional vector with all entries being }0$
\State $\textrm{Set }event\_t \textrm{ as an }N\textrm{ dimensional vector with all entries being }C$
\While {$\{t:\,t\in event\_t,\,t<C\}\not=event\_t \textrm{ and } \{t:\,t\in jump\_t,\,t< C\}\not=\emptyset$}
\State $\textrm{ReSet }p(j,x\mid z,t)=\begin{cases}
p(x\mid z,t)&\textrm{ if }j=z^{\ast}\\
0 &\textrm{ else}
\end{cases}$
\State $\textrm{Set }V' = \varB{GenSim1}(\Omega,\hat{Z},p,jump\_t,dt,N)$
\State $\textrm{Set }U = V'$
\If {$V=\emptyset$}
\State $\textrm{ReSet }V=V'$
\Else
\For {$j=1 \textrm{ to }N \textrm{ and } k\geq jump\_t[j]$}
\State $\textrm{ReSet }V[k][j]=V'[k-jump\_t[j]][j]$
\EndFor
\EndIf
\For {$\textbf{each } k \textbf{ in } \mathbb{N}$}
\State $\textrm{ReSet }V^{''}[k]=\exp\left(-\sum_{j'=1}^{k}\exp\left(\varB{dot}(U[j'],b)\right)\cdot \lambda_0(j'\cdot dt)\cdot dt\right)$
\State $\textrm{ReSet }V'[k]=\exp\left(-\sum_{j'=1}^{k}\exp\left(\varB{dot}(U[j'],b^c)\right)\cdot \lambda_0^c(j'\cdot dt)\cdot dt\right)$
\EndFor
\For {$ j=1 \textbf{ to } N$}
\State $\textrm{Set }\omega=uni(0,1)$
\State $\textrm{Set }\omega'=uni(0,1)$
\State $\textrm{Set }k=\min\{k'\in \mathbb{N}:\,V'[k'][j]\leq \omega\}$
\State $\textrm{Set }k^{\ast}=\min\{k'\in \mathbb{N}:\,V^{''}[k'][j]\leq \omega'\}$
\State $\textrm{ReSet }jump\_t[j]=k$
\If {$k^{\ast}<k$}
\State $\textrm{ReSet }event\_t[j]=k^{\ast}$
\EndIf
\algstore{myalg}
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\begin{algorithmic} [1]
\algrestore{myalg}
\For {$k'\geq k$}
\State $\textrm{ReSet }V[k'][j][0]+=1$
\EndFor
\State $\textrm{ReSet }\hat{Z}[j]=V[jump\_t[j]][j]$
\EndFor
\EndWhile
\State $\textrm{Set }Sample=\emptyset$
\For {$i=1 \textrm{ to }N$}
\State $k=event\_t[i]$
\State $Sample\varB{.append}(\varB{concat(V[k][i],\{k\cdot dt\})})$
\EndFor\\
\Return $Sample$
\end{algorithmic}
\end{algorithm}
In algorithm \ref{main algorithm 3}, all notations follow their interpretations in the previous three algorithms. The censor bound $C$ is a prescribed positive constant; it specifies the end of observation. The $concat$ operation returns an $n1+n2$ dimensional vector by concatenating two row vectors of dimensions $n1$ and $n2$, respectively. Algorithm \ref{main algorithm 3} simulates the terminal event time in the same way that algorithm \ref{main algorithm 2} simulates the jump time of the counting component.
\subsection{Estimation procedure}\label{estimation}
The simulation algorithms stated in the previous section provide a foundation for constructing the estimation procedure of model \eqref{joint model}. Our estimation is based on maximizing the full-information likelihood function of the observations $\{(Z_{T_i},T_i):\,i=1,\dots,n\}$.
Notice that, for the moment, we assume that longitudinal observations are systematically missed for all covariate dimensions, i.e. the input data has the special form of \eqref{partially missing data} with $\mathcal{K}=P$. Estimation for the more general form of input data is obtained by extending the estimation procedure for this special case. To avoid ambiguity of notation, from now on we denote by $S$ the terminal event time derived from simulation, and by $W_t$, $W_S$ the longitudinal observations of a simulated sample at time $t$ and at the terminal time $S$, respectively. In contrast, for the real observed sample, the terminal event time is denoted by $T$ and longitudinal observations are denoted by $Z_t$ or $Z_T$.
With the aid of the simulation algorithms presented in the previous section, the construction of the likelihood function can be implemented according to the following two steps:
\noindent {\bf Step 1}: Fix a sample size $N$, an interval length $dt$, and a profile of model parameters $(a,b,b^c)$ and non-parametric components $(\lambda_0,\lambda_0^c)$; executing the appropriate simulation algorithms yields $N$ samples $\left\{(W_{S_i},S_i):\,i=1,\dots,N\right\}$, which turn out to be i.i.d. samples of $(W_S,S)$ subject to the given parameters. \\
\noindent {\bf Step 2}: Apply the kernel density method to the i.i.d. samples from Step 1, yielding an empirical pdf of the random vector $(W_S,S)$ that implicitly depends on $(a,b,b^c,\lambda_0,\lambda_0^c)$ and is expressed as below:
\begin{equation}\label{empirical pdf}
\hat{p}_{N,dt,h}(z,s |a,b,b^c,\lambda_0,\lambda_0^c)=\frac{1}{N}\sum_{i=1}^NK^{(\mathfrak{p}+1)}_h\left(z-W_{S_i},s-S_i\right)
\end{equation}
where $K_h^{(m)}$ is an $m$-dimensional kernel function with bandwidth $h$; for simplicity, we only consider the Gaussian kernel in this study. $(W_{S_i},S_i)$ are the samples yielded by Step 1.
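For concreteness, the $m$-dimensional Gaussian kernel used here can be written (up to the usual bandwidth conventions) as
\begin{equation*}
K^{(m)}_h(u)=\frac{1}{\left(2\pi h^{2}\right)^{m/2}}\exp\left(-\frac{\Vert u\Vert^{2}}{2h^{2}}\right),\qquad u\in\mathbb{R}^{m}.
\end{equation*}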
The i.i.d. property of the samples from Step 1 guarantees that the $\hat{p}_{N,dt,h}(\cdot |\Omega)$ obtained in Step 2 converges to the true joint pdf subject to $\Omega=(a,b,b^c,\lambda_0,\lambda_0^c)$.
Based on empirical pdf $\hat{p}_{N,dt,h}(\cdot |\Omega)$, an empirical version of the full information likelihood function can be constructed as below:
\begin{equation}\label{likelihood}
l_{n,N,dt,h}(a,b,b^c,\lambda_0,\lambda_0^c)=\prod_{i=1}^{n}\hat{p}_{N,dt,h}(Z_{T_i},T_i |a,b,b^c,\lambda_0,\lambda_0^c)
\end{equation}
where $(Z_{T_i},T_i)$ consists of the observed covariate variables at the event time $T_i$ of the $i$th subject and the event time itself, and $n$ is the sample size (the number of subjects) of the input data.
In practice, the non-parametric components $\lambda_0$ and $\lambda_0^c$ in \eqref{likelihood} can be replaced by their step-wise versions:
\begin{equation}\label{step-wise-lambda}
\lambda\left(t\right):=\sum_{i=1}^{k}\theta_{i}\cdot I\left(t\in[dt\cdot(i-1),dt\cdot i)\right)
\end{equation}
where $\lambda$ in \eqref{step-wise-lambda} can be taken as either $\lambda_0$ or $\lambda_0^c$, $k=\min\left\{k'\in \mathbb{N}: k'\cdot dt>\max\{T_i:\,i=1,\dots,n\}\right\}$, and $I$ is the indicator function; for simplicity, the step length $dt$ takes the same value as the length of the time interval in simulation algorithms \ref{main algorithm 0}--\ref{main algorithm 3}. To guarantee consistency, their values will depend on the sample size $n$.
Substituting $\lambda_0$ and $\lambda_0^c$ of the form \eqref{step-wise-lambda} into \eqref{likelihood} yields the final form of the likelihood function. Our estimator $(\hat{a},\hat{b},\hat{b}^c,\hat{\lambda}_0,\hat{\lambda}_0^c)$ is then derived by maximizing this renewed version of \eqref{likelihood}.
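Equivalently, writing $\theta=(\theta_1,\dots,\theta_k)$ and $\theta^c=(\theta^c_1,\dots,\theta^c_k)$ for the step heights in \eqref{step-wise-lambda}, the estimator solves
\begin{equation*}
\left(\hat{a},\hat{b},\hat{b}^c,\hat{\theta},\hat{\theta}^c\right)=\arg\max\sum_{i=1}^{n}\log \hat{p}_{N,dt,h}\left(Z_{T_i},T_i\,\middle|\,a,b,b^c,\lambda_0(\cdot\,;\theta),\lambda_0^c(\cdot\,;\theta^c)\right),
\end{equation*}
since maximizing the log-likelihood is equivalent to maximizing \eqref{likelihood}.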
\subsection{Estimation with general input-data type \eqref{partially missing data}}\label{estimation_extension}
Although the estimation procedure of maximizing \eqref{likelihood} is directly applicable to input data of type \eqref{partially missing data}, it does not fully utilize the information provided in the input data, as there is no connection between the estimator and the partially available longitudinal observations. To resolve this issue, we present a way to extend the estimation procedure of the previous section; the extension makes better use of the longitudinal information and increases estimation efficiency.
We apply the idea of censoring to construct a weighted average likelihood function; estimators fully utilizing the longitudinal data are then derived by maximizing that function. In detail, for every fixed time interval $[t,t')$ and the subset of simulated data satisfying $S_i\geq t$, the simulated samples allow us to construct the conditional joint pdf of $(W_S,S)$ given $t\leq S< t'$, and the conditional density of the censored event $W_t=z$ given the censoring $S>t$. More precisely, given $t$ and $t'>t$, we have the following uncensored pdf:
\begin{equation}\label{uncensored pdf}
\hat{p}_{N_t,dt,h}^{t,t',u}\left(z,s\mid a,b,b^c,\lambda_0,\lambda_0^c\right)=\left(\frac{1}{N_t}\sum_{i=1}^NK^{(\mathfrak{p}+1)}_h\left(z-W_{S_i},s-S_i\right)\cdot I(t\leq S_i< t')\right)^{I(t\leq s<t')};
\end{equation}
and the censored pdf
\begin{equation}\label{censored pdf}
\hat{p}_{N_t,dt,h}^{t',c}\left(z^{\mathcal{K}}\mid a,b,b^c,\lambda_0,\lambda_0^c\right)=\left(\frac{1}{N_t}\sum_{i=1}^NK^{(|\mathcal{K}|)}_h\left(z^{\mathcal{K}}-W_{S_i}^{\mathcal{K}}\right)\cdot I(S_i> t')\right)^{I(s\geq t')}
\end{equation}
where $K_h^{(.)}$ follows its earlier interpretation; $N_t<N$ is the number of simulated subjects $i$ with terminal event time $S_i\geq t$; $|\mathcal{K}|$ is the number of elements in the set of dimensions whose longitudinal observations are fully available (the set $P/\mathcal{K}$ in the notation of \eqref{partially missing data}; with a slight abuse of notation we write $\mathcal{K}$ for this set in the remainder of this subsection); $x^{\mathcal{K}}$ denotes the projection of a vector $x$ onto the sub-coordinates indexed by $\mathcal{K}$; the superscripts $t$ and $t'$ indicate the dependence on $t$ and $t'$, and the superscripts $u$ and $c$ denote \enquote{uncensored} and \enquote{censored}, respectively.
With the aid of \eqref{uncensored pdf} and \eqref{censored pdf}, we can partition the time line $[0,\max\left\{T_i:\,i=1,\dots,n\right\}]$ into $m$ disjoint intervals with boundaries $0=t_0<t_1<\dots<t_m$; the mean log-likelihood function is then constructed as below:
\begin{equation}\label{mean likelihood}
\Scale[0.8]{
l_{n,N,dt,h,m}(\Omega)=\frac{1}{m}\sum_{j=1}^{m}\frac{1}{n_{t_{j-1}}}\sum_{i=1}^n\left(\log\hat{p}_
{N_{t_{j-1}},dt,h}^{t_{j-1},t_j,u}\left(Z_{i,T_{i}},T_i\mid \Omega\right)+\log\hat{p}_{N_{t_{j-1}},dt,h}
^{t_{j-1},c}\left(Z_{i,\mathcal{K},t_{i,j}}\mid \Omega\right)\right)
\end{equation}
where $Z_{i,t}$ is the real observed longitudinal vector for subject $i$ at time $t$; $Z_{i,\mathcal{K},t_{i,j}}$ are the observed sub-coordinates of the longitudinal vector of subject $i$ at time $t_{i,j}$ corresponding to the non-missing dimensions $\mathcal{K}$; $\Omega=(a,b,b^c,\lambda_0,\lambda_0^c)$; and $n_t$ is defined analogously to $N_t$ by replacing simulated samples with real observed samples; formally, $n_t=\#\{i\in\{1,\dots,n\}:T_i\geq t\}$.
Estimators fully utilizing the longitudinal information are derived by maximizing the mean likelihood function \eqref{mean likelihood} and are denoted by $(\hat{a}^l,\hat{b}^l,\hat{b}^{c,l},\hat{\lambda}^l_0,\hat{\lambda}_0^{c,l})$.
\begin{remark}\label{remark1}
The choice of partition boundaries of $\{t_1,\dots,t_m\}$ and the number of partition cells $m$ is tricky and input-dependent.
When the observation times in \eqref{partially missing data} are uniform across subjects, i.e. $t_{i,j}\equiv t_{i',j}$ for all distinct $i$, $i'$ and $j<\min(m_i,m_{i'})$, the partition boundaries $\{t_1,\dots,t_m\}$ can simply be selected as $\{t_{i^{\ast},1},\dots,t_{i^{\ast},m_{i^{\ast}}}\}$, where $i^{\ast}$ is the index of the subject with the greatest terminal event time. This choice guarantees the most efficient use of the longitudinal information. Input data with uniform observation times is common in applications to finance and actuarial science, where many economic variables, such as GDP, are collected at a fixed frequency \citep{koopman2008multi,li2017impact}.
When the observation times are not uniform but, for every subject in the sample, the observation frequency is relatively high in the sense that $\Delta=\max\{t_{i,j+1}-t_{i,j}:\,i=1,\dots,n;\,j=1,\dots,m_i\}$ converges to $0$ as the sample size $n\rightarrow \infty$, the partition intervals can be selected with equal length while the number of partition intervals is set to $n$. In this case, an interpolation method can be applied, as discussed in \cite{andersen1982cox}, to set the longitudinal value at a boundary point $t_j$ for subjects whose longitudinal observations are missing at $t_j$.
Finally, if the longitudinal observation times are not uniform and have low frequency, the choice of $\{t_1,\dots,t_m\}$ and $m$ becomes quite complicated; we leave it for future discussion.
\end{remark}
\subsection{Parallel computing}\label{parallel computing}
Distinct from the estimators for joint models with the longitudinal process specified through \eqref{bio-stat longi} \citep{wu2014joint,rizopoulos2010jm,guo2004separate}, the computation of our estimator is highly compatible with parallel computing frameworks, especially the embarrassingly parallel framework \citep{guo2012parallel}. The parallelizability of our procedure comes from two sources, corresponding to the simulation step and the optimization step, respectively.
In the step of simulation and construction of the empirical pdf, all simulation algorithms in section \ref{model_specification} are implemented in a path-wise manner. In fact, setting the sample size to $N=1$ for each run of algorithms \ref{main algorithm 0}--\ref{main algorithm 3} and repeating the execution $N>1$ times generates $N$ samples that are essentially identical to the samples obtained by executing the algorithms once with sample size $N$. Since there is no interaction between two sample trajectories, an embarrassingly parallel computing framework is perfectly applicable in this setting and can significantly raise the computation speed of the simulation step.
In the step of optimization, there is no latent variable involved in the empirical likelihood function \eqref{likelihood}; this is quite different from the estimation procedure of joint models with \eqref{bio-stat longi} as the longitudinal process, where the involvement of the random coefficient $\beta$ leads to latent variables and a reliance on the EM algorithm. The main advantage of not using the EM algorithm is that there is no need to repeatedly solve a complex integral and a maximization problem with sequential dependence. Consequently, evolutionary algorithms and/or stochastic descent algorithms \citep{liu2015asynchronous,cauwenberghs1993fast,sudholt2015parallel,tomassini1999parallel,osmera2003parallel} are applicable to maximize the likelihood function \eqref{likelihood}, and these are embarrassingly parallelizable.
In sum, the estimation procedure proposed in this paper is highly compatible with parallel computing frameworks. This is important because, in applications to finance or risk management, massive input data is common, which imposes strict requirements on computation speed and efficient memory allocation. Parallel computing can significantly lift computation speed while making better use of limited memory. Being parallelizable therefore grants our estimation procedure great potential in a wide range of applications, especially in finance.
\section{Asymptotic Property of Large Sample}\label{large sample property all}\label{asymptotic property}
The consistency and asymptotic normality of the estimators $(\hat{a},\hat{b},\hat{b}^c,\hat{\lambda}_0,\hat{\lambda}_0^c)$ are established
in this section. For convenience of exposition, we need the following notation:
\noindent 1. Denote by $A$, $B$ and $B^c$ the domains of the parameters $a$, $b$ and $b^c$, respectively.
\noindent 2. Denote by $\Omega=(a,b,b^c,\lambda_0,\lambda_0^c)$ a given model setup, with $\Omega_0$ being the true model setup; denote by $p$ the theoretical joint pdf depending on the model setup $\Omega$, so that $p(\cdot|\Omega_0)$ is the true joint pdf of the observation $(Z_T,T)$.
\noindent 3. Denote
\begin{equation}\label{function-q}
q(j,z,t|\Omega)=\textrm{E}\left(\epsilon(t)\mid Z^{\ast}(t)=j,Z^{-\ast}(t)=z,\Omega\right)
\end{equation}
as the conditional expectation of the variational rate of the absolutely continuous dimensions of the longitudinal process, given its observation at time $t$ and the model setup $\Omega$, where $\epsilon(t)$ is the vector of instantaneous variational rates as defined in \eqref{time-integral longi}, and $Z^{\ast}$ and $Z^{-\ast}$ denote the counting dimension and the absolutely continuous dimensions of the longitudinal process, respectively.
To establish the consistency result, we need the following technical conditions:
\noindent $\mathbf{C1}$. For every $j\in\mathbb{N}$, the pdf $f_0(j,.,\Omega_0)$ induced by the true longitudinal process at the initial time has full support and satisfies $\sum_{j\in\mathbb{N}}\int_{\mathbb{R}^{\mathfrak{p}-1}}f_0(j,z,\Omega_0)\cdot\exp(v\cdot z)dz\not=1$ for every non-zero $(\mathfrak{p}-1)$-dimensional vector $v$. In addition, $\lambda_0(0)\equiv C_1$ and $\lambda_0^c(0)\equiv C_2$ for some positive constants $C_1,\,C_2$ and for all $\lambda_0,\,\lambda_0^c$ in consideration.
\noindent $\mathbf{C2}$. For every $\Omega\not=\Omega_0$ and every $j\in \mathbb{N}$, one of the following holds:
\begin{itemize}
\item[(i)] $f_0(j,z,\Omega)\not\equiv f_0(j,z,\Omega_0)$;
\item[(ii)] There exists $t>0$ such that the matrix $\nabla_z q\left(j,z,t\mid \Omega\right)-\nabla_z q\left(j,z,t\mid \Omega_0\right)$ converges to some limiting matrix as $\Vert z\Vert\rightarrow \infty$; denote the limiting matrix by $M$, which satisfies the hyperbolic property, i.e. at least one eigenvalue of $M$ has non-zero real part.
\end{itemize}
\noindent $\mathbf{C3}$. $A$, $B$ and $B^c$ are compact subsets of Euclidean spaces of appropriate dimensions; suppose that they all have open interiors and that the true values of the parameter vectors $a$, $b$ and $b^c$ are contained in these open interiors.
\noindent $\mathbf{C4}$. $\textrm{E}_{\Omega_{0}}\left(\left|\log\left(p(Z_T,T |\Omega)\right)\right|\right)$,
$\textrm{E}_{\Omega_{0}}\left(\left|\nabla_{x}\log\left(p(Z_T,T |\Omega)\right)\nabla_{x'}\log\left(p(Z_T,T |\Omega)\right)\right|\right)$ and
$\textrm{E}_{\Omega_{0}}\left(\left|\nabla^2_{xx'}\log\left(p(Z_T,T |\Omega)\right)\right|\right)$
are finite for all $x,\,x'$ as pairs of coordinates of the vector $(a,b,b^c)$ and for all $\Omega$ in its domain. Denote by $\mathcal{I}$ the matrix with $xx'$ entry $\mathcal{I}_{xx'}=\textrm{E}_{\Omega_{0}}\left(\nabla_{x}\log\left(p(Z_T,T |\Omega_0)\right)\nabla_{x'}\log\left(p(Z_T,T |\Omega_0)\right)\right)$, and by $\mathcal{H}$ the matrix with $xx'$ entry $\mathcal{H}_{xx'}= \textrm{E}_{\Omega_{0}}\left(\nabla^2_{xx'}\log\left(p(Z_T,T |\Omega_0)\right)\right)$; both matrices $\mathcal{I}$ and $\mathcal{H}$ are positive definite.
\noindent $\mathbf{C5}$. For all $\Omega$ in its domain and all $j\in \mathbb{N}$, $q\left(j,.\mid \Omega\right)\in C^{2}\left(\mathbb{R}^{\mathfrak{p}-1}\times\mathbb{R}_{+}\right)$;
for every $j\in \mathbb{N}$, the map $\Omega\mapsto q\left(j,.\mid \Omega\right)$ from the domain of $\Omega$ to $C^{2}\left(\mathbb{R}^{\mathfrak{p}-1}\times\mathbb{R}_{+}\right)$
is continuous with respect to the $C^{2}$ topology.
\noindent $\mathbf{C6}$. The true theoretical joint pdf $p(\cdot |\Omega_0)$ is continuously differentiable with bounded first order partial derivatives.
\noindent $\mathbf{C7}$. Let $n$ be the number of subjects under observation. The choice of the parameters $dt$, $N$ and the kernel width $h$ satisfies $N \sim O(n)$, $dt\sim O(n^{-1})$, $nh^3\rightarrow 0$ and $nh\rightarrow \infty$.
Conditions $\mathbf{C1}$ and $\mathbf{C2}$ are the key to verifying model identifiability. Conditions $\mathbf{C3}$-$\mathbf{C5}$ are standard assumptions in maximum likelihood estimation (MLE), which guarantee the consistency and asymptotic normality of MLE estimators. Conditions $\mathbf{C6}$ and $\mathbf{C7}$ guarantee that the simulation algorithms introduced in section \ref{model_specification} can generate the required simulatible sequences in \ref{simulatible} and that the empirical joint pdf $\hat{p}$ in \eqref{empirical pdf} is consistent.
\begin{remark}
Among the seven conditions, $\mathbf{C1}$ and $\mathbf{C2}$ play the central role in guaranteeing identifiability of our estimator. It turns out that $\mathbf{C1}$ and $\mathbf{C2}$ hold for a very general class of longitudinal processes. In particular, almost all linear mixed longitudinal processes \eqref{bio-stat longi} belong to that class. In fact, for all absolutely continuous $\mathcal{Z}_1$, $\mathcal{Z}_2$ such that $\mathcal{Z}_2(0)\not=0$, longitudinal processes \eqref{bio-stat longi} can be rewritten as a time-integral of the form \eqref{time-integral longi}:
\begin{equation}
Z(t)=e+\alpha\cdot\mathcal{Z}_1(0)+\beta\cdot\mathcal{Z}_2(0)
+\int_0^t\left(\alpha\cdot\mathcal{Z}_1'(s)+\beta\cdot\mathcal{Z}_2'(s)\right)ds
\end{equation}
where the initial vector $Z_0=e+\alpha\cdot\mathcal{Z}_1(0)+\beta\cdot\mathcal{Z}_2(0)$
is, by assumption, a normal random vector, and thus always satisfies $\mathbf{C1}$ and $\mathbf{C2}$ (i). In fact, $\mathbf{C1}$ and $\mathbf{C2}$ (i) hold not only for the normal class, but also for most popular distribution classes met in practice. This distribution-free property is crucial: the computation of the traditional estimators of model \eqref{joint model} is expensive in time and memory, and it relies strongly on the normality assumption to simplify the expression of the likelihood function \citep{rizopoulos2010jm}. In contrast, the computational load of our simulation-based estimator is not sensitive to the normality assumption, because according to algorithm \ref{main algorithm 0}, varying the distribution class of $e$ and $\beta$ only affects the draw of initial samples, which does not take more time and/or memory for most of the widely-used distribution classes.
\end{remark}
\begin{remark}
$\mathbf{C1}$ and $\mathbf{C2}$ distinguish our simulation-based estimator from the traditional estimators developed for model \eqref{joint model} under the longitudinal specification \eqref{bio-stat longi} \citep{rizopoulos2010jm,guo2004separate}. In the literature, the standard trick is to treat the latent factor $\beta$ as a random effect, so that model \eqref{joint model} becomes a frailty model; a conditional independence assumption is then utilized to derive an explicit expression of the likelihood function, and the EM algorithm is applied to carry out the estimation. However, once \eqref{joint model} is treated as a frailty model, its identifiability strongly depends on the number of longitudinal observations, which has to be greater than the dimension of the longitudinal process for normally distributed $\beta$, as proved in \cite{kim2013joint}. The proof in \cite{kim2013joint} also relies on the normality assumption, and it is not clear whether the same trick applies to more general distribution classes. The dependence on normality and on the availability of a sufficient number of longitudinal observations restricts the usefulness of joint model \eqref{joint model} in many fields, such as credit risk management and actuarial science \citep{li2017impact,koopman2008multi}, where longitudinal processes are not normal in general. More critically, in most cases the observation of covariate variables is only available for a couple of years and collected on a monthly or quarterly basis, so only tens of longitudinal records are present, while ultra-high dimensional (e.g., hundreds of) covariates are usually involved. Thus, the identifiability of model \eqref{joint model} is always a big concern. For our procedure, conditions $\mathbf{C1}$ and $\mathbf{C2}$ guarantee model identifiability without any extra restriction on the number of longitudinal observations, or on the distribution class of $\beta$. In this sense our simulation-based procedure generalizes the standard estimation procedures for model \eqref{joint model}.
\end{remark}
\begin{theorem}\label{theorem consistency}
Model \eqref{joint model} is identifiable under condition $\mathbf{C1}$ and $\mathbf{C2}$, where its longitudinal process is a mixture of absolutely continuous processes specified through \eqref{time-integral longi} and an one-dimensional counting process satisfying \eqref{jump intensity}. Additionally, if $\mathbf{C3}$-$\mathbf{C7}$ hold,
the estimator $(\hat{a},\hat{b},\hat{b}^c,\hat{\lambda}_0,\hat{\lambda}_0^c)$ is consistent, asymptotically normally distributed in its parametric part $(\hat{a},\hat{b},\hat{b}^c)$, and has the following asymptotic property for its non-parametric part:\\
$ $
\noindent For the two baseline hazard functions $\lambda_0$ and $\lambda^c_{0}$, the estimators
$\hat{\lambda}_0$ and $\hat{\lambda}_0^c$ converge to $\lambda_{0}$ and $\lambda_0^c$ in the
weak-$*$ topology, and the processes $\sqrt{n}\int_{0}^{t}\left(\hat{\lambda}_0\left(\tau\right)-\lambda_{0}\left(\tau\right)\right)d\tau$ and $\sqrt{n}\int_{0}^{t}\left(\hat{\lambda}_0^c\left(\tau\right)-\lambda_{0}^c\left(\tau\right)\right)d\tau$
converge weakly to two Gaussian processes.
\end{theorem}
The estimator that fully utilizes longitudinal information is also consistent and asymptotically normally distributed. In contrast to $(\hat{a},\hat{b},\hat{b}^c,\hat{\lambda}_0,\hat{\lambda}_0^c)$, the estimator $(\hat{a}^l,\hat{b}^l,\hat{b}^{c,l},\hat{\lambda}_0^l,\hat{\lambda}_0^{c,l})$ turns out to be more efficient. The details are summarized in the following theorem:
\begin{theorem}\label{theorem consistency full}
Under $\mathbf{C1}$-$\mathbf{C7}$,
the estimator $(\hat{a}^l,\hat{b}^l,\hat{b}^{c,l},\hat{\lambda}_0^l,\hat{\lambda}_0^{c,l})$ is consistent and asymptotically normally distributed, with asymptotic variance equal to $1/m$ times that of $(\hat{a},\hat{b},\hat{b}^c,\hat{\lambda}_0,\hat{\lambda}_0^c)$, where $m$ is the number of censoring intervals.
\end{theorem}
The proofs of theorems \ref{theorem consistency} and \ref{theorem consistency full} rely on three technical lemmas \ref{lemma1}-\ref{lemma3} stated in the Appendix.
\begin{remark}
The proofs of theorems \ref{theorem consistency}, \ref{theorem consistency full} and lemmas \ref{lemma1}-\ref{lemma3} are quite technical, but the idea behind them is straightforward. Notice that the simulation-based estimator developed in this study is essentially a maximum likelihood estimator, so as long as model \eqref{joint model} is identifiable and the standard regularity conditions $\mathbf{C3}$-$\mathbf{C5}$ hold, the consistency and asymptotic normality of our parametric estimator $(\hat{a},\hat{b},\hat{b}^c)$ is just a consequence of the asymptotic properties of the standard maximum likelihood estimator. As for the non-parametric part $(\hat{\lambda}_0,\hat{\lambda}_0^c)$, its consistency still holds by the fact that when a model is identifiable, the true model setup is the unique maximizer of the entropy function, viewed as a function of $\Omega$ \citep{amemiya1985advanced}. As for the asymptotic normality of the non-parametric estimator $(\hat{\lambda}_0,\hat{\lambda}_0^c)$, the proof is essentially the same as that in \cite{zheng2018understanding}.
Therefore, the identifiability of model \eqref{joint model} and the convergence of the empirical likelihood function \eqref{likelihood} are the keys to establishing theorems \ref{theorem consistency} and \ref{theorem consistency full}; they are guaranteed by lemmas \ref{lemma1}-\ref{lemma3}.
In fact, lemma \ref{lemma1}, combined with $\mathbf{C1}$ and $\mathbf{C2}$, provides the foundation for lemma \ref{lemma3} and model identifiability. This is done by verifying inequality \eqref{ineq} in an inductive way, where condition $\mathbf{C2}$ is applied repeatedly to rule out the existence of an invariant probability measure.
Lemma \ref{lemma2} is crucial to the convergence of the likelihood function. It confirms that samples generated from algorithms \ref{main algorithm 0}-\ref{main algorithm 3} are drawn correctly and satisfy the i.i.d. property, which guarantees that the empirical joint pdf \eqref{empirical pdf} approaches the theoretical pdf \eqref{joint density 0 stage}, \eqref{joint density 1 stage} as the simulation sample size $N\rightarrow \infty$. In addition, with an appropriate choice of the tuning parameters $h,\,dt,\,N$ subject to condition $\mathbf{C7}$, lemma \ref{lemma2} also guarantees the convergence of the likelihood function \eqref{likelihood} to the theoretical entropy function.
\end{remark}
\section{Numerical Studies}\label{numerical study}
\subsection{Simulation studies}
In this section, we present an example based on simulation studies to assess the finite sample performance of the proposed method.
\begin{example}\label{example1}
200 random samples, each consisting of $n=100$ or $200$ subjects, are generated from a model with a $\mathfrak{p}=7$ dimensional longitudinal process whose $7$th dimension is the counting dimension. The six absolutely continuous dimensions are given as a special case of \eqref{bio-stat longi}, where the error $e$ and the random effect $\beta$ are independent normal random vectors following $N(\mathcal{M}_1,\Sigma_1)$ and $N(\mathcal{M}_2,\Sigma_2)$ respectively, with mean vectors and covariance matrices parametrized as follows:
\begin{equation}\label{mu}
\mathcal{M}_i=(\mu_{i1},\dots,\mu_{i6})^{\top}
\end{equation}
\begin{equation}\label{sigma}
\Sigma_i=\varA{diag}(\sigma^2_{i1},\dots,\sigma^2_{i6})
\end{equation}
\noindent where $i=1,2$ and $\varA{diag}(\cdot)$ denotes the diagonal matrix with the given vector as its diagonal. In this example, we take $\mu_{ik}\equiv 0$ and $\sigma_{ik}\equiv 1$ for all $i$ and $k$, so both the random effect and the error are standard normal random vectors. The configuration matrices are $\mathcal{Z}_1\equiv0$ and $\mathcal{Z}_2(t)=t$.
\noindent The counting dimension has its jumping hazard specified through \eqref{jump intensity}, with $b^c=(0,0,0,-1,1,0.6,1)$ and
\begin{equation}\label{lambda0c}
\lambda_0^c(t)=\frac{\exp(-3)+\exp(-0.5t)}{\exp(-3)+1}.
\end{equation}
Finally, the hazard of the terminal event is given by \eqref{joint model}, where $b=(1,-1,0.3,0,0,0,1)$ and $\lambda_0$ satisfies
\begin{equation}\label{lambda0}
\lambda_0(t)=\frac{\exp(-1)+\exp(-t)}{\exp(-1)+1}.
\end{equation}
\noindent In sum, the model setup to be estimated in this example consists of three parts: the longitudinal parameters $(\mathcal{M}_1,\mathcal{M}_2,\Sigma_1,\Sigma_2)$, the parameters of the counting dimension and the terminal event $(b^c,b)$, and the non-parametric baseline hazards $(\lambda_0^c,\lambda_0)$.
\end{example}
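To make the data-generating mechanism concrete, the following sketch simulates one subject of example \ref{example1} by discretizing both hazards on a grid of width $dt$ (a simplified illustration written for this paper, not our actual implementation; each step-wise Bernoulli draw approximates an event probability of the form $\lambda\, dt$):
\begin{verbatim}
# Sketch of the data-generating mechanism of Example 1: Z(t) = e + beta*t
# for the six absolutely continuous dimensions, plus a counting dimension
# whose jumps and the terminal event follow the two Cox-type hazards.
import numpy as np

rng = np.random.default_rng(0)
b   = np.array([1., -1., 0.3, 0., 0., 0., 1.])
bc  = np.array([0., 0., 0., -1., 1., 0.6, 1.])
lam0  = lambda t: (np.exp(-1) + np.exp(-t))     / (np.exp(-1) + 1)
lam0c = lambda t: (np.exp(-3) + np.exp(-0.5*t)) / (np.exp(-3) + 1)

def one_subject(dt=0.01, t_max=50.0):
    e, beta = rng.standard_normal(6), rng.standard_normal(6)
    j, t = 0, 0.0                          # counting dimension and clock
    while t < t_max:
        x = np.append(e + beta * t, j)     # full 7-dim covariate Z(t)
        if rng.random() < min(1.0, lam0(t)  * np.exp(b  @ x) * dt):
            return x, t                    # terminal event: observe (Z_T, T)
        if rng.random() < min(1.0, lam0c(t) * np.exp(bc @ x) * dt):
            j += 1                         # counting dimension jumps
        t += dt
    return np.append(e + beta * t_max, j), t_max
\end{verbatim}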
In the estimation stage, for simplicity, we select $dt=1/n$, $N=n$ and the kernel width $h=n^{-\frac{1}{2}}$, which are naturally compatible with the requirements of condition $\mathbf{C7}$. The combination of simulation algorithms \ref{main algorithm 0} and \ref{main algorithm 2} is used to generate the simulatible sequence of the longitudinal process, and algorithm \ref{main algorithm 3} is then applied to generate random samples of $(Z_T,T)$ as well as the empirical likelihood \eqref{likelihood}.
The estimation performance is presented and evaluated through the bias, SSE and CP of the longitudinal parameters, the regression coefficients $b$, $b^c$ and the baseline hazards $\int_0^t\lambda_0(s)ds$, $\int_0^t\lambda_0^c(s)ds$, where the bias is the sample mean of the estimate minus the true value, the SSE is the sampling standard error of the estimate, and the CP is the empirical coverage probability of the 95\% confidence interval. Results are collected in Table \ref{table: 1} for the longitudinal parameters, in Table \ref{table: 2} for the parameters of the counting dimension and terminal event, and in Figure \ref{fig: fitting} for the non-parametric components.
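For clarity, a minimal sketch of how these replication summaries are computed is given below, where \texttt{estimates} and \texttt{ses} are hypothetical arrays collecting the 200 replicated point estimates and their standard errors for one scalar parameter:
\begin{verbatim}
# Replication summaries reported in Tables 1-2: bias, SSE and the
# empirical coverage probability (CP) of the 95% confidence interval.
import numpy as np

def summarize(estimates, ses, truth):
    bias = estimates.mean() - truth              # mean estimate minus truth
    sse  = estimates.std(ddof=1)                 # sampling standard error
    lo, hi = estimates - 1.96 * ses, estimates + 1.96 * ses
    cp = np.mean((lo <= truth) & (truth <= hi))  # empirical coverage
    return bias, sse, cp
\end{verbatim}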
Based on the fitting performance in Tables \ref{table: 1}, \ref{table: 2} and Figure \ref{fig: fitting}, we conclude that in example \ref{example1} both the parametric and the non-parametric components are fitted well for both sample sizes. Meanwhile, there is no significant difference between $n=100$ and $n=200$ in terms of estimation bias, variance and 95\% confidence interval, which implies that the convergence predicted in theorem \ref{theorem consistency} sets in quite fast. Thus our estimation procedure works well even for a relatively small sample ($n=100$).
\subsection{Real data examples}\label{real data examples}
In this section, we apply our method to consumer-loan data collected from renrendai.com, one of the biggest P2P loan platforms in China. This platform provides loan-level details regarding interest rate, loan amount and repayment term, together with the borrowers' credit records (the numbers of previous defaults, overdues, pre-payments, and of applications, approved applications and pay-offs, plus credit score), their capacity of repayment (income, whether or not they have an unpaid mortgage or car loan, collateral situation) and other miscellaneous background information (education level, residential location, job position and industry, description of loan purpose). In addition, repayment details are also included, such as the repayment amount and the number of anomalous repayment actions (pre-pay, overdue and default). To reflect the macro-economic conditions during repayment, we also collect province-level economic data such as GDP and housing price from the National Bureau of Statistics of China.
We collect a sample of loans originated after Jan. 2015 from renrendai.com (the publicly available historical record of the economic variables can only be traced back to Jan. 2015), giving almost 225,000 loan records. Terms of these loans vary from 1 year (12 months) to 3 years (36 months). About 1/5 of the loans had been terminated by the data collection time (Jul. 2018). Among the terminated loans, only a tiny portion defaulted or experienced overdue; the vast majority were either paid off or pre-paid before the declared term. In addition, among the pre-paid loans, more than two-thirds were closed within 8 months. Therefore, we select whether the loan has been pre-paid off as the terminal event of interest, and the final censoring time is the 8th month (the censor bound $C$ is set to $8$ in algorithm \ref{main algorithm 3}). Finally, after removing records with missing attributes, 29,898 loans remain, which constitute the entire sample. In this real data example, we consider a $20$-dimensional covariate $Z$, consisting of 17 stationary variables (including loan-level and borrower-level variables), 2 absolutely continuous processes (monthly GDP and housing price), and 1 counting process, the number of anomalous repayments occurring during the repayment period, which is observed only once and thus is the only covariate with missing longitudinal observations.
Since longitudinal observations of macro-economic processes, such as GDP and housing prices, are always available at a constant frequency (e.g., monthly), the monthly observations already form a simulatible sequence for the underlying stochastic processes. Given a simulatible sequence for the absolutely continuous components, the simulatible sequence for the full longitudinal process can easily be generated by algorithm \ref{main algorithm 2}. Then the joint samples of $(W_S,S)$ and the empirical likelihood function \eqref{likelihood} are built up from algorithm \ref{main algorithm 3}.
It turns out that our estimation procedure can easily be combined with variable selection techniques such as the LASSO and the adaptive LASSO \citep{tibshirani1996regression,zou2006adaptive}. Due to the substantial number of covariates in the renrendai data, we adopt an adaptive LASSO approach together with our simulation-based procedure to estimate the model setup and identify the truly significant variables in the hazard functions \eqref{joint model} and \eqref{jump intensity}.
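A minimal sketch of this adaptive LASSO layer is given below, assuming a gradient of the negative log-likelihood is available; \texttt{neg\_loglik\_grad} is a hypothetical placeholder for the gradient of (the negative logarithm of) the empirical likelihood \eqref{likelihood}, and the plain proximal-gradient loop is used purely for illustration:
\begin{verbatim}
# Sketch of the adaptive LASSO: proximal-gradient minimization of
# neg_loglik(b) + lam * sum_j w_j |b_j|, with w_j = 1 / |b_init_j|^gamma.
import numpy as np

def soft_threshold(x, thr):
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

def adaptive_lasso(neg_loglik_grad, b_init, lam, gamma=1.0,
                   step=1e-3, n_iter=5000):
    w = 1.0 / (np.abs(b_init) ** gamma + 1e-8)   # adaptive weights
    b = b_init.copy()
    for _ in range(n_iter):                      # proximal gradient steps
        b = soft_threshold(b - step * neg_loglik_grad(b), step * lam * w)
    return b
\end{verbatim}
The zero entries of the returned coefficient vector correspond to the covariates screened out of Table \ref{table: 3}.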
Dividing every observation time by the censor bound $C=8$, we normalize the survival time scale to $1$ in the estimation and fitting plots. The fitting results are displayed in Table \ref{table: 3} and Figure \ref{fig: real}. Table \ref{table: 3} records the estimated regression coefficients for both the terminal event and the counting covariate, with variable selection done through the adaptive LASSO. Figure \ref{fig: real} shows the point-wise fit and the empirical 95\% deviation (calculated by bootstrap) of the cumulative baseline hazards $\int_0^t\lambda_0(s)ds$ and $\int_0^t\lambda_0^c(s)ds$.
Table \ref{table: 3} shows that only a small portion of the covariates have a significant impact on the prepayment behavior and on the number of irregular payments (e.g., overdue) before pay-off, and the influential factors for these two events are distinct. For prepayment, both housing price and GDP are quite influential, and their influence is positive, indicating that when the local economy goes up, borrowers tend to have good liquidity and prefer to pay off their loans prior to expiration so as to reduce the total financial cost. The type of job position of a borrower can also influence repayment behavior. Borrowers with relatively low job positions, such as clerks, show a strong preference for pre-paying off debt, which implies that this group of borrowers is more sensitive to financial cost. In contrast, self-employed and manager-level borrowers are more likely to have irregular repayments, an observation that might be related to their relatively unstable cash flow.
\section{Discussion}\label{conclusion}
In this paper we proposed a simulation-based approach to estimating the joint model of longitudinal and event-time data with a mixed longitudinal process consisting of absolutely continuous components as well as a counting component. Our approach generates well-performing estimators under a minimal availability condition on longitudinal observations, namely it allows missing longitudinal observations for part or all of the temporal covariates before the event time. In this respect our approach outperforms most existing semi-parametric estimation procedures in its flexibility to accommodate missing data.
In addition, the estimator generated from our approach is essentially an MLE-class estimator, but unlike the alternative methods in the literature, no latent variable is involved in the likelihood function. Hence the computation of our procedure does not rely on the EM algorithm and can be embarrassingly parallelized, which effectively extends the applicability of our approach to massive financial data.
The limitations of our method are that it ignores longitudinal processes of unbounded variation, such as Brownian motion and its variants, and that it leaves open how to fully utilize longitudinal information to improve estimation efficiency when longitudinal observation times are irregularly and sparsely distributed; these issues need to be handled in future studies.
\vspace{2cm}
\noindent{\large\bf Acknowledgements:} This work was partially supported by the Major and Key Technologies Program in Humanities and Social
Sciences of the Universities from Zhejiang Province (18GH037).
\begin{appendix}
\section{Proof of Theorem \ref{theorem consistency} \& \ref{theorem consistency full}}
The proof of Theorem \ref{theorem consistency} is complicated and needs the three lemmas stated below. The proof of Theorem \ref{theorem consistency full} is essentially identical to that of Theorem \ref{theorem consistency} and is therefore omitted from the main body of the paper.
\begin{lemma}\label{lemma1}
$ $ \\
\noindent (1). For every joint model \eqref{joint model} whose longitudinal process has its absolutely continuous components specified as \eqref{time-integral longi} and one counting component specified as \eqref{jump intensity}, the joint pdf of $(Z_T,T)$ is expressible through the following iterative process:
\noindent At $j=0$,
\begin{equation} \label{joint density 0 stage}
p\left(j,z,t\mid \Omega\right)=
p_Z\left(j,z,t\mid a\right)\cdot
\rho_c\left(j,z,t\mid a,b^c,\lambda_0^c\right)\cdot \rho\left(j,z,t\mid a,b,\lambda_0\right)\cdot \exp(b^{-\ast\top}z)\lambda_0(t)
\end{equation}
\begin{equation} \label{jump density 0 stage}
p^c\left(j,z,t\mid \Omega\right)=p_Z\left(j,z,t\mid a\right)\cdot
\rho_c\left(j,z,t\mid a,b^c,\lambda_0^c\right)\cdot \exp((b^{c,-\ast})^{\top}z)\lambda_0^c(t)
\end{equation}
\noindent At $j>0$:
\begin{equation}\label{joint density 1 stage}
\Scale[0.8]{
\begin{aligned}
p\left(j,z,t\mid \Omega\right)= &\exp(b^{-\ast\top}z+b^{\ast} j)\lambda_0(t)\cdot
\int_{\mathbb{R}^{\mathfrak{p-1}}}
\int_0^{t}p^c\left(j-1,z',s\mid \Omega\right)\cdot
\rho_c'\left(j,z',z,s,t\mid a,b^c,\lambda_0^c\right)\cdot \rho'\left(j,z',z,s,t\mid a,b,\lambda_0\right)dsdz'\\
&+p_Z\left(j,z,t\mid a\right)\cdot
\rho_c\left(j,z,t\mid a,b^c,\lambda_0^c\right)\cdot \rho\left(j,z,t\mid a,b,\lambda_0\right)\cdot \exp(b^{-\ast\top}z+b^{\ast}j)\lambda_0(t)
\end{aligned}}
\end{equation}
\begin{equation}\label{jump density 1 stage}
\Scale[0.8]{
\begin{aligned}
p^c\left(j,z,t\mid \Omega\right)= &\exp((b^{c,-\ast})^{\top}z+b^{c,\ast} j)\lambda_0^c(t)\cdot\int_{\mathbb{R}^{\mathfrak{p-1}}}\int_0^{t}p^c\left(j-1,z',s\mid \Omega\right)\cdot
\rho_c'\left(j,z',z,s,t\mid a,b^c,\lambda_0^c\right)dsdz'\\
&+p_Z\left(j,z,t\mid a\right)\cdot
\rho_c\left(j,z,t\mid a,b^c,\lambda_0^c\right)\cdot \exp((b^{c,-\ast})^{\top}z+b^{c,\ast}j)\lambda_0^c(t)
\end{aligned}}
\end{equation}
where $\Omega=(a,b,b^c,\lambda_0,\lambda_0^c)$ is a given model setup, $p_Z(j,\cdot,t\mid a)$ is the pdf on $\mathbb{R}^{\mathfrak{p}-1}$ induced by the absolutely continuous components of the longitudinal process given $t$ and $j$, and $p^c(j,z,t\mid \Omega)$ represents the joint density of the event that the counting component $Z^{\ast}$ jumps from $j$ to $j+1$ at $t$ and $Z^{-\ast}(t)=z$. The superscripts $\ast$ and $-\ast$ indicate the counting component and the absolutely continuous components of the longitudinal process, respectively; the conditional probability functions $\rho$, $\rho_c$, $\rho'$ and $\rho_c'$ are defined as below:
\begin{align}\label{condition prob}
&\rho(j,z,t\mid a,b,\lambda_0)=\operatorname{Pr}\expectarg*{T>t| Z^{\ast}(0)=j,Z^{-\ast}(t)=z,a,b,\lambda_0}\\
&\rho_c(j,z,t\mid a,b^c,\lambda_0^c)=\operatorname{Pr}\expectarg*{Z^{\ast}(t)=j|\begin{aligned}
&Z^{\ast}(0)=j,\\
&Z^{-\ast}(t)=z,\\
&a,b^c,\lambda_0^c
\end{aligned}}\\
&
\rho'(j,z',z,s,t\mid a,b,\lambda_0)=\operatorname{Pr}\expectarg*{T>t|
\begin{aligned}
&Z^{\ast}(s)=
Z^{\ast}(t)=j,\\
&Z^{-\ast}(s)=z',\\
&Z^{-\ast}(t)=z,\\
&a,b,\lambda_0
\end{aligned}}\\
&
\rho_c'(j,z',z,s,t\mid a,b^c,\lambda_0^c)=\operatorname{Pr}\expectarg*{Z^{\ast}(t)=j|
\begin{aligned}
&Z^{\ast}(s)=Z^{\ast}(t)=j,\\
&Z^{-\ast}(s)=z',\\
&Z^{-\ast}(t)=z,\\
&a,b^c,\lambda_0^c
\end{aligned}}
\end{align}
\noindent (2). In addition, under condition $\mathbf{C5}$, function $p_Z$ can be expressed as below:
\begin{equation}\label{p-tilde}
p_Z(j,z,t\mid \Omega)=p_Z\left(j,g\left(j,z,t,t\mid a\right),0\mid\Omega\right)\cdot \mathcal{J}_{z|a}(t),
\end{equation}
The function $p_Z(j,z,0\mid \Omega)$ (treated as a function of the variable $z$) is the initial pdf induced by $Z^{-\ast}_0$. For every $t$, $\mathcal{J}_{z|a}(t)$ denotes the Jacobian of the map $g\left(j,\cdot,t,t\mid a\right)$ (in the variable $z$) evaluated at the point $z$.
The
function $g$ is determined by conditional expectation $q$ \eqref{function-q} through solving a family
of initial value problems (IVPs). Namely for every fixed $z$ and $t$,
$g\left(j,z,t,.\mid a\right)$ is the solution to the following ordinary differential
equation (ODE) for $s\in\left(0,t\right)$:
\begin{equation}\label{ode}
z'\left(s\right) = -q\left(j,z\left(s\right),t-s\mid a\right)
\end{equation}
subject to the initial condition $g\left(j,z,t,0\right)=z$.
\noindent (3). The functions $\rho$, $\rho_c$, $\rho'$ and $\rho_c'$ satisfy:
\begin{align}
&\Scale[0.8]{\begin{aligned}
&\rho(j,z,t\mid a,b,\lambda_0)\\
=&\operatorname{E}\expectarg*{\exp\left(-\int_0^t\exp(b^{-\ast\top}Z^{-\ast}(s)+b^{\ast}j)\lambda_0(s)ds\right)|
\begin{aligned}
&Z^{-\ast}(t)=z,\\
&Z^{\ast}(t)=Z^{\ast}(0)=j,\\
&a,b,\lambda_0\end{aligned}}\end{aligned}}\\
&\Scale[0.8]{\begin{aligned}
&\rho_c(j,z,t\mid a,b^c,\lambda_0^c)\\
=&\operatorname{E}\expectarg*{\exp\left(-\int_0^t\exp((b^{c,-\ast})^{\top}Z^{-\ast}(s)+b^{c,\ast}j)\lambda_0^c(s)ds\right)|
\begin{aligned} &Z^{-\ast}(t)=z,\\
&Z^{\ast}(t)=Z^{\ast}(0)=j,\\
&a,b^c,\lambda_0^c\end{aligned}}\end{aligned}}\\
&\Scale[0.8]{\begin{aligned}
&\rho'(j,z',z,s,t\mid a,b,\lambda_0)\\
=&\operatorname{E}\expectarg*{\exp\left(-\int_s^t\exp(b^{-\ast\top}Z^{-\ast}(\tau)+b^{\ast}j)\lambda_0(\tau)d\tau\right)|
\begin{aligned}
&Z^{-\ast}(t)=z,\\
&Z^{-\ast}(s)=z',\\
&Z^{\ast}(t)=Z^{\ast}(s)=j,\\
&a,b,\lambda_0\end{aligned}}\end{aligned}}\\
&\Scale[0.8]{\begin{aligned}
&\rho_c'(j,z',z,s,t\mid a,b^c,\lambda_0^c)\\
=&\operatorname{E}\expectarg*{\exp\left(-\int_s^t\exp((b^{c,-\ast})^{\top}Z^{-\ast}(\tau)+b^{c,\ast}j)\lambda_0^c(\tau)d\tau\right)| \begin{aligned} &Z^{-\ast}(t)=z,\\
&Z^{-\ast}(s)=z',\\
&Z^{\ast}(t)=Z^{\ast}(s)=j,\\
&a,b^c,\lambda_0^c\end{aligned}}\end{aligned}}
\end{align}
\end{lemma}
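To make statement (2) of lemma \ref{lemma1} concrete, the flow map $g$ can be computed numerically by integrating the time-reversed ODE \eqref{ode}; the sketch below uses a generic placeholder for the conditional expectation function $q$ of \eqref{function-q} (the choice $q(j,z,t)=-z$ is for illustration only and is not derived from our model):
\begin{verbatim}
# Numerical sketch of the flow map g in statement (2) of Lemma 1:
# for fixed (j, z, t), integrate z'(s) = -q(j, z(s), t-s) over (0, t).
import numpy as np
from scipy.integrate import solve_ivp

def q(j, z, t):            # hypothetical stand-in for the function q
    return -z              # illustration only

def g(j, z, t):
    sol = solve_ivp(lambda s, y: -q(j, y, t - s),
                    (0.0, t), np.atleast_1d(z), rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]    # the end point g(j, z, t, t | a)
\end{verbatim}
The Jacobian $\mathcal{J}_{z|a}(t)$ appearing in \eqref{p-tilde} can then be approximated by finite differences of $g$ in the variable $z$.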
\begin{lemma}\label{lemma2}
$ $ \\
(1). For an absolutely continuous longitudinal process $Z$ with the Markovian property \eqref{markovian}, denote by $\{(\zeta_{1,k},\dots,\zeta_{N,k}):\,k\in\mathbb{N}\}$ the sequence generated from algorithm \ref{main algorithm 1} with respect to fixed $dt$, $N$, the initial density $p_Z(\cdot,0)$ and the conditional density $p(\epsilon(t)\in x+dx\mid Z(t),t)$, and define a stochastic process indexed by $i\in\{1,\dots,N\}$ as below:
\begin{equation}\label{dis emp}
S(i,t)=s_{i,\zeta_{i,\left\lfloor t/dt\right\rfloor}}
\end{equation}
then the following condition holds as $dt\rightarrow 0$ and $N\rightarrow \infty$ for every finite integer $m$, every sequence $t_1<\dots<t_m$ and every $m$-dimensional continuous function $f$:
\begin{equation}\label{weak conv}
\frac{1}{N}\sum_{i=1}^Nf(S(i,t_1),\dots,S(i,t_m))\rightarrow E(f(Z(t_1),\dots,Z(t_m)))
\end{equation}
In other words, algorithm \ref{main algorithm 1} generates empirically simulatible sequences for every Markovian absolutely continuous longitudinal process $Z$.
\noindent (2). For a longitudinal process $Z=(Z^{\ast},Z^{-\ast})$ with one counting component and absolutely continuous components satisfying the Markovian property \eqref{markovian}, let $\zeta=\{(\zeta_{1,k},\dots,\zeta_{N,k}):\,k\in\mathbb{N}\}$ be the sequence generated from algorithm \ref{main algorithm 2} with respect to fixed $dt$, $N$, the initial density $p_Z(\cdot,0)$ and the conditional density $p(\epsilon(t)\in x+dx\mid Z(t),t)$. The stochastic process constructed in \eqref{dis emp} with respect to $\zeta$ satisfies the weak convergence property \eqref{weak conv} as well, so algorithm \ref{main algorithm 2} generates empirically simulatible sequences for the process $(Z^{\ast},Z^{-\ast})$.
\noindent (3). Algorithm \ref{main algorithm 3} generates i.i.d. samples of $(Z_T,T)$ for every fixed model setup $\Omega$.
\end{lemma}
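As a quick numerical illustration of the weak-convergence property \eqref{weak conv} (a toy check, not part of the proof), consider the absolutely continuous process $Z(t)=Z_0e^{-t}$ with $Z_0\sim N(0,1)$, i.e., $\epsilon(t)=-Z(t)$, for which $\textrm{E}\,Z(t)^2=e^{-2t}$ is available in closed form:
\begin{verbatim}
# Toy check of the weak convergence in Lemma 2: Euler paths of
# Z'(s) = -Z(s) with Z0 ~ N(0,1); the empirical mean of f(Z(t)) with
# f(x) = x^2 should approach the exact value E Z(t)^2 = exp(-2t).
import numpy as np

rng = np.random.default_rng(1)
N, dt, t = 100_000, 1e-3, 1.0
z = rng.standard_normal(N)          # initial draws Z0
for _ in range(int(t / dt)):
    z += -z * dt                    # increment epsilon(s) ds, epsilon = -Z
print(np.mean(z**2), np.exp(-2*t))  # empirical average vs exact expectation
\end{verbatim}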
\begin{lemma}\label{lemma3}
Under condition $\mathbf{C1}$ and $\mathbf{C2}$, for every model setup $\Omega\not=\Omega_0$, the joint pdf of $(Z_T,T)$ associated with $\Omega$ is not identical to that associated with $\Omega_0$, i.e.
\begin{equation}\label{ineq}
p(j,z,t\mid \Omega_0)\not=p(j,z,t\mid \Omega)
\end{equation} for some $(j,z,t)$ in their domain. In other words, joint model \eqref{joint model} is identifiable.
\end{lemma}
\subsection{Proof of Lemma \ref{lemma1}}
Throughout this proof, we suppress $\Omega$ from the arguments of the functions $\rho$, $p_Z$, $p^c$, $p$ and $q$ because they are all fixed constants. In addition, when necessary, we use the superscripts $^{\ast}$ and $^{-\ast}$ to represent the values associated with the counting component and the absolutely continuous components, respectively.
The proof is decomposed into two parts. In the first part, we verify statements (2) and (3) of lemma \ref{lemma1}. In the second part, we decompose the proof of statement (1) into two cases: we first validate the expressions \eqref{joint density 0 stage}-\eqref{jump density 1 stage} in the simpler case where no counting component is involved in the longitudinal process, and then extend the proof to the complete case with the counting component added.
\begin{proof}[{\bf Part 1}:]
For statement {\bf (2)}, notice that
for every $j\in\mathbb{N}$ and $f\in C_{0}^{1}\left(\mathbb{R}^{\mathfrak{p}-1}\right)$, the following holds:\\
\begin{equation*}
\Scale[0.8]{
\begin{aligned}
\textrm{E}\left(f\left(Z^{-\ast}(t)\right)\right) = & \textrm{E}\left(f\left(Z^{-\ast}_0\right)\right)+
\textrm{E}\left(\intop_{0}^{t}f'\left(Z^{-\ast}(s)\right)
\epsilon(s)ds\right)\\
= & \textrm{E}\left(f\left(Z^{-\ast}_{0}\right)\right)+\intop_{0}^{t}
\textrm{E}\left(f'\left(Z^{-\ast}(s)\right)\epsilon(s)\right)ds\\
= & \textrm{E}\left(f\left(Z^{-\ast}_{0}\right)\right)+
\intop_{0}^{t}\textrm{E}\left(f'\left(Z^{-\ast}(s)\right)
q\left(j,Z^{-\ast}(s),s\right)\right)ds\\
= & \textrm{E}_{Z^{-\ast}_{0}}\left(f\left(z\right)\right)+
\intop_{0}^{t}\textrm{E}_{Z^{-\ast}(s)}\left(f'\left(z\right)
q\left(j,z,s\right)\right)ds
\end{aligned}}
\end{equation*}
If we define the following operator $\mathcal{A}$ over the set of all continuous-time
processes:
\[
\mathcal{A}Z^{-\ast}(t):=Z^{-\ast}_{0}+\intop_{0}^{t}q\left(j,Z^{-\ast}(s),s\right)ds
\]
then the process $\left\{ \mathcal{A}Z^{-\ast}(t)\right\} $ satisfies, for all $f\in C_{0}^{1}\left(\mathbb{R}^{\mathfrak{p}-1}\right)$:\\
\begin{equation*}
\Scale[0.8]{
\begin{aligned}
\textrm{E}\left(f\left(\mathcal{A}Z^{-\ast}(t)\right)\right) = & \textrm{E}\left(f\left(Z^{-\ast}_{0}\right)\right)+
\intop_{0}^{t}
\textrm{E}\left(f'\left(Z^{-\ast}(s)\right)
q\left(j,Z^{-\ast}(s),s\right)\right)ds\\
= & \textrm{E}_{Z^{-\ast}_{0}}\left(f\left(z\right)\right)+
\intop_{0}^{t}\textrm{E}_{Z^{-\ast}(s)}\left(f'\left(z\right)
q\left(j,z,s\right)\right)ds\\
= & \textrm{E}\left(f\left(Z^{-\ast}(t)\right)\right)\end{aligned}}
\end{equation*}
Therefore, we have the following fact.
{\bf Fact}: For every fixed $j$, if $\{Z^{-\ast}(t)\}$ is an absolutely continuous process, then $\{\mathcal{A}Z^{-\ast}(t)\}$
is a process equivalent to $\{Z^{-\ast}(t)\}$ in terms of the collection of
induced pdfs $p_{Z}(j,\cdot,t)$ for all $t$.
Consequently,
\[
\begin{aligned}
\textrm{E}_{Z^{-\ast}_{0}}\left(f\left(z\right)\right)&+
\intop_{0}^{t}\textrm{E}_{\mathcal{A}Z^{-\ast}(s)}\left(f'\left(z\right)
q\left(j,z,s\right)\right)ds=\\
&\textrm{E}_{Z^{-\ast}_{0}}
\left(f\left(z\right)\right)+\intop_{0}^{t}
\textrm{E}_{Z^{-\ast}(s)}\left(f'\left(z\right)q\left(j,z,s\right)\right)
ds\end{aligned}.
\]
By induction, for every $f\in C_{0}^{1}\left(\mathbb{R}^{\mathfrak{p}-1}\right)$:
\begin{equation}\label{iteration}
\Scale[0.8]{
\begin{aligned}
&\textrm{E}\left(f\left(\mathcal{A}^{n+1}Z^{-\ast}(t)\right)\right) \\
=& \textrm{E}\left(f\left(Z^{-\ast}_{0}\right)\right)+
\intop_{0}^{t}\textrm{E}\left(f'\left(\mathcal{A}^{n}Z^{-\ast}(s)\right)
q\left(j,\mathcal{A}^{n}Z^{-\ast}(s),s\right)\right)ds\\
=& \textrm{E}_{Z^{-\ast}_{0}}\left(f\left(z\right)\right)+
\intop_{0}^{t}\textrm{E}_{\mathcal{A}^{n}Z^{-\ast}(s)}
\left(f'\left(z\right)q\left(j,z,s\right)\right)ds\\
=& \textrm{E}_{Z^{-\ast}_{0}}\left(f\left(z\right)\right)+
\intop_{0}^{t}\textrm{E}_{Z^{-\ast}(s)}\left(f'\left(z\right)
q\left(j,z,s\right)\right)ds\\
= & \textrm{E}\left(f\left(Z^{-\ast}(t)\right)\right)
\end{aligned}}
\end{equation}
Consequently, if we start from a process $\{Z^{-\ast}(t)\}$ and iterate the operator $\mathcal{A}$, we always obtain a process equivalent
to $\{Z^{-\ast}(t)\}$ in its 1-dimensional marginal pdf for every $t$. By the existence theorem for solutions
of the initial value problem associated with $q$ (\cite{perko2013differential}), the sequence of processes $\left\{ \mathcal{A}^{n}Z^{-\ast}(t)\right\} _{n=0}^{\infty}$
converges pointwise to a degenerate process satisfying
\begin{equation}\label{YD}
Z^{\mathcal{D}}(t)=Z^{-\ast}_{0}+\int_{0}^{t}q\left(j,
Z^{\mathcal{D}}(s),s\right)ds.
\end{equation} The pointwise convergence implies the
equivalence in distribution $\{ p_Z(j,\cdot,t):\, t\in[0,\infty)\} $
between the limit process and the initial process $Z^{-\ast}(t)$.
The above integral equation is exactly the time reversal of the initial value problems \eqref{ode}.
Solving that equation and applying the change-of-variables formula to its solution
curves $g$, it is verified that $p_Z(j,\cdot,t)$ has the expression \eqref{p-tilde}. This completes the proof of statement (2).
For statement {\bf (3)}, using the definition of the Cox hazard function \eqref{joint model} in \cite{cox1972regression}, for every fixed $j$, $s<t$ and every trajectory $Z_{\omega}$ of the longitudinal process $Z$ such that $Z_{\omega}(t)=z$, $Z_{\omega}(s)=z'$ and $Z^{\ast}_{\omega}(\tau)\equiv j$, the following relation holds by \cite{andersen1982cox,andersen1992repeated}:
\begin{equation}\label{condition on single traj}
\textrm{Pr}\left(T>t\mid Z_{\omega}(\tau),\, \tau \in [s,t) \right)=\exp\left(-\int_s^t \lambda(\tau,Z_{\omega}(\tau))d\tau\right)
\end{equation}
Taking the conditional expectation of both sides of \eqref{condition on single traj} with respect to all longitudinal trajectories satisfying $Z^{\ast}(\tau)\equiv j$, $Z^{-\ast}(s)=z'$ and $Z^{-\ast}(t)=z$ establishes the relation for $\rho'$ in statement (3). The remaining three relations in statement (3) are verified by similar arguments.
\end{proof}
\begin{proof}[{\bf Part 2}:]
In the second part of the proof, we verify statement (1) of lemma \ref{lemma1}. We consider two cases:\\
\noindent i) the longitudinal process consists only of absolutely continuous components;\\
\noindent ii) there exists one extra counting component.
For case {\bf i)}, notice first that a $\mathfrak{p}$-dimensional longitudinal process with no counting component is equivalent to a $(\mathfrak{p}+1)$-dimensional longitudinal process with one counting component that is constantly zero. So lemma \ref{lemma1} holds in case i) if and only if the joint pdf of $(Z_T,T)$ in case i) has exactly the form \eqref{joint density 0 stage} with $\rho_c\equiv 1$. To verify this, we consider the probability of occurrence of the following event:
\begin{equation}\label{eventA}
A_{z,t,\delta}:=\textrm{Pr}\left\{ Z(t)\in\left(z-\delta,z\right),T<t\right\}
\end{equation}
By the definition of the function $\rho$ in \eqref{condition prob}, the validity of the joint pdf \eqref{joint density 0 stage} is clearly equivalent to the following identity:
\begin{equation} \label{A3_2}
A_{z,t,\delta}=\int_{z-\delta}^{z}\int_{0}^{t}\rho(x,s)\cdot \exp(b^{\top}x)\lambda_0(s)\cdot p_Z\left(x,s\right)dsdx.
\end{equation}
To verify \eqref{A3_2}, notice that
\begin{equation}
\Scale[0.8]{
\begin{aligned}
\left\{ Z(t)\in\left(z-\delta,z\right),T< t\right\}
= & \bigcap_{\Delta>0}\bigcup_{s_i\in S_n}\left\{ Z(s_{i})\in\left(z-\delta,z\right),s_{i}\leq T<s_{i+1}\right\} \\
= & \bigcap_{\Delta>0}\bigcup_{s_i\in S_n }\left(\left\{ Z(s_{i})\in\left(z-\delta,z\right),s_{i}\leq T\right\} \setminus\left\{ Z(s_{i})\in\left(z-\delta,z\right),s_{i+1}\leq T\right\} \right),
\end{aligned}}
\end{equation}
where $S_n=\left\{ s_{i}=i\cdot\Delta:i=0,\dots,n,n\cdot\Delta\leq t<\left(n+1\right)\cdot\Delta\right\}$. Therefore,
\begin{equation*} \label{A3_5}
\begin{aligned}
A_{z,t,\delta} = &\Scale[0.8]{\lim_{\Delta\rightarrow0}\sum_{i=0}^{n_{t,\Delta}}\left(\textrm{E}\left(\mathbf{1}_{\left\{ Z(s_{i})\in\left(z-\delta,z\right)\right\} }\cdot\mathbf{1}_{\left\{ s_{i}\leq T\right\} }\right)-\textrm{E}\left(\mathbf{1}_{\left\{ Z(s_{i})\in\left(z-\delta,z\right)\right\} }\cdot\mathbf{1}_{\left\{ s_{i}+\Delta\leq T\right\} }\right)\right)}\\
=&\Scale[0.8]{ \lim_{\Delta\rightarrow0}\sum_{i=0}^{n_{t,\Delta}}\left(\begin{aligned}
&\textrm{E}\left(\mathbf{1}_{\left\{ Z(s_{i})\in\left(z-\delta,z\right)\right\} }\cdot \textrm{E}\left(s_{i}\leq T|Z(s_{i})\right)\right)-\\
&\textrm{E}\left(\mathbf{1}_{\left\{ Z(s_{i})\in\left(z-\delta,z\right)\right\} }\cdot \textrm{E}\left(s_{i}+\Delta\leq T|Z(s_{i})\right)\right)\end{aligned}\right)}
\end{aligned}
\end{equation*}
Then, by the definitions of the functions $\rho$ and $\rho'$ in \eqref{condition prob}, the identity \eqref{A3_conclusion} holds:
\begin{equation}
\label{A3_conclusion}
\Scale[0.7]{
\begin{aligned}
&A_{z,t,\delta}\\
= & \lim_{\Delta\rightarrow0}\sum_{i=0}^{n_{t,\Delta}}\left(
\begin{gathered}
\int_{z-\delta}^{z}\rho
\left(x,s_{i}\right)\cdot p_Z
\left(x,s_{i}\right)dx\\
-\\
\int_{z-\delta}^{z}\int_{0}^{\infty}\rho'
\left(x,x+\int_{s_{i}}^{s_{i}+\Delta}
\epsilon(\tau)
d\tau,s_{i},s_{i}+\Delta\right)
d\textrm{Pr}\left(\int_{s_{i}}^{s_{i}+
\Delta}\epsilon(\tau)d\tau|Z(s_{i})=x\right)\cdot\rho
\left(x,s_{i}\right)\cdot p_Z
\left(x,s_{i}\right)dx
\end{gathered}
\right)\\
= & -\lim_{\Delta\rightarrow0}\sum_{i=0}^{n_{t,\Delta}}\frac{\int_{z-\delta}^{z}\left(
\int_{0}^{\infty}\rho'\left(x,x+\int_{s_{i}}^{s_{i}+
\Delta}\epsilon(\tau)d\tau,s_{i},s_{i}+\Delta\right)-
1\right)d\textrm{Pr}\left(\int_{s_{i}}^{s_{i}+\Delta}\epsilon_{\tau}d\tau|Z(s_{i})=x\right) \cdot\rho
\left(x,s_{i}\right)\cdot p_Z\left(x,s_{i}\right)dx}{\Delta}\cdot\Delta\\
= & -\lim_{\Delta\rightarrow0}\sum_{i=0}^{n_{t,\Delta}}\frac{\int_{z-\delta}^{z}\int_{0}^{\infty}\left(
\partial_2 \rho'\cdot\int_{s}^{s+\Delta}\epsilon(\tau)d\tau+\partial_4 \rho'\cdot\Delta\right)\left(x,x,s_i,s_i\right)
d\textrm{Pr}\left(\int_{s_{i}}^{s_{i}+\Delta}\epsilon(\tau)d\tau|Z(s_{i})=x\right)\cdot\rho
\left(x,s_{i}\right)\cdot p_Z\left(x,s_{i}\right)dx}{\Delta}\cdot\Delta\\
= & -\lim_{\Delta\rightarrow0}\sum_{i=0}^{n_{t,\Delta}}\frac{\int_{z-\delta}^{z}\left(\partial_2
\rho'\left(x,x,s_i,s_{i}\right)\cdot
\int_{0}^{\infty}\left(\int_{s_{i}}^{s_{i}+
\Delta}\epsilon(\tau)d\tau\right)d\textrm{Pr}\left(\int_{s_{i}}^
{s_{i}+\Delta}\epsilon(\tau)d\tau|Z(s_{i})=x\right)+\partial_4\rho'\left(x,x,s_{i},s_{i}\right)\cdot\Delta\right) p_Z\left(x,s_{i}\right)dx}{\Delta}\cdot\Delta\\
= & -\lim_{\Delta\rightarrow0}\sum_{i=0}^{n_{t,\Delta}}\int_{z-\delta}^{z}\left(\partial_2\rho'
\left(x,x,s_i,s_{i}\right)\cdot \textrm{E}\left(\frac{\int_{s_{i}}^{s_{i}+\Delta}\epsilon(\tau)d\tau}{\Delta}|Z(s_{i})=x\right)+\partial_4\rho'\left(x,x,s_i,s_{i}
\right)\right)\cdot\rho
\left(x,s_{i}\right)\cdot p_Z\left(x,s_{i}\right)dx\cdot\Delta\\
= & -\lim_{\Delta\rightarrow0}\sum_{i=0}^{n_{t,\Delta}}\int_{z-\delta}^{z}\left(\partial_2\rho'
\left(x,x,s_i,s_{i}\right)\cdot \textrm{E}\left(\epsilon(s_{i})\mid Z(s_{i})=x\right)+\partial_4\rho'\left(x,x,s_i,s_{i}
\right)\right)\cdot\rho
\left(x,s_{i}\right)\cdot p_Z\left(x,s_{i}\right)dx\cdot\Delta\\
= & -\lim_{\Delta\rightarrow0}\sum_{i=0}^{n_{t,\Delta}}\int_{z-\delta}^{z}\left(\partial_2\rho'
\left(x,x,s_i,s_{i}\right)\cdot q\left(x,s_{i}\right)+\partial_4\rho'\left(x,x,s_i,
s_{i}\right)\right)\cdot\rho
\left(x,s_{i}\right)\cdot p_Z\left(x,s_{i}\right)dx\cdot\Delta\\
= & \int_{z-\delta}^{z}\int_{0}^{t}-\left(\partial_2\rho'
\left(x,x,s,s\right)\cdot q\left(x,s\right)+\partial_4\rho'\left(x,x,s,s\right)\right) \cdot\rho
\left(x,s\right)\cdot p_Z\left(x,s\right)dsdx,
\end{aligned}}
\end{equation}
\noindent where we use the relation $\rho'(x,x,s,s)\equiv 1$ from definition \eqref{condition prob}, $\partial_l$ denotes the partial derivative operator with respect to the $l$th argument, and $q$ is the conditional expectation function of $\epsilon(t)$ defined in \eqref{function-q} with its dependence on $j$ and $\Omega$ suppressed.
\eqref{A3_conclusion} implies
\begin{equation}\label{p}
\Scale[0.8]{
p(z,t)=-\left(\partial_2\rho'
\left(z,z,t,t\right)\cdot q\left(z,t\right)+\partial_4\rho'\left(z,z,t,t\right)\right) \cdot\rho
\left(z,t\right)\cdot p_Z\left(z,t\right),
}
\end{equation}
from the definition of $\rho$ in \eqref{condition prob}, the meaning of the Cox hazard function \eqref{joint model} in \cite{cox1972regression} and \eqref{p}, we have
\begin{equation}\label{substitute}
\Scale[0.8]{
\lambda(t,z)=\frac{p(z,t)}{p_Z\left(z,t\right)\cdot\rho(z,t)}=-\partial_2\rho'
\left(z,z,t,t\right)\cdot q\left(z,t\right)-\partial_4\rho'\left(z,z,t,t\right)
}
\end{equation}
Therefore, combining the Cox hazard function in \eqref{joint model} and \eqref{substitute},
\begin{equation}
p(z,t)=p_Z(z,t)\cdot \rho(z,t)\cdot \exp(b^{\top}z)\lambda_0(t)
\end{equation}
that completes the proof for case {\bf i)}.
For case {\bf ii)}, the result follows by induction. First, when $j=0$, the joint pdf \eqref{joint density 0 stage} is justified by the argument of case i) and the definition of $\rho_c$ in \eqref{condition prob}. In addition, regardless of whether the terminal event occurs, the occurrence of the event that the counting component increases by $1$ can be completely modelled by case i), so the argument of case i) is directly applicable to verify the expression \eqref{jump density 0 stage}; this completes the verification for the step $j=0$.
When $j>0$, for every fixed $z'$ and $s$, the integrand in the first term of \eqref{joint density 1 stage} is nothing more than the joint density of the following five events:\\
\indent 1. $Z^{-\ast}(t)=z$,\\
\indent 2. $T=t$,\\
\indent 3. $Z^{\ast}(t)=j$,\\
\indent 4. $Z^{-\ast}(s)=z'$,\\
\indent 5. $J_j=s$;\\
\noindent where $J_j$ is the jump time of the counting component from stage $j-1$ to $j$. So, by integrating out $z'$ and $s$, the first term of \eqref{joint density 1 stage} gives the joint pdf of $(Z^{-\ast}_T=z,Z^{\ast}_T=j,T=t)$ when the system entered the stage $Z^{\ast}=j$ at some time before $t$. In contrast, the second term of \eqref{joint density 1 stage} gives the joint pdf of $(Z^{-\ast}_T=z,Z^{\ast}_T=j,T=t)$ when the system was initialized at the stage $Z^{\ast}=j$. Thus, adding the two terms together integrates out the effect of initialization and returns the complete joint pdf of $(Z^{-\ast}_T=z,Z^{\ast}_T=j,T=t)$, so expression \eqref{joint density 1 stage} is validated. Analogously, the validity of \eqref{jump density 1 stage} can be demonstrated in exactly the same way once we redefine the event of interest as the transition of the counting component from $j$ to $j+1$. This completes the proof of lemma \ref{lemma1}.
\end{proof}
\subsection{Proof of Lemma \ref{lemma2}}
\begin{proof}
In this proof, we first show statements (2) and (3) on the basis that statement (1) holds, and then sketch the proof of statement (1).
Given statement (1) and the functional form \eqref{jump intensity} of the jump hazard, we first consider a simple version of the longitudinal process in which the counting component can jump only once, i.e., the range of the counting component is the binary set $\{0,1\}$. In this case, statement (2) is verified directly by the result of statement (1) and the relation between the hazard function and the survival function \citep{andersen1982cox,andersen1992repeated}.
When multiple jumps exist, statement (2) can be verified by induction. In fact, when $j>1$, the longitudinal process whose counting component has at most $j$ jumps, denoted $L_j$, is equivalent in distribution to the composition of a longitudinal process whose counting component has at most $j-1$ jumps, denoted $L_{j-1}$, and a binary event process with hazard function specified through \eqref{jump intensity}, while for $L_{j-1}$ an empirically simulatible sequence has been generated such that \eqref{weak conv} holds. Then, to verify \eqref{weak conv} for $L_j$, it suffices to show that the occurrence of the binary event simulated by algorithm \ref{main algorithm 2} at every time $t$, given $L_{j-1}(t)=z$ and $L_{j-1}(s)=z'$, asymptotically follows the correct joint probability, which, by \eqref{jump intensity} and the relation between the hazard function and the survival function \citep{andersen1982cox,andersen1992repeated}, must be represented as an integral of \eqref{condition on single traj} over all trajectories ending at $z$ at time $t$ and passing through $z'$ at time $s$. In other words, it suffices to verify the identity \eqref{inductive verify}
\begin{equation}\label{inductive verify}
\Scale[0.8]{
\begin{aligned}
\frac{1}{N}\sum_{i=1}^N\exp&\left(-I(L_{i,j-1}(s+l\cdot dt)\in z+dz, L_{i,j-1}(s)\in z'+dz)\cdot\sum_{k=1}^l\exp
\left(b^{c\top}\cdot L_{i,j-1}(s+k\cdot dt)\right)\cdot \lambda_0^c(s+k\cdot dt)\right)\\
\rightarrow &
\textrm{E}\left(I( L_{\omega,j-1}(t)\in z+dz,L_{\omega,j-1}(s)\in z'+dz)\cdot \exp\left(-\int_s^t \lambda_0^c(\tau)\exp(b^{c\top}\cdot L_{\omega,j-1}(\tau))d\tau\right) \right)
\end{aligned}}
\end{equation}
as $dt\rightarrow 0$ and $N\rightarrow\infty$, where $l$ is the least integer such that $l\cdot dt+s>t$, $L_{\omega,j-1}$ denotes a sample trajectory of the longitudinal process $L_{j-1}$, $L_{i,j-1}$ denotes the $i$th sample sequence generated from algorithm \ref{main algorithm 2} for $L_{j-1}$, and $I$ denotes the indicator function. By the induction assumption, \eqref{inductive verify} holds. For the base case $j=1$, \eqref{inductive verify} has already been verified in the previous paragraph, so the proof of statement (2) is complete.
Note that the same argument as in the proof of statement (2) applies directly to prove statement (3).
\begin{remark}
The proofs of statements (2) and (3) rely only on the conclusion of statement (1) and do not depend on the Markovian condition \eqref{markovian} required by algorithm \ref{main algorithm 1}. In other words, as long as an empirically simulatible sequence can be generated for the absolutely continuous components of the longitudinal process, algorithms \ref{main algorithm 2} and \ref{main algorithm 3} remain applicable to generate the desired i.i.d. samples. So, in principle, the estimation proposed in this paper is extendible to more general settings.
\end{remark}
Finally, for statement (1), the weak convergence condition \eqref{weak conv} can be established in exactly the same way as the construction of numerical solutions to a stochastic differential equation; we refer the reader to the textbook \cite{karatzas2012brownian} for more details.
\end{proof}
\subsection{Proof of Lemma \ref{lemma3}}
\begin{proof}
The proof follows the induction steps of the construction of the joint pdf in lemma \ref{lemma1}. For simplicity, we first consider the case where the longitudinal process assigns positive mass to, but does not fully concentrate on, the event $Z^{\ast}(0)=0$. Under this assumption, we show that if some model setup $\Omega=(\tilde{a},\tilde{b},\tilde{b}^c,\tilde{\lambda}_0,\tilde{\lambda}_0^c)$ is also associated with the true joint pdf \eqref{joint density 0 stage}, then i) $\tilde{b}=b$, ii) $\tilde{\lambda}_0=\lambda_0$, iii) $\tilde{a}=a$, iv) $\tilde{b}^c=b^c$, v) $\tilde{\lambda}_0^c=\lambda_0^c$, where $a,\,b,\,b^c,\,\lambda_0,\lambda_0^c$ constitute the true model setup $\Omega_0$.\\
\noindent Proof for {\bf i)}: suppose $\tilde{b}\not=b$. Then, by condition $\mathbf{C2}$ and statements (1) and (3) of lemma \ref{lemma1}, we have
\begin{equation}\label{initial identity}
\Scale[0.8]{
p(j,z,0\mid \Omega_0)\exp(b^{-\ast\top}z+b^{\ast}j) \equiv p(j,z,0\mid \Omega)\exp(\tilde{b}^{-\ast\top}z+\tilde{b}^{\ast}j)
}
\end{equation}
Obviously, \eqref{initial identity} contradicts condition $\mathbf{C1}$ whenever $b\not=\tilde{b}$, which enforces that any $\Omega\not=\Omega_0$ inducing the same joint pdf must agree with $\Omega_0$ in the sub-coordinate $b$. Notice that \eqref{initial identity} also implies that when $\Omega\not=\Omega_0$ induces the same joint pdf, the initial pdfs of the longitudinal processes must satisfy $p(j,z,0\mid \Omega_0)\equiv p(j,z,0\mid \Omega)$.\\
\noindent Proof for {\bf ii)}: suppose $\tilde{\lambda}_0(t)\not=\lambda_0(t)$. By i), \eqref{joint density 0 stage} and the assumption that $Z^{\ast}(0)=0$ is assigned positive mass, the identity \eqref{condition-for-lambda} holds.
\begin{equation}\label{condition-for-lambda}
p_Z\left(0,z,t\mid \Omega_0\right)\cdot
\rho_c\left(0,z,t\mid \Omega_0\right)\cdot \rho\left(0,z,t\mid \Omega_0\right)\cdot \lambda_0(t)=p_Z\left(0,z,t\mid \Omega\right)\cdot
\rho_c\left(0,z,t\mid \Omega\right)\cdot \rho\left(0,z,t\mid \Omega\right)\cdot \tilde{\lambda}_0(t)
\end{equation}
Since $\Omega$ and $\Omega_0$ are associated with the same joint pdf of $Z_T$ and $T$, identity \eqref{joint survival} must hold as well.
\begin{equation}\label{joint survival}
\Scale[0.8]{
\int_{\mathbb{R}^{\mathfrak{p}-1}}p_Z\left(0,z,t\mid \Omega_0\right)\cdot
\rho_c\left(0,z,t\mid \Omega_0\right)\cdot \rho\left(0,z,t\mid \Omega_0\right)dz\equiv \int_{\mathbb{R}^{\mathfrak{p}-1}}p_Z\left(0,z,t\mid \Omega\right)\cdot
\rho_c\left(0,z,t\mid \Omega\right)\cdot \rho\left(0,z,t\mid \Omega\right)dz},
\end{equation}
This is because both sides of \eqref{joint survival} give the survival probability of $T$ on the event $Z^{\ast}\equiv 0$, which is completely determined by the joint pdf of $(Z_T,T)$ at stage $0$. Hence, under the assumption that $\Omega$ and $\Omega_0$ correspond to exactly the same joint pdf, equation \eqref{condition-for-lambda} enforces $\tilde{\lambda}_0=\lambda_0$.\\
\noindent Proof for {\bf iii)}: Suppose there exists $\tilde{a}\not=a$ for which the joint pdf \eqref{joint density 0 stage} is identical. Without loss of generality, we assume that $\mathbf{C2}$ (ii) holds at $t=0$. If $\mathbf{C2}$ (ii) holds at some $t^{\ast}>0$ instead, the following proof is essentially the same; the only modification is to replace $p_Z(0,z,0\mid \Omega_0)$ with $p_Z(0,z,t^{\ast}\mid \Omega_0)\cdot \rho(0,z,t^{\ast}\mid a,b,\lambda_0)\cdot\rho_c(0,z,t^{\ast}\mid a,b^c,\lambda_0^c)$.
\noindent In fact, when the joint pdfs are identical for $a\not=\tilde{a}$, \eqref{weak-identity} holds
\begin{equation}\label{weak-identity}
\Scale[0.9]{
\begin{aligned}
&p_Z\left(0,z,0\mid \Omega_0\right)\mathcal{J}_{g^{-1}(0,z,0,t|a)|a}(t)\rho(0,g^{-1}(0,z,0,t|a),t\mid\Omega_0)\rho_c(0,g^{-1}(0,z,0,t|a),t\mid\Omega_0)
\lambda_0(t)\\
&=
p_Z\left(0,\tilde{z},0\mid \Omega_0\right)\mathcal{J}_{g^{-1}\left(0,\tilde{z},0,t\mid \tilde{a}\right)|\tilde{a}}(t)\rho(0,g^{-1}\left(0,\tilde{z},0,t\mid \tilde{a}\right),t\mid\Omega)\rho_c(0,g^{-1}\left(0,\tilde{z},0,t\mid \tilde{a}\right),t\mid\Omega)
\tilde{\lambda}_0(t).
\end{aligned}}
\end{equation}
for all pairs $(z,\tilde{z})$ such that $z=g\left(0,g^{-1}\left(0,\tilde{z},0,t\mid \tilde{a}\right),t,t\mid a\right)$, where $g^{-1}\left(0,z,s,t\mid a\right)$ denotes
the inverse trajectory of $g$, defined through the relation
\begin{equation}\label{g-inverse}
g\left(0,g^{-1}\left(0,z,s,t\mid a\right),s+t,t\mid a\right)=z.
\end{equation} Dividing \eqref{weak-identity} by $\mathcal{J}_{g^{-1}(0,z,0,t|a)|a}(t)\rho(0,g^{-1}(0,z,0,t|a),t\mid\Omega_0)\rho_c(0,g^{-1}(0,z,0,t|a),t\mid\Omega_0)
\lambda_0(t)$ and taking the limit as $t\rightarrow 0$ yields the following identity:
\begin{equation}\label{invariant-measure}
p_Z(0,\mathcal{T}_{r}(z),0\mid \Omega_0)\cdot\mathcal{J}_{\mathcal{T}_r}(z)=p_Z(0,z,0\mid \Omega_0)
\end{equation}
where for every $r\in \mathbb{R}$, the map $\mathcal{T}_{r}:\mathbb{R}^{\mathfrak{p}-1}\rightarrow \mathbb{R}^{\mathfrak{p}-1}$ is the diffeomorphism obtained from solving the ODE system:
\begin{equation}\label{ode2}
z'=q\left(0,z,0\mid a\right)-q\left(0,z,0\mid \tilde{a}\right),
\end{equation}
that is, $\mathcal{T}_r(z_0)$ is the point reached at time $r$ by the trajectory starting at $z_0$ that solves \eqref{ode2}, and $\mathcal{J}_{\mathcal{T}_r}$ is the Jacobian associated with $\mathcal{T}_r$. In the language of ergodic theory, Eq. \eqref{invariant-measure} implies that the probability measure corresponding to the initial pdf $p_Z(0,\cdot,0\mid \Omega_0)$ is invariant under the $\mathbb{R}$-action on the space $\mathbb{R}^{\mathfrak{p}-1}$ induced by $\mathcal{T}$, the family of solutions to \eqref{ode2}. However, under condition $\mathbf{C2}$ (ii), the action $\mathcal{T}$ associated with the pair $a$ and $\tilde{a}$ does not admit any invariant probability measure fully supported on $\mathbb{R}^{\mathfrak{p}-1}$ unless $\tilde{a}=a$. This contradiction guarantees $\tilde{a}=a$.\\
\noindent Combining i) and iii), we have
\begin{equation}\label{pz identity}
p_Z(j,z,t\mid\Omega)\equiv p_Z(j,z,t\mid \Omega_0)
\end{equation}
and the identity
\begin{align}\label{rho identity}
&\rho(\cdot\mid\Omega)\equiv\rho(\cdot\mid\Omega_0),\\ &\rho'(\cdot\mid\Omega)\equiv\rho'(\cdot\mid\Omega_0)
\end{align}
whenever $\Omega$ and $\Omega_0$ induce the same joint pdfs \eqref{joint density 0 stage} and \eqref{joint density 1 stage}. The identities for $\rho$ and $\rho'$ hold because, by statement (3) of lemma \ref{lemma1}, they are completely determined by $b$, $\lambda_0$, the trajectory information encoded in the parameter $a$, and the prescribed stage of the counting component.\\
\noindent Proof for {\bf iv)} and {\bf v)}: Using identities \eqref{pz identity} and \eqref{rho identity} and the assumption that $\Omega$ and $\Omega_0$ are associated with the same joint pdfs \eqref{joint density 0 stage} and \eqref{joint density 1 stage}, the following identity follows immediately:
\begin{equation}
\rho_c(0,z,t\mid \Omega)\equiv \rho_c(0,z,t\mid \Omega_0)
\end{equation}
which, together with the statement (3) of lemma \ref{lemma1}, implies that
\begin{equation}\label{intensity equ}
\exp((\tilde{b}^{c,-\ast})^{\top}z)\tilde{\lambda}_0^c(t)=
\exp((b^{c,-\ast})^{\top}z)\lambda_0^c(t)
\end{equation}
\eqref{intensity equ} implies the identities $\tilde{b}^{c,-\ast}=b^{c,-\ast}$ and $\tilde{\lambda}_0^c\equiv\lambda_0^c$. Using statement (3) of lemma \ref{lemma1} once again, the identity in $b^{c,-\ast}$ and $\lambda_0^c$ enforces the identity $\rho_c'(0,z',z,s,t\mid\Omega)\equiv\rho_c'(0,z',z,s,t\mid\Omega_0)$,
which furthermore enforces the identity of \eqref{jump density 0 stage} between $\Omega$ and $\Omega_0$. Consequently, the first summand in \eqref{joint density 1 stage} must be identical for $\Omega$ and $\Omega_0$. Combining this with all the identities of $a$, $b$, $\lambda_0$, $\lambda_0^c$ and $b^{c,-\ast}$, $\tilde{b}^{c,\ast}=b^{c,\ast}$ is guaranteed.
Finally, if the initial $p_Z(\cdot\mid \Omega_0)$ assigns zero mass to $Z^{\ast}(0)=0$, we can still adopt exactly the same proof as above; the only modification is to replace $j=0$ by $j'$, where $j'$ is the smallest positive integer with $p_Z(j',\cdot,0\mid \Omega_0)>0$; such a $j'$ must exist because $p_Z(\cdot,\cdot,0\mid \Omega_0)$ is a well-defined probability density function. This completes the proof of lemma \ref{lemma3}.
\end{proof}
\section{Tables \& Figures}
\subsection{Tables}
\begin{minipage}{\linewidth}
\begin{center}
\captionof{table}{Fitting Performance for $\mathcal{M}$ and $\Sigma$}\label{table: 1}
\centering
\resizebox{8cm}{4.5cm}{%
\begin{tabular}{llll|lll}
\hline
& n=100 & & & n=200 & & \tabularnewline
\hline
Var & Bias & SSE & 95\% CP & Bias & SSE & 95\% CP\tabularnewline
\hline
$\mu_{11}$ & 0.013 & 0.208 & 0.408 & 0.021 & 0.208 & 0.408\tabularnewline
$\mu_{12}$ & 0.006 & 0.218 & 0.427 & 0.006 & 0.222 & 0.435\tabularnewline
$\mu_{13}$ & -0.017 & 0.209 & 0.41 & 0.004 & 0.223 & 0.437\tabularnewline
$\mu_{14}$ & -0.002 & 0.197 & 0.386 & 0.012 & 0.201 & 0.394\tabularnewline
$\mu_{15}$ & -0.003 & 0.226 & 0.443 & 0.011 & 0.204 & 0.4\tabularnewline
$\mu_{16}$ & 0.009 & 0.203 & 0.398 & 0.014 & 0.203 & 0.398\tabularnewline
$\mu_{21}$ & -0.056 & 0.191 & 0.374 & -0.066 & 0.2 & 0.392\tabularnewline
$\mu_{22}$ & 0.039 & 0.206 & 0.404 & 0.035 & 0.191 & 0.374\tabularnewline
$\mu_{23}$ & -0.01 & 0.199 & 0.39 & -0.015 & 0.194 & 0.38\tabularnewline
$\mu_{24}$ & -0.019 & 0.199 & 0.39 & -0.003 & 0.188 & 0.368\tabularnewline
$\mu_{25}$ & 0.018 & 0.183 & 0.359 & -0.004 & 0.182 & 0.357\tabularnewline
$\mu_{26}$ & 0 & 0.215 & 0.421 & -0.026 & 0.207 & 0.406\tabularnewline
$\sigma_{11}^{2}$ & 0.118 & 0.272 & 0.533 & 0.099 & 0.234 & 0.459\tabularnewline
$\sigma_{12}^{2}$ & 0.107 & 0.244 & 0.478 & 0.085 & 0.248 & 0.486\tabularnewline
$\sigma_{13}^{2}$ & 0.097 & 0.259 & 0.508 & 0.097 & 0.214 & 0.419\tabularnewline
$\sigma_{14}^{2}$ & 0.102 & 0.26 & 0.51 & 0.075 & 0.222 & 0.435\tabularnewline
$\sigma_{15}^{2}$ & 0.096 & 0.273 & 0.535 & 0.117 & 0.238 & 0.466\tabularnewline
$\sigma_{16}^{2}$ & 0.087 & 0.249 & 0.488 & 0.109 & 0.25 & 0.49\tabularnewline
$\sigma_{21}^{2}$ & 0.053 & 0.219 & 0.429 & 0.038 & 0.219 & 0.429\tabularnewline
$\sigma_{22}^{2}$ & 0.067 & 0.211 & 0.414 & 0.025 & 0.216 & 0.423\tabularnewline
$\sigma_{23}^{2}$ & 0.077 & 0.207 & 0.406 & 0.074 & 0.219 & 0.429\tabularnewline
$\sigma_{24}^{2}$ & 0.051 & 0.222 & 0.435 & 0.041 & 0.214 & 0.419\tabularnewline
$\sigma_{25}^{2}$ & 0.072 & 0.222 & 0.435 & 0.07 & 0.205 & 0.402\tabularnewline
$\sigma_{26}^{2}$ & 0.068 & 0.224 & 0.439 & 0.039 & 0.211 & 0.414\tabularnewline
\hline\hline
\end{tabular}%
}
\end{center}
\end{minipage}\\
\vspace{1cm}
\begin{minipage}{\linewidth}
\begin{center}
\captionof{table}{Fitting Performance for $b$ and $b^c$}\label{table: 2}
\centering
\resizebox{8cm}{3.5cm}{%
\begin{tabular}{llll|lll}
\hline\hline
 & \multicolumn{3}{l|}{$n=100$} & \multicolumn{3}{l}{$n=200$}\tabularnewline
\hline
Var & Bias & SSE & 95\% CP & Bias & SSE & 95\% CP\tabularnewline
\hline
$b_{1}$ & 0.022 & 0.199 & 0.39 & 0.038 & 0.221 & 0.433\tabularnewline
$b_{2}$ & 0.007 & 0.199 & 0.39 & 0 & 0.211 & 0.414\tabularnewline
$b_{3}$ & 0.004 & 0.209 & 0.41 & 0.007 & 0.216 & 0.423\tabularnewline
$b_{4}$ & 0.006 & 0.206 & 0.404 & 0.024 & 0.205 & 0.402\tabularnewline
$b_{5}$ & 0.01 & 0.22 & 0.431 & 0.001 & 0.212 & 0.416\tabularnewline
$b_{6}$ & 0.006 & 0.226 & 0.443 & 0.01 & 0.218 & 0.427\tabularnewline
$b_{7}$ & 0.005 & 0.199 & 0.39 & 0.003 & 0.196 & 0.384\tabularnewline
$b_{1}^{c}$ & 0.005 & 0.227 & 0.445 & 0.022 & 0.219 & 0.429\tabularnewline
$b_{2}^{c}$ & 0.001 & 0.203 & 0.398 & -0.024 & 0.211 & 0.414\tabularnewline
$b_{3}^{c}$ & 0.007 & 0.199 & 0.39 & -0.001 & 0.206 & 0.404\tabularnewline
$b_{4}^{c}$ & 0.017 & 0.207 & 0.406 & -1.018 & 0.181 & 0.355\tabularnewline
$b_{5}^{c}$ & 0.011 & 0.2143 & 0.42 & 0.995 & 0.218 & 0.427\tabularnewline
$b_{6}^{c}$ & 0.003 & 0.229 & 0.449 & 0.623 & 0.227 & 0.445\tabularnewline
$b_{7}^{c}$ & 0.022 & 0.232 & 0.455 & 0.991 & 0.206 & 0.404\tabularnewline
\hline\hline
\end{tabular}%
}
\end{center}
\end{minipage}\\
\vspace{1cm}
\begin{minipage}{\linewidth}
\begin{center}
\captionof{table}{Estimated $b$ and $b^c$ for renrendai data}\label{table: 3}
\centering
\resizebox{7cm}{4.5cm}{%
\begin{tabular}{lll}
\hline\hline
Var & $b$ & $b^{c}$\tabularnewline
\hline
$Z_{1}$ Term & 0 & 0\tabularnewline
$Z_{2}$ Interest Rate & 0 & 0\tabularnewline
$Z_{3}$ Principal & 0 & 0\tabularnewline
$Z_{4}$ Age & 0 & 0\tabularnewline
$Z_{5}$ Credit Score & 0 & 0\tabularnewline
$Z_{6}$ Education & 0 & 0\tabularnewline
$Z_{7}$ Income & 0 & 0\tabularnewline
$Z_{8}$ Married & 0 & 0\tabularnewline
$Z_{9}$ Divorce & 0 & -0.157\tabularnewline
$Z_{10}$Unpaid Car Loan & 0 & 0.012\tabularnewline
$Z_{11}$Car Owned & 0 & 0\tabularnewline
$Z_{12}$Unpaid Mortgage & 0 & 0\tabularnewline
$Z_{13}$House Owned & 0 & 0\tabularnewline
$Z_{14}$Clerk & 0.088 & 0.006\tabularnewline
$Z_{15}$Self Employed & 0 & 0.091\tabularnewline
$Z_{16}$Business Owner/Manager & 0 & 0.298\tabularnewline
$Z_{17}$Company Scale & 0 & 0.02\tabularnewline
$Z_{18}$Local GDP & 0.617 & -0.066\tabularnewline
$Z_{19}$Local Housing Price & 0.667 & 0\tabularnewline
$Z_{20}$Irregular Payments & -0.037 & 0\tabularnewline
\hline\hline
\end{tabular}%
}
\end{center}
\end{minipage}\\
\subsection{Figures}
\begin{minipage}\linewidth
\begin{center}
\includegraphics[width=15cm,height=16cm]{lambda_fitting_eps}
\captionof{figure}{Estimated $\int_{0}^{t}\hat{\lambda}_0(\tau)d\tau$, $\int_{0}^{t}\hat{\lambda}_0^c(\tau)d\tau$ v.s. True $\int_{0}^{t}\lambda_0(\tau)d\tau$, $\int_{0}^{t}\lambda_0^c(\tau)d\tau$}\label{fig: fitting}
\end{center}
\end{minipage}\\
\vspace{1cm}
\begin{minipage}\linewidth
\begin{center}
\includegraphics[width=15cm,height=10cm]{evaluate_fitting_with_empirical_CI_e3_eps}
\captionof{figure}{Estimated $\int_{0}^{t}\hat{\lambda}_0(\tau)d\tau$, $\int_{0}^{t}\hat{\lambda}_0^c(\tau)d\tau$ for renrendai data}\label{fig: real}
\end{center}
\end{minipage}\\
\end{appendix}
\section*{ACKNOWLEDGMENTS}
We would like to thank Charlie Guinn for discussions about calibration and tomography techniques.
This work is funded in part by EPiQC, an NSF Expedition
in Computing, under grants CCF-1730082/1730449; in part
by STAQ under grant NSF Phy-1818914; in part by NSF
Grant No. 2110860; by the US Department of Energy Office
of Advanced Scientific Computing Research, Accelerated
Research for Quantum Computing Program; and in part by
NSF OMA-2016136 and the Q-NEXT DOE NQI Center. SS is supported by the Department of Defense (DoD) through the National Defense Science \& Engineering Graduate Fellowship (NDSEG) Program. FTC is Chief Scientist at Super.tech and an
advisor to Quantum Circuits, Inc.
\input{txt/appendix}
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec_intro}
Quantum computers have the potential to solve problems currently intractable for conventional computers \cite{shor1999polynomial}, but current computations are limited by errors \cite{preskill2018nisq}, particularly when interacting two qubits to perform a quantum gate operation. This is not surprising, as qubits are engineered to preserve quantum state and be isolated from the environment, but a quantum operation is a moment in time where external control is applied from the environment to deliberately alter a qubit's state. To accomplish low-error gates, the control mechanisms are carefully designed and the control signals are calibrated for each qubit or pair of qubits.
Similar to how classical computers use a small set of classical logic gates (AND, OR, NOT, XOR...) as building blocks for larger circuits, current superconducting quantum devices typically only directly support a universal gate set consisting of a few two-qubit (2Q) gates and a continuous set of single-qubit (1Q) gates. This paper will refer to the set of directly supported quantum gates as \textit{basis gates}. In the space of 2Q gates (see Fig. \ref{fig:weyl_chamber}), any point that does not coincide with SWAP or Identity has nonzero entangling power. Any of these 2Q entangling gates can achieve universal computation when added to a continuous set of 1Q gates \cite{bremner2002practical}.
\begin{figure}
\centering
\scalebox{0.6}{
\includesvg{figs/weyl-plain.svg}
}
\caption{The Weyl chamber of 2Q quantum gates, explained in Section \ref{bg_kak}. The non-local part of a 2Q gate is fully described by its position in the Weyl chamber. As the duration of an entangling gate pulse increases, the 2Q gate evolves, traversing a Cartan trajectory in the Weyl chamber. CNOT and CZ are both represented by $(\frac{1}{2},0,0)$. The SWAP gate is at the top vertex $(\frac{1}{2},\frac{1}{2},\frac{1}{2})$. On the bottom surface, $(t_x,t_y,0)$ and $(1-t_x, t_y, 0)$ represent the same equivalent class of gates. For example, the two points $I_0 = (0,0,0)$ and $I_1 = (1,0,0)$ both represent the 2Q identity gate $I$.}
\label{fig:weyl_chamber}
\end{figure}
Using the minimal set of gates needed for universal computing is rarely a desirable thing to do. For example, while the NAND is universal in classical computing, building circuits from it alone is less efficient than using a larger set of logic gates. However, the intensive calibrations necessary for
high fidelity 2Q gates between qubits in a large quantum computer make it impractical to support a large set of 2Q basis gates. All logical 2Q gates scheduled to run on a quantum computer have to be decomposed by its compiler into alternating layers of pre-calibrated 1Q and 2Q basis gates. Thus, the choice of which 2Q gates to directly support is critical to enabling high-performance quantum computing. On the hardware side, the 2Q basis gates must have high-fidelity hardware implementations. On the software side, they must enable the low-depth decomposition of other 2Q gates.
Superconducting qubits support XX- and XY-type 2Q interactions \cite{Abrams2020, Kwon2021}. The strength of each of these interactions depends on the type of coupling, the coupling strength, and the frequency detuning between the qubits \cite{Kwon2021}. The Weyl chamber space of 2Q gates (Fig. \ref{fig:weyl_chamber}) is a useful way to visualize these interactions: the coordinates of a gate correspond to its non-local part in Cartan's KAK decomposition (see Section \ref{bg_kak}). In the Weyl chamber, gates in the XX family form a straight trajectory from Identity to CNOT/CZ, while gates in the XY family form a trajectory from Identity to iSWAP. Cartan trajectories are generated by increasing the duration of an entangling gate pulse, which evolves the 2Q gate.
Section \ref{deviation} describes the various difficulties associated with reliably performing standard 2Q gates like CZ and iSWAP on today's superconducting quantum computers. Whether a deviation from a standard Cartan trajectory is a 2Q error depends on what the target 2Q gate is. If the target 2Q gate has to be a certain standard gate (e.g. iSWAP), even a small amount of coherent crosstalk between the two qubits will cause the gate to have a small CZ component, and so the gate will not be identically iSWAP. However, if that coherent crosstalk is a stable systematic that does not add noise or cause decoherence, the gate could still be an effective, high fidelity entangler that is useful for computing; the target 2Q gate would just have to be a nonstandard unitary rather than the standard iSWAP. In Section \ref{deviation} we also show an example of an experimentally measured Cartan trajectory of 2Q gates on a superconducting qubit device. The measured Cartan trajectory includes very fast nonstandard 2Q gates with high entangling power, but it does not pass through traditional basis gates like iSWAP and CZ. This example motivates us to develop methods that enable the use of nonstandard 2Q gates for quantum computing. If we allow for deviations from standard 2Q gates, we can use high fidelity, non-standard quantum hardware for practical quantum computing.
Using nonstandard 2Q basis gates requires methods for identifying good basis gates on a general 2Q gate trajectory, calibrating a nonstandard gate, and compiling with nonstandard basis gates. The primary focus of our work is to construct and demonstrate a framework for efficiently identifying a ``good'' set of 2Q basis gates from a nonstandard trajectory. But we also propose solutions for calibration and compilation.
What are our standards for a good set of 2Q basis gates? Following the principle of Amdahl’s Law, we pay most attention to the SWAP gate as a target for synthesis because of its importance for communication within programs executing on superconducting devices. To mitigate crosstalk and satisfy other hardware constraints, superconducting devices usually have the sparse connectivity of a grid lattice or a hexagonal lattice. Therefore, the compiler has to schedule a series of SWAP gates before it can interact two qubits that are not adjacent to each other. Although clever mapping from logical to physical qubits can result in a smaller number of inserted SWAP gates, we still observe a high proportion of SWAP gates in post-mapping quantum circuits. Besides efficient synthesis of the SWAP gate, our framework also allows one to prioritize other target gates, including but not limited to CNOT, iSWAP, and the B gate. It also enables the simultaneous prioritization of multiple target gates.
The calibration of a non-standard 2Q basis gate requires identifying a gate duration that gives an ideal basis gate and then accurately characterizing the corresponding gate so that we use the right unitary for compiling. Our proposed calibration protocol addresses both without causing a long downtime on a quantum device. However, we point out that in order to \textit{precisely} characterize a non-standard gate, one should consider using gate set tomography (GST) as opposed to quantum process tomography (QPT). The data collected from GST experiments may require several hours of classical processing. Before that finishes, one would have to use the calibration results from the previous cycle. The speedup of GST's classical processing, which is an active field of research \cite{pygsti}, would help reduce the cost of calibrating non-standard gates. In addition, we observed that the systematic deviations are stable over days (Fig. \ref{fig:Expt_Fig2}). If the change in deviation is negligible, one may not need to apply GST in every calibration cycle.
Compiling with non-standard 2Q basis gates requires a conversion from arbitrary 2Q gates into the basis gates. There is no general analytical formula that works for arbitrary target and basis gates, so a numerical search is needed. However, we can analytically obtain information on the minimum circuit depth needed for a perfect synthesis and use it to facilitate the numerical search. Besides, the circuits that synthesize common gates from the basis gates can be pre-computed after each calibration cycle, so that one does not need to re-compute them for every program.
Our contributions are summarized below.
\begin{itemize}
\item
Our work is the first to consider using 2Q basis gates from general non-standard gate trajectories that are not parametrized by a simple function.
\item We provide a theoretical framework for identifying and visualizing the set of good 2Q basis gates, given a set of target 2Q gates to prioritize. With an emphasis on SWAP, we characterize the sets of gates that enable the synthesis of SWAP in 1, 2, and 3 layers, respectively. As another example, we visualize the gates that are able to both synthesize SWAP in 3 layers and CNOT in 2 layers. After identifying the volume of desirable basis gates in the Weyl chamber, one can select the first intersection of the trajectory with the volume as the 2Q basis gate. (Section \ref{sec_theory})
\item We propose a practical calibration protocol that is agnostic as to whether a 2Q gate is standard or non-standard. (Section \ref{sec_calibration})
\item We discuss a practical approach to compiling with non-standard 2Q basis gates. (Section \ref{sec_compile})
\item We apply our methods to a case study entangling gate architecture with far-detuned transmon qubits \cite{hamiltonian_source}. First, we use our theoretical framework to select 2Q basis gates from simulated nonstandard Cartan trajectories that are realistic for this case study architecture. By increasing the entangling pulse drive amplitude we get a significant 2Q basis gate speedup but introduce a deviation into the Cartan trajectory. Then we use these 2Q basis gates to run a variety of benchmark circuits including BV\cite{BV}, QAOA\cite{qaoa}, the QFT adder\cite{qft}, and the Cuccaro Adder\cite{cuccaro2004new}, and compare to the results from using the $\sqrt{iSWAP}$ gate on the standard XY-type trajectory. (Section \ref{case_study})
\end{itemize}
\section{Case study: entangling fixed \\ frequency far-detuned \\ transmons with a tunable \\ coupler}\label{case_study}
\subsection{Introduction to the case study entangling gate architecture}
Many efforts are being made in industry and academia to design a 2Q entangling gate architecture that can be used for scaling up to a general quantum computer \cite{ibmCR,foxen,sung2020realization}. The all-microwave cross-resonance gate was recently used by IBM to do a high fidelity CNOT gate in 90 ns \cite{ibmCR}, but to suppress the always-on ZZ crosstalk mentioned in Section \ref{deviation}, precise crosstalk cancellation pulses applied to both qubits during run time were required, adding complexity to the architecture. Google Quantum AI and MIT have each developed entangling gate architectures for high fidelity CZ and iSWAP gates, with Google's architecture supporting a continuous set of these standard gates \cite{foxen,sung2020realization}. Google's architecture requires all qubits and couplers to be flux-tunable, which adds complexity and additional sources of leakage and noise to their architecture. Similarly, in order to suppress the always-on ZZ crosstalk, MIT's architecture requires one qubit per pair to be tunable as well as the coupler.
The unit cell of our case study entangling gate architecture is a pair of qubits and a coupler. This unit cell, first proposed in \cite{hamiltonian_source}, was designed to perform a diverse set of 2Q gates, including iSWAP and CZ; the full list of 2Q gates can be found in Table 1 of \cite{hamiltonian_source}. The two qubits are fixed frequency transmon qubits; the benefits of fixed frequency transmons are that they are easy to fabricate and can be reliably engineered to have high coherence $> 100$ us \cite{place}. The two qubits are also far detuned from each other so there is reduced single qubit control crosstalk. The coupler is a generalized flux qubit which has been designed to have several good properties for qubit control. Notably, because the coupler's positive anharmonicity has been designed to balance out the negative anharmonicity of the two qubits, the eigenspectrum of this architecture's unit cell can support a zero-ZZ crosstalk bias point. This architecture is relatively simple to implement because fixed frequency transmons have high coherence, there is only one flux-tunable element in the unit cell (the coupler), and it is easy to bias the unit cell to zero-ZZ crosstalk.
A model Hamiltonian of the two qubits coupled with a tunable coupler is shown in Appendix A. Here we highlight the time-dependent term $\hat{H}_c(t)$, which describes the coupler dynamics:
\begin{equation}
\hat{H}_c(t) = \omega_c(t) \hat{c}^\dagger \hat{c} + \frac{\alpha_c}{2} \hat{c}^{\dagger 2} \hat{c}^2
\label{eqn:hamiltonian}
\end{equation}
where $\alpha_c$ is the coupler anharmonicity, $\hat{c}$ is the annihilation operator and the coupler frequency $\omega_c(t)$, corresponding to the transition to its first excited state, can be varied in time via the flux through its superconducting loop. Low-crosstalk 2Q gates are realized by AC modulating this coupler frequency after DC biasing it to the zero-ZZ crosstalk bias point.
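To make this concrete, the sketch below assembles just this coupler term in QuTiP. It is a minimal illustration under invented parameter values (\texttt{omega\_c0}, \texttt{d\_omega}, \texttt{omega\_d}, and \texttt{alpha\_c} are placeholders, not the device values), and the qubit and coupling terms of the full Hamiltonian in Appendix A are omitted.
\begin{verbatim}
# Minimal QuTiP sketch of the coupler term above. All numbers are
# illustrative placeholders; the qubit and coupling terms are omitted.
import numpy as np
from qutip import destroy

N = 4                        # coupler truncation level
c = destroy(N)

alpha_c = 2 * np.pi * 0.5    # positive coupler anharmonicity (GHz)
omega_c0 = 2 * np.pi * 5.0   # DC bias point of the coupler frequency
d_omega = 2 * np.pi * 0.3    # excursion set by the AC flux amplitude xi
omega_d = 2 * np.pi * 2.0    # entangling pulse (drive) frequency

# Static part: anharmonic oscillator at the zero-ZZ DC bias point
H0 = omega_c0 * c.dag() * c + 0.5 * alpha_c * c.dag() ** 2 * c ** 2

# Time-dependent part: omega_c(t) = omega_c0 + d_omega * cos(omega_d * t),
# in the list format accepted by qutip.sesolve / qutip.propagator
H = [H0, [c.dag() * c, lambda t, args: d_omega * np.cos(omega_d * t)]]
\end{verbatim}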
\begin{figure}
\centering
\includegraphics[scale=0.4]{figs/figure_device.png}
\caption{(a) Optical image of the device presented in \cite{experimental_work} shows two fixed frequency transmons coupled via a tunable coupler. (b) Schematic for modelling the device adapted from \cite{hamiltonian_source}.}
\label{fig:device}
\end{figure}
In \cite{experimental_work} an early prototype device (shown in Fig. \ref{fig:device}) for this case study architecture demonstrated a fast perfect entangler biased to zero-ZZ crosstalk. This device produced the nonstandard 2Q gate trajectory shown in Figure \ref{fig:Expt_Fig1}, which included a 13 ns perfect entangler. Figure \ref{fig:Expt_Fig2} shows how the measured trajectories were similar over a range of entangling pulse drive amplitudes that did not exceed $\xi =$ 0.01$\Phi_0$, the point at which strong drive effects would be expected to become non-negligible \cite{hamiltonian_source}. So in this early prototype device, the measured trajectories in Figures \ref{fig:Expt_Fig1} and \ref{fig:Expt_Fig2} were not nonstandard because of strong drive effects, but because of some other systematic in the experiment.
\subsection{Our simulation approach}
The case study entangling gate architecture natively supports strong parametrically activated interactions between the two qubits. Since the full Hamiltonian for this architecture is computationally intensive to model \cite{hamiltonian_source}, for our simulation we use the simplified effective Hamiltonian from \cite{hamiltonian_source} that models the device using fewer parameters while still capturing all of the essential physics of the device (see Appendix A). Our general protocol for simulating Cartan trajectories is as follows:
\begin{enumerate}
\item We input the simulated device parameters into our Hamiltonian. These parameters include the qubit frequencies $\omega_a$ and $\omega_b$, and the qubit coherence times. This generates the eigenspectrum of the simulated device.
\item We bias the coupler frequency ($\omega_c^0$) between the two qubit frequencies ($\omega_a,\omega_b$) such that the static ZZ term (i.e. for $\delta(t)=0$) between the two qubits is tuned to zero.
\item We specify the drive amplitude $\xi$ of our entangling pulse. In this case study we implement an iSWAP-like entangler, so the entangling pulse is driven at the frequency $\omega_d$ that generates maximal population swapping between the two qubits. For $\xi \leq $ 0.01$\Phi_0$, the entangling pulse frequency $\omega_d$ is essentially identical to the difference frequency of the two qubits $|\omega_a-\omega_b|$. However, increasing $\xi > 0.01\Phi_0$ activates the two-photon process in Equation \ref{eqn:hamiltonian}, causing population to enter the second excited state of the coupler and modify the entangling interaction. This in turn causes $\omega_d$ to deviate from $|\omega_a-\omega_b|$. The entangling pulse is modulated by a rectangular envelope, as was done in experiment to obtain the measured trajectories; due to qubit controllers typically having a time resolution of 1 ns, short entangling gates $\sim$10 ns have to be implemented using a pulse with a fast rise time. Experimentalists typically choose between a flat top Gaussian pulse with a short rise time, or a rectangular pulse for simplicity.
\item We evolve the time-dependent Hamiltonian and project the evolution propagator on the computational subspace to obtain the effective unitary operation with respect to the entangling pulse drive duration (see the sketch after this list). This time ordered sequence of unitary operations can be represented as a trajectory in the Weyl space using Cartan coordinates. By examining the trace of the effective unitary propagator we can obtain the leakage outside the computational space. We confirm that the leakage rates are much below the expected gate errors due to decoherence.
\end{enumerate}
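As a sketch of step 4 above, the snippet below computes the propagator on a grid of pulse durations and projects each onto the computational subspace. Here \texttt{H} is assumed to be the full time-dependent qubit--coupler--qubit Hamiltonian in QuTiP format, and \texttt{comp\_idx} is assumed to hold the indices of $|00\rangle$, $|01\rangle$, $|10\rangle$, $|11\rangle$ in the full Hilbert space; neither is constructed here.
\begin{verbatim}
# Sketch of step 4: propagators on a duration grid, projected onto the
# computational subspace. `H` and `comp_idx` are assumed defined elsewhere.
import numpy as np
from qutip import propagator

durations = np.linspace(0.0, 60.0, 61)        # ns, illustrative grid

points = []
for U in propagator(H, durations):
    M = U.full()[np.ix_(comp_idx, comp_idx)]  # 4x4 computational block
    leakage = 1.0 - np.trace(M.conj().T @ M).real / 4.0
    points.append((M, leakage))               # each M gives one Weyl point
\end{verbatim}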
In this case study we simulate standard and nonstandard 2Q trajectories. The simplest and most consistent way to do this is to use the same simulated devices but to vary the drive power $\xi$. For $\xi \leq $ 0.01$\Phi_0$ we expect the above protocol to result in a standard iSWAP interaction between the two qubits. But for $\xi > 0.01\Phi_0$, we expect strong drive effects to begin to emerge and cause the Cartan trajectory to deviate away from a standard iSWAP. We note that the simulated trajectories differ in several ways from the measured trajectories in Figures \ref{fig:Expt_Fig1} and \ref{fig:Expt_Fig2}. Firstly, the measured trajectories are nonstandard even for $\xi \leq $ 0.01$\Phi_0$ due to an additional systematic effect in the experiment. Secondly, the simulated trajectories are consistently slower than the measured trajectories; e.g. at $\xi$ = 0.01$\Phi_0$, the simulated trajectories are slower by a factor of 3.5 than the measured trajectory, which included a 13 ns $\sqrt{iSWAP}$-like entangling gate. These discrepancies can both be explained by the simulation model Hamiltonian being significantly simpler than the true device Hamiltonian. Aside from these discrepancies, our simulations are realistic; our trajectories are generated using parameters and techniques that closely resemble those used in experiment and our method for generating standard and nonstandard trajectories using a single simulated device is physically intuitive.
Simulating Cartan trajectories over a range of entangling pulse amplitudes $\xi$ we observe the correct intuitive behavior. The simulated trajectories deviate more and more from the standard iSWAP as the entangling pulse amplitude increases beyond $\xi$ = 0.01$\Phi_0$. Secondly, the speed of the simulated trajectories scales linearly with $\xi$. This agrees with the experimental data shown in Figure \ref{fig:Expt_Fig2} where the measured trajectory doubled in speed when $\xi$ increased by a factor of two.
\subsection{Methodology}
We simulate a 10 by 10 device with grid connectivity (Fig. \ref{fig:connectivity}), where the qubit frequencies of each pair of neighbors are sampled from two normal distributions whose means differ by 2 GHz.
We use a $5\%$ standard deviation for sampling the qubit frequencies. Improved fabrication techniques have reduced this standard deviation to about $0.5\%$ \cite{qubit_freq_std}, but we use a larger standard deviation to show that our method is robust to variations in device fabrication.
\begin{figure}
\centering
\input{figs/connectivity}
\caption{Device simulation. The high and low frequency qubits are shown in different colors. Each edge connects two qubits with different colors.}
\label{fig:connectivity}
\end{figure}
Between each pair of neighboring qubits on the $10\times 10$ grid, we simulate two types of 2Q trajectory by varying the entangling pulse amplitude $\xi$: 1) A baseline trajectory generated with a low entangling pulse amplitude of $\xi$ = 0.005$\Phi_0$ and 2) a nonstandard trajectory due to strong drive effects resulting from a larger $\xi$ = 0.04$\Phi_0$.
Then on each nonstandard trajectory, we select 2Q basis gates using Criterion 1 and 2 (respectively) introduced in Section \ref{calibration_strategy}. We test these 3 sets of 2Q basis gates on common application circuits as benchmarks. We use the Qiskit\cite{Qiskit} transpiler with the ``SABRE''\cite{sabre} layout and routing methods to map the benchmark circuits to the $10\times 10$ grid connectivity. With the nonstandard basis gates, we compile circuits using the methods from Section \ref{sec_compile}. With the $\sqrt{iSWAP}$ from the standard trajectories, we use the analytic approach in \cite{sqiswap}. Like the 2Q basis gates selected with Criterion 2, $\sqrt{iSWAP}$ decomposes SWAP in 3 layers and CNOT in 2 layers, but we can also use it to directly decompose other 2Q gates (like the CRZ gates in the QFT benchmarks) analytically. For the 1Q gates in the gate and circuit synthesis, we use a duration of $20$ ns, which is typical for fixed-frequency transmon qubit processors \cite{jurcevic2021demonstration}.
Decoherence is the dominant hardware noise in our noise model, because crosstalk is suppressed by the large detuning between the qubits. For each qubit, we model the decoherence error as $1-e^{-t/T}$, where $T$ is the coherence time of the qubit. We set $T$ to a typical value of 80 $\mu$s for all qubits. We compute $t$ as $t_f - t_i$, where $t_i$ is the start of the first gate on the qubit and $t_f$ is the end of the last gate on the qubit. The total coherence-limited fidelity of a circuit is the product over the $e^{-t/T}$ term from each qubit. The decomposition errors in gate synthesis are negligible compared to the decoherence errors, and can in theory be made arbitrarily small. Thus we only show the coherence-limited fidelities in the results.
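This error model amounts to a one-line computation; the sketch below (ours; extracting the busy window of each qubit from a transpiled circuit is elided) makes it explicit.
\begin{verbatim}
import numpy as np

def coherence_limited_fidelity(busy_windows, T=80e3):
    """Product of per-qubit e^{-t/T} factors, with t = t_f - t_i in ns.

    `busy_windows` maps each qubit to (t_i, t_f): the start of its first
    gate and the end of its last gate. T defaults to 80 us = 80e3 ns.
    """
    return float(np.prod([np.exp(-(tf - ti) / T)
                          for ti, tf in busy_windows.values()]))

# e.g. two qubits busy for 300 ns and 150 ns respectively:
coherence_limited_fidelity({0: (0.0, 300.0), 1: (0.0, 150.0)})
\end{verbatim}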
\subsection{Results}
Before discussing our results, as a disclaimer we note that while increasing the entangling pulse drive amplitude is one way to speed up 2Q gates, it is by no means an all-purpose solution that we generally advocate for. We chose to do this in our simulation case study only because it was a simple and intuitive way to compare standard and nonstandard simulated gates for the same case study entangling architecture. For this case study architecture, the drive amplitudes chosen were realistic in an experimental setting.
\begin{table}[!h]
\centering
\renewcommand{\arraystretch}{1.5}
\resizebox{0.38\textwidth}{!}{
\begin{tabular}{|l|r|r|r|}
\hline
& Basis & SWAP & CNOT \\ [0.5ex]
\hline
\multirow{2}{*}{Baseline} & 83.04 ns & 329.1 ns & 226.1 ns\\
& 99.884\% & 99.541\% & 99.684\% \\
\hline
\multirow{2}{*}{Criterion 1} & 10.15 ns & 110.5 ns & 110.5 ns\\
& 99.986\% & 99.845\% & 99.845\% \\
\hline
\multirow{2}{*}{Criterion 2} & 10.76 ns & 112.3 ns & 81.51 ns\\
& 99.985\% & 99.843\% & 99.886\% \\
\hline
\end{tabular}}
\caption{Average duration (top) and coherence-limited gate fidelity (bottom) of the 2Q basis gates and the synthesized SWAP and CNOT gates, from baseline, Criterion 1, and Criterion 2.}
\label{table:gate_duration}
\end{table}
\input{figs/bench/benchmark-results-table.tex}
The average durations and coherence limited fidelities (obtained using the Qiskit Ignis \texttt{coherence\_limit} function \cite{ignis}) of the synthesized SWAP and CNOT gates from the two approaches are summarized in Table \ref{table:gate_duration}. In Table \ref{table:circuit_fid_results}, we show the coherence-limited circuit fidelities of 5 sets of benchmark circuits, when transpiled to different sets of 2Q basis gates.
We first observe that the faster nonstandard 2Q basis gates have $\sim$8x lower coherence-limited infidelities than the baseline standard 2Q gates. We also observe that the synthesized SWAP (CNOT) gates from Criterion 1 and 2 are 3.0x and 2.9x (2.0x and 2.8x) faster than the baseline, respectively. Because the fidelity of a circuit is the product of the fidelities of its gates, these fidelity improvements scale exponentially with benchmark size.
Next, we observe that Criterion 2 performs better than Criterion 1. This is not surprising since it has significantly faster CNOT gates and only slightly slower SWAP gates compared to Criterion 1.
For the baseline case, the 1Q gate duration is 4x shorter than the standard 2Q basis gate, and therefore $\sim$24\% of the duration of the compiled SWAP/CNOT gate is spent performing 1Q gates. In contrast, for the nonstandard case, the 1Q gate duration is 2x longer than the nonstandard 2Q basis gate, and $\sim$72\% of the duration of the compiled SWAP/CNOT gate is spent performing 1Q gates. This puts us in the regime of today’s fastest large superconducting processors such as Google's Sycamore device, where the optimal processor configuration that minimizes the overall effects of gate error has the 1Q gates being roughly twice as long as the 2Q gates \cite{Arute2019}.
\section{Background}\label{sec_bg}
\subsection{Qubits and gates}
Unlike a classical bit that is either 0 or 1, a quantum bit (qubit) can exist in a linear superposition of $|0\rangle$ and $|1\rangle$; a general quantum state can be expressed as $\alpha |0\rangle + \beta |1\rangle$ where $\alpha, \beta$ are complex amplitudes that satisfy $|\alpha|^2 + |\beta|^2 = 1$. Thus, the state of one qubit can be represented by a 2-vector of the amplitudes $\alpha$ and $\beta$. A system of $n$ qubits can exist in a superposition of up to $2^n$ basis states, and its state can be represented by a $2^n$-vector of complex amplitudes. A quantum gate that acts on $n$ qubits can be represented by a $2^n \times 2^n$ unitary matrix.
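As a concrete NumPy illustration of these representations (our example, not tied to any device):
\begin{verbatim}
import numpy as np

# 1Q gates are 2x2 unitaries; n-qubit operators are built with Kronecker
# products. Example: prepare the Bell state (|00> + |11>)/sqrt(2).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]])
psi00 = np.kron([1, 0], [1, 0])                       # |00> as a 4-vector
bell = CNOT @ np.kron(H, np.eye(2)) @ psi00           # (1, 0, 0, 1)/sqrt(2)
\end{verbatim}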
\subsection{Geometric characterization of 2Q gates}\label{bg_kak}
Two 2Q quantum gates $U_1, U_2 \in SU(4)$ are \textit{locally equivalent} if it is possible to obtain one from the other by adding 1Q operations. In other words, 2Q operations $U_1$ and $U_2$ are locally equivalent if there exist $k_1, k_2 \in SU(2) \otimes SU(2)$ such that $U_1 = k_1 U_2 k_2$. For example, CNOT and CZ are locally equivalent via Hadamard gates.
Any 2Q quantum gate $U\in SU(4)$ can be written in the form of
\begin{equation}\label{kak}
U = k_1 \exp (-i\frac{\pi}{2} (t_x X\otimes X + t_y Y\otimes Y + t_z Z\otimes Z )) k_2
\end{equation}
where $X, Y, Z$ are the Pauli gates. This is called the Cartan decomposition.
The space of two-qubit quantum gates can be represented geometrically in a Weyl chamber (Fig. \ref{fig:weyl_chamber}), where each point stands for a set of gates that are locally equivalent to each other \cite{zhang2003geometric}. The Cartan coordinates $(t_x, t_y, t_z)$ in Eq. \eqref{kak} are the coordinates of $U$ in the Weyl chamber. They fully characterize the \textit{non-local} part of a 2Q gate. On the bottom surface, $(t_x,t_y,0)$ and $(1-t_x, t_y, 0)$ represent the same equivalent class of gates. The other points in the Weyl chamber each represent a different equivalence class of 2Q gates. We refer the interested readers to \cite{crooks_tutorial} for a more thorough introduction to the Weyl chamber. Note that other conventions of the Cartan coordinates are also common. They usually differ from ours by a constant factor of $\pi$ or $2\pi$.
In this paper, when we talk about some gate $G$ in the Weyl chamber, we usually mean the local equivalence class of 2Q gates that includes $G$.
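Local equivalence can be checked numerically without carrying out the full decomposition in Eq. \eqref{kak}, for instance via the Makhlin local invariants, which depend only on the non-local part of a gate. The sketch below is our illustration of this standard technique (see \cite{zhang2003geometric} and references therein); it verifies that CNOT and CZ share the same invariants and hence the same point in the Weyl chamber.
\begin{verbatim}
import numpy as np

# Magic basis: conjugation by Q maps local gates to real orthogonal matrices.
Q = np.array([[1, 0, 0, 1j],
              [0, 1j, 1, 0],
              [0, 1j, -1, 0],
              [1, 0, 0, -1j]]) / np.sqrt(2)

def makhlin_invariants(U):
    """Local invariants (g1, g2); they agree for locally equivalent gates."""
    m = Q.conj().T @ U @ Q
    mm = m.T @ m
    det = np.linalg.det(U)
    g1 = np.trace(mm) ** 2 / (16 * det)
    g2 = (np.trace(mm) ** 2 - np.trace(mm @ mm)) / (4 * det)
    return g1, g2

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
CZ = np.diag([1, 1, 1, -1])
assert np.allclose(makhlin_invariants(CNOT), makhlin_invariants(CZ))
\end{verbatim}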
\subsection{Entangling power of 2Q gates}\label{background_ep}
The entangling power \cite{entanglingpowerdef} is a widely accepted quantitative measure of the capacity of a 2Q gate to entangle the qubits that it acts on. It is typically a good indicator of the ability of a specific 2Q gate to synthesize arbitrary 2Q gates. For a unitary operator $U$, the entangling power $e_p(U) \in [0, \frac{2}{9}]$ is defined as the average linear entropy of the states produced by $U$ acting on the manifold of all separable states \cite{entanglingpowerdef}. It is solely based on the non-local part of $U$, which is characterized by the position of $U$ in the Weyl chamber.
A 2Q gate has 0 entangling power if and only if it is locally equivalent to the Identity or the SWAP gate. At the other extreme, a 2Q gate $U$ is called a \textit{perfect entangler} if it can produce a maximally entangled state from an unentangled one \cite{zhang2003geometric}. Perfect entanglers (PE) have entangling power no less than $\frac{1}{6}$. They constitute a polyhedron in the Weyl chamber that is exactly half of the total volume. The 6 vertices of the PE polyhedron are CZ (CNOT), iSWAP, $\sqrt{SWAP}$, $\sqrt{SWAP}^\dagger$, and the 2 points that both represent $\sqrt{iSWAP}$. The perfect entanglers with the maximal entangling power of $\frac{2}{9}$ are also called \textit{special perfect entanglers} \cite{pe2004}. In the Weyl chamber, they lie on the line segment from CNOT to iSWAP. The B gate, which is at the midpoint of this line segment, has the property that it can synthesize any 2Q gate within 2 layers \cite{bgate}. However, there has been no proposal to directly implement the B gate in hardware.
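In the coordinate convention used here, the entangling power admits the closed form $e_p(t_x,t_y,t_z)=\frac{2}{9}\big[1-\prod_i\cos^2(\pi t_i)-\prod_i\sin^2(\pi t_i)\big]$, which we state without derivation (it should be rescaled for other coordinate conventions). The sketch below checks it against the landmark gates just mentioned.
\begin{verbatim}
import numpy as np

def entangling_power(tx, ty, tz):
    """e_p of the gate at Cartan coordinates (tx, ty, tz); lies in [0, 2/9]."""
    a = np.pi * np.array([tx, ty, tz])
    return (2 / 9) * (1 - np.prod(np.cos(a) ** 2) - np.prod(np.sin(a) ** 2))

entangling_power(0, 0, 0)              # Identity: 0
entangling_power(0.5, 0.5, 0.5)        # SWAP: 0
entangling_power(0.5, 0, 0)            # CNOT/CZ: 2/9 (special PE)
entangling_power(0.5, 0.25, 0)         # B gate: 2/9 (special PE)
entangling_power(0.25, 0.25, 0.25)     # sqrt(SWAP): 1/6 (PE threshold)
\end{verbatim}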
\section{Conclusion}\label{sec_conclusion}
The idea of a uniform set of basis gates naturally arose from early notions of universal gate sets, which experimentalists then implemented on various qubit platforms. By looking at the theory of possible entanglers, we have found that there are many options for good 2Q basis gates, and that these gates behave differently on each pair of interacting qubits in a processor. This led us to a radically new idea: why be constrained to a single canonical gate (e.g. CX or CZ)? Why not tune up the gate that will have the highest fidelity between every pair of qubits, allowing each pair to differ, and adjust for these variations in software? If we do not treat all the coherent deviations in gate trajectories as errors, we will have more freedom in hardware design and achieve higher gate fidelity.
In this paper, we examined the space of possible entanglers and developed a practical method for finding a high-fidelity entangler between every pair of qubits. In the case study, we find heterogeneous basis gates that are $\sim$8x faster than the baseline, and use them to synthesize faster SWAP and CNOT gates than those obtained from the baseline $\sqrt{iSWAP}$ gate on the standard XY-type trajectories.
We then evaluate these heterogeneous basis gates on a number of benchmark circuits and find fidelity improvements that scale exponentially in benchmark size.
Our approach successfully uses software to overcome the limitations of today's hardware. Such types of adaptive basis-gate design will be essential to pioneering innovative future quantum systems.
\section{Identifying good 2Q basis gates}\label{sec_theory}
\subsection{Fidelity of a synthesized gate}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figs/combined1.png}
\caption{(a) Gate A, decomposed into 2 layers with 2Q gates B, C and 1Q gates $U_a$,$U_b$,$U_c$,$U_d$,$U_e$,$U_f$. (b) A general 2-layer decomposition of the SWAP gate. Here $\ast, \ast_{mirror}$ can be replaced by any pair of 2Q gates capable of synthesizing a SWAP in 2 layers. (c) The SWAP gate, decomposed into 3 CNOT gates. (d) A general 3-layer decomposition of the SWAP gate. Here the $\ast$ can be replaced by any 2Q gate capable of synthesizing a SWAP in 3 layers.}
\label{fig:combined_circuits}
\end{figure*}
If a 2Q quantum gate is not directly supported on a device, it needs to be implemented by alternating layers of 1Q and 2Q gates from the set of basis gates that are directly supported. See Figure \ref{fig:combined_circuits} for examples. We say that a decomposition is $n$-layer if it contains $n$ layers of 2Q gates. Besides the errors that come from noise in the quantum hardware, a synthesized gate also suffers from the approximation error in gate decomposition. Thus the total fidelity of a gate should be the product of the hardware-limited fidelity and the decomposition fidelity. In this work, the decomposition errors are negligible compared to the hardware errors.
In our error model, decoherence is the dominant source of hardware error. So two factors determine whether a 2Q gate set is ideal for synthesizing a target gate: the duration of the basis gates, and the depth of the decomposition circuit. We need to take both into account when deciding on a strategy for selecting basis gates.
\subsection{An analytic method for determining 2Q circuit depth}
When deciding whether a potential basis gate is ideal for synthesizing a target gate, we consider the depth of the decomposition circuit as one of the factors. Given a 2Q target gate $A$ and a 2Q basis gate $B$ (or a gate set $S$), how do we determine the minimum circuit depth required for a decomposition of $A$ into $B$ (or $S$) and 1Q gates? One can take a practical, numerical approach to finding this decomposition. For a given number of layers, one can fix the 2Q gates and then numerically search for the 1Q gates that minimize the discrepancy between the target unitary and the synthesized gate. One can start the numerical search from 1 layer, and increment the number of layers until the decomposition error gets below a threshold. But a more efficient and accurate way to determine the circuit depth is to apply the analytic method developed by Peterson et al. \cite{peterson}.
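For concreteness, a bare-bones version of this numerical search could look like the sketch below (our illustration, not the method of \cite{peterson}): each 1Q layer is parametrized by Euler angles, and a phase-insensitive infidelity proxy is minimized with random restarts.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def u3(theta, phi, lam):
    """Generic 1Q unitary parametrized by three Euler angles."""
    return np.array([
        [np.cos(theta / 2), -np.exp(1j * lam) * np.sin(theta / 2)],
        [np.exp(1j * phi) * np.sin(theta / 2),
         np.exp(1j * (phi + lam)) * np.cos(theta / 2)]])

def synthesize(target, basis, layers, restarts=20):
    """Search 1Q layers so (1Q)(basis)(1Q)...(basis)(1Q) approximates target."""
    def build(x):
        V = np.kron(u3(*x[0:3]), u3(*x[3:6]))
        for k in range(1, layers + 1):
            o = 6 * k
            V = np.kron(u3(*x[o:o + 3]), u3(*x[o + 3:o + 6])) @ basis @ V
        return V

    def cost(x):  # 0 iff the synthesized gate equals target up to a phase
        return 1 - abs(np.trace(build(x).conj().T @ target)) / 4

    results = [minimize(cost, np.random.uniform(0, 2 * np.pi, 6 * (layers + 1)))
               for _ in range(restarts)]
    best = min(results, key=lambda r: r.fun)
    return best.fun, best.x   # increment `layers` until best.fun ~ 0
\end{verbatim}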
Without going into the technical details, here we summarize a key result from \cite{peterson} that we adapt and apply in Section \ref{swap_synthesis} and \ref{other_synthesis}.
\begin{thm}\label{peterson_thrm}
There exists a 2-layer decomposition of 2Q gate A into B, C, and 1Q gates as in Figure \ref{fig:combined_circuits}(a), if and only if any of the 1 to 8 sets of 72 inequalities that depend on the non-local parts of A, B, C is all satisfied.
\end{thm}
For details of the theorem, the readers can look at Theorem 23 of \cite{peterson} or the implementation of the function in our code \footnote{Our code can be found at \url{https://github.com/SophLin/nonstandard_2qbasis_gates}}. Note that Reference \cite{peterson} characterizes the space of 2Q gates with LogSpec instead of the Cartan coordinates. Both are valid ways to represent the non-local part of a 2Q gate, but care must be taken when converting between the two. A gate $U$ usually maps to 1 point in the Weyl chamber, but it usually maps to 2 points in the LogSpec space: $LogSpec(U) = (a,b,c,d)$ and $\rho(LogSpec(U)) = (c+\frac{1}{2},d+\frac{1}{2},a-\frac{1}{2},b-\frac{1}{2})$. If $LogSpec(U) = \rho(LogSpec(U))$ for all A, B, and C, we only need to check one set of inequalities. If $LogSpec(U) \neq \rho(LogSpec(U))$ for 1, 2, or all 3 of A, B, and C, we need to plug in different versions of the LogSpec and check 2, 4, or 8 versions of the 72 inequalities, respectively.
\begin{figure*}[!ht]
\centering
\includegraphics[width=\textwidth]{figs/combined2.png}
\caption{(a) Gates that are able to synthesize SWAP in 2 layers form 2 line segments in the Weyl chamber. The red one is from the B gate to $\sqrt{SWAP}$, and the green one is from the B gate to $\sqrt{SWAP}^\dagger$. (b) Pairs of gates that are able to synthesize a SWAP in 2 layers. In blue is an example trajectory that deviates from the standard XY interaction, in orange are the points that would complement the blue ones in synthesizing a SWAP in 2 layers. (c) Sampled gates that are NOT able to synthesize a SWAP in 3 layers. (d) The same set, characterized as 4 tetrahedra, defined by vertices $\{I_0, CZ, (\frac{1}{4}, \frac{1}{4},0), (\frac{1}{6},\frac{1}{6},\frac{1}{6})\}$, $\{CZ, I_1, (\frac{3}{4}, \frac{1}{4}, 0), (\frac{5}{6}, \frac{1}{6}, \frac{1}{6})\}$, $\{SWAP, (\frac{1}{2}, \frac{1}{6}, \frac{1}{6}), (\frac{1}{6}, \frac{1}{6}, \frac{1}{6}), (\frac{1}{3}, \frac{1}{3}, \frac{1}{6})\}$, and $\{SWAP, (\frac{1}{2}, \frac{1}{6}, \frac{1}{6}), (\frac{5}{6}, \frac{1}{6}, \frac{1}{6}), (\frac{2}{3}, \frac{1}{3}, \frac{1}{6})\}$. (e) Gates that are NOT able to synthesize CNOT in 2 layers. The 3 tetrahedra in the plot are defined by vertices $\{I_0, (\frac{1}{4},0,0), (\frac{1}{4},\frac{1}{4},\frac{1}{4}), \sqrt{SWAP}\}$, $\{I_1, (\frac{3}{4}, 0,0), (\frac{3}{4},\frac{1}{4}, 0), \sqrt{SWAP}^\dagger\}$, and $\{SWAP,\sqrt{SWAP}, \sqrt{SWAP}^\dagger, (\frac{1}{2}, \frac{1}{2}, \frac{1}{4}) \}$. (f) Gates that are able to decompose SWAP in 3 layers and CNOT in 2 layers.}
\label{fig:combined_chambers}
\end{figure*}
\subsection{Synthesis of the SWAP gate}\label{swap_synthesis}
On bounded connectivity architectures, SWAPs make up a significant portion of all two-qubit gates. A SWAP gate exchanges the quantum states of two neighboring qubits. A 2Q gate in a quantum program can be directly scheduled if it acts on two physical qubits that are connected to each other, but this is not the case in general. Superconducting devices are usually designed to have sparse connectivity, because otherwise crosstalk errors would be difficult to suppress. As a result, quantum programs usually contain a large proportion of SWAP gates after they are compiled to run on a superconducting device.
When we select the 2Q basis gate set for each pair of qubits, a top priority is to optimize the fidelity of the SWAP gate that is built from the gate set.
We discuss three approaches to synthesizing a SWAP gate: decomposing it into 1, 2, or 3 layers of hardware 2Q gates.
\textbf{SWAP in 1 layer:} This requires a basis gate that is locally equivalent to SWAP. In other words, the trajectory of the available native gates needs to pass through the top vertex of the Weyl chamber.
\textbf{SWAP in 2 layers:} We consider 2 cases: 2-layer decomposition of SWAP using a single 2Q basis gate, and using two different 2Q basis gates.
In the first case, the set of 2Q gates that are capable of synthesizing SWAP in 2 layers is represented by 2 line segments in the Weyl chamber, as shown in Figure \ref{fig:combined_chambers}(a). One is from the B gate to $\sqrt{SWAP}$ and the other is from $B$ to $\sqrt{SWAP}^\dagger$. We denote them by $L_0$ and $L_1$, respectively.
In the second case, for each point $\ast$ in the Weyl chamber there is exactly one point $\ast_{mirror}$ (as derived in Appendix B) such that they together enable a 2-layer decomposition of SWAP (see Figure \ref{fig:combined_circuits}(b)). The line segment from $\ast$ to $\ast_{mirror}$ always has one of $L_0, L_1$ as its perpendicular bisector. Thus, given $\ast$, we can locate $\ast_{mirror}$ by rotating $\ast$ by $\pi$ around the closer one of $L_0, L_1$. One example pair of such points is CNOT and iSWAP. For a trajectory that deviates from the standard XY trajectory (goes from Identity to a point near iSWAP), its ``mirror'' is a trajectory from SWAP to a point near CNOT (Figure \ref{fig:combined_chambers}(b)). Since there is no overlap between the example trajectory and the ``mirror'', we conclude that the trajectory does not contain any pair of points that is able to synthesize SWAP together in 2 layers.
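Computationally, rotating a point $p$ by $\pi$ about the line through $p_0$ with unit direction $\hat{d}$ is the map $p\mapsto 2\big(p_0+((p-p_0)\cdot\hat{d})\,\hat{d}\big)-p$. The sketch below (our illustration, with coordinates in our convention) applies this to $L_0$ and reproduces the CNOT/iSWAP mirror pair.
\begin{verbatim}
import numpy as np

def mirror(p, line_pt, line_dir):
    """Rotate p by pi about the line through line_pt with direction line_dir."""
    p, p0 = np.asarray(p, float), np.asarray(line_pt, float)
    d = np.asarray(line_dir, float)
    d = d / np.linalg.norm(d)
    return 2 * (p0 + np.dot(p - p0, d) * d) - p

B = np.array([0.5, 0.25, 0.0])          # B gate
SQSWAP = np.array([0.25, 0.25, 0.25])   # sqrt(SWAP); L0 runs from B to SQSWAP

mirror([0.5, 0.0, 0.0], B, SQSWAP - B)  # CNOT -> [0.5, 0.5, 0.0], i.e. iSWAP
\end{verbatim}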
\textbf{SWAP in 3 layers:} It is a well-known result that 3 invocations of CNOT are required to implement a SWAP \cite{shende2003cnot3}. We show the circuit in Figure \ref{fig:combined_circuits}(c).
In fact, CNOT and iSWAP share the property that they can synthesize any arbitrary 2Q gate in 3 layers but only a $0$-volume set of gates (in the Weyl chamber) in 2 layers \cite{peterson}.
For our purpose, we need to know what other gates are capable of decomposing SWAP in 3 layers. We only consider 3-layer decomposition of SWAP using a single 2Q basis gate as in Figure \ref{fig:combined_circuits}(d). Let $S_{SWAP,3}$ denote the set of gates that satisfy our requirement. To determine whether a 2Q basis gate $G$ is in $S_{SWAP,3}$, we first locate the corresponding $G_{mirror}$ such that $G$ and $G_{mirror}$ together can provide a 2-layer decomposition of SWAP. Then we apply Theorem \ref{peterson_thrm} with $G_{mirror}$ as target and $G$ as basis gate to check if there exists a 2-layer decomposition of $G_{mirror}$ into $G$.
We apply the method above to a sample of points in the Weyl chamber, and obtain the distribution of gates that are able to synthesize SWAP in 3 layers. Since the complement of the set has a simpler shape, here we show a plot of $\overline{S_{SWAP,3}}$, the points that are not able to synthesize SWAP in 3 layers, in Figure \ref{fig:combined_chambers}(c). A visual inspection tells us $\overline{S_{SWAP,3}}$ consists of 4 tetrahedra in the Weyl chamber. After locating the vertices of the tetrahedra, we obtain Figure \ref{fig:combined_chambers}(d). We also learn that the volume of $S_{SWAP,3}$ is $68.5\%$ of the volume of the Weyl chamber.
A 2Q gate trajectory starts from $I_0$ (or $I_1$) and goes out of the bottom left (or the bottom right) tetrahedron in Figure \ref{fig:combined_chambers}(d). If the trajectory does not go directly to SWAP, it will enter $S_{SWAP,3}$ after leaving the bottom tetrahedron that it starts from. Thus, the fastest gate on the trajectory that synthesizes SWAP in 3 layers can be found by locating the intersection of the trajectory with the face $\{CZ, (\frac{1}{4}, \frac{1}{4},0), (\frac{1}{6},\frac{1}{6},\frac{1}{6})\}$ or $\{CZ, (\frac{3}{4}, \frac{1}{4}, 0), (\frac{5}{6}, \frac{1}{6}, \frac{1}{6})\}$.
\textbf{Summary:} Given a 2Q gate trajectory that deviates from XY or XX, the most suitable 2Q gate for SWAP synthesis is the fastest one on the trajectory that is capable of synthesizing SWAP in 3 layers. Although some gates in the Weyl chamber are able to synthesize SWAP in 1 or 2 layers, it is unlikely that the early part of the trajectory overlaps any of them.
\subsection{Synthesis of other gates}\label{other_synthesis}
The techniques that we use to study the synthesis of SWAP also apply to other 2Q gates. For example, by applying Theorem \ref{peterson_thrm} to a sample of points in the Weyl chamber, with CNOT as target, we learn that the gates that are able to synthesize CNOT in 2 layers (denoted $S_{CNOT,2}$ here) take up $75\%$ of the volume of the Weyl chamber. The complement $\overline{S_{CNOT,2}}$ consists of 3 tetrahedra, as shown in Figure \ref{fig:combined_chambers}(e). Therefore, on a 2Q gate trajectory, we can locate the fastest gate that synthesizes CNOT in 2 layers by taking the intersection of the trajectory with the face $\{(\frac{1}{4},0,0), (\frac{1}{4},\frac{1}{4},\frac{1}{4}), \sqrt{SWAP}\}$ or $\{(\frac{3}{4}, 0,0), (\frac{3}{4},\frac{1}{4}, 0), \sqrt{SWAP}^\dagger\}$. We can also locate the fastest gate from the trajectory that can both synthesize CNOT in 2 layers and synthesize SWAP in 3 layers, by taking the first intersection of the trajectory with $S_{CNOT,2} \cap S_{SWAP,3}$ (see Figure \ref{fig:combined_chambers}(f)).
\subsection{A strategy for locating good 2Q basis gates}\label{calibration_strategy}
Our framework allows one to prioritize different combinations of target 2Q gates. In Section \ref{case_study}, we test the following two criteria for selecting 2Q basis gates from native 2Q trajectories.
\begin{enumerate}
\item Select the fastest gate on the trajectory that can synthesize SWAP in 3 layers.
\item Select the fastest gate on the trajectory that can both synthesize SWAP in 3 layers and synthesize CNOT in 2 layers.
\end{enumerate}
As explained in Section \ref{swap_synthesis}, the gate that meets Criterion 1 can be found at the intersection of the 2Q trajectory and one of the 2 faces $\{CZ, (\frac{1}{4}, \frac{1}{4},0), (\frac{1}{6},\frac{1}{6},\frac{1}{6})\}$ and $\{CZ, (\frac{3}{4}, \frac{1}{4}, 0), (\frac{5}{6}, \frac{1}{6}, \frac{1}{6})\}$. And as explained in Section \ref{other_synthesis}, the gate that meets Criterion 2 can be found similarly. With this insight, we can locate a desired 2Q basis gate in an experimental setting using the methods in Section \ref{sec_calibration}.
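In practice a trajectory is sampled as a finite sequence of Weyl-chamber points, so locating these intersection gates reduces to segment--triangle tests against the faces above. The sketch below uses the standard M\"{o}ller--Trumbore algorithm (our illustration; only the first face for Criterion 1 is shown, and the symmetric face should be tested the same way).
\begin{verbatim}
import numpy as np

def segment_hits_triangle(p0, p1, v0, v1, v2, eps=1e-12):
    """Moller-Trumbore: intersection of segment p0->p1 with a triangle, or None."""
    d, e1, e2 = p1 - p0, v1 - v0, v2 - v0
    h = np.cross(d, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:                      # segment parallel to the face
        return None
    f, s = 1.0 / a, p0 - v0
    u = f * np.dot(s, h)
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = f * np.dot(d, q)
    if v < 0 or u + v > 1:
        return None
    t = f * np.dot(e2, q)
    return p0 + t * d if 0 <= t <= 1 else None

# Face {CZ, (1/4,1/4,0), (1/6,1/6,1/6)}; its mirror face is handled likewise.
face = (np.array([0.5, 0.0, 0.0]), np.array([0.25, 0.25, 0.0]),
        np.array([1 / 6, 1 / 6, 1 / 6]))

def first_crossing(traj):                 # traj: list of Weyl-chamber points
    for p0, p1 in zip(traj, traj[1:]):
        hit = segment_hits_triangle(np.asarray(p0), np.asarray(p1), *face)
        if hit is not None:
            return hit
    return None
\end{verbatim}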
Our framework can be easily adapted to other criteria for selecting basis gates. For instance, we can select the fastest gate that can decompose another set of target gates within a certain number of layers. We can also incorporate other metrics like the entangling power into a criterion, e.g. we can locate the fastest gate on the trajectory that is both a PE and can synthesize SWAP in 3 layers.
\section{Systematic deviations in 2Q \\ gates}\label{deviation}
The 2Q gate is a critical building block that must be well-engineered before it is used to construct a quantum computer with many qubits. In practice, engineering 2Q gates in the lab involves iterating prototypes of the devices to minimize any and all systematic errors that result from imperfect device design or control along with nonuniformities in device fabrication.
Even if unwanted crosstalk between the qubits is successfully reduced and the 2Q gate is shown to be an effective entangler with a consistent identity, if the gate's identity is somehow nonstandard, one would normally assume it is not useful. The constraint of requiring 2Q gates to be standard is most burdensome for the superconducting qubit platform, where device Hamiltonians are engineered from scratch and there is no 2Q gate that is truly native to the platform - unlike, for instance, the SWAP gates that are native to atomic qubits \cite{Vool2017}.
Today's multi-qubit superconducting devices are not able to perform perfectly identical 2Q gates between every pair of qubits because of device-level imperfections, tradeoffs and uncertainties. Experimentalists model the expected rate of information leakage between on-chip elements using microwave circuit design software \cite{microwaveoffice, ansys}, but it is inevitable that irregularities arise during device fabrication and packaging. The devices are at least partially handmade and every fabrication tool has a finite precision. Also, the various materials that make up the layers of the superconducting device can host physical two-level systems that act as sources of noise and even can coherently interact with qubits \cite{tls1, tls2}; reducing the effect of these two-level systems is an active field of research \cite{tls3}. Another active field of research is reducing irregularities in the fabrication of Josephson junctions, which are critical on-chip elements \cite{lbnl, qubit_freq_std}. For a given device, it can be difficult for the experimentalist to determine whether a systematic 2Q gate deviation is caused by an imperfection in the device design or in its control. For example, a common source of systematic 2Q gate deviation is the imperfect mitigation of the static ZZ crosstalk which is a dominant source of 2Q gate error for transmon qubits \cite{mundada_et_al_2019,ku2020suppression,sung2020realization,noguchi2020fast,kandala2020demonstration,zhao2020suppression}. Devices can be designed to suppress the static ZZ crosstalk but unless the device is properly fabricated, packaged, biased and controlled there will be nonzero static ZZ crosstalk which will cause the 2Q gate to deviate from the target unitary.
Superconducting devices can also have higher order Hamiltonian terms that result in the experimentally measured Cartan trajectory of 2Q gates deviating from the expected Cartan trajectory. This deviation is particularly significant for fast gates enabled by large coupling or large drive strength \cite{hamiltonian_source, McKay2016, jurcevic2021demonstration}. Experimentalists have historically tried to suppress these deviations by reducing the 2Q gate drive strength, which has the negative consequence of slowing the 2Q gate down. It is in general difficult to accurately model the effect of the strong drives that perform fast 2Q gates on the Hamiltonian level, and this is an active field of research \cite{hamiltonian_source}.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.99\columnwidth]{figs/expt_fig1.pdf}
\caption{Experimental data showing a nonstandard Cartan coordinate trajectory. An experimental implementation \cite{experimental_work} of the iSWAP gate with the entangler architecture proposed in \cite{hamiltonian_source} yielded a nonstandard Cartan coordinate trajectory close to the plane of $I_0$, SWAP, and iSWAP. The first instance of a perfect entangler was at an entangler duration of 13 ns. In this nonstandard trajectory, the 13 ns entangler is offset from the Cartan coordinate for the square root of iSWAP and the 26 ns entangler is likewise offset from the Cartan coordinate for iSWAP. Note that due to an experimental hardware constraint the shortest possible entangling pulse duration was 4 ns, so the measured Cartan trajectory begins there.}
\label{fig:Expt_Fig1}
\end{figure}
Plotting measured 2Q gates in Cartan coordinates is a valuable tool experimentalists can use to easily visualize and study any deviations their gates may have from the expected Cartan trajectory. For example, Figure \ref{fig:Expt_Fig1} shows a measured Cartan trajectory that is nonstandard. This experimental data was collected from one of the first iterations of a superconducting device \cite{experimental_work} that was designed to implement a recently proposed entangling gate architecture \cite{hamiltonian_source}. The data includes a very fast (13 ns) perfect entangler.
Since the measured trajectory was systematically offset from the predicted one (XY), the experimentalists investigated potential sources of that systematic offset. Since this source of deviation could be eliminated with better device and control engineering, the experimentalists began to optimize their next device iteration accordingly. But in this work we suggest that there is nothing inherently unusable about measured Cartan trajectories that are nonstandard due to this kind of coherent systematic offset, and that the 13 ns nonstandard perfect entangler identified in Figure \ref{fig:Expt_Fig1} could be treated as a native 2Q basis gate by the compiler.
Our work seeks to enable the use of the nonstandard 2Q gates that can be native to superconducting devices.
If 2Q gate calibration and compiling protocols became more flexible, usable superconducting 2Q gate yield would increase considerably, enabling more rapid and effective prototyping of 2Q gates which could be scaled to a computer. Furthermore, any number of novel superconducting devices with very fast 2Q gates that happen to be nonstandard could be effectively utilized for computing.
\section{Related work}\label{sec_related}
To the best of our knowledge, no prior work involves using 2Q basis gates from arbitrary nonstandard gate trajectories. In parallel with this work, Lao et al. \cite{lao2022software} propose to mitigate coherent parasitic errors in 2Q gates by software and present methods of compilation. Our work is more general than \cite{lao2022software}, although we share the insight that coherent errors in 2Q gates can be treated as part of the gate for compilation. While our framework works for general irregular trajectories and selects basis gates on them using the approach detailed in Section \ref{sec_theory}, they focus on iSWAP-like (XY) gates with an unwanted CPHASE (XX) component (which belongs to the FSim gate set so is not truly non-standard) and always use CPHASE($\psi$)iSWAP($\pi$/4) because it has similar expressivity as iSWAP($\pi$/4) for small deviation $\psi$. They do not discuss calibration. Their baseline for evaluation is similar to the baseline in our case study, which is to make the trajectory more standard by lengthening the gate duration.
Recent research from both the experimental \cite{foxen,moskalenko_cont2Q,xiong_cont2Q,reagor_cont2Q} and theory sides has utilized 2Q (and 3Q) basis gates from a continuous set of standard gates, as opposed to only building and compiling with the best-known gates like CNOT and iSWAP. The works that are most relevant to this project are those that look for a small set of 2Q basis gates (from a continuous standard gate set) that are the most valuable to calibrate. Lao et al. \cite{prakash} use a numerical approach to test the performance of different gates from the fSim and XY gate sets on a range of application circuits, with the overall circuit success rate as the objective. Peterson et al. \cite{peterson2021xx} from IBM use analytic techniques to find that the gate set $\{CX, CX^{1/2}, CX^{1/3}\}$ is almost as good as the entire continuous set of XX gates in implementing random 2Q gates. They try to minimize the expected (average) infidelity in implementing random 2Q gates under an experimentally motivated error model. Huang et al. \cite{sqiswap} propose using $\sqrt{iSWAP}$ as the 2Q basis gate, instead of iSWAP or CNOT, and implement it using a 2-fluxonium qubit device. Recent proposals for novel nonstandard 2Q gates in the superconducting qubit literature that are informed by the current experimental challenges in scaling up with standard 2Q gates include \cite{perez_nonstandard2Qgate,xu_nonstandard2Qgate}.
\section{Calibration of nonstandard 2Q gates}\label{sec_calibration}
We propose two stages for calibrating a 2Q basis gate on an unknown trajectory of 2Q gates: first, a more costly ``initial tuneup'' stage that does not assume any knowledge of the trajectory and then a less costly ``retuning'' stage that utilizes information from the last initial tuneup and the retunings after it. In a well-controlled industry setup we would imagine the initial tuneup being done once a month and retuning being done daily. In a less well-controlled environment (e.g. one prone to low frequency drift), the initial tuneup could be done more frequently, as needed.
Our proposed calibration approach uses two techniques for experimentally characterizing the unitary of a potentially non-standard 2Q gate: quantum process tomography (QPT) \cite{qpt1,qpt2} and gate set tomography (GST) \cite{greenbaum2015_GST,nielsen2021_GST, Xue2022, Madzik2022}. QPT is a simple way to estimate a unitary but it cannot separate state preparation and measurement (SPAM) errors from gate errors \cite{qpt3}. GST is a highly general and accurate tomography technique that characterizes all the operations in a gate set (including SPAM) simultaneously and self-consistently.
GST is simple to run, taking minutes to acquire on a superconducting device. GST acquisition is followed by classical processing of the data that can be computed on a cluster in about two hours. Note that during the classical processing, the quantum device can still be used with gates from the previous calibration cycle.
Speeding up GST's classical processing is an active field of research; one route is to let physics inform which dominant errors are expected \cite{pygsti}. The most relevant outputs for fine-tuning the unitary are the error generators \cite{Blume_Kohout_2022} for the gate set. The error generators form a basis for writing the transformation between the measured unitary and the unitary that GST expects; they capture coherent differences and estimate stochastic noise levels. GST is thus a valuable tool for directly characterizing 2Q gates.
Here we list the steps in the initial tuneup stage.
\begin{enumerate}
\item Do preliminary coarse tuning experiments such as amplitude and frequency calibration of the entangling pulse drive to estimate the entangling pulse duration of interest. For example, a resonant $iSWAP$-like interaction may have an amplitude and a frequency to tune for optimal population swapping. (5 minutes per pulse)
\item Perform QPT for each 2Q gate in the Cartan trajectory leading up to the approximate 2Q gate of interest. The qubit controller resolution (typically $\sim$1 ns) will determine the spacing between the trajectory points. Based on the findings in Step 1 the trajectory can be cropped around the entangling pulse duration of interest. The unitaries found will be the full list of candidate gates. (30-60 minutes per trajectory)
\item From the candidate gates in the previous steps, use Section \ref{sec_theory} to identify which of them might be the fastest ones that also are good 2Q basis gates. In this step the list of candidate basis gates is narrowed down. We are not able to narrow down to one basis gate due to the imprecision of QPT.
\item
Perform GST to obtain full information about each candidate 2Q gate, including an accurate gate unitary and a breakdown of error sources.
Then the set of 2Q basis gates can be chosen. ($\sim$10 minutes for each 2Q gate, followed by classical processing)
\end{enumerate}
The second calibration stage is the quick ``retuning'' of the 2Q basis gates that relies on the results of the initial tuneup. Once the precise unitary for each 2Q basis gate is found, the gates can be simply retuned using the coarse tuning procedures in Step 1 of the initial tuneup. The information gained in the initial tuneup would allow experimentalists to prescribe a different retuning protocol to each 2Q basis gate according to what it needs. In practice, retuning would most likely be a simple combination of amplitude calibration and frequency calibration of the elements involved in each 2Q basis gate, and it would take approximately 1-5 minutes per 2Q basis gate.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.99\columnwidth]{figs/expt_fig2.pdf}
\caption{Stability over drive amplitude of the experimentally measured Cartan coordinate trajectories. In the same experimental implementation from Figure \ref{fig:Expt_Fig1}, as the entangling pulse drive amplitude $\xi$ increased from 0.005$\Phi_0$ to 0.01$\Phi_0$, the Cartan coordinate trajectories were found to double in speed but still be qualitatively similar. The data was collected over a two day period. As in Figure \ref{fig:Expt_Fig1}, due to an experimental hardware constraint the shortest possible entangling pulse duration was 4 ns, so the measured Cartan trajectories begin there.}
\label{fig:Expt_Fig2}
\end{figure}
The extent to which previously gathered information can help reduce the cost of retuning depends on the stability of the gate trajectories over time. Figure \ref{fig:Expt_Fig2} shows the nonstandard Cartan trajectories measured on two days, over two entangling pulse drive amplitudes. Over the five day period that Cartan trajectories were measured for this device, the trajectories were all found to look qualitatively similar, as in Figure \ref{fig:Expt_Fig2}. While limited, this experimental data suggests that the measured Cartan trajectories obtained in the initial tuneup stage could potentially be used for several days afterward to provide an initial guess for the duration of the good 2Q basis gates.
Our calibration protocol does not include the use of randomized benchmarking (RB) \cite{rb1,rb2,rb3}. RB is best suited for architectures with specific target gates that are members of the Clifford group. Furthermore, interleaved RB \cite{irb} will estimate the gate infidelity but will provide no information about an error budget. In our setting we do not have a fixed 2Q gate as the goal of implementation and understanding the gate unitaries themselves is a primary goal. We have thus decided GST and QPT are more suitable for precise gate characterization.
The scalability of our proposed calibration method is not different from traditional approaches. Calibration techniques like QPT, RB, and GST can be applied to multiple 2Q gates on the same device in parallel, as long as the gates do not act on the same qubits. One can use an edge-coloring of the device connectivity graph to determine which gates to calibrate simultaneously. An edge-coloring of the grid graph takes 4 colors, while one for a sparser connectivity (e.g., heavy hexagonal) takes fewer. Thus, for a superconducting device with typical connectivity, the calibration overhead on the quantum device does not scale with the size of the device.
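To make the parallelism concrete, the following sketch (hypothetical helper code, not part of any existing calibration stack; the device size, connectivity, and the greedy heuristic are our assumptions) edge-colors a grid-connectivity device and groups its 2Q gates into parallel calibration rounds.
\begin{verbatim}
import itertools

def grid_edges(rows, cols):
    """Couplers (2Q gates) of a rows x cols grid of qubits."""
    edges = []
    for r, c in itertools.product(range(rows), range(cols)):
        if r + 1 < rows:
            edges.append(((r, c), (r + 1, c)))
        if c + 1 < cols:
            edges.append(((r, c), (r, c + 1)))
    return edges

def greedy_edge_coloring(edges):
    """Give each edge the smallest color unused at both endpoints.

    A grid graph admits a 4-edge-coloring; this simple greedy pass
    may use slightly more colors but stays close to optimal.
    """
    colors_at = {}  # qubit -> colors already used on incident edges
    coloring = {}
    for u, v in edges:
        used = colors_at.setdefault(u, set()) | colors_at.setdefault(v, set())
        color = next(c for c in itertools.count() if c not in used)
        coloring[(u, v)] = color
        colors_at[u].add(color)
        colors_at[v].add(color)
    return coloring

rounds = {}
for edge, color in greedy_edge_coloring(grid_edges(4, 4)).items():
    rounds.setdefault(color, []).append(edge)
for color in sorted(rounds):
    # Gates within a round act on disjoint qubits: calibrate in parallel.
    print(f"round {color}: {len(rounds[color])} gates in parallel")
\end{verbatim}
The number of rounds, not the number of qubits, sets the calibration wall-clock time, which is the scalability point made above.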
\section{Compiling with non-standard 2Q basis gates}\label{sec_compile}
Most quantum programs and benchmarks are already specified at the 2 or 3 qubit gate level. Therefore, like previous works \cite{sqiswap,prakash,peterson2021xx} that discuss the choice of 2Q basis gate and how to use less conventional 2Q basis gates for compilation, we use a transpiler pass to convert other 2Q gates in a circuit into our own 2Q basis gates, instead of building an entirely new compiler.
Some of the prior works decompose 2Q gates from application circuits into 1Q gates and native 2Q gates using a numerical approach \cite{prakash}, while others take an analytical approach \cite{sqiswap,peterson2021xx}. Note that such a decomposition requires finding the 1Q local unitaries, not just determining the required circuit depth. The analytical and numerical approaches each have their advantages. The numerical approach is more flexible: it can be applied to any 2Q basis and target gates. The analytic methods have limits on what gates they can be applied to, but are faster and some of them guarantee optimal results. There is currently no analytic formula that converts between arbitrary sets of 2Q gates. Huang et al. \cite{sqiswap} and Peterson et al. \cite{peterson2021xx} develop analytic algorithms that decompose an arbitrary 2Q gate into $\sqrt{iSWAP}$ and discrete sets of XX-type gates, respectively. The {\small \texttt{decompose\_two\_qubit\_interaction\_into\_four\_fsim\_gates}} function in Cirq \cite{Cirq} implements an analytic formula that decomposes an arbitrary 2Q gate into 4 layers of a given fSim gate, via the B gate.
In this project, we need to synthesize other 2Q gates from 2Q basis gates that are even less conventional than the ones considered in previous work. Therefore, we take a mostly numerical approach to gate synthesis and write our numerical search code based on NuOp from \cite{prakash}. The difference is that we use knowledge about decomposition circuit depth, computed analytically, to inform and speed up the numerical search for 1Q local unitaries. NuOp first attempts to search for a 1-layer decomposition, and moves on to 1 more layer upon failure to find a solution, until it meets the target decomposition error rate. Using the analytic techniques for determining circuit depth developed by \cite{peterson} and extended by our work for SWAP, we are able to skip to the step in NuOp in which a perfect decomposition is guaranteed by theory. This significantly speeds up the numerical search and also guarantees that the solution has optimal depth.
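The sketch below illustrates the layered numerical search in the style of NuOp (this is our own illustrative reimplementation, not the NuOp code; the basis gate, optimizer, restart count and tolerance are assumptions). Given a depth known from the analytic theory, it optimizes only the interleaved 1Q unitaries at that depth.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def rz(t): return np.diag([np.exp(-0.5j * t), np.exp(0.5j * t)])
def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)
def u1q(a, b, c): return rz(a) @ ry(b) @ rz(c)  # generic 1Q gate, up to phase

def circuit(params, basis, layers):
    """`layers` copies of `basis` interleaved with layers+1 pairs of 1Q gates."""
    p = params.reshape(layers + 1, 2, 3)
    u = np.kron(u1q(*p[0, 0]), u1q(*p[0, 1]))
    for k in range(1, layers + 1):
        u = np.kron(u1q(*p[k, 0]), u1q(*p[k, 1])) @ basis @ u
    return u

def infidelity(params, basis, target, layers):
    # Phase-invariant distance: zero iff circuit == target up to global phase.
    return 1 - abs(np.trace(circuit(params, basis, layers).conj().T @ target)) / 4

def synthesize(basis, target, layers, restarts=30, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(restarts):
        x0 = rng.uniform(0, 2 * np.pi, 6 * (layers + 1))
        res = minimize(infidelity, x0, args=(basis, target, layers),
                       method="BFGS")
        if best is None or res.fun < best.fun:
            best = res
        if best.fun < tol:
            break
    return best

# Example: CNOT from 2 layers of sqrt(iSWAP); two layers are known to suffice.
SQISWAP = np.array([[1, 0, 0, 0],
                    [0, 1 / np.sqrt(2), 1j / np.sqrt(2), 0],
                    [0, 1j / np.sqrt(2), 1 / np.sqrt(2), 0],
                    [0, 0, 0, 1]])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
print(synthesize(SQISWAP, CNOT, layers=2).fun)  # close to 0 on success
\end{verbatim}
Skipping directly to the theoretically guaranteed depth, as described above, removes the failed lower-depth attempts from this loop.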
Synthesizing all 2Q gates in the application programs directly into the basis gates might incur a compilation overhead. We avoid it by computing in advance and storing the decompositions of a few common 2Q gates into our basis gates. This only needs to be done once per calibration cycle (usually 1 day) and costs little time. In this work (see Section \ref{case_study}) we only directly decompose SWAP and CNOT into our basis gates. But instead of taking this minimalist approach, one can alternatively prepare decompositions for a larger set of potential target gates into the basis gates. The cost would still be quite small. We imagine that one can identify a set of potentially useful target gates using an approach similar to \cite{prakash}, except that \cite{prakash} looks for a set of gates to calibrate instead of decompose. In addition, in the scenario where programs wait in long queues before execution, one might be able to afford directly decomposing all 2Q gates in the circuits into the basis gates.
\section{Hamiltonian of 2 qubits coupled with a tunable coupler}
The system Hamiltonian of the two qubits coupled with a tunable coupler can be modelled as in \cite{hamiltonian_source}:
\begin{align}\label{Eq:Model}
\hat{H}(t) = \hat{H}_a + \hat{H}_b + \hat{H}_c(t) + \hat{H}_g,
\end{align}
with
\begin{align}
\begin{split}
\hat{H}_a &= \omega_a \hat{a}^\dagger \hat{a} + \frac{\alpha_a}{2} \hat{a}^{\dagger 2} \hat{a}^2, \\
\hat{H}_b &= \omega_b \hat{b}^\dagger \hat{b} + \frac{\alpha_b}{2} \hat{b}^{\dagger 2} \hat{b}^2, \\
\hat{H}_c(t) &= \omega_c(t) \hat{c}^\dagger \hat{c} + \frac{\alpha_c}{2} \hat{c}^{\dagger 2} \hat{c}^2, \\
\hat{H}_g &= -{g}_{ab} \hat{a}^\dagger \hat{b} - {g}_{bc} \hat{b}^\dagger \hat{c} - {g}_{ca} \hat{c}^\dagger \hat{a}\\ & -{g}_{ab}^* \hat{a} \hat{b}^\dagger - {g}_{bc}^* \hat{b} \hat{c}^\dagger - {g}_{ca}^* \hat{c} \hat{a}^\dagger
\end{split}
\end{align}
where $\omega_{a(b)}$ is the frequency of qubit $a$ ($b$), $\alpha_i$ is the anharmonicity of element $i$, and $g_{ij}$ is the capacitive coupling strength between elements $i$ and $j$. The entangling interaction is realized by modulating the coupler frequency as $\omega_c(t) = \omega_c^0 + \delta \sin(\omega_d t)$.
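As a sanity check, the model \eqref{Eq:Model} can be assembled numerically, e.g. with QuTiP (4.x-style time-dependent Hamiltonian list). The sketch below is a minimal illustration; all parameter values are placeholders rather than device values, and the couplings are taken real for simplicity.
\begin{verbatim}
import numpy as np
from qutip import destroy, qeye, tensor

N = 3  # levels kept per mode: qubit a, qubit b, tunable coupler c
a = tensor(destroy(N), qeye(N), qeye(N))
b = tensor(qeye(N), destroy(N), qeye(N))
c = tensor(qeye(N), qeye(N), destroy(N))

# Placeholder parameters (angular frequencies, arbitrary units).
wa, wb, wc0 = 4.5, 4.6, 6.0        # mode frequencies
aa, ab, ac = -0.25, -0.25, -0.10   # anharmonicities
gab, gbc, gca = 0.005, 0.05, 0.05  # couplings, taken real here
delta = 0.10                       # modulation depth
wd = abs(wa - wb)                  # drive near |wa-wb|: iSWAP-like resonance

H0 = (wa * a.dag() * a + 0.5 * aa * a.dag() ** 2 * a ** 2
      + wb * b.dag() * b + 0.5 * ab * b.dag() ** 2 * b ** 2
      + wc0 * c.dag() * c + 0.5 * ac * c.dag() ** 2 * c ** 2
      - gab * (a.dag() * b + b.dag() * a)
      - gbc * (b.dag() * c + c.dag() * b)
      - gca * (c.dag() * a + a.dag() * c))

def drive(t, args):
    # w_c(t) = w_c^0 + delta*sin(wd*t): only the c^dag c term is modulated.
    return delta * np.sin(wd * t)

H = [H0, [c.dag() * c, drive]]  # pass to qutip.sesolve/mesolve to simulate
\end{verbatim}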
\section{SWAP synthesis in 2 layers}
See the circuit in Fig. \ref{fig:combined_circuits}(a). Letting $A=SWAP$, we get the equation $$SWAP = (e\otimes f) C (c\otimes d) B (a\otimes b).$$
Move $e\otimes f$ and $a\otimes b$ to the other side and move $e\otimes f$ through SWAP,
\begin{align*}
C (c\otimes d) B &=(e\otimes f)^\dagger SWAP (a\otimes b)^\dagger\\
&= SWAP (f\otimes e )^\dagger (a \otimes b)^\dagger\\
&= SWAP (fa\otimes eb )^\dagger.
\end{align*}
Move $(fa\otimes eb )^\dagger$ to the LHS, and $C$ to the RHS,
$$(c\otimes d) B (fa\otimes eb) = C^\dagger SWAP.$$
This equation tells us that, $B$ and $C$ can synthesize SWAP as in Fig. \ref{fig:combined_circuits}(a) if and only if the Cartan coordinates of $B$ are equal to the Cartan coordinates of $C^\dagger SWAP$ up to canonicalization. Let $B \sim (x,y,z)$ and $C \sim (x',y',z')$, then we have $(x,y,z) \sim (-x',-y',-z') + (\frac{1}{2}, \frac{1}{2}, \frac{1}{2})$. From this we can tell that for every local equivalence class $[B]$ of 2Q gates, there is exactly one local equivalence class $[C]$ such that $[B]$ and $[C]$ together can synthesize SWAP in 2 layers. And since we know how to canonicalize Cartan coordinates into points within the Weyl chamber, given $[B]$ we will be able to find the corresponding $[C]$. Here we do not elaborate on how we identify the geometric relation between $[B]$ and $[C]$ inside the Weyl chamber, but the readers can check our claim by applying Theorem \ref{peterson_thrm}.
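This relation is easy to probe numerically. The sketch below (illustrative code; the specific coordinates, the sign convention for the canonical gate, and the use of Makhlin's local invariants as the equivalence test are our assumptions) builds $C$ with Cartan coordinates $(x',y',z')$ and $B$ with coordinates $(\frac{1}{2},\frac{1}{2},\frac{1}{2})-(x',y',z')$, dresses them with random 1Q unitaries, and confirms that $B$ and $C^\dagger SWAP$ are locally equivalent.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm
from scipy.stats import unitary_group

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
XX, YY, ZZ = (np.kron(P, P) for P in (X, Y, Z))
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0],
                 [0, 1, 0, 0], [0, 0, 0, 1]], dtype=complex)
Q = np.array([[1, 0, 0, 1j], [0, 1j, 1, 0],
              [0, 1j, -1, 0], [1, 0, 0, -1j]]) / np.sqrt(2)  # magic basis

def can(x, y, z):
    """Canonical 2Q gate with Cartan coordinates (x, y, z)."""
    return expm(1j * np.pi / 2 * (x * XX + y * YY + z * ZZ))

def makhlin(U):
    """Makhlin invariants (G1, G2); equal invariants <=> local equivalence."""
    m = Q.conj().T @ U @ Q
    m = m.T @ m
    d = np.linalg.det(U)
    return (np.trace(m) ** 2 / (16 * d),
            (np.trace(m) ** 2 - np.trace(m @ m)) / (4 * d))

rng = np.random.default_rng(1)
xp, yp, zp = 0.3, 0.2, 0.1  # coordinates of [C] (arbitrary choice)
C = np.kron(unitary_group.rvs(2, random_state=rng),
            unitary_group.rvs(2, random_state=rng)) @ can(xp, yp, zp)
B = can(0.5 - xp, 0.5 - yp, 0.5 - zp) @ np.kron(
    unitary_group.rvs(2, random_state=rng),
    unitary_group.rvs(2, random_state=rng))
print(np.allclose(makhlin(B), makhlin(C.conj().T @ SWAP)))  # True
\end{verbatim}
Because the invariants are unchanged by 1Q dressing and by global phase, the check passes for any local representatives of $[B]$ and $[C]$.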
\section{Introduction}
\label{sec:intro}
Suppose that \(T \colon M \to M\) is a dynamical system,
and \(v \colon M \to {\mathbb{R}}^d\) is an observable.
Let \(v_n = \sum_{k=0}^{n-1} v \circ T^k\) denote the Birkhoff sums.
Given a probability measure \(\mu\) on \(M\), let
\((v_n, \mu)\) denote the discrete time random process given by
\(v_n\) on the probability space \((M,\mu)\).
In the study of statistical properties of \(v_n\), such as
the central limit theorem, various choices for \(\mu\) come up naturally,
giving rise to different random processes.
For example, if \(M=[0,1]\) and \(T\) is a nonuniformly
expanding map as in Young~\cite{Y99}, such as an intermittent map or a
logistic map with a Collet-Eckmann parameter,
then \(\mu\) may be
(a) the Lebesgue measure,
(b) the absolutely continuous invariant probability measure (a.c.i.p.),
(c) the a.c.i.p.\ for the associated \emph{induced map}
(see Section~\ref{sec:UEM}).
The interest in the Lebesgue measure comes from physics:
it is a \emph{natural choice} of initial condition.
The a.c.i.p.\ has an important advantage over the Lebesgue measure:
if \(\mu\) is the a.c.i.p., then the increments of the process \((v_n, \mu)\) are stationary.
It is standard to prove and state limit theorems in terms of the a.c.i.p.
The measure in (c) appears in a widely used technical argument,
when \(T\) is reduced by a time change \emph{(inducing)} to a uniformly expanding map,
which may be easier to work with. Then statistical properties
of the induced map are used to prove results on the original map.
We explore the relation between processes defined with respect to
different measures. Our motivation is the study of almost sure approximations
by Brownian motion.
\begin{definition}
\label{defn:ASIP}
We say that \(v_n\) satisfies the
\emph{Almost Sure Invariance Principle} (ASIP), if without changing the distribution,
\(\{v_n, n \geq 0\}\) can be redefined on a new probability space
with a Brownian motion \(W_t\), such that with some \(\beta < 1/2\),
\[
v_n = W_n + o(n^{\beta})
\qquad \text{almost surely}
.
\]
\end{definition}
The ASIP is a strong statistical property,
it implies the central limit theorem (CLT) and
the law of iterated logarithm (LIL), which in one dimension take form
\[
{\mathbb{P}}\Bigl( \frac{v_n}{\sqrt{n}} \in [a,b] \Bigr)
\xrightarrow{n \to \infty} \frac{1}{\sqrt{2 \pi \sigma^2} }\int_a^b e^{-\frac{x^2}{2 \sigma^2}} \, dx
\quad \text{for all } a \leq b
\]
and
\[
\limsup_{n \to \infty} \frac{v_n}{\sqrt{n \log \log n}} = \sqrt{2} \sigma
\quad \text{almost surely}.
\]
The ASIP also implies functional versions of the CLT and the LIL as well
as other laws, see Philipp and Stout \cite[Chapter~1]{PS75}.
Melbourne and Nicol \cite{MN05,MN09} proved
\begin{theorem}
Suppose that \(T\) is nonuniformly expanding with return times in
\(L^p\), \(p > 2\) (see Section~\ref{sec:UEM} for definitions)
with an absolutely continuous invariant probability measure \(\rho\).
If \(v \colon M \to {\mathbb{R}}^d\) is a H\"older continuous
observable with \(\int_M v \, d\rho = 0\), then the process
\(v_n = \sum_{k=0}^{n-1} v \circ T^k\), defined on a probability space
\((M, \rho)\), satisfies the ASIP.
\end{theorem}
\begin{remark}
\label{rmk:NUH}
Following the approach of \cite{B75,S72},
the ASIP for nonuniformly expanding systems extends to
a large class of nonuniformly hyperbolic systems
which satisfy the hypotheses of Young~\cite{Y98},
for example Sinai billiards or H\'enon maps. See \cite[Lemma~3.2]{MN05}.
\end{remark}
Later Gou\"ezel discovered a gap in~\cite{MN05,MN09}:
what Melbourne and Nicol actually proved is the ASIP
for a different starting measure, the one invariant under the
induced map. A similar issue appears in Denker and Philipp~\cite{DP84},
though they do not claim the ASIP for the invariant measure.
Even though there is a close relation between the two measures,
the argument relating the ASIP-s was missing. The main goal of this
paper is to fill this gap.
\begin{remark}
Despite the gap, the usual corollaries of the
ASIP (such as the functional central limit theorem
and functional law of iterated logarithm) can be obtained from~\cite{MN05,MN09},
as it is done in~\cite{DP84}.
\end{remark}
\begin{remark}
Besides~\cite{MN05,MN09}, there are other results which
cover nonuniformly hyperbolic systems, but only partially:
\begin{itemize}
\item Chernov~\cite{C06}: scalar ASIP for dispersing billiards.
\item Gou\"ezel~\cite{G10}: vector valued ASIP for dynamical systems
with an exponential multiple decorrelation assumption (includes
dispersing billiards).
\item Cuny and Merlev\`ede~\cite{CM15}: scalar ASIP for
reverse martingale differences (applies to nonuniformly expanding maps,
see~\cite{KKM16mart}).
\end{itemize}
Problems which are only covered by~\cite{MN05} and~\cite{MN09}
include the vector valued ASIP for maps with slower than
exponential rate of decay of correlations, such as the intermittent
family~\cite{LSV99}.
\end{remark}
We work in the setting where \(T\) is a nonuniformly
expanding map (as in~\cite{Y99}) and \(v_n = \sum_{k=0}^{n-1} v \circ T^k\)
are Birkhoff sums. Given two probability measures \(\mu\) and \(\rho\),
we compare the random processes \(X_n = (v_n, \mu)\) and \(Y_n = (v_n, \rho)\).
Our main result is that if \(v\) is bounded, then
in a large class of probability measures, it is possible to redefine
\(\{X_n, n \geq 0\}\) and \(\{Y_n, n \geq 0\}\) on a new
probability space so that
\(
Z = \sup_{n \geq 0} |X_n - Y_n|
\)
is finite almost surely.
\begin{remark}
Technically, the statement above means that there exists
a probability space \((\Omega, {\mathbb{P}})\), supporting processes \(X'_n\), \(Y'_n\),
such that:
\begin{itemize}
\item \(\{X_n, n \geq 0\}\) is equal in distribution to \(\{X'_n, n \geq 0\}\),
\item \(\{Y_n, n \geq 0\}\) is equal in distribution to \(\{Y'_n, n \geq 0\}\),
\item \(Z' = \sup_{n \geq 0} |X'_n - Y'_n|\) is finite almost surely.
\end{itemize}
\end{remark}
In addition, we estimate the tails of \(Z\) (i.e.\ \({\mathbb{P}}(Z \geq a)\)
for \(a \geq 0\)) in terms of \(|v|_\infty\) and parameters of \(T\)
such as distortion bound and asymptotics of return times.
For a fixed \(n \geq 0\), we estimate the distance
between \(X_n\) and \(Y_n\) in L\'evy-Prokhorov and Wasserstein
metrics. We expect such estimates to be useful for families
of dynamical systems as in~\cite{KKM15}.
\begin{remark}
Our approach is in many ways similar to the \emph{Coupling Lemma} for
dispersing billiards \cite[Lemma~7.24]{CM06},
due to Chernov, Dolgopyat and Young.
Also, after the first version of this paper was circulated,
the author was made aware that some of the techniques are analogous to
those in Zweim\"uller~\cite{Z09}. Notably, our disintegration~\eqref{eq:jnngg}
corresponds to Zweim\"uller's \emph{regenerative partition of unity}.
\end{remark}
The paper is organized as follows.
In Section~\ref{sec:UEM} we give the definition of nonuniformly expanding maps
and state our results.
In Section~\ref{sec:app} we present some applications, including the
ASIP in Subsection~\ref{sec:asip}.
Section~\ref{sec:proofs} contains the proofs.
\section{Abstract setup and results}
\label{sec:UEM}
\subsection{Nonuniformly expanding maps}
We use notation \({\mathbb{N}}=\{1,2,\ldots\}\) and \({\mathbb{N}}_0 ={\mathbb{N}} \cup \{0\}\).
Let \((M,d)\) be a metric space with a Borel probability measure \(m\)
and \(T \colon M \to M\) be a nonsingular transformation.
We assume that there exists \(Y \subset M\) with \(m(Y)>0\) and
\(\diam Y < \infty\), an at most countable partition \(\alpha\) of \(Y\)
(modulo a zero measure set)
and \(\tau \colon Y \to {\mathbb{N}}\) with \(\int_Y \tau \, dm < \infty\)
such that for every \(a \in \alpha\),
\begin{itemize}
\item \(m(a) > 0\),
\item \(\tau\) assumes a constant value \(\tau(a)\) on \(a\),
\item \(T^{\tau(a)} a \subset Y\).
\end{itemize}
Let \(F \colon Y \to Y\), \(F y = T^{\tau(y)} y\). We require that
there are constants \(\lambda>1\), \(\hat{K} \geq 0\) and \(\eta \in (0,1]\),
such that for each \(a \in \alpha\) and \(x,y \in a\):
\begin{itemize}
\item \(F\) restricts to a (measure-theoretic) bijection
from \(a\) onto \(Y\),
\item \(d(F x, F y) \geq \lambda d(x,y)\),
\item the inverse Jacobian \(\zeta_m = \frac{dm}{dm \circ F}\) of \(F\)
has bounded distortion:
\[
\bigl| \log \zeta_m (x) - \log \zeta_m (y) \bigr|
\leq \hat{K} d(F x, F y)^\eta
.
\]
\end{itemize}
We call such maps \(T\) \emph{nonuniformly expanding}.
We refer to \(F\) as \emph{induced map} and to \(\tau\) as
\emph{return time function.}
The class of nonuniformly expanding maps includes
logistic maps at Collet-Eckmann parameters,
intermittent maps \cite{Y99} and Viana maps.
To simplify the exposition, we assume that \(\diam Y \leq 1\) and \(\eta = 1\).
The general case can be always reduced to this by
replacing the metric \(d\) with \(d'\) given by \(d'(x,y) = c d(x,y)^\eta\),
where \(c\) is a sufficiently small constant.
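For completeness, here is the routine verification (the constant bookkeeping is ours).
Set \(d'(x,y) = c\, d(x,y)^\eta\) with \(c \leq (\diam Y)^{-\eta}\),
so that \(\diam_{d'} Y \leq 1\). For \(x,y \in a\), \(a \in \alpha\),
\[
d'(F x, F y) = c\, d(F x, F y)^\eta \geq c \lambda^\eta d(x,y)^\eta
= \lambda^\eta d'(x,y)
\]
and
\[
\bigl| \log \zeta_m (x) - \log \zeta_m (y) \bigr|
\leq \hat{K} d(F x, F y)^\eta
= \frac{\hat{K}}{c}\, d'(F x, F y),
\]
so \(T\) is nonuniformly expanding with respect to \(d'\) with
\(\lambda' = \lambda^\eta > 1\), \(\hat{K}' = \hat{K}/c\) and \(\eta' = 1\).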
It is standard that there exists a unique \(F\)-invariant
absolutely continuous probability
measure \(\mu\) on \(Y\). Let \(\zeta = \frac{d\mu}{d\mu \circ F}\).
By \cite[Propositions 2.3 and 2.5]{KKM16},
\begin{equation}
\label{eq:antt}
K^{-1} \leq \frac{d\mu}{dm} \leq K
\qquad \text{and} \qquad
\bigl| \log \zeta (x) - \log \zeta (y) \bigr|
\leq K d(F x, F y)
\end{equation}
for all \(x,y \in a\), \(a \in \alpha\),
where \(K\) is a constant which depends continuously
(only) on \(\lambda\) and \(\hat{K}\).
Where convenient, we view \(\mu\) as a measure on \(M\)
supported on \(Y\).
For a function \(\phi \colon Y \to {\mathbb{R}}\) denote
\[
|\phi|_\infty = \sup_{x \in Y} |\phi(x)|,
\qquad
|\phi|_d = \sup_{x \neq y \in Y} \frac{|\phi(x) - \phi(y)|}{d(x,y)}
\qquad \text{and} \qquad \|\phi\|_d = |\phi|_\infty + |\phi|_d.
\]
For \(\phi \colon Y \to (0,\infty)\), denote
\(|\phi|_{d, \ell} = |\log \phi|_d\).
\subsection{Coupling of processes}
Fix a constant \(R' > K \lambda / (\lambda - 1)\).
\begin{definition}
\label{defn:reg}
We call a probability measure \(\rho\) on \(M\)
\emph{regular} if it is supported on \(Y\) and \(d \rho = \phi \, d\mu\),
where \(\phi \colon Y \to [0, \infty)\)
satisfies \(|\phi|_{d, \ell} \leq R'\).
\end{definition}
\begin{definition}
\label{defn:freg}
We say that a probability measure \(\rho\) on \(M\) is \emph{forward regular}, if
it allows a disintegration
\begin{equation}
\label{eq:nu}
\rho = \int_{E} \rho_z \, d\varkappa (z),
\end{equation}
where \((E, \varkappa)\) is a probability space
and \(\{\rho_z\}\) is a measurable family of probability
measures on \(M\), and
there exists a function \(r \colon E \to {\mathbb{N}}_0\) such that
\(T_*^{r(z)} \rho_z\) is a regular measure for each \(z\).
We refer to \(r\) as a \emph{jump function.}
\end{definition}
Define \(s \colon M \times M \to {\mathbb{N}}_0 \cup \{\infty\}\),
\begin{equation}
\label{eq:ffo}
s(x,y) =
\inf \bigl\{ \max\{k, n\} \colon k, n \geq 0, \, T^k x = T^n y \bigr\}.
\end{equation}
Note that if \(s(x,y) < \infty\), then the trajectories
\(T^k x\) and \(T^k y\), \(k \geq 0\) coincide
up to a time shift and possibly different beginnings.
\begin{theorem}
\label{thm:yeop}
Suppose that a probability measure \(\rho\) on \(M\) is forward regular.
Then there exists a probability measure \({\hat{\rho}}\) on \(M \times M\)
with marginals \(\rho\) and \(\mu\) on the first and second
components respectively such that \(s\) is finite \({\hat{\rho}}\)-almost surely.
In addition, with \((E, \varkappa)\) and \(r\) as in Definition~\ref{defn:freg},
\begin{enumerate}[label=(\alph*)]
%
\item (Weak polynomial moments)
If \(\varkappa(r \geq n) \leq C_\beta n^{-\beta}\) and
\(\mu(\tau \geq n) \leq C_\beta n^{-\beta}\) for all \(n \geq 1\) with some
constants \(\beta > 1\) and \(C_\beta > 0\), then
\[
{\hat{\rho}}(s \geq n) \leq C n^{-\beta} \quad \text{for all } n > 0,
\]
where the constant \(C\) depends continuously (only) on
\(\lambda\), \(K\), \(R'\), \(\beta\) and \(C_\beta\).
%
\item (Strong polynomial moments)
If \(\int r^\beta \, d\varkappa \leq C_\beta\) and
\(\int \tau^\beta \, d\mu \leq C_\beta\) with some
constants \(\beta > 1\) and \(C_\beta > 0\), then
\[
\int s^\beta \, d{\hat{\rho}} \leq C,
\]
where the constant \(C\) depends continuously (only) on
\(\lambda\), \(K\), \(R'\), \(\beta\) and \(C_\beta\).
%
\item (Exponential and stretched exponential moments)
If \(\varkappa(r \geq n) \leq C_{\alpha,\gamma} e^{-\alpha n^\gamma}\) and
\(\mu(\tau \geq n) \leq C_{\alpha,\gamma} e^{-\alpha n^\gamma}\)
for all \(n\)
with some constants \(\alpha > 0\), \(\gamma \in (0,1]\), \(C_{\alpha, \gamma} > 0\),
then
\[
{\hat{\rho}}(s \geq n) \leq C e^{-A n^\gamma} \quad \text{for all } n > 0,
\]
where the constants \(C > 0\) and \(A > 0\) depend continuously (only) on
\(\lambda\), \(K\), \(R'\), \(\alpha\), \(\gamma\) and \(C_{\alpha,\gamma}\).
%
\end{enumerate}
\end{theorem}
Let \(v \colon M \to {\mathbb{R}}^d\) be a bounded observable and
\(v_n = \sum_{k=0}^{n-1} v \circ T^k\). Denote
\(|v|_\infty = \sup_{x \in M} |v(x)|\).
\begin{remark}
\label{rmk:aaggg}
\(|v_n(x) - v_n(y)| \leq 2 |v|_\infty s(x,y)\)
for all \(x,y \in M\) and \(n \geq 0\).
Indeed, if \(T^k x = T^n y\) with \(\max\{k,n\} = s(x,y)\), then the two
Birkhoff sums share all but at most \(k + n + |k - n| = 2 \max\{k,n\}\)
terms, each bounded by \(|v|_\infty\).
\end{remark}
Let \(\rho_j\), \(j=1,2\) be two forward regular probability measures
with disintegrations
\(
\rho_j = \int_{E_j} \rho_{j,z} \, d\varkappa_j (z)
\)
and jump functions \(r_j\).
Let \(X_n = (v_n, \rho_1)\) and \(Y_n = (v_n, \rho_2)\) be the
related random processes.
\begin{theorem}
\label{thm:yeoc}
The processes \(\{X_n, n \geq 0\}\) and \(\{Y_n, n \geq 0\}\)
can be redefined on the same probability space \((\Omega, {\mathbb{P}})\)
such that
\(
Z = \sup_{n \geq 0} |X_n - Y_n|
\)
is finite with probability one. Also:
\begin{enumerate}[label=(\alph*)]
%
\item (Weak polynomial moments)
If \(\varkappa_1(r_1\geq n) \leq C_\beta n^{-\beta}\),
\(\varkappa_2(r_2\geq n) \leq C_\beta n^{-\beta}\) and
\(\mu(\tau \geq n) \leq C_\beta n^{-\beta}\) for all \(n\) with
some constants \(C_\beta > 0\) and \(\beta > 1\), then
\[
{\mathbb{P}}(Z \geq x) \leq C x^{-\beta} \quad \text{for all } x > 0,
\]
where the constant \(C\) depends continuously (only) on
\(\lambda\), \(K\), \(R'\), \(\beta\), \(C_\beta\) and \(|v|_\infty\).
%
\item (Strong polynomial moments)
If \(\int r_1^\beta \, d\varkappa_1 \leq C_\beta\),
\(\int r_2^\beta \, d\varkappa_2 \leq C_\beta\) and
\(\int \tau^\beta \, d\mu \leq C_\beta\) with
some constants \(C_\beta > 0\) and \(\beta > 1\), then
\[
\int Z^\beta \, d{\mathbb{P}} \leq C,
\]
where the constant \(C\) depends continuously (only) on
\(\lambda\), \(K\), \(R'\), \(\beta\), \(C_\beta\) and \(|v|_\infty\).
%
\item (Exponential and stretched exponential moments)
If \(\varkappa_1(r_1 \geq n) \leq C_{\alpha,\gamma} e^{-\alpha n^\gamma}\),
\(\varkappa_2(r_2 \geq n) \leq C_{\alpha,\gamma} e^{-\alpha n^\gamma}\) and
\(\mu(\tau \geq n) \leq C_{\alpha,\gamma} e^{-\alpha n^\gamma}\)
for all \(n\)
with some constants \(\alpha > 0\), \(\gamma \in (0,1]\), \(C_{\alpha, \gamma} > 0\),
then
\[
{\mathbb{P}}(Z \geq x) \leq C e^{-A x^\gamma} \quad \text{for all } x > 0,
\]
where the constants \(C > 0\) and \(A > 0\) depend continuously (only) on
\(\lambda\), \(K\), \(R'\), \(\alpha\), \(\gamma\), \(C_{\alpha,\gamma}\)
and \(|v|_\infty\).
%
\end{enumerate}
\end{theorem}
Proofs of Theorems~\ref{thm:yeop} and~\ref{thm:yeoc} are
in Section~\ref{sec:proofs}.
\section{Applications}
\label{sec:app}
\subsection{L\'evy-Prokhorov and Wasserstein distances}
Let \(X\) and \(Y\) be \({\mathbb{R}}^d\)-valued random variables, and \({\mathbb{P}}_X\), \({\mathbb{P}}_Y\)
be the associated probability measures on \({\mathbb{R}}^d\). Recall the following
definitions:
\begin{definition}
The L\'evy-Prokhorov distance between \(X\) and \(Y\) is
\begin{align*}
d_{LP} (X, Y)
= \inf \{ & {\varepsilon} > 0 \colon {\mathbb{P}}_X(A) \leq {\mathbb{P}}_Y (A^{\varepsilon}) + {\varepsilon}
\;\text{ and }\; {\mathbb{P}}_Y(A) \leq {\mathbb{P}}_X (A^{\varepsilon}) + {\varepsilon}
\\ & \text{ for all Borel } A \subset {\mathbb{R}}^d\},
\end{align*}
where \(A^{\varepsilon} = \{x \colon \inf_{y \in A } |x-y| \leq {\varepsilon} \} \).
\end{definition}
\begin{definition}
For \(p \geq 1\), the \(p^{\text{th}}\) Wasserstein distance between \(X\) and \(Y\) is
\[
d_{W,p} (X, Y)
= \inf \Bigl[ {\mathbb{E}} \bigl( |X-Y|^p \bigr) \Bigr]^{1/p},
\]
where the infimum is taken over all couplings of \(X\) and \(Y\).
\end{definition}
Suppose that \(X_n\) and \(Y_n\) are as in Theorem~\ref{thm:yeoc} (a),
under the assumption of the polynomial tails.
Then Theorem~\ref{thm:yeoc} implies the following:
\begin{corollary}
For each \(n \geq 0\) and \(1 \leq p < \beta\),
\[
d_{LP} (X_n, Y_n) \leq C_{LP}
\qquad \text{and} \qquad
d_{W,p} (X_n, Y_n) \leq C_{W,p}
,
\]
where the constants \(C_{LP}\) and \(C_{W,p}\) depend continuously (only) on
\(p\) and the constant \(C\) from Theorem~\ref{thm:yeoc} (a).
In particular, they do not depend on \(n\).
\end{corollary}
\begin{proof}
Let \(n\) be fixed. Theorem~\ref{thm:yeoc} provides us with a
coupling of \(X_n\) and \(Y_n\) on a probability space
\((\Omega, {\mathbb{P}})\) such that \(Z = |X_n - Y_n|\) satisfies
\({\mathbb{P}}(Z \geq x) \leq C x^{-\beta}\) for all \(x > 0\).
By definition, \(d_{W, p}(X_n, Y_n) \leq \bigl( {\mathbb{E}} ( Z^p ) \bigr)^{1/p}\),
and the bound on \(d_{W, p}(X_n, Y_n)\) follows.
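To make the moment bound explicit (an elementary computation; the splitting
point \(1\) is an arbitrary choice):
\[
{\mathbb{E}}(Z^p)
= \int_0^\infty p x^{p-1}\, {\mathbb{P}}(Z \geq x) \, dx
\leq \int_0^1 p x^{p-1} \, dx + C \int_1^\infty p x^{p-1-\beta} \, dx
= 1 + \frac{pC}{\beta - p},
\]
which is finite precisely because \(p < \beta\).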
By \cite[Theorem 2]{GS02}, \(d_{LP} (X_n, Y_n)
\leq \sqrt{d_{W,1} (X_n, Y_n)}\).
\end{proof}
\begin{remark}
Our estimates on the distances between \(X_n\) and \(Y_n\) do not depend on
\(n\). It follows that the distances between their normalized versions,
such as \(n^{-1/2} X_n\) and \(n^{-1/2} Y_n\),
converge to zero as \(n\) goes to infinity.
\end{remark}
\subsection{Disintegration for the \(T\)-invariant measure}
\label{sec:jnu}
Recall that \(\mu\) is the absolutely continuous \(F\)-invariant measure.
Following \cite{Y99},
there exists a unique \(T\)-invariant ergodic probability measure \(\rho\)
on \(M\), with respect to which \(\mu\) is absolutely continuous.
To define the regular measures, we fix
\(R' > K \lambda / (\lambda - 1)\).
Here we show that \(\rho\) fits the setup of
Theorems~\ref{thm:yeop} and~\ref{thm:yeoc}:
\begin{proposition}
\label{prop:jnu}
The measure \(\rho\) is forward regular:
\(
\rho = \int_{E} \rho_z \, d\varkappa (z),
\)
with jump function \(r \colon E \to {\mathbb{N}}_0\) such that
\(\varkappa(r = n) = \bar{\tau}^{-1} \mu(\tau \geq n)\),
where \( \bar{\tau} = \int_Y \tau \, d\mu\).
\end{proposition}
\begin{proof}
We start by constructing a Young tower
\(
\breve{M}
= \{(y, \ell) \in Y \times {\mathbb{Z}} \colon 0 \leq \ell < \tau(y) \}
\)
with the tower map
\[
\breve{T} (y, \ell) =
\begin{cases}
(y, \ell+1), & \ell < \tau(y) - 1, \\
(Fy, 0), & \ell = \tau(y) - 1
\end{cases}
.
\]
The projection \(\pi \colon \breve{M} \to M\), \(\pi(y,\ell) = T^\ell(y)\)
serves as a semiconjugacy between \(\breve{T}\) and \(T\).
The natural probability measure
\(
\breve{\rho} =
\bar{\tau}^{-1} \, (\mu \times \text{counting})
\)
on \(\breve{M}\) is \(\breve{T}\)-invariant,
and its projection \(\rho = \pi_* \breve{\rho}\) is the only
\(T\)-invariant ergodic probability measure on \(M\)
such that \(\mu \ll \rho\).
Using the definition of \(\breve{\rho}\) and
\(\pi\), we can write \(\rho\) as
\[
\rho
= \bar{\tau}^{-1}
\sum_{a \in \alpha} \sum_{\ell=0}^{\tau(a)-1}
\mu(a) T_*^{\ell} \mu_a
,
\]
where \(\mu_a\) is the normalized restriction of \(\mu\) to \(a\), i.e.\
\(\mu_a (S) = (\mu(a))^{-1} \mu (a \cap S)\)
for all \(S \subset M\).
Let \(E = \{ (a, \ell) \in \alpha \times {\mathbb{Z}}
\colon 0 \leq \ell < \tau(a)\}\) and
\(\varkappa(a, \ell) = \bar{\tau}^{-1} \mu(a)\).
Then \(\varkappa\) is a probability measure on \(E\), and
\[
\rho = \sum_{(a, \ell) \in E} \rho_{a,\ell} \, \varkappa(a, \ell),
\qquad \text{where} \,\,
\rho_{a, \ell} = T_*^\ell \mu_a
\]
is the disintegration we are after.
Further, let \(r \colon E \to {\mathbb{N}}_0\), \(r(a, \ell) = \tau(a)-\ell\).
Then for every \(a, \ell\),
the measure \(T_*^{r(a, \ell)} \rho_{a, \ell} = F_* \mu_a\)
is supported on \(Y\), and its density
is \(p_a(y) = (\mu(a))^{-1} \zeta({y_a})\),
where \(y_a\) is the unique preimage of \(y\)
in \(a\) under \(F\).
By \eqref{eq:antt}, \(T_*^{r(a, \ell)} \rho_{a, \ell}\) is regular.
Finally,
\begin{align*}
\varkappa(r = n)
& = \sum_{(a,\ell) \in E} 1_{\ell = \tau(a) - n} \varkappa(a,\ell)
= \bar{\tau}^{-1} \sum_{a \in \alpha \colon \tau(a) \geq n} \mu(a)
= \bar{\tau}^{-1} \mu(\tau \geq n)
.
\end{align*}
\end{proof}
\subsection{Intermittent maps}
\label{sec:LSV}
Consider a family of Pomeau-Manneville maps, as in \cite{LSV99},
\(T \colon [0,1] \to [0,1]\),
\[
T (x) =
\begin{cases}
x ( 1 + 2^\gamma x^\gamma), & x \leq 1/2 \\
2 x - 1, & x > 1/2
\end{cases}
,
\]
where \(\gamma \in (0,1)\) is a parameter. This is a popular example
of maps with polynomial decay of correlations (sharp rate for
H\"older observables is \(n^{1-1/\gamma}\) \cite{G04,H04,S02,Y99}).
Let \(M = [0,1]\). It is standard (see \cite{Y99}) that
\(T\) fits the setup of Section~\ref{sec:UEM}
with \(Y = [1/2,1]\), and \(\tau\) being
the first return time to \(Y\).
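Before proceeding, we note that the polynomial tail of \(\tau\) used below is easy to observe numerically. The following sketch (illustrative only; the parameter value, sample size and iteration cap are arbitrary choices) estimates the tail of the first return time with respect to normalized Lebesgue measure on \(Y\), which is comparable to the \(\mu\)-tail by \eqref{eq:antt}.
\begin{verbatim}
import numpy as np

gamma = 0.6          # map parameter; predicted tail exponent is 1/gamma
rng = np.random.default_rng(0)

def T(x):
    return x * (1 + (2 * x) ** gamma) if x <= 0.5 else 2 * x - 1

def return_time(x, cap=10**6):
    """First return time to Y = [1/2, 1] of a point x in Y."""
    for n in range(1, cap):
        x = T(x)
        if x >= 0.5:
            return n
    return cap

taus = np.array([return_time(x) for x in rng.uniform(0.5, 1.0, 100000)])
for n in (10, 30, 100, 300):
    # Empirical tail vs the predicted order n**(-1/gamma), up to a constant.
    print(n, (taus >= n).mean(), n ** (-1 / gamma))
\end{verbatim}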
We consider three natural probability measures on \(M\):
\begin{itemize}
\item \(m\), the Lebesgue measure,
\item \(\rho\), the unique absolutely continuous \(T\)-invariant probability measure,
\item \(\mu\), the absolutely continuous invariant measure
for the induced map, as in Section~\ref{sec:UEM}.
\end{itemize}
Let \(v \colon M \to {\mathbb{R}}^d\) be a bounded observable,
\(v_n = \sum_{k=0}^{n-1} v \circ T^k\), and
\(X_{m,n} = (v_n, m)\), \(X_{\rho,n} = (v_n, \rho)\)
and \(X_{\mu,n} = (v_n, \mu)\) be the corresponding
random processes.
\begin{theorem}
\label{thm:sshw}
The processes \(\{X_{m,n}, n \geq 0\}\),
\(\{X_{\mu,n}, n \geq 0\}\) and
\(\{X_{\rho,n}, n \geq 0\}\) can be redefined on the
same probability space \((\Omega, {\mathbb{P}})\) so that
\begin{itemize}
\item
\(Z_{m,\mu} = \sup_{n \geq 0} |X_{m,n} - X_{\mu,n}|\)
satisfies \({\mathbb{P}} (Z_{m, \mu} \geq x) \leq C x^{-1/\gamma}\)
for \(x > 0\).
\item
\(Z_{m,\rho} = \sup_{n \geq 0} |X_{m,n} - X_{\rho,n}|\)
satisfies \({\mathbb{P}} (Z_{m, \rho} \geq x) \leq C x^{-1/\gamma + 1}\)
for \(x > 0\).
\item
\(Z_{\rho,\mu} = \sup_{n \geq 0} |X_{\rho,n} - X_{\mu,n}|\)
satisfies \({\mathbb{P}} (Z_{\rho, \mu} \geq x) \leq C x^{-1/\gamma + 1}\)
for \(x > 0\).
\end{itemize}
The constant \(C\) depends continuously (only) on \(\gamma\) and \(|v|_\infty\).
\end{theorem}
\begin{proof}
We write \(a \ll b\), if there is a constant \(C\) which
depends continuously only on \(\gamma\) such that \(a \leq C b\).
It is enough to show that with an appropriate choice of
the constant \(R'\) in Definition~\ref{defn:reg},
the measures \(m\) and \(\rho\) are forward regular:
\begin{itemize}
\item[(a)]
\(
m = \int_{E_m} m_z \, d\varkappa_m(z)
\)
with \(r_m \colon E_m \to {\mathbb{N}}_0\)
for which \(T^{r_m(z)}_* m_z\) are regular probability measures.
Also, \(\varkappa_m( r_m \geq n) \ll n^{-1/\gamma}\) for all \(n > 0\).
\item[(b)]
\(
\rho = \int_{E_\rho} \rho_z \, d\varkappa_\rho(z)
\)
with \(r_\rho \colon E_\rho \to {\mathbb{N}}_0\)
for which \(T^{r_\rho(z)}_* \rho_z\) are regular probability measures.
Also, \(\varkappa_\rho (r_\rho \geq n) \ll n^{-1/\gamma + 1}\)
for all \(n > 0\).
\end{itemize}
Then the results follow from Theorem~\ref{thm:yeoc} and Lemma~\ref{lemma:joc}.
We use the bound \(\mu(\tau \geq n) \ll n^{-1/\gamma}\)
(see \cite{K16} for a proof with uniform constants).
By Proposition~\ref{prop:jnu}, \(\rho\) is forward regular and
\[
\varkappa_\rho(r_\rho \geq n) = \sum_{k \geq n} \varkappa_\rho(r_\rho = k)
= \bar{\tau}^{-1} \sum_{k \geq n} \mu(\tau \geq k)
\ll n^{-1/\gamma + 1}
.
\]
This proves (b). Further we prove (a).
We extend \(\tau \colon Y \to {\mathbb{N}}\) to \(\tau \colon M \to {\mathbb{N}}\)
by \(\tau(x) = \min \{ k \geq 1 \colon T^k (x) \in Y\}\),
and accordingly set \(F \colon M \to Y\),
\(F(x) = T^{\tau(x)}(x)\), extending the previous definition.
It is standard \cite{K16} that \(M\) can be partitioned
(modulo a zero measure set) into countably many
subintervals \([a_k, b_k]\), \(k \in {\mathbb{N}}\), on which
\(\tau\) is constant, and \(F \colon [a_k, b_k] \to Y\)
is a diffeomorphism with bounded distortions, i.e.\
\begin{equation}
\label{eq:ann}
\Bigl| \log \frac{F'(x)}{F'(y)} \Bigr|
\ll |F(x) - F(y)|
\qquad \text{ for all } k \text{ and } x,y \in [a_k, b_k].
\end{equation}
Further, \(m(\tau \geq n) \ll n^{-1/\gamma}\).
Let \(m_k\) denote the normalized Lebesgue measure on \([a_k, b_k]\).
It follows from~\eqref{eq:ann} and~\eqref{eq:antt} that \(F_* m_k\) is a regular
measure with \(R'\) depending continuously (only) on \(\gamma\).
It follows that \(m\) is forward regular:
\(
m = \sum_{k \in {\mathbb{N}}} \varkappa_m(k) m_k,
\)
with the probability space \(({\mathbb{N}}, \varkappa_m)\),
\(\varkappa_m(k) = |b_k - a_k|\), and \(r_m \colon {\mathbb{N}} \to {\mathbb{N}}_0\),
\(r_m(k) = \tau\bigr|_{[a_k, b_k]}\).
Finally, observe that \(\varkappa_m( r_m \geq n) \ll n^{-1/\gamma}\).
\end{proof}
\subsection{Almost sure invariance principle}
\label{sec:asip}
Let \(v \colon M \to {\mathbb{R}}^d\), and \(v_n = \sum_{k=0}^{n-1} v \circ T^k\).
Recall that \(\mu\) is the absolutely continuous \(F\)-invariant measure
on \(Y\).
Let \(\rho\) be the \(T\)-invariant measure on \(M\) as in
Subsection~\ref{sec:jnu}. Suppose that \(\int_M v \, d\rho = 0\).
Let \(X_n = (v_n, \rho)\) and \(Y_n = (v_n, \mu)\).
Under the assumptions that \(\tau \in L^p\), \(p > 2\) and
\(v\) is H\"older continuous, Melbourne and Nicol prove in \cite{MN05,MN09} the
ASIP for \(Y_n\) (with rates), and claim the ASIP for \(X_n\).
However, their argument does not cover the transition from \(Y_n\) to \(X_n\).
Here we close this gap.
\begin{theorem}
\label{thm:vahu}
The ASIP for \(X_n\) is equivalent to the ASIP for \(Y_n\), with the same rates.
\end{theorem}
\begin{remark}
In~\cite{MN05,MN09}, the authors prove the ASIP for nonuniformly expanding systems
and then extend the result to nonuniformly hyperbolic systems \cite[Section 3]{MN05}.
In Theorem~\ref{thm:vahu}, \(T\) is a nonuniformly expanding system,
but proving it, we close the gap in both situations.
\end{remark}
\begin{proof}[Proof of Theorem~\ref{thm:vahu}]
Assume the ASIP for \(X_n\) as in Definition \ref{defn:ASIP},
with a Brownian motion \(W_n\) and rate \(o(n^\beta)\).
Proposition~\ref{prop:jnu} allows us to use Theorem~\ref{thm:yeoc} to
redefine the processes \(\{X_n, n \geq 0\}\) and \(\{Y_n, n \geq 0\}\)
on the same probability space so that
\(\sup_{n \geq 0} |X_n - Y_n|\) is finite almost surely.
Using Lemma~\ref{lemma:joc}, we can redefine \(\{X_n, n \geq 0\}\),
\(\{Y_n, n \geq 0\}\) and \(W_t\)
on the same probability space so that
\(\sup_{n \geq 0} |X_n - Y_n| < \infty\) and
\(X_n = W_n + o( n^{\beta} )\) almost surely.
Then also \(Y_n = W_n + o( n^{\beta} )\) almost surely.
We proved that the ASIP for \(X_n\) implies the ASIP for \(Y_n\),
with the same rates. The same argument proves the other direction.
\end{proof}
\section{Proof of Theorems~\ref{thm:yeop} and~\ref{thm:yeoc}}
\label{sec:proofs}
\subsection{Outline of the proof}
Recall that \(\mu\) is the absolutely continuous probability measure, invariant
under the induced map \(F\).
To prove Theorem~\ref{thm:yeop}, we:
\begin{enumerate}[label=(\alph*)]
\item\label{thm:yeop:rpu} Build (Subsection~\ref{sec:dis}) a countable probability space \({\mathcal{A}}\)
with a function \(t \colon {\mathcal{A}} \to {\mathbb{N}}_0\)
and show that if \(\rho\) is a probability measure such that \(T_*^n \rho\) is
regular for some \(n \geq 0\), then \(\rho\) has a representation
\begin{equation}
\label{eq:jnngg}
\rho = \sum_{a \in {\mathcal{A}}} {\mathbb{P}}(a) \rho_a
\qquad \text{with } T_*^{n + t(a)} \rho_a = \mu
\text{ for all } a
,
\end{equation}
where \({\mathbb{P}}\) is a probability measure on \({\mathcal{A}}\).
(C.f.\ \emph{regenerative partition of unity} in \cite{Z09}).
\item Show that the tails \({\mathbb{P}}(t \geq n)\) can be bounded uniformly for all
regular measures (Subsection~\ref{sec:tails}).
\item\label{thm:yeop:diss}
Consider a particularly simple case, when \(\rho\) is such that \(T_*^n \rho = \mu\)
for some \(n \geq 0\). Then we take \({\hat{\rho}} = (U_n)_* \rho\), where
\(U_n \colon M \to M \times M\), \(U_n(x) = (x, T^n x)\).
We observe that the marginals of \({\hat{\rho}}\) on the first and second coordinates
are \(\rho\) and \(\mu\) respectively and \({\hat{\rho}}(s > n) = 0\).
\item\label{thm:yeop:dis}
The procedure in \ref{thm:yeop:diss} transparently extends to weighted sums of measures,
as in \eqref{eq:jnngg}.
We take
\[
{\hat{\rho}} = \sum_{a \in {\mathcal{A}}} {\mathbb{P}}(a) (U_{n + t(a)})_* \rho_a
.
\]
Observe that then \({\hat{\rho}}(s \geq n + k ) \leq {\mathbb{P}}(t \geq k)\) for all \(k \geq 0\).
\item Now, \ref{thm:yeop:rpu} and \ref{thm:yeop:dis} already prove
Theorem~\ref{thm:yeop} for the case when \(T_*^n \rho\)
is regular. In Subsection~\ref{proof:thm:yeop} we extend this to the class of
all forward regular measures.
\end{enumerate}
The idea of the proof of Theorem~\ref{thm:yeoc} is that
if \(\rho_1\) and \(\rho_2\) are forward regular measures,
then each of them can be coupled with
\(\mu\) in the sense of Theorem~\ref{thm:yeop}.
Then we couple \(\rho_1\) and \(\rho_2\) through their couplings with \(\mu\) by
a standard argument in Probability Theory, see Appendix~\ref{app:joc}.
\subsection{Disintegration}
\label{sec:dis}
Let $P\colon L^1(Y)\to L^1(Y)$ be the transfer operator corresponding
to $F$ and $\mu$, so
$\int_Y P\phi\,\psi\,d\mu=\int_Y\phi\,\psi\circ F\,d\mu$
for all $\phi\in L^1$ and $\psi\in L^\infty$.
Then $P \phi$ is given explicitly by
\[
(P \phi)(y) = \sum_{a \in \alpha} \zeta(y_a) \phi(y_a),
\]
where $y_a$ is the unique preimage of $y$ under $F$ lying in $a$.
Recall that \(R'\) is a fixed constant, and \(R' > K \lambda / (\lambda -1)\).
Let \(R = \lambda(R' - K)\). Then \(R > K + \lambda^{-1} R\).
Choose \(\xi \in (0,e^{-R})\) such that
\(R (1-\xi e^R) \geq K + \lambda^{-1} R\).
\begin{proposition}
\label{prop:bamh}
Assume that \(\phi \colon Y \to (0, \infty)\) is such that
\(|\phi|_{d, \ell} \leq R'\).
Then \(\phi = \xi \int_Y \phi \, d\mu + \psi\), where
\(|\psi|_{d, \ell} \leq R\).
In addition, \(|P(1_a \psi)|_{d, \ell} \leq R'\)
for every \(a \in \alpha\).
\end{proposition}
\begin{proof}
See \cite[Propositions 3.1 and 3.2]{KKM16}.
\end{proof}
Let \({\mathcal{A}}\) denote the countable set of all finite words in the alphabet
\(\alpha\), including the empty word. For \(a \in {\mathcal{A}}\), let
\([a]\) denote the subset of words in \({\mathcal{A}}\)
which begin with \(a\). Let \(\ell(a)\) denote the length of \(a\).
Define \(t \colon {\mathcal{A}} \to {\mathbb{Z}}\),
\(t(a)=\sum_{k=1}^{\ell(a)} \tau(a_k)\), where \(a_k\) is the \(k\)-th
letter of \(a\).
\begin{proposition}
\label{prop:vaoe}
Let \(\rho\) be a probability measure on \(M\) such that
\(T_*^n \rho\) is regular for some \(n \geq 0\). Then there is a
decomposition
\(\rho = \xi \rho' + \sum_{a \in \alpha} r_a \rho_a\),
where \(\rho'\) and all \(\rho_a\) are probability measures
and \(r_a > 0\), such that
\begin{itemize}
\item
\(
e^{-R} (1-\xi) \mu(a)
\leq r_a \leq
e^{ R} (1-\xi) \mu(a)
\),
\item \(T_*^n \rho' = \mu\),
\item \(T_*^{n+\tau(a)} \rho_a\) is a regular measure
for every \(a \in \alpha\).
\end{itemize}
\end{proposition}
\begin{proof}
Let \(\chi = T_*^n \rho\). Since \(\chi\) is regular probability measure,
there exists \(\phi \colon Y \to (0, \infty)\) such that
\(|\phi|_{d, \ell} \leq R'\),
\(d\chi = \phi \, d\mu\) and \(\int_Y \phi \, d\mu = 1\).
By Proposition~\ref{prop:bamh},
\( \phi = \xi + \psi\), where \(|\psi|_{d, \ell} \leq R\).
For \(a \in \alpha\), define \(r_a = \int_a \psi \, d\mu\)
and \(\psi_a = r_a^{-1} 1_a \psi \).
Then \(\int_Y \psi_a \, d\mu = 1\) and
by Proposition~\ref{prop:bamh}, \(|P \psi_a|_{d, \ell} \leq R'\).
Define \(\chi_a\) to be a probability
measure on \(M\) given by
\( d \chi_a = \psi_a \, d\mu\).
Then \(T_*^{\tau(a)} \chi_a\) is
a regular probability measure with density \(P \psi_a\).
Observe that
\begin{equation}
\label{eq:hhja}
\chi = \xi \mu + \sum_{a \in \alpha} r_a \chi_a.
\end{equation}
By \cite[(3.1)]{KKM16},
\[
e^{-R} (1-\xi) =
e^{-R} \int_Y \psi \, d\mu
\leq \psi \leq
e^{ R} \int_Y \psi \, d\mu
= e^R (1-\xi).
\]
Therefore
\(
e^{-R} (1-\xi) \mu(a)
\leq r_a \leq
e^{ R} (1-\xi) \mu(a)
\).
Now we use~\eqref{eq:hhja} to decompose \(\rho\) similarly.
Define \(\rho'\) to be a measure on \(M\) given by
\(
\frac{d\rho'}{d\rho} = \frac{d\mu}{d\chi} \circ T^n.
\)
Then \(T_*^n \rho' = \mu\).
Similarly define \(\rho_a\), \(a \in \alpha\) by
\(
\frac{d\rho_a}{d\rho} = \frac{d\chi_a}{d\chi} \circ T^n.
\)
Then \(T_*^n \rho_a = \chi_a\). Finally note that
\(\rho = \xi \rho' + \sum_{a \in \alpha} r_a \rho_a\).
\end{proof}
\begin{lemma}
\label{lemma:b87e}
Let \(n \geq 0\) and \(\rho\) be a probability measure
on \(M\) such that \(T_*^n \rho\) is regular.
There exists a probability measure \({\mathbb{P}}\) on \({\mathcal{A}}\)
and a disintegration
\begin{equation}
\label{eq:atyi}
\rho = \sum_{a \in {\mathcal{A}}} {\mathbb{P}}(a) \rho_a,
\end{equation}
where \(\rho_a\), \(a \in {\mathcal{A}}\) are probability measures on \(M\)
such that \(T_*^{n+t(a)} \rho_a = \mu\).
The measure \({\mathbb{P}}\) satisfies
\begin{equation}
\label{eq:ajiw}
\begin{gathered}
{\mathbb{P}}(\ell = k) = (1-\xi)^k \xi,
\\
e^{-R} (1-\xi) \mu(a_{k+1})
\leq \, {\mathbb{P}}([a_1 \cdots a_{k+1}] \mid [a_1 \cdots a_k])
\leq e^{ R} (1-\xi) \mu(a_{k+1}),
\end{gathered}
\end{equation}
for all \(k \geq 0\) and \(a_1, \ldots, a_{k+1} \in \alpha\).
\end{lemma}
\begin{proof}
Write \(\rho = \xi \rho' + \sum_{x \in \alpha} r_x \rho_x\) as in
Proposition~\ref{prop:vaoe}. Then for each \(x \in \alpha\) apply
Proposition~\ref{prop:vaoe} again and write
\(\rho_x = \xi \rho_x' + \sum_{y \in \alpha} r_{xy} \rho_{xy} \).
Apply the same to each \(\rho_{xy}\) and so on.
Then
\[
\rho = \xi \rho' + \sum_{x \in \alpha} r_x \xi \rho_x'
+ \sum_{x,y \in \alpha} r_x r_{xy} \xi \rho_{xy}' + \cdots
\]
This is a disintegration as in \eqref{eq:atyi} with
\({\mathbb{P}}(a) = r_{a_1} r_{a_1 a_2} \cdots r_{a_1 a_2 \cdots a_n} \xi\)
for \(a = a_1 \cdots a_n \in {\mathcal{A}}\).
Conditions~\eqref{eq:ajiw} are immediate.
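To spell out the first condition in \eqref{eq:ajiw} (a routine verification):
since both sides of the decomposition in Proposition~\ref{prop:vaoe} are
probability measures, \(\sum_{x \in \alpha} r_{a_1 \cdots a_k x} = 1 - \xi\)
for every prefix \(a_1 \cdots a_k\), and summing over the last letter first,
\[
{\mathbb{P}}(\ell = k)
= \sum_{a_1, \ldots, a_k \in \alpha}
r_{a_1} r_{a_1 a_2} \cdots r_{a_1 \cdots a_k} \, \xi
= (1-\xi)^k \xi
.
\]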
\end{proof}
\subsection{Polynomial and exponential tails}
\label{sec:tails}
Let \(\rho\) be a measure as in Lemma~\ref{lemma:b87e}
and \({\mathbb{P}}\) be the corresponding measure on \({\mathcal{A}}\).
Recall that \(t \colon {\mathcal{A}} \to {\mathbb{N}}_0\) is the total return time along a word, \(t(a) = \sum_{k=1}^{\ell(a)} \tau(a_k)\).
In this subsection we obtain elementary estimates of moments of \(t\)
in situations when \(\int_Y \tau^p \, d\mu < \infty\)
for some \(p > 1\), or
\(\int_Y e^{\gamma \tau} \, d\mu < \infty\) for some
\(\gamma > 0\).
For \(n \geq 1\), let \({\mathcal{A}}_n\) be the subset of \({\mathcal{A}}\) of
all words of length \(n\).
By Lemma~\ref{lemma:b87e}, \({\mathbb{P}}({\mathcal{A}}_n) = (1-\xi)^n \xi\).
Let \({\mathbb{P}}_n\) denote the conditional probability measure on \({\mathcal{A}}_n\).
Elements of \({\mathcal{A}}_n\) have the form \(a = a_1 \cdots a_n\),
and \(a_1, \ldots, a_n\) can be considered as random variables
with values in \(\alpha\), and \(t = \tau(a_1) + \cdots + \tau(a_n)\).
It follows from Lemma~\ref{lemma:b87e} that
for all \(k \leq n\) and \(x \in \alpha\),
\begin{equation}
\label{eq:lgg}
{\mathbb{P}}_n(a_k = x \mid a_1, \ldots, a_{k-1})
\leq e^R \mu(x)
.
\end{equation}
Indeed, by \eqref{eq:ajiw},
\({\mathbb{P}}_n(a_k = x \mid a_1, \ldots, a_{k-1})
= {\mathbb{P}}([a_1 \cdots a_{k-1} x] \mid [a_1 \cdots a_{k-1}]) / (1-\xi)
\leq e^R \mu(x)\).
\subsubsection{Polynomial tails}
\begin{proposition}
\label{prop:fwqjq}
Suppose that there exist \(C_\tau > 0\) and \(\beta > 1\) such that
\(\mu(\tau \geq \ell) \leq C_\tau \ell^{-\beta}\) for \(\ell \geq 1\).
Then \({\mathbb{P}}(t \geq \ell) \leq C \ell^{-\beta}\), where the constant
\(C > 0\) depends continuously on \(R\), \(\xi\) and \(C_\tau\).
\end{proposition}
\begin{proof}
Let \(k \leq n\), and \(a = a_1 \cdots a_n \in {\mathcal{A}}_n\).
By \eqref{eq:lgg},
\[
{\mathbb{P}}_n(\tau(a_k) \geq \ell)
\leq e^R \mu(\tau \geq \ell)
\leq C_\tau e^R \ell^{-\beta}
.
\]
Next,
\[
{\mathbb{P}}_n( t \geq \ell)
\leq \sum_{k=1}^n {\mathbb{P}}_n(\tau(a_k) \geq \ell/n)
\leq n C_\tau e^R (\ell/n)^{-\beta}
.
\]
Finally,
\[
{\mathbb{P}}(t \geq \ell)
= \sum_{n=1}^\infty {\mathbb{P}}({\mathcal{A}}_n) {\mathbb{P}}_n(t \geq \ell)
\leq C_\tau e^R \xi \ell^{-\beta} \sum_{n=1}^\infty (1-\xi)^n n^{1+\beta}
.
\]
\end{proof}
\begin{proposition}
\label{prop:qjqwf}
Suppose that there exist \(C_\tau > 0\) and \(\beta > 1\) such that
\(\int \tau^\beta \, d\mu \leq C_\tau\).
Then \(\int t^\beta \, d{\mathbb{P}} \leq C\), where the constant
\(C > 0\) depends continuously on \(R\), \(\xi\) and \(C_\tau\).
\end{proposition}
\begin{proof}
Let \(k \leq n\), and \(a = a_1 \cdots a_n \in {\mathcal{A}}_n\).
By \eqref{eq:lgg},
\[
\int \tau^\beta(a_k) \, d{\mathbb{P}}_n
\leq e^R \int \tau^\beta \, d\mu
\leq C_\tau e^R
.
\]
Next,
\[
t^\beta (a)
= (\tau(a_1) + \cdots + \tau(a_n))^\beta
\leq n^{\beta-1} (\tau^\beta(a_1) + \cdots + \tau^\beta(a_n)),
\]
thus
\[
\int t^\beta \, d{\mathbb{P}}_n
\leq n^{\beta-1} C_\tau e^R
.
\]
Finally,
\[
\int t^\beta \, d{\mathbb{P}}
= \sum_{n=1}^{\infty} {\mathbb{P}}({\mathcal{A}}_n) \int t^\beta \, d{\mathbb{P}}_n
\leq C_\tau e^R \xi \sum_{n=1}^\infty (1-\xi)^n n^{\beta-1}
.
\]
\end{proof}
\subsubsection{(Stretched) exponential tails}
\begin{proposition}
\label{prop:aexp}
Let \(X_1, \ldots, X_n\) be nonnegative random variables.
Suppose that there exist \(\alpha>0\), $\gamma\in(0,1]$, such that
\[
{\mathbb{P}}( X_k \geq \ell\,|\,X_1=x_1,\dots,X_{k-1}=x_{k-1})
\leq C e^{-\alpha \ell^\gamma}
\]
for all \(\ell \geq 0\), $1\le k\le n$ and $x_1,\dots,x_{k-1}\ge0$.
Then for all $A \in (0,\alpha/2]$, $\ell\ge0$,
\[
{\mathbb{P}}(X_1 + \cdots + X_n \geq \ell)
\leq (1+A C_1)^n e^{-A \ell^\gamma},
\]
where \(C_1\) depends continuously
on $C$, \(\gamma\) and \(\alpha\).
\end{proposition}
\begin{proof}
See \cite[Proposition 4.11]{KKM16}.
\end{proof}
\begin{proposition}
\label{prop:afsame}
Suppose that there exist \(C_\tau > 0\), \(\alpha > 0\) and
\(\gamma \in (0,1]\) such that
\(\mu(\tau \geq \ell) \leq C_\tau e^{-\alpha \ell^{\gamma}}\)
for \(\ell \geq 1\).
Then \({\mathbb{P}}(t \geq \ell) \leq C e^{ - A \ell^{\gamma}}\),
where the constants \(C > 0\) and \(A \in (0, \alpha)\)
depend continuously on \(R\), \(\xi\), \(C_\tau\), \(\alpha\)
and \(\gamma\).
\end{proposition}
\begin{proof}
Let \(k \leq n\), and \(a = a_1 \cdots a_n \in {\mathcal{A}}_n\).
By \eqref{eq:lgg},
\[
{\mathbb{P}}_n(\tau(a_k) \geq \ell \mid a_1, \ldots, a_{k-1})
\leq e^R \mu(\tau \geq \ell)
\leq C_\tau e^R e^{-\alpha \ell^\gamma}
.
\]
By Proposition~\ref{prop:aexp},
\[
{\mathbb{P}}_n(t \geq \ell)
\leq (1+A C_1)^n e^{-A \ell^\gamma}
\]
for all \(A \in (0, \alpha/2)\).
Taking \(A\) small enough, we obtain
\[
{\mathbb{P}}(t \geq \ell)
= \sum_{n=1}^{\infty} {\mathbb{P}}({\mathcal{A}}_n) {\mathbb{P}}_n(t \geq \ell)
\leq \xi e^{-A \ell^\gamma} \sum_{n=1}^{\infty} (1-\xi)^n (1+A C_1)^n
= C e^{-A \ell^\gamma}
\]
with \(C < \infty\).
\end{proof}
\subsection{Coupling}
Recall that \(s \colon M \times M \to {\mathbb{N}}_0 \cup \{\infty\}\) is defined by
\[
s(x,y) =
\inf \bigl\{\max\{k, n\} \colon k, n \geq 0, \, T^k x = T^n y \bigr\}.
\]
\begin{lemma}
\label{lemma:abb}
Let \(n \geq 0\) and \(\rho\) be a probability measure on
\(M\) such that \(T_*^n \rho\) is regular.
Then there exists a measure \({\hat{\rho}}\) on \(M \times M\)
with marginals \(\rho\) and \(\mu\) on the first and second
coordinates respectively, such that \(s(x,y) < \infty\)
for \({\hat{\rho}}\)-almost every \((x, y) \in M \times M\).
If there exist \(C_\tau > 0\) and \(\beta > 1\) such that
\(\mu(\tau \geq \ell) \leq C_\tau \ell^{-\beta}\) for \(\ell \geq 1\),
then
\(
{\hat{\rho}} (s \geq \ell) \leq C (\ell-n)^{-\beta}
\)
for \(\ell \geq n+1\) and some constant \(C>0\).
If there exist constants \(C_\tau > 0\), \(\alpha > 0\) and
\(\gamma \in (0,1]\) such that
\(\mu(\tau \geq \ell) \leq C_\tau e^{-\alpha \ell^{\gamma}}\)
for \(\ell \geq 1\), then
\(
{\hat{\rho}} (s \geq \ell) \leq C e^{-A (\ell-n)^\gamma}
\)
for \(\ell \geq n + 1\) and some constants \(A \in (0, \alpha)\)
and \(C > 0\).
In both cases above, the constants \(C\) and \(A\) depend continuously
(only) on \(R\), \(\xi\), \(C_\tau\), \(\beta\),
\(\alpha\) and \(\gamma\).
\end{lemma}
\begin{proof}
Lemma~\ref{lemma:b87e} provides us with the
decomposition
\(\rho = \sum_{a \in {\mathcal{A}}} {\mathbb{P}}(a) \rho_a\)
such that
\(T_*^{n+t(a)} \rho_a = \mu\) for every \(a\).
For \(k \geq 0\) define \(U_k \colon M \to M \times M\),
\(U_k(x) = (x, T^k x)\).
Define
\[
{\hat{\rho}} = \sum_{a \in {\mathcal{A}}} {\mathbb{P}}(a) \, (U_{n+t(a)})_* \rho_a.
\]
It is clear that the marginals of \((U_{n+t(a)})_* \rho_a\)
on the first and second components are \(\rho_a\) and \(\mu\)
respectively. Therefore the marginals of \({\hat{\rho}}\) are
\(\rho\) and \(\mu\).
Observe that \(s(x,y) \leq n+ t(a)\) for
\((U_{n+t(a)})_* \rho_a\)-almost every \((x,y) \in M \times M\).
Thus \(s < \infty\) for \({\hat{\rho}}\)-almost every \((x,y) \in M \times M\).
It remains to estimate \({\hat{\rho}}(s \geq \ell)\). Note that
\({\hat{\rho}}(s \geq \ell) \leq {\mathbb{P}} (t \geq \ell - n)\).
The results follow directly from
Propositions~\ref{prop:fwqjq} and~\ref{prop:afsame}.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:yeop}}
\label{proof:thm:yeop}
By Lemma~\ref{lemma:abb}, for every \(z \in E\) there exists
a probability measure \({\hat{\rho}}_z\) on \(M \times M\) with marginals
\(\rho_z\) and \(\mu\) respectively such that \(s < \infty\)
almost surely.
\begin{remark}
\label{rmk:mf}
In Proposition~\ref{prop:vaoe} and Lemma~\ref{lemma:b87e},
we construct the measures \(\rho_a\), \(a \in {\mathcal{A}}\)
(as in Lemma~\ref{lemma:b87e}) by explicit formulas,
and it is a straightforward verification that,
as long as \(\rho_z\) is a measurable family, so are the respective
\(\rho_{z,a}\) for each \(a \in {\mathcal{A}}\). Further, \({\hat{\rho}}_z\) are explicitly
constructed from \(\rho_{z,a}\) in Lemma~\ref{lemma:abb}, so the
family \({\hat{\rho}}_z\) is measurable.
\end{remark}
Define
\(
{\hat{\rho}} = \int_E {\hat{\rho}}_z \, d\varkappa(z)
\).
Then the marginals of \({\hat{\rho}}\) are
\(\rho\) and \(\mu\) respectively, and \(s < \infty\)
almost surely with respect to \({\hat{\rho}}\).
It remains to estimate the tails \({\hat{\rho}}(s \geq n)\).
We prove the weak polynomial case, the others are similar.
Using Lemma~\ref{lemma:abb}, write
\begin{align*}
{\hat{\rho}}(s \geq n)
&= \int_E {\hat{\rho}}_z (s \geq n) \, d\varkappa(z)
\ll \int_E \min \{1, (n - r(z))^{-\beta}\} \, d\varkappa(z)
\\ & \leq \varkappa(r \geq n/2) + \int_E (n/2)^{-\beta} \, d\varkappa(z)
\ll n^{-\beta}
.
\end{align*}
\subsection{Proof of Theorem~\ref{thm:yeoc}}
Assume without loss of generality that \(|v|_\infty \leq 1/2\).
Let \(U_n = (v_n, \mu)\). It follows from Theorem~\ref{thm:yeop}
that the processes \(\{X_n, n \geq 0\}\) and \(\{U_n, n \geq 0\}\)
can be redefined on the probability space
\((M \times M, {\hat{\rho}}_{XU})\) where \(s<\infty\) \({\hat{\rho}}_{XU}\)-almost surely.
By Remark~\ref{rmk:aaggg}, \(Z_{XU} = \sup_{n}|X_n - U_n| \leq s\),
thus \(Z_{XU}\) is also finite \({\hat{\rho}}_{XU}\)-almost surely.
Similarly, \(\{Y_n, n \geq 0\}\) and \(\{U_n, n \geq 0\}\)
can be redefined on \((M \times M, {\hat{\rho}}_{YU})\)
with \({\hat{\rho}}_{YU}\)-almost surely finite \(Z_{YU} = \sup_{n}|Y_n - U_n|\).
By Lemma~\ref{lemma:joc}, all three processes
\(\{X_n, n \geq 0\}\), \(\{Y_n, n \geq 0\}\) and
\(\{U_n, n \geq 0\}\) can be redefined on the same probability space
\((\Omega, {\mathbb{P}})\) so that the joint distributions of pairs
\(\{(X_n, U_n), n \geq 0\}\) and \(\{(Y_n, U_n), n \geq 0\}\) are as above.
Further we work on this probability space.
Observe that \(Z = \sup_{n} |X_n -Y_n| \leq Z_{XU} + Z_{YU}\).
It follows that \(Z\) is almost surely finite.
It remains to estimate \({\mathbb{P}}(Z \geq x)\) for \(x \geq 0\). The bounds
follow transparently from Theorem~\ref{thm:yeop} and the relation
\begin{align*}
{\mathbb{P}}(Z \geq x)
& \leq {\mathbb{P}}(Z_{XU} \geq x/2) + {\mathbb{P}}(Z_{YU} \geq x/2)
\\ & \leq {\hat{\rho}}_{XU}(s \geq x/2) + {\hat{\rho}}_{YU}(s \geq x/2)
.
\end{align*}
\section{Introduction}
The classical theory of total positivity concerns matrices in
which all minors are non-negative. While this theory was pioneered
by Gantmacher, Krein, and Schoenberg in the 1930's, the past decade
has seen a flurry of research in this area initiated by
Lusztig \cite{Lusztig1, Lusztig2, Lusztig3}.
Motivated by surprising connections
he discovered between his theory of canonical bases for quantum
groups and the theory of total positivity, Lusztig extended
this subject by introducing the totally non-negative
variety $\mathbb{G}_{\geq 0}$ in an arbitrary reductive group $\mathbb{G}$ and the
totally non-negative part $(\mathbb{G}/P)_{\geq 0}$ of a real flag variety
$\mathbb{G}/P$.
Recently Postnikov \cite{Postnikov} investigated the
combinatorics of the totally non-negative part of a
Grassmannian $(Gr_{k,n})_{\geq 0}$: he established a relationship
between $(Gr_{k,n})_{\geq 0}$ and certain planar bicolored graphs,
producing a combinatorially explicit cell decomposition
of $(Gr_{k,n})_{\geq 0}$. To each such graph $G$ he constructed
a parameterization $\mathrm{Meas}_G$ of a corresponding cell of
$(Gr_{k,n})_{\geq 0}$ by $(\mathbb{R}_{> 0})^{\# \mathrm{Faces}(G)-1}$.
In fact, this cell decomposition
is a special case of a cell
decomposition of $(\mathbb{G}/P)_{\geq 0}$
which was conjectured by
Lusztig and proved by Rietsch \cite{Rietsch1}, although that
cell decomposition was described in quite different terms.
Other combinatorial aspects of $(Gr_{k,n})_{\geq 0}$, and more
generally of $(\mathbb{G}/P)_{\geq 0}$, were investigated
by Marsh and Rietsch \cite{MR}, Rietsch \cite{Rietsch2},
and the third author \cite{Williams1, Williams2}.
It is known that $(\mathbb{G}/P)_{\geq 0}$ is contractible \cite{Lusztig1}
and it is conjectured that $(\mathbb{G}/P)_{\geq 0}$ with
its cell decomposition is a regular CW complex homeomorphic to a ball. In \cite{Williams2},
the third author proved the combinatorial analogue of this conjecture,
proving that the partially ordered set (poset) of cells of
$(\mathbb{G}/P)_{\geq 0}$ is in fact the poset of cells of a regular CW complex homeomorphic to a ball.
In this paper we give an approach to this conjecture which uses toric geometry
to extend $\mathrm{Meas}_G$ to a
map onto the closure of the corresponding cell of $(Gr_{k,n})_{\geq 0}$.
Specifically, given a {\it plane-bipartite} graph $G$, we construct a toric variety $X_G$ and a rational map $m_G : X_G \to Gr_{k,n}$. We show that $m_G$ is well-defined on the totally non-negative part of $X_G$ and that its image is the closure of the corresponding cell of $(Gr_{k,n})_{\geq 0}$.
The totally non-negative part of $X_G$ is homeomorphic to
a certain polytope (the {\it moment polytope})
which we denote $P(G)$, so we can equally well think of this result as a parameterization of our cell by $P(G)$.
The restriction of $m_G$ to the toric interior of the non-negative part of $X_G$ (equivalently, to the interior of $P(G)$) is $\mathrm{Meas}_G$.
Our technology proves that the cell decomposition of the totally non-negative part of the Grassmannian
is in fact a CW complex. While our map $m_G$ is
well-defined on $(X_G)_{\geq 0}$ (which is a closed ball) and
is a homeomorphism on the interior, in general $m_G$ is not a homeomorphism on the boundary of
$(X_G)_{\geq 0}$; therefore this does not lead directly to a proof of the conjecture.
However, we do obtain more evidence that the conjecture is true: using Williams' result \cite{Williams2}
that the face poset of $(\mathbb{G}/P)_{\geq 0}$ is {\it Eulerian}, it follows that
the Euler characteristic of the closure of each cell of $(Gr_{k,n})_{\geq 0}$ is $1$.
The most elegant part of our story is how the combinatorics of the
plane-bipartite graph $G$ reflects
the structure of the
polytope $P(G)$ and hence the structure of $X_G$. See Table
\ref{PlabicTable} for some of these connections.
The torus fixed points of $X_G$ correspond to
{\it perfect orientations} of $G$,
equivalently, to {\it almost perfect matchings} of $G$.
The other faces of $X_G$ correspond to certain
\emph{elementary subgraphs} of $G$, that is, to unions of almost perfect matchings
of $G$. Every face of $X_{G}$ is of the form $X_{G'}$ for some plane-bipartite
graph $G'$ obtained by deleting some edges of $G$, and
$m_{G}$ restricted to $X_{G'}$ is $m_{G'}$.
It will follow from this that, for every face $Z$ of $X_G$, the interior of $Z$ is mapped to the
interior of a cell of the totally non-negative Grassmannian with fibers that
are simply affine spaces. We hope that this explicit description of the
topology of the parameterization will be useful in studying the topology
of $(Gr_{k,n})_{\geq 0}$.
\begin{table}[h]
\begin{tabular}{|p{7.5 cm}|p{4 cm}|}
\hline
{\bf Plane-Bipartite graph $G$} & {\bf Polytope $P(G)$} \\
\hline
$\# \mathrm{Faces}(G) - 1$ & Dimension of $P(G)$ \\
\hline
Perfect orientations / almost perfect matchings &
Vertices of $P(G)$ \\
\hline
Equivalence classes of edges & Facets of $P(G)$ \\
\hline
Lattice of elementary subgraphs & Lattice of faces of $P(G)$ \\
\hline
\end{tabular}
\vspace{.5cm}
\caption{How $G$ reflects $P(G)$}
\label{PlabicTable}
\end{table}
The structure of this paper is as follows. In Section~\ref{review} we review the combinatorics
of plane-bipartite graphs and perfect orientations. Next, in Section~\ref{Toric} we review toric varieties
and their non-negative parts, and prove a lemma which is key to our CW complex result.
We then, in Section~\ref{Match}, introduce the polytopes which will give rise to the toric varieties of interest to us.
Using these polytopes, in Section~\ref{MatroidPolytope} we make the connection between our polytopes $P(G)$ and
matroid polytopes and explain the relation of our results to problems arising in cluster algebras and tropical geometry.
In Section \ref{CWComplex} we use these polytopes to prove that
the cell decomposition of $(Gr_{k,n})_{\geq 0}$ is in fact a CW complex.
In Section \ref{faces} we analyze the combinatorics of our polytopes in greater detail, giving a combinatorial
description of the face lattice of $P(G)$ in terms of matchings and unions of matchings of $G$.
Finally, in Section \ref{Numerology},
we calculate $f$-vectors, Ehrhart series, volumes, and the degrees of the
corresponding toric varieties for a few small plane-bipartite graphs.
\textsc{Acknowledgements:} We are grateful to Vic Reiner for pointing
out the similarity between our polytopes $P(G)$ and Birkhoff polytopes, and to
Allen Knutson for many helpful conversations.
\section{The totally non-negative Grassmannian and plane-bipartite graphs}
\label{review}
In this section we review some material from \cite{Postnikov}.
We have slightly modified the notation from \cite{Postnikov}
to make it more convenient for the present paper.
Recall that the (real) Grassmannian $Gr_{k,n}$ is the space of all
$k$-dimensional subspaces of $\mathbb{R}^n$, for $0\leq k\leq n$. An element of
$Gr_{k,n}$ can be viewed as a full-rank $k\times n$ matrix modulo left
multiplications by nonsingular $k\times k$ matrices. In other words, two
$k\times n$ matrices represent the same point in $Gr_{k,n}$ if and only if they
can be obtained from each other by row operations.
Let $\binom{[n]}{k}$ be the set of all $k$-element subsets of $[n]:=\{1,\dots,n\}$.
For $I\in \binom{[n]}{k}$, let $\Delta_I(A)$
denote the maximal minor of a $k\times n$ matrix $A$ located in the column set $I$.
The map $A\mapsto (\Delta_I(A))$, where $I$ ranges over $\binom{[n]}{k}$,
induces the {\it Pl\"ucker embedding\/} $Gr_{k,n}\hookrightarrow \mathbb{RP}^{\binom{n}{k}-1}$.
\begin{definition} {\rm \cite[Section~3]{Postnikov}} \
The {\it totally non-negative Grassmannian} $(Gr_{k,n})_{\geq 0}$
is the subset of the real Grassmannian $Gr_{k,n}$
that can be represented by $k\times n$ matrices $A$
with all maximal minors
$\Delta_I(A)$ non-negative.
For $\mathcal{M}\subseteq \binom{[n]}{k}$,
the {\it positive Grassmann cell\/} $C_\mathcal{M}$ is
the subset of the elements in $(Gr_{k,n})_{\geq 0}$ represented by all $k\times n$ matrices $A$ with
the prescribed collection of maximal minors strictly positive $\Delta_I(A)>0$,
for $I\in \mathcal{M}$, and the remaining
maximal minors equal to zero $\Delta_J(A)=0$, for $J\not\in \mathcal{M}$.
A subset $\mathcal{M}\subseteq \binom{[n]}{k}$ such that $C_\mathcal{M}$ is nonempty
satisfies the base axioms of a matroid. These special matroids are called
{\it positroids.}
\end{definition}
Clearly $(Gr_{k,n})_{\geq 0}$ is a disjoint union of the positive Grassmann cells $C_\mathcal{M}$.
It was shown in \cite{Postnikov} that each of these cells $C_\mathcal{M}$ is really a cell,
that is, it is homeomorphic to an open ball of appropriate dimension $d$.
Moreover, one can explicitly construct a parametrization $\mathbb{R}_{>0}^d \buildrel\sim\over\to C_\mathcal{M}$
using certain planar graphs, as follows.
\begin{definition}
A {\it plane-bipartite graph\/} is an undirected graph $G$ drawn inside a disk
(considered modulo homotopy)
with $n$ {\it boundary vertices\/} on the boundary of the disk,
labeled $b_1,\dots,b_n$ in clockwise order, as well as some
colored {\it internal vertices\/}.
These internal vertices
are strictly inside the disk and are
colored in black and white such that:
\begin{enumerate}
\item Each edge in $G$ joins two vertices of different colors.
\item Each boundary vertex $b_i$ in $G$ is incident to a single edge.
\end{enumerate}
A {\it perfect orientation\/} $\mathcal{O}$ of a plane-bipartite graph $G$ is a
choice of directions of its edges such that each
black internal vertex $u$ is incident to exactly one edge
directed away from $u$; and each white internal vertex $v$ is incident
to exactly one edge directed towards $v$.
A plane-bipartite graph is called {\it perfectly orientable\/} if it has a perfect orientation.
Let $G_\mathcal{O}$ denote the directed graph associated with a perfect orientation $\mathcal{O}$ of $G$. The {\it source set\/} $I_\mathcal{O} \subset [n]$ of a perfect orientation $\mathcal{O}$ is the set of $i$ for which $b_i$
is a source of the directed graph $G_\mathcal{O}$. Similarly, if $j \in \bar{I}_{\mathcal{O}} := [n] \setminus I_{\mathcal{O}}$, then $b_j$ is a sink of $\mathcal{O}$.
\end{definition}
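The defining local conditions are easy to test mechanically. The following Python sketch (our own illustration, with an ad hoc representation of edges as directed pairs) checks whether a given orientation is perfect:
\begin{verbatim}
# Check whether an orientation is perfect: each black internal vertex
# has exactly one outgoing edge, each white internal vertex exactly
# one incoming edge.  Edges are directed pairs (tail, head); boundary
# vertices simply do not appear in `color`.
def is_perfect(edges, color):
    out_deg = {v: 0 for v in color}
    in_deg = {v: 0 for v in color}
    for tail, head in edges:
        if tail in out_deg:
            out_deg[tail] += 1
        if head in in_deg:
            in_deg[head] += 1
    return all(out_deg[v] == 1 if color[v] == 'black' else in_deg[v] == 1
               for v in color)

# b1 -> u -> w -> b2 with u black and w white is a perfect orientation:
print(is_perfect([('b1','u'), ('u','w'), ('w','b2')],
                 {'u': 'black', 'w': 'white'}))   # -> True
\end{verbatim}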
All perfect orientations of a fixed
$G$ have source sets of the same size $k$ where
$k-(n-k) = \sum \mathrm{color}(v)\,(\deg(v)-2)$. Here the sum is over all internal vertices $v$, $\mathrm{color}(v) = 1$ for a black vertex $v$, and $\mathrm{color}(v) = -1$ for a white vertex;
see~\cite{Postnikov}. In this case we say that $G$ is of {\it type\/} $(k,n)$.
Let us associate a variable $x_e$ with each edge of $G$.
Pick a perfect orientation $\mathcal{O}$ of $G$. For $i\in I_\mathcal{O}$ and $j\in \bar I_\mathcal{O}$,
define the {\it boundary measurement\/} $M_{ij}$ as the following
power series in the $x_e^{\pm 1}$:
$$
M_{ij}:=\sum_{P} (-1)^{\mathrm{wind}(P)}\, x^P,
$$
where the sum is over all directed paths in $G_\mathcal{O}$ that start at the boundary vertex $b_i$
and end at the boundary vertex $b_j$. The Laurent monomial $x^P$ is given by $x^P:=\prod' x_{e'}/\prod'' x_{e''}$,
where the product $\prod'$ is over all edges $e'$ in $P$ directed
from a white vertex to a black vertex,
and the product $\prod''$ is over all edges $e''$ in $P$ directed
from a black vertex to a white vertex.
For any path $P$, let $\sigma_1$,$\sigma_2$, \ldots, $\sigma_r \in \mathbb{R}/2 \pi \mathbb{Z}$ be the directions of the edges of $P$ (in order). Let $Q$ be the path through $\mathbb{R}/2 \pi \mathbb{Z}$ which travels from $\sigma_1$ to $\sigma_2$ to $\sigma_3$ and so forth, traveling less then $\pi$ units of arc from each $\sigma_i$ to the next. The {\it winding index\/} $\mathrm{wind}(P)$ is the number of times $Q$ winds around the circle $\mathbb{R}/2 \pi \mathbb{Z}$, rounded to the nearest integer.
The index $\mathrm{wind}(P)$ is congruent to the number of self-intersections of the path $P$
modulo $2$.
\begin{remark}
Let us mention several differences in the notations given above
and the ones from \cite{Postnikov}.
The construction in \cite{Postnikov} was done for {\it plabic graphs,} which
are slightly more general than the plane-bipartite graphs defined above. Edges
in plabic graphs are allowed to join vertices of the same color.
One can easily transform a plabic graph into a plane-bipartite graph, without
much change in the construction, by contracting edges
which join vertices of the same color, or alternatively, by inserting vertices
of different color in the middle of such edges.
Another difference is that we inverted the edge variables
from \cite{Postnikov} for all edges directed from a black
vertex to a white vertex.
In \cite{Postnikov} the boundary measurements $M_{ij}$ were
defined for any planar directed graph drawn inside a disk.
It was shown that one can easily transform
any such graph into a plane-bipartite graph with a perfect
orientation of edges that has the same boundary measurements.
\end{remark}
Let $E(G)$ denote the edge set of a plane-bipartite graph $G$, and let $\mathbb{R}_{>0}^{E(G)}$ denote the set of
vectors $(x_e)_{e\in E(G)}$ with strictly positive real coordinates $x_e$.
\begin{lemma}
\label{lem:Mij_subtractive_free}
\cite[Lemma 4.3]{Postnikov}
The sum in each boundary measurement $M_{ij}$ evaluates to a subtraction-free rational expression
in the $x_e$. Thus it gives a well-defined positive function on $\mathbb{R}_{> 0}^{E(G)}$.
\end{lemma}
For example, suppose that $G$ has two boundary vertices, $1$ and $2$, and two internal vertices $u$ and $v$, with edges $a$, $b$, $c$ and $d$ connecting $1 \to u$, $u \to v$, $v \to u$ and $v \to 2$. Then $M_{12} = abd - abcbd + abcbcbd - \cdots = abd/(1+bc)$. The sum only converges when $|bc| < 1$ but, by interpreting it as a rational function, we see that it gives a well-defined value for any $4$-tuple $(a,b,c,d)$ of positive reals.
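This identity can be checked symbolically; the following sketch (assuming \texttt{sympy} is available) verifies both the one-loop recursion $M_{12} = abd - bc\,M_{12}$ and agreement with the partial sums of the alternating series:
\begin{verbatim}
import sympy as sp

a, b, c, d = sp.symbols('a b c d', positive=True)
M = a*b*d / (1 + b*c)

# the closed form satisfies the one-loop recursion M = abd - bc*M
print(sp.simplify(M - (a*b*d - b*c*M)))                 # -> 0

# and matches the first six terms of abd - abcbd + abcbcbd - ...
partial = sum((-1)**k * a*b*d * (b*c)**k for k in range(6))
print(sp.simplify(partial - M*(1 - (b*c)**6)))          # -> 0
\end{verbatim}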
If the graph $G_\mathcal{O}$ is acyclic then there are finitely many directed paths $P$,
and $\mathrm{wind}(P)=0$ for any $P$. In this case the $M_{ij}$ are clearly Laurent polynomials in the $x_e$
with positive integer coefficients, and the above lemma is trivial.
For a plane-bipartite graph $G$ of type $(k,n)$ and a perfect orientation $\mathcal{O}$ with the source set $I_\mathcal{O}$,
let us construct the $k\times n$ matrix $A=A(G,\mathcal{O})$ such that
\begin{enumerate}
\item The $k\times k$ submatrix of $A$ in the column set $I_\mathcal{O}$ is the identity matrix.
\item For any $i\in I_\mathcal{O}$ and $j\in \bar I_\mathcal{O}$, the minor $\Delta_{(I_\mathcal{O}\setminus \{i\})\cup \{j\}}(A)$ equals
$M_{ij}$.
\end{enumerate}
These conditions uniquely define the matrix $A$. Its entries outside the column set $I_\mathcal{O}$ are
$\pm M_{ij}$. The matrix $A$ represents an element of the Grassmannian $Gr_{k,n}$.
Thus, by Lemma~\ref{lem:Mij_subtractive_free},
it gives the well-defined {\it boundary measurement map\/}
$$
\mathrm{Meas}_G:\mathbb{R}_{>0}^{E(G)}\to Gr_{k,n}.
$$
Clearly, the matrix $A(G,\mathcal{O})$ described above will be different for different
perfect orientations $\mathcal{O}$ of $G$. However, all these different matrices
$A(G,\mathcal{O})$ represent the same point in the Grassmannian $Gr_{k,n}$.
Note that once we have constructed the matrix $A$, we can determine
which cell of $(Gr_{k,n})_{\geq 0}$ we are in by simply noting
which maximal minors are nonzero and which are zero.
\begin{proposition}
\label{prop:same}
\cite[Theorem 10.1]{Postnikov}
For a perfectly orientable plane-bipartite graph $G$, the boundary measurement map
$\mathrm{Meas}_G$ does not depend on a choice of perfect orientation of $G$.
\end{proposition}
If we multiply the edge variables $x_e$ for all edges incident to an internal
vertex $v$ by the same factor, then the boundary measurement $M_{ij}$ will not
change. Let $V(G)$ denote the set of internal vertices of $G$. Let
$\mathbb{R}_{>0}^{E(G)/V(G)}$ be the quotient of $\mathbb{R}_{>0}^{E(G)}$ modulo the action of
$\mathbb{R}_{>0}^{V(G)}$ given by these rescalings of the $x_e$. If the graph $G$ does
not have isolated connected components without boundary vertices\footnote{Clearly, we
can remove all such isolated components without affecting the boundary measurements.}, then
$\mathbb{R}_{>0}^{E(G)/V(G)} \simeq \mathbb{R}_{>0}^{|E(G)|-|V(G)|}$.
The map $\mathrm{Meas}_G$ induces the map
$$
\widetilde{M}eas_G: \mathbb{R}_{>0}^{E(G)/V(G)} \to Gr_{k,n},
$$
which (slightly abusing the notation) we also call the boundary measurement map.
Talaska \cite{Talaska} has given an explicit combinatorial formula for the
maximal minors (also called Pl\"ucker coordinates) of such matrices $A=A(G,\mathcal{O})$.
To state her result, we need a few definitions.
A {\it conservative flow} in a perfect orientation $\mathcal{O}$ of $G$ is a (possibly empty)
collection of pairwise vertex-disjoint oriented cycles. (Each cycle is self-avoiding,
i.e. it is not allowed to pass through a vertex more than once.) For
$|J|=|I_{\mathcal{O}}|$, a {\it flow from $I_{\mathcal{O}}$ to $J$} is a collection of self-avoiding walks
and cycles, all pairwise vertex-disjoint, such that the sources of these walks are $I_{\mathcal{O}} \setminus (I_{\mathcal{O}} \cap J)$
and the destinations are $J \setminus (I_{\mathcal{O}} \cap J)$. So a conservative flow can also be described as a flow from $I_{\mathcal{O}}$ to $I_{\mathcal{O}}$. The {\it weight}
$\mathrm{weight}(F)$ of a flow $F$ is the product
of the weights of all its edges directed from the white to the black vertex,
divided by the product of all its edges directed from the black to the white
vertex.\footnote{Note that here we slightly differ from Talaska's convention in order
to be consistent with our previous convention in defining $M_{ij}$.}
A flow with no edges has weight $1$.
\begin{theorem} \cite[Theorem 1.1]{Talaska}\label{TalaskaTheorem}
\label{prop:noncrossing}
Fix a perfectly orientable $G$ and a perfect orientation $\mathcal{O}$.
The minor $\Delta_J(A)$ of $A=A(G,\mathcal{O})$, with columns in position $J$, is given by
$$
\Delta_J = \left(\sum_{F} \mathrm{weight}(F)\right)/\left(\sum_{F'} \mathrm{weight}(F')\right).
$$
Here the sum in the numerator is over flows $F$
from $I_{\mathcal{O}}$ to $J$ and the sum in the denominator is over all
conservative flows $F'$.
\end{theorem}
A point in the Grassmannian only depends on its Pl\"ucker coordinates up to multiplication by a common scalar.
For our purposes, it is best to clear the denominators in Theorem~\ref{TalaskaTheorem}, and give a purely (Laurent)
polynomial formula:
\begin{corollary} \label{TalaskaCor}
Using the notation of Theorem~\ref{TalaskaTheorem}, the point of $Gr_{k,n}$ corresponding to the row span of $A$ has Pl\"ucker coordinates
$$p_J := \left(\sum_{F} \mathrm{weight}(F)\right)$$
where the sum is over flows $F$ from $I_{\mathcal{O}}$ to $J$.
\end{corollary}
Theorem~\ref{prop:noncrossing} implies that the image of the boundary
measurement map $\widetilde{M}eas_G$ lies in the totally non-negative Grassmannian
$(Gr_{k,n})_{\geq 0}$. Moreover, the image is equal to a certain
positive cell in $(Gr_{k,n})_{\geq 0}$.
\begin{proposition}
\label{prop:plabic_cells}
\cite[Theorem~12.7]{Postnikov}
Let $G$ be any perfectly orientable plane-bipartite graph of type $(k,n)$.
Then the image of the boundary measurement map $\widetilde{M}eas_G$ is a certain positive Grassmann cell
$C_\mathcal{M}$ in $(Gr_{k,n})_{\geq 0}$.
For every cell $C_\mathcal{M}$ in $(Gr_{k,n})_{\geq 0}$, there is a perfectly orientable plane-bipartite graph $G$
such that $C_\mathcal{M}$ is the image of $\widetilde{M}eas_G$.
The map $\widetilde{M}eas_G$ is a fiber bundle with fiber an
$r$-dimensional affine space, for some non-negative $r$.
For any cell of $(Gr_{k,n})_{\geq 0}$, we can always choose a graph $G$ such that $\widetilde{M}eas_G$ is a homeomorphism onto this cell.
\end{proposition}
Let us say that a plane-bipartite graph $G$
is {\it reduced\/} if $\widetilde{M}eas_G$ is a homeomorphism,
and $G$ has no isolated connected components
nor internal vertices incident to a single edge;
see \cite{Postnikov}.
An {\it almost perfect matching} of a plane-bipartite graph $G$ is a
subset $M$ of edges such that each internal vertex is incident to
exactly one edge in $M$ (and the boundary vertices $b_i$ are incident
to either one or no edges in $M$).
There is a bijection between perfect orientations of $G$ and almost perfect
matchings of $G$ where, for a perfect orientation $\mathcal{O}$ of $G$, an edge $e$ is
included in the corresponding matching if $e$ is directed away
from a black vertex
or to a white vertex in $\mathcal{O}$.\footnote{Note that typically
$e$ is directed away from a black vertex if and only if it is
directed towards a white vertex. However, we have used the
word {\it or} to make the bijection well-defined when
boundary vertices are not colored.}
For a plane-bipartite graph $G$ and the corresponding cell
$C_\mathcal{M} = \mathrm{Image}(\mathrm{Meas}_G)$ in $(Gr_{k,n})_{\geq 0}$,
one can combinatorially construct the matroid $\mathcal{M}$
from the graph $G$, as follows.
\begin{proposition}
\cite[Proposition~11.7, Lemma~11.10]{Postnikov}
A subset $I\in \binom{[n]}{k}$ is a base of the matroid $\mathcal{M}$ if and only if
there exists a perfect orientation $\mathcal{O}$ of $G$ such that $I=I_\mathcal{O}$.
Equivalently, assuming that all boundary vertices $b_i$ in $G$ are black,
$I$ is a base of $\mathcal{M}$ if and only if there exists an almost perfect
matching $M$ of $G$ such that
$$
I = \{i\mid b_i \text{ belongs to an edge from $M$}\}.
$$
\end{proposition}
\section{Toric varieties and their non-negative parts}
\label{Toric}
We may define a (generalized) projective toric variety
as follows \cite{Cox, Sottile}.
Let $S=\{\mathbf{m}_i \ \vert \ i=1, \dots, \ell\}$ be any finite subset
of $\mathbb{Z}^n$, where $\mathbb{Z}^n$ can be thought of as the character group
of the torus $(\mathbb{C}^*)^n$.
Here
$\mathbf{m}_i=(m_{i1}, m_{i2},\dots ,m_{in})$.
Then consider
the map $\phi: (\mathbb{C}^*)^n \to \mathbb{P}^{\ell-1}$ such that
$\mathbf{x}=(x_1, \dots , x_n) \mapsto [\mathbf{x^{m_1}}, \dots , \mathbf{x^{m_\ell}}]$.
Here $\mathbf{x^{m_i}}$ denotes $x_1^{m_{i1}} x_2^{m_{i2}} \dots x_n^{m_{in}}$.
We then define the toric variety $X_S$
to be the Zariski closure of the image of this map. We write $\tilde{\phi}$ for the inclusion of $X_S$ into $\mathbb{P}^{\ell-1}$.
The {\it real part} $X_S(\mathbb{R})$ of $X_S$ is defined to be the
intersection of $X_S$ with $\mathbb{R}\mathbb{P}^{\ell-1}$; the
{\it positive part} $X_S^{>0}$ is defined to be the image of
$(\mathbb{R}_{>0})^n$ under $\phi$; and the {\it non-negative part}
$X_S^{\geq 0}$ is defined to be the closure of
$X_S^{>0}$ in $X_S(\mathbb{R})$. We note for future reference that $X_S$, $X_S(\mathbb{R})$ and $X_S^{\geq 0}$ are unaltered by translating the set $S$ by any integer vector.
Note that $X_S$ is not necessarily a toric variety in the sense of \cite{Fulton}, as
it may not be normal;
however, its normalization is a toric variety in that sense. See \cite{Cox} for more details.
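For concreteness, here is a minimal numerical sketch of the monomial map $\phi$ (our own illustration; the exponent set $S$ below is chosen arbitrarily):
\begin{verbatim}
import numpy as np

# Evaluate phi_S : (C^*)^n -> P^{l-1} at a positive real point;
# S is a list of integer exponent vectors m_i.
def phi(S, x):
    x = np.asarray(x, dtype=float)
    return np.array([np.prod(x ** np.asarray(m)) for m in S])

S = [(0, 0), (1, 0), (0, 1), (1, 1)]   # unit square as exponent set
print(phi(S, (2.0, 3.0)))              # homogeneous coords [1. 2. 3. 6.]
\end{verbatim}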
Let $P$ be the convex hull of $S$.
There is a homeomorphism from $X_S^{\geq 0}$ to $P$, known as the moment map.
(See \cite[Section 4.2, page 81]{Fulton}
and \cite[Theorem 8.4]{Sottile}).
In particular,
$X_S^{\geq 0}$ is homeomorphic to a closed ball.
We now prove a simple but very important lemma.
\begin{lemma}\label{important}
Suppose we have a map $\Phi: (\mathbb{R}_{>0})^n \to \mathbb{P}^{N-1}$ given by
\begin{equation*}
(t_1, \dots , t_n) \mapsto [h_1(t_1,\dots,t_n), \dots , h_N(t_1,\dots,t_n)],
\end{equation*}
where the $h_i$'s are Laurent polynomials with positive coefficients. Let $S$ be the
set of all exponent vectors in $\mathbb{Z}^n$ which occur among the (Laurent) monomials
of the $h_i$'s, and let $P$ be the convex hull of the points of $S$.
Then the map $\Phi$ factors through the totally positive part
$(X_P)_{>0}$, giving a map
$\tau_{>0}: (X_P)_{>0} \to \mathbb{P}^{N-1}$. Moreover $\tau_{>0}$ extends continuously to the
closure to give a well-defined map
$\tau_{\ge 0}:(X_P)_{\ge 0} \to \overline{\tau_{>0}((X_{P})_{>0})}$.
\end{lemma}
\begin{proof}
Let $S = \{\mathbf{m_1},\dots,\mathbf{m_{\ell}}\}$.
Clearly the map $\Phi$ factors
as the composite map $t=(t_1,\dots,t_n) \mapsto
[\mathbf{t^{m_1}}, \dots , \mathbf{t^{m_\ell}}] \mapsto [h_1(t_1,\dots,t_n),\dots,
h_N(t_1,\dots,t_n)]$,
and the image of $(\mathbb{R}_{>0})^n$ under the first map is precisely
$(X_P)_{>0}$.
The second map, which we will call $\tau_{>0}$,
takes a point $[x_1,\dots, x_{\ell}]$ of $(X_P)_{>0}$ to
$[g_1(x_1,\dots,x_{\ell}), \dots, g_N(x_1,\dots,x_{\ell})]$,
where the $g_i$'s are homogeneous polynomials of degree $1$ with positive coefficients.
By construction, each $x_i$ occurs in at least one of the $g_i$'s.
Since $(X_P)_{\geq 0}$ is the closure inside $X_P$ of $(X_P)_{>0}$,
any point $[x_1,\dots,x_{\ell}]$ of $(X_P)_{\geq 0}$ has all $x_i$'s non-negative;
furthermore, not all of the $x_i$'s are equal to $0$. And now since the $g_i$'s
have positive coefficients and they involve {\it all} of the $x_i$'s, the image of
any point $[x_1,\dots,x_{\ell}]$ of $(X_P)_{\geq 0}$ under $\tau_{>0}$ is well-defined.
Therefore $\tau_{>0}$ extends continuously to the closure to give a well-defined map
$\tau_{\ge 0}:(X_P)_{\ge 0} \to \overline{\tau_{>0}((X_{P})_{>0})}$.
\end{proof}
In Section \ref{CWComplex}
we will use this lemma to prove that $(Gr_{k,n})_{\geq 0}$
is a
CW complex.
\section{Matching polytopes for plane-bipartite graphs} \label{Match}
In this section we will define a family of polytopes $P(G)$ associated to
plane-bipartite graphs $G$.
\begin{definition}
Given an almost perfect
matching of a plane-bipartite graph
$G$, we associate to it the 0-1 vector in $\mathbb{R}^{E(G)}$
where the coordinates associated to edges in the matching are $1$ and all
other coordinates are $0$.
We define $P(G)$ to be the convex hull of these $0$-$1$ vectors.
\end{definition}
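Since the vertices of $P(G)$ come from almost perfect matchings, for small graphs they can be enumerated by brute force. The sketch below (our own illustration; exponential in the number of edges, so only suited to toy examples) lists the $0$-$1$ vertex vectors:
\begin{verbatim}
from itertools import combinations

def almost_perfect_matchings(edges, internal, boundary):
    # subsets of edges covering each internal vertex exactly once
    # and each boundary vertex at most once
    found = []
    for r in range(len(edges) + 1):
        for M in combinations(edges, r):
            cover = {}
            for e in M:
                for v in e:
                    cover[v] = cover.get(v, 0) + 1
            if all(cover.get(v, 0) == 1 for v in internal) and \
               all(cover.get(v, 0) <= 1 for v in boundary):
                found.append(set(M))
    return found

def vertices(edges, internal, boundary):
    return [[1 if e in M else 0 for e in edges]
            for M in almost_perfect_matchings(edges, internal, boundary)]

E = [('b1','u'), ('u','w'), ('w','b2')]
print(vertices(E, internal={'u','w'}, boundary={'b1','b2'}))
# -> [[0, 1, 0], [1, 0, 1]]
\end{verbatim}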
\begin{remark}
Note that more generally, we could define $P(G)$ for any
graph $G$ with a distinguished subset of ``boundary" vertices.
Many of our forthcoming results about $P(G)$ for plane-bipartite
graphs $G$ should be extendable to this generality.
\end{remark}
Because all of the $0$-$1$ vectors above have the property that
$\sum_{e \ni v} x_e =1$ for all internal vertices $v$ of $V(G)$, the polytope
$P(G)$ lies in the subspace of
$\mathbb{R}^{E(G)}$ defined by $\{\sum_{e \ni v} x_e =1 \ \vert \ v\in V(G) \}$.
We will now see how one can arrive at these polytopes in another way.
Recall that for each $G$ we have the boundary measurement map
$\widetilde{M}eas_G: \mathbb{R}_{>0}^{E(G)/V(G)} \to Gr_{k,n}$. Embedding the image
into projective space via the Pl\"ucker embedding, we have
an explicit formula for the coordinates given by Talaska (Corollary
\ref{TalaskaCor}).
In the following definition, we use the notation of Theorem
\ref{TalaskaTheorem}.
\begin{definition}\label{polytopedef}
Fix a perfect orientation $\mathcal{O}$ of $G$.
We define $P(G,\mathcal{O})$ to be the convex hull
of the exponent vectors of the weights of all flows starting
at $I_{\mathcal{O}}$. A priori this polytope lies
in $\mathbb{R}^{E(G)}$, but we will see that
$P(G,\mathcal{O})$ lies in a subspace of
$\mathbb{R}^{E(G)}$.
\end{definition}
\begin{remark}\label{usefulforCW}
Note that what we are doing in Definition \ref{polytopedef}
is taking the convex hull of all exponent vectors
which occur in the $p_J(A)$ from
Corollary~\ref{TalaskaCor}, as $J$ ranges over all subsets of columns
of size $|I_{\mathcal{O}}|$.
\end{remark}
We now relate $P(G)$ and $P(G, \mathcal{O})$. We continue to use the notion of flows introduced in shortly before Theorem~\ref{TalaskaTheorem}.
\begin{lemma}\label{differ}
Fix a plane-bipartite graph $G$ and a perfect orientation $\mathcal{O}_1$.
If we choose a flow in $\mathcal{O}_1$ and switch the direction of all edges
in this flow, we obtain another perfect orientation. Conversely,
one can obtain any perfect orientation $\mathcal{O}_2$ of $G$
from $\mathcal{O}_1$ by switching all directions
of edges in a flow
in $\mathcal{O}_1$.
\end{lemma}
\begin{proof}
The first claim is simple: a perfect orientation is one in which
each black vertex has a unique outcoming edge and each white vertex
has a unique incoming edge. If we switch the orientation of all edges
along one of the paths or cycles in the flow, clearly this property will
be preserved.
To see the converse, let $E'$ denote the set of edges of $G$ in which
the orientations $\mathcal{O}_1$ and $\mathcal{O}_2$ disagree.
It follows from the definition of perfect orientation that
every edge $e$ in $E'$ incident
to some vertex $v$ can be paired uniquely with another edge $e'$ in $E'$
which is also incident to $v$ (note that at each vertex $v$ of $G$ there
are either $0$ or $2$ incident edges which are in $E'$). This pairing
induces a decomposition of $E'$ into a union of
vertex-disjoint
(undirected)
cycles and paths. Moreover, each such cycle or
path is directed in both $\mathcal{O}_1$ and $\mathcal{O}_2$ (but of course in
opposite directions).
This set of cycles and paths is the relevant flow.
\end{proof}
Because of the bijection between perfect orientations and almost perfect matchings
(see Section \ref{review}), Lemma \ref{differ} implies the following.
\begin{corollary}
Fix $G$ and a perfect orientation $\mathcal{O}$. Flows in $\mathcal{O}$ are in bijection with
perfect orientations of $G$ (obtained by reversing all edges of the flow in $\mathcal{O}$)
which are in bijection with almost perfect matchings
of $G$.
\end{corollary}
We can now see the following.
\begin{corollary}
For any perfect orientation $\mathcal{O}$, the polytope $P(G,\mathcal{O})$ is a translation of $P(G)$
by an integer vector.
\end{corollary}
\begin{proof}
Let $F$ denote the empty flow on $\mathcal{O}$,
$F'$ be some other flow in $\mathcal{O}$, and $\mathcal{O}'$ the perfect orientation obtained from $\mathcal{O}$
by reversing the directions of all edges in $F'$. Let $M$ and $M'$ be the almost
perfect matchings associated to $\mathcal{O}$ and $\mathcal{O}'$. Let $x(F)$, $x(F')$,
$x(M)$, and $x(M')$ be the vectors in $\mathbb{R}^{E(G)}$ associated to this flow
and these perfect orientations. Of course $x(F)$ is the all-zero vector.
We claim that $x(M')-x(M)=x(F)-x(F')$.
Fix an edge $e$ of $G$: we will check that the $e$-coordinates of $x(M')-x(M)$ and $x(F)-x(F')$
are equal. First, suppose that $e$ does not occur in $F'$. Then either $e$ appears in both $M$ and $M'$, or in neither.
So $x(F)_e=x(F')_e=0$ and either $x(M)_e=x(M')_e=0$ or $x(M)_e=x(M')_e=1$.
Now, suppose that $e$ occurs in $F'$, and is oriented from its white to its black endpoint in $\mathcal{O}$.
So $x(F)_e=0$ and $x(F')_e=1$.
The edge $e$ occurs in the matching $M'$ and not in the matching $M$, so $x(M)_e=0$ and $x(M')_e=1$.
Finally, suppose $e$ occurs in $F'$, and is oriented from its black to its white endpoint in $\mathcal{O}$.
Then $x(F)_e=0$ and $x(F')_e=-1$.
The edge $e$ occurs in the matching $M$ and not in the matching $M'$, so $x(M)_e=1$ and $x(M')_e=0$.
\end{proof}
In particular, up to translation, $P(G,\mathcal{O})$ does not depend on $\mathcal{O}$. Recall that translating a polytope does not affect the corresponding toric variety.
In Figure \ref{P2}, we fix a plane-bipartite graph $G$ corresponding to the cell
of $(Gr_{2,4})_{\geq 0}$ such that the Pl\"ucker coordinates
$P_{12}, P_{13}, P_{14}$ are positive and all others are $0$.
We display the three perfect orientations
and the vertices of $P(G)$.
\begin{figure}[h]
\centering
\includegraphics[height=1.5in]{P2.eps}
\caption{}
\label{P2}
\end{figure}
In Figure \ref{P1P1}, we fix a plane-bipartite graph $G$ corresponding to the cell
of $(Gr_{2,4})_{\geq 0}$ such that the Pl\"ucker coordinates
$P_{12}, P_{13}, P_{24}, P_{34}$ are positive while
$P_{14}$ and $P_{23}$ are $0$.
We display the four perfect orientations
and the vertices of $P(G)$.
\begin{figure}[h]
\centering
\includegraphics[height=1.5in]{P1P1.eps}
\caption{}
\label{P1P1}
\end{figure}
In Figure \ref{Graph} we have
fixed a plane-bipartite graph $G$ corresponding to the
top-dimensional cell of $(Gr_{2,4})_{\geq 0}$.
$G$ has seven perfect orientations.
We have
drawn the edge graph of the
four-dimensional polytope $P(G)$.
This time
we have depicted the vertices of $P(G)$ using matchings instead of
perfect orientations. Next to each matching, we have also
listed the source set of the corresponding perfect orientation.
\begin{figure}[h]
\centering
\includegraphics[height=3.2in]{G24graph.eps}
\caption{}
\label{Graph}
\end{figure}
\section{Connections with matroid polytopes and cluster algebras}\label{MatroidPolytope}
Every perfectly orientable plane-bipartite graph
encodes a realizable {\it positroid}, that is, an oriented matroid in which
all orientations are positive. The bases of the positroid associated to a plane-bipartite
graph $G$ of type $(k,n)$ are precisely the $k$-element subsets $I \subset [n]$ which
occur as source sets of perfect orientations of $G$. This is easy to see,
as each
perfect orientation of $G$ gives rise to a parametrization of the cell $\Delta_G$ of
$(Gr_{k,n})_{\geq 0}$ in which the Pl\"ucker coordinate corresponding to
the source set $I$ is $1$.
Furthermore, if one takes a
(directed) path in a perfect orientation $\mathcal{O}$ and switches the orientation of each of its edges, this encodes a basis exchange.
Given this close connection of perfectly orientable plane-bipartite graphs to positroids, it is
natural to ask whether there is a connection between our polytopes $P(G)$ and matroid
polytopes. We first recall the definition of a matroid polytope.
Let $M$ be a matroid of rank $k$ on the ground set $[n]$. The
{\it matroid polytope} $Q(M)$ is the convex hull of the vectors
$\{ e(J) \ \vert \ J \text{ is a basis of }M\}$ where $e(J)$ is the
$0-1$ vector in $\mathbb{R}^n$ whose $i$th coordinate is $1$ if $i \in J$ and
is $0$ otherwise \cite{GGMS}. The vertices are in one-to-one correspondence with
bases of $M$. This polytope lies in the hyperplane
$x_1 + \dots + x_n = 0$ and, if the matroid $M$ is connected, has dimension $n-1$.
\begin{proposition} \label{projection}
There is a linear projection $\Psi$ from $P(G)$ to
$Q(M_G)$.
The fibers of this projection over the vertices of $Q(M_G)$ are the
Newton polytopes for the Laurent polynomials which express the
Pl\"ucker coordinates on $X_G$ in terms of the edge variables.
\end{proposition}
\begin{proof}
If $G$ is a plane-bipartite graph of type $(k,n)$, one can associate
to each vertex $v_{M}$ of $P(G)$ the basis of the corresponding positroid
corresponding to the boundary edges which are matched in $G$. In terms of the
bijection between perfect matchings and perfect orientations, this is the source
set of the corresponding perfect orientation. This gives the linear projection
$\Psi$ from $P(G)$ to $Q(M_G)$. To see that the statement about the fibers
is true,
see Corollary~\ref{TalaskaCor}, and remember the relationship
between matchings and flows.
\end{proof}
The second and third authors, in
\cite{SpeyerWilliams}, related the Newton polytopes of Proposition \ref{projection}
to the
positive part of the tropical Grassmannian; our results in that
paper can be summarized by saying that the positive part of the
tropical Grassmannian is combinatorially isomorphic to the dual
fan of the fiber polytope of the map $P(G) \to Q(M_G)$.
\footnote{We worked with {\it face variables} rather than edge
variables in \cite{SpeyerWilliams}, but the two corresponding
realizations of $P(G)$ are linearly isomorphic.}
The fact that the Pl\"ucker coordinates on $X_G$ can all be expressed as Laurent polynomials in the edge weights is not simply a fortunate coincidence, but is a consequence\footnote{This consequence is not completely straightforward; one must express certain ratios of the edge weights as Laurent monomials in the variables of a certain cluster, and this involves a nontrivial ``chamber Ansatz''.} of the fact that the coordinate ring of $X_G$ has the structure of a cluster algebra. (See \cite{FZ} for the definition of cluster algebras, \cite{Scott} for the verification that the largest cell of the Grassmannian has the structure of a cluster algebra and \cite{Postnikov} for the fact that every $X_G$ has this structure.)
In general, if we had a better understanding of the Newton polytopes of Laurent polynomials arising from cluster algebras, we could resolve many of
the open questions in that theory.
\begin{example}
Consider the plane-bipartite graph $G$ from Figure \ref{Graph}.
This corresponds
to the positroid of rank two on the ground set $[4]$ such that
all subsets of size $2$ are independent. The edge graph of
the four-dimensional polytope $P(G)$ is shown in Figure \ref{Graph},
and each vertex is labeled with the basis it corresponds to.
The matroid polytope of this matroid is the
(three-dimensional) octahedron with
six vertices corresponding to the two-element subsets of $[4]$.
Under the map $\Psi$, each vertex of $P(G)$ corresponding to
the two-element subset $ij$ gets mapped to the vertex of the
octahedron whose $i$th and $j$th coordinates are $1$ (all
other coordinates being $0$).
\end{example}
\section{$(Gr_{k,n})_{\geq 0}$ is a CW complex}\label{CWComplex}
We now prove that the cell decomposition of $(Gr_{k,n})_{\geq 0}$
is a CW complex, and obtain as a corollary that the Euler characteristic
of the closure of each cell is $1$.
To review the terminology, a {\it cell complex} is
a decomposition of a space $X$ into a disjoint union
of {\it cells}, that is
open balls. A {\it CW complex} is a cell complex together
with the extra data of {\it attaching maps}.
More specifically, each cell in a CW complex is {\it attached} by
gluing a closed $i$-dimensional ball $D^i$ to the $(i-1)$-skeleton
$X_{i-1}$, i.e.\ the union of all lower dimensional cells.
The gluing is specified by a continuous function $f$ from
$\partial D^i = S^{i-1}$ to $X_{i-1}$. CW complexes are defined
inductively as follows: Given $X_0$ a discrete space
(a discrete union of $0$-cells), and inductively constructed subspaces
$X_i$ obtained from $X_{i-1}$ by attaching some collection of $i$-cells,
the resulting colimit space $X$ is called a {\it CW complex} provided
it is given the weak topology and every closed cell is covered by a finite
union of open cells.
Although we don't need this definition here, we note that a {\it regular}
CW complex is a CW complex such that the closure of each cell
is homeomorphic to a closed ball and the boundary of each cell is
homeomorphic to a sphere. It is not known if
the cell decomposition of $(Gr_{k,n})_{\geq 0}$ is regular, although
the results of \cite{Williams2} suggest that the answer is yes.
To prove our main result, we will also use the following lemma, which can be
found in \cite{Postnikov, Rietsch2}.
\begin{lemma}\cite[Theorem 18.3]{Postnikov}, \cite[Proposition 7.2]{Rietsch2}\label{Closure}
The closure of a cell $\Delta$ in $(Gr_{k,n})_{\geq 0}$ is the union of
$\Delta$ together with lower-dimensional cells.
\end{lemma}
\begin{theorem}
The cell decomposition of $(Gr_{k,n})_{\geq 0}$ is a finite CW complex.
\end{theorem}
\begin{proof}
All of these cell complexes contain only finitely many cells; therefore
the closure-finite condition in the definition of a CW complex
is automatically satisfied.
What we need to do is define the attaching maps for the cells:
we need to prove that for each $i$-dimensional cell
there is a continuous map
$f$ from $D^i$ to $X_{i}$ which maps
$\partial D^i = S^{i-1}$ to $X_{i-1}$ and extends
the parameterization of the cell (a map from the interior
of $D^i$ to $X_{i}$).
By Corollary~\ref{TalaskaCor}, if we are
given a perfectly orientable plane-bipartite graph $G$, the image of the parameterization
$\mathrm{Meas}_G$ of the cell $\Delta_G$ under the Pl\"ucker embedding can be described
as a map $(t_1, \dots , t_n) \mapsto [h_1(t_1,\dots,t_n), \dots , h_N(t_1,\dots,t_n)]$
to projective space,
where the $h_i$'s are Laurent polynomials with positive coefficients.
By Lemma \ref{important} and Remark \ref{usefulforCW},
the map $\mathrm{Meas}_G$ gives rise to a rational map
$m_G: X_{P(G)} \to Gr_{k,n}$ which is
well-defined on $(X_{P(G)})_{\geq 0}$ (a closed ball).
Furthermore, it is clear that
the image of $m_G$ on $(X_{P(G)})_{\geq 0}$ lies in $(Gr_{k,n})_{\geq 0}$.
Since the totally positive part of the toric variety $X_{P(G)}$ is dense in the
non-negative part, and the interior gets mapped to the cell
$\Delta_G$, it follows that $(X_{P(G)})_{\geq 0}$ gets mapped to the
closure of $\Delta_G$. Furthermore,
by construction, $(X_{P(G)})_{>0}$ maps homeomorphically to the
cell $\Delta_G$.
And now by Lemma \ref{Closure}, it follows
that the boundary of $(X_{P(G)})_{\geq 0}$ gets mapped to the
$(i-1)$-skeleton of $(Gr_{k,n})_{\geq 0}$.
This completes the proof that the cell decomposition of
$(Gr_{k,n})_{\geq 0}$ is a CW complex.
\end{proof}
It has been conjectured that the cell decomposition of
$(Gr_{k,n})_{\geq 0}$ is a regular CW complex which is homeomorphic to a ball.
In particular, if a CW complex is regular then it follows
that the Euler characteristic
of the closure of each cell is $1$.
In \cite{Williams2}, the third author proved that
the poset of cells of $(\mathbb{G}/P)_{\geq 0}$ is thin and lexicographically shellable,
hence in particular, {\it Eulerian}. In other words,
the {\it Mobius function} of
the poset of cells takes values $\mu(\hat{0},x) = (-1)^{\rho(x)}$
for any $x$ in the poset. As the Euler characteristic of
a finite CW complex is defined to be the number of even-dimensional
cells minus the number of odd-dimensional cells,
we obtain the following result.
\begin{corollary}
The Euler characteristic of the closure of each cell of
$(Gr_{k,n})_{\geq 0}$ is $1$.
\end{corollary}
\section{The face lattice of $P(G)$} \label{faces}
We now consider the lattice of faces of $P(G)$, and
give a description in terms of
unions of matchings of $G$. This description is very similar to the description
of the face lattice of the Birkhoff polytopes, as described by Billera and
Sarangarajan \cite{Billera}. In
fact our proofs are very similar to those in \cite{Billera}; we just need
to adapt the proofs of Billera and Sarangarajan
to the setting of plane-bipartite graphs.
We begin by giving an inequality description of the polytope $P(G)$.
\begin{proposition} \label{Inequalities}
For any plane bipartite graph $G$, the polytope $P(G)$ is
given by the following inequalities and equations:
$x_e \geq 0$ for all edges $e$, and
$\sum_{e \ni v} x_e=1$ for each internal vertex $v$.
If every edge of $G$ is used in some almost perfect matching,
then the affine linear space defined by the above equations
is the affine linear space spanned by $P(G)$.
\end{proposition}
\begin{proof}
Let $Q$ be the polytope defined by these inequalities. Clearly, $P(G)$ is contained in $Q$. Note that $Q$ lies in the cube $[0,1]^{E(G)}$ because if $e$ is any edge of $G$ and $v$ an endpoint of $e$ then everywhere on $Q$ we have $x_e = 1-\sum_{e' \ni v,\ e' \neq e} x_e \leq 1$.
Let $u$ be a vertex of $Q$. We want to show that $u$ is a $(0-1)$-vector.
Suppose for the sake of contradiction that $u$ is not a $(0-1)$-vector; let $H$ be the subgraph of
$G$ consisting of edges $e$ for which $0 < u_e < 1$. Note that, if $v$ is a vertex of $H$, then $v$ has degree
at least $2$ in $H$ since $\sum_{e \ni v} u_e=1$. Therefore, $H$ contains a cycle or a path from one boundary vertex of $G$ to another. We consider the case where $H$ contains a cycle; the other case is similar. Let $e_1$, $e_2$, \ldots, $e_{2r}$ be the edges of this cycle; the length of the cycle is even because $G$ is bipartite.
Define the vector $w$ by $w_{e_i}=(-1)^{i}$ and $w_e=0$ if $e \not \in \{ e_1, e_2, \ldots, e_{2r} \}$. Let $\epsilon=\min_{i} (\min(u_{e_i}, 1-u_{e_i}))$. Then $u+\epsilon w$ and $u-\epsilon w$ are both in $Q$, contradicting that $u$ was assumed to be a vertex of $Q$.
Now, assume that every edge of $G$ is used in some almost perfect matching. Then $P(G)$ meets the interior of the orthant $(\mathbb{R}_{\geq 0})^{E(G)}$, so the affine linear space spanned by $P(G)$ is the same as the affine linear space which cuts it out of this orthant.
\end{proof}
\begin{corollary} \label{DimP}
Suppose that every edge of $G$ is used in some almost perfect matching. Then $P(G)$ has dimension $\# \mathrm{Faces}(G)-1$.
\end{corollary}
\begin{proof}
By proposition~\ref{Inequalities}, the affine linear space spanned by $P(G)$ is parallel to the vector space cut out by the equations $\sum_{e \ni v} x_e=0$. This is precisely $H_1(G, \partial G)$, where $\partial G$ is the set of boundary vertices of $G$. Let $\tilde{G}$ be the graph formed from $G$ by identifying the vertices of $\partial G$. We embed $\tilde{G}$ in a sphere by contracting the boundary of the disc in which $G$ lives to a point. Then $H_1(G, \partial G) \cong H_1(\tilde{G})$, which has dimension $\# \mathrm{Faces}(\tilde{G})-1=\# \mathrm{Faces}(G)-1$.
\end{proof}
Note that Corollary~\ref{DimP} is correct even when some components of $G$ are not connected to the boundary, in which case some of the faces of $G$ are not discs.
\subsection{The lattice of elementary subgraphs}
Following \cite{Matching}, we call a subgraph $H$ of $G$
{\it elementary} if it contains every vertex of $G$ and
if every edge of $H$ is used in some almost perfect matching of $H$.
Equivalently, the edges of $H$ are obtained by taking
a union of several almost perfect matchings of $G$.
(To see the equivalence, if $\mathrm{Edges}(H) = \bigcup M_i$, then each edge of $H$ occurs in some $M_i$, which is an almost perfect matching of $H$. Conversely, if $H$ is elementary, then let $M_1$, $M_2$, \dots, $M_r$ be the almost perfect matchings of $G$ contained in $H$ then, by the definition of ``elementary", $\mathrm{Edges}(H) = \bigcup M_i$.)
The main result of this section is the following.
\begin{theorem}\label{facelattice}
The face lattice of $P(G)$ is isomorphic to the lattice of all elementary subgraphs
of $G$, ordered by inclusion.
\end{theorem}
\begin{proof}
We give the following maps between faces of $P(G)$ and elementary subgraphs. If $F$ is a face of $P(G)$, let $K(F)$ be the set of edges $e$ of
$G$ such that $x_e$ is not identically zero on $F$, and let $\gamma(F)$ be the subgraph of $G$ with edge set $K(F)$. Since $F$ is a face of a $(0-1)$-polytope,
$F$ is the convex hull of the characteristic vectors of some set of matchings, and $\gamma(F)$ is the union of these matchings. Thus, $F \mapsto \gamma(F)$ is a map from faces of $P(G)$ to
elementary subgraphs. Conversely, if $H$ is a subgraph of $G$, let $\phi(H) = P(G) \cap \bigcap_{e \not \in H} \{ x_e =0 \}$. Since $\{ x_e =0 \}$ defines a face of $P(G)$, the intersection $\phi(H)$ is a face of $P(G)$. From the description in Proposition~\ref{Inequalities}, every face of $P(G)$ is of the form $\phi(H)$ for some subgraph $H$ of $G$. Note also that $\phi(H)=P(H)$.
We need to show that these constructions give mutually inverse bijections between the faces of $P(G)$ and the elementary subgraphs. For any face $F$ of $P(G)$, it is clear that $\phi(\gamma(F)) \supseteq F$. Suppose for the sake of contradiction that $F \neq \phi(\gamma(F))$. Then $F$ is contained in some proper face of $\phi(\gamma(F))$; let this proper face be $\phi(H)$ for some $H \subsetneq \gamma(F)$. Then there is an edge $e$ of $\gamma(F)$ which is not in $H$. By the condition that $e$ is in $\gamma(F)$, the function $x_e$ cannot be zero on $F$, so $F$ is not contained in $\phi(H)$ after all. We deduce that $F=\phi(\gamma(F))$.
Conversely, let $H$ be an elementary subgraph of $G$. It is clear that $\gamma(\phi(H)) \subseteq H$. Suppose for the sake of contradiction that there is an edge $e$ of $H$ which
is not in $\gamma(\phi(H))$. Since $H$ is elementary, there is a matching $M$ of $H$ which contains the edge $e$. Let $\chi_M$ be the corresponding vertex of $\phi(H)$. Then $x_e$ is not zero on $\phi(H)$, so $e$ is in $\gamma(\phi(H))$ after all and we conclude that $H=\gamma(\phi(H))$.
\end{proof}
The minimal nonempty elementary subgraphs of $G$ are the matchings, corresponding
to vertices of $P(G)$.
\begin{corollary}
Consider a cell $\Delta_G$ of $(Gr_{k,n})_{\geq 0}$ parameterized by
a plane-bipartite graph $G$. For any cell $\Delta_H$ in the closure of $\Delta_G$,
the corresponding polytope $P(H)$ is a face of $P(G)$.
\end{corollary}
\begin{proof}
By \cite[Theorem 18.3]{Postnikov}, every cell in the closure of $\Delta_G$
can be parameterized using a plane-bipartite graph $H$ which is obtained by deleting
some edges from $G$. $H$ is perfectly orientable and hence is an elementary
subgraph of $G$. Therefore by Theorem \ref{facelattice}, the polytope
$P(H)$ is a face of $P(G)$.
\end{proof}
\subsection{Facets and further combinatorial structure of $P(G)$}
We now give a description of the facets of $P(G)$.
Let us say that two edges $e$ and $e'$ of $G$ are {\it equivalent}
if they separate the same pair of (distinct)
faces $f$ and $f'$ with the same orientation.
That is, if we travel across $e$ from face $f$ to $f'$, the black vertex of $e$
will be to our left if and only if when we travel across $e'$ from $f$ to $f'$,
the black vertex of $e'$ is to our left.
\begin{lemma} \label{restrictions}
If every edge of $G$ is used in an almost perfect matching then two edges $e$ and $e'$ are equivalent if and only if the linear functionals $x_e$ and $x_{e'}$ have the same restriction to $P(G)$.
\end{lemma}
\begin{proof}
By Proposition~\ref{Inequalities}, the affine linear space spanned by $P(G)$ is cut out by the equations $\sum_{e \ni v} x_e=1$, where $v$ runs through the internal vertices of $G$. Let $L$ be the linear space cut out by the equations $\sum_{e \ni v} x_e=0$; the polytope $P(G)$ is parallel to $L$ and thus the functionals $x_e$ and $x_{e'}$ have the same restriction to $P(G)$ if and only if they have the same restriction to $L$. In the proof of Corollary~\ref{DimP} we identified $L$ with $H_1(G, \partial G)$. So we just want to determine when the restrictions of $x_e$ and $x_{e'}$ to $H_1(G, \partial G)$ are the same.
The restrictions of $x_e$ and $x_{e'}$ to $H_1(G, \partial G)$ are elements of the dual vector space $H^1(G, \partial G)$. We can identify $H^1(G, \partial G)$ with the vector space of functions on $\mathrm{Faces}(G)$ summing to zero as follows: Map $\mathbb{R}^{E(G)}$ to $\mathbb{R}^{\mathrm{Faces}(G)}$ by sending an edge $e$ to the function which is $1$ on one of the faces it borders and $-1$ on the other; the sign convention is that the sign is positive or negative according to whether $F$ lies to the right or left of $e$, when $e$ is oriented from black to white. Then $H^1(G, \partial G)$, which is defined as a quotient of $\mathbb{R}^{E(G)}$, is the image of this map.
We now see that $x_e$ and $x_{e'}$ restrict to the same functional on $L$ if and only if they correspond to the same function on the faces of $G$. This occurs if and only if they separate the same pair of faces with the same orientation.
\end{proof}
\begin{theorem}
Suppose that $G$ is elementary. Then the facets of
$P(G)$ correspond to the elementary subgraphs of the form $G \setminus E$, where $E$ is an equivalence class as above.
\end{theorem}
\begin{proof}
First, note that if $e$ and $e'$ are not equivalent then, by Lemma~\ref{restrictions}, $x_e$ and $x_{e'}$ have different restrictions to $P(G)$. Thus, there is no facet of $P(G)$ on which they both vanish. On the other hand, if $e$ and $e'$ are equivalent then, again by Lemma~\ref{restrictions}, on every facet of $P(G)$ where $x_e$ vanishes, $x_{e'}$ also vanishes. So we see that every facet of $P(G)$ is of the form $\phi(G \setminus E)$, where $E$ is an equivalence class in $E(G)$. (Here $\phi$ is the function introduced in the proof of Theorem~\ref{facelattice}.)
If $\phi(G \setminus E)$ is a facet of $P(G)$ then $G \setminus E$ is elementary, by Theorem~\ref{facelattice}. Conversely, if $G \setminus E$ is elementary then $\phi(G \setminus E)$ is a face of $P(G)$. Since all the edges of $E$ separate the same pair of faces, $G \setminus E$ has one less face than $G$, so $\phi(G \setminus E)$ is a facet of $P(G)$, as desired.
\end{proof}
As a special case of the preceding propositions, we get the following.
\begin{remark}\label{edge}
Let $N$ be a face of $P(G)$ and let $r$ be the number of regions into which the edges
of $H(N)$
divide the disk in which $G$ is embedded. Then $N$ is an edge of $P(G)$ if and only if
$r=2$.
Equivalently, two vertices $v_{\mathcal{O}_1}$ and $v_{\mathcal{O}_2}$ of $P(G)$ form an edge
if and only if $\mathcal{O}_2$ can be obtained from $\mathcal{O}_1$ by switching the orientation along
a self-avoiding path or cycle in $\mathcal{O}_1$.
\end{remark}
Recall that the {\it Birkhoff polytope} $B_n$ is the convex hull of
the $n!$ points
in $\mathbb{R}^{n^2}$
$\{X(\pi): \pi \in S_n\}$ where $X(\pi)_{ij}$ is equal to $1$ if $\pi(i)=j$ and is equal to $0$
otherwise. It is well-known that $B_n$ is an $(n-1)^2$-dimensional polytope, whose
face lattice is isomorphic to the lattice of all elementary subgraphs of the
complete bipartite graph $K_{n,n}$ ordered by inclusion \cite{Billera}.
Our polytopes $P(G)$ can be thought of as analogues
of the Birkhoff polytope for
planar graphs embedded in a disk.
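For comparison, the vertices of $B_n$ are quickly generated (a minimal sketch assuming \texttt{numpy}):
\begin{verbatim}
from itertools import permutations
import numpy as np

def birkhoff_vertices(n):
    # X(pi)[i, j] = 1 iff pi(i) = j
    return [np.eye(n)[list(pi)] for pi in permutations(range(n))]

print(len(birkhoff_vertices(3)))   # -> 6 = 3!
\end{verbatim}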
\section{Appendix: numerology of the polytopes $P(G)$}\label{Numerology}
In this section we give some statistics about a few of the polytopes $P(G)$.
Our computations were made with the help of the software {\tt polymake} \cite{Polymake}.
\begin{figure}[h]
\centering
\includegraphics[height=1.6in]{Examples.eps}
\caption{}
\label{examples}
\end{figure}
Let $G24$ denote the plane-bipartite graph from Figure \ref{Graph}, and
let $G25$, $G26$, and $G36$ denote the plane-bipartite graphs shown in Figure \ref{examples}.
These plane-bipartite graphs give parameterizations of the top cells in $(Gr_{2,4})_{\geq 0}, (Gr_{2,5})_{\geq 0}, (Gr_{2,6})_{\geq 0}$, and $(Gr_{3,6})_{\geq 0}$,
respectively.
The $f$-vectors of the matching polytopes $P(G24)$, $P(G25)$, $P(G26)$ and $P(G36)$ are
\ $(7,17,18,8)$, \ $(14,59,111,106,52,12)$, \ $(25,158,440,664,590,315,98,16)$, \ and \
$(42,353,1212,2207,2368,1557,627,149,19)$ respectively.
The Ehrhart series for $P(G24)$, $P(G25)$ and $P(G26)$,
which give the Hilbert series of the corresponding toric varieties, are
$\frac{1+2t+t^2}{(1-t)^5}$, $\frac{1+7t+12t^2+4t^3}{(1-t)^7}$,
and $\frac{1+16t+64t^2+68t^3+15t^4}{(1-t)^9}$.
The volumes of the four polytopes are $\frac{1}{6}= \frac{4}{4!}$, $\frac{1}{30}=\frac{24}{6!}$, $\frac{41}{10080}=\frac{164}{8!}$, and $\frac{781}{181440}=\frac{1562}{9!}$. Thus, the degrees of the corresponding toric varieties are $4$, $24$, $164$, and $1562$.
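These numbers are internally consistent: the degree of the toric variety equals $(\dim P)!\cdot\mathrm{vol}(P)$ and also the value of the Ehrhart numerator at $t=1$. A quick check (our own sketch) for the three polytopes whose Ehrhart series are listed above:
\begin{verbatim}
from fractions import Fraction
from math import factorial

data = [  # (dim P, volume, Ehrhart numerator coefficients)
    (4, Fraction(1, 6),      [1, 2, 1]),
    (6, Fraction(1, 30),     [1, 7, 12, 4]),
    (8, Fraction(41, 10080), [1, 16, 64, 68, 15]),
]
for dim, vol, num in data:
    assert factorial(dim) * vol == sum(num)
print([sum(num) for _, _, num in data])   # degrees [4, 24, 164]
\end{verbatim}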
\begin{proposition}
Let $G2n$ (for $n \geq 4$) be the family of graphs that extend the
first two graphs shown in Figure \ref{examples}. Then
the number of vertices of $P(G2n)$ is given by
$f_0(P(G2n))= \binom{n}{3} + n-1$.
\end{proposition}
\begin{proof}
This can be proved by induction on $n$ by removing the leftmost
black vertex. We leave this as an exercise for the reader.
\end{proof}
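As a quick consistency check of the numerology above (our addition; the original computations were made with {\tt polymake}), the following Python snippet verifies the vertex-count formula against the $f$-vectors listed earlier, and checks that each stated volume equals the stated degree divided by the factorial of the polytope dimension:
\begin{verbatim}
from math import comb, factorial
from fractions import Fraction

# f_0 entries of the f-vectors quoted above for P(G24), P(G25), P(G26).
for n, f0 in [(4, 7), (5, 14), (6, 25)]:
    assert comb(n, 3) + n - 1 == f0

# (volume, degree, dimension) as stated above; the dimension is the
# length of the corresponding f-vector.
for vol, deg, dim in [(Fraction(1, 6), 4, 4), (Fraction(1, 30), 24, 6),
                      (Fraction(41, 10080), 164, 8),
                      (Fraction(781, 181440), 1562, 9)]:
    assert vol == Fraction(deg, factorial(dim))
\end{verbatim}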
Note that in general there is more than one plane-bipartite graph giving a parameterization of a given cell.
But even if two plane-bipartite graphs $G$ and $G'$ correspond to the same cell, in general
we have $P(G) \neq P(G')$. For example, the plane-bipartite graph in Figure \ref{Alternate} gives a parameterization
of the top cell of $(Gr_{2,6})_{\geq 0}$. Let us refer to this graph as $\hat{G}26$. However,
$P(\hat{G}26) \neq P(G26)$: the $f$-vector of $P(\hat{G}26)$ is
$(26, 165, 460, 694, 615, 326, 100, 16)$.
\begin{figure}[h]
\centering
\includegraphics[height=1in]{G26Alternate.eps}
\caption{The plane-bipartite graph $\hat{G}26$.}
\label{Alternate}
\end{figure}
\section{Introduction}
In this paper we consider an optimal control problem in the presence of uncertainty: the target function is the solution of an elliptic partial differential equation (PDE), steered by a control function, and having a random field as input coefficient. The random field is in principle infinite-dimensional, and in practice might need a large finite number of terms for accurate approximation. The novelty lies in the use and analysis of a specially designed quasi-Monte Carlo method to approximate the possibly high-dimensional integrals with respect to the stochastic variables.
Specifically, we consider the optimal control problem of finding
\begin{equation}
\min_{z \in L^2(\Omega)} J(u,z)\,,\quad J(u,z) := \frac{1}{2} \int_{\Xi}\, \int_{\Omega} (u(\pmb x,\pmb y)-u_0(\pmb x))^2\, \mathrm d\pmb x\, \mathrm d\pmb y + \frac{\alpha}{2} \int_{\Omega} z(\pmb x)^2\, \mathrm d\pmb x\,, \label{eq:1.1}
\end{equation}
subject to the partial differential equation
\begin{align}
- \nabla \cdot (a(\pmb x,\pmb y) \nabla u(\pmb x,\pmb y)) &= z(\pmb x) \quad &\pmb x \in \Omega\,,&\quad \pmb y \in \Xi\,,\label{eq:1.2} \\
u(\pmb x,\pmb y) &= 0 \quad &\pmb x \in \partial \Omega\,,& \quad \pmb y\in \Xi\,, \label{eq:1.3} \\
z_{\min}(\pmb x) \leq&\ z(\pmb x) \leq z_{\max}(\pmb x) \quad &\text{a.e.~in}\ \Omega\,,& \label{eq:1.4}
\end{align}
for $\alpha > 0$ and a bounded domain $\Omega \subset \mathbb R^d$ with Lipschitz boundary $\partial \Omega$, where $d = 1,2$ or $3$. Further we assume
\begin{align}
u_0,z_{\min}, z_{\max} & \in L^2(\Omega)\,, \notag \\
z_{\min} \leq z_{\max} &\text{ a.e.~in } \Omega\,. \label{eq:1.7}
\end{align}
Hence $\mathcal Z$, the set of \emph{feasible controls}, is defined by
\begin{align*}
\mathcal Z = \{ z \in L^2(\Omega)\ :\ z_{\min} \leq z \leq z_{\max}\quad \text{a.e.~in } \Omega \}\,.
\end{align*}
Note that $\mathcal Z$ is bounded, closed and convex, and by \cref{eq:1.7} it is non-empty.
The gradients in \cref{eq:1.2} are understood to be with respect to the physical variable $\pmb x \in \Omega$, whereas $\pmb y \in \Xi$ is an infinite-dimensional vector $\pmb y = (y_j)_{j \geq 1}$ consisting of a countable number of parameters $y_j$, which are assumed to be independently and identically distributed (i.i.d.) uniformly in $[-\frac{1}{2},\frac{1}{2}]$ and we denote
\begin{displaymath}
\Xi := \left[-\tfrac{1}{2},\tfrac{1}{2}\right]^{\mathbb N}\,.
\end{displaymath}
The parameter $\pmb y$ is then distributed on $\Xi$ with probability measure $\mu$, where
\begin{displaymath}
\mu(\mathrm d\pmb y) = \bigotimes_{j\geq1} \mathrm dy_j = \mathrm d\pmb y
\end{displaymath}
is the uniform probability measure on $\Xi$.
The input uncertainty is described by the parametric diffusion coefficient $a(\pmb x,\pmb y)$ in \cref{eq:1.2}, which is assumed to depend linearly on the parameters $y_j$, i.e.,
\begin{equation}
a(\pmb x,\pmb y) = \bar{a}(\pmb x) + \sum_{j\geq1} y_j\, \psi_j (\pmb x)\,, \quad \pmb x \in \Omega\,,\quad \pmb y \in \Xi\,. \label{eq:1.9}
\end{equation}
In order to ensure that the diffusion coefficient $a(\pmb x,\pmb y)$ is well defined for all $\pmb y \in \Xi$, we assume
\begin{align}
\bar{a} & \in L^{\infty}(\Omega)\,, \quad \sum_{j\geq 1}\ \|\psi_j\|_{L^{\infty}(\Omega)} < \infty\,. \label{eq:1.10}
\end{align}
Later in this article we shall impose a number of assumptions on the coefficients $a(\pmb x,\pmb y)$ as required.
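To make the parametrization concrete, the following Python sketch (our illustration; the choices $\bar a \equiv 2$ and $\|\psi_j\|_{L^\infty(\Omega)} = j^{-2}$ are hypothetical examples satisfying \cref{eq:1.10}, not the coefficients used later) evaluates a dimensionally truncated coefficient $\bar a(\pmb x) + \sum_{j=1}^{s} y_j \psi_j(\pmb x)$ on a one-dimensional grid:
\begin{verbatim}
import numpy as np

def diffusion_coefficient(x, y, abar=2.0):
    # a_s(x, y) = abar + sum_{j=1}^{s} y_j * j^{-2} * sin(j*pi*x);
    # the decay j^{-2} gives a summable sequence as required in (1.10).
    a = np.full_like(x, abar)
    for j, yj in enumerate(y, start=1):
        a += yj * j ** (-2.0) * np.sin(j * np.pi * x)
    return a

x = np.linspace(0.0, 1.0, 201)
y = np.random.default_rng(0).uniform(-0.5, 0.5, size=10)  # sample, s = 10
a_vals = diffusion_coefficient(x, y)
assert a_vals.min() > 0.0   # uniform ellipticity holds for this choice
\end{verbatim}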
A comprehensive overview of other possible formulations of the optimal control problem \cref{eq:1.1,eq:1.2,eq:1.3,eq:1.4} can be found, e.g., in \cite{AUH,BSSW}. They differ primarily in the computational cost and the robustness of the control with respect to the uncertainty. A lot of work \cite{AUH,ChenGhattas,KHRB,KunothSchwab2} has been done on formulations with stochastic controls, i.e., when the control depends directly on the uncertainty. Since practitioners often require a single deterministic control, the so-called robust deterministic formulation \cref{eq:1.1,eq:1.2,eq:1.3,eq:1.4} has received increasing attention in the recent past. This deterministic reformulation of the optimal control problem is based on a risk measure, such as the expected value, the conditional value-at-risk \cite{KouriSurowiec} or the combination of the expected value and the variance \cite{vanBarelVandewalle}.
Approaches to solve the resulting robust optimization problems include, e.g., Taylor approximation methods \cite{CVG}, sparse grids \cite{SparseGrids,KouriSurowiec} and multilevel Monte Carlo methods \cite{vanBarelVandewalle}. Multilevel Monte Carlo methods have first been analyzed for robust optimal control problems in the fundamental work \cite{vanBarelVandewalle}. Together with confirming numerical evidence, the theory in \cite{vanBarelVandewalle} shows the vast potential cost savings resulting from the application of multilevel Monte Carlo methods. Monte Carlo based methods do not require smoothness of the integrand with respect to the uncertain parameters. However, for many robust optimization problems, the integrands in the robust formulations are in fact smooth with respect to the uncertainty.
In this paper we propose the application of a quasi-Monte Carlo method to approximate the expected values with respect to the uncertainty. Quasi-Monte Carlo methods have been shown to perform remarkably well in the application to PDEs with random coefficients \cite{DKGNS, DKGS, GKNSSS, KKS, Kuo2016ApplicationOQ, KuoNuyens2018, KSSSU, Kuo2012QMCFEM, KSS2015, Schwab}. The reason behind their success is that it is possible to design quasi-Monte Carlo rules with error bounds not dependent on the number of uncertain variables, which achieve faster convergence rates compared to Monte Carlo methods in case of smooth integrands. In addition, quasi-Monte Carlo methods preserve the convexity structure of the optimal control problem due to their nonnegative (equal) quadrature weights. This work focuses on error estimates and convergences rates for the dimension truncation, the finite element discretization and the quasi-Monte Carlo quadrature, which are presented together with confirming numerical experiments.
This paper is structured as follows. The parametric weak formulation of the PDE problem is given in \cref{sec:2}. The corresponding optimization problem is discussed in \cref{sec:3}, with the unique solvability of the optimization problem considered in \cref{subsection31} and the requisite optimality conditions given in \cref{subsection32}. The gradient descent algorithm and its projected variant as they apply to our problem are presented in \cref{sec:Gradient descent} and \cref{subsection41}, respectively. The error analysis of \cref{sec:5} contains the main new theoretical results of this paper. \cref{subsection51} is concerned with the dimension truncation error, while \cref{section:FE discretization} addresses the finite element discretization error of the PDE problem. The regularity of the adjoint PDE problem is the topic of \cref{subsection54}, which leads to \cref{subsection:QMC} covering the quasi-Monte Carlo (QMC) integration error. \cref{section:optimalweights} details the design of optimally chosen weights for the QMC algorithm. Finally, the combined error and convergence rates for the PDE-constrained optimization problem are summarized in \cref{subsection56}.
\section{Parametric weak formulation}
\label{sec:2}
We state the variational formulation of the parametric elliptic boundary value problem \cref{eq:1.2,eq:1.3} for each value of the parameter $\pmb y \in \Xi$ together with sufficient conditions for the existence and uniqueness of solutions.
Our variational setting of \cref{eq:1.2} and \cref{eq:1.3} is based on the Sobolev space $H_0^1(\Omega)$ and its dual space $H^{-1}(\Omega)$ with the norm in $H_0^1(\Omega)$ defined by
\begin{align*}
\|v\|_{H_0^1(\Omega)} := \|\nabla v\|_{L^2(\Omega)}\,.
\end{align*}
The duality between $H_0^1(\Omega)$ and $H^{-1}(\Omega)$ is understood to be with respect to the pivot space $L^2(\Omega)$, which we identify with its own dual. We denote by $\langle \cdot,\cdot \rangle$ the $L^2(\Omega)$ inner product and the duality pairing between $H_0^1(\Omega)$ and $H^{-1}(\Omega)$. We introduce the continuous embedding operators $E_1: L^2(\Omega) \to H^{-1}(\Omega)$ and $E_2: H_0^1(\Omega) \to L^2(\Omega)$, with the embedding constants $c_1,c_2>0$ for the norms
\begin{align}
\|v\|_{H^{-1}(\Omega)} &\leq c_1 \|v\|_{L^2(\Omega)} \label{c1}\,,\\
\|v\|_{L^2(\Omega)} &\leq c_2 \|v\|_{H_0^1(\Omega)} \label{c2}\,.
\end{align}
For fixed $\pmb y \in \Xi$, we obtain the following parameter-dependent weak formulation of the parametric deterministic boundary value problem \cref{eq:1.2,eq:1.3}:
for $\pmb y \in \Xi$ find $u(\cdot,\pmb y) \in H_0^1(\Omega)$ such that
\begin{equation}
\int_{\Omega} a(\pmb x,\pmb y) \nabla u(\pmb x,\pmb y) \cdot \nabla v(\pmb x)\, \mathrm d\pmb x = \int_{\Omega} z(\pmb x) v(\pmb x)\, \mathrm d\pmb x\quad \forall v \in H_0^1(\Omega)\,. \label{eq:2.1}
\end{equation}
The parametric bilinear form $b(\pmb y;w,v)$ for $\pmb y \in \Xi$ is given by
\begin{equation}
b(\pmb y;w,v) := \int_{\Omega} a(\pmb x,\pmb y) \nabla w(\pmb x) \cdot \nabla v(\pmb x)\ \mathrm d\pmb x \quad \forall w,v \in H_0^1(\Omega)\,, \label{eq:2.2}
\end{equation}
allowing us to write the weak form of the PDE as
\begin{align}
b(\pmb y;u(\cdot,\pmb y),v) = \langle z,v \rangle \quad \forall v \in H_0^1(\Omega)\,.\label{eq:parametricweakproblem}
\end{align}
Throughout this paper we assume in addition to \cref{eq:1.9,eq:1.10} that
\begin{equation}
0 < a_{\min} \leq a(\pmb x,\pmb y) \leq a_{\max} <\infty\,, \quad \pmb x \in \Omega\,, \quad \pmb y \in \Xi\,, \notag
\end{equation}
for some positive real numbers $a_{\min}$ and $a_{\max}$. Then the parametric bilinear form is continuous and coercive on $H_0^1(\Omega) \times H_0^1(\Omega)$, i.e., for all $\pmb y \in \Xi$ and all $w,v \in H_0^1(\Omega)$ we have
\begin{displaymath}
b(\pmb y;v,v) \geq a_{\min}\ \|v\|_{H_0^1(\Omega)}^2 \quad \text{ and } \quad |b(\pmb y;w,v)| \leq a_{\max}\ \|w\|_{H_0^1(\Omega)}\ \|v\|_{H_0^1(\Omega)}\,.
\end{displaymath}
With the Lax--Milgram lemma we may then infer that for every $z \in H^{-1}(\Omega)$ and given $\pmb y \in \Xi$, there exists a unique solution to the parametric weak problem: find $u(\cdot,\pmb y) \in H_0^1(\Omega)$ such that \cref{eq:parametricweakproblem} holds.
Hence we obtain the following result, which can also be found, e.g., in \cite{CohenDeVoreSchwab} and \cite{Kuo2012QMCFEM}.
\begin{theorem}\label{theorem:theorem1}
For every $z \in H^{-1}(\Omega)$ and every $\pmb y \in \Xi$, there exists a unique solution $u(\cdot,\pmb y) \in H_0^1(\Omega)$ of the parametric weak problem \cref{eq:2.1} (or equivalently, \cref{eq:parametricweakproblem}), which satisfies
\begin{equation}
\|u(\cdot,\pmb y)\|_{H_0^1(\Omega)} \leq \frac{\| z\|_{H^{-1}(\Omega)}}{a_{\min}}\,.\notag
\end{equation}
In particular, because of \cref{c1} it holds for $z \in L^2(\Omega)$ that
\begin{equation}
\|u(\cdot,\pmb y)\|_{H_0^1(\Omega)} \leq \frac{c_1\|z\|_{L^{2}(\Omega)}}{a_{\min}}\,. \label{eq:2.5}
\end{equation}
\end{theorem}
\section{The optimization problem}
\label{sec:3}
For the discussion of existence and uniqueness of solutions of the optimal control problem \cref{eq:1.1,eq:1.2,eq:1.3,eq:1.4}, we reformulate the problem to depend on $z$ only, a form often referred to as the \emph{reduced form of the problem}.
Due to \cref{c2} we can interpret the solution operator as a linear continuous operator with image in $L^2(\Omega)$, which leads to the following definition.
\begin{definition}\label{def:solutionoperator}
For arbitrary $\pmb y \in \Xi$ we call $S_{\pmb y}:L^2(\Omega) \to L^2(\Omega)$ the \emph{solution operator} which assigns to each $f \in L^2(\Omega)$ the unique solution $g \in L^2(\Omega)$ of the weak problem: find $g\in H_0^1(\Omega)$ such that
\begin{align*}
b(\pmb y;g,v) = \langle f,v \rangle \quad \forall v \in H_0^1(\Omega)\,.
\end{align*}
\end{definition}
Note that the solution operator $S_{\pmb y}$ depends on $\pmb y \in \Xi$ as indicated by the subscript. Further, $S_{\pmb y}$ is a self-adjoint operator, i.e., $S_{\pmb y}= S_{\pmb y}^*$, where $S_{\pmb y}^*$ is defined by $\langle S^*_{\pmb y} g,f \rangle = \langle g,S_{\pmb y}f \rangle$ $\forall f,g \in L^2(\Omega)$. The self-adjoint property holds since for all $f,g \in L^2(\Omega)$ we have $\langle S_{\pmb y}^* g, f\rangle = \langle g, S_{\pmb y}f\rangle = b(\pmb y; S_{\pmb y}g,S_{\pmb y}f) = \langle S_{\pmb y}g,f \rangle$. In the following we will omit the $*$ in $S^*_{\pmb y}$.
By \cref{def:solutionoperator} and \cref{eq:parametricweakproblem} it clearly holds that $u(\cdot,\pmb y) = S_{\pmb y}z$ for every $\pmb y \in \Xi$. Therefore we can write
\begin{align*}
u(\cdot,\pmb y, z) := S_{\pmb y}z
\end{align*}
as a function of $z$ and call it the \emph{state} corresponding to the control $z \in L^2(\Omega)$. The optimal control problem then becomes a quadratic problem in the Hilbert space $L^2(\Omega)$: find
\begin{equation}
\min_{z\in \mathcal Z} J(z)\,, \quad J(z) := \frac{1}{2} \int_{\Xi} \|S_{\pmb y}z-u_0\|^2_{L^2(\Omega)}\ \mathrm d\pmb y + \frac{\alpha}{2} \|z\|^2_{L^2(\Omega)}\,. \label{eq:3.1}
\end{equation}
\subsection{Existence and uniqueness of solutions}
\label{subsection31}
Results on the existence of solutions for formulations of the optimization problem with stochastic controls, i.e., where it is assumed that the control $z$ is dependent on the parametric variable $\pmb y$, can be found, e.g., in \cite{ChenGhattas} and \cite{KunothSchwab2}. In \cite{KouriSurowiec} an existence result for solutions of a risk-averse PDE-constrained optimization problem is stated, where the objective is to minimize the conditional value-at-risk (CVaR).
\begin{theorem}
There exists a unique optimal solution $z^*$ of the problem \cref{eq:3.1}.
\end{theorem}
\begin{proof}
By assumption \cref{eq:1.7} there exists a $z_0 \in \mathcal Z$. For any $z \in \mathcal Z$ satisfying $\|z\|^2_{L^2(\Omega)} > \frac{2}{\alpha} J(z_0)$ it holds that
\begin{displaymath}
J(z) = \frac{1}{2} \int_{\Xi} \|S_{\pmb y}z - u_0\|^2_{L^2(\Omega)}\, \mathrm d\pmb y + \frac{\alpha}{2} \|z\|^2_{L^2(\Omega)} \geq \frac{\alpha}{2} \|z\|^2_{L^2(\Omega)} > J(z_0)\,.
\end{displaymath}
Hence, to find the optimal control $z^*$, we can restrict to the set $\widetilde{\mathcal Z} := \mathcal Z \cap \{ z \in L^2(\Omega) : \|z\|^2_{L^2(\Omega)} \leq \frac{2}{\alpha} J(z_0)\}$.
As $J(z) \geq 0$, the infimum $\widetilde J := \inf_{z \in \widetilde{\mathcal Z}} J(z)$ exists. Hence there exists a sequence $(z_i)_i \subset \widetilde{\mathcal Z}$ such that $J(z_i) \to \widetilde J$ as $i \to \infty$.
Since $\widetilde{\mathcal Z}$ is bounded, closed and convex it is weakly sequentially compact. Therefore there exists a subsequence $(z_{i_k})_k$, which converges weakly to $z^* \in \widetilde{\mathcal Z}$, i.e., $\langle z_{i_k},v \rangle \to \langle z^*,v \rangle$ $\forall v \in L^2(\Omega)$ as $k \to \infty$.
Since $\|S_{\pmb y}z-u_0\|^2_{L^2(\Omega)}$ as a function of $z$ is convex and continuous it is weakly lower semicontinuous. In consequence we have
\begin{displaymath}
\|S_{\pmb y}z^*-u_0\|^2_{L^2(\Omega)} \leq \liminf_{k \to \infty} \|S_{\pmb y}z_{i_k}-u_0\|^2_{L^2(\Omega)}\,.
\end{displaymath}
It follows that
\begin{align*}
J(z^*) &= \frac{1}{2} \int_{\Xi} \|S_{\pmb y}z^*-u_0\|^2_{L^2(\Omega)}\, \mathrm d\pmb y + \frac{\alpha}{2} \|z^*\|^2_{L^2(\Omega)}\\
&\leq \frac{1}{2} \int_{\Xi} \liminf_{k \to \infty} \|S_{\pmb y}z_{i_k}-u_0\|^2_{L^2(\Omega)}\, \mathrm d\pmb y + \liminf_{k \to \infty} \frac{\alpha}{2} \|{z_{i_k}}\|^2_{L^2(\Omega)}\\
&\leq \liminf_{k \to \infty} J(z_{i_k}) = \widetilde J\,,
\end{align*}
where the last step follows by Fatou's lemma.
As $\widetilde J$ is the infimum of all possible values $J(z)$ and $z^* \in \widetilde{\mathcal Z}$, it follows that $J(z^*) = \widetilde J$ and hence $z^*$ is an optimal control. The uniqueness follows from the strict convexity of $J$.
\end{proof}
\subsection{Optimality conditions}
\label{subsection32}
From standard optimization theory for convex $J$, we know that $z^*$ solves \cref{eq:3.1} if and only if the representer $J'$ of the Fr\'{e}chet derivative of $J$ satisfies the variational inequality $\langle J'(z^*),z-z^*\rangle \geq 0$ $\forall z \in \mathcal Z$.
It can be shown that
\begin{align}
J'(z) = \int_{\Xi} S_{\pmb y}(S_{\pmb y}z - u_0)\, \mathrm d\pmb y + \alpha z\,.\label{eq:gradient}
\end{align}
In the following we call $J'(z)$ the gradient of $J(z)$.
\begin{definition}\label{def:adjointstate}
For every $\pmb y \in \Xi$ and every $z \in \mathcal Z$, with $u(\cdot,\pmb y, z) = S_{\pmb y} z$ we call $q(\cdot,\pmb y,z) := S_{\pmb y}(S_{\pmb y}z-u_0) = S_{\pmb y}(u(\cdot,\pmb y ,z) - u_0) \in L^2(\Omega)$ the adjoint state corresponding to the control $z$ and the state $u(\cdot, \pmb y,z)$.
\end{definition}
Note that $q(\cdot,\pmb y,z) \in L^2(\Omega)$ is by \cref{def:adjointstate} the unique solution of the adjoint parametric weak problem: find $q(\cdot,\pmb y,z) \in H_0^1(\Omega)$ such that
\begin{align}
b(\pmb y;q(\cdot,\pmb y,z),w) = \langle (u(\cdot,\pmb y,z)-u_0),w \rangle \quad \forall w \in H_0^1(\Omega)\,,\label{eq:adjointparametricweakproblem}
\end{align}
where $u(\cdot,\pmb y,z)$ is the unique solution of
\begin{align}
b(\pmb y; u(\cdot,\pmb y,z),v) = \langle z,v\rangle \quad \forall v \in H_0^1(\Omega)\,.\label{eq:33b}
\end{align}
The following result is a corollary to \cref{theorem:theorem1}.
\begin{corollary}\label{coro}
For every $z \in L^{2}(\Omega)$ and every $\pmb y \in \Xi$, there exists a unique solution $q(\cdot,\pmb y,z) \in H_0^1(\Omega)$ of the parametric weak problem \cref{eq:adjointparametricweakproblem}, which satisfies
\begin{equation}
\|q(\cdot,\pmb y,z)\|_{H_0^1(\Omega)} \leq \frac{c_1\|u(\cdot,\pmb y,z) - u_0\|_{L^2(\Omega)}}{a_{\min}} \leq C_q \left(\| z\|_{L^{2}(\Omega)} + \| u_0\|_{L^{2}(\Omega)}\right)\,, \label{coro:2.4}
\end{equation}
where $C_q := \max\left(\frac{c_1}{a_{\min}}, \frac{c_1^2c_2}{a^2_{\min}}\right)$ and $c_1,c_2> 0$ are the embedding constants in \cref{c1,c2}.
\end{corollary}
As a consequence of \cref{eq:gradient} and \cref{def:adjointstate} we get \begin{align}
J'(z) = \int_{\Xi} q(\cdot,\pmb y,z)\, \mathrm d\pmb y + \alpha z\,,\label{eq:gradient3}
\end{align}
which directly leads to the following result.
\begin{lemma} \label{theorem:3.4}
A control $z^* \in \mathcal Z$ solves \cref{eq:3.1} if and only if
\begin{align}
\left\langle \int_{\Xi} q(\cdot,\pmb y,z^*)\, \mathrm d\pmb y + \alpha z^*, z - z^*\right\rangle \geq 0 \quad \forall z \in \mathcal Z\,, \label{eq:gradient2}
\end{align}
where $q(\cdot,\pmb y,z^*)$ is the adjoint state corresponding to $z^*$.
\end{lemma}
The variational inequality $\langle J'(z^*),z-z^*\rangle \geq 0$ $\forall z \in \mathcal Z$ holds if and only if there exist a.e.~nonnegative functions $\mu_a,\mu_b \in L^2(\Omega)$ such that $J'(z^*) - \mu_a + \mu_b = 0$ and that the complementary constraints $(z^* - z_{\min})\mu_a = (z_{\max} - z^*)\mu_b = 0$ are satisfied a.e.~in $\Omega$, cf. \cite[Theorem 2.29]{Troeltzsch}. Thus we obtain the following KKT-system.
\begin{theorem}\label{theorem:variational inequality}
A control $z^* \in L^2(\Omega)$ is the unique minimizer of \cref{eq:3.1} if and only if it satisfies the following KKT-system:
\begin{align}\label{eq:KKT}
\begin{cases}
- \nabla \cdot (a(\pmb x,\pmb y) \nabla u(\pmb x,\pmb y,z^*)) = z^*(\pmb x) &\pmb x \in\Omega\,, \quad \pmb y \in \Xi\,,\\
u(\pmb x,\pmb y,z^*) = 0 &\pmb x \in \partial \Omega\,, \quad \!\!\! \pmb y \in \Xi\,,\\[.7em]
- \nabla \cdot (a(\pmb x,\pmb y) \nabla q(\pmb x,\pmb y,z^*)) = u(\pmb x,\pmb y,z^*) - u_0(\pmb x) &\pmb x \in\Omega\,, \quad \pmb y \in \Xi\,,\\
q(\pmb x,\pmb y,z^*) = 0 &\pmb x \in \partial \Omega\,, \quad \!\!\! \pmb y \in \Xi\,,\\[.7em]
\displaystyle \int_{\Xi} q(\pmb x,\pmb y,z^*)\, \mathrm d\pmb y + \alpha z^*(\pmb x) - \mu_a(\pmb x) + \mu_b(\pmb x) = 0 \quad &\pmb x \in\Omega\,,\\[.7em]
z_{\min}(\pmb x) \leq z^*(\pmb x) \leq z_{\max}(\pmb x)\,, \quad \mu_a(\pmb x) \geq 0\,, \quad \mu_b(\pmb x) \geq 0\,, &\pmb x \in\Omega\,,\\
(z^*(\pmb x) - z_{\min}(\pmb x))\mu_a(\pmb x) = (z_{\max}(\pmb x) - z^*(\pmb x))\mu_b(\pmb x) = 0\,, &\pmb x \in\Omega\,.
\end{cases}
\end{align}
\end{theorem}
\section{Gradient descent algorithms}
We present a gradient descent algorithm to solve the optimal control problem for the case without control constraints ($\mathcal Z = L^2(\Omega)$) in \cref{sec:Gradient descent} and a projected variant of the algorithm for the problem with control constraints in \cref{subsection41}.
\subsection{Gradient descent}
\label{sec:Gradient descent}
Consider problem \cref{eq:3.1} with $z_{\min} = -\infty$ and $z_{\max} = \infty$, i.e., $\mathcal Z = L^2(\Omega)$. Then $\mu_{a} = 0 = \mu_{b}$, and $z^*$ is the unique minimizer of \cref{eq:3.1} if and only if $J'(z^*) = 0$.
To find the minimizer $z^*$ of $J$ we use the gradient descent method, for which the descent direction is given by the negative gradient $-J'$, see \cref{alg:gradient descent}.
\begin{algorithm}[t]
\caption{Gradient descent}
\label{alg:gradient descent}
Input: starting value $z \in L^2(\Omega)$
\begin{algorithmic}[1]
\WHILE{$\| J'(z)\|_{L^2(\Omega)} >$TOL}
\STATE\label{444}{find step size $\eta$ using \cref{alg:Armijo}}
\STATE{set $z := z - \eta J'(z)$}
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[t]
\caption{Armijo rule}
\label{alg:Armijo}
Input: current $z$, parameters $\beta,\gamma \in (0,1)$\\
Output: step size $\eta > 0$
\begin{algorithmic}[1]
\STATE{set $\eta := 1$}
\WHILE{$J(z - \eta J'(z))- J(z) > - \eta \gamma \|J'(z)\|^2_{L^2(\Omega)}$}
\STATE{set $\eta := \beta \eta$}
\ENDWHILE
\end{algorithmic}
\end{algorithm}
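The following Python sketch (our illustration) implements \cref{alg:gradient descent} and \cref{alg:Armijo} for an abstract objective. The callables {\tt J} and {\tt Jprime} stand for the reduced objective and the representer of its derivative, acting on coefficient vectors of a discretized control; the Euclidean norm is used in place of the $L^2(\Omega)$ norm, i.e., mass-matrix weighting is omitted for brevity.
\begin{verbatim}
import numpy as np

def armijo_step(J, g, z, beta=0.5, gamma=1e-4):
    # Algorithm 2: shrink eta until the Armijo condition holds.
    eta, Jz, g2 = 1.0, J(z), float(np.dot(g, g))
    while J(z - eta * g) - Jz > -eta * gamma * g2:
        eta *= beta
    return eta

def gradient_descent(J, Jprime, z0, tol=1e-8, max_iter=500):
    # Algorithm 1: steepest descent z <- z - eta * J'(z).
    z = np.asarray(z0, dtype=float).copy()
    for _ in range(max_iter):
        g = Jprime(z)
        if np.linalg.norm(g) <= tol:
            break
        z = z - armijo_step(J, g, z) * g
    return z

# Hypothetical smoke test: J(z) = 0.5*||Az - b||^2 + 0.5*alpha*||z||^2.
A = np.array([[2.0, 0.3], [0.3, 1.5]])
b = np.array([1.0, -1.0])
alpha = 0.1
J = lambda z: 0.5 * np.sum((A @ z - b) ** 2) + 0.5 * alpha * np.sum(z ** 2)
Jprime = lambda z: A.T @ (A @ z - b) + alpha * z
z_opt = gradient_descent(J, Jprime, np.zeros(2))
\end{verbatim}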
Note that in every iteration in \cref{alg:gradient descent} several evaluations of $q$ are required in order to approximate the infinite-dimensional integral $\int_{\Xi}\ q(\cdot,\pmb y,z)\ \mathrm d\pmb y$ in the gradient of $J$, see \cref{eq:gradient2}. Further, for each evaluation of $q$ one needs to solve the state PDE and the adjoint PDE.
\begin{theorem}\label{theorem:convergenceGradDesc2}
For arbitrary starting values $z_0 \in L^2(\Omega)$ and $z_{\min} = -\infty$ and $z_{\max} = \infty$, the sequence $\{z_i\}$ generated by \cref{alg:gradient descent} satisfies $J'(z_i) \to 0$ as $i \to \infty$ and the sequence converges to the unique solution $z^*$ of \cref{eq:3.1}.
\end{theorem}
\begin{proof}
The first part is shown in \cite[Theorem 2.2]{HinzePinnauUlbrich}. Now let $z^*$ be the unique solution of \cref{eq:3.1}. Then
\begin{align*}
\alpha \| z_i - z^* \|_{L^2(\Omega)}^2 &\leq \int_{\Xi} \|S_{\pmb y}(z_i-z^*)\|_{L^2(\Omega)}^2\, \mathrm d\pmb y + \alpha \|z_i - z^*\|_{L^2(\Omega)}^2\\
&= \left\langle z_i-z^*, \int_{\Xi} (S_{\pmb y}S_{\pmb y} + \alpha I) (z_i-z^*)\, \mathrm d\pmb y \right\rangle\\
&= \left\langle z_i-z^*, \int_{\Xi} \left((S_{\pmb y}S_{\pmb y} + \alpha I)z_i - S_{\pmb y}u_0 \right) \mathrm d\pmb y \right\rangle\\
&= \left\langle z_i-z^*, J'(z_i) \right\rangle
\leq \|z_i-z^*\|_{L^2(\Omega)} \|J'(z_i)\|_{L^2(\Omega)}\,,
\end{align*}
where we used Fubini's theorem in the first equality and the optimality condition $\int_\Xi q(\cdot,\pmb y,z^*)\, \mathrm d\pmb y = -\alpha z^*$ (i.e., $J'(z^*) = 0$) in the second.
Hence we obtain
\begin{align*}
\|z_i-z^*\|_{L^2(\Omega)} \leq \frac{1}{\alpha} \|J'(z_i)\|_{L^2(\Omega)} \to 0\quad \text{as } i \to \infty\,,
\end{align*}
and thus $z_i \to z^*$ in $L^2(\Omega)$. Further, by continuity of $J$ it follows that $J(z_i) \to J(z^*)$.
\end{proof}
\subsection{Projected gradient descent}
\label{subsection41}
Consider now problem \cref{eq:3.1} with
\begin{align*}
-\infty < z_{\min} < z_{\max} < \infty\quad \text{a.e.~in}\ \Omega \,,
\end{align*}
i.e., $\mathcal Z \subsetneq L^2(\Omega)$.
The application of \cref{alg:gradient descent} to feasible $z_i$ might lead to infeasibility of $z_i - \eta J'(z_i)$ even for small stepsizes $\eta >0$. On the other hand, considering only those $\eta >0$ for which $z_i - \eta J'(z_i)$ stays feasible is not viable since this might result in very small step sizes $\eta$.
To incorporate these constraints we use the projection $P_{\mathcal Z}$ onto $\mathcal Z$ given by
\begin{equation}\label{eq:projection}
P_{\mathcal Z}(z)(\pmb x) = P_{[z_{\min}(\pmb x),z_{\max}(\pmb x)]}(z(\pmb x)) = \max(z_{\min}(\pmb x),\min(z(\pmb x),z_{\max}(\pmb x)))\,,
\end{equation}
and perform a line search along the projected path $\{P_{\mathcal{Z}}(z_i - \eta J'(z_i)):\ \eta > 0\}$.
One can show (\cite[Lemma 1.10]{HinzePinnauUlbrich}) that the variational inequality \cref{eq:gradient2} is equivalent to $z^* - P_{\mathcal Z}(z^* - J'(z^*)) = 0$.
This leads to \cref{alg:projected gradient descent}, which is justified by \cref{theorem:projgraddesc}.
\begin{algorithm}[t]
\caption{Projected gradient descent}
\label{alg:projected gradient descent}
Input: feasible starting value $z \in \mathcal Z$
\begin{algorithmic}[1]
\WHILE{$\| z - P_{\mathcal Z}(z - J'(z)) \|_{L^2(\Omega)} >$TOL}
\STATE{find step size $\eta$ using \cref{alg:projected Armijo}}
\STATE{set $z := P_{\mathcal Z}(z - \eta J'(z))$}
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[t]
\caption{Projected Armijo rule}
\label{alg:projected Armijo}
Input: current $z$, parameters $\beta,\gamma \in (0,1)$\\
Output: step size $\eta > 0$
\begin{algorithmic}[1]
\STATE{set $\eta := 1$}
\WHILE{$J(P_{\mathcal Z}(z - \eta J'(z)))- J(z) > -\frac{\gamma}{\eta} \|z-P_{\mathcal Z}(z - \eta J'(z))\|^2_{L^2(\Omega)}$}
\STATE{set $\eta := \beta \eta$}
\ENDWHILE
\end{algorithmic}
\end{algorithm}
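In the same simplified discrete setting, the next sketch (again our illustration) implements \cref{alg:projected gradient descent} together with \cref{alg:projected Armijo}; the pointwise projection \cref{eq:projection} reduces to a componentwise clip on nodal coefficient vectors.
\begin{verbatim}
import numpy as np

def project(z, z_min, z_max):
    # Pointwise projection: max(z_min, min(z, z_max)).
    return np.clip(z, z_min, z_max)

def projected_gradient_descent(J, Jprime, z0, z_min, z_max, beta=0.5,
                               gamma=1e-4, tol=1e-8, max_iter=500):
    z = project(np.asarray(z0, dtype=float), z_min, z_max)
    for _ in range(max_iter):
        g = Jprime(z)
        if np.linalg.norm(z - project(z - g, z_min, z_max)) <= tol:
            break
        # Algorithm 4: backtrack along the projected path.
        eta, Jz = 1.0, J(z)
        z_new = project(z - eta * g, z_min, z_max)
        while J(z_new) - Jz > -(gamma / eta) * np.sum((z - z_new) ** 2):
            eta *= beta
            z_new = project(z - eta * g, z_min, z_max)
        z = z_new
    return z
\end{verbatim}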
\begin{theorem}\label{theorem:projgraddesc}
For feasible starting values $z_0 \in \mathcal Z$, the sequence $\{z_i\}$ generated by \cref{alg:projected gradient descent} satisfies
\begin{align*}
\lim_{i \to \infty} \|z_i - P_{\mathcal Z}(z_i - J'(z_i))\|_{L^2(\Omega)} = 0\,,
\end{align*}
where $P_{\mathcal Z}$ is defined by \cref{eq:projection}. Moreover, the sequence $\{z_i\}$ converges to the unique solution $z^*$ of \cref{eq:3.1}.
\end{theorem}
\begin{proof}
For the proof of the first result we refer to \cite[Theorem 2.4]{HinzePinnauUlbrich}.
By construction $J(z_i)$ is monotonically decreasing in $i$ and $J(z) \geq 0$ for all $z \in L^2(\Omega)$. Thus we know $\lim_{i \to \infty} J(z_i) = \widetilde J = \inf_{i \in \mathbb N} J(z_i)$. Together with the projected Armijo rule in \cref{alg:projected Armijo} this further implies
\begin{align*}
J(z_{i+1}) - J(z_{i}) \leq -\frac{\gamma}{\eta_i}\|z_i -P_{\mathcal Z}(z_i - \eta_i J'(z_i))\|_{L^2(\Omega)}^2 = -\frac{\gamma}{\eta_i}\|z_{i} - z_{i+1}\|_{L^2(\Omega)}^2 \to 0\,,
\end{align*}
and thus $z_i \to \widetilde z$ in $L^2(\Omega)$ as $i \to \infty$, for some $\widetilde z \in \mathcal Z$.
By continuity of $\|\cdot\|_{L^2(\Omega)}$, continuity of $J'(\cdot)$ and continuity of $P_{\mathcal Z}(\cdot)$ we know
\begin{align*}
\lim_{i \to \infty} \|z_i - P_{\mathcal Z}(z_i - J'(z_i)) \|_{L^2(\Omega)} &= \|\lim_{i \to \infty} z_i - P_{\mathcal Z}(\lim_{i \to \infty} z_i - J'(\lim_{i \to \infty}z_i) )\|_{L^2(\Omega)}\\
&= \|\widetilde z - P_{\mathcal Z}(\widetilde z - J'(\widetilde z)) \|_{L^2(\Omega)} = 0\,,
\end{align*}
which is equivalent to
\begin{align*}
\widetilde z - P_{\mathcal Z}(\widetilde z - J'(\widetilde z)) = 0\,.
\end{align*}
Thus $\widetilde z$ satisfies the variational inequality \cref{eq:gradient2} and is the unique minimizer $z^*$ of \cref{eq:3.1}.
\end{proof}
\section{Discretization of the problem and error expansion}
\label{sec:5}
In the following we consider an approximation/discretization of problem \cref{eq:1.1,eq:1.2,eq:1.3,eq:1.4}.
Given $s \in \mathbb N$ and $\pmb y \in \Xi$, we notice that truncating the sum in \cref{eq:1.9} after $s$ terms is the same as setting $y_j = 0$ for $j \geq s+1$.
For every $\pmb y \in \Xi$ we denote the unique solution of the parametric weak problem \cref{eq:33b} corresponding to the dimensionally truncated diffusion coefficient $a(\cdot,(y_1,y_2,\ldots,y_s,0,0,\ldots))$ by $u_s(\cdot ,\pmb y,z) := u(\cdot , (y_1,y_2,\ldots,y_s,0,0,\ldots),z)$. Similarly we write $q_s(\cdot,\pmb y,z) := q(\cdot , (y_1,y_2,\ldots,y_s,0,0,\ldots),z)$ for any $\pmb y \in \Xi$ for the unique solution of the adjoint parametric weak problem \cref{eq:adjointparametricweakproblem} corresponding to the dimensionally truncated diffusion coefficient and truncated right-hand side $u_s(\cdot,\pmb y,z)-u_0$.
We further assume that we have access only to a finite element discretization $u_{s,h}(\cdot,\pmb y,z)$ of the truncated solution to \cref{eq:33b}, to be defined precisely in \cref{section:FE discretization}, and we write $q_{s,h}(\cdot,\pmb y,z)$ for the truncated adjoint state corresponding to $u_{s,h}(\cdot,\pmb y,z)$.
By abuse of notation we also write $u_s(\cdot,\pmb y,z) = u_s(\cdot,\pmb y_{\{1:s\}},z) =S_{\pmb y_{\{1:s\}}}z$ and $q_s(\cdot,\pmb y,z) =q_s(\cdot,\pmb y_{\{1:s\}},z)$ in conjunction with $u_{s,h}(\cdot,\pmb y,z) = u_{s,h}(\cdot,\pmb y_{\{1:s\}},z) = S_{\pmb y_{\{1:s\}},h}z$ and $q_{s,h}(\cdot,\pmb y,z)= q_{s,h}(\cdot,\pmb y_{\{1:s\}},z)$ for $s$-dimensional $\pmb y_{\{1:s\}} \in \Xi_s:= \left[-\frac{1}{2},\frac{1}{2}\right]^s$. Here and in the following $\{1:s\}$ is a shorthand notation for the set $\{1,2,\ldots,s\}$ and $\pmb y_{\{1:s\}}$ denotes the variables $y_j$ with $j \in \{1:s\}$.
Finally we use an $n$-point quasi-Monte Carlo approximation for the integral over $\Xi_s$ leading to the following discretization of \cref{eq:3.1}
\begin{align}
\min_{z \in \mathcal Z} J_{s,h,n}(z)\,, \quad J_{s,h,n}(z) := \frac{1}{2n} \sum_{i=1}^{n} \| S_{\pmb y^{(i)},h} z -u_0\|^2_{L^2(\Omega)} + \frac{\alpha}{2} \| z\|^2_{L^2(\Omega)}\,, \label{eq:QMCFEobjective}
\end{align}
for quadrature points $\pmb y^{(i)} \in \Xi_s$, $i \in \{1,\ldots,n\}$, to be defined precisely in \cref{subsection:QMC}.
In analogy to \cref{eq:gradient3} it follows that the gradient of $J_{s,h,n}$, i.e., the representer of the Fr\'{e}chet derivative of $J_{s,h,n}$ is given by
\begin{align*}
J'_{s,h,n}(z) = \frac{1}{n} \sum_{i=1}^n q_{s,h}(\cdot,\pmb y^{(i)},z) + \alpha z\,.
\end{align*}
Due to the positive weights of the quadrature rule, \cref{eq:QMCFEobjective} is still a convex minimization problem. Existence and uniqueness of the solution $z^{*}_{s,h,n}$ of \cref{eq:QMCFEobjective} follow by the previous arguments. Quasi-Monte Carlo methods are designed to have convergence rates superior to Monte Carlo methods. Other candidates for obtaining faster rates of convergence include, e.g., sparse grid methods, but the latter involve negative weights, meaning that the corresponding discretized optimization problem will be generally non-convex, see, e.g., \cite{SparseGrids}.
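As an illustration of how the discretized gradient might be evaluated, the following sketch (our addition) combines a randomly shifted rank-one lattice rule on $\left[-\frac{1}{2},\frac{1}{2}\right]^s$, one standard family of QMC rules, with an abstract adjoint solver. The generating vector below is a hypothetical placeholder; in practice it would be obtained, e.g., by a component-by-component construction using the weights of \cref{section:optimalweights}.
\begin{verbatim}
import numpy as np

def shifted_lattice_points(n, zgen, shift):
    # y^(i) = frac(i * zgen / n + shift) - 1/2, i = 0, ..., n-1.
    i = np.arange(n).reshape(-1, 1)
    return np.mod(i * zgen / n + shift, 1.0) - 0.5

def qmc_gradient(adjoint_solver, z, n, zgen, shift, alpha):
    # J'_{s,h,n}(z) = (1/n) * sum_i q_{s,h}(., y^(i), z) + alpha * z.
    pts = shifted_lattice_points(n, zgen, shift)
    q_bar = sum(adjoint_solver(y, z) for y in pts) / n
    return q_bar + alpha * z

# Hypothetical smoke test with a dummy adjoint q(., y, z) = -y_1 * z.
rng = np.random.default_rng(0)
zgen = np.array([1, 19, 27])            # placeholder generating vector
g = qmc_gradient(lambda y, z: -y[0] * z, np.ones(4), n=64,
                 zgen=zgen, shift=rng.uniform(size=3), alpha=0.5)
\end{verbatim}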
\begin{theorem}\label{theorem:theorem51}
Let $z^*$ be the unique minimizer of \cref{eq:3.1} and let $z^{*}_{s,h,n}$ be the unique minimizer of \cref{eq:QMCFEobjective}. It holds that
\begin{align}\label{eq:convergence}
\|z^*-z^{*}_{s,h,n}\|_{L^2(\Omega)} \leq \frac{1}{\alpha} \left\| \int_{\Xi} q(\cdot,\pmb y,z^*)\, \mathrm d\pmb y - \frac{1}{n} \sum_{i = 1}^{n} q_{s,h}(\cdot,\pmb y^{(i)},z^*)\ \right\|_{L^2(\Omega)} \,,
\end{align}
for quadrature points $\pmb y^{(i)} \in \left[-\frac{1}{2},\frac{1}{2}\right]^s$, $i \in \{1,\ldots,n\}$.
\end{theorem}
\begin{proof}
By the optimality of $z_{s,h,n}^*$ it holds for all $z \in \mathcal Z$ that
$\langle J'_{s,h,n}(z_{s,h,n}^{*}), z - z_{s,h,n}^{*} \rangle \geq 0$
and thus in particular $\langle J'_{s,h,n}(z_{s,h,n}^{*}), z^* - z_{s,h,n}^{*} \rangle \geq 0$.
Similarly it holds for all $z \in \mathcal Z$ that
$\langle J'(z^{*}), z - z^{*} \rangle \geq 0$ and thus in particular $\langle -J'(z^*) , z^* - z_{s,h,n}^{*} \rangle \geq 0$.
Adding these inequalities leads to
\begin{align*}
\langle J'_{s,h,n} (z_{s,h,n}^{*}) - J'(z^*), z^* -z_{s,h,n}^{*} \rangle \geq 0\,.
\end{align*}
Thus
\begin{align*}
\alpha \|z^* - z^{*}_{s,h,n} \|^2_{L^2(\Omega)} &\leq \alpha \| z^* - z^{*}_{s,h,n} \|^2_{L^2(\Omega)} + \left\langle J'_{s,h,n}(z^{*}_{s,h,n}) - J'(z^*) , z^* - z^{*}_{s,h,n} \right\rangle\\
&=\left\langle J'_{s,h,n}(z^{*}_{s,h,n}) - \alpha z^{*}_{s,h,n} - J'(z^*) + \alpha z^*, z^* - z^*_{s,h,n} \right\rangle\\
&=\left\langle J'_{s,h,n}(z^{*}_{s,h,n}) - \alpha z^{*}_{s,h,n} - J'_{s,h,n}(z^*) + \alpha z^*, z^* - z^*_{s,h,n} \right\rangle\\
&\quad + \left\langle J'_{s,h,n}(z^*) - \alpha z^* - J'(z^*) + \alpha z^*, z^* - z^*_{s,h,n} \right\rangle\\
&=- \frac{1}{n} \sum_{i = 1}^{n} \|u_{s,h}(\cdot,\pmb y^{(i)},z_{s,h,n}^*) -u_{s,h}(\cdot,\pmb y^{(i)},z^*)\|^2_{L^2(\Omega)}\\
&\quad + \left\langle \frac{1}{n}\sum_{i=1}^nq_{s,h}(\cdot, \pmb y^{(i)},z^*) - \int_{\Xi} q(\cdot,\pmb y, z^*)\, \mathrm d\pmb y, z^* - z^*_{s,h,n} \right\rangle\\
&\leq \left\|\frac{1}{n}\sum_{i=1}^n q_{s,h}(\cdot, \pmb y^{(i)},z^*) - \int_{\Xi} q(\cdot,\pmb y, z^*)\, \mathrm d\pmb y\right\|_{L^2(\Omega)} \| z^* - z^*_{s,h,n}\|_{L^2(\Omega)}\,,
\end{align*}
where in the fourth step we used the fact that
$J'_{s,h,n}(z^{*}_{s,h,n}) - \alpha z^{*}_{s,h,n} - J'_{s,h,n}(z^*) + \alpha z^*=\frac{1}{n} \sum_{i=1}^n (S_{\pmb y^{(i)},h}(S_{\pmb y^{(i)},h}z_{s,h,n}^*) - S_{\pmb y^{(i)},h}(S_{\pmb y^{(i)},h}z^*))$ together with the self-adjointness of the operator $S_{\pmb y^{(i)},h}$ in order to obtain
$
\frac{1}{n}\sum_{i=1}^n\langle S_{\pmb y^{(i)},h}(S_{\pmb y^{(i)},h}z_{s,h,n}^*) -S_{\pmb y^{(i)},h}(S_{\pmb y^{(i)},h}z^*), z^* - z^*_{s,h,n} \rangle
=-\frac{1}{n}\sum_{i=1}^n\| u_{s,h}(\cdot,\pmb y^{(i)},z_{s,h,n}^*) -u_{s,h}(\cdot,\pmb y^{(i)},z^*)\|^2_{L^2(\Omega)}\,.
$
The result then follows from $\alpha > 0$.
\end{proof}
We can split up the error on the right-hand side in \cref{eq:convergence} into dimension truncation error, FE discretization error and QMC quadrature error as follows
\begin{align}\label{eq:errorexpansion}
\int_{\Xi} q(\pmb x,\pmb y,z)\, \mathrm d\pmb y - \frac{1}{n} \sum_{i = 1}^{n} q_{s,h}(\pmb x,\pmb y^{(i)},z)
&= \underbrace{\int_{\Xi} \left(q(\pmb x,\pmb y,z) - q_s(\pmb x,\pmb y,z)\right)\, \mathrm d\pmb y}_{\text{truncation error}}\\
&+ \underbrace{\int_{\Xi_s} \left( q_s(\pmb x,\pmb y_{\{1:s\}},z) - q_{s,h}(\pmb x,\pmb y_{\{1:s\}},z)\right)\, \mathrm d \pmb y_{\{1:s\}}}_{\text{FE discretization error}}\notag\\
& + \underbrace{\int_{\Xi_s} q_{s,h}(\pmb x,\pmb y_{\{1:s\}},z)\, \mathrm d \pmb y_{\{1:s\}} - \frac{1}{n} \sum_{i = 1}^{n} q_{s,h}(\pmb x,\pmb y^{(i)},z)}_{\text{QMC quadrature error}}.\notag
\end{align}
These errors can be controlled as shown in \cref{theorem:Truncationerror}, \cref{theorem:FEapprox} and \cref{theorem:QMCerror} below; they are analyzed separately in the following subsections.
\subsection{Truncation error}
\label{subsection51}
The proof of the following theorem is motivated by \cite{Gantner}. However, in this paper we do not apply a bounded linear functional to the solution of the PDE $q(\cdot,\pmb y,z)$. Moreover, the right-hand side $u(\cdot,\pmb y,z) - u_0$ of the adjoint PDE depends on the parametric variable $\pmb y$. Further, we do not need the explicit assumption that the fluctuation operators $B_j$ (see below) are small with respect to the mean field operator $A(\pmb 0)$ (see below), i.e., $\sum_{j \geq 1}\|A^{-1}(\pmb 0)B_j\|_{\mathcal L(H_0^1(\Omega))} \leq \kappa < 2$, cf. \cite[Assumption 1]{Gantner}. Here and in the following $\mathcal L(H_0^1(\Omega))$ denotes the space of all bounded linear operators in $H_0^1(\Omega)$. For these reasons the proof of our result differs significantly from the proof in \cite{Gantner}.
To state the proof of the subsequent theorem, we introduce the following notation:
for a multi-index $\pmb \nu = (\nu_j)_j$ with $\nu_j \in \{0,1,2,\ldots\}$, we denote its order $|\pmb \nu| := \sum_{j \geq1} \nu_j$ and its support as $\text{supp}(\pmb \nu) := \{j \geq 1: \nu_j \geq 1\}$. Furthermore, we denote the countable set of all finitely supported multi-indices by
\begin{displaymath}
\mathcal D := \{ \pmb \nu \in \mathbb N_0^{\infty} : \left|\text{supp}(\pmb \nu)\right| < \infty\}\,.
\end{displaymath}
Let $b_j$ be defined by
\begin{align}
b_j := \frac{\|\psi_j\|_{L^{\infty}(\Omega)}}{a_{\min}}, \ j \geq 1\,.\label{b_j}
\end{align}
Then we write $\pmb b := (b_j)_{j\geq 1}$ and $\pmb b^{\pmb \nu} := \prod_{j \geq 1} b_j^{\nu_j}$.
\begin{theorem}[Truncation error]\label{theorem:Truncationerror}
Assume there exists $0<p<1$ such that
\begin{align*}
\sum_{j\geq 1}\|\psi_j\|_{L^{\infty}(\Omega)}^p < \infty\,.
\end{align*}
In addition let the $\psi_j$ be ordered such that $\|\psi_j\|_{L^{\infty}(\Omega)}$ are nonincreasing:
\begin{align*}
\|\psi_1\|_{L^{\infty}(\Omega)} \geq \|\psi_2\|_{L^{\infty}(\Omega)} \geq \|\psi_3\|_{L^{\infty}(\Omega)} \geq \cdots\,.
\end{align*}
Then for $z \in L^2(\Omega)$, for every $\pmb y \in \Xi$, and every $s \in \mathbb N$, the truncated adjoint solution $q_s(\cdot,\pmb y,z)$ satisfies
\begin{align}\label{eq:truncationerror1}
\left\| \int_{\Xi} \left(q(\cdot,\pmb y,z) - q_s(\cdot,\pmb y,z)\right)\, \mathrm d\pmb y \right\|_{L^2(\Omega)} \leq C \left(\|z\|_{L^2(\Omega)}+ \|u_0\|_{L^2(\Omega)}\right) s^{-\left(\frac{2}{p}-1\right)}\,,
\end{align}
for some constant $C > 0$ independent of $s$, $z$ and $u_0$.
\end{theorem}
\begin{proof}
First we note that the result holds trivially without the factor $s^{-(2/p-1)}$ in the error estimate. This is true since \cref{coro} holds for the special case $\pmb y = (y_1,y_2,\ldots,y_s,0,0,\ldots)$, and so \cref{eq:truncationerror1} holds without the factor $s^{-(2/p-1)}$ for $C = 2c_2C_q$.
As a consequence, it is sufficient to prove the result for sufficiently large $s$, since it will then hold for all $s$, by making, if necessary, an obvious adjustment of the constant.
To this end we define $A = A(\pmb y): H_0^1(\Omega) \to H^{-1}(\Omega)$ by
\begin{align*}
\langle A(\pmb y)w,v\rangle := b(\pmb y; w,v) \quad \forall v,w \in H_0^1(\Omega)\,,
\end{align*}
and $A_s(\pmb y) := A((y_1,y_2,\ldots,y_s,0,0,\ldots))$. Both $A(\pmb y)$ and $A_s(\pmb y)$ are boundedly invertible operators from $H_0^1(\Omega)$ to $H^{-1}(\Omega)$ since
for all $\pmb y \in \Xi$ it holds for $w \in H_0^1(\Omega)$ and $z \in H^{-1}(\Omega)$ that
\begin{align*}
\|A(\pmb y) w\|_{H^{-1}(\Omega)} = \sup_{v \in H_0^1(\Omega)} \frac{\langle A(\pmb y) w, v \rangle}{\|v\|_{H_0^1(\Omega)}} = \sup_{v \in H_0^1(\Omega)} \frac{b(\pmb y;w,v)}{\|v\|_{H_0^1(\Omega)}} \leq a_{\max} \|w\|_{H_0^1(\Omega)}\,,
\end{align*}
together with a similar bound for $A_s(\pmb y)$; and in the reverse direction, from \cref{theorem:theorem1}
\begin{align}
\|A^{-1}(\pmb y) z\|_{H_0^1(\Omega)} \leq \frac{\|z\|_{H^{-1}(\Omega)}}{a_{\min}}\,, \qquad \|A_s^{-1}(\pmb y) z\|_{H_0^1(\Omega)} \leq \frac{\|z\|_{H^{-1}(\Omega)}}{a_{\min}}\,. \label{eq:Amin}
\end{align}
It follows that the solution operator $S_{\pmb y}: L^2(\Omega) \to L^2(\Omega)$ defined in \cref{def:solutionoperator} can be written as $S_{\pmb y} = E_2 A^{-1}(\pmb y)E_1$, where $E_1: L^2(\Omega) \to H^{-1}(\Omega)$ and $E_2: H_0^1(\Omega) \to L^2(\Omega)$ are the embedding operators defined in \cref{sec:2}.
We define $B_j : H^1_0(\Omega) \to H^{-1}(\Omega)$ by $\langle B_j v,w\rangle := \langle
\psi_j \nabla v,\nabla w\rangle$ for all $v,w\in H^1_0(\Omega)$ so that $A(\pmb y)-A_s(\pmb y) =
\sum_{j\ge s+1} y_j\, B_j$, and define also
\begin{align*}
T_s(\pmb y) := \sum_{j\ge s+1} y_j\, A_s^{-1}(\pmb y) B_j = A_s^{-1}(\pmb y)(A(\pmb y)-A_s(\pmb y))\,.
\end{align*}
Then for all $v \in H_0^1(\Omega)$ we can write using \cref{eq:Amin}
\begin{align*}
\|A_s^{-1}(\pmb y)B_jv\|_{H_0^1(\Omega)} \leq \frac{\|B_jv\|_{H^{-1}(\Omega)}}{a_{\min}}
&= \frac{1}{a_{\min}} \sup_{w \in H_0^1(\Omega)} \frac{|\langle B_jv,w\rangle|}{\|w\|_{H_0^1(\Omega)}}\\
&= \frac{1}{a_{\min}} \sup_{w \in H_0^1(\Omega)} \frac{|\langle \psi_j \nabla v, \nabla w\rangle |}{\|w\|_{H_0^1(\Omega)}}\\
&\leq \frac{\|v\|_{H_0^1(\Omega)}}{a_{\min}} \|\psi_j\|_{L^{\infty}(\Omega)} = \|v\|_{H_0^1(\Omega)} b_j\,.
\end{align*}
We conclude that
\begin{align}
\sup_{\pmb y \in \Xi} \|A_s^{-1}(\pmb y)B_j\|_{\mathcal L(H_0^1(\Omega))} \leq b_j\,, \label{apple}
\end{align}
and consequently
\begin{align*}
\sup_{\pmb y \in \Xi} \|T_s(\pmb y)\|_{\mathcal L(H_0^1(\Omega))} \leq \frac{1}{2} \sum_{j\geq s+1} b_j\,.
\end{align*}
Let $s^*$ be such that $\sum_{j \geq s^*+1} b_j \leq \frac{1}{2}$, implying that $\sup_{\pmb y \in \Xi}\|T_s(\pmb y)\|_{\mathcal L(H_0^1(\Omega))} \leq \frac{1}{4}$. Then for all $s \geq s^*$, by the bounded invertibility of $A(\pmb y)$ and $A_s(\pmb y)$ for all $\pmb y \in \Xi$, we can write (omitting $\pmb y$ in the following) the inverse of $A$ in terms of the Neumann series, as
\[
A^{-1} \,=\, (I+T_s)^{-1} A_s^{-1} \,=\, \sum_{k\ge 0} (-T_s)^k A_s^{-1}\,.
\]
So
\[
A^{-1} - A_s^{-1} \,=\, \sum_{k\ge 1} (-T_s)^k A_s^{-1}.
\]
Now let $E:=E_1E_2$ be the embedding operator of $H_0^1(\Omega)$ in $H^{-1}(\Omega)$. Then we can write the adjoint solution $q$ as $q = A^{-1}E_1(E_2u-u_0)$ and
we write
\begin{align*}
q - q_s
&\,=\, A^{-1}E_1(E_2u-u_0) - A_s^{-1}E_1(E_2u_s - u_0) \\
&\,=\, A^{-1}E_1(E_2u_s-u_0) - A_s^{-1}E_1(E_2u_s - u_0) + A^{-1}E(u-u_s) \\
&\,=\, (A^{-1}-A_s^{-1})E_1(E_2u_s-u_0) + A^{-1}E(A^{-1}-A_s^{-1})E_1z \\
&\,=\, \sum_{k\ge 1} (-T_s)^k A_s^{-1}E_1(E_2u_s-u_0) + \sum_{\ell\ge 0} (-T_s)^\ell A_s^{-1} E\bigg(\sum_{k\ge 1} (-T_s)^k A_s^{-1} E_1z\bigg) \\
&\,=\, \sum_{k\ge 1} (-1)^k\, T_s^k q_s + \sum_{\ell\ge 0} \sum_{k\ge 1} (-1)^{\ell+k}\, T_s^\ell A_s^{-1} E\, T_s^k u_s\,.
\end{align*}
Thus
\begin{align*}
\int_\Xi (q - q_s) \,{\mathrm{d}}\pmb y
&\,=\, \sum_{k\ge 1} (-1)^k \underbrace{\int_\Xi T_s^k q_s \,{\mathrm{d}}\pmb y}_{=:\;{\rm Integral}_1}
+ \sum_{\ell\ge 0} \sum_{k\ge 1} (-1)^{\ell+k} \underbrace{\int_\Xi T_s^\ell A_s^{-1} E\, T_s^k u_s \,{\mathrm{d}}\pmb y}_{=:\;{\rm Integral}_2},
\end{align*}
giving
\begin{align*}
&\bigg\|\int_\Xi (q - q_s) \,{\mathrm{d}}{\pmb y}\bigg\|_{L^2}
\,\le\, c_2\,\bigg\|\int_\Xi (q - q_s) \,{\mathrm{d}}{\pmb y}\bigg\|_{H^1_0} \\
&\qquad\,\le\, c_2 \bigg(
\underbrace{\sum_{k\ge 1} \bigg\|\int_\Xi T_s^k q_s\,{\mathrm{d}}{\pmb y}\bigg\|_{H^1_0}}_{=:\,{\rm Term}_1}
+
\underbrace{\sum_{\ell\ge 0} \sum_{k\ge 1} \bigg\|\int_\Xi T_s^\ell A_s^{-1} E\, T_s^k u_s\,{\mathrm{d}}{\pmb y}\bigg\|_{H^1_0}}_{=:\,{\rm Term}_2}
\bigg).
\end{align*}
Noting that $B_j$ is independent of~${\pmb y}$, we write
\[
T_s^k \,=\, \bigg(\sum_{j\ge s+1} y_j\, A_s^{-1} B_j\bigg)^k
\,=\, \sum_{{\pmb \eta}\in\{s+1:\infty\}^k} \prod_{i=1}^k (y_{\eta_i}\, A_s^{-1} B_{\eta_i})\,,
\]
where we use the shorthand notation $\{s+1:\infty\}^k = \{s+1,s+2,\ldots,\infty\}^k$.
\bigskip
First we consider ${\rm Integral}_1$. We have
\begin{align*}
&{\rm Integral}_1 \,=\, \int_\Xi \sum_{{\pmb \eta}\in\{s+1:\infty\}^k} \bigg(\prod_{i=1}^k (y_{\eta_i}\, A_s^{-1} B_{\eta_i})\bigg)
q_s\,{\mathrm{d}}{\pmb y} \\
&\,=\, \sum_{{\pmb \eta}\in\{s+1:\infty\}^k}
\bigg(\int_{\Xi_{s+}} \prod_{i=1}^k y_{\eta_i}\,{\mathrm{d}}{\pmb y}_{\{s+1:\infty\}}\bigg)
\bigg(\int_{\Xi_s} \bigg(\prod_{i=1}^k (A_s^{-1} B_{\eta_i})\bigg)
q_s\,{\mathrm{d}}{\pmb y}_{\{1:s\}} \bigg),
\end{align*}
where we separated the integrals over ${\pmb y}_{\{1:s\}}$ and ${\pmb y}_{\{s+1:\infty\}} := (y_j)_{j\ge s+1}$, writing $\Xi_{s+} := \{(y_j)_{j\geq s+1}\,:\,y_j \in \left[-\frac{1}{2},\frac{1}{2}\right],\, j\geq s+1\}$; this separation is an essential step of the proof. The integral over ${\pmb y}_{\{s+1:\infty\}}$ is nonnegative due to the simple yet
crucial observation that
\begin{align} \label{eq:simple}
\int_{-\frac{1}{2}}^{\frac{1}{2}} y_j^{n}\,{\mathrm{d}} y_j \,=\,
\begin{cases}
0 & \mbox{if $n$ is odd}, \\
\frac{1}{2^{n}(n+1)} & \mbox{if $n$ is even}.
\end{cases}
\end{align}
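Indeed, as a quick sanity check of \cref{eq:simple}, for $n = 2$ one computes $\int_{-1/2}^{1/2} y_j^2 \,\mathrm{d}y_j = \big[y_j^3/3\big]_{-1/2}^{1/2} = \frac{1}{12} = \frac{1}{2^2(2+1)}$.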
Using \cref{coro:2.4} and \cref{apple}, the $H^1_0$-norm of the integral over ${\pmb y}_{\{1:s\}}$ can be estimated by
\begin{align*}
&\sup_{{\pmb y}_{\{1:s\}}\in \Xi_s} \bigg\|\bigg(\prod_{i=1}^k (A_s^{-1} B_{\eta_i})\bigg)
q_s\bigg\|_{H^1_0} \le\, C_1\, \prod_{i=1}^k b_{\eta_i}\,,
\end{align*}
with $C_1 := C_q\,\big(\|z\|_{L^2(\Omega)}+\|u_0\|_{L^2(\Omega)}\big)$.
Hence we obtain
\begin{align*}
{\rm Term}_1
&\,\le\, C_1\,
\sum_{k\ge 1}
\sum_{{\pmb \eta}\in\{s+1:\infty\}^k}
\bigg(\int_{\Xi_{s+}} \prod_{i=1}^k y_{\eta_i}\,{\mathrm{d}}{\pmb y}_{\{s+1:\infty\}}\bigg)\,
\prod_{i=1}^k b_{\eta_i} \\
&\,=\, C_1\,
\sum_{k\ge 1}
\int_{\Xi_{s+}}
\sum_{{\pmb \eta}\in\{s+1:\infty\}^k}
\bigg(\prod_{i = 1}^k y_{\eta_i} b_{\eta_i}\bigg)\,
\mathrm d{\pmb y}_{\{s+1:\infty\}} \\
&\,=\, C_1\,
\sum_{k\ge 1}
\sum_{\substack{|{\pmb \nu}|=k \\ \nu_j=0\;\forall j\le s}} \binom{k}{{\pmb \nu}}
\bigg(\prod_{j\ge s+1}\int_{-\frac{1}{2}}^{\frac{1}{2}} y_j^{\nu_j}\,{\mathrm{d}} y_j\bigg)\,
\prod_{j\ge s+1} b_j^{\nu_j} \\
&\,\leq\, C_1\,
\sum_{k\ge 2\;{\rm even}}
\sum_{\substack{|{\pmb \nu}|=k \\ \nu_j=0\;\forall j\le s \\ \nu_j\;{\rm even}\;\forall j\ge s+1}} \binom{k}{{\pmb \nu}}
\prod_{j\ge s+1} b_j^{\nu_j},
\end{align*}
where the second equality follows from the multinomial theorem with
${\pmb \nu}\in\mathcal D$ a multi-index,
$\binom{k}{{\pmb \nu}} = k!/(\prod_{j\ge 1}\nu_j!)$, while the last inequality
follows from~\eqref{eq:simple}.
Now we split the sum into a sum over $k \geq k^*$ and the initial terms $2\leq k< k^*$, and estimate
\begin{align*}
{\rm Term}_1
&\,\le\, C_1\, \sum_{k\ge k^*\;{\rm even}} \bigg(\sum_{j\ge s+1} b_j\bigg)^k
+ C_1\, \sum_{2\le k < k^* \;{\rm even}} k! \bigg( \prod_{j\ge s+1} \Big(1 + \sum_{t=2}^k b_j^t\Big) -1\bigg) \\
&\,\le\, C_1\, \cdot\, \frac{4}{3} \bigg(\sum_{j\ge s+1} b_j\bigg)^{k^*}
+ C_1\, k^*\, k^*!\, \cdot\, 2(\mathrm e-1) \sum_{j\ge s+1} b_j^2.
\end{align*}
The estimate for the sum over $k \geq k^*$ follows from the multinomial theorem, the geometric series formula, and that $\sum_{j\geq s+1} b_j \leq \frac{1}{2}$ for $s\geq s^*$. For the sum over $2 \leq k < k^*$ we use the fact that for $s \geq s^*$ we have $b_j \leq \frac{1}{2}$ for all $j \geq s+1$ and $\sum_{j \geq s+1} b_j^2 \leq \sum_{j\geq s+1} b_j \leq \frac{1}{2}$, and thus
\begin{align*}
\prod_{j\ge s+1} \Big(1 + \sum_{t=2}^k b_j^t\Big) -1 &= \prod_{j \geq s+1} \Big( 1 + b_j^2 \frac{1-b_j^{k-1}}{1-b_j} \Big) - 1 \leq
\prod_{j\geq s+1} \left(1+\frac{b_j^2}{1-b_j}\right) -1\\ &\leq \prod_{j \geq s+1} \left(1+2 b_j^2\right)-1 \leq \exp\left(2 \sum_{j\geq s+1} b_j^2\right) -1 \leq 2(\mathrm e-1)\sum_{j \geq s+1} b_j^2\,,
\end{align*}
since $\mathrm e^r-1 \leq r(\mathrm e-1)$ for $r \in [0,1]$.
\bigskip
Next we estimate ${\rm Integral}_2$ in a similar way. We have
\begin{align*}
{\rm Integral}_2 &=
\int_\Xi \! \sum_{{\pmb \mu}\in\{s+1:\infty\}^\ell} \!
\bigg(\prod_{i=1}^\ell (y_{\mu_i}\, A_s^{-1} B_{\mu_i})\bigg)
A_s^{-1} E \bigg(\sum_{{\pmb \eta}\in\{s+1:\infty\}^k}
\bigg(\prod_{i=1}^k (y_{\eta_i}\, A_s^{-1} B_{\eta_i})\bigg)\bigg) u_s\,{\mathrm{d}}{\pmb y}\\
\,&=\,
\sum_{{\pmb \mu}\in\{s+1:\infty\}^\ell} \sum_{{\pmb \eta}\in\{s+1:\infty\}^k}
\bigg(
\int_{\Xi_{s+}} \bigg(\prod_{i=1}^\ell y_{\mu_i}\bigg)
\bigg(\prod_{i=1}^k y_{\eta_i}\bigg)\,{\mathrm{d}}{\pmb y}_{\{s+1:\infty\}}\bigg) \\
&\qquad\qquad\qquad\qquad \cdot
\bigg(\int_{\Xi_s}
\bigg(\prod_{i=1}^\ell (A_s^{-1} B_{\mu_i})\bigg)
A_s^{-1} E \bigg(\prod_{i=1}^k (A_s^{-1} B_{\eta_i})\bigg) u_s\,{\mathrm{d}}{\pmb y}_{\{1:s\}}\bigg),
\end{align*}
where we again separated the integrals for ${\pmb y}_{\{s+1:\infty\}}$ and ${\pmb y}_{\{1:s\}}$. With \cref{eq:2.5} and \cref{apple} we have
\begin{align*}
&\sup_{{\pmb y}_{\{1:s\}}\in \Xi_s} \bigg\|\bigg(\prod_{i=1}^\ell (A_s^{-1} B_{\mu_i})\bigg)
A_s^{-1}E \bigg(\prod_{i=1}^k (A_s^{-1} B_{\eta_i})\bigg) u_s\bigg\|_{H^1_0} \le\, C_2\, \bigg(\prod_{i=1}^\ell b_{\mu_i}\bigg)\bigg(\prod_{i=1}^k b_{\eta_i}\bigg)\,,
\end{align*}
with $C_2 := c_1c_2\frac{c_1\|z\|_{L^2(\Omega)}}{a_{\min}^2}$.
Hence we obtain
\begin{align*}
{\rm Term}_2&\,\le\, C_2
\sum_{\ell\ge 0} \sum_{k\ge 1}
\sum_{{\pmb \mu}\in\{s+1:\infty\}^\ell} \sum_{{\pmb \eta}\in\{s+1:\infty\}^k}
\bigg(
\int_{\Xi_{s+}} \bigg(\prod_{i=1}^\ell y_{\mu_i}\bigg)
\bigg(\prod_{i=1}^k y_{\eta_i}\bigg){\mathrm{d}}{\pmb y}_{\{s+1:\infty\}}\bigg)\\
&\qquad\qquad\qquad\qquad\qquad \cdot\bigg(\prod_{i=1}^\ell b_{\mu_i}\bigg)\bigg(\prod_{i=1}^k b_{\eta_i}\bigg) \\
&\,=\, C_2\,
\sum_{\ell\ge 0} \sum_{k\ge 1}
\int_{\Xi_{s+}}
\sum_{{\pmb \mu}\in\{s+1:\infty\}^\ell} \sum_{{\pmb \eta}\in\{s+1:\infty\}^k}
\bigg(\prod_{i=1}^\ell y_{\mu_i} b_{\mu_i} \bigg)
\bigg(\prod_{i=1}^k y_{\eta_i} b_{\eta_i} \bigg)\,{\mathrm{d}}{\pmb y}_{\{s+1:\infty\}} \\
&\,=\, C_2\,
\sum_{\ell\ge 0} \sum_{k\ge 1}
\sum_{\substack{|{\pmb m}|=\ell \\ m_j=0\;\forall j\le s}}
\sum_{\substack{|{\pmb \nu}|=k \\ \nu_j=0\;\forall j\le s}} \binom{\ell}{{\pmb m}} \binom{k}{{\pmb \nu}}
\bigg(\prod_{j\ge s+1} \int_{-\frac{1}{2}}^{\frac{1}{2}} y_j^{m_j+\nu_j}\,{\mathrm{d}} y_j\bigg)
\prod_{j\ge s+1} b_j^{m_j+\nu_j} \\
&\,\leq\, C_2\,
\sum_{\ell\ge 0} \sum_{k\ge 1}
\sum_{\substack{|{\pmb m}|=\ell \\ m_j=0\;\forall j\le s}}
\sum_{\substack{|{\pmb \nu}|=k \\\nu_j=0\;\forall j\le s \\ m_j+\nu_j\;{\rm even}\;\forall j\ge s+1}}
\binom{\ell}{{\pmb m}} \binom{k}{{\pmb \nu}}
\prod_{j\ge s+1} b_j^{m_j+\nu_j}.
\end{align*}
Now we split the sums and estimate them in a similar way to the sums in $\rm {Term}_1$:
\begin{align*}
{\rm Term}_2
&\,\le\, C_2\, \sum_{\ell\ge 0} \sum_{\substack{k\ge 1 \\ \ell\ge \ell^* \;{\rm or}\; k\ge k^* \;{\rm or\;both}}}
\bigg(\sum_{j\ge s+1} b_j\bigg)^{\ell+k} \\
&\qquad + C_2\,
\sum_{0\le \ell < \ell^*} \sum_{\substack{1\le k < k^* \\ \ell+k\;{\rm even}}} \ell!\, k!
\sum_{\substack{|\pmb w|=\ell+k \\ w_j=0\;\forall j\le s \\ w_j\;{\rm even}\;\forall j\ge s+1}}
\bigg(\prod_{j\ge s+1} b_j^{w_j}\bigg)
\bigg(\sum_{|\pmb m|=\ell} \sum_{\substack{|\pmb \nu|=k \\ \pmb m+\pmb \nu = \pmb w}} 1\bigg)\,.
\end{align*}
For $s \geq s^*$ and denoting $P:= \sum_{j\geq s+1} b_j\,\leq\, \frac{1}{2}$ we can simplify the first part as
\begin{align*}
\sum_{\ell\ge 0} \sum_{\substack{k\ge 1 \\ \ell\ge \ell^* \;{\rm or}\; k\ge k^* \;{\rm or\;both}}}P^{\ell+k}
&=\sum_{\ell\geq \ell^\ast}\sum_{k=1}^{k^\ast-1}P^{\ell+k}+\sum_{\ell=0}^{\ell^\ast-1}\sum_{k\geq k^\ast}P^{\ell+k}+\sum_{\ell\geq \ell^\ast}\sum_{k\geq k^\ast}P^{\ell+k}\\
&=\sum_{\ell\geq \ell^\ast}\frac{P^{\ell+1}(1-P^{k^\ast-1})}{1-P} +\sum_{\ell=0}^{\ell^\ast-1} \frac{P^{\ell+k^\ast}}{1-P}+\sum_{\ell\geq \ell^\ast} \frac{P^{\ell+k^\ast}}{1-P}\\
&=\frac{P^{\ell^\ast+1}}{(1-P)^2}+ \frac{P^{k^\ast}}{(1-P)^2}- \frac{P^{\ell^\ast+k^\ast}}{(1-P)^2} \leq 8P^{\min(k^*,\ell^*)}\,.
\end{align*}
For a given multi-index $\pmb w$ satisfying $|\pmb w|=\ell+k$, $w_j=0$ for
all $j\le s$, and $w_j$ is even for all $j\ge s+1$, we need to count the
number of pairs of multi-indices $\pmb m$ and $\pmb \nu$ such that
$|\pmb m|=\ell$, $|\pmb \nu|=k$, and $\pmb m+\pmb \nu=\pmb w$ to estimate the second part. Clearly we have $w_j\le
\ell+k$, $m_j\le\ell$, $\nu_j\le k$ for all $j$. So the number of ways to
write any component $w_j$ as a sum $m_j + \nu_j$ is at most
$\min(w_j+1,\ell+1,k+1) \le \min(\ell,k)+1$. Moreover, since all $w_j$ are
even, there are at most $(\ell+k)/2$ nonzero components of $w_j$.
Therefore
\begin{align*}
\sum_{|\pmb m|=\ell} \sum_{\substack{|\pmb \nu|=k \\ \pmb m+\pmb \nu = \pmb w}} 1
\,\le\, [\min(\ell,k)+1]^{(\ell+k)/2}.
\end{align*}
Thus we obtain
\begin{align*}
{\rm Term}_2
&\,\le\, C_2\, \sum_{\ell\ge 0} \sum_{\substack{k\ge 1 \\ \ell\ge \ell^* \;{\rm or}\; k\ge k^* \;{\rm or\;both}}}
\bigg(\sum_{j\ge s+1} b_j\bigg)^{\ell+k} \\
&\qquad + C_2\, \sum_{0\le \ell < \ell^*} \sum_{\substack{1\le k < k^* \\ \ell+k\;{\rm even}}}
\ell!\, k!\,[\min(\ell,k)+1]^{(\ell+k)/2}
\bigg( \prod_{j\ge s+1} \Big(1 + \sum_{t=2}^{\ell+k} b_j^t\Big) -1\bigg) \\
&\,\le\, C_2\,\cdot\, 8 \left(\sum_{j\ge s+1} b_j\right)^{\min(\ell^*,k^*)} \\
&\qquad + C_2\, \ell^*\,k^*\, \ell^*! \,k^*!
[\min(\ell^*,k^*)+1]^{(\ell^*+k^*)/2}\,\cdot\,2(\mathrm{e}-1)\,\sum_{j\ge s+1} b_j^2.
\end{align*}
From \cite[Theorem 5.1]{Kuo2012QMCFEM} we know that
\begin{align*}
\sum_{j \geq s+1} b_j \leq \min{\left( \frac{1}{\frac{1}{p}-1},1 \right)} \left(\sum_{j \geq 1} b_j^p \right)^{\frac{1}{p}} s^{-\left(\frac{1}{p}-1\right)}\,.
\end{align*}
Further there holds
$
b_j^p \leq \frac{1}{j} \sum_{l = 1}^j b_l^p \leq \frac{1}{j} \sum_{l = 1}^{\infty} b_l^p
$
and therefore
\begin{align*}
\sum_{j\geq s+1} b_j^2 = \sum_{j\geq s+1} (b_j^p)^{\frac{2}{p}} \leq \sum_{j \geq s+1} \left(\frac{1}{j} \sum_{l =1}^{\infty} b_l^p\right)^{\frac{2}{p}} = \left(\sum_{j\geq s+1} j^{-\frac{2}{p}} \right) \left(\sum_{l = 1}^{\infty} b_l^p \right)^{\frac{2}{p}}\,,
\end{align*}
and
\begin{align*}
\sum_{j \geq s+1} j^{-\frac{2}{p}} \leq \int_s^{\infty} t^{-\frac{2}{p}}\, \mathrm dt = \frac{1}{\frac{2}{p}-1} s^{-\frac{2}{p}+1} \leq s^{-\left(\frac{2}{p}-1\right)}\,,
\end{align*}
for $0<p< 1$ as desired.
To balance the two terms within $\rm Term_1$ and the two terms within $\rm Term_2$, we now choose $\ell^* = k^* = \lceil(2-p)/(1-p)\rceil$. We see that $\rm Term_1$ and $\rm Term_2$ are then of the order $s^{-(2/p-1)}$, which is what we aimed to prove.
\end{proof}
\begin{remark}\label{remarkUTrunc}
By the same analysis as for $\rm Term_1$ in the proof of \cref{theorem:Truncationerror} with $q_s$ replaced by $u_s$, we get the following
\begin{align*}
\left\|\int_{\Xi}\left(u(\cdot,\pmb y,z)-u_s(\cdot,\pmb y,z)\right)\,\mathrm d\pmb y\,\right\|_{L^2(\Omega)}
&\leq c_2 \sum_{k\geq1} \left\|\int_{\Xi} T_s^k u_s(\cdot,\pmb y,z)\,\mathrm d\pmb y\right\|_{H_0^1(\Omega)} \\
&\leq \tilde{C}_1 \Bigg(\frac{4}{3} \Big( \sum_{j\geq s+1} b_j\Big)^{k^*} + k^*\, k^*!\,\cdot\,2(\mathrm e - 1)\sum_{j \geq s+1} b_j^2\Bigg)\\
&\leq \tilde{C}\,s^{-\left(\frac{2}{p}-1\right)}\,,
\end{align*}
where $\tilde{C}_1 = c_2 \frac{\|z\|_{H^{-1}(\Omega)}}{a_{\min}}$ and $\tilde{C}>0$ is a constant independent of $s$.
\end{remark}
\subsection{FE discretization}\label{section:FE discretization}
We follow \cite{Kuo2012QMCFEM} and, in order to obtain convergence rates for the finite element solutions, make the following additional assumptions:
\begin{align}
\text{$\Omega \subset \mathbb R^d$ is a convex, bounded polyhedron with plane faces}\,,\label{eq:A6}\\
\bar a \in W^{1,\infty}(\Omega)\,, \quad \sum_{j\geq 1} \|\psi_j\|_{W^{1,\infty}(\Omega)} < \infty\,,\label{eq:A4}
\end{align}
where $\|v\|_{W^{1,\infty}(\Omega)} := \max\{\|v\|_{L^{\infty}(\Omega)}\,, \|\nabla v\|_{L^{\infty}(\Omega)}\}$.
The assumption that the geometry of the computational domain $\Omega$ is approximated exactly by the FE mesh simplifies the forthcoming analysis; however, this assumption can be substantially relaxed. For example, standard results on FE analysis, as found, e.g., in \cite{Ciarlet}, imply corresponding results for domains $\Omega$ with curved boundaries.
In the following let $\{V_h\}_h$ denote a one-parameter family of subspaces $V_h \subset H_0^1(\Omega)$ of dimensions $M_h < \infty$, where $M_h$ is of exact order $h^{-d}$, with $d = 1,2,3$ denoting the spatial dimension.
We think of the spaces $V_h$ as spaces spanned by continuous, piecewise linear finite element basis functions on a sequence of regular, simplicial meshes in $\Omega$ obtained from an initial, regular triangulation of $\Omega$ by recursive, uniform bisection of simplices. Then it is well known (see details, e.g., in \cite{Gilbarg,Kuo2012QMCFEM}) that for functions $v \in H_0^1(\Omega) \cap H^2(\Omega)$ there exists a constant $C>0$, such that as $h \to 0$
\begin{align}\label{FEabschaetzung}
\inf_{v_h \in V_h} \|v-v_h\|_{H_0^1(\Omega)} \leq C\, h\, \|v\|_{H_0^1(\Omega) \cap H^2(\Omega)}\,,
\end{align}
where $\|v\|_{H_0^1(\Omega) \cap H^2(\Omega)} := ( \|v\|_{L^2(\Omega)}^2 + \|\Delta v\|_{L^2(\Omega)}^2)^{1/2}$. Note that we need the higher regularity in order to derive the asymptotic convergence rate as $h \to 0$.
For any $\pmb y \in \Xi$ and every $z \in L^2(\Omega)$, we define the parametric finite element approximations $u_h(\cdot,\pmb y,z) \in V_h$ and $q_h(\cdot,\pmb y,z) \in V_h$ by
\begin{align}
b(\pmb y;u_h(\cdot,\pmb y,z),v_h) = \langle z, v_h\rangle \quad \forall v_h \in V_h\,,\label{eq:FEapprox2}
\end{align}
and then
\begin{align}
b(\pmb y;q_h(\cdot,\pmb y,z),w_h) =\langle u_h(\cdot,\pmb y,z)-u_0,w_h\rangle \quad \forall w_h \in V_h\,,\label{eq:FEapprox3}
\end{align}
where $b(\pmb y;\cdot,\cdot)$ is the parametric bilinear form \cref{eq:2.2}. In particular the FE approximation \cref{eq:FEapprox2} and \cref{eq:FEapprox3} are defined pointwise with respect to $\pmb y \in \Xi$ so that the application of a QMC rule to the FE approximation is well defined.
To stress the dependence on $s$ for truncated $\pmb y = (y_1,\ldots,y_s,0,0,\ldots) \in \Xi$ we write $u_{s,h}$ and $q_{s,h}$ instead of $u_{h}$ and $q_h$ in \cref{eq:FEapprox2} and \cref{eq:FEapprox3}.
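Since \cref{eq:FEapprox2} and \cref{eq:FEapprox3} are defined pointwise in $\pmb y$, a QMC rule applied to the FE approximation reduces to a loop of independent deterministic solves: for each cubature point one assembles the stiffness matrix for $a(\cdot,\pmb y)$, solves for $u_{s,h}$, and then solves the adjoint problem with right-hand side $u_{s,h}-u_0$. The following Python sketch illustrates this structure in one spatial dimension with piecewise linear elements; the concrete choices of $\psi_j$, $z$ and $u_0$ are illustrative assumptions of ours, unrelated to the experiments in Section 6.
\begin{verbatim}
import numpy as np

def psi(j, x):
    # illustrative fluctuations with decay j**(-2); a(x,y) > 0 for |y_j| <= 1/2
    return j ** (-2.0) * np.sin(np.pi * j * x)

def fe_state_adjoint(y, M=128):
    """One deterministic FE solve pair for a fixed parameter vector y:
    state b(y; u_h, v) = <z, v> and adjoint b(y; q_h, w) = <u_h - u0, w>,
    with P1 elements on a uniform mesh of Omega = (0,1); the diffusion
    coefficient is evaluated at element midpoints (one-point quadrature)."""
    h = 1.0 / M
    x = np.linspace(0.0, 1.0, M + 1)
    mid = 0.5 * (x[:-1] + x[1:])
    a_mid = 1.0 + sum(y[j - 1] * psi(j, mid) for j in range(1, len(y) + 1))

    # tridiagonal stiffness matrix on the interior nodes 1,...,M-1
    A = np.zeros((M - 1, M - 1))
    for e in range(M):                      # element e = [x_e, x_{e+1}]
        k = a_mid[e] / h                    # local stiffness (a_e/h)[[1,-1],[-1,1]]
        for i in (e - 1, e):                # interior index = global node - 1
            for j in (e - 1, e):
                if 0 <= i < M - 1 and 0 <= j < M - 1:
                    A[i, j] += k if i == j else -k

    def mass_apply(v):                      # consistent P1 mass matrix (h/6)[1,4,1]
        w = 4.0 * v
        w[1:] += v[:-1]
        w[:-1] += v[1:]
        return (h / 6.0) * w

    z = x[1:-1] * (1.0 - x[1:-1])           # control, vanishing on the boundary
    u0 = np.sin(np.pi * x[1:-1])            # target state
    u = np.linalg.solve(A, mass_apply(z))
    q = np.linalg.solve(A, mass_apply(u - u0))   # adjoint RHS: u_h - u0
    return u, q

u_h, q_h = fe_state_adjoint(np.array([0.3, -0.1, 0.25]))
\end{verbatim}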
\begin{theorem}[Finite element discretization error] \label{theorem:FEapprox}
Under assumptions \cref{eq:A6} and \cref{eq:A4}, for $z \in \mathcal Z$, there holds the asymptotic convergence estimate as $h\to 0$
\begin{align*}\label{eq:theoremFEapproxNitsche}
\sup_{\pmb y \in \Xi} \|q(\cdot,\pmb y,z)-q_h(\cdot,\pmb y,z)\|_{L^2(\Omega)} \leq C h^2 \left(\|z\|_{L^2(\Omega)} + \|u_0\|_{L^2(\Omega)} \right)\,,
\end{align*}
and
\begin{align*}\label{eq:EqualWeightSup}
\left\| \int_{\Xi} \left(q(\cdot,\pmb y,z) - q_{h}(\cdot,\pmb y,z)\right)\, \mathrm d\pmb y \right\|_{L^2(\Omega)} \leq Ch^2 \left(\|z\|_{L^2(\Omega)} + \|u_0\|_{L^2(\Omega)} \right)\,,
\end{align*}
where $C>0$ is independent of $h$, $z$ and $u_0$ and $\pmb y$.
\end{theorem}
For truncated $\pmb y = (y_1,\ldots,y_s,0,0,\ldots) \in \Xi$, the result of \cref{theorem:FEapprox} clearly holds with $q$ and $q_h$ replaced by $q_s$ and $q_{s,h}$ respectively.
\begin{proof}
Let $S_{\pmb y,h}$ be the self-adjoint solution operator defined analogously to \cref{def:solutionoperator}, which for every $\pmb y\in \Xi$ assigns to each function $f \in L^2(\Omega)$ the unique solution $g_h(\cdot,\pmb y) \in V_h \subset H_0^1(\Omega) \subset L^2(\Omega)$. In particular, $S_{\pmb y,h}$ is the solution operator of the problem: find $g_h \in V_h$ such that $b(\pmb y;g_h,v_h) = \langle f,v_h\rangle$ $\forall v_h \in V_h$. Note that $S_{\pmb y,h}$ is a bounded linear operator for given $\pmb y \in \Xi$.
For every $\pmb y \in \Xi$, we can thus estimate
\begin{align}
\|q(\cdot,\pmb y,z)-q_h(\cdot,\pmb y,z)\|_{L^2(\Omega)} &= \|S_{\pmb y}(u(\cdot,\pmb y,z)-u_0)-S_{\pmb y,h}(u_h(\cdot,\pmb y,z)-u_0)\|_{L^2(\Omega)}\notag\\
&\leq \|S_{\pmb y}(u(\cdot,\pmb y,z)-u_0)-S_{\pmb y,h}(u(\cdot,\pmb y,z)-u_0)\|_{L^2(\Omega)}\notag\\
&\quad+ \|S_{\pmb y,h}u(\cdot,\pmb y,z)-S_{\pmb y,h}u_h(\cdot,\pmb y,z)\|_{L^2(\Omega)}\notag\\
&\leq \|(S_{\pmb y}-S_{\pmb y,h})(u(\cdot,\pmb y,z)-u_0)\|_{L^2(\Omega)}\notag\\
&\quad+ \frac{c_1c_2}{a_{\min}} \|u(\cdot,\pmb y,z)-u_h(\cdot,\pmb y,z)\|_{L^2(\Omega)}\,.\label{eq:AUB2}
\end{align}
The last step is true because \cref{eq:2.5} holds for all $v \in H_0^1(\Omega)$ and therefore it holds in particular for $u_h \in V_h \subset H_0^1(\Omega)$. Hence we can bound $\|S_{\pmb y,h}\|_{\mathcal L(L^2(\Omega))} \leq \frac{c_1c_2}{a_{\min}}$.
We can now apply the Aubin--Nitsche duality argument (see, e.g., \cite{Gilbarg}) to bound \cref{eq:AUB2}:
for $w\in L^2(\Omega)$ it holds that
\begin{align}\label{eq:AubinNitscheTrick1}
\|w\|_{L^2(\Omega)} = \sup_{g \in L^2(\Omega) \setminus \{0\}} \frac{\langle g,w\rangle}{\|g\|_{L^2(\Omega)}}\,.
\end{align}
From \cref{eq:parametricweakproblem} and \cref{eq:FEapprox2} follows the Galerkin orthogonality: $b(\pmb y;u(\cdot,\pmb y,z)-u_h(\cdot,\pmb y,z), v_h) = 0$ for all $v_h \in V_h$. Further we define $u_g(\cdot,\pmb y)$ for every $\pmb y \in \Xi$ as the unique solution of the problem: find $u_g(\cdot,\pmb y) \in H_0^1(\Omega)$ such that
\begin{align*}
b(\pmb y;u_g(\cdot,\pmb y),w) = \langle g,w\rangle \quad \forall w \in H_0^1(\Omega)\,,
\end{align*}
which leads together with the choice $w:= u - u_h$ and the Galerkin orthogonality of the FE discretization to
\begin{align*}
\langle g, u(\cdot,\pmb y,z) - u_h(\cdot,\pmb y,z) \rangle &= b(\pmb y;u_g(\cdot,\pmb y),u(\cdot,\pmb y,z) - u_h(\cdot,\pmb y,z) )\\
&= b(\pmb y; u_g(\cdot, \pmb y) - v_h, u(\cdot,\pmb y,z) - u_h(\cdot,\pmb y,z) )\\
&\leq a_{\max} \|u_g(\cdot,\pmb y) - v_h\|_{H_0^1(\Omega)} \|u(\cdot,\pmb y,z) - u_h(\cdot,\pmb y,z)\|_{H_0^1(\Omega)}\,.
\end{align*}
With \cref{eq:AubinNitscheTrick1} we get for every $\pmb y \in \Xi$ that
\begin{align*}
\|u(\cdot,\pmb y,z) - u_h(\cdot,\pmb y,z)\|_{L^2(\Omega)} &= \sup_{g \in L^2(\Omega) \setminus \{0\}} \frac{\langle g, u(\cdot,\pmb y,z) - u_h(\cdot,\pmb y,z) \rangle}{\|g\|_{L^2(\Omega)}}\\
&\leq a_{\max} \|u(\cdot,\pmb y,z) - u_h(\cdot,\pmb y,z)\|_{H_0^1(\Omega)} \sup_{g \in L^2(\Omega) \setminus \{0\}} \left\{ \inf_{v_h \in V_h} \frac{\|u_g(\cdot,\pmb y) - v_h\|_{H_0^1(\Omega)} }{\|g\|_{L^2(\Omega)}} \right\}\,.
\end{align*}
Now from \cref{FEabschaetzung} we infer for every $\pmb y \in \Xi$ that
\begin{align*}
\inf_{v_h \in V_h} \|u_g(\cdot,\pmb y) - v_h\|_{H_0^1(\Omega)} \leq C_3\,h\, \|u_g(\cdot,\pmb y) \|_{H_0^1(\Omega) \cap H^2(\Omega)}
\leq C_4 C_3 \,h\, \|g\|_{L^2(\Omega)}\,,
\end{align*}
where $C_3$ is the constant in \cref{FEabschaetzung}. The last step follows from \cite[Theorem 4.1]{Kuo2012QMCFEM} with $t=1$, and $C_4$ is the constant in that theorem.
For every $\pmb y \in \Xi$, we further obtain with C\'{e}a's lemma, \cref{FEabschaetzung} and \cite[Theorem 4.1]{Kuo2012QMCFEM} that
\begin{align*}
\|u(\cdot,\pmb y,z) - u_h(\cdot,\pmb y,z)\|_{H_0^1(\Omega)}
&\leq \frac{a_{\max}}{a_{\min}} \inf_{v_h \in V_h} \|u(\cdot,\pmb y,z) - v_h\|_{H_0^1(\Omega)}\\
&\leq \frac{a_{\max}}{a_{\min}} C_3 \,h\, \|u(\cdot,\pmb y,z)\|_{H_0^1(\Omega) \cap H^2(\Omega)}\\
&\leq \frac{a_{\max}}{a_{\min}} C_4 C_3 \,h\, \|z\|_{L^2(\Omega)}\,.
\end{align*}
Thus for every $\pmb y \in \Xi$ it holds that
\begin{align}
\|u(\cdot,\pmb y,z)-u_h(\cdot,\pmb y,z)\|_{L^2(\Omega)} \leq \frac{a_{\max}^2}{a_{\min}} C^2_4 C^2_3\, h^2\, \|z\|_{L^2(\Omega)}\,.\label{eq:combFE1}
\end{align}
By the same argument we get for every $\pmb y \in \Xi$ that
\begin{align}
\|(S_{\pmb y}-S_{\pmb y,h})(u(\cdot,\pmb y,z)-u_0)\|_{L^2(\Omega)}
\leq \frac{a_{\max}^2}{a_{\min}} C^2_4 C^2_3\, h^2 \left(\frac{c_1c_2}{a_{\min}}\|z\|_{L^2(\Omega)} + \|u_0\|_{L^2(\Omega)}\right).\label{eq:combFE2}
\end{align}
Combining \cref{eq:combFE1} and \cref{eq:combFE2} in \cref{eq:AUB2} leads for every $\pmb y \in \Xi$ to
\begin{align*}
\|q(\cdot,\pmb y,z)-q_h(\cdot,\pmb y,z)\|_{L^2(\Omega)}
&\leq \frac{a_{\max}^2}{a_{\min}} C^2_4 C^2_3 \,h^2\left( \frac{2\,c_1c_2}{a_{\min}} \|z\|_{L^2(\Omega)} + \|u_0\|_{L^2(\Omega)}\right) .
\end{align*}
The second result easily follows from the first result since
\begin{align*}
\left\| \int_{\Xi} \left(q(\cdot,\pmb y,z) - q_{h}(\cdot,\pmb y,z)\right)\, \mathrm d\pmb y\right\|_{L^2(\Omega)}^2 \leq \int_{\Xi} \| q(\cdot,\pmb y,z)-q_{h}(\cdot,\pmb y,z)\|_{L^2(\Omega)}^2\, \mathrm d\pmb y\,.
\end{align*}
\end{proof}
\subsection{Regularity of the adjoint solution}
\label{subsection54}
In the subsequent QMC error analysis we shall require bounds on the mixed first partial derivatives of the parametric solution $u$ as well as bounds on the mixed first partial derivatives of the adjoint parametric solution $q$.
For the solution $u(\cdot,\pmb y,z)$ of the state equation \cref{eq:parametricweakproblem} we know the following result.
\begin{lemma}\label{lemma:Derivative}
For every $z \in H^{-1}(\Omega)$, every $\pmb y \in \Xi$ and every $\pmb \nu \in \mathcal D$
we have
\begin{align*}
\|(\partial^{\pmb \nu} u)(\cdot,\pmb y,z) \|_{H_0^1(\Omega)} := \| \nabla (\partial^{\pmb \nu} u)(\cdot,\pmb y,z) \|_{L^2(\Omega)} \leq |\pmb \nu|!\, \pmb b^{\pmb \nu} \frac{\|z\|_{H^{-1}(\Omega)}}{a_{\min}}\,.
\end{align*}
\end{lemma}
This lemma can be found, e.g., in \cite{CohenDeVoreSchwab}.
In contrast to the parametric weak problem \cref{eq:parametricweakproblem}, the right-hand side of the adjoint parametric weak problem \cref{eq:adjointparametricweakproblem} depends on $\pmb y \in \Xi$.
In particular the problem is of the following form: for every $\pmb y \in \Xi$, find $q(\cdot,\pmb y,z) \in H_0^1(\Omega)$ such that
\begin{equation}\label{eq:4.11}
\int_{\Omega} a(\pmb x,\pmb y) \nabla q(\pmb x,\pmb y,z) \cdot \nabla v(\pmb x)\, \mathrm d\pmb x = \int_{\Omega} \tilde f(\pmb x,\pmb y,z) v(\pmb x)\, \mathrm d\pmb x\,, \quad v \in H_0^1(\Omega)\,,
\end{equation}
where the right-hand side $\tilde f(\pmb x,\pmb y,z) := u(\pmb x,\pmb y,z)-u_0(\pmb x)$ now also depends on $z \in L^2(\Omega)$ and $\pmb y \in \Xi$.
\cref{lemma:adjointDerivative} below gives a bound for the mixed derivatives of the solution $q(\cdot,\pmb y,z) \in H_0^1(\Omega)$ of \cref{eq:4.11}. Similar regularity results to the following can be found in \cite{KunothSchwab} (uniform case) and \cite{ChenGhattas} (log-normal case) for problems with stochastic controls $z$, depending on $\pmb y$. In particular, in the unconstrained case $\mathcal Z = L^2(\Omega)$ the KKT-system \cref{eq:KKT} reduces to an affine parametric linear saddle point operator and the theory, e.g., from \cite{KunothSchwab, Schwab} can be applied.
\begin{lemma}\label{lemma:adjointDerivative}For every $z \in L^2(\Omega)$, every $\pmb y \in \Xi$ and every $\pmb \nu \in \mathcal D$, we have for the corresponding adjoint state $q(\cdot,\pmb y,z)$ that
\begin{align*}
\|(\partial^{\pmb \nu} q)(\cdot,\pmb y,z) \|_{H_0^1(\Omega)} \leq (|\pmb \nu| + 1)!\, \pmb b^{\pmb \nu}\, C_q\,(\|z\|_{L^2(\Omega)} + \|u_0\|_{L^2(\Omega)})\,,
\end{align*}
where $C_q$ is defined in \cref{coro}.
\end{lemma}
\begin{proof}
The case $\pmb \nu = \pmb 0$ is given by the \emph{a priori} bound \cref{coro:2.4}. Now consider $\pmb \nu \neq \pmb 0$.
Applying the mixed derivative operator $\partial^{\pmb \nu}$ to \cref{eq:4.11} and using the Leibniz product rule, we obtain the identity
\begin{align}
\int_{\Omega} \left( \sum_{\pmb m\leq \pmb \nu} \begin{pmatrix} \pmb \nu \\ \pmb m \end{pmatrix} (\partial^{\pmb m}a)(\pmb x,\pmb y) \nabla (\partial^{\pmb \nu- \pmb m}q)(\pmb x,\pmb y,z) \cdot \nabla v(\pmb x) \right)\, \mathrm d\pmb x\notag\\
= \int_{\Omega} (\partial^{\pmb \nu} \tilde f)(\pmb x,\pmb y,z)\, v(\pmb x)\, \mathrm d\pmb x \quad \forall v \in H_0^1(\Omega)\,,
\end{align}
where by $\pmb m \leq \pmb \nu$ we mean $m_j \leq \nu_j$ for all $j$ and $\binom{\pmb \nu}{\pmb m} := \prod_{j\geq1} \binom{\nu_j}{m_j}$.
Due to the linear dependence of $a(\pmb x,\pmb y)$ on the parameters $\pmb y$, the partial derivatives $\partial^{\pmb m}$ of $a$ with respect to $\pmb y$ satisfy
\begin{align*}
(\partial^{\pmb m} a)(\pmb x,\pmb y) = \begin{cases}
a(\pmb x,\pmb y) & \text{if } \pmb m = \pmb 0\,,\\
\psi_j(\pmb x) & \text{if } \pmb m = \pmb e_j\,, \\
0 & \text{else}\,.
\end{cases}
\end{align*}
Setting $v = (\partial^{\pmb \nu}q)(\cdot,\pmb y,z)$ and separating out the $\pmb m= \pmb 0$ term, we obtain
\begin{align*}
\int_{\Omega} a |\nabla (\partial^{\pmb \nu} q)(\pmb x,\pmb y,z)|^2\, \mathrm d\pmb x = &-\sum_{j \in \text{supp}(\pmb \nu)} \nu_j \int_{\Omega} \psi_j(\pmb x) \nabla (\partial^{\pmb \nu- \pmb e_j}q)(\pmb x,\pmb y,z) \cdot \nabla(\partial^{\pmb \nu}q)(\pmb x,\pmb y,z)\, \mathrm d\pmb x\\
&+ \int_{\Omega} (\partial^{\pmb \nu} \tilde f)(\pmb x,\pmb y,z) (\partial^{\pmb \nu}q)(\pmb x, \pmb y,z)\, \mathrm d\pmb x\,,
\end{align*}
which yields
\begin{align*}
a_{\min} \| (\partial^{\pmb \nu} q )(\cdot,\pmb y,z) \|^2_{H_0^1(\Omega)} &\leq \sum_{j \geq 1} \nu_j \|\psi_j\|_{L^{\infty}(\Omega)} \|(\partial^{\pmb \nu- \pmb e_j} q)(\cdot,\pmb y,z)\|_{H_0^1(\Omega)} \| (\partial^{\pmb \nu}q)(\cdot,\pmb y,z)\|_{H_0^1(\Omega)}\\
&\quad+ \|(\partial^{\pmb \nu} \tilde f)(\cdot,\pmb y,z) \|_{H^{-1}(\Omega)} \| (\partial^{\pmb \nu}q)(\cdot,\pmb y,z)\|_{H_0^1(\Omega)} \,,
\end{align*}
and hence
\begin{align*}
\| (\partial^{\pmb \nu} q )(\cdot,\pmb y,z) \|_{H_0^1(\Omega)} &\leq \sum_{j \geq 1} \nu_j b_j \|(\partial^{\pmb \nu-\pmb e_j} q)(\cdot,\pmb y,z)\|_{H_0^1(\Omega)} + \frac{ \|(\partial^{\pmb \nu} \tilde f)(\cdot,\pmb y,z) \|_{H^{-1}(\Omega)} }{a_{\min} }\,.
\end{align*}
With $\tilde f(\cdot,\pmb y,z) = u(\cdot,\pmb y,z)-u_0(\cdot)$ this reduces to
\begin{align}
\| (\partial^{\pmb \nu} q )(\cdot,\pmb y,z) \|_{H_0^1(\Omega)} &\leq \sum_{j \geq 1} \nu_j b_j \|(\partial^{\pmb \nu- \pmb e_j} q)(\cdot,\pmb y,z)\|_{H_0^1(\Omega)} + \frac{ \|(\partial^{\pmb \nu} u)(\cdot,\pmb y,z) \|_{H^{-1}(\Omega)} }{a_{\min} }\,. \label{eq:4.14}
\end{align}
With \cref{lemma:Derivative} we get
\begin{align*}
\|(\partial^{\pmb \nu} u)(\cdot,\pmb y,z) \|_{H^{-1}(\Omega)} \leq c_1c_2 \|(\partial^{\pmb \nu}u)(\cdot,\pmb y,z)\|_{H_0^1(\Omega)} \leq c_1c_2\, |\pmb \nu|!\, \pmb b^{\pmb \nu} \frac{\|z\|_{H^{-1}(\Omega)}}{a_{\min}}\,,
\end{align*}
where $c_1,c_2>0$ are embedding constants, see \cref{c1,c2}.
Then \cref{eq:4.14} becomes, for $\pmb \nu \neq \pmb 0$,
\begin{align*}
\| (\partial^{\pmb \nu} q )(\cdot,\pmb y,z) \|_{H_0^1(\Omega)} &\leq \sum_{j \geq 1} \nu_j b_j \|(\partial^{\pmb \nu- \pmb e_j} q)(\cdot,\pmb y,z)\|_{H_0^1(\Omega)} + c_1c_2\, |\pmb \nu|!\, \pmb b^{\pmb \nu} \frac{\|z\|_{H^{-1}(\Omega)}}{a^2_{\min}} \,.
\end{align*}
Now we apply \cite[Lemma 9.1]{Kuo2016ApplicationOQ} to obtain the final bound. For this to work we need the above recursion to hold also in the case $\pmb \nu = \pmb 0$, which fails when compared with the \emph{a priori} bound \cref{coro:2.4}. We therefore enlarge the constants so that the recursion becomes
\begin{align*}
\| (\partial^{\pmb \nu} q )(\cdot,\pmb y,z) \|_{H_0^1(\Omega)} &\leq \sum_{j \geq 1} \nu_j b_j \|(\partial^{\pmb \nu- \pmb e_j} q)(\cdot,\pmb y,z)\|_{H_0^1(\Omega)} + |\pmb \nu|!\, \pmb b^{\pmb \nu}\, C_q\,(\|z\|_{L^2(\Omega)} + \|u_0\|_{L^2(\Omega)}) \,,
\end{align*}
which by \cite[Lemma 9.1]{Kuo2016ApplicationOQ} gives
\begin{align*}
\| (\partial^{\pmb \nu} q )(\cdot,\pmb y,z) \|_{H_0^1(\Omega)} &\leq \sum_{\pmb m \leq\pmb \nu} \begin{pmatrix} \pmb \nu \\ \pmb m \end{pmatrix} |\pmb m|!\ \pmb b^{\pmb m}\, |\pmb \nu -\pmb m|!\ \pmb b^{\pmb \nu -\pmb m}\, C_q\,(\|z\|_{L^2(\Omega)} + \|u_0\|_{L^2(\Omega)})\\
&= \pmb b^{\pmb \nu}\, C_q\,(\|z\|_{L^2(\Omega)} + \|u_0\|_{L^2(\Omega)}) \sum_{\pmb m \leq \pmb \nu} \begin{pmatrix}\pmb \nu \\ \pmb m \end{pmatrix} |\pmb m|!\ |\pmb \nu -\pmb m|!\\
&= \pmb b^{\pmb \nu}\, C_q\,(\|z\|_{L^2(\Omega)} + \|u_0\|_{L^2(\Omega)})\, (|\pmb \nu| + 1)!\,,
\end{align*}
where the last equality follows from \cite[equation 9.4]{Kuo2016ApplicationOQ} and $C_q$ is defined in \cref{coro}.
\end{proof}
\subsection{QMC integration error}\label{subsection:QMC}
In this section we review QMC integration over the $s$-dimensional unit cube $\Xi_s = \left[-\frac{1}{2},\frac{1}{2}\right]^s$ centered at the origin, for finite and fixed $s$. An $n$-point QMC approximation is an equal-weight rule of the form
\begin{align*}
\int_{\left[-\frac{1}{2},\frac{1}{2}\right]^s} F(\pmb y_{\{1:s\}})\, \mathrm d\pmb y_{\{1:s\}} \approx \frac{1}{n} \sum_{i=1}^{n} F(\pmb y^{(i)})\,,
\end{align*}
with carefully chosen points $\pmb y^{(1)},\ldots,\pmb y^{(n)} \in \Xi_{s}$.
We shall assume that for each $s\geq 1$ the integrand $F$ belongs to a weighted unanchored Sobolev space $\mathcal W_{\pmb \gamma,s}$, which is a Hilbert space containing functions defined over the unit cube $\left[-\frac{1}{2},\frac{1}{2}\right]^s$, with square integrable mixed first derivatives, with norm given by
\begin{align}\label{sobolevnorm}
\|F\|_{\mathcal W_{\pmb \gamma,s}} := \left( \sum_{{\mathfrak u }\subseteq \{1:s\}} \gamma_{{\mathfrak u}}^{-1} \int_{\left[-\frac{1}{2},\frac{1}{2}\right]^{|{\mathfrak u}|}}\ \left( \int_{\left[-\frac{1}{2},\frac{1}{2}\right]^{s-|{\mathfrak u|}}} \frac{\partial^{|{\mathfrak u}|}F}{\partial \pmb y_{{\mathfrak u}}}(\pmb y_{{\mathfrak u}};\pmb y_{\{1:s\}\setminus{\mathfrak u}})\, \mathrm d\pmb y_{\{1:s\}\setminus{\mathfrak u}} \right)^2\, \mathrm d\pmb y_{{\mathfrak u}} \right)^{\frac{1}{2}}\,,
\end{align}
where we denote by $\frac{\partial^{|{\mathfrak u}|}F}{\partial \pmb y_{{\mathfrak u}}}$ the mixed first derivative with respect to the active variables $y_j$ with $j \in {\mathfrak u} \subset \mathbb N$ and $\pmb y_{\{1:s\}\setminus {\mathfrak u}}$ denotes the inactive variables $y_j$ with $j \notin {\mathfrak u}$.
We assume there is a weight parameter $ \gamma_{{\mathfrak u}} \geq 0$ associated with each group of variables $\pmb y_{{\mathfrak u}} = (y_j)_{j \in {\mathfrak u}}$ with indices belonging to the set ${\mathfrak u}$. We require that if $\gamma_{{\mathfrak u}} = 0$ then the corresponding integral of the mixed first derivative is also zero and we follow the conventions that $0/0 = 0$, $\gamma_{\emptyset} = 1$ and by $\pmb \gamma$ we denote the set of all weights.
See \cref{section:optimalweights} for the precise choice of weights.
In this work we focus on shifted rank-1 lattice rules, which are QMC rules with quadrature points given by
\begin{align*}
\pmb y^{(i)} = \text{frac}\left(\frac{i\pmb z}{n}+\pmb{\Delta} \right) - \left(\frac{1}{2},\ldots,\frac{1}{2}\right)\,, \quad i = 1,\ldots,n\,,
\end{align*}
where $\pmb z \in \mathbb N^s$ is known as the generating vector, $\pmb{\Delta} \in [0,1]^s$ is the shift, and $\mathrm{frac}(\cdot)$ denotes the componentwise fractional part. The subtraction of $\left(\frac{1}{2},\ldots,\frac{1}{2}\right)$ ensures the translation from the usual unit cube $[0,1]^s$ to $\left[-\frac{1}{2},\frac{1}{2}\right]^s$.
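For concreteness, the following Python sketch generates the points of one randomly shifted rank-1 lattice rule. The generating vector below is the classical two-dimensional Fibonacci choice and serves only as a placeholder; the rules used in the experiments come from the CBC construction matched to the weights of \cref{section:optimalweights}.
\begin{verbatim}
import numpy as np

def shifted_lattice_points(z, n, shift):
    """Randomly shifted rank-1 lattice rule on [-1/2, 1/2]^s:
    y_i = frac(i*z/n + shift) - (1/2, ..., 1/2), i = 1, ..., n."""
    i = np.arange(1, n + 1)[:, None]
    return np.mod(i * np.asarray(z, float)[None, :] / n + shift, 1.0) - 0.5

rng = np.random.default_rng(0)
z = [1, 5]           # placeholder: Fibonacci generating vector for n = 13
pts = shifted_lattice_points(z, n=13, shift=rng.random(2))
\end{verbatim}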
\begin{theorem}[QMC quadrature error] \label{theorem:QMCerror}
For every $\pmb y_{\{1:s\}} \in \Xi_s$ let $q_{s,h}(\cdot,\pmb y_{\{1:s\}},z) \in V_h \subset H_0^1(\Omega)$ denote the dimensionally truncated adjoint FE solution corresponding to a control $z \in \mathcal Z$. Then for ${\mathfrak u} \subset \mathbb N$, $s,m \in \mathbb N$ with $n = 2^m$ and weights $\pmb \gamma = (\gamma_{ {\mathfrak u}})$, a randomly shifted lattice rule with $n$ points in $s$ dimensions can be constructed by a CBC algorithm such that the root-mean-square $L^2$-error $e_{s,h,n}$ for approximating the finite-dimensional integral $\int_{\Xi_s} q_{s,h}(\cdot,\pmb y_{\{1:s\}},z)\ \mathrm d\pmb y_{\{1:s\}}$ satisfies, for all $\lambda \in (\frac{1}{2},1]$,
\begin{align*}
e_{s,h,n} :=& \sqrt{\mathbb E_{\pmb{\Delta}} \left[ \left\|\int_{\left[-\frac{1}{2},\frac{1}{2}\right]^s} q_{s,h}(\cdot,\pmb y_{\{1:s\}},z)\, \mathrm d\pmb y_{\{1:s\}} - \frac{1}{n} \sum_{i = 1}^{n} q_{s,h}(\cdot,\pmb y^{(i)},z) \right\|_{L^2(\Omega)}^2 \right]}\\
\leq& \sqrt{c_2}\, C_{\pmb \gamma,s}(\lambda) \left(\frac{2}{n}\right)^{\frac{1}{2\lambda}}\, C_q\,(\|z\|_{L^2(\Omega)} + \|u_0\|_{L^2(\Omega)})\,,
\end{align*}
where
\begin{align*}
C_{\pmb \gamma,s}(\lambda) := \left( \sum_{{\mathfrak u} \subseteq \{1:s\}} \gamma_{{\mathfrak u}}^{\lambda} \left(\frac{2\zeta(2\lambda)}{(2\pi^2)^{\lambda}}\right)^{|{\mathfrak u}|} \right)^{\frac{1}{2\lambda}} \left(\sum_{{\mathfrak u} \subseteq \{1:s\}} \frac{((|{\mathfrak u}|+1)!)^2}{\gamma_{{\mathfrak u}}} \prod_{j \in {\mathfrak u}} b_j^2 \right)^{\frac{1}{2}}\,,
\end{align*}
where $\mathbb E_{\pmb{\Delta}}[\cdot]$ denotes the expectation with respect to the random shift which is uniformly distributed over $[0,1]^s$, and $\zeta(x) := \sum_{k=1}^{\infty} k^{-x}$ is the Riemann zeta function for $x>1$. Further, the $b_j$ are defined in \cref{b_j} and $C_q$ is defined in \cref{coro}.
\end{theorem}
\begin{proof}
We have
\begin{align*}
(e_{s,h,n})^2 &= \mathbb E_{\pmb{\Delta}} \left[ \int_{\Omega} \left| \int_{\Xi_s} q_{s,h}(\pmb x,\pmb y_{\{1:s\}},z)\, \mathrm d\pmb y_{\{1:s\}} - \frac{1}{n} \sum_{i = 1}^n q_{s,h}(\pmb x,\pmb y^{(i)},z) \right|^2 \mathrm d\pmb x \right]\\
&= \int_{\Omega} \mathbb E_{\pmb{\Delta}} \left[ \left| \int_{\Xi_s} q_{s,h}(\pmb x,\pmb y_{\{1:s\}},z)\, \mathrm d\pmb y_{\{1:s\}} - \frac{1}{n} \sum_{i = 1}^n q_{s,h}(\pmb x,\pmb y^{(i)},z) \right|^2 \right]\,\mathrm d\pmb x \\
&\leq \left( \sum_{\emptyset \neq \mathfrak u \subseteq \{1:s\}} \gamma_{{\mathfrak u}}^{\lambda} \left(\frac{2\zeta(2\lambda)}{(2\pi^2)^{\lambda}}\right)^{|{\mathfrak u|}} \right)^{\frac{1}{\lambda}} \left(\frac{2}{n}\right)^{\frac{1}{\lambda}}\int_{\Omega} \|q_{s,h}(\pmb x,\cdot,z)\|_{\mathcal W_{\pmb \gamma,s}}^2\, \mathrm d\pmb x\,,
\end{align*}
where we used Fubini's theorem in the second equality and \cite[Theorem 2.1]{Kuo2012QMCFEM} to obtain the inequality. Now from the definition of the $\mathcal W_{\pmb \gamma,s}$-norm (see \cref{sobolevnorm}), we have
\begin{align*}
&\int_{\Omega} \|q_{s,h}(\pmb x,\cdot,z)\|_{\mathcal W_{\pmb \gamma,s}}^2\, \mathrm d\pmb x\\
&= \int_{\Omega} \sum_{{\mathfrak u} \subseteq \{1:s\}} \frac{1}{\gamma_{{\mathfrak u}}} \int_{\left[-\frac{1}{2},\frac{1}{2}\right]^{|{\mathfrak u}|}} \left( \int_{\left[-\frac{1}{2},\frac{1}{2}\right]^{s-|{\mathfrak u}|}} \frac{\partial^{|\mathfrak u|}q_{s,h}}{\partial\pmb y_{{\mathfrak u}}}(\pmb x,(\pmb y_{{\mathfrak u}};\pmb y_{\{1:s\} \setminus {\mathfrak u}}),z)\, \mathrm d\pmb y_{\{1:s\}\setminus {\mathfrak u}} \right)^2\, \mathrm d\pmb y_{{\mathfrak u}}\, \mathrm d\pmb x\\
&\leq \sum_{{\mathfrak u} \subseteq \{1:s\}} \frac{1}{\gamma_{{\mathfrak u}}} \int_{\Omega} \int_{\left[-\frac{1}{2},\frac{1}{2}\right]^{|{\mathfrak u}|}} \int_{\left[-\frac{1}{2},\frac{1}{2}\right]^{s-|{\mathfrak u}|}} \left( \frac{\partial^{|{\mathfrak u}|}q_{s,h}}{\partial\pmb y_{{\mathfrak u}}}(\pmb x,(\pmb y_{{\mathfrak u}};\pmb y_{\{1:s\} \setminus {\mathfrak u}}),z) \right)^2\, \mathrm d\pmb y_{\{1:s\}\setminus {\mathfrak u}}\, \mathrm d\pmb y_{{\mathfrak u}}\, \mathrm d\pmb x\\
&= \sum_{{\mathfrak u} \subseteq \{1:s\}} \frac{1}{\gamma_{{\mathfrak u}}} \int_{\Xi_{s}} \left\| \frac{\partial^{|{\mathfrak u}|}q_{s,h}}{\partial \pmb y_{{\mathfrak u}}}(\cdot,\pmb y_{\{1:s\}},z) \right\|^2_{L^2(\Omega)}\, \mathrm d\pmb y_{\{1:s\}}\\
&\leq c_2 \sum_{{\mathfrak u} \subseteq \{1:s\}} \frac{1}{\gamma_{{\mathfrak u}}} \int_{\Xi_{s}} \left\| \frac{\partial^{|{\mathfrak u}|}q_{s,h}}{\partial \pmb y_{{\mathfrak u}}}(\cdot,\pmb y_{\{1:s\}},z) \right\|^2_{H_0^1(\Omega)}\, \mathrm d\pmb y_{\{1:s\}}\\
&\leq c_2 \sum_{{\mathfrak u} \subseteq \{1:s\}} \frac{1}{\gamma_{{\mathfrak u}}} \left( (|{\mathfrak u}|+1)!\, C_q\, (\|z\|_{L^2(\Omega)} + \|u_0\|_{L^2(\Omega)}) \prod_{j \in {\mathfrak u}} b_j \right)^2\,,
\end{align*}
where the first inequality uses the Cauchy--Schwarz inequality and the last inequality uses \cref{lemma:adjointDerivative}.
\end{proof}
\begin{remark}\label{remarkU}
From the proof of \cref{theorem:QMCerror} it can easily be seen that we can get an analogous result to \cref{theorem:QMCerror} by replacing $q_{s,h}$ with $u_{s,h}$ and using \cref{lemma:Derivative} instead of \cref{lemma:adjointDerivative} in the last step of the proof.
\end{remark}
\subsection{Optimal weights}\label{section:optimalweights}
In the following we choose weights $\gamma_{{\mathfrak u}}$ so that $C_{\pmb \gamma,s}(\lambda)$ in \cref{theorem:QMCerror} is bounded independently of $s$. To do so we follow and adjust the discussion in \cite{KuoNuyens2018} and therefore assume
\begin{align}\label{assump:psummability}
\sum_{j \geq 1} \|\psi_j\|_{L^{\infty}(\Omega)}^{p} < \infty\,,
\end{align}
for $p \in (0,1)$.
For any $\lambda$, $C_{\pmb \gamma,s}(\lambda)$ is minimized with respect to the weights $\gamma_{{\mathfrak u}}$ by
\begin{align}\label{eq:weights}
\gamma_{{\mathfrak u}} = \Bigg((|{\mathfrak u}|+1)! \prod_{j \in {\mathfrak u}} \frac{b_j}{\big(\frac{2\zeta(2\lambda)}{(2\pi^2)^{\lambda}}\big)^{1/2}} \Bigg)^{2/(1+\lambda)}\,,
\end{align}
see also \cite[Lemma 6.2]{Kuo2012QMCFEM}.
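Note that \cref{eq:weights} is of so-called product-and-order-dependent (POD) form, $\gamma_{\mathfrak u} = \Gamma_{|{\mathfrak u}|}\prod_{j \in {\mathfrak u}}\gamma_j$, which is exactly the input format expected by fast CBC constructions. A small Python sketch of the two factor sequences follows; the truncated evaluation of $\zeta(2\lambda)$ and the concrete choice of $b_j$ are illustrative assumptions.
\begin{verbatim}
from math import factorial, pi

def pod_weight_factors(b, lam, zeta_terms=10**5):
    """Factors of gamma_u = Gamma_{|u|} * prod_{j in u} gamma_j, cf. (eq:weights)."""
    zeta = sum(k ** (-2.0 * lam) for k in range(1, zeta_terms))  # crude zeta(2*lam)
    c = (2.0 * zeta / (2.0 * pi ** 2) ** lam) ** 0.5
    expo = 2.0 / (1.0 + lam)
    Gamma = [factorial(l + 1) ** expo for l in range(len(b) + 1)]  # order factors
    gamma = [(bj / c) ** expo for bj in b]                         # product factors
    return Gamma, gamma

b = [j ** (-2.0) for j in range(1, 101)]   # illustrative b_j
Gamma, gamma = pod_weight_factors(b, lam=0.6)
\end{verbatim}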
We substitute \cref{eq:weights} into $C_{\pmb \gamma,s}(\lambda)$ and simplify the expression to
\begin{align}\label{rmserror2}
C_{\pmb \gamma,s}(\lambda) = \left( \sum_{{\mathfrak u} \subseteq \{1:s\}} { \left((|{\mathfrak u}|+1)! \prod_{j \in {\mathfrak u}} b_j \left(\frac{2\zeta(2\lambda)}{(2\pi^2)^{\lambda}}\right)^{1/(2\lambda)} \right)^{2\lambda/(1+\lambda)}} \right)^{(1+\lambda)/(2\lambda)} \,.
\end{align}
Next we derive a condition on $\lambda$ under which \cref{rmserror2} is bounded independently of $s$.
Let $\phi_j := b_j \left(\frac{2\zeta(2\lambda)}{(2\pi^2)^{\lambda}}\right)^{1/(2\lambda)}$ and $k := \frac{2\lambda}{1+\lambda}$; then it holds that
\begin{align*}
\sum_{{\mathfrak u} \subseteq \{1:s\}} \left( (|{\mathfrak u}| + 1)! \prod_{j \in {\mathfrak u}} \phi_j \right)^k = \sum_{l=0}^s ((l+1)!)^k \sum_{\substack{{\mathfrak u} \subseteq \{1:s\}\\ |{\mathfrak u}| = l}} \prod_{j \in {\mathfrak u}} \phi_j^k \leq \sum_{l = 0}^s \frac{((l+1)!)^k}{l!} \left(\sum_{j=1}^s \phi_j^k\right)^l \,.
\end{align*}
With the ratio test we obtain that the right-hand side is bounded independently of $s$ if $\sum_{j=1}^{\infty} \phi_j^k < \infty$ and $k <1$. We have $\sum_{j=1}^{\infty} \phi_j^k = \left(\frac{2\zeta(2\lambda)}{(2\pi^2)^{\lambda}}\right)^{1/(1+\lambda)} \sum_{j=1}^{\infty} b_j^k < \infty$ if $k \geq p$, where $p$ is the summability exponent in \cref{assump:psummability}.
Thus we require
\begin{align}\label{eq:constraintsonlambda}
p \leq \frac{2\lambda}{1+\lambda} < 1 \quad \Leftrightarrow \quad \frac{p}{2-p} \leq \lambda < 1\,.
\end{align}
Since the best rate of convergence is obtained for $\lambda$ as small as possible, combining \cref{eq:constraintsonlambda} with $\lambda \in \left(\frac{1}{2},1\right]$ yields
\begin{align}\label{eq:choicelambda}
\lambda =
\begin{cases}
\frac{1}{2-2\delta} & \text{for all } \delta \in \left(0,\frac{1}{2}\right) \text{ if } p \in \left(0,\frac{2}{3} \right]\,,\\
\frac{p}{2-p} & \hfill \text{ if } p \in \left(\frac{2}{3} ,1\right)\,.
\end{cases}
\end{align}
\begin{theorem}[Choice of the weights]\label{theorem:choiceofweights}
Under assumption \cref{assump:psummability}, the choice of $\lambda$ as in \cref{eq:choicelambda} together with the choice of the weights \cref{eq:weights} ensures that the bound on $e_{s,h,n}$ is finite independently of $s$.
(However, $C_{\pmb \gamma,s}\left(\frac{1}{2-2\delta}\right) \to \infty$ as $\delta \to 0$ and $C_{\pmb \gamma,s}\left(\frac{p}{2-p}\right) \to \infty$ as $p \to (2/3)^+$.) In consequence under assumption \cref{assump:psummability} and the same assumptions as in \cref{theorem:QMCerror}, the root-mean-square error in \cref{theorem:QMCerror} is of order
\begin{align}
\kappa(n) :=
\begin{cases}
n^{-(1-\delta)} & \text{for all } \delta \in \left(0,\frac{1}{2}\right) \text{ if } p \in \left(0,\frac{2}{3} \right]\,,\\
n^{-(1/p - 1/2)} & \hfill \text{ if } p \in \left(\frac{2}{3} ,1\right)\,.
\end{cases}\label{kappa}
\end{align}
\end{theorem}
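The case distinction in \cref{eq:choicelambda} and \cref{kappa} is easily made explicit in code; in the following sketch the fixed value of $\delta$ is an illustrative choice of ours.
\begin{verbatim}
def qmc_lambda_and_rate(p, delta=0.05):
    """lambda from (eq:choicelambda) and the exponent r in kappa(n) = n**(-r)."""
    assert 0.0 < p < 1.0
    if p <= 2.0 / 3.0:
        return 1.0 / (2.0 - 2.0 * delta), 1.0 - delta   # rate n^{-(1-delta)}
    return p / (2.0 - p), 1.0 / p - 0.5                 # rate n^{-(1/p-1/2)}

print(qmc_lambda_and_rate(0.5))   # -> (0.526..., 0.95)
\end{verbatim}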
\subsection{Combined error and convergence rates}
\label{subsection56}
Combining the results of the preceding subsections gives the following theorem.
\begin{theorem}[Combined error]\label{theorem:combinederror}
Let $z^*$ be the unique solution of \cref{eq:3.1} and $z^*_{s,h,n}$ the unique solution of \cref{eq:QMCFEobjective}. Then under the assumptions of \cref{theorem:Truncationerror}, \cref{theorem:FEapprox}, \cref{theorem:QMCerror} and \cref{theorem:choiceofweights}, we have
\begin{align*}\label{eq:finalresult1}
\sqrt{\mathbb E_{\pmb{\Delta}}[\|z^*-z^*_{s,h,n}\|_{L^2(\Omega)}^2]} \leq \frac{C}{\alpha} (\|z^*\|_{L^2(\Omega)} + \|u_0\|_{L^2(\Omega)})\left( s^{-\frac{2}{p}+1} + h^2 + \kappa(n) \right)\,,
\end{align*}
where $\kappa(n)$ is given in \cref{kappa}.
\end{theorem}
\begin{proof}
Squaring \cref{eq:convergence} and using the expansion \cref{eq:errorexpansion} we get by taking expectation with respect to the random shift $\pmb \Delta$
\begin{align*}
\mathbb E_{\pmb{\Delta}} \left[ \|z^*-z^*_{s,h,n}\|^2_{L^2(\Omega)} \right]
&\leq \frac{2}{\alpha^2} \left\| \int_{\Xi} \left(q(\cdot,\pmb y,z^*) - q_s(\cdot,\pmb y,z^*)\right)\, \mathrm d\pmb y \right\|_{L^2(\Omega)}^2\\
&+ \frac{2}{\alpha^2} \left\| \int_{\Xi_s} \left(q_s(\cdot,\pmb y_{\{1:s\}},z^*) - q_{s,h}(\cdot,\pmb y_{\{1:s\}},z^*)\right)\, \mathrm d\pmb y_{\{1:s\}} \right\|_{L^2(\Omega)}^2\\
&\!\!\!\!\!\!\!\!\!+ \frac{1}{\alpha^2} \mathbb E_{\pmb{\Delta}} \left[ \left\| \int_{\Xi_s} q_{s,h}(\cdot,\pmb y_{\{1:s\}},z^*)\, \mathrm d\pmb y_{\{1:s\}} - \frac{1}{n} \sum_{i = 1}^{n} q_{s,h}(\cdot,\pmb y^{(i)},z^*) \right\|_{L^2(\Omega)}^2\right] \,.
\end{align*}
The result then immediately follows from \cref{theorem:Truncationerror}, \cref{theorem:FEapprox} and \cref{theorem:QMCerror}.
\end{proof}
Using the error bound for the control $z_{s,h,n}^*$ in \cref{theorem:combinederror} we obtain an error estimate for the state $u_{s,h}(\cdot,\pmb y,z_{s,h,n}^*)$ in the following corollary.
\begin{corollary}\label{finalcorollary}
Let $z^*$ be the unique solution of \cref{eq:3.1} and $z^*_{s,h,n}$ the unique solution of \cref{eq:QMCFEobjective}, then under the assumptions of \cref{theorem:combinederror} we have
\begin{align*}
&\sqrt{\mathbb E_{\pmb{\Delta}}\left[ \int_\Xi \|u(\cdot,\pmb y,z^*) - u_{s,h}(\cdot,\pmb y,z_{s,h,n}^*)\|^2_{L^2(\Omega)}\, \mathrm d\pmb y \right] }\\ &\quad \quad \leq C (\|z^*\|_{L^2(\Omega)} + \|u_0\|_{L^2(\Omega)})\left( s^{-\left(\frac{1}{p}-1\right)} + h^2 + \kappa(n) \right)\,,
\end{align*}
where $\kappa(n)$ is given in \cref{kappa}.
\end{corollary}
\begin{proof}
We observe that the error in $u_{s,h}(\cdot,\pmb y,z_{s,h,n}^*)$ compared to $u(\cdot,\pmb y,z^*)$ has three different sources, which can be estimated separately as follows
\begin{align*}
\|u(\cdot,\pmb y,z^*) - u_{s,h}(\cdot,\pmb y,z_{s,h,n}^*)\|_{L^2(\Omega)}
&\leq \|u(\cdot,\pmb y, z^*) - u_s(\cdot,\pmb y,z^*)\|_{L^2(\Omega)}\\
&\qquad + \|u_s(\cdot,\pmb y,z^*) - u_{s,h}(\cdot,\pmb y,z^*)\|_{L^2(\Omega)}\\
&\qquad + \|u_{s,h}(\cdot,\pmb y,z^*) - u_{s,h}(\cdot,\pmb y,z_{s,h,n}^*)\|_{L^2(\Omega)}\\
&\leq \tilde{C}_1\, \|z^*\|_{L^2(\Omega)}\, s^{-\left(\frac{1}{p}-1\right)}
+ \tilde{C}_2\, \|z^*\|_{L^2(\Omega)}\, h^2\\
&\qquad + \frac{c_1\,c_2}{a_{\min}}\, \|z^* - z_{s,h,n}^*\|_{L^2(\Omega)}\,,
\end{align*}
where the bound for the first summand follows from \cite[Theorem 5.1]{Kuo2012QMCFEM}, the bound for the second summand follows from \cref{eq:combFE1} and the bound for the last summand can be obtained using \cref{theorem:theorem1}. Squaring both sides, taking the expectation with respect to $\pmb y$ and with respect to the random shifts $\pmb{\Delta}$, and applying \cref{theorem:combinederror} gives the result.
\end{proof}
From the proof of \cref{finalcorollary} it can easily be seen that its statement remains true if the integral with respect to $\pmb y$ is replaced by the supremum over all $\pmb y \in \Xi$. In \cref{finalcorollary}, in contrast to \cref{theorem:Truncationerror}, we do not obtain the enhanced rate of convergence $s^{-(2/p-1)}$ for the dimension truncation. This is due to the difference in the order of application of the integral (with respect to $\pmb y$) and the $L^2(\Omega)$-norm.
\section{Numerical experiments}
\noindent We consider the coupled PDE system \cref{eq:adjointparametricweakproblem,eq:33b}
in the two-dimensional physical domain $\Omega=(0,1)^2$ equipped with the diffusion coefficient \cref{eq:1.9}.
We set $\bar{a}({\pmb x})\equiv 1$ as the mean field and use the parametrized family of fluctuations
\begin{align}
\psi_{j}({\pmb x})=\frac{1}{(k_{j}^2+\ell_{j}^2)^\vartheta}\sin(\pi k_{j}x_1)\sin(\pi \ell_{j}x_2)\quad\text{for}~\vartheta>1~\text{and}~j\in\mathbb{N},\label{eq:fluctuation}
\end{align}
where the sequence $(k_{j},\ell_{j})_{j\geq1}$ is an ordering of the elements of $\mathbb{N} \times \mathbb{N}$, so that the sequence $(\|\psi_j\|_{L^\infty(\Omega)})_{j\geq 1}$ is non-increasing. This implies that $\|\psi_{j}\|_{L^\infty(\Omega)}\sim j^{-\vartheta}$ as $j\to\infty$ by Weyl's asymptotic law for the spectrum of the Dirichlet Laplacian (cf.~\cite{weyl} as well as the examples in~\cite{DKGS,Gantner}).
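Such an ordering can be produced by sorting a sufficiently large index box by $k^2+\ell^2$; a short Python sketch (the lexicographic tie-breaking is an arbitrary choice of ours):
\begin{verbatim}
def ordered_index_pairs(J):
    """First J pairs (k_j, l_j), ordered so that ||psi_j||_inf
    = (k^2 + l^2)**(-theta) is non-increasing; ties broken lexicographically."""
    K = max(2, int(2 * J ** 0.5) + 2)       # box large enough to contain J pairs
    pairs = [(k, l) for k in range(1, K + 1) for l in range(1, K + 1)]
    pairs.sort(key=lambda kl: (kl[0] ** 2 + kl[1] ** 2, kl))
    return pairs[:J]

print(ordered_index_pairs(5))   # [(1, 1), (1, 2), (2, 1), (2, 2), (1, 3)]
\end{verbatim}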
We use a first order finite element solver to compute the solutions to the system~\cref{eq:adjointparametricweakproblem,eq:33b} numerically over an ensemble of regular hierarchical FE meshes $\{\mathcal{T}_h\}_h$ of the square domain $\Omega$, parametrized using the one-dimensional mesh widths $h\in\{2^{-k}: k\in\mathbb{N}\}$.
In the numerical experiments in \cref{subsectionNum1} to \cref{subsectionNum3}, we fix the source term $z({\pmb x})=x_2$ and set $u_0({\pmb x})=x_1^2-x_2^2$ for ${\pmb x}=(x_1,x_2)\in \Omega$. The lattice QMC rule was generated in all experiments by using the fast CBC implementation of the QMC4PDE software~\cite{KN,Kuo2016ApplicationOQ}, with the weights chosen to appropriately accommodate the fluctuations~\eqref{eq:fluctuation} in accordance with \cref{theorem:choiceofweights}. In particular, we note that while all the lattice rules in the subsequent numerical examples were designed with the adjoint solution $q$ in mind, the same lattice rules have been used in the sequel to analyze the behavior of the state solution $u$ of \cref{eq:33b} as well. All computations were carried out on the Katana cluster at UNSW Sydney.
\subsection{Finite element error}
\label{subsectionNum1}
In this section, we assess the validity of the finite element error bounds given in \cref{theorem:FEapprox}.
Two numerical experiments were carried out:
\begin{itemize}
\item[(a)] The $L^2$ errors $\|u_s(\cdot,{\pmb y},z)-u_{s,h}(\cdot,{\pmb y},z)\|_{L^2(\Omega)}$ and $\|q_s(\cdot,{\pmb y},z)-q_{s,h}(\cdot,{\pmb y},z)\|_{L^2(\Omega)}$ of the FE solutions to the state and adjoint PDEs, respectively, were computed using the parameters $s=100$ and $h\in\{2^{-k}:k\in\{1,\ldots,9\}\}$ for a \emph{single} realization of the parametric vector ${\pmb y}\in [-1/2,1/2]^{100}$ drawn from $U([-1/2,1/2]^{100})$.
\item[(b)] The terms $\big\|\int_{\Xi_s}(u_s(\cdot,{\pmb y},z)-u_{s,h}(\cdot,{\pmb y},z))\,{\rm d}{\pmb y}\big\|_{L^2(\Omega)}$ and $\big\|\int_{\Xi_s}(q_s(\cdot,{\pmb y},z)-\linebreak[4]q_{s,h}(\cdot,{\pmb y},z))\,{\rm d}{\pmb y}\big\|_{L^2(\Omega)}$ were approximated by using a lattice rule with a single fixed random shift to evaluate the parametric integrals with dimensionality $s=100$, $n=2^{15}$ nodes and mesh width $h\in\{2^{-k}:k\in\{1,\ldots,6\}\}$.
\end{itemize}
The value $\vartheta=2.0$ was used in both experiments as the rate of decay for the fluctuations~\eqref{eq:fluctuation}. As the reference solutions $u_s$ and $q_s$, we used FE solutions computed using the mesh width $h=2^{-10}$ for experiment (a) and $h=2^{-7}$ for experiment (b). The $L^2$ errors were computed by interpolating the coarser FE solutions onto the grid corresponding to the reference solution. The numerical results are displayed in \cref{fig:femerr}. In the case of a single fixed vector ${\pmb y}\in[-1/2,1/2]^{100}$, we obtain the rates $\mathcal{O}(h^{2.01688})$ and $\mathcal{O}(h^{2.00542})$ for the state and adjoint solutions, respectively. The corresponding rates averaged over $n=2^{15}$ lattice quadrature nodes are $\mathcal{O}(h^{2.04011})$ for the state PDE and $\mathcal{O}(h^{2.01617})$ for the adjoint PDE. In both cases, the observed rates agree well with the theoretical rates given in \cref{theorem:FEapprox}.
\begin{figure}[!h]
\centering
\subfloat[]{{\includegraphics[height=.45\textwidth]{oc_numerics_0309/images/femerror_single.eps}}}\quad\subfloat[]{{\includegraphics[height=.45\textwidth]{oc_numerics_0309/images/femerror_averaged.eps}}}
\caption{The computed finite element errors displayed against the theoretical rates.}\label{fig:femerr}
\end{figure}
\subsection{Dimension truncation error}\label{subsectionNum2}
The dimension truncation error was estimated by approximating the quantities
$$
\bigg\|\int_{\Xi}(u(\cdot,{\pmb y},z)-u_s(\cdot,{\pmb y},z))\,{\rm d}{\pmb y}\bigg\|_{L^2(\Omega)}\quad\text{and}\quad \bigg\|\int_{\Xi}(q(\cdot,{\pmb y},z)-q_s(\cdot,{\pmb y},z))\,{\rm d}{\pmb y}\bigg\|_{L^2(\Omega)}
$$
using a lattice quadrature rule with $n=2^{15}$ nodes and a single fixed random shift to evaluate the parametric integrals. The coupled PDE system was discretized using the mesh width $h=2^{-5}$ and, as the reference solutions $u$ and $q$, we used the FE solutions corresponding to the parameters $s=2^{11}$ and $h=2^{-5}$. The obtained results are displayed in \cref{fig:dimtrunc} for the fluctuations $(\psi_{j})_{j\geq 1}$ corresponding to the decay rates $\vartheta\in\{1.5,2.0\}$ and dimensions $s\in\{2^k:k\in\{1,\ldots,9\}\}$. The numerical results are accompanied by the corresponding theoretical rates, which are $\mathcal{O}(s^{-2})$ for $\vartheta=1.5$ and $\mathcal{O}(s^{-3})$ for $\vartheta=2.0$ according to \cref{theorem:Truncationerror}.
In all cases, we find that the observed rates tend toward the expected rates as $s$ increases. In particular, by carrying out a least squares fit for the data points corresponding to the values $s\in\{2^5,\ldots,2^9\}$, the calculated dimension truncation error rate for the state PDE is $\mathcal{O}(s^{-2.00315})$ (corresponding to the decay rate $\vartheta=1.5$) and $\mathcal{O}(s^{-2.83015})$ (corresponding to the decay rate $\vartheta=2.0$). For the adjoint PDE, the corresponding rates are $\mathcal{O}(s^{-2.0065})$ and $\mathcal{O}(s^{-2.72987})$, respectively. The discrepancy between the obtained rate and the expected rate in the case of the decay parameter $\vartheta=2.0$ may be explained by two factors: first, the lattice quadrature error rate is at best linear, so the quadrature error is likely not completely eliminated with $n=2^{15}$ lattice quadrature points; second, the rate obtained in \cref{theorem:Truncationerror} is asymptotic and may become clearly visible only for large values of $s$. The latter effect can also be observed in the slight curvature of the data presented in \cref{fig:dimtrunc}.
\begin{figure}[!h]
\centering
\subfloat{{\includegraphics[height=.45\textwidth]{oc_numerics_0309/images/dimtrunc_primal.eps}}}\qquad\subfloat{{\includegraphics[height=.45\textwidth]{oc_numerics_0309/images/dimtrunc_adjoint.eps}}}
\caption{The computed dimension truncation errors displayed against the expected rates.}\label{fig:dimtrunc}
\end{figure}
\subsection{QMC error}\label{subsectionNum3}
We assess the rate in \cref{theorem:QMCerror} by using the root-mean-square approximation
\begin{align*}
&\sqrt{\mathbb{E}_{\boldsymbol{\Delta}}\bigg\|\int_{\Xi_s}q_{s,h}(\cdot,{\pmb y}_{\{1:s\}},z)\,{\rm d}{\pmb y}_{\{1:s\}}-\frac{1}{n}\sum_{i=1}^nq_{s,h}(\cdot,\{\boldsymbol{t}^{(i)}+\boldsymbol{\Delta}\}-\tfrac{\boldsymbol{1}}{\boldsymbol{2}},z)\bigg\|_{L^2(\Omega)}^2}\\
&\approx \sqrt{\frac{1}{R(R-1)}\sum_{r=1}^R\big\|\overline{Q} - Q^{(r)}\big\|_{L^2(\Omega)}^2}\,,
\end{align*}
where
$Q^{(r)} := \frac{1}{n}\sum_{i=1}^n q_{s,h}(\cdot,\{\boldsymbol{t}^{(i)}+\boldsymbol{\Delta}^{(r)}\}-\tfrac{\boldsymbol{1}}{\boldsymbol{2}},z)$ and $\overline{Q} = \frac{1}{R}\sum_{r=1}^R Q^{(r)}$,
for a randomly shifted lattice rule with $n=2^m$, $m\in\{7,\ldots,15\}$, lattice points $(\boldsymbol{t}^{(i)})_{i=1}^n$ in $[0,1]^s$ and $R=16$ random shifts $\boldsymbol{\Delta}^{(r)}$ drawn from $U([0,1]^s)$ with $s=100$. The FE solutions were computed using the mesh width $h=2^{-6}$. The results are displayed in \cref{fig:qmc}. In both cases, the theoretical rate is $\mathcal{O}(n^{-1+\delta})$, $\delta>0$. For the decay rate $\vartheta=1.5$, we observe the rates $\mathcal{O}(n^{-0.984193})$ for the state PDE and $\mathcal{O}(n^{-0.987608})$ for the adjoint PDE. When the decay rate is $\vartheta=2.0$, we obtain the rates $\mathcal{O}(n^{-1.01080})$ and $\mathcal{O}(n^{-1.012258})$ for the state and adjoint PDE, respectively.
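The root-mean-square estimator above is generic in the integrand. The sketch below implements it for a scalar placeholder $F$; in the experiments, $F$ returns the FE adjoint solution, the absolute value is replaced by the $L^2(\Omega)$ norm, and the generating vector is again only a placeholder.
\begin{verbatim}
import numpy as np

def rms_shift_error(F, z, n, s, R=16, rng=None):
    """Root-mean-square QMC error estimate over R i.i.d. random shifts:
    sqrt( 1/(R(R-1)) * sum_r |Qbar - Q_r|^2 )."""
    rng = rng or np.random.default_rng()
    i = np.arange(1, n + 1)[:, None]
    zz = np.asarray(z, float)[None, :]
    Q = np.empty(R)
    for r in range(R):
        pts = np.mod(i * zz / n + rng.random(s), 1.0) - 0.5
        Q[r] = np.mean([F(y) for y in pts])
    return np.sqrt(np.sum((Q - Q.mean()) ** 2) / (R * (R - 1)))

# placeholder integrand; in practice F solves the adjoint PDE at y
err = rms_shift_error(lambda y: np.cos(y[0]) * y[1], z=[1, 5], n=13, s=2)
\end{verbatim}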
\begin{figure}[!h]
\centering
\subfloat{{\includegraphics[height=.45\textwidth]{oc_numerics_0309/images/qmcerr_15.eps}}}\qquad\subfloat{{\includegraphics[height=.45\textwidth]{oc_numerics_0309/images/qmcerr_20.eps}}}
\caption{The computed root-mean-square errors for the randomly shifted lattice rules.}\label{fig:qmc}
\end{figure}
\subsection{Optimal control problem}\label{subsectionNum4}
We consider the problem of finding the optimal control $z\in\mathcal{Z}$ that minimizes the functional \cref{eq:1.1}
subject to the PDE constraints~\cref{eq:1.2,eq:1.3}. We choose $u_0({\pmb x})=x_1^2-x_2^2$, set $\vartheta=1.5$, and fix the space of admissible controls $\mathcal{Z}= \{z \in L^2(\Omega)\,:\, z_{\min} \leq z \leq z_{\max}\, \text{ a.e.~in }\Omega\}$ with
$$
z_{\min}(\pmb x) =
\begin{cases}
0 & \pmb x \in \left[\tfrac{1}{8},\tfrac{3}{8}\right] \times \left[\tfrac{5}{8},\tfrac{7}{8}\right],\\
0 & \pmb x \in \left[\tfrac{5}{8},\tfrac{7}{8}\right] \times \left[\tfrac{5}{8},\tfrac{7}{8}\right],\\
-1 & \text{otherwise}
\end{cases}
\qquad \text{and} \qquad
z_{\max}(\pmb x) =
\begin{cases}
0 & \pmb x \in \left[\tfrac{1}{8},\tfrac{3}{8}\right] \times \left[\tfrac{1}{8},\tfrac{3}{8}\right],\\
0 & \pmb x \in \left[\tfrac{5}{8},\tfrac{7}{8}\right] \times \left[\tfrac{1}{8},\tfrac{3}{8}\right],\\
1 & \text{otherwise}.
\end{cases}
$$ We use finite elements with mesh width $h=2^{-6}$ to discretize the spatial domain $\Omega=(0,1)^2$. The integrals over the parametric domain $\Xi$ are discretized using a lattice rule with a single fixed random shift with $n=2^{15}$ points and the truncation dimension $s=2^{12}$.
We consider the regularization parameters $\alpha\in\{0.1,0.01\}$ for the minimization problem. To minimize the discretized target functional, we use the projected gradient descent algorithm (\cref{alg:projected gradient descent}) in conjunction with the projected Armijo rule (\cref{alg:projected Armijo}) with $\gamma = 10^{-4}$ and $\beta = 0.5$. For both experiments, we use $z_0({\pmb x})=P_{\mathcal Z}(x_2)$ as the initial guess and track the averaged least square difference of the state $u$ and the target state $u_0$. The results
are displayed in \cref{fig:frechet1}.
We observe that for a larger value of $\alpha$ the algorithm converges faster and the averaged difference between the state $u$ and the target state $u_0$ increases.
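Since \cref{alg:projected gradient descent} and \cref{alg:projected Armijo} are stated earlier in the paper and not repeated here, the following Python sketch shows one common form of such a method: a projected gradient step with Armijo backtracking, where a step $\sigma$ is accepted once $J(z_{\sigma}) \leq J(z) - \frac{\gamma}{\sigma}\|z - z_{\sigma}\|^2$ with $z_\sigma = P_{\mathcal Z}(z - \sigma \nabla J(z))$. It is a generic template under our assumptions, not a verbatim copy of the algorithms used in the experiments.
\begin{verbatim}
import numpy as np

def projected_gradient(J, gradJ, proj, z0, gamma=1e-4, beta=0.5,
                       step0=1.0, tol=1e-8, max_iter=500):
    """Projected gradient descent with the projected Armijo rule."""
    z = proj(np.asarray(z0, float))
    for _ in range(max_iter):
        g = gradJ(z)
        sigma = step0
        z_new = proj(z - sigma * g)
        while J(z_new) > J(z) - (gamma / sigma) * np.sum((z - z_new) ** 2):
            sigma *= beta
            if sigma < 1e-14:
                return z                     # line search stalled
            z_new = proj(z - sigma * g)
        if np.linalg.norm(z - z_new) < tol:  # projected-gradient stationarity
            return z_new
        z = z_new
    return z

proj_box = lambda v: np.clip(v, -1.0, 1.0)   # projection onto {z_min <= z <= z_max}
z_opt = projected_gradient(lambda z: np.sum((z - 2.0) ** 2),
                           lambda z: 2.0 * (z - 2.0), proj_box, z0=np.zeros(4))
\end{verbatim}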
\begin{figure}[!h]
\centering
\subfloat{{\includegraphics[height=.45\textwidth]{oc_numerics_0309/Newpictures/projected_oc.eps}}}\qquad\subfloat{{\includegraphics[width=.45\textwidth]{oc_numerics_0309/Newpictures/projected_oc_reconstruction.eps}}}
\caption{Left: Averaged least square difference of the state $u$ and the target state $u_0$ at each step of the projected gradient descent algorithm for different values of the regularization parameter $\alpha$. Right: The control corresponding to $\alpha = 0.1$ after $152$ projected gradient descent iterations.}\label{fig:frechet1}
\end{figure}
The same behaviour is observed in the unconstrained case with $\mathcal{Z} = L^2(\Omega)$. We fix the same parameters as before and use the gradient descent algorithm \cref{alg:gradient descent} together with the Armijo rule \cref{alg:Armijo} with $\gamma = 10^{-4}$ and $\beta = 0.5$. We choose $z_0({\pmb x})=x_2$ as the initial guess and track the averaged least square difference of the state $u$ and the target state $u_0$. The results
are displayed in \cref{fig:frechet2}.
\begin{figure}[!h]
\centering
\subfloat{{\includegraphics[height=.45\textwidth]{oc_numerics_0309/Newpictures/unconstrained_oc.eps}}}\qquad\subfloat{{\includegraphics[width=.45\textwidth]{oc_numerics_0309/Newpictures/unprojected_oc_reconstruction.eps}}}
\caption{Left: Averaged least square difference of the state $u$ and the target state $u_0$ at each step of the gradient descent algorithm for different values of the regularization parameter $\alpha$. Right: The control corresponding to $\alpha = 0.1$ after $152$ gradient descent iterations.}\label{fig:frechet2}
\end{figure}
\section{Conclusion and future work}
We presented a specially designed quasi-Monte Carlo method for the robust optimal control problem. Our proposed method provides error bounds for the approximation of the stochastic integral which do not depend on the number of uncertain variables, and it results in faster convergence rates compared to Monte Carlo methods. In addition, our method preserves the convexity structure of the optimal control problem due to the nonnegative (equal) quadrature weights. Finally, we presented error estimates and convergence rates for the dimension truncation and the finite element discretization, together with confirming numerical experiments.
Based on this work and motivated by \cite{vanBarelVandewalle}, multilevel \cite{AUH,KSSSU,KSS2015} and multi-index \cite{DFS} strategies can be developed in order to further decrease the computational burden. Furthermore, the regularity results of this work can be used for the application of higher order QMC rules \cite{DKGNS}. Depending on the application it may also be of interest to consider different objective functions, e.g., the conditional value-at-risk, a combination of the expected value and the variance, or different regularization terms. In addition, it remains to extend the theory to a class of different forward problems such as affine parametric operator equations \cite{KunothSchwab,KunothSchwab2,Schwab} and different random fields as coefficients of the PDE system \cite{GKNSSS,KKS,KSSSU}. Other possible improvements include more sophisticated optimization algorithms such as Newton based methods.
Bayesian non-parametric statistics is a field that was introduced
by Ferguson in 1973 and has become increasingly popular among
theoretical statisticians in the past few decades. The
philosophy behind this field is to assume that the common
(unknown) distribution $P$ of a given sample
$\underline{X}=(X_{1}, \ldots, X_{n})$ is also governed by
randomness, and therefore can be regarded as a stochastic process
(indexed by sets). The best way for a Bayesian statistician to
guess the ``shape'' of the prior distribution $P$ is to identify
the posterior distribution of $P$ given $\underline{X}$ and to
prove that it satisfies the same properties as the prior.
Formalizing these ideas, we can say that a typical problem in
Bayesian nonparametric statistics is to identify a class $\Sigma$
of ``random distributions'' $P$ such that if $\underline{X}$ is a
sample of $n$ observations drawn according to $P$, then the
posterior distribution of $P$ given $\underline{X}$ remains in the
class $\Sigma$. The purpose of this paper is to introduce a new
class $\Sigma$ for which this property is preserved. This is the
class of ${\cal Q}$-Markov processes (or distributions), which
contains the extensively studied class of neutral to the right
processes.
There are two major contributions in the literature in this field.
The first one is Ferguson's (1973) fundamental paper where it is
shown that the posterior distribution of a Dirichlet process is
also Dirichlet. (By definition, a {\em Dirichlet process} with
parameter measure $\alpha$ has a Dirichlet finite dimensional
distribution with parameters $\alpha(A_{1}), \ldots,
\alpha(A_{k}), \alpha((\cup_{i=1}^{k}A_{i})^{c})$ over any
disjoint sets $A_{1}, \ldots,A_{k} \in {\cal B}$.) The second one
is Doksum's (1974) fundamental paper where it is proved that if
${\cal X}={\bf R}$, then the posterior distribution of a neutral
to the right process is also neutral to the right. (A random
probability distribution function $F:=(F_{t})_{t \in {\bf R}}$ is
{\em neutral to the right} if $F_{t_{1}},
(F_{t_{2}}-F_{t_{1}})/(1-F_{t_{1}}), \ldots,
(F_{t_{k}}-F_{t_{k-1}})/(1-F_{t_{k-1}})$ are independent $\forall
t_{1}< \ldots <t_{k}$, or equivalently, $Y_{t}:=- \ln(1-F_{t}),t
\in {\bf R}$ is a process with independent increments.) A quick
review of the literature to date (Ferguson, 1974; Ferguson and
Phadia, 1979; Dykstra and Laud, 1981; Hjort, 1990; Walker and
Muliere, 1997; Walker and Muliere, 1999) reveals that neutral to
the right processes have received considerable attention in the
past three decades, especially because of their appealing
representation using L\'{e}vy processes and because of their
applications in survival analysis, reliability theory and life
history data.
In the present paper we extend Doksum's result to the class of
${\cal Q}$-{\em Markov} processes introduced in Balan and Ivanoff
(2002), which are characterized by Markov-type finite dimensional
distributions. Unlike Doksum's paper (and unlike most of the
statistical papers generated by it) our results are valid for
arbitrary sample spaces ${\cal X}$, which can be endowed with a
certain topological structure (in particular for ${\cal X}={\bf
R}^{d}$). Our main result (Theorem \ref{main}) proves that if
$P:=(P_{A})_{A \in {\cal B}}$ is a set-Markov random probability
measure and $X_{1}, \ldots,X_{n}$ is a sample from $P$, then the
conditional distribution of $P$ given $X_{1}, \ldots,X_{n}$ is
also set-Markov. This result is new even in the case ${\cal
X}={\bf R}$, when the set-Markov property coincides with the
classical Markov property.
The paper is organized as follows:
In Section 2 we describe the structure that has to be imposed on
the sample space ${\cal X}$ (which will be assumed for the entire
paper); under this structure we identify the necessary ingredients
for the construction of set-Markov (respectively ${\cal
Q}$-Markov) random probability measure.
In Section 3 we introduce the Bayesian nonparametric framework and
we prove that a set-Markov prior distribution leads to a
set-Markov posterior distribution. The essence of all calculations
is an integral form of Bayes' formula.
In Section 4 we define neutral to the right processes and using
their ${\cal Q}$-Markov property we prove that a neutral to the
right prior distribution leads to a neutral to the right posterior
distribution.
The paper also includes two appendices: Appendix A contains two
elementary results which are used for the proof of Theorem
\ref{main}; Appendix B contains a Bayes property of a classical
Markov chain, which is interesting by itself and which has
motivated this paper.
\section{Q-Markov random probability measures}
Let $({\cal X}, {\cal B})$ be an arbitrary measurable space (the
sample space).
\begin{definition}
{\rm A collection $P:=(P_{A})_{A \in {\cal B}}$ of $[0,1]$-valued
random variables is called a {\em random probability measure} if
\noindent {\bf (i)} it is finitely additive in distribution, i.e.,
for every disjoint sets $(A_{j})_{j=1, \ldots,k}$ and for every $1
\leq i_{1}< \ldots <i_{m} \leq k$, the distribution of
$(P_{\cup_{j=1}^{i_{1}}A_{j}},
\ldots,P_{\cup_{j=i_{m}}^{k}A_{j}})$ coincides with the
distribution of $(\sum_{j=1}^{i_{1}}P_{A_{j}}, \ldots,
\sum_{j=i_{m}}^{k}P_{A_{j}})$;
\noindent {\bf (ii)} $P_{\cal X}=1$ a.s.; and
\noindent {\bf (iii)} it is countably additive in distribution,
i.e., for every decreasing sequence $(A_{n})_{n} \subseteq {\cal
B}$ with $\cap_{n}A_{n}=\emptyset$ we have $\lim_{n}P_{A_{n}}=0$
a.s.}
\end{definition}
Note that the almost sure convergence of {\bf (iii)} (in the above
definition) is equivalent to the convergence in distribution and
the convergence in mean.
In order to construct a random probability measure $P$ on ${\cal
B}$ it is enough to specify its finite dimensional distributions
$\mu_{A_{1} \ldots A_{k}}$ over all un-ordered collections
$\{A_{1}, \ldots, A_{k}\}$ of disjoint sets in ${\cal B}$. Some
conditions need to be imposed.
\vspace{2mm}
{\em Condition C1.} If $\{A_{1}, \ldots,A_{k}\}$ is an un-ordered
collection of disjoint sets and we let
$A'_{l}:=\cup_{j=i_{l-1}+1}^{i_{l}}A_{j};l=1, \ldots,m$ for $1
\leq i_{1}< \ldots <i_{m} \leq k$, then $\mu_{A'_{1} \ldots
A'_{m}}=\mu_{A_{1} \ldots A_{k}} \circ \alpha^{-1}$, where
$\alpha(x_{1}, \ldots, x_{k})=(\sum_{j=1}^{i_{1}}x_{j}, \ldots,
\sum_{j=i_{m-1}+1}^{i_{m}}x_{j})$.
\vspace{2mm}
{\em Condition C2.} For every $(A_{n})_{n} \subseteq {\cal B}$
with $A_{n+1} \subseteq A_{n}, \forall n$ and
$\cap_{n}A_{n}=\emptyset$, we have
$\lim_{n}\mu_{A_{n}}=\delta_{0}$.
\vspace{2mm}
In this paper we will assume that the sample space
${\cal X}$ has an additional underlying structure which we begin
now to explain.
Let ${\cal X}$ be a (Hausdorff) topological space and ${\cal B}$
its Borel $\sigma$-field. We will assume that there exists a
collection ${\cal A}$ of closed subsets of ${\cal X}$ which
generates ${\cal B}$ (i.e. ${\cal B}=\sigma({\cal A})$) and which
has the following properties:
\begin{enumerate}
\item $\emptyset,{\cal X} \in {\cal A}$; \item ${\cal A}$ is a
semilattice i.e., ${\cal A}$ is closed under arbitrary
intersections;
\item $\forall A,B \in {\cal A}; A,B \not = \emptyset \Rightarrow
A \cap B \not = \emptyset$;
\item There exists a sequence $({\cal A}_{n})_{n}$ of finite
sub-semilattices of ${\cal A}$ such that $\forall A \in {\cal A}$,
there exist $A_{n} \in {\cal A}_{n}(u), \forall n$ with
$A=\cap_{n}A_{n}$ and $A \subseteq A_{n}^{0}, \forall n$. (Here
${\cal A}_{n}(u)$ denotes the class of all finite unions of sets
in ${\cal A}_{n}$.)
\end{enumerate}
More details about this type of structure can be found in Ivanoff
and Merzbach (2000), where ${\cal A}$ is called an {\em indexing
collection}. By properties 2 and 3, the collection ${\cal A}$ has
the finite intersection property, and hence its minimal set
$\emptyset':= \cap_{A \in {\cal A} \setminus \{\emptyset\}}A$ is
non-empty.
The typical example of a sample space ${\cal X}$ which can be
endowed with an indexing collection is ${\bf R}^{d}$; in this case
${\cal A}=\{[0,z];z \in {\bf R}^{d}\} \cup \{\emptyset, {\bf
R}^{d}\}$ and the approximation sets $A_{n}$ have vertices with
dyadic coordinates.
We denote with ${\cal A}(u)$ the class of all finite unions of
sets in ${\cal A}$, with ${\cal C}$ the semialgebra of the sets
$C=A \setminus B$ with $A \in {\cal A},B \in {\cal A}(u)$
and with ${\cal C}(u)$ the algebra of sets generated by ${\cal C}$. Note that ${\cal B}=\sigma({\cal C}(u))$.
We introduce now the definition of the ${\cal Q}$-Markov property.
This definition has been originally considered in Balan and
Ivanoff (2002) for finitely additive real-valued processes indexed
by the algebra ${\cal C}(u)$. In this paper, we will restrict our
attention to random probability measures.
\begin{definition}
\label{definition-Q} {\bf (a)} For each $B_{1},B_{2} \in {\cal
A}(u)$ with $B_{1} \subseteq B_{2}$, let $Q_{B_{1}B_{2}}$ be a
transition probability on $[0,1]$. The family ${\cal
Q}:=(Q_{B_{1}B_{2}})_{B_{1} \subseteq B_{2}}$ is called a {\bf
transition system} if $\forall B_{1} \subseteq B_{2} \subseteq
B_{3}$ in ${\cal A}(u), \forall z_{1} \in [0,1], \forall
\Gamma_{3} \in {\cal B}([0,1])$
$$Q_{B_{1}B_{3}}(z_{1}; \Gamma_{3})=\int_{[0,1]}Q_{B_{2}B_{3}}(z_{2};\Gamma_{3})Q_{B_{1}B_{2}}(z_{1};dz_{2})$$
\noindent {\bf (b)} Given a transition system ${\cal
Q}:=(Q_{B_{1}B_{2}})_{B_{1} \subseteq B_{2}}$, a random
probability measure $P:=(P_{A})_{A \in {\cal B}}$, defined on a
probability space $(\Omega, {\cal F}, {\cal P})$, is called {\bf
${\cal Q}$-Markov} if $\forall B_{1} \subseteq B_{2}$ in ${\cal
A}(u)$, $\forall \Gamma_{2} \in {\cal B}([0,1])$
$${\cal P}[P_{B_{2}} \in \Gamma_{2}|{\cal F}_{B_{1}}]=Q_{B_{1}B_{2}}(P_{B_{1}};
\Gamma_{2}) \ \ {\rm a.s.}$$ where ${\cal
F}_{B_{1}}:=\sigma(\{P_{A}; A \in {\cal A}, A \subseteq B_{1}\})$.
\end{definition}
A ${\cal Q}$-Markov random probability measure can be constructed
using the following additional consistency condition.
\vspace{3mm}
{\em Condition C3.} If $(Y_{1}, \ldots, Y_{k})$ is a vector with
distribution $\mu_{C_{1} \ldots C_{k}}$ where
$C_{1}=B_{1};C_{i}=B_{i} \setminus B_{i-1};i=2, \ldots,k$ and
$B_{1} \subseteq \ldots \subseteq B_{k}$ are sets in ${\cal
A}(u)$, then for every $i=2, \ldots,k$, the distribution of
$Y_{i}$ given $Y_{1}=y_{1}, \ldots, Y_{i-1}=y_{i-1}$ depends only
on $y:=\sum_{j=1}^{i-1}y_{j}$ and is equal to $Q_{B_{i-1}B_{i}}(y;
y+ \cdot)$.
\vspace{3mm}
\noindent The next result follows immediately by Kolmogorov's
extension theorem.
\begin{theorem}
\label{constr-set-Markov} Let ${\cal Q}:=(Q_{B_{1}B_{2}})_{B_{1}
\subseteq B_{2}}$ be a transition system. For each un-ordered
collection $\{A_{1}, \ldots,A_{k}\}$ of disjoint sets in ${\cal
B}$ let $\mu_{A_{1} \ldots A_{k}}$ be a probability measure on
$([0,1]^{k}, {\cal B}([0,1])^{k})$
such that C1-C3 hold; let $\mu_{\emptyset}=\delta_{0}, \mu_{\cal X}=\delta_{1}$.
Then there exists a probability measure ${\cal P}^{1}$ on
$([0,1]^{\cal B}, {\cal B}([0,1])^{\cal B})$ under which the
coordinate-variable process $P:=(P_{A})_{A \in {\cal B}}$ is a
${\cal Q}$-Markov random probability measure whose finite
dimensional distributions are the measures $\mu_{A_{1} \ldots
A_{k}}$.
\end{theorem}
{\bf Examples}:
\begin{enumerate}
\item Let $P$ be the Dirichlet process with parameter measure
$\alpha$. For any disjoint sets $A_{1}, \ldots, A_{k}$ in ${\cal
B}$, $(P_{A_{1}}, \ldots, P_{A_{k}})$ has a {\em Dirichlet}
distribution with parameters $\alpha(A_{1}), \ldots,\alpha(A_{k}),
\alpha((\cup_{i=1}^{k}A_{i})^{c})$. The ratio
$P_{A_{i}}/(1-\sum_{j=1}^{i-1}P_{A_{j}})$ is independent of
$P_{A_{1}}, \ldots, P_{A_{i-1}}$ and has a Beta distribution with
parameters $\alpha(A_{i}), \alpha((\cup_{j=1}^{i}A_{j})^{c})$;
hence the distribution of $P_{A_{i}}$ given $P_{A_{1}}, \ldots,
P_{A_{i-1}}$ depends only on $\sum_{j=1}^{i-1}P_{A_{j}}$. The
process $P$ is ${\cal Q}$-Markov with $Q_{B_{1}B_{2}}(z_{1};
\Gamma_{2})$ equal to the value at $(\Gamma_{2}-z_{1})/(1-z_{1})$
of the Beta distribution with parameters $\alpha(B_{2} \verb2\2
B_{1}),\alpha(B_{2}^{c})$.
\item Let $P:=(1/N) \sum_{j=1}^{N}\delta_{Z_{j}}$ be the empirical
measure of a sample $Z_{1}, \ldots ,Z_{N}$ from a non-random
distribution $P_{0}$ on ${\cal X}$. For any disjoint sets $A_{1},
\ldots, A_{k}$ in ${\cal B}$, $(NP_{A_{1}}, \ldots, NP_{A_{k}})$
has a {\em multinomial} distribution with $N$ trials and
$P_{0}(A_{1}), \ldots, P_{0}(A_{k})$ probabilities of success;
hence the distribution of $NP_{A_{i}}$ given $NP_{A_{1}}, \ldots,
NP_{A_{i-1}}$ depends only on $\sum_{j=1}^{i-1}P_{A_{j}}$ (it is a
binomial distribution with $N(1- \sum_{j=1}^{i-1}P_{A_{j}})$
trials and $P_{0}(A_{i})/(1-\sum_{j=1}^{i-1}P_{0}(A_{j}))$
probability of success). The process $P$ is ${\cal Q}$-Markov with
$$Q_{B_{1}B_{2}}\left(\frac{m_{1}}{N}; \left\{ \frac{m_{2}}{N} \right\}\right)=
\left( \begin{array}{c}N-m_{1} \\ m_{2}-m_{1} \end{array} \right)
\frac{P_{0}(C)^{m_{2}-m_{1}}P_{0}(B_{2}^{c})^{N-m_{2}}}{P_{0}(B_{1}^{c})^{N-m_{1}}}$$
where $\left( \begin{array}{c} a \\ b \end{array}
\right)=a!/(b!(a-b)!)$ is the binomial coefficient and $C=B_{2}
\verb2\2 B_{1}$.
\item Let $P:=(1/N) \sum_{j=1}^{N}\delta_{W_{j}}$ be the empirical
measure of a sample $W_{1}, \ldots ,W_{N}$ from a Dirichlet
process with parameter measure $\alpha$. For any disjoint sets
$A_{1}, \ldots, A_{k}$ in ${\cal B}$, $(NP_{A_{1}}, \ldots,
NP_{A_{k}})$ has a {\em P\'{o}lya} distribution with $N$ trials
and parameters $\alpha(A_{1}), \ldots,\alpha(A_{k}),
\alpha((\cup_{i=1}^{k}A_{i})^{c})$; hence the distribution of
$NP_{A_{i}}$ given $NP_{A_{1}}, \ldots, NP_{A_{i-1}}$ depends only
on $\sum_{j=1}^{i-1}P_{A_{j}}$ (it is a P\'{o}lya distribution
with $N(1- \sum_{j=1}^{i-1}P_{A_{j}})$ trials and parameters
$\alpha(A_{i}), \alpha((\cup_{j=1}^{i}A_{j})^{c})$). The process
$P$ is ${\cal Q}$-Markov with
$$Q_{B_{1}B_{2}}\left(\frac{m_{1}}{N}; \left\{ \frac{m_{2}}{N} \right\}\right)=
\left( \begin{array}{c}N-m_{1} \\ m_{2}-m_{1} \end{array} \right)
\frac{\alpha(C)^{[m_{2}-m_{1}]}\alpha(B_{2}^{c})^{[N-m_{2}]}}{\alpha(B_{1}^{c})^{[N-m_{1}]}}$$
where $\alpha^{[x]}=\alpha (\alpha+1) \ldots (\alpha+x-1)$ and
$C=B_{2} \verb2\2 B_{1}$.
\end{enumerate}
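\vspace{2mm}
\noindent {\em Numerical check}: the transition probabilities in
Example 2 can be verified by direct simulation. The following
sketch (in Python; the choice of ${\cal X}$, $P_{0}$ and of the
sets is purely illustrative) checks that, conditionally on
$NP_{B_{1}}=m_{1}$, the variable $NP_{B_{2}}$ equals $m_{1}$ plus
a binomial count with $N-m_{1}$ trials and success probability
$P_{0}(C)/P_{0}(B_{1}^{c})$, which is a reformulation of the
transition probability displayed in Example 2.
\begin{verbatim}
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)
N, n_sim = 20, 200000
# illustrative choices: X = [0,1], P0 = Uniform[0,1],
# B1 = [0, 0.3], B2 = [0, 0.6], so P0(B1) = 0.3, P0(B2) = 0.6
p1, p2 = 0.3, 0.6
Z = rng.uniform(size=(n_sim, N))
m1 = (Z <= p1).sum(axis=1)          # N * P_{B1}
m2 = (Z <= p2).sum(axis=1)          # N * P_{B2}

m1_fix = 6                          # condition on P_{B1} = m1/N
sel = m2[m1 == m1_fix]
# pmf of m1 + Binomial(N - m1, P0(C)/P0(B1^c)) on {0,...,N}
q = binom.pmf(np.arange(N + 1) - m1_fix, N - m1_fix,
              (p2 - p1) / (1 - p1))
emp = np.bincount(sel, minlength=N + 1) / sel.size
print(np.abs(emp - q).max())        # small, up to Monte Carlo error
\end{verbatim}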
\section{The posterior distribution of a Q-Markov random probability measure}
We begin by introducing the Bayesian nonparametric framework.
Let $P:=(P_{A})_{A \in {\cal B}}$ be a ${\cal Q}$-Markov random
probability measure defined on a probability space $(\Omega, {\cal
F}, {\cal P})$ and $X_{i}: \Omega \rightarrow {\cal X}, i=1,
\ldots,n$ some ${\cal F}/{\cal B}$-measurable functions such that
$\forall A_{1}, \ldots,A_{n} \in {\cal B}$
$${\cal P}[X_{1} \in A_{1}, \ldots,X_{n} \in A_{n}|P]=\prod_{i=1}^{n}P_{A_{i}} \ \ {\rm a.s.}$$
We say that $\underline{X}:=(X_{1}, \ldots,X_{n})$ is a {\em
sample from $P$}. The distribution of $P$ is called {\em prior},
while the distribution of $P$ given $\underline{X}$ is called {\em
posterior}. Note that $(P_{A})_{A \in {\cal B}}$ and $X_{1},
\ldots, X_{n}$ can be constructed as coordinate-variables on the
space $([0,1]^{\cal B} \times {\cal X}^{n}, {\cal B}([0,1])^{\cal
B} \times {\cal B}^{n})$ under the probability measure ${\cal P}$
defined by
$${\cal P}(D \times \prod_{i=1}^{n}A_{i}):=\int_{D} \prod_{i=1}^{n}\omega_{A_{i}} \ {\cal P}^{1}(d\omega), \ \ D \in {\cal B}([0,1])^{\cal B}, A_{i} \in {\cal B}$$
where ${\cal P}^{1}$ is the probability measure given by Theorem
\ref{constr-set-Markov}.
The goal of this section is to prove that the posterior
distribution of $P$ given $\underline{X}={\underline x}$ is ${\cal
Q}^{({\underline x})}$-Markov (for some ``posterior'' transition
system ${\cal Q}^{({\underline x})}$).
\vspace{3mm}
Let $\alpha_{n}$ be the law of $\underline{X}$ under ${\cal P}$
and $\mu_{A_{1}, \ldots, A_{k}}$ be the law of $(P_{A_{1}},
\ldots, P_{A_{k}})$ under ${\cal P}$, for every $A_{1}, \ldots,
A_{k} \in {\cal B}$. Note that
$\alpha_{n}(\prod_{i=1}^{n}A_{i})={\cal
E}[\prod_{i=1}^{n}P_{A_{i}}]$, where ${\cal E}$ denotes the
expectation with respect to ${\cal P}$.
For each set $B_{1} \in
{\cal A}(u)$, let $\nu_{B_{1}}$ be the law of $(X_{1}, \ldots,
X_{n},P_{B_{1}})$ under ${\cal P}$. Note that
$\nu_{B_{1}}(\prod_{i=1}^{n}A_{i} \times \Gamma_{1})={\cal
E}[\prod_{i=1}^{n}P_{A_{i}} \cdot I_{\Gamma_{1}}(P_{B_{1}})]$ and
\begin{equation}
\label{disint-nu-B1-n} \nu_{B_{1}}(\tilde{A} \times \Gamma_{1}) =
\int_{\tilde{A}} \mu_{B_{1}}^{(\underline{x})} (\Gamma_{1})
\alpha_{n}(d \underline{x}) = \int_{\Gamma_{1}}
\tilde{Q}_{B_{1}}(z_{1};\tilde{A}) \mu_{B_{1}}(dz_{1})
\end{equation}
where $\mu_{B_{1}}^{(\underline{x})}(\Gamma_{1}):={\cal
P}[P_{B_{1}} \in \Gamma_{1}|\underline{X}=\underline{x}]$ and
$\tilde{Q}_{B_{1}}(z_{1};\tilde{A}):={\cal P}[\underline{X} \in
\tilde{A}|P_{B_{1}}=z_{1}]$.
\vspace{2mm}
For each sets $B_{1},B_{2} \in {\cal A}(u);B_{1} \subseteq B_{2}$,
let $\nu_{B_{1}B_{2}}$ be the law of $(X_{1}, \ldots, X_{n},
\linebreak P_{B_{1}}, P_{B_{2}})$ under ${\cal P}$. Note that
$\nu_{B_{1}B_{2}}(\prod_{i=1}^{n}A_{i} \times \Gamma_{1} \times
\Gamma_{2}) = {\cal E}[\prod_{i=1}^{n}P_{A_{i}} \cdot
I_{\Gamma_{1}}(P_{B_{1}}) \linebreak I_{\Gamma_{2}}(P_{B_{2}})]$
and
\begin{eqnarray}
\label{disint-nu-B1-B2-n} \nu_{B_{1}B_{2}}(\tilde{A} \times
\Gamma_{1} \times \Gamma_{2}) & = & \int_{\tilde{A}}
\int_{\Gamma_{1}}
Q_{B_{1}B_{2}}^{(\underline{x})}(z_{1};\Gamma_{2})
\mu_{B_{1}}^{(\underline{x})}(dz_{1})
\alpha_{n}(d \underline{x}) \\
& = & \int_{\Gamma_{1} \times \Gamma_{2}} \tilde{Q}_{B_{1}B_{2}}(z_{1},z_{2}; \tilde{A}) \mu_{B_{1}B_{2}}(dz_{1} \times dz_{2})
\end{eqnarray}
where
\begin{equation}
\label{definition-Q-x}
Q_{B_{1}B_{2}}^{(\underline{x})}(z_{1}; \Gamma_{2}):={\cal
P}[P_{B_{2}} \in \Gamma_{2}|\underline{X}=
\underline{x},P_{B_{1}}=z_{1}] \end{equation}
and
$\tilde{Q}_{B_{1}B_{2}}(z_{1},z_{2}; \tilde{A}):= {\cal
P}[\underline{X} \in \tilde{A}|P_{B_{1}}=z_{1},P_{B_{2}}=z_{2}]$.
(For the first equality we used the first integral in the
decomposition (\ref{disint-nu-B1-n}) of $\nu_{B_{1}}$).
\noindent Using the second integral in the decomposition
(\ref{disint-nu-B1-n}) of $\nu_{B_{1}}$ and the ${\cal Q}$-Markov
property for representing $\mu_{B_{1}B_{2}}$ we get: (for
$\mu_{B_{1}}$-almost all $z_{1}$)
\begin{equation}
\label{central-Bayes}
\int_{\tilde{A}}Q_{B_{1}B_{2}}^{(\underline{x})}(z_{1};\Gamma_{2})
\tilde{Q}_{B_{1}}(z_{1};d \underline{x})= \int_{\Gamma_{2}}
\tilde{Q}_{B_{1}B_{2}}(z_{1},z_{2};
\tilde{A})Q_{B_{1}B_{2}}(z_{1};dz_{2}).
\end{equation}
This very important equation is the key for determining the
posterior transition probabilities
$Q_{B_{1}B_{2}}^{(\underline{x})}$ from the prior transition
probabilities $Q_{B_{1}B_{2}}$, provided that
$\tilde{Q}_{B_{1}}(z_{1}; \prod_{i=1}^{n}A_{i})={\cal
E}[\prod_{i=1}^{n}P_{A_{i}}| P_{B_{1}}=z_{1}]$ and
$\tilde{Q}_{B_{1}B_{2}}(z_{1},z_{2}; \prod_{i=1}^{n}A_{i}) = {\cal
E}[\prod_{i=1}^{n}P_{A_{i}}| P_{B_{1}}=z_{1},P_{B_{2}}=z_{2}]$ are
easily computable.
We note that each $Q_{B_{1}B_{2}}^{(\underline{x})}(z_{1}; \cdot)$ is
well-defined only for $\nu_{B_{1}}$-almost all $(\underline{x},
z_{1})$. Moreover, as we will see in the proof of Theorem
\ref{main} and as was correctly pointed out by an anonymous
referee, ${\cal Q}^{(\underline{x})}$ may not be a genuine
transition system as introduced by Definition
\ref{definition-Q}.(a). To avoid any confusion we introduce the
following terminology.
\begin{definition}
The family ${\cal Q}^{(\underline{x})}:=(Q_{B_{1}B_{2}}^{(\underline{x})})_{B_{1}
\subseteq B_{2}}$ defined by (\ref{definition-Q-x}) is called a
{\bf posterior transition system} (corresponding to $P$ and
$\underline{X}$) if $\forall B_{1} \subseteq B_{2} \subseteq
B_{3}$ in ${\cal A}(u)$, $\forall \Gamma_{3} \in {\cal B}([0,1])$
and for $\nu_{B_{1}}$-almost all $(\underline{x},z_{1})$
$$Q_{B_{1}B_{3}}^{(\underline{x})}(z_{1}; \Gamma_{3})= \int_{[0,1]}
Q_{B_{2}B_{3}}^{(\underline{x})}(z_{2};
\Gamma_{3})Q_{B_{1}B_{2}}^{(\underline{x})}(z_{1};dz_{2})$$ In
this case, we will say that the conditional distribution of $P$
given $\underline{X}=\underline{x}$ is {\bf ${\cal
Q}^{(\underline{x})}$-Markov} if $\forall B_{1} \subseteq B_{2}$
in ${\cal A}(u)$, $\forall \Gamma_{2} \in {\cal B}([0,1])$
$${\cal P}[P_{B_{2}} \in \Gamma_{2}|{\cal F}_{B_{1}},\underline{X}]=Q_{B_{1}B_{2}}^{(\underline{X})}(P_{B_{1}};
\Gamma_{2}) \ \ {\rm a.s.}$$
\end{definition}
\vspace{3mm}
We proceed now to the proof of the main theorem. Two preliminary
lemmas are needed.
Let $B_{1} \subseteq B_{2}$ be some arbitrary sets in ${\cal
A}(u)$, $C:=B_{2} \verb2\2 B_{1}$ and $0 \leq l \leq r \leq n$.
The next lemma shows, intuitively, how the
probability that the first $l$ observations fall in $B_{1}$, the
next $r-l$ observations fall in $C$ and the remaining $n-r$
observations fall in $B_{2}^{c}$ factorizes, given $P_{B_{1}}$ and
$P_{B_{2}}$.
\begin{lemma}
\label{Qtilde-disintegration} For each $B_{1} \subseteq B_{2}$ in
${\cal A}(u)$ and $A_{1}, \ldots, A_{n} \in {\cal B}$, let
\begin{equation}
\label{tilde-A} \tilde{A}:=\prod_{i=1}^{l}(A_{i} \cap B_{1})
\times \prod_{i=l+1}^{r}(A_{i} \cap C) \times
\prod_{i=r+1}^{n}(A_{i} \cap B_{2}^{c})
\end{equation}
where $C:=B_{2} \verb2\2 B_{1}$ and $0 \leq l \leq r \leq n$. Let
$\tilde{A}_{1}:=\prod_{i=1}^{l}(A_{i} \cap B_{1}) \times {\cal
X}^{n-l}$, $\tilde{A}_{2}:= \prod_{i=l+1}^{r}(A_{i} \cap C) \times
{\cal X}^{n-r+l}$, $\tilde{A}_{3}:= \prod_{i=r+1}^{n}(A_{i} \cap
B_{2}^{c}) \times {\cal X}^{r}$, $\tilde A_{23}:=\tilde A_{2} \cap
\tilde A_{3}$.
(a) For $\mu_{B_{1}}$-almost all $z_{1}$,
$\tilde{Q}_{B_{1}}(z_{1};
\tilde{A})=\tilde{Q}_{B_{1}}(z_{1};\tilde{A}_{1}) \cdot
\tilde{Q}_{B_{1}}(z_{1}; \tilde{A}_{23})$.
(b) For $\mu_{B_{1}B_{2}}$-almost all $(z_{1},z_{2})$,
$$\tilde{Q}_{B_{1}B_{2}}(z_{1},z_{2}; \tilde{A})=\tilde{Q}_{B_{1}}(z_{1};\tilde{A}_{1}) \cdot
\tilde{Q}_{B_{1}B_{2}}(z_{1},z_{2}; \tilde{A}_{2}) \cdot
\tilde{Q}_{B_{2}}(z_{2}; \tilde{A}_{3}).$$
\end{lemma}
\noindent {\bf Proof}: We will prove only (b) since part (a)
follows by a similar argument. Note that the sets $\tilde A$ form
a $\pi$-system generating the $\sigma$-field ${\cal B}^{n}$ on
$B_{1}^{l} \times C^{r-l} \times (B_{2}^{c})^{n-r}$.
Since $\sigma({\cal A})={\cal B}$ and ${\cal A}$ is a
$\pi$-system, using a Dynkin system argument, it is enough to
consider the case $A_{1}, \ldots,A_{n} \in {\cal A}$. Note that
$${\cal E}[\prod_{i=r+1}^{n}P_{A_{i} \cap B_{2}^{c}}|{\cal F}_{B_{2}}] =
{\cal E}[\prod_{i=r+1}^{n}P_{A_{i} \cap B_{2}^{c}}|P_{B_{2}}]=
\tilde{Q}_{B_{2}}(P_{B_{2}};\tilde{A}_{3}).$$
\noindent By double conditioning with respect to ${\cal
F}_{B_{2}}$, we have
$$\tilde{Q}_{B_{1}B_{2}}(z_{1},z_{2}; \tilde{A})={\cal E}[\prod_{i=1}^{l}P_{A_{i} \cap B_{1}} \prod_{i=l+1}^{r}P_{A_{i} \cap C} \prod_{i=r+1}^{n}P_{A_{i} \cap B_{2}^{c}} \ | \ P_{B_{1}}=z_{1},P_{B_{2}}=z_{2}]=$$
$$\tilde{Q}_{B_{2}}(z_{2};\tilde{A}_{3}) \cdot {\cal E}[\prod_{i=1}^{l}P_{A_{i} \cap B_{1}} \cdot \prod_{i=l+1}^{r}P_{A_{i} \cap C} \ | \ P_{B_{1}}=z_{1},P_{B_{2}}=z_{2}].$$
\noindent For the second term we have
$${\cal E}[\prod_{i=1}^{l}P_{A_{i} \cap B_{1}} \prod_{i=l+1}^{r}P_{A_{i} \cap C} \ | \ P_{B_{1}},P_{B_{2}}]=$$
$${\cal E}[\prod_{i=1}^{l}P_{A_{i} \cap B_{1}} {\cal E}[\prod_{i=l+1}^{r}P_{A_{i} \cap C}|(P_{A_{i} \cap B_{1}})_{i \leq l},P_{B_{1}},P_{B_{2}}] \ | \ P_{B_{1}},P_{B_{2}}].$$
\noindent Since $P_{A_{i} \cap C}=P_{B_{1} \cup (A_{i} \cap
B_{2})}-P_{B_{1}}$, using Lemma \ref{lemmaA1} (Appendix A)
$${\cal E}[\prod_{i=l+1}^{r}P_{A_{i} \cap C}|(P_{A_{i} \cap B_{1}})_{i \leq l},P_{B_{1}},P_{B_{2}}]=\tilde{Q}_{B_{1}B_{2}}(P_{B_{1}},P_{B_{2}}; \tilde{A}_{2}).$$
\noindent (In order to use Lemma \ref{lemmaA1}, we need $A_{l+1}
\subseteq A_{l+2} \subseteq \ldots \subseteq A_{r}$. Note that
this is not a restriction since we can consider the minimal
semilattice $\{A'_{1},\ldots,A'_{m}\}$ determined by the sets
$A_{l+1}, \ldots,A_{r}$, which is ordered such that $A'_{j} \not
\subseteq \cup_{l \not = j}A'_{l} \ \forall j$; if we let
$B'_{j}=\cup_{s=1}^{j}A'_{s}$ and $C'_{j}=B'_{j} \verb2\2
B'_{j-1}$, then each $A_{i}= \dot \cup_{j \in J_{i}}C'_{j}$ for
some $J_{i} \subseteq \{1, \ldots,m\}$. We have $A_{i} \cap C=\dot
\cup_{j \in J_{i}}[(B'_{j} \cap C) \verb2\2 (B'_{j-1} \cap C)]$
and $\prod_{i=l+1}^{r}P_{A_{i} \cap C}=h(P_{B'_{1} \cap C},
\ldots, P_{B'_{m} \cap C})$ for some function $h$.)
\noindent Finally, since ${\cal F}_{B_{1}}$ is conditionally
independent of $P_{B_{2}}$ given $P_{B_{1}}$ and $P_{A_{i} \cap
B_{1}}, i \leq l$ are ${\cal F}_{B_{1}}$-measurable, we have
${\cal E}[\prod_{i=1}^{l}P_{A_{i} \cap B_{1}} \ | \
P_{B_{1}},P_{B_{2}}] = {\cal E}[\prod_{i=1}^{l}P_{A_{i} \cap
B_{1}} \ | \ P_{B_{1}}] \linebreak
=\tilde{Q}_{B_{1}}(P_{B_{1}};\tilde{A}_{1})$, which concludes the
proof. $\Box$
\vspace{3mm}
\noindent {\em Note}: Let $\tilde A_{12}:=\tilde A_{1} \cap \tilde
A_{2}$. By a similar argument one can show that
\begin{equation}
\label{Q-tilda-B1-B2'} \tilde{Q}_{B_{1}B_{2}}(z_{1},z_{2};
\tilde{A}_{12})= \tilde{Q}_{B_{1}}(z_{1};\tilde{A}_{1}) \cdot
\tilde{Q}_{B_{1}B_{2}}(z_{1},z_{2}; \tilde{A}_{2})
\end{equation}
\begin{equation}
\label{Q-tilda-B1-B2''} \tilde{Q}_{B_{1}B_{2}}(z_{1},z_{2};
\tilde{A}_{23})= \tilde{Q}_{B_{1}B_{2}}(z_{1},z_{2};\tilde{A}_{2})
\cdot \tilde{Q}_{B_{2}}(z_{2};\tilde{A}_{3})
\end{equation}
The next lemma tells us that if $B_{1} \subseteq B_{2}$ are
``nicely-shaped'' regions and we want to predict the value of
$P_{B_{2}}$ given the value of $P_{B_{1}}$ and a sample
$\underline{X}$ from $P$, then we can forget all about those
values $X_{i}$ which fall inside the region $B_{1}$. The reason
for this phenomenon is the very essence of the Markov property
given by Definition \ref{definition-Q}.(b), which says that for
predicting the value of $P_{B_{2}}$ it suffices to know the value
of $P_{B_{1}}$, i.e. all the information about the values of $P$
inside the region $B_{1}$ can be discarded.
\begin{lemma}
For every $B_{1},B_{2} \in {\cal A}(u)$ with $B_{1} \subseteq
B_{2}$, for every $\Gamma_{2} \in {\cal B}([0,1])$ and for
$\nu_{B_{1}}$-almost all $(\underline{x},z_{1})$,
$Q_{B_{1}B_{2}}^{(\underline{x})}(z_{1}; \Gamma_{2})$ does not
depend on those $x_{i}$'s that fall in $B_{1}$; in particular, for
$\nu_{B_{1}}$-almost all $(\underline{x},z_{1})$ in $B_{1}^{n}
\times [0,1]$, $Q_{B_{1}B_{2}}^{(\underline{x})}(z_{1};
\Gamma_{2})=Q_{B_{1}B_{2}}(z_{1}; \Gamma_{2})$.
\end{lemma}
\noindent {\bf Proof}: Let $A_{1}, \ldots, A_{n} \in {\cal B}$ and
$\tilde{A}$ defined by (\ref{tilde-A}). Using
(\ref{central-Bayes}) and Lemma \ref{Qtilde-disintegration},(b)
combined with (\ref{Q-tilda-B1-B2''}) we have
$$\int_{\tilde{A}} Q_{B_{1}B_{2}}^{(\underline{x})}(z_{1};\Gamma_{2}) \tilde{Q}_{B_{1}}(z_{1};d \underline{x}) =\int_{\Gamma_{2}} \tilde{Q}_{B_{1}B_{2}}(z_{1},z_{2};\tilde{A})Q_{B_{1}B_{2}}(z_{1};dz_{2})=$$
$$\tilde{Q}_{B_{1}}(z_{1};\tilde{A}_{1}) \int_{\Gamma_{2}} \tilde{Q}_{B_{1}B_{2}}(z_{1},z_{2};\tilde{A}_{23})Q_{B_{1}B_{2}}(z_{1};dz_{2})=$$
$$\tilde{Q}_{B_{1}}(z_{1};\tilde{A}_{1})
\int_{\tilde{A}_{23}}
Q_{B_{1}B_{2}}^{(\underline{x})}(z_{1};\Gamma_{2})
\tilde{Q}_{B_{1}}(z_{1};d \underline{x}).$$
\noindent The result follows by Lemma \ref{lemmaA2} (Appendix A)
since on the set $B_{1}^{l} \times C^{r-l} \times
(B_{2}^{c})^{n-r}$, $\tilde{Q}_{B_{1}}(z_{1};\cdot)$ is the
product measure between its marginal with respect to the first $l$
components restricted to $B_{1}^{l}$ and its marginal with respect
to the remaining $n-l$ components restricted to $C^{r-l} \times
(B_{2}^{c})^{n-r}$ (by Lemma \ref{Qtilde-disintegration},(a)).
$\Box$
Here is the main result of the paper.
\begin{theorem}
\label{main} If $P:=(P_{A})_{A \in {\cal B}}$ is a ${\cal
Q}$-Markov random probability measure and $\underline{X}:=(X_{1},
\ldots, X_{n})$ is a sample from $P$, then the family ${\cal
Q}^{(\underline{x})}=(Q_{B_{1}B_{2}}^{(\underline{x})})_{B_{1}
\subseteq B_{2}}$ defined by (\ref{definition-Q-x}) is a posterior
transition system and the conditional distribution of $P$ given
$\underline{X}=\underline{x}$ is ${\cal
Q}^{(\underline{x})}$-Markov.
\end{theorem}
\noindent {\bf Proof}: By Proposition 5 of Balan and Ivanoff
(2002), it is enough to show that $\forall B_{1} \subseteq B_{2}
\subseteq \ldots \subseteq B_{k}$ in ${\cal A}(u)$, $\forall
\tilde{\Gamma} \in {\cal B}([0,1])^{k}$ and for
$\alpha_{n}$-almost all $\underline{x}$
$${\cal P}[(P_{B_{1}}, \ldots,P_{B_{k}}) \in \tilde{\Gamma}|\underline{X}=\underline{x}] =\int_{\tilde{\Gamma}}Q_{B_{k-1}B_{k}}^{(\underline{x})}(z_{k-1};dz_{k}) \ldots
Q_{B_{1}B_{2}}^{(\underline{x})}(z_{1};dz_{2})\mu_{B_{1}}^{(\underline{x})}(dz_{1})$$
or equivalently, for every $\tilde{A} \in {\cal B}^{n}$
\begin{equation}
\label{posterior-chain} {\cal P}(\underline{X} \in \tilde{A},
(P_{B_{j}})_{j} \in \tilde{\Gamma}) = \int_{\tilde{A}}
\int_{\tilde{\Gamma}}Q_{B_{k-1}B_{k}}^{(\underline{x})}(z_{k-1};dz_{k})
\ldots \mu_{B_{1}}^{(\underline{x})}(dz_{1}) \alpha_{n}(d
\underline{x}).
\end{equation}
\noindent Note also that (\ref{posterior-chain}) will imply that
${\cal Q}^{(\underline{x})}$ is a posterior transition system.
For the proof of (\ref{posterior-chain}) we will use an induction
argument on $k \geq 2$. The statement for $k=2$ is exactly
(\ref{disint-nu-B1-B2-n}). Assume that the statement is true for
$k-1$. For each $B_{1} \subseteq B_{2} \subseteq \ldots \subseteq
B_{k}$ in ${\cal A}(u)$ we let $\nu_{B_{1} \ldots B_{k}}$ be the
law of $(X_{1}, \ldots, X_{n}, P_{B_{1}}, \ldots, P_{B_{k}})$
under ${\cal P}$. Note that $\forall A_{1}, \ldots,A_{n} \in {\cal
B}, \forall \Gamma_{1}, \ldots,\Gamma_{k} \in {\cal B}([0,1])$,
$\nu_{B_{1} \ldots B_{k}}(\prod_{i=1}^{n}A_{i} \times
\prod_{j=1}^{k} \Gamma_{j}) = {\cal E}[\prod_{i=1}^{n}P_{A_{i}}
\cdot \prod_{j=1}^{k}I_{\Gamma_{j}}(P_{B_{j}})]$. On the other
hand, $\nu_{B_{1} \ldots B_{k}}(\tilde{A} \times \prod_{j=1}^{k}
\Gamma_{j})$ is also equal to
\begin{equation}
\label{k-sets} \int_{\tilde{A} \times
\prod_{j=1}^{k-1}\Gamma_{j}}Q_{B_{1} \ldots
B_{k}}^{(\underline{x})}(z_{1}, \ldots,z_{k-1};\Gamma_{k})
\nu_{B_{1} \ldots B_{k-1}}(d \underline{x} \times dz_{1} \times
\ldots \times dz_{k-1})=
\end{equation}
$$\int_{\prod_{j=1}^{k}\Gamma_{j}} \tilde{Q}_{B_{1} \ldots B_{k}}(z_{1}, \ldots,z_{k};\tilde{A}) \mu_{B_{1} \ldots B_{k}}(dz_{1} \times \ldots \times dz_{k})$$
where $Q_{B_{1} \ldots B_{k}}^{(\underline{x})}(z_{1},
\ldots,z_{k-1};\Gamma_{k}) := {\cal P}[P_{B_{k}} \in
\Gamma_{k}|\underline{X}=\underline{x}, P_{B_{j}}=z_{j}, j< k]$
and $\tilde{Q}_{B_{1} \ldots B_{k}}(z_{1},
\ldots,z_{k};\prod_{i=1}^{n}A_{i}) := {\cal P}[X_{1} \in A_{1},
\ldots,X_{n} \in A_{n}|P_{B_{j}}=z_{j}, j \leq k]
= {\cal E}[\prod_{i=1}^{n}P_{A_{i}} | P_{B_{j}}=z_{j}, j \leq
k]$.
\noindent Using the induction hypothesis, the measure $\nu_{B_{1}
\ldots B_{k-1}}$ disintegrates as
$$Q_{B_{k-2}B_{k-1}}^{(\underline{x})}(z_{k-2};dz_{k-1}) \ldots Q_{B_{1}B_{2}}^{(\underline{x})}(z_{1};dz_{2}) \mu_{B_{1}}^{(\underline{x})}(dz_{1}) \alpha_{n}(d \underline{x})$$
Therefore, it is enough to prove that for every $\Gamma_{k} \in
{\cal B}([0,1])$ and for $\nu_{B_{1} \ldots B_{k-1}}$-almost all
$(\underline{x},z_{1}, \ldots,z_{k-1})$
\begin{equation}
\label{Markov-property-n-k} Q_{B_{1} \ldots
B_{k}}^{(\underline{x})}(z_{1},
\ldots,z_{k-1};\Gamma_{k})=Q_{B_{k-1}B_{k}}^{(\underline{x})}(z_{k-1};
\Gamma_{k})
\end{equation}
\noindent On the other hand, the measure $\nu_{B_{1} \ldots
B_{k-1}}$ disintegrates also as
$$\tilde{Q}_{B_{1} \ldots B_{k-1}}(z_{1}, \ldots z_{k-1};d \underline{x}) \mu_{B_{1} \ldots B_{k-1}}(dz_{1} \times \ldots \times dz_{k-1})$$
with respect to its marginal $\mu_{B_{1} \ldots B_{k-1}}$ with
respect to the last $k-1$ components. By the ${\cal Q}$-Markov
property, the measure $\mu_{B_{1} \ldots B_{k}}$ disintegrates as
$$Q_{B_{k-1}B_{k}}(z_{k-1};dz_{k})\mu_{B_{1} \ldots B_{k-1}}(dz_{1} \times \ldots \times dz_{k-1}).$$
\noindent Using (\ref{k-sets}) we can conclude that for
$\mu_{B_{1} \ldots B_{k-1}}$-almost all $(z_{1}, \ldots,z_{k-1})$
\begin{equation}
\label{Bayes-n-k} \int_{\tilde{A}} Q_{B_{1} \ldots
B_{k}}^{(\underline{x})}(z_{1}, \ldots,z_{k-1};\Gamma_{k})
\tilde{Q}_{B_{1} \ldots B_{k-1}}(z_{1}, \ldots,z_{k-1};d
\underline{x})=
\end{equation}
$$\int_{\Gamma_{k}} \tilde{Q}_{B_{1} \ldots B_{k}}(z_{1}, \ldots,z_{k}; \tilde{A}) Q_{B_{k-1}B_{k}}(z_{k-1};dz_{k}).$$
Let $C_{1}=B_{1};C_{j}=B_{j} \verb2\2 B_{j-1},j=2,
\ldots,k;C_{k+1}=B_{k}^{c}$. Note that each $C_{j} \in {\cal
C}(u)$ and $(C_{1}, \ldots,C_{k+1})$ is a partition of ${\cal X}$;
hence each point $x_{i}$ falls into exactly one set of this
partition.
We proceed to the proof of (\ref{Markov-property-n-k}) and we will
suppose that for some $0 \leq l \leq r \leq n$, the points $x_{1},
\ldots,x_{l}$ fall into $B_{k-1}$ (more precisely, each $x_{i}$
falls into some $C_{j_{i}}$ with $1 \leq j_{1} < \ldots < j_{l}
\leq k-1$), the points $x_{l+1}, \ldots,x_{r}$ fall into $C_{k}$
and the points $x_{r+1}, \ldots,x_{n}$ fall into $C_{k+1}$.
The main tool will be (\ref{Bayes-n-k}) where we will consider a
set $\tilde{A}$ of the form $$\tilde{A}:=\prod_{i=1}^{l}(A_{i}
\cap C_{j_{i}}) \times \prod_{i=l+1}^{r}(A_{i} \cap C_{k}) \times
\prod_{i=r+1}^{n}(A_{i} \cap C_{k+1}), \ A_{i} \in {\cal B}.$$
Let $\tilde{A}_{2}:=\prod_{i=l+1}^{r}(A_{i} \cap C_{k}) \times {\cal X}^{n-r+l}, \tilde{A}_{3}:=\prod_{i=r+1}^{n}(A_{i} \cap C_{k+1}) \times {\cal X}^{r}$ and $\tilde{A}_{23}:=\tilde{A}_{2} \cap \tilde{A}_{3}$. We will prove that
\begin{equation}
\label{Q-tilda-1,k} \tilde{Q}_{B_{1} \ldots B_{k}}(z_{1},
\ldots,z_{k}; \tilde{A})=M \cdot
\tilde{Q}_{B_{k-1}B_{k}}(z_{k-1},z_{k}; \tilde{A}_{23})
\end{equation}
\begin{equation}
\label{Q-tilda-1,k-1} \tilde{Q}_{B_{1} \ldots B_{k-1}}(z_{1},
\ldots,z_{k-1}; \tilde{A})=M \cdot \tilde{Q}_{B_{k-1}}(z_{k-1};
\tilde{A}_{23})
\end{equation}
where $M:=\prod_{i=1}^{l}
\tilde{Q}_{B_{j_{i}-1}B_{j_{i}}}(z_{j_{i}-1},z_{j_{i}}; (A_{i}
\cap C_{j_{i}}) \times {\cal X}^{n-1})$. Then we will have
$$\int_{\tilde{A}} Q_{B_{1} \ldots B_{k}}^{(\underline{x})}(z_{1}, \ldots,z_{k-1};\Gamma_{k}) \tilde{Q}_{B_{1} \ldots B_{k-1}}(z_{1}, \ldots,z_{k-1};d \underline{x})=$$
$$M \cdot \int_{\Gamma_{k}} \tilde{Q}_{B_{k-1}B_{k}}(z_{k-1},z_{k}; \tilde{A}_{23})Q_{B_{k-1}B_{k}}(z_{k-1};dz_{k})=$$
$$M \cdot \int_{\tilde{A}_{23}}Q_{B_{k-1}B_{k}}^{(\underline{x})}(z_{k-1};\Gamma_{k}) \tilde{Q}_{B_{k-1}}(z_{k-1};d \underline{x})=$$
$$\int_{\tilde{A}}Q_{B_{k-1}B_{k}}^{(\underline{x})}(z_{k-1};\Gamma_{k})\tilde{Q}_{B_{1} \ldots B_{k-1}}(z_{1}, \ldots,z_{k-1};d \underline{x})$$
where we used (\ref{Bayes-n-k}) and (\ref{Q-tilda-1,k}) for the
first equality, (\ref{central-Bayes}) for the second equality and
(\ref{Q-tilda-1,k-1}) for the third equality (taking in account
that $Q_{B_{k-1}B_{k}}^{(\underline{x})}(z_{k-1};\Gamma_{k})$ does
not depend on $x_{1}, \ldots,x_{l}$). Relation
(\ref{Markov-property-n-k}) will follow immediately.
It remains to prove (\ref{Q-tilda-1,k}) and (\ref{Q-tilda-1,k-1}).
Using Lemma 3 of Balan and Ivanoff (2002) we have (for $A_{i} \in
{\cal A}$):
$${\cal E}[\prod_{i=r+1}^{n}P_{A_{i} \cap C_{k+1}}|{\cal F}_{B_{k}}] = {\cal E}[\prod_{i=r+1}^{n}P_{A_{i} \cap C_{k+1}}|P_{B_{k}}] = \tilde{Q}_{B_{k}}(P_{B_{k}}; \tilde{A}_{3})$$
\noindent and therefore, by double conditioning with respect to
${\cal F}_{B_{k}}$
$$\tilde{Q}_{B_{1} \ldots B_{k}}((z_{j})_{j \leq k}; \tilde{A})=
{\cal E}[\prod_{i=1}^{l}P_{A_{i} \cap C_{j_{i}}}
\prod_{i=l+1}^{r}P_{A_{i} \cap C_{k}} \prod_{i=r+1}^{n}P_{A_{i}
\cap C_{k+1}}|P_{B_{j}}=z_{j},j \leq k]$$
$$=\tilde{Q}_{B_{k}}(z_{k}; \tilde{A}_{3}) \cdot
{\cal E}[\prod_{i=1}^{l}P_{A_{i} \cap C_{j_{i}}}
\prod_{i=l+1}^{r}P_{A_{i} \cap C_{k}} \ | \ P_{B_{j}}=z_{j},j \leq
k].$$
\noindent For the second term we have
$${\cal E}[\prod_{i=1}^{l}P_{A_{i} \cap C_{j_{i}}} \prod_{i=l+1}^{r}P_{A_{i} \cap C_{k}} \ | \ P_{B_{j}},j \leq k]=$$
$${\cal E}[\prod_{i=1}^{l}P_{A_{i} \cap C_{j_{i}}} \cdot {\cal E}[\prod_{i=l+1}^{r}P_{A_{i} \cap C_{k}}|P_{ B_{j_{i}-1} \cup (A_{i} \cap B_{j_{i}}) },i \leq l;P_{B_{j}},j \leq k] \ | \ P_{B_{j}},j \leq k].$$
\noindent Since $P_{A_{i} \cap C_{k}}=P_{B_{k-1} \cup (A_{i} \cap
B_{k})}-P_{B_{k-1}}$, using Lemma \ref{lemmaA1} (Appendix A)
$${\cal E}[\prod_{i=l+1}^{r}P_{A_{i} \cap C_{k}}|P_{B_{j_{i}-1} \cup (A_{i} \cap B_{j_{i}}) },i \leq l;P_{B_{j}}, j \leq k] =\tilde{Q}_{B_{k-1}B_{k}}(P_{B_{k-1}},P_{B_{k}}; \tilde{A}_{2}).$$
\noindent (In order to use Lemma \ref{lemmaA1} we need $A_{l+1}
\subseteq A_{l+2} \subseteq \ldots \subseteq A_{r}$, but this is
not a restriction as we have seen in the proof of Lemma
\ref{Qtilde-disintegration}.)
\noindent Note that by (\ref{Q-tilda-B1-B2''}),
$\tilde{Q}_{B_{k-1}B_{k}}(z_{k-1},z_{k}; \tilde{A}_{2}) \cdot
\tilde{Q}_{B_{k}}(z_{k};
\tilde{A}_{3})=\tilde{Q}_{B_{k-1}B_{k}}(z_{k-1},z_{k};
\tilde{A}_{23})$. Hence the proof of (\ref{Q-tilda-1,k}) will be
complete once we show that
\begin{equation}
\label{tilda-Q-M} {\cal E}[\prod_{i=1}^{l}P_{A_{i} \cap C_{j_{i}}}
\ | \ P_{B_{j}}, j \leq k]=\prod_{i=1}^{l}
\tilde{Q}_{B_{j_{i}-1}B_{j_{i}}}(z_{j_{i}-1},z_{j_{i}}; (A_{i}
\cap C_{j_{i}}) \times {\cal X}^{n-1}).
\end{equation}
But this follows by induction on $l$, using Lemma \ref{lemmaA1}
(Appendix A).
We turn now to the proof of (\ref{Q-tilda-1,k-1}). Using Lemma 3
of Balan and Ivanoff (2002) we have:
$${\cal E}[\prod_{i=l+1}^{r}P_{A_{i} \cap C_{k}} \prod_{i=r+1}^{n}P_{A_{i} \cap C_{k+1}}|{\cal F}_{B_{k-1}}] =\tilde{Q}_{B_{k-1}}(P_{B_{k-1}}; \tilde{A}_{23})$$
\noindent and therefore, by double conditioning with respect to
${\cal F}_{B_{k-1}}$ we obtain the following expression for
$\tilde{Q}_{B_{1} \ldots B_{k-1}}(z_{1}, \ldots,z_{k-1};
\tilde{A}_{23})$:
$${\cal E}[\prod_{i=1}^{l}P_{A_{i} \cap C_{j_{i}}} \prod_{i=l+1}^{r}P_{A_{i} \cap C_{k}} \prod_{i=r+1}^{n}P_{A_{i} \cap C_{k+1}}|P_{B_{j}}=z_{j},j \leq k-1]=$$
$$\tilde{Q}_{B_{k-1}}(z_{k-1}; \tilde{A}_{23}) \cdot
{\cal E}[\prod_{i=1}^{l}P_{A_{i} \cap C_{j_{i}}} \ | \
P_{B_{j}}=z_{j},j \leq k-1]$$ and (\ref{Q-tilda-1,k-1}) follows,
using (\ref{tilda-Q-M}). The proof of the theorem is complete.
$\Box$
\vspace{3mm}
The posterior distribution of a Dirichlet process is also
Dirichlet. In the case of an empirical measure which corresponds
to a sample either from a non-random distribution or from a
Dirichlet process, the calculations for the posterior transition
probabilities $Q_{B_{1}B_{2}}^{(\underline{x})}$ are not
straightforward for samples of size greater than $1$; however, in
the case of a sample of size $1$ we have the following result.
\begin{proposition}
If $P:=(P_{A})_{A \in {\cal B}}$ is the empirical measure of a
sample of size $N$ from a non-random distribution $P_{0}$
(respectively from a Dirichlet process with parameter measure
$\alpha$) and $X$ is a sample of size $1$ from $P$, then the
conditional distribution of $P$ given $X=x$ is ${\cal
Q}^{(x)}$-Markov with
$$Q_{B_{1}B_{2}}^{(x)} \left(\frac{m_{1}}{N}; \left\{ \frac{m_{2}}{N} \right\}\right)=
Q_{B_{1}B_{2}}^{(1)} \left(\frac{m_{1}- \delta_{x}(B_{1})}{N-1};
\left\{ \frac{m_{2}- \delta_{x}(B_{2})}{N-1} \right\}\right)$$
where ${\cal Q}^{(1)}$ is the transition system of the empirical
measure of a sample of size $N-1$ from $P_{0}$ (respectively from
a Dirichlet process with parameter measure $\alpha$).
\end{proposition}
\noindent {\bf Proof}: Let $P$ be the empirical measure of a
sample from a non-random distribution $P_{0}$. Note that
$\alpha_{1}(A)={\cal E}[P_{A}]=P_{0}(A), \forall A \in {\cal B}$.
We have
$$Q_{B_{1}B_{2}}^{(x)} \left(\frac{m_{1}}{N}; \left\{ \frac{m_{2}}{N} \right\}\right)=
Q_{B_{1}B_{2}} \left(\frac{m_{1}}{N}; \left\{ \frac{m_{2}}{N}
\right\}\right)= Q_{B_{1}B_{2}}^{(1)} \left(\frac{m_{1}-1}{N-1};
\left\{ \frac{m_{2}-1}{N-1} \right\}\right)$$ for
$\alpha_{1}$-almost all $x \in B_{1}$. The fact that
$$Q_{B_{1}B_{2}}^{(x)} \left(\frac{m_{1}}{N}; \left\{ \frac{m_{2}}{N} \right\}\right)=
Q_{B_{1}B_{2}}^{(1)} \left(\frac{m_{1}}{N-1}; \left\{
\frac{m_{2}-1}{N-1} \right\}\right)$$ for $\alpha_{1}$-almost all
$x \in C$ follows from (\ref{central-Bayes}),
since for every $A \in {\cal B}$
$$\tilde{Q}_{B_{1}}\left( \frac{m_{1}}{N};A \cap C \right)={\cal E}[P_{A \cap C}|P_{B_{1}}=\frac{m_{1}}{N}]=\frac{N-m_{1}}{N} \cdot \frac{P_{0}(A \cap C)}{P_{0}(B_{1}^{c})}$$
$$\tilde{Q}_{B_{1}B_{2}}\left( \frac{m_{1}}{N},\frac{m_{2}}{N};A \cap C \right)={\cal E}[P_{A \cap C}|P_{B_{1}}=\frac{m_{1}}{N},P_{B_{2}}=\frac{m_{2}}{N}]=\frac{m_{2}-m_{1}}{N} \cdot \frac{P_{0}(A \cap C)}{P_{0}(C)}.$$
\noindent Similarly one can show that
$$\tilde{Q}_{B_{1}}\left( \frac{m_{1}}{N};A \cap B_{2}^{c} \right)=\frac{N-m_{1}}{N} \cdot \frac{P_{0}(A \cap B_{2}^{c})}{P_{0}(B_{1}^{c})}$$
$$\tilde{Q}_{B_{1}B_{2}}\left( \frac{m_{1}}{N},\frac{m_{2}}{N};A \cap B_{2}^{c} \right)=\frac{N-m_{2}}{N} \cdot \frac{P_{0}(A \cap B_{2}^{c})}{P_{0}(B_{2}^{c})}$$
and hence
$$Q_{B_{1}B_{2}}^{(x)} \left(\frac{m_{1}}{N}; \left\{ \frac{m_{2}}{N} \right\}\right)=
Q_{B_{1}B_{2}}^{(1)} \left(\frac{m_{1}}{N-1}; \left\{
\frac{m_{2}}{N-1} \right\}\right)$$ for $\alpha_{1}$-almost all
$x$ in $B_{2}^{c}$.
If $P$ is the empirical measure of a sample from a Dirichlet
process with parameter measure $\alpha$, then
$\alpha_{1}(A)=\alpha(A)/\alpha({\cal X})$ and a similar argument
can be used. $\Box$
\section{Neutral to the right random probability measures}
Let $P:=(P_{A})_{A \in {\cal B}}$ be a random probability measure
on ${\cal X}$. For every sets $B_{1},B_{2} \in {\cal A}(u)$ with
$B_{1} \subseteq B_{2}$, we define $V_{B_{1}B_{2}}$ to be equal to
$(P_{B_{2}}-P_{B_{1}})/(1-P_{B_{1}})$ on the set $\{P_{B_{1}}<1\}$
and $1$ elsewhere; let $F_{B_{1}B_{2}}$ be the distribution of
$V_{B_{1}B_{2}}$. The next definition generalizes the definition
of Doksum (1974).
\begin{definition}
{\rm A random probability measure $P:=(P_{A})_{A \in {\cal B}}$ is
called {\em neutral to the right} if for every sets $B_{1}
\subseteq \ldots \subseteq B_{k}$ in ${\cal A}(u)$, $P_{B_{1}},
V_{B_{1}B_{2}}, \ldots,V_{B_{k-1}B_{k}}$ are independent.}
\end{definition}
{\em Comments}: 1. A random probability measure $P:=(P_{A})_{A \in
{\cal B}}$ is neutral to the right if and only if $\forall
B_{1},B_{2} \in {\cal A}(u),B_{1} \subseteq B_{2}$,
$V_{B_{1}B_{2}}$ is independent of ${\cal F}_{B_{1}}$.
2. The Dirichlet process with parameter measure $\alpha$ is
neutral to the right with $F_{B_{1}B_{2}}$ equal to the Beta
distribution with parameters $\alpha(B_{2} \verb2\2 B_{1}),
\alpha(B_{2}^{c})$.
3. If we denote $C_{1}=B_{1}; C_{i}=B_{i} \verb2\2 B_{i-1};i=2,
\ldots,k$, then $(P_{C_{1}}, \ldots,P_{C_{k}})$ has a `completely
neutral' distribution (see Definition \ref{completely-neutral});
this distribution was formally introduced by Connor and Mosimann
(1969), although the concept itself goes back to Halmos (1944).
Note that the Dirichlet process is the only non-trivial process
which has completely neutral distributions over any disjoint sets
$\{A_{1}, \ldots,A_{k}\}$ in ${\cal B}$ (according to Ferguson
1974, p. 622).
4. In general, the process $Y_{A}:=- \ln(1-P_{A}), A \in {\cal B}$
is not additive and hence it does not have independent increments,
even if $Y_{B_{1}}, Y_{B_{2}}-Y_{B_{1}}, \ldots,
Y_{B_{k}}-Y_{B_{k-1}}$ are independent for any sets $B_{1}
\subseteq B_{2} \subseteq \ldots \subseteq B_{k}$ in ${\cal A}(u)$
(the increment $Y_{B_{2} \verb2\2 B_{1}}$ is not equal to
$Y_{B_{2}}-Y_{B_{1}}$); therefore, the theory of processes with
independent increments cannot be used in higher dimensions.
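\vspace{2mm}
\noindent {\em Numerical check}: the independence structure in
Comment 2 is easy to verify by simulation. The sketch below (the
$\alpha$-masses are illustrative) samples the Dirichlet weights of
the cells $B_{1}$, $B_{2} \verb2\2 B_{1}$ and $B_{2}^{c}$, and
checks that $P_{B_{1}}$ and $V_{B_{1}B_{2}}$ are uncorrelated
(also after non-linear transformations) and that $V_{B_{1}B_{2}}$
has the stated Beta law.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
# alpha-masses of the cells B1, B2\B1, B2^c (illustrative values)
a1, a2, a3 = 2.0, 3.0, 5.0
P = rng.dirichlet([a1, a2, a3], size=200000)
PB1 = P[:, 0]                 # P_{B1}
V = P[:, 1] / (1.0 - PB1)     # V_{B1B2} = (P_{B2}-P_{B1})/(1-P_{B1})
# independence: correlations of transforms are close to 0
print(np.corrcoef(PB1, V)[0, 1],
      np.corrcoef(PB1 ** 2, np.log1p(-V))[0, 1])
# V_{B1B2} ~ Beta(a2, a3): compare the first moment
print(V.mean(), a2 / (a2 + a3))
\end{verbatim}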
\begin{proposition}
\label{neutral-QMarkov} A neutral to the right random probability
measure is ${\cal Q}$-Markov with
\begin{equation}
\label{trans-neutral} Q_{B_{1}B_{2}}(z_{1}; \Gamma_{2}):=
\left\{
\begin{array}{ll}
F_{B_{1}B_{2}} \left( \frac{\Gamma_{2}-z_{1}}{1-z_{1}} \right) & \mbox{if $z_{1}<1$} \\
\delta_{1}(\Gamma_{2}) & \mbox{if $z_{1}=1$}
\end{array}
\right.
\end{equation}
\end{proposition}
\noindent {\bf Proof}: For any sets $B_{1} \subseteq \ldots
\subseteq B_{k}$ in ${\cal A}(u)$, $P_{B_{1}}, \ldots, P_{B_{k}}$
is a Markov chain:
\begin{eqnarray*}
\lefteqn{ {\cal P}[P_{B_{j}} \in \Gamma_{j}|P_{B_{1}}=z_{1}, \ldots, P_{B_{j-1}}=z_{j-1}] = } \\
& & {\cal P} [ V_{B_{j-1}B_{j}} \in \frac{\Gamma_{j}-z_{j-1}}{1-z_{j-1}}|P_{B_{1}}=z_{1}, V_{B_{1}B_{2}}=v_{2}, \ldots, V_{B_{j-2}B_{j-1}}=v_{j-1} ] = \\
& & {\cal P} [ V_{B_{j-1}B_{j}} \in \frac{\Gamma_{j}-z_{j-1}}{1-z_{j-1}} ] =
{\cal P} [ V_{B_{j-1}B_{j}} \in \frac{\Gamma_{j}-z_{j-1}}{1-z_{j-1}}|P_{B_{j-1}}=z_{j-1}] = \\
& & {\cal P}[P_{B_{j}} \in \Gamma_{j}|P_{B_{j-1}}=z_{j-1}]
\end{eqnarray*}
where $v_{i}:=(z_{i}-z_{i-1})/(1-z_{i-1}),i=2, \ldots,j-1$ and
assuming $z_{i}<1, \forall i$. $\Box$
\vspace{2mm}
For any sets $B_{1} \subseteq B_{2} \subseteq B_{3}$ in ${\cal
A}(u)$,
$V_{B_{1}B_{3}}=V_{B_{1}B_{2}}+V_{B_{2}B_{3}}-V_{B_{1}B_{2}} \cdot
V_{B_{2}B_{3}}$; indeed, on the set $\{P_{B_{2}}<1\}$,
$$1-V_{B_{1}B_{3}}=\frac{1-P_{B_{3}}}{1-P_{B_{1}}}=
\frac{1-P_{B_{2}}}{1-P_{B_{1}}} \cdot \frac{1-P_{B_{3}}}{1-P_{B_{2}}}
=(1-V_{B_{1}B_{2}})(1-V_{B_{2}B_{3}}).$$
This leads us to the following definition.
\begin{definition}
\label{definition-F} For each $B_{1}, B_{2} \in {\cal A}(u)$ with
$B_{1} \subseteq B_{2}$, let $F_{B_{1}B_{2}}$ be a probability
measure on $[0,1]$. The family $(F_{B_{1}B_{2}})_{B_{1} \subseteq
B_{2}}$ is called a {\bf neutral to the right system} if $\forall
B_{1} \subseteq B_{2} \subseteq B_{3}$ in ${\cal A}(u)$
$$F_{B_{1}B_{3}}(\Gamma)=\int_{[0,1]^{2}} I_{\Gamma}(y+z-yz) F_{B_{2}B_{3}}(dz) F_{B_{1}B_{2}}(dy).$$
\end{definition}
{\em Comments}: 1. If we let $U_{B_{1}B_{2}}:=-
\ln(1-V_{B_{1}B_{2}})$ and $G_{B_{1}B_{2}}$ be the distribution of
$U_{B_{1}B_{2}}$, then for every $B_{1} \subseteq B_{2} \subseteq
B_{3}$ in ${\cal A}(u)$,
$U_{B_{1}B_{3}}=U_{B_{1}B_{2}}+U_{B_{2}B_{3}}$ and
$G_{B_{1}B_{3}}=G_{B_{1}B_{2}} \ast G_{B_{2}B_{3}}$.
2. Let $Q_{B_{1}B_{2}}(z_{1};
\Gamma_{2}):=F_{B_{1}B_{2}}((\Gamma_{2}-z_{1})/(1-z_{1}))$ for
$z_{1}<1$ and $Q_{B_{1}B_{2}}(1; \cdot)=\delta_{1}$; then
$(F_{B_{1}B_{2}})_{B_{1} \subseteq B_{2}}$ is a neutral to the
right system if and only if $(Q_{B_{1}B_{2}})_{B_{1} \subseteq
B_{2}}$ is a transition system.
\vspace{2mm}
The following result is the converse of Proposition
\ref{neutral-QMarkov}.
\begin{proposition}
\label{QMarkov-neutral} If $P:=(P_{A})_{A \in {\cal B}}$ is a
${\cal Q}$-Markov random probability measure with a transition
system ${\cal Q}$ given by (\ref{trans-neutral}) for a neutral to
the right system $(F_{B_{1}B_{2}})_{B_{1} \subseteq B_{2}}$, then
$P$ is neutral to the right.
\end{proposition}
\noindent {\bf Proof}: We want to prove that for every $B_{1},
B_{2} \in {\cal A}(u)$ with $B_{1} \subseteq B_{2}$ and for every
$A_{1}, \ldots, A_{k} \in {\cal A}, A_{i} \subseteq
B_{1},A_{k}=B_{1}$, $V_{B_{1}B_{2}}$ is independent of $(P_{A_{1}},
\ldots,P_{A_{k}})$. Using the ${\cal Q}$-Markov property we have:
${\cal P}[V_{B_{1}B_{2}} \in \Gamma|P_{A_{i}}=z_{i}; i=1,\ldots,k]
= {\cal P}[P_{B_{2}} \in z_{k}+(1-z_{k})\Gamma|P_{B_{1}}=z_{k}]=
Q_{B_{1}B_{2}}(z_{k}; z_{k}+(1-z_{k})\Gamma)=
F_{B_{1}B_{2}}(\Gamma)={\cal P}(V_{B_{1}B_{2}} \in \Gamma)$. Since
this holds for any Borel set $\Gamma$ in $[0,1]$, the proof is
complete. $\Box$
In what follows we will prove that the posterior distribution of a
neutral to the right random probability measure is also neutral to
the right, by showing that the posterior transition probabilities
$Q_{B_{1}B_{2}}^{(\underline{x})}$ are of the form
(\ref{trans-neutral}) for a ``posterior'' neutral to the right
system $(F_{B_{1}B_{2}}^{(\underline{x})})_{B_{1} \subseteq
B_{2}}$. This extends Doksum's (1974) result to an arbitrary space
${\cal X}$, which can be endowed with an indexing collection
${\cal A}$.
Let $P:=(P_{A})_{A \in {\cal B}}$ be a neutral to the right
process and $\underline{X}:=(X_{1}, \ldots, X_{n})$ a sample from
$P$. In order to define the probability measures
$F_{B_{1}B_{2}}^{(\underline{x})}$ we will use the same Bayesian
technique as in Section 3.
For each sets $B_{1},B_{2} \in {\cal A}(u);B_{1} \subseteq B_{2}$,
let $\phi_{B_{1}B_{2}}$ be the law of $(X_{1}, \ldots,X_{n},
\linebreak V_{B_{1}B_{2}})$ under ${\cal P}$. Note that
$\phi_{B_{1}B_{2}}(\prod_{i=1}^{n}A_{i} \times \Gamma)={\cal
E}[\prod_{i=1}^{n}P_{A_{i}} \cdot I_{\Gamma}(V_{B_{1}B_{2}})]$. On
the other hand, we have
\begin{equation}
\label{disint-phi} \phi_{B_{1}B_{2}}(\tilde{A} \times \Gamma)=
\int_{\tilde{A}} F_{B_{1}B_{2}}^{(\underline{x})}(\Gamma)
\alpha_{n}(d \underline{x})
= \int_{\Gamma} \tilde{T}_{B_{1}B_{2}}(z; \tilde{A}) F_{B_{1}B_{2}}(dz)
\end{equation}
where
\begin{equation}
\label{definition-F-x}
F_{B_{1}B_{2}}^{(\underline{x})}(\Gamma):={\cal P}[V_{B_{1}B_{2}}
\in \Gamma|\underline{X}= \underline{x}]
\end{equation}
and $\tilde{T}_{B_{1}B_{2}}(z; \tilde{A}):={\cal P}[\underline{X}
\in \tilde{A}|V_{B_{1}B_{2}}=z]$.
In the proof of Theorem \ref{main-neutral} we will see that
$(F_{B_{1}B_{2}}^{(\underline{x})})_{B_{1} \subseteq B_{2}}$ may
not be a genuine neutral to the right system as introduced by
Definition \ref{definition-F}. Therefore we need to introduce the
following terminology.
\begin{definition}
The family $(F_{B_{1}B_{2}}^{(\underline{x})})_{B_{1} \subseteq
B_{2}}$ defined by (\ref{definition-F-x}) is called a {\bf
posterior neutral to the right system} (corresponding to $P$ and
$\underline{X}$) if $\forall B_{1} \subseteq B_{2} \subseteq
B_{3}$ in ${\cal A}(u)$, $\forall \Gamma \in {\cal B}([0,1])$ and
for $\alpha_{n}$-almost all $\underline{x}$
$$F_{B_{1}B_{3}}^{(\underline{x})}(\Gamma)= \int_{[0,1]^{2}}
I_{\Gamma}(y+z-yz)
F_{B_{2}B_{3}}^{(\underline{x})}(dz)F_{B_{1}B_{2}}^{(\underline{x})}(dy).$$
\noindent The conditional distribution of $P$ given
$\underline{X}=\underline{x}$ is called {\bf neutral to the right}
if $\forall B_{1} \subseteq B_{2}$ in ${\cal A}(u)$,
$V_{B_{1}B_{2}}$ is conditionally independent of ${\cal
F}_{B_{1}}$ given $\underline{X}$.
\end{definition}
Let $C:=B_{2} \verb2\2 B_{1}$. For fixed $0 \leq l \leq r
\leq n$ we will consider sets of the form
$\tilde{A}_{23}:=\prod_{i=l+1}^{r}(A_{i} \cap C) \times
\prod_{i=r+1}^{n}(A_{i} \cap B_{2}^{c})$, where $A_{i} \in {\cal
B}$.
\begin{lemma}
\label{tildeQ-neutral-23} (a) For $\mu_{B_{1}}$-almost all
$z_{1}$,
$$\tilde{Q}_{B_{1}}(z_{1}; \tilde{A}_{23}) = \frac{(1-z_{1})^{n-l}}{\alpha_{n}((B_{1}^{c})^{n-l} \times {\cal X}^{l})} \cdot \alpha_{n}(\tilde{A}_{23}).$$
(b) For $\mu_{B_{1}B_{2}}$-almost all $(z_{1},z_{2})$,
$$\tilde{Q}_{B_{1}B_{2}}(z_{1},z_{2}; \tilde{A}_{23}) = \frac{(1-z_{1})^{n-l}}{\alpha_{n}((B_{1}^{c})^{n-l} \times {\cal X}^{l})} \cdot \tilde{T}_{B_{1}B_{2}}
\left( \frac{z_{2}-z_{1}}{1-z_{1}};\tilde{A}_{23}\right).$$
\end{lemma}
\noindent {\bf Proof}: Without loss of generality we will assume
that $A_{i} \in {\cal A}, \forall i$. We have
\begin{equation}
\label{neutral-Q-tilde} \prod_{i=l+1}^{r}P_{A_{i} \cap C} \cdot
\prod_{i=r+1}^{n}P_{A_{i} \cap B_{2}^{c}} = (1-P_{B_{1}})^{n-l}
\cdot \prod_{i=l+1}^{r} \frac{P_{A_{i} \cap C}}{1-P_{B_{1}}} \cdot
\prod_{i=r+1}^{n} \frac{P_{A_{i} \cap B_{2}^{c}}}{1-P_{B_{1}}}.
\end{equation}
\noindent Note that $P_{A_{i} \cap
C}/(1-P_{B_{1}})=V_{B_{1},(A_{i} \cap B_{2}) \cup B_{1}}, P_{A_{i}
\cap B_{2}^{c}}/(1-P_{B_{1}})=V_{B_{1}, A_{i} \cup
B_{2}}-V_{B_{1}B_{2}}$ and $P_{B_{1}}$ is independent of
$V_{B_{1}, (A_{i} \cap B_{2}) \cup B_{1}}, i=l+1, \ldots, r,
V_{B_{1}B_{2}}$ and $V_{B_{1},A_{i} \cup B_{2}}, i=r+1, \ldots,
n$.
\noindent (a) Take ${\cal E}[ \ \cdot \ ]$, respectively ${\cal
E}[\ \cdot \ |P_{B_{1}}=z_{1}]$ in (\ref{neutral-Q-tilde}); we get
\begin{equation}
\label{expect-tildeA23} {\cal E}[\prod_{i=l+1}^{r} \frac{P_{A_{i}
\cap C}}{1-P_{B_{1}}} \cdot \prod_{i=r+1}^{n} \frac{P_{A_{i} \cap
B_{2}^{c}}}{1-P_{B_{1}}}]=
\frac{\alpha_{n}(\tilde{A}_{23})}{\alpha_{n}((B_{1}^{c})^{n-l}
\times {\cal X}^{l})}
\end{equation}
$$\tilde{Q}_{B_{1}}(z_{1}; \tilde{A}_{23})=(1-z_{1})^{n-l} \cdot {\cal E} [\prod_{i=l+1}^{r} \frac{P_{A_{i} \cap C}}{1-P_{B_{1}}} \cdot \prod_{i=r+1}^{n} \frac{P_{A_{i} \cap B_{2}^{c}}}{1-P_{B_{1}}}]=\frac{(1-z_{1})^{n-l} \alpha_{n}(\tilde{A}_{23})}{\alpha_{n}((B_{1}^{c})^{n-l} \times {\cal X}^{l})}.$$
\noindent (b)
Take ${\cal E}[\ \cdot \ |V_{B_{1}B_{2}}=z]$, respectively ${\cal
E}[\ \cdot \ |P_{B_{1}}=z_{1},P_{B_{2}}=z_{2}]$ in
(\ref{neutral-Q-tilde}); we get
\begin{equation}
\label{cond-expect-tildeA23} {\cal
E}[\prod_{i=l+1}^{r}\frac{P_{A_{i} \cap C}}{1-P_{B_{1}}}
\prod_{i=r+1}^{n} \frac{P_{A_{i} \cap
B_{2}^{c}}}{1-P_{B_{1}}}|V_{B_{1}B_{2}}=z]=\frac{\tilde{T}_{B_{1}B_{2}}(z;\tilde{A}_{23})}{\alpha_{n}((B_{1}^{c})^{n-l}
\times {\cal X}^{l})}
\end{equation}
$$\tilde{Q}_{B_{1}B_{2}}(z_{1},z_{2}; \tilde{A}_{23})=(1-z_{1})^{n-l}{\cal E} [\prod_{i=l+1}^{r}\frac{P_{A_{i} \cap C}}{1-P_{B_{1}}} \prod_{i=r+1}^{n}\frac{P_{A_{i} \cap B_{2}^{c}}}{1-P_{B_{1}}}|V_{B_{1}B_{2}}=\frac{z_{2}-z_{1}}{1-z_{1}}]$$
$$=(1-z_{1})^{n-l} \cdot \frac{1}{\alpha_{n}((B_{1}^{c})^{n-l} \times {\cal X}^{l})} \cdot \tilde{T}_{B_{1}B_{2}} \left(\frac{z_{2}-z_{1}}{1-z_{1}};\tilde{A}_{23} \right)$$
which concludes the proof. $\Box$
\begin{lemma}
For every $B_{1},B_{2} \in {\cal A}(u)$ with $B_{1} \subseteq
B_{2}$, for every $\Gamma \in {\cal B}([0,1])$ and for
$\alpha_{n}$-almost all $\underline{x}$,
$F_{B_{1}B_{2}}^{(\underline{x})}(\Gamma)$ does not depend on
those $x_{i}$'s that fall in $B_{1}$; in particular, for
$\alpha_{n}$-almost all $\underline{x}$ in $B_{1}^{n}$,
$F_{B_{1}B_{2}}^{(\underline{x})}(\Gamma)=F_{B_{1}B_{2}}(\Gamma)$.
\end{lemma}
\noindent {\bf Proof}: For arbitrary $A_{1}, \ldots, A_{n} \in
{\cal A}$ we write
\begin{eqnarray*}
\lefteqn{\prod_{i=1}^{l}P_{A_{i} \cap B_{1}} \prod_{i=l+1}^{r}P_{A_{i} \cap C} \prod_{i=r+1}^{n}P_{A_{i} \cap B_{2}^{c}} = } \\
& & (1-P_{B_{1}})^{n-l} \prod_{i=1}^{l}P_{A_{i} \cap B_{1}} \prod_{i=l+1}^{r} \frac{P_{A_{i} \cap C}}{1-P_{B_{1}}} \prod_{i=r+1}^{n} \frac{P_{A_{i} \cap B_{2}^{c}}}{1-P_{B_{1}}}.
\end{eqnarray*}
\noindent Taking ${\cal E}[\ \cdot \ ], {\cal E}[\ \cdot \
|V_{B_{1}B_{2}}=z]$ and using (\ref{expect-tildeA23}),
respectively (\ref{cond-expect-tildeA23}) we get
$$\alpha_{n}(\tilde{A})=\frac{\alpha_{n}(\prod_{i=1}^{l}(A_{i} \cap B_{1}) \times (B_{1}^{c})^{n-l})}{\alpha_{n}((B_{1}^{c})^{n-l} \times {\cal X}^{l})} \cdot \alpha_{n}(\tilde{A}_{23})$$
$$\tilde{T}_{B_{1}B_{2}}(z; \tilde{A})=\frac{\alpha_{n}(\prod_{i=1}^{l}(A_{i} \cap B_{1}) \times (B_{1}^{c})^{n-l})}{\alpha_{n}((B_{1}^{c})^{n-l} \times {\cal X}^{l})} \cdot \tilde{T}_{B_{1}B_{2}}(z; \tilde{A}_{23}).$$
\noindent Using (\ref{disint-phi}) we get
$$\int_{\tilde{A}} F_{B_{1}B_{2}}^{(\underline{x})}(\Gamma) \alpha_{n}(d \underline{x})
= \int_{\Gamma} \tilde{T}_{B_{1}B_{2}}(z; \tilde{A})
F_{B_{1}B_{2}}(dz)=$$
$$\frac{\alpha_{n}(\prod_{i=1}^{l}(A_{i} \cap B_{1}) \times (B_{1}^{c})^{n-l})}{\alpha_{n}((B_{1}^{c})^{n-l} \times {\cal X}^{l})} \cdot \int_{\Gamma} \tilde{T}_{B_{1}B_{2}}(z; \tilde{A}_{23}) F_{B_{1}B_{2}}(dz)=$$
$$\frac{\alpha_{n}(\prod_{i=1}^{l}(A_{i} \cap B_{1}) \times (B_{1}^{c})^{n-l})}{\alpha_{n}((B_{1}^{c})^{n-l} \times {\cal X}^{l})} \cdot \int_{\tilde{A}_{23}} F_{B_{1}B_{2}}^{(\underline{x})}(\Gamma) \alpha_{n}(d \underline{x}).$$
\noindent The result follows by Lemma \ref{lemmaA2} (Appendix A).
$\Box$
\vspace{3mm}
Here is the main result of this section.
\begin{theorem}
\label{main-neutral} If $P:=(P_{A})_{A \in {\cal B}}$ is a neutral
to the right random probability measure and
$\underline{X}:=(X_{1}, \ldots,X_{n})$ is a sample from $P$, then
the conditional distribution of $P$ given
$\underline{X}=\underline{x}$ is also neutral to the right.
\end{theorem}
\noindent {\bf Proof}: Since $P$ is ${\cal Q}$-Markov, by Theorem
\ref{main} the conditional distribution of $P$ given
$\underline{X}=\underline{x}$ is ${\cal
Q}^{(\underline{x})}$-Markov. Using Lemma \ref{tildeQ-neutral-23},
the key equation (\ref{central-Bayes}) becomes
$$\int_{\tilde{A}_{23}}Q_{B_{1}B_{2}}^{(\underline{x})}(z_{1}; \Gamma_{2}) \alpha_{n}(d \underline{x})= \int_{\Gamma_{2}} \tilde{T}_{B_{1}B_{2}} \left( \frac{z_{2}-z_{1}}{1-z_{1}}; \tilde{A}_{23}\right) Q_{B_{1}B_{2}}(z_{1};dz_{2}).$$
\noindent Using Proposition \ref{neutral-QMarkov} and relation
(\ref{disint-phi}), the right-hand side becomes (for $z_{1}<1$)
$$\int_{\frac{\Gamma_{2}-z_{1}}{1-z_{1}}} \tilde{T}_{B_{1}B_{2}}(z; \tilde{A}_{23})F_{B_{1}B_{2}}(dz)=\int_{\tilde{A}_{23}}F_{B_{1}B_{2}}^{(\underline{x})} \left(\frac{\Gamma_{2}-z_{1}}{1-z_{1}} \right) \alpha_{n}(d \underline{x}).$$
\noindent This proves that $\forall z_{1} \in [0,1), \forall
\Gamma_{2} \in {\cal B}([0,1])$ and for $\alpha_{n}$-almost all
$\underline{x}$
$$Q_{B_{1}B_{2}}^{(\underline{x})}(z_{1}; \Gamma_{2})=
F_{B_{1}B_{2}}^{(\underline{x})}
\left(\frac{\Gamma_{2}-z_{1}}{1-z_{1}} \right).$$
\noindent Since ${\cal Q}^{(\underline{x})}$ is a posterior
transition system, it follows that
$(F_{B_{1}B_{2}}^{(\underline{x})})_{B_{1} \subseteq B_{2}}$ is a
posterior neutral to the right system. By Proposition
\ref{QMarkov-neutral}, the distribution of $P$ given
$\underline{X}$ is neutral to the right. $\Box$
\vspace{2mm}
The next result gives some simple formulas for calculating the
posterior distribution of $P_{B_{1}}$ when all the observations
fall outside $B_{1}$, and the posterior distribution of
$V_{B_{1}B_{2}}$ when all the observations fall outside $B_{2}
\verb2\2 B_{1}$.
\begin{proposition}
(a) For $\alpha_{n}$-almost all $\underline{x}$ with $x_{i} \in
B_{1}^{c} \ \forall i$
$$\mu_{B_{1}}^{(\underline{x})}(\Gamma)={\cal P}[P_{B_{1}} \in \Gamma |\underline{X}=\underline{x}]= \frac{{\cal E}[I_{\Gamma}(P_{B_{1}})(1-P_{B_{1}})^{n}]}{{\cal E}[(1-P_{B_{1}})^{n}]}.$$
(b) For $\alpha_{n}$-almost all $\underline{x}$ with $x_{i} \in
(B_{2} \verb2\2 B_{1})^{c} \ \forall i$
$$F_{B_{1}B_{2}}^{(\underline{x})}(\Gamma)={\cal P}[V_{B_{1}B_{2}} \in \Gamma|\underline{X}=\underline{x}]=\frac{{\cal E}[I_{\Gamma}(V_{B_{1}B_{2}})(1-P_{B_{2}})^{m}]}{{\cal E}[(1-P_{B_{2}})^{m}]}$$
where $m$ denotes the number of $x_{i}$'s that fall outside
$B_{2}$.
\end{proposition}
\noindent {\bf Proof}: Note that (a) is a particular case of (b)
since $\mu_{B_{1}}^{(\underline{x})}=F_{\emptyset
B_{1}}^{(\underline{x})}$. We proceed to the proof of (b). For
fixed $0 \leq l \leq n$, let $\tilde{A}:=\prod_{i=1}^{l}(A_{i}
\cap B_{1}) \times \prod_{i=l+1}^{n}(A_{i} \cap B_{2}^{c})$, where
$A_{i} \in {\cal B}$. We claim that
\begin{equation}
\label{tildeT-outsideC}
\tilde{T}_{B_{1}B_{2}}(z;\tilde{A})=(1-z)^{n-l} \cdot
\frac{\alpha_{n}((B_{1}^{c})^{n-l} \times {\cal
X}^{l})}{\alpha_{n}((B_{2}^{c})^{n-l} \times {\cal X}^{l})} \cdot
\alpha_{n}(\tilde{A})
\end{equation}
\noindent Using (\ref{disint-phi}), it follows that
$$\int_{\tilde{A}} F_{B_{1}B_{2}}^{(\underline{x})}(\Gamma) \alpha_{n}(d \underline{x})
= \frac{\alpha_{n}((B_{1}^{c})^{n-l} \times {\cal X}^{l})}{\alpha_{n}((B_{2}^{c})^{n-l} \times {\cal X}^{l})} \cdot \alpha_{n}(\tilde{A}) \cdot
\int_{\Gamma} (1-z)^{n-l} F_{B_{1}B_{2}}(dz)$$ and hence for
$\alpha_{n}$-almost all $\underline{x}$ with $x_{i} \in (B_{2}
\verb2\2 B_{1})^{c}, \forall i$
$$F_{B_{1}B_{2}}^{(\underline{x})}(\Gamma)=\frac{\alpha_{n}((B_{1}^{c})^{n-l} \times {\cal X}^{l})}{\alpha_{n}((B_{2}^{c})^{n-l} \times {\cal X}^{l})} \cdot \int_{\Gamma} (1-z)^{n-l} F_{B_{1}B_{2}}(dz)=$$
$$\frac{{\cal E}[(1-P_{B_{1}})^{n-l}]}{{\cal E}[(1-P_{B_{2}})^{n-l}]} \cdot {\cal E}[I_{\Gamma}(V_{B_{1}B_{2}}) (1-V_{B_{1}B_{2}})^{n-l}]=
\frac{{\cal E}[I_{\Gamma}(V_{B_{1}B_{2}})
(1-P_{B_{2}})^{n-l}]}{{\cal E}[(1-P_{B_{2}})^{n-l}]}$$ since
$P_{B_{1}}$ is independent of $V_{B_{1}B_{2}}$.
We turn now to the proof of (\ref{tildeT-outsideC}). Without loss
of generality we will assume that $A_{i} \in {\cal A}, \forall i$.
Let $\tilde{A}_{2}= \prod_{i=l+1}^{n}(A_{i} \cap B_{2}^{c}) \times
{\cal X}^{l}$. We have
$$\prod_{i=1}^{l}P_{A_{i} \cap B_{1}} \prod_{i=l+1}^{n}P_{A_{i} \cap B_{2}^{c}}=
(1-P_{B_{1}})^{n-l} \prod_{i=1}^{l} P_{A_{i} \cap B_{1}}
(1-V_{B_{1}B_{2}})^{n-l} \prod_{i=l+1}^{n} \frac{P_{A_{i} \cap
B_{2}^{c}}}{1-P_{B_{2}}}.$$
\noindent Note that $(1-P_{B_{1}})^{n-l} \prod_{i=1}^{l} P_{A_{i}
\cap B_{1}}$ is ${\cal F}_{B_{1}}$-measurable and ${\cal
F}_{B_{1}}$ is independent of $V_{B_{1}B_{2}},V_{B_{2},A_{i} \cup
B_{2}},i=l+1, \ldots,n$. By taking ${\cal E}[\ \cdot \
|V_{B_{1}B_{2}}=z]$ we get
$$\tilde{T}_{B_{1}B_{2}}(z;\tilde{A})=(1-z)^{n-l} \cdot {\cal E}[\prod_{i=1}^{l} P_{A_{i} \cap B_{1}} \cdot (1-P_{B_{1}})^{n-l}] \cdot {\cal E}[\prod_{i=l+1}^{n}\frac{P_{A_{i} \cap B_{2}^{c}}}{1-P_{B_{2}}}]=$$
$$(1-z)^{n-l} \cdot \alpha_{n}(\prod_{i=1}^{l}(A_{i} \cap B_{1}) \times (B_{1}^{c})^{n-l}) \cdot \frac{\alpha_{n}(\tilde{A}_{2})}{\alpha_{n}((B_{2}^{c})^{n-l} \times {\cal X}^{l})}.$$
\noindent Finally, by taking expectation in
$$\prod_{i=1}^{l}P_{A_{i} \cap B_{1}} \cdot \prod_{i=l+1}^{n}P_{A_{i} \cap B_{2}^{c}}= \prod_{i=1}^{l} P_{A_{i} \cap B_{1}} \cdot (1-P_{B_{1}})^{n-l} \cdot \prod_{i=l+1}^{n} \frac{P_{A_{i} \cap B_{2}^{c}}}{1-P_{B_{1}}}$$
we get $\alpha_{n}(\tilde{A}) =\alpha_{n}(\prod_{i=1}^{l}(A_{i}
\cap B_{1}) \times (B_{1}^{c})^{n-l}) \cdot
\alpha_{n}(\tilde{A}_{2})/\alpha_{n}((B_{1}^{c})^{n-l} \times
{\cal X}^{l})$. $\Box$
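\vspace{2mm}
\noindent {\em Numerical check}: part (a) admits a simple
simulation for the empirical measure of a sample from a non-random
$P_{0}$ (Example 2 above). In the sketch below (in Python; the
sizes and $P_{0}(B_{1})$ are illustrative), $NP_{B_{1}}$ has a
binomial prior, a sample of size $n$ is drawn from $P$, and only
the runs in which all observations fall outside $B_{1}$ are
retained; the conditional law of $NP_{B_{1}}$ matches the
reweighted prior of part (a).
\begin{verbatim}
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)
N, n, p, n_sim = 12, 3, 0.35, 400000   # p = P0(B1), illustrative
in_B1 = rng.random((n_sim, N)) < p     # which atoms Z_j fall in B1
m = in_B1.sum(axis=1)                  # N * P_{B1}
# X_1,...,X_n iid from P: each falls in B1 with probability m/N
X_in_B1 = rng.random((n_sim, n)) < m[:, None] / N
keep = ~X_in_B1.any(axis=1)            # all observations outside B1
emp = np.bincount(m[keep], minlength=N + 1) / keep.sum()
# part (a): posterior pmf proportional to prior * (1 - m/N)^n
post = binom.pmf(np.arange(N + 1), N, p) \
       * (1 - np.arange(N + 1) / N) ** n
post /= post.sum()
print(np.abs(emp - post).max())        # small, up to MC error
\end{verbatim}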
\vspace{3mm}
\footnotesize{{\em Acknowledgement.} I would like to thank
Jyotirmoy Dey and Giovanni Petris for drawing my attention to the
original works of Ferguson and Doksum. I would also like to thank
Professor Jean Vaillancourt for the useful discussions which have
helped clarify the ideas. Finally, I am very grateful to an
anonymous referee who read the paper very carefully and made
numerous suggestions for improvement.
}
Autism spectrum disorder (ASD) is a neurodevelopmental disorder that affects both social behavior and mental health \cite{nation2006patterns}. ASD is typically diagnosed with behavioral tests, and recently functional MRI (fMRI) has been applied to analyze the cause of ASD \cite{di2017enhancing}.
Connectome analysis in fMRI aims to elucidate neural connections in the brain and can generally be categorized into two types: the functional connectome (FC) \cite{van2010exploring} and the effective connectome (EC) \cite{friston2003dynamic}. FC is typically computed as the correlation between the time-series of different regions-of-interest (ROIs) in the brain; it is robust and easy to compute, but it does not reveal the underlying dynamics.
EC models the directed influence between ROIs,
and is widely used in analysis of EEG \cite{kiebel2008dynamic} and fMRI \cite{seghier2010identifying}.
EC is typically estimated using dynamic causal modeling (DCM) \cite{friston2003dynamic}.
DCM can be viewed as a Bayesian framework for parameter estimation in a dynamical system represented by an ordinary differential equation (ODE). A DCM model is typically optimized using the expectation-maximization (EM) algorithm \cite{moon1996expectation}. Despite its wide application and good theoretical properties, a drawback is that the algorithm needs to be re-derived whenever the forward model changes, which limits its applicability. Furthermore, current DCM cannot handle large-scale systems, hence is unsuitable for whole-brain analysis. Recent works such as rDCM \cite{frassle2020regression}, spectral-DCM \cite{razi2017large} and sparse-DCM \cite{prando2020sparse} modify DCM for whole-brain analysis of resting-state fMRI, yet they are limited to linear dynamical systems and use the EM algorithm for optimization, hence cannot be used as off-the-shelf methods for different forward models.
In this project, we propose the Multiple-Shooting Adjoint (MSA) method for parameter estimation in DCM. Specifically, MSA uses the multiple-shooting method \cite{bock1984multiple} for robust fitting of an ODE, and uses the adjoint method \cite{pontryagin2018mathematical} for gradient estimation in the continuous case; after deriving the gradient, generic optimizers such as stochastic gradient descent (SGD) can be applied.
Our contributions are: (1) MSA is implemented as an off-the-shelf method and can easily be applied to generic non-linear cases by specifying the forward model, without re-deriving the optimization algorithm. (2) In toy examples, we validate the accuracy of MSA in parameter estimation; we also validate its ability to handle large-scale systems. (3) We apply MSA to whole-brain dynamic causal modeling for fMRI; in a classification task of ASD vs. control, the EC estimated by MSA achieves better performance than FC.
\section{Methods}
We first introduce the notations and the problem formulation in Sec.~2.1, then introduce the mathematical methods in Sec.~2.2-2.4, and finally introduce DCM for fMRI in Sec.~2.5.
\subsection{Notations and formulation of problem}
We summarize the notations here for ease of reading; they correspond to Fig.~\ref{fig:shoot}.
\renewcommand\labelitemi{$\bullet$}
\begin{itemize}
\item $z(t), \widetilde{z(t)}, \overline{z(t)}$: $z(t)$ is the true time-series, $\widetilde{z(t)}$ is the noisy observation, and $\overline{z(t)}$ is the estimation. If $p$ time-series are observed, then they are $p$-dimensional vectors for each time $t$.\vspace{1mm}
\item $(t_i,\widehat{z_i})_{i=0}^N$: $\{\widehat{z_i}\}_{i=0}^N$ are corresponding guesses of states at split time points $\{t_i\}_{i=0}^N$. See Fig.~\ref{fig:shoot}. $\widehat{z_i}$ are discrete points, while $\widetilde{z(t)}, z(t), \overline{z(t)}$ are trajectories. \vspace{1mm}
\item $f_\eta$: Hidden state $z(t)$ follows the ODE $\frac{d z}{dt}=f(z,t)$, where $f$ is parameterized by $\eta$.\vspace{1mm}
\item $\theta$: $\theta=[\eta, z_0, ... z_N]$. We concatenate all optimizable parameters into one vector, denoted as $\theta$, for ease of notation. \vspace{1mm}
\item $\lambda(t)$: Lagrangian multiplier in the continuous case, used to derive the adjoint state equation.
\end{itemize}
The task of DCM can be viewed as a parameter estimation problem for a continuous dynamical system, and can be formulated as:
\begin{equation}
\label{eq:formulation}
\operatorname*{argmin}_{\eta} \int \Big( \overline{z(\tau)} - \widetilde{z(\tau)} \Big)^2 d\tau \ \ \ s.t.\ \ \frac{d \overline{z(\tau)}}{d \tau} = f_\eta (\overline{z(\tau)},\tau)
\end{equation}
The goal is to estimate $\eta$ from observations $\widetilde{z}$.
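As a concrete instance of Eq.~\ref{eq:formulation}, $f_\eta$ can for example be a linear dynamical system $f_\eta(z,t)=\eta z$, where $\eta$ is a $p \times p$ effective-connectivity matrix to be estimated. A minimal sketch in PyTorch is shown below; the class name and the initialization scale are our illustrative choices, and the $(t,z)$ argument order follows the convention of the \texttt{torchdiffeq} package used later.
\begin{verbatim}
import torch

class LinearODE(torch.nn.Module):
    """f_eta(z, t) = eta @ z with a learnable p x p matrix eta."""
    def __init__(self, p):
        super().__init__()
        self.eta = torch.nn.Parameter(0.01 * torch.randn(p, p))

    def forward(self, t, z):  # (t, z) order expected by torchdiffeq
        return z @ self.eta.T
\end{verbatim}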
\begin{figure}[t]
\begin{subfigure}{0.40\textwidth}
\includegraphics[width=\linewidth]{figures/shooting.png}
\end{subfigure}
\hfill
\begin{subfigure}{0.54\textwidth}
\includegraphics[width=\linewidth]{figures/multi-shoot.png}
\end{subfigure}
\caption{ \small{ Left: illustration of the shooting method. Right: illustration of the multiple-shooting method. Blue dots represent the guess of state at split time $t_i$.}}
\label{fig:shoot}
\end{figure}
In the following sections, we first briefly introduce the multiple-shooting method, which concerns the numerical solution of a continuous dynamical system; next, we introduce the adjoint state method, which efficiently determines the gradient of the parameters in continuous dynamical systems; then, we introduce the proposed MSA method, which combines multiple-shooting and the adjoint state method and can be applied with general forward models and gradient-based optimizers; finally, we introduce the DCM model and demonstrate the application of MSA.
\subsection{Multiple-shooting method}
The shooting method is commonly used to fit an ODE under noisy observations, which is crucial for parameter estimation in ODEs. In this section, we first introduce the shooting method, then explain its variant, the multiple-shooting method, for long time-series.
\subsubsection{Shooting method} The shooting method typically reduces a boundary-value problem to an initial value problem \cite{hildebrand1987introduction}. An example is shown in Fig.~\ref{fig:shoot}: to find a correct initial condition (at $t_0=0$) that reaches the target (at $t_1=1$), the shooting algorithm first takes an initial guess (e.g. $\widehat{z_0(0)}$), then integrates the curve to reach the point $(t_1, \overline{z_0(1)})$; the error term $target-\overline{z_0(1)}$ is used to update the initial condition (e.g. to $\widehat{z_1(0)}$) so that the end-time value $\overline{z_1(1)}$ is closer to the target. This process is repeated until convergence. Besides the initial condition, the shooting method can also be applied to update other parameters.
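As a toy illustration (our example, with a closed-form sensitivity), consider $\frac{dz}{dt}=-z$ on $[0,1]$ with target $z(1)=1$; since $z(1)=z(0)e^{-1}$, the derivative of the end-time value with respect to the initial condition is $e^{-1}$, and the shooting update converges to $z(0)=e$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def end_value(z0):
    # integrate dz/dt = -z from t = 0 to t = 1, starting at z0
    sol = solve_ivp(lambda t, z: -z, (0.0, 1.0), [z0])
    return sol.y[0, -1]

target, z0 = 1.0, 0.0                # initial guess of z(0)
for _ in range(5):                   # shooting iterations
    err = end_value(z0) - target     # mismatch at the end time
    z0 -= err / np.exp(-1.0)         # d z(1) / d z(0) = exp(-1)
print(z0, np.exp(1.0))               # z0 converges to e * target
\end{verbatim}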
\subsubsection{Multiple-shooting method} The multiple-shooting method \cite{bock1984multiple} is an extension of the shooting method to long time-series; it splits a long time-series into chunks, and applies the shooting method to each chunk. Integration of a dynamical system for a long time is typically subject to noise and numerical error, while solving short time-series is generally easier and more robust.
As shown in the right subfigure of Fig.~\ref{fig:shoot}, the guess of the initial condition at time $t_0$ is denoted as $\widehat{z_0}$, and we can use any ODE solver to get the estimated integral curve $\overline{z(t)}, t\in [t_0, t_1]$. Similarly, we can guess the initial condition at time $t_1$ as $\widehat{z_1}$, and get $\overline{z(t)}, t\in [t_1, t_2]$ by integration as in Eq.~\ref{eq:estimate}. Note that each time chunk is shorter than the entire interval ($\vert t_{i+1} - t_i \vert <\vert t_3-t_0 \vert, i \in \{ 0,1,2 \}$), hence easier to solve. The split causes another issue: the guess might not match the estimation at the boundary points (e.g. $\overline{z(t_1)} \neq \widehat{z_1}, \overline{z(t_2)} \neq \widehat{z_2}$). Therefore, we need to account for this mismatch error when updating parameters; minimizing the mismatch error is typically easier than directly analyzing the entire sequence.
The multiple-shooting method can be written as:
\begin{equation}
\label{eq:multi-shoot-loss}
\operatorname*{argmin}_{\eta, z_0, ... z_N}J = \operatorname*{argmin}_{\eta, z_0, ... z_N} \sum_{i=0}^{N-1} \int_{t_i}^{t_{i+1}} \Big(\overline{z(\tau)} - \widetilde{z(\tau)} \Big)^2 \mathrm{d}\tau + \alpha \sum_{i=1}^{N} \Big( \overline{z(t_i)} - \widehat{z_i} \Big)^2
\end{equation}
\begin{equation}
\label{eq:estimate}
\overline{z(t)} = \widehat{z_i} + \int_{t_i}^t f_\eta \big( \overline{ z(\tau)}, \tau \big) \mathrm{d} \tau,\ \ \ \ t_i < t < t_{i+1},\ \ \ i \in \{0, 1, 2,...,N-1 \}
\end{equation}
where $N$ is the total number of chunks delimited by the points $\{t_0, ..., t_N\}$, with corresponding guesses $\{\widehat{z_0}, ..., \widehat{z_N}\}$. We use $\overline{z(t)}$ to denote the estimated curve as in Eq.~\ref{eq:estimate}: if $t$ falls into the chunk $[t_i,t_{i+1}]$, then $\overline{z(t)}$ is determined by solving the ODE from $(\widehat{z_i},t_i)$, where $\widehat{z_i}$ is the guess of the initial state at $t_i$. We use $\widetilde{z(t)}$ to denote the observation. The first part in Eq.~\ref{eq:multi-shoot-loss} corresponds to the difference between the estimation $\overline{z(t)}$ and the observation $\widetilde{z(t)}$, while the second part corresponds to the mismatch between the estimation (orange square, $\overline{z(t_i)}$) and the guess (blue circle, $\widehat{z_i}$) at the split time points $t_i$. The second part is weighted by a hyper-parameter $\alpha$. The ODE function $f$ is parameterized by $\eta$. The optimization goal is to find the best $\eta$ that minimizes the loss in Eq.~\ref{eq:multi-shoot-loss}; besides the model parameters $\eta$, we also need to optimize the guesses $\widehat{z_i}$ of the state at times $t_i, i \in \{0,1,...,N\}$.
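To make Eq.~\ref{eq:multi-shoot-loss} and Eq.~\ref{eq:estimate} concrete, the following minimal PyTorch sketch computes the multiple-shooting loss. It is a sketch under stated assumptions rather than our exact implementation: the solver \texttt{odeint}, the chunk boundaries \texttt{ts}, the observation times \texttt{t\_obs} (assumed to fall strictly inside each chunk) and all tensor shapes are illustrative.
\begin{verbatim}
import torch

def multiple_shooting_loss(f, z_guess, ts, z_obs, t_obs, alpha, odeint):
    # f: ODE function f(t, z); z_guess: (N+1, d) guesses at split points ts;
    # z_obs: observations at times t_obs; alpha: mismatch weight.
    data_loss, mismatch = 0.0, 0.0
    for i in range(len(ts) - 1):
        # observation times strictly inside chunk [t_i, t_{i+1}]
        mask = (t_obs > ts[i]) & (t_obs < ts[i + 1])
        t_chunk = torch.cat([ts[i].view(1), t_obs[mask], ts[i + 1].view(1)])
        # integrate the chunk from its guessed initial state
        z_hat = odeint(f, z_guess[i], t_chunk)
        data_loss = data_loss + ((z_hat[1:-1] - z_obs[mask]) ** 2).sum()
        # mismatch between the end of chunk i and the guess at t_{i+1}
        mismatch = mismatch + ((z_hat[-1] - z_guess[i + 1]) ** 2).sum()
    return data_loss + alpha * mismatch
\end{verbatim}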
Note that though previous work typically limits $f$ to a linear form, we impose no such restriction: multiple-shooting is generic for general $f$.
\subsection{Adjoint state method}
Our goal is to minimize the loss function in Eq.~\ref{eq:multi-shoot-loss}. Let $\theta = [\eta, z_0, ...,z_N]$ represent all learnable parameters. To fit the ODE, we derive the gradient of the loss w.r.t.\ the parameters $\theta$, which comprise both the model parameters $\eta$ and the state guesses $\widehat{z_i}$.
\subsubsection{Adjoint state equation} Note that, different from the discrete case, deriving the gradient in the continuous case is more involved. We refer to the adjoint method \cite{pontryagin2018mathematical,zhuang2020adaptive,chen2018neural}. Consider the following problem:
\begin{equation}
\label{eq:adjoint_ode}
\frac{d \overline{z(t)}}{dt}= f_\theta \Big( \overline{z(t)},t \Big),\ \ s.t.\ \ \overline{z(0)}=x,\ \ t \in [0,T], \ \ \theta=[\eta, z_0, ... z_N]
\end{equation}
\begin{equation}
\label{eq:adjoint:loss}
\hat{y} = \overline{z(T)},\ \ J \Big(\hat{y},y\Big) = J \Big( \overline{z(0)}+\int_0^T f_\theta( \overline{z},t)dt, y \Big)
\end{equation}
where the initial condition $\overline{z(0)}$ is specified by the input $x$, and the output is $\hat{y}= \overline{z(T)}$. The loss function $J$ is applied on $\hat{y}$, with target $y$. Compared with Eq.~\ref{eq:formulation} to Eq.~\ref{eq:estimate}, for simplicity, we use $\theta$ to denote both the model parameters $\eta$ and the guesses of the initial conditions $\{\widehat{z_i}\}$. The Lagrangian is
\begin{equation}
L = J \Big( \overline{z(T)}, y \Big) + \int_0^T \lambda(t)^ \top \Big[\frac{d \overline{z(t)}}{dt} - f_\theta( \overline{z(t)},t) \Big] dt
\label{eq:loss}
\end{equation}
where $\lambda(t)$ is the continuous Lagrangian multiplier. Then we have the following:
\begin{equation}
\frac{\partial J}{\partial \overline{z(T)}} + \lambda(T) = 0
\label{eq:lambda_bound}
\end{equation}
\begin{equation}
\frac{d\lambda(t)}{dt} + \Big ( \frac{\partial f_\theta( \overline{z(t)},t)}{\partial \overline{z(t)}} \Big )^\top \lambda(t) = 0\ \ \forall t\in (0,T)
\label{eq:lambda_ode}
\end{equation}
\begin{equation}
\frac{dL}{d\theta} = \int_T^0 \lambda(t)^\top \frac{\partial f_\theta( \overline{ z(t)},t)}{\partial \theta}dt
\label{eq:analytic_grad}
\end{equation}
We skip the proof for simplicity. In general, the adjoint method determines the initial condition $\lambda(T)$ by Eq.~\ref{eq:lambda_bound}, then solves Eq.~\ref{eq:lambda_ode} to get the trajectory of $\lambda(t)$, and finally integrates $\lambda(t)$ as in Eq.~\ref{eq:analytic_grad} to get the final gradient. Note that Eq.~\ref{eq:lambda_bound} to Eq.~\ref{eq:analytic_grad} is generic for general $\theta$, and in case of Eq.~\ref{eq:multi-shoot-loss} and Eq.~\ref{eq:estimate}, we have $\theta = [ \eta, z_0, ... z_N ]$, and $ \nabla \theta = [ \frac{\partial L}{\partial \eta}, \frac{\partial L}{\partial z_0}, ... \frac{\partial L}{\partial z_N}]$. Note that we need to calculate $\frac{\partial f}{\partial z}$ and $\frac{\partial f}{\partial \theta}$, which can be easily computed by a single backward pass; we only need to specify the forward model without worrying about the backward, because automatic differentiation is supported in frameworks such as PyTorch and Tensorflow. After deriving the gradient of all parameters, we can update these parameters by general gradient descent methods.
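In practice, the whole pipeline therefore reduces to a standard backward pass through an adjoint-based ODE solver. The sketch below is illustrative only: it uses \texttt{torchdiffeq}'s \texttt{odeint\_adjoint} as a stand-in for the ACA solver used in this work, and the module, dimensions and placeholder target are assumptions.
\begin{verbatim}
import torch
from torchdiffeq import odeint_adjoint as odeint

class ODEFunc(torch.nn.Module):          # dz/dt = f_eta(z, t)
    def __init__(self, dim):
        super().__init__()
        self.eta = torch.nn.Parameter(0.1 * torch.randn(dim, dim))
    def forward(self, t, z):
        return z @ self.eta.T

func = ODEFunc(dim=3)
z0 = torch.ones(3, requires_grad=True)   # guessed initial state
t = torch.linspace(0., 1., 50)
z = odeint(func, z0, t)                  # forward solve
loss = ((z - torch.zeros_like(z)) ** 2).mean()   # placeholder target
loss.backward()  # adjoint pass fills func.eta.grad and z0.grad
\end{verbatim}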
Note that though $J(\overline{z(T)},y)$ is defined on a single point in Eq.~\ref{eq:loss}, it can also be defined in the integral form of Eq.~\ref{eq:multi-shoot-loss}, or as a sum of a single-point loss and an integral form. The key observation is that for any loss in integral form, e.g. $\int_{t=0}^T loss(t) dt$, we can define an auxiliary variable $F$ such that $\frac{d F(t)}{dt}=loss(t), F(0)=0$; then $F(T)$ is exactly the value of the integral. In this way, we can transform the integral form $\int_{0}^T loss(t) dt$ into the single-point form $F(T)$.
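A hypothetical sketch of this auxiliary-variable trick is shown below: we augment the state with a running loss $F$ so that any solver (and the adjoint method) handles the integral-form loss for free. Here \texttt{f} and \texttt{pointwise\_loss} are placeholders for the dynamics and the pointwise loss.
\begin{verbatim}
def augmented_f(t, state):
    z, F = state                  # F(t) accumulates the running loss
    dz = f(t, z)                  # original dynamics
    dF = pointwise_loss(z, t)     # dF/dt = loss(t), with F(0) = 0
    return (dz, dF)
# Solving the augmented ODE from (z0, 0.) over [0, T] yields F(T),
# which equals the integral-form loss.
\end{verbatim}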
\subsubsection{Adaptive checkpoint adjoint}
Eq.~\ref{eq:lambda_bound} to Eq.~\ref{eq:analytic_grad} give the analytical form of the gradient in the continuous case, yet the numerical implementation is crucial for empirical performance. Note that $\overline{z(t)}$ is solved in forward-time (0 to $T$), while $\lambda(t)$ is solved in reverse-time ($T$ to 0), yet the gradient in Eq.~\ref{eq:analytic_grad} requires both $\overline{z(t)}$ and $\lambda(t)$ in the integrand. Memorizing a continuous trajectory $\overline{z(t)}$ requires a large amount of memory; to save memory, most existing implementations forget the forward-time trajectory of $\overline{z(t)}$, and instead only record the end-time states $\overline{z(T)}$ and $\lambda(T)$ and solve Eq.~\ref{eq:adjoint_ode} and Eq.~\ref{eq:lambda_bound} to Eq.~\ref{eq:analytic_grad} in reverse-time on-the-fly.
While memory cost is low, existing implementations of the adjoint method typically suffer from numerical error: since the forward-time trajectory (denoted as $\overrightarrow{z(t)}=\overline{z(t)}$) is deleted, and the reverse-time trajectory (denoted as $\overleftarrow{z(t)}$) is reconstructed from the end-time state $z(T)$ by solving Eq.~\ref{eq:adjoint_ode} in reverse-time, $\overrightarrow{z(t)}$ and $\overleftarrow{z(t)}$ cannot accurately overlap due to inevitable errors with numerical ODE solvers. The error $\overrightarrow{z(t)}-\overleftarrow{z(t)}$ propagates to the gradient in Eq.~\ref{eq:analytic_grad} in the $\frac{\partial f(z, t)}{\partial z}$ term. Please see \cite{zhuang2020adaptive} for a detailed explanation.
To solve this issue, the adaptive checkpoint adjoint (ACA) \cite{zhuang2020adaptive} records $\overrightarrow{z(t)}$ using a memory-efficient method to guarantee numerical accuracy. In this work, we use ACA for its accuracy.
\begin{algorithm}[t]
\SetAlgorithmName{Algorithm}{} \\
\textbf{Input} Observation $\widetilde{z(t)}$, number of chunks $N$, learning rate $lr$.\\
\textbf{Initialize} model parameter $\eta$, state $\{\widehat{z_i}\}_{i=0}^N$ at discretized points $\{t_i\}_{i=0}^N$ \\
\textbf{Repeat until convergence} \\
\hspace{8mm}(1) Estimate the trajectory $\overline{z(t)}$ from the current parameters by the multiple-shooting method as in Eq.~\ref{eq:estimate}. \\
\hspace{8mm}(2) Compute the loss $J$ in Eq.~\ref{eq:multi-shoot-loss}, plug $J$ in Eq.~\ref{eq:loss}. Derive the gradient by the adjoint method as in Eq.~\ref{eq:lambda_bound} to Eq.~\ref{eq:analytic_grad}. \\
\hspace{8mm}(3) Update parameters $\theta \leftarrow \theta - lr \times \nabla \theta$
\caption{Multiple-shooting adjoint method}
\label{algo:MSA}
\end{algorithm}
\vspace{-1mm}
\subsection{Multiple-Shooting Adjoint (MSA) method}
\subsubsection{Procedure of MSA}MSA is a combination of the multiple-shooting and the adjoint method, which is
generic for various $f$. Details are summarized in Algo.~\ref{algo:MSA}. MSA iterates over the following steps until convergence: (1) estimate the trajectory based on the current parameters, using the multiple-shooting method for integration; (2) compute the loss and derive the gradient using the adjoint method; (3) update the parameters based on the gradient.
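For illustration, a hypothetical training loop implementing Algo.~\ref{algo:MSA} could look as follows, reusing the illustrative names from the sketches above (\texttt{func}, \texttt{odeint}, \texttt{multiple\_shooting\_loss}); we show \texttt{Adam} for brevity, although AdaBelief is used in our experiments, and the learning rate and step count are arbitrary.
\begin{verbatim}
# z_guess: (N+1, d) leaf tensor with requires_grad=True
params = list(func.parameters()) + [z_guess]   # theta = [eta, z_0, ..., z_N]
opt = torch.optim.Adam(params, lr=1e-2)
for step in range(2000):                       # repeat until convergence
    opt.zero_grad()
    J = multiple_shooting_loss(func, z_guess, ts, z_obs, t_obs,
                               alpha=0.1, odeint=odeint)
    J.backward()                               # adjoint-based gradients
    opt.step()
\end{verbatim}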
\vspace{-1mm}
\subsubsection{Advantages of MSA} Previous work has used the multiple-shooting method for parameter estimation in ODEs \cite{peifer2007parameter}, yet MSA is different in the following aspects: (A) Suppose the parameters have $k$ dimensions. MSA uses an element-wise update, hence has only $O(k)$ computational cost in each step; yet the method in \cite{peifer2007parameter} requires the inversion of a $k \times k$ matrix, hence might be infeasible for large-scale systems. (B) The implementation of \cite{peifer2007parameter} does not tackle the mismatch between forward-time and reverse-time trajectory, while we use ACA \cite{zhuang2020adaptive} for accurate gradient estimation in step (2) of Algo.~\ref{algo:MSA}. (C) From a practical perspective, our implementation is based on PyTorch which supports automatic-differentiation, therefore we only need to specify the forward model $f$ without the need to manually compute the gradient $\frac{\partial f}{\partial z}$ and $\frac{\partial f}{\partial \theta}$. Hence, our method is off-the-shelf for general models, while the method of \cite{peifer2007parameter} needs to re-implement $\frac{\partial f}{\partial z}$ and $\frac{\partial f}{\partial \theta}$ for different $f$, and conventional DCM with EM needs to re-derive the entire algorithm when $f$ changes.
\subsection{Dynamic causal modeling}
We briefly introduce dynamic causal modeling here. Suppose there are $p$ nodes (ROIs); denote the observed fMRI time-series signal as $s(t)$ and the hidden neuronal state as $z(t)$, both of which are $p$-dimensional vectors at each time point $t$. Denote the hemodynamic response function (HRF) \cite{lindquist2009modeling} as $h(t)$, and denote the external stimulation as $u(t)$, which is an $n$-dimensional vector for each $t$. The forward-model is:
\begin{equation}
\label{eq:dcm}
f\Big( [z(t)\ \ D(t)] \Big)=
\begin{bmatrix}
d z(t) /dt \\
d D(t)/dt
\end{bmatrix}
=
\begin{bmatrix}
D(t) z(t) + C u(t) \\
B u(t)
\end{bmatrix},\ \ D(0)=A
\end{equation}
\begin{equation}
s(t) = \Big( z(t) + \epsilon(t) \Big) * h(t),\ \ \widetilde{z(t)} = z(t) + \epsilon(t) = Deconv\Big(s(t), h(t) \Big)
\end{equation}
where $\epsilon(t)$ is the noise at time $t$, which is assumed to follow an independent Gaussian distribution, and $*$ represents the convolution operation. Note that a more general model would be $s(t)=\Big( z(t) + \epsilon_1(t) \Big) * h(t) + \epsilon_2 (t)$, where $\epsilon_1 (t)$ is the inherent noise in the neuronal state $z(t)$, and $\epsilon_2(t)$ is the measurement noise. We omit $\epsilon_2 (t)$ for simplicity in this project; it is possible to model both noises, and even to model the HRF with learnable parameters, but this would yield a more complicated model and require more data for accurate parameter estimation.
$D(t)$ is a $p \times p$ matrix for each $t$, representing the effective connectome between nodes. $A$ is a matrix of shape $p \times p$, representing the interaction between ROIs.
$B$ is a tensor of shape $p\times p \times n$, representing the effect of stimulation on the effective connectome. $C$ is a matrix of shape $p \times n$, representing the effect of stimulation on neuronal state. An example of $n=1,p=3$ is shown in Fig.~\ref{fig:toy}.
\begin{SCfigure}[][t]
\includegraphics[width=0.4\textwidth]{figures/toy2.png}
\captionof{figure}{\small Toy example of dynamic causal modeling with 3 nodes (labeled 1 to 3). $u$ is a 1-D stimulation signal, so $n=1,p=3$. $A,B,C$ are defined as in Eq.~\ref{eq:dcm}. For simplicity, though $A$ is a $3\times3$ matrix, we assume only three elements $A_{1,3},A_{3,2},A_{2,1}$ are non-zero.}
\label{fig:toy}
\end{SCfigure}
The task is to estimate the parameters $A,B,C$ from the noisy observation $s(t)$. For simplicity, we assume $h(t)$ is fixed and use the empirical result from the Nitime project \cite{rokem2009nitime} in our experiments. By deconvolution of $s(t)$ with $h(t)$, we get a noisy observation of $z(t)$, denoted as $\widetilde{z(t)}$; $z(t)$ follows the ODE defined in Eq.~\ref{eq:dcm}. By plugging $f$ into Eq.~\ref{eq:formulation}, and viewing $\eta$ as $[A,B,C]$, this problem turns into a parameter estimation problem for ODEs, which can be efficiently solved by Algo.~\ref{algo:MSA}. We emphasize that Algo.~\ref{algo:MSA} is generic: MSA can be applied to any form of $f$, and the linear form of $f$ in Eq.~\ref{eq:dcm} is a special case for a specific fMRI model.
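To illustrate how little code is needed to specify the forward model, the following is a hedged PyTorch sketch of $f$ in Eq.~\ref{eq:dcm} for the $n=1$ case; the \texttt{stimulus} function and all shapes are illustrative assumptions, not our exact implementation.
\begin{verbatim}
import torch

def stimulus(t):                     # placeholder external input u(t)
    return 1.0

class DCMFunc(torch.nn.Module):
    def __init__(self, p):
        super().__init__()
        self.A = torch.nn.Parameter(torch.zeros(p, p))  # D(0) = A
        self.B = torch.nn.Parameter(torch.zeros(p, p))  # B for n = 1
        self.C = torch.nn.Parameter(torch.zeros(p))
    def forward(self, t, state):
        z, D = state                 # z: (p,), D: (p, p)
        u = stimulus(t)              # scalar external input
        dz = D @ z + self.C * u      # dz/dt = D(t) z(t) + C u(t)
        dD = self.B * u              # dD/dt = B u(t)
        return (dz, dD)
# The initial state passed to the solver is (z0, func.A), encoding D(0) = A.
\end{verbatim}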
\section{Experiments}
\subsection{Validation on toy examples}
We first validate MSA on toy examples of linear dynamical systems, then validate its performance on large-scale systems and non-linear dynamical systems.
\begin{figure}
\begin{subfigure}{0.50\textwidth}
\includegraphics[width=\linewidth]{figures/param_estimate_error.png}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\includegraphics[width=\linewidth]{figures/dcm_toy.png}
\end{subfigure}
\caption{\small Results for the toy example of a linear dynamical system in Fig.~\ref{fig:toy}. Left: error in the estimated values of the connections $A_{1,3},A_{3,2},A_{2,1}$; the other parameters are set to 0 in the simulation. Right: from top to bottom are the results for nodes 1, 2 and 3 respectively. For each node, we plot the observation and the estimated curves from the MSA and EM methods. Note that the estimated curve is generated by integration of the ODE under the estimated parameters with only the initial condition known, not by smoothing the noisy observation.}
\label{fig:dcm_error}
\end{figure}
\subsubsection{A linear dynamical system with 3 nodes} We first start with a simple linear dynamical system with only 3 nodes. We further simplify the matrix $A$ as in Fig.~\ref{fig:toy}, where only three elements in $A$ are non-zero. We set $B$ to a zero matrix, and $u(t)$ to a 1-dimensional signal. The dynamical system is linear:
\begin{equation}
\label{eq:toy_small_linear}
\begin{bmatrix}
d z(t)/dt \\
d D(t)/dt
\end{bmatrix}
=
\begin{bmatrix}
D(t) z(t) + C u(t) \\
0
\end{bmatrix},\ D(0)=A,\ \
u(t) =
\begin{cases}
1, & \lfloor t/2 \rfloor \bmod 2 = 0 \\
0, & \text{otherwise}
\end{cases}
\end{equation}
\begin{equation}
\widetilde{z(t)} = z(t) + \epsilon(t),\ \ \ \epsilon(t) \sim N(0, \sigma^2)
\end{equation}
$u(t)$ is an alternating block function with blocks of length 2, taking values 0 or 1. The observed function $\widetilde{z(t)}$ is corrupted by $i.i.d.$ Gaussian noise $\epsilon(t)$ with zero mean and uniform variance $\sigma^2$.
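The simulation can be reproduced with a short script; the sketch below uses \texttt{torchdiffeq} for integration, and the non-zero values of $A$, the value of $C$, the time grid and the noise level are illustrative choices rather than the exact simulation settings.
\begin{verbatim}
import math, torch
from torchdiffeq import odeint

p = 3
A = torch.zeros(p, p)
A[0, 2], A[2, 1], A[1, 0] = 0.5, -0.4, 0.3  # A_{1,3}, A_{3,2}, A_{2,1}
C = torch.ones(p)

def u(t):                                   # alternating block stimulus
    return 1.0 if math.floor(float(t) / 2) % 2 == 0 else 0.0

def toy_f(t, z):                            # dD/dt = 0, so D(t) = A
    return A @ z + C * u(t)

t = torch.linspace(0., 8., 200)
z_true = odeint(toy_f, torch.zeros(p), t)
z_tilde = z_true + 0.5 * torch.randn_like(z_true)  # noisy observation
\end{verbatim}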
We perform 10 independent simulations and parameter estimations. For estimation of DCM with the EM algorithm, we use the SPM package \cite{penny2011statistical}, which is a widely used standard baseline. The estimation in MSA is implemented in PyTorch, using ACA \cite{zhuang2020adaptive} as the ODE solver. For MSA, we use the AdaBelief optimizer \cite{zhuang2020adabelief} to update parameters with the gradient; though other optimizers such as SGD can be used, we found AdaBelief converges faster in practice.
For each of the non-zero elements in $A$, we show the boxplot of the estimation error in Fig.~\ref{fig:dcm_error}. Compared with EM, the error of MSA is significantly closer to 0 and has a smaller variance. An example of a noisy observation and the estimated curves is shown in Fig.~\ref{fig:dcm_error}, and the estimation by MSA is visually closer to the ground-truth compared to the EM algorithm. We emphasize that the estimated curve is not a simple smoothing of the noisy observation; instead, after estimating the parameters of the ODE, the estimated curve (for $t>0$) is generated by solving the ODE using only the initial state. Therefore, the match between the estimated curve and the observation demonstrates that our method learns the underlying dynamics of the system.
\subsubsection{Application to large-scale systems}
After validation on a small system with only 3 nodes, we validate MSA on large-scale systems with more nodes. We use the same linear dynamical system as in Eq.~\ref{eq:toy_small_linear}, but with the node number $p$ ranging from 10 to 100. Note that the dimension of $A$ and $B$ grows at a rate of $O(p^2)$, and the EM algorithm estimates the covariance matrix of size $O(p^4)$, hence the memory for the EM method grows extremely fast with $p$. For the various settings, the ground-truth parameters are randomly generated from a uniform distribution between -1 and 1, and the standard deviation of the measurement noise is set to $\sigma=0.5$. For each setting, we perform 5 independent runs, and report the mean squared error (MSE) between the estimated parameters and the ground truth.
As shown in Table~\ref{table:large-scale}, for small-size systems (number of nodes $\leq 20$), MSA consistently generates a lower MSE than the EM algorithm. For large-scale systems, since the memory cost of the EM algorithm is $O(p^4)$, the algorithm quickly runs out-of-memory. On the other hand, the memory cost for MSA is $O(p^2)$ because it only uses the first-order gradient. Hence, MSA is suitable for large-scale systems such as in whole-brain fMRI analysis.
\subsubsection{Application to general non-linear systems}
Since neither the multiple-shooting method nor the adjoint state method requires the ODE $f$ to be linear, MSA can be applied to general non-linear systems. Furthermore, since our implementation is in PyTorch, which supports automatic differentiation, we only need to specify $f$ when fitting different models, and the gradient is calculated automatically. Therefore, MSA is an off-the-shelf method suitable for general non-linear ODEs both in theory and in implementation.
We validate MSA on the Lotka-Volterra (L-V) equations \cite{volterra1928variations}, a system of non-linear ODEs describing the dynamics of predator and prey populations. The L-V equations can be written as:
\begin{equation}
f\Big( [z_1(t), z_2(t)] \Big) =
\begin{bmatrix}
d z_1(t)/ dt \\
d z_2(t)/ dt
\end{bmatrix}
=
\begin{bmatrix}
\zeta z_1(t) - \beta z_1(t) z_2(t) \\
\delta z_1(t) z_2(t) - \gamma z_2(t) \\
\end{bmatrix},\ \
\begin{bmatrix}
\widetilde{z_1(t)} \\
\widetilde{z_2(t)}
\end{bmatrix}
=
\begin{bmatrix}
z_1(t) + \epsilon_1(t) \\
z_2(t) + \epsilon_2(t)
\end{bmatrix}
\end{equation}
where $\zeta, \beta, \delta, \gamma$ are parameters to estimate, $\widetilde{z(t)}$ is the noisy observation, and $\epsilon(t)$ is the independent noise. Note that there are non-linear terms $z_1(t)z_2(t)$ in the ODE, making EM derivation difficult. Furthermore, the EM method needs to explicitly derive the posterior mean, hence needs to be re-derived for every different $f$; while MSA is generic and hence does not require re-derivation.
Besides the L-V model, we also consider a modified L-V model, defined as:
\begin{align}
dz_1(t)/dt &= \zeta z_1(t) - \beta \phi( z_2(t) )z_1(t) z_2(t) \\
dz_2(t)/dt &= \delta \phi(z_1(t)) z_1(t) z_2(t) - \gamma z_2(t)
\end{align}
where $\phi(x)=1/(1+e^{-x})$ is the sigmoid function. We use this example to demonstrate the ability of MSA to fit highly non-linear ODEs.
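Both systems can be specified in a few lines of code, and MSA applies to either without any re-derivation. The sketch below gives the two right-hand sides; the parameter values are illustrative defaults, not the ground truth used in our experiments.
\begin{verbatim}
import torch

def lv(t, z, zeta=1.1, beta=0.4, delta=0.1, gamma=0.4):
    z1, z2 = z
    return torch.stack([zeta * z1 - beta * z1 * z2,
                        delta * z1 * z2 - gamma * z2])

def modified_lv(t, z, zeta=1.1, beta=0.4, delta=0.1, gamma=0.4):
    z1, z2 = z
    s = torch.sigmoid
    return torch.stack([zeta * z1 - beta * s(z2) * z1 * z2,
                        delta * s(z1) * z1 * z2 - gamma * z2])
\end{verbatim}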
We compare MSA with LMFIT \cite{newville2016lmfit}, a well-known Python package for non-linear fitting. We use the L-BFGS solver in LMFIT, which generates better results than the other solvers. We did not compare with the original DCM with EM because it is unsuitable for general non-linear models. The estimated curve for $t>0$ is obtained by integration using the estimated parameters and initial conditions. As shown in Fig.~\ref{fig:lotka} and Fig.~\ref{fig:modify-lv}, compared with LMFIT, MSA recovers the system accurately. LMFIT directly fits the long sequences, while MSA splits long sequences into chunks for robust estimation, which may partially explain the better performance of MSA.
\begin{table}[t]
\centering
\caption{\small Mean squared error ($\times 10^{-3}$, \textbf{lower} is better) in the estimation of parameters for a linear dynamical system with different numbers of nodes. ``OOM'' represents ``out of memory''.}
\label{table:large-scale}
\begin{tabular}{c|cccc}
\hline
& 10 Nodes & 20 Nodes & 50 Nodes & 100 Nodes \\ \hline
EM & $3.3 \pm 0.2$ & $3.0 \pm 0.2$ & OOM & OOM \\
MSA & $0.7 \pm 0.1$ & $0.9 \pm 0.3$ & $0.8 \pm 0.1$ & $0.8 \pm 0.2$ \\ \hline
\end{tabular}
\end{table}
\begin{figure}[t]
\begin{minipage}[]{0.49\textwidth}
\includegraphics[width=\linewidth]{figures/lotka.png}
\vspace{-4mm}
\captionof{figure}{\small Results for the L-V model.}
\label{fig:lotka}
\end{minipage}
\hfill
\begin{minipage}[]{0.49\textwidth}
\includegraphics[width=\linewidth]{figures/lotka_sigmoid.png}
\vspace{-4mm}
\captionof{figure}{\small Results for the modified L-V model.}
\label{fig:modify-lv}
\end{minipage}
\end{figure}
\begin{figure}[t]
\begin{subfigure}{0.30\textwidth}
\includegraphics[width=\linewidth]{figures/frame_3.png}
\end{subfigure}
\begin{subfigure}{0.30\textwidth}
\includegraphics[width=\linewidth]{figures/frame_131.png}
\end{subfigure}
\begin{subfigure}{0.35\textwidth}
\includegraphics[width=\linewidth]{figures/correlation.png}
\end{subfigure}
\caption{\small An example of MSA for one subject. Left: effective connectome during task 1. Middle: effective connectome during task 2. Right: top and bottom represent the effective connectome for task 1 and 2 respectively. Blue and red edges represent positive and negative connections respectively. Only the top 5\% strongest connections are visualized.}
\label{fig:connectome}
\end{figure}
\begin{figure}[t]
\begin{subfigure}{0.31\textwidth}
\includegraphics[width=\linewidth]{figures/comaprison_rf.png}
\vspace{-3mm}
\caption{\small Classification result based on Random Forest.}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=\linewidth]{figures/comaprison_invnet.png}
\vspace{-3mm}
\caption{\small Classification result based on InvNet.}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=\linewidth]{figures/roc_auc.png}
\vspace{-3mm}
\caption{\small ROC-AUC curve for results with InvNet.}
\end{subfigure}
\vspace{-2mm}
\caption{\small Classification results for ASD vs. control.}
\label{fig:classification}
\end{figure}
\subsection{Application to whole-brain dynamic causal modeling with fMRI}
We apply MSA on whole-brain fMRI analysis with dynamic causal modeling. fMRI data for 82 children with ASD and 48 age- and IQ-matched healthy controls were acquired. A biological motion perception task and a scrambled motion task \cite{kaiser2010neural} were presented in alternating blocks. The fMRI (BOLD, 132 volumes, TR = 2000ms, TE = 25ms, flip angle = 60$^{\circ}$, voxel size $3.44\times3.44\times4\ mm^3$) was acquired on a Siemens MAGNETOM Trio 3T scanner.
\subsubsection{Estimation of EC}
We use the AAL atlas \cite{tzourio2002automated} containing 116 ROIs. For each subject, the parameters for dynamic causal modeling as in Eq.~\ref{eq:dcm} are estimated using MSA. An example snapshot of the effective connectome (EC) during the two tasks is shown in Fig.~\ref{fig:connectome}, showing that MSA captures the dynamic EC during different tasks.
\subsubsection{Classification task}
We conduct classification experiments for ASD vs. control using EC and FC as input respectively. The EC estimated by MSA at each time point provides a data sample, and the classification of a subject is based on the majority vote of the predictions across all time points. The FC is computed using Pearson correlation. We experimented with a random forest model and InvNet \cite{zhuang2019invertible}. Results for a 10-fold subject-wise cross validation are shown in Fig.~\ref{fig:classification}. For both models, using EC as input generates better accuracy, F1 score and AUC score (threshold range is [0,1]). This indicates that estimating the underlying dynamics of fMRI helps identification of ASD.
\section{Conclusion}
We propose the multiple-shooting adjoint (MSA) method for parameter estimation in ODEs, enabling whole-brain dynamic causal modeling. MSA has the following advantages: robustness to noisy observations, the ability to handle large-scale systems, and a general off-the-shelf framework for non-linear ODEs. We validate MSA on extensive toy examples and apply MSA to whole-brain fMRI analysis with DCM. To our knowledge, our work is the first to successfully apply whole-brain dynamic causal modeling in a classification task based on fMRI. Finally, MSA is generic and can be applied to other problems such as EEG and the modeling of biological processes.
\section{Introduction}
\label{sec:intro}
Due to the increasing popularity and use of open source software (OSS), a large number of OSS ecosystems have emerged,
containing huge collections of interdependent software packages. \changed{These ecosystems are usually supported by large communities of contributors, and can be considered as software supply chains formed by upstream transitive dependencies of packages and their downstream dependents.}
Typical examples of such ecosystems are package distributions for specific programming languages (e.g.,\xspace~{npm}\xspace for {JavaScript}\xspace, {PyPI}\xspace for {Python}\xspace, {RubyGems}\xspace for {Ruby}\xspace), totalling millions of interdependent reusable libraries maintained by hundreds of thousands of developers, and used by millions of software projects on a daily basis.
\changed{Given the sheer size of package distributions, combined with their open source nature, \minor{many packages are being affected by known or unknown vulnerabilities.}
Since package distributions are known to form huge and tightly interconnected dependency networks~\cite{Decan2019}, a single vulnerable package may potentially expose a considerable fraction of the package dependency network to its vulnerabilities~\cite{decan2018impact}.
This exposure does not even stop at the boundaries of the package distribution, since dependent external software projects may also become exposed to these vulnerabilities~\cite{Lauinger2017}.}
According to a study carried out by {Snyk}\xspace~\cite{snyk2017}, one of the leading companies in analysing and detecting software vulnerabilities in {Node.Js}\xspace and {Ruby}\xspace packages, 77\% of the 430k websites run at least one front-end package with a known security vulnerability in place.
\changed{This exposure of software to vulnerabilities in its dependencies} is considered as one of the OWASP top 10 application security risks~\cite{topOWASP}.
There are many examples of such cases.
For example, in November 2018 the widely used {npm}\xspace package \texttt{event-stream} was found to depend on a malicious package named \texttt{flatmap-stream}~\footnote{\url{https://github.com/dominictarr/event-stream/issues/116}} containing a Bitcoin-siphoning malware. The \texttt{event-stream} package is very popular \changed{getting} roughly two million downloads per week. Just because it was used as a dependency in \texttt{event-stream}, the malicious \texttt{flatmap-stream} package was downloaded millions of times since its inclusion in September 2018 and until its discovery and removal.
The main objective of this paper is \changed{therefore to empirically analyse and quantify the impact of vulnerabilities in open source packages, on transitively dependent packages that are shared through the same package distribution as well as on external projects distributed via {GitHub}\xspace that rely on such packages.} \changed{We conduct our study on two different package distributions, {npm}\xspace and {RubyGems}\xspace. All along our research questions, we compare between the results found in these two package distributions. More specifically, we answer the following research questions:}
\begin{itemize}
\item \textbf{\changed{$RQ_0$: How prevalent are \minor{disclosed} vulnerabilities in \npm and \rubygems packages?}}
\changed{This preliminary research question explores the dataset of vulnerabilities extracted from {Snyk}\xspace's database and provides insights about their characteristics and evolution over time. }
%
\item \textbf{\changed{$RQ_1$: How much time elapses until a vulnerability is disclosed?}}
\changed{By answering this question for both {npm}\xspace and {RubyGems}\xspace, security researchers in these ecosystems will be able to assess how quick they are in finding and disclosing vulnerabilities. Users of these two ecosystems will be able to know which community has active security researchers, which will eventually help them assess their trust in the third-party packages they depend on.}
%
\item \textbf{\minor{$RQ_2$: For how long do packages remain affected by disclosed vulnerabilities?}}
\changed{Package dependents will gain insights about the number of dependency releases they should expect to be affected by a newly disclosed vulnerability and how many more releases are going to be affected by the same vulnerability even after its disclosure. The answer to this question will also help dependents to know to which type of package releases (i.e.,\xspace major, minor or patch) they should update to have their dependency vulnerabilities fixed.}
%
\item \textbf{\changed{$RQ_3$: To \minor{what} extent are dependents exposed to their vulnerable dependencies?}}
\changed{Vulnerable dependencies can expose their dependents to vulnerable code that might lead to security breaches. We will identify all direct and indirect vulnerable dependencies that are exposing packages and external {GitHub}\xspace projects, and characterize their vulnerabilities. \minor{In this and the next two research questions we study dependents as if they were deployed on 12 January 2020 (i.e.,\xspace the snapshot of the vulnerability dataset).}}
%
\item \textbf{\changed{$RQ_4$: How are vulnerabilities spread in the dependency tree?}}
\changed{The answer to this question will inform us how deep in the dependency tree we can find vulnerabilities to which packages and external {GitHub}\xspace projects are exposed. This helps us to quantify the transitive impact that vulnerable packages may have on their transitive dependents.}
%
\item \textbf{$RQ_5$: Do exposed dependents upgrade their vulnerable dependencies when a vulnerability fix is released?}
\changed{We will quantify how much dependent packages and dependent external {GitHub}\xspace projects would benefit from updating their dependencies for which there is a known fix available. We will also report on the number of vulnerable dependencies that could be reduced by only doing backward compatible updates.}
\item \textbf{\minor{$RQ_6$: To what extent are dependents exposed to their vulnerable dependencies at their release time?}}
\minor{We will identify the dependencies that were only affected by vulnerabilities disclosed before each dependent's release time. Answering this research question will help us to assess whether developers are careful about incorporating dependencies with already disclosed vulnerabilities.}
\end{itemize}
The remainder of this article is structured as follows: \sect{sec:related} discusses related work and highlights the differences to previous studies. \sect{sec:method} explains the research method and the data extraction process, and presents a preliminary analysis of the selected dataset. \sect{sec:results} empirically studies the research questions for the {npm}\xspace and {RubyGems}\xspace package distributions. \sect{sec:discussion} highlights the novel contributions, discusses our findings, and outlines possible directions for future work.
\sect{sec:threats} discusses the threats to the validity of this work and \sect{sec:conclusion} concludes.

\section{Related work}
\label{sec:related}
\subsection{\changed{Terminology}}
\label{subsec:terminology}
This section introduces the terminology used throughout this article. All main terms are highlighted in \textbf{boldface}.
\textbf{Package distributions} (such as {npm}\xspace and {RubyGems}\xspace) are collections of (typically open source) software \textbf{packages}, distributed through some package registry.
Each of these packages has one or more \textbf{releases}.
New releases of a package are called \textbf{package updates}.
Each package release is denoted by a unique \textbf{version number}. The version number reflects the sequential order of all releases of a package.
\textbf{Semantic versioning}, hereafter abbreviated as \emph{semver}\xspace\footnote{See \url{https://semver.org}}, proposes a multi-component version numbering scheme \textsf{major.minor.patch[-tag]}
to specify the type of changes that have been made in a package update.
Backward incompatible changes require an increment of the \textbf{major} version component, important backward compatible changes (e.g.,\xspace adding new functionality that does not affect existing dependents) require an increment of the \textbf{minor} component, and backward compatible bug fixes require an increment of the \textbf{patch} component.
The main purpose of package distributions is to facilitate software reuse. To do so,
a package release $R$ can explicitly declare a \textbf{dependency relation} to another package $P$.
$R$ will be called a \textbf{(direct) dependent} of $P$, while $P$ will be called a \textbf{(direct) dependency} of $R$.
Dependency relations come with {\bf dependency constraints}
that specify which releases of the dependency $P$
are allowed to be selected for installation when $R$ is installed. Such constraints express a \textbf{version range}. For example, constraint \verb|<2.0.0| defines the version range \verb|[0.0.0, 2.0.0)|, signifying that \emph{any} release below version 2.0.0 of the dependency is allowed to be installed. The highest available version within this range will be selected for installation by the package manager.
Combining \emph{semver}\xspace with dependency constraints enables maintainers of dependents to restrict the version range of dependencies to those releases that are expected to be backward compatible~\cite{decan2019package}.
For example, a dependency relation in an {npm}\xspace release $R$ could express a constraint \verb|^1.2.3| to allow the version range \verb|[1.2.3,2.0.0)| of backward compatible releases. In {RubyGems}\xspace, dependency constraint \verb|~>1.2| would allow the version range \verb|[1.2.0,2.0.0)| of backward compatible releases.
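To illustrate how such constraints are resolved in practice, the following Python sketch selects the highest available release within a version range; the use of the \texttt{packaging} library and the hard-coded translation of \verb|^1.2.3| to the range \verb|[1.2.3,2.0.0)| are illustrative assumptions, not the resolver used later in this paper.
\begin{verbatim}
from packaging.version import Version

def resolve(available, lo, hi):
    """Return the highest version v with lo <= v < hi, or None."""
    in_range = [v for v in available
                if Version(lo) <= Version(v) < Version(hi)]
    return max(in_range, key=Version, default=None)

releases = ["1.2.3", "1.4.0", "1.9.9", "2.0.0"]
# npm constraint ^1.2.3 corresponds to the range [1.2.3, 2.0.0)
print(resolve(releases, "1.2.3", "2.0.0"))   # -> 1.9.9
\end{verbatim}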
The collection of all package releases and their dependencies in a package distribution forms a \textbf{package dependency network}.
A release $R$ is an \textbf{indirect dependent} of another release $D$ if there is a chain of length 2 or longer between them in the dependency network. Conversely $D$ will be called an \textbf{indirect dependency} of $R$.
We will refer to the union of direct and indirect dependents (respectively, dependencies) of a release as \textbf{transitive dependents} (respectively, dependencies) of that release.
\changed{In the context of this paper and more specifically in $RQ_3$ to $RQ_6$, we only study the vulnerabilities coming from dependencies used in the latest release of each package in {npm}\xspace and {RubyGems}\xspace. \minor{Because of this, we will occasionally use the term package to refer to the latest release of a dependent package.}
By abuse of terminology we declare a \emph{package} to be a (direct/indirect) dependent of $P$ if its latest available \emph{release} depends (directly or indirectly) on $P$.}
Not only \emph{packages} can depend on other packages within a package dependency network, but the same is true for \emph{external projects} that are developed and/or distributed outside of the package distribution (e.g.,\xspace on {GitHub}\xspace). By extension of the term dependent package, we use the term \textbf{dependent project} to refer to a {GitHub}\xspace repository containing an external software project that (directly or indirectly) depends on one of the packages of the considered package distribution. For example, the project \textsf{Atom}~\footnote{\url{https://github.com/atom/atom/blob/master/package.json}} is a dependent of {npm}\xspace package \textsf{mocha}~\footnote{\url{https://www.npmjs.com/package/mocha}}. Similarly, \textsf{Discourse}~\footnote{\url{https://github.com/discourse/discourse/blob/master/Gemfile}} is a dependent project of {RubyGems}\xspace package \textsf{json}~\footnote{\url{https://rubygems.org/gems/json}}.
A \textbf{vulnerability} is a known reported security threat that affects some releases of some packages in the package distribution. The packages will be called \textbf{vulnerable packages} and their affected releases will be called \textbf{vulnerable releases}.
The package's vulnerability is \textbf{fixed} as soon as a package update that is no longer affected by the vulnerability becomes available.
A \textbf{vulnerable dependency} is a vulnerable release that is used as a dependency by another package or project (directly or indirectly).
Vulnerable dependencies can expose their dependents to the vulnerability.
We refer to those as \textbf{exposed dependents}. We can distinguish between \emph{directly exposed} dependents (if a direct dependency is vulnerable) and \emph{indirectly exposed} dependents (if an indirect dependency is vulnerable).
To distinguish between dependent releases and dependent projects that may be exposed, we use the terms \textbf{exposed package} (release) and \textbf{exposed project}, respectively.
\subsection{\changed{Package dependency networks}}
\label{subsec:pdn}
Software dependency management and package dependency networks have been subject to many research studies for different software ecosystems. Wittern et al.\xspace\cite{wittern2016look} examined the npm ecosystem in an extensive study that covers package descriptions, the dependencies among them, download metrics, and the use of {npm}\xspace packages in publicly available repositories.
One of their findings is that the number of {npm}\xspace packages and their updates is growing superlinearly. They also observed that packages are increasingly connected through dependencies. More than 80\% of npm packages have at least one direct dependency. %
Kikas et al.\xspace\cite{kikas2017structure} analysed the dependency network structure and evolution of the JavaScript, Ruby, and Rust package distributions. One of their findings is that the number of transitive
dependencies for JavaScript has grown by 60\% in 2016. They also found that the negative consequences of removing a popular package (e.g.,\xspace the left-pad incident~\footnote{\url{https://github.com/left-pad/left-pad/issues/4}}) are increasing.
In a more extensive study, Decan et al.\xspace\cite{decan2017empirical} empirically compared the impact of dependency issues in the {npm}\xspace, {CRAN}\xspace and {RubyGems}\xspace package distributions. A follow-up study~\cite{Decan2019} expanded this comparison with four more distributions, namely {CPAN}\xspace, {Packagist}\xspace, {Cargo}\xspace and {NuGet}\xspace. They observed important differences between the considered ecosystems that are related to ecosystem specific factors. Similarly, Bogart et al.\xspace~\cite{bogart2016break} performed multiple case studies of three software ecosystems with different tooling and philosophies toward change (Eclipse, {CRAN}\xspace, and {npm}\xspace), to understand how developers make decisions about change and change-related costs and what practices, tooling, and policies are used. %
They found that the three ecosystems differ significantly in their practices, policies and community values. \changed{Gonzalez-Barahona et al.\xspace~\cite{gonzalez2017technical,zerouali2018empirical,zerouali2021multi,decan2018evolution} introduced the notion of technical lag to quantify the degree of outdatedness of packages and dependencies, along different dimensions, including time lag, version lag and vulnerability lag. }
\subsection{\changed{Security vulnerabilities}}
\label{subsec:sv}
\minor{Software vulnerabilities are discovered on a daily basis, and need to be identified and fixed as soon as possible.}
This explains the many studies conducted by software engineering researchers on the matter (e.g.,\xspace~\cite{pham_detecting_2010,Shin2010TSE,Pashchenko2018,ruohonen2018empirical,chinthanet2020code}). Several researchers observed that outdated dependencies are a potential source of security vulnerabilities.
Cox~et al.\xspace\cite{cox2015measuring} analysed 75 Java projects that manage their dependencies through Maven. They observed that projects using outdated dependencies were four times more likely to have security issues and backward incompatibilities than systems that were up-to-date. Gkortzis et al.\xspace~\cite{Gkortzis2020jss} studied the relationship between software reuse and security threats by empirically investigating 1,244 open-source Java projects to explore the distribution of vulnerabilities between the code created by developers and the code reused through dependencies. Based on a static analysis of the source code, they observed that large projects are associated with a higher number of potential vulnerabilities. Additionally, they found that the number of dependencies in a project is strongly correlated to its number of vulnerabilities. \changed{Massacci et al.\xspace~\cite{massacci2021technical} investigated whether leveraging on FOSS Maven-based Java libraries is a potential source of security vulnerabilities. They found that small and medium libraries have disproportionately more leverage on FOSS dependencies in comparison to large libraries. They also found that libraries with higher leverage have 1.6 higher odds of being vulnerable in comparison to the libraries with lower leverage. }
Ponta~et al.\xspace\cite{Ponta2020EMSE} presented a code-centric and usage-based approach to detecting and assessing OSS vulnerabilities, and to determining their reachability through direct and transitive dependencies of {Java}\xspace applications. Their combination of static and dynamic analysis improves upon the state of the art which commonly considers dependency meta-data only without verifying to which dependents the vulnerability actually propagates. The Eclipse Steady tool instantiates the approach, and has been shown to report fewer false positives (i.e.,\xspace vulnerabilities that do not really propagate to dependencies) than earlier tools.
In a similar vein, Zapata et al.\xspace\cite{zapata2018towards} carried out an empirical study that analysed vulnerable dependency migrations at the function level for 60 JavaScript packages. They provided evidence that many outdated projects are free of vulnerabilities as they do not really rely on the functionality affected by the vulnerability. Because of this, the authors claim that security vulnerability analysis at package dependency level is likely to be an overestimation.
Decan et al.\xspace~\cite{decan2018impact} conducted an empirical analysis of 399 vulnerabilities reported in the {npm}\xspace package dependency network containing over 610k {JavaScript}\xspace packages in 2017.
They analysed how and when these vulnerabilities are \changed{disclosed} and to which extent this affects directly dependent packages.
They did not consider the effect of vulnerabilities on transitive dependents, nor did they study the impact on external {GitHub}\xspace projects depending on {npm}\xspace packages.
They observed that it often takes a long time before an introduced vulnerability is \changed{disclosed}.
A non-negligible proportion of vulnerabilities (15\%) are considered to be more risky because they are fixed only after public announcement of the vulnerability, or not fixed at all.
They also found that the presence of package dependency constraints plays an important role in vulnerabilities not being fixed, mainly because the imposed constraints prevent fixes from being installed.
Zimmermann et al.\xspace\cite{zimmermann2019small} studied 609 vulnerabilities in {npm}\xspace packages, providing evidence that this ecosystem suffers from single points of failure, i.e.,\xspace a very small number of maintainer accounts could be used to inject malicious code into the majority of all packages (e.g.,\xspace the \textsf{event-stream} incident~\footnote{\url{https://www.theregister.com/2018/11/26/npm_repo_bitcoin_stealer/}}). This problem increases with time, and unmaintained packages threaten large code bases, as the lack of maintenance causes many packages to depend on vulnerable code, even years after a vulnerability has become public. They studied the transitive impact of vulnerable dependencies, as well as the problems related to lack of maintenance of vulnerable packages.
Alfadel~et al.\xspace\cite{alfadelempirical} carried out a study on 550 vulnerability reports affecting 252 {Python}\xspace packages from {PyPI}\xspace. They found that the number of vulnerabilities \changed{disclosed} in {Python}\xspace packages increases over time, and some take more than 3 years to be \changed{disclosed}. \changed{Meneely~et al.\xspace~\cite{meneely2013patch} inspected 68 vulnerabilities in the Apache HTTP server and traced them back to the first commits that contributed to the vulnerable code. They manually found 124 Vulnerability-Contributing Commits (VCCs). After analyzing these VCCs, they found that VCCs have more than twice as much code churn on average than non-VCCs. They also observed that commits authored by new developers have more chances to be VCCs than other commits.}
Prana et al.\xspace~\cite{Prana2021EMSE} analysed vulnerabilities in open-source libraries used by 450 software projects written in {Java}\xspace, {Python}\xspace, and {Ruby}\xspace. Using an industrial software composition analysis tool, they scanned versions of the sample projects after each commit. They found that project activity level, popularity, and developer experience do not translate into better or worse handling of dependency vulnerabilities. As a recommendation to software developers, they highlighted the importance of managing the number of dependencies and of performing timely updates.
\changed{Pashchenko et al.\xspace~\cite{Pashchenko2018} studied the over-inflation problem of academic and industrial approaches for reporting vulnerable dependencies in OSS software. After inspecting 200 Java libraries they found that 20\% of their dependencies affected by a known vulnerability are not deployed, and therefore, they do not represent a danger to the analyzed library because they cannot be exploited in practice. They also found that 81\% of vulnerable direct dependencies may be fixed by simply updating to a new version. In our article, we follow the procedure recommended by Pashchenko et al.\xspace~\cite{Pashchenko2018} by only focusing on run-time dependencies which are essential for deployment.}
\subsection{\changed{Novelty of our contribution}}
\label{subsec:noc}
The empirical study proposed in this article expands upon previous work in different ways.
We conduct a quantitative comparison of vulnerabilities in both the {npm}\xspace and {RubyGems}\xspace package dependency networks based on a more recent dataset of packages and their vulnerabilities (2020). We study the impact of a large set of 2,786 vulnerabilities, of which 2,118 for {npm}\xspace and 668 for {RubyGems}\xspace while grouping them by their severity levels. We also consider dependencies of external {GitHub}\xspace projects on vulnerable {npm}\xspace and {RubyGems}\xspace packages.
For the latter, we are the first to study the prevalence, \changed{disclosure} and fixing time of their vulnerabilities.
When studying the impact of vulnerable packages on their dependents, we do not only focus on direct dependencies, but also consider the indirect ones. For those indirect dependencies we study the evolution and \changed{spread} of vulnerable indirect dependencies at different levels in the dependency tree. Finally, we are the first to compare the vulnerability of packages distributed via package distributions with the vulnerability of {GitHub}\xspace projects that use these packages. \changed{Such a comparison is important since packages are meant to be reused as libraries, so their maintainers are expected to be more careful than developers of external projects that merely depend on these reusable libraries and are much less likely to have other projects depending on them.} We are also the first to report results on how vulnerability \changed{disclosure} time duration is evolving over time.
\section{Dataset}
\label{sec:method}
This paper analyzes \changed{the Common Vulnerabilities and Exposures (CVE\footnote{\url{cve.mitre.org}}) affecting the {npm}\xspace and {RubyGems}\xspace distributions of reusable software packages.
Both package distributions are well-established and mature ({RubyGems}\xspace was created in 2004 and {npm}\xspace in 2010) and both recommend \emph{semver}\xspace for package releases~\cite{decan2019package}. Due to an important difference in popularity of the targeted programming language (JavaScript and Ruby, respectively), the number of packages distributed through {npm}\xspace is an order of magnitude higher than the number of packages distributed through {RubyGems}\xspace.
We focus on these package distributions because they have a community that is actively looking for and reporting vulnerabilities. The vulnerabilities in both distributions are reported and tracked by well-known CVE Numbering Authorities (CNA)}.
\subsection{Vulnerability dataset}
\label{subsec:vulnerability}
To detect vulnerabilities in {npm}\xspace and {RubyGems}\xspace packages we rely on a database of vulnerability reports of third-party package vulnerabilities collected by the continuous security monitoring service {Snyk}\xspace~\footnote{\url{https://snyk.io/vuln}}.
We received a snapshot of this vulnerability database on 17 April 2020.
\changed{This snapshot contained 2,874 vulnerability reports for the considered package distributions, of which 2,188 for {npm}\xspace and 686 for {RubyGems}\xspace. The higher number of reported vulnerabilities for {npm}\xspace can be explained by the higher number of packages contained in it.}
Each vulnerability report contains information about the affected package, the range of affected releases, the severity of the vulnerability \minor{as reported by Snyk's security team}, its origin (i.e.,\xspace the package distribution), the date of \changed{disclosure}, the date when it was published in the database, the first fixed version (if available), and the unique CVE identifier.
\fig{fig:example-vul} shows an example of a vulnerability report of the popular {RubyGems}\xspace package \texttt{rest-client}~\footnote{\url{https://snyk.io/vuln/SNYK-RUBY-RESTCLIENT-459900}}. Vulnerability reports for {npm}\xspace packages contain similar information.
\begin{figure}[!ht]
\centering
\begin{tabular}{|ll|}
\hline
\textbf{Vulnerability name:} & Malicious Package \\
\textbf{Severity:} & critical \\
\textbf{Affected package:} & rest-client \\
\textbf{Affected versions:} & \textgreater{}=1.6.10, \textless{}1.7.0.rc1 \\
\textbf{Package manager:} & RubyGems \\
\textbf{\changed{Disclosure} date:} & 2019-08-19 \\
\textbf{Publication date:} & 2019-08-20 \\
\textbf{Version with the first fix:} & 1.7.0.rc1 \\
\textbf{CVE identifier:} & CVE-2019-15224 \\ \hline
\end{tabular}
\caption{Excerpt of vulnerability report for {RubyGems}\xspace package {\sf rest-client}.}
\label{fig:example-vul}
\end{figure}
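As an illustration of how the \emph{affected versions} field of such a report can be checked programmatically, the sketch below approximates version ordering with the Python \texttt{packaging} library; mapping the RubyGems pre-release suffix \texttt{.rc1} onto PEP~440 notation is a simplifying assumption, not the tooling used in our analysis.
\begin{verbatim}
from packaging.version import Version

def is_affected(version):
    # affected range of the report above: >=1.6.10, <1.7.0.rc1
    v = Version(version.replace(".rc", "rc"))
    return Version("1.6.10") <= v < Version("1.7.0rc1")

print(is_affected("1.6.14"))     # True: affected release
print(is_affected("1.7.0.rc1"))  # False: version with the first fix
\end{verbatim}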
\subsection{Dependency dataset of packages and external projects}
\label{subsec:dependency}
Using %
version 1.6.0 of the \emph{libraries.io}\xspace Open Source Repository and Dependency Dataset~\cite{librariesio2020Jan} that was released on 12 January 2020, we identified all \emph{package releases} in the {npm}\xspace and {RubyGems}\xspace package distributions. \changed{As this dataset was released three months before the snapshot date of the {Snyk}\xspace vulnerability database (17 April 2020), it does not contain releases that are affected by vulnerabilities disclosed after 12 January 2020. We therefore ignore these releases in our analysis.%
We also identified all \emph{external projects} hosted on GitHub and referenced in \emph{libraries.io}\xspace dataset as depending on {npm}\xspace or {RubyGems}\xspace packages. Repositories of these projects do not correspond to the development history of any of the considered packages, and are not forked from any other repository. This way we ensure that packages and external projects are mutually exclusive, and we avoid considering the same projects multiple times in the analysis.}
A manual analysis revealed that \emph{libraries.io}\xspace is not always accurate about the dependencies used by external projects. \changed{More specifically, we occasionally observed a mix between run-time and development dependencies for projects that use {npm}\xspace, i.e.,\xspace dependencies that are declared as development dependencies in the project's repository in {GitHub}\xspace are reported as run-time dependencies in \emph{libraries.io}\xspace.
Because of this inaccuracy,} we only relied on \emph{libraries.io}\xspace
to identify the \changed{names} of the most starred projects, i.e.,\xspace those that received 90\% of all stars within the same ecosystem ({npm}\xspace or {RubyGems}\xspace). \changed{Overall, the selected {GitHub}\xspace projects received 5.37M stars out of 5.96M. The minimum number of stars found in the resulting dataset was 62. Afterwards,} to determine the {npm}\xspace and {RubyGems}\xspace package dependencies for these projects, we extracted and analysed their \emph{package.json} and \emph{Gemfile} from {GitHub}\xspace, which are the files in which the dependency metadata is stored for the respective package distributions.
As older package releases are more likely to be exposed to vulnerable packages than recent versions~\cite{cox2015measuring,zerouali2019saner}, this might bias the analysis results.
We therefore decided to focus only on the latest available version of each considered package, and on the snapshot of the last commit before 12 January 2020 of each considered external {GitHub}\xspace project.
We also decided to focus only on packages and external projects with {\em run-time dependencies}, thereby ignoring development and optional dependencies. Run-time dependencies are needed to deploy and run the dependent in production, while development dependencies are only needed while the dependent is being developed (e.g.,\xspace for testing dependencies). We ignore the latter in our study because they are unlikely to affect the production environment\changed{~\cite{Pashchenko2018}}.
Based on all of the above, we selected the latest available releases in \emph{libraries.io}\xspace of 842,697 packages \changed{(748,026 for {npm}\xspace and 94,671 for {RubyGems}\xspace) and 24,593 external projects hosted on {GitHub}\xspace (13,930 using {npm}\xspace packages and 10,663 using {RubyGems}\xspace packages)} with run-time dependencies. %
\fig{fig:packages_evolution} shows the evolution over time of the cumulative number of packages (left y-axis) and external projects (right y-axis) considered in this study, grouped by package distribution.
\begin{figure}[!ht]
\begin{center}
\setlength{\unitlength}{1pt}
\footnotesize
\includegraphics[width=0.98\columnwidth]{packages_evolution.pdf}
\caption{Evolution of the cumulative number of packages (straight lines, using the scale on the left y-axis) and external {GitHub}\xspace projects (dotted lines, using the scale on the right y-axis) for {npm}\xspace and {RubyGems}\xspace}
%
%
%
\label{fig:packages_evolution}
\end{center}
\end{figure}
Using the dependency constraint resolver proposed in~\cite{decan2019package}, which supports several package distributions, we determined the appropriate version of packages to be installed for each dependent according to the constraints for its run-time dependencies.
As some constraints may resolve to different versions at different points in time, we use the \emph{libraries.io}\xspace snapshot date as the resolution date. This implies that we study vulnerabilities in packages and external projects as if they were installed or deployed on 12 January 2020.
Having determined the versions of all direct dependencies, we turn to the indirect ones. All considered {npm}\xspace packages have a total of 68,597,413 dependencies of which 3,638,361 are direct (i.e.,\xspace 5.3\%), while all {RubyGems}\xspace packages have 1,258,829 dependencies of which 224,959 are direct (i.e.,\xspace 17.9\%).
The considered external projects have 2,814,544 {npm}\xspace dependencies of which 147,622 are direct (i.e.,\xspace 5.2\%), and 544,506 {RubyGems}\xspace dependencies of which 101,079 are direct (i.e.,\xspace 18.6\%).
We observe that {RubyGems}\xspace packages and external projects have more than thrice as many direct dependencies as {npm}\xspace packages and external projects. This is in line with the observations made by Decan~et al.\xspace\cite{decan2019package}. \fig{fig:dependency_evolution} shows the evolution over time of the cumulative number of direct and indirect dependencies for \changed{the latest package releases} and external projects considered in this study, grouped by package distribution.
\begin{figure}[!ht]
\begin{center}
\setlength{\unitlength}{1pt}
\footnotesize
\includegraphics[width=0.98\columnwidth]{dependency_evolution.pdf}
\caption{Monthly evolution of the cumulative number of direct and indirect dependencies for packages (top figure) and external projects (bottom figure) for {npm}\xspace and {RubyGems}\xspace.}
\label{fig:dependency_evolution}
%
%
%
%
\end{center}
\end{figure}
\section{Empirical Analysis}
\label{sec:results}
Using the datasets of \sect{sec:method}, this section answers the research questions introduced in \sect{sec:intro}. %
For $RQ_0$ to $RQ_2$, we present statistical analyses based on the affected {npm}\xspace and {RubyGems}\xspace package releases according to the vulnerability dataset.
For $RQ_3$ to \minor{$RQ_6$} we study the impact \changed{of affected releases on exposed packages and exposed projects that directly or indirectly depend on them.}
\changed{As part of the statistical analyses, %
we use the non-parametric Mann-Whitney U test to compare various types of distributions without assuming them to follow a normal distribution. The null hypothesis $H_0$ states that there is no difference between two distributions.} We set a global confidence level of 95\%, corresponding to a significance level of $\alpha = 0.05$. To achieve this overall confidence, the $p$-value of each individual test is compared against a lower $\alpha$ value, following a Bonferroni correction\footnote{If $n$ different tests are carried out over the same dataset, for each individual test one can only reject $H_0$ if $p< \frac{0.05}{n}$. In our case \changed{$n=48$, i.e.,\xspace $p<0.001$}.}.
\changed{If the null hypothesis can be rejected, we report the effect size with Cliff's delta $d$, a non-parametric measure that quantifies the difference between two populations beyond the interpretation of $p$-values.} Following the guidelines of Romano~et al.\xspace~\cite{romano2006exploring}, we interpret the effect size to be \emph{negligible} if $|d|\in[0,0.147[$, \emph{small} if $|d|\in[0.147,0.33[$, \emph{moderate} if $|d|\in [0.33,0.474[$ and \emph{large} if $|d|\in [0.474,1]$.
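As an illustration, the following Python sketch (using SciPy; the sample data is hypothetical) carries out one such comparison, applies the Bonferroni-corrected threshold, and derives Cliff's delta directly from the $U$ statistic:
\begin{verbatim}
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
npm_counts = rng.poisson(3.0, size=200)       # hypothetical counts
rubygems_counts = rng.poisson(2.0, size=150)

u, p = mannwhitneyu(npm_counts, rubygems_counts,
                    alternative="two-sided")
significant = p < 0.05 / 48  # Bonferroni correction, 48 tests

n, m = len(npm_counts), len(rubygems_counts)
delta = 2 * u / (n * m) - 1  # Cliff's delta from the U statistic

def interpret(d):
    d = abs(d)
    if d < 0.147: return "negligible"
    if d < 0.33:  return "small"
    if d < 0.474: return "moderate"
    return "large"

print(p, significant, delta, interpret(delta))
\end{verbatim}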
We use the technique of survival analysis~\cite{Klein2013} to estimate the probability that an event of interest will happen. Survival analysis creates a model estimating the survival rate of a population over time until the occurrence of an event, considering the fact that some subjects may leave the study, while for others the event of interest might not occur during the observation period.
We rely on the non-parametric Kaplan-Meier statistic estimator commonly used to estimate survival functions.
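A minimal sketch of such an estimation, using the \texttt{lifelines} library (the durations and censoring flags below are hypothetical):
\begin{verbatim}
from lifelines import KaplanMeierFitter

# Months until a vulnerability is fixed; observed=0 means the
# vulnerability was still unfixed at the end of the observation
# period (right-censored).
durations = [3, 12, 55, 7, 90, 24, 60, 18]
observed  = [1,  1,  1, 1,  0,  1,  0,  1]

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=observed)

print(kmf.median_survival_time_)  # time by which 50% of events occur
print(kmf.survival_function_)     # survival probability over time
\end{verbatim}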
All code used to carry out this analysis is available in a replication package\footnote{\url{https://github.com/AhmedZerouali/vulnerability_analysis}}.
\subsection{\changed{$RQ_0$: How prevalent are \minor{disclosed} vulnerabilities in \npm and \rubygems packages?}}
\label{subsec:rq0}
This research question aims to characterise the \changed{vulnerability dataset of \sect{subsec:vulnerability}, its evolution over time as well as the number of package releases affected by these vulnerabilities.
After the identification of package releases and dependencies in \sect{subsec:dependency}, we found that 88 vulnerabilities do not affect any package release that is used as a dependency. Therefore, our final dataset contains 2,786 vulnerabilities, of which 2,118 directly affecting {npm}\xspace packages and 668 directly affecting {RubyGems}\xspace packages.}
\fig{fig:number_vulns} depicts the number of considered vulnerabilities in this study, and the proportion of \textit{low, medium, high} or \textit{critical} severities for each package distribution. We observe that most of the vulnerabilities are of \textit{medium} or \textit{high} severity (76\% for {npm}\xspace and 89\% for {RubyGems}\xspace). We also observe that {npm}\xspace has nearly thrice as many \textit{critical} vulnerabilities as {RubyGems}\xspace. Another difference is that {npm}\xspace has more \textit{high} vulnerabilities than \textit{medium} ones, while the inverse is true for {RubyGems}\xspace. The collected vulnerabilities affect 1,672 {npm}\xspace and 321 {RubyGems}\xspace packages. The oldest vulnerability in {npm}\xspace was \changed{disclosed} in June 2011, whereas the oldest one in {RubyGems}\xspace was \changed{disclosed} in August 2006. We also found that 1,175 (42\%) of the vulnerabilities did not have any known fix, of which 1,058 are from {npm}\xspace and 117 from {RubyGems}\xspace.
\minor{Finding the reasons behind the observed differences between both ecosystems in terms of number, severity, and types of vulnerabilities (see \tab{tab:vuln_names}) can be hard.}
Each package distribution has its own tools, practices and policies~\cite{Bogart2021}, and the two ecosystems differ in the topological structure of their dependency networks as well as in their size and growth~\cite{Decan2019}.
Both package distributions also focus on different programming languages.
\fig{fig:number_vulns} therefore does not normalize against the size of each ecosystem. When normalizing, the proportion of vulnerable packages is 0.22\% for {npm}\xspace and 0.4\% for {RubyGems}\xspace, even though {npm}\xspace is 8 times larger than {RubyGems}\xspace.
\begin{figure}[!ht]
\begin{center}
\setlength{\unitlength}{1pt}
\footnotesize
\includegraphics[width=0.98\columnwidth]{all_vuls_prop.pdf}
\caption{Proportion and number of considered vulnerabilities affecting {npm}\xspace and {RubyGems}\xspace packages, grouped by severity.}
\label{fig:number_vulns}
\end{center}
\end{figure}
\fig{fig:affected_packages} depicts the evolution over time of the cumulative number of vulnerabilities (straight lines) grouped by severity, and their corresponding affected packages (dotted lines). The y-axis scale for {npm}\xspace is different from the one for {RubyGems}\xspace because more vulnerabilities have been reported for {npm}\xspace.
The number of reported vulnerabilities, and thereby of affected packages, increases over time for both {npm}\xspace and {RubyGems}\xspace. We also see that before 2017, the number of medium vulnerabilities was higher than the number of high vulnerabilities in {npm}\xspace. Between 2016 and 2018, however, the number of high vulnerabilities started increasing at a faster rate. %
A similar trend can be observed for the critical vulnerabilities in 2019. This means that proportionally speaking, the severity of {npm}\xspace vulnerabilities tends to increase over time, while such an observation cannot be made for {RubyGems}\xspace.
\begin{figure}[!ht]
\begin{center}
\setlength{\unitlength}{1pt}
\footnotesize
\includegraphics[width=0.98\columnwidth]{affected_packages.pdf}
\caption{Temporal evolution of the cumulative number of \changed{disclosed} vulnerabilities (straight lines) and the corresponding number of affected packages (dotted lines) per severity.}
%
%
\label{fig:affected_packages}
\end{center}
\end{figure}
Looking at \textit{ALL} vulnerabilities, we can see an exponential growth in the number of {npm}\xspace vulnerabilities, while {RubyGems}\xspace shows a linear growth rate.
This is statistically confirmed by a regression analysis using linear and exponential growth models. The $R^2$ values reflecting the goodness of fit are summarized in \tab{tab:r_square_vuls}.\footnote{$R^2 \in [0,1]$; the closer to 1, the better the model fits the data.}
For both vulnerabilities and affected packages, {npm}\xspace follows an exponential growth while {RubyGems}\xspace follows a linear one.
The exponential growth of {npm}\xspace is in line with the exponential growth of its total number of packages\changed{~\cite{Decan2019}}.
Given that {npm}\xspace is by far the largest package distribution available\footnote{According to \emph{libraries.io}\xspace, in May 2021, {npm}\xspace contained 1.79M packages compared to ``only'' 173K packages in {RubyGems}\xspace.}, it is considerably more likely to \changed{contain reported vulnerabilities. On the one hand, {npm}\xspace's popularity may attract more malicious developers who want to exploit vulnerabilities contained in some of its packages; on the other hand, it may attract more security researchers who aim to find and report vulnerabilities before they can be exploited.}
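The following sketch (with hypothetical monthly counts) illustrates how such a goodness-of-fit comparison can be made, fitting the exponential model as a linear model on a log scale:
\begin{verbatim}
import numpy as np

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

t = np.arange(1, 101)      # months since the first vulnerability
y = 5 * np.exp(0.04 * t)   # hypothetical cumulative counts

a, b = np.polyfit(t, y, 1)             # linear: y = a*t + b
r2_linear = r_squared(y, a * t + b)

c, d = np.polyfit(t, np.log(y), 1)     # exponential: log y = c*t + d
r2_exp = r_squared(y, np.exp(c * t + d))

print(r2_linear, r2_exp)   # the better model has R2 closer to 1
\end{verbatim}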
\begin{table}[!ht]
\centering
\caption{$R^2$-values of regression analysis on the evolution of the number of vulnerabilities and affected packages.}
\label{tab:r_square_vuls}
\begin{tabular}{l|r|r|r|r}
\multirow{2}{*}{} & \multicolumn{2}{c|}{\bf {npm}\xspace} & \multicolumn{2}{c}{\bf {RubyGems}\xspace} \\
& \bf \# vulns & \bf \# packages & \bf \# vulns & \bf \# packages \\ \hline
\bf linear & 0.85 & 0.84 & \underline{0.94} & \underline{ 0.93} \\
\bf exponential & \underline{0.96} & \underline{0.97} & 0.90 & 0.90
\end{tabular}
\end{table}
For each vulnerability report in the vulnerability dataset, we identified the affected range of releases of the specified package in our package dataset.
Here too we relied on the constraint parser proposed in~\cite{decan2019package}.
While the vulnerability dataset corresponds to a relatively low number of vulnerable packages (see above), the majority of their releases are in fact affected: $67.4\%$ (i.e.,\xspace $43,330$ out of $64,236$) for {npm}\xspace and $63.8\%$ (i.e.,\xspace $11,488$ out of $17,987$) for {RubyGems}\xspace.
$89.9\%$ of these affected releases (comprising both {npm}\xspace and {RubyGems}\xspace) concern vulnerabilities of either \textit{medium} severity ($52.9\%$) or \textit{high} severity ($37\%$).
Regardless of their severity, $57.3\%$ and $27.8\%$ of all {npm}\xspace and {RubyGems}\xspace vulnerabilities affect more than 90\% of the releases of their corresponding packages. \changed{Most of these vulnerabilities (85.6\% and 62.9\%, respectively) are open ones that do not have any fix. Focusing only on fixed vulnerabilities, this proportion decreases to 16.2\% and 12.5\% for {npm}\xspace and {RubyGems}\xspace vulnerabilities affecting more than 90\% of their package releases, respectively. This means that clients of these packages should use the latest available releases; this holds especially for clients of {npm}\xspace packages.}
\tab{tab:vuln_names} shows the top ten vulnerability types affecting {npm}\xspace and {RubyGems}\xspace packages, with the number of vulnerabilities of each type grouped by severity.
We observe that the most prevalent vulnerability types are \emph{Malicious Package} for {npm}\xspace, found in 19.3\% of all {npm}\xspace vulnerability reports, and \emph{Cross-site Scripting (XSS)} for {RubyGems}\xspace, found in 17.4\% of all {RubyGems}\xspace vulnerability reports. {npm}\xspace and {RubyGems}\xspace packages are exposed to similar vulnerability types, but with different frequencies. Some vulnerability types seem to affect {JavaScript}\xspace packages more than {Ruby}\xspace packages, e.g.,\xspace \emph{Malicious Package} and \emph{Directory Traversal}.
\begin{table}[!ht]
\centering
\caption{Top ten vulnerability types affecting {npm}\xspace and {RubyGems}\xspace packages, with the number of vulnerabilities of each type grouped by severity (C~=~{\color{darkred}critical\xspace}, H~=~{\color{orange}high\xspace}, M~=~\medium, L~=~{\color{darkgreen}low\xspace}).}
\label{tab:vuln_names}
\begin{tabular}{lr|rrrr}
\bf {npm}\xspace vulnerability types & \bf \#vulns & \bf C & \bf H & \bf M & \bf L \\
\hline
Malicious Package & 410 & 345 & 64 & 1 & 0 \\
Directory Traversal & 331 & 5 & 297 & 28 & 1\\
Cross-site Scripting & 322 & 15 & 55 & 245 & 7 \\
Resource Downloaded over Insecure Protocol & 154 & 0 & 145 & 8 & 1\\
Regular Expression Denial of Service & 138 & 0 & 57 & 50 & 31\\
Denial of Service & 91 & 5 & 52 & 32 & 2\\
Prototype Pollution & 77 & 1 & 38 & 36 & 2\\
Command Injection & 57 & 0 & 20 & 34 & 3\\
Arbitrary Code Execution & 44 & 11 & 24 & 7 & 2 \\
Arbitrary Code Injection & 36 & 7 & 9 & 20 & 0 \\
\\ \bf {RubyGems}\xspace vulnerability types & \bf \#vulns & \bf C & \bf H & \bf M & \bf L \\
\hline
Cross-site Scripting & 116 & 0 & 6 & 108 & 2 \\
Denial of Service & 54 & 1 & 18 & 34 & 1 \\
Arbitrary Command Execution & 47 & 2 & 29 & 16 & 0\\
Information Exposure & 40 & 1 & 6 & 27 & 6 \\
Arbitrary Code Execution & 27 & 2 & 16 & 9 & 0\\
Man-in-the-Middle & 26 & 0 & 3 & 22 & 1\\
Malicious Package & 19 & 18 & 1 & 0 & 0\\
Cross-site Request Forgery & 18 & 0 & 4 & 14 & 0\\
SQL Injection & 18 & 1 & 11 & 6 & 0\\
Directory Traversal & 16 & 0 & 5 & 10 & 1\\
\end{tabular}
\end{table}
\smallskip
\noindent\fbox{%
\parbox{0.98\textwidth}{%
%
The number of reported vulnerabilities is increasing exponentially for {npm}\xspace and linearly for {RubyGems}\xspace.
The relative proportion of high and critical vulnerabilities seems to increase over time.
Two out of three releases of vulnerable packages are affected by at least one vulnerability for both ecosystems.
The types of most prevalent vulnerabilities differ between {npm}\xspace and {RubyGems}\xspace.
}%
}
\subsection{\changed{$RQ_1$: How much time elapses until a vulnerability is disclosed?}}
\label{subsec:rq1}
Delayed fixing of security vulnerabilities puts software packages and their dependents at risk, as it lengthens the window that attackers have to discover and exploit the vulnerability. Unknown vulnerabilities may linger and remain present in more recent releases of a vulnerable package, exposing the dependents of an entire release range for a substantial amount of time.
For example, in January 2021, a vulnerability named ``Baron Samedit'' was discovered in the popular Linux package {\sf sudo}\footnote{\url{https://snyk.io/vuln/SNYK-DEBIAN9-SUDO-1065095}}. This vulnerability allowed local users to gain root-level access and was introduced in the {\sf sudo} code back in July 2011, effectively exposing all releases during the past decade and therefore millions of deployments.
\changed{$RQ_1$ aims to study how much time it takes until a vulnerability is discovered. Since it is not possible to accurately know when a vulnerability was actually discovered, we rely instead on the vulnerability \emph{disclosure} date as a proxy for the \emph{discovery} date and we study the time needed before a lingering vulnerability gets disclosed.
Usually, when a vulnerability is discovered, a CVE identifier will be reserved for it so it can be uniquely identified later. These CVE IDs can be reserved by CVE Numbering Authorities (CNAs) that are authorized to reserve CVE IDs from MITRE\footnote{\url{https://cve.mitre.org/cve/request_id.html}}.
{Snyk}\xspace, {GitHub}\xspace and HackerOne are examples of such CNAs. To study the disclosure time in more detail, we inspected MITRE using the CVE identifiers we obtained for the vulnerabilities in our vulnerability dataset. We extracted the reservation date and CNA of each CVE and found that only 1,487 vulnerabilities (55\%) have a CVE linked to them. 820 of them were disclosed before the CVE reservation date, 639 were disclosed after that date, and 100 vulnerabilities had a disclosure date that coincided with the CVE reservation date. We found many organizations responsible for reserving the CVEs of vulnerabilities in {npm}\xspace and {RubyGems}\xspace packages, including Red Hat, Microsoft and GitHub. The top three CNAs with the most CVEs were HackerOne, MITRE and Snyk for {npm}\xspace, and MITRE, Red Hat and HackerOne for {RubyGems}\xspace.
For all vulnerabilities, we computed the \emph{disclosure lag}, i.e.,\xspace the number of days between the first affected release and the vulnerability \changed{disclosure} date. \fig{fig:discovery_evolution} shows the evolution over time of the disclosure lag \minor{distribution}, grouped by CNA. \minor{A gap can be observed between 2018 and 2020 for \textit{RubyGems - NO CVE} because during this period we did not find any disclosed vulnerability for {RubyGems}\xspace with the NO CVE id.}
The disclosure lag tends to increase in both package distributions, implying that recently reported vulnerabilities took longer to be disclosed than older ones. This trend can be observed for all CNAs. We also observe a longer disclosure lag for {RubyGems}\xspace vulnerabilities than for {npm}\xspace ones.
Comparing CNAs with each other, \fig{fig:discovery_evolution} also reveals that some CNAs tend to disclose and report vulnerabilities faster than others.
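Computing the disclosure lag itself is a plain date difference; a \texttt{pandas} sketch (all column names and values are hypothetical):
\begin{verbatim}
import pandas as pd

vulns = pd.DataFrame({
    "first_affected": pd.to_datetime(["2015-02-01", "2018-07-15"]),
    "disclosed":      pd.to_datetime(["2019-05-20", "2018-09-01"]),
    "cna":            ["HackerOne", "Snyk"],
})

vulns["disclosure_lag_days"] = \
    (vulns["disclosed"] - vulns["first_affected"]).dt.days

print(vulns.groupby("cna")["disclosure_lag_days"].median())
\end{verbatim}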
\begin{figure}[!ht]
\begin{center}
\setlength{\unitlength}{1pt}
\footnotesize
\includegraphics[width=0.98\columnwidth]{discovery_evolution_cna.pdf}
\caption{Evolution of the vulnerability disclosure lag \minor{distribution} in {npm}\xspace and {RubyGems}\xspace, grouped by CVE Numbering Authority. The shaded areas represent the interval between the $25^{th}$ and $75^{th}$ percentile.}
%
\label{fig:discovery_evolution}
\end{center}
%
\end{figure}
To make a fair comparison, we decided to focus on recent vulnerabilities that have been disclosed in the last three years, i.e.,\xspace after 2017-04-17.
In addition, we focus only on vulnerabilities that have been disclosed in packages that received at least one update in the last two years.
Indeed, since inactive packages have less maintenance activity, it would be unfair to compare vulnerabilities disclosed in these packages with vulnerabilities of packages receiving continuous maintenance, including vulnerability inspection.
This led us to consider only 1,276 vulnerabilities (45.8\%) for $RQ_1$ and $RQ_2$. \fig{fig:number_vulns_active} shows the proportion and number of vulnerabilities kept after this filtering. Compared to the unfiltered set in \fig{fig:number_vulns}, there is a considerably higher proportion of critical vulnerabilities.
}
\begin{figure}[!ht]
\begin{center}
\setlength{\unitlength}{1pt}
\footnotesize
\includegraphics[width=0.98\columnwidth]{active_vuls_prop.pdf}
\caption{Proportion and number of vulnerabilities after filtering out old vulnerabilities and inactive packages.}
\label{fig:number_vulns_active}
\end{center}
\end{figure}
\fig{fig:discovery_proportion} shows the cumulative proportion of \changed{disclosed vulnerabilities and their disclosure lag, grouped by severity level. For {npm}\xspace, we observe that critical vulnerabilities are the fastest to disclose. It only takes 3.1 months to disclose 50\% of all \textit{critical} vulnerabilities, while it takes respectively 49.3, 49.3 and 44.5 months to disclose 50\% of all \textit{low}, \textit{medium} and \textit{high} vulnerabilities.
For {RubyGems}\xspace, the disclosure lag seems to depend much less on its severity level.
Using log-rank tests for both package distributions, we could only confirm a statistically significant difference for the comparisons with \textit{critical} vulnerabilities in {npm}\xspace (e.g.,\xspace \textit{critical} vs \textit{high}). %
For all remaining comparisons in {npm}\xspace and all comparisons in {RubyGems}\xspace, the null hypothesis stating that there is no difference in disclosure lag could not be rejected.
Disregarding the severity status, vulnerabilities have a lower disclosure lag for {npm}\xspace than for {RubyGems}\xspace (in {npm}\xspace it takes 31.5 months to disclose 50\% of all vulnerabilities compared to 84.4 months for {RubyGems}\xspace). To statistically confirm this observed difference in disclosure lag, we carried out a Mann-Whitney U test. The null hypothesis could be rejected with a \textit{large} effect size ($|d|=0.53$), statistically confirming that vulnerabilities take longer to disclose for {RubyGems}\xspace than for {npm}\xspace. This could also be observed when grouping the analysis by CNA. Possible explanations for this finding may be that {npm}\xspace has better security detection tools and more security researchers than {RubyGems}\xspace. We found that vulnerabilities of {npm}\xspace packages were disclosed by 635 security researchers and communicated to MITRE via 21 CNAs, while {RubyGems}\xspace vulnerabilities were disclosed by 236 security researchers and communicated to MITRE via 17 CNAs.
Redoing the same analysis for \emph{inactive} packages, we found that it took 24.8 and 34.3 months to disclose 50\% of the vulnerabilities in inactive {npm}\xspace and {RubyGems}\xspace packages, respectively. To compare between inactive and active packages, we carried out Mann-Whitney U tests. We could only find a statistically significant difference with a \textit{large} effect size ($|d|=0.48$) in favor of active packages when doing the comparison for {RubyGems}\xspace. This means that, for {RubyGems}\xspace, vulnerabilities of inactive packages took less time to be disclosed than those of active packages.
Since \textit{Malicious Package} vulnerabilities are intentionally injected in the form of new updates or packages, we compared them to other vulnerabilities and expected them to be disclosed faster. Indeed, we found that 50\% of the \textit{Malicious Package} vulnerabilities in {npm}\xspace are disclosed within 12 days, while those of {RubyGems}\xspace are disclosed within 35 days. Carrying out Mann-Whitney U tests between \textit{Malicious Package} vulnerabilities and other vulnerabilities, we could only find a statistically significant difference with a \textit{small} effect size ($|d|=0.25$) in favor of non-\textit{Malicious Package} vulnerabilities when doing the comparison for {npm}\xspace. This means that, for {npm}\xspace, \textit{Malicious Package} vulnerabilities are disclosed faster than other vulnerabilities.}
\begin{figure}[!ht]
\begin{center}
\setlength{\unitlength}{1pt}
\footnotesize
\includegraphics[width=0.98\columnwidth]{discovery_proportion.pdf}
\caption{Cumulative proportion of \changed{disclosed} vulnerabilities in function of the time elapsed since the first affected package release, grouped by severity level.}
\label{fig:discovery_proportion}
%
%
%
\end{center}
\end{figure}
\smallskip
\noindent\fbox{%
\parbox{0.98\textwidth}{%
%
In {npm}\xspace, \changed{critical vulnerabilities are disclosed faster.}
Vulnerabilities in {npm}\xspace are \changed{disclosed} faster than in {RubyGems}\xspace.
It takes \changed{2.6 and 7 years to disclose half of the vulnerabilities} lingering in {npm}\xspace and {RubyGems}\xspace packages, respectively. The \changed{vulnerability disclosure lag} has been increasing over time in both package distributions. \changed{In {npm}\xspace, Malicious Package vulnerabilities are disclosed faster than other vulnerability types.}
}%
}
\subsection{\minor{$RQ_2$: For how long do packages remain affected by disclosed vulnerabilities?}}
\label{subsec:rq2}
\changed{$RQ_1$ studied the \emph{disclosure lag} between the first package release affected by a vulnerability and the moment this vulnerability was disclosed.
$RQ_2$ investigates \minor{the time that a vulnerability remains in a package until its fix (1) since the first affected release and (2) since the disclosure time. Case (1) refers to the number of days between the release date of the first affected release and the release date of the first package update that is no longer affected by the vulnerability, while case (2) refers to the number of days between the disclosure date of the vulnerability and the release date of the first package update that is no longer affected by the vulnerability.}
This \minor{analysis} is relevant since the longer a package remains affected, the longer it will remain a source of vulnerabilities to potential users and dependents. The latter will be obliged to rely on vulnerable package releases as long as the package maintainers have not released a package update that fixes the vulnerability. This also harms the package dependency network since, if a vulnerability fix is delayed, more and more dependents may potentially make use of the vulnerable package or may become potentially exposed through transitive dependencies. The later a fix becomes available, the more difficult it becomes for all dependents to update their transitive dependencies.
$RQ_2$ uses the same filtered dataset as $RQ_1$, ignoring inactive packages and focusing only on recent vulnerabilities.
In addition, we exclude \textit{Malicious Package} vulnerabilities since they are known to be fixed in a different way, most frequently by simply removing the malicious package (release) from the registry.
\minor{The analysis of $RQ_2$ will be subdivided into three parts: $RQ_2^a$ a characterisation of the type of versions in which vulnerabilities are fixed; $RQ_2^b$ the time between the first affected release and the fix; and $RQ_2^c$ the time between the vulnerability disclosure and the fix.}
\minor{\textbf{\subsubsection*{$RQ_2^a$ Version types in which vulnerabilities are fixed.}}}
Relying on the \emph{semver}\xspace specification~\cite{preston2013semantic,decan2019package}, we studied the \changed{package release version (i.e.,\xspace patch, minor or major) that fixes a vulnerability. The purpose is to determine whether there is a relation between the severity of the vulnerability and the version type of the first release that included a fix. According to \emph{semver}\xspace, patch releases are the most likely candidates for vulnerability fixes.}
As both {npm}\xspace and {RubyGems}\xspace promote and encourage the use of \emph{semver}\xspace, we expect to find most of their vulnerable packages to be fixed in a patch release.
\fig{fig:releases_rq2} presents stacked bar plots showing the proportion of fixed vulnerabilities per severity, \changed{grouped by the version type of the first unaffected release.\footnote{We implicitly assume here that the first unaffected release is the one containing the fix.} Our expectations are confirmed, since the majority of vulnerabilities are fixed in patch releases. Disregarding the severity level, 65\% of the vulnerabilities were fixed in patch releases, 22.2\% in minor releases and only 12.8\% in major releases. The severity of a vulnerability does not seem to play a major role in the version type of the release that contains its fix. An exception are the \textit{low} vulnerabilities in {RubyGems}\xspace: in contrast to {npm}\xspace, they do not seem to be considered as important, as they are fixed more often in minor and major releases that incorporate other types of changes as well.}
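Classifying the fixing release reduces to comparing its version number with that of the last affected release; a minimal sketch (assuming well-formed three-component \emph{semver}\xspace versions):
\begin{verbatim}
def bump_type(before, after):
    """Classify the semver bump between two
    (major, minor, patch) versions."""
    if after[0] > before[0]:
        return "major"
    if after[1] > before[1]:
        return "minor"
    return "patch"

# Hypothetical example: the vulnerability last affects 1.4.2
# and 1.4.3 is the first unaffected release -> a patch fix.
print(bump_type((1, 4, 2), (1, 4, 3)))  # patch
print(bump_type((1, 4, 2), (2, 0, 0)))  # major
\end{verbatim}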
\begin{figure}[!ht]
\begin{center}
\setlength{\unitlength}{1pt}
\footnotesize
\includegraphics[width=0.98\columnwidth]{releases_rq2_new.pdf}
\caption{Proportion of fixed vulnerabilities per severity, grouped by version type of the first unaffected release.}
\label{fig:releases_rq2}
%
\end{center}
\end{figure}
\smallskip
\noindent\fbox{%
\parbox{0.98\textwidth}{%
%
\changed{65\% of all \minor{disclosed} vulnerabilities are fixed in patch releases. \textit{Low} severity vulnerabilities in {RubyGems}\xspace are fixed less often in patch releases. The severity of a vulnerability does not seem to have an impact on the first release type in which the vulnerability is fixed.}
}%
}
\smallskip
\minor{\textbf{\subsubsection*{$RQ_2^b$ Time between the first affected release and the fix.}}}
Since many vulnerabilities in our dataset did not yet receive a fix (128 out of 693 for {npm}\xspace, and 9 out of 143 for {RubyGems}\xspace), we used a survival analysis~\cite{Klein2013} to estimate the probability over time for the event ``vulnerability is fixed" with respect to the date of the first affected release, and grouped by severity type. \fig{fig:survival_rq2} shows the Kaplan-Meier survival curves for this analysis. For {npm}\xspace, the confidence intervals of all survival curves overlap, suggesting that there is no difference in \minor{the time to fix a vulnerability since its first appearance} depending on the severity of the vulnerability.
A similar observation can be made for {RubyGems}\xspace, with the exception of \textit{low} severity vulnerabilities that seem to be fixed considerably faster than any of the other severity types.
It takes only 22 months to fix 50\% of all \textit{low} vulnerabilities in {RubyGems}\xspace, while this is 81, 99 and 74 months for \textit{medium}, \textit{high} and \textit{critical} vulnerabilities.
However, log-rank tests did not allow us to confirm any statistical differences between severity levels for any package distribution.
When ignoring the severity level, however, a log-rank test could confirm a statistically significant difference in fixing \minor{time (since the first affected release)} between {npm}\xspace and {RubyGems}\xspace. Vulnerabilities in {npm}\xspace have a considerably smaller fixing time. For example, half of all vulnerabilities in {npm}\xspace are fixed after 55 months, while it takes 94 months in {RubyGems}\xspace.}
\begin{figure}[!ht]
\begin{center}
\setlength{\unitlength}{1pt}
\footnotesize
\includegraphics[width=0.98\columnwidth]{survival_rq2_time.pdf}
\caption{Survival probability for event ``vulnerability is fixed'' since the first affected release. The shaded colored areas represent the confidence intervals ($\alpha=0.05$) of the survival curves.}
\label{fig:survival_rq2}
\end{center}
%
\end{figure}
\changed{The fairly long observed time between \minor{the first affected release and the fix} suggests that vulnerabilities affect many releases before they are fixed. Indeed, focusing only on vulnerabilities that have received a fix, an {npm}\xspace vulnerability affects a median of 30 package releases before it is fixed, while a {RubyGems}\xspace vulnerability affects a median of 59 releases.
The boxen plots of \fig{fig:affected_versions} show, for each severity level, how many releases of a vulnerable package are affected. We observe that the number of affected releases per vulnerable package is quite high. We also observe that for {RubyGems}\xspace, \textit{low} severity vulnerabilities affect fewer releases than other vulnerabilities. This is in line with the observations made in \fig{fig:survival_rq2}.
\begin{figure}[!ht]
\begin{center}
\setlength{\unitlength}{1pt}
\footnotesize
\includegraphics[width=0.98\columnwidth]{affected_versions.pdf}
\caption{Boxen plots of the distribution of the number of affected releases of vulnerable packages in {npm}\xspace and {RubyGems}\xspace, grouped by severity.}
\label{fig:affected_versions}
\end{center}
\end{figure}
\smallskip
\noindent\fbox{%
\parbox{0.98\textwidth}{%
%
\minor{Disclosed} vulnerabilities in {npm}\xspace~\minor{take a considerably shorter time to fix since the first affected release}. Half of all \minor{disclosed} {npm}\xspace vulnerabilities take 55 months to fix since their introduction, compared to 94 months for \minor{disclosed} {RubyGems}\xspace vulnerabilities. As a consequence, the impact of vulnerabilities is higher for {RubyGems}\xspace, affecting a median of 59 package releases compared to 30 package releases for {npm}\xspace.
}%
}
}
\smallskip
\minor{\textbf{\subsubsection*{$RQ_2^c$ Time between the vulnerability disclosure and the fix.}}}
\minor{It is important to fix vulnerabilities rapidly after their discovery, and especially after they have been publicly disclosed. If a vulnerability is publicly disclosed before a fix is available, more attackers will know about it and will be able to exploit it. Hence, when an open source vulnerability is reported to a security monitoring service, it is usually first disclosed privately in order to give the maintainers time to fix it before it is made public. For example, when {Snyk}\xspace receives a report about a vulnerable package, it informs the package maintainers and gives them 90 days to issue a remediation of the vulnerability. An extension can be granted at the maintainers' request, depending on the severity of the discovered vulnerability. To investigate whether maintainers fix their vulnerabilities within 90 days, we computed the time difference between the vulnerability fix date and the date when it was disclosed\footnote{This analysis included Malicious Package vulnerabilities.}. We found a significant proportion of vulnerabilities that exceeded this 90-day period: 17.8\% for {npm}\xspace and 10\% for {RubyGems}\xspace.
According to a survey carried out by {Snyk}\xspace in 2017, \textit{``$34\%$ of maintainers said they could respond to a security issue within a day of learning about it, and $60\%$ said they could respond within a week''~\cite{snyk2017}}. Our own quantitative observations align with this claim:
For {npm}\xspace, 36.9\% of all vulnerabilities with a known fix were fixed within a day and 54.4\% were fixed within a week, while for {RubyGems}\xspace 46.2\% were fixed within a day and 63\% were fixed within a week. Overall, 38.9\% of all vulnerabilities with a known fix were fixed within a day, and 56.3\% were fixed within a week after their disclosure.
\smallskip
\noindent\fbox{%
\parbox{0.98\textwidth}{%
%
For {npm}\xspace, 17.8\% of the fixed vulnerabilities needed more than 90 days after their disclosure to be fixed, while this proportion is 10\% for {RubyGems}\xspace. 38.9\% of all fixed vulnerabilities were fixed within a day, and 56.3\% were fixed within a week after their disclosure.
}%
}
}\subsection{\changed{$RQ_3$: To \minor{what} extent are dependents exposed to their vulnerable dependencies?}}
\label{subsec:rq3}
We have so far only studied vulnerable package releases.
\changed{$RQ_3$ focuses on packages as well as external projects that may} be \emph{exposed} to a vulnerability within their direct or indirect dependencies.
This exposure may lead to a security breach if the affected functionality of the vulnerable dependency is being used.
$RQ_3^a$ will study the exposure of \emph{\minor{the latest} package releases} to direct or indirect vulnerable dependencies, whereas
$RQ_3^b$ will study the exposure of \emph{external projects} to such vulnerable dependencies.
\subsubsection*{\textbf{\changed{$RQ_3^a$: To \minor{what} extent are packages exposed to their vulnerable dependencies?}}}
For all 842,697 of the latest package releases available in the {npm}\xspace and {RubyGems}\xspace snapshots, we determined the direct and indirect dependencies by resolving their dependency constraints.
We narrowed down the analysis to those dependencies that are referenced in the vulnerability dataset.
42.1\% of all considered {npm}\xspace{} \minor{packages} (315,315 out of 748,026) and 39\% of all considered {RubyGems}\xspace{} \minor{packages} (36,957 out of 94,671) were found to have at least one vulnerable direct or indirect dependency \minor{in their latest release}. More specifically, 15.7\% of {npm}\xspace{} \minor{latest} releases and 17.8\% of {RubyGems}\xspace{} \minor{latest} releases are directly exposed, while 36.5\% of {npm}\xspace{} releases and 27.1\% of {RubyGems}\xspace releases are indirectly exposed~\footnote{The two categories of directly and indirectly exposed package releases are non-exclusive.}. \changed{This is in line with the findings of Zimmermann~et al.\xspace~\cite{zimmermann2019small} reporting that up to 40\% of all packages depend on code with at least one publicly known vulnerability.}
\smallskip
\noindent\fbox{%
\parbox{0.98\textwidth}{%
%
More than 15\% of the \minor{(latest)} dependent package releases are exposed to vulnerable {\bf direct} dependencies.
\changed{36.5\% of {npm}\xspace and 27.1\% of {RubyGems}\xspace latest package releases are exposed to vulnerabilities coming from vulnerable {\bf indirect} dependencies.}
}%
}
\paragraph{\textbf{Vulnerable direct dependencies.}}
We found that only a small minority of direct dependencies (i.e.,\xspace package releases on which at least one other package directly depends) are vulnerable, in the sense that they contain at least one vulnerability.
Of all 3,638,361 dependencies considered for {npm}\xspace, only 154,455 (4.2\%) are vulnerable; and of all 224,959 dependencies considered for {RubyGems}\xspace, only 20,336 (9\%) are vulnerable.
\fig{fig:direct_deps} shows the distribution, per severity category, of the number of vulnerable package releases found among the direct dependencies in {npm}\xspace and {RubyGems}\xspace.
We observe that medium and high severity vulnerabilities are the most common among vulnerable dependencies.
We also observe that vulnerable dependencies in {RubyGems}\xspace tend to be more often of medium severity than in {npm}\xspace (blue boxen-plot), and less often of high severity (orange boxen-plot).
To statistically confirm these observed differences between {npm}\xspace and {RubyGems}\xspace, we carried out Mann-Whitney U tests to compare the number of vulnerabilities per severity type. The null hypothesis could be rejected for all comparisons. %
However, the effect size (shown in \tab{tab:rq3_severities}) was {\em negligible} for all comparisons, except for the category of high severity vulnerabilities where a {\em moderate} effect size was reported.
Disregarding the severity category, the effect size was {\em small} in favour of {npm}\xspace, suggesting that direct dependencies in {npm}\xspace tend to have more vulnerabilities.
\begin{figure}[!ht]
\begin{center}
\setlength{\unitlength}{1pt}
\footnotesize
\includegraphics[width=0.98\columnwidth]{direct_deps.pdf}
\caption{Boxen plots showing the distribution of the number of vulnerabilities in vulnerable {\bf direct} dependencies of the {npm}\xspace and {RubyGems}\xspace snapshots, grouped by severity.}
%
%
%
\label{fig:direct_deps}
\end{center}
\end{figure}
\begin{table}[!ht]
\centering
\caption{Mean and median number of vulnerabilities found in direct dependencies, in addition to effect sizes and their directions when comparing {npm}\xspace and {RubyGems}\xspace dependency vulnerabilities.}
\label{tab:rq3_severities}
\begin{tabular}{l|rr|rr|ccr}
\multirow{2}{*}{} & \multicolumn{2}{c|}{\bf{npm}\xspace} & \multicolumn{2}{c|}{\bf{RubyGems}\xspace} & \multirow{2}{*}{\bf direction} & \multirow{2}{*}{ $|\textbf{d}|$} & \multirow{2}{*}{\bf effect size}\\
& \bf mean & \bf median & \bf mean & \bf median & & \\
\toprule
\bf {\color{darkgreen}low\xspace} & 1.07 & 1 & 1.02 & 1 & \textgreater{} & 0.05 & negligible \\
\bf \medium & 2.55 & 1 & 2.73 & 2 & \textless{} & 0.06 & negligible \\
\bf {\color{orange}high\xspace} & 2.32 & 2 & 1.81 & 1 & \textgreater{} & 0.39 & {\bf moderate} \\
\bf {\color{darkred}critical\xspace} & 1.17 & 1 & 1.04 & 1 & \textgreater{} & 0.11 & negligible \\ \hline
\bf all & 2.22 & 1 & 1.98 & 1 & \textgreater{} & 0.16 & small
\end{tabular}
%
\end{table}
\smallskip
\noindent\fbox{%
\parbox{0.98\textwidth}{%
%
{RubyGems}\xspace has more than twice the proportion of vulnerable direct dependencies compared to {npm}\xspace (9\% vs 4.2\%).
On the other hand, direct dependencies in {npm}\xspace tend to have more vulnerabilities than direct dependencies in {RubyGems}\xspace.
}%
}
\bigskip
\paragraph{\textbf{Vulnerable indirect dependencies.}}
Releases may also be exposed \emph{indirectly} to vulnerable dependencies.
1,225,724 out of the 64,959,052 indirect dependencies in {npm}\xspace (1.9\%) are vulnerable;
whereas a much higher proportion of 65,090 out of 1,033,870 indirect dependencies in {RubyGems}\xspace (6.3\%) are vulnerable.
\fig{fig:transitive_deps} shows the distribution, per severity category, of the number of vulnerable package releases found among the indirect dependencies for {npm}\xspace and {RubyGems}\xspace. Similar to \fig{fig:direct_deps}, medium and high severity vulnerabilities are the most common. We also observe a higher number of vulnerabilities for each severity level for {npm}\xspace. %
Mann-Whitney U tests comparing the distributions between {npm}\xspace and {RubyGems}\xspace confirm a statistically significant difference.
The effect sizes for low, medium, high and critical vulnerabilities are \textit{small} to \textit{moderate} in favor of {npm}\xspace (see \tab{tab:rq3_severities_indirect}). Disregarding the severity category, the effect size is small (i.e.,\xspace $|d|=0.32$).
\begin{table}[!ht]
\centering
\caption{Mean and median number of vulnerabilities found in indirect dependencies, in addition to effect sizes and their directions when comparing {npm}\xspace and {RubyGems}\xspace dependency vulnerabilities.}
\label{tab:rq3_severities_indirect}
\begin{tabular}{l|rr|rr||ccr}
\multirow{2}{*}{} & \multicolumn{2}{c|}{\bf{npm}\xspace} & \multicolumn{2}{c||}{\bf{RubyGems}\xspace} & \multirow{2}{*}{\bf direction} & \multirow{2}{*}{ $|\textbf{d}|$} & \multirow{2}{*}{\bf effect size}\\
& \bf mean & \bf median & \bf mean & \bf median & & \\ \hline
\bf {\color{darkgreen}low\xspace} & 1.95 & 1 & 1.08 & 1 & \textgreater{} & 0.43 & {\bf moderate} \\
\bf \medium & 3.53 & 2 & 2.77 & 1 & \textgreater{} & 0.32 & small \\
\bf {\color{orange}high\xspace} & 3.41 & 2 & 2 & 2 & \textgreater{} & 0.27 & small \\
\bf {\color{darkred}critical\xspace} & 1.33 & 1 & 1.12 & 1 & \textgreater{} & 0.15 & small \\ \hline
\bf all & 3.08 & 2 & 1.94 & 1 & \textgreater{} & 0.32 & small
\end{tabular}
\end{table}
\begin{figure}[!ht]
\begin{center}
\setlength{\unitlength}{1pt}
\footnotesize
\includegraphics[width=0.98\columnwidth]{transitive_deps.pdf}
\caption{Boxen plots showing the distribution of the number of vulnerabilities in vulnerable {\bf indirect} dependencies of the {npm}\xspace and {RubyGems}\xspace snapshots, grouped by severity.}
\label{fig:transitive_deps}
%
\end{center}
\end{figure}
\smallskip
\noindent\fbox{%
\parbox{0.98\textwidth}{%
The proportion of vulnerable indirect dependencies for {RubyGems}\xspace (6.3\%) is more than three times higher than for {npm}\xspace (1.9\%).
On the other hand, for each severity, vulnerable indirect dependencies in {npm}\xspace have more vulnerabilities than in {RubyGems}\xspace.
}%
}
\bigskip
\changed{
\paragraph{\textbf{Vulnerabilities of all transitive dependencies.}}
Considering both direct and indirect dependencies, we investigate whether the number of dependency vulnerabilities is related to the date when the studied package was released. \fig{fig:evolution_package_vulns} visualises the monthly evolution of the distribution of the number of dependency vulnerabilities for all packages released during that month. In general, we observe that for both ecosystems, the number of vulnerabilities decreased over time, in the sense that more recent packages are exposed to fewer vulnerabilities coming from their dependencies than older packages. However, we notice that the number of vulnerabilities for {npm}\xspace was increasing over time until 2014 (i.e.,\xspace dashed line in red), after which it started decreasing. This coincides with the time when {npm}\xspace introduced the permissive constraint \textit{caret} ($^{\wedge}$) as default constraint for {npm}\xspace dependencies instead of the more restrictive constraint \textit{tilde} ($\sim$). Caret accepts new patch and minor releases to be installed while tilde only accepts new patches. This means that packages with permissive dependency constraints are exposed to fewer dependency vulnerabilities than those with restrictive constraints. Intuitively, permissive constraints accept wider ranges of releases than restrictive ones and thus they provide more opportunities to install dependencies with fixed vulnerabilities.
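To make the difference between the two constraints concrete, consider the following simplified sketch of their semantics (valid for versions at or above 1.0.0; the release list is hypothetical):
\begin{verbatim}
def satisfies(version, base, constraint):
    """Simplified npm semantics for versions >= 1.0.0:
    caret (^) accepts later releases within the same major,
    tilde (~) only accepts later patches within major.minor."""
    if version < base:
        return False
    if constraint == "caret":
        return version[0] == base[0]
    if constraint == "tilde":
        return version[:2] == base[:2]
    raise ValueError(constraint)

available = [(1, 2, 3), (1, 2, 9), (1, 5, 0), (2, 0, 0)]
base = (1, 2, 3)

for c in ("caret", "tilde"):
    print(c, max(v for v in available if satisfies(v, base, c)))
# caret resolves to 1.5.0 (which may already contain a fix),
# tilde stays within 1.2.x and resolves to 1.2.9
\end{verbatim}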
\begin{figure}[!ht]
\begin{center}
\setlength{\unitlength}{1pt}
\footnotesize
\includegraphics[width=0.98\columnwidth]{evolution_package_vulns.pdf}
\caption{Monthly evolution of the distribution of the number of vulnerabilities coming from transitive dependencies of all studied packages. The shaded areas correspond to the interval between the $25^{th}$ and $75^{th}$ percentile.}
\label{fig:evolution_package_vulns}
\end{center}
\end{figure}
\smallskip
\noindent\fbox{%
\parbox{0.98\textwidth}{%
%
Older packages are exposed to more vulnerabilities coming from their dependencies than recent ones. The introduction of the permissive dependency constraint caret in {npm}\xspace led to packages having fewer vulnerable transitive dependencies.
}%
}
}
\paragraph{\textbf{Exposed packages.}}
Only 849 vulnerable packages used as dependencies (667 for {npm}\xspace and 182 for {RubyGems}\xspace) are responsible for all exposed packages. This means that 60.1\% and 43.3\% of the vulnerable packages in {npm}\xspace and {RubyGems}\xspace are never used as a dependency,
as the number of vulnerable packages we originally found is 1,672 for {npm}\xspace and 321 for {RubyGems}\xspace. %
\changed{Moreover, only a small subset of the used vulnerable packages is responsible for most of the vulnerabilities found among direct and indirect dependencies. In fact, 90\% of the vulnerabilities found in {npm}\xspace dependencies come from 50 packages only. For {RubyGems}\xspace this subset is even smaller with only 20 packages responsible for 90\% of the vulnerabilities.}
Starting from a unique vulnerable package that is used as a dependency, we quantify the number of dependent packages that are directly or indirectly exposed to a vulnerability because of it. \fig{fig:exposedPackgesToDepsBoxen} shows the distribution, revealing that considerably more packages are indirectly exposed to vulnerabilities. The median number of packages that is directly exposed to one vulnerable package is 11 for {npm}\xspace and 12 for {RubyGems}\xspace, while it is about twice as high for indirect exposures (26 for {npm}\xspace and 21.5 for {RubyGems}\xspace).
We carried out Mann-Whitney U tests to verify whether the distribution of indirectly exposed packages is higher than that of directly exposed packages. The null hypothesis could only be rejected for {npm}\xspace, with a \textit{small} effect size ($|d|=0.19$).
Without distinguishing between direct or indirect dependencies, one single vulnerable package is responsible for exposing a median of 21 and a maximum of 213,851 (67.8\%) {npm}\xspace packages, and a median of 19 and a maximum of 22,233 (60.2\%) {RubyGems}\xspace packages, respectively.
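Counting exposed dependents amounts to a reverse reachability query on the dependency graph; a sketch with \texttt{networkx} (the graph below is hypothetical):
\begin{verbatim}
import networkx as nx

# An edge A -> B means "A depends on B".
g = nx.DiGraph()
g.add_edges_from([
    ("app1", "lib-a"), ("app2", "lib-a"),
    ("lib-a", "vuln-pkg"), ("app3", "vuln-pkg"),
])

exposed = nx.ancestors(g, "vuln-pkg")     # all exposed dependents
direct = set(g.predecessors("vuln-pkg"))  # directly exposed
indirect = exposed - direct

print(sorted(direct))    # ['app3', 'lib-a']
print(sorted(indirect))  # ['app1', 'app2']
\end{verbatim}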
\begin{figure}[!ht]
\begin{center}
\setlength{\unitlength}{1pt}
\footnotesize
\includegraphics[width=0.98\columnwidth]{exposedPackgesToDepsBoxen.pdf}
\caption{Distribution of the number of exposed {npm}\xspace and {RubyGems}\xspace packages affected by a given vulnerable package.}
\label{fig:exposedPackgesToDepsBoxen}
\end{center}
\end{figure}
\smallskip
\noindent\fbox{%
\parbox{0.98\textwidth}{%
%
A limited set of vulnerable packages is responsible for most of the vulnerabilities exposed through dependencies.
One single vulnerable package can be responsible for exposing two thirds of all dependent \minor{latest} package releases.%
}%
}
\subsubsection*{\textbf{\changed{$RQ_3^b$: To \minor{what} extent are external projects exposed to their vulnerable dependencies?}}}
The main purpose of package managers such as {npm}\xspace and {RubyGems}\xspace is to facilitate depending on packages.
Therefore, not only packages but also external projects can be exposed to the vulnerabilities of their dependencies.
We intuitively expect external projects to be more exposed to vulnerabilities through their dependencies than packages distributed through the package manager, as the maintainers of the latter intend their packages to be depended upon.
For the 24,593 collected external projects (see \sect{subsec:dependency}) we searched for all dependencies %
referenced in the vulnerability dataset.
We found 79\% external projects for {npm}\xspace (11,003 out of 13,930) and 74.1\% for {RubyGems}\xspace (7,901 out of 10,663) with at least one direct or indirect vulnerable dependency. More specifically, for {npm}\xspace 47\% of all external projects are directly exposed and 70.4\% are indirectly exposed; whereas for {RubyGems}\xspace 54\% of all external projects are directly exposed and 66.4\% are indirectly exposed~\footnote{The two categories of directly and indirectly exposed projects are non-exclusive.}.
\smallskip
\noindent\fbox{%
\parbox{0.98\textwidth}{%
%
\changed{
About half of all external projects (47\% for {npm}\xspace and 54\% for {RubyGems}\xspace) are exposed to vulnerabilities coming from vulnerable {\bf direct} dependencies.
About two thirds of all external projects (70.4\% for {npm}\xspace and 66.4\% for {RubyGems}\xspace) are exposed to vulnerabilities coming from vulnerable {\bf indirect} dependencies.}
%
}%
}
\paragraph{\textbf{Vulnerable direct dependencies of external projects.}}
Out of 147,622 of the direct dependencies of external projects on {npm}\xspace packages, 11,969 (8.1\%) are vulnerable.
Out of 101,079 of the direct dependencies of external projects on {RubyGems}\xspace packages, 11,034 (10.9\%) are vulnerable.
\fig{fig:direct_repos} shows the distribution of the number of vulnerabilities affecting direct dependencies of external projects.
Similar to what we observed for $RQ_3^a$, medium and high severity vulnerabilities are the most common among direct dependencies, and dependencies on {npm}\xspace packages tend to have higher numbers of vulnerabilities than dependencies on {RubyGems}\xspace packages.
\tab{tab:rq4_severities} shows the mean and median number of vulnerabilities caused by direct dependencies, grouped by severity.
We performed Mann-Whitney U tests to compare the number of vulnerabilities in project dependencies between {npm}\xspace and {RubyGems}\xspace.
We found statistically significant differences for all compared distributions, but the effect size was \textit{negligible}, with one exception:
a \textit{moderate} effect size was found for dependencies with \textit{high} severity vulnerabilities.
\begin{figure}[!ht]
\begin{center}
\setlength{\unitlength}{1pt}
\footnotesize
\includegraphics[width=0.98\columnwidth]{direct_deps_repos.pdf}
\caption{Boxen plots showing the distribution of the number of vulnerabilities found in vulnerable {\bf direct} {npm}\xspace and {RubyGems}\xspace dependencies of {GitHub}\xspace projects, grouped by severity.}
\label{fig:direct_repos}
\end{center}
\end{figure}
\begin{table}[!ht]
\centering
\caption{Mean and median number of vulnerabilities found in direct dependencies of {GitHub}\xspace projects on vulnerable {npm}\xspace and {RubyGems}\xspace packages, in addition to effect sizes and their directions.}
%
\label{tab:rq4_severities}
\begin{tabular}{l|rr|rr||ccr}
\multirow{2}{*}{} & \multicolumn{2}{c|}{\bf{npm}\xspace} & \multicolumn{2}{c||}{\bf{RubyGems}\xspace} & \multirow{2}{*}{\bf direction} & \multirow{2}{*}{ $|\textbf{d}|$} & \multirow{2}{*}{\bf effect size}\\
& \bf mean & \bf median & \bf mean & \bf median & & \\ \hline
\bf{\color{darkgreen}low\xspace} & 1.16 & 1 & 1.03 & 1 & \textgreater{} & 0.11 & negligible \\
\bf\medium & 2.96 & 2 & 2.80 & 2 & \textless{} & 0.05 & negligible \\
\bf{\color{orange}high\xspace} & 2.80 & 2 & 1.78 & 1 & \textgreater{} & 0.33 & {\bf moderate} \\
\bf{\color{darkred}critical\xspace} & 1.17 & 1 & 1.07 & 1 & \textgreater{} & 0.09 & negligible \\ \hline
\bf all & 2.60 & 2 & 2.27 & 1 & \textgreater{} & 0.06 & negligible
\end{tabular}
\end{table}
\smallskip
\noindent\fbox{%
\parbox{0.98\textwidth}{%
%
8.1\% of the direct dependencies of external projects on {npm}\xspace are vulnerable, while this is 10.9\% for {RubyGems}\xspace.
{npm}\xspace-dependent projects have more highly vulnerable direct dependencies than {RubyGems}\xspace-dependent projects.
}%
}
\paragraph{\textbf{Vulnerable indirect dependencies of external projects.}}
Out of 2,666,922 indirect dependencies of external projects on {npm}\xspace packages, 87,062 (3.2\%) are vulnerable.
Out of 443,427 indirect dependencies of external projects on {RubyGems}\xspace packages, 46,682 (10.5\%) are vulnerable.
\fig{fig:transitive_repos} shows the distribution of the number of vulnerabilities affecting indirect dependencies of external projects.
Similar to \fig{fig:direct_repos}, medium and high severity vulnerabilities are the most common.
We observe that indirect dependencies on {RubyGems}\xspace tend to be more often of medium severity than indirect dependencies on {npm}\xspace packages, while the latter tend to be more often of low severity.
Mann-Whitney U tests comparing the distributions between {npm}\xspace and {RubyGems}\xspace confirmed a statistically significant difference.
Regardless of the severity level we found a \textit{small} effect size ($|d|=0.15$) in favour of {npm}\xspace dependency vulnerabilities (see \tab{tab:rq4_severities_indirect}).
Per severity level, we only found a non-negligible effect size for \textit{critical} severity vulnerabilities in favour of {RubyGems}\xspace ($|d|=0.25$) (i.e.,\xspace {RubyGems}\xspace projects have more \textit{critical} vulnerabilities coming from indirect dependencies than {npm}\xspace projects), and for \textit{low} severity vulnerabilities in favour of {npm}\xspace ($|d|=0.71$). The \textit{negligible} effect size for \textit{medium} severity vulnerabilities is in favour of {RubyGems}\xspace ($|d|=0.1$), while for \textit{high} severity vulnerabilities it is in favour of {npm}\xspace ($|d|=0.09$).
\begin{table}[!ht]
\centering
\caption{Mean and median number of vulnerabilities found in indirect dependencies of {GitHub}\xspace projects on vulnerable {npm}\xspace and {RubyGems}\xspace packages, in addition to effect sizes and their directions.}
\label{tab:rq4_severities_indirect}
\begin{tabular}{l|rr|rr||ccr}
\multirow{2}{*}{} & \multicolumn{2}{c|}{\bf{npm}\xspace} & \multicolumn{2}{c||}{\bf{RubyGems}\xspace} & \multirow{2}{*}{\bf direction} & \multirow{2}{*}{ $|\textbf{d}|$} & \multirow{2}{*}{\bf effect size}\\
& \bf mean & \bf median & \bf mean & \bf median & & \\ \hline
\bf {\color{darkgreen}low\xspace} & 2.88 & 2 & 1.05 & 1 & \textgreater{} & 0.71 & {\bf large} \\
\bf \medium & 6.56 & 4 & 12.36 & 5 & \textless{} & 0.1 & negligible \\
\bf {\color{orange}high\xspace} & 6.2 & 4 & 4.55 & 3 & \textgreater{} & 0.09 & negligible \\
\bf {\color{darkred}critical\xspace} & 1.4 & 1 & 1.67 & 2 & \textless{} & 0.25 & small \\ \hline
\bf all & 5.16 & 3 & 5.74 & 2 & \textgreater{} & 0.15 & small
\end{tabular}
\end{table}
\begin{figure}[!ht]
\begin{center}
\setlength{\unitlength}{1pt}
\footnotesize
\includegraphics[width=0.98\columnwidth]{transitive_deps_repos.pdf}
\caption{Distribution of the number of vulnerabilities found in vulnerable {\bf indirect} {npm}\xspace and {RubyGems}\xspace dependencies of {GitHub}\xspace projects, grouped by severity.}
\label{fig:transitive_repos}
\end{center}
\end{figure}
\smallskip
\noindent\fbox{%
\parbox{0.98\textwidth}{%
Only 3.2\% of the indirect {npm}\xspace dependencies of external projects are vulnerable, while this is more than three times higher (10.5\%) for {RubyGems}\xspace.
Disregarding severities, external projects using {npm}\xspace have more vulnerabilities coming from vulnerable indirect dependencies than those using {RubyGems}\xspace.
{RubyGems}\xspace external projects have more critical vulnerabilities coming from vulnerable indirect dependencies than {npm}\xspace external projects, while the latter have more low severity vulnerabilities.%
}%
}
\bigskip
\changed{
\paragraph{\textbf{Vulnerabilities of all transitive dependencies.}}
\fig{fig:evolution_projects_vulns} visualises the monthly evolution of the distribution of the number of dependency vulnerabilities for all external projects that have their last commit during that month. Similar to \fig{fig:evolution_package_vulns}, we observe that more recently active projects are exposed to less vulnerabilities coming from their dependencies. Also, we can again see the impact of the introduction of the caret ($^{\wedge}$) dependency constraint in {npm}\xspace in 2014 (left figure).
\begin{figure}[!ht]
\begin{center}
\setlength{\unitlength}{1pt}
\footnotesize
\includegraphics[width=0.98\columnwidth]{evolution_projects_vulns.pdf}
\caption{Monthly evolution of the distribution of the number of vulnerabilities coming from transitive dependencies of all studied external projects. \minor{Time points refer to the project's last commit date, i.e.,\xspace each project is considered only once.} The shaded areas correspond to the interval between the $25^{th}$ and $75^{th}$ percentile.}
\label{fig:evolution_projects_vulns}
\end{center}
\end{figure}
\smallskip
\noindent\fbox{%
\parbox{0.98\textwidth}{%
%
More recently active external projects are exposed to \minor{fewer} vulnerabilities coming from their dependencies than older ones. The introduction of the permissive dependency constraint caret in {npm}\xspace seems to have led to less vulnerable dependencies in external projects.
}%
}
}
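To illustrate the caret constraint mentioned above, the following minimal Python sketch approximates its semantics for major versions $\geq 1$: a constraint $^{\wedge}$\texttt{X.Y.Z} allows any release with the same major version \texttt{X} that is at least \texttt{X.Y.Z} ({npm}\xspace treats \texttt{0.x} versions more restrictively). The function name and example values are illustrative only and are not part of our tooling.
\begin{verbatim}
# Illustrative approximation of npm's caret (^) constraint for
# major versions >= 1: same major version, and at least the base.
def satisfies_caret(version: str, base: str) -> bool:
    v = [int(x) for x in version.split(".")]
    b = [int(x) for x in base.split(".")]
    return v[0] == b[0] and v >= b

print(satisfies_caret("1.4.2", "1.2.0"))  # True: newer minor accepted
print(satisfies_caret("2.0.0", "1.2.0"))  # False: new major excluded
\end{verbatim}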
\paragraph{\textbf{Exposed external projects.}}
Only 560 (28\%) vulnerable packages (400 for {npm}\xspace and 160 for {RubyGems}\xspace) are responsible for all exposed external projects.
This is less than the number we found for exposed packages in $RQ_3^a$, most likely because we studied fewer external projects than (internal) packages.
Similar to \fig{fig:exposedPackgesToDepsBoxen},
we analysed how many external projects are exposed to vulnerabilities because of a single vulnerable package. \fig{fig:exposedProjectsToDepsBoxen} shows the distribution of the number of projects that are directly or indirectly exposed to one unique vulnerable package.
We observe that there are considerably more indirectly exposed projects than direct ones.
The median number of projects that one vulnerable {npm}\xspace package directly exposes is 4, while for indirect exposure it is 12.
The median number of projects that one vulnerable {RubyGems}\xspace package directly exposes is 8, while for indirect exposure it is 12.
We carried out Mann-Whitney U tests between the distributions of direct and indirect dependency vulnerabilities exposing external projects. The null hypothesis could only be rejected
in the case of the {npm}\xspace comparison, with a \textit{small} effect size ($|d|=0.27$) in favour of indirectly exposed projects.
Without distinguishing between direct or indirect dependencies, we found that one single vulnerable package is responsible for exposing a median of 8 and a maximum of 7,506 (68.2\%) projects to vulnerable {npm}\xspace dependencies; while a median of 13.5 and a maximum of 5,270 (49.4\%) projects are exposed to vulnerable {RubyGems}\xspace dependencies.
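The following minimal Python sketch illustrates the kind of statistical comparison applied throughout this section: a two-sided Mann-Whitney U test, combined with Cliff's delta $|d|$ as effect size. The sample values are illustrative and not taken from our dataset.
\begin{verbatim}
# Minimal sketch of the statistical comparison used in this paper:
# Mann-Whitney U test plus Cliff's delta |d| as effect size.
from itertools import product
from scipy.stats import mannwhitneyu

def cliffs_delta(xs, ys):
    """(#pairs with x > y - #pairs with x < y) / (|xs| * |ys|)."""
    gt = sum(1 for x, y in product(xs, ys) if x > y)
    lt = sum(1 for x, y in product(xs, ys) if x < y)
    return (gt - lt) / (len(xs) * len(ys))

direct = [1, 2, 2, 3, 5]     # toy counts via direct dependencies
indirect = [4, 6, 7, 9, 12]  # toy counts via indirect dependencies

_, p = mannwhitneyu(direct, indirect, alternative="two-sided")
print(p, cliffs_delta(direct, indirect))
\end{verbatim}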
\begin{figure}[!ht]
\begin{center}
\setlength{\unitlength}{1pt}
\footnotesize
\includegraphics[width=0.98\columnwidth]{exposedProjectsToDepsBoxen.pdf}
\caption{Distribution of the number of exposed {npm}\xspace and {RubyGems}\xspace projects that one single vulnerable package is affecting.}
\label{fig:exposedProjectsToDepsBoxen}
\end{center}
\end{figure}
\noindent\fbox{%
\parbox{0.98\textwidth}{%
%
Only 28\% of all vulnerable packages \minor{are} responsible for all vulnerabilities of exposed external projects.
\changed{One single vulnerable package can be responsible for exposing 68.2\% of all exposed projects that use {npm}\xspace, and 49.4\% of all exposed projects that use {RubyGems}\xspace.}
}%
}
\subsection{\changed{$RQ_4$: How are vulnerabilities spread in the dependency tree?}}
\label{subsec:rq4}
$RQ_3$ revealed that dependents are considerably more often indirectly exposed to vulnerabilities than directly. %
With $RQ_4$, we want to know how deep in the dependency tree we can find vulnerabilities to which packages and external projects are exposed. This allows us to quantify the transitive impact that vulnerable packages may have on their (transitive) dependents.
In an earlier study focusing on the package dependency networks of 7 different package distributions (including {npm}\xspace and {RubyGems}\xspace), \changed{Decan et al.\xspace~\cite{Decan2019}} analysed the prevalence of indirect dependencies. They observed that more than 50\% of the top-level packages\footnote{Top-level packages are packages that do not have any dependent packages themselves.} in {npm}\xspace have a dependency tree depth of at least 5, whereas for the large majority of top-level packages in {RubyGems}\xspace this was 3 or less. As a result, {npm}\xspace appears to be potentially much more subject to deep vulnerable dependencies. $RQ_4$ aims to quantify this claim, \changed{focusing on \minor{the latest} package releases in $RQ_4^a$ and external projects in $RQ_4^b$.}
\subsubsection*{\textbf{\changed{$RQ_4^a$: How are vulnerabilities spread in the dependency trees of packages?}}}
We computed the number of vulnerabilities at each depth for all vulnerable (direct or indirect) dependencies. \fig{fig:propgation_packages} shows the distribution of the number of vulnerabilities in dependencies, grouped by dependency depth.
The first level (corresponding to direct dependencies) and the second level (dependencies of dependencies) contain the highest numbers of vulnerabilities. The number of vulnerabilities decreases at deeper levels, because fewer releases have deep dependency trees.
Statistical comparisons confirmed, with non-negligible effect sizes, that more shallow levels correspond to higher numbers of vulnerabilities.
Nevertheless, vulnerabilities do remain present at the deepest levels. For example, we could find at least one vulnerable dependency at depth 16 for {npm}\xspace package \textsf{formcore} that is indirectly exposed to a vulnerability in package \textsf{kind-of}, and at depth 10 for {RubyGems}\xspace package \textsf{erp\_inventory} that is indirectly exposed to a vulnerability in package \textsf{rack}.
The vulnerable packages used as dependencies in {npm}\xspace with the highest number of vulnerabilities
are \textsf{node-sass}, \textsf{lodash} and \textsf{minimist}, while for {RubyGems}\xspace they are \textsf{nokogiri}, \textsf{activerecord} and \textsf{actionpack}. Unsurprisingly, we found the same set of vulnerable dependencies and vulnerability types reoccurring at deeper dependency levels.
The most prevalent vulnerability type for {npm}\xspace was \emph{Prototype Pollution (PP)}, while for {RubyGems}\xspace dependencies it was \emph{Denial of Service (DoS)} (see \tab{tab:vuln_names} for an overview of the most common vulnerability types).
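The depth values used here can be derived by a breadth-first traversal of the resolved dependency graph, as the following Python sketch illustrates. We assume the graph is available as an adjacency mapping from each package to its direct dependencies; the toy graph and function names are ours and not part of our tooling.
\begin{verbatim}
# Sketch: compute the depth of every (transitive) dependency,
# where depth 1 corresponds to direct dependencies.
from collections import deque

def dependency_depths(root, deps):
    depths = {}
    queue = deque((d, 1) for d in deps.get(root, []))
    while queue:
        pkg, depth = queue.popleft()
        if pkg in depths:        # BFS: the first visit is the shallowest
            continue
        depths[pkg] = depth
        queue.extend((d, depth + 1) for d in deps.get(pkg, []))
    return depths

graph = {"app": ["a", "b"], "a": ["c"], "b": ["c"], "c": []}
print(dependency_depths("app", graph))  # {'a': 1, 'b': 1, 'c': 2}
\end{verbatim}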
\begin{figure}[!ht]
\begin{center}
\setlength{\unitlength}{1pt}
\footnotesize
\includegraphics[width=0.98\columnwidth]{propgation_packages.pdf}
\caption{Distribution of the number of vulnerabilities found in all (direct and indirect) dependencies of {npm}\xspace and {RubyGems}\xspace latest package releases, grouped by dependency tree depth.}
\label{fig:propgation_packages}
\end{center}
\end{figure}
\smallskip
\noindent\fbox{%
\parbox{0.98\textwidth}{%
%
The number of dependency vulnerabilities \changed{for the latest package releases} decreases at deeper levels of the dependency tree. Yet, vulnerable dependencies continue to be found at the deepest levels. The same vulnerability types can be found at all dependency tree levels.
}%
}
\smallskip
We also studied how deep in the dependency tree a vulnerability can reach by identifying for each exposed package the maximum dependency depth where a vulnerable dependency could be found (\fig{fig:numberOfExposedPackgesMax}). %
For {npm}\xspace, the number of exposed packages
increases from dependency depth 1 (direct dependencies) to depth 4 and then starts to decrease. For {RubyGems}\xspace, the numbers decrease from depth 1 onwards. This implies that {npm}\xspace packages are more susceptible to indirect vulnerable dependencies at deeper dependency levels. \changed{This seems to be in line with the findings of Decan et al.\xspace~\cite{Decan2019} mentioned at the beginning of this research question.}
\begin{figure}[!ht]
\begin{center}
\setlength{\unitlength}{1pt}
\footnotesize
\includegraphics[width=0.98\columnwidth]{numberOfExposedPackgesMax.pdf}
\caption{Number of exposed {npm}\xspace and {RubyGems}\xspace \minor{latest} package releases and their maximum dependency depth.}
\label{fig:numberOfExposedPackgesMax}
\end{center}
\end{figure}
\smallskip
\noindent\fbox{%
\parbox{0.98\textwidth}{%
{npm}\xspace packages are more likely than {RubyGems}\xspace packages to be exposed to vulnerabilities deep in their dependency tree.
}%
}
\subsubsection*{\textbf{\changed{$RQ_4^b$: How are vulnerabilities spread in the dependency trees of external projects?}}}
To gain more insights about the prevalence of vulnerabilities at different depths in the dependency tree of external projects depending on {npm}\xspace or {RubyGems}\xspace packages, we computed the number of vulnerabilities found at each dependency depth.
\fig{fig:propgation_repos} shows the distribution of the number of vulnerabilities in dependencies, grouped by dependency depth.
From the second level onwards, the number of vulnerabilities starts to decrease with increasing levels of depth.
As for $RQ_4^a$, statistical comparisons confirmed, with non-negligible effect sizes, that more shallow levels correspond to higher numbers of vulnerabilities.
Yet, vulnerabilities continue to remain present at the deepest levels.
For example, we could find at least one external project with a vulnerable dependency at depth 14 on {npm}\xspace package \textsf{kind-of}, and one external project with a vulnerable dependency at depth 7 on {RubyGems}\xspace package \textsf{nokogiri}.
We also observe that external projects for {RubyGems}\xspace and {npm}\xspace have the same median number of dependency vulnerabilities at depths 2 and 3, while from level 4 onwards we observe the same trend as in \fig{fig:propgation_packages}. The reason for this difference at shallow depths (compared to what we saw in $RQ_4^a$) is that external projects depending on {RubyGems}\xspace tend to include more direct dependencies than regular {RubyGems}\xspace packages.
\begin{figure}[!ht]
\begin{center}
\setlength{\unitlength}{1pt}
\footnotesize
\includegraphics[width=0.98\columnwidth]{propgation_repos.pdf}
\caption{Distribution of the number of vulnerabilities found in all (direct and indirect) dependencies of {npm}\xspace and {RubyGems}\xspace {GitHub}\xspace projects, grouped by dependency depth.}
\label{fig:propgation_repos}
\end{center}
\end{figure}
\smallskip
\noindent\fbox{%
\parbox{0.98\textwidth}{%
Starting from depth 2, the number of vulnerable dependencies \changed{for external projects} decreases as the depth within the dependency tree increases.
Yet, vulnerable dependencies are still found at the deepest levels.
}%
}
\smallskip
We also analysed how far in the dependency tree a vulnerability can reach. \fig{fig:numberOfExposedProjectsMax} shows the number of exposed \changed{external} projects with the maximum depth at which we found at least one vulnerable dependency. We observe that for both {npm}\xspace and {RubyGems}\xspace, the number of exposed \changed{external} projects increases from the first level (direct dependencies) until the fourth and third levels, respectively, and then starts decreasing.
Comparing this finding with \fig{fig:numberOfExposedPackgesMax} in $RQ_4^a$, we observe that {RubyGems}\xspace projects are more susceptible to having vulnerable dependencies at deeper levels than {RubyGems}\xspace packages.
\begin{figure}[!ht]
\begin{center}
\setlength{\unitlength}{1pt}
\footnotesize
\includegraphics[width=0.98\columnwidth]{numberOfExposedProjectsMax.pdf}
\caption{Number of exposed \changed{external projects for {npm}\xspace and {RubyGems}\xspace} and their maximum dependency depth.}
\label{fig:numberOfExposedProjectsMax}
\end{center}
\end{figure}
\noindent\fbox{%
\parbox{0.98\textwidth}{%
External projects dependent on {RubyGems}\xspace are more likely to be exposed to vulnerabilities deep in their dependency tree than external projects for {npm}\xspace.
}%
}
\subsection{$RQ_5$: Do exposed dependents upgrade their vulnerable dependencies when a vulnerability fix is released?}
\label{subsec:rq5}
57.8\% of the vulnerabilities in our dataset (i.e.,\xspace 1,611 to be precise) have a known fix.
With $RQ_5$ we aim to quantify how much dependent packages and dependent external projects would benefit from updating their dependencies for which there is a known fix available. Upgrading their dependencies to more recent releases will reduce their exposure to vulnerabilities.
First, we start by exploring how many vulnerable dependencies have known vulnerability fixes. \tab{tab:rq5_fixes} shows the proportion of direct and indirect dependencies that are only affected by vulnerabilities that have a known fix. We observe that for the large majority of the vulnerable dependencies, fixes are available (more than 90\% for {npm}\xspace and more than 60\% for {RubyGems}\xspace). {npm}\xspace indirect dependencies have more fixed vulnerabilities than direct ones, while for {RubyGems}\xspace we observe the inverse. %
We also observe that fewer affected {RubyGems}\xspace dependencies have fixes available than {npm}\xspace dependencies. These results show that most of the vulnerable dependencies could be made safe if the maintainers of the dependent packages or projects would choose the appropriate non-vulnerable version of their dependencies.
\begin{table}[!ht]
\centering
\caption{Proportion of vulnerable dependencies (for packages and external projects) having a known fix.}
\label{tab:rq5_fixes}
\begin{tabular}{l|r|r||r|r}
\multirow{2}{*}{} & \multicolumn{2}{c||}{packages} & \multicolumn{2}{c}{projects} \\ \cline{2-5}
& direct & indirect & direct & indirect \\ \hline
npm & 90.0 & 96.9 & 92.5 & 97.1 \\
RubyGems & 85.2 & 66.7 & 90.0 & 76.0 \\
\end{tabular}
\end{table}
Updating a dependency is not always easy, especially when maintainers of dependent projects or packages would be confronted with breaking changes.
For example, if a \changed{package release or external project} uses a dependency that resolves to version 1.1.0 while a vulnerability fix exists in 1.2.0, then the exposure to the vulnerability can be removed by only doing a minor version update of the dependency. On the other hand, if a \changed{package release or external project} uses a dependency that resolves to version 1.1.0 while a vulnerability fix exists in 2.0.0, then one would need
to update to a new major version. If the dependency is adhering to \emph{semver}\xspace, then the second but not the first kind of dependency update would require updating the dependent's implementation according to backwards incompatible changes.
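The following simplified Python sketch illustrates this backward compatibility check; it assumes plain \emph{semver}\xspace version strings, whereas our actual analysis relies on the constraint semantics of each package manager. The function name is illustrative.
\begin{verbatim}
# Sketch: can the vulnerability be fixed by a backward compatible
# (same-major) dependency update? Assumes semver version strings.
def fix_is_backward_compatible(used: str, fixed: str) -> bool:
    return used.split(".")[0] == fixed.split(".")[0]

print(fix_is_backward_compatible("1.1.0", "1.2.0"))  # True: minor update
print(fix_is_backward_compatible("1.1.0", "2.0.0"))  # False: major update
\end{verbatim}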
We therefore verified whether it would be possible for dependent \changed{\minor{latest} package releases and external projects} to avoid vulnerabilities by only updating their vulnerable dependencies to a higher version within the same major version range that is currently in use.
For direct dependencies, 32.8\% of the vulnerabilities affecting \minor{latest} package releases and 40.9\% of those affecting external projects could be avoided by making backward compatible dependency updates. For indirect dependencies, 22.1\% of the vulnerabilities affecting \minor{latest} package releases and 50.3\% of those affecting external projects have a fix within the major version range that is currently in use.
We also found that 5.4\% of the exposed \minor{latest} package releases and 5\% of the exposed external projects could be made completely vulnerability-free by only making backward compatible dependency updates to their vulnerable direct dependencies.
\smallskip
\noindent\fbox{%
\parbox{0.98\textwidth}{%
%
Vulnerability fixes are available for the large majority of vulnerable dependencies.
Around one out of three dependency vulnerabilities to which \changed{the \minor{latest} package releases or external projects} are exposed, could be avoided if software developers would update their direct dependencies to more recent releases within the same major release range.
%
Performing backward compatible updates to vulnerable direct dependencies \changed{could} make 5.4\% of the exposed packages and 5\% of the exposed external projects completely vulnerability-free.
}%
}
\minor{
\subsection{\minor{$RQ_6$: To what extent are dependents exposed to their vulnerable dependencies at their release time?}}
\label{subsec:rq6}
$RQ_3$, $RQ_4$ and $RQ_5$ studied vulnerabilities of dependencies used in dependent packages and external projects as if they were deployed on 12 January 2020. This led us to consider all vulnerabilities already disclosed and reported in Snyk's dataset. $RQ_6$ investigates whether dependents were already incorporating dependencies with disclosed vulnerabilities at their release time. To do so, we first need to resolve dependency constraints used in package releases and external projects at the time of their release. Then, we identify dependencies affected by disclosed vulnerabilities only. For example, the latest version of the package \texttt{node-sql}~\footnote{\url{https://www.npmjs.com/package/sql}} was released in August 2017 while depending on the package \texttt{lodash}~\footnote{\url{https://www.npmjs.com/package/lodash}}. If we resolve the used version of \texttt{lodash} in August 2017, we find that it was \texttt{4.1.0}, which is affected by the vulnerability \texttt{CVE-2019-10744}~\footnote{\url{https://nvd.nist.gov/vuln/detail/cve-2019-10744}}. However, at the version release date in 2017, this vulnerability was not disclosed yet and thus the developers of \texttt{node-sql} could not do anything about it. Answering this research question will help us to assess how careful developers are when incorporating dependencies with already disclosed vulnerabilities. Therefore, we will only focus on direct dependencies. A simplified sketch of this resolution procedure is given at the end of this subsection.
\subsubsection*{\textbf{\minor{$RQ_6^a$: To what extent are packages exposed to their vulnerable dependencies at their release time?}}}
For all 842,697 of the latest package releases available in the {npm}\xspace and {RubyGems}\xspace snapshots, we determined their direct dependencies by resolving their dependency constraints at their release time. We narrowed down the analysis to those dependencies that are referenced in the vulnerability dataset.
6.8\% of all considered {npm}\xspace latest package releases (50,720 out of 748,026) and 6.2\% of all considered {RubyGems}\xspace latest package releases (5,896 out of 94,671) were found to have, at their release dates, at least one direct dependency affected by at least one vulnerability that is already disclosed.
Moreover, of all 3,638,361 direct dependencies considered for {npm}\xspace packages, only 58,184 (1.6\%) were affected by vulnerabilities disclosed before the latest package release in which the dependencies are incorporated. For {RubyGems}\xspace, of all 224,959 direct dependencies, only 6,410 (2.8\%) were affected. \tab{tab:rq6_severities} shows more details about the number of disclosed vulnerabilities found in direct dependencies.
Comparing these results to $RQ_3^a$, we can clearly see that at their creation dates, the latest package releases were exposed to fewer disclosed vulnerabilities than on 12 January 2020 (the dataset snapshot date). Moreover, more than half of the package releases that are exposed to vulnerabilities via their dependencies at the snapshot date, were not exposed to any disclosed vulnerabilities when they were created.
\begin{table}[!ht]
\centering
\caption{Mean and median number of disclosed vulnerabilities found in direct dependencies at the package release creation date, in addition to effect sizes and their directions when comparing {npm}\xspace and {RubyGems}\xspace dependency vulnerabilities.}
\label{tab:rq6_severities}
\begin{tabular}{l|rr|rr|ccr}
\multirow{2}{*}{} & \multicolumn{2}{c|}{\bf{npm}\xspace} & \multicolumn{2}{c|}{\bf{RubyGems}\xspace} & \multirow{2}{*}{\bf direction} & \multirow{2}{*}{ $|\textbf{d}|$} & \multirow{2}{*}{\bf effect size}\\
& \bf mean & \bf median & \bf mean & \bf median & & \\
\toprule
\bf {\color{darkgreen}low\xspace} & 1.04 & 1 & 1.03 & 1 & \textgreater{} & 0.01 & negligible \\
\bf \medium & 1.96 & 1 & 1.49 & 1 & \textgreater{} & 0.1 & negligible \\
\bf {\color{orange}high\xspace} & 1.72 & 1 & 1.27 & 1 & \textgreater{} & 0.2 & small \\
\bf {\color{darkred}critical\xspace} & 1.07 & 1 & 1 & 1 & \textgreater{} & 0.07 & negligible \\ \hline
\bf all & 1.76 & 1 & 1.36 & 1 & \textgreater{} & 0.1 & negligible
\end{tabular}
\end{table}
\smallskip
\noindent\fbox{%
\parbox{0.98\textwidth}{%
%
At their release dates, {RubyGems}\xspace latest package releases had proportionally more vulnerable direct dependencies than {npm}\xspace (2.8\% compared to 1.6\%).
More than half of the latest package releases that are exposed to vulnerabilities via their dependencies at the observation date, were not exposed to any disclosed vulnerabilities when they were first created.
}%
}
\subsubsection*{\textbf{\minor{$RQ_6^b$: To what extent are external projects exposed to their vulnerable dependencies at the date of their last commit?}}}
For all 24,593 external projects that make use of {npm}\xspace and {RubyGems}\xspace packages, we determined their direct dependencies by resolving their dependency constraints at the date of their last commit.
22.1\% of all considered external projects for {npm}\xspace (3,077 out of 13,930) and 33.9\% of all considered external projects for {RubyGems}\xspace (3,619 out of 10,663) were found to have, at their last commit date, at least one direct dependency affected by at least one vulnerability that is already disclosed.
Out of 147,622 of the direct dependencies of external projects on {npm}\xspace packages, only 4,600 (3.1\%) were affected by vulnerabilities disclosed before the date of the last commit. For {RubyGems}\xspace, of all 101,079 direct dependencies of external projects, only 5,264 (5.2\%) were affected. \tab{tab:rq6B_severities} shows more details about the number of disclosed vulnerabilities found in direct dependencies.
\begin{table}[!ht]
\centering
\caption{Mean and median number of disclosed vulnerabilities found in direct dependencies of {GitHub}\xspace external projects at their last commit dates, in addition to effect sizes and their directions.}
\label{tab:rq6B_severities}
\begin{tabular}{l|rr|rr|ccr}
\multirow{2}{*}{} & \multicolumn{2}{c|}{\bf{npm}\xspace} & \multicolumn{2}{c|}{\bf{RubyGems}\xspace} & \multirow{2}{*}{\bf direction} & \multirow{2}{*}{ $|\textbf{d}|$} & \multirow{2}{*}{\bf effect size}\\
& \bf mean & \bf median & \bf mean & \bf median & & \\
\toprule
\bf {\color{darkgreen}low\xspace} & 1.16 & 1 & 1.02 & 1 & \textgreater{} & 0.12 & negligible \\
\bf \medium & 2.29 & 1 & 1.65 & 1 & \textgreater{} & 0.14 & negligible \\
\bf {\color{orange}high\xspace} & 1.88 & 1 & 1.38 & 1 & \textgreater{} & 0.17 & small \\
\bf {\color{darkred}critical\xspace} & 1.13 & 1 & 1.07 & 1 & \textgreater{} & 0.04 & negligible \\ \hline
\bf all & 2.01 & 1 & 1.55 & 1 & \textgreater{} & 0.11 & negligible
\end{tabular}
\end{table}
Comparing these results to $RQ_3^b$, we observe that, at the time of their last commit, {GitHub}\xspace external projects were exposed to fewer disclosed vulnerabilities than at the dataset snapshot date (i.e.,\xspace 12 January 2020). Moreover, more than half of the {GitHub}\xspace projects that make use of {npm}\xspace packages and are exposed to vulnerabilities via their dependencies at the snapshot date, were not exposed to any disclosed vulnerabilities at the time of their last commit. This is different for {GitHub}\xspace projects that make use of {RubyGems}\xspace packages, since only one third of the projects that are exposed to vulnerable dependencies at the snapshot date were not exposed to any vulnerability at the time of their last commit.
\smallskip
\noindent\fbox{%
\parbox{0.98\textwidth}{%
%
At the time of their last commit, {GitHub}\xspace external projects that make use of {RubyGems}\xspace packages had proportionally more vulnerable direct dependencies than projects with {npm}\xspace dependencies (33.9\% compared to 22.1\%).
Half of the external projects with {npm}\xspace dependencies that are exposed to vulnerabilities at the observation date, were not exposed to any disclosed vulnerability at the time of their last commit, while this is only one third for projects with {RubyGems}\xspace dependencies.
}%
}}
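\smallskip
The two-step procedure underlying $RQ_6$ can be summarised by the following Python sketch: first resolve a dependency constraint at a reference date (selecting the highest release published by that date), then retain only the vulnerabilities disclosed before that date. All data, helper names and the simplified constraint matching below are illustrative; our actual analysis relies on the resolution logic of the respective package managers.
\begin{verbatim}
# Sketch of the RQ6 procedure (illustrative data and names).
from datetime import date

releases = {"4.0.0": date(2016, 1, 12), "4.1.0": date(2016, 1, 26),
            "4.17.12": date(2019, 7, 9)}  # version -> release date

def resolve_at(constraint_ok, when):
    """Highest release satisfying the constraint, published by `when`."""
    ok = [v for v, d in releases.items() if constraint_ok(v) and d <= when]
    return max(ok, key=lambda v: [int(x) for x in v.split(".")],
               default=None)

when = date(2017, 8, 1)  # the dependent's release (or last commit) date
used = resolve_at(lambda v: v.startswith("4."), when)  # -> "4.1.0"

vulns = [{"id": "CVE-2019-10744", "disclosed": date(2019, 7, 26)}]
already_disclosed = [v for v in vulns if v["disclosed"] < when]
print(used, already_disclosed)  # 4.1.0 [] -- not yet disclosed in 2017
\end{verbatim}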
\section{Discussion}
\label{sec:discussion}
\changed{This section discusses our findings and their implications for developers, security researchers and package managers.
It provides insights specific to {npm}\xspace and {RubyGems}\xspace, as well as some challenges that need to be overcome to better secure open source package distributions.}
We start our exposition with the vulnerable packages in each package distribution (\sect{sec:discuss-vuln-packages}), continue with the ramifications of how direct and indirect dependents are exposed to these vulnerabilities (\sect{sec:discuss-vuln-dependents}) \changed{and end with a discussion about comparing different package distributions (\sect{sec:discuss-comparing}).}
\subsection{Vulnerable packages}
\label{sec:discuss-vuln-packages}
In a comparative study of package distributions (for the observation period 2012--2017), Decan et al.\xspace~\cite{Decan2019} observed that both {npm}\xspace and {RubyGems}\xspace have an exponential increase in their number of packages. The monthly number of package updates remained more or less stable for {RubyGems}\xspace, while a clear growth could be observed for {npm}\xspace.
$RQ_0$ builds further upon that work by analysing and comparing how vulnerable packages are in each package distribution using {Snyk}\xspace's dataset of vulnerability reports.
A first observation was that {npm}\xspace has more vulnerabilities affecting more packages than {RubyGems}\xspace. We posit that this is due to the popularity of {npm}\xspace, which exposes the package distribution considerably more to attackers and attracts more security researchers. At the time of writing this article, the number of packages distributed through {npm}\xspace is an order of magnitude higher than the number distributed through {RubyGems}\xspace. \changed{In fact, we found that {npm}\xspace has more security researchers who disclose vulnerabilities than {RubyGems}\xspace. Based on the adage ``Given enough eyeballs, all bugs are shallow''~\cite{maillart2017given}, the community of package repositories should invest in organizing bug bounty programs to attract more security researchers~\cite{alexopoulos2021vulnerability} who may help to discover hidden vulnerabilities. This will also help to reduce the vulnerability disclosure gap (see $RQ_1$).}
\changed{
\begin{leftbar}
\vspace{-0.1cm}
\noindent\textit{\textbf{Challenge:} How to incite more security researchers to inspect open source packages for vulnerabilities?}
\end{leftbar}
}
Let us now focus on vulnerability reports with the vulnerability type \textit{Malicious Package}. It is the most prevalent vulnerability type for {npm}\xspace packages. %
Even though most packages suffering from this vulnerability will eventually be removed from the package distribution, previous studies~\cite{ohm2020backstabber} and experiments~\cite{dependencyConfusion} have shown that this type of vulnerability can be very dangerous.
{npm}\xspace has 410 packages corresponding to the \textit{Malicious Package} vulnerability, while {RubyGems}\xspace has a mere 19 vulnerabilities of this type.
The second most common vulnerability in {npm}\xspace is \textit{Directory Traversal}, a vulnerability allowing an attacker to access arbitrary files and directories on the application server. This vulnerability could also be classified under two other types of attacks, namely \textit{Information Exposure} and \textit{Writing Arbitrary Files}. For {RubyGems}\xspace, we found \textit{Information Exposure} as the 4th ranked vulnerability type, probably because
its contributors prefer to report \textit{Directory Traversal} under the more general category of \textit{Information Exposure} vulnerabilities.
\changed{
\begin{leftbar}
\vspace{-0.1cm}
\noindent\textit{\textbf{Recommendation:} Malicious packages are more prevalent in {npm}\xspace than {RubyGems}\xspace. {npm}\xspace users should be aware of the common strategies of malicious packages like Typosquatting~\cite{ohm2020backstabber}.}
\end{leftbar}
}
42\% (1,175) of the reported vulnerabilities in the dataset did not have any known fix.
We can exclude from them 406 unfixed \textit{Malicious Package} vulnerabilities (389 for {npm}\xspace and 17 for {RubyGems}\xspace)
that can ultimately be fixed by simply removing the package or the affected releases from the distribution.
This leaves us with 669 vulnerabilities different from the \textit{Malicious Package} type that affect all recent releases of {npm}\xspace packages in which they were discovered, and 100 such vulnerabilities for {RubyGems}\xspace packages. \minor{Users of package distributions should be aware of whether package managers incorporate and mark the vulnerable packages and their releases in their registries.}
%
Having such information in package managers will help developers and users to decide which releases they should depend upon. It will also simplify the work of tools like \textsf{Dependabot}~\footnote{\url{https://dependabot.com/}} and \textsf{Up2Dep}~\cite{nguyen2020up2dep} that try to find and fix insecure code of used dependencies. Such tools should also be aware that not all vulnerabilities are in MITRE or NVD~\footnote{\url{https://nvd.nist.gov/}} since we found that 45\% of all vulnerabilities do not have a CVE reserved for them. Many of the vulnerability threats and their information are shared through social media channels (e.g.,\xspace Reddit, Twitter, Stack Overflow, etc.)~\cite{aranovich2021beyond}. Therefore, other vulnerability databases, security advisories and issue trackers might help these tools to extend their reach to vulnerabilities that are not in NVD.
\changed{
\begin{leftbar}
\vspace{-0.1cm}
\noindent\textit{\textbf{Recommendation:} Package managers can help developers by marking vulnerable package releases in their registries.}
\end{leftbar}
}
\changed{\minor{To offer more secure packages, vulnerabilities should be discovered, disclosed and fixed rapidly.} Unfortunately, many vulnerabilities remain undisclosed for months to years, providing attackers plenty of time to search for flaws and develop exploits that can harm millions of users of these packages.
For {npm}\xspace, we observed that {\em critical} vulnerabilities were \changed{disclosed} more rapidly, while for {RubyGems}\xspace we did not observe a relation to the severity of the vulnerability. %
Software package maintainers should invest more effort in keeping their packages secure, especially those packages that are frequently used as dependencies of other packages or external projects, in order to reduce the transitive exposure to vulnerabilities.
They should be supported in this task by additional static and dynamic application security testing tools, perhaps tailored to the characteristics of packages and libraries.
For instance, there is typically no single method that can serve as the entry point for a whole-program analysis.
Moreover, the longer a vulnerability lasts, and the more releases are affected by it, the more likely it becomes that someone will unknowingly use a vulnerable release.
Registries of snapshot-based deployments such as \emph{Docker Hub}\xspace, which are gaining more and more popularity, exacerbate this problem~\cite{zerouali2021multi,zerouali2021usage} as users might unknowingly reuse images that wrap vulnerable package releases. Unfortunately, the majority of the vulnerabilities in {npm}\xspace and {RubyGems}\xspace packages are fixed within several years after their first introduction, leaving most of the package releases affected.
\minor{However, after their disclosure, vulnerabilities seem to be fixed after a short time. Only about one out of five vulnerabilities takes more than 3 months to be fixed, which is the deadline given by {Snyk}\xspace to maintainers before publishing a reported vulnerability publicly. This means that most of the maintainers are willing to fix vulnerabilities as soon as possible after disclosure.}
}
\changed{
\begin{leftbar}
\vspace{-0.1cm}
\noindent\textit{\textbf{Challenge:} A vulnerability can stay hidden in a package for years and it affects most of its previous releases before it is finally fixed. Effective static and dynamic analysis tools should emerge to support security researchers in finding vulnerabilities.}
\end{leftbar}
}
\subsection{Dependents exposed to vulnerabilities}
\label{sec:discuss-vuln-dependents}
$RQ_3^a$ investigated how packages are exposed to vulnerable dependencies. To this end, we studied the latest release of each package available in the considered snapshot of each package distribution. Since previous studies~\cite{Cox2015_freshness,zerouali2021multi}
showed that less recent releases of a package tend to be more vulnerable, we expect to find more vulnerable dependencies if we would study all package releases rather than only the latest one.
The results of $RQ_3^a$ revealed that only a small set of vulnerable packages %
is responsible for a large number of vulnerable dependencies. Moreover, $RQ_4^a$ showed that these vulnerable dependencies can often be found deep in the dependency tree, making it difficult for dependents to cope with them. \changed{Several very popular packages that are used by many dependents have been affected by vulnerabilities, leading to a vulnerability exposure in thousands of transitively dependent packages.}
\changed{
\begin{leftbar}
\vspace{-0.1cm}
\noindent\textit{\textbf{Recommendation:} Package maintainers should inspect not only direct, but also indirect dependencies, since vulnerabilities are often found deep in the dependency chain.}
\end{leftbar}
}
\changed{
\begin{leftbar}
\vspace{-0.1cm}
\noindent\textit{\textbf{Challenge:} How can the community help to secure popular packages, that are present in many dependency chains?}
\end{leftbar}
}
Another factor behind this vulnerability exposure
is that most of the reported vulnerabilities do not have a lower bound on affected releases. They are found to affect all package releases before the version in which the vulnerability was \changed{fixed},
yielding a wide range of vulnerable package releases. For example, \minor{according to Snyk,} the critical vulnerability CVE-2019-10744 affects all releases before version 4.17.12 of the popular package {\sf lodash}~\footnote{\url{https://nvd.nist.gov/vuln/detail/cve-2019-10744}}. \minor{However, maintainers of \texttt{lodash} showed vigilance since this vulnerability was fixed exactly 13 days after its disclosure. Users of package distributions therefore have an important responsibility in keeping the ecosystem in a healthy shape by keeping their own dependencies up to date.}
However, managers of package distributions \minor{can share this} responsibility by raising awareness of the importance of keeping dependencies to popular packages up to date. %
Maintainers of those popular packages \minor{can participate as well by informing} their dependents whenever vulnerabilities are discovered and fixes become available in newer versions, and by reminding maintainers of dependents that they should actually update their dependencies to those fixed releases.
\changed{
\begin{leftbar}
\vspace{-0.1cm}
\noindent\textit{\textbf{Recommendation:} \minor{Package developers should be aware that disclosed vulnerabilities frequently affect all previous releases of a package. This kind of vulnerabilities can be prioritized since all dependents of the package will be impacted.}}
\end{leftbar}
}
\changed{
\begin{leftbar}
\vspace{-0.1cm}
\noindent\textit{\textbf{Challenge:} How often should one inspect previous releases for a vulnerability that is disclosed in a recent release, and how accurate is this information in vulnerability report databases?~\cite{dashevskyi2018screening, nguyen2016automatic, meneely2013patch}}
\end{leftbar}
}
\smallskip
The analysis of $RQ_3^b$ revealed that \changed{external projects hosted on {GitHub}\xspace}, containing software that is not distributed via the package distributions, are exposed to high numbers of vulnerabilities coming from transitive dependencies caused by a small set of vulnerable packages. %
Package dependents should rely on tools like {\sf npm audit} to run security checks and be warned about such vulnerable dependencies. %
We also noticed that many {npm}\xspace vulnerabilities are coming from duplicated dependencies across the dependency tree. %
Dependents should reduce the number of duplicated {npm}\xspace dependencies --~especially vulnerable ones~-- by sharing the common dependencies using commands like {\sf npm dedupe}~\footnote{\url{https://docs.npmjs.com/cli/v7/commands/npm-dedupe}} available for {npm}\xspace. \changed{In a similar vein, dependents should check for bloated dependencies, i.e.,\xspace dependencies that are not necessary to build or run the dependent software, and remove them. This may reduce the number of vulnerable dependencies and their vulnerabilities. In fact, previous studies showed that 2.7\% and 57\% of direct and transitive dependencies of Maven libraries are bloated~\cite{soto2021comprehensive}.}
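As a simple illustration of such duplication, the following Python sketch counts packages that occur multiple times in a flattened dependency tree, which commands like {\sf npm dedupe} can collapse; the toy data is illustrative.
\begin{verbatim}
# Sketch: find duplicated packages in a flattened dependency tree
# (toy data; duplicates are what `npm dedupe` can collapse).
from collections import Counter

flattened = ["lodash@4.17.11", "minimist@1.2.0",
             "lodash@4.17.11", "lodash@4.17.11"]
dupes = {p: n for p, n in Counter(flattened).items() if n > 1}
print(dupes)  # {'lodash@4.17.11': 3}
\end{verbatim}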
\changed{
\begin{leftbar}
\vspace{-0.1cm}
\noindent\textit{\textbf{Recommendation:} {npm}\xspace dependents should reduce the relatively large number of vulnerable indirect dependencies by eliminating duplicate package releases.}
\end{leftbar}
}
\changed{We found that about 40\% of the packages and 70\% of the external projects have at least one vulnerable transitive dependency. This does not necessarily mean that the dependents are at risk and affected by the vulnerabilities of their dependencies. Many of the vulnerabilities only affect functionalities that are not actually used by the dependent~\cite{pashchenko2020vuln4real}. When fixing vulnerabilities coming from dependencies, developers should therefore distinguish between effective and ineffective functionalities used from dependencies, and prioritize the vulnerabilities that actually expose their software. However, there are many cases where a dependent does not even need to invoke the vulnerable dependency's functionality to be affected by its vulnerability, e.g.,\xspace Prototype Pollution~\footnote{\url{https://www.whitesourcesoftware.com/resources/blog/prototype-pollution-vulnerabilities/}} and Malicious Package vulnerabilities~\cite{dependencyConfusion}. For this reason, it is important to know about all vulnerabilities coming from both direct and indirect dependencies and then decide whether they are exposing the dependent software or not.
\changed{
\begin{leftbar}
\vspace{-0.1cm}
\noindent\textit{\textbf{Recommendation:} Indirect dependencies come with a high number of vulnerabilities, especially in {npm}\xspace. Dependents should reduce the number and depth of their indirect dependencies or monitor them alongside the direct ones. }
\end{leftbar}
}
Verifying direct dependencies for effective functionalities could be done by the developers of software dependents, while this could be difficult to do with transitive dependencies. Developers of package and project dependents may rely on Software Composition Analysis (SCA) tools to check transitive dependencies for vulnerabilities affecting their used dependency functionalities. They can also combine SCA tools to identify and mitigate the maximum number of vulnerabilities. In fact, previous studies have shown that SCA tools vary in their vulnerability reporting~\cite{imtiaz2021comparative}.}
\changed{
\begin{leftbar}
\vspace{-0.1cm}
\noindent\textit{\textbf{Recommendation:} Package dependents should rely on available tools to run security checks and be warned about vulnerable dependencies and their fixes. }
\end{leftbar}
}
\changed{
\begin{leftbar}
\vspace{-0.1cm}
\noindent\textit{\textbf{Challenge:} Should developers be warned about all \minor{disclosed} vulnerabilities of their dependencies or only about vulnerabilities affecting the functionalities they use?~\cite{Pashchenko2018,pashchenko2020vuln4real,Ponta2020EMSE}}.
\end{leftbar}
}
$RQ_5$ highlighted that most of the vulnerabilities exposing dependent packages and projects have a known fix, implying that \changed{by updating their vulnerable dependencies, dependents might} avoid those vulnerabilities. To keep informed about new releases that may include vulnerability fixes, dependents can rely on tools like \textsf{Dependabot}, which creates pull requests to update outdated and vulnerable direct dependencies to newer and patched releases.
It is the responsibility of package maintainers to keep their own dependencies up to date so dependent packages and projects can have secure transitive dependencies.
To do so, package dependents can rely on permissive \emph{semver}\xspace constraints~\cite{decan2019package} when specifying the dependencies to rely on in manifests like \emph{package.json} and \emph{Gemfile}. This will ensure that dependencies get updated automatically. For project maintainers, it is possible to use commands like \texttt{npm update PACKAGE --depth=DEPTH} to update their transitive dependencies and then lock their dependencies using lockfiles like \emph{package-lock.json}~\footnote{\url{https://docs.npmjs.com/cli/v7/configuring-npm/package-lock-json}} and \emph{Gemfile.lock}~\footnote{\url{https://bundler.io/rationale.html}}. \changed{Another option is to update frequently: there is a big difference between updating a dependency with a security patch that changes only a few lines of code and one that comes with several years' worth of changes~\cite{zerouali2019measurement}.} The analysis of $RQ_5$ showed that around one in three vulnerabilities \changed{might} be avoided if every dependent updated its dependencies to a newer minor or patch increment of the major release it is already using. This means that many vulnerable dependencies can be fixed just by modifying the dependency constraints to accept new minor and patch releases. On the other hand, for vulnerable dependencies for which the fix requires a major update, maintainers of dependent packages and projects should be careful as major dependency updates might introduce breaking changes. Therefore, maintainers of required packages should not only provide fixes in new major releases but should also attempt to provide these fixes in older major releases via backports\changed{~\cite{decan2021back}}. This way, even dependents that need to stick to older major releases could still benefit from the backported vulnerability fix. \changed{Perhaps popular packages with a high package centrality~\cite{mujahid2021towards} can benefit from community support in bringing such backports to older major releases.}
\changed{
\begin{leftbar}
\vspace{-0.1cm}
\noindent\textit{\textbf{Recommendation:} Packages and external {GitHub}\xspace projects should invest more efforts in updating their dependencies.}
\end{leftbar}
}
\changed{In addition, in $RQ_2$ we found a considerable proportion of vulnerabilities that have been fixed in minor and major releases. We think that package maintainers should try as much as possible to fix their vulnerabilities in patch updates or at least in backward compatible releases and follow semantic versioning. If a new package release incorporates breaking changes, then the major version number should be incremented. This will help developers to know whether they can update their vulnerable dependencies or not. Previous studies showed that developers are hesitant to update their vulnerable dependencies because they are afraid that the new releases not only include security fixes but also bundle them with functional changes. This hinders adoption due to lack of resources to fix functional breaking changes~\cite{pashchenko2020qualitative,nguyen2020up2dep}.}
\changed{
\begin{leftbar}
\vspace{-0.1cm}
\noindent\textit{\textbf{Recommendation:} Vulnerability fixing updates should not include breaking changes. If package maintainers can rely on semantic versioning, they will be able to characterize their package updates, while their dependents will be able to decide whether they want to update their dependency or not. }
\end{leftbar}
}
\changed{
\begin{leftbar}
\vspace{-0.1cm}
\noindent\textit{\textbf{Challenge:} Is it always possible to incorporate vulnerability fixes within patch updates?}
\end{leftbar}
}
\subsection{Comparing package distributions}
\label{sec:discuss-comparing}
\changed{Our analysis revealed many differences between {npm}\xspace and {RubyGems}\xspace.}
{npm}\xspace has more reported vulnerabilities than {RubyGems}\xspace, and {npm}\xspace projects exposed to vulnerabilities have more dependencies. On the other hand, {RubyGems}\xspace has higher proportions of vulnerable dependencies than {npm}\xspace. It is likely that the security efforts undertaken by {npm}\xspace~--such as its integrated dependency auditing tools--~are gradually making {npm}\xspace more secure.
\changed{Still, tooling could be improved further, as most} of the existing tooling relies on dependency metadata alone, i.e.,\xspace available security monitoring tools only rely on dependency information extracted from manifests like \textit{package.json} and \textit{Gemfile} to detect vulnerabilities.
Usage-based vulnerability detection tools~\cite{zimmermann2019small,Ponta2020EMSE} should emerge to help developers in identifying which dependency vulnerabilities can actually be exploited.
\changed{
\begin{leftbar}
\vspace{-0.1cm}
\noindent\textit{\textbf{Lesson learned:} More effort is needed to better secure open source package distributions. All parties can help including package managers, security experts, communities and developers.}
\end{leftbar}
}
\changed{However, presence or absence of tooling is by no means the only factor that influences the proportion of vulnerable packages and vulnerable dependencies in a packaging ecosystem. Decan et al.\xspace \cite{Decan2019} have shown important differences in the topological structure and evolution of package dependency networks, whereas Bogart et al.\xspace \cite{Bogart2021} have shown that each ecosystem uses different practices and policies of their communities. All these factors are likely to play a role in vulnerability management.}
While this paper focused on {npm}\xspace and {RubyGems}\xspace, Alfadel et al.\xspace~\cite{alfadelempirical} studied the {PyPI}\xspace package distribution.
Through an analysis of 550 vulnerability reports affecting 252 {Python}\xspace packages, they studied the time to \changed{disclose} and fix vulnerabilities in the {PyPI}\xspace package distribution.
There are some similarities between their results and our own observations for {npm}\xspace and {RubyGems}\xspace.
For example, the number of vulnerabilities found in {PyPI}\xspace increases over time and the majority of those vulnerabilities are of medium or high severity. The most prevalent vulnerabilities in {PyPI}\xspace are \emph{Cross-Site-Scripting (XSS)} and \emph{Denial of Service (DoS)}, which is similar to what we found for {RubyGems}\xspace. Vulnerabilities in {PyPI}\xspace are \changed{disclosed} after a median of 37 months, which is similar to what we found for {npm}\xspace.
On the other hand, vulnerabilities in {PyPI}\xspace seem to take longer to be fixed than those in {npm}\xspace and {RubyGems}\xspace. %
Since Alfadel et al.\xspace~\cite{alfadelempirical} did not study the exposure of dependent packages to vulnerabilities, we cannot compare {PyPI}\xspace to {npm}\xspace and {RubyGems}\xspace on this aspect.
\changed{
\begin{leftbar}
\vspace{-0.1cm}
\noindent\textit{\textbf{Challenge:} What are the main factors in a packaging ecosystem that play a role in its security management?}
\end{leftbar}
}
\section{Threats to Validity}
\label{sec:threats}
The empirical nature of our research exposes it to several threats to validity. We present them here, following the classification and recommendations of~\cite{Wohlin:2000}.
The main threat to \emph{construct validity} comes from imprecision or incompleteness of the data sources we used to identify vulnerabilities and their affected and exposed packages. \changed{We assumed that the Snyk vulnerability database represents a sound and complete list of vulnerability reports for third-party packages. This may have led to an underestimation since some vulnerabilities may not have been disclosed yet and are therefore missing from the database.} Another data source that we relied on is \emph{libraries.io}\xspace. While there is no guarantee that this dataset is complete (e.g.,\xspace there may be missing package releases), we did not observe any missing data during a manual inspection of the dataset. Considering the full set of packages ever released also constitutes a threat to validity since some of them could have been removed from the package registry yet are still referenced in \emph{libraries.io}\xspace. To mitigate this threat we sanitised the dataset by excluding a number of {npm}\xspace packages that were removed from {npm}\xspace. Examples are the {\sf wowdude-x}~\footnote{\url{https://libraries.io/search?q=wowdude}} and {\sf neat-x} packages~\footnote{\url{https://libraries.io/npm/neat-106}}. The only purpose of these packages was to bundle a large set of run-time dependencies.
\minor{
As explained in \sect{subsec:vulnerability}, the vulnerability severity labels are extracted from Snyk. This data source has its own way of computing the severity of vulnerabilities. Relying on another data source might have led to different vulnerability severity results. To understand how severity labels in Snyk compare to those of other sources, we extracted the severity labels for all vulnerabilities with a CVE from NVD~\footnote{\url{https://nvd.nist.gov/}}. Of the 1,487 vulnerabilities (55\% of the entire dataset) that we found with a CVE ID in $RQ_1$, only 1,227 have a severity label in NVD. 792 (64.55\%) of them have the same severity in both Snyk and NVD. The heatmap of \fig{fig:heatmap_severity} shows the proportion of vulnerabilities with different severity labels in Snyk and NVD. We observe that 21.68\% of the vulnerabilities have a higher severity in NVD than in Snyk, while 13.77\% of the vulnerabilities are more severe in Snyk than in NVD. This confirms that the findings could differ if another vulnerability data source were to be used. For example, the observed vulnerabilities exposing package and project dependents could be more severe than what is reported in this paper. Nevertheless, since many vulnerabilities are not even reported by NVD, we opted for the pragmatic choice of relying on the more complete dataset of Snyk.
We also tried to verify severity labels given by CNAs (i.e.,\xspace vulnerability reporters in NVD), but only 42 vulnerabilities contained such labels.
\begin{figure}[!ht]
\begin{center}
\setlength{\unitlength}{1pt}
\footnotesize
\includegraphics[width=0.9\columnwidth]{heatmap_severity.pdf}
\caption{Heatmap of the proportion of vulnerabilities that have the same or different severity labels in Snyk and NVD.}
\label{fig:heatmap_severity}
\end{center}
\end{figure}
}
Another threat to construct validity stems from the fact that we found some packages whose first vulnerable releases were never distributed via the package manager. %
For example, the cross-site scripting (XSS) vulnerability type~\footnote{\url{https://snyk.io/vuln/npm:wysihtml:20121229}} affects all releases before version 0.4.0 of the {npm}\xspace package \texttt{wysihtml}. Those releases, including version 0.4.0, were never actually distributed through {npm}\xspace~\footnote{\url{https://www.npmjs.com/package/wysihtml}} even though they are present on the package's {GitHub}\xspace repository~\footnote{\url{https://github.com/Voog/wysihtml/tags?after=0.4.0}}. Since our study only focused on vulnerabilities affecting package releases contained in the \emph{libraries.io}\xspace dataset, our results might underestimate the exact time needed to \changed{disclose} a vulnerability.
We noticed that many of the reported vulnerabilities affect {\em all} package releases before the version in which the vulnerability was fixed, leading to a large number of vulnerable package releases. \changed{
To ascertain that indeed all package releases before the fix are affected,} we contacted the {Snyk}\xspace security team. They confirmed that they carry out manual inspections of the affected releases before declaring the set of vulnerable releases in their vulnerability report.
\minor{However, since many vulnerabilities were not analyzed by Snyk but only copied from other security trackers, we decided to manually inspect 50 additional vulnerability reports (25 from {npm}\xspace and 25 from {RubyGems}\xspace), randomly chosen from the whole set of vulnerability reports that do not have a lower bound on the range of affected package releases. For each vulnerability, we manually checked its fixing commit and then verified whether the vulnerability was present in the initial release~\cite{ozment2006milk}. Among these 50 cases, we found 8 false positives (4 for each ecosystem), i.e.,\xspace vulnerability reports in which the initial version is claimed to be vulnerable, while it is not. This corresponds to a confidence interval of $0.81574 \pm 0.10356$ using Agresti-Coull's method~\cite{agresti1998approximate} with a confidence level of 95\%. This manual verification implies that our results overestimate the actual number of packages and package releases affected by reported vulnerabilities.}
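For reproducibility, the interval reported above can be recomputed from the 42 (out of 50) confirmed reports with the following Python sketch.
\begin{verbatim}
# Agresti-Coull interval for 42 successes out of 50 trials (95%).
from math import sqrt

def agresti_coull(successes, n, z=1.96):
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    half = z * sqrt(p_adj * (1 - p_adj) / n_adj)
    return p_adj, half

center, half = agresti_coull(50 - 8, 50)
print(f"{center:.5f} +/- {half:.5f}")  # 0.81574 +/- 0.10356
\end{verbatim}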
\changed{
To construct our dataset of vulnerabilities to study in $RQ_{1,2}$, we have removed vulnerabilities affecting inactive packages and vulnerabilities that have been disclosed before 2017-04-17. Considering all vulnerabilities of all packages (i.e.,\xspace active and inactive) in the analysis might produce different outcomes.}
As a threat to \emph{internal validity}, when studying external {GitHub}\xspace projects that are not distributed via the package managers, we only focused on those {GitHub}\xspace projects explicitly mentioned in the \emph{libraries.io}\xspace dataset. These projects were the most popular ones in terms of number of stars. While one may argue that this sample is not representative of all external projects depending on {npm}\xspace or {RubyGems}\xspace packages, the selected projects do have $90\%$ of the total number of stars attributed to all possible project candidates available in the \emph{libraries.io}\xspace dataset. We therefore consider the chosen set of {GitHub}\xspace projects to be representative for most {GitHub}\xspace projects that depend on {npm}\xspace and {RubyGems}\xspace packages.
As a threat to \emph{conclusion validity}, we used metadata to identify vulnerable dependencies. This identification approach assumes that the metadata associated with the used dependencies (e.g.,\xspace package, version) and vulnerability descriptions (e.g.,\xspace affected package, list of affected versions) are always accurate. These metadata are used to map each package onto a list of known vulnerabilities that affect it. However, dependents that rely on a vulnerable package might only access functionality of that package that is not affected by the vulnerability. Therefore, our results present an overestimation of the actual risk. Still, we believe \minor{in the importance of signalling to maintainers of exposed dependents that they are relying on vulnerable dependencies.} It will be the responsibility of those maintainers to decide whether they are actually accessing vulnerable code, and to get rid of the vulnerable dependency if this happens to be the case.
As a threat to \emph{external validity}, our findings do not generalise to other package distributions (e.g.,\xspace Maven Central, CRAN, Cargo, PyPI). However, the design of our study can easily be replicated for other package distributions that are known to recommend \emph{semver}\xspace practices.
\changed{Another threat to \emph{external validity} stems from the fact that we relied on the manifests \emph{Gemfile} and \emph{package.json} instead of the lockfiles \emph{Gemfile.lock} and \emph{package-lock.json} when extracting dependencies used in {GitHub}\xspace projects. The former are the default manifests that are always present in an {npm}\xspace or {RubyGems}\xspace project and in which dependencies with their permissive or restrictive constraints are declared, while lockfiles list the specific releases of all dependencies that should be selected to replicate the deployment of the project. We do not know whether our results would remain the same when considering the dependencies specified in lockfiles rather than the ones specified in the manifest of each package. A manifest expresses (through dependency constraints) the set of releases that could be selected when the package is installed through the package manager. The latter always selects the highest available release satisfying the constraints. As such, it may be the case that the specific releases pinned in a lockfile do not correspond to the ones that will be selected by the package manager, potentially leading to a different exposure to vulnerabilities. Although the use of a lockfile allows maintainers to explicitly select a non-vulnerable release of a package that they know to be vulnerable, the lockfile prevents them from automatically benefiting from a fix in case the selected, pinned release is vulnerable. On the other hand, nothing prevents the maintainer from excluding vulnerable releases through appropriate dependency constraints in the package manifest, while still allowing future security patches to be selected as soon as they become available.}
\section{Conclusion}
\label{sec:conclusion}
This paper quantitatively analysed and compared how security vulnerabilities are treated in {npm}\xspace and {RubyGems}\xspace,
two popular package distributions known to recommend the practice of semantic versioning.
Relying on the Snyk vulnerability database, we studied 2,786 vulnerabilities which affect 1,993 packages directly.
We observed that the number of reported vulnerabilities is increasing exponentially in {npm}\xspace, and linearly in {RubyGems}\xspace.
Most of the reported vulnerabilities were of medium or high severity.
We observed that malicious packages occur much more frequently in {npm}\xspace.%
\changed{
We analyzed the time needed to disclose and fix vulnerabilities and we found that half of the studied vulnerabilities needed more than two years to be disclosed, and more than four years to be fixed.}
Maintainers of reusable packages should therefore invest more effort in inspecting their packages for undiscovered vulnerabilities, especially if those packages have a lot of dependents that may get exposed to these vulnerabilities directly or indirectly.
Better tooling should be developed to help developers look for undiscovered vulnerabilities and fix them.
By analysing the impact of vulnerable packages on dependent packages and dependent external projects (hosted on {GitHub}\xspace),
we observed that vulnerabilities can be found deep in the packages' and projects' dependency trees.
Around one out of three packages and two out of three external projects are exposed to vulnerabilities coming from indirect dependencies. The most prevalent vulnerability type that affects {npm}\xspace dependencies is \emph{Prototype Pollution}, while for {RubyGems}\xspace dependencies it is \emph{Denial of Service}.
An important observation is that most of the vulnerabilities affecting dependencies of packages and external projects have known fixes (in the form of more recent package releases that are no longer vulnerable). Maintainers of dependents \changed{could} therefore invest more effort in checking their exposure to vulnerable dependencies, and in updating their outdated dependencies in order to reduce the number of dependency vulnerabilities.
However, the majority of the outdated and vulnerable dependencies need to be updated to a new major release to avoid the vulnerabilities.
We found that around one out of three vulnerabilities affecting direct dependencies of packages and external {GitHub}\xspace projects \changed{might} be avoided by making only backward compatible dependency updates.
\section*{Acknowledgments}
This research was partially funded by the Excellence of Science project 30446992 SECO-Assist financed by F.R.S.-FNRS and FWO-Vlaanderen, as well as FNRS Research Credit J015120 and FNRS Research Project T001718.
We express our gratitude to the security team of \emph{Snyk} for granting us permission to use their dataset of vulnerability reports for research purposes.
\balance
\section{Introduction}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{concept.pdf}
\caption{Long-tailed video recognition. General video recognition methods are overfitted on head classes, while long-tailed video recognition focuses on the performance of both head and tail classes, especially on tail classes. (Blue box is the head class region, red box is the region of medium and tail classes.)}
\label{fig:lt_video}
\end{figure}
Deep neural networks have achieved astounding success in a wide range of computer vision tasks like image classification~\cite{resnet, inception, senet, efficientnet}, object detection~\cite{yolo, ssd, fastrcnn, fasterrcnn}, \etc. Training these networks requires carefully curated datasets like ImageNet and COCO, where object classes are uniformly distributed. However, real-world data often have a long tail of categories with very few training samples, posing significant challenges for network training. This results in biased models that perform exceptionally well on head classes (categories with a large number of training samples) but poorly on tail classes that contain a limited number of samples (see Figure~\ref{fig:lt_video}).
Recently, there is growing interest in learning from long-tailed data for image tasks~\cite{decoupling,bbn,tang2020long,eql,yang2020rethinking, dbloss,ldam,cbloss,openLT}. Two popular directions to balance class distributions are re-sampling and re-weighting. Re-sampling~\cite{decoupling,bbn,chawla2002smote,han2005borderline,drumnond2003class} methods up-sample tail classes and down-sample head classes to acquire a balanced data distribution from the original data. On the other hand, re-weighting methods~\cite{focalloss, rangeloss, ldam, cbloss, eql, dbloss} focus on designing weights to balance the loss functions of head and tail classes. While extensive studies have been done for long-tailed image classification tasks, limited effort has been made for video classification.
While it is appealing to directly generalize these methods from images to videos, it is also challenging since for classification tasks, videos are usually weakly labeled---only a single label is provided for a video sequence and only a small number of frames correspond to that label. This makes it difficult to apply off-the-shelf re-weighting and re-sampling techniques since not all snippets~\footnote{We use ``snippet'' to denote a stack of frames sampled from a video clip, which are typically used as inputs for video networks.} contain informative clues---some snippets directly relate to the target class while others might consist of purely background frames. As a result, using a fixed weight/sampling strategy for all snippets to balance label distributions is problematic.
For long-tailed video recognition, we argue that balancing the distribution between head and tail classes should be performed at the frame level rather than at the video (sample) level---more frames should be sampled from videos in tail classes for training and vice versa. More importantly, frame sampling should be dynamic, based on the confidence of neural networks for different categories during the training course. This helps prevent overfitting for head classes and underfitting for tail classes.
To this end, we introduce FrameStack, a simple yet effective approach for long-tailed video classification. FrameStack\xspace operates on video features and can be plugged into state-of-the-art video recognition models with minimal surgery. More specifically, given a top-notch classification model which preserves the time dimension of input snippets, we first compute a sequence of features as inputs of FrameStack, \ie, for an input video with $T$ frames, we obtain $T$ feature representations. To mitigate the long tail problem, we define a temporal sampling ratio to select different numbers of frames from each video conditioned on the recognition performance of the model for target classes. If the network is able to offer decent performance for the category to be classified, we then use fewer frames for videos in this class. On the contrary, we select more frames from a video if the network is uncertain about its class of interest. We instantiate the ratio using the running average precision (AP) of each category computed on training data. The intuition is that AP is a dataset-wise metric, providing valuable information about the performance of the model on each category, and it is dynamic during training as a direct indicator of the progress achieved so far.
Consequently, we can adaptively under-sample classes with high AP to prevent over-fitting and up-sample those with low AP.
However, this results in samples with different time dimensions, and such variable-length inputs are not parallel-friendly for current training pipelines. Motivated by recent data-augmentation techniques which blend two samples~\cite{cutmix,manifoldmixup} as virtual inputs, FrameStack\xspace performs temporal sampling on a pair of input videos and then concatenates the re-sampled frame features to form a new feature representation, which has the same temporal dimension as its inputs. The resulting features can then be readily used for final recognition. We also adjust the corresponding labels conditioned on the temporal sampling ratio.
Moreover, we also collect a large-scale long-tailed video recognition dataset, VideoLT, which consists of 256,218 videos with an average duration of 192 seconds. These videos are manually labeled into 1,004 classes to cover a wide range of daily activities. VideoLT has 47 head classes ($\#\text{videos} > 500$), 617 medium classes ($100 < \#\text{videos} <= 500$) and 340 tail classes ($\#\text{videos} <= 100$), and thus naturally has a long tail of categories.
Our contributions are summarized as follows:
\begin{itemize}
\item We collect a new large-scale long-tailed video recognition dataset, VideoLT, which contains 256,218 videos that are manually annotated into 1,004 classes. \textit{To the best of our knowledge, this is the first ``untrimmed'' video recognition dataset which contains more than 1,000 manually defined classes.}
\item We propose FrameStack\xspace, a simple yet effective method for long-tailed video recognition. FrameStack\xspace uses a temporal sampling ratio derived from knowledge learned by networks to dynamically determine how many frames should be sampled.
\item We conduct extensive experiments using popular long-tailed methods that are designed for image classification tasks, including re-weighting, re-sampling and data augmentation. We demonstrate that the existing long-tailed image methods are not suitable for long-tailed video recognition. By contrast, our FrameStack\xspace combines a pair of videos for classification, and achieves better performance compared to alternative methods. Overall, the method in this work sets a new benchmark on long-tailed video recognition task. \textit{The dataset, code and results will be released upon publication of this work.}
\end{itemize}
\begin{figure*}
\begin{minipage}[t]{0.48\textwidth}
\centering
\includegraphics[width=9cm]{taxonomic_system.pdf}
\caption{The taxonomy structure of VideoLT. There are 13 top-level entities and 48 sub-level entities, the children of sub-level entities are sampled. Full taxonomy structure can be found in Supplementary materials.}
\label{fig:taxonomy}
\end{minipage}
\hspace{5mm}
\begin{minipage}[t]{0.48\textwidth}
\centering
\includegraphics[width=9cm]{lt_level.pdf}
\caption{Histogram of category frequency of VideoLT. There are 47 head classes, 617 medium classes and 340 tail classes. As we can see, the dominant samples are from head and medium classes.}
\label{fig:lt-dist}
\end{minipage}
\end{figure*}
\section{Related Work}
\subsection{Long-Tailed Image Recognition}
Long-tailed image recognition has been extensively studied, and there are two popular strands of methods: re-weighting and re-sampling.
\vspace{0.05in}
\noindent
\textbf{Re-Weighting} A straightforward idea of re-weighting is to use the inverse class frequency to weight loss functions in order to re-balance the contributions of each class to the final loss. However, the inverse class frequency usually results in poor performance on real-world data~\citep{cbloss}. To mitigate this issue, Cui \etal \cite{cbloss} re-weight the loss based on the effective number of samples of each class. Cao \etal \cite{ldam} propose a theoretically-principled label-distribution-aware margin loss and a new training schedule, DRW, that defers re-weighting during training. In contrast to these methods, EQL~\cite{eql} demonstrates that tail classes receive more discouraging gradients during training, and that ignoring these gradients prevents the model from being harmed by them.
For videos, re-weighting the loss is sub-optimal as snippets that are used for training contain different amount of informative clues related to the class of interest---assigning a large weight to a background snippet from tail classes will likely bring noise for training.
\vspace{0.05in}
\noindent
\textbf{Re-Sampling}. There are two popular types of re-sampling: over-sampling and under-sampling. Over-sampling~\cite{chawla2002smote,han2005borderline} typically repeats samples from tail classes, while under-sampling~\cite{drumnond2003class} abandons samples from head classes. Recently, class frequency has been used for class-balanced sampling~\cite{ca-sampling,mahajan2018exploring, bbn, decoupling}.
BBN~\cite{bbn} points out that training a model in an end-to-end manner on long-tailed data can improve the discriminate power of classifiers but damages representation learning of networks. Kang \etal~\cite{decoupling} show that it is possible to achieve strong long-tailed recognition performance by only training the classifier.
Motivated by these observations~\cite{decoupling,bbn}, we decouple feature representation and classification for long-tailed video recognition. But unlike these standard re-sampling methods, we re-sample videos by concatenating frames from different video clips.
\vspace{0.05in}
\noindent
\textbf{Mixup} Mixup~\cite{zhang2017mixup} is a popular data augmentation method that linearly interpolates two samples at the pixel level, together with their targets. Several recent methods improve mixup from different perspectives. For example, Manifold Mixup~\cite{manifoldmixup} extends mixup from the input space to the feature space. CutMix~\cite{cutmix} cuts out a salient image region and pastes it onto another image. PuzzleMix~\cite{puzzlemix} exploits salient signals without removing the local properties of the input. ReMix~\cite{remix} designs disentangled mixing factors to handle imbalanced distributions and improve performance on minority classes. Recent studies show that mixup is also powerful when dealing with the long tail problem~\cite{remix, bbn}, because it brings higher robustness and smoother decision boundaries to models and can reduce overfitting to head classes. Our approach is similar to mixup in that we also combine two videos and mix their labels. However, FrameStack\xspace operates on frame features along the temporal dimension, and, more importantly, its mixing ratio is dynamic, based on knowledge from the network model.
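For reference, feature-space mixup of a pair of clips can be sketched as follows (the tensor shapes and the Beta-distributed mixing coefficient are our assumptions):
\begin{verbatim}
# Sketch of feature-space mixup for two clips of shape (L, D):
# frames are blended element-wise and labels interpolated.
import torch

def mixup_pair(v_i, y_i, v_j, y_j, alpha=1.0):
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    v_new = lam * v_i + (1 - lam) * v_j  # frame-by-frame interpolation
    y_new = lam * y_i + (1 - lam) * y_j  # label interpolation
    return v_new, y_new
\end{verbatim}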
\subsection{General \& Long-tailed Video Recognition}
Extensive studies have been made on video recognition with deep neural networks~\cite{kinetics, qiu2017learning, feichtenhofer2019slowfast, feichtenhofer2020x3d,wang2018non,lin2019tsm,hussein2019timeception} or training methods~\cite{wu2020multigrid} for video recognition.
These approaches focus on learning better features for temporal modeling, by developing plugin modules \cite{wang2018non,lin2019tsm} or carefully designing end-to-end network structures \cite{feichtenhofer2019slowfast,feichtenhofer2020x3d}. State-of-the-art video recognition models mainly experiment with general video recognition datasets to demonstrate their capacity in modeling long-term temporal relationships \cite{wang2018non,hussein2019timeception} or capturing short-term motion dynamics \cite{kinetics, qiu2017learning,feichtenhofer2020x3d,lin2019tsm}. However, limited effort has been made for long-tailed video recognition due to the lack of proper benchmarks. Zhu and Yang \cite{zhu2020inflated} propose Inflated Episodic Memory to address long-tailed visual recognition in both image and videos. However, the approach mainly extracts features from videos without specific components designed to model the temporal information therein.
\section{VideoLT Dataset}
We now describe VideoLT, a large-scale benchmark designed for long-tailed video recognition, in detail.
In contrast to existing video datasets that focus on actions or activities~\cite{ucf101, hmdb, sports1m, activitynet, kinetics}, VideoLT is designed to be general and to cover a wide range of daily activities. We manually define a hierarchy with 13 top-level categories including: \textit{Animal}, \textit{Art}, \textit{Beauty and Fashion}, \textit{Cooking}, \textit{DIY}, \textit{Education and Tech}, \textit{Everyday life}, \textit{Housework}, \textit{Leisure and Tricks}, \textit{Music}, \textit{Nature}, \textit{Sports} and \textit{Travel}. See Figure~\ref{fig:taxonomy} for details.
For each top-level class, we used ConceptNet to find sub-level categories, and finally selected $1,004$ classes for annotation. To obtain a more diverse video dataset, we not only used the categories defined in the taxonomy but also expanded them with tags having the same semantics. We then used these tags to search and crawl videos from YouTube. For each category, duplicate videos and some very long videos were removed, and each category contains more than 80 videos. To ensure annotation quality, each video was labelled by three annotators and a majority vote was used to determine the final labels. See supplemental materials for more details.
VideoLT is split into a train set, a validation set and a test set using 70\%, 10\% and 20\% of videos, respectively. To better evaluate approaches for long-tailed recognition, we define 47 head classes ($\#\text{videos} > 500$), 617 medium classes ($100 < \#\text{videos} <= 500$) and 340 tail classes ($\#\text{videos} <= 100$). See Figure~\ref{fig:lt-dist} for details.
\vspace{0.05in}
\noindent
\textbf{Comparing with existing video datasets}
We visualize in Fig.~\ref{fig:vdata} the class frequency distribution of the train and validation sets from ActivityNet v1.3~\cite{activitynet}, Charades~\cite{charades}, Kinetics-400~\cite{kinetics}, Kinetics-600~\cite{kinetics-600}, Kinetics-700~\cite{kinetics-700}, FCVID~\cite{jiang2017exploiting}, Something-something v1~\cite{sth-sth}, AVA~\cite{ava} and VideoLT. VideoLT exhibits the best linearity in the logarithmic coordinate system, which means that its class distribution is closest to a long-tailed distribution.
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{vdata.pdf}
\end{center}
\caption{Class frequency distribution of existing video datasets and VideoLT. VideoLT has the best linearity in the logarithmic coordinate system.}
\label{fig:vdata}
\end{figure}
It is worth noting that YouTube-8M is a large-scale dataset with 3,862 classes and 6.8 million videos~\cite{abu2016youtube}. With so many categories, the dataset naturally has a long-tailed class distribution like ours. However, the classes in YouTube-8M are inferred by algorithms automatically rather than manually defined. Each video class has at least 200 samples for training, which is twice as many as ours. In addition, it does not provide head, medium and tail class splits for a better evaluation of long-tailed approaches.
\section{FrameStack\xspace}
We now introduce FrameStack\xspace, a simple yet effective approach for long-tailed video recognition. In image recognition tasks, input samples always correspond directly to their labels. For video recognition, however, snippets that do not contain informative clues, due to the weakly-labeled nature of video data, may also be sampled from video sequences for training. Popular techniques with a fixed re-sampling/re-weighting strategy for long-tailed image recognition are thus not applicable, since they will amplify the noise in background snippets when calculating losses.
To mitigate imbalanced class distributions for video tasks, FrameStack\xspace re-samples training data at the frame level and adopts a dynamic sampling strategy based on knowledge learned by the network itself. The rationale behind FrameStack\xspace is to dynamically sample more frames from videos in tail classes and use fewer frames for those from head classes.
Instead of directly sampling raw RGB frames to balance label distributions, we operate in the feature space by using state-of-the-art models that are able to preserve the temporal dimension in videos~\cite{feichtenhofer2019slowfast,lin2019tsm}~\footnote{Most top-notch recognition models do not perform temporal downsampling till the end of networks.}. This allows FrameStack\xspace to be readily used as a plugin module for popular models to address the long-tail problem in video datasets without retraining the entire network.
More formally, we represent a video sequence with $L$ frames as $\mathcal{V} = \{{\bm f}_1, {\bm f}_2, \ldots, {\bm f}_L\}$, and its labels as $\bm y$.
We then use a top-notch model (more details will be described in the experiment section) to compute features for $\mathcal{V}$, and the resulting representations are denoted as ${\bm V} = \{{\bm v}_1,{\bm v}_2,\ldots,{\bm v}_L\}$. To determine how many frames should be selected from ${\bm V}$ for training a classifier, we compute a running Average Precision (rAP) during training to evaluate the performance of the network for each category on the entire dataset. For each mini-batch of training samples, we record their predictions and the ground truth. After an epoch, we calculate the AP for each class on the training set. We refer to this metric as running AP since the parameters of the model change with every mini-batch. While it is not as accurate as the standard average precision, it provides a relative measurement of how the model performs for different classes. If the model is very confident for certain categories, as suggested by rAP, we simply use fewer frames and vice versa.
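For concreteness, a minimal sketch of the rAP bookkeeping is given below (the array shapes and the use of \texttt{scikit-learn} are our assumptions, not a prescription of the exact implementation):
\begin{verbatim}
# Sketch of running AP (rAP): accumulate per-batch predictions over one
# epoch, then score every class on the whole training set.
import numpy as np
from sklearn.metrics import average_precision_score

class RunningAP:
    def __init__(self, num_classes):
        self.num_classes = num_classes
        self.scores, self.labels = [], []

    def update(self, batch_scores, batch_labels):
        # both arguments: arrays of shape (batch_size, num_classes)
        self.scores.append(batch_scores)
        self.labels.append(batch_labels)

    def compute(self):
        y_score = np.concatenate(self.scores)
        y_true = np.concatenate(self.labels)
        rap = np.zeros(self.num_classes)
        for c in range(self.num_classes):
            if y_true[:, c].sum() > 0:  # AP is undefined without positives
                rap[c] = average_precision_score(y_true[:, c], y_score[:, c])
        self.scores, self.labels = [], []  # reset for the next epoch
        return rap
\end{verbatim}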
However, sampling a different number of frames per video creates variable-length inputs within a batch, which is not parallel-friendly for current GPU architectures. In addition, it is difficult to directly translate rAP into the number of frames to be used. To address this issue, FrameStack\xspace operates on a pair of video samples, $({\bm V}_i, {\bm y}_i), ({\bm V}_j, {\bm y}_j)$, which are randomly selected in a batch. Based on their ground-truth labels, we can obtain the corresponding rAPs, $rAP_i$ and $rAP_j$, for classes ${\bm y}_i$ and ${\bm y}_j$, respectively. We then define a temporal sampling ratio as:
\begin{equation}
\beta=\frac{rAP_i}{rAP_i+rAP_j},
\end{equation}
where $\beta$ indicates the relative performance of classes ${\bm y}_i$ and ${\bm y}_j$ by the network so far. Then the number of frames that are sampled from ${\bm V}_i$ and ${\bm V}_j$ are $L_i$ and $L_j$, respectively:
\begin{equation}
L_i = \lfloor (1-\beta)\times L \rfloor, \qquad L_j = \lfloor \beta\times L \rfloor.
\end{equation}
We then produce two new snippets $\widehat{ {\bm V}_i}$ and $\widehat{ {\bm V}_j}$ with length $L_i$ and $L_j$ derived from $ {\bm V}_i$ and $ {\bm V}_j$ respectively through uniform sampling. By concatenating $\widehat{ {\bm V}_i}$ and $\widehat{ {\bm V}_j}$, we obtain a new sample $\widetilde{\bm V}$ whose length is $L$:
\begin{equation}
\widetilde {\bm V}=\texttt{Concat}([\widehat{ {\bm V}_i} \,\,;\,\, \widehat{ {\bm V}_j}]).
\end{equation}
Now $\widetilde{{\bm V}}$ becomes a multi-label snippet containing categories of ${\bm y}_i$
and ${\bm y}_j$. We associate $\widetilde{{\bm V}}$ with a multi-label vector scaled by $\beta$:
\begin{align}
\tilde{\bm y} = (1-\beta)\times {\bm y}_i + \beta \times {\bm y}_j.
\end{align}
Then, $\widetilde {\bm V}$ and $\tilde{\bm y}$ can be used by temporal aggregation modules for classification. Note that at the beginning of the training process, recognition accuracies are quite low for all categories, and thus $\beta$ is not accurate. To remedy this, when
$(rAP_i+rAP_j) < 10^{-5}$, we set $\beta$ to 0.5 to sample half of the frames from each of the two videos. Algorithm~\ref{alg:alg} summarizes the overall training process.
It is worth pointing out that FrameStack\xspace shares a similar spirit with mixup~\cite{zhang2017mixup}, which interpolates two samples linearly as data augmentation to regularize network training. Here, instead of mixing frames, we concatenate sampled video clips with different time steps to address the long-tailed video recognition problem. As will be shown in the experiments, FrameStack\xspace outperforms mixup by clear margins in the context of video classification. FrameStack\xspace can be regarded as a class-level re-balancing strategy based on the average precision of each class; in addition, we use focal loss~\cite{focalloss}, which adjusts the binary cross-entropy based on sample predictions.
\SetKwInput{KwInput}{Input}
\SetKwInput{KwOutput}{Output}
\begin{algorithm}
\SetAlgoLined
\KwResult{Updated $rAP$ list. Updated model $f_\theta$}
\KwInput{Dataset $D=\{({\bm V}_i,{\bm y}_i)\}_{i=1}^n$. Model $f_\theta$}
Initialize $rAP=0$, $\varepsilon=10^{-5}$
\tcc{$M$: videos in a mini-batch}
\For{$e \in \{1, \dots, MaxEpoch\}$}{
\For{$({\bm V},{\bm y})\in $ $M$}
{
$({\bm V}_i,{\bm y}_i)$, $({\bm V}_j,{\bm y}_j)\leftarrow $ Sampler($D$, $M$)
\uIf{$(rAP_i+rAP_j) < \varepsilon$}{
$\beta=0.5$
}
\uElse{
$\beta=\frac{rAP_i}{rAP_i+rAP_j}$
}
$L_i \leftarrow \lfloor (1-\beta)\times L \rfloor$
$L_j \leftarrow \lfloor \beta\times L \rfloor$
\tcc{$\widehat{ {\bm V}_i}$, $\widehat{ {\bm V}_j}$: Uniformly sample $L_i$, $L_j$ frames from ${\bm V}_i$, ${\bm V}_j$}
$\widehat{ {\bm V}_i} \leftarrow Uniform({\bm V}_i\lbrack L_i\rbrack)$
$\widehat{ {\bm V}_j} \leftarrow Uniform({\bm V}_j\lbrack L_j\rbrack)$
$\widetilde{\bm V} \leftarrow Concat\lbrack\widehat{ {\bm V}_i},\widehat{ {\bm V}_j}\rbrack$
$\tilde{\bm y} \leftarrow (1-\beta)\times {\bm y}_i + \beta \times {\bm y}_j$
}
$\mathcal{L}(f_\theta) \leftarrow {\textstyle\frac1M}{\textstyle\sum_{(\widetilde{\bm V},\tilde{\bm y}) \in M}}\mathcal{L}((\widetilde{\bm V},\tilde{\bm y});f_\theta$)
$f_\theta \leftarrow f_\theta-\delta\nabla_\theta \mathcal{L}(f_\theta)$
\tcc{$rAP$: A list of running average precision for each class}
$rAP \leftarrow $ APCalculator
}
\Return{$rAP$}
\caption{Pseudo code of~FrameStack\xspace.}
\label{alg:alg}
\end{algorithm}
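As a complement to Algorithm~\ref{alg:alg}, a minimal PyTorch-style sketch of a single pairing step is given below (the tensor shapes, the uniform index selection and the guard against empty selections are our assumptions):
\begin{verbatim}
# Sketch of one FrameStack pairing; v_i, v_j are feature tensors of
# shape (L, D), y_i, y_j are multi-hot label vectors.
import torch

def framestack_pair(v_i, y_i, v_j, y_j, rap_i, rap_j, eps=1e-5):
    L = v_i.shape[0]
    beta = 0.5 if (rap_i + rap_j) < eps else rap_i / (rap_i + rap_j)
    L_i, L_j = int((1 - beta) * L), int(beta * L)
    # Uniformly sample L_i (resp. L_j) frame features from each clip;
    # max(., 1) guards against an empty selection in this sketch.
    idx_i = torch.linspace(0, L - 1, steps=max(L_i, 1)).long()
    idx_j = torch.linspace(0, L - 1, steps=max(L_j, 1)).long()
    v_new = torch.cat([v_i[idx_i], v_j[idx_j]], dim=0)  # time concat
    y_new = (1 - beta) * y_i + beta * y_j               # scaled labels
    return v_new, y_new
\end{verbatim}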
\begin{table*}[t]
\renewcommand\arraystretch{1.1}
\begin{center}
\renewcommand\tabcolsep{1.2pt}
\begin{tabular}{c|cccccc|cccccc}
\toprule
\multicolumn{1}{l|}{} & \multicolumn{6}{c|}{ResNet-50} & \multicolumn{6}{c}{ResNet-101} \\\midrule
LT-Methods & Overall & \begin{tabular}[c]{@{}c@{}}{[}500,$+\infty$)\\Head \end{tabular} & \begin{tabular}[c]{@{}c@{}} {[}100,500)\\Medium\end{tabular} & \begin{tabular}[c]{@{}c@{}}{[}0,100)\\Tail \end{tabular} & Acc@1 & Acc@5 & Overall & \begin{tabular}[c]{@{}c@{}}{[}500,$+\infty$)\\Head \end{tabular} & \begin{tabular}[c]{@{}c@{}}{[}100,500)\\Medium \end{tabular} & \begin{tabular}[c]{@{}c@{}}{[}0,100)\\Tail \end{tabular} & Acc@1 & Acc@5 \\
\midrule
baseline&0.499 & 0.675 & 0.553 & 0.376 & 0.650 & 0.828 & 0.516 & 0.687 & 0.568 & 0.396 & 0.663 & 0.837 \\
\midrule
LDAM + DRW&0.502 & 0.680 & 0.557 & 0.378 & 0.656 & 0.811 & 0.518 & 0.687 & 0.572 & 0.397 & 0.664 & 0.820 \\
EQL&0.502 & 0.679 & 0.557 & 0.378 & 0.653 & 0.829 & 0.518 & 0.690 & 0.571 & 0.398 & 0.664 & 0.838 \\
CBS&0.491 & 0.649 & 0.545 & 0.371 & 0.640 & 0.820 & 0.507 & 0.660 & 0.559 & 0.390 & 0.652 & 0.828 \\
CB Loss&0.495 & 0.653 & 0.546 & 0.381 & 0.643 & 0.823 & 0.511 & 0.665 & 0.561 & 0.398 & 0.656 & 0.832 \\
mixup&0.484 & 0.649 & 0.535 & 0.368 & 0.633 & 0.818 & 0.500 & 0.665 & 0.550 & 0.386 & 0.646 & 0.828 \\\midrule
Ours&\textbf{0.516} & \textbf{0.683} & \textbf{0.569} & \textbf{0.397} & \textbf{0.658} & \textbf{0.834}&\textbf{0.535} & \textbf{0.697} & \textbf{0.587} & \textbf{0.419} & \textbf{0.672} & \textbf{0.844} \\
\bottomrule
\end{tabular}
\vspace{3mm}
\caption{Results and comparisons of different approaches for long-tailed recognition using features extracted from ResNet-50 and ResNet-101, our method FrameStack\xspace outperforms other long-tailed methods designed for image classification by clear margin.}
\label{tab:2d-nonlinear}
\end{center}
\end{table*}
\section{Experiments}
\subsection{Settings}
\noindent
\textbf{Implementation Details.} During training, we set the initial learning rate of the Adam optimizer to 0.0001 and decrease it every 30 epochs; we train for a maximum of 100 epochs by uniformly sampling 60 frames as inputs, and the batch size is set to 128. At test time, 150 frames are uniformly sampled from the raw features. For FrameStack\xspace, we use a mix ratio $\eta$ to control how many samples are mixed in a mini-batch, and set $\eta$ to 0.5. In addition, the clip length $L$ of FrameStack\xspace is set to 60.
\vspace{0.05in}
\noindent
\textbf{Backbone networks.}
To validate the generalization of our method for long-tailed video recognition, we follow the experimental settings of Decoupling~\cite{decoupling}. We use two popular backbones pretrained on ImageNet to extract features: ResNet-50~\cite{resnet} and ResNet-101~\cite{resnet}. We also experiment with TSM~\cite{lin2019tsm} using a ResNet-50~\cite{resnet} backbone pretrained on Kinetics. We take the features from the penultimate layer of the network, resulting in features with a dimension of 2048.
We decode all videos at 1 fps, resize frames so that the shorter side is 256 pixels and take the center crop as input; these frames are uniformly sampled to construct a sequence with a length of 150.
Note that we do not finetune the networks on VideoLT due to computational limitations. Additionally, we hope FrameStack\xspace can serve as a plugin module for existing backbones with minimal surgery. Given the video features, we mainly use a non-linear classifier with two fully-connected layers to aggregate them temporally. To demonstrate the effectiveness of the features, we also experiment with NetVLAD~\cite{netvlad} for feature encoding, with 64 clusters and a hidden size of 1024.
\vspace{0.05in}
\noindent
\textbf{Evaluation Metrics.}
To better understand the performance of different methods for long-tailed distribution, we calculate mean average precision for head, medium and tail classes in addition to dataset-wise mAP, Acc@1 and Acc@5. Long-tailed video recognition requires algorithms to obtain good performance on tail classes without sacrificing overall performance, which is a new challenge for existing video recognition models.
\subsection{Results}
\noindent
\textbf{Comparisons with SOTA methods}.
We compare FrameStack\xspace with three kinds of long-tailed methods that are widely used for image recognition tasks:
\begin{itemize}
\item[$\bullet$] \textbf{Re-sampling}: We implement class-balanced sampling (CBS)~\cite{decoupling, ca-sampling}, which uses an equalized sampling strategy for data from different classes. In a mini-batch, it first picks a random class and then randomly samples a video from it, so that videos from head and tail classes share the same probability of being chosen.
\noindent
\item[$\bullet$] \textbf{Re-weighting}: It takes the sampling frequency of each class into consideration to calculate weights for the cross-entropy or binary cross-entropy loss. We conduct experiments with Class-Balanced Loss~\cite{cbloss}, LDAM Loss~\cite{ldam} and EQL~\cite{eql}.
\noindent
\item[$\bullet$] \textbf{Data augmentation}: We use the popular method mixup~\cite{zhang2017mixup} for comparison. For a fair comparison, mixup is performed in feature space, as in FrameStack\xspace. In particular, mixup blends the features of two videos in a mini-batch by taking a weighted sum of their features frame-by-frame.
\end{itemize}
Table~\ref{tab:2d-nonlinear} summarizes the results and comparisons of different approaches on VideoLT.
We observe that the performance on tail classes is significantly worse than that on head classes for all methods, using both ResNet-50 and ResNet-101 features. This highlights the challenge of long-tailed video recognition. In addition, we can see that popular long-tailed recognition algorithms for image classification tasks are not suitable for video recognition. Class-balanced sampling and class-balanced losses result in a slight performance drop compared to the baseline model without any re-weighting or re-sampling strategy; LDAM+DRW and EQL achieve comparable performance for overall and tail classes. The performance of mixup is even worse than the baseline model, possibly because mixing features makes training difficult. Instead, our approach achieves better results than these methods. In particular, when using ResNet-101 features, FrameStack\xspace achieves an overall mAP of 53.5\%, which is 1.9\% and 1.7\% better than the baseline and the best-performing image-based methods (\ie, LDAM+DRW and EQL), respectively. Furthermore, we can observe that although CB Loss brings slightly better performance on tail classes, this comes at the cost of a performance drop for overall classes. Compared to CB Loss, FrameStack\xspace significantly improves tail-class performance, by 2.1\% and 2.3\%, without sacrificing the overall results.
\begin{table}[]
\renewcommand\arraystretch{1.1}
\begin{center}
\renewcommand\tabcolsep{2.5pt}
\begin{tabular}{c|c|cccc}
\toprule
& \multicolumn{1}{c|}{ LT Methods} & Overall & \begin{tabular}[c]{@{}c@{}}{[}500,$+\infty$)\\ Head\end{tabular} & \begin{tabular}[c]{@{}c@{}} {[}100,500)\\ Medium\end{tabular} & \begin{tabular}[c]{@{}c@{}}{[}0,100)\\ Tail\end{tabular} \\\midrule\midrule
\parbox[t]{5mm}{ \multirow{7}{*}{\rotatebox[origin=c]{90}{Nonlinear Model}} } & baseline & 0.565 & 0.757 & 0.620 & 0.436 \\
& LDAM + DRW & 0.565 & 0.750 & 0.620 & 0.439 \\
& EQL & 0.567 & 0.757 & 0.623 & 0.439 \\
& CBS & 0.558 & 0.733 & 0.612 & 0.435 \\
& CB Loss & 0.563 & 0.744 & 0.616 & 0.440 \\
& Mixup & 0.548 & 0.736 & 0.602 & 0.425 \\
& Ours & \textbf{0.580} & \textbf{0.759} & \textbf{0.632} & \textbf{0.459} \\
\midrule\midrule
\parbox[t]{5mm}{ \multirow{7}{*}{\rotatebox[origin=c]{90}{NetVLAD Model}} } & baseline & 0.660 & 0.803 & 0.708 & 0.554 \\
& LDAM + DRW & 0.627 & 0.779 & 0.675 & 0.519 \\
& EQL & 0.665 & \textbf{0.808} & \textbf{0.713} & 0.557 \\
& CBS & 0.662 & 0.806 & 0.708 & 0.558 \\
& CB Loss & 0.666 & 0.801 & 0.712 & \textbf{0.566} \\
& Mixup & 0.659 & 0.800 & 0.706 & 0.556 \\
& Ours & \textbf{0.667} & 0.806 &\textbf{0.713} & \textbf{0.566} \\
\bottomrule
\end{tabular}
\vspace{3mm}
\caption{Results and comparisons using TSM (ResNet-50). Top: features aggregated using a non-linear model; Bottom: features aggregated using NetVLAD.}
\label{tab:tsm}
\end{center}
\end{table}
\vspace{0.05in}
\noindent
\textbf{Extensions with more powerful backbones.} We also experiment with a TSM model with a ResNet-50 backbone to demonstrate the compatibility of our approach with more powerful networks designed for video recognition. In addition, we use two feature aggregation methods to derive unified representations for classification. The results are summarized in Table~\ref{tab:tsm}. We observe similar trends as in Table~\ref{tab:2d-nonlinear} using a nonlinear model---FrameStack\xspace outperforms image-based long-tailed algorithms by 1.5\% and 2.3\% for overall and tail classes, respectively. In addition, we can see that features from TSM pretrained on Kinetics are better than image-pretrained features (58.0\% \vs 53.5\%). Furthermore, our approach is also compatible with more advanced feature aggregation strategies like NetVLAD. More specifically, with NetVLAD, FrameStack\xspace outperforms the baseline approach by 0.7\% and 1.2\% for overall and tail classes, respectively.
\subsection{Discussion}
We now conduct a set of studies to justify the contributions of different components in our framework and provide corresponding discussion.
\vspace{0.05in}
\noindent
\textbf{Effectiveness of AP.} Throughout the experiments, we use rAP as the metric to adjust the number of frames used in FrameStack\xspace. We now experiment with class frequency, a popular metric widely used for image-based long-tailed recognition~\cite{ca-sampling,mahajan2018exploring, bbn, decoupling}. In detail, we take the inverse frequency of each class as the weight for computing the number of frames sampled from head and tail classes, and then concatenate the two clips as in FrameStack\xspace. The results are summarized in Table~\ref{tab:ap}. We observe that resampling videos with class frequency results in a 0.5\% performance drop with the NetVLAD model. In contrast, running average precision is a better metric for resampling frames, since it is dynamically derived from the knowledge learned by the network so far. As a result, it changes sampling rates based on the performance on particular classes, which prevents overfitting for top-performing classes and avoids under-fitting for under-performing categories at the same time. As aforementioned, treating weakly-labeled videos as images and then resampling/reweighting them using class frequency can be problematic because some snippets might consist of purely background frames.
\begin{table}[htp]
\renewcommand\arraystretch{1.0}
\begin{center}
\renewcommand\tabcolsep{2.5pt}
\begin{tabular}{c|c|cccc}
\toprule
Model & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}CB\\ Strategy\end{tabular}} & \multicolumn{1}{c}{Overall} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}{[}500,$+\infty$)\\ Head\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}{[}100,500)\\ Medium\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}{[}0,100)\\ Tail\end{tabular}} \\\midrule\midrule
& - & 0.516 & 0.687 & 0.568 & 0.396 \\
Nonlinear & w/ CF & 0.520 & 0.680 & 0.571 & 0.405 \\
& w/ rAP & \textbf{0.535} & \textbf{0.697} & \textbf{0.587} & \textbf{0.419} \\\midrule
& - & 0.668 & 0.775 & \textbf{0.707} & 0.584 \\
NetVLAD & w/ CF & 0.663 & 0.767 & 0.699 & 0.584 \\
& w/ rAP &\textbf{0.670} & \textbf{0.780} & \textbf{0.707} & \textbf{0.590} \\\bottomrule
\end{tabular}
\vspace{3mm}
\caption{Results and comparisons of using running AP and class frequency (CF) to determine how many frames should be used from video clips.}
\label{tab:ap}
\end{center}
\end{table}
\begin{table*}[htp]
\renewcommand\arraystretch{1.2}
\begin{center}
\renewcommand\tabcolsep{2.5pt}
\resizebox{1.0\linewidth}{!}{
\begin{tabular}{c|c|cccccc|cccccc}
\toprule
& & \multicolumn{6}{c}{TSM (ResNet-50)} & \multicolumn{6}{|c}{ResNet-101} \\\midrule
& LT-Methods & Overall & \begin{tabular}[c]{@{}c@{}}{[}500,$+\infty$)\\ Head\end{tabular} & \begin{tabular}[c]{@{}c@{}}{[}100,500)\\ Medium\end{tabular} & \begin{tabular}[c]{@{}c@{}}{[}0,100)\\ Tail\end{tabular} & Acc@1 & Acc@5 & Overall & \begin{tabular}[c]{@{}c@{}}{[}500,$+\infty$)\\ Head\end{tabular} & \begin{tabular}[c]{@{}c@{}}{[}100,500)\\ Medium\end{tabular} & \begin{tabular}[c]{@{}c@{}}{[}0,100)\\ Tail\end{tabular} & Acc@1 & Acc@5 \\\midrule\midrule
\parbox[t]{5mm}{ \multirow{3}{*}{\rotatebox[origin=c]{90}{NonLinear}} } & baseline & 0.565 & 0.757 & 0.620 & 0.436 & 0.680 & 0.851 & 0.516 & 0.687 & 0.568 & 0.396 & 0.663 & 0.837 \\
& FrameStack-BCE & 0.568 & 0.751 & 0.622 & 0.445 & 0.679 & 0.855 & 0.521 & 0.684 & 0.571 & 0.406 & 0.660 & 0.839 \\
& FrameStack-FL & \textbf{0.580} & \textbf{0.759} & \textbf{0.632} &\textbf{0.459} & \textbf{0.686} &\textbf{0.859} & \textbf{0.535} & \textbf{0.697} & \textbf{0.587} & \textbf{0.419} & \textbf{0.672} &\textbf{0.844} \\\midrule\midrule
\parbox[t]{5mm}{ \multirow{3}{*}{\rotatebox[origin=c]{90}{NetVLAD}} } & baseline & 0.660 & 0.803 & 0.708 & 0.554 & 0.695 & 0.870 & 0.668 & 0.775 & 0.707 & 0.584 & 0.700 & \textbf{0.864} \\
& FrameStack-BCE & \textbf{0.669} & \textbf{0.807} & \textbf{0.715} & \textbf{0.568} & \textbf{0.711} &\textbf{0.872} & \textbf{0.671} & \textbf{0.781} & 0.707 & 0.589 & 0.709 & 0.858 \\
& FrameStack-FL & 0.667 & 0.806 & 0.713 & 0.566 & 0.708 & 0.866 & 0.670 & 0.780 & 0.707 & \textbf{0.590} & \textbf{0.710} & 0.858 \\
\bottomrule
\end{tabular}}
\vspace{3mm}
\caption{Results of our approach using different loss functions and comparisons with baselines. FrameStack\xspace is complemented with focal loss on Nonlinear model and TSM (ResNet-50), ResNet-101 features.}
\label{tab:focal}
\end{center}
\end{table*}
\begin{table}[htp]
\renewcommand\arraystretch{1.0}
\begin{center}
\renewcommand\tabcolsep{1pt}
\begin{tabular}{p{0.7cm}|cccccc}
\toprule
$\eta$ & Overall & \begin{tabular}[c]{@{}c@{}}{[}500,$+\infty$)\\ Head\end{tabular} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}{[}100,500)\\ Medium\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}{[}0,100)\\ Tail\end{tabular}} & Acc@1 & Acc@5 \\\midrule\midrule
0 & 0.668 & 0.775 & 0.707 & 0.584 & 0.700 & \textbf{0.864} \\
0.3 & 0.667 & 0.780 & 0.707 & 0.586 & 0.710 & 0.860 \\
0.5 & \textbf{0.670} & \textbf{0.780} & 0.707 &\textbf{0.590}& 0.710 & 0.858 \\
0.7 & 0.669 & 0.780 & 0.707 & 0.585 & 0.709 & 0.860 \\
0.9 & 0.668 & 0.774 & 0.704 & 0.588 & 0.706 & 0.859 \\\bottomrule
\end{tabular}
\vspace{3mm}
\caption{Effectiveness of mix ratio $\eta$, test results are based on ResNet-101 feature and NetVLAD Model.}
\label{tab:ratio}
\end{center}
\end{table}
\vspace{0.05in}
\noindent
\textbf{Effectiveness of loss functions.} As mentioned above, our approach resamples data from different classes and is trained with focal loss. We now investigate the performance of our approach with different loss functions; the results are summarized in Table~\ref{tab:focal}. We observe that FrameStack\xspace is compatible with both loss functions, outperforming the baseline model without any resampling/reweighting strategy. For the nonlinear model, FrameStack\xspace with focal loss achieves better performance, while for NetVLAD, FrameStack\xspace with the binary cross-entropy (BCE) loss is slightly better.
\vspace{0.05in}
\noindent
\textbf{Effectiveness of mixing ratio.}
We also investigate how the fraction of samples mixed in a mini-batch, determined by the mixing ratio $\eta$, influences performance. From Table~\ref{tab:ratio} we find that as $\eta$ increases, the performance on overall and tail classes first increases and then decreases---$\eta=0.5$ reaches the highest performance and is adopted in FrameStack\xspace. This suggests that mixing all data in an epoch makes training more difficult.
\vspace{0.05in}
\noindent
\textbf{FrameStack \vs Mixup.}
We compare the performance of FrameStack\xspace and mixup on all classes and on tail classes. Specifically, we compute the difference in average precision between FrameStack\xspace and mixup for each class. In Figure~\ref{fig:compare-all}, we visualize the top 10 of the 1,004 classes in which FrameStack\xspace outperforms mixup and find that 40\% of them are action classes. Comparing Figure~\ref{fig:compare-all} and Figure~\ref{fig:compare-tail}, we observe that 80\% of the top 10 classes are tail classes, which shows that FrameStack\xspace is more effective than mixup, especially at recognizing tail classes.
\begin{figure}[]
\begin{center}
\includegraphics[width=\linewidth]{all.pdf}
\end{center}
\caption{Top 10 among 1,004 classes in which FrameStack\xspace surpasses mixup; 40\% of them are action classes.}
\label{fig:compare-all}
\end{figure}
\begin{figure}[]
\begin{center}
\includegraphics[width=\linewidth]{tail.pdf}
\end{center}
\caption{Top 10 among 340 tail classes in which FrameStack\xspace surpasses mixup. Compared with Figure~\ref{fig:compare-all}, FrameStack\xspace achieves better performance mainly on tail classes.}
\label{fig:compare-tail}
\end{figure}
\section{Conclusion}
This paper introduced a large-scale long-tailed video dataset---VideoLT---with the aim of advancing research in long-tailed video recognition. Long-tailed video recognition is a challenging task because videos are usually weakly labeled. Experimental results show that existing long-tailed methods such as re-sampling, re-weighting and mixup, which achieve impressive performance on image tasks, are not suitable for videos. We presented FrameStack\xspace, which performs sampling at the frame level by using running AP as a dynamic measurement. FrameStack\xspace adaptively selects different numbers of frames from different classes. Extensive experiments with different backbones and aggregation models show that FrameStack\xspace outperforms all competitors and brings clear performance gains on both overall and tail classes. Future directions include leveraging weakly-supervised learning~\cite{wang2017untrimmednets, nguyen2018weakly} and self-supervised learning~\cite{han2020self, Xu_2019_CVPR} methods to solve long-tailed video recognition.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
Considering online platforms, most use cases of generative modeling are motivated by the need to capture user behavior. When answering questions such as ``How long do we need to wait for the next user's visit?'' or ``What is the optimal time to recommend winter coats?'', we form prediction tasks whose goal is to predict the times and marks of subsequent events. With the rapid growth of online platforms, it is crucial to estimate how their users will behave in the near future.
Commonly referred to as Buy-till-You-Die models, Pareto/NBD \cite{10.5555/2780513.2780514}, BG/NBD \cite{10.5555/2882600.2882608}, and other similar generative models \cite{jasek_comparative_2019} might be too simplistic and lack the flexibility to model the heterogeneity of customer behavior. Nonetheless, the generative approach towards customer behavior modeling described by \cite{FADER200961} might be useful, albeit in a different form.
For example, in the Pareto/NBD model, the number of purchases that the $i$-th customer makes in their lifetime is described by a Poisson distribution parameterized by $\lambda_i$. Given the inferred parameters of the Poisson and Exponential distributions representing a customer's purchase count and lifetime, one can estimate the probability of a customer being ``alive.'' Buy-till-You-Die models assume a fixed parameter $\lambda$ that does not depend on time, so the models are static. A customer who had a positive experience with a service is more likely to come back and keep returning if the experience continues to be positive, and vice versa; these models do not capture such behavior.
An alternative is to use a self-exciting point process such as the Hawkes process, in which the $\lambda$ parameter is a function of time, positively influenced by past events. Lately, a new class of temporal point processes (TPPs) has emerged: neural temporal point processes (NTPPs) \cite{NIPS2017_6463c884, 1907.07561, 1806.07366}. NTPPs build on the self-exciting nature of the Hawkes process but are more versatile and can capture more diverse user behavior. In this work, we identify open research opportunities in applying neural temporal point process models to industry-scale customer behavior data.
We explore and compare different approaches to NTPPs: Neural Ordinary Differential Equation \cite{1806.07366} and attention \cite{1706.03762} based temporal point processes \cite{1907.07561, 2007.13794}. We apply these models to various synthetic and real-life datasets. To further test these models, we introduce an online second-hand marketplace user behavior dataset. The dataset consists of 4,886,657 events and is 10 times larger than the commonly used Stack Overflow dataset \cite{10.1145/2939672.2939875, 1905.10403, 2007.13794, 10.5555/3295222.3295420}. We use the selected models to describe users' actions in the marketplace and explore the benefits of parameterizing self-attention based NTPP models with static features. Finally, we discuss the explored models' scalability and their ability to capture rare events.
This paper is organized as follows. In Section 2, we explore different ways temporal point processes are used in industry. In Section 3, we give a formal overview of traditional and neural temporal point processes. In Section 4, we describe the data and our experimental approach. Section 5 reports the main results, and the final section concludes.
\section{Related work}
Multivariate point processes like FastPoint \cite{10.1007/978-3-030-46147-8_28} show that strong results can be achieved by combining neural networks with traditional generative temporal point process models. The approach scales to millions of correlated marks with superior predictive accuracy by fusing deep recurrent neural networks with Hawkes processes. Combining traditional temporal point process models with neural networks enables the industry to solve diverse problems: from predicting when customers will leave an online platform to simulating A/B tests.
\paragraph{Churn prediction}
Customer churn prediction is one of the most crucial problems in subscription and non-subscription-based online platforms. Traditionally, churn prediction is handled as a supervised learning problem \cite{1604.05377, 2201.02463, 1909.11114} in which the goal is to predict whether a user will leave the platform after a fixed time period. However, platform usage habits differ from one user to another, and a decrease in activity is not always a sign of churn. \cite{1909.06868} is one of the solutions that tackle churn prediction via generative modeling. It interprets user-generated events within the time scope of sessions and proposes using user return times and session durations to predict churn.
\paragraph{Time-Sensitive Recommendations}
When considering item recommendations for a user, we think not only about the optimal suggestion but also about the time of year at which to make it. \cite{NIPS2015_136f9513} introduces a novel convex formulation of the problem by establishing an under-explored connection between self-exciting point processes and low-rank models. The proposed solution offers time-sensitive recommendations and predictions of users' return times, which can be used to detect the previously discussed churn or to optimize marketing strategies. The item recommendation part is accomplished by calculating an intensity $\lambda_{u,i}$ for each item $i$ and user $u$. Afterwards, items are sorted in descending order of the calculated intensity, and the top-$k$ items are returned.
\paragraph{Interactive Systems Simulation}
Modern systems commonly use numerous machine learning models interacting with each other. As a result, user experience is often defined by various machine learning systems layered iteratively atop each other. The previously discussed item recommendation problem is one example of such a complex system \cite{10.1145/2843948}. When there is a need to change one or multiple models in the recommendation system, one might ask what effect a particular change would have on overall system performance. Usually, A/B testing answers these types of questions. However, when different system modules change rapidly and systems become more complex, it becomes more practical to simulate the effect than to test it using traditional methods. \cite{10.1145/3460231.3474259} presents one of many ways to simulate interactive systems. The paper proposes Accordion, a fully trainable simulator for interactive systems based on inhomogeneous Poisson processes. By combining multiple intensity functions, Accordion enables comparisons between realistic simulation settings and their effects on the total number of visits, positive interactions, impressions, and any other empirical quantity derived from a sampled dataset.
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{media/network_structure.png}
\caption{Architecture of parameterized Self Attention encoder.}
\Description{Architecture of parameterized Self Attention encoder.}
\label{fig:network_structure}
\end{figure}
\section{Methods}
The basis of the generative user behavior modeling methods we explore is the Poisson process: a continuous-time version of a Bernoulli process in which event arrivals can happen at any point in time \cite{bertsekas2000introduction}. The homogeneous Poisson process describes the intensity of event arrivals as $\lambda = \lim_{\delta \rightarrow 0} \frac{P(1, \delta)}{\delta}$, where $P(1, \delta)$ is the probability of observing exactly one event in an interval of length $\delta$. To model events whose rate depends on time, we can use the non-homogeneous version of the Poisson process: instead of a static intensity $\lambda$, we use a function of time, $\lambda = \lambda(t)$ \cite{pishro2014introduction}.
To capture more complex behavior, for example in item recommendation, where one successful transaction often encourages customers to buy more items, self-exciting processes are used. A self-exciting process is a point process in which the arrival of an event causes the conditional intensity function to increase. One of the best-known self-exciting processes is the Hawkes process \cite{10.1093/biomet/58.1.83}. The key feature of the process is a conditional intensity function influenced by past events, $\lambda(t|\mathcal{H}_{t})=\lambda_{0}(t) + \sum_{j:T_{j}<t}\phi (t - T_{j})$, where $\mathcal{H}_{t}$ is the history of the events that happened before time $t$. The Hawkes intensity has two major parts: the base intensity function $\lambda_{0}$ and a summation of \textit{kernels} $\phi(t - T_{j})$. The base intensity function $\lambda_{0}$ keeps the independence property of the Poisson process and is not influenced by past events. The influence of past arrivals is defined by the kernel function $\phi(\cdot)$, which usually takes the form of an exponential function $\phi(x) = \alpha e^{-\beta x}$, where $\alpha \geq 0$, $\beta > 0$ and $\alpha < \beta$. The kernel function ensures that more recent events have a greater influence on the current intensity than events further in the past.
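To make the self-excitation concrete, a small sketch of this conditional intensity with an exponential kernel is given below (the parameter values are illustrative only):
\begin{verbatim}
# Conditional intensity of a Hawkes process with exponential kernel:
# lambda(t) = mu + sum_{T_j < t} alpha * exp(-beta * (t - T_j)).
import numpy as np

def hawkes_intensity(t, history, mu=0.2, alpha=0.8, beta=1.0):
    past = np.asarray([tj for tj in history if tj < t])
    return mu + np.sum(alpha * np.exp(-beta * (t - past)))

# Each arrival raises the intensity; the excess then decays at rate beta.
print(hawkes_intensity(2.0, history=[0.5, 1.2, 1.9]))
\end{verbatim}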
\subsection{Neural Hawkes Process}
While the Hawkes process brings a significant improvement in capturing self-exciting events, it is not sufficient for representing more complex types of events. The Hawkes intensity function dictates an additive and decaying influence of past events. While this behavior holds for some event streams, others, such as item recommendations, do not follow the additive nature of the Hawkes process: the more similar items are shown, the less influence they bring, and they can even discourage people from buying the product. The Neural Hawkes Process (NHP) \cite{NIPS2017_6463c884} is better suited to such complex TPP problems. It can define processes in which a past event's influence is not always additive, is not always positive, and does not always decay. NHP brings improvements in two parts. First, it introduces a non-linear \textit{transfer function} $f: \mathbb{R} \rightarrow \mathbb{R}^{+}$, which usually takes the form of the non-linear \textit{softplus} function $f(x) = \alpha\log(1 + e^{x / \alpha})$. Second, rather than computing the intensity as a simple summation as in the Hawkes process, NHP uses a recurrent neural network (RNN) \cite{doi:10.1073/pnas.79.8.2554} to determine the intensity function $\lambda(t) = f(h(t))$, where $h(t)$ is the hidden state of the RNN.
This change allows learning a complex dependence of the intensities on the number, order, and timing of past events. The architecture thus addresses the base Hawkes process's shortcomings, namely its inability to represent events whose past influence is not always additive, not always positive, and does not always decay. However, the Neural Hawkes Process has a downside: it inherits the weaknesses of RNNs, namely the inability to capture long-term dependencies (also called ``forgetting'') and the difficulty of training \cite{bengio_learning_1994}.
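A sketch of the resulting intensity head is shown below (the continuous-time hidden-state dynamics of the NHP are omitted, and the layer dimensions are our assumptions):
\begin{verbatim}
# Neural Hawkes intensity head: lambda(t) = f(W h(t)) with the scaled
# softplus transfer function f(x) = s * log(1 + exp(x / s)), s > 0.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntensityHead(nn.Module):
    def __init__(self, hidden_dim, num_marks, s=1.0):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, num_marks)
        self.s = s

    def forward(self, h_t):
        # h_t: (batch, hidden_dim) hidden state of the RNN at time t
        return self.s * F.softplus(self.proj(h_t) / self.s)  # positive rates
\end{verbatim}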
\subsection{Self-Attention Hawkes process}
Although the Transformer architecture is not directly applicable to modeling point processes, it has recently been successfully generalized to the continuous-time domain \cite{2002.09291, 1907.07561}. The Self-Attentive Hawkes Process (SAHP) is a self-attention based approach to the Hawkes process proposed by \cite{1907.07561}. SAHP employs self-attention to measure the influence of historical events on the next event. This approach also brings a time-shifted positional embedding method in which time intervals act as phase shifts of sinusoidal functions. The Transformer Hawkes Process (THP) presented by \cite{2002.09291} is another self-attention based approach to the Hawkes process. THP improves on recurrent neural network-based point process models, which fail to capture short-term and long-term temporal dependencies \cite{hochreiter2001}. The key ingredient of the proposed THP model is the self-attention module. Unlike RNNs, the attention mechanism discards recurrent structures. However, the THP model still needs to be aware of the temporal information of inputs, such as timestamps. To achieve this, a temporal encoding procedure is proposed to compute the positional embedding.
\subsection{Neural Jump Stochastic Differential Equations}
Sometimes a user's interest is built up continuously over time but may also be interrupted by stochastic events. To simultaneously model these continuous and discrete dynamics, we can use Neural Jump Stochastic Differential Equations (NJSDEs) \cite{1905.10403}. The NJSDE uses a latent vector $z(t)$ to encode the system's state, which flows continuously over time until an event arrives at random, introducing an abrupt jump and changing its trajectory. The event conditional intensity function and the influence of the jump are parameterized with neural networks as functions of $z(t)$, while the continuous flow is described by Neural Ordinary Differential Equations \cite{1806.07366}. The advantage of NJSDEs is that they can model a diverse selection of marked point processes, where events can be complemented by a discrete value (for example, the class of a purchased item) or a vector of real-valued features (e.g., spatial locations).
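The following toy simulation is ours and only illustrates the flow-then-jump structure of the latent state; the drift, jump, and intensity networks are replaced by hand-written stand-ins, and events are drawn with a simple Bernoulli approximation per time step.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def drift(z):      # stand-in for the neural ODE drift of z(t)
    return -0.5 * z

def jump(z):       # stand-in for the learned jump map applied at events
    return z + rng.normal(scale=0.3, size=z.shape)

def intensity(z):  # stand-in for the conditional intensity network
    return np.exp(-np.sum(z ** 2))

def simulate(z0, T=5.0, dt=0.01):
    # Euler integration of the continuous flow; with probability
    # intensity(z) * dt an event fires in [t, t + dt) and z jumps.
    z, t, events = z0.copy(), 0.0, []
    while t < T:
        if rng.random() < intensity(z) * dt:
            events.append(t)
            z = jump(z)
        z = z + drift(z) * dt
        t += dt
    return z, events

z_final, ev = simulate(np.array([1.0, -0.5]))
print(len(ev), z_final)
\end{verbatim}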
\subsection{Parameterised Self-Attention Hawkes process}
To combat the cold-start problem, we explore a novel addition to encoder-decoder architecture based NTPP models: a parametrization based on static user features. Specifically, we experiment with self-attention type encoders. We concatenate the output of the Transformer model $m$ with the processed static user features $p$ and pass them through a multilayer perceptron (MLP): $f(X) = \mathrm{MLP}(m \,\|\, p)$. This differs from the approach proposed by \cite{2007.13794}, where the outputs of the Transformer model are passed directly to the MLP. We present the illustrated architecture of our approach in Figure \ref{fig:network_structure} and a minimal sketch below.
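The sketch is ours; the dimensions and the two-layer MLP shape are illustrative assumptions, with \texttt{m} standing for the Transformer output and \texttt{p} for the processed static user features.
\begin{verbatim}
import torch
import torch.nn as nn

class ParameterisedHead(nn.Module):
    # Concatenates the encoder output m with static user features p
    # and maps the result through an MLP: f(X) = MLP(m || p).
    def __init__(self, d_enc=64, d_user=16, d_hidden=64, n_classes=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(d_enc + d_user, d_hidden),
            nn.ReLU(),
            nn.Linear(d_hidden, n_classes),
        )

    def forward(self, m, p):
        return self.mlp(torch.cat([m, p], dim=-1))

head = ParameterisedHead()
m = torch.randn(32, 64)   # Transformer output for a batch of 32 users
p = torch.randn(32, 16)   # static features of the same users
print(head(m, p).shape)   # torch.Size([32, 4])
\end{verbatim}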
\section{Experimental setup}
\subsection{Data}
\begin{figure}[!ht]
\centering
\begin{subfigure}{\linewidth}
\includegraphics[width=\linewidth]{media/so_distribution.pdf}
\caption{Comparison between common (left) and rare (right) event type distribution within Stack Overflow platform.}
\end{subfigure}
\begin{subfigure}{\linewidth}
\includegraphics[width=\linewidth]{media/vinted_distribution.pdf}
\caption{Comparison between common (left) and rare (right) event type distribution within Vinted platform.}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=0.5\linewidth]{media/vinted_inter_event_distribution.pdf}
\caption{Vinted inter-event time distribution.}
\end{subfigure}
\caption{Different properties of Stack Overflow and Vinted datasets.}
\Description{Different properties of Stack Overflow and Vinted datasets.}
\label{fig:properties}
\end{figure}
The first dataset we use is a simulated Hawkes process. The synthetic Hawkes data serves as a starting point for evaluating the selected models' performance. As the data is generated, we can draw an unlimited number of samples. Also, as the intensity values at any point in time are known, we are able to compare them with the values provided by the trained models. We designed the synthetic dataset to consist of two independent processes.
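One standard way to generate such data is Ogata's thinning algorithm. The sketch below (ours, with illustrative parameters) simulates a single univariate Hawkes process with an exponential kernel, so for our dataset it would be run once per independent process.
\begin{verbatim}
import numpy as np

def simulate_hawkes(mu=0.5, alpha=0.8, beta=1.0, T=100.0, seed=0):
    # Ogata's thinning: propose points from a dominating rate lam_bar
    # and accept each with probability lambda(t) / lam_bar. For a
    # decaying kernel, the intensity just after the current time is a
    # valid upper bound until the next event.
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while t < T:
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - s)) for s in events)
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            break
        lam_t = mu + sum(alpha * np.exp(-beta * (t - s)) for s in events)
        if rng.random() <= lam_t / lam_bar:   # accept the proposal
            events.append(t)
    return np.array(events)

print(len(simulate_hawkes()))
\end{verbatim}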
The second and the third datasets are real user behavior datasets. Stack Overflow is a question-answering website that awards users various badges based on their activity on the website. The website users' activity is commonly used when benchmarking various temporal point process models \cite{10.1145/2939672.2939875, 1905.10403, 2007.13794, 10.5555/3295222.3295420}. The novel customer behavior dataset comes from Vinted, a second-hand marketplace where users sell and buy various goods.
\begin{figure}[!htbp]
\centering
\includegraphics[width=\linewidth]{media/so_activity.pdf}
\caption{Comparison between passive (top) and active (bottom) Stack Overflow user's activity within the website. Passive website user tends to acquire less diverse badges' set than an active one.}
\Description{Comparison between active (top) and passive (bottom) Stack Overflow user's activity within the website. Passive website user tends to acquire less diverse badges' set than an active one.}
\label{fig:so_users_activity}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[width=\linewidth]{media/vinted_activity.pdf}
\caption{Comparison between passive (top) and active (bottom) Vinted user's activity within the platform. Active Vinted user completes same actions as the passive user, only in a more frequent manner. }
\Description{Comparison between passive (top) and active (bottom) Vinted user's activity within the platform. Active Vinted user completes same actions as the passive user, only in a more frequent manner. }
\label{fig:vinted_users_activity}
\end{figure}
We collect Vinted users' activity data by sampling all of the recorded activity (from the user's registration to the time of collecting the data) of 5,082 Vinted users. All of the selected users were active in the last year (2021/01/01 - 2022/01/01) and completed at least 40 actions. Inside the dataset, four different types of events can be found: Purchase, Listing, Search, and Sale. Frequent event occurrences and long recorded activity periods result in the dataset being roughly ten times larger than Stack Overflow (4,886,657 vs. 480,414 events). When inspecting the event count distributions (see Figure \ref{fig:properties}), we notice that users' activities within the Vinted platform are similar to Stack Overflow, with the common events' distributions shaped similarly. Events collected from the Vinted platform as well as from the Stack Overflow website suffer from data imbalance. Badges like Publicist, Populist, or Booster are given to website users only when they accomplish a unique action. Such events can be tricky to capture for temporal point process models, as the data imbalance within the dataset is quite significant. The most common Vinted events are Search and Listing. Purchases and Sales happen much more rarely, as they are influenced not only by users' behavior but also by external factors such as the listed item's appeal or the quality of Vinted's recommendation system.
The differences between the Vinted and Stack Overflow platforms begin with the measured inter-event times (see Figure \ref{fig:properties}). Vinted users return to the platform more frequently than Stack Overflow ones. This difference is caused by the fact that Stack Overflow browsing activity is not recorded. Activity tendencies between active and passive platform users also differ between the two platforms. An active Vinted user completes the same actions as a passive user, only in a more systematic manner (see Figure \ref{fig:vinted_users_activity}).
Besides user activity records, we collect static Vinted user features. We use these features to explore a solution to the cold-start problem in modeling user behavior: we parameterize the temporal point process models with them in the hope of more accurate event-type classification.
We split each dataset into training, test and validation parts. Dataset statistics are summarized in Table \ref{tab:dataset_stats}.
\begin{table*}
\caption{Properties of datasets used for experimentation.}
\label{tab:dataset_stats}
\begin{tabular}{lllllllll}
\toprule
& & & & & \multicolumn{4}{c}{Size} \\ \cmidrule(l){6-9}
Dataset & \# classes & Task type & \# events & Avg. length & Train & Valid & Test & Batch \\ \midrule
Hawkes & 2 & Multi-class & 350,281 & 14 & 233,537 & 58,451 & 58,293 & 512 \\
Stack Overflow & 22 & Multi-class & 480,414 & 72 & 335,870 & 47,966 & 96,578 & 32 \\
Vinted & 4 & Multi-class & 4,886,657 & 962 & 3,428,380 & 482,512 & 975,765 & 4 \\ \bottomrule
\end{tabular}
\end{table*}
\subsection{Training}
We base our model selection on a model's performance in capturing the type and time of the next event. The accessibility of source code is also a significant deciding factor. We select \cite{1905.10403} and \cite{2007.13794} as the primary sources of the models used in our experimentation. \cite{2007.13794} demonstrates that models based on encoder-decoder architectures are effective not only for natural language processing \cite{1406.1078, 1706.03762} but also work well in the temporal point process field. In that paper, the usage of self-attention as a building block of the model's architecture proves to be beneficial for modeling healthcare records. \cite{1905.10403} showcases Neural Jump Stochastic Differential Equations, a model that captures temporal point processes with a piecewise-continuous latent trajectory. The model demonstrates its predictive capabilities on various synthetic and real-world marked point process datasets.
Our final list consists of six models. Five of them are selected from \cite{2007.13794}: \textbf{SA-COND-POISSON}, \textbf{SA-LNM}, \textbf{SA-MLP-MC}, \textbf{SA-RMTPP-POISSON}, and \textbf{SA-SA-MC}. The prefix "\textit{SA-}" marks the type of encoder (self-attention), and the rest of the name marks the type of decoder. The sixth model used in the experimentation is Neural Jump Stochastic Differential Equations (\textbf{NJSDE}).
For all models, we use the hyperparameters specified in the original literature, with the exception of batch size and the number of training epochs.
Experiments on Vinted data use the same training configuration as experiments on Stack Overflow users' activity.
Furthermore, to explore solutions to the cold-start problem in modeling the behavior of new Vinted users, we experiment with parameterizing self-attention based NTPPs. Similarly to \cite{gu-budhkar-2021-package}, we join processed tabular user features to the output of the encoder's Transformer network. We train all models on a workstation with an \textit{Intel 32-core 3rd gen Xeon CPU and 120 GB of memory}. The complete implementation of our algorithms and experiments is available at \href{https://github.com/dqmis/ntpps}{https://github.com/dqmis/ntpps}.
\subsection{Model evaluation}
All models are trained and evaluated using a five-fold cross-validation strategy. As in \cite{1905.10403}, we use the weighted F1 score as one of the primary evaluation metrics. This allows us to verify a model's capability to predict the type of the next event given the time it occurred. As a secondary metric, we use classification accuracy, mainly to compare our results with \cite{2007.13794}. Despite its popularity in quantifying the predictive performance of TPPs, we do not use the MAE/MSE metric in this paper. As the metric compares actual intensity values with the model-predicted ones, and such ground truth is known only for the synthetic Hawkes dataset, this lack of data determines our choice. Additionally, we assess each model's predictions via a classification report. As the data imbalance problem is a concern, this enables us to validate a model's performance in predicting labels with few samples.
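Concretely, both metrics and the per-class report can be computed with scikit-learn; the label arrays in the sketch below are dummies for illustration only.
\begin{verbatim}
from sklearn.metrics import accuracy_score, classification_report, f1_score

y_true = [0, 0, 0, 1, 1, 2, 2, 2, 2, 3]   # dummy event-type labels
y_pred = [0, 0, 1, 1, 1, 2, 2, 0, 2, 2]

print("accuracy   :", accuracy_score(y_true, y_pred))
print("weighted F1:", f1_score(y_true, y_pred, average="weighted"))
# The per-class precision/recall/F1 exposes performance on rare labels.
print(classification_report(y_true, y_pred, zero_division=0))
\end{verbatim}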
\section{Results}
\begin{table*}
\caption{Evaluation on the Hawkes and Stack Overflow datasets. Best performances are shown in bold. Where comparable, results from \cite{1905.10403} and \cite{2007.13794} are displayed in \textit{italics}. }
\label{tab:so_hawkes_results}
\begin{tabular}{llllll}
\toprule
& \multicolumn{2}{c}{Hawkes} & & \multicolumn{2}{c}{Stack Overflow} \\ \cmidrule(lr){2-3} \cmidrule(l){5-6}
Model & Accuracy & F1 score & & Accuracy & F1 score \\ \midrule
NJSDE & \textbf{.541} & \textbf{.552} & & \textbf{.548} \textit{(.527)} & \textbf{.363} \\
SA-COND-POISSON & \textbf{.538} & \textbf{.538} & & .501 & \textbf{.332} \textit{(.326)} \\
SA-LNM & .537 & .536 & & .369 & .319 \textit{(.305)} \\
SA-MLP-MC & .536 & .531 & & .216 & .194 \textit{(.327)} \\
SA-RMTPP-POISSON & .526 & .474 & & .515 & .316 \textit{(.288)} \\
SA-SA-MC & .507 & .467 & & .382 & .305 \textit{(.324)} \\ \bottomrule
\end{tabular}
\end{table*}
We report our results on the Hawkes and Stack Overflow data (Table \ref{tab:so_hawkes_results}) and on the Vinted data (Table \ref{tab:vinted_results}). First, we note that the replication of the models' performance results provided by \cite{1905.10403} and \cite{2007.13794} is mostly successful. Notably, the results of the SA-MLP-MC model trained on the Stack Overflow dataset are not similar to the ones presented in \cite{2007.13794}. This could be caused by a mismatch in training environments or by errors in our or \cite{2007.13794}'s evaluation process. The best performing models were NJSDE and SA-COND-POISSON. Both presented great results while classifying synthetic Hawkes events and Stack Overflow users' actions. However, the NJSDE model required many more resources and took longer to train to reach high accuracy (15 hours to train the NJSDE model vs. 4 hours to train SA-COND-POISSON). The NJSDE implementation proposed by \cite{1905.10403} requires converting the time series into a grid later used for modeling. This conversion consumes more memory as the sequence count in the batch grows. Because of this, we could not validate the NJSDE model's performance on the Vinted data.
\begin{figure}[!ht]
\centering
\begin{subfigure}{\linewidth}
\includegraphics[width=\linewidth]{media/vinted_intensities.pdf}
\end{subfigure}
\begin{subfigure}{\linewidth}
\includegraphics[width=\linewidth]{media/so_intensities.pdf}
\end{subfigure}
\caption{Intensity functions on several labels provided by \textbf{SA-COND-POISSON} model trained on Vinted (top) and Stack Overflow (bottom) datasets. Common event's intensity (marked green color) is always higher than rare event's one (marked black color). This results in the model omitting rare events while classifying the next event's type.}
\Description{Intensity functions on several labels provided by \textbf{SA-COND-POISSON} model trained on Vinted (top) and Stack Overflow (bottom) datasets. Dominant class (samples count in the dataset wise) label always has a higher intensity value than the recessive one. This results in the model omitting rare events while classifying the next event's type.}
\label{fig:intesities}
\end{figure}
Notably, all models, including the best-performing ones, suffer from an inability to classify rare events. This can be seen by comparing the accuracy and F1 score metrics (also see the classification report in Table \ref{tab:so_classification}). The F1 score and accuracy used in the papers that inspired our research fail to report this phenomenon. The issue is caused by the significant data imbalance in the Stack Overflow and, as discussed later, Vinted datasets. Also, some types of events can be hard to capture not only because of their rarity but also because of their nature. For example, events inside the Stack Overflow dataset like Yearling (awarded for being an active user for a one-year period) or Caucus (awarded for visiting an election during any phase of an active election and having enough reputation for casting a vote) can be difficult to identify without a time dimension. Also, some badges like Promoter (given for the first bounty a user offers on their question) depend less on the user's activity and more on external factors like the importance of the asked question.
\begin{table}[!ht]
\caption{Evaluation on the Vinted dataset. Scores are reported in pairs: with the model parameterised with user features / without. Best performances are shown in bold.}
\label{tab:vinted_results}
\begin{tabular}{llll}
\toprule
Model & Accuracy & F1 score \\ \midrule
SA-LNM & .842 / .840 & .771 / .768 \\
SA-COND-POISSON & \textbf{.882} / \textbf{.882} & \textbf{.857} / \textbf{.856} \\
SA-SA-MC & .707 / .839 & .657 / .767 \\
SA-MLP-MC & .654 / .643 & .676 / .670 \\
SA-RMTPP-POISSON & .842 / .839 & .771 / .767 \\ \bottomrule
\end{tabular}
\end{table}
The same tendencies can be seen when inspecting the performance of models trained on the Vinted data. While the best performing SA-COND-POISSON model was able to reach an F1 score of 0.857, it still failed to predict some event types like Purchase and Sale (see Table \ref{tab:vinted_classification}). Interestingly, the Sale event had twice as many samples as Listing but still failed to be captured by the model. As with the Stack Overflow badges, this can be explained by looking at the nature of the event. Searching for a new dress to buy or listing a pair of shoes are events caused mainly by users' habits. Purchases are motivated more by the quality of Vinted's search or recommendation system. The sale of an uploaded item is likewise driven less by the seller's past activity and more by the item's quality and appeal to buyers. Inspecting the differences between the intensity values of the SA-COND-POISSON model trained on the Vinted and Stack Overflow data (see Figure \ref{fig:intesities}) gives a better understanding of the model's behavior. There is a significant difference between the intensity values of a frequent and a less frequent event type, and the difference tends to stay the same across all time points. As a result, the model captures overall intensity changes but lacks the ability to identify rare events.
Furthermore, our experimentation with parameterizing encoder-decoder architecture-based models with users' static features did not produce valuable results (see Table \ref{tab:vinted_results}). There is one case (SA-SA-MC) where the parameterized model performs better than its counterpart, but this tendency is not observed for the rest of the models. The lack of performance improvement is probably caused by the fact that the selected user features do not directly impact users' behavior. However, to verify whether parameterizing NTPPs with tabular features is beneficial, we need to conduct more detailed experiments with a more diverse set of features.
\section{Conclusion}
We discussed broad industry application cases of NTPPs and their ability to capture online platform users' behavior. We used a synthetic Hawkes dataset, Stack Overflow users' badges, and the second-hand marketplace Vinted's user activity as our experimentation datasets. We identified that self-attention and Neural Ordinary Differential Equation based NTPP models fail to capture the time of the next rare event but succeed in identifying overall user activity intensity. While the NJSDE model is optimal for identifying the next event's type and time, we note that it does not scale to extensive industry data. Our experimentation with parametrizing self-attention-based NTPPs did not show any significant improvements; however, more experiments are needed to verify whether the method is beneficial.
\subsection{Broader Impact}
While the explored NTPPs are not suitable for predicting the time of the next rare event, this does not mean they are not beneficial for subscription and non-subscription-based online platforms. These models can be valuable for problems such as churn prediction. By grouping individual events into user sessions, we would be able to detect when a user's overall intensity is decreasing, thus letting us know when to act. Furthermore, knowing the time of a user's next arrival on the platform lets us optimize recommendation systems and marketing strategies.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction\label{INTRO}}
The \textit{collision problem} is to decide whether a black-box function
$f:\left[ n\right] \rightarrow\left[ n\right] $\ is one-to-one (i.e., a
permutation) or two-to-one function, promised that one of these is the case.
\ Together with its close variants, the collision problem is one of the
central problems studied in quantum computing theory; it abstractly models
numerous other problems such as graph isomorphism and the breaking of
cryptographic hash functions.
In this paper, we will mostly deal with a slight generalization of the
collision problem that we call the \textit{Permutation Testing Problem}, or
PTP. \ This is a property testing problem, in which we are promised that
$f:\left[ n\right] \rightarrow\left[ n\right] $ is either a permutation or
far from any permutation, and are asked to decide which is the case.
In 1997, Brassard, H\o yer, and Tapp \cite{bht} gave a quantum algorithm for
the collision problem that makes $O\left( n^{1/3}\right) $\ queries to $f$,
an improvement over the $\Theta\left( \sqrt{n}\right) $\ randomized query
complexity that follows from the birthday paradox. \ Brassard et al.'s
algorithm is easily seen to work for the PTP as well.
Five years later, Aaronson \cite{aar:col} proved the first non-constant lower
bound for these problems: namely, any bounded-error quantum algorithm to solve
them needs $\Omega\left( n^{1/5}\right) $\ queries to $f$. \ Aaronson and
Shi \cite{as} subsequently improved the lower bound to $\Omega\left(
n^{1/3}\right) $, for functions $f:\left[ n\right] \rightarrow\left[
3n/2\right] $; then Ambainis \cite{ambainis:col} and Kutin \cite{kutin}
proved the optimal $\Omega\left( n^{1/3}\right) $\ lower bound for functions
$f:\left[ n\right] \rightarrow\left[ n\right] $. \ All of these lower
bounds work for both the collision problem and the PTP, though they are
slightly easier to prove for the latter.
The collision problem and the PTP are easily seen to admit Statistical
Zero-Knowledge (SZK) proof protocols. \ Thus, one consequence of the collision
lower bound was the existence of an oracle $A$ such that $\mathsf{SZK}^{A}\not \subset \mathsf{BQP}^{A}$.
In talks beginning in 2002,\footnote{See for example: \textit{Quantum Lower
Bounds}, www.scottaaronson.com/talks/lower.ppt; \textit{The Future (and Past)
of Quantum Lower Bounds by Polynomials},
www.scottaaronson.com/talks/future.ppt; \textit{The Polynomial Method in
Quantum and Classical Computing}, www.scottaaronson.com/talks/polymeth.ppt.}
the author often raised the following question:
\begin{quotation}
\noindent\textit{Suppose a function }$f:\left[ n\right] \rightarrow\left[
n\right] $\textit{\ is a permutation, rather than far from a permutation.
\ Is there a small (}$\operatorname*{polylog}\left( n\right)
\textit{-qubit) quantum proof }$\left\vert \varphi_{f}\right\rangle
$\textit{\ of that fact, which can be verified using }$\operatorname*{polylog}\left( n\right) $\textit{\ quantum queries to }$f$\textit{?}
\end{quotation}
In this paper, we will answer the above question in the negative. \ As a
consequence, we will obtain an oracle $A$ such that $\mathsf{SZK}^{A}\not \subset \mathsf{QMA}^{A}$. \ This implies, for example, that any
$\mathsf{QMA}$\ protocol for graph non-isomorphism would need to exploit
something about the problem structure beyond its reducibility to the collision problem.
Given that the relativized $\mathsf{SZK}$\ versus $\mathsf{QMA}$\ problem
remained open for eight years, our solution is surprisingly simple. \ We first
use the in-place amplification procedure of Marriott and Watrous \cite{mw} to
\textquotedblleft eliminate the witness,\textquotedblright\ and reduce the
question to one about quantum algorithms with extremely small acceptance
probabilities. \ We then use a relatively-minor adaptation of the polynomial
degree argument that was used to prove the original collision lower bound.
\ Our proof actually yields an oracle $A$\ such that $\mathsf{SZK}^{A}\not \subset \mathsf{A}_{\mathsf{0}}\mathsf{PP}^{A}$, where $\mathsf{A}_{\mathsf{0}}\mathsf{PP}$\ is a class defined by Vyalyi \cite{vyalyi} that
sits between $\mathsf{QMA}$\ and $\mathsf{PP}$.
Despite the simplicity of our result, to our knowledge it constitutes
\textit{the first nontrivial lower bound on }$\mathsf{QMA}$\textit{\ query
complexity},\ where \textquotedblleft nontrivial\textquotedblright\ means that
it doesn't follow immediately from earlier results unrelated to $\mathsf{QMA}$.\footnote{From the BBBV lower bound for quantum search \cite{bbbv}, one
immediately obtains an oracle $A$ such that $\mathsf{coNP}^{A}\not \subset
\mathsf{QMA}^{A}$: for if there exists a witness state $\left\vert
\varphi\right\rangle $\ that causes a $\mathsf{QMA}$\ verifier to accept the
all-$0$ oracle string, then that same $\left\vert \varphi\right\rangle $\ must
also cause the verifier to accept some string of Hamming weight $1$. \ Also,
since $\mathsf{QMA}\subseteq\mathsf{PP}$ relative to all oracles, the result
of Vereshchagin \cite{vereshchagin}\ that there exists an oracle $A$\ such
that $\mathsf{AM}^{A}\not \subset \mathsf{PP}^{A}$\ implies an $A$ such
that\ $\mathsf{AM}^{A}\not \subset \mathsf{QMA}^{A}$ as well.} \ We hope it
will serve as a starting point for stronger results in the same vein.
\section{Preliminaries\label{PRELIM}}
We assume familiarity with quantum query complexity, as well as with
complexity classes such as $\mathsf{QMA}$ (Quantum Merlin-Arthur),
$\mathsf{QCMA}$ (Quantum Merlin-Arthur with classical witnesses), and
$\mathsf{SZK}$ (Statistical Zero-Knowledge). \ See Buhrman and de Wolf
\cite{bw}\ for a good introduction to quantum query complexity, and the
Complexity Zoo\footnote{www.complexityzoo.com} for definitions of complexity classes.
We now define the main problem we will study. \
\begin{problem}
[Permutation Testing Problem or PTP]Given black-box access to a function
$f:\left[ n\right] \rightarrow\left[ n\right] $, and promised that either
\begin{enumerate}
\item[(i)] $f$ is a permutation (i.e., is one-to-one), or
\item[(ii)] $f$ differs from every permutation on at least $n/8$\ coordinates.
\end{enumerate}
The problem is to accept if (i) holds and reject if (ii) holds.
\end{problem}
In the above definition, the choice of $n/8$\ is arbitrary; it could be
replaced by $cn$\ for any $0<c<1$.
As mentioned earlier, Aaronson \cite{aar:col}\ defined the collision problem
as that of deciding whether $f$\ is \textit{one-to-one or two-to-one},
promised that one of these is the case. \ In this paper, we are able to prove
a $\mathsf{QMA}$ lower bound for PTP, but not for the original collision problem.
Fortunately, however, most of the desirable properties of the collision
problem carry over to PTP. \ As an example, we now observe a simple
$\mathsf{SZK}$\ protocol for PTP.
\begin{proposition}
\label{szkprop}PTP has an (honest-verifier) Statistical Zero-Knowledge proof
protocol, requiring $O\left( \log n\right) $\ time and\ $O\left( 1\right)
$\ queries to $f$.
\end{proposition}
\begin{proof}
The protocol is the following: to check that $f:\left[ n\right]
\rightarrow\left[ n\right] $\ is one-to-one, the verifier picks an input
$x\in\left[ n\right] $\ uniformly at random, sends $f\left( x\right) $ to
the prover, and accepts if and only if the prover returns $x$. \ Since the
verifier already knows $x$, it is clear that this protocol has the
zero-knowledge property.
If $f$\ is a permutation, then the prover can always compute $f^{-1}\left(
f\left( x\right) \right) $, so the protocol has perfect completeness.
If $f$ is $n/8$-far from a permutation, then with at least $1/8$\ probability,
the verifier picks an $x$ such that $f\left( x\right) $\ has no unique
preimage, in which case the prover can find $x$\ with probability at most
$1/2$. \ So the protocol has constant soundness.
\end{proof}
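Since nothing in this protocol is quantum, its completeness and soundness gap is easy to check empirically. The following sketch is ours and only illustrative: the prover inverts $f$ whenever possible, and the "random map" used as a no-instance is only far from a permutation with high probability.
\begin{verbatim}
import random

def round_accepts(f, n):
    # Verifier picks x and sends f(x); the honest prover returns a
    # uniformly random preimage of f(x); the verifier checks it is x.
    x = random.randrange(n)
    y = f[x]
    preimages = [z for z in range(n) if f[z] == y]
    return random.choice(preimages) == x

n, trials = 1024, 2000
perm = list(range(n)); random.shuffle(perm)       # a permutation
rand = [random.randrange(n) for _ in range(n)]    # far from one, w.h.p.

for name, f in [("permutation", perm), ("random map", rand)]:
    acc = sum(round_accepts(f, n) for _ in range(trials))
    print(name, acc / trials)   # ~1.0 vs. noticeably below 1
\end{verbatim}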
\subsection{Upper Bounds\label{UPPER}}
To build intuition, we now give a simple $\mathsf{QMA}$\ \textit{upper} bound
for the collision problem. \ Indeed, this will actually be a $\mathsf{QCMA}$\ upper bound, meaning that the witness is classical, and only the
verification procedure is quantum.
\begin{theorem}
\label{upperbound}For all $w\in\left[ 0,n\right] $, there exists a
$\mathsf{QCMA}$\ protocol for the collision problem---i.e., for verifying that
$f:\left[ n\right] \rightarrow\left[ n\right] $\ is one-to-one rather than
two-to-one---that uses a $w\log n$-bit classical witness and makes $O\left(
\min\left\{ \sqrt{n/w},n^{1/3}\right\} \right) $ quantum queries to $f$.
\end{theorem}
\begin{proof}
If $w=O\left( n^{1/3}\right) $, then the verifier $V$\ can just ignore the
witness and solve the problem in $O\left( n^{1/3}\right) $\ queries\ using
the Brassard-H\o yer-Tapp algorithm \cite{bht}. \ So assume $w\geq Cn^{1/3}$
for some suitable constant $C$.
The witness will consist of claimed values $f^{\prime}\left( 1\right)
,\ldots,f^{\prime}\left( w\right) $\ for $f\left( 1\right) ,\ldots
,f\left( w\right) $ respectively. \ Given this witness, $V$ runs the
following procedure.
\begin{enumerate}
\item[(Step 1)] Choose a set of indices $X\subset\left[ w\right] $\ with
$\left\vert X\right\vert =O\left( 1\right) $ uniformly at random. \ Query
$f\left( x\right) $\ for each $x\in X$, and reject if there is an $x\in X$
such that $f\left( x\right) \neq f^{\prime}\left( x\right) $.
\item[(Step 2)] Choose a set of indices $Y\subset\left\{ w+1,\ldots
,n\right\} $ with $\left\vert Y\right\vert =n/w$\ uniformly at random. \ Use
Grover's algorithm to look for a $y\in Y$\ such that $f\left( y\right)
=f^{\prime}\left( x\right) $ for some $x\in\left[ w\right] $. \ If such a
$y$\ is found, then reject; otherwise accept.
\end{enumerate}
Clearly this procedure makes $O\left( \sqrt{n/w}\right) $\ quantum queries
to $f$. \ For completeness, notice that if $f$\ is one-to-one, and the witness
satisfies $f^{\prime}\left( x\right) =f\left( x\right) $\ for all
$x\in\left[ w\right] $, then $V$ accepts with probability $1$. \ For
soundness, suppose that Step 1 accepts. \ Then with high probability, we have
$f^{\prime}\left( x\right) =f\left( x\right) $\ for at least (say) a
$2/3$\ fraction of $x\in\left[ w\right] $. \ However, as in the analysis of
Brassard et al.\ \cite{bht}, this means that, if $f$\ is two-to-one, then with
high probability, a Grover search over $n/w$\ randomly-chosen indices
$y\in\left\{ w+1,\ldots,n\right\} $\ will succeed at finding a $y$\ such
that $f\left( y\right) =f^{\prime}\left( x\right) =f\left( x\right) $
for some $x\in\left[ w\right] $. \ So if Step 2 does \textit{not} find such
a $y$, then $V$ has verified to within constant soundness that $f$ is one-to-one.
\end{proof}
For the Permutation Testing Problem, we do not know whether there is a
$\mathsf{QCMA}$\ protocol that satisfies both $T=o\left( n^{1/3}\right)
$\ and $w=o\left( n\log n\right) $. \ However, notice that if $w=\Omega
\left( n\log n\right) $, then the witness can just give claimed values
$f^{\prime}\left( 1\right) ,\ldots,f^{\prime}\left( n\right) $ for
$f\left( 1\right) ,\ldots,f\left( n\right) $\ respectively. \ In that
case, the verifier simply needs to check that $f^{\prime}$ is indeed a
permutation, and that $f^{\prime}\left( x\right) =f\left( x\right) $\ for
$O\left( 1\right) $ randomly-chosen values $x\in\left[ n\right] $. \ So if
$w=\Omega\left( n\log n\right) $, then the $\mathsf{QMA}$, $\mathsf{QCMA}$,
and $\mathsf{MA}$\ query complexities are all $T=O\left( 1\right) $.
\section{Main Result\label{MAIN}}
In this section, we prove a lower bound on the $\mathsf{QMA}$\ query
complexity of the Permutation Testing Problem. \ Given a $\mathsf{QMA
$\ verifier $V$ for PTP, the first step will be to amplify $V$'s success
probability. \ For this, we use the by-now standard procedure of Marriott and
Watrous \cite{mw}, which amplifies without increasing the size of the quantum witness.
\begin{lemma}
[In-Place Amplification Lemma \cite{mw}]\label{inplace}Let $V$\ be a
$\mathsf{QMA}$\ verifier that uses a $w$-qubit quantum witness, makes $T$
oracle queries, and has completeness and soundness errors $1/3$. Then for all
$s\geq1$, there exists an amplified verifier $V_{s}^{\prime}$\ that uses a
$w$-qubit quantum witness, makes $O\left( Ts\right) $\ oracle queries, and
has completeness and soundness errors\ $1/2^{s}$.
\end{lemma}
Lemma \ref{inplace} has a simple consequence that will be the starting point
for our lower bound.
\begin{lemma}
[Guessing Lemma]\label{guesslem}Suppose a language $L$ has a $\mathsf{QMA}$\ protocol, which makes $T$\ queries and uses a $w$-qubit quantum witness.
\ Then there is also a quantum algorithm for $L$ (with no witness) that makes
$O\left( Tw\right) $ queries, accepts every $x\in L$ with probability at
least $0.9/2^{w}$, and accepts every $x\notin L$ with probability at most
$0.3/2^{w}$.
\end{lemma}
\begin{proof}
Let $V_{s}^{\prime}$\ be the amplified verifier from Lemma \ref{inplace}.
\ Set $s:=w+2$, and consider running $V_{s}^{\prime}$\ with the $w$-qubit
maximally mixed state $I_{w}$ in place of the $\mathsf{QMA}$ witness
$\left\vert \varphi_{x}\right\rangle $. \ Then given any yes-instance $x\in
L$
\[
\Pr\left[ V_{s}^{\prime}\left( x,I_{w}\right) \text{ accepts}\right]
\geq\frac{1}{2^{w}}\Pr\left[ V_{s}^{\prime}\left( x,\left\vert \varphi
_{x}\right\rangle \right) \text{ accepts}\right] \geq\frac{1-2^{-s}}{2^{w}}\geq\frac{0.9}{2^{w}},
\]
while given any no-instance $x\notin L$
\[
\Pr\left[ V_{s}^{\prime}\left( x,I_{w}\right) \text{ accepts}\right]
\leq\frac{1}{2^{s}}\leq\frac{0.3}{2^{w}}.
\]
\end{proof}
Now let $Q$ be a quantum algorithm for PTP, which makes $T$ queries to $f$.
\ Then just like in the collision lower bound proofs of Aaronson
\cite{aar:col}, Aaronson and Shi \cite{as}, and Kutin \cite{kutin}, the
crucial fact we will need is the so-called \textquotedblleft Symmetrization
Lemma\textquotedblright: namely, $Q$\textit{'s acceptance probability can be
written as a polynomial, of degree at most }$2T$\textit{, in a small number of
integer parameters characterizing }$f$\textit{.}
In more detail, call an ordered pair of integers $\left( m,a\right)
$\ \textit{valid} if
\begin{enumerate}
\item[(i)] $0\leq m\leq n$,
\item[(ii)] $1\leq a\leq n-m$, and
\item[(iii)] $a$ divides $n-m$.
\end{enumerate}
Then for any valid $\left( m,a\right) $, let $S_{m,a}$\ be the set of all
functions $f:\left[ n\right] \rightarrow\left[ n\right] $\ that are
one-to-one on $m$\ coordinates and $a$-to-one on the remaining $n-m$\ coordinates (with the two ranges not intersecting, so that $\left\vert
\operatorname{Im}f\right\vert =m+\frac{n-m}{a}$). \ The following version of
the Symmetrization Lemma is a special case of the version proved by Kutin
\cite{kutin}.
\begin{lemma}
[Symmetrization Lemma \cite{aar:col,as,kutin}]\label{kutinlem}Let $Q$ be a
quantum algorithm that makes $T$ queries to $f:\left[ n\right]
\rightarrow\left[ n\right] $. \ Then there exists a real polynomial
$p\left( m,a\right) $, of degree at most $2T$, such that
\[
p\left( m,a\right) =\operatorname*{E}_{f\in S_{m,a}}\left[ \Pr\left[
Q^{f}\text{ accepts}\right] \right]
\]
for all valid $\left( m,a\right) $.
\end{lemma}
Finally, we will need a standard result from approximation theory, due to
Paturi \cite{paturi}.
\begin{lemma}
[Paturi \cite{paturi}]\label{polylem}Let $q:\mathbb{R}\rightarrow\mathbb{R}$\ be a univariate polynomial such that $0\leq q\left( j\right) \leq\delta
$\ for all integers $j\in\left[ a,b\right] $, and suppose that $\left\vert
q\left( \left\lceil x\right\rceil \right) -q\left( x\right) \right\vert
=\Omega\left( \delta\right) $\ for some $x\in\left[ a,b\right] $.\ \ Then
$\deg\left( q\right) =\Omega\left( \sqrt{\left( x-a+1\right) \left(
b-x+1\right) }\right) $.
\end{lemma}
Intuitively, Lemma \ref{polylem} says that $\deg\left( q\right)
=\Omega\left( \sqrt{b-a}\right) $\ if $x$\ is close to one of the endpoints
of the range $\left[ a,b\right] $, and that $\deg\left( q\right)
=\Omega\left( b-a\right) $\ if $x$ is close to the middle of the range.
We can now prove the $\mathsf{QMA}$\ lower bound for PTP.
\begin{theorem}
[Main Result]\label{mainthm}Let $V$ be a $\mathsf{QMA}$\ verifier for the
Permutation Testing Problem, which makes $T$ quantum queries to the function
$f:\left[ n\right] \rightarrow\left[ n\right] $, and which takes a
$w$-qubit quantum witness $\left\vert \varphi_{f}\right\rangle $\ in support
of $f$ being a permutation. \ Then $Tw=\Omega\left( n^{1/3}\right) $.
\end{theorem}
\begin{proof}
Assume without loss of generality that $n$ is divisible by $4$. \ Let
$\varepsilon:=0.3/2^{w}$. \ Then by Lemma \ref{guesslem}, from the
hypothesized $\mathsf{QMA}$\ verifier $V$, we can obtain a quantum algorithm
$Q$ for the PTP that makes $O\left( Tw\right) $\ queries to $f$, and that
satisfies the following two properties:
\begin{enumerate}
\item[(i)] $\Pr\left[ Q^{f}\text{ accepts}\right] \geq3\varepsilon$ for all
permutations $f:\left[ n\right] \rightarrow\left[ n\right] $.
\item[(ii)] $\Pr\left[ Q^{f}\text{ accepts}\right] \leq\varepsilon$ for all
$f:\left[ n\right] \rightarrow\left[ n\right] $ that are at least
$n/8$-far from any permutation.
\end{enumerate}
Now let $p\left( m,a\right) $ be the real polynomial of degree $O\left(
Tw\right) $ from Lemma \ref{kutinlem}, such that
\[
p\left( m,a\right) =\operatorname*{E}_{f\in S_{m,a}}\left[ \Pr\left[
Q^{f}\text{ accepts}\right] \right]
\]
for all valid $\left( m,a\right) $. \ Then $p$ satisfies the following two properties:
\begin{enumerate}
\item[(i')] $p\left( m,1\right) \geq3\varepsilon$ for all $m\in\left[
n\right] $. \ (For any $f\in S_{m,1}$ is one-to-one on its entire domain.)
\item[(ii')] $0\leq p\left( m,a\right) \leq\varepsilon$ for all integers
$0\leq m\leq3n/4$\ and $a\geq2$ such that $a$ divides $n-m$. \ (For in this
case, $\left( m,a\right) $\ is valid and every $f\in S_{m,a}$\ is at least
$n/8$-far\ from a permutation.)
\end{enumerate}
So to prove the theorem, it suffices to show that any polynomial $p$
satisfying properties (i') and (ii') above has degree $\Omega\left(
n^{1/3}\right) $.
Let $g\left( x\right) :=p\left( n/2,2x\right) $, and let $k$\ be the least
positive integer such that $\left\vert g\left( k\right) \right\vert
>2\varepsilon$ (such a $k$ must exist, since $g$\ is a non-constant
polynomial). \ Notice that\ $g\left( 1/2\right) =p\left( n/2,1\right)
\geq3\varepsilon$, that $g\left( 1\right) =p\left( n/2,2\right)
\leq\varepsilon$, and that $\left\vert g\left( i\right) \right\vert
\leq2\varepsilon$\ for all $i\in\left[ k-1\right] $. \ By Lemma
\ref{polylem}, these facts together imply that $\deg\left( g\right)
=\Omega\left( \sqrt{k}\right) $.
Now let $c:=2k$, and let $h\left( i\right) :=p\left( n-ci,c\right) $.
\ Then for all integers $i\in\left[ \frac{n}{4c},\frac{n}{c}\right] $, we
have $0\leq h\left( i\right) \leq\varepsilon$, since $\left( n-ci,c\right)
$\ is valid, $n-ci\leq3n/4$, and $c\geq2$. \ On the other hand, we also have
\[
h\left( \frac{n}{2c}\right) =p\left( \frac{n}{2},c\right) =p\left(
\frac{n}{2},2k\right) =g\left( k\right) >2\varepsilon.
\]
By Lemma \ref{polylem}, these facts together imply that $\deg\left( h\right)
=\Omega\left( n/c\right) =\Omega\left( n/k\right) $.
Clearly $\deg\left( g\right) \leq\deg\left( p\right) $ and $\deg\left(
h\right) \leq\deg\left( p\right) $. \ So combining the two bounds,
\[
\deg\left( p\right) =\Omega\left( \max\left\{ \sqrt{k},\frac{n}{k}\right\} \right) =\Omega\left( n^{1/3}\right) ,
\]
where the last step holds because $\max\left\{ \sqrt{k},n/k\right\} $\ is
minimized when $\sqrt{k}=n/k$, i.e.\ at $k=n^{2/3}$.
\end{proof}
\section{Oracle Separations\label{ORACLE}}
Using Theorem \ref{mainthm}, we can exhibit an oracle separation between
$\mathsf{SZK}$\ and $\mathsf{QMA}$, thereby answering the author's question
from eight years ago.
\begin{theorem}
\label{szkqmasep}There exists an oracle $A$\ such that $\mathsf{SZK}^{A}\not \subset \mathsf{QMA}^{A}$.
\end{theorem}
\begin{proof}
[Proof Sketch]The oracle $A$ will encode an infinite sequence of instances
$f_{n}:\left[ 2^{n}\right] \rightarrow\left[ 2^{n}\right] $\ of the
Permutation Testing Problem, one for each input length $n$. \ Define a unary
language $L_{A}$\ by $0^{n}\in L_{A}$\ if $f_{n}$\ is a permutation, and
$0^{n}\notin L_{A}$\ if $f_{n}$\ is far from a permutation. \ Then Proposition
\ref{szkprop}\ tells us that $L_{A}\in\mathsf{SZK}^{A}$\ for all $A$. \ On the
other hand, Theorem \ref{mainthm}\ tells us that we can choose $A$ in such a
way that $L_{A}\notin\mathsf{QMA}^{A}$, by diagonalizing against all possible
$\mathsf{QMA}$\ verifiers.
\end{proof}
In the rest of the section, we explain how our lower bound actually places
$\mathsf{SZK}$\ outside of a larger complexity class than $\mathsf{QMA}$.
\ First let us define the larger class in question.
\begin{definition}
[Vyalyi \cite{vyalyi}]$\mathsf{A}_{\mathsf{0}}\mathsf{PP}$ is the class of
languages $L$\ for which there exists a $\mathsf{\#P}$\ function $g$, as well
as polynomials $p$\ and $q$, such that for all inputs $x\in\left\{
0,1\right\} ^{n}$:
\begin{enumerate}
\item[(i)] If $x\in L$\ then $\left\vert g\left( x\right) -2^{p\left(
n\right) }\right\vert \geq2^{q\left( n\right) }$.
\item[(ii)] If $x\notin L$\ then $\left\vert g\left( x\right) -2^{p\left(
n\right) }\right\vert \leq2^{q\left( n\right) -1}$.
\end{enumerate}
\end{definition}
We now make some elementary observations about $\mathsf{A}_{\mathsf{0}}\mathsf{PP}$. \ First, $\mathsf{A}_{\mathsf{0}}\mathsf{PP}$\ is contained in
$\mathsf{PP}$, and contains not only $\mathsf{MA}$\ but also the
slightly-larger class $\mathsf{SBP}$ (Small Bounded-Error Polynomial-Time)
defined by B\"{o}hler et al.\ \cite{bohler}. \ Second, it is not hard to show
that $\mathsf{P}^{\mathsf{PromiseA}_{\mathsf{0}}\mathsf{PP}}=\mathsf{P}^{\mathsf{PP}}=\mathsf{P}^{\mathsf{\#P}}$. \ The reason is that, by varying
the polynomial $p$, we can obtain a multiplicative estimate of the difference
$\left\vert g\left( x\right) -2^{p\left( n\right) }\right\vert $, which
then implies that we can use binary search to determine $g\left( x\right) $\ itself.
By adapting the result of Aaronson \cite{aar:pp}\ that $\mathsf{PP}=\mathsf{PostBQP}$, Kuperberg \cite{kuperberg:jones} gave a beautiful
alternate characterization of $\mathsf{A}_{\mathsf{0}}\mathsf{PP}$ in terms of
quantum computation. \ Let $\mathsf{SBQP}$\ (Small Bounded-Error Quantum
Polynomial-Time) be the class of languages $L$ for which there exists a
polynomial-time quantum algorithm\ that accepts with probability at least
$2^{-p\left( n\right) }$\ if $x\in L$, and with probability at most
$2^{-p\left( n\right) -1}$\ if $x\notin L$, for some polynomial $p$.
\begin{theorem}
[Kuperberg \cite{kuperberg:jones}]\label{kuperthm}$\mathsf{A}_{\mathsf{0}}\mathsf{PP}=\mathsf{SBQP}.$
\end{theorem}
By combining Theorem \ref{kuperthm} with Lemma \ref{inplace}, it is not hard
to reprove the following result of Vyalyi \cite{vyalyi}.
\begin{theorem}
[Vyalyi \cite{vyalyi}]\label{vyalyithm}$\mathsf{QMA}\subseteq\mathsf{A}_{\mathsf{0}}\mathsf{PP}$.
\end{theorem}
\begin{proof}
Similar to Lemma \ref{guesslem}. \ Given a language $L$, suppose $L$ has a
$\mathsf{QMA}$\ verifier $V$\ that takes a $w$-qubit quantum witness. \ Then
first apply Marriott-Watrous amplification (Lemma \ref{inplace}), to obtain a
new verifier $V^{\prime}$\ with completeness and soundness errors $0.2/2^{w}$,
which also takes a $w$-qubit quantum witness. \ Next, run $V^{\prime}$ with
the $w$-qubit maximally mixed state $I_{w}$ in place of the witness. \ The
result is a quantum algorithm that accepts every $x\in L$ with probability at
least $0.9/2^{w}$, and accepts every $x\notin L$ with probability at most
$0.2/2^{w}$. \ This implies that $L\in\mathsf{SBQP}$.
\end{proof}
We now observe that our results from Section \ref{MAIN} yield, not only an
oracle $A$ such that $\mathsf{SZK}^{A}\not \subset \mathsf{QMA}^{A}$, but an
oracle $A$ such that $\mathsf{SZK}^{A}\not \subset \mathsf{A}_{\mathsf{0}}\mathsf{PP}^{A}$, which is a stronger separation.
\begin{theorem}
\label{szka0pp}There exists an oracle $A$\ such that $\mathsf{SZK}^{A}\not \subset \mathsf{A}_{\mathsf{0}}\mathsf{PP}^{A}$.
\end{theorem}
\begin{proof}
[Proof Sketch]As in Theorem \ref{szkqmasep}, the oracle $A$ encodes an
infinite sequence of instances $f_{n}:\left[ 2^{n}\right] \rightarrow\left[
2^{n}\right] $\ of the Permutation Testing Problem. \ The key observation is
that Theorem \ref{mainthm} rules out, not merely any $\mathsf{QMA}$\ protocol
for PTP, but also any $\mathsf{SBQP}$\ algorithm: that is, any polynomial-time
quantum algorithm that accepts with probability at least $2\varepsilon$\ if
$f_{n}$\ is a permutation, and with probability at most $\varepsilon$\ if
$f_{n}$\ is far from a permutation, for some $\varepsilon>0$. \ This means
that we can use Theorem \ref{mainthm}\ to diagonalize against $\mathsf{SBQP}$\ (or equivalently $\mathsf{A}_{\mathsf{0}}\mathsf{PP}$) machines.
\end{proof}
\section{Open Problems\label{OPEN}}
\begin{enumerate}
\item[(1)] It is strange that our lower bound works only for the Permutation
Testing Problem, and not for the original collision problem (i.e., for
certifying that $f$ is one-to-one rather than two-to-one). \ Can we rule out
succinct quantum proofs for the latter?
\item[(2)] Even for PTP, there remains a large gap between the upper and lower
bounds that we can prove on $\mathsf{QMA}$\ query complexity. \ Recall that
our lower bound has the form $Tw=\Omega\left( n^{1/3}\right) $, where
$T$\ is the query complexity and $w$ is the number of qubits in the witness.
\ By contrast, if $w=o\left( n\log n\right) $, then we do not know of
\textit{any} $\mathsf{QMA}$\ protocol that achieves $T=o\left( n^{1/3}\right) $---i.e., that does better than simply ignoring the witness and
running the Brassard-H\o yer-Tapp algorithm. \ It would be extremely
interesting to get sharper results on the tradeoff between $T$ and $w$. \ (As
far as we know, it is open even to get a sharp tradeoff for \textit{classical}
$\mathsf{MA}$\ protocols.)
\item[(3)] For the collision problem, the PTP, or any other black-box problem,
is there a gap (even just a polynomial gap) between the $\mathsf{QMA}$\ query
complexity and the $\mathsf{QCMA}$\ query complexity? \ This seems like a
difficult question, since currently, the one lower bound technique that we
have for\ $\mathsf{QCMA}$---namely, the reduction to $\mathsf{SBQP}$\ exploited in this paper---\textit{also} works for $\mathsf{QMA}$. \ It
follows that a new technique will be needed to solve the old open problem of
constructing an oracle $A$\ such that $\mathsf{QCMA}^{A}\neq\mathsf{QMA}^{A}$.
\ (Currently, the closest we have is a \textit{quantum} oracle separation
between $\mathsf{QMA}$\ and $\mathsf{QCMA}$, shown by Aaronson and Kuperberg
\cite{ak}.)
\item[(4)] Watrous (personal communication) asked whether there exists an
oracle $A$ such that $\mathsf{SZK}^{A}\not \subset \mathsf{PP}^{A}$. \ Since
$\mathsf{PP}\subseteq\mathsf{P}^{\mathsf{PromiseA}_{\mathsf{0}}\mathsf{PP}}$, our oracle separation between\ $\mathsf{SZK}$ and $\mathsf{A}_{\mathsf{0}}\mathsf{PP}$ comes \textquotedblleft close\textquotedblright\ to answering
Watrous's question. \ However, a new technique seems needed to get from
$\mathsf{A}_{\mathsf{0}}\mathsf{PP}$\ to $\mathsf{PP}$.
\end{enumerate}
\bibliographystyle{plain}
\section{Introduction}
The introduction of singlet neutrino fields which can propagate in
extra spatial dimensions as well as in the usual three-dimensional space
may lead to naturally small Dirac neutrino masses, due to a volume
suppression. If those singlets mix with standard neutrinos, they may
have an impact on neutrino oscillations, even if
the size of the largest extra dimension is smaller than
$2 \times 10^{-4}$ m (the current limit from Cavendish-type
experiments which test Newton's law).
\section{Theoretical Framework}
Here we consider the model discussed in Ref.~\cite{DLP02} where the 3
standard model (SM) left-handed neutrinos $\nu^{\alpha}_{L}$
and the other SM fields, including the Higgs ($H$), are confined to
propagate in a 4-dimensional spacetime, while 3 families of SM singlet
fermions ($\Psi^\alpha$) can propagate in a higher dimensional
spacetime with at least two compactified extra dimensions, one
of them ($y$) compactified on a circle of radius $a$ much larger
than the size of the others, so that in practice we can use a
5-dimensional treatment.
The singlet fermions have Yukawa couplings $\lambda_{\alpha \beta}$
with the Higgs and the SM neutrinos leading to Dirac masses and
mixings among active species and sterile KK modes. This can be
derived from the action
\vglue -0.7cm
\begin{eqnarray*}
S \, &= &\, \int d^4 x \, dy \, \imath \, \bar{\Psi}^{\alpha} \, \Gamma_{J} \, \partial^{J}\Psi^\alpha \\
&+& \int d^4 x \, \imath \, \bar \nu^{\alpha}_{L} \, \gamma_{\mu} \, \partial^{\mu}
\nu^\alpha_L \\
&+& \int d^4 x \, \lambda_{\alpha \beta} \, H \, \bar \nu^{\alpha}_{L} \, \Psi^\beta_{R} (x,0)+\mbox{h.c.},
\end{eqnarray*}
\vglue -0.1cm
where $\Gamma_J$, $J=0,\ldots,4$, are the 5-dimensional Dirac matrices. After
dimensional reduction and electroweak symmetry breaking, this action gives rise to
the effective neutrino mass Lagrangian
\vglue -0.6cm
\begin{eqnarray*}
\mathcal{L_{\rm eff}} \, &= & \displaystyle
\sum_{\alpha,\beta}m_{\alpha\beta}^{D}\left[\overline{\nu}_{\alpha
L}^{\left(0\right)}\,\nu_{\beta R}^{\left(0\right)}+\sqrt{2}\,
\sum_{N=1}^{\infty}\overline{\nu}_{\alpha
L}^{\left(0\right)}\,\nu_{\beta
R}^{\left(N\right)}\right] \\
& + &\sum_{\alpha}\sum_{N=1}^{\infty}\displaystyle
\frac{N}{a}\, \overline{\nu}_{\alpha L}^{\left(N\right)} \,
\nu_{\alpha R}^{\left(N\right)} +\mbox{h.c.},
\end{eqnarray*}
\vglue -0.1cm
where the Greek indices $\alpha,\beta = e,\mu,\tau$, the capital Roman
index $N=1,...,\infty$, $m_{\alpha \beta}^{D}$ is a Dirac mass
matrix, $\nu^{(0)}_{\alpha R}$ , $\nu^{(N)}_{\alpha R}$ and
$\nu^{(N)}_{\alpha L}$ are the linear combinations of the singlet fermions
that couple to the SM neutrinos $\nu^{(0)}_{\alpha L}$.
In this context one can compute the active neutrino transition probabilities
\[ P(\nu_{\alpha}^{(0)}\to\nu_{\beta}^{(0)};L)= \] \vglue -0.9cm
\[ \bigg|\sum_{i,j,k}\sum_{N=0}^{\infty}U_{\alpha i}U_{\beta k}^{*}W_{ij}^{(0N)*}W_{kj}^{(0N)}\exp\left(i\frac{\lambda_{j}^{(N)2}L}{2Ea^{2}}\right)\bigg|^{2}
\]
where $U$ and $W$ are the mixing matrices for active and KK modes,
respectively. Here $\lambda_j^{(N)}$ is a dimensionless eigenvalue of
the evolution equation~\cite{DLP02} which depends on the $j$-th
neutrino mass ($m_j$), hence on the mass hierarchy, $L$ is the baseline and $E$
is the neutrino energy.
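To make the structure of this expression concrete, the sketch below (ours) evaluates the probability for a KK tower truncated at $N_{\rm max}$, taking $U$, $W$, and $\lambda_j^{(N)}$ as precomputed inputs. Obtaining $W$ and $\lambda_j^{(N)}$ requires solving the bulk-brane eigenvalue problem of Ref.~\cite{DLP02}, which is not reproduced here, and the array layout is an assumption of this sketch.
\begin{verbatim}
import numpy as np

def transition_prob(alpha, beta, U, W, lam, phase_factor):
    # Literal truncation of the amplitude
    #   sum_{i,j,k,N} U[a,i] conj(U[b,k]) conj(W[i,j,N]) W[k,j,N]
    #                 * exp(1j * lam[j,N]**2 * phase_factor),
    # with phase_factor = L / (2 E a^2) in consistent units.
    amp = 0.0 + 0.0j
    for N in range(W.shape[2]):
        phase = np.exp(1j * lam[:, N] ** 2 * phase_factor)  # over j
        A = U[alpha, :] @ np.conj(W[:, :, N])   # sum over i
        B = np.conj(U[beta, :]) @ W[:, :, N]    # sum over k
        amp += np.sum(A * B * phase)            # sum over j
    return np.abs(amp) ** 2

# Dummy inputs with the assumed shapes (3 flavors, 5 KK modes):
U = np.eye(3, dtype=complex)
W = np.zeros((3, 3, 5), dtype=complex); W[:, :, 0] = np.eye(3)
lam = np.ones((3, 5))
print(transition_prob(1, 1, U, W, lam, phase_factor=0.1))  # -> 1.0
\end{verbatim}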
To illustrate what is expected, we plot in Fig.~\ref{fig:prob} the survival
probabilities for $\nu_\mu \to \nu_\mu$ and $\bar \nu_e \to \bar \nu_e$ as
a function of $E$. We show the behavior for the normal and inverted mass
hierarchy assuming the lightest neutrino to be massless ($m_0=0$).
The effect of this large extra dimension (LED) depends on the product $m_j a$.
We observe that in the $\nu_\mu \to \nu_\mu$ channel the effect of LED
is basically the same for normal (NH) and inverted (IH) hierarchies, since in
this case all the amplitudes involved are rather large.
On the other hand, for $\bar \nu_e \to \bar \nu_e$ the effect is smaller for
NH as in this case the dominant $m_j a$ term is suppressed by $\theta_{13}$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.50\textwidth]{probs.pdf}
\end{center}
\vglue -1.3cm
\caption{In the top (bottom) panel we show the survival probability for
$\nu_\mu$ ($\bar \nu_e$) as a function of the neutrino energy for
$L=735$ km ($180$ km) for $a=0$ (no LED, black curve) and
$a=5\times 10^{-7}$ m for normal hierarchy (dashed blue curve) and
inverted hierarchy (dotted red curve).}
\label{fig:prob}
\vglue -0.7cm
\end{figure}
\section{Results}
As we can observe in Fig.~\ref{fig:prob}, the main effect of LED is a shift in
the oscillation maximum, together with a decrease in the survival probability due to
oscillations into KK modes. This makes experiments such as KamLAND and MINOS,
which are currently the best experiments to measure $\Delta m^2_{\odot}$ and
$\vert \Delta m^2_{\rm atm}\vert$, respectively, also the best experiments to
test for LED.
We have used the recent MINOS~\cite{MINOS10} and KamLAND~\cite{Kam08}
results and reproduced their allowed regions for the standard
oscillation parameters. For this and the LED study we have modified
GLoBES \cite{globes} according to our previous analysis of these experiments
in \cite{MNTZ05} and \cite{MKP10}.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.42\textwidth]{exclusions.pdf}
\end{center}
\vglue -1.3cm
\caption{Excluded region in the plane $m_0 \times a$ by KamLAND (upper panel),
MINOS (center panel) and combined (lower panel).
}
\label{fig:exc}
\vglue -0.5cm
\end{figure}
In Fig.~\ref{fig:exc}, we present the regions in the plane $m_0
\times a$ excluded by MINOS, KamLAND, and their combined data at 90 and 99\%
CL (2 dof). When finding these regions, all standard oscillation parameters
were considered free. To account for our previous knowledge of their
values \cite{concha}, we added Gaussian priors to the $\chi^2$ function.
As expected the limits provided by MINOS ($\nu_\mu \to \nu_\mu$) are
basically the same for NH and IH. From their data we obtain
$a < 7.3 (9.7) \times 10^{-7}$ m in the hierarchical
case for $m_0 \to 0$ and $a < 1.2 (1.6) \times 10^{-7}$ m at 90 (99)\%~CL
for degenerate neutrinos with $m_0 = 0.2$ eV. We have verified that the
inclusion of LED in the fit of the standard atmospheric oscillation
parameters does not modify very much the region in the plane
$\sin^2 2\theta_{23} \times \vert \Delta m^2_{\rm atm} \vert$ allowed
by MINOS data. In fact the best fit point as well as the $\chi^2_{\rm min}$
remain the same as in the case without LED.
KamLAND data provide, for hierarchical neutrinos with $m_0 \to 0$, a
competitive limit only for IH, in this case one gets $a < 8.5 (9.8)
\times 10^{-7}$ m at 90 (99)\% CL. For degenerate neutrinos with
$m_0= 0.2$ eV one also gets from KamLAND $a < 2.0 (2.3) \times
10^{-7}$ m at 90 (99)\% CL. The inclusion of LED in the fit of the
standard solar oscillation parameters here enlarges the region in the
plane $\tan^2 \theta_{12} \times \Delta m^2_{\odot}$ allowed by
KamLAND data. The best fit point changes from $\Delta m^2_{\odot} =
7.6 \times 10^{-5}$ eV$^2$ and $\tan^2 \theta_{12}=0.62$ to $\Delta
m^2_{\odot} = 8.6 \times 10^{-5}$ eV$^2$ and $\tan^2 \theta_{12}=0.42$,
however, the $\chi^2_{\rm min}/\rm dof$ remains the same.
When one combines both experiments the hierarchical limits
improve but the degenerate limit remains practically the one given by MINOS.
\section{Conclusions}
\vglue -0.1cm
We have investigated the effect of LED on neutrino oscillation data,
deriving limits on the largest extra dimension $a$ from the
most recent data of the MINOS and KamLAND experiments. For hierarchical
neutrinos with $m_0 \to 0$, MINOS and KamLAND constrain $a < 6.8 (9.5)
\times 10^{-7}$ m for NH and $a < 8.5 (9.8) \times 10^{-7}$ m
at 90 (99)\% CL for IH. For degenerate neutrinos with
$m_0= 0.2$ eV their combined data constrain $a < 2.1 (2.3) \times
10^{-7}$ m at 90 (99)\% CL.
We estimate that the future Double CHOOZ experiment will be able to
improve these limits by a factor of 2 for the IH and by a factor of 1.5 for
the degenerate case. Unfortunately, NO$\nu$A and T2K cannot improve
the MINOS limits. See~\cite{MNZ-LED} for details.
RZF and HN thank Profs. Fogli and Lisi for the cordial invitation
to participate in NOW2010. This work has been
supported by the Brazilian funding agencies FAPESP, FAPERJ and CNPq.
\section{Eigenface analysis}
Figure \ref{figure:pca_and_faces} visualizes the eigenfaces corresponding to the top six principal components, which explain $78.2\%$ of the total variance. The pipeline first extracts eigenfaces from the basic faces via PCA and then regresses eigenface coefficients from VAD scores:
\begin{align*}
\underbrace{f_1, f_2, \ldots, f_{21}}_{\text{basic faces}} &\xrightarrow{\text{PCA}} \text{eigenfaces}, \\
(v,a,d) &\xrightarrow{\text{regression}} \text{eigenfaces}.
\end{align*}
That is, we perform a linear regression from VAD scores to the eigenfaces.
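A minimal scikit-learn sketch of this two-step pipeline follows; it is ours and only illustrative: the 21 basic faces are random stand-ins, and the number of blendshape dimensions (51) is an assumption.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
faces = rng.normal(size=(21, 51))        # 21 basic faces (stand-ins)
vad = rng.uniform(-1, 1, size=(21, 3))   # (valence, arousal, dominance)

pca = PCA(n_components=6).fit(faces)     # principal axes = eigenfaces
coeffs = pca.transform(faces)            # faces in eigenface coordinates
print(pca.explained_variance_ratio_.sum())   # cf. the 78.2% in the text

reg = LinearRegression().fit(vad, coeffs)    # VAD -> eigenface coeffs
face_hat = pca.inverse_transform(reg.predict([[0.5, 0.2, -0.1]]))
print(face_hat.shape)                        # reconstructed face vector
\end{verbatim}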
\begin{figure}[h]
\centering
\includegraphics[width=0.95\linewidth]{images/pca_and_faces.pdf}
\caption{Eigenfaces and the ratio of the variances explained by the number of principal components.}
\label{figure:pca_and_faces}
\end{figure}
\section{Relation identification}
We compare our method (ST-AOG + MCMC) with other methods for predicting the social relations between characters in our good-quality animated scenes. The input features include the one-hot encoding of the labeled names of motions and emotions and the VADI scores of motions and emotions obtained from the previous steps. Using these encodings, we run logistic regression and a two-layer neural network as baselines. We report the performances in Table \ref{table:relation_prediction}. The results demonstrate that our method performs comparably to the baselines in predicting the type of dominance and better in predicting the type of intimacy.
\begin{table}[h]
\begin{tabular}{c||cc}
\hline
& \textbf{Dominance} & \textbf{Intimacy} \\ \hline \hline
\textbf{Logistic regression} & 43.6\% & 55.6\% \\
\textbf{Neural network} & 44.3\% & 66.9\% \\
\textbf{Ours} & 43.5\% & 71.4\% \\ \hline
\end{tabular}
\caption{The training accuracy for different methods. Each method predicts the dominance and intimacy separately as multi-class classification problems.}
\label{table:relation_prediction}
\end{table}
\section{Exploratory analysis of our dataset}
\begin{figure}[h]
\centering
\includegraphics[width=0.95\linewidth]{images/label_motion_vadi.pdf}
\caption{(a) The density plot for the valence and arousal scores of labeled motions. (b) The density plot for the valence and arousal scores of labeled emotions. (Notice that motions and emotions also have dominance scores, but we only show valence and arousal scores for better visualization.) (c) The distribution of the dominance and intimacy scores of labeled relations.}
\label{figure:label_motion_vadi}
\end{figure}
Figure \ref{figure:label_motion_vadi} plots the distribution of the labeled samples' arousal and valence scores for motions and emotions: a large part of the valence-arousal space is covered. While motions are widely distributed, emotions concentrate around the center, indicating minor changes in facial expressions across most animation samples. Figure \ref{figure:label_motion_vadi} also shows the distribution of labeled relations with respect to their dominance and intimacy scores.
\subsection{Animation and interpolation}
\begin{figure}[h]
\centering
\includegraphics[width=0.95\linewidth]{images/interpolation_comparison.pdf}
\caption{The comparison between original rotations and interpolated rotations of the joints: neck, root~(\textit{hips} in Mixamo rigging), left hip~(\textit{left up leg} in Mixamo rigging), and left shoulder for the animation \textit{Quick Informal Bow}.}
\label{figure:interpolation_comparison}
\end{figure}
\section{VRNN and Transformer-VAE}
Figure \ref{figure:motion_vae_error} compares the performance of Transformer-VAE and VRNN; the bar plot contrasts the two models' errors in predicting the poses of different body parts.
\begin{figure}[h]
\centering
\includegraphics[width=0.95\linewidth]{images/degree_difference.pdf}
\caption{The comparison between original rotations and predicted rotations of the joints: (a) the density plot of the prediction error from Transformer-VAE; (b) the density plot of the prediction error from VRNN; (c) comparison between Transformer-VAE and VRNN for different body parts. (See detailed classification of joints in the appendix.) }
\label{figure:motion_vae_error}
\end{figure}
\section{Labels}
Table \ref{table:one} shows label types with label options.
\begin{table}[h]
\centering
\begin{tabular}{c c}\toprule
\textbf{Label} & \textbf{Option} \\
\midrule
Quality & good/medium/bad\\
\hline
Dominance score & low/medium/high \\
Intimacy score & low/medium/high \\
\bottomrule\\
\end{tabular}
\caption{Scene labels and options}
\label{table:one}
\end{table}
\section{Application}
We present two applications of our model:
\begin{itemize}
\item \textbf{Emotion sampling}: given two characters' social relation and motions, our model can generate the corresponding animations of facial expressions.
\item \textbf{Motion completion}: given two characters' social relation, emotions, and part of the movement information, our model can complete the characters' subsequent body movements.
\end{itemize}
\subsection{Emotion sampling}
Facial expressions are among the universal forms of body language and nonverbal communication. Currently, there is a lack of study on how a virtual character should direct his/her facial expression toward another character, resulting in poor playability in a game or a virtual reality (VR) platform. For example, suppose a user is shaking hands with a VR character who offers his/her hand but does not show any facial expression. The expressionless face may make the user confused or even scared.
Our work proposes a method to automatically generate facial expressions for virtual characters. Given two characters' motions, their social relation, and the emotion of one of the characters, our task is to sample the ending facial expression of the other character from his/her starting facial expression. We consider six scenarios: \textit{wave hands} (between friends), \textit{high-five} (between brothers), \textit{shake hands} (between strangers), \textit{apologize} (employee to employer), \textit{criticize} (teacher to student), and \textit{quarrel} (between colleagues). Figure \ref{figure:ms_scene} depicts snapshots of the motions of the two characters.
We apply the emotion dynamic $q_e$ a total of $20$ times for each scenario, changing the value of $v$, $a$, or $d$ by $\pm 0.1$ for each proposed $pg^\prime$. Table \ref{Table:emotion} shows the sampling results. With the help of VAD scores, facial expressions can be synthesized accurately.
\subsection{Motion completion}
Common methods to control the motions of a character, including finite state machines and behavior trees, rely heavily on manually created animations.
Our model also provides an innovative way to help generate body poses for characters. Given the two characters' emotions and their social relation, our task is to sample one character's next body pose based on his/her previous poses. We consider the same six scenarios as those in emotion sampling. However, in this case, we only keep the motions of the left character and set the starting pose of the right one to \textit{standing}. The social relation and emotions follow those in Table \ref{Table:emotion}.
Figure \ref{figure:ms_scene} plots the skeletons of sampled poses. For each scenario, we use VRNN to sample two pose sequences that represent the character's body movement after $0.5$ second and $1.0$ second. We can see that in most cases, our sampled motions convey social interaction meanings. For example, to respond to the left character's high-five proposal, the right character may raise his hand or jump happily.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\linewidth]{images/ms_scene.pdf}
\caption{Motion sampling results. Our model samples the animations (shown in the rectangle) that represent the response of a character in different scenarios. For each scenario, we sample two sets of body poses.}
\label{figure:ms_scene}
\end{figure}
\section{Conclusion}
We propose a method to generate animations of two-character social interaction scenes automatically. Our approach can be useful for tasks including but not limited to: i) assisting animators in making keyframe animations including character poses and facial expressions; ii) helping game developers generate vivid NPC interaction events; iii) offering better emotional intelligence for VR agents.
\newpage
\section{Data collection}
Traditional methods \cite{creswell2018generative} for training a generative model prefer a well-labeled dataset: the model rarely modifies the original dataset after preparation steps such as data preprocessing and augmentation. However, there is no suitable dataset to train our model. Fortunately, our ST-AOG is a generative model that can sample animations.
We start from low-quality animations sampled by our ST-AOG, since the ST-AOG is initialized in a random state and at first samples distorted body poses, weird facial expressions, and meaningless interpretations of the two-character social interactions. Then, we introduce \textbf{iterative data generating and labeling} (IDGAL) to render two-character interaction scenes, generating more samples while improving sample quality as the model trains. Each round of IDGAL contains three steps: sampling scenes from our ST-AOG, labeling scenes, and updating the ST-AOG using the labeled data.
\begin{figure}[h]
\centering
\includegraphics[width=0.35\textwidth]{images/iterative.pdf}
\vspace{-2mm}
\caption{Comparison between (a) regular machine learning pipeline and (b) iterative data generating and labeling.}
\vspace{-2mm}
\label{fig:my_label}
\end{figure}
\subsection{Sampling scenes}
Sampling $pg$ from our ST-AOG requires selecting Or-nodes and terminal nodes from the \textit{transform branch}, \textit{relation branch}, \textit{emotion branch}, and \textit{motion branch}. We assume that each character in a scene has one motion, one starting facial expression and one ending facial expression.
The \textit{transform branch} samples the initial relative distance between the two characters, and we assume that they stand face to face. The \textit{relation branch} sets up their social relation (see Figure \ref{figure:relation_types}). The ST-AOG samples emotions from a pool of $21$ basic facial expressions (e.g., smile, happy face, and sad face) built by facial rigging following the Facial Action Coding System \cite{cohn2007observer}, and annotates their VAD scores with the NRC-VAD Lexicon~\cite{mohammad2018obtaining}. The motion pool we gathered is from Adobe Mixamo \cite{blackman2014rigging}. We select a total of $65$ single-person animations and label their names (such as \textit{kissing}, \textit{bowing} and \textit{yelling}) in order to obtain their VAD scores from the NRC-VAD Lexicon~\cite{mohammad2018obtaining}.
The above process specifies the configuration of $pg$ and the VADI scores that determine the spatial potential $\mathcal{E}\left(S_{p t} \mid \Theta\right)$. To obtain the temporal potential $\mathcal{E}\left(T_{p t} \mid \Theta\right)$, we further assume that the time difference between motion start and emotion start, as well as the time misalignments between the characters' motions and emotions $(t_{1,m} - t_{2,m})$ and $(t_{1,e} - t_{2,e})$, follow the standard normal distribution.
\subsection{Labeling scenes}
Training our ST-AOG requires labeling expert data (see Equation \ref{eq:train} and \ref{eq:gradient}): the model learns to sample those expert data with higher probabilities. Therefore, we label the scene's quality by asking experimenters' subjective judgments on whether the scene is reasonable and meaningful.
\subsection{Updating ST-AOG}
We rewrite the probability
distribution of cliques formed on terminal nodes as
\begin{align} \label{eq:train}
p\left(S_{p t}, T_{p t} \mid \Theta\right) &=\frac{1}{Z} \exp \left\{-\mathcal{E}\left(S_{p t} \mid \Theta\right) -\mathcal{E}\left(T_{p t} \mid \Theta\right) \right\} \nonumber \\ &=\frac{1}{Z} \exp \left\{-\left\langle\lambda, l\left(ST_{p t}\right)\right\rangle\right\} ,
\end{align}
where $\lambda$ is the weight vector and $l\left(ST_{p t}\right)$ is the loss vector given by Equations \ref{eq:S_me}, \ref{eq:S_re}, \ref{eq:S_rm}, \ref{eq: T_me} and \ref{eq: T_r}.
To learn the weight vector, the standard maximum likelihood estimation (MLE) maximizes
the log-likelihood:
\begin{equation}
\mathcal{L}\left(S_{p t}, T_{p t} \mid \Theta\right)=-\frac{1}{N} \sum_{n=1}^{N}\left\langle\lambda, l\left(ST_{pt} \right)\right\rangle-\log Z .
\end{equation}
It is usually maximized by gradient ascent:
\begin{align}\label{eq:gradient}
&\frac{\partial \mathcal{L}\left(ST_{p t} \mid \Theta\right)}{\partial \lambda}=-\frac{1}{N} \sum_{n=1}^{N} l\left(ST_{p t_{n}}\right)-\frac{\partial \log Z}{\partial \lambda} \nonumber \\
&=-\frac{1}{N} \sum_{n=1}^{N} l\left(ST_{p t_{n}}\right)+\frac{1}{\tilde{N}} \sum_{\tilde{n}=1}^{\tilde{N}} l\left(ST_{p t_{\tilde{n}}}\right) ,
\end{align}
where $\{l(ST_{p t_{\tilde{n}}})\}_{\widetilde{n}=1, \cdots, \widetilde{N}}$ is calculated from synthesized examples drawn from the current model, and $\{l(ST_{p t_{n}})\}_{n=1, \cdots, N}$ from expert samples (gold labels) in the labeled dataset.
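A minimal sketch of one such update follows; the function \texttt{loss\_vec}, which returns the loss vector $l(ST_{pt})$ of a parse graph, is an assumed helper.
\begin{verbatim}
# A minimal sketch of one update of the weight vector lambda, assuming
# `loss_vec(pg)` returns the loss vector l(ST_pt) of a parse graph.
import numpy as np

def update_lambda(lmbda, expert_pgs, synthesized_pgs, lr=1e-3):
    """One ascent step on the log-likelihood of the log-linear model."""
    expert_mean = np.mean([loss_vec(pg) for pg in expert_pgs], axis=0)
    model_mean = np.mean([loss_vec(pg) for pg in synthesized_pgs], axis=0)
    grad = -expert_mean + model_mean  # gradient from the equation above
    return lmbda + lr * grad
\end{verbatim}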
\subsection{A dataset of two-character animations}
We apply Equation \ref{eq:gradient} to train our model from the labeled data. We truncate the low-likelihood samples, and in each round of updating the ST-AOG we take $100$ epochs of gradient updates (Equation \ref{eq:gradient}) with the learning rate set to $10^{-3}$. After getting the sampled scenes in each round, experimenters were asked to examine the scenes and label them as good/medium/bad. In total, three rounds of IDGAL give us $1,240$ well-labeled animations with a total length of $9,285$ seconds. Table \ref{table:iterative} shows the improvement over the three rounds of IDGAL: the percentage of good samples rises whereas the percentage of bad samples decreases.
\begin{table}[h]
\begin{tabular}{c||ccc}
\hline
& \textbf{Round one} & \textbf{Round two} & \textbf{Round three} \\ \hline\hline
\textbf{Count} & 440 & 400 & 400 \\
\textbf{Good rate} & 36.8\% & 39.5\% & 40.0\% \\
\textbf{Bad rate} & 36.0\% & 26.5\% & 24.0\% \\ \hline
\end{tabular}
\caption{Quality labeling results of different rounds.}
\label{table:iterative}
\end{table}
\section{Introduction}
Traditional 3D animation is time-consuming. From sculpting 3D meshes, building rigging and deformation, to assigning skin weights, those preparatory steps make a 3D character ready to move. Beyond those efforts, an animator designs detailed movement for each body joint by setting keyframes or obtaining live actions from a motion capture device. That complicated work requires precise adjustment and careful consideration of many details.
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\textwidth]{images/intro.pdf}
\caption{When animating a two-character social interaction scene, we need to consider the consistency of each character's body movements and facial expressions and the interplay between them. The figure on the left shows the initial frame of the animation; the figures on the right plot the sampling results of our method for the body movements and facial expressions of the two characters every $0.5$ second.}
\label{figure:intro}
\end{figure*}
Recent years have witnessed the rapid evolution of machine learning methods that facilitate animation making~\cite{lee2018interactive, zhang2018data, taylor2017deep}. However, few works aim to make multi-character animation from a data-driven approach. In multi-character animations, the movements of one character interplay with those of others, bringing out complicated combinations of body poses, hand gestures, and facial expressions. To synthesize meaningful animations, especially for multi-character social interactions, we argue there exist three main difficulties:
\begin{itemize}
\item Multi-character animation should interpret meaningful social interactions. (e.g., two characters say greetings with a high five.)
\item Body motions of one character must be consistent with his/her facial emotions. For example, a character applauds with a happy face.
\item Multi-character animation brings additional constraints between characters to match their body movements and facial expressions temporally.
\end{itemize}
To address these challenges, first, we use the Spatial-Temporal And-Or Graph (ST-AOG)~\cite{xiong2016robot} that samples a two-character key-frame animation from a stochastic grammar. The ST-AOG allows us to set up the contextual constraints for (body) motions, (facial) emotions, and the (social) relation between the two characters across time. Then, we propose Iterative Data Generating And Labeling (IDGAL) to alternatively sample scenes, label samples, and update model parameters, allowing us to efficiently train the ST-AOG from limited human-annotated data. Next, to make detailed animation for single-character motion and emotion, we apply the Eigenface method~\cite{turk1991eigenfaces} to encode facial joints for generating facial expressions, and a Variational Recurrent Neural Network (VRNN)~\cite{chung2015recurrent} to control body joints for generating body poses. Finally, using Markov Chain Monte Carlo (MCMC), the well-trained ST-AOG helps the Eigenface and VRNN to sample animations.
Our work makes two major contributions: (1) we pioneer the joint, automatic sampling of motion, emotion, and social relation for a multi-character configuration; (2) we present IDGAL to collect data while training a stochastic grammar. We plan to make our work (including our model and collected dataset) open-source to encourage research on synthesizing multi-character animation.
The following sections first review some related works and then define the ST-AOG representing a two-character scene. Next, we formulate the stochastic grammar of the ST-AOG and propose the learning algorithm with IDGAL. Finally, after pre-training the Eigenface and VRNN, we can sample animations using MCMC~\footnote{The animation part in our work is done by Autodesk Maya~\cite{maya}}.
\section{Representation and formulation}
In this section, we first deliver a brief mathematical definition of skeletal animation. Then, we introduce the norms of \textbf{valence}, \textbf{arousal}, \textbf{dominance} and \textbf{intimacy}, which set constraints between motion, emotion, and social relation. Finally, we construct the stochastic grammar for animations.
\subsection{Skeletal animation}
\textit{Skeletal animation} is a computer animation technique that enables a hierarchical set of body joints to control a character. Let $j$ denote one body joint that is characterized by its joint type, rotation, and position. A body pose $p$ is defined as a set of joints $\{j_u\}_{u=1,2,...,n}$ controlling the whole body, where $n$ is the total number of joints. Similarly, a facial expression $f$ is characterized by facial rigging. We can make an animation by designing the body pose $p$ and facial expression $f$ at the $k$-th keyframe at time $t_k$.
We define a \textbf{motion} $m$ as a sequence of body poses and an \textbf{emotion} $e$ as a sequence of facial expressions corresponding to the keyframes.
\subsection{Valence, arousal, dominance, and intimacy}
Norms of valence, arousal, and dominance (VAD) are standardized to assess environmental perception, experience, and psychological responses~\cite{warriner2013norms}. \textbf{Valence} $v$ describes a stimulus's pleasantness, \textbf{arousal} $a$ quantifies the intensity provoked by a stimulus, and \textbf{dominance} $d$ evaluates the degree of control~\cite{bakker2014pleasure}. Norms of VAD can also describe a facial expression~\cite{ahn2012nvc}. Figure \ref{figure:vad} plots different facial expressions with varying degrees of valence, arousal, and dominance.
\begin{figure}[h]
\centering
\includegraphics[width=.45\textwidth]{images/nvc.pdf}
\caption{Norms of VAD and facial expressions~\cite{ahn2012nvc}}
\vspace{-3mm}
\label{figure:vad}
\end{figure}
Introducing the concept of \textit{intimacy}, we extend the VAD norms to VADI norms. Specifically, \textbf{intimacy} $i$ describes the closeness of the relationship~\cite{karakurt2012relationship}. We describe the \textbf{social relation} $r$ between two characters by their relative dominance and intimacy $(d, i)$. Figure~\ref{figure:relation_types} plots different types of social relations along with their dominance and intimacy.
\begin{figure}[h]
\centering
\includegraphics[width=0.30\textwidth]{images/relation_types.pdf}
\caption{Examples of different social relations and their corresponding dominance-intimacy scores.}
\vspace{-3mm}
\label{figure:relation_types}
\end{figure}
Norms of valence, arousal, dominance, and intimacy form the space in which we set constraints between motion, emotion, and social relation. We will discuss this in detail in the next section.
\subsection{Stochastic grammar for two-character animations}
We define a Spatial-Temporal And-Or Graph (ST-AOG)
\begin{equation}
\mathcal{G}=(R, V, C, P, S, T)
\end{equation}
to represent the social interaction scene of two characters, where $R$ is the root node representing the scene, $V$ the node set, $C$ the set of production rules (see Figure \ref{figure:aog}), and $P$ the probability model. The spatial relation set $S$ represents the contextual relations between terminal nodes, and the temporal relation set $T$ represents the time dependencies.
\textbf{Node Set} $V$ can be decomposed into a finite set of nonterminal and terminal nodes: $V = V^{NT} \cup V^T$. The set of non-terminal nodes $V^{NT}$ consists of two subsets $V^{And}$ and $V^{Or}$. A set of \textbf{And-nodes} $V^{And}$ is a node set in which each node represents a decomposition of a larger entity (e.g., one character's animation) into smaller components (e.g., emotion and motion). A set of \textbf{Or-nodes}
$V^{Or}$ is a node set in which each node branches to alternative decompositions (e.g., the intimacy score between two characters can be low, medium, or high). The selection rule of Or-nodes follows the probability model $P$; in our work, we select the child nodes with equal probability. The \textbf{terminal nodes} $V^{T}$ represent entities with different meanings according to context: terminal nodes under the \textit{relation branch} identify the relationship between the two characters; the ones under the \textit{motion branch} determine their motions, and the ones under the \textit{emotion branch} depict emotions.
\textbf{Spatial Relations} $S$ among nodes are represented
by the horizontal links in ST-AOG forming Conditional Random Fields (CRFs) on the terminal nodes. We define different potential functions to encode pairwise constraints between motion $m$, emotion $e$, and social relation $r$:
\begin{equation}
S = S_{me}\cup S_{re} \cup S_{rm} .
\end{equation}
$S_{me}$ sets constraints on the motion and emotion to ensure that the body movement supports the emotion according to the social affordance. For example, the crying action (rubbing eyes) can hardly be compatible with a big smile. $S_{re}$ regulates the emotion when the social relation is considered. For instance, an employee has few chances to laugh presumptuously in front of the boss. Similarly, $S_{rm}$ selects suitable body motions given the social relation. For example, kissing is allowed for couples.
\textbf{Temporal Relations} $T$ among nodes are also represented by the links in ST-AOG to address time dependencies. Temporal relations are divided into two subsets:
\begin{equation}
T = T_{me} \cup T_{r} .
\end{equation}
$T_{me}$ encodes the temporal associations between motion and emotion. $T_{r}$ describes the extent to which the two characters' animations match temporally.
A hierarchical parse tree $pt$ is an instantiation of the ST-AOG obtained by selecting a child node for each Or-node and determining the terminal nodes. A parse graph $pg$ consists of a parse tree $pt$ together with the spatial relations $S_{pt}$ and temporal relations $T_{pt}$ on it:
\begin{equation}
pg = (pt, S_{pt}, T_{pt}) .
\end{equation}
\subsection{Probabilistic model of ST-AOG}
A scene configuration is represented by a parse graph $pg$, including animations and social relations of the two characters. The probability of $pg$ generated by an ST-AOG parameterized by $\Theta$ is formulated as a Gibbs distribution:
\begin{small}
\begin{align} \label{eq:energy}
p(pg \mid \Theta) &=\frac{1}{Z} \exp \{-\mathcal{E}(p g \mid \Theta)\} \nonumber\\
&=\frac{1}{Z} \exp \left\{-\mathcal{E}(p t \mid \Theta)-\mathcal{E}\left(S_{p t} \mid \Theta\right) -\mathcal{E}\left(T_{p t} \mid \Theta\right) \right\} ,
\end{align}
\end{small}
\noindent where $\mathcal{E}(p g \mid \Theta)$ is the energy function of a parse graph, and $\mathcal{E}(p t \mid \Theta)$ of a parse tree. $\mathcal{E}(S_{p t} \mid \Theta)$ and $\mathcal{E}(T_{p t} \mid \Theta)$ are the energy terms of spatial and temporal relations.
$\mathcal{E}(pt \mid \Theta)$ can be further decomposed into the energy functions of different types of nodes:
\begin{equation}
\mathcal{E}(p t \mid \Theta)=\underbrace{\sum_{v \in V^{Or}} \mathcal{E}_{\Theta}^{Or}(v)}_{\text {non-terminal nodes }}+\underbrace{\sum_{v \in V^{T}} \mathcal{E}_{\Theta}^{T}(v)}_{\text {terminal nodes }} .
\end{equation}
\textbf{Spatial potential} $\mathcal{E}\left(S_{p t} \mid \Theta\right)$ combines the potentials of three types of cliques $C_{me},C_{re}, C_{rm}$ in the terminal layer, integrating semantic contexts mentioned previously for motion, emotion and relation:
\begin{align}
p\left(S_{p t} \mid \Theta\right) &=\frac{1}{Z} \exp \left\{-\mathcal{E}\left(S_{p t} \mid \Theta\right)\right\} \nonumber\\
&=\prod_{c \in C_{me}} \phi_{me}(c) \prod_{c \in C_{re}} \phi_{re}(c) \prod_{c \in C_{rm}} \phi_{rm}(c) .
\end{align}
We apply the norms of valence, arousal, dominance and intimacy (VADI) to quantify the triangular constraints between social relation, emotion and motion:
\begin{itemize}
\item Social relation $r$ is characterized by its dominance and intimacy $(d_r, i_r)$.
\item For emotion $e$, which is a sequence of facial expressions, we define its valence, arousal, and dominance scores as the difference between the beginning facial expression $f_0$ and the ending facial expression $f_1$:
\begin{equation}
(v_e, a_e, d_e) = (v_{f_1}, a_{f_1}, d_{f_1}) - (v_{f_0}, a_{f_0}, d_{f_0}) .
\end{equation}
\item To get the scores of a motion $m$, we first label the name $N_m$ of the motion, such as \textit{talk}, \textit{jump} and \textit{cry}. Then we can obtain the valence, arousal, and dominance scores from the NRC-VAD Lexicon \cite{mohammad2018obtaining}, which includes a list of more than 20,000 English words with their valence, arousal, and dominance scores:
\begin{equation*}
m \to N_m \to (v_m, a_m, d_m) .
\end{equation*}
\end{itemize}
Therefore, the clique set $C_{me} = \{(m, e)\}$ of the relation $S_{me}$ contains all the motion-emotion pairs in the animation, and we define its potential $\phi_{me}$ as
\begin{align}\label{eq:S_me}
\phi_{me}(c) = \frac{1}{Z_{me}^s}\exp\{\lambda_{me}^s\cdot (v_m, a_m, d_m) \cdot (v_e, a_e, d_e)^\top\}.
\end{align}
Calculating the potentials $\phi_{rm}$ on the clique $C_{rm} = \{(m, r)\}$ and $\phi_{re}$ on $C_{re} = \{(e, r)\}$ requires another variable $i_{me}$ representing the intimacy score. $i_{me}$ is defined through the distance $dist$ between the two characters compared with a standard social distance $dist_0$:
\begin{equation}
i_{me} = (dist_0 - dist) / dist_0 .
\end{equation}
Then we can define
\begin{align}
\phi_{re}(c) &= \frac{1}{Z_{re}^s}\exp\{\lambda_{re}^s\cdot (d_r, i_{r}) \cdot (d_e, i_{me})^\top\} , \label{eq:S_re} \\
\phi_{rm}(c) &= \frac{1}{Z_{rm}^s}\exp\{\lambda_{rm}^s\cdot (d_r, i_{r}) \cdot (d_m, i_{me})^\top\} . \label{eq:S_rm}
\end{align}
\textbf{Temporal potential} $\mathcal{E}\left(T_{p t} \mid \Theta\right)$ combines two potentials for time control, and we have
\begin{align}
p\left(T_{p t} \mid \Theta\right) &=\frac{1}{Z} \exp \left\{-\mathcal{E}\left(T_{p t} \mid \Theta\right)\right\} \nonumber \\
&=\prod_{c \in C^T_{me}} \psi_{me}(c) \prod_{c \in C^T_{r}} \psi_{r}(c) .
\end{align}
Potential $\psi_{me}$ is defined on clique $C^T_{me} = \{(t_{m}, t_{e})\}$ representing the time to start a motion and an emotion. We assume the time discrepancy between them follows a Gaussian distribution, then we can get
\begin{align}\label{eq: T_me}
\psi_{me}(c) = \frac{1}{Z^t_{me}}\exp \left(\lambda_{me}^t \cdot (t_{m} - t_{e})^2\right) .
\end{align}
Notice that so far the training parameters $\lambda_{me}^s$, $\lambda_{re}^s$, $\lambda_{rm}^s$, $\lambda_{me}^t$ and partition functions $Z_{me}^s$, $Z_{re}^s$, $Z_{rm}^s$, $Z_{me}^t$ should be doubled since we have two characters in the scene.
At last, to match the animation for both characters, we assume that the time differences between the ending times of their motions $t_{1,m}, t_{2,m}$ and emotions $t_{1,e}, t_{2, e}$ follow a Gaussian distribution:
\begin{align}\label{eq: T_r}
\psi_{r}(c) &= \frac{1}{Z^t_{m}}\exp \left(\lambda_{m}^t \cdot (t_{1,m} - t_{2,m})^2\right) \nonumber\\
& + \frac{1}{Z^t_{e}}\exp \left(\lambda_{e}^t \cdot (t_{1,e} - t_{2,e})^2\right) .
\end{align}
Here we have two additional parameters $\lambda_{m}^t, \lambda_{e}^t$, and two more partition functions $Z_{m}^t, Z_{e}^t$.
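To summarize the probabilistic model, the sketch below accumulates the unnormalized spatial and temporal energy of a parse graph; the attribute names on \texttt{pg} and the weight dictionary \texttt{w} are illustrative assumptions, and the partition functions are omitted since they cancel in the MCMC acceptance ratio used later.
\begin{verbatim}
# A minimal sketch of the unnormalized energy E(S_pt) + E(T_pt), assuming
# VADI annotations and keyframe times are stored on `pg`; attribute names
# are hypothetical and partition functions are dropped.
import numpy as np

def energy(pg, w):
    E = 0.0
    for ch in pg.characters:       # per-character spatial/temporal terms
        E -= w["me_s"] * np.dot(ch.vad_motion, ch.vad_emotion)
        E -= w["re_s"] * np.dot([pg.d_r, pg.i_r], [ch.d_emotion, pg.i_me])
        E -= w["rm_s"] * np.dot([pg.d_r, pg.i_r], [ch.d_motion, pg.i_me])
        E -= w["me_t"] * (ch.t_motion - ch.t_emotion) ** 2
    c1, c2 = pg.characters         # cross-character time alignment
    E -= w["m_t"] * (c1.t_motion - c2.t_motion) ** 2
    E -= w["e_t"] * (c1.t_emotion - c2.t_emotion) ** 2
    return E
\end{verbatim}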
\section{Related Work}
\subsection*{Machine learning for animation}
Recent advances in machine learning have made it possible for animators to automatize some of the tough processes of generating human motion sequences. From early approaches such as hidden Markov models \cite{tanco2000realistic, ren2005data}, Gaussian processes \cite{wang2007gaussian, fan2011gaussian} and restricted Boltzmann machines \cite{taylor2009factored}, to more recent neural architectures such as Convolutional Neural Networks (CNN) \cite{holden2016deep} and Recurrent Neural Networks (RNN) \cite{fragkiadaki2015recurrent}, synthesizing human motions relies on framing poses or gestures~\cite{pavllo2019modeling}. More recent work focuses on improving animation quality by considering physical environment~\cite{holden2017phase}, character-scene interactions~\cite{starke2019neural} or social activities~\cite{shu2016learning}.
Emotion animation often comes together with speech animation \cite{taylor2017deep}. Producing high-quality speech animation relies on speech-driven audios~\cite{Oh_2019_CVPR} and performance-driven videos obtained by facial motion captures~\cite{maurer2001wavelet, cao2015real}. Then, animations for the designated actor can be synthesized by transferring facial features~\cite{taylor2017deep} or Generative Adversarial Networks (GAN)~\cite{vougioukas2019realistic}.
\subsection*{Animation with social interaction}
Involving animations with social interactions has been studied for a long time \cite{takeuchi1995situated, waters1997diamond,arafa2002two, silvio2010animation}. Animations with social meanings are created by the interaction between the individuals in a group, which build better adoption for the characters in VR \cite{wei2019vr}, AR \cite{miller2019social}, and Human-robot Interaction (HRI) \cite{shu2016learning}. Existing datasets for two-person interactions can be video-based \cite{yun2012two}, skeleton-based \cite{liu2019skeleton} or dialogue-based \cite{yu2020dialogue}.
\subsection*{Stochastic grammar model}
Stochastic grammar models are useful for parsing the hierarchical structures for creating indoor scenes \cite{qi2018human}, predicting human activities \cite{taylor2017deep}, and controlling robots \cite{xiong2016robot}.
In this paper, we forward-sample from a grammar model to generate large variations of two-character animations. We also attempt to outline a general multi-character animation model that brings together motions, emotions, and social relations.
\section{Synthesizing scenes}
Training our ST-AOG requires sampling motions and emotions from existing motion and emotion pools, which limits it to a small range of applications with little flexibility. To overcome this problem, we make the learned stochastic grammar compatible with generic generative models for facial animation and body movement, so that our ST-AOG can be useful for a broader range of applications.
\subsection{Markov chain dynamics}
First, we design three types of Markov chain dynamics: (1)~$q_{r}$ to propose new social relations; (2) $q_e$ to propose emotions; and (3) $q_m$ to propose motions.
\textbf{Relation dynamic} $q_r$ makes transition of social relations directly from ST-AOG's Or-nodes on \textit{relation branch}:
\begin{align}
\text{Node}_{r_1}\to\text{Node}_{r_2} .
\end{align}
\textbf{Emotion dynamic} $q_e$ changes the VAD scores of one of the facial expressions at keyframe $t_k$:
\begin{align}
(v,a,d) \to (v^\prime,a^\prime,d^\prime) .
\end{align}
Besides, to synthesize emotions, we train a linear model based on our $21$ basic faces with manually labeled VAD scores. Specifically, we first obtain the eigenfaces through principal component analysis (PCA) based on the positions of the \textit{eyebrows, eyes, mouth} and \textit{cheeks}. We show some reconstructed examples in Figure \ref{figure:reconstructed_emotion} and leave the detailed analysis to the appendix.
\begin{figure}[h]
\centering
\includegraphics[width=0.85\linewidth]{images/reconstructed_emotion.pdf}
\caption{Reconstructed facial expressions and VAD scores.}
\label{figure:reconstructed_emotion}
\end{figure}
\textbf{Motion dynamic $q_m$} regards motion as a sequence of body poses $\{p_i\}_{i=1,2,...}$. Therefore, we train a generative model for motions, where the model takes the inputs of poses $\{p_i\}_{i=1,2,...,k}$ for every $\delta t$ time interval and predicts the next body pose $p_{k+1}$ according to the maximum likelihood:
\begin{align}
\arg\max_{p_{k+1}}\text{P}(p_{k+1}\mid p_{k}, p_{k-1}, ..., p_{1}; \delta t) .
\end{align}
We train a model to generate animations for a single character. The motion samples collected from Adobe Mixamo~\cite{blackman2014rigging} are manually filtered. The filtered database contains $149$ animations such as \textit{yelling, waving a hand, shrugging}, and \textit{talking}. The complete list of animations is given in the Appendix.
The data augmentation process mirrors each animation from left to right, resulting in a total of $36,622$ keyframes at $24$ frames per second. We set $0.5$\,s as the time interval between poses, and each pose is represented by the rotations of $65$ joints and the position of the root joint (see the Appendix for the rigging). Therefore, each $p_t$ is a $198$-dimensional ($65\times 3 + 3$) vector.
We apply the variational recurrent neural network (VRNN) \cite{chung2015recurrent} as our motion generative model and we set $\delta t = 0.5\text{s}$. As Figure \ref{figure:vrnn} shows, the model first encodes the historical poses $\{p_{u}\}_{u=1,2,...,{k-1}}$ into a state variable $h_{k-1}$, whereby the prior on the latent random variable $z_{k}$ is calculated. $z_{k}$ usually follows a Gaussian distribution. Finally, the generating process takes the inputs $h_{k-1}$ and $z_{k}$ to generate pose $p_k$.
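A one-step generation sketch is shown below; \texttt{prior\_net}, \texttt{decoder}, and \texttt{rnn} stand in for the pretrained VRNN components, and their exact interfaces are illustrative assumptions rather than the precise architecture of \cite{chung2015recurrent}.
\begin{verbatim}
# A minimal sketch of one VRNN generation step, assuming pretrained
# modules `prior_net` (h -> mu, logvar), `decoder` (h, z -> pose) and a
# recurrence `rnn` (pose, z, h -> h).
import torch

def generate_pose(h):
    mu, logvar = prior_net(h)      # prior p(z_k | h_{k-1})
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
    pose = decoder(h, z)           # 198-dimensional pose vector p_k
    h_next = rnn(pose, z, h)       # update the recurrent state
    return pose, h_next, z
\end{verbatim}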
\begin{figure}[h]
\centering
\includegraphics[width=0.65\linewidth]{images/vrnn_croped.pdf}
\caption{The variational recurrent neural network (VRNN).}
\label{figure:vrnn}
\end{figure}
The training procedure follows that of a standard VAE \cite{doersch2016tutorial}. The objective is to minimize the reconstruction loss (mean square error between the original and reconstructed data) and the KL-divergence loss. The results show that VRNN performs well: $56.9\%$ of the reconstruction errors for joint rotations are less than $1.0$ degree and $87.3\%$ are less than $5$ degrees. See the Appendix for a detailed analysis.
Then, dynamic $q_m$ generates a body pose $p_k$ by sampling the latent random variable $z_k$:
\begin{align}
p_{k} &\to p_{k}^\prime , \\
\text{by } z_{k} &\to z_{k}^\prime .
\end{align}
To map the pose to the VADI norms, we train a linear model:
\begin{align}
p_k \xrightarrow{\text{regression}} (v,a,d,i) .
\end{align}
Finally, according to Equations \eqref{eq:S_me}, \eqref{eq:S_re}, and \eqref{eq:S_rm}, we can calculate the probability of sampling $pg^\prime$.
\begin{table*}[h]
\resizebox{0.98\textwidth}{!}{
\begin{tabular}{cccc}
\hline
\textbf{Scenario} & \textbf{Social relation} & \textbf{Character one emotion} & \textbf{Character two emotion}\\ \hline
wave hands & friends (medium, medium) & neutral $\to$ happy & \textcolor{blue}{$(0.5, 0.5, 0.5)$[neutral]} $\to$\textcolor{red}{$(0.9, 0.6, 0.6)[delight]$} \\
high-five & brothers (medium, high) & happy $\to$ excited & \textcolor{blue}{$(0.5, 0.5, 0.5)$[neutral]} $\to$\textcolor{red}{$(0.9, 0.7, 0.7)[glad]$} \\
shake hands & strangers (medium, low) & neutral $\to$ excited & \textcolor{blue}{$(0.8, 0.7, 0.6)$[joyful]} $\to$\textcolor{red}{$(0.9, 0.4, 0.8)[respectful]$} \\
apologize & employee to employer (low, medium) & neutral $\to$ sad & \textcolor{blue}{$(0.5, 0.5, 0.5)$[neutral]} $\to$\textcolor{red}{$(0.3, 0.6, 0.4)[concerned]$} \\
criticize & teacher to student (high, close) & neutral $\to$ angry & \textcolor{blue}{$(1.0, 0.7, 0.8)$[happy]} $\to$\textcolor{red}{$(0.3, 0.8, 0.3)[scared]$} \\
quarrel & colleagues (medium, medium) & neutral $\to$ dissatisfied & \textcolor{blue}{$(0.5, 0.5, 0.5)$[neutral]} $\to$\textcolor{red}{$(0.1, 0.8, 0.3)[annoyed]$} \\
\hline
\\
\end{tabular}
}
\caption{Emotion sampling results. The \textit{social relation} is labeled with its dominance and intimacy scores. MCMC generates the emotion to sample from the starting facial expression (the blue text describes its valence, arousal, and dominance scores) to the ending facial expression (VAD norms in red). We also pick a word to describe a facial expression according to the NRC-VAD Lexicon~\cite{warriner2013norms}.}
\label{Table:emotion}
\end{table*}
\subsection{MCMC}
Adopting the Metropolis-Hastings algorithm \cite{chib1995understanding}, a proposed new parse graph $pg^\prime$ is accepted according to the following acceptance probability:
\begin{align}
\alpha\left(p g^{\prime} \mid p g, \Theta\right) &=\min \left(1, \frac{p\left(p g^{\prime} \mid \Theta\right) p\left(p g \mid p g^{\prime}\right)}{p(p g \mid \Theta) p\left(p g^{\prime} \mid p g\right)}\right),
\end{align}
where $p\left(p g\mid \Theta\right)$ and $p\left(p g^{\prime} \mid \Theta\right)$ are calculated from the energy terms $\exp (-\mathcal{E}(p g \mid \Theta))$ and $\exp (-\mathcal{E}(p g^{\prime} \mid \Theta))$ of Equation \eqref{eq:energy}. $p(p g \mid p g^\prime)$ and $p(p g^\prime \mid p g)$ are the proposal probabilities defined by the dynamics $q_r, q_e$, and $q_m$.
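A minimal sketch of the resulting sampling loop is given below; \texttt{propose} stands in for a draw from one of the dynamics $q_r, q_e, q_m$ together with its forward and backward proposal probabilities, and \texttt{energy} is as in the earlier energy sketch.
\begin{verbatim}
# A minimal sketch of the Metropolis-Hastings loop over parse graphs,
# assuming `propose(pg)` returns (pg', q(pg'|pg), q(pg|pg')).
import numpy as np

def run_mcmc(pg, w, steps=1000):
    for _ in range(steps):
        pg_new, q_fwd, q_bwd = propose(pg)
        # log acceptance ratio; the partition function Z cancels
        log_alpha = (energy(pg, w) - energy(pg_new, w)
                     + np.log(q_bwd) - np.log(q_fwd))
        if np.log(np.random.rand()) < min(0.0, log_alpha):
            pg = pg_new
    return pg
\end{verbatim}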
|
2,869,038,154,086 | arxiv | \section{Introduction}
The algebra $\gl(\infty)$ is the most basic example of a complex locally-finite
Lie algebra, i.e. a limit of finite-dimensional complex Lie algebras. Its
representation theory has been a very active area of study for the last ten
years. See for example \cites{PS11a,PS11b,DCPS16,HPS19,GP19,PS19}.
Beyond its intrinsic interest, the representation theory of $\gl(\infty)$
has connections to two other areas of Lie theory: the stable representation
theory of the family $\gl(d,\CC)$, and the classical representation theory of
the Lie superalgebra $\gl(n|m)$.
The relation between representations of $\gl(\infty)$ and $\gl(n|m)$ goes back
to Brundan \cite{Brundan03}, who used a categorical action of the quantized
enveloping algebra of $\gl(\infty)$ on category $\mathcal O_{\gl(n|m)}$ to
compute the characters of atypical finite dimensional modules. This approach
was expanded by Brundan, Losev and Webster in \cite{BLW17} and by Brundan and
Stroppel in \cite{BS12a}. Also, Hoyt, Penkov and Serganova studied the
$\gl(\infty)$-module structure of the integral block $\mathcal O^{\ZZ}_{\gl
(n|m)}$ in \cite{HPS19}. Recently Serganova has extended this to a
categorification of Fock-modules of $\sl(\infty)$ through the Deligne
categories $\mathcal V_t$ in \cite{Serganova21}.
Let us now turn to the connections of $\gl(\infty)$ with stable representation
theory. Informally this refers to the study of sequences of representations
$V_d$ of $\gl(d,\CC)$, compatible in a suitable sense, as $d$ goes to
infinity. For example, set $W_d = \CC^d$ and $V_d(p,q) = W_d^{\ot p}\ot
(W_d^*)^{\ot q}$. For each $d \geq 1$ the decomposition of this module into its
simple components is a consequence of Schur-Weyl duality, and we can ask
whether this decomposition is in some way compatible with the obvious
inclusion maps $V_d(p.q) \hookrightarrow V_{d+1}(p,q)$. One way to solve this
problem is regard the limit $T^{p,q} = \varinjlim_d V_d(p,q)$ as a
$\gl(\infty)$-module and study its decomposition, as done by Penkov and Styrkas
in \cite{PS11b}. A more categorical approach to this problem appeared
independently in the articles by Sam and Snowden \cite{SS15} and by Dan-Cohen,
Penkov and Serganova in \cite{DCPS16}. The former studies the category
$\operatorname{Rep}(\mathsf{GL}(\infty))$ arising from the stable algebraic
representation theory of the family of groups $\mathsf{GL}_d(\CC)$, and the
latter is focused on the category
$\TT_{\gl(\infty)}^0$ of modules that arise as subquotients of the
modules $T^{p,q}$. As mentioned in the introduction to \cite{SS15}, these
categories are equivalent.
By \cite{DCPS16}, objects in the category $\TT_{\gl(\infty)}^0$ are the
integrable finite length $\gl(\infty)$-modules that satisfy the so-called
large annihilator condition (LAC from now on). This category is analogous in
many ways to the category of integrable finite dimensional representations of
$\gl(n,\CC)$, and it forms the backbone of the representation theory of
$\gl(\infty)$. Since this category is well understood by now, it is natural to
look for analogues of category $\mathcal O$ for $\gl(\infty)$. Several
alternatives have been proposed such as those by Nampaisarn
\cite{Nampaisarn17}, Coulembier and Penkov \cite{CP19}, and Penkov and
Serganova \cite{PS19}.
The fact that there is no one obvious analogue of $\mathcal O$ is due to two
reasons. First, Cartan and Borel subalgebras of $\gl(\infty)$ behave in a much
more complicated way than in the finite dimensional case, in particular
different choices of these will produce non-equivalent categories of highest
weight modules. The second problem is that the enveloping algebra of
$\gl(\infty)$ is not left-noetherian, so its finitely generated modules do not
form an abelian category. The first problem can be solved (or rather swept
under the rug) by fixing adequate choices of Cartan and Borel subalgebras; this
is the road followed in this paper. The second however does not have an obvious
solution.
In \cite{PS19} Penkov and Serganova propose replacing this condition
with the LAC. They study the category $\mathcal{OLA}$ consisting of $\lie
h$-semisimple and $\lie n$-nilpotent modules satisfying the LAC, and in
particular show that it is a highest weight category. On the other hand,
the categorifications arising in \cites{HPS19,Serganova21} above only satisfy
the LAC when seen as modules over a Levi-type subalgebra $\lie l \subset
\gl(\infty)$. Given the importance of the work relating representations of
$\gl(\infty)$ to Lie superalgebras, this motivates the study of analogues of
category $\mathcal O$ where the condition of being finitely generated is
replaced by this weaker version of the LAC. Thus in this paper we study the
category $\CAT{\lie l}{\gl(\infty)}$ of $\lie h$-semisimple, $\lie n$-torsion
modules satisfying the LAC with respect to various Levi-type subalgebras $\lie
l$.
As mentioned above, our main result is that $\CAT{\lie l}{\gl(\infty)}$ is a
highest weight category. Let us give some details on the result. We
introduce the notion of eligible weights, which are precisely those that can
appear in a module satisfying the LAC. Eligible weights are endowed with an
interval-finite order that has maximal but no minimal elements. Simple objects
of $\CAT{\lie l}{\gl(\infty)}$ are precisely the simple highest weight modules
indexed by eligible weights. Standard objects are given by projections of dual
Verma modules to $\CAT{\lie l}{\gl(\infty)}$ and have infinite length. Their
simple multiplicities are given in terms of weight multiplicities of a (huge)
representation of $\gl(\infty)$. Finally, injective envelopes have finite
standard filtrations that satisfy a form of BGG reciprocity. We point out that
$\CAT{\lie l}{\gl(\infty)}$ does not have enough projectives, and hence no
costandard modules.
\vspace{12pt}
The article is structured as follows. Section \ref{s:generalities} contains
some general notation. In section \ref{s:faces} we introduce several
presentations of $\gl(\infty)$, each of which highlights some particular
features and subalgebras. Section \ref{s:rep-theory} deals with various matters
related to representation theory of Lie algebras, and some specifics regarding
$\gl(\infty)$. In section \ref{s:cats-reps} we discuss some basic categories of
representations of $\gl(\infty)$ in its various incarnations. We begin our
study of $\CAT{}{}$ in section \ref{s:ola}, where we classify its simple
objects, prove some general categorical properties, and show that simple
multiplicities of a general object can be computed in terms of simple
multiplicities for category $\overline{\mathcal O}_{\lie s}$. In section
\ref{s:standard} we prove the existence of standard modules and compute their
simple multiplicities. This last result depends on a long technical computation
given in an appendix to the section. Finally, in section \ref{s:injectives}
we wrap up the proof that $\CAT{}{}$ is a highest weight category with an
analysis of injective modules. We also prove an analogue of BGG reciprocity and
give a decomposition of $\CAT{}{}$ into irreducible blocks.
The usual zoo of Cartan, Borel, parabolic, and Levi subalgebras, along with
their nilpotent ideals, is augmented in each case by their corresponding
exhaustions and the subalgebras spanned by finite-root spaces. To help keep
track of this wild variety, most of these subalgebras are introduced at once in
subsection \ref{ss:subalgebras-vn}, along with a visual device to describe them.
\section*{Acknowledgements}
I thank Ivan Penkov for introducing me to the study of locally-finite Lie
algebras and posing the questions that led to this work. Also, Vera Serganova
answered several questions and provided valuable references.
\section{Generalities}
\label{s:generalities}
\subsection{Notation}
For each $r \in \NN$ we set $\interval r = \{1, 2, \ldots, r\}$. Throughout we
denote by $\ZZ^\times$ the set of nonzero integers endowed with the following,
not quite usual order
\begin{align*}
1 \prec 2 \prec 3 \prec \cdots \prec -3 \prec -2 \prec -1.
\end{align*}
The symbol $\delta_{i,j}$ denotes the Kronecker delta.
We denote by $\Part$ the set of all partitions, and we identify each partition
with its Young diagram. Given $\lambda \in \Part$ we will denote the corresponding
Schur functor by $\Schur_\lambda$. For any $\lambda, \mu, \nu$ we denote by
$c_{\lambda,\mu}^\nu$ the corresponding Littlewood-Richardson coefficient.
Vector spaces and unadorned tensor products are always taken over $\CC$ unless
explicitly stated. Given a vector space $V$ we denote its algebraic dual by
$V^*$. Given $n \in \ZZ_{>0}$ we will denote by either $V^n$ or $n V$ the direct
sum of $n$ copies of $V$. Given $d \in \ZZ_{>0}$ we will denote by $\SS^d(V)$
the $d$-th symmetric power of $V$, by $\SS^\bullet(V)$ its symmetric
algebra, and by $\SS^{\leq d}(V)$ the direct sum of all $\SS^{d'}(V)$ with $d'
\leq d$. We will often use that given a second vector space $W$ we have
\begin{align*}
\SS^\bullet(V \ot W) &\cong \bigoplus_{\lambda \in \Part}
\Schur_\lambda(V) \otimes \Schur_\lambda(W).
\end{align*}
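For instance, in degree two this identity specializes to the classical decomposition indexed by the partitions $(2)$ and $(1,1)$:
\begin{align*}
\SS^2(V \ot W) &\cong \Schur_{(2)}(V) \ot \Schur_{(2)}(W) \oplus
\Schur_{(1,1)}(V) \ot \Schur_{(1,1)}(W),
\end{align*}
i.e. the sum of the symmetric-square and exterior-square pieces.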
\subsection{Locally finite Lie algebras}
A locally finite Lie algebra $\lie g$ is one where any finite set of elements
is contained in a finite dimensional subalgebra. If $\lie g$ is countable
dimensional then this is equivalent to the existence of a chain of finite
dimensional subalgebras $\lie g_1 \subset \lie g_2 \subset \cdots \subset \lie
g = \bigcup_{r \geq 0} \lie g_r$. Any such chain is called an \emph{exhaustion}
of $\lie g$. We say $\lie g$ is locally root-reductive if each $\lie g_r$ can
be taken to be reductive, and the inclusions send root spaces to root spaces
for a fixed choice of nested Cartan subalgebras $\lie h_r \subset \lie g_r$
satisfying $\lie h_r = \lie h_{r+1} \cap \lie g_r$. In general we will say that
$\lie g$ is locally reductive, nilpotent, etc. if it has an exhaustion
such that each $\lie g_r$ is of the corresponding type.
We denote by $\gl(\infty)$ the direct limit of the sequence of Lie algebras
\begin{align*}
\gl(1,\CC) \hookrightarrow \gl(2,\CC) \hookrightarrow
\cdots \gl(n,\CC) \hookrightarrow \cdots
\end{align*}
where each map is given by inclusion in the upper left corner. This restricts
to inclusions of $\sl(n,\CC)$ into $\sl(n+1,\CC)$, and we denote the limit of
these algebras by $\sl(\infty)$, which is clearly a subalgebra of
$\gl(\infty)$. This is a locally finite Lie algebra.
Fix a locally finite Lie algebra $\lie g$ and an exhaustion $\lie g_r$. Suppose
also that we have a sequence $(M_r, j_r)$ where every $M_r$ is a $\lie
g_r$-module (which makes every $M_s$ with $s \geq r$ a $\lie g_r$-module), and
$j_r: M_r \to M_{r+1}$ is an injective morphism of $\lie
g_r$-modules. The limit vector space $M = \varinjlim M_r$ has the structure of
a $\lie g_r$-module for each $r$, and these structures are compatible in the
obvious sense so $M$ is a $\lie g$-module. We refer to $M_r$ as an exhaustion
of $M$. Any $\lie g$-module $M$ has an exhaustion: if $X$ is a generating set
of $M$, we can take as $M_r$ the $\lie g_r$-submodule generated by $X$.
Concepts such as Cartan subalgebras, root systems, Borel and parabolic
subalgebras relative to these systems, weight modules, etc. are defined for
locally finite Lie subalgebras. In the context of this paper these objects will
behave as in the finite dimensional case, but there are some subtle differences
which we will point out when relevant. For details we refer the reader to the
monograph \cite{HP22} and the references therein.
We will write $\lie g = \lie b \niplus \lie a$ if $\lie b$ is a subalgebra of
$\lie g$ and $\lie a$ is an ideal that is also a vector space complement for
$\lie b$. Suppose we have a locally finite Lie algebra $\lie g$ with fixed
splitting Cartan subalgebra $\lie h$ (i.e. such that $\lie g$ is a semisimple
$\lie h$-module through the adjoint action), and Borel (i.e. maximal locally
solvable) subalgebra $\lie b$ of the form $\lie h \niplus \lie n$ for some
subalgebra $\lie n$. Given any
weight $\lambda \in \lie h^*$, we can define Verma modules as usual by
$\Ind_{\lie b}^{\lie g} \CC_\lambda$. We will denote the Verma module by
$M_{\lie g}(\lambda)$ or simply $M(\lambda)$ when $\lie g$ is clear from the
context. We will also denote by $L(\lambda) = L_{\lie g}(\lambda)$ the
corresponding unique simple quotient of $M_{\lie g}(\lambda)$.
\section{The many faces of $\lie{gl}(\infty)$}
\label{s:faces}
In this section we will review some standard facts about the Lie algebra
$\gl(\infty)$ and introduce several different ways in which this arises. While
this amounts to choosing different exhaustions, we will take a different
approach to these various ``avatars'' of $\gl(\infty)$, which brings to the fore
some non-obvious structures in this Lie algebra.
\subsection{The Lie algebra $\lie g(\VV)$}
Let $I$ be any infinite denumerable set and let $\VV = \langle v_i \mid i \in
I\rangle$ and $\VV_* = \langle v^i \mid i \in I\rangle$, which we endow with a
perfect pairing
\begin{align*}
\tr: \VV_* \ot \VV &\to \CC\\
v^i \ot v_j &\mapsto \delta_{i,j}.
\end{align*}
The vector space $\VV \ot \VV_*$ is a nonunital associative algebra with
$(v' \ot v) \cdot (w' \ot w) = \tr(v \ot w')\, v' \ot w$. We denote the Lie algebra
associated to this algebra by $\lie g(\VV, \VV_*, \tr)$, or simply by $\lie
g(\VV)$. For $i,j \in I$ we set $E_{i,j} = v_i \ot v^j$.
Take for example $I = \NN$. Given $r \in \ZZ_{>0}$ write $\lie g_r = \langle
E_{i,j} \mid i,j \in \interval r\rangle$. Then $\lie g_r$ is a subalgebra
of $\lie g(\VV)$ isomorphic to $\gl(r,\CC)$. The inclusions $\lie g_r \subset
\lie g_{r+1}$ correspond to the injective Lie algebra morphisms $\gl(r,\CC)
\hookrightarrow \gl(r+1,\CC)$ given by embedding a $r\times r$ matrix into the
upper left corner of a $r+1 \times r+1$ matrix with its last row and column
filled with zeroes. Thus
\begin{align*}
\lie g(\VV)
&= \bigcup_{r \geq 1} \lie g_r \cong
\varinjlim \gl(r,\CC) \cong \gl(\infty).
\end{align*}
The choice of $I$ does not change the isomorphism type of this algebra, so in
general every algebra of the form $\lie g(\VV, \VV_*, \tr)$ is isomorphic to
$\gl(\infty)$.
Notice that $\VV$ and $\VV_*$ are $\lie g(\VV)$-modules with $E_{i,j} v_k
= \delta_{j,k}v_i$ and $E_{i,j} v^k = - \delta_{k,i} v^j$. A simple comparison
shows that $\VV = \varinjlim V_r$, where $V_r$ is the natural representation of
$\gl(r,\CC)$ and the maps $V_r \to V_{r+1}$ are uniquely determined up to
isomorphism by the fact that they are $\gl(r,\CC)$-linear. In a similar fashion,
$\VV_* \cong \varinjlim V_r^*$.
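These relations can be checked numerically on a finite truncation; the following sketch, which realizes each $E_{i,j}$ as an elementary matrix, verifies the familiar bracket $[E_{i,j}, E_{k,l}] = \delta_{j,k} E_{i,l} - \delta_{l,i} E_{k,j}$.
\begin{verbatim}
# A numerical sanity check on a finite truncation: E_{i,j} is realized as
# an elementary matrix and the gl commutation relations are verified.
import numpy as np

def E(i, j, r):
    m = np.zeros((r, r))
    m[i, j] = 1.0
    return m

r = 5
for i, j, k, l in np.ndindex(r, r, r, r):
    lhs = E(i, j, r) @ E(k, l, r) - E(k, l, r) @ E(i, j, r)
    rhs = (j == k) * E(i, l, r) - (l == i) * E(k, j, r)
    assert np.allclose(lhs, rhs)
print("bracket relations verified for r =", r)
\end{verbatim}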
\subsection{Cartan subalgebra and root decomposition}
The definition and study of Cartan subalgebras of $\gl(\infty)$ is an
interesting and subtle question, which was taken up in \cite{NP03}. General
Cartan subalgebras can behave quite differently from Cartan subalgebras of
finite dimensional reductive Lie algebras, for example they may not produce
a root decomposition of $\gl(\infty)$. We will sidestep this problem by
choosing one particularly well-behaved Cartan subalgebra as ``the'' Cartan
subalgebra of $\lie g(\VV)$, namely the obvious maximal commutative subalgebra
$\lie h(\VV) = \langle E_{i,i} \mid i \in I \rangle$, and freely borrow
notions from the theory of finite root systems that work for this particular
choice.
The action of the Cartan subalgebra $\lie h(\VV)$ on $\lie g(\VV)$ is
semisimple. Denoting by $\epsilon_i$ the unique functional of $\lie h(\VV)^*$
such that $\epsilon_i(E_{j,j}) = \delta_{i,j}$, the root system of $\lie g(\VV)$
with respect to $\lie h(\VV)$ is
\begin{align*}
\Phi &= \{\epsilon_i - \epsilon_j \mid i,j \in I,\, i \neq j\}
\end{align*}
with the component of weight $\epsilon_i - \epsilon_j$ spanned by $E_{i,j}$,
and hence $1$-dimensional. Also $\lie g(\VV)_0 = \lie h(\VV)$, which is of
course infinite dimensional.
\subsection{Positive roots, simple roots and Borel subalgebras}
If we fix a total order $\prec$ on $I$ (as we did implicitly when setting $I =
\NN$) we can see an element of $\lie g(\VV)$ as an infinite matrix whose rows
and columns are indexed by the set $I$. A total order also allows us to define
a set of positive roots
\begin{align*}
\Phi^+_I &= \{\epsilon_i - \epsilon_j \mid i \prec j \}
\end{align*}
and a corresponding Borel subalgebra $\lie b_I = \lie h(\VV) \oplus
\bigoplus_{\alpha \in \Phi^+_I} \lie g(\VV)_\alpha$.
Again, Borel subalgebras of $\gl(\infty)$ behave quite differently than Borel
subalgebras of $\gl(r,\CC)$, and their behaviour is tied to the order type of
$I$. One striking difference is the following: simple roots can be defined as
usual, namely as those positive roots which can not be written as the sum of
two positive roots, but it may happen that simple roots span a strict subspace
of the root space. In the extreme case $I = \QQ$ there are \emph{no} simple
roots. In this article we will only consider a few well-behaved examples, the
reader interested in the general theory of Borel subalgebras of $\gl(\infty)$
can consult \cites{DP04,DanCohen08,DCPS07}.
Up to order isomorphism, the only cases where simple roots span the full root
space are $I=\ZZ_{>0}, \ZZ_{<0}$ and $\ZZ$ with their usual orders. These are
called the \emph{Dynkin} cases, and the corresponding Borel subalgebras are said
to be of Dynkin type. An example of non-Dynkin type comes from the choice $I =
\ZZ^\times$, where the simple roots are
\begin{align*}
\{\epsilon_i - \epsilon_{i+1} \mid i \in \ZZ^\times\setminus\{-1\}\},
\end{align*}
but the root $\epsilon_1 - \epsilon_{-1}$ is not in their span. We get a basis
of the root space by adding this extra root to the set of simple roots.
A root is called \emph{finite} if it is in the span of simple roots, otherwise
it is called \emph{infinite}. By definition we are in a Dynkin case if and only
if every root is finite.
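For instance, with $I = \ZZ^\times$ any attempt to express $\epsilon_1 - \epsilon_{-1}$ through simple roots leads only to the formal telescoping identity
\begin{align*}
\epsilon_1 - \epsilon_{-1}
= \sum_{i \geq 1} (\epsilon_i - \epsilon_{i+1})
+ \sum_{i \leq -2} (\epsilon_i - \epsilon_{i+1}),
\end{align*}
which involves infinitely many terms; since no finite subsum works, this root is infinite.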
\subsection{Further subalgebras and visual representation}
\label{ss:visual-rep}
For the rest of this article we will assume that the bases of $\VV$ and $\VV_*$
are indexed by $\ZZ^\times$. Thus elements of $\lie g(\VV)$ can be seen as
infinite matrices with rows and columns indexed by $\ZZ^\times$ and finitely
many nonzero entries. The choice of $\ZZ^\times$ as index set means that it
makes sense to speak of the $k$-th-to-last row or column of an infinite matrix $g
\in \lie g(\VV)$ for any $k \in \ZZ_{>0}$.
We denote by $\lie b(\VV)$ the Borel subalgebra corresponding to $I = \ZZ^\times$,
which is non-Dynkin since it has an infinite root in its support. We also
denote by $\lie n(\VV)$ the commutator subalgebra $[\lie b(\VV),\lie b(\VV)]$,
so $\lie b(\VV) = \lie h(\VV) \niplus \lie n(\VV)$. Notice that $\lie n(\VV)$
is not nilpotent but only locally nilpotent.
We now introduce some useful subalgebras of $\lie g(\VV)$. Fix $r \in \ZZ_{>0}$
and set
\begin{align*}
\lie g(\VV)_r &= \langle E_{i,j} \mid i,j \in \pm \interval r\rangle .
\end{align*}
This is a finite dimensional subalgebra of $\lie g(\VV)$, isomorphic to
$\gl(2r, \CC)$. Clearly $\lie g(\VV)_r \subset \lie g(\VV)_{r+1}$ and $\lie
g(\VV) =\bigcup_{r \geq 1} \lie g(\VV)_r$, so this is an exhaustion of $\lie
g(\VV)$. We also set
\begin{align*}
\VV[r] &= \langle v_i \mid r+1 \preceq i \preceq -r-1 \rangle \subset \VV
&\VV_*[r] &= \langle v^i \mid r+1 \preceq i \preceq -r-1 \rangle \subset \VV_*.
\end{align*}
The map $\tr$ restricts to a non-degenerate pairing between these two subspaces
and we write $\lie g(\VV)[r] = \lie g(\VV[r], \VV_*[r], \tr)$. This Lie algebra
is isomorphic to $\lie g(\VV)$, and it is the centraliser of $\lie g(\VV)_r$
inside $\lie g(\VV)$.
We will meet many more subalgebras of $\lie g(\VV)$, so we now introduce
a visual aid to recall them. We will represent elements of $\lie
g(\VV)$ as squares, and we will represent a subalgebra by shading in gray the
region where nonzero entries can be found, while unshaded areas will always be
filled with zeroes. The following examples should help clarify this idea.
\begin{align*}
\begin{tikzpicture}
\draw
(0,0) rectangle (1,1);
\filldraw[color=black!20!white, draw=none]
(0,0) rectangle (1,1);
\draw (0.5,-0.5) node {$\lie g(\VV)$};
\end{tikzpicture}
&&\begin{tikzpicture}
\draw
(0,0) rectangle (1,1);
\filldraw[color=black!20!white, draw=none]
(0,1) -- (1,1) -- (1,0) -- cycle;
\draw (0.5,-0.5) node {$\lie b(\VV)$};
\end{tikzpicture}
&&\begin{tikzpicture}
\draw
(0,0) rectangle (1,1);
\filldraw[color=black!20!white, draw=none]
(0,1) -- (1,1) -- (1,0) -- cycle;
\draw[color=black,dashed] (0,1) -- (1,0);
\draw (0.5,-0.5) node {$\lie n(\VV)$};
\end{tikzpicture}
&&\begin{tikzpicture}
\draw
(0,0) rectangle (1,1);
\filldraw[color=black!20!white, draw=none]
(0.2,0.2) rectangle (0.8,0.8);
\draw (0.5,-0.5) node {$\lie g(\VV)[r]$};
\end{tikzpicture}
&&\begin{tikzpicture}
\draw
(0,0) rectangle (1,1);
\filldraw[color=black!20!white, draw=none]
(0,0) rectangle (0.2,0.2);
\filldraw[color=black!20!white, draw=none]
(0.8,0) rectangle (1,0.2);
\filldraw[color=black!20!white, draw=none]
(0,0.8) rectangle (0.2,1);
\filldraw[color=black!20!white, draw=none]
(0.8,0.8) rectangle (1,1);
\draw (0.5,-0.5) node {$\lie g(\VV)_r$};
\end{tikzpicture}
\end{align*}
Notice that with these representations each corner of the square contains
finitely many entries, and that the centre of the square concentrates
infinitely many entries.
\subsection{The Lie algebra $\lie g(\VV^n)$}
\label{ss:vn}
Fix $n \in \NN$. We denote by $e_i$ the $i$-th vector of the canonical basis of
$\CC^n$, and by $e^i$ the $i$-th vector in the dual canonical basis of
$(\CC^n)^*$. Set $\VV^n = \VV \otimes \CC^n, \VV^n_* = \VV_* \ot (\CC^n)^*$
and set
\begin{align*}
\tr^n: \VV^n_* \ot \VV^n &\to \CC\\
(v^i \ot e^k)\ot(v_j \ot e_l) &\mapsto \delta_{i,j}\delta_{k,l}.
\end{align*}
We can form the Lie algebra $\lie g(\VV^n) = \lie g(\VV^n, \VV^n_*, \tr^n)$ as
before. Since $\VV$ and $\VV_*$ have bases indexed by $\ZZ^\times$, the vector
spaces $\VV^n$ and $\VV_*^n$ have bases indexed by $\ZZ^\times \times
\interval n$. Since the isomorphism type of $\lie g(\VV)$ is independent of the
indexing set as long as it is infinite and denumerable, any bijection between
$\ZZ^\times$ and $\ZZ^\times \times \interval n$ will induce an isomorphism
$\lie g(\VV^n) \cong \lie g(\VV)$. Notice that under any such
isomorphism $\lie h(\VV^n)$ is mapped to $\lie h(\VV)$.
On the other hand there is an obvious isomorphism of vector spaces $\lie
g(\VV^n) \cong \lie g(\VV) \ot M_n(\CC)$. Thus we can see elements of
$\lie g(\VV^n)$ as $n \times n$ block matrices, with each block an infinite
matrix from $\lie g(\VV)$. We set for each $k \in
\interval n, r \in \ZZ_{>0}$
\begin{align*}
\VV^{(k)} &= \VV \otimes \langle e_k\rangle,
&\VV^{(k)}_* &= \VV_* \otimes \langle e^k\rangle;\\
\VV^{(k)}[r] &= \VV[r] \otimes \langle e_k\rangle,
&\VV^{(k)}_*[r] &= \VV_*[r] \otimes \langle e^k\rangle,
\end{align*}
so that $\lie g(\VV^n) = \bigoplus_{k,l \in \interval n} \VV^{(k)} \ot \VV^{(l)}_*$.
We highlight this particular avatar of $\gl(\infty)$ since it reveals some
internal structure which is not obvious in its usual presentation. The first
example is the Borel subalgebra $\lie b(\VV^n)$, which is awkward to handle as
a subalgebra of $\gl(\infty)$ with the usual presentation. We also obtain
a non-obvious exhaustion by setting $\lie g(\VV^n)_r = \lie g(\VV)_r \ot
M_n(\CC)$ for each $r \in \ZZ_{>0}$.
Another feature of $\gl(\infty)$ which becomes clear by looking at its
avatar $\lie g(\VV^n)$ is that it has a $\ZZ^n$-grading compatible with the Lie
algebra structure, inherited from the weight grading of $M_n(\CC)$ as
$\gl(n,\CC)$-module. Thus for $k,l \in \interval n$ the direct summand
$\VV^{(k)} \ot \VV^{(l)}_*$ is contained in the homogeneous component of degree
$e_k - e_l$. This grading highlights some interesting subalgebras of block
diagonal and block upper-triangular matrices, namely
\begin{align*}
\lie l
&= \bigoplus_{k = 1}^n \VV^{(k)} \ot \VV^{(k)}_*;&
\lie u
& = \bigoplus_{k < l} \VV^{(k)} \ot \VV_*^{(l)};&
\lie p
& = \bigoplus_{k \leq l} \VV^{(k)} \ot \VV_*^{(l)},
\end{align*}
which are the subalgebras corresponding to $0$, strictly positive, and
non-negative $\gl(n,\CC)$-weights, respectively. Notice the obvious Levi-type
decomposition $\lie p = \lie l \niplus \lie u$, where $\lie l$ is not reductive
but rather locally reductive, and $\lie u$ is $(n-1)$-step nilpotent.
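The nilpotency degree of $\lie u$ can be read off from the grading, since the
bracket adds $\gl(n,\CC)$-weights. For example, for $n = 3$ one checks that
\begin{align*}
[\lie u, \lie u] &= \VV^{(1)} \ot \VV^{(3)}_*, &
[\lie u, [\lie u, \lie u]] &= 0,
\end{align*}
because $(e_1 - e_2) + (e_2 - e_3) = e_1 - e_3$ is a weight of $\lie u$, while
no sum of three weights of $\lie u$ occurs as a weight of $\lie g(\VV^n)$.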
\subsection{Subalgebras of $\lie g(\VV^n)$ and their visual representation}
\label{ss:subalgebras-vn}
As before, we will use some visual aids to describe the various subalgebras of
$\lie g(\VV^n)$. We will take $n = 3$ for these visual representations, and
hence represent a subalgebra of $\lie g(\VV^n)$ by shading regions in a
three-by-three grid of squares. We refer to these as the \emph{pictures} of the
subalgebras. The following examples should clarify the idea.
\begin{align*}
\begin{tikzpicture}[scale=0.9]
\filldraw[color=black!20!white, draw=none]
(0,3) -- (3,3) -- (3,0) -- (0,3);
\draw[color=black] (0,3) -- (3,0);
\draw[step=1cm,black] (0.01,0.01) grid (2.99,2.99);
\draw (1.5, -0.5) node {$\lie b$};
\end{tikzpicture}
&&
\begin{tikzpicture}[scale=0.9]
\filldraw[color=black!20!white, draw=none]
(0,3) -- (3,3) -- (3,0) -- (0,3);
\draw[color=black,dashed] (0,3) -- (3,0);
\draw[step=1cm,black] (0.01,0.01) grid (2.99,2.99);
\draw (1.5, -0.5) node {$\lie n$};
\end{tikzpicture}
\end{align*}
\begin{align*}
\begin{tikzpicture}[scale=0.9]
\filldraw[color=black!20!white, draw=none]
(0,3) -- (1,3) -- (1,2) -- (2,2) -- (2,1) -- (3,1) -- (3,0)
-- (2,0) -- (2,1) -- (1,1) -- (1,2) -- (0,2) -- (0,3);
\draw[step=1cm,black] (0.01,0.01) grid (2.99,2.99);
\draw (1.5, -0.5) node {$\lie l$};
\end{tikzpicture}
&&
\begin{tikzpicture}[scale=0.9]
\filldraw[color=black!20!white, draw=none]
(1,3) -- (3,3) -- (3,1) -- (2,1) -- (2,2) -- (1,2) -- (1,3);
\draw[step=1cm,black] (0.01,0.01) grid (2.99,2.99);
\draw (1.5, -0.5) node {$\lie u$};
\end{tikzpicture}
&&
\begin{tikzpicture}[scale=0.9]
\filldraw[color=black!20!white, draw=none]
(0,3) -- (0,2) -- (1,2) -- (1,1) -- (2,1) -- (2,0) -- (3,0)
-- (3,3) -- (0,3);
\draw[step=1cm,black] (0.01,0.01) grid (2.99,2.99);
\draw (1.5, -0.5) node {$\lie p = \lie l \niplus \lie u$};
\end{tikzpicture}
\end{align*}
Recall the subalgebras $\lie g(\VV^n)_r$ for all $r \in \ZZ_{>0}$. We denote by
$\lie g(\VV^n)[r]$ the subalgebra of $\lie g(\VV^n)_r$-invariants inside $\lie
g(\VV^n)$, and by $\lie l[r]$ its intersection with $\lie l$.
\begin{align*}
\begin{tikzpicture}[scale=0.9]
\filldraw[color=black!20!white, draw=none]
(0,3) rectangle (0.2,2.8)
(0.8,3) rectangle (1.2,2.8)
(1.8,3) rectangle (2.2,2.8)
(2.8,3) rectangle (3,2.8)
(0,2.2) rectangle (0.2,1.8)
(0.8,2.2) rectangle (1.2,1.8)
(1.8,2.2) rectangle (2.2,1.8)
(2.8,2.2) rectangle (3,1.8)
(0,1.2) rectangle (0.2,0.8)
(0.8,1.2) rectangle (1.2,0.8)
(1.8,1.2) rectangle (2.2,0.8)
(2.8,1.2) rectangle (3,0.8)
(0,0.2) rectangle (0.2,0)
(0.8,0.2) rectangle (1.2,0)
(1.8,0.2) rectangle (2.2,0)
(2.8,0.2) rectangle (3,0);
\draw[step=1cm,black] (0.01,0.01) grid (2.99,2.99);
\draw (1.5, -0.5) node {$\lie g(\VV^n)_r$};
\end{tikzpicture}
&&\begin{tikzpicture}[scale=0.9]
\filldraw[color=black!20!white, draw=none]
(0.2,2.8) rectangle (0.8,2.2)
(1.2,2.8) rectangle (1.8,2.2)
(2.2,2.8) rectangle (2.8,2.2)
(0.2,1.8) rectangle (0.8,1.2)
(1.2,1.8) rectangle (1.8,1.2)
(2.2,1.8) rectangle (2.8,1.2)
(0.2,0.8) rectangle (0.8,0.2)
(1.2,0.8) rectangle (1.8,0.2)
(2.2,0.8) rectangle (2.8,0.2);
\draw[step=1cm,black] (0.01,0.01) grid (2.99,2.99);
\draw (1.5, -0.5) node {$\lie g(\VV^n)[r]$};
\end{tikzpicture}
&&\begin{tikzpicture}[scale=0.9]
\filldraw[color=black!20!white, draw=none]
(0.2,2.8) rectangle (0.8,2.2)
(1.2,1.8) rectangle (1.8,1.2)
(2.2,0.8) rectangle (2.8,0.2);
\draw[step=1cm,black] (0.01,0.01) grid (2.99,2.99);
\draw (1.5, -0.5) node {$\lie l[r]$};
\end{tikzpicture}
\end{align*}
We also introduce the parabolic subalgebra $\lie p(r) = \lie l[r] + \lie b$.
This algebra again has a Levi-type decomposition $\lie p(r) = \lie l[r]^+
\niplus \lie u(r)$, where $\lie l[r]^+ = \lie l[r] + \lie h$ and $\lie u(r)$ is
the unique ideal giving this decomposition.
\begin{align*}
\begin{tikzpicture}[scale=0.9]
\draw[step=1cm,black] (0.01,0.01) grid (2.99,2.99);
\filldraw[color=black!20!white, draw=none]
(0,3) -- (0.2, 2.8) -- (0.2, 2.2) -- (0.8, 2.2) --
(1.2,1.8) -- (1.2,1.2) -- (1.8,1.2) -- (2.2,0.8) -- (2.2,0.2)
-- (2.8,0.2) -- (3,0) -- (3,3) -- (0,3);
\draw (1.5, -0.5) node {$\lie p(r)$};
\end{tikzpicture}
&&
\begin{tikzpicture}[scale=0.9]
\draw (3,0) -- (0,3);
\filldraw[color=black!20!white, draw=none]
(0.2,2.8) -- (0.2,2.2) -- (0.8,2.2) -- (0.8,2.8) -- cycle;
\filldraw[color=black!20!white, draw=none]
(1.2,1.8) -- (1.2,1.2) -- (1.8,1.2) -- (1.8,1.8) -- cycle;
\filldraw[color=black!20!white, draw=none]
(2.2,0.8) -- (2.2,0.2) -- (2.8,0.2) -- (2.8,0.8) -- cycle;
\draw[step=1cm,black] (0.01,0.01) grid (2.99,2.99);
\draw (1.5, -0.5) node {$\lie l[r]^+$};
\end{tikzpicture}
&&
\begin{tikzpicture}[scale=0.9]
\filldraw[draw=none,black!20!white]
(0,3) -- (0.2,2.8) -- (0.8,2.8) -- (0.8,2.2) --
(1,2) -- (1.2,1.8) -- (1.8,1.8) -- (1.8,1.2) --
(2,1) -- (2.2,0.8) -- (2.8,0.8) -- (2.8,0.2) --
(3,0) -- (3,3) -- (0,3);
\draw[step=1cm,black] (0.01,0.01) grid (2.99,2.99);
\draw (1.5, -0.5) node {$\lie u(r)$};
\end{tikzpicture}
\end{align*}
Another family of algebras we will study is related to the finite and infinite
roots of $\lie g(\VV^n)$. We denote by $\lie s$ the subalgebra spanned by all
finite root spaces of $\lie g(\VV^n)$, and by $\lie q$ the parabolic subalgebra
$\lie s + \lie b(\VV^n)$. This subalgebra has a Levi-type decomposition $\lie
q = \lie s \niplus \lie m$, where $\lie m$ is the subspace spanned by all root
spaces corresponding to positive infinite roots.
\begin{align*}
\begin{tikzpicture}[scale=0.9]
\filldraw[color=black!20!white, draw=none]
(0,3) rectangle (0.5,2.5)
(0.5,2.5) rectangle (1.5,1.5)
(1.5,1.5) rectangle (2.5,0.5)
(2.5,0.5) rectangle (3,0)
(0,3) -- (3,0) -- (3,3) -- cycle;
\draw[step=1cm,black] (0.01,0.01) grid (2.99,2.99);
\draw (1.5, -0.5) node {$\lie q$};
\end{tikzpicture}
&&
\begin{tikzpicture}[scale=0.9]
\filldraw[color=black!20!white, draw=none]
(0,3) rectangle (0.5,2.5)
(0.5,2.5) rectangle (1.5,1.5)
(1.5,1.5) rectangle (2.5,0.5)
(2.5,0.5) rectangle (3,0);
\draw[step=1cm,black] (0.01,0.01) grid (2.99,2.99);
\draw (1.5, -0.5) node {$\lie s$};
\end{tikzpicture}
&&
\begin{tikzpicture}[scale=0.9]
\filldraw[color=black!20!white, draw=none]
(0.5,3) rectangle (3,2.5)
(1.5,2.5) rectangle (3,1.5)
(2.5,1.5) rectangle (3,0.5);
\draw[step=1cm,black] (0.01,0.01) grid (2.99,2.99);
\draw (1.5, -0.5) node {$\lie m$};
\end{tikzpicture}
\end{align*}
We will also occasionally need the exhaustion of $\lie s$ given by $\lie s_r =
\lie s \cap \lie g(\VV^n)_r$, whose picture is left for the interested reader.
\subsection{Transpose automorphism}
Denote by $E_{i,j}^{(k,l)}$ the element $E_{i,j} \otimes e_k \ot e^l$.
The Lie algebra $\lie g(\VV^n)$ has an anti-automorphism $\tau$, analogous to
transposition in $\gl(r,\CC)$, given by $\tau(E_{i,j}^{(k,l)}) =
E_{j,i}^{(l,k)}$. Given a subalgebra $\lie k \subset \lie g(\VV^n)$ its image
by $\tau$ will also be denoted by $\overline{\lie k}$. Thus we have further
subalgebras $\overline{\lie b}, \overline{\lie p}, \overline{\lie u}$, etc.
Notice that the picture of the image of a subalgebra
by $\tau$ is the reflection of the picture of the subalgebra by the main
diagonal. We will denote by $T$ the antiautomorphism of $U(\lie g(\VV^n))$ induced
by $\tau$, and given $u \in U(\lie g(\VV^n))$ we write $\overline u$
for $T(u)$.
\subsection{Roots, weights and eligible weights}
We denote by $\epsilon^{(k)}_i$ the functional in $\lie h(\VV^n)^*$ given by
$\epsilon^{(k)}_i(E_{j,j}^{(l,l)}) = \delta_{i,j}\delta_{k,l}$. By a slight
abuse of notation we can represent any element of $\lie h(\VV^n)^*$ as an
infinite sum of the form $\sum_{i,k} a_i^{(k)}\epsilon_i^{(k)}$ with $a_i^{(k)}
\in \CC$. We denote by $\omega^{(k)}$ the functional $\omega^{(k)} = \sum_{i
\in \ZZ^\times} \epsilon_i^{(k)}$. We set $\lie h(\VV^n)^\circ$ to be the span
of $\{\epsilon^{(k)}_i, \omega^{(k)} \mid i \in \ZZ^\times, k \in \interval
n\}$, and refer to its elements as eligible weights. A weight will be called
$r$-eligible if it is a linear combination of the $\omega^{(k)}$ and the
$\epsilon_i^{(k)}$ with $i \in \pm \interval r$.
As we will see in Section \ref{s:rep-theory}, $r$-eligible weights
parametrise the simple finite dimensional representations of $\lie l[r]^+$,
which are all $1$-dimensional. The importance of these representations will be
discussed in that same section.
As usual, $\Lambda$ denotes the $\ZZ$-span of the roots of $\lie g(\VV^n)$
inside $\lie h(\VV^n)^*$. It is a free $\ZZ$-module spanned by the families
of roots
\begin{align*}
\epsilon_i^{(k)} &- \epsilon_{i+1}^{(k)}
&& k \in \interval n, i \in \ZZ^\times \setminus \{-1\}; \\
\epsilon_{-1}^{(k)} &- \epsilon_1^{(k+1)}
&& k \in \interval{n-1}; \\
\epsilon_{1}^{(k)} &- \epsilon_{-1}^{(k)}
&& k \in \interval n.
\end{align*}
The first two families consist of finite roots and any finite root is a linear
combination of them, while the roots in the last family are infinite roots.
We denote by $\Lambda_\CC^+$ the $\CC$-span of the $\epsilon_i^{(k)}$. There is
a pairing $(-, -) : \lie h(\VV^n)^* \otimes \Lambda_\CC^+ \to \CC$ with
$(\lambda, \epsilon_i^{(k)}) = \lambda(E_{i,i}^{(k,k)})$. This allows us to
define for any root $\alpha$ the reflection
\begin{align*}
s_\alpha: \lie h(\VV^n)^* &\to \lie h(\VV^n)^*\\
\lambda &\mapsto \lambda -
\frac{2 (\lambda,\alpha)}{(\alpha,\alpha)} \alpha.
\end{align*}
The group generated by these reflections is denoted by $\mathcal W$, and
by analogy with the finite dimensional case is called the \emph{Weyl group} of
$\lie g(\VV^n)$.
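Concretely, for a root $\alpha = \epsilon^{(k)}_i - \epsilon^{(l)}_j$ we have
$(\alpha, \alpha) = 2$, and $s_\alpha$ simply exchanges the coefficients at
$\epsilon^{(k)}_i$ and $\epsilon^{(l)}_j$:
\begin{align*}
s_\alpha\left(a\,\epsilon^{(k)}_i + b\,\epsilon^{(l)}_j + \mu\right)
&= b\,\epsilon^{(k)}_i + a\,\epsilon^{(l)}_j + \mu
\end{align*}
whenever $\mu$ is supported away from $\epsilon^{(k)}_i$ and $\epsilon^{(l)}_j$.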
Set $\lie h_r = \lie h(\VV^n) \cap \lie g(\VV^n)_r$ and for $\lambda \in \lie
h(\VV^n)^*$ denote by $\lambda|_r$ its restriction to $\lie h_r$. Set also
$\mathcal W_r$ to be the group generated by the reflections $s_\alpha$
with $\alpha$ a root of $\lie g(\VV^n)_r$. Clearly $\mathcal W$ is the
union of the $\mathcal W_r$, and hence the direct limit of the Weyl groups of
the Lie algebras in the exhaustion $\{\lie g(\VV^n)_r\}_{r > 0}$. Furthermore,
if $\sigma \in \mathcal W_r$ and $\lambda \in \lie h(\VV^n)^*$ then
$\sigma(\lambda)|_r = \sigma(\lambda|_r)$. The group $\mathcal W$ is
isomorphic to the group of bijections of $\ZZ^\times \times \interval n$ that
leave a cofinite set fixed, and its action on $\lie h(\VV^n)^*$ is given by the
corresponding permutation of the $\epsilon_i^{(k)}$, suitably extended.
There is also an analogue of the dot action of the Weyl group for $\lie
h(\VV^n)^*$. Set $\rho = \sum_{i,k} -i \epsilon_i^{(k)}$, and set for
every $\sigma \in \mathcal W$ and every $\lambda \in \lie h(\VV^n)^*$
\begin{align*}
\sigma \cdot \lambda = \sigma(\lambda + \rho) - \rho.
\end{align*}
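For instance, for a simple root $\alpha = \epsilon^{(k)}_i -
\epsilon^{(k)}_{i+1}$ with $i \geq 1$, the relevant coefficients of $\rho$ are
$-i$ and $-(i+1)$, so adding $\rho$, reflecting and subtracting $\rho$ again
gives
\begin{align*}
s_\alpha \cdot \left(a\,\epsilon^{(k)}_i + b\,\epsilon^{(k)}_{i+1}\right)
&= (b-1)\,\epsilon^{(k)}_i + (a+1)\,\epsilon^{(k)}_{i+1},
\end{align*}
the familiar shifted reflection from the finite dimensional theory.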
Both the usual and dot action send eligible weights to eligible weights but
the dot action of the Weyl group of $\lie g(\VV^n)$ does not restrict to the
dot action of $\lie g(\VV^n)_r$. However, if we denote by
$\mathcal W(\lie s)$ and $\mathcal W(\lie s_r)$ the groups generated by
reflections associated to finite roots of $\lie g(\VV^n)$ and $\lie g(\VV^n)_r$
respectively, then $\mathcal W(\lie s)$ is the union of the $\mathcal
W(\lie s_r)$ and the dot action of $\sigma \in \mathcal W(\lie s_r)$ on $\lie
h(\VV^n)^*$ does satisfy that $(\sigma \cdot \lambda)|_r = \sigma \cdot
(\lambda|_r)$.
We define an order $<_\fin$ on weights as follows: $\mu <_\fin \lambda$
if and only if $\mu = \sigma \cdot \lambda$ for some $\sigma \in \mathcal
W(\lie s)$ and $\lambda - \mu$ is a sum of positive roots, which necessarily
will be finite. The following lemma will be used several times in the sequel. The argument originally appeared in the proof of \cite{PS19}*{Lemma 4.10}.
\begin{lem}
\label{lem:linked-weights}
Let $\lambda, \mu$ be $r$-eligible weights, with $\lambda \succeq \mu$. If
$\lambda|_s$ and $\mu|_s$ are linked for all $s \geq r$ then $\mu = \sigma
\cdot \lambda$, with $\sigma \in \mathcal W(\lie s_{r+1})$. Thus $\lambda -
\mu$ is a sum of positive finite roots of $\lie g(\VV^n)_r$, and in particular
$\mu <_{\fin} \lambda$.
\end{lem}
\begin{proof}
Denote by $\sigma_s$ the element in the Weyl group of $\lie g(\VV^n)_s$ such
that $\sigma_s \cdot \lambda|_s = \mu|_s$, and by $\rho_s$ the half-sum of the
positive roots of $\lie g_s$. If $\alpha$ is a simple root of $\lie
g(\VV^n)_s$ that is also a root of $\lie l[r]$ then $(\mu, \alpha) = 0$, so
$\sigma_s$ can only involve reflections $s_\beta$ with $\beta$ a root of $\lie
g(\VV^n)_{r+1}$. It follows then that
\begin{align*}
0 = ( \lambda|_s - \sigma_s(\mu|_s), \alpha)
&= ( \sigma_s(\rho_s) - \rho_s, \alpha)
\end{align*}
for all $s$, and this is only possible if $\sigma_s$ only involves reflections
through simple roots, i.e.\ if $\sigma_s \in \mathcal W(\lie s_{r+1})$.
\end{proof}
\section{Representation theory}
\label{s:rep-theory}
Throughout this section $\lie g$ is a Lie algebra with a fixed splitting Cartan
subalgebra $\lie h$, and $\lie k$ is a root-subalgebra of $\lie g$ containing
$\lie h$.
\subsection{Socle filtrations}
Let $M$ be a $\lie g$-module. Recall that the socle of $M$ is the largest
semisimple $\lie g$-submodule contained in $M$ and is denoted by $\soc M$. The
socle filtration of $M$ is the filtration defined inductively as follows:
$\soc^{(0)} M = 0$, and $\soc^{(n+1)} M$ is the preimage of
$\soc(M / \soc^{(n)} M)$ by the quotient map $M \to M/ \soc^{(n)} M$; in
particular $\soc^{(1)} M = \soc M$. The \emph{layers} of the filtration are the
semisimple modules $\overline{\soc}^{(n)} M = \soc^{(n)} M / \soc^{(n-1)} M$
for $n \geq 1$.
\subsection{Categories of weight modules}
Given a $\lie g$-module $M$ and $\lambda \in \lie h^*$ we denote by $M_\lambda$
the subspace of $\lie h$-eigenvectors of eigenvalue $\lambda$, and refer to the
set of all $\lambda$ with $M_\lambda \neq 0$ as the \emph{support} of $M$.
We say $M$ is a \emph{weight} module if $M = \bigoplus_\lambda M_\lambda$.
We denote by $\Mod{(\lie g, \lie h)}$ the full subcategory of $\Mod{\lie g}$
whose objects are weight modules.
The inclusion of $\Mod{(\lie g, \lie h)}$ in $\Mod{\lie g}$ is an exact functor
with right adjoint $\Gamma_{\lie h}: \Mod{\lie g} \to \Mod{(\lie g, \lie h)}$
that assigns to each module $M$ the largest $\lie h$-semisimple submodule it
contains. By standard homological algebra, this functor is left exact and sends
injective objects to injective objects and direct limits to direct limits. In
particular $\Mod{(\lie g, \lie h)}$ has enough injective objects. Given two
weight modules $M, N$ we denote by $\Hom_{\lie g, \lie h}(M,N)$ the space of
morphisms in the category of weight modules, and by $\Ext^\bullet_{\lie g,
\lie h}(N,M)$ the corresponding derived functors.
Given any $\lie g$-modules $M, N$ the space $\Hom_{\lie g}(M,N)$ is a $\lie
h$-module in a natural way, and we denote by $\ssHom_{\lie g}(M,N)$ the
subspace spanned by its $\lie h$-semisimple vectors. If $M$ is a weight
representation then a map $\phi: M \to N$ is semisimple of weight $\lambda$ if
and only if $\phi(M_\mu) \subset N_{\lambda + \mu}$ for any weight $\mu$.
\subsection{Induction and coinduction for weight modules}
The restriction functor $\Res_{\lie k}^{\lie g}: \Mod{(\lie g, \lie h)} \to
\Mod{(\lie k, \lie h)}$ has a left adjoint, given by the usual induction functor
$\Ind_{\lie k}^{\lie g}: \Mod{(\lie k, \lie h)} \to \Mod{(\lie g, \lie h)}$;
both functors are exact. We often write just $N$ instead of $\Res_{\lie
k}^{\lie g} N$. Restriction also has a right adjoint $\ssCoind_{\lie k}^{\lie
g}: \Mod{(\lie k, \lie h)} \to \Mod{(\lie g, \lie h)}$, given by
\begin{align*}
\ssCoind_{\lie k}^{\lie g} M &= \ssHom_{\lie k}(U(\lie g), M).
\end{align*}
By definition a morphism $\phi: U(\lie g) \to M$ will be in $(\ssCoind_{\lie
k}^{\lie g} M)_\mu$ if it is $\lie k$-linear and maps the weight space $U(\lie
g)_\lambda$ to $M_{\lambda + \mu}$. In particular the semisimple coinduction
of $M$ only depends on $\Gamma_{\lie h}(M)$.
Unlike usual coinduction, semisimple coinduction is not exact. However it is
left exact, sends injective objects to injective objects and direct limits to
direct limits. This follows from the fact that it is right adjoint to an
exact functor.
\subsection{Semisimple duals}
Suppose we have fixed an antiautomorphism $\tau$ of $\lie g$ which restricts to
the identity over $\lie h$ (in all our examples $\lie g$ will be $\gl(r,\CC)$
or $\gl(\infty)$, and $\tau$ will be the transposition map). Denote by $T$ the
antiautomorphism of $U(\lie g)$ induced by $\tau$ and by $\overline{\lie k}$
the image of $\lie k$ by $\tau$. If $M$ is a $\lie k$-module with structure map
$\rho: \lie k \to \End(M)$ we can define a $\overline{\lie k}$-module ${}^T M$
whose underlying vector space is $M$ and whose structure map is $\rho \circ \tau
:\overline{\lie k} \to \End(M)$. This assignation is functorial and commutes
with the functor $\Gamma_{\lie h}$. The \emph{semisimple dual} of a $\lie
k$-module $M$ is the $\overline{\lie k}$-module $M^\vee = {}^T \Gamma_{\lie h}
(M^*)$. The definition is analogous to the duality operator of category
$\mathcal O$ for finite dimensional reductive Lie algebras. Indeed, we have
$(M^\vee)_\lambda = (M_\lambda)^*$, and if $f \in M^\vee_\lambda$ then
$(u \cdot f)(m) = f(T(u)m)$ for all $u \in U(\overline{\lie k})$ and $m \in M$.
We have the following result relating semisimple duals with induction and
semisimple coinduction functors.
\begin{prop}
\label{prop:ss-dual-coind}
Let $\lie k \subset \lie g$ be a subalgebra containing $\lie h$, and let
$M$ be a weight $\lie k$-module. There is a natural isomorphism
\begin{align*}
(\Ind_{\lie k}^{\lie g} M)^\vee
&\cong \ssCoind_{\overline{\lie k}}^{\lie g} M^\vee.
\end{align*}
In particular semisimple coinduction is exact over the image of the semisimple
dual functor.
\end{prop}
\begin{proof}
By \cite{Dixmier77}*{5.5.4 Proposition} there exists a natural isomorphism
$(\Ind_{\lie k}^{\lie g} M)^* \cong \Hom_{\lie k}(U(\lie g), M^*)$. If we apply
$\Gamma_{\lie h}$ and then twist by the automorphism $T$ we get
\begin{align*}
(\Ind_{\lie k}^{\lie g} M)^\vee
&\cong \Gamma_{\lie h}\left({}^T \Hom_{\lie k}(U(\lie g), M^*)
\right).
\end{align*}
The automorphism $T$ defines an equivalence between the categories of weight
$\lie k$ and $\overline{\lie k}$-modules, so there is a natural isomorphism
$\Hom_{\lie k}(U(\lie g), M^*) \cong \Hom_{\overline{\lie k}}({}^T U(\lie g),
{}^T M^*)$. Twisting the left $\lie g$-action on this module by $T$ corresponds
to twisting the right $\lie g$ action of $U(\lie g)$ by $T$, so
\begin{align*}
{}^T \Hom_{\lie k}\left( U(\lie g), M^* \right) &\cong
\Hom_{\overline{\lie k}}\left({}^T U(\lie g)^T, {}^T M^* \right)
\cong \Hom_{\overline{\lie k}}(U(\lie g), {}^T M^*),
\end{align*}
where the last isomorphism comes from the fact that $T: U(\lie g) \to {}^T
U(\lie g)^T$ defines an isomorphism of $U(\overline{\lie k})$-$U(\lie g)$
bimodules. Thus
\begin{align*}
{}^T \Gamma_{\lie h}\left(\Hom_{\lie k}(U(\lie g), M^*) \right)
&\cong \ssHom_{\overline{\lie k}}\left(
U(\lie g), \Gamma_{\lie h}({}^T M^*)
\right)
= \ssCoind_{\overline{\lie k}}^{\lie g} M^\vee.
\end{align*}
The last statement follows from the fact that induction and semisimple duality
are exact functors.
\end{proof}
\subsection{Torsion and inflation}
Let $M$ be a $\lie g$-module. We say that $m \in M$ is $\lie
k$-\emph{integrable} if for any $g \in \lie k$ the dimension of the span of
$\{g^r m \mid r \geq 0\}$ is finite.
This happens in particular if $g^r m = 0$ for $r \gg 0$ depending on $g$, and
in this case we say that $M$ is $\lie k$-\emph{locally nilpotent}. We say that
$m$ is $\lie k$-\emph{torsion} if $g^r m = 0$ and $r$ can be chosen
independently of $g$. We say that $M$ is $\lie k$-integrable,
nilpotent, or torsion if each of its elements is.
The subspace of $\lie k$-torsion vectors of a $\lie g$-module $M$ is again a
$\lie g$-module, which we denote by $\Gamma_{\lie k}(M)$. This is a left exact
functor, as can be easily checked. Denoting by $J_r$ the right ideal generated
by $\lie k^r$ inside $U(\lie g)$, we see there exists a natural isomorphism
\begin{align*}
\Gamma_{\lie k} \cong \varinjlim \Hom_{\lie g}(U(\lie g)/J_r, -).
\end{align*}
Now suppose $\lie g = \lie k \niplus \lie r$. The projection $\lie g \to \lie
g/\lie r \cong \lie k$ induces a functor
\begin{align*}
\mathcal I_{\lie k}^{\lie g}: \Mod{(\lie k, \lie h)}
\to \Mod{(\lie g, \lie h)}.
\end{align*}
We refer to this as the \emph{inflation} functor. It can be identified with
$\ssHom_{\lie g}(U(\lie k), -)$ where $U(\lie k)$ is seen as a right $\lie
g$-module, which implies it is exact and has a left adjoint $M \mapsto U(\lie
k)\ot_{\lie g} M \cong M/U(\lie g) \lie r M$. We will write $M$ for $\mathcal
I_{\lie k}^{\lie g}(M)$ when the context makes clear that we are seeing $M$ as
a $\lie g$-module.
\subsection{Weights, gradings and parabolic subalgebras}
Given any morphism of abelian groups $\phi: \lie h^* \to A$, we can turn a
semisimple $\lie h$-module into an $A$-graded vector space by setting $M_a =
\bigoplus_{\phi(\lambda) = a} M_\lambda$. We will denote this $A$-graded vector
space by $M^\phi$, though we will often omit the superscript when it is clear
from context. The assignation $M^\phi$ is clearly functorial, and in particular
turns $\lie g$ into an $A$-graded Lie algebra, and any weight module into an
$A$-graded $\lie g$-module.
Suppose now that $A = \CC^n$ and that $\phi(\alpha) \in \ZZ^n$ for each root.
For $\chi, \xi \in \CC^n$ we write $\chi \succeq \xi$ if $\chi - \xi$ is a
non-negative integer combination of the weights $e_k - e_l$ with $k < l$, i.e.\
a sum of positive roots of $\gl(n,\CC)$. We get a decomposition $\lie g = \lie
g^\phi_{\prec 0} \oplus \lie g^\phi_{0} \oplus \lie g^\phi_{\succ 0}$. If we
have defined a set of positive roots $\Phi^+$ along with a corresponding Borel
subalgebra $\lie b$, and if $\phi(\alpha) \succeq 0$ for every positive root,
then the subalgebra $\lie g^\phi_{\succeq 0}$ is a parabolic subalgebra of
$\lie g$.
\begin{ex}
\label{ex:gradings}
Fix $n \in \ZZ_{>0}$ and set $\lie g = \lie g(\VV^n)$.
\begin{enumerate}
\item Set $\lie h^\circ = \lie h(\VV^n)^\circ$. Consider the
$\CC$-linear map $\phi: \lie h(\VV^n)^\circ \to \CC^n$ given by
$\phi(\epsilon_i^{(k)}) = \phi(\omega^{(k)}) = e_k$, and extend to $\lie h^*$
arbitrarily. Since roots are mapped to vectors in $\ZZ^n$ this map induces a
$\ZZ^n$-grading on $\lie g$, which coincides with the one introduced in
subsection \ref{ss:vn}. The corresponding parabolic subalgebra is $\lie g_{\succeq 0}^\phi
= \lie p$ with $\lie g^\phi_0 = \lie l$ and $\lie g^\phi_{\succ 0} = \lie u$.
\item Fix $r > 0$ and denote by $\rho_r$ the sum of all positive roots of $\lie
s_r$. Set $\theta_r: \lie h^\circ \to \ZZ$ to be the map $\alpha \mapsto
(\alpha, \rho_r) + (\phi(\alpha), \rho_n)$, where we are seeing $\rho_n$ as a
vector of $\CC^n$ in the usual way. The corresponding parabolic subalgebra is
$\lie p(r)$ with $\lie g^{\theta_r}_0 = \lie l[r]^+$ and $\lie g^{\theta_
r}_{>0} = \lie u(r)$.
\item Let $\psi: \lie h(\VV^n)^\circ \to \CC$ be the map $\psi(\epsilon^{(k)}_i)
= \frac12(n + 1 - 2k + \mathsf{sg}(i))$ (here $\mathsf{sg}(i)$ is $1$ if $i$ is
a positive integer and $-1$ if it is negative) and $\psi(\omega^{(k)})= 0$.
This formula has the nice property that if $\alpha$ is a finite root then
$\psi(\alpha) = 0$, while the infinite roots $\epsilon^{(k)}_{1} -
\epsilon^{(k)}_{-1}$ are mapped by $\psi$ to $1$. The corresponding parabolic
subalgebra is $\lie q$ with $\lie g^\psi_0 = \lie s$ and $\lie g^\psi_{>0} =
\lie m$.
\end{enumerate}
These maps and the induced gradings will reappear throughout the rest of the
article, so we fix the notations $\phi$, $\theta_r$ and $\psi$ for them.
\end{ex}
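As a quick sanity check on $\psi$, take $n = 2$; then
\begin{align*}
\psi(\epsilon^{(1)}_i) &= \tfrac12\left(1 + \mathsf{sg}(i)\right), &
\psi(\epsilon^{(2)}_i) &= \tfrac12\left(-1 + \mathsf{sg}(i)\right),
\end{align*}
so $\psi(\epsilon^{(1)}_i)$ is $1$ or $0$ and $\psi(\epsilon^{(2)}_i)$ is $0$
or $-1$ according to the sign of $i$. The simple root $\epsilon^{(1)}_{-1} -
\epsilon^{(2)}_{1}$ is sent to $0 - 0 = 0$, as is every finite root, while
$\psi(\epsilon^{(k)}_{1} - \epsilon^{(k)}_{-1}) = 1$ for $k = 1, 2$.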
\subsection{Local composition series}
In general it is hard to establish a priori whether a $\lie g$-module
has a composition series. For this reason we introduce the following notion
taken from \cite{DGK82}*{Proposition 3.2}.
\begin{defn}
Let $M$ be a $\lie g$-module and let $\lambda \in \lie h^*$. We say that $M$
has a \emph{local composition series at $\lambda$} if there exist a finite
filtration
\begin{align*}
M = F_0M \supset F_1M \supset \cdots \supset F_tM = \{0\}
\end{align*}
and a finite set $J \subset \{0, 1, \cdots, t-1\}$ such that
\begin{enumerate}[(i)]
\item if $j \in J$ then $F_{j} M / F_{j+1}M \cong L(\lambda_j)$ for some
$\lambda_j \succeq \lambda$;
\item if $j \notin J$ and $\mu \succeq \lambda$ then $(F_{j} M / F_{j+1}M)_\mu
= 0$.
\end{enumerate}
We say that $M$ has local composition series, or LCS for short, if it has a
local composition series at all $\lambda \in \lie h^*$.
\end{defn}
It follows from the definition that a local composition series at $\lambda$
induces a local composition series at $\lambda'$ for any $\lambda' \succeq
\lambda$, possibly with a different set $J$. A standard argument shows that if
$M$ has two local composition series at $\lambda$ then the multiplicities of
any simple object $L(\mu)$ with $\mu \succeq \lambda$ in each series coincide. If $M$ has LCS then we
denote by $[M:L(\mu)]$ this common multiplicity. The class of modules having
LCS is closed under submodules, quotients and extensions, and multiplicity is
additive for extensions.
\subsection{Finite dimensional representations of $\gl(\infty)$}
For each $a \in \CC$ the algebra $\gl(\infty)$ has a one-dimensional
representation $\CC_a$, where $g \in \gl(\infty)$ acts by multiplication by
$a\tr(g)$; in particular $\CC_a$ is a trivial $\sl(\infty)$-module. These are
the only simple finite dimensional representations of $\gl(\infty)$, and any
finite dimensional weight representation is a finite direct sum of these.
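That each $\CC_a$ is indeed a representation follows from the vanishing of the
trace on commutators: for all $g, h \in \gl(\infty)$ and $v \in \CC_a$,
\begin{align*}
[g,h] \cdot v &= a \tr([g,h])\, v = 0
= g \cdot (h \cdot v) - h \cdot (g \cdot v).
\end{align*}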
In order to find more interesting representations, we need to look for infinite
dimensional modules. Finitely generated modules of $\gl(\infty)$ do not form
an abelian category, since $U(\gl(\infty))$ is not left-noetherian, so we need
to look for an alternative notion of a ``small'' $\gl(\infty)$-module.
\subsection{The large annihilator condition}
Let $\lie k \subset \gl(\infty)$ be a subalgebra, let $M$ be a
$\gl(\infty)$-module and let $m \in M$. We say that $m$ satisfies the
\emph{large annihilator condition} (LAC from now on) with respect to $\lie k$
if there exists a finite dimensional subalgebra $\lie t \subset \lie k$ such
that $[\lie k^{\lie t},\lie k^{\lie t}]$, the derived subalgebra of
the centraliser of $\lie t$ in $\lie k$, acts trivially on $\CC m$. In other
words, the annihilator of $m$ contains the ``large'' subalgebra $[\lie
k^{\lie t},\lie k^{\lie t}]$. We say that $M$ satisfies the LAC if every vector
in $M$ satisfies the LAC.
The adjoint representation of $\gl(\infty)$ satisfies the LAC with respect to
itself, and hence with respect to any other subalgebra $\lie k$. If $m \in M$
satisfies the LAC with respect to $\lie k$ then the $\gl(\infty)$-module
generated by $m$ also satisfies the LAC. The tensor product of two
representations satisfying the LAC again satisfies the LAC.
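For the adjoint representation the first claim is a direct computation: given
$m = E_{i,j}$, take $\lie t = \lie g(\VV)_r$ with $r \geq |i|, |j|$; then
$\gl(\infty)^{\lie t} = \lie g(\VV)[r]$ and
\begin{align*}
[\lie g(\VV)[r], E_{i,j}] &= 0
\end{align*}
because the index sets are disjoint, so in particular the derived subalgebra of
$\gl(\infty)^{\lie t}$ annihilates $m$. A general element of $\gl(\infty)$ is a
finite sum of matrix units, and one takes $r$ large enough to cover all the
indices involved.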
Denote by $\Mod{(\gl(\infty), \lie h)}_{\mathsf{LA}}^{\lie k}$ the full
subcategory of modules satisfying the LAC with respect to $\lie k$. The natural
inclusion of this category in $\Mod{(\gl(\infty), \lie h)}$ has a right
adjoint, which we denote by $\Phi_{\lie k}$, or just $\Phi$ for simplicity.
Being right adjoint to an exact functor, this functor is left exact and sends
direct limits to direct limits and injective objects to injective objects.
Let us consider the case $\lie k = \gl(\infty)$ in more detail. For simplicity
we fix the exhaustion $\lie g(\VV)_r$ associated to the order $I = \ZZ^\times$.
If $M$ satisfies the large annihilator condition and $m \in M$, the finite
dimensional Lie algebra $\lie t$ is contained in $\lie g(\VV)_r$ for some $r
\gg 0$, so we might as well take $\lie t = \lie g(\VV)_r$. Now the algebra
$\gl(\infty)^{\lie t}$ is the subalgebra we denoted by $\lie g(\VV)[r]$ and is
isomorphic to $\gl(\infty)$. Thus $\CC m$ is a $1$-dimensional representation
of $\lie g(\VV)[r]$ and hence isomorphic as such to $\CC_a$ for some $a \in
\CC$.
\subsection{The large annihilator condition over $\lie l$}
Recall we have set $\lie l = \lie g_0$ for the $\ZZ^n$-grading introduced in
subsection \ref{ss:vn}. In the sequel we will mostly consider modules
satisfying the large annihilator condition with respect to $\lie l$, so we now
focus on this condition. Since $\lie l = \lie g(\VV^{(1)}) \oplus \cdots \oplus
\lie g(\VV^{(n)})$, its one-dimensional representations are external tensor
products of the form $\CC_{a_1} \boxtimes \cdots \boxtimes \CC_{a_n}$, which we
denote by $\CC_\chi$ with $\chi = (a_1, \ldots, a_n) \in \CC^n$.
A vector $m \in M$ satisfying the LAC with respect to $\lie l$ must span a
one-dimensional module over $\lie l[r] \cong \lie l$, for some $r \gg 0$. This
forces the weight of $m$ to be $r$-eligible and hence in $\lie h(\VV^n)^\circ$.
In other words, the weight must be of the form
\begin{align*}
\sum_{k=1}^n a_k \omega^{(k)} +
\sum_{k=1}^n\sum_{i\in \pm \interval r} b_i^{(k)} \epsilon_i^{(k)}.
\end{align*}
We call the $n$-tuple $\chi$ the \emph{level} of $m$, or rather of its weight.
Any vector in the module spanned by $m$ has the same level as $m$, and denoting
by $M^\chi$ the space of all vectors of level $\chi$ we see that $M =
\bigoplus_{\chi \in \CC^n} M^\chi$. Thus every vector in an indecomposable
module $M$ satisfying the LAC with respect to $\lie l$ must have the same
level, and we define this to be the level of $M$.
For each $r \geq 1$ we set $\Phi_r : \Mod{(\lie g, \lie h)} \to \Mod{(\lie g_r,
\lie h_r)}$ to be the functor that assigns to a module $M$ the space of
invariants $M^{\lie l[r]'}$, where $\lie l[r]' = [\lie l[r], \lie l[r]]$. If $m \in M_\lambda$ spans a trivial $\lie
l[r]'$-module then $\CC m \cong \CC_\lambda$ as $\lie l[r]^+$-modules. We can
use this observation to get natural isomorphisms of functors
\begin{align*}
\Phi_r &\cong \bigoplus_{\lambda}
\Hom_{\lie g} (U(\lie g) \ot_{\lie l[r]^+} \CC_\lambda, -); &
\Phi &\cong \bigoplus_{\lambda \in \lie h^\circ} \varinjlim
\Hom_{\lie g} (U(\lie g) \ot_{\lie l[r]^+} \CC_\lambda, -),
\end{align*}
where the first sum is taken over the space of all $r$-eligible weights.
\section{Some categories of representations of $\gl(\infty)$}
\label{s:cats-reps}
Fix $n \in \NN$ and set $\lie g = \lie g(\VV^n)$. We omit the symbol $\VV^n$
from this point on and simply write $\lie b, \lie s, \lie g_r$, etc.
for the subalgebras introduced in Subsection \ref{ss:subalgebras-vn}.
\subsection{Category $\overline{\mathcal O}$ for Dynkin Borel algebras}
\label{ss:nampaisarn}
In this subsection we review some results from Nampaisarn's thesis
\cite{Nampaisarn17}, where he introduces and studies category
$\overline{\mathcal O}$. We will state his results for the Dynkin
subalgebra $\lie s \subset \lie g$ since this will be the only example we will
need. We set $\lie b_{\lie s} = \lie b \cap \lie s$ and $\lie n_{\lie s} =
[\lie b_{\lie s}, \lie b_{\lie s}]$. For each $r \in \ZZ_{>0}$ we set $\lie s_r
= \lie s \cap \lie g_r$, which is the Lie subalgebra of $\lie g_r$ spanned by
finite roots, and is isomorphic to $\gl(r,\CC)^2 \oplus \gl(2r, \CC)^{n-1}$,
in particular it is reductive.
\begin{defn}
An $\lie s$-module $M$ lies in $\overline{\mathcal O} = \overline{\mathcal
O}^{\lie s}_{\lie b_{\lie s}}$ if it is $\lie h$-semisimple, $\lie n_{\lie
s}$-torsion, and $\dim M_\lambda < \infty$ for each $\lambda \in \lie h^*$.
\end{defn}
The definition mimics that of $\mathcal O$ for finite dimensional reductive
algebras. It makes sense for arbitrary locally reductive algebras with a
fixed Borel subalgebra and many results from category $\mathcal O_{\lie s_r}$
extend to $\overline{\mathcal O}$ in a straightforward manner. On the other hand
the category lacks several obvious modules, for example the adjoint
representation; if we waive the Dynkin hypothesis then Verma modules are also
excluded. Still, it contains many interesting objects, and turns out
to be an important stepping stone in the study of further categories of
representations.
Since $\lie b_{\lie s}$ is a Dynkin Borel subalgebra, given $\lambda \in \lie
h^*$ the Verma module $M_{\lie s}(\lambda)$ and its unique simple quotient
$L_{\lie s}(\lambda)$ belong to $\overline{\mathcal O}$. The category is also
closed under semisimple duals, so the dual Verma module $M_{\lie s}(\lambda)^\vee$
also belongs to $\overline{\mathcal O}$. A weight $\lambda \in \lie h^*$ is \emph{integral}
if $(\lambda, \alpha) \in \ZZ$ for any simple root $\alpha$ of $\lie s$,
\emph{dominant} if $(\lambda + \rho, \alpha) \notin \ZZ_{<0}$, and \emph{almost
dominant} if $(\lambda + \rho, \alpha) \in \ZZ_{<0}$ for only finitely many
simple roots $\alpha$.
\begin{thm}[\cite{Nampaisarn17}*{Theorem 6.7, Proposition 8.11}]
\label{thm:dynkin-verma}
Let $\lambda, \mu \in \lie h^*$.
\begin{enumerate}[(a)]
\item If $M(\mu) \subset M(\lambda)$ then $\lambda \succeq \mu$ and there
exists $\sigma \in \mathcal W(\lie s)$ such that $\mu = \sigma \cdot \lambda$.
\item If $M$ is a highest weight module with highest weight vector $m \in
M_{\lambda}$ then for $r \gg 0$
\begin{align*}
[M: L_{\lie s}(\mu)] &= [U(\lie s_r) m: L_{\lie s_r}(\mu|_r)].
\end{align*}
\end{enumerate}
\end{thm}
It follows that $m(\lambda, \mu) = [M_{\lie s}(\lambda): L_{\lie s}(\mu)]$
coincides with the Kazhdan-Lusztig multiplicity of the corresponding Verma
modules for the finite-dimensional reductive algebra $\lie s_r$ with $r \gg 0$.
Let $\lambda$ be a dominant weight and denote by $\overline{\mathcal O}
[\lambda]$ the full subcategory of modules in $\overline{\mathcal O}$ whose
support is contained in $\{\lambda - \mu \mid \mu \succeq 0\}$. It follows from
the previous theorem that $\overline{\mathcal O}[\lambda]$ is a block of
category $\overline{\mathcal O}$. By \cite{Nampaisarn17}*{Theorem 9.9} this
block has enough injectives, and by \cite{Nampaisarn17}*{Proposition 9.21}
these injective objects have finite filtrations by dual Verma modules (notice
Nampaisarn refers to dual Verma modules as costandard modules), and their multiplicities are given by BGG-reciprocity. We put this down as a theorem for
future reference.
\begin{thm}[\cite{Nampaisarn17}*{Theorem 9.9, Proposition 9.21}]
\label{thm:inj-filtration}
Let $\lambda$ be an almost dominant weight. Then $L_{\lie s}(\lambda)$ has an
injective envelope $I_{\lie s}(\lambda)$ in $\overline{\mathcal O}$. The
injective envelope has a finite filtration whose layers are modules of the form
$M(\lambda^{(k)})^\vee$ with $k = 0, \ldots, m$, and such that $\lambda^{(0)} =
\lambda$ and $\lambda^{(k)} >_\fin \lambda$ for $k \geq 1$. Furthermore, the multiplicity of
$M(\mu)^\vee$ in this filtration equals $m(\lambda, \mu)$.
\end{thm}
\subsection{The categories $\TT_{\lie g}$ and $\TT_{\lie l}$}
We now return to our study of representations of $\gl(\infty)$. Throughout this
section we identify $\gl(\infty)$ with $\lie g(\VV)$, and so $\VV$ and $\VV_*$
are $\gl(\infty)$-modules.
A tensor module of $\gl(\infty)$ is a subquotient of a finite direct sum of
modules of the form $\VV^{\ot p} \ot \VV_*^{\ot q}$ for $p,q \in \NN_0$.
Tensor modules were first studied by Penkov and Styrkas in \cite{PS11b} and
later by Dan-Cohen, Penkov and Serganova in \cite{DCPS16}; in \cite{SS15} Sam and Snowden study the equivalent category $\operatorname{Rep}^{st}
\mathbf{GL}(\infty)$. It follows from \cite{DCPS16}*{section 4} that a module
is a tensor module if and only if it is an integrable module of finite length
satisfying the LAC with respect to $\gl(\infty)$ and of level $0$. This
category is denoted by $\TT_{\gl(\infty)}^0$.
The simple tensor modules are parametrised by pairs of partitions $(\lambda,
\mu)$. The corresponding simple module, which we denote by $L(\lambda, \mu)$,
is the simple highest weight module with respect to the Borel subalgebra $\lie
b(\VV)$ with highest weight $\lambda_1 \epsilon_1 + \cdots +\lambda_p
\epsilon_p - \mu_q \epsilon_{-q} - \cdots - \mu_1 \epsilon_{-1}$, where $p$ and
$q$ are the lengths of $\lambda$ and $\mu$. The module
$I(\lambda, \mu) =
\Schur_\lambda(\VV) \ot \Schur_\mu(\VV_*)$ belongs to $\TT_{\gl(\infty)}^0$ and
is injective in this category \cite{DCPS16}*{Corollary 4.6}. The layers of its
socle filtration are given by \cite{PS11b}*{Theorem 2.3}
\begin{align*}
\overline{\soc}^{(r+1)} I(\lambda,\mu)
&= \bigoplus_{\gamma \vdash r, \lambda',\mu'}
c^{\lambda}_{\lambda',\gamma}c^{\mu}_{\mu',\gamma}
L(\lambda',\mu').
\end{align*}
In particular $I(\lambda, \mu)$ is the injective envelope of $L(\lambda,
\mu)$ in $\TT_{\gl(\infty)}^0$.
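As a sanity check of the formula, take $\lambda = \mu = (1)$, so that
$I(\lambda, \mu) = \VV \ot \VV_*$. The only nonzero layers are
\begin{align*}
\overline{\soc}^{(1)} I((1),(1)) &= L((1),(1)), &
\overline{\soc}^{(2)} I((1),(1)) &= L(\emptyset,\emptyset) = \CC,
\end{align*}
coming from $\gamma = \emptyset$ and $\gamma = (1)$ respectively. This matches
the exact sequence $0 \to \sl(\infty) \to \gl(\infty) \to \CC \to 0$ obtained
by identifying $\VV \ot \VV_*$ with the adjoint representation, whose socle is
$\sl(\infty) = L((1),(1))$.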
\begin{lem}
\label{lem:tensor-fg-fl}
Let $M$ be an integrable weight $\gl(\infty)$-module satisfying the large
annihilator condition with respect to $\gl(\infty)$. Then $M$ has finite length
if and only if $M$ is finitely generated.
\end{lem}
\begin{proof}
We will use the isomorphism $\gl(\infty) \cong \lie g(\VV)$ to see $M$ as a
$\lie g(\VV)$-module. If $M$ has finite length then it is finitely generated.
To prove the reverse implication it is enough to prove it when $M$ is cyclic,
so we assume that $M$ is generated by $v \in M_\lambda^a$ for $\lambda \in
\lie h^*$ and $a \in \CC$.
Take $r \gg 0$ so $v$ spans a $1$-dimensional $\lie g(\VV)[r]$-module
isomorphic to $\CC_a$. Take $\lie r$ to be the locally nilpotent ideal of
$\lie g(\VV)[r] + \lie b$, and $\overline{\lie r}$ to be the opposite ideal, so
$\gl(\infty) = \lie r \oplus \lie g(\VV)[r]^+ \oplus \overline{\lie r}$, where
$\lie g(\VV)[r]^+ = \lie g(\VV)[r] + \lie h$.
\begin{align*}
\begin{tikzpicture}
\draw
(0,0) rectangle (2,2);
\filldraw[color=black!20!white, draw=none]
(0,2) -- (0.4,1.6) -- (0.4,0.4) -- (1.6,0.4) -- (2,0) -- (0,0) --
cycle;
\draw (1,-0.5) node {$\overline{\lie r}$};
\end{tikzpicture}
&&
\begin{tikzpicture}
\draw
(0,0) rectangle (2,2);
\filldraw[color=black!20!white, draw=none]
(0.4,0.4) rectangle (1.6,1.6);
\draw (0,2) -- (0.4,1.6);
\draw (2,0) -- (1.6,0.4);
\draw (1,-0.5) node {$\lie g(\VV)[r]^+$};
\end{tikzpicture}
&&
\begin{tikzpicture}
\draw
(0,0) rectangle (2,2);
\filldraw[color=black!20!white, draw=none]
(0,2) -- (0.4,1.6) -- (1.6,1.6) -- (1.6,0.4) -- (2,0) -- (2,2) --
cycle;
\draw (1,-0.5) node {$\lie r$};
\end{tikzpicture}
\end{align*}
As $\lie g(\VV)[r]$-module, $M$ is isomorphic to a quotient of $\SS^{\bullet}
(\overline{\lie r}) \ot \SS^{\bullet}(\lie r) \ot \CC_a$, while $\lie r$ and
$\overline{\lie r}$ decompose as $\VV[r]^r \oplus \VV_*[r]^r \oplus
\CC^{2r^2-r}$. By \cite{DCPS16}*{Lemma 4.1} there exists a finite dimensional
subspace $X \subset \lie r$ such that $\SS^p(\lie r)$ is generated over $\lie
g(\VV)[r]$ by $\SS^p(X)$ for all $p \geq 0$. On the other hand, since
$M$ is integrable every element of $X$ acts nilpotently on $M$, and so
$\SS^p(X)m = 0$, and hence $\SS^p(\lie r)m = 0$, for $p \gg 0$. By analogous
reasoning, $\SS^p(\overline{\lie r})m = 0$ for $p \gg 0$.
It follows that $M$ is in fact isomorphic as $\lie g(\VV)[r]$-module to a
quotient of $\SS^{\leq p}(\overline{\lie r}) \ot \SS^{\leq p}(\lie r) \ot
\CC_a$ for $p \gg 0$. Since $\SS^{\leq p} (\overline{\lie r}) \ot \SS^{\leq p}
(\lie r)$ is a tensor $\lie g(\VV)[r]$-module it has finite length as $\lie
g(\VV)[r]$-module. Thus $M$ has finite length over $\lie g(\VV)[r]$ and hence
over $\lie g(\VV)$.
\end{proof}
We now turn back to the case $\lie g = \lie g(\VV^n)$.
We say that an $\lie l$-module is a \emph{tensor module} if it is a subquotient
of the tensor algebra $T(\VV^{(1)} \oplus \VV^{(1)}_* \oplus \cdots \oplus
\VV^{(n)} \oplus \VV^{(n)}_*)$. An indecomposable tensor module is isomorphic
to an external tensor product $M_1 \boxtimes M_2 \boxtimes \cdots \boxtimes M_n$
with each $M_i$ a tensor $\lie g(\VV^{(k)})$-module. Simple tensor modules are
parametrised by pairs $(\boldsymbol \lambda, \boldsymbol \mu)$ with $\boldsymbol
\lambda = (\lambda_1, \ldots, \lambda_n)$ and $\boldsymbol \mu = (\mu_1,
\ldots, \mu_n)$ $n$-tuples of partitions, and
\begin{align*}
L(\boldsymbol \lambda, \boldsymbol \mu)
&\cong L(\lambda_1, \mu_1) \boxtimes L(\lambda_2, \mu_2)
\boxtimes \cdots \boxtimes L(\lambda_n, \mu_n).
\end{align*}
Tensor $\lie l$-modules have finite filtrations whose layers are simple tensor
modules, and hence they have finite length. It is also clear they are
integrable and satisfy the LAC by the characterisation of $\gl(\infty)$ tensor
modules given above.
\begin{defn}
The category $\TT_{\lie l}$ is the full subcategory of integrable $\lie
g$-modules of finite length satisfying the LAC with respect to $\lie l$. For each $\chi \in \CC^n$ we
define $\TT_{\lie l}^\chi$ to be the subcategory of $\TT_{\lie l}$ formed by
modules of level $\chi$.
\end{defn}
Tensor $\lie l$-modules belong to $\TT^0_{\lie l}$. Each $\TT_{\lie l}^\chi$ is
a block of $\TT_{\lie l}$. Also $\CC_\chi \in \TT^\chi_{\lie l}$, and tensoring
with $\CC_{\chi}$ gives an equivalence between $\TT_{\lie l}^0$ and $\TT_{\lie
l}^\chi$.
\begin{prop}
Let $M$ be an object of $\TT_{\lie l}$.
\begin{enumerate}[(a)]
\item If $M$ is simple then it is isomorphic to $\CC_\chi \ot
L(\boldsymbol \lambda, \boldsymbol \mu)$ for $\boldsymbol \lambda, \boldsymbol
\mu \in \Part^n$.
\item If $M$ is a highest weight module with highest weight $\lambda$ and
$\alpha$ is a positive root of $\lie l$ then $(\lambda, \alpha) \in \ZZ_{\geq
0}$.
\end{enumerate}
\end{prop}
\begin{proof}
Since tensoring with a fixed one-dimensional module gives an equivalence
between blocks of $\TT_{\lie l}$, to prove $(a)$ it is enough to prove that
a simple object $L$ in $\TT^0_{\lie l}$ is a simple tensor module.
Take a nonzero $v \in L$, and set $L_k$ to be the $\lie g(\VV^{(k)})$-submodule of $L$
spanned by $v$. By definition $L_k$ is integrable and satisfies the LAC
with respect to $\lie g(\VV^{(k)})$, and so by Lemma \ref{lem:tensor-fg-fl}
it is a tensor $\lie g(\VV^{(k)})$-module. Since the $\lie g(\VV^{(k)})$ commute
with each other inside $\lie l$, there is a surjective map $L_1 \boxtimes L_2
\boxtimes \cdots \boxtimes L_n \to L$, so $L$ is a quotient of a tensor module,
and hence a simple tensor module. To prove $(b)$, notice that if $M$ is a
highest weight module in $\TT_{\lie l}$ then its unique simple quotient also
lies in $\TT_{\lie l}$ and so it must be of the form $\CC_\chi \otimes L$ with
$L$ a simple tensor module. The statement is easily proved for the highest
weight of this simple module.
\end{proof}
\subsection{The category $\tilde \TT_{\lie l[r]}$}
Recall that
weights in $\lie h^\circ$ are called eligible weights, and a weight is
$r$-eligible if it is in the span of $\{\omega^{(k)}, \epsilon_i^{(k)} \mid k
\in \interval n, i \in \pm \interval r\}$.
Let $\lambda \in \lie h^*$. We denote by $\mathcal D(\lambda)$ the set of all
weights $\lambda - \mu$ with $\mu \succeq 0$. Let $M$ be a weight $\lie
g$-module. A weight $\lambda$ is said to be \emph{extremal} if $M_\lambda \neq
0$ and $M_\mu = 0$ for all $\mu \succ \lambda$. If a module $M$ has finitely
many extremal weights $\lambda_1, \ldots, \lambda_k$ then its support is
contained in the union of the $\mathcal D(\lambda_i)$.
\begin{defn}
\label{defn:t-tilde}
The category $\tilde \TT_{\lie l[r]}$ is the full subcategory of $\Mod{(\lie
l[r]^+, \lie h)}$ whose objects are modules $M$ satisfying the following
conditions.
\begin{enumerate}[(i)]
\item $M$ has finitely many extremal weights.
\item For each $\mu \in \lie h^*$ the $\lie l[r]$-module generated
by $M_{\succeq \mu} = \bigoplus_{\nu \succeq \mu} M_\nu$ lies in
$\TT_{\lie l[r]}$.
\end{enumerate}
\end{defn}
The following lemma is an easy consequence of the definition.
\begin{lem}
\label{lem:locally-tensor}
The category $\tilde \TT_{\lie l[r]}$ is closed under direct summands, finite
direct sums and finite tensor products.
\end{lem}
\begin{ex}
\label{ex:vkl}
We have decompositions
\begin{align*}
\VV^{(k)}
&=V^{(k)}_{-,r} \oplus \VV^{(k)}[r] \oplus V^{(k)}_{+,r}
& \VV^{(k)}_*
&= \left(V^{(k)}_{-,r}\right)^* \oplus \VV^{(k)}_*[r] \oplus
\left(V^{(k)}_{+,r}\right)^*
\end{align*}
where $V^{(k)}_{\pm,r} = \langle v_i \ot e_k \mid i \in \pm \interval r
\rangle$. Each of these modules is a tensor $\lie l[r]$-module: $\VV^{(k)}[r]$
is the natural representation of $\lie g(\VV^{(k)}[r])$, while $\VV^{(l)}_*[r]$
is the conatural representation of $\lie g(\VV^{(l)}[r])$, and the other four
spaces are finite direct sums of $1$-dimensional $\lie h$-modules with trivial
$\lie l[r]$ action. Since tensor modules are closed by finite direct sums, it
follows that both are tensor modules over $\lie l[r]$.
Now consider $\VV^{(k,l)} = \VV^{(k)} \ot \VV^{(l)}_*$. From the previous
example we get a decomposition of this space into nine summands, illustrated in
the picture below.
\begin{align*}
\begin{tikzpicture}
\draw
(0,1.3) -- (5,1.3) (0,3.7) -- (5,3.7) (1.3,0) -- (1.3,5)
(3.7,0) -- (3.7,5);
\filldraw [pattern=dots,draw=none]
(0,0) rectangle (1.3,1.3) (0,3.7) rectangle (1.3,5)
(3.7,0) rectangle (5,1.3) (3.7,3.7) rectangle (5,5);
\draw (2.5,0.7) node {$\VV_*^{(l)}[r]^r$};
\draw (2.5,4.3) node {$\VV_*^{(l)}[r]^r$};
\draw (0.5,2.5) node {$\VV^{(k)}[r]^r$};
\draw (4.5,2.5) node {$\VV^{(k)}[r]^r$};
\draw (2.5,2.5) node {$\VV^{(k,l)}[r]$};
\node [below right, text width=6cm,align=justify] at (6,5)
{$\VV^{(k)} \ot \VV^{(l)}_*$ decomposes as $\lie l[r]$-module as a direct
sum of:
\begin{itemize}
\item[-] $\VV^{(k,l)}[r] = \VV^{(k)}[r] \ot \VV^{(l)}_*[r]$;
\item[-] $2r$ copies of $\VV^{(k)}[r]$;
\item[-] $2r$ copies of $\VV^{(l)}_*[r]$;
\item[-] a trivial module of dimension $4r^2$.
\end{itemize}
};
\end{tikzpicture}
\end{align*}
The fact that $\TT_{\lie l[r]}$ is closed under direct sums, direct summands and
tensor products implies that $\SS^p(\VV^{(k,l)})$ is again in $\TT_{\lie l[r]}$.
We claim that if $k > l$ then $\SS^\bullet(\VV^{(k,l)})$ is in $\tilde
\TT_{\lie l[r]}$. Indeed, its support is contained in $\mathcal D(0)$, and
furthermore for each $\lambda \in \lie h^\circ$ there is an $N \geq 0$,
depending only on $\phi(\lambda)$, such that $\SS^\bullet
(\VV^{(k,l)})_{\succeq \lambda} \subset \bigoplus_{p=0}^{N}
\SS^{p}(\VV^{(k,l)})$, which lies in $\TT_{\lie l[r]}$; here $\phi$ is the map
introduced in Example \ref{ex:gradings}. On the other hand if $k\leq l$ then
the symmetric algebra is not in $\tilde \TT_{\lie l[r]}$, as it has no extremal
weights.
\end{ex}
We will mostly be interested in studying $\lie g$-modules whose restriction to
$\lie l[r]^+$ lies in $\tilde \TT_{\lie l[r]}$. We now show that such a module
has LCS. The statement and proof are similar to \cite{DGK82}*{Proposition 3.2},
but we replace the dimension of $M_{\succeq \lambda}$, which could be infinite
in our case, with the length of the $\lie l[r]$-module it spans.
\begin{prop}
\label{prop:lcs}
Let $M$ be a $\lie g$-module lying in $\tilde \TT_{\lie l[r]}$. Then $M$ has
LCS.
\end{prop}
\begin{proof}
Fix $\mu$ in the support of $M$ and denote by $N(\mu)$ the $\lie l[r]$-submodule
spanned by $M_{\succeq \mu}$. The proof proceeds by induction on the length of
$N(\mu)$, which we denote by $\ell$. If $\ell = 0$ then $0 \subset M$ is the
desired composition series. In any other case we have $\mu \preceq \lambda$ for
some $\lambda$ which is maximal in the support of $M$. Taking a nonzero $v \in M_\lambda$
we see that $V = U(\lie g) v$ is a highest weight module and hence has a unique
maximal submodule $V'$. We then have a filtration
\begin{align*}
0 \subset V' \subset V \subset M.
\end{align*}
Since $v \in N(\mu)$ by hypothesis, $N'(\mu) = N(\mu) \cap V' \subsetneq N(\mu)$
has length strictly less than $\ell$. Using this module and the induction
hypothesis we know there exists an LCS at $\mu$ for $V'$. The same applies to
$N(\mu)/(N(\mu) \cap V) \subseteq M/V$, so $M/V$ also has an LCS at $\mu$. Since
the remaining layer $V/V'$ is simple with highest weight $\lambda \succeq \mu$,
these LCS can be used to refine the filtration $V' \subset V$ into an LCS of
$M$ at $\mu$.
\end{proof}
\subsection{The symmetric algebras of parabolic radicals}
\label{ss:sym-algs}
Recall that we have introduced for each $r \geq 0$ the subalgebra $\lie p(r)
= \lie l[r] + \lie b$. This subalgebra has a Levi-type decomposition $\lie p(r)
= \lie l[r]^+ \niplus \lie u(r)$, which in turn induces a decomposition
$\lie g = \overline{\lie u}(r) \oplus \lie l[r]^+ \oplus \lie u(r)$.
The pictures of these subalgebras are given in subsection \ref{ss:visual-rep}.
We will now study the structure of $\SS^\bullet(\overline{\lie u}(r))$ as
$\lie l[r]^+$-module, as this will be essential in the sequel.
Using the decomposition of $\VV^{(k,l)}$ given in
Example \ref{ex:vkl} it is easy to see that $\overline{\lie u}(r)$ is isomorphic
as $\lie l[r]$-module to
\begin{align*}
Z \oplus
\left(\bigoplus_{k=1}^n \VV^{(k)}[r] \ot Y_k\right) \oplus
\left(\bigoplus_{l=1}^n X_l \ot \VV^{(l)}_*[r]\right) \oplus
\left(\bigoplus_{n \geq k>l \geq 1} \VV^{(k)}[r] \ot\VV^{(l)}_*[r]
\right)
\end{align*}
for some trivial $\lie l[r]$-modules $X_l, Y_k, Z$, which we will study in more
detail.
The above decomposition of $\overline{\lie u}(r)$ is in fact a decomposition
into indecomposable $\overline{\lie b}_r \oplus \lie l[r]$-modules, where
$\overline{\lie b}_r = \overline{\lie b} \cap \lie g_r$. Indeed, $Z$
coincides with the subalgebra $\overline{\lie n}_r = \overline{\lie n} \cap
\lie g_r$. The modules $X_l$ and $Y_k$ can be described recursively as
\begin{align*}
X_{n} &= V_{-,r}^{(n)}
& X_{k-1} &= X_k \oplus V_{+,r}^{(k)} \oplus V_{-,r}^{(k-1)}; \\
Y_{1} &= (V_{+,r}^{(1)})^*
& Y_{k+1} &= Y_k \oplus (V^{(k)}_{-,r})^* \oplus (V^{(k+1)}_{+,r})^*.
\end{align*}
It follows from this that $Y_k$ is isomorphic to the $\overline{\lie
b}_r$-submodule of the conatural representation of $\lie g_r$ spanned by the
unique vector of weight $-\epsilon^{(k)}_r$, and $X_l$ is isomorphic to the
$\overline{\lie b}_r$-submodule of the natural representation of $\lie g_r$
spanned by the unique vector of weight $\epsilon^{(l)}_{-r}$.
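For instance, unwinding the recursion for $n = 2$ gives
\begin{align*}
X_2 &= V^{(2)}_{-,r}, &
X_1 &= V^{(1)}_{-,r} \oplus V^{(2)}_{+,r} \oplus V^{(2)}_{-,r}, \\
Y_1 &= (V^{(1)}_{+,r})^*, &
Y_2 &= (V^{(1)}_{+,r})^* \oplus (V^{(1)}_{-,r})^* \oplus (V^{(2)}_{+,r})^*,
\end{align*}
in agreement with the description just given.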
\begin{prop}
\label{prop:ur-computation}
The symmetric algebra $\SS^\bullet(\overline{\lie u}(r))$ is an object of
$\tilde \TT_{\lie l[r]}$.
\end{prop}
\begin{proof}
To study the symmetric algebra $\SS^\bullet(\overline{\lie u}(r))$ we look at
the symmetric algebra of each summand in the decomposition of $\overline{\lie
u}(r)$ as $\overline{\lie b}_r \oplus \lie l[r]$-module separately.
\begin{align*}
\SS^\bullet(\overline{\lie n}_r)
&= \bigoplus_{k \geq 0} \SS^k \left(\overline{\lie n}_r\right) \\
\SS^\bullet\left(\VV^{(k)}[r] \ot Y_{k}\right)
&= \bigoplus_{\nu_k} \Schur_{\nu_k}\left(\VV^{(k)}[r]\right)
\otimes \Schur_{\nu_k}\left(Y_{k}\right) & (k = 1, \ldots, n)\\
\SS^\bullet\left(X_{l} \otimes \VV^{(l)}_*[r] \right)
&= \bigoplus_{\gamma_l} \Schur_{\gamma_l}\left(X_{l}\right)
\otimes \Schur_{\gamma_l}\left(\VV^{(l)}_*[r]\right)
& (l = 1, \ldots, n)\\
\SS^\bullet\left(\VV^{(k)}[r] \ot \VV^{(l)}_*[r]\right)
&= \bigoplus_{\rho_{k,l}} \Schur_{\rho_{k,l}}\left(\VV^{(k)}[r]\right)
\ot \Schur_{\rho_{k,l}}\left(\VV^{(l)}_*[r]\right)
&(1 \leq l < k \leq n)
\end{align*}
By Lemma \ref{lem:locally-tensor} it is enough to see that each symmetric
algebra lies in $\tilde \TT_{\lie l[r]}$, and in each case we will show that if
$\mu \in \lie h^*$ is in the support then it is in the support of finitely
many of the indecomposable summands.
\emph{Case 1: $\SS^\bullet(\overline{\lie n}_r)$}. If $\mu$ is any weight in
the support then $\SS^\bullet(\overline{\lie n}_r)_{\succeq \mu}$ must be
finite dimensional. In particular it can only intersect $\SS^k(\overline{\lie
n}_r)$ for finitely many $k \in \ZZ_{>0}$.
\emph{Case 2: $\SS^\bullet\left(\VV^{(k)}[r] \ot Y_k\right)$}. Fix a weight
$\mu$ in the support. Recall that $\mu|_r$ and $\mu[r]$ denote the restriction
of $\mu$ to $\lie h_r$ and $\lie h[r]$, respectively. Then
\begin{align*}
\left(\Schur_{\nu}\left(\VV^{(k)}[r]\right)
\ot \Schur_{\nu}(Y_k)\right)_\mu
&= \Schur_{\nu}\left(\VV^{(k)}[r]\right)_{\mu[r]}
\ot \Schur_{\nu}(Y_k)_{\mu|_r}.
\end{align*}
Since the support of $Y_{k}$ consists entirely of negative weights,
$\Schur_{\nu}(Y_{k})_{\succeq \mu|_r} = 0$ whenever $|\nu|$ is larger than the
height of $\mu|_r$ as a weight of $\overline{\lie b}_r$.
\emph{Case 3: $\SS^\bullet\left(X_k \ot \VV^{(k)}_*[r]\right)$}. As in case
$2$ we have a decomposition
\begin{align*}
\left(\Schur_{\nu}\left(\VV^{(l)}_*[r]\right)
\ot \Schur_{\nu}(X_{l})\right)_\mu
&= \Schur_{\nu}\left(\VV^{(l)}_*[r]\right)_{\mu[r]}
\ot \Schur_{\nu}(X_{l})_{\mu|_r}.
\end{align*}
Now the support of $X_l$ consists entirely of positive weights, and the
argument is analogous to the previous case.
\emph{Case 4: $\SS^\bullet\left(\VV^{(k)}[r] \ot \VV^{(l)}_*[r]\right)$}.
This was done in Example \ref{ex:vkl}.
\end{proof}
Notice that while $\SS^p(\lie u(r))$ is a tensor $\lie l[r]$-module for any
$p$, $\SS^\bullet(\lie u(r))$ is \emph{not} in $\tilde \TT_{\lie l[r]}$.
\subsection{Large annihilator duality}
\label{ss:lac}
The categories $\TT_{\lie g}$ and $\TT_{\lie l}$ are not closed under
semisimple duals. For example $\lie h^* = \lie g^\vee_0$ contains uncountably
many vectors that do not satisfy the LAC, so $\lie g^\vee$ is not in
$\TT_{\lie g}$. For this reason we introduce the \emph{large annihilator dual}
of an $\lie l$-module $M$, which we denote by $\dual M$ and set to be $\Phi_{\lie l}
(M^\vee)$. We use the same notation when $M$ is a $\lie g$-module, seen as
$\lie l$-module by restriction. The assignation $M \mapsto \dual M$ is
left exact, but since it is a composition of continuous functors it sends
colimits in $\Mod{(\lie g, \lie h)}$ into limits.
Given two $n$-tuples of partitions $\boldsymbol \lambda, \boldsymbol \mu$ we set
\begin{align*}
I(\boldsymbol \lambda, \boldsymbol \mu)
&= I(\lambda_1, \mu_1) \boxtimes \cdots \boxtimes I(\lambda_n, \mu_n).
\end{align*}
The socle filtration of these modules can be deduced from the socle filtration
of the $I(\lambda_i, \mu_i)$, and as a direct consequence we see that there is
a nonzero morphism $I(\boldsymbol \lambda, \boldsymbol\mu) \to \CC$ if and only
if $\boldsymbol \lambda = \boldsymbol \mu$, and in that case this map is unique up to scalar.
\begin{lem}
\label{lem:la-duality}
The large annihilator dual of $I(\boldsymbol \lambda, \boldsymbol \mu)$ is
isomorphic to
\begin{align*}
\bigoplus_{\boldsymbol \alpha, \boldsymbol \beta}
\bigoplus_{\boldsymbol \gamma} \left(
\prod_k c_{\alpha_k, \gamma_k}^{\lambda_k} c_{\beta_k, \gamma_k}^{\mu_k}
\right) I(\boldsymbol \alpha, \boldsymbol \beta),
\end{align*}
where the sum runs over all $n$-tuples of partitions $\boldsymbol \alpha,
\boldsymbol \beta$ and $\boldsymbol \gamma$. In particular $\dual{I(\boldsymbol
\lambda, \boldsymbol \mu)}$ lies in $\TT_{\lie l}^0$.
\end{lem}
\begin{proof}
Set $\lie l_r = \lie l \cap \lie g_r$. There exist decompositions of
$\lie l_r \oplus \lie l[r]$-modules
\begin{align*}
\VV^{(k)} &= V^{(k)}_r \oplus \VV^{(k)}[r]; &
\VV^{(k)}_* &= \left(V^{(k)}_r\right)^* \oplus \VV^{(k)}_*[r].
\end{align*}
Using the decomposition of the image of a direct sum by a Schur functor we get
\begin{align*}
I(\lambda,\mu)
&= \bigoplus_{\alpha, \beta,\gamma,\delta}
c_{\alpha,\gamma}^\lambda c_{\beta,\delta}^\mu
\Schur_\alpha(V^{(k)}_r)
\otimes
\Schur_\beta\left( \left(V^{(k)}_r \right)^* \right) \boxtimes
I(\gamma,\delta)[r],
\end{align*}
where $I(\gamma,\delta)[r] = \Schur_\gamma(\VV^{(k)}[r]) \otimes
\Schur_\delta(\VV^{(k)}_*[r])$, and this is an isomorphism of $\lie l_r \oplus
\lie l[r]$-modules.
Notice that twisting the modules $\VV^{(k)}$ and $\VV^{(k)}_*$ by the
automorphism $-\tau$ produces isomorphic $\lie l$-modules. Since twisting by
$-\tau$ commutes with Schur functors, it follows that ${}^{-\tau}I(\boldsymbol
\lambda, \boldsymbol \mu) \cong I(\boldsymbol \lambda, \boldsymbol \mu)$.
Computing the semisimple duals we get
\begin{align*}
(I(\boldsymbol \lambda, \boldsymbol \mu))^\vee
&\cong \ssHom_{\CC}
({}^{-\tau}I(\boldsymbol \lambda, \boldsymbol \mu),\CC)
\cong \ssHom_{\CC}
(I(\boldsymbol \lambda, \boldsymbol \mu), \CC).
\end{align*}
Now writing $I(\boldsymbol \lambda, \boldsymbol \mu)$ as the tensor product of
the $I(\lambda_i, \mu_i)$ and using the above decomposition, we get
\begin{align*}
(I(\boldsymbol \lambda, \boldsymbol \mu))^\vee
&\cong
\ssHom_{\CC} \left(
\bigoplus_{\boldsymbol \alpha, \boldsymbol \beta,
\boldsymbol \gamma, \boldsymbol \delta}
\bigotimes_{k=1}^n
T_{\alpha_k,\beta_k,\gamma_k,\delta_k}^{(k)} \boxtimes
I(\gamma_k, \delta_k),\CC
\right)
\end{align*}
where
\begin{align*}
T_{\alpha,\beta,\gamma,\delta}^{(k)} &=
c_{\alpha,\delta}^{\lambda_k}
c_{\beta,\gamma}^{\mu_k}
\left(\Schur_\alpha(V_r^{(k)}) \otimes
\Schur_{\beta}((V_r^{(k)})^*)
\right).
\end{align*}
This last module is a semisimple finite dimensional $\lie l_r$-module, and
hence isomorphic to its semisimple dual. Using the fact that duals
distribute over tensor products if one of the factors is finite
dimensional, we obtain an isomorphism of $\lie l_r \oplus \lie l[r]$-modules
\begin{align*}
(I(\boldsymbol \lambda, \boldsymbol \mu))^\vee
\cong \bigoplus_{\boldsymbol \alpha, \boldsymbol \beta,
\boldsymbol \gamma, \boldsymbol \delta}
\left(
\bigotimes_{k=1}^n T_{\alpha_k,\beta_k,\gamma_k,\delta_k}^{(k)}
\right)
\boxtimes \ssHom_{\CC} \left(
I(\boldsymbol \gamma, \boldsymbol \delta)[r], \CC
\right).
\end{align*}
Now to compute the image of this module by $\Phi_r$ only the $\lie l[r]$-module
structure is relevant, and
\begin{align*}
\Phi_r\left(\ssHom_{\CC} \left(
I(\boldsymbol \gamma, \boldsymbol \delta)[r], \CC
\right)\right)
&\cong
\ssHom_{\lie l[r]} \left(
I(\boldsymbol \gamma, \boldsymbol \delta)[r], \CC
\right)
\cong
\ssHom_{\lie l} \left(
I(\boldsymbol \gamma, \boldsymbol \delta), \CC
\right).
\end{align*}
As mentioned in the preamble this is the zero vector space unless $\boldsymbol
\gamma = \boldsymbol \delta$, in which case it is one-dimensional. Thus
$\Phi_r(I(\boldsymbol \lambda, \boldsymbol \mu)^\vee)$ is isomorphic to
\begin{align*}
\bigoplus_{\boldsymbol \alpha, \boldsymbol \beta,
\boldsymbol \gamma}
\left(
\bigotimes_{k=1}^n T_{\alpha_k,\beta_k,\gamma_k,\gamma_k}^{(k)}
\right)
&=
\bigoplus_{\boldsymbol \alpha, \boldsymbol \beta,
\boldsymbol \gamma}
\bigotimes_{k=1}^n
c_{\alpha_k,\gamma_k}^{\lambda_k}
c_{\beta_k,\gamma_k}^{\mu_k}
\left(\Schur_{\alpha_k}(V_r^{(k)}) \otimes
\Schur_{\beta_k}((V_r^{(k)})^*)
\right).
\end{align*}
Taking the limit as $r$ goes to infinity, we see that $\dual{I(\boldsymbol
\lambda, \boldsymbol \mu)}$ is isomorphic to
\begin{align*}
\bigoplus_{\boldsymbol \alpha, \boldsymbol \beta,
\boldsymbol \gamma}
\bigotimes_{k=1}^n
c_{\alpha_k,\gamma_k}^{\lambda_k}
c_{\beta_k,\gamma_k}^{\mu_k}
\left(\Schur_{\alpha_k}(\VV^{(k)}) \otimes
\Schur_{\beta_k}(\VV_*^{(k)})
\right),
\end{align*}
and this is precisely the module in the statement.
\end{proof}
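For example, specializing to $\boldsymbol \mu = \boldsymbol \emptyset$ we have
$c_{\beta_k, \gamma_k}^{\emptyset} = 0$ unless $\beta_k = \gamma_k = \emptyset$,
while $c_{\alpha_k, \emptyset}^{\lambda_k} = \delta_{\alpha_k, \lambda_k}$, so
the formula collapses to $\dual{I(\boldsymbol \lambda, \boldsymbol \emptyset)}
\cong I(\boldsymbol \lambda, \boldsymbol \emptyset)$.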
The following is an immediate consequence of the last two results.
\begin{prop}
\label{p:ur-ladual}
The large annihilator dual of $\SS^\bullet(\overline{\lie u}(r))$ lies in
$\tilde \TT_{\lie l[r]}$.
\end{prop}
\section{Category $\CAT{}{}$: first definitions}
\label{s:ola}
As in the previous section we fix $n \in \ZZ_{>0}$ and denote by $\lie g$ the
Lie algebra $\lie g(\VV^n)$. We will also continue to omit $\VV^n$ from the
notation for subalgebras of $\lie g$.
\subsection{Introduction to $\CAT{}{}$}
We now introduce a category of representations of $\lie g$ that serves as an
analogue of category $\mathcal O$. The definition is analogous to the usual
definition of category $\mathcal O$ for the reductive Lie algebra $\lie{gl}
(r,\CC)$ but, since $U(\lie{gl}(\infty))$ is not noetherian, we need to replace
finite generation with the LAC with respect to $\lie l$.
\begin{defn}
The category $\CAT{\lie l}{\lie g}$ is the full subcategory of $\Mod{\lie g}$
whose objects are the $\lie g$-modules $M$ satisfying the following conditions.
\begin{enumerate}[(i)]
\item $M$ is $\lie h$-semisimple.
\item $M$ is $\lie n$-torsion.
\item $M$ satisfies the LAC with respect to $\lie l$.
\end{enumerate}
\end{defn}
We often write $\CAT{}{}$ for $\CAT{\lie l}{\lie g}$. If $M$ is an object
of $\CAT{}{}$ then so is any subquotient of $M$. It follows from the definition
that the support of an object $M$ in $\CAT{}{}$ must be contained in $\lie
h^\circ$. The following lemma shows that finitely generated objects in
$\CAT{}{}$ have LCS and hence well-defined Jordan-Holder multiplicities. Recall
from Example \ref{ex:gradings} that there is a map $\psi: \lie h^\circ \to \CC$
that sends the finite roots to $0$, induces a $\ZZ$-grading on $\lie g$, and
turns a weight module $M$ into a graded module $M^\psi$.
\begin{lem}
\label{lem:ola-generator}
Let $M$ be any $\lie g$-module and let $v \in M$ be an $\lie h$-semisimple and
$\lie n$-torsion vector satisfying the LAC with respect to $\lie l$. Then the
submodule $N = U(\lie g)v$ lies in $\CAT{}{}$, has LCS, and $N^\psi_z = 0$ for
$z \gg 0$.
\end{lem}
\begin{proof}
Take $r \gg 0$ such that $\CC v$ is a trivial $\lie l[r]$-module. Using the PBW
theorem we have a surjective map of $\lie l[r]$-modules
\begin{align*}
p:\SS^\bullet(\overline{\lie u}(r)) \ot \SS^\bullet(\lie u(r)) \ot
\SS^\bullet(\lie l[r]^+) \ot \CC_\lambda &\to N \\
u \ot u' \ot l \ot 1_\lambda &\mapsto uu'lv.
\end{align*}
Since $v$ is $\lie n$-nilpotent and satisfies the LAC we have $\SS^\bullet(\lie
l[r]^+)v = \CC v$ and $\SS^k(\lie u(r))v = 0$ for $k \gg 0$. Thus the
restriction
\begin{align*}
p':\SS^\bullet(\overline{\lie u}(r)) \ot \SS^{\leq k}(\lie u(r)) \ot
\CC_\lambda &\to N
\end{align*}
is surjective. By Propositions \ref{prop:lcs} and \ref{prop:ur-computation} the
domain of this last map lies in $\tilde \TT_{\lie l[r]}$ and
has LCS. Since $\overline{\lie u}(r)^\psi$ has a right-bounded grading, the
same holds for the domain of $p'$ and hence for $N$. By the PBW theorem
\begin{align*}
\lie n \cdot \SS^i(\overline{\lie u}(r)) \ot \SS^j(\lie u(r))
\subset \bigoplus_{i',j',t'}
\SS^{i'}(\overline{\lie u}(r)) \ot \SS^{j'}(\lie u(r))
\ot \SS^{t'}(\lie l[r]^+)
\end{align*}
where $i' + j' + t' = i + j + 1$ and $i' \leq i$. Induction on $i$ shows that
$p'(\SS^i(\overline{\lie u}(r)) \ot \SS^j(\lie u(r)) \ot \CC_\lambda)$ is
annihilated by a large enough power of $\lie n$, and so $N$ is $\lie n$-torsion.
\end{proof}
This lemma has a very useful consequence: every module in $\Mod{(\lie g, \lie
h)}$ has a largest submodule contained in $\CAT{}{}$, namely the submodule
spanned by its $\lie n$-torsion elements satisfying the LAC with respect to
$\lie l$. Categorically, this means that the inclusion functor of $\CAT{}{}$ in
$\Mod{(\lie g, \lie h)}$ has a right adjoint $\LAprojector$. We will come back
to this observation later on, when we look at categorical properties of
$\CAT{}{}$.
\subsection{Simple objects in $\CAT{}{}$}
Let $\lambda \in \lie h^*$. The Verma module $M(\lambda)$ does not belong to
$\CAT{}{}$ since the highest weight vector does not satisfy the
LAC. However, if $\lambda$ is $r$-eligible there exists a $1$-dimensional
$\lie l [r]^+$-module $\CC_\lambda$, which we can inflate to a $\lie
p(r)$-module by setting the action of $\lie u(r)$ to be zero. Set $M_r(\lambda) =
\Ind_{\lie p(r)}^{\lie g} \CC_\lambda$, which is clearly a highest weight
module of weight $\lambda$.
Since the highest weight vector of $M_r(\lambda)$ satisfies the LAC, Lemma
\ref{lem:ola-generator} implies that $M_r(\lambda)$ lies in $\CAT{}{}$. Notice
also that we have surjective maps $M_{r+1}(\lambda) \to M_r(\lambda)$, that
given a weight $\mu$ in the support the restriction $M_{r+1}(\lambda)_\mu \to
M_r(\lambda)_\mu$ is an isomorphism for $r\gg 0$, and that $M(\lambda)$ is the
inverse limit of this system. This shows in particular that $\CAT{}{}$ is not
closed under inverse limits. With these parabolic Verma modules in hand, we are
ready to prove our first result.
\begin{thm}
\label{thm:simples}
The simple objects in $\CAT{}{}$ are precisely the highest weight simple modules
whose highest weight is in $\lie h^\circ$.
\end{thm}
\begin{proof}
Suppose $L$ is a simple object in $\CAT{}{}$. Since $L$ is $\lie n$-torsion it
has a highest weight vector $v$ of weight $\lambda \in \lie h^\circ$, so
$L \cong L(\lambda)$. On the other hand, if $\lambda$ is an $r$-eligible
weight then $M_r(\lambda)$ lies in $\CAT{}{}$ and so does its unique simple
quotient $L(\lambda)$.
\end{proof}
\begin{rmk}
A more subtle difference between the definition of $\CAT{}{}$ and that of
category $\mathcal O$ for finite dimensional reductive Lie algebras is that we
ask for $M$ to be $\lie n$-torsion and not just locally $\lie n$-nilpotent.
In the finite-dimensional case, and even in the case $n = 1$, these two
conditions are equivalent (see \cite{PS19}*{Proposition 4.2} for
$\lie{sl}(\infty)$; the proof is the same for $\lie{gl}(\infty)$).
This is no longer true as soon as $n \geq 2$. Indeed, when $n=2$ the simple
\emph{lowest} weight module with lowest weight $\lambda = - \omega^{(1)} +
\omega^{(2)}$ is the limit of the simple finite dimensional lowest weight
modules $\tilde L(\lambda|_k)$, where each embeds in the next by sending the
lowest weight vector to the lowest weight vector. Thus $\tilde L(\lambda) =
\varinjlim \tilde L(\lambda|_k)$ is generated by a weight vector satisfying the
LAC, and the construction shows that $\tilde L(\lambda)$ is locally $\lie
n$-nilpotent, but not $\lie n$-torsion. Notice that this module cannot be a
highest weight module: indeed, if it had a highest weight vector $v$ then $v$
would belong to $\tilde L(\lambda|_k)$ for all $k \gg 0$ and be a highest
weight vector, but the highest weight vectors of $\tilde L(\lambda|_k)$ are
never sent to highest weight vectors of $\tilde L(\lambda|_{k+1})$.
\end{rmk}
\subsection{Highest weight modules in $\CAT{}{}$}
As a consequence of Lemma \ref{lem:ola-generator} a highest weight module $M$
in $\CAT{}{}$ has finite Jordan-Holder multiplicities, and furthermore if
the highest weight vector generates a $1$-dimensional $\lie l[r]$-module then
$M$ lies in $\tilde \TT_{\lie l[r]}$. We will now show that highest
weight modules have finite length.
Recall that we denote by $\lie s$ and $\lie m$ the subalgebras of $\lie g$
spanned by root spaces corresponding to finite and positive infinite roots,
respectively, and that $\lie s = \lie g^\psi_0$ and $\lie m = \lie g^\psi_{>0}$.
If $M$ is an object of $\CAT{}{}$ we can see it as a $\ZZ$-graded module through
the map $\psi$, and each homogeneous component is a $\lie s$-module. If the
grading on $M$ is right-bounded, for example if $M$ is finitely generated, we
will denote by $M^+$ the top nonzero homogeneous component.
We have also set $\lie q = \lie s \oplus \lie m$. An $\lie s$-module can
be inflated into a $\lie q$-module by imposing a trivial $\lie m$-action. The
following result shows that the problem of computing Jordan-Holder
multiplicities of a highest weight module in $\CAT{}{}$ reduces to computing
Jordan-Holder multiplicities of highest weight $\lie s$-modules in $\overline
{\mathcal O}_{\lie s}$. This is very useful since weight components of finite
length $\lie s$-modules are finite dimensional, and hence characters can be
given in terms of these dimensions.
\begin{prop}
\label{prop:hwm-properties}
Let $M \in \CAT{}{}$ be a highest weight module of highest weight $\lambda$.
\begin{enumerate}[(i)]
\item If $N \subset M$ is a nonzero submodule then $N^+ = N \cap M^+$.
\item $M$ is simple if and only if $M^+$ is a simple $\lie s$-module.
\item $M = \Ind^{\lie g}_{\lie q} \mathcal I_{\lie s}^{\lie q} M^+$.
\item If $\mu$ is an eligible weight and $[M:L(\mu)] \neq 0$ then $\lambda
- \mu$ is finite, and furthermore $[M:L(\mu)] = [M^+:L_{\lie s}(\mu)] \leq
m(\lambda,\mu)$.
\end{enumerate}
\end{prop}
\begin{proof}
Denote by $v$ the highest weight vector of $M$. If $N$ is a nontrivial
submodule of $M$ then it contains a highest weight vector, say $w$ of weight
$\mu$. For each $r \geq 0$ denote by $M_r$ the $\lie g_r$-module generated
by $v$. Then $M_r$ is a highest weight $\lie g_r$-module and for $r$ large
enough $w \in M_r$, and it is a highest weight vector. Thus $\mu|_r$ and
$\lambda|_r$ are linked for all large $r$. Lemma \ref{lem:linked-weights} tells
us that $\lambda - \mu$ must then be a finite root, and so $w \in N \cap M^+$,
from which $N^+ = N \cap M^+$. This proves the first item.
If $M$ is simple then $M^+$ is a simple $U(\lie g)^\psi_0$-module. Using the
PBW theorem we have a decomposition $U(\lie g)^\psi_0 = \bigoplus_{k \in \NN}
U(\overline{\lie m})^\psi_{-k}U(\lie s) U(\lie m)^\psi_k$, and since $\lie m$
acts trivially on $M^+$ it follows that $M^+$ is a simple $U(\lie s)$-module.
Conversely, if $M^+$ is simple then any nonzero submodule $N \subset M$ must
have $N^+ = M^+$ by the first item, so it must contain the highest weight
vector. Thus $N = M$ and $M$ is simple.
Denote by $K$ the kernel of the natural map $\Ind_{\lie q}^{\lie g} M^+ \to
M$. Then by construction $K \cap M^+ = 0$, and the first item implies $K = 0$.
In particular this shows that $L(\mu) = \Ind_{\lie q}^{\lie g} \mathcal
I_{\lie s}^{\lie q} L_{\lie s}(\mu)$.
Finally, the module $M^+$ is a highest weight $\lie s$-module and its highest
weight $\lambda$ is almost dominant since this holds for all eligible
weights. Thus $M_{\lie s}(\lambda)$ has a composition series and if we
apply the exact functor $\Ind_{\lie q}^{\lie g} \circ \mathcal I_{\lie s}^{\lie
q}$ to this filtration we get a filtration of $M$. By the previous item the
layers of this filtration are simple modules, and hence it is again a
composition series and we can use it to compute
\begin{align*}
[M: L(\mu)] &= [M^+:L_{\lie s}(\mu)].
\end{align*}
Finally, by the universal property of Verma modules $M^+$ is a quotient of
$M_{\lie s}(\lambda)$, so $[M^+:L_{\lie s}(\mu)] \leq m(\lambda,\mu)$.
\end{proof}
We will now show that parabolic Verma modules have finite length, and compute
their Jordan-Holder multiplicities in terms of multiplicities of Verma modules
over $\lie s$.
\begin{cor}
\label{cor:parabolic-verma-jh}
Let $\lambda, \mu \in \lie h^\circ$ with $\lambda$ $r$-eligible. If $L(\mu)$
is a simple constituent of $M_r(\lambda)$ then the following hold.
\begin{enumerate}[(i)]
\item $\mu$ lies in the dot orbit of $\lambda$ by $\mathcal W(\lie s_{r+1})$.
\item If $\mu$ is $r$-eligible then $[M_r(\lambda): L(\mu)] = m(\lambda,\mu)$.
\end{enumerate}
In particular $M_r(\lambda)$ has finite length.
\end{cor}
\begin{proof}
We have already seen in Proposition \ref{prop:hwm-properties} that $\mu$ and
$\lambda$ must be linked. Since $M_r(\lambda)$ is an object of $\tilde
\TT_{\lie l[r]}$ so is $L(\mu)$, and in particular $\mu[r]$ must be the highest
weight of a highest weight module in $\TT_{\lie l[r]}$. Thus $(\lambda,
\alpha)$ and $(\mu, \alpha)$ are positive for any finite root $\alpha$ of
$\lie l[r]$, and Lemma \ref{lem:linked-weights} implies that
$\mu = \sigma \cdot \lambda$ for $\sigma \in \mathcal W(\lie s_{r+1})$. In particular
$\mu$ belongs to a finite set, so $M_r(\lambda)$ has finite length.
We also know from Proposition \ref{prop:hwm-properties} that $[M_r(\lambda):
L(\mu)] = [M_r(\lambda)^+: L_{\lie s}(\mu)]$. The surjective map of $\lie
s$-modules $M_{\lie s}(\lambda) \to M_r(\lambda)^+$ restricts to a bijection of
the weight components $\succeq \mu$, and so
\begin{align*}
\dim M_r(\lambda)^+_\mu
&=\sum_{\nu \succeq \mu} [M_r(\lambda)^+:L_{\lie s}(\nu)]
\dim L_{\lie s}(\nu)_\mu \\
&= \dim M_{\lie s}(\lambda)_{\mu}
= \sum_{\nu \succeq \mu} m(\lambda,\nu) \dim L_{\lie s}(\nu)_\mu
\end{align*}
Thus $[M_r(\lambda)^+:L_{\lie s}(\mu)] = m(\lambda,\mu)$.
\end{proof}
\subsection{The structure of a general object in $\CAT{}{}$}
We now turn to more general objects of $\CAT{}{}$. First we show that finitely
generated objects in $\CAT{}{}$ must have finite length.
\begin{prop}
\label{prop:fg-fl}
Let $M$ be an object of $\CAT{}{}$. The following are equivalent.
\begin{enumerate}[(a)]
\item $M$ is finitely generated.
\item $M$ has a finite filtration whose layers are highest weight modules.
\item $M$ has finite length.
\end{enumerate}
\end{prop}
\begin{proof}
To prove $(a) \Rightarrow (b)$ suppose first that $M$ is cyclic, and its
generator is a weight vector $v$ of weight $\lambda$ such that $\lie l[r]'
v = 0$. As seen in Lemma \ref{lem:ola-generator} $M$ is in $\tilde \TT_{\lie
l[r]}$, so we can proceed by induction on the length of the $\lie l[r]$-module
spanned by $M_{\succeq \lambda}$. This submodule
contains a highest weight vector, say $w$, and we set $N = U(\lie g)w$. Clearly
$N$ is a highest weight module and $M/N$ has a filtration by highest weight
modules by the inductive hypothesis, so the same holds for $M$. The general case now follows
by induction on the number of generators of $M$. The implication $(b)
\Rightarrow (c)$ follows from Corollary \ref{cor:parabolic-verma-jh}, and $(c)
\Rightarrow (a)$ is obvious.
\end{proof}
We now give a general approach to compute the Jordan-Holder multiplicities of
an arbitrary object of $\CAT{}{}$. We will use the following tool.
\begin{defn}
\label{defn:psi-filtration}
Let $M \in \CAT{}{}$. For each $i \in \ZZ$ we set $\FF_iM$ to be the submodule
of $M$ generated by $\bigoplus_{j \geq i} M_j^\psi$. The family $\left\{\FF_i
M \mid i \in \ZZ\right\}$ is the \emph{$\psi$-filtration} of $M$. We also
define $\mathcal S_i M$ to be the $\lie s$-module $(\FF_i M / \FF_{i-1} M)_i$,
which is the top component of the corresponding layer of the filtration.
\end{defn}
By construction the layers of the $\psi$-filtration of any module are
generated by their top degree component, and hence their multiplicities can be computed using Proposition \ref{prop:hwm-properties}.
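For instance, if $M$ is a highest weight module generated by a highest weight
vector of weight $\lambda$ and $q = \psi(\lambda)$, then $\FF_i M = M$ for $i
\leq q$ and $\FF_i M = 0$ for $i > q$, so $\mathcal S_q M = M^+$ and all other
layers vanish.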
\begin{prop}
\label{prop:s-filtration}
Let $M$ be an object in $\CAT{}{}$ with LCS and let $\lambda$ be an eligible
weight with $\psi(\lambda) = p$. Then $[M:L(\lambda)] = [\mathcal S_p M:
L_{\lie s}(\lambda)]$.
\end{prop}
\begin{proof}
It is enough to show the result for finitely generated objects, and by
Proposition \ref{prop:fg-fl} it is enough to show it for finite length objects.
Let $M$ be a finite length object, and take $q \in \ZZ$ such that $M^+ =
M^\psi_q$. A simple induction on $p-q$ shows that $[\mathcal S_p M: L_{\lie s}
(\lambda)]$ is an additive function on $M$, with the base case a consequence of
Proposition \ref{prop:hwm-properties}. Since both $[M:L(\lambda)]$ and
$[\mathcal S_p M: L_{\lie s}(\lambda)]$ are additive functions on $M$ it is
enough to show they are equal when $M$ is simple, which again follows from
Proposition \ref{prop:hwm-properties}.
\end{proof}
\subsection{Categorical properties of $\CAT{}{}$}
We now focus on the general categorial properties of $\CAT{}{}$.
Let $M$ be a $\lie h$-semisimple $\lie g$-module. By Lemma
\ref{lem:ola-generator} the submodule spanned by all its $\lie n$-torsion
vectors satisfying the large annihilator condition is an object of $\CAT{}{}$,
and is in fact the largest submodule of $M$ lying in $\CAT{}{}$. We thus
have a diagram of functors, where each arrow from left to right is an embedding
of categories and each right to left arrow is a right adjoint
\begin{align*}
\xymatrix{
\CAT{\lie l}{\lie g} \ar@<-1ex>[r]
& \Mod{(\lie g, \lie h)}_{\mathsf{LA}}^{\lie l}
\ar@<-1ex>[r] \ar@<-1ex>[l]_-{\Gamma_{\lie n}}
& \Mod{(\lie g, \lie h)}
\ar@<-1ex>[r] \ar@<-1ex>[l]_-{\Phi}
&\Mod{\lie g}
\ar@<-1ex>[l]_-{\Gamma_{\lie h}}
}
\end{align*}
It follows that we have a functor $\LAprojector = \Gamma_{\lie n} \circ \Phi
=\Phi \circ \Gamma_{\lie n}: \Mod{(\lie g, \lie h)} \to \CAT{}{}$, which is
right adjoint to the exact inclusion functor $\CAT{}{} \to \Mod{(\lie g, \lie
h)}$. In particular it preserves direct limits and sends injectives to
injectives.
Recall that an abelian category $\mathcal A$ is locally artinian if every
object is the direct limit of its finite length subobjects. Also, $\mathcal A$ has the
Grothendieck property if it has direct limits, and for every object $M$, every
subobject $N \subset M$ and every directed family of subobjects $(A_\alpha)_{
\alpha \in I}$ of $M$ it holds that $N \cap \varinjlim A_\alpha = \varinjlim N
\cap A_\alpha$. The following result is an easy consequence of the properties
of $\LAprojector$.
\begin{thm}
Category $\CAT{}{}$ is a locally artinian category with direct limits, enough
injective objects, and the Grothendieck property.
\end{thm}
\section{Category $\CAT{}{}$: Standard objects}
\label{s:standard}
In this section we will introduce the standard objects of $\CAT{}{}$. We then
compute their simple multiplicities through their $\psi$-filtrations.
\subsection{Large-annihilator dual Verma modules}
We now introduce the modules that will play the role of standard objects on
$\CAT{}{}$. In the finite dimensional case this role is played by the
semisimple duals of Verma modules, so it is natural to consider the ``best
approximation'' to these modules in $\CAT{}{}$.
\begin{defn}
For every $\lambda \in \lie h^\circ$ we set $A(\lambda) = \dual{M(\lambda)}$.
\end{defn}
The module $A(\lambda)$ can be hard to grasp. For example, it is not clear
at first sight that it lies in $\CAT{}{}$. As a first approximation, set
$A_r(\lambda)$ to be the LA dual of the parabolic Verma $M_r(\lambda)$. The
Verma module $M(\lambda)$ is the inverse limit of the $M_r(\lambda)$, and since
LA duality sends inverse limits to direct limits, $A(\lambda) = \varinjlim
A_r(\lambda)$. Also, the natural maps $A_r(\lambda) \to A(\lambda)$ are
injective, and for any $\mu$ in the support of $A(\lambda)$ and $r \gg 0$ the
map $A_r(\lambda)_{\succeq\mu} \to A(\lambda)_{\succeq\mu}$ is an isomorphism;
this follows from the dual facts for $M_r(\lambda)$ and $M(\lambda)$. This approach
allows us to prove that the $A(\lambda)$ are indeed in $\CAT{}{}$.
\begin{prop}
\label{prop:a-in-ola}
For every eligible weight $\lambda$ and every $r \geq 0$ the modules
$A(\lambda), A_r(\lambda)$ lie in $\CAT{}{}$ and have LCS. Furthermore,
we have $[A(\lambda): L(\mu)] = [A_r(\lambda): L(\mu)]$ for $r \gg 0$.
\end{prop}
\begin{proof}
By definition the space $\dual{M_r(\lambda)}$ is an $\lie h$-semisimple module
satisfying the LAC. Also it is isomorphic to $\dual{\SS^\bullet(\overline{\lie
u}(r))} \ot \CC_\lambda$ as $\lie l[r]^+$-module, and hence lies in $\tilde
\TT_{\lie l[r]}$ by Proposition \ref{p:ur-ladual}, and by Proposition \ref{prop:lcs}
it has LCS.
To see that it is $\lie n$-torsion, first observe that it is
$\lie n(r) = \lie l[r] \cap \lie n$-torsion by virtue of being in $\tilde
\TT_{\lie l[r]}$. Now recall the map $\theta$ from Example \ref{ex:gradings}.
It induces a right-bounded grading on $A_r(\lambda)$, and since $\lie
g^\theta_{>0} = \lie u(r)$, it follows that this subalgebra acts nilpotently on
$A_r(\lambda)$ and hence $A_r(\lambda)$ is $\lie n = \lie n(r) \niplus \lie
u(r)$-torsion. This completes the proof for $A_r(\lambda)$.
Since $\CAT{}{}$ is closed by direct limits it follows that $A(\lambda)$
belongs to $\CAT{}{}$. Given a weight $\mu$ we build an LCS for $A(\lambda)$
at $\mu$ as follows: start with an LCS at $\mu$ for $A_r(\lambda)$ such that
$A_r(\lambda)_\mu \to A(\lambda)_\mu$ is an isomorphism, and use the natural
map to obtain a filtration of $A(\lambda)$. Adding $A(\lambda)$ at the top of
the filtration gives us the desired LCS, and shows that the multiplicities
coincide as desired.
\end{proof}
This result shows that $A(\lambda)$ is the projection of
$M(\lambda)^\vee$ to $\CAT{}{}$. By Proposition \ref{prop:ss-dual-coind} we get
that $A(\lambda) \cong \Phi(\ssCoind_{\overline{\lie b}}^{\lie g}
\CC_\lambda) \cong \LAprojector(\ssCoind_{\overline{\lie b}}^{\lie g}
\CC_\lambda)$. The following theorem shows that the $A(\lambda)$ have the
usual properties associated to standard objects in highest weight categories.
\begin{thm}
\label{thm:standard}
For each $\lambda \in \lie h^\circ$ the following hold.
\begin{enumerate}[(i)]
\item
\label{i:pre-a1}
$A(\lambda)$ is indecomposable and $\soc A(\lambda) \cong L(\lambda)$.
\item
\label{i:pre-a2}
The composition factors of $A(\lambda)/L(\lambda)$ are of the form $L(\mu)$
with $\mu \prec \lambda$.
\item
\label{i:pre-a3}
For each $\mu \in \lie h^\circ$ we have $\dim \Hom_{\CAT{}{}} (A(\mu),
A(\lambda)) < \infty$.
\end{enumerate}
\end{thm}
\begin{proof}
Let $\mu \in \lie h^\circ$. Since $\LAprojector$ is right adjoint to the
inclusion of $\CAT{}{}$ in $\Mod{(\lie g, \lie h)}$ there are isomorphisms
\begin{align*}
\Hom_{\CAT{}{}}(L(\mu), A(\lambda))
&\cong \Hom_{\lie g, \lie h}(L(\mu), \ssCoind_{\overline{\lie b}}^{\lie g}
\CC_\lambda) \cong \Hom_{\overline{\lie b}, \lie h}
(L(\mu), \CC_\lambda),
\end{align*}
and since $L(\mu)$ has $\CC_\mu$ as its unique simple quotient as $\overline
{\lie b}$-module, it follows that this space has dimension $1$ when $\lambda =
\mu$ and zero otherwise. Hence $L(\lambda)$ is the socle of $A(\lambda)$, which
implies item (\ref{i:pre-a1}). Since the support of $A(\lambda)/L(\lambda)$ is
contained in the set of weights $\mu \prec \lambda$, item (\ref{i:pre-a2})
follows.
Since $A(\mu)$ has a local composition series at $\lambda$, there exists a
finite length $\lie g$-module $N \subset A(\mu)$ such that $(A(\mu)/N)_\lambda
= 0$. Now consider the following exact sequence
\begin{align*}
0 \to \Hom_{\CAT{}{}}(A(\mu)/N, A(\lambda)) \to
\Hom_{\CAT{}{}}(A(\mu), A(\lambda)) \to
\Hom_{\CAT{}{}}(N, A(\lambda)).
\end{align*}
Since any nonzero map to $A(\lambda)$ must contain $L(\lambda)$ in its
image, the first $\Hom$-space in the sequence above is zero and the map
$\Hom_{\CAT{}{}}(A(\mu), A(\lambda)) \to \Hom_{\CAT{}{}}(N, A(\lambda))$ is
injective. Now a simple induction shows that $\dim \Hom_{\CAT{}{}}(X,
A(\lambda))$ is finite for any $X$ of finite length. This proves
(\ref{i:pre-a3}).
\end{proof}
From now on we will refer to the $A(\lambda)$ as \emph{standard} modules of
$\CAT{}{}$. We point out that the order $\prec$ on $\lie h^\circ$ is not
interval-finite, and so it is not yet clear that the $A(\lambda)$ are standard
in the sense of highest weight categories. In the coming subsections we will
compute the Jordan-Holder multiplicities of the standard modules explicitly,
and in the process find an interval-finite order for its simple constituents.
\subsection{The submodules $T_r(\lambda)$}
Our computation of the simple multiplicities of the module $A(\lambda)$ is
rather long and technical. The idea is to find an exhaustion of $A(\lambda)$
which will allow us to compute the layers of the $\psi$-filtration of
$A(\lambda)$. Set $T_r(\lambda) = \Phi_r(M_r(\lambda)^\vee)$; this is clearly a
$\lie g_r$-module, and since it is spanned by $\lie h$-semisimple vectors it is
in fact a $\lie g_r^+ = \lie g_r + \lie h$-module, and through $\psi$ we can
see it as a graded module.
\begin{lem}
For each $r \gg 0$ there are injective maps of $\psi$-graded $\lie
g_r^+$-modules $T_r(\lambda) \hookrightarrow T_{r+1}(\lambda)$ and
$T_r(\lambda) \hookrightarrow A(\lambda)$. Furthermore $A(\lambda) \cong
\varinjlim T_r(\lambda)$ as $\lie g$-modules.
\end{lem}
\begin{proof}
Identifying $M_r(\lambda)^\vee$ with its image inside $M(\lambda)^\vee$ it is
clear that
$M_r(\lambda)^\vee \subset M_{r+1}(\lambda)^\vee$ and that
\begin{align*}
T_r(\lambda) =
\Phi_r(M_r(\lambda)^\vee) \subset \Phi_{r+1}(M_r(\lambda)^\vee) \subset
\Phi_{r+1}(M_{r+1}(\lambda)^\vee) = T_{r+1}(\lambda).
\end{align*}
The desired maps are given by the inclusions, which trivially satisfy the
statement. Now if $m \in \Phi_r(M(\lambda)^\vee)$ then there exists $s \geq r$
such that $m \in \Phi_r(M_s(\lambda)^\vee) \subset T_s(\lambda)$, so
$A(\lambda) = \bigcup_{r \geq 0} T_r(\lambda) \subset M(\lambda)^\vee$.
\end{proof}
The definition of $\psi$-filtrations given in Definition \ref{defn:psi-filtration} is
easily adapted to $\psi$-graded $\lie g_r^+$-modules. The fact that $A(\lambda)$
is the direct limit of the $T_r(\lambda)$ in the category of $\psi$-graded
$\lie g_r^+$-modules implies that the $p$-th module in the $\psi$-filtration of
$A(\lambda)$ is the limit of the $p$-th module in the $\psi$ filtrations of the
$T_r(\lambda)$. Furthermore, since direct limits are exact we can recover the
$\lie s$-module $\mathcal S_p(A(\lambda))$ as the direct limit of the $\lie
s_r$-modules $\mathcal S_p(T_r(\lambda))$. This will allow us to compute their
characters and, using Proposition \ref{prop:s-filtration}, the simple
multiplicities of $A(\lambda)$.
\begin{rmk}
Notice that it is not true that the Jordan-Holder multiplicities of the
$T_r(\lambda)$ can be recovered from the multiplicities of the $\lie
s_r$-modules $\mathcal S_p(T_r(\lambda))$. Indeed, Proposition
\ref{prop:s-filtration} follows from Proposition \ref{prop:hwm-properties}, and
the analogous statement is clearly false for $\lie g_r$.
\end{rmk}
Set $R_{i} = \lie g^\psi_{\leq -i}$, i.e. the subspace of $\lie g$ spanned by
all root-spaces corresponding to roots $\alpha$ with $\psi(\alpha) \leq -i$,
and set $R_{i,r} = R_i \cap \lie g_r$. We see $R_i$ as $\lie b$-module through
the isomorphism $R_i \cong \lie g/\lie g_{> -i}^\psi$. We set $\mathcal R =
\bigoplus_{k=1}^n (2^k - 1) R_k$ and $\mathcal R_r = \bigoplus_{k=1}^n (2^k - 1)
R_{k,r}$.
\begin{lem}
\label{lem:W}
There exists an isomorphism of $\lie s_r$-modules
\begin{align*}
\mathcal S_p T_r(\lambda)
&\cong \left(\Ind^{\lie s_r}_{\lie s_r \cap \lie b}
\SS^\bullet(\mathcal R_r)_{p - \psi(\lambda)}
\ot \CC_\lambda \right)^\vee.
\end{align*}
\end{lem}
The proof of this statement is quite technical and not particularly
illuminating, so we postpone it until the end of this section. It is, however,
the main step in computing the Jordan-Holder multiplicities of standard modules.
\begin{thm}
\label{thm:standard-mults}
Let $\lambda$ and $\mu$ be eligible weights. Then
\begin{align*}
[A(\lambda):L(\mu)] = \sum_{\nu} m(\lambda + \nu, \mu) \dim \SS^\bullet(\mathcal
R^\vee)_\nu,
\end{align*}
where $\nu$ runs over the support of $\SS^\bullet(\mathcal R^\vee)$.
In particular this is nonzero if and only if $\mu$ is of the form $\sigma \cdot
(\lambda + \nu)$ for a weight $\nu$ in the support of $\SS^\bullet(\mathcal
R^\vee)$ and $\sigma \in \mathcal W^\circ$.
\end{thm}
\begin{proof}
Since $A(\lambda)^\psi_p = \bigcup_{r \geq 0} T_r(\lambda)^\psi_p$, it follows
that $\mathcal F_p A(\lambda) = \varinjlim \mathcal F_p T_r(\lambda)$. Exactness
of direct limits of vector spaces implies that
\begin{align*}
\frac{\mathcal F_p A(\lambda)}{\mathcal F_{p-1} A(\lambda)}
&\cong
\varinjlim \frac{\mathcal F_p T_r(\lambda)}{\mathcal F_{p-1} T_r(\lambda)}
\end{align*}
and so taking top degree components we get $\mathcal S_p A(\lambda) = \varinjlim
\mathcal S_p T_r(\lambda)$. It follows that $[A(\lambda): L(\mu)] =
[\mathcal S_p T_r(\lambda): L_{\lie s_r}(\mu|_r)]$ for large $r$, where $p = \psi(\mu)$.
Using Lemma \ref{lem:W} and standard results on category $\mathcal O$ for the
reductive algebra $\lie s_r$ (see for example \cite{Humphreys08}*{3.6
Theorem}), $\mathcal S_p T_r(\lambda)$ has a filtration by dual Verma modules
of the form $M_{\lie s_r}((\lambda + \nu)|_r)^\vee$, and each such module
appears with multiplicity $\dim \SS^\bullet(\mathcal R_r^\vee)_\nu$. Thus
\begin{align*}
[\mathcal S_p T_r(\lambda): L_{\lie s_r}(\mu|_r)]
&= \sum_{\nu} [M_{\lie s_r}((\lambda + \nu)|_r): L_{\lie s_r}(\mu|_r)]
\dim \SS^\bullet(\mathcal R_r^\vee)_\nu,
\end{align*}
and for large $r$ this is $\sum_{\nu} m(\lambda + \nu, \mu) \dim
\SS^\bullet(\mathcal R^\vee)_\nu$, as desired.
\end{proof}
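As a consistency check, take $\mu = \lambda$: since the support of
$\SS^\bullet(\mathcal R^\vee)$ consists of weights $\nu \preceq 0$ while
$m(\lambda + \nu, \lambda) \neq 0$ forces $\lambda \preceq \lambda + \nu$, only
$\nu = 0$ contributes, giving $[A(\lambda):L(\lambda)] = m(\lambda,\lambda)
\dim \SS^\bullet(\mathcal R^\vee)_0 = 1$, in agreement with Theorem
\ref{thm:standard}.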
\subsection{An interval finite order on $\lie h^\circ$}
We are now ready to show that there is an interval finite order for the simple
constituents in standard objects of $\CAT{}{}$. In view of Theorem
\ref{thm:standard-mults} there is only one reasonable choice.
\begin{defn}
Let $\lambda, \mu \in \lie h^\circ$. We write $\mu <_{\order} \lambda$
if there exist $\nu$ in the support of $\SS^\bullet(\mathcal R^\vee)$ and $\sigma \in \mathcal W(\lie s)$
such that $\mu = \sigma \cdot (\lambda + \nu) \preceq \lambda$.
\end{defn}
\begin{lem}
\label{lem:order}
The order $<_{\order}$ is interval finite.
\end{lem}
\begin{proof}
Let us write $\mu \triangleleft \lambda$ if either
\begin{enumerate}[(i)]
\item there is a simple root $\alpha$ such that $\mu = s_\alpha \cdot \lambda
\prec \lambda$ or
\item $\mu = \lambda + \nu$ with $\nu$ a root such that $\psi(\nu) < 0$.
\end{enumerate}
Notice that $\mu <_{\order} \lambda$ implies $\mu \prec \lambda$.
Since the support of $\SS^\bullet(\mathcal R^\vee)$ is the $\ZZ_{\geq 0}$-span
of the set of roots of $\lie g^{\psi}_{<0}$, it follows that $\triangleleft$ is a subrelation of
$<_{\order}$, and in fact this order is the reflexive transitive closure of
$\triangleleft$. To prove the statement, it is enough to show that there are
only finitely many weights $\gamma$ such that $\mu \triangleleft \gamma
<_\order \lambda$, and that there is a global bound on the length of chains of
the form $\mu \triangleleft \lambda_1 \triangleleft \cdots \triangleleft
\lambda_l \triangleleft \lambda$.
If we are in case $(i)$ then $\gamma = s_\alpha \cdot \mu$, and since $\gamma
\prec \lambda$ it follows that $s_\alpha \in \mathcal W(\lie s_{r+1})$, so there are
only finitely many options in this case. Notice also that $(\mu, \rho_{r+1})
< (\gamma, \rho_{r+1}) \leq (\lambda, \rho_{r+1})$, where $\rho_{r+1}$ is the
sum of all positive roots of $\lie s_{r+1}$. Thus in a chain as above there can
be at most $(\lambda - \mu, \rho_{r+1})$ inequalities of type $(i)$.
If we are in case $(ii)$, then $\lambda - \mu \succ 0$ and $\gamma =
\mu - \nu$ for some $\psi$-negative root $\nu$. Since $\lambda - \gamma =
\lambda - \mu - \nu \succeq 0$, it is enough to check that there are only
finitely many $\nu$ such that the last inequality holds. Now since $\lambda$ and
$\mu$ are $r$-eligible, $(\lambda - \mu, \alpha) = 0$ for all roots outside of
the root space of $\lie g_{r+1}$. This means that $\nu$ must be a negative root
of $\lie g_{r+1}$, of which there are finitely many. Notice also that in this
case $\psi(\mu) < \psi(\gamma) \leq \psi(\lambda)$, and hence in any chain as
above there are at most $\psi(\lambda - \mu)$ inequalities of type $(ii)$. Thus
any chain is of length at most $\psi(\lambda - \mu) + (\lambda - \mu,
\rho_{r+1})$ and we are done.
\end{proof}
\section*{Appendix: A proof of Lemma \ref{lem:W}}
\subsection{A result on Schur functors}
We begin by setting some notation for Schur functors and Littlewood-Richardson
coefficients. Recall that given two vector spaces $V, W$ and partitions
$\lambda, \mu$ we have isomorphisms, natural in both variables,
\begin{align*}
\Schur_\lambda(V) \otimes \Schur_\mu(V)
&= \bigoplus_{\nu} c_{\lambda, \mu}^\nu \Schur_{\nu}(V)
&\Schur_\lambda(V \oplus W)
&= \bigoplus_{\alpha, \beta}
c_{\alpha, \beta}^\lambda \Schur_\alpha(V) \ot \Schur_\beta(W).
\end{align*}
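For example, for $\lambda = (2)$ the second identity reads
\begin{align*}
\SS^2(V \oplus W) &\cong \SS^2(V) \oplus (V \ot W) \oplus \SS^2(W),
\end{align*}
corresponding to $c_{(2), \emptyset}^{(2)} = c_{(1),(1)}^{(2)} =
c_{\emptyset, (2)}^{(2)} = 1$.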
Given an $m$-tuple of partitions $\boldsymbol \alpha = (\alpha_1, \ldots,
\alpha_m)$ and an $m$-tuple of vector spaces $U_1, \ldots, U_m$ we set
$\Schur_{\boldsymbol \alpha}(U_1, \ldots, U_m) = \Schur_{\alpha_1}(U_1)
\otimes \cdots \otimes \Schur_{\alpha_m}(U_m)$. It follows that for each
$\gamma \in \Part$ there exists $c_{\boldsymbol \alpha}^\gamma \in \ZZ_{\geq
0}$ such that the following isomorphisms hold
\begin{align*}
\Schur_{\gamma}(U_1 \oplus \cdots \oplus U_m)
&\cong \bigoplus_{\boldsymbol \alpha} c^\gamma_{\boldsymbol \alpha}
\Schur_{\boldsymbol \alpha}(U_1, \ldots, U_m);\\
\Schur_{\boldsymbol \alpha}(U, U, \ldots, U)
&\cong \bigoplus_{\gamma} c^\gamma_{\boldsymbol \alpha}
\Schur_{\gamma}(U).
\end{align*}
These easily imply the following lemma.
\begin{lem}
\label{lem:schur-identity}
Fix $\boldsymbol \alpha \in \Part^m$. Given vector spaces $U_1, \ldots, U_m$
there is an isomorphism, natural in all variables,
\begin{align*}
\bigoplus_{\gamma \in \Part} \bigoplus_{\boldsymbol \beta \in \Part^m}
c_{\boldsymbol \alpha}^\gamma c_{\boldsymbol \beta}^\gamma
\Schur_{\boldsymbol \beta}(U_1, U_2, \ldots, U_m)
&\cong \Schur_{\boldsymbol \alpha}(U, U, \ldots, U)
\end{align*}
where $U = U_1 \oplus U_2 \oplus \cdots \oplus U_m$.
\end{lem}
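For instance, if $m = 2$ and $\boldsymbol \alpha = ((1),(1))$ then the right
hand side is $U \ot U \cong \SS^2(U) \oplus \Lambda^2(U)$; expanding
$\SS^2(U)$ and $\Lambda^2(U)$ with $U = U_1 \oplus U_2$ by the identities above
recovers the left hand side, where only $\gamma = (2)$ and $\gamma = (1,1)$
contribute since $c_{(1),(1)}^\gamma = 0$ for all other $\gamma$.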
\subsection{The module $\mathcal H$}
In order to understand the module $T_r(\lambda)$ we will need to look under
the hood of the dual modules $M_r(\lambda)^\vee$. For this subsection only we
write $\WW = \VV[r]$, so in particular $\WW^{(k)} = \VV^{(k)}[r]$ and
$\WW^{(l)}_* = \VV^{(l)}_*[r]$, etc.
We begin by decomposing the parabolic algebra $\lie p(r)$ as $\lie a[r] \oplus
\lie b_r \oplus \lie f_r$, where $\lie b_r = \lie b \cap \lie g_r,
\lie a[r] = \bigoplus_{k<l} \WW^{(k)} \otimes \WW^{(l)}_*$,
and $\lie f_r$ is the unique root subspace completing the decomposition.
\begin{align*}
\begin{tikzpicture}
\filldraw[draw=none,black!20!white]
(0.2,3) rectangle (0.8,2.8)
(1.2,3) rectangle (1.8,2.8)
(2.2,3) rectangle (2.8,2.8)
(1.2,2.2) rectangle (1.8,1.8)
(2.2,2.2) rectangle (2.8,1.8)
(2.2,1.2) rectangle (2.8,0.8)
(0.8,2.8) rectangle (1.2,2.2)
(1.8,2.8) rectangle (2.2,2.2)
(2.8,2.8) rectangle (3,2.2)
(1.8,1.8) rectangle (2.2,1.2)
(2.8,1.8) rectangle (3,1.2)
(2.8,0.8) rectangle (3,0.2)
;
\draw[step=1cm,black] (0.01,0.01) grid (2.99,2.99);
\node [below right, text width=6cm,align=justify] at
(4,2.5) {The picture of $\lie f_r$. Notice that each rectangle is a direct summand of $\lie f_r$ as an $\lie l[r] \oplus \lie s_r$-module.};
\end{tikzpicture}
\end{align*}
Using these spaces we can decompose $\lie g = \lie g[r] \oplus (\lie f_r \oplus
\overline{\lie f}_r) \oplus \lie g_r$. This is a decomposition of $\lie g$ as
$\lie g[r] \oplus \lie g_r$-module, since $\lie f_r \oplus \overline{\lie f}_r$
is stable by the adjoint action of this subalgebra and isomorphic to
$(\WW^n \otimes (V_r)^*) \oplus (V_r \ot \WW_*^n)$. We also obtain a
decomposition of $\overline {\lie b}_r \oplus \lie l[r] \oplus
\overline{\lie a}[r]$-modules
\begin{align*}
\overline{\lie p}(r) &= \overline{\lie b}_r \oplus \overline{\lie f_r}
\oplus \lie l[r] \oplus \overline{\lie a}[r].
\end{align*}
Set $\lie q_r = \lie s_r + \lie b_r$. The subspaces $\lie a[r]$ and
$\overline{\lie a}[r]$ are trivial $\lie q_r$-submodules of $\lie g$. The
subspace $\lie f_r$ is also a $\lie q_r$-submodule of $\lie g$, and we endow
$\overline{\lie f}_r$ with the structure of $\lie q_r$-modules through the
vector space isomorphism $\overline{\lie f}_r = \lie f_r \oplus \overline{\lie
f}_r / \lie f_r$. We set $\mathcal H = \SS^\bullet(\overline{\lie f}_r
\oplus \overline{\lie a}[r])$. By the PBW theorem we have an isomorphism of
$\lie g_r \oplus \lie l[r]$-modules
\begin{align*}
M_r(\lambda) &\cong U(\lie g) \ot_{\lie p(r)} \CC_\lambda
\cong U(\lie g_r) \ot_{\lie b_r} (\mathcal H \ot \CC_\lambda)
= \Ind_{\lie b_r}^{\lie g_r} \mathcal H \ot \CC_\lambda,
\end{align*}
where the $\lie l[r]$-module structure is determined by that of $\mathcal H$.
Thus when computing the $\lie l[r]'$-invariant vectors of the semisimple dual
of this module we get
\begin{align*}
T_r(\lambda) &= \Phi_r(M_r(\lambda)^\vee)
\cong \ssCoind_{\overline{\lie b}_r}^{\lie g_r}
\Phi_r(\mathcal H^\vee) \ot \CC_\lambda.
\end{align*}
\subsection{The invariants of $\mathcal H^\vee$}
Our task is thus to compute $\Phi_r(\mathcal H^\vee)$, and for this we will
need a finer description of $\overline{\lie f}_r$.
For each $k \in \interval{0,n}$ set $B_k = V_{r,+}^{(k)} \oplus
V_{r,-}^{(k-1)}$ and $A_k = B_k^*$. These are $\lie s_r$-modules and we have a
decomposition of $\lie l[r] \oplus \lie s_r$-modules
\begin{align*}
\overline{\lie f}_r &=
\bigoplus_{k \geq l}
(\WW^{(k)} \otimes A_l) \oplus (B_k \otimes\WW^{(l)}_*),\\
\SS^\bullet(\overline{\lie f}_r) &\cong
\bigoplus_{\boldsymbol \alpha, \boldsymbol \beta}
\bigotimes_{k \geq l}
\Schur_{\alpha_{k,l}}(A_l) \otimes \Schur_{\beta_{k,l}}(B_k)
\otimes \Schur_{\alpha_{k,l}}(\WW^{(k)}) \otimes
\Schur_{\beta_{k,l}}(\WW^{(l)}_*),
\end{align*}
where the sum runs over all families $\boldsymbol \alpha =(\alpha_{k,l})_{1
\leq l \leq k \leq n}$ and $\boldsymbol \beta =(\beta_{k,l})_{1 \leq l \leq k
\leq n}$. On the other hand we have decompositions
\begin{align*}
\overline{\lie a}[r] &= \bigoplus_{k > l} \WW^{(k)} \otimes \WW^{(l)}_*;&
\SS^\bullet(\overline{\lie a}[r]) &=
\bigoplus_{\boldsymbol \gamma} \bigotimes_{k > l}
\Schur_{\gamma_{k,l}}
(\WW^{(k)}) \otimes \Schur_{\gamma_{k,l}}(\WW^{(l)}_*);
\end{align*}
where again the sum runs over all families of partitions $\boldsymbol \gamma =(
\gamma_{k,l})_{1 \leq l < k \leq n}$. We extend these families by setting
$\gamma_{k,k} = \emptyset$, so $\gamma_{k,l}$ is defined for all $k \geq l$.
We now introduce the following notation: given a family of partitions
$\boldsymbol \alpha =(\alpha_{k,l})_{1 \leq l \leq k \leq n}$ we set
$\boldsymbol \alpha_{k,\bullet} =(\alpha_{k,l})_{1 \leq l \leq k}$ and
$\boldsymbol \alpha_{\bullet,l} =(\alpha_{k,l})_{l \leq k \leq n}$. By
collecting all terms of the form $\Schur_{\eta_k}(\WW^{(k)}) \ot
\Schur_{\nu_k}(\WW_*^{(k)})$ we obtain the following isomorphism
of $\lie s_r \oplus \lie l[r]$-modules
\begin{align*}
\mathcal H &\cong
\bigoplus_{\boldsymbol{\alpha, \beta,\gamma,\eta,\nu}}
\bigotimes_{k=1}^n
c^{\eta_k}_{\boldsymbol \alpha_{k,\bullet}, \boldsymbol \gamma_{k,\bullet}}
c^{\nu_k}_{\boldsymbol \beta_{\bullet,k}, \boldsymbol \gamma_{\bullet,k}}
Q(\boldsymbol \alpha_{k,\bullet}, \boldsymbol \beta_{\bullet,k}) \boxtimes
\Schur_{\eta_k}(\WW^{(k)}) \ot \Schur_{\nu_k}(\WW_*^{(k)})
\end{align*}
where the sum is taken over all collections of partitions and
\begin{align*}
Q(\boldsymbol \alpha_{k,\bullet}, \boldsymbol \beta_{\bullet,k})
&=
\Schur_{\boldsymbol \alpha_{k,\bullet}}
(A_1, A_2, \ldots, A_k) \ot \Schur_{\boldsymbol \beta_{\bullet,k}}
(B_k, B_{k+1}, \ldots, B_n).
\end{align*}
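For instance, when $k = 1$ we have $\boldsymbol \alpha_{1,\bullet} =
(\alpha_{1,1})$ and
\begin{align*}
Q(\boldsymbol \alpha_{1,\bullet}, \boldsymbol \beta_{\bullet,1})
&= \Schur_{\alpha_{1,1}}(A_1) \ot
\Schur_{\beta_{1,1}}(B_1) \ot \cdots \ot \Schur_{\beta_{n,1}}(B_n),
\end{align*}
which is the factor that starts the recursive computation below.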
Since $\mathcal H$ is contained in $\SS^\bullet(\overline{\lie u}(r))$, it must
lie in $\tilde \TT_{\lie l[r]}$. It follows that each weight component
intersects at most finitely many of these direct summands. Thus semisimple
duality will commute with both the direct sum and the large tensor product.
Furthermore, since each $Q(\boldsymbol \alpha_{k,\bullet}, \boldsymbol
\beta_{\bullet,k})$ has finite dimensional components we have
\begin{align*}
\mathcal H^\vee
&\cong
\bigoplus_{\boldsymbol{\alpha, \beta,\gamma,\eta,\nu}}
\bigotimes_{k=1}^n
c^{\eta_k}_{\boldsymbol \alpha_{k,\bullet}, \boldsymbol \gamma_{k,\bullet}}
c^{\nu_k}_{\boldsymbol \beta_{\bullet,k}, \boldsymbol \gamma_{\bullet,k}}
Q(\boldsymbol \alpha_{k,\bullet}, \boldsymbol \beta_{\bullet,k})^\vee \boxtimes
I(\boldsymbol \eta, \boldsymbol \nu)[r]^\vee.
\end{align*}
Now recall that $\Phi_r(I(\boldsymbol \eta, \boldsymbol \nu)[r]^\vee) \cong
\ssHom_{\lie l[r]'}(I(\boldsymbol \eta, \boldsymbol \nu)[r], \CC) \cong
\delta_{\boldsymbol \eta, \boldsymbol \nu} \CC$. Thus
\begin{align*}
\Phi_r(\mathcal H^\vee)
&\cong
\bigoplus_{\boldsymbol{\alpha, \beta,\gamma,\eta}}
\bigotimes_{k=1}^n
c^{\eta_k}_{\boldsymbol \alpha_{k,\bullet}, \boldsymbol \gamma_{k,\bullet}}
c^{\eta_k}_{\boldsymbol \beta_{\bullet,k}, \boldsymbol \gamma_{\bullet,k}}
Q(\boldsymbol \alpha_{k,\bullet}, \boldsymbol \beta_{\bullet,k})^\vee \\
&\cong \left( \bigoplus_{\boldsymbol{\alpha, \beta,\gamma,\eta}}
\bigotimes_{k=1}^n
c^{\eta_k}_{\boldsymbol \alpha_{k,\bullet}, \boldsymbol \gamma_{k,\bullet}}
c^{\eta_k}_{\boldsymbol \beta_{\bullet,k}, \boldsymbol \gamma_{\bullet,k}}
Q(\boldsymbol \alpha_{k,\bullet}, \boldsymbol \beta_{\bullet,k})\right)^\vee.
\end{align*}
Denote the module inside the semisimple dual by $Q$, and let us analyse each
factor of the tensor product separately. When $k = 1$ the
Littlewood-Richardson coefficient $c^{\eta_k}_{\boldsymbol
\alpha_{k,\bullet}, \boldsymbol \gamma_{k,\bullet}}$ simplifies to
$c^{\eta_1}_{\alpha_{1,1}, \emptyset}$, and the only way for this to be nonzero
is that $\eta_1 = \alpha_{1,1}$. Thus the first factor in the tensor product has
the form
\begin{align*}
c^{\eta_1}_{\boldsymbol \beta_{\bullet,1}, \boldsymbol \gamma_{\bullet,1}}
\Schur_{\eta_1}(A_1) \ot \Schur_{\boldsymbol \beta_{\bullet,1}}
(B_1, B_{2}, \ldots, B_n) .
\end{align*}
We can sum all these factors over $\eta_1$ and $\beta_{l,1}$ for every $l \geq
1$ since these do not appear in any other factors. Using Lemma
\ref{lem:schur-identity} we obtain
\begin{align*}
\bigoplus_{\boldsymbol \beta_{\bullet,1}, \eta_1}
&c^{\eta_1}_{\boldsymbol \beta_{\bullet,1}, \boldsymbol \gamma_{\bullet,1}}
\Schur_{\eta_1}(A_1) \ot \Schur_{\boldsymbol \beta_{\bullet,1}}
(B_1, B_{2}, \ldots, B_n) \\
&\cong \bigoplus_{\boldsymbol \beta_{\bullet,1}}
\Schur_{\boldsymbol \beta_{\bullet,1}}
(A_1, A_{1}, \ldots, A_1)
\otimes \Schur_{\boldsymbol \gamma_{\bullet,1}}
(A_1, A_{1}, \ldots, A_1)\otimes
\Schur_{\boldsymbol \beta_{\bullet,1}}
(B_1, B_{2}, \ldots, B_n) \\
&\cong \SS^\bullet(A_1 \ot (B_1 \oplus B_2 \oplus \cdots \oplus B_n))
\otimes \Schur_{\boldsymbol \gamma_{\bullet,1}}
(A_1, A_{1}, \ldots, A_1).
\end{align*}
We thus see that $Q$ is isomorphic to
\begin{align*}
\SS^\bullet(A_1 \otimes (B_1 \oplus B_2 \oplus \cdots \oplus B_n))
\otimes \bigoplus_{\boldsymbol{\alpha, \beta,\gamma,\eta}}
\bigotimes_{k=2}^n
c^{\eta_k}_{\boldsymbol \alpha_{k,\bullet}, \boldsymbol \gamma_{k,\bullet}}
c^{\eta_k}_{\boldsymbol \beta_{\bullet,k}, \boldsymbol \gamma_{\bullet,k}}
\Schur_{\gamma_{k,1}}(A_1)\otimes Q(\boldsymbol \alpha_{k,\bullet},
\boldsymbol \beta_{\bullet,k})
\end{align*}
where the sum is now restricted to the indices $k \geq 2$.
Let us now focus on the new factor corresponding to $k = 2$. We can again sum
over $\eta_2$ and $\boldsymbol \beta_{\bullet,2}$ to get
\begin{align*}
\bigoplus_{\eta_2, \boldsymbol \beta_{\bullet,2}}
&c^{\eta_2}_{\boldsymbol \beta_{\bullet,2}, \boldsymbol \gamma_{\bullet,2}}
c^{\eta_2}_{\alpha_{2,1}, \alpha_{2,2}, \gamma_{2,1}}
\Schur_{\gamma_{2,1}}(A_1) \otimes \Schur_{\alpha_{2,1},
\alpha_{2,2}}(A_1, A_2)\otimes
\Schur_{\boldsymbol \beta_{\bullet,2}}(B_2, \ldots, B_n)\\&\cong
\bigoplus_{\boldsymbol \beta_{\bullet,2}}
\Schur_{\boldsymbol \beta_{\bullet,2}}(\overline A_2)
\otimes \Schur_{\boldsymbol \gamma_{\bullet,2}}(\overline A_2)
\otimes \Schur_{\boldsymbol \beta_{\bullet,2}}(B_2, \ldots, B_n)
\\
&\cong \SS^\bullet(\overline A_2 \otimes (B_2 \oplus \cdots \oplus B_n))
\otimes \Schur_{\boldsymbol \gamma_{\bullet,2}}(\overline A_2),
\end{align*}
where $\overline A_2 = A_1 \oplus A_1 \oplus A_2$. After $n$ steps of this
recursive process we obtain
\begin{align*}
Q \cong
\bigotimes_{k=1}^n \SS^\bullet(\overline A_k \otimes \underline B_k)
\cong \SS^\bullet \left(\bigoplus_{k=1}^n \overline A_k \otimes \underline B_k
\right)
\end{align*}
where
\begin{align*}
\underline B_k &= B_k \oplus B_{k+1} \oplus \cdots \oplus B_n
\end{align*}
and
\begin{align*}
\overline A_k &= A_1 \oplus A_2 \oplus \cdots \oplus A_k \oplus
\overline A_1 \oplus \overline A_2 \oplus \cdots \oplus \overline A_{k-1}\\
&= 2^{k-1}A_1 \oplus 2^{k-2}A_2 \oplus \cdots \oplus A_k.
\end{align*}
Thus in the direct sum the term $A_k \otimes B_l$ appears $2^{l-k} + 2^{l-k-1}
+ \cdots + 2 + 1 = 2^{l-k+1}-1$ times. Now the sum of all the $A_k \otimes B_l$
with $l - k + 1= i$ fixed is the space $R_{i,r}$ introduced in the preamble of
Lemma \ref{lem:W} and so $Q \cong \SS^\bullet(\mathcal R_r)$. The
naturality of Schur functors implies that this is an isomorphism of $\lie
s_r$-modules.
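For instance, when $n = 2$ the direct sum is $A_1 \ot (B_1 \oplus B_2) \oplus
(2A_1 \oplus A_2) \ot B_2$, so $A_1 \ot B_1$ and $A_2 \ot B_2$ each appear
once while $A_1 \ot B_2$ appears $2^2 - 1 = 3$ times, matching $\mathcal R_r =
R_{1,r} \oplus 3 R_{2,r}$.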
Now recall that $T_r(\lambda) = \ssCoind_{\overline{\lie b_r}}^{\lie g_r}
Q^\vee \ot \CC_\lambda$. The $p$-th module of the $\psi$-filtration of
$T_r(\lambda)$ is thus given by $\ssCoind_{\overline{\lie b_r}}^{\lie g_r}
Q_{\geq p-\psi(\lambda)}^\vee \ot \CC_\lambda$, and by exactness of
semisimple coinduction on duals, the $p$-th layer of the filtration is
isomorphic to $\ssCoind_{\overline{\lie b_r}}^{\lie g_r} Q_{p -
\psi(\lambda)}^\vee \ot \CC_\lambda$, whose top layer is in turn isomorphic to
$\ssCoind_{\overline{\lie b} \cap \lie s_r}^{\lie s_r} Q_{p-\psi(\lambda)}^\vee
\ot \CC_\lambda$. Finally, the isomorphism we found before tells us that
$Q_{p-\psi(\lambda)}^\vee \ot \CC_\lambda \cong \SS^\bullet(\mathcal
R_r^\vee)_{p-\psi(\lambda)} \ot \CC_\lambda$ as $\lie s_r \cap \overline{\lie b}$-modules, so we
are done.
\section{Category $\CAT{}{}$: injective objects and further categorical
properties}
\label{s:injectives}
\subsection{Large annihilator coinduction}
In previous sections we have used inflation functors to turn objects in
$\overline{\mathcal O}_{\lie s}$ into $\lie q$-modules and then used induction
to construct objects in $\CAT{}{}$. In this section we will use a dual
construction. Recall that $I_{\lie s}(\lambda)$ denotes the injective envelope
of $L_{\lie s}(\lambda)$ in $\overline{\mathcal O}_{\lie s}$.
\begin{defn}
We denote by $\mathcal C: \overline{\mathcal O}_{\lie s} \to \Mod{(\lie g,
\lie h)}$ the functor $\mathcal C = \Phi \circ \ssCoind_{\overline{\lie
q}}^{\lie g} \circ \mathcal I_{\lie s}^{\overline{\lie q}}$. For each
$\lambda \in \lie h^\circ$ we set $I(\lambda) = \mathcal C(I_{\lie
s}(\lambda))$.
\end{defn}
Notice that the image of $\mathcal C$ does not automatically fall in $\CAT{}{}$. However
from the definition and Proposition \ref{prop:ss-dual-coind} it follows
that $\mathcal C(M_{\lie s}(\lambda)^\vee) = A(\lambda)$, so standard objects from the
category $\overline{\mathcal O}_{\lie s}$ are sent to standard objects of
$\CAT{}{}$. We will show that the same holds for injective modules, and that
in fact the standard filtrations of injective modules in $\overline{\mathcal
O}_{\lie s}$ also lift to standard filtrations in $\CAT{}{}$.
\subsection{Injective objects}
We now begin our study of injective envelopes in $\CAT{}{}$. Set $J(\lambda) =
\ssCoind_{\overline{\lie q}}^{\lie g} \mathcal I_{\lie s}^{\overline{\lie q}}
I_{\lie s}(\lambda)$. We will show that, just as for standard modules, we have
$I(\lambda) = \Phi(J(\lambda)) = \LAprojector(J(\lambda))$, and that this is
the injective envelope of the simple module $L(\lambda)$.
\begin{lem}
\label{lem:coind-inj}
Let $\lambda \in \lie h^\circ$ and let $I_{\lie s}(\lambda)$ be the injective
envelope of $L_{\lie s}(\lambda)$ in $\overline{\mathcal O}_{\lie s}$. Then
for any object $M$ in $\CAT{}{}$ we have $\Ext_{\lie g, \lie h}^1 (M, J(\lambda)
) = 0$.
\end{lem}
\begin{proof}
It is enough to prove the result for $M = L(\mu)$ with $\mu$ an eligible
weight. It follows from Proposition \ref{prop:hwm-properties} that $L(\mu) \cong
\Ind_{\lie s}^{\overline{\lie q}} L_{\lie s}(\mu)$ as $\overline{\lie
q}$-module. Since $I_{\lie s}(\lambda)$ is an injective object it is acyclic
for the functor $\ssCoind_{\overline{\lie q}}^{\lie g} \circ \mathcal I_{\lie
s}^{\overline{\lie q}}$, so by standard homological algebra
\begin{align*}
\Ext_{\lie g, \lie h}^1(L(\mu), J(\lambda))
&\cong \Ext_{\lie s, \lie h}^1(U(\lie s) \ot_{\overline{\lie q}} L(\mu),
I_{\lie s}(\lambda))
= \Ext_{\lie s, \lie h}^1(L_{\lie s}(\mu), I_{\lie s}(\lambda)).
\end{align*}
Since $\overline{\mathcal O}_{\lie s}$ is closed under semisimple extensions,
this last $\Ext$-space must be $0$.
\end{proof}
The class of $\Phi$-acyclic modules is closed by extensions, and so is the
subclass of $\Phi$-acyclic modules $M$ such that $\Phi(M) = \LAprojector(M)$.
The following lemma states that any module with a filtration by dual Verma
modules lies in this class.
\begin{lem}
\label{lem:phi-pi}
Let $M$ be a weight $\lie g$-module. Suppose $M$ has a finite filtration
$F_1M \subset F_2M \subset \cdots \subset F_nM$ such that its $i$-th layer is
isomorphic to $M(\lambda_i)^\vee$ for $\lambda_i \in \lie h^\circ$. Then
$\Phi(M) = \LAprojector(M)$, and $\LAprojector(F_i M)/\LAprojector(F_{i-1}M)
\cong A(\lambda_i)$.
\end{lem}
\begin{proof}
The proof reduces to showing that $M(\lambda)^\vee$ is acyclic for both
$\Phi$ and $\Gamma_{\lie n}$.
Recall that $\Phi$ can be written as the direct limit of functors $\Phi_r$
taking $\lie l[r]'$-invariant submodules, and that there are natural
isomorphisms
\begin{align*}
\Phi_r &\cong
\bigoplus_\mu \Hom_{\lie g, \lie h}(U(\lie g) \ot_{\lie l[r]^+}
\CC_\mu, -)
\end{align*}
where the sum is taken over all $r$-eligible weights $\mu \in \lie h^\circ$.
Since derived functors commute with direct limits, and in particular with
direct sums, it is enough to show that if $\lambda$ and $\mu$ are $r$-eligible
then $\Ext^1_{\lie g, \lie h}(U(\lie g) \ot_{\lie l[r]^+} \CC_\mu,
M(\lambda)^\vee) = 0$.
Notice that the result is trivial if $\lambda$ and $\mu$ have different levels,
so we may assume that both have level $\chi = \chi_1 \omega^{(1)} + \cdots +
\chi_n \omega^{(n)}$ and so $\lambda = \lambda_r + \chi$ and $\mu = \mu|_r +
\chi$. We can rewrite $M(\lambda)^\vee$ as follows
\begin{align*}
M(\lambda)^\vee
&\cong \ssCoind_{\overline{\lie p}(r)}^{\lie g}
\mathcal I^{\overline{\lie p}(r)}_{\lie l[r]^+}
\ssCoind_{\overline{\lie b} \cap \lie l[r]^+}^{\lie l[r]^+}
(\CC_{\chi_1} \boxtimes \cdots \boxtimes \CC_{\chi_n}) \ot
\CC_{\lambda_r}\\
&\cong \ssCoind_{\overline{\lie p}(r)}^{\lie g}
\mathcal I^{\overline{\lie p}(r)}_{\lie l[r]^+} (M_1^\vee
\boxtimes \cdots \boxtimes M_n^\vee) \otimes
\CC_{\lambda_r}
\end{align*}
where $M_i = M_{\lie l[r]}(\chi_i \omega^{(i)}[r])$.
Using that inflation functors are exact and semisimple coinduction is
exact on semisimple duals, standard homological algebra implies
\begin{align*}
\Ext^1_{\lie g, \lie h}&(U(\lie g) \ot_{\lie l[r]^+} \CC_\mu, M(\lambda)^\vee) \\
&\cong \Ext_{\lie l[r]^+, \lie h}^1(U(\lie l[r]^+)
\ot_{\overline{\lie p}(r)}
U(\lie g) \ot_{\lie l[r]^+} \CC_\mu,
M_1^\vee \boxtimes \cdots \boxtimes M_n^\vee \otimes
\CC_{\lambda_r}) \\
&\cong \Ext_{\lie l[r], \lie h}^1
(\SS^\bullet(\lie u(r)) \ot \CC_{\mu - \lambda},
M_1^\vee \boxtimes \cdots \boxtimes M_n^\vee).
\end{align*}
The first argument of this $\Ext$-space decomposes as a direct sum of tensor
$\lie l[r]$-modules, so the $\Ext$-space decomposes as a tensor product
of spaces of the form
\begin{align*}
\Ext_{\lie g(\VV^{(k)}[r]), \lie h^{(k)}}(T_k, M_k^\vee)
\end{align*}
with $T_k$ a tensor module, and hence in $\CAT{\lie l[r]}{\lie l[r]}$. Now
each $M_i^\vee$ is injective in the corresponding category $\overline{\mathcal
O}$, since it is the dual Verma module corresponding to a weight that is
maximal in its dot-orbit. By Lemma \ref{lem:coind-inj} each of these
$\Ext$-spaces is $0$, and hence $M(\lambda)^\vee$ is $\Phi$-acyclic.
Now take $J_r$ to be the left ideal of $U(\lie g)$ generated by $\lie n^r$.
There exists a natural isomorphism
\begin{align*}
\Gamma_{\lie n}
\cong \varinjlim \bigoplus_{\lambda \in \lie h^\circ}
\Hom_{\lie g, \lie h}(U(\lie g)/J_r \ot_{\lie h} \CC_\lambda, -),
\end{align*}
so a similar argument works in this case. Finally since $\LAprojector =
\Gamma_{\lie n} \circ \Phi$, we are done.
\end{proof}
We are now ready to prove the main result of this section.
\begin{thm}
\label{thm:injectives}
Let $\lambda \in \lie h^\circ$. The module $I(\lambda)$ lies in $\CAT{}{}$ and
it is an injective envelope for $L(\lambda)$. Furthermore it has a finite
filtration whose first layer is $A(\lambda)$ and whose higher layers are
isomorphic to $A(\mu)$ with $\mu >_\fin \lambda$.
\end{thm}
\begin{proof}
Since inflation is exact and semisimple coinduction is exact over semisimple
duals, the costandard filtration of $I_{\lie s}(\lambda)$ induces a finite
filtration over $J(\lambda)$, whose first layer is isomorphic to
$M(\lambda)^\vee$ and whose higher layers are isomorphic to $M(\mu)^\vee$ for
$\mu \succeq \lambda$.
Applying Lemma \ref{lem:phi-pi} it follows that $I(\lambda) = \Phi(J(\lambda))
= \LAprojector(J(\lambda))$ and hence belongs to $\CAT{}{}$, and that this
module has the required filtration. Finally, reasoning as in the proof of Lemma
\ref{lem:coind-inj} we have isomorphisms
\begin{align*}
\Ext_{\CAT{}{}}^i(L(\mu), I(\lambda)) \cong
\Ext_{\lie g, \lie h}^i(L(\mu), J(\lambda))
&\cong \Ext_{\lie s, \lie h}^i(L_{\lie s}(\mu), I_{\lie s}(\lambda)).
\end{align*}
Taking $i = 0$ and varying $\mu$ we see that $L(\lambda)$ is the socle of
$I(\lambda)$, and taking $i = 1$ it follows that $I(\lambda)$ is injective in
$\CAT{}{}$.
\end{proof}
\subsection{Highest weight structure and blocks}
We are now ready to prove the main result of this paper, namely that $\CAT{}{}$
is a highest weight category in the sense of Cline, Parshall and Scott
\cite{CPS88}.
\begin{thm}
Category $\CAT{}{}$ is a highest weight category with indexing set
$(\lie h^\circ, <_\order)$. Simple modules are simple highest weight modules
$L(\lambda)$, the family of standard objects $A(\lambda)$ have infinite length,
and the corresponding injective envelopes $I(\lambda)$ have finite standard
filtrations.
\end{thm}
\begin{proof}
We only need to refer to previously proved results. We have just seen in Lemma
\ref{lem:order} that $<_\order$ is an interval-finite order. The fact that
$\CAT{}{}$ is locally artinian is immediate from \ref{prop:fg-fl}. That
eligible weights parametrise simple modules was proved in Theorem
\ref{thm:simples}, and standard modules exist and have the desired properties
by Theorem \ref{thm:standard}. Finally injective modules exist and have finite
standard filtrations with the required properties by Theorem
\ref{thm:injectives}.
\end{proof}
A consequence of the previous results is that $M_{\lie s}(\lambda)^\vee$ is
$\mathcal C$-acyclic, and standard filtrations of $I(\lambda)$ arise as the
image by $\mathcal C$ of a standard filtration of $I_{\lie s}(\lambda)$. As a
consequence we get the following analogue of BGG-reciprocity in $\CAT{}{}$.
\begin{cor}
Let $\lambda, \mu \in \lie h^\circ$. The multiplicity of $A(\mu)$ in any
standard filtration of $I(\lambda)$ equals $m(\lambda, \mu)$.
\end{cor}
\begin{proof}
It is enough to check the result for one standard filtration.
We can obtain a standard filtration of $I(\lambda)$ by applying the functor
$\mathcal C$ to a standard filtration of $I_{\lie s}(\lambda)$. Thus the
desired multiplicity is the same as the multiplicity of $M_{\lie s}(\mu)^\vee$
in $I_{\lie s}(\lambda)$. By Theorem \ref{thm:inj-filtration} this multiplicity
equals $[M_{\lie s}(\mu): L_{\lie s}(\lambda)] = m(\lambda,\mu)$.
\end{proof}
We finish with a description of the irreducible blocks of $\CAT{}{}$. Recall
that we denote by $\Lambda$ the root lattice of $\lie g$. For each $\kappa
\in \lie h^\circ / \Lambda$ set $\CAT{}{}(\kappa)$ to be the subcategory formed
by those modules in $\CAT{}{}$ whose support is contained in $\kappa$. Notice
in particular that each class has a fixed level, and so the decomposition
$\CAT{}{} = \prod_{\kappa} \CAT{}{}(\kappa)$ refines the decomposition by
levels. We now show that these are the indecomposable blocks of $\CAT{}{}$.
\begin{prop}
The blocks $\CAT{}{}(\kappa)$ are indecomposable.
\end{prop}
\begin{proof}
The support of $\lie g_{-1}^\psi \subset \mathcal R$ spans the
root lattice, and it follows from Theorem \ref{thm:standard-mults} that $[A(\lambda):
L(\lambda)] = [A(\lambda): L(\lambda + \mu)]$ for any root $\mu$ in the support of $\lie
g^\psi_{-1}$. Thus whenever two weights $\lambda, \mu$ lie in the same class
modulo the root lattice, there is a finite chain of weights $\lambda =
\lambda_1, \lambda_2, \ldots, \lambda_m = \mu$ such that $L(\lambda_{i+1})$ is a
simple constituent of the indecomposable module $A(\lambda_i)$. This shows that
all simple modules in the block $\CAT{}{}(\kappa)$ are linked and hence the
block is indecomposable.
\end{proof}
\begin{bibdiv}
\begin{biblist}
\bib{Brundan03}{article}{
author={Brundan, Jonathan},
title={Kazhdan-Lusztig polynomials and character formulae for the Lie superalgebra $\germ g\germ l(m|n)$},
journal={J. Amer. Math. Soc.},
volume={16},
date={2003},
number={1},
pages={185--231},
issn={0894-0347},
doi={10.1090/S0894-0347-02-00408-3},
}
\bib{BLW17}{article}{
author={Brundan, Jonathan},
author={Losev, Ivan},
author={Webster, Ben},
title={Tensor product categorifications and the super Kazhdan-Lusztig conjecture},
journal={Int. Math. Res. Not. IMRN},
date={2017},
number={20},
pages={6329--6410},
issn={1073-7928},
doi={10.1093/imrn/rnv388},
}
\bib{BS12a}{article}{
author={Brundan, Jonathan},
author={Stroppel, Catharina},
title={Highest weight categories arising from Khovanov's diagram algebra IV: the general linear supergroup},
journal={J. Eur. Math. Soc. (JEMS)},
volume={14},
date={2012},
number={2},
pages={373--419},
issn={1435-9855},
review={\MR {2881300}},
doi={10.4171/JEMS/306},
}
\bib{CPS88}{article}{
author={Cline, Edward},
author={Parshall, Brian},
author={Scott, Leonard},
title={Finite dimensional algebras and highest weight categories},
journal={J. Reine Angew. Math.},
volume={391},
pages={85--99},
year={1988},
}
\bib{CP19}{article}{
author={Coulembier, Kevin},
author={Penkov, Ivan},
title={On an infinite limit of BGG categories $\mathbf O$},
journal={Mosc. Math. J.},
volume={19},
date={2019},
number={4},
pages={655--693},
issn={1609-3321},
doi={10.17323/1609-4514-2019-19-4-655-693},
}
\bib{DanCohen08}{article}{
author={Dan-Cohen, Elizabeth},
title={Borel subalgebras of root-reductive Lie algebras},
journal={J. Lie Theory},
volume={18},
date={2008},
number={1},
pages={215--241},
}
\bib{DCPS16}{article}{
author={Dan-Cohen, Elizabeth},
author={Penkov, Ivan},
author={Serganova, Vera},
title={A Koszul category of representations of finitary Lie algebras},
journal={Adv. Math.},
volume={289},
date={2016},
pages={250--278},
}
\bib{DCPS07}{article}{
author={Dan-Cohen, Elizabeth},
author={Penkov, Ivan},
author={Snyder, Noah},
title={Cartan subalgebras of root-reductive Lie algebras},
journal={J. Algebra},
volume={308},
date={2007},
number={2},
pages={583--611},
}
\bib{DP04}{article}{
author={Dimitrov, Ivan},
author={Penkov, Ivan},
title={Borel subalgebras of $\rm gl(\infty )$},
journal={Resenhas},
volume={6},
date={2004},
number={2-3},
pages={153--163},
}
\bib{Dixmier77}{book}{
title={Enveloping Algebras},
author={Dixmier, Jacques},
series={North-Holland mathematical library},
url={https://books.google.com.ar/books?id=Dl4ljyqGG9wC},
year={1977},
}
\bib{DGK82}{article}{
author={Deodhar, Vinay},
author={Gabber, Ofer},
author={Kac, Victor},
title={Structure of some categories of representations of infinite-dimensional Lie algebras},
journal={Adv. in Math.},
volume={45},
pages={92--116},
year={1982},
}
\bib{GP19}{article}{
author={Grantcharov, Dimitar},
author={Penkov, Ivan},
title={Simple bounded weight modules for $\mathfrak {sl}_\infty , \mathfrak {o}_\infty , \mathfrak {sp}_\infty $},
note={preprint, available at arXiv:1807.01899},
year={2018},
}
\bib{HP22}{book}{
author={Penkov, Ivan},
author={Hoyt, Crystal},
title={Classical Lie algebras at infinity},
series={Springer Monographs in Mathematics},
publisher={Springer, Cham},
date={2022},
pages={xiii+239},
}
\bib{HPS19}{article}{
author={Hoyt, Crystal},
author={Penkov, Ivan},
author={Serganova, Vera},
title={Integrable $\mathfrak {sl}(\infty )$-modules and category $\mathcal O$ for $\mathfrak {gl}(m \mid n)$},
journal={{J. Lond. Math. Soc., II. Ser.}},
volume={99},
number={2},
pages={403--427},
year={2019},
}
\bib{Humphreys08}{book}{
author={Humphreys, James E.},
title={Representations of semisimple Lie algebras in the BGG category $\scr {O}$},
series={Graduate Studies in Mathematics},
volume={94},
publisher={American Mathematical Society, Providence, RI},
date={2008},
pages={xvi+289},
}
\bib{Nampaisarn17}{book}{
author={Nampaisarn, V.},
title={Categories $\mathcal O$ for Dynkin Borel subalgebras of Root-Reductive Lie algebras},
year={2017},
note={PhD Thesis, available at \url {https://arxiv.org/abs/1706.05950}},
}
\bib{NP03}{article}{
author={Neeb, Karl-Hermann},
author={Penkov, Ivan},
title={Cartan subalgebras of $\germ {gl}_\infty $},
journal={Canad. Math. Bull.},
volume={46},
date={2003},
number={4},
pages={597--616},
issn={0008-4395},
doi={10.4153/CMB-2003-056-1},
}
\bib{PS11a}{article}{
author={Penkov, Ivan},
author={Serganova, Vera},
title={Categories of integrable $sl(\infty )$-, $o(\infty )$-, $sp(\infty )$-modules},
conference={ title={Representation theory and mathematical physics}, },
book={ series={Contemp. Math.}, volume={557}, publisher={Amer. Math. Soc., Providence, RI}, },
date={2011},
pages={335--357},
label={PS11a},
}
\bib{PS19}{article}{
author={Penkov, Ivan},
author={Serganova, Vera},
title={Large annihilator category $\mathcal {O}$ for $\mathfrak {sl}(\infty ),\mathfrak {o}(\infty ),\mathfrak {sp}(\infty )$},
journal={J. Algebra},
volume={532},
date={2019},
pages={152--182},
}
\bib{PS11b}{article}{
author={Penkov, Ivan},
author={Styrkas, Konstantin},
title={Tensor representations of classical locally finite Lie algebras},
conference={ title={Developments and trends in infinite-dimensional Lie theory}, },
book={ series={Progr. Math.}, volume={288}, publisher={Birkh\"{a}user Boston, Inc., Boston, MA}, },
date={2011},
pages={127--150},
label={PS11b},
}
\bib{SS15}{article}{
author={Sam, Steven V.},
author={Snowden, Andrew},
title={Stability patterns in representation theory},
journal={Forum Math. Sigma},
volume={3},
date={2015},
pages={Paper No. e11, 108},
review={\MR {3376738}},
doi={10.1017/fms.2015.10},
}
\bib{Serganova21}{article}{
author={Serganova, Vera},
title={Tensor product of the Fock representation with its dual and the Deligne category},
conference={ title={Representation theory, mathematical physics, and integrable systems}, },
book={ series={Progr. Math.}, volume={340}, publisher={Birkhauser/Springer, Cham}, },
date={2021},
pages={569--584},
doi={10.1007/978-3-030-78148-4-19},
}
\end{biblist}
\end{bibdiv}
\end{document} |
\section{Introduction}
\label{sec:introduction}
The goal of sequential data assimilation is to estimate the true state of a dynamical system $\x^\textnormal{true} \in \Re^{\Nstate \times 1}$ using information from numerical models, priors, and observations. A numerical model captures (with some approximation) the physical laws of the system and evolves its state forward in time \cite{Cheng2010}:
\begin{eqnarray}
\label{eq:intro-numerical-model}
\x_{k} = \M_{t_{k-1} \rightarrow t_k} \lp \x_{k-1} \rp \in \Re^{\Nstate \times 1} ,\,
\end{eqnarray}
where $\Nstate$ is the dimension of the model state, $k$ denotes time index, and $\M$ can represent, for example, the dynamics of the ocean and/or atmosphere. A prior estimation $\x_k^\textnormal{b} \in \Re^{\Nstate \times 1}$ of $\x_k^\textnormal{true}$ is available, and the prior error $\errbac$ is usually assumed to be normally distributed:
\begin{eqnarray}
\label{eq:background-errors}
\displaystyle
\x_k^\textnormal{b} - \x^\textnormal{true} = \errbac_k \sim \Nor \lp {\bf 0},\, \B_k \rp \in \Re^{\Nstate \times 1},
\end{eqnarray}
where $\B_k \in \Re^{\Nstate \times \Nstate}$ is the background error covariance matrix. Noisy observations (measurements) of the true state $\y_k \in \Re^{\Nobs \times 1}$ are taken, and the observation errors $\errobs$ are usually assumed to be normally distributed:
\begin{eqnarray}
\label{eq:noisy-observations}
\displaystyle
\y_k -\Ho \lp \x_k^\textnormal{true} \rp = \errobs_k \sim \Nor \lp {\bf 0},\, \R_k \rp\, \in \Re^{\Nobs \times 1} ,\,
\end{eqnarray}
where $\Nobs$ is the number of observed components, $\Ho: \Re^{\Nstate\times 1} \rightarrow \Re^{\Nobs \times 1}$ is the observation operator, and $\R_k \in \Re^{\Nobs \times \Nobs}$ is the data error covariance matrix.
Making use of Bayesian statistics and matrix identities, the assimilation of the observation \eqref{eq:noisy-observations} is performed as follows:
\begin{equation}
\label{eq:analysis-state}
\begin{split}
\x_k^\textnormal{a} &= \x_k^\textnormal{b} + \B_k \cdot \H_k^T \cdot \lb \H_k \cdot {\B_k} \cdot \H_k^T +\R_k \rb^{-1} \cdot \lb \y_k - \Ho(\x_k^\textnormal{b}) \rb \in \Re^{\Nstate \times 1} ,\,\\
{\bf A}_k &= \lb \I - \B_k \cdot \H_k^T \cdot \lb \R_k+\H_k \cdot \B_k \cdot \H_k^T \rb^{-1} \cdot \H_k \rb \cdot \B_k \in \Re^{\Nstate \times \Nstate},\,
\end{split}
\end{equation}
where $ \H_k \approx \Ho'(\x_k^\textnormal{b}) \in \Re^{\Nobs \times \Nstate}$ is a linear approximation of the observational operator, and $\x_k^\textnormal{a} \in \Re^{\Nstate \times 1}$ is the analysis (posterior) state.
According to equation \eqref{eq:analysis-state} the elements of $\B_k$ determine how the information about the observed model components contained in the innovations $ \y_k - \Ho(\x_k^\textnormal{b}) \in \Re^{\Nobs \times 1}$ is distributed to properly adjust all model components, including the unobserved ones. Thus, the successful assimilation of the observation \eqref{eq:noisy-observations} will rely, in part, on how well the background error statistics are approximated.
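To make the analysis step \eqref{eq:analysis-state} concrete, the following minimal Python/NumPy sketch applies it to a small synthetic linear system; the sizes, matrices, and variable names are illustrative assumptions and not part of any operational code.
\begin{verbatim}
import numpy as np

def kalman_analysis(xb, B, H, R, y):
    # Gain K = B H^T (H B H^T + R)^{-1}; B and R are symmetric
    S = H @ B @ H.T + R
    K = np.linalg.solve(S, H @ B).T
    xa = xb + K @ (y - H @ xb)   # analysis (posterior) state
    A = B - K @ H @ B            # analysis covariance
    return xa, A

rng = np.random.default_rng(0)
n, m = 4, 2
B, R = np.eye(n), 0.1 * np.eye(m)
H = np.zeros((m, n)); H[0, 0] = H[1, 2] = 1.0  # two observed components
xb = rng.standard_normal(n)
y = H @ xb + rng.multivariate_normal(np.zeros(m), R)
xa, A = kalman_analysis(xb, B, H, R, y)
\end{verbatim}
Note how information reaches the unobserved components only through the off-diagonal entries of $\B$, as described above.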
In the context of ensemble based methods, an ensemble of model realizations
\begin{eqnarray}
\label{eq:background-ensemble}
\X^\textnormal{b}_k = \lb \x^{\textnormal{b}[1]}_k,\,\x^{\textnormal{b}[2]}_k,\,\ldots,\, \x^{\textnormal{b}[\Nens]}_k\rb \in \Re^{\Nstate \times \Nens} ,\,
\end{eqnarray}
is used in order to estimate the unknown moments of the background error distribution:
\begin{subequations}
\label{eq:moments-ensemble}
\begin{eqnarray}
\label{eq:ensemble-mean}
\displaystyle
\xm^\textnormal{b}_k &=& \frac{1}{\Nens} \cdot \sum_{i=1}^{\Nens} \x_k^{\textnormal{b}[i]} \in \Re^{\Nstate \times 1} ,\\
\label{eq:covariance-ensemble}
\displaystyle
\B_k \approx \P^\textnormal{b} &=& \frac{1}{\Nens-1} \cdot {\bf U}_k^\textnormal{b} \cdot \left({\bf U}_k^\textnormal{b}\right)^T \in \Re^{\Nstate \times \Nstate} ,\,
\end{eqnarray}
where $\Nens$ is the number of ensemble members, $\x_k^{\textnormal{b}[i]} \in \Re^{\Nstate \times 1}$ is the $i$-th ensemble member, $\xm_k^\textnormal{b} \in \Re^{\Nstate \times 1}$ is the background ensemble mean, $\P_k^\textnormal{b}$ is the background ensemble covariance matrix, and ${\bf U}_k \in \Re^{\Nstate \times \Nens}$ is the matrix of member deviations:
\begin{eqnarray}
\label{eq:matrix-of-member-deviations}
\displaystyle
{\bf U}_k^\textnormal{b} = \X_k^\textnormal{b} - \xm_k^\textnormal{b} \cdot {\bf 1}_{\Nens}^T \in \Re^{\Nstate \times \Nens}.
\end{eqnarray}
\end{subequations}
One attractive feature of $\P_k^\textnormal{b}$ is its flow-dependency, which makes it possible to approximate the background error correlations based on the dynamics of the numerical model \eqref{eq:intro-numerical-model}. However, in operational data assimilation, the number of model components is much larger than the number of model realizations, $\Nstate \gg \Nens$, and therefore $\P_k^\textnormal{b}$ is rank-deficient. Spurious correlations (e.g., correlations between distant model components in space) can degrade the quality of the analysis corrections. One of the most successful EnKF formulations is the local ensemble transform Kalman filter (LETKF), in which the impact of spurious analysis corrections is avoided by making use of local domain analyses. In this context, every model component is surrounded by a box of a prescribed radius, and the assimilation is performed within every local box. In this case the background error correlations are provided by the local ensemble covariance matrix. The local analyses are mapped back onto the global domain to obtain the global analysis state. Nevertheless, when sparse observational networks are considered, many boxes can contain no observations, in which case the local analyses coincide with the background. The local box sizes can be increased in order to include observations within the local domains, in which case the local analysis corrections can be impacted by spurious correlations. Moreover, in practice, the size of the local boxes can still be larger than the number of ensemble members, and therefore the local sample covariance matrix can be rank-deficient.
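The rank deficiency mentioned above is easy to reproduce numerically; a minimal sketch of the moments \eqref{eq:moments-ensemble} with synthetic numbers (all sizes are illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_state, n_ens = 100, 20                    # Nstate >> Nens
Xb = rng.standard_normal((n_state, n_ens))  # columns are ensemble members

xb_mean = Xb.mean(axis=1, keepdims=True)    # ensemble mean
Ub = Xb - xb_mean                           # matrix of member deviations
Pb = (Ub @ Ub.T) / (n_ens - 1)              # ensemble covariance

print(np.linalg.matrix_rank(Pb))            # at most n_ens - 1 = 19
\end{verbatim}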
In order to address the above issues this paper proposes a better estimation of the inverse background error covariance matrix $\B^{-1}$ obtained via a modified Cholesky decomposition. By imposing conditional independence between errors in remote model components we obtain sparse approximations of $\B^{-1}$.
This paper is organized as follows. In Section \ref{sec:background} ensemble based methods and the modified Cholesky decomposition are introduced. Section \ref{sec:EnKF-MC} discusses the proposed ensemble Kalman filter based on a modified Cholesky decomposition for inverse covariance matrix estimation; a theoretical convergence of the estimator in the context of data assimilation as well as its computational effort are discussed. Section \ref{sec:experimental-settings} presents numerical experiments using the Atmospheric General Circulation Model SPEEDY; the results of the new filter are compared against those obtained by the local ensemble transform Kalman filter. Future work is discussed in Section \ref{sec:future-work} and conclusions are drawn in Section \ref{sec:conclusions}.
\section{Background}
\label{sec:background}
The ensemble Kalman filter is a sequential Monte Carlo method for state and parameter estimation of non-linear models such as those found in atmospheric and oceanic sciences \cite{TELA:TELA299,EnKFEvensen,EnKF1657419}. The EnKF popularity is due to its basic theoretical formulation and its relative ease of implementation \cite{EnKFEvensen}. Given the \textit{background} ensemble \eqref{eq:background-ensemble} EnKF builds the \textit{analysis} ensemble as follows:
\begin{subequations}
\begin{eqnarray}
\label{eq:EnKF-analysis-ensemble}
\displaystyle
\X^\textnormal{a} = \X^\textnormal{b} + \P^\textnormal{b} \cdot \H^T \cdot \lb \R + \H \cdot \P^\textnormal{b} \cdot \H^T \rb^{-1} \cdot \boldsymbol{\Delta} \in \Re^{\Nstate \times \Nens} ,\,
\end{eqnarray}
where:
\begin{eqnarray}
\label{eq:EnKF-innvo-observations}
\displaystyle
\boldsymbol{\Delta} = \Y^\textnormal{s} - \Ho (\X^\textnormal{b}) \in \Re^{\Nobs \times \Nens} \,,
\end{eqnarray}
and the matrix of perturbed observations $\Y^\textnormal{s} \in \Re^{\Nobs \times \Nens}$ is:
\begin{equation}
\label{eq:EnKF-synthetic-data}
\begin{split}
\Y^\textnormal{s} &= \lb \y+\errobs^{[1]},\, \y+\errobs^{[2]},\, \ldots, \, \y+\errobs^{[\Nens]} \rb \in \Re^{\Nobs \times \Nens} \,,\\
\errobs^{[i]} &\sim \Nor \lp {\bf 0},\, \R \rp, \quad 1 \le i \le \Nens \,.
\end{split}
\end{equation}
\end{subequations}
For ease of notation we have omitted the time indices.
The use of perturbed observations \eqref{eq:EnKF-synthetic-data} during the assimilation provides asymptotically correct analysis-error covariance estimates for large ensemble sizes and makes the formulation of the EnKF statistically consistent \cite{Thomas2002}. However, it also has been shown that the inclusion of perturbed observations introduces sampling errors in the assimilation \cite{SamplingErrors1,SamplingErrors2}.
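A minimal sketch of the stochastic analysis step \eqref{eq:EnKF-analysis-ensemble}--\eqref{eq:EnKF-synthetic-data} for a linear observation operator follows; it forms $\P^\textnormal{b}$ explicitly, which is only viable for toy problems and is meant as an illustration.
\begin{verbatim}
import numpy as np

def enkf_perturbed_obs(Xb, H, R, y, rng):
    n, N = Xb.shape
    Ub = Xb - Xb.mean(axis=1, keepdims=True)
    Pb = (Ub @ Ub.T) / (N - 1)                     # ensemble covariance
    Ys = y[:, None] + rng.multivariate_normal(
        np.zeros(y.size), R, size=N).T             # perturbed observations
    D = Ys - H @ Xb                                # innovation matrix
    S = H @ Pb @ H.T + R
    return Xb + Pb @ H.T @ np.linalg.solve(S, D)   # analysis ensemble
\end{verbatim}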
One of the important problems faced by current ensemble based methods is that spurious correlations between distant components in the physical space lead to spurious analysis corrections. Better approximations of the background error covariance matrix have been proposed in the literature in order to alleviate this problem. A traditional approximation of $\B$ is the Hollingsworth and L\"{o}nnberg method \cite{TELA:TELA460}, in which the differences between observations and background states are treated as a combination of background and observation errors. However, this method provides statistics of background errors in observation space, and requires dense observing networks (not the case in practice). Another method has been proposed by Benedetti and Fisher \cite{QJ:QJ37} based on forecast differences, in which the spatial correlations of background errors are assumed to be similar for 24 and 48 hour forecasts. This method can be efficiently implemented in practice; however, it does not perform well in data-sparse regions, and the statistics provided are a mixture of analysis and background errors. Another way to reduce the impact of spurious correlations is based on adaptive modeling \cite{AdaptativeModeling}. In this context, the model learns and changes with regard to the data collected (i.e., parameter values and model structures). This makes it possible to calibrate, in time, the error subspace rank (i.e., the number of empirical orthogonal functions used in the assimilation process), the tapering parameter (i.e., local domain sizes), and the ensemble size, among others. Yet another method based on error subspace statistical estimation is proposed in \cite{LemusDA}. This approach develops an evolving error subspace, of variable size, that targets the processes where the dominant errors occur. Then, the dominant errors are minimized in order to estimate the best model state trajectory with regard to the observations. We have proposed approximations based on autoregressive error models \cite{Sandu_2007_ARMA} and on hybrid subspace techniques \cite{Sandu_2010_hybridCovariance}.
Covariance matrix localization artificially reduces correlations between distant model components via a Schur product with a localization matrix $\boldsymbol{\Pi} \in \Re^{\Nstate \times \Nstate}$:
\begin{eqnarray}
\label{eq:EnKF-localized-ensemble-covariance}
\displaystyle
\widehat{\P}^\textnormal{b} = \boldsymbol{\Pi}\circ \P^\textnormal{b} \in \Re^{\Nstate \times \Nstate}
\end{eqnarray}
and then $\P^\textnormal{b}$ is replaced by $\widehat{\P}^\textnormal{b} \in \Re^{\Nstate \times \Nstate}$ in the EnKF analysis equation \eqref{eq:EnKF-analysis-ensemble}. The entries of $\boldsymbol{\Pi}$ decrease with the distance between model components depending on the radius of influence ${\zeta}$:
\begin{eqnarray}
\label{eq:EnKF-covariance-localisation}
\lle \boldsymbol{\Pi} \rle_{i,j} = \exp \lp -\frac{\pi \lp m_i,\,m_j \rp}{f({\zeta})} \rp \,, \text{ for $1 \le i \le j \le \Nstate$}\,,
\end{eqnarray}
where $\pi \lp m_i,\,m_j \rp$ represents the physical distance between the model components $m_i$ and $m_j$, while $f({\zeta})$ is a function of ${\zeta}$ (e.g., $f({\zeta}) = 2 \cdot {\zeta}^2$). The exponential decay reduces the impact of innovations between distant model components. The use of covariance matrix localization alleviates the impact of sampling errors. However, the explicit computation of $\boldsymbol{\Pi}$ (and even $\P^\textnormal{b}$) is prohibitive owing to the dimensions of operational numerical models. Thus, domain localization methods \cite{spatial_localization,local_analysis_1} are commonly used in the context of operational data assimilation. One of the best EnKF implementations based on domain localization is the local ensemble transform Kalman filter (LETKF) \cite{LETKFHunt}. In the LETKF the analysis increments are computed in the space spanned by the ensemble perturbations ${\bf U}^\textnormal{b}$ defined in \eqref{eq:matrix-of-member-deviations}. An approximation of the analysis covariance matrix in this space reads:
\begin{subequations}
\label{eq:LETKF-method}
\begin{eqnarray}
\label{eq:LETKF-analysis-covariance-ensemble-space}
\displaystyle
\widehat{\P}^\textnormal{a} = \lb \lp\Nens-1\rp \cdot \I + \Q^T \cdot \R^{-1} \cdot \Q\rb^{-1} \in \Re^{\Nens \times \Nens},\,
\end{eqnarray}
where $\Q = \H \cdot {\bf U}^\textnormal{b} \in \Re^{\Nobs \times \Nens}$ and $\I$ is the identity matrix consistent with the dimension. The analysis increments in the subspace are:
\begin{eqnarray}
\W^\textnormal{a} = \widehat{\P}^\textnormal{a} \cdot \Q^T \cdot \R^{-1} \cdot \lb \y - \Ho (\xm^\textnormal{b}) \rb \in \Re^{\Nens \times 1} ,\,
\end{eqnarray}
from which an estimation of the analysis mean in the model space can be obtained:
\begin{eqnarray}
\label{eq:LETKF-analysis-mean}
\displaystyle
\xm^\textnormal{a} = \xm^\textnormal{b} + {\bf U}^\textnormal{b} \cdot \W^\textnormal{a} \in \Re^{\Nstate \times 1} .\,
\end{eqnarray}
Finally, the analysis ensemble reads:
\begin{eqnarray}
\label{eq:LETKF-analysis-ensemble}
\X^\textnormal{a} = \xm^\textnormal{a} \cdot {\bf 1}_{\Nens}^T + {\bf U}^\textnormal{b} \cdot \lb \lp \Nens-1\rp \cdot \widehat{\P}^\textnormal{a} \rb^{1/2} \in \Re^{\Nstate \times \Nens} .\,
\end{eqnarray}
\end{subequations}
The domain localization in the LETKF is performed as follows: each model component is surrounded by a local box of radius ${\zeta}$. Within each local domain the analysis equations \eqref{eq:LETKF-method} are applied, and therefore a local analysis component is obtained. All local analysis components are mapped back onto the model space to obtain the global analysis state. Local boxes for different radii are shown in Figure \ref{fig:LETKF-radius}. The local sample covariance matrix \eqref{eq:covariance-ensemble} is utilized as the covariance estimator of the local $\B$. This can perform well when small radii ${\zeta}$ are considered during the assimilation step. However, for large values of ${\zeta}$, the analysis corrections can be impacted by spurious correlations since the local sample covariance matrix can be rank deficient. Consequently, the local analysis increments can perform poorly.
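For reference, the analysis \eqref{eq:LETKF-method} within a single local box can be sketched as follows; this simplified illustration assumes a linear local observation operator, and \texttt{sqrtm} denotes the symmetric matrix square root.
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

def letkf_local(Xb, H, Rinv, y):
    n, N = Xb.shape
    xb = Xb.mean(axis=1, keepdims=True)
    Ub = Xb - xb                                   # local member deviations
    Q = H @ Ub
    Pa = np.linalg.inv((N - 1) * np.eye(N) + Q.T @ Rinv @ Q)
    wa = Pa @ Q.T @ Rinv @ (y - (H @ xb).ravel())  # increments in ensemble space
    Wa = np.real(sqrtm((N - 1) * Pa))
    xa = xb.ravel() + Ub @ wa                      # local analysis mean
    return xa[:, None] + Ub @ Wa                   # local analysis ensemble
\end{verbatim}
The global analysis is assembled by running this routine over all local boxes and keeping, for each model component, the corresponding entries of its own local analysis.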
\begin{figure}[H]
\centering
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.9\textwidth]{figures/grid_1-eps-converted-to.pdf}
\caption{${\zeta}=1$}
\end{subfigure}%
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.9\textwidth]{figures/grid_2-eps-converted-to.pdf}
\caption{${\zeta}=2$}
\label{subfig:grid-2}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.9\textwidth]{figures/grid_3-eps-converted-to.pdf}
\caption{${\zeta}=3$}
\label{subfig:grid-3}
\end{subfigure}
\caption{Local domains for different radii of influence ${\zeta}$. The red dot is the model component to be assimilated, blue components are within the scope of ${\zeta}$, and black model components are unused during the local assimilation process.}
\label{fig:LETKF-radius}
\end{figure}
There is an opportunity to reduce the impact of sampling errors by improving the background error covariance estimation. We achieve this by making use of the modified Cholesky decomposition for inverse covariance matrix estimation \cite{bickel2008}. Consider a sample of $\Nens$ Gaussian random vectors:
\begin{eqnarray*}
\label{eq:intro-samples}
\displaystyle
\S = \lb {\bf s}^{[1]},\, {\bf s}^{[2]},\, \ldots ,\, {\bf s}^{[\Nens]} \rb \in \Re^{\Nstate \times \Nens} ,\,
\end{eqnarray*}
with statistical moments:
\begin{eqnarray*}
{\bf s}^{[j]} \sim \Nor \lp {\bf 0}_{\Nstate} ,\, \Q \rp, \, \text{for $1 \le j \le \Nens$} ,\,
\end{eqnarray*}
where ${\bf s}^{[j]} \in \Re^{\Nstate \times 1}$ denotes the $j$-th sample. Denote by $\x_{[i]} \in \Re^{\Nens \times 1}$ the vector holding the $i$-th component across all the samples (the $i$-th row of $\S$, transposed). The modified Cholesky decomposition arises from regressing each component on its predecessors according to some component ordering:
\begin{eqnarray}
\label{eq:intro-modified-Chokesky-decomposition}
\displaystyle
\x_{[i]} = \sum_{j=1}^{i-1} \x_{[j]} \cdot \beta_{i,j} + \boldsymbol{\varepsilon}_{[i]} \in \Re^{\Nens \times 1} ,\,\quad 2 \le i \le \Nstate,
\end{eqnarray}
where $\x_{[j]}$ is the $j$-th model component which precedes $\x_{[i]}$ for $1 \le j \le i-1$, $\boldsymbol{\varepsilon}_{[1]} = \x_{[1]}$, and $\boldsymbol{\varepsilon}_{[i]} \in \Re^{\Nens \times 1}$ is the error in the $i$-th component regression for $i \ge 2$. Likewise, the coefficients $\beta_{i,j}$ in \eqref{eq:intro-modified-Chokesky-decomposition} can be computed by solving the optimization problem:
\begin{eqnarray}
\label{eq:intro-coefficient-computation}
\displaystyle
{\boldsymbol \beta}_{[i]} = \underset{{\boldsymbol \beta}}{\arg\,\min} \ln \x_{[i]} - \Z_{[i]}^T \cdot {\boldsymbol \beta} \big\|^2_2
\end{eqnarray}
where
\begin{eqnarray*}
\displaystyle
\Z_{[i]} &=& \lb \x_{[1]},\, \x_{[2]},\, \ldots,\, \x_{[i-1]}\rb^T \in \Re^{(i-1) \times \Nens} ,\quad 2 \le i \le \Nstate, \\
\displaystyle
{\boldsymbol \beta}_{[i]} &=& \lb \beta_{i,1},\, \beta_{i,2},\, \ldots ,\, \beta_{i,i-1} \rb^{T} \in \Re^{(i-1) \times 1} .\,
\end{eqnarray*}
The regression coefficients form the lower triangular matrix
\begin{subequations}
\label{eq:intro-empirical-T-D}
\begin{eqnarray}
\label{eq:intro-lower-T}
\big\{ \widehat{\bf T} \big\}_{i,j} = \left\{
\begin{aligned}
-\beta_{i,j} & \text{ for }1 \le j < i, \\
1 & \text{ for } j=i, \\
0 & \text{ for } j>i,
\end{aligned} \right. \quad 1 \le i \le \Nstate,
\end{eqnarray}
where $\big\{ \widehat{\bf T} \big\}_{i,j}$ denotes the $(i,j)$-th component of matrix $\widehat{\bf T} \in \mathbb{R}^{\Nstate \times \Nstate}$. The empirical variances $\widehat{\bf cov}$ of the residuals $\boldsymbol{\varepsilon}_{[i]}$ form the diagonal matrix:
\begin{eqnarray}
\label{eq:intro-diagonal-D}
\widehat{\bf D} = \underset{1 \le i \le \Nstate}{\textnormal{diag}}\left( \widehat{\bf cov} ( \boldsymbol{\varepsilon}_{[i]}) \right) = \underset{1 \le i \le \Nstate}{\textnormal{diag}}\left( \frac{1}{\Nens-1} \sum_{j=1}^{\Nens} \big\{\boldsymbol{\varepsilon}_{[i]} \big\}^2_{j} \right) \in \mathbb{R}^{\Nstate \times \Nstate}\,.
\end{eqnarray}
\end{subequations}
where $\lle \widehat{\bf D} \rle_{1,1} = \widehat{\bf cov}\lp \x_{[1]}\rp$. Then an estimate of $\Q^{-1}$ can be computed as follows:
\begin{subequations}
\label{eq:intro-Q-approximations}
\begin{eqnarray}
\label{eq:intro-Q-inverse}
\displaystyle
{\bf \widehat{\bf Q}}^{-1} = \widehat{\bf T}^T \cdot \widehat{\bf D}^{-1} \cdot \widehat{\bf T} \in \Re^{\Nstate \times \Nstate} ,\,
\end{eqnarray}
or, by basic matrix algebra identities the estimate of $\Q$ reads:
\begin{eqnarray}
\displaystyle
{\bf \widehat{\bf Q}} = \widehat{\bf T}^{-1} \cdot \widehat{\bf D} \cdot \widehat{\bf T}^{-T} \in \Re^{\Nstate \times \Nstate} \,.
\end{eqnarray}
\end{subequations}
Note that the structure of ${\bf \widehat{\bf Q}}^{-1}$ is determined by that of $\widehat{\bf T}$. This can be exploited in order to obtain sparse estimators of $\Q^{-1}$ by imposing that some entries of $\widehat{\bf T}$ are zero. This is important for high dimensional probability distributions, where the explicit computation of ${\bf \widehat{\bf Q}}$ or ${\bf \widehat{\bf Q}}^{-1}$ is prohibitive. The zero components in $\widehat{\bf T}$ can be justified as follows: when two components are {\it conditionally independent}, their corresponding entry in ${\bf \widehat{\bf Q}}^{-1}$ is zero. In the context of data assimilation, conditional independence of background errors between different model components can be imposed by making use of domain localization: we consider zero correlations between background errors of model components located at distances that exceed a radius of influence ${\zeta}$. In the next section we present an ensemble Kalman filter implementation based on the modified Cholesky decomposition for inverse covariance matrix estimation.
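Before sparsity is imposed, the construction \eqref{eq:intro-empirical-T-D}--\eqref{eq:intro-Q-approximations} can be verified numerically: when every component is regressed on all of its predecessors and $\Nens > \Nstate$, the estimator coincides with the inverse of the sample covariance matrix. A small sanity-check sketch (sizes are illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, N = 5, 200                      # Nstate < Nens, so the sample covariance is invertible
S = rng.standard_normal((n, N))    # rows = components, columns = samples
S -= S.mean(axis=1, keepdims=True)

T = np.eye(n)
d2 = np.empty(n)
d2[0] = S[0] @ S[0] / (N - 1)      # var(eps_1), with eps_1 = x_1
for i in range(1, n):
    Z = S[:i]                      # all predecessors of component i
    beta = np.linalg.lstsq(Z.T, S[i], rcond=None)[0]
    T[i, :i] = -beta
    eps = S[i] - beta @ Z          # regression residual
    d2[i] = eps @ eps / (N - 1)

Qinv_hat = T.T @ np.diag(1.0 / d2) @ T
assert np.allclose(Qinv_hat, np.linalg.inv(S @ S.T / (N - 1)))
\end{verbatim}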
\section{Ensemble Kalman Filter Based On Modified Cholesky Decomposition}
\label{sec:EnKF-MC}
In this section we discuss the new ensemble Kalman filter based on modified Cholesky decomposition for inverse covariance matrix estimation (EnKF-MC).
\subsection{Estimation of the inverse background covariance}
The columns of matrix \eqref{eq:matrix-of-member-deviations}
\begin{eqnarray*}
{\bf U}^\textnormal{b} = \lb {\bf u}^{\textnormal{b}[1]},\,{\bf u}^{\textnormal{b}[2]},\, \ldots ,\, {\bf u}^{\textnormal{b}[\Nens]} \rb \in \Re^{\Nstate \times \Nens}
\end{eqnarray*}
can be seen as samples of the (approximately normal) distribution:
\begin{eqnarray*}
\x^{\textnormal{b}[j]} - \xm^\textnormal{b} = {\bf u}^{\textnormal{b}[j]} \sim \Nor \lp {\bf 0},\, \B \rp,\, \text{ for $1 \le j \le \Nens$} \,,
\end{eqnarray*}
and therefore, if we let $\x_{[i]} \in \Re^{\Nens \times 1}$ in \eqref{eq:intro-modified-Chokesky-decomposition} to be the vector formed by the $i$-th row of matrix \eqref{eq:matrix-of-member-deviations}, for $1 \le i \le \Nstate$, according to equations \eqref{eq:intro-Q-approximations}, an estimate of the inverse background error covariance matrix reads:
\begin{subequations}
\label{eq:EnKF-MC-background-approx}
\begin{eqnarray}
\label{eq:EnKF-MC-inverse-background-cov}
\displaystyle \B^{-1} \approx \widehat{\bf B}^{-1} = \widehat{\bf T}^T \cdot \widehat{\bf D}^{-1} \cdot \widehat{\bf T} \in \Re^{\Nstate \times \Nstate} ,\,
\end{eqnarray}
and therefore:
\begin{eqnarray}
\label{eq:EnKF-MC-background-cov}
\displaystyle
\B \approx \widehat{\bf B} = \widehat{\bf T}^{-1} \cdot \widehat{\bf D} \cdot \widehat{\bf T}^{-T} \in \Re^{\Nstate \times \Nstate} \,.
\end{eqnarray}
\end{subequations}
As we mentioned before, the structure of $\widehat{\bf B}^{-1}$ depends on that of $\widehat{\bf T}$. If we assume that the correlations between model components are local, and there are no correlations outside a radius of influence ${\zeta}$, we obtain a sparse lower-triangular estimator $\widehat{\bf T}$. Consequently, the resulting $\widehat{\bf B}^{-1}$ will also be sparse, and $\widehat{\bf B}$ will be localized. Since the regression \eqref{eq:intro-modified-Chokesky-decomposition} is performed only on the predecessors of each model component, an ordering (labeling) must be set on the model components prior to the computation of $\widehat{\bf T}$. Since we work with gridded models we consider column-major and row-major orders. They are illustrated in Figure \ref{fig:ordering} for a two-dimensional domain. Figure \ref{fig:predecessors} shows the local domain and the predecessors of the model component 6 when column-major order is utilized.
\begin{figure}[H]
\centering
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=0.6\textwidth,height=0.6\textwidth]{figures/Ordering-Column.png}
\caption{Column-major order}
\end{subfigure}%
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=0.6\textwidth,height=0.6\textwidth]{figures/Ordering-Row.png}
\caption{Row-major order}
\label{subfig:row-major-order}
\end{subfigure}
\caption{Row-major and column-major ordering for a $4 \times 4$ domain. The total number of model components is 16. }
\label{fig:ordering}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=0.6\textwidth,height=0.6\textwidth]{figures/Ordering_radius.png}
\caption{In blue, local box for the model component 6 when ${\zeta}=2$.}
\end{subfigure} \hspace{1em}
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=0.6\textwidth,height=0.6\textwidth]{figures/Ordering_regression.png}
\caption{In blue, predecessors of the model component 6 for ${\zeta}=2$.}
\end{subfigure}
\caption{Local model components (local box) and local predecessors for the model component 6 when ${\zeta}=2$. Column-major ordering is utilized to label the model components.}
\label{fig:predecessors}
\end{figure}
The estimation of $\widehat{\bf B}^{-1}$ proceeds as follows:
\begin{enumerate}
\item Form the matrix $\Z_{[i]} \in \Re^{p_i \times \Nens}$ with the predecessors of the $i$-th model component:
\begin{eqnarray}
\label{eq:EnKF-MC-predecessors}
\Z_{[i]} = \lb \x_{[q(i,1)]},\,\x_{[q(i,2)]},\, \ldots ,\, \x_{[q(i,p_i)]} \rb^T \in \Re^{p_i \times \Nens} \,,
\end{eqnarray}
where $\x_{[e]}$ is the $e$-th row of matrix \eqref{eq:matrix-of-member-deviations} (as a column vector), $p_i$ is the number of predecessors of component $i$, and $1 \le q(i,j) \le \Nstate$ is the index (row of matrix \eqref{eq:matrix-of-member-deviations}) of the $j$-th predecessor of the $i$-th model component.
\item For the $i$-th model components the regression coefficients are obtained as follows:
\begin{eqnarray*}
\x_{[i]} = \sum_{j=1}^{p_i} \beta_{i,j} \cdot \x_{[q(i,j)]} + \boldsymbol{\varepsilon}_{[i]} \in \Re^{\Nens \times 1} \,.
\end{eqnarray*}
For $2 \le i \le \Nstate$, compute ${\boldsymbol \beta}_{[i]} = [\beta_{i,1},\,\beta_{i,2},\,\ldots,\,\beta_{i,p_i}] \in \Re^{p_i \times 1}$ by solving the optimization problem \eqref{eq:intro-coefficient-computation} with $\Z_{[i]}$ given by \eqref{eq:EnKF-MC-predecessors}.
\item Build the matrices
\begin{eqnarray*}
\big\{ \widehat{\bf T} \big\}_{i,q(i,j)}= -\beta_{i,j} ~~ \text{ for}~~1 \le j \le p_i, ~~1 < i \le \Nstate \,; \quad \big\{ \widehat{\bf T} \big\}_{i,i}=1,
\end{eqnarray*}
and $\widehat{\bf D}$ according to equation \eqref{eq:intro-diagonal-D}. Note that the number of non-zero elements in the $i$-th row of $\widehat{\bf T}$ equals the number of predecessors $p_i$.
\end{enumerate}
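A minimal sketch of these three steps on a one-dimensional grid, where the predecessors of component $i$ are simply the (up to) ${\zeta}$ components that precede it; for two- or three-dimensional grids only the index sets $q(i,\cdot)$ change, and the function name and interface are illustrative.
\begin{verbatim}
import numpy as np

def enkf_mc_factors(Ub, zeta):
    # Sparse factors T, d2 with B^{-1} ~= T.T @ diag(1/d2) @ T (1-D grid)
    n, N = Ub.shape
    T = np.eye(n)
    d2 = np.empty(n)
    d2[0] = Ub[0] @ Ub[0] / (N - 1)
    for i in range(1, n):
        pred = list(range(max(0, i - zeta), i))            # predecessors q(i, .)
        Z = Ub[pred]                                       # step 1
        beta = np.linalg.lstsq(Z.T, Ub[i], rcond=None)[0]  # step 2
        T[i, pred] = -beta                                 # step 3
        eps = Ub[i] - beta @ Z
        d2[i] = eps @ eps / (N - 1)
    return T, d2
\end{verbatim}
Here \texttt{lstsq} returns the minimum-norm least-squares solution, which already regularizes rank-deficient regressions in the sense of the truncated SVD discussed below.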
Note that the solution of the optimization problem \eqref{eq:intro-coefficient-computation} can be obtained as follows:
\begin{eqnarray}
\label{eq:EnKF-MC-solution-of-optimization-problem}
{\boldsymbol \beta}_{[i]} = \lb \Z_{[i]} \cdot { \Z_{[i]} }^T \rb^{-1} \cdot \Z_{[i]} \cdot \x_{[i]}
\end{eqnarray}
and since the ensemble size can be smaller than the number of predecessors $p_i$, the matrix $\Z_{[i]} \cdot { \Z_{[i]} }^T \in \Re^{p_i \times p_i}$ can be rank deficient. To overcome this situation, the (nearly) zero singular values of $\Z_{[i]} \cdot {\Z_{[i]}}^T$ must be regularized. One possibility is Tikhonov regularization \cite{Tik1,Tik2,Tik3}:
\begin{eqnarray}
\label{eq:EnKF-MC-optimization-tik}
{\boldsymbol \beta}_{[i]} = \underset{{\boldsymbol \beta}}{\arg\,\min} \lle \ln \x_{[i]} - \Z_{[i]}^T \cdot {\boldsymbol \beta} \big\|^2_2 + \lambda^2 \cdot \ln {\boldsymbol \beta} \big\|_2^2 \rle
\end{eqnarray}
where $\lambda \in \Re$. In our context the best choice for $\lambda$ relies on prior knowledge of the background and the observational errors \cite{Tik4}. Another approach to regularization is to use a truncated singular value decomposition (SVD) of $\Z_{[i]}$:
\begin{eqnarray*}
\Z_{[i]} = {\bf U}^{\Z_{[i]}} \cdot \boldsymbol{\Sigma}^{\Z_{[i]}} \cdot {{\bf V}^{\Z_{[i]}}}^T \in \Re^{p_i \times \Nens},
\end{eqnarray*}
where ${\bf U}^{\Z_{[i]}} \in \Re^{p_i \times p_i}$ and ${\bf V}^{\Z_{[i]}} \in \Re^{\Nens \times \Nens}$ hold the left and the right singular vectors of $\Z_{[i]}$, respectively. Likewise, $\boldsymbol{\Sigma}^{\Z_{[i]}} \in \Re^{p_i \times \Nens}$ is a diagonal matrix whose diagonal entries are the singular values of $\Z_{[i]}$ in descending order. The solution of \eqref{eq:intro-coefficient-computation} can be computed as follows \cite{Jiang2000137,VANHUFFEL1991675,Per1}:
\begin{eqnarray}
\label{eq:EnKF-MC-truncated-SVD}
{\boldsymbol \beta}_{[i]} =\sum_{j=1}^{k_i} \frac{1}{\tau_j} \cdot {\bf u}^{\Z_{[i]}}_j \cdot {{\bf v}_j^{\Z_{[i]}}}^T \cdot \x_{[i]} \quad \text{with}\,\frac{\tau_{j}}{\tau_{\rm max}} \ge \sigma_r,\,
\end{eqnarray}
where $\tau_j$ is the $j$-th singular value with corresponding left and right singular vectors ${\bf u}^{\Z_{[i]}}_j \in \Re^{p_i \times 1}$ and ${\bf v}^{\Z_{[i]}}_j \in \Re^{\Nens \times 1}$, respectively, $\sigma_r \in (0,1)$ is a predefined threshold, and $\tau_{\rm max} = \max\lle \tau_1,\, \tau_2,\, \ldots,\, \tau_{\Nens-1}\rle$. Since small singular values are the most sensitive to the noise in $\x_{[i]}$, the truncation criterion $\tau_j \ge \tau_{\rm max} \cdot \sigma_r$ discards their contributions.
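A sketch of the truncated SVD solution \eqref{eq:EnKF-MC-truncated-SVD}; up to the choice of threshold, this is what \texttt{numpy.linalg.lstsq} does internally through its \texttt{rcond} argument.
\begin{verbatim}
import numpy as np

def beta_tsvd(Z, x, sigma_r=1e-2):
    # Z = U Sigma V^T; solve min || x - Z^T beta ||_2 with truncation
    U, tau, Vt = np.linalg.svd(Z, full_matrices=False)
    keep = tau >= sigma_r * tau.max()    # drop noise-dominated directions
    return (U[:, keep] / tau[keep]) @ (Vt[keep] @ x)
\end{verbatim}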
\subsection{Formulation of EnKF-MC}
Once $\widehat{\bf B}^{-1}$ is estimated, the EnKF based on modified Cholesky decomposition (EnKF-MC) computes the analysis using Kalman's formula:
\begin{subequations}
\label{eq:EnKF-MC-formulations}
\begin{eqnarray}
\label{eq:EnKF-MC-primal-incremental}
\X^\textnormal{a} &=& \X^\textnormal{b} + \AE \cdot \H^T \cdot \R^{-1} \cdot \boldsymbol{\Delta} \in \Re^{\Nstate \times \Nens},
\end{eqnarray}
where $\AE \in \Re^{\Nstate \times \Nstate}$ is the estimated analysis covariance matrix
\begin{eqnarray*}
\displaystyle
\AE = \lb \widehat{\bf B}^{-1} +\H^T \cdot \R^{-1} \cdot \H \rb^{-1}\,,
\end{eqnarray*}
and $\boldsymbol{\Delta} \in \Re^{\Nobs \times \Nens}$ is the innovation matrix on the perturbed observations given in \eqref{eq:EnKF-innvo-observations}.
Computationally-friendlier alternatives to \eqref{eq:EnKF-MC-primal-incremental} can be obtained by making use of elementary matrix identities:
\begin{eqnarray}
\label{eq:EnKF-MC-primal}
\displaystyle
\X^\textnormal{a} &=& \AE \cdot \lb \widehat{\bf B}^{-1} \cdot \X^\textnormal{b} + \H^T \cdot \R^{-1} \cdot \Y^\textnormal{s} \rb \in \Re^{\Nstate \times \Nens},\, \\
\label{eq:EnKF-MC-dual}
\displaystyle
\X^\textnormal{a} &=& \X^\textnormal{b} + \widehat{\bf T}^{-1} \cdot \widehat{\bf D}^{1/2} \cdot {\bf V}_{\widehat{\bf B}}^T \cdot \lb \R + {\bf V}_{\widehat{\bf B}} \cdot {\bf V}_{\widehat{\bf B}}^T \rb^{-1} \cdot \boldsymbol{\Delta},\, \\
\notag
{\bf V}_{\widehat{\bf B}} &=& \H \cdot \widehat{\bf T}^{-1} \cdot \widehat{\bf D}^{1/2} \in \Re^{\Nobs \times \Nstate},
\end{eqnarray}
\end{subequations}
where $\Y^\textnormal{s}$ are the perturbed observations. The formulation \eqref{eq:EnKF-MC-dual} is known as the EnKF dual formulation, \eqref{eq:EnKF-MC-primal} as the EnKF primal formulation, and equation \eqref{eq:EnKF-MC-primal-incremental} as the incremental form of the primal formulation. In the next subsection, we discuss the computational effort of the EnKF-MC implementations \eqref{eq:EnKF-MC-formulations}.
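As an illustration, a dense-matrix sketch of the dual update \eqref{eq:EnKF-MC-dual}; an operational code would keep $\widehat{\bf T}$ sparse and use triangular solves, as analyzed below.
\begin{verbatim}
import numpy as np

def enkf_mc_dual(Xb, T, d2, H, R, Delta):
    # T is unit lower triangular, so this solve amounts to forward substitution
    TD = np.linalg.solve(T, np.diag(np.sqrt(d2)))  # T^{-1} D^{1/2}
    V = H @ TD                                     # V_Bhat, Nobs x Nstate
    W = np.linalg.solve(R + V @ V.T, Delta)
    return Xb + TD @ (V.T @ W)                     # analysis ensemble
\end{verbatim}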
\subsection{Computational effort of EnKF-MC implementations}
The computational cost of the different EnKF-MC implementations depends, in general, on the model state dimension $\Nstate$, the number of observed components $\Nobs$, the radius of influence ${\zeta}$, and the ensemble size $\Nens$. Typically \cite{Tippett2003} the data error covariance matrix $\R$ has a simple structure (e.g., block diagonal), the ensemble size is much smaller than the model dimension ($\Nstate \gg \Nens$), and the observation operator $\H$ is sparse or can be applied efficiently. We analyze the computational effort of the formulation \eqref{eq:EnKF-MC-primal-incremental}; similar analyses can be carried out for the other formulations. The incremental formulation can be written as follows:
\begin{eqnarray*}
\X^\textnormal{a} = \X^\textnormal{b} + {{\boldsymbol \delta} {\bf X}}^\textnormal{a} \,,
\end{eqnarray*}
where the analysis increments ${{\boldsymbol \delta} {\bf X}}^\textnormal{a} \in \Re^{\Nstate \times \Nens}$ are given by the solution of the linear system:
\begin{eqnarray*}
\lb \widehat{\bf B}^{-1} + \R_{\H} \cdot \R_{\H}^T\rb \cdot {{\boldsymbol \delta} {\bf X}}^\textnormal{a} = \boldsymbol{\Delta}_{\H} \,.
\end{eqnarray*}
with $\R_{\H} = \H^T \cdot \R^{-1/2} \in \Re^{\Nstate \times \Nobs}$, $\boldsymbol{\Delta}_{\H} = \H^T \cdot \R^{-1} \cdot \boldsymbol{\Delta} \in \Re^{\Nstate \times \Nens}$, and $\boldsymbol{\Delta}$ given in \eqref{eq:EnKF-innvo-observations}. This linear system can be solved by making use of the iterative Sherman--Morrison formula \cite{Nino2014} as follows:
\begin{enumerate}
\item Compute:
\begin{subequations}
\label{eq:EnKF-MC-linear-system-primal}
\begin{eqnarray}
{\bf W}^{(0)[i]}_{\Z} &=& \lb \widehat{\bf T}^T \cdot \widehat{\bf D}^{-1} \cdot \widehat{\bf T} \rb^{-1} \cdot \boldsymbol{\Delta}_{\H}^{[i]} ,\, \text{ for $1 \le i \le \Nens$},\, \\
{\bf W}^{(0)[j]}_{{\bf U}} &=& \lb \widehat{\bf T}^T \cdot \widehat{\bf D}^{-1} \cdot \widehat{\bf T} \rb^{-1} \cdot \R_{\H}^{[j]} ,\, \text{ for $1 \le j \le \Nobs$} \,.
\end{eqnarray}
\end{subequations}
where $\boldsymbol{\Delta}_{\H}^{[i]} \in \Re^{\Nstate \times 1}$ and $\R_{\H}^{[j]} \in \Re^{\Nstate \times 1}$ denote the $i$-th and $j$-th columns of matrices $\boldsymbol{\Delta}_{\H}$ and $\R_{\H}$, respectively. Since $\widehat{\bf T}$ is a sparse unit lower triangular matrix, the direct solution of the linear system \eqref{eq:EnKF-MC-linear-system-primal} can be obtained by making use of forward and backward substitutions. Hence, this step can be performed with:
\begin{eqnarray}
\label{eq:EnKF-MC-computational-effort-step-1}
\displaystyle
\BO{\Nstate_{nz} \cdot \Nstate \cdot \Nens + \Nstate_{nz} \cdot \Nstate \cdot \Nobs }
\end{eqnarray}
long computations, where $\Nstate_{nz}$ denotes the maximum number of non-zero elements across all rows of $\widehat{\bf T}$, that is,
\begin{eqnarray*}
\Nstate_{nz} = \max \lle p_1,\,p_2,\,\ldots,\,p_{\Nstate} \rle
\end{eqnarray*}
where $p_i$ is the number of predecessors of model component $i$, for $1 \le i \le \Nstate$.
\item For $1 \le i \le \Nobs$ compute:
\begin{eqnarray*}
{\bf h}^{(i)} &=& \frac{1}{\gamma^{(i)}} \cdot {\bf W}_{{\bf U}}^{(i-1)[i]},\, \text{with } \gamma^{(i)} = 1+{\R_{\H}^{[i]}}^T \cdot {\bf W}_{{\bf U}}^{(i-1)[i]} \,, \\
{\bf W}_{\Z}^{(i)[j]} &=& {\bf W}_{\Z}^{(i-1)[j]} - {\bf h}^{(i)} \cdot \lb {\R_{\H}^{[i]}}^T \cdot {\bf W}_{\Z}^{(i-1)[j]} \rb ,\, \text{ for $1 \le j \le \Nens$}\,, \\
{\bf W}_{{\bf U}}^{(i)[k]} &=& {\bf W}_{{\bf U}}^{(i-1)[k]} - {\bf h}^{(i)} \cdot \lb {\R_{\H}^{[i]}}^T \cdot {\bf W}_{{\bf U}}^{(i-1)[k]} \rb ,\, \text{ for $i+1 \le k \le \Nobs$}\,. \\
\end{eqnarray*}
Note that, at each step, ${\bf h}^{(i)}$ can be computed with $\Nstate$ long computations, while ${\bf W}_{\Z}$ and ${\bf W}_{{\bf U}}$ can be obtained with $\Nstate \cdot \Nens$ and $\Nstate \cdot \Nobs$ long computations, respectively. This leads to the next bound for the number of long computations:
\begin{eqnarray*}
\BO{\Nobs \cdot \Nstate+\Nobs \cdot \Nstate \cdot \Nens + \Nobs^2 \cdot \Nstate} \,.
\end{eqnarray*}
\end{enumerate}
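A compact sketch of this two-step solve; the argument \texttt{Binv\_solve} is assumed to apply $(\widehat{\bf T}^T \cdot \widehat{\bf D}^{-1} \cdot \widehat{\bf T})^{-1}$ to the columns of its input, e.g., via sparse forward and backward substitutions.
\begin{verbatim}
import numpy as np

def sherman_morrison_solve(Binv_solve, RH, DH):
    WZ = Binv_solve(DH)                   # step 1
    WU = Binv_solve(RH)
    for i in range(RH.shape[1]):          # step 2: one rank-one update per column
        h = WU[:, i] / (1.0 + RH[:, i] @ WU[:, i])
        WZ -= np.outer(h, RH[:, i] @ WZ)
        WU[:, i+1:] -= np.outer(h, RH[:, i] @ WU[:, i+1:])
    return WZ                             # analysis increments
\end{verbatim}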
Hence, the computational effort involved during the assimilation step of formulation \eqref{eq:EnKF-MC-primal-incremental} can be bounded by:
\begin{eqnarray*}
\BO{\Nobs \cdot \Nstate+\Nobs \cdot \Nstate \cdot \Nens + \Nobs^2 \cdot \Nstate+\Nstate_{nz} \cdot \Nstate \cdot \Nens + \Nstate_{nz} \cdot \Nstate \cdot \Nobs},\,
\end{eqnarray*}
which is linear with respect to the number of model components. For dense observational networks, when local observational operators can be approximated, domain decomposition can be exploited in order to reduce the computational effort during the assimilation cycle. This can be done as follows:
\begin{enumerate}
\item The domain is split in certain number of sub-domains (typically matching a given number of processors).
\item Background error correlations are estimated locally.
\item The assimilation is performed on each local domain.
\item The analysis sub-domains are mapped back onto the model domain from which the global analysis state is obtained.
\end{enumerate}
Figure \ref{fig:sub-domain-splitting} shows the global domain splitting for different sub-domain sizes. In Figure \ref{subfig:boundary-information} the boundary information needed during the assimilation step for two particular sub-domains is shown in dashed blue lines. Note that each sub-domain can be assimilated independently. We emphasize that domain decomposition is used only to reduce the computational effort of the proposed implementation (and its variants), and not to reduce the impact of spurious correlations.
\begin{figure}[H]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.625\textwidth]{figures/grid12.png}
\caption{Number of sub-domains 12}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.625\textwidth]{figures/grid80.png}
\caption{Number of sub-domains 80}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.9\textwidth]{figures/grid16.png}
\caption{Number of sub-domains 16.}
\label{subfig:boundary-information}
\end{subfigure}%
\caption{Global domain splitting into different sub-domain sizes. Blue local boxes indicate the boundary information utilized in order to perform local data assimilation.}
\label{fig:sub-domain-splitting}
\end{figure}
\subsection{Convergence of the covariance inverse estimator}
\label{subsec:proof-of-convergence}
In this section we prove the convergence of the $\widehat{\bf B}^{-1}$ estimator in the context of data assimilation.
\begin{comment}[Sparse Cholesky factors and localization]
The modified Cholesky decomposition for inverse covariance matrix estimation can be seen as a form of covariance matrix localization, in which the resulting matrix approximates the inverse of a localized ensemble covariance matrix. This localization is implicit in the resulting estimator when only a local neighborhood of each model component is utilized in order to perform the local regression and to estimate $\widehat{\bf T}$ and $\widehat{\bf D}$. Figure \ref{fig:theo-Cholesky-factors} shows an example for the Lorenz 96 model \cite{LorenzModel}:
\begin{eqnarray}
\label{eqch4:Lorenz-model}
\displaystyle
\frac{dx_j}{dt} = \begin{cases}
\lp x_2-x_{\Nstate-1}\rp \cdot x_{\Nstate} -x_1 +F & \text{ for $j=1$} \\
\lp x_{j+1}-x_{j-2}\rp \cdot x_{j-1} -x_j +F & \text{ for $2 \le j \le \Nstate-1$} \\
\lp x_1-x_{\Nstate-2}\rp \cdot x_{\Nstate-1} -x_{\Nstate} +F & \text{ for $j=\Nstate$}
\end{cases}
\end{eqnarray}
where $F$ is usually set to $8$ to exhibit chaotic behavior and the number of model components is $\Nstate=40$. We take $\B$ to be a sample covariance matrix based on $10^5$ samples, while the localized ensemble covariance matrix $\P^\textnormal{b}$ and the estimator $\widehat{\bf B}^{-1}$ are based on 80 samples. The radius of influence is ${\zeta} = 7$. The similarities among the different Cholesky factors are evident. Moreover, along the main diagonal, the correlations decay with the distance between model components. This is reflected in the resulting estimator of $\B^{-1}$ for each case. Definition \ref{theo-def} of covariance matrices relies on this assumption for the Cholesky factors ${\bf T}$ and $\widehat{\bf T}$.
\end{comment}
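For completeness, a minimal sketch of the Lorenz 96 right-hand side \eqref{eqch4:Lorenz-model}, with the cyclic boundary handled by index rolling:
\begin{verbatim}
import numpy as np

def lorenz96(x, F=8.0):
    # dx_j/dt = (x_{j+1} - x_{j-2}) x_{j-1} - x_j + F, indices taken cyclically
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
\end{verbatim}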
\begin{figure}[H]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth,height=0.8\textwidth]{figures/theo_BE.png}
\caption{Exact $\B^{-1} \approx {\P^b}^{-1}$ for $\Nens={10^5}$}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth,height=0.8\textwidth]{figures/theo_BE_T.png}
\caption{${\bf T}$, $\B^{-1} = {\bf T}^T \cdot {\bf D}^{-1} \cdot {\bf T}$}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth,height=0.8\textwidth]{figures/theo_PBLI.png}
\caption{Localized ensemble estimate $\widehat{\P^\textnormal{b}}^{-1}$}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth,height=0.8\textwidth]{figures/theo_PBLI_T.png}
\caption{${\bf T}_{\bf L}$, $\widehat{\P^\textnormal{b}}^{-1} = {\bf T}_{\bf L}^T \cdot {\bf D}_{\bf L}^{-1} \cdot {\bf T}_{\bf L}$}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth,height=0.8\textwidth]{figures/theo_PBLI.png}
\caption{Cholesky estimate $\widehat{\bf B}^{-1}$}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth,height=0.8\textwidth]{figures/theo_PBLI_T.png}
\caption{$\widehat{\bf T}$, $\widehat{\bf B}^{-1} = \widehat{\bf T}^T \cdot \widehat{\bf D}^{-1} \cdot \widehat{\bf T}$}
\end{subfigure}
\caption{Decay of correlations in the Cholesky factors for different approximations of $\B^{-1}$. }
\label{fig:theo-Cholesky-factors}
\end{figure}
We consider a two-dimensional square domain with $s \times s$ grid points. Our proof below can be extended immediately to non-square domains, as well as to three-dimensional domains. In our domain each space point is described by two indices $(i,\,j)$, a zonal component $i$ and a meridional component $j$, for $1 \le i,j \le s$. A particular case for $s = 4$ is shown in Figure \ref{subfig:grid-distribution}. We make use of row-major order in order to map model grid components to the one-dimensional ``index space'':
\begin{eqnarray*}
k = f(i,j) = (j-1) \cdot s+i,\,\quad \text{for $1 \le k \le \Nstate$}.\,
\end{eqnarray*}
where $\Nstate = s^2$. For a particular grid component $(i,\,j)$, the resulting $k = f(i,j)$ denotes the row index in $\widehat{\bf B}^{-1}$. The result of labeling each model component in this manner can be seen in Figure \ref{subfig:row-major-order}.
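The mapping is trivial to code and check (a two-line illustration):
\begin{verbatim}
def f(i, j, s):
    return (j - 1) * s + i     # 1-based grid component (i, j) -> index k

assert f(1, 1, 4) == 1 and f(4, 4, 4) == 16   # matches the 4 x 4 example
\end{verbatim}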
\begin{figure}[H]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\fbox{\includegraphics[width=0.8\textwidth]{figures/HIGH_RES_GRID_2.png}}
\caption{Grid components $(i,j)$}
\label{subfig:grid-distribution}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\fbox{\includegraphics[width=0.45\textwidth]{figures/HIGH_RES_ORDER.png}}
\caption{Index space $f(i,j)$}
\label{fig:corresponding-index-B}
\end{subfigure}
\caption{Grid distribution of model components and corresponding index terms in $\widehat{\bf B}^{-1}$. }
\label{fig:ordering-example}
\end{figure}
To start our proof, the inverse of the (exact) background error covariance matrix, $\B^{-1}$, and its estimator $\widehat{\bf B}^{-1}$ can be written as
\begin{subequations}
\begin{eqnarray}
\label{eq:proof-estimator-written}
\displaystyle
\widehat{\bf B}^{-1} = \lb \I- \widehat{\bf C} \rb^T \cdot \widehat{\bf D}^{-1} \cdot \lb \I - \widehat{\bf C}\rb \in \Re^{\Nstate \times \Nstate}
\end{eqnarray}
and
\begin{eqnarray}
\label{eq:proof-exact-written}
\displaystyle
\B^{-1} = \lb \I - {\bf C}\rb^T \cdot {\bf D}^{-1} \cdot \lb \I - {\bf C}\rb \in \Re^{\Nstate \times \Nstate} ,\,
\end{eqnarray}
\end{subequations}
respectively, where $\widehat{\bf C} = \I - \widehat{\bf T} \in \Re^{\Nstate \times \Nstate}$ and ${\bf C} = \I - {\bf T} \in \Re^{\Nstate \times \Nstate}$. Moreover, ${\bf D}$ and $\widehat{\bf D}$ are diagonal matrices:
\begin{eqnarray*}
{\bf D} &=& {\bf diag} \lle {d}_{1}^2,\, {d}_{2}^2,\, \ldots,\, {d}_{\Nstate}^2\rle \\
\widehat{\bf D} &=& {\bf diag} \lle \widehat{d}_{1}^2,\, \widehat{d}_{2}^2,\, \ldots,\, \widehat{d}_{\Nstate}^2\rle
\end{eqnarray*}
where $\lle {\bf D} \rle_{i,i} = d_i^2$ and $\lle \widehat{\bf D} \rle_{i,i} = \widehat{d}_i^2$, for $1 \le i \le \Nstate$. In what follows we denote by $\widehat{\bf c}^{\{j\}} \in \Re^{\Nstate \times 1}$ and ${\bf c}^{\{j\}} \in \Re^{\Nstate \times 1}$ the $j$-th columns of matrices $\widehat{\bf C}$ and ${\bf C}$, respectively, for $1 \le j \le \Nstate$.
\begin{subequations}
\label{eq:Proof-theorem-1}
\begin{definition}[Class of matrices under consideration.]
\label{theo-def}
We consider the class of covariance matrices with quickly decreasing correlations:
\begin{eqnarray}
\label{eq:Proof-class-of-matrices}
\displaystyle
\mathcal{U}^{-1} \lp \varepsilon_0, C, \alpha \rp &=& \Bigg \{ \B: 0<\varepsilon_0 \le \lambda_{min} \lp \B \rp \le \lambda_{max} \lp \B \rp \le \varepsilon_0^{-1},\, \\ \nonumber
& & \underset{k}{\max} \sum_{\ell=1}^{\Nstate} \left | \gamma_{k,\ell} \cdot \lle {\bf T} \rle_{k,\ell} \right | \le C \cdot {{\zeta}}^{-\alpha},\, \text{ for ${\zeta} \le s-1$} \Bigg \}
\end{eqnarray}
where $\B^{-1} = {\bf T}^T \, {\bf D}^{-1} \, {\bf T}$, $\alpha$ is the decay rate (related to the dynamics of the numerical model),
\begin{eqnarray*}
\gamma_{k_{(i,j)},\ell_{(p,q)}} &=& \begin{cases}
0 & j-{\zeta} \le q \le j-1 \text{ and } i-{\zeta} \le p \le i+{\zeta} \\
0 & q = j \text{ and } i-{\zeta} \le p \le i \\
1 & \text{otherwise}
\end{cases} \,,
\end{eqnarray*}
and the grid components $(i,j)$ and $(p,q)$, for $1 \le i,j,p,q \le s$, are related to the $(k_{(i,j)},\ell_{(p,q)})$ matrix entry by $k_{(i,j)} = f(i,j)$ and $\ell_{(p,q)} = f(p,q)$.
\end{definition}
\begin{comment}
The factors $\gamma_{k,\ell}$ for the grid component $(i,j)$ in Definition \ref{theo-def} are zero inside the scope of ${\zeta}$.
\end{comment}
\begin{theorem}[Error in the covariance inverse estimation]
\label{theo-main}
Uniformly for $\B \in \mathcal{U}^{-1} \lp \varepsilon_0, C, \alpha \rp$, if ${\zeta} \approx \lb \Nens^{-1} \cdot \log \Nstate \rb^{-1/(2(\alpha+1))}$ and $\Nens^{-1} \cdot \log \Nstate = o(1)$,
\begin{eqnarray}
\label{eq:Proof-theorem}
\displaystyle
\ln \widehat{\bf B}^{-1} - \B^{-1}\big\|_{\infty} = \mathcal{O} \lp \lb \frac{\log \Nstate}{\Nens}\rb^{\alpha/(2(\alpha+1))}\rp
\end{eqnarray}
where $\ln \cdot \big\|_{\infty}$ denotes the infinity norm (matrix or vector).
\end{theorem}
\end{subequations}
\begin{comment}
The factors $\gamma_{k,\ell}$ in Theorem \ref{theo-main} are zero for the predecessors of the grid component $(i,j)$ inside the scope of ${\zeta}$.
\end{comment}
In order to prove Theorem \ref{theo-main}, we need the following result.
\begin{lemma}
\label{lemma:differences-emp-true}
Under the conditions of Theorem \ref{theo-main}, uniformly on $\mathcal{U}^{-1}$
\begin{subequations}
\label{eq:proof-differences-empirical}
\begin{eqnarray}
\label{eq:proof-max-T}
\displaystyle
&&\max \lle \ln \widehat{\bf c}^{\{j\}}-{\bf c}^{\{j\}} \big\|_{\infty} : 1\le j \le \Nstate \rle = \BO{\Nens^{-1/2} \log^{1/2} \Nstate},\, \\
\label{eq:proof-max-D}
&&\max \lle \left| \widehat{d}_{j}^2 - d_{j}^2\right |: 1\le j \le \Nstate \rle = \BO{ \lb \Nens^{-1} \log \Nstate \rb^{\alpha/(2(\alpha+1))}} ,\,
\end{eqnarray}
and
\begin{eqnarray}
\label{eq:proof-norm1}
\displaystyle
\ln {\bf C} \big\|_{\infty} = \BO{1},\quad \ln {\bf D}^{-1} \big\|_{\infty} = \BO{1}.
\end{eqnarray}
\end{subequations}
\end{lemma}
The proof of Lemma \ref{lemma:differences-emp-true} is based on the following results of Bickel and Levina in \cite{bickel2008}.
\begin{lemma}[{\cite[Lemma A.2]{bickel2008}}]
\label{lemma:lemma-Bickel}
Let $\errbac^{[k]} \sim \Nor \lp {\bf 0},\, \B\rp$ and $\lambda_{\max} \lp \B \rp \le \varepsilon_0^{-1} < \infty$, for $1 \le k \le \Nens$. Then, if $\lle \B \rle_{i,j}$ denotes the $(i,\,j)$-th component of $\B$, for $1 \le i \le j \le \Nstate$,
\begin{eqnarray}
\label{eq:lemma-Bickel}
\displaystyle
&& \textnormal{Prob} \lb \sum_{k=1}^{\Nens} \lb \lle \errbac^{[k]} \rle_i \cdot \lle \errbac^{[k]} \rle_j - \lle \B\rle_{i,j} \rb \ge \Nens \cdot \nu \rb \\
\nonumber
&& \qquad \le C_1 \cdot \exp \lp -C_2 \cdot \Nens \cdot \nu^2 \rp,\,
\end{eqnarray}
for $|\nu| \le \delta$, where $\lle \errbac^{[k]} \rle_i$ is the $i$-th component of the sample $\errbac^{[k]}$, for $1 \le k \le \Nens$, and $1 \le i \le \Nstate$. The constants $C_1$, $C_2$, and $\delta$ depend on $\varepsilon_0$ only.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:differences-emp-true}]
In what follows ${\bf cov}$ and $\widehat{\bf cov}$ denote the true and the empirical covariances, respectively. In the context of EnKF we have that ${\bf cov} \lp {\bf U}^\textnormal{b} \rp = \B$.
Recall that
\begin{eqnarray*}
\widehat{\bf cov} \lp {\bf U}^\textnormal{b} \rp = \P^\textnormal{b} = \frac{1}{\Nens-1} \cdot {\bf U}^\textnormal{b} \cdot {{\bf U}^\textnormal{b}}^T = \frac{1}{\Nens-1} \cdot \sum_{k=1}^{\Nens} {\bf u}^{\textnormal{b}[k]} \cdot {{\bf u}^{\textnormal{b}[k]}}^T ,\,
\end{eqnarray*}
and therefore
\begin{eqnarray*}
\lle \widehat{\bf cov} \lp {\bf U}^\textnormal{b} \rp \rle_{i,j} = \frac{1}{\Nens-1} \cdot \sum_{k=1}^{\Nens} \lle {\bf u}^{\textnormal{b}[k]} \rle_{i} \cdot \lle {\bf u}^{\textnormal{b}[k]} \rle_{j}.
\end{eqnarray*}
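As a quick numerical illustration of this entrywise formula (a sketch with hypothetical toy dimensions, not part of the proof itself):
\begin{verbatim}
# Sketch: empirical covariance of the ensemble and its entrywise form
# (anomalies assumed already centered).
import numpy as np

n, N = 8, 5                            # components, ensemble members
rng = np.random.default_rng(1)
U = rng.standard_normal((n, N))        # columns are the members u^{b[k]}

P = U @ U.T / (N - 1)                  # empirical covariance
i, j = 2, 5
assert np.isclose(P[i, j], np.sum(U[i, :] * U[j, :]) / (N - 1))
\end{verbatim}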
For $\nu>0$, $\sum_{k=1}^{\Nens} \lb \lle \errbac^{[k]} \rle_i \cdot \lle \errbac^{[k]} \rle_j - \lle \B\rle_{i,j} \rb \ge \Nens \cdot \nu$ implies $\sum_{k=1}^{\Nens} \lb \lle \errbac^{[k]} \rle_i \cdot \lle \errbac^{[k]} \rle_j - \lle \B\rle_{i,j} \rb \ge (\Nens-1 )\cdot \nu$, and therefore by Lemma \ref{lemma:lemma-Bickel} we have:
\begin{subequations}
\begin{eqnarray}
\label{eq:proof-bound-U}
\ln {\bf cov} \lp {\bf U}^\textnormal{b} \rp - \widehat{\bf cov} \lp {\bf U}^\textnormal{b} \rp \big\|_{\infty} = \BO {\Nens^{-1/2} \cdot \log^{1/2} \Nstate },\,
\end{eqnarray}
since the entries of ${\bf cov} \lp {\bf U}^\textnormal{b} \rp - \widehat{\bf cov} \lp {\bf U}^\textnormal{b} \rp$ can be bounded by:
\begin{eqnarray*}
\left | \lle {\bf cov} \lp {\bf U}^\textnormal{b} \rp - \widehat{\bf cov} \lp {\bf U}^\textnormal{b} \rp \rle_{i,j} \right | & \le & \Nens^{-1} \cdot \sum_{k=1}^\Nens \left | \lle {\bf u}^{\textnormal{b}[k]} \rle_i \cdot \lle {\bf u}^{\textnormal{b}[k]} \rle_j - \lle \B \rle_{i,j} \right |.
\end{eqnarray*}
Lemma \ref{lemma:lemma-Bickel} ensures that:
\begin{eqnarray*}
&& \textnormal{Prob} \lb \underset{i,j}{\max} \left | \Nens^{-1} \cdot \sum_{k=1}^\Nens \lle {\bf u}^{\textnormal{b}[k]} \rle_i \cdot \lle {\bf u}^{\textnormal{b}[k]} \rle_j - \lle \B \rle_{i,j} \right | \ge \nu \rb \\
&& \qquad \le C_1 \cdot \Nstate^2 \cdot \exp \lp -C_2 \cdot \Nens \cdot \nu^2 \rp,\,
\end{eqnarray*}
for $|\nu| \le \delta$. Let $\nu = \lp \frac{\log \Nstate^2}{ \Nens \cdot C_2} \rp^{1/2}\cdot M$, for $M$ arbitrary; choosing $M$ large enough makes the probability above arbitrarily small, which yields the bound \eqref{eq:proof-bound-U}.
Since $\Z_{[i]}$ stores the columns of ${\bf U}^\textnormal{b}$ corresponding to the predecessors of model component $i$, an immediate consequence of \eqref{eq:proof-bound-U} is
\begin{eqnarray}
\label{eq:proof-bound-Z}
\underset{i}{\max} \ln {\bf cov} \lp \Z_{[i]} \rp -\widehat{\bf cov} \lp \Z_{[i]} \rp \big\|_{\infty} = \BO{ \Nens^{-1/2} \cdot \log^{1/2} \Nstate} \,.
\end{eqnarray}
\end{subequations}
Also,
\begin{eqnarray*}
\ln \B^{-1} \big\|_{\infty} = \ln {\bf cov} \lp {\bf U}^\textnormal{b} \rp^{-1} \big\|_{\infty} \le \varepsilon_{0}^{-1} \,.
\end{eqnarray*}
According to equation \eqref{eq:EnKF-MC-solution-of-optimization-problem},
\begin{eqnarray*}
\lle {\bf c}^{[i]} \rle_{j} &=& \lle {\bf cov} \lp \Z_{[i]} \rp^{-1} \cdot \Z_{[i]} \cdot \x_{[i]} \rle_j \,,\\
\lle \widehat{\bf c}^{[i]} \rle_{j} &=& \lle \widehat{\bf cov} \lp \Z_{[i]} \rp^{-1} \cdot \Z_{[i]} \cdot \x_{[i]} \rle_j \,,
\end{eqnarray*}
therefore:
\begin{eqnarray} \nonumber
&&\underset{k}{\max} \left | \lle {\bf c}^{[i]} \rle_{k}-\lle \widehat{\bf c}^{[i]} \rle_{k} \right | \\
&=& \underset{k}{\max} \left | \lle {\bf cov} \lp \Z_{[i]} \rp^{-1} \cdot \Z_{[i]} \cdot \x_{[i]} \rle_k - \lle \widehat{\bf cov} \lp \Z_{[i]} \rp^{-1} \cdot \Z_{[i]} \cdot \x_{[i]} \rle_k \right | \\ \nonumber
&=& \underset{k}{\max} \left | \lle \lb {\bf cov} \lp \Z_{[i]} \rp^{-1} - \widehat{\bf cov} \lp \Z_{[i]} \rp^{-1} \rb \cdot \Z_{[i]} \cdot \x_{[i]} \rle_k \right | \\ \label{eq:proof-parta}
&=& \mathcal{O} \lp \Nens^{-1/2} \cdot \log^{1/2} \Nstate \rp
\end{eqnarray}
from which \eqref{eq:proof-max-T} follows. Note that:
\begin{eqnarray*}
&& \x_{[i]} = \sum_{j=1}^{\Nstate} \tilde{\gamma}_{i,j} \cdot \lle \widehat{\bf c}^{[i]} \rle_j \cdot \x_{[j]} + \widehat{\boldsymbol{\varepsilon}}^{[i]} \\
&\Leftrightarrow & \widehat{\bf cov} \lp \x_{[i]} \rp = \widehat{\bf cov} \lp \sum_{j=1}^{\Nstate} \tilde{\gamma}_{i,j} \cdot \lle \widehat{\bf c}^{[i]} \rle_j \cdot \x_{[j]} + \widehat{\boldsymbol{\varepsilon}}^{[i]} \rp \\
&\Leftrightarrow & \widehat{\bf cov} \lp \x_{[i]} \rp = \widehat{\bf cov} \lp \sum_{j=1}^{\Nstate} \tilde{\gamma}_{i,j} \cdot \lle \widehat{\bf c}^{[i]} \rle_j \cdot \x_{[j]} \rp + \widehat{\bf cov} \lp \widehat{\boldsymbol{\varepsilon}}^{[i]} \rp \\
&\Leftrightarrow & \widehat{d}^2_i = \widehat{\bf cov} \lp \x_{[i]}\rp - \widehat{\bf cov} \lp \sum_{j=1}^{\Nstate} \tilde{\gamma}_{i,j} \cdot \lle \widehat{\bf c}^{[i]} \rle_j \cdot \x_{[j]} \rp ,\,
\end{eqnarray*}
and similarly
\begin{eqnarray*}
d^2_i = {\bf cov} \lp \x_{[i]}\rp - {\bf cov} \lp \sum_{j=1}^{\Nstate} \tilde{\gamma}_{i,j} \cdot \lle {\bf c}^{[i]} \rle_j \cdot \x_{[j]} \rp \,.
\end{eqnarray*}
The claim \eqref{eq:proof-max-D} and the first part of \eqref{eq:proof-norm1} follow from \eqref{eq:proof-bound-U}, \eqref{eq:proof-bound-Z} and \eqref{eq:proof-parta}, since
\begin{eqnarray} \nonumber
\left | \widehat{d}_{i}^2 - d_{i}^2 \right | &\le & \left | {\bf cov} \lp \x_{[i]}\rp - \widehat{\bf cov} \lp \x_{[i]}\rp \right | \\
&+& \left | \widehat{\bf cov} \lp \sum_{j=1}^{\Nstate} \tilde{\gamma}_{i,j} \cdot \lb \lle {\widehat{\bf c}}^{[i]} \rle_j - \lle {{\bf c}}^{[i]} \rle_j \rb \cdot \x_{[j]} \rp \right | \\ \nonumber
&+& \left | \widehat{\bf cov} \lp \sum_{j=1}^{\Nstate} \tilde{\gamma}_{i,j} \cdot \lle \widehat{\bf c}^{[i]} \rle_j \cdot \x_{[j]} \rp - {\bf cov} \lp \sum_{j=1}^{\Nstate} \tilde{\gamma}_{i,j} \cdot \lle \widehat{\bf c}^{[i]}\rle_j \cdot \x_{[j]} \rp \right |
\end{eqnarray}
where $\tilde{\gamma}_{i,j} = 1-{\gamma}_{i,j}$. By Lemma \ref{lemma:lemma-Bickel} the maximum over $i$ of the first term is:
\begin{eqnarray*}
\displaystyle
\underset{i}{\max} \left | {\bf cov} \lp \x_{[i]}\rp - \widehat{\bf cov} \lp \x_{[i]}\rp \right | = \BO{\Nens^{-1/2} \cdot \log^{1/2} \Nstate }.\,
\end{eqnarray*}
The second term can be bounded as follows:
\begin{eqnarray*}
&& \left | \sum_{j=1}^{\Nstate} \tilde{\gamma}_{i,j}^2 \cdot \lb \lle {\widehat{\bf c}}^{[i]} \rle_j - \lle {{\bf c}}^{[i]} \rle_j \rb^2 \cdot \widehat{\bf cov}\lp \x_{[j]} \rp \right | \\
& \le & \sum_{j=1}^{\Nstate} \tilde{\gamma}_{i,j}^2 \cdot \lb \lle {\widehat{\bf c}}^{[i]} \rle_j - \lle {{\bf c}}^{[i]} \rle_j \rb^2 \cdot \left | \widehat{\bf cov}\lp \x_{[j]} \rp \right | \\
& \le & \underset{k}{\max} \lb \lle {\widehat{\bf c}}^{[i]} \rle_k - \lle {{\bf c}}^{[i]} \rle_k \rb^2 \cdot \underset{i}{\max}\left | \widehat{\bf cov}\lp \x_{[i]} \rp \right | \cdot \sum_{j=1}^{\Nstate} \tilde{\gamma}_{i,j}^2 \\
&=&\BO{ {\zeta}^2 \cdot \Nens^{-1} \cdot \log \Nstate } \\ \nonumber
&=& \BO{ \lb \Nens^{-1} \cdot \log \Nstate \rb^{\alpha/(2(\alpha+1))} }
\end{eqnarray*}
by \eqref{eq:proof-max-T} and $\ln \B\big\| \le \varepsilon_0^{-1}$. Recall that ${\zeta} \approx \lb \Nens^{-1} \cdot \log \Nstate \rb^{-1/(2(\alpha+1))}$; moreover, note that:
\begin{eqnarray*}
\sum_{j=1}^{\Nstate} \tilde{\gamma}^2_{i,j} = \frac{\lb {\zeta}+1 \rb^2}{2} = \frac{{\zeta}^2}{2}+{\zeta}+\frac{1}{2} = \BO{{\zeta}^2} \,.
\end{eqnarray*}
The third term can be bounded similarly. Thus \eqref{eq:proof-max-D} follows. Furthermore,
\begin{eqnarray*}
\displaystyle
d^2_{i} = {\bf cov} \lp \x_{[i]} - \sum_{j=1}^{\Nstate} \tilde{\gamma}_{i,j} \cdot \lle \widehat{\bf c}^{[i]}\rle_j \cdot \x_{[j]} \rp \ge \varepsilon_0 \cdot \lp 1 + \sum_{j=1}^{\Nstate} \lb \lle \widehat{\bf c}^{[i]} \rle_j \rb^2 \rp \ge \varepsilon_0 \,,
\end{eqnarray*}
and the lemma follows.
\end{proof}
We now are ready to prove Theorem \ref{theo-main}.
\begin{proof}[Proof of Theorem \ref{theo-main}]
We need only check that:
\begin{subequations}
\label{eq:Proof-to-check}
\begin{eqnarray}
\label{eq:Proof-difference-of-inverses}
\displaystyle
\ln \widehat{\bf B}^{-1} - \B^{-1} \big\|_{\infty} = \BO{ \Nens^{-1/2} \cdot \log^{1/2} \lp \Nstate \rp}
\end{eqnarray}
and
\begin{eqnarray}
\label{eq:Proof-rate-radius}
\displaystyle
\ln \B^{-1} - \Phi_{{\zeta}} \lp \B^{-1} \rp\big\|_{\infty} = \BO{{\zeta}^{-\alpha}}
\end{eqnarray}
where the entries of $\Phi_{{\zeta}} \lp \B^{-1} \rp$ are given by:
\begin{eqnarray}
\lle \Phi_{{\zeta}} \lp \B^{-1} \rp \rle_{k,\ell} = \delta_{k,\ell} \cdot \lle \B^{-1} \rle_{k,\ell} ,\, \text{ for $1 \le k,\ell \le \Nstate$}
\end{eqnarray}
where $k = f(i,j)$ and $\ell = f(p,q)$ for $1 \le i,j,p,q \le s$, and
\begin{eqnarray*}
\delta_{k,\ell} &=& \begin{cases}
1 & j-{\zeta} \le q \le j+{\zeta} \text{ and } i-{\zeta} \le p \le i+{\zeta} \\
0 & \text{otherwise}
\end{cases}
\end{eqnarray*}
\end{subequations}
We first prove \eqref{eq:Proof-difference-of-inverses}. By definition,
\begin{eqnarray}
\widehat{\bf B}^{-1} - \B^{-1} = \widehat{\bf T}^T \cdot \widehat{\bf D}^{-1} \cdot \widehat{\bf T} - {\bf T}^T \cdot {\bf D}^{-1} \cdot {\bf T}.
\end{eqnarray}
Applying the standard inequality:
\begin{eqnarray*}
\ln {\bf T}^T \cdot {\bf D}^{-1} \cdot {\bf T} -\widehat{\bf T}^T \cdot \widehat{\bf D}^{-1} \cdot \widehat{\bf T}\big\| &\le & \ln {\bf T}^T- \widehat{\bf T}^T \big\| \cdot \ln \widehat{\bf D}^{-1} \big\| \cdot \ln \widehat{\bf T} \big\| \\
&+& \ln {\bf D}^{-1}- \widehat{\bf D}^{-1} \big\| \cdot \ln \widehat{\bf T}^T \big\| \cdot \ln \widehat{\bf T} \big\| \\
&+& \ln {\bf T}- \widehat{\bf T} \big\| \cdot \ln \widehat{\bf T} \big\| \cdot \ln \widehat{\bf D}^{-1} \big\| \\
&+& \ln \widehat{\bf T} \big\| \cdot \ln {\bf D}^{-1}-\widehat{\bf D}^{-1} \big\| \cdot \ln \widehat{\bf T}^T-{\bf T}^T \big\| \\
&+& \ln \widehat{\bf D}^{-1} \big\| \cdot \ln {\bf T}-\widehat{\bf T} \big\| \cdot \ln \widehat{\bf T}^T-{\bf T}^T \big\| \\
&+& \ln \widehat{\bf T}^T \big\| \cdot \ln {\bf D}^{-1}-\widehat{\bf D}^{-1} \big\| \cdot \ln \widehat{\bf T}-{\bf T} \big\| \\
&+& \ln {\bf D}^{-1}-\widehat{\bf D}^{-1} \big\| \cdot \ln {\bf T}-\widehat{\bf T} \big\| \cdot \ln {\bf T}^T-\widehat{\bf T}^T \big\|
\end{eqnarray*}
all previous terms can be bounded making use of Lemma \ref{lemma:differences-emp-true} and therefore, \eqref{eq:Proof-difference-of-inverses} follows. Likewise, for \eqref{eq:Proof-rate-radius}, we need to note that for any matrix ${\bf M}$,
\begin{eqnarray*}
\ln {\bf M} \cdot {\bf M}^T - \Phi_{{\zeta}} \lp {\bf M} \rp \cdot \Phi_{{\zeta}} \lp {\bf M} \rp^T \big\|_{\infty} & \le & 2 \cdot \ln {\bf M} \big\|_{\infty} \cdot \ln \Phi_{{\zeta}} \lp {\bf M} \rp-{\bf M} \big\|_{\infty} \\
&+&\ln \Phi_{{\zeta}} \lp {\bf M} \rp-{\bf M} \big\|_{\infty}^2
\end{eqnarray*}
and by letting ${\bf M} = {\bf T}^T \cdot {\bf D}^{-1/2}$, the theorem follows from Definition \ref{theo-def}.
\end{proof}
\section{Numerical Experiments}
\label{sec:experimental-settings}
In this section we study the performance of the proposed EnKF-MC implementation. The experiments are performed using the atmospheric general circulation model SPEEDY \cite{Speedy1,Speedy2}. SPEEDY is a hydrostatic, $\sigma$-coordinate, spectral-transform model in the vorticity-divergence form, with semi-implicit treatment of gravity waves. The number of layers in the SPEEDY model is 8 and the T-63 model resolution ($192 \times 96$ grid points) is used for the horizontal space discretization of each layer. Four model variables are part of the assimilation process: the temperature ($K$), the zonal and the meridional wind components ($m/s$), and the specific humidity ($g/kg$). The total number of model components is $\Nstate = 589,824$. The number of ensemble members is $\Nens=94$ for all the scenarios. The model state space is approximately 6,274 times larger than the number of ensemble members ($\Nstate \gg \Nens$).
Starting with the state of the system $\x^\textnormal{ref}_{-3}$ at time $t_{-3}$, the model solution is propagated in time over one year:
\begin{eqnarray*}
\x^\textnormal{ref}_{-2} = \M_{t_{-3} \rightarrow t_{-2}} \lp \x^\textnormal{ref}_{-3}\rp.
\end{eqnarray*}
The reference solution $\x^\textnormal{ref}_{-2}$ is used to build a perturbed background solution:
\begin{eqnarray}
\label{eq:exp-perturbed-background}
\displaystyle
\widehat{\x}^\textnormal{b}_{-2} = \x^\textnormal{ref}_{-2} + \errobs^\textnormal{b}_{-2}, \quad \errobs^\textnormal{b}_{-2} \sim \Nor \lp {\bf 0}_{\Nstate} ,\, \underset{i}{\textnormal{diag}} \left\{ (0.05\, \{\x^\textnormal{ref}_{-2}\}_i)^2 \right\} \rp.
\end{eqnarray}
The perturbed background solution is propagated over another year to obtain the background solution at time $t_{-1}$:
\begin{eqnarray}
\label{eq:exp-background-state-1}
\x^\textnormal{b}_{-1} = \M_{t_{-2} \rightarrow t_{-1}} \lp \widehat{\x}^\textnormal{b}_{-2}\rp.
\end{eqnarray}
This model propagation attenuates the random noise introduced in \eqref{eq:exp-perturbed-background} and makes the background state \eqref{eq:exp-background-state-1} consistent with the physics of the SPEEDY model. Then, the background state \eqref{eq:exp-background-state-1} is utilized in order to build an ensemble of perturbed background states:
\begin{eqnarray}
\label{eq:exp-perturbed-ensemble}
\displaystyle
\widehat{\x}^{\textnormal{b}[i]}_{-1} = \x^\textnormal{b}_{-1} + \errobs^{\textnormal{b}[i]}_{-1},\quad \errobs^{\textnormal{b}[i]}_{-1} \sim \Nor \lp {\bf 0}_{\Nstate} ,\, \underset{i}{\textnormal{diag}} \left\{ (0.05\, \{\x^\textnormal{b}_{-1}\}_i)^2 \right\} \rp,
\quad 1 \le i \le \Nens,
\end{eqnarray}
from which, after three months of model propagation, the initial ensemble is obtained at time $t_0$:
\begin{eqnarray*}
\x^{\textnormal{b}[i]}_0 = \M_{t_{-1} \rightarrow t_0} \lp \widehat{\x}^{\textnormal{b}[i]}_{-1}\rp \,.
\end{eqnarray*}
Again, the model propagation of the perturbed ensemble ensures that the ensemble members are consistent with the physics of the numerical model.
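The initialization procedure above can be summarized in the following sketch; \texttt{model\_run} is a hypothetical stand-in for the SPEEDY propagation $\M$, and only the structure (perturb, then propagate) follows the text:
\begin{verbatim}
# Sketch of the twin-experiment initialization (toy dynamics).
import numpy as np

rng = np.random.default_rng(2)
n, N_ens = 100, 94

def model_run(x):                      # placeholder dynamics, not SPEEDY
    return 0.99 * np.roll(x, 1)

def perturb(x, rel=0.05):              # N(0, diag((0.05 x_i)^2)) noise
    return x + rel * np.abs(x) * rng.standard_normal(x.shape)

x_ref = model_run(rng.standard_normal(n))       # reference at t_{-2}
x_b = model_run(perturb(x_ref))                 # background at t_{-1}
X0 = np.column_stack([model_run(perturb(x_b))   # initial ensemble at t_0
                      for _ in range(N_ens)])
\end{verbatim}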
The experiments are performed over a period of 24 days, where observations are taken every 2 days ($\N=12$). At time $k$ synthetic observations are built as follows:
\begin{eqnarray*}
\y_k = \H_k \cdot \x^\textnormal{ref}_k + \errobs_k, \quad \errobs_k \sim \Nor \lp {\bf 0}_{\Nobs},\, \R_k \rp,\,
\quad \R_k = \textnormal{diag}_{i}\left\{ (0.01\, \{\H_k \, \x^\textnormal{ref}_k\}_i )^2 \right\}.
\end{eqnarray*}
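A sketch of this observation generator (hypothetical sizes; the random selection below stands in for $\H_k$):
\begin{verbatim}
# Sketch: synthetic observations y = H x + eps, with R built from
# 1% of the observed signal magnitude.
import numpy as np

rng = np.random.default_rng(3)
n, m = 100, 4                          # state and observation sizes
x_ref = rng.standard_normal(n)

obs_idx = rng.choice(n, size=m, replace=False)
Hx = x_ref[obs_idx]                    # H applied to x_ref
R_diag = (0.01 * Hx) ** 2              # R = diag((0.01 [H x]_i)^2)
y = Hx + np.sqrt(R_diag) * rng.standard_normal(m)
\end{verbatim}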
The observation operators $\H_k$ are fixed throughout the time interval. We perform experiments with several operators characterized by different proportions $p$ of observed components from the model state $\x^\textnormal{ref}_k$ ($\Nobs \approx p \cdot \Nstate$). We consider four different values for $p$: 0.50, 0.12, 0.06, and 0.04, which represent 50\%, 12\%, 6\%, and 4\% of the total number of model components, respectively. Some of the observational networks used during the experiments are shown in Figure \ref{fig:exp-observational-grids} with their corresponding percentage of observed components from the model state.
The analyses of the EnKF-MC are compared against those obtained making use of the LETKF implementation proposed by Hunt et al.\ in \cite{LETKFHunt,TELA:TELA076,application_letkf_1}. The analysis accuracy is measured by the root mean square error (RMSE)
\begin{eqnarray}
\label{eq:ER-RMSE-formula}
\displaystyle
\text{RMSE} = \sqrt{\frac{1}{\N} \cdot \sum_{k=1}^\N \lb \x^\textnormal{ref}_k -\x^\textnormal{a}_k \rb^T \cdot \lb \x^\textnormal{ref}_k -\x^\textnormal{a}_k \rb }
\end{eqnarray}
where $\x^\textnormal{ref}_k \in \Re^{\Nstate \times 1}$ and $\x^\textnormal{a}_{k} \in \Re^{\Nstate \times 1}$ are the reference and the analysis solutions at time $k$, respectively, and $\N$ is the number of assimilation times.
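For reference, a direct transcription of \eqref{eq:ER-RMSE-formula} (a sketch; the array shapes are hypothetical):
\begin{verbatim}
# Sketch: RMSE over the assimilation window, as defined above.
import numpy as np

def rmse(x_ref, x_ana):
    """x_ref, x_ana: arrays of shape (N_times, n_state)."""
    diff = x_ref - x_ana
    return np.sqrt(np.mean(np.sum(diff * diff, axis=1)))
\end{verbatim}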
The threshold used in \eqref{eq:EnKF-MC-truncated-SVD} during the computation of $\widehat{\bf B}^{-1}$ is $\sigma_{r} = 0.10$. During the assimilation steps, the data error covariance matrices $\R_k$ are used (no representativeness errors are involved during the assimilations). The different EnKF implementations are written in FORTRAN, and specialized libraries such as BLAS and LAPACK are used in order to perform the algebraic computations.
\begin{figure}[H]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.5\textwidth]{figures/h_3-eps-converted-to.pdf}
\caption{$p=12\%$ }
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.5\textwidth]{figures/h_5-eps-converted-to.pdf}
\caption{$p=4\%$ }
\end{subfigure}%
\caption{Observational networks for different values of $p$. Dark dots denote the location of the observed components. The observed model variables are the zonal and the meridional wind components, the specific humidity, and the temperature.}
\label{fig:exp-observational-grids}
\end{figure}
\subsection{Results with dense observation networks}
We first consider dense observational networks in which 100\% and 50\% of the model components are observed. We vary the radius of influence ${\zeta}$ from 1 to 5 grid points.
Figure \ref{fig:exp-LETKF-RMSE-different-radii-dense-network} shows the RMSE values for the LETKF and EnKF-MC analyses for different values of ${\zeta}$ for the specific humidity when $50\%$ of model components are observed. When the radius of influence is increased the quality of the LETKF results degrades due to spurious correlations. This is expected since the local estimation of correlations in the context of LETKF is the sample covariance matrix. For instance, for a radius of influence of 1, the total number of local components for each local box is 36, which matches the dimension of the local background error distribution. Now, when we compare it against the ensemble size (94 ensemble members), sufficient degrees of freedom (93 degrees of freedom) are available in order to estimate the local background error distribution onto the ensemble space, and consequently all directions of the local probability error distribution are accounted for during the estimation and posterior assimilation. On the other hand, when the radius of influence is 5, the local boxes have dimension 484 (model components), which is approximately 5 times larger than the ensemble size. Thus, when the analysis increments are computed onto the ensemble space, just part of the local background error distribution is accounted for during the assimilation. Consequently, the larger the local box, the more local background error information cannot be represented in the ensemble space.
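These local dimensions are consistent with counting $(2{\zeta}+1)^2$ horizontal grid points per local box, each carrying the four model variables: $4\,(2 \cdot 1 + 1)^2 = 36$ for ${\zeta}=1$ and $4\,(2 \cdot 5 + 1)^2 = 484$ for ${\zeta}=5$.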
Figure \ref{fig:exp-LETKF-RMSE-different-radii-dense-network} shows that EnKF-MC analyses improve with increasing radius of influence ${\zeta}$. Since a dense observational network is considered during the assimilation, when the radius of influence is increased, a better estimation of the state of the system is obtained by the EnKF-MC. This can be seen clearly in Figure \ref{fig:exp-LETKF-RMSE-different-radii-dense}, where the RMSE values within the assimilation window are shown for the LETKF and the EnKF-MC solutions for the specific humidity variable and different values of ${\zeta}$ and $p$. The quality of the EnKF-MC analysis for ${\zeta}=5$ is better than that of the LETKF with ${\zeta}=1$. Likewise, when a full observational network is considered ($p=100\%$), the proposed implementation outperforms the LETKF implementation. EnKF-MC is able to exploit the large amount of information contained in dense observational networks by properly estimating the local background error correlations. The RMSE values for all model variables and different values for ${\zeta}$ and $p$ are summarized in Table \ref{tab:exp-RMSE-values-all-dense}.
\begin{figure}[H]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.75\textwidth]{figures/out_1_2_SPH_SC.png}
\caption{${\zeta} = 1$.}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.75\textwidth]{figures/out_2_2_SPH_SC.png}
\caption{${\zeta} = 2$.}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.75\textwidth]{figures/out_3_2_SPH_SC.png}
\caption{${\zeta} = 3$.}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.75\textwidth]{figures/out_4_2_SPH_SC.png}
\caption{${\zeta} = 4$.}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.75\textwidth]{figures/out_5_2_SPH_SC.png}
\caption{${\zeta} = 5$.}
\end{subfigure}
\caption{RMSE of specific humidity analyses with a dense observational network. When the radius of influence ${\zeta}$ is increased the performance of LETKF degrades.}
\label{fig:exp-LETKF-RMSE-different-radii-dense-network}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.75\textwidth]{figures/out_1_SPH_RMSE_MC-eps-converted-to.pdf}
\caption{$p = 100\%$.}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.75\textwidth]{figures/out_2_SPH_RMSE_MC-eps-converted-to.pdf}
\caption{$p = 50\%$.}
\end{subfigure}%
\caption{Analysis RMSE for the specific humidity variable. The RMSE values of the assimilation window are shown for different values of ${\zeta}$ and percentage of observed components $p$. When the local domain sizes are increased the accuracy of the LETKF analysis degrades, while the accuracy of EnKF-MC analysis improves.}
\label{fig:exp-LETKF-RMSE-different-radii-dense}
\end{figure}
\begin{table}[H]
\centering
{\footnotesize
\begin{tabular}{|c|c|c|c|c|} \hline
Variable (units) &${\zeta}$ & $p$ & EnKF-MC & LETKF \\ \hline
\multirow{10}{*}{Zonal Wind Component ($u$), ($m/s$)} & \multirow{2}{*}{1} & $100 \%$ & $ 6.012 \times 10^{1}$ & $ 6.394 \times 10^{1}$\\ \cline{3-5} & & $50 \%$ & $ 4.264 \times 10^{2}$ & $ 9.825 \times 10^{2}$\\ \cline{2-5} & \multirow{2}{*}{2} & $100 \%$ & $ 6.078 \times 10^{1}$ & $ 6.820 \times 10^{1}$\\ \cline{3-5} & & $50 \%$ & $ 2.255 \times 10^{2}$ & $ 1.330 \times 10^{3}$\\ \cline{2-5} & \multirow{2}{*}{3} & $100 \%$ & $ 6.080 \times 10^{1}$ & $ 7.969 \times 10^{1}$\\ \cline{3-5} & & $50 \%$ & $ 2.341 \times 10^{2}$ & $ 1.124 \times 10^{3}$\\ \cline{2-5} & \multirow{2}{*}{4} & $100 \%$ & $ 6.088 \times 10^{1}$ & $ 9.687 \times 10^{1}$\\ \cline{3-5} & & $50 \%$ & $ 2.418 \times 10^{2}$ & $ 1.072 \times 10^{3}$\\ \cline{2-5} & \multirow{2}{*}{5} & $100 \%$ & $ 6.092 \times 10^{1}$ & $ 1.190 \times 10^{2}$\\ \cline{3-5} & & $50 \%$ & $ 2.673 \times 10^{2}$ & $ 1.017 \times 10^{3}$\\ \cline{1-5} \multirow{10}{*}{Meridional Wind Component ($v$) ($m/s$)} & \multirow{2}{*}{1} & $100 \%$ & $ 3.031 \times 10^{1}$ & $ 6.418 \times 10^{1}$\\ \cline{3-5} & & $50 \%$ & $ 2.632 \times 10^{2}$ & $ 3.247 \times 10^{2}$\\ \cline{2-5} & \multirow{2}{*}{2} & $100 \%$ & $ 3.046 \times 10^{1}$ & $ 6.597 \times 10^{1}$\\ \cline{3-5} & & $50 \%$ & $ 1.641 \times 10^{2}$ & $ 4.138 \times 10^{2}$\\ \cline{2-5} & \multirow{2}{*}{3} & $100 \%$ & $ 3.047 \times 10^{1}$ & $ 7.565 \times 10^{1}$\\ \cline{3-5} & & $50 \%$ & $ 1.964 \times 10^{2}$ & $ 4.418 \times 10^{2}$\\ \cline{2-5} & \multirow{2}{*}{4} & $100 \%$ & $ 3.052 \times 10^{1}$ & $ 9.332 \times 10^{1}$\\ \cline{3-5} & & $50 \%$ & $ 2.084 \times 10^{2}$ & $ 4.832 \times 10^{2}$\\ \cline{2-5} & \multirow{2}{*}{5} & $100 \%$ & $ 3.054 \times 10^{1}$ & $ 1.151 \times 10^{2}$\\ \cline{3-5} & & $50 \%$ & $ 2.428 \times 10^{2}$ & $ 5.029 \times 10^{2}$\\ \cline{1-5} \multirow{10}{*}{Temperature ($K$)} & \multirow{2}{*}{1} & $100 \%$ & $ 9.404 \times 10^{2}$ & $ 5.078 \times 10^{2}$\\ \cline{3-5} & & $50 \%$ & $ 6.644 \times 10^{2}$ & $ 7.059 \times 10^{2}$\\ \cline{2-5} & \multirow{2}{*}{2} & $100 \%$ & $ 9.416 \times 10^{2}$ & $ 4.112 \times 10^{2}$\\ \cline{3-5} & & $50 \%$ & $ 6.129 \times 10^{2}$ & $ 1.138 \times 10^{3}$\\ \cline{2-5} & \multirow{2}{*}{3} & $100 \%$ & $ 9.425 \times 10^{2}$ & $ 3.447 \times 10^{2}$\\ \cline{3-5} & & $50 \%$ & $ 5.815 \times 10^{2}$ & $ 1.389 \times 10^{3}$\\ \cline{2-5} & \multirow{2}{*}{4} & $100 \%$ & $ 9.432 \times 10^{2}$ & $ 2.939 \times 10^{2}$\\ \cline{3-5} & & $50 \%$ & $ 5.585 \times 10^{2}$ & $ 1.355 \times 10^{3}$\\ \cline{2-5} & \multirow{2}{*}{5} & $100 \%$ & $ 9.432 \times 10^{2}$ & $ 2.554 \times 10^{2}$\\ \cline{3-5} & & $50 \%$ & $ 5.500 \times 10^{2}$ & $ 1.104 \times 10^{3}$\\ \cline{1-5} \multirow{10}{*}{Specific Humidity ($g/Kg$)} & \multirow{2}{*}{1} & $100 \%$ & $ 1.733 \times 10^{1}$ & $ 5.427 \times 10^{1}$\\ \cline{3-5} & & $50 \%$ & $ 8.680 \times 10^{1}$ & $ 7.602 \times 10^{1}$\\ \cline{2-5} & \multirow{2}{*}{2} & $100 \%$ & $ 1.712 \times 10^{1}$ & $ 5.669 \times 10^{1}$\\ \cline{3-5} & & $50 \%$ & $ 8.204 \times 10^{1}$ & $ 1.045 \times 10^{2}$\\ \cline{2-5} & \multirow{2}{*}{3} & $100 \%$ & $ 1.705 \times 10^{1}$ & $ 6.630 \times 10^{1}$\\ \cline{3-5} & & $50 \%$ & $ 8.089 \times 10^{1}$ & $ 1.298 \times 10^{2}$\\ \cline{2-5} & \multirow{2}{*}{4} & $100 \%$ & $ 1.699 \times 10^{1}$ & $ 7.344 \times 10^{1}$\\ \cline{3-5} & & $50 \%$ & $ 7.525 \times 10^{1}$ & $ 1.431 \times 10^{2}$\\ \cline{2-5} & \multirow{2}{*}{5} & $100 \%$ & $ 1.694 \times 10^{1}$ & $ 7.617 \times 10^{1}$\\ 
\cline{3-5} & & $50 \%$ & $ 7.642 \times 10^{1}$ & $ 1.458 \times 10^{2}$\\ \cline{1-5}
\end{tabular}
}
\caption{RMSE values for the EnKF-MC and the LETKF analyses with the SPEEDY model and for different values for ${\zeta}$ and $p$. Dense observational networks are considered in this experimental setting.}
\label{tab:exp-RMSE-values-all-dense}
\end{table}
\subsection{Results with sparse observation networks}
For sparse observational networks, in general, the results obtained by the EnKF-MC are more accurate than those obtained by the LETKF, as reported in Tables \ref{tab:exp-RMSE-values-wind-components-sparse} and \ref{tab:exp-RMSE-values-others-sparse}. We vary the values of ${\zeta}$ from 1 to 5. Three sparse observational networks with $p=12\%$, $6\%$, and $4\%$, respectively, are considered.
Figure \ref{fig:exp-LETKF-RMSE-different-radii-sparse-network} shows the RMSE values of the specific humidity analyses for different radii of influence and $4\%$ of the model components being observed. The best performance of the LETKF analyses is obtained when the radius of influence is set to 2. Note that for ${\zeta}=1$ the LETKF performs poorly, which is expected since during the assimilation most of the model components will not have observations in their local boxes. For ${\zeta}\ge 3$ the effects of spurious correlations degrade the quality of the LETKF analysis. On the other hand, the background error correlations estimated by the modified Cholesky decomposition allow the EnKF-MC formulation to obtain good analyses even for the largest radius of influence ${\zeta}=5$.
Figure \ref{fig:exp-LETKF-RMSE-different-radii-sparse} shows the RMSE values of the LETKF and the EnKF-MC implementations for different radii of influence and two sparse observational networks. Clearly, when the radius of influence is increased, in the LETKF context, the analysis corrections are impacted by spurious correlations. On the other hand, the quality of the results in the EnKF-MC case is considerably better. When data error components are uncorrelated, ${\zeta}$ can be seen as a free parameter and the choice can be based on the ``optimal performance of the filter''. For the largest radius of influence ${\zeta}=5$ the RMSE values of the EnKF-MC and the LETKF implementations differ by one order of magnitude.
Figure \ref{fig:exp-model-variables} reports the RMSE values for the zonal and the meridional wind component analyses, and for different values of $p$ and ${\zeta}$. As can be seen, the estimation of background errors via $\widehat{\bf B}$ can reduce the impact of spurious correlations; the RMSE values of the EnKF-MC analyses remain small at all assimilation times, from which we infer that the background error correlations are properly estimated. On the other hand, the impact of spurious correlations is evident in the context of LETKF. Since most of the model components are unobserved, the background error correlations drive the quality of the analysis, and spurious correlations lead to a poor performance of the filter at many assimilation times.
\begin{figure}[H]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.75\textwidth]{figures/out_1_5_SPH_NEW.png}
\caption{${\zeta} = 1$.}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.75\textwidth]{figures/out_2_5_SPH_NEW.png}
\caption{${\zeta} = 2$.}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.75\textwidth]{figures/out_3_5_SPH_NEW.png}
\caption{${\zeta} = 3$.}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.75\textwidth]{figures/out_4_5_SPH_NEW.png}
\caption{${\zeta} = 4$.}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.75\textwidth]{figures/out_5_5_SPH_NEW.png}
\caption{${\zeta} = 5$.}
\end{subfigure}
\caption{RMSE of specific humidity analyses with a sparse observational network ($p \sim 4\%$) and different values of ${\zeta}$. }
\label{fig:exp-LETKF-RMSE-different-radii-sparse-network}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.75\textwidth]{figures/out_4_SPH_RMSE_MC-eps-converted-to.pdf}
\caption{$p = 6\%$.}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.75\textwidth]{figures/out_5_SPH_RMSE_MC-eps-converted-to.pdf}
\caption{$p = 4\%$.}
\end{subfigure}%
\caption{Analysis RMSE for the specific humidity variable with sparse observation networks. RMSE values are shown for different values of ${\zeta}$ and percentage of observed components $p$.}
\label{fig:exp-LETKF-RMSE-different-radii-sparse}
\end{figure}
Figures \ref{fig:exp-snapshot-meridional-wind-component} and \ref{fig:exp-snapshot-zonal-wind-component} provide snapshots of the meridional and the zonal wind components, respectively, at the first assimilation time. For this particular case the percentage of observed model components is $p=4\%$. At this step, only the initial observation has been assimilated in order to compute the analysis corrections by the EnKF-MC and the LETKF methods. The background solution contains erroneous waves for the zonal and the meridional wind components. For instance, for the $u$ model variable, such waves are clearly present near the poles. After the first assimilation step, the LETKF analysis solution dissipates the erroneous waves, but the numerical values of the wind components are slightly greater than those of the reference solutions. This numerical difference increases at later times due to the highly-nonlinear dynamics of SPEEDY, as can be seen in Figure \ref{fig:exp-model-variables}. On the other hand, the EnKF-MC implementation recovers the reference shape, and the analysis values of the numerical model components are close to those of the reference solution. This shows again that the use of the modified Cholesky decomposition as the estimator of the background error correlations can mitigate the impact of spurious error correlations.
\begin{table}[H]
\centering
{\footnotesize
\begin{tabular}{|c|c|c|c|c|} \hline
Variable (units) &${\zeta}$ & $p$ & EnKF-MC & LETKF \\ \hline
\multirow{15}{*}{Zonal Wind Component ($u$), ($m/s$)} & \multirow{3}{*}{1} & $12\%$ & $ 5.514 \times 10^{2}$ & $ 5.471 \times 10^{2}$\\ \cline{3-5} & & $ 6 \%$ & $ 6.972 \times 10^{2}$ & $ 1.168 \times 10^{3}$\\ \cline{3-5} & & $4\%$ & $ 9.393 \times 10^{2}$ & $ 1.737 \times 10^{3}$\\ \cline{2-5} & \multirow{3}{*}{2} & $12\%$ & $ 4.187 \times 10^{2}$ & $ 1.275 \times 10^{3}$\\ \cline{3-5} & & $ 6 \%$ & $ 6.090 \times 10^{2}$ & $ 7.591 \times 10^{2}$\\ \cline{3-5} & & $4\%$ & $ 7.853 \times 10^{2}$ & $ 8.569 \times 10^{2}$\\ \cline{2-5} & \multirow{3}{*}{3} & $12\%$ & $ 4.388 \times 10^{2}$ & $ 1.661 \times 10^{3}$\\ \cline{3-5} & & $ 6 \%$ & $ 6.146 \times 10^{2}$ & $ 1.237 \times 10^{3}$\\ \cline{3-5} & & $4\%$ & $ 7.438 \times 10^{2}$ & $ 9.997 \times 10^{2}$\\ \cline{2-5} & \multirow{3}{*}{4} & $12\%$ & $ 4.323 \times 10^{2}$ & $ 1.752 \times 10^{3}$\\ \cline{3-5} & & $ 6 \%$ & $ 5.990 \times 10^{2}$ & $ 1.608 \times 10^{3}$\\ \cline{3-5} & & $4\%$ & $ 7.124 \times 10^{2}$ & $ 1.258 \times 10^{3}$\\ \cline{2-5} & \multirow{3}{*}{5} & $12\%$ & $ 4.456 \times 10^{2}$ & $ 1.862 \times 10^{3}$\\ \cline{3-5} & & $ 6 \%$ & $ 6.106 \times 10^{2}$ & $ 1.983 \times 10^{3}$\\ \cline{3-5} & & $4\%$ & $ 7.160 \times 10^{2}$ & $ 1.602 \times 10^{3}$\\ \cline{1-5} \multirow{15}{*}{Meridional Wind Component ($v$) ($m/s$)} & \multirow{3}{*}{1} & $12\%$ & $ 3.540 \times 10^{2}$ & $ 4.496 \times 10^{2}$\\ \cline{3-5} & & $ 6 \%$ & $ 5.165 \times 10^{2}$ & $ 1.158 \times 10^{3}$\\ \cline{3-5} & & $4\%$ & $ 7.770 \times 10^{2}$ & $ 1.749 \times 10^{3}$\\ \cline{2-5} & \multirow{3}{*}{2} & $12\%$ & $ 3.009 \times 10^{2}$ & $ 7.285 \times 10^{2}$\\ \cline{3-5} & & $ 6 \%$ & $ 4.605 \times 10^{2}$ & $ 5.520 \times 10^{2}$\\ \cline{3-5} & & $4\%$ & $ 6.217 \times 10^{2}$ & $ 7.420 \times 10^{2}$\\ \cline{2-5} & \multirow{3}{*}{3} & $12\%$ & $ 3.172 \times 10^{2}$ & $ 9.510 \times 10^{2}$\\ \cline{3-5} & & $ 6 \%$ & $ 4.735 \times 10^{2}$ & $ 8.334 \times 10^{2}$\\ \cline{3-5} & & $4\%$ & $ 6.014 \times 10^{2}$ & $ 7.455 \times 10^{2}$\\ \cline{2-5} & \multirow{3}{*}{4} & $12\%$ & $ 3.399 \times 10^{2}$ & $ 1.048 \times 10^{3}$\\ \cline{3-5} & & $ 6 \%$ & $ 4.812 \times 10^{2}$ & $ 1.146 \times 10^{3}$\\ \cline{3-5} & & $4\%$ & $ 5.913 \times 10^{2}$ & $ 9.026 \times 10^{2}$\\ \cline{2-5} & \multirow{3}{*}{5} & $12\%$ & $ 3.626 \times 10^{2}$ & $ 1.101 \times 10^{3}$\\ \cline{3-5} & & $ 6 \%$ & $ 5.107 \times 10^{2}$ & $ 1.575 \times 10^{3}$\\ \cline{3-5} & & $4\%$ & $ 6.122 \times 10^{2}$ & $ 1.102 \times 10^{3}$\\ \cline{1-5}
\end{tabular}
}
\caption{RMSE values of the wind-components for the EnKF-MC and LETKF making use of the SPEEDY model.}
\label{tab:exp-RMSE-values-wind-components-sparse}
\end{table}
\begin{table}[H]
\centering
{\footnotesize
\begin{tabular}{|c|c|c|c|c|} \hline
Variable (units) &${\zeta}$ & $p$ & EnKF-MC & LETKF \\ \hline
\multirow{15}{*}{Temperature ($K$)} & \multirow{3}{*}{1} & $12\%$ & $ 6.054 \times 10^{2}$ & $ 6.033 \times 10^{2}$\\ \cline{3-5} & & $ 6 \%$ & $ 5.692 \times 10^{2}$ & $ 6.704 \times 10^{2}$\\ \cline{3-5} & & $4\%$ & $ 6.522 \times 10^{2}$ & $ 8.073 \times 10^{2}$\\ \cline{2-5} & \multirow{3}{*}{2} & $12\%$ & $ 5.680 \times 10^{2}$ & $ 6.693 \times 10^{2}$\\ \cline{3-5} & & $ 6 \%$ & $ 5.193 \times 10^{2}$ & $ 5.556 \times 10^{2}$\\ \cline{3-5} & & $4\%$ & $ 5.299 \times 10^{2}$ & $ 5.529 \times 10^{2}$\\ \cline{2-5} & \multirow{3}{*}{3} & $12\%$ & $ 5.279 \times 10^{2}$ & $ 1.217 \times 10^{3}$\\ \cline{3-5} & & $ 6 \%$ & $ 4.982 \times 10^{2}$ & $ 6.458 \times 10^{2}$\\ \cline{3-5} & & $4\%$ & $ 4.926 \times 10^{2}$ & $ 6.073 \times 10^{2}$\\ \cline{2-5} & \multirow{3}{*}{4} & $12\%$ & $ 5.023 \times 10^{2}$ & $ 1.817 \times 10^{3}$\\ \cline{3-5} & & $ 6 \%$ & $ 4.757 \times 10^{2}$ & $ 1.030 \times 10^{3}$\\ \cline{3-5} & & $4\%$ & $ 4.766 \times 10^{2}$ & $ 7.464 \times 10^{2}$\\ \cline{2-5} & \multirow{3}{*}{5} & $12\%$ & $ 4.898 \times 10^{2}$ & $ 1.600 \times 10^{3}$\\ \cline{3-5} & & $ 6 \%$ & $ 4.644 \times 10^{2}$ & $ 1.473 \times 10^{3}$\\ \cline{3-5} & & $4\%$ & $ 4.684 \times 10^{2}$ & $ 1.172 \times 10^{3}$\\ \cline{1-5} \multirow{15}{*}{Specific Humidity ($g/Kg$)} & \multirow{3}{*}{1} & $12\%$ & $ 9.862 \times 10^{1}$ & $ 9.026 \times 10^{1}$\\ \cline{3-5} & & $ 6 \%$ & $ 1.133 \times 10^{2}$ & $ 1.449 \times 10^{2}$\\ \cline{3-5} & & $4\%$ & $ 1.405 \times 10^{2}$ & $ 1.941 \times 10^{2}$\\ \cline{2-5} & \multirow{3}{*}{2} & $12\%$ & $ 1.029 \times 10^{2}$ & $ 1.125 \times 10^{2}$\\ \cline{3-5} & & $ 6 \%$ & $ 1.146 \times 10^{2}$ & $ 1.137 \times 10^{2}$\\ \cline{3-5} & & $4\%$ & $ 1.270 \times 10^{2}$ & $ 1.321 \times 10^{2}$\\ \cline{2-5} & \multirow{3}{*}{3} & $12\%$ & $ 1.068 \times 10^{2}$ & $ 1.341 \times 10^{2}$\\ \cline{3-5} & & $ 6 \%$ & $ 1.205 \times 10^{2}$ & $ 1.418 \times 10^{2}$\\ \cline{3-5} & & $4\%$ & $ 1.317 \times 10^{2}$ & $ 1.458 \times 10^{2}$\\ \cline{2-5} & \multirow{3}{*}{4} & $12\%$ & $ 1.065 \times 10^{2}$ & $ 1.640 \times 10^{2}$\\ \cline{3-5} & & $ 6 \%$ & $ 1.246 \times 10^{2}$ & $ 1.652 \times 10^{2}$\\ \cline{3-5} & & $4\%$ & $ 1.324 \times 10^{2}$ & $ 1.739 \times 10^{2}$\\ \cline{2-5} & \multirow{3}{*}{5} & $12\%$ & $ 1.089 \times 10^{2}$ & $ 2.078 \times 10^{2}$\\ \cline{3-5} & & $ 6 \%$ & $ 1.301 \times 10^{2}$ & $ 1.950 \times 10^{2}$\\ \cline{3-5} & & $4\%$ & $ 1.373 \times 10^{2}$ & $ 2.068 \times 10^{2}$\\ \cline{1-5}
\end{tabular}
}
\caption{RMSE values for the EnKF-MC and LETKF making use of the SPEEDY model.}
\label{tab:exp-RMSE-values-others-sparse}
\end{table}
\begin{figure}[H]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.9\textwidth]{figures/out_3_3_UWC-eps-converted-to.pdf}
\caption{${\zeta} = 3$ and $p=12\%$ }
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.9\textwidth]{figures/out_4_3_VWC-eps-converted-to.pdf}
\caption{${\zeta} = 4$ and $p=12\%$ }
\end{subfigure}%
\caption{RMSE of the LETKF and EnKF-MC implementations for different model variables, radii of influence and observational networks. }
\label{fig:exp-model-variables}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.8\textwidth]{figures/true_1_57_5_5_VWC.png}
\caption{Reference}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.8\textwidth]{figures/back_1_57_5_5_VWC.png}
\caption{Background}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.8\textwidth]{figures/enkfB_1_57_5_5_VWC.png}
\caption{EnKF-MC}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.8\textwidth]{figures/LETKF_1_57_5_5_VWC.png}
\caption{LETKF}
\end{subfigure}
\caption{Snapshots of the reference solution, background state, and analysis fields from the EnKF-MC and LETKF for the fifth layer of the meridional wind component ($v$).}
\label{fig:exp-snapshot-meridional-wind-component}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.8\textwidth]{figures/true_1_42_5_5_UWC.png}
\caption{Reference}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.8\textwidth]{figures/back_1_42_5_5_UWC.png}
\caption{Background}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.8\textwidth]{figures/enkfB_1_42_5_5_UWC.png}
\caption{EnKF-MC}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.8\textwidth]{figures/LETKF_1_42_5_5_UWC.png}
\caption{LETKF}
\end{subfigure}
\caption{Snapshots of the reference solution, background state, and analysis fields from the EnKF-MC and LETKF for the second layer of the zonal wind component ($u$).}
\label{fig:exp-snapshot-zonal-wind-component}
\end{figure}
\subsection{Statistics of the ensemble}
In this section, we briefly discuss the spread of the ensemble making use of rank histograms. We do not claim this to be a verification procedure, but it provides useful insights about the dispersion of the members and the level of uncertainty about the ensemble mean. The plots are based on the 5-th numerical layer of the atmosphere. We collect information across all model variables, and the plots are shown in Figures \ref{fig:bin-sph}, \ref{fig:bin-tem}, \ref{fig:bin-uwc}, and \ref{fig:bin-vwc}. Based on the results, the proposed implementation seems to be less sensitive to the intrinsic need for inflation than the LETKF formulation. For instance, after the assimilation, the ensemble members from the EnKF-MC are spread almost uniformly across different observation times. On the other hand, the spread in the context of the LETKF is impacted by the constant inflation factor used during the experiments (1.04). In practice, the inflation factor is set according to historical information and/or heuristically with regard to some properties of the dynamics of the numerical model. This implies that the dispersion of the LETKF members after the analysis relies on how well we estimate the optimal inflation factor for such a filter. In operational data assimilation, an answer to this question can be hard to find. We think that inflation methodologies such as adaptive inflation can lead to a better spread of the ensemble members in the context of the LETKF. For the proposed method, based on the experimental results, such a methodology is not needed.
\begin{figure}[H]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{Corrections/BIN_EnKFMC_SPH.png}
\caption{EnKF-MC}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{Corrections/BIN_LETKF_SPH.png}
\caption{LETKF}
\end{subfigure}%
\caption{Rank-histograms for the Specific Humidity model variable. The information is collected from the 5-th model layer.}
\label{fig:bin-sph}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{Corrections/BIN_EnKFMC_UWC.png}
\caption{EnKF-MC}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{Corrections/BIN_LETKF_UWC.png}
\caption{LETKF}
\end{subfigure}%
\caption{Rank-histograms for the Zonal Wind Component model variable. The information is collected from the 5-th model layer.}
\label{fig:bin-uwc}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{Corrections/BIN_EnKFMC_VWC.png}
\caption{EnKF-MC}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{Corrections/BIN_LETKF_VWC.png}
\caption{LETKF}
\end{subfigure}%
\caption{Rank-histograms for the Meridional Wind Component model variable. The information is collected from the 5-th model layer.}
\label{fig:bin-vwc}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{Corrections/BIN_EnKFMC_TEM.png}
\caption{EnKF-MC}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{Corrections/BIN_LETKF_TEM.png}
\caption{LETKF}
\end{subfigure}%
\caption{Rank-histograms for the Temperature model variable. The information is collected from the 5-th model layer.}
\label{fig:bin-tem}
\end{figure}
\subsection{The impact of SVD truncation threshold}
\label{sec:future-work}
An important question arising from this research is the number of singular values/vectors to be used in \eqref{eq:EnKF-MC-truncated-SVD}. To study this question we use the same experimental setting and the sparse observational network where only $4\%$ of the model components are observed. We apply the EnKF-MC algorithm and truncate the summation \eqref{eq:EnKF-MC-truncated-SVD} based on different thresholds $\sigma_r$.
The results are reported in Figure \ref{fig:exp-RMSE-different-thresholds}. Different thresholds lead to different levels of accuracy for the EnKF-MC analyses. There is no unique value of $\sigma_r$ that provides the best ensemble trajectory in general; for instance, the best performance at the beginning of the assimilation window is obtained for $\sigma_r = 0.05$, but at the end the best solution is obtained with $\sigma_r = 0.2$. This indicates that the results can be improved when $\sigma_r$ is chosen dynamically and optimally. Note that, on average, the results obtained by the EnKF-MC with $\sigma_r \in \{0.15,\, 0.20,\, 0.25\}$ are much better than those when $\sigma_r = 0.10$ (and therefore much better than the results obtained by the LETKF). In Figure \ref{fig:exp-RMSE-snapshots-thresholds} snapshots of the specific humidity for different $\sigma_r$ are shown. It can be seen that the spurious errors can be quickly decreased when $\sigma_r$ is chosen accordingly.
In order to understand the optimal truncation level, note that the summation \eqref{eq:EnKF-MC-truncated-SVD} can be written as follows:
\begin{eqnarray}
\label{eq:alpha_j}
{\boldsymbol \beta}_{[i]} &=& \sum_{j=1}^{\Nens} \alpha_j \cdot {\bf u}^{\Z_{[i]}}_j,\, \\
\nonumber
\alpha_j &=& \frac{1}{\tau_j} \cdot {{\bf v}^{\Z_{[i]}}_j}^T \cdot \x_{[i]} = \frac{1}{\tau_j} \cdot {{\bf v}^{\Z_{[i]}}_j}^T \cdot \lb \widetilde{\x}_{[i]} + \boldsymbol{\theta}_{[i]} \rb \\
\nonumber
&=& \underbrace{ \frac{1}{\tau_j} \cdot {{\bf v}^{\Z_{[i]}}_j}^T \cdot \widetilde{\x}_{[i]} }_{\text{Uncorrupted data}} + \underbrace{\frac{1}{\tau_j} \cdot {{\bf v}^{\Z_{[i]}}_j}^T \cdot \boldsymbol{\theta}_{[i]}}_{\text{Error}}
\end{eqnarray}
where $\widetilde{\x}_{[i]}$ is the perfect data ($\x_{[i]} = \widetilde{\x}_{[i]}+\boldsymbol{\theta}_{[i]}$). The components with small singular values $\tau_j$ will amplify the error more. The threshold should be small enough to include useful information from $\widetilde{\x}_{[i]}$, but large enough in order to prune out the components with large error amplification.
We expect that model components with large variances will need more basis vectors from \eqref{eq:EnKF-MC-truncated-SVD} than those with smaller variance. An upper bound for the number of basis vectors (and therefore the threshold $\sigma_r$) can be obtained by inspection of the values $\alpha_j$ in \eqref{eq:alpha_j}. Figure \ref{fig:exp-random-noise-effect} shows the weights $\alpha_j$ for different singular values for the 500-th model component of the SPEEDY model. The large zig-zag behaviors are evidence of error amplification, and therefore we can truncate the summation \eqref{eq:alpha_j} before this pattern starts to appear in the values of $\alpha_j$.
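The amplification mechanism can be reproduced with a small synthetic example (a sketch; the matrix below is a hypothetical stand-in for $\Z_{[i]}$, built with an artificially decaying spectrum):
\begin{verbatim}
# Sketch: error amplification in the weights alpha_j = (1/tau_j) v_j^T x.
import numpy as np

rng = np.random.default_rng(4)
m, n = 40, 10
Q1 = np.linalg.qr(rng.standard_normal((m, n)))[0]
Q2 = np.linalg.qr(rng.standard_normal((n, n)))[0]
Z = Q1 @ np.diag(2.0 ** -np.arange(n)) @ Q2.T   # tau_j = 2^{-j}

x_clean = Z.T @ rng.standard_normal(m)          # "perfect" data
x = x_clean + 1e-2 * rng.standard_normal(n)     # corrupted data

U, tau, Vt = np.linalg.svd(Z, full_matrices=False)
alpha = (Vt @ x) / tau       # trailing weights dominated by the error
print(np.round(alpha, 2))    # zig-zag growth flags where to truncate
\end{verbatim}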
\begin{figure}[H]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.75\textwidth]{figures/VWC-eps-converted-to.pdf}
\caption{Meridional wind component ($m/s$)}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.75\textwidth]{figures/UWC-eps-converted-to.pdf}
\caption{Zonal wind component ($m/s$)}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.75\textwidth]{figures/TEM-eps-converted-to.pdf}
\caption{Temperature ($K$)}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.75\textwidth]{figures/SPH-eps-converted-to.pdf}
\caption{Specific humidity ($g/kg$)}
\end{subfigure}
\caption{RMSE for the SPEEDY analyses obtained using different SVD truncation levels based on the $\sigma_r$ values.}
\label{fig:exp-RMSE-different-thresholds}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.9\textwidth]{figures/true_SPH_8.png}
\caption{Reference}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.9\textwidth]{figures/{xa_SPH_8_0.05}.png}
\caption{$\sigma_r = 0.05$}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.9\textwidth]{figures/{xa_SPH_8_0.1}.png}
\caption{$\sigma_r = 0.10$}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.9\textwidth]{figures/{xa_SPH_8_0.15}.png}
\caption{$\sigma_r = 0.15$}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.9\textwidth]{figures/{xa_SPH_8_0.2}.png}
\caption{$\sigma_r = 0.20$}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.9\textwidth]{figures/{xa_SPH_8_0.3}.png}
\caption{$\sigma_r = 0.30$}
\end{subfigure}
\caption{Snapshots at the final assimilation time (day 22) of the EnKF-MC analysis making use of different thresholds $\sigma_r$ for ${\zeta} = 5$ and $p = 4 \%$. }
\label{fig:exp-RMSE-snapshots-thresholds}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.9\textwidth]{figures/MWC_SV-eps-converted-to.pdf}
\caption{Meridional wind component ($m/s$)}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.9\textwidth]{figures/ZWC_SV-eps-converted-to.pdf}
\caption{Zonal wind component ($m/s$)}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.9\textwidth]{figures/TEM_SV-eps-converted-to.pdf}
\caption{Temperature ($K$)}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.9\textwidth]{figures/SH_SV-eps-converted-to.pdf}
\caption{Specific humidity ($g/kg$)}
\end{subfigure}
\caption{The effect of $\boldsymbol{\theta}$ on the weights $\alpha_j$ for some model component $i$ of the SPEEDY model when ${\zeta}=5$ and $p=4\%$.}
\label{fig:exp-random-noise-effect}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
This paper develops an efficient implementation of the ensemble Kalman filter, named EnKF-MC, that is based on a modified Cholesky decomposition to estimate the inverse background covariance matrix. This new approach has several advantages over classical formulations. First, a predefined sparsity structure can be built into the factors of the inverse covariance. This reflects the fact that if two distant model components are uncorrelated then the corresponding entry in the inverse covariance matrix is zero; the only nonzero entries in the Cholesky factors correspond to components of the model that are located in each other's proximity. Therefore, imposing a sparsity structure on the inverse background covariance matrix is a form of covariance localization. Second, the formulation allows for a rigorous theoretical analysis; we prove the convergence of the covariance estimator for a number of ensemble members that is proportional to the logarithm of the number of states of the model; therefore, when $\Nens \approx \log \Nstate$, the background error correlations can be well estimated making use of the modified Cholesky decomposition.
We discuss different implementations of the new EnKF-MC, and assess their computational effort. We show that domain decomposition can be used in order to further decrease the computational effort of the proposed implementation. Numerical experiments carried out using the atmospheric general circulation model SPEEDY reveal that the analyses obtained by EnKF-MC are better than those of the LETKF in the root mean square sense when sparse observations are used in the analysis. For dense observation grids the EnKF-MC solutions are improved when the radius of influence increases, while the opposite holds true for LETKF analyses. (We stress the fact that these conclusions are true for our implementation of the basic LETKF; other implementations may incorporate advances that could make the filter perform considerably better.) The use of the modified Cholesky decomposition can mitigate the impact of spurious correlations during the assimilation of observations.
\section*{Acknowledgements}
This work was supported in part by awards NSF CCF--1218454,
AFOSR FA9550--12--1--0293--DEF, and by the Computational Science Laboratory at Virginia Tech.
\input{Main.bbl}
\end{document}
``Dual composition'', a new method of constructing energy-preserving
discretizations of conservative PDEs, is introduced. It extends
the summation-by-parts approach to arbitrary differential operators
and conserved quantities. Links to pseudospectral, Galerkin,
antialiasing, and Hamiltonian methods are discussed.
\medskip
\subsection*{1. Introduction}
For all $u,v\in C^1([-1,1])$,
$$
\int_{-1}^1 v \partial_x w\, dx =
-\int_{-1}^1 w \partial_x v\, dx + [vw]_{-1}^1,
$$
so the operator $\partial_x$ is skew-adjoint on $\{v\in C^1([-1,1]):
v(\pm1)=0\}$ with respect to the $L^2$ inner product $\ip{}{}$. Take $n$
points $x_i$, a real function $v(x)$, and
estimate $v'(x_i)$ from the values $v_i := v(x_i)$. In vector
notation, ${\mathbf v}' = D {\mathbf v}$, where $D$ is a differentiation matrix.
Suppose that
the differentiation matrix has the form $D = S^{-1}A$, in which $S$
induces a discrete approximation
$$\ip{{\mathbf v}}{{\mathbf w}}_S := {\mathbf v}^{\mathrm T} S {\mathbf w}\approx \int vw\,dx=\ip{v}{w},$$
of the inner product. Then
\begin{equation}
\label{byparts}
\ip{{\mathbf v}}{D{\mathbf w}}_S + \ip{D{\mathbf v}}{{\mathbf w}}_S = {\mathbf v}^{\mathrm T} S S^{-1} A {\mathbf w} + {\mathbf v}^{\mathrm T} A^{\mathrm T}
S^{-\mathrm T} S {\mathbf w} = {\mathbf v}^{\mathrm T}(A+A^{\mathrm T}){\mathbf w},
\end{equation}
which is zero if $A$ is antisymmetric
(so that $D$ is skew-adjoint with respect to $\ip{\,}{}_S$),
or equals $[vw]_{-1}^1$ if $x_1=-1$, $x_n=1$, and
$A+A^{\mathrm T}$ vanishes except for the corner entries $(A+A^{\mathrm T})_{11}=-1$ and $(A+A^{\mathrm T})_{nn}=1$, e.g., $A_{nn}=-A_{11}=\frac{1}{2}$ with the rest of $A$ antisymmetric.
Eq. (\ref{byparts}) is known as a ``summation by parts'' formula;
it affects the energy flux of methods built from $D$.
More generally, preserving structural features such as skew-adjointness
leads to natural and robust methods.
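A direct numerical check of (\ref{byparts}) (an illustrative sketch with randomly generated $S$ and $A$, not a particular discretization):
\begin{verbatim}
# Sketch: summation by parts for D = S^{-1} A with S symmetric
# positive definite and A antisymmetric.
import numpy as np

rng = np.random.default_rng(5)
n = 7
M = rng.standard_normal((n, n))
S = M @ M.T + n * np.eye(n)            # discrete inner product matrix
A = np.triu(rng.standard_normal((n, n)), 1)
A = A - A.T                            # antisymmetric
D = np.linalg.solve(S, A)              # differentiation matrix

v, w = rng.standard_normal(n), rng.standard_normal(n)
lhs = v @ S @ (D @ w) + (D @ v) @ S @ w
assert np.isclose(lhs, v @ (A + A.T) @ w)   # here identically zero
\end{verbatim}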
Although factorizations $D=S^{-1}A$ are ubiquitous in finite element
methods, they have been less studied elsewhere. They were introduced
for finite difference methods in \cite{kr-sc} (see \cite{olsson} for
more recent developments) and for spectral methods in \cite{ca-go}, in which
the connection between spectral collocation and Galerkin methods was used
to explain the skew-adjoint structure of some differentiation matrices.
Let ${\operator H}(u)$ be a continuum conserved quantity, the {\em energy.}
We consider PDEs
\begin{equation}
\label{eq:hamilt_pde}
\dot u = {\operator D}(u)\frac{\delta{\operator H}}{\delta u}
\mbox{,}
\end{equation}
and corresponding ``linear-gradient'' spatial discretizations
\cite{mclachlan2,mclachlan1,mqr:prl}, ODEs of
the form
\begin{equation}
\label{eq:lin_grad}
\dot {\mathbf u} = L({\mathbf u}) \nabla H({\mathbf u})
\end{equation}
with appropriate discretizations of $u$, ${\operator D}$, ${\operator H}$, and
$\delta/\delta u$. For a PDE of the form (\ref{eq:hamilt_pde}), if
${\operator D}(u)$ is formally skew-adjoint, then $d{\operator H}/dt$ depends only on the
total energy flux through the boundary; if this flux
is zero, ${\operator H}$ is an integral. Analogously, if
(\ref{eq:lin_grad}) holds, then
$\dot H = \frac{1}{2}(\nabla H)^{\mathrm T} (L+L^{\mathrm T}) \nabla H$,
so that $H$ cannot increase if the symmetric part of $L$ is
negative definite, and $H$ is an integral if $L$ is antisymmetric.
Conversely, all systems with an integral can be written in
``skew-gradient'' form ((\ref{eq:lin_grad}) with $L$ antisymmetric)
\cite{mqr:prl}.
Hamiltonian systems are naturally in the form
(\ref{eq:hamilt_pde}) and provide examples.
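The conservation argument can be checked directly (a sketch; $L$ and the gradient below are arbitrary random choices):
\begin{verbatim}
# Sketch: for udot = L grad H with L antisymmetric, dH/dt vanishes.
import numpy as np

rng = np.random.default_rng(6)
n = 4
L = rng.standard_normal((n, n))
L = L - L.T                            # keep only the antisymmetric part
gradH = rng.standard_normal(n)         # gradient of H at some state u
udot = L @ gradH                       # right-hand side of the ODE
assert np.isclose(gradH @ udot, 0.0)   # dH/dt = gradH . udot = 0
\end{verbatim}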
This paper summarizes \cite{mc-ro}, which contains
proofs and further examples.
\subsection*{2. Discretizing conservative PDEs}
In (\ref{eq:hamilt_pde}), we want to allow constant operators such as
${\operator D}=\partial_x^n$ and ${\operator D} = \left(
\begin{smallmatrix}0 & 1 \\ -1 & 0\\ \end{smallmatrix}
\right)$, and nonconstant ones such as
${\operator D}(u) = u\partial_x + \partial_x u$.
These differ in the class of functions and boundary conditions which make
them skew-adjoint, which suggests Defn. 1 below.
Let (${\functionspace F},\ip{}{})$ be an inner product space.
We use two subspaces ${\functionspace F}_0$ and ${\functionspace F}_1$ which can be infinite dimensional
(in defining a PDE) or finite dimensional (in defining a discretization).
We write $\{f_j\}$ for a basis of
${\functionspace F}_0$, $\{g_j\}$ for a basis of ${\functionspace F}_1$, and expand $u=u_j f_j$,
collecting the coefficients $(u_j)$ into a vector ${\mathbf u}$.
A cardinal basis is one in which $f_j(x_i) = \delta_{ij}$, so that
$u_j = u(x_j)$.
\begin{definition}
A linear operator
$${\operator D}: {\functionspace F}_0\times {\functionspace F}_1 \to {\functionspace F},\quad {\operator D}(u)v\mapsto w\mbox{,}$$
is {\em formally skew-adjoint} if there is a functional $b(u,v,w)$,
depending only on the boundary values of $u$, $v$, and $w$ and their
derivatives up to a finite order, such that
$$
\ip{v}{{\operator D}(u)w} = -\ip{w}{{\operator D}(u)v}+b(u,v,w)\quad \forall\, u\in {\functionspace F}_0
,\ \forall\, v,w\in {\functionspace F}_1 .
$$
${\functionspace F}_1$ is called a {\em domain of interior skewness} of ${\operator D}$.
If $b(u,v,w) = 0$ $\forall\,u\in{\functionspace F}_0$, $\forall\,v,w\in{\functionspace F}_1$,
${\functionspace F}_1$ is called a {\em domain of skewness} of ${\operator D}$,
and we say that ${\operator D}$ is skew-adjoint.
\end{definition}
\begin{example}\rm Let ${\functionspace F}^{\rm pp}(n,r) = \{u\in C^r([-1,1]):u|_{[x_i,x_{i+1}]}
\in {\functionspace P}_n\}$ be the piecewise polynomials of degree $n$ with $r$ derivatives.
For ${\operator D}=\partial_x$,
${\functionspace F}^{\rm pp}(n,r)$, $n,\ r\ge 0$, is a domain of interior
skewness, i.e., continuity suffices,
and $\{u\in{\functionspace F}^{\rm pp}(n,r):u(\pm 1)=0\}$ is a domain of skewness.
\end{example}
\begin{example}\rm
With ${\operator D}(u) = 2(u\partial_x + \partial_x u) + \partial_{xxx}$, we have
$$
\ip{v}{{\operator D}(u)w}+\ip{w}{{\operator D}(u)v} = [w_{xx}v - w_x v_x + w v_{xx} + 4 uvw],$$
so suitable domains of interior skewness are ${\functionspace F}_0 = {\functionspace F}^{\rm
pp}(1,0)$, ${\functionspace F}_1={\functionspace F}^{\rm pp}(3,2)$, i.e., more smoothness is required
from $v$ and $w$ than from $u$.
A boundary condition which makes ${\operator D}(u)$ skew is $\{v:
v(\pm 1)=0,\ v_x(1)=v_x(-1) \}$.
\end{example}
\begin{definition} ${\functionspace F}_0$ is
{\em natural for ${\operator H}$} if $\forall u \in {\functionspace F}_0$ there exists
$\frac{\delta {\operator H}}{\delta u}\in{\functionspace F}$ such that
\[
\lim_{\varepsilon\rightarrow 0}
\frac{ {\operator H}(u+\varepsilon v) - {\operator H}(u) }{ \varepsilon }
= \ip{v}{\frac{\delta {\operator H}}{\delta u}}
\quad \forall\, v\in{\functionspace F}
\mbox{.}
\]
\end{definition}
The naturality of ${\functionspace F}_0$ often follows from the vanishing of the
boundary terms, if any, which appear in the first variation of ${\operator H}$,
together with mild smoothness assumptions.
We use appropriate
spaces ${\functionspace F}_0$ and ${\functionspace F}_1$ to generate spectral, pseudospectral, and
finite element discretizations which have discrete energy
$H:={\operator H}|_{{\functionspace F}_0}$ as a conserved quantity. The discretization of the
differential operator ${\operator D}$ is a linear operator $\overline{\operator D}
:{\functionspace F}_1\to{\functionspace F}_0$, and the discretization of the variational derivative
$\frac{\delta\H}{\delta u}$ is $\overline{\frac{\delta\H}{\delta u}}\in{\functionspace F}_1$.
Each of $\overline {\operator D}$ and $\overline{\frac{\delta\H}{\delta u}}$ is a weighted residual
approximation \cite{finlayson}, but each uses spaces of
weight functions different from its space of trial functions.
\begin{definition}
$S$ is the matrix of $\ip{}{}|_{{\functionspace F}_0\times{\functionspace F}_1}$, i.e.
$S_{ij} := \ip{f_i}{g_j}$.
$A(u)$ is the matrix of the linear operator ${\operator A}:(v,w)\mapsto\ip{v}{{\operator D}(u)w}$,
i.e. $A_{ij}(u) := \ip{g_i}{{\operator D}(u)g_j}$.
\end{definition}
\begin{proposition}
Let ${\functionspace F}_0$ be natural for ${\operator H}$ and let $S$ be nonsingular. Then for
every $u\in{\functionspace F}_0$ there is a unique element
$\overline{\frac{\delta {\operator H}}{\delta u}}\in{\functionspace F}_1$ such that
\[
\ip{w}{\overline{\frac{\delta {\operator H}}{\delta u}}} =
\ip{w}{\frac{\delta {\operator H}}{\delta u}} \quad \forall\, w\in{\functionspace F}_0
\mbox{.}
\]
Its coordinate representation is $S^{-1}\nabla H$ where $H({\mathbf u}):={\operator H}(u_i f_i)$.
\end{proposition}
\begin{proposition}
\label{prop:D}
Let $S$ be nonsingular. For every $v\in{\functionspace F}_1$, there exists a
unique element $\overline{\D}v\in{\functionspace F}_0$ satisfying
\[
\ip{\overline{\D}v}{w} = \ip{{\operator D} v}{w} \quad \forall\, w\in{\functionspace F}_1 \mbox{.}
\]
The map $v\mapsto\overline{\D}v$ is linear, with matrix representation $D:=S^{-\mathrm T} A$.
\end{proposition}
\begin{definition}
$\overline{{\operator D}}\overline{\frac{\delta{\operator H}}{\delta u}}:{\functionspace F}_0\to{\functionspace F}_0$
is the {\em dual composition discretization} of
${\operator D}\frac{\delta{\operator H}}{\delta u}$.
\end{definition}
Its matrix representation is $S^{-\mathrm T} A S^{-1} \nabla H$.
The name ``dual composition'' comes from the dual roles played
by ${\functionspace F}_0$ and ${\functionspace F}_1$ in defining $\overline{{\operator D}}$
and $\overline{\frac{\delta\H}{\delta u}}$
which is necessary so that their composition has the required
linear-gradient structure.
Implementation and accuracy of
dual composition and Galerkin discretizations are similar. Because
they coincide in simple cases, such methods are widely used already.
\begin{proposition}
If ${\functionspace F}_1$ is a domain of skewness, the matrix $S^{-\mathrm T} A S^{-1}$
is antisymmetric, and the system of ODEs
\begin{equation}
\label{eq:disc}
\dot{\mathbf u}
=
S^{-\mathrm T} A S^{-1} \nabla H
\end{equation}
has $H$ as an integral. If, in addition, ${\operator D}$ is constant---i.e.,
does not depend on $u$---then the system (\ref{eq:disc}) is Hamiltonian.
\end{proposition}
The method of dual compositions also yields
discretizations of linear differential operators ${\operator D}$ (by taking
${\operator H}=\frac{1}{2}\ip{u}{u}$), and discretizations of variational
derivatives (by taking ${\operator D}=1$).
It also applies to formally {\em self}-adjoint
${\operator D}$'s and to mixed (e.g. advection-diffusion) operators, where
preserving symmetry gives control of the energy.
The composition of two weighted residual discretizations is not
necessarily itself of weighted residual type. The simplest case is
when ${\functionspace F}_0={\functionspace F}_1$ and we compare the dual composition to the
{\em Galerkin discretization}, a weighted
residual discretization of ${\operator D} \frac{\delta {\operator H}}{\delta u}$ with
trial functions and weights both in ${\functionspace F}_0$. They are the same when
projecting $\frac{\delta\H}{\delta u}$ to ${\functionspace F}_0$, applying ${\operator D}$, and
again projecting to ${\functionspace F}_0$, is equivalent to directly projecting
${\operator D}\frac{\delta\H}{\delta u}$ to ${\functionspace F}_0$.
For brevity, we assume ${\functionspace F}_0={\functionspace F}_1$ for the rest of Section 2.
\begin{proposition}
\label{prop:galerkin}
$\overline{{\operator D}}\overline{\frac{\delta\H}{\delta u}}
$ is the Galerkin approximation of
${\operator D} \frac{\delta\H}{\delta u}$ if and only if
$ {\operator D} \big( \overline{\frac{\delta\H}{\delta u}} - \frac{\delta\H}{\delta u} \big) \perp {\functionspace F}_0.$
This occurs if
(i) ${\operator D}({\functionspace F}_0^\perp)\perp{\functionspace F}_0$, or
(ii) $\overline{\operator D}$ is exact and applying ${\operator D}$ and orthogonal
projection to ${\functionspace F}_0$ commute, or
(iii) $\overline{\frac{\delta {\operator H}}{\delta u}}$ is exact,
i.e., $\frac{\delta\H}{\delta u}\in{\functionspace F}_0$.
\end{proposition}
Fourier spectral methods with ${\operator D}=\partial_x^n$ satisfy (ii), since
then ${\functionspace F}$ has an orthogonal
basis of eigenfunctions ${\mathrm e}^{ijx}$ of ${\operator D}$, and differentiating
and projecting (dropping the high modes) commute. This is illustrated
later for the KdV equation.
The most obvious situation in which $\frac{\delta\H}{\delta u}\in{\functionspace F}_0$ is when
${\operator H}=\frac{1}{2}\ip{u}{u}$, since then
$\frac{\delta\H}{\delta u}=u\in{\functionspace F}_0$ and ${\operator D}\frac{\delta {\operator H}}{\delta u}={\operator D} u$,
and the discretization of ${\operator D}$ is obviously the Galerkin one!
When the functions $f_j$ are nonlocal, $D$ is often called the
spectral differentiation matrix. The link to standard pseudospectral
methods is that some Galerkin methods are pseudospectral.
\begin{proposition}
\label{prop:pseudo}
If ${\operator D}({\functionspace F}_1)\subseteq{\functionspace F}_1$, then $\overline{{\operator D}}v={\operator D} v$,
i.e., the Galerkin approximation of the derivative is exact.
If, further, $\{f_j\}$ is a cardinal basis,
then $D$ is the standard pseudospectral differentiation matrix,
i.e. $D_{ij} = {\operator D} f_j(x_i)$.
\end{proposition}
We want to emphasize that although $A$, $S$, and $D$ depend on the basis,
$\overline{\operator D}$ depends only on ${\functionspace F}_0$ and ${\functionspace F}_1$, i.e., it is
basis and grid independent.
In the factorization $D=S^{-\mathrm T} A$, the (anti)symmetry of $A$ and $S$ is basis
independent, unlike that of $D$. These points are well known in
finite elements, less so in pseudospectral methods.
\begin{example}[\bf Fourier differentiation\rm]\rm
Let ${\functionspace F}_1$ be the trigonometric polynomials of degree $n$, which is
closed under differentiation (so that Prop. \ref{prop:pseudo} applies),
and is a domain of skewness of ${\operator D}=\partial_x$. In any basis, $A$ is
antisymmetric. Furthermore, the two popular bases, $\{\sin(j x)\}_{j=1}^n
\cup \{\cos(j x)\}_{j=0}^n$, and the cardinal basis on equally-spaced grid
points, are both orthogonal, so that $S=\alpha I$ and $D=S^{-1}A$ is
antisymmetric in both cases.
\end{example}
\begin{example}[\bf Polynomial differentiation\rm]\rm
\label{sec:cheb}
${\functionspace F}_1={\functionspace P}_n([-1,1])$ is a domain of interior skewness which is
closed under ${\operator D}=\partial_x$, so pseudospectral differentiation
factors as $D=S^{-1}A$ in any basis. For a cardinal
basis which includes $x_0=-1$, $x_n=1$, we have $(A+A^{\mathrm T})_{ij}=-1$
for $i=j=0$, $1$ for $i=j=n$, and 0 otherwise, making obvious
the influence of the boundary.
For the Chebyshev points $x_i = -\cos(i
\pi/n)$, $i=0,\dots,n$, $A$ can be evaluated first in a basis
$\left\{ T_i \right\}$ of Chebyshev polynomials:
one finds
$A_{ij}^{\rm cheb} = 2 j^2/(j^2-i^2)$ for $i-j$ odd, and
$S_{ij}^{\rm cheb} = -2(i^2+j^2-1)/
[((i+j)^2-1)((i-j)^2-1)]$ for $i-j$ even, with other entries 0.
Changing to a cardinal basis by
$F_{ij} = T_j(x_i) = \cos(i j \pi/n)$, a
discrete cosine transform, gives $A=F^{-1} A^{\rm cheb} F^{-\mathrm T}$.
For example, with $n=3$
(so that $(x_0,x_1,x_2,x_3)=(-1,-\frac{1}{2}, \frac{1}{2},1)$), we have
$$ D =
{\scriptstyle \frac{1}{6}}
\left(
\begin{smallmatrix}
-19 & 24 & -8 & 3 \\
-6 & 2 & 6 & -2 \\
2 & -6 & -2 & 6 \\
-3 & 8 & -24 & 19 \\
\end{smallmatrix}
\right)
= S^{-\mathrm T} A =
{\scriptstyle \frac{1}{512}}
\left(
\begin{smallmatrix}
4096 & -304 & 496 & -1024\\
-304 & 811 & -259 & 496\\
496 & -259 & 811 & -304\\
-1024 & 496 & -304 & 4096\\
\end{smallmatrix}
\right)
{\scriptstyle \frac{1}{270}}
\left(
\begin{smallmatrix}
-135 & 184 & -72 & 23 \\
-184 & 0 & 256 & -72\\
72 & -256 & 0 & 184 \\
-23 & 72 & -184 & 135
\end{smallmatrix}
\right).
$$
$S$ and $A$ may be more amenable to study than $D$ itself.
All their eigenvalues are very well-behaved; none are spurious. The
eigenvalues of $A$ are all imaginary and, as $n\to\infty$, uniformly
fill $[-i\pi,i\pi]$ (with a single zero eigenvalue corresponding
to the Casimir of $\partial_x$).
The eigenvalues of $S$ closely approximate the
quadrature weights of the Chebyshev grid.
\end{example}
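All of the above is easy to reproduce numerically. The sketch below (a convenience script of ours, not part of \cite{ca-go}) assembles $A^{\rm cheb}$ and $S^{\rm cheb}$ from the stated formulas, changes to the cardinal basis, and verifies both the boundary structure of $A+A^{\mathrm T}$ and the exactness of $D$ on ${\functionspace P}_n$; the degree $n=8$ is an arbitrary choice.
\begin{verbatim}
import numpy as np

n = 8                                        # polynomial degree (arbitrary)
Acheb = np.zeros((n + 1, n + 1))
Scheb = np.zeros((n + 1, n + 1))
for i in range(n + 1):
    for j in range(n + 1):
        if (i - j) % 2 == 1:                 # A^cheb entries: i - j odd
            Acheb[i, j] = 2.0 * j**2 / (j**2 - i**2)
        else:                                # S^cheb entries: i - j even
            Scheb[i, j] = -2.0 * (i**2 + j**2 - 1) / (
                ((i + j)**2 - 1) * ((i - j)**2 - 1))

x = -np.cos(np.arange(n + 1) * np.pi / n)    # Chebyshev points, ascending
F = np.cos(np.outer(np.arange(n + 1), np.arange(n + 1)) * np.pi / n)
Fi = np.linalg.inv(F)                        # F[i,j] = T_j(x_i), symmetric
A = Fi @ Acheb @ Fi.T                        # cardinal-basis matrices
S = Fi @ Scheb @ Fi.T
D = np.linalg.solve(S.T, A)                  # D = S^{-T} A

B = np.zeros_like(A); B[0, 0], B[-1, -1] = -1.0, 1.0
print(np.allclose(A + A.T, B))               # boundary structure of A + A^T
print(np.allclose(D @ x**3, 3 * x**2))       # exact differentiation on P_n
\end{verbatim}
Replacing the explicit inverse of $F$ by discrete cosine transforms gives the fast version.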
For ${\operator D}\ne\partial_x$, $\overline{{\operator D}}$ may be quite expensive
and no longer pseudospectral. (There is in general no
$S$ with respect to which the pseudospectral approximation of
${\operator D} v$ is skew-adjoint.) However, $\overline{{\operator D}}v$ can
be computed quickly if fast transforms between cardinal and
orthonormal bases exist. We evaluate ${\operator D} v$ exactly for
$v\in{\functionspace F}_1$ and then project $S$-orthogonally to ${\functionspace F}_1$.
\begin{example}[\bf Fast Fourier Galerkin method\rm]\rm
\label{fastfourier}
Let ${\operator D}(u)$ be linear in $u$, for example, ${\operator D}(u) = u\partial_x
+ \partial_x u$. Let $u,\ v\in{\functionspace F}_1$, the trigonometric
polynomials of degree $n$. Then ${\operator D}(u)v$ is
a trigonometric polynomial of degree $2n$, the first $n$ modes of
which can be evaluated exactly using antialiasing and Fourier
pseudospectral differentiation. The approximation whose error
is orthogonal to ${\functionspace F}_1$ is just these first $n$ modes, because $S=I$
in the spectral basis. That is, the antialiased
pseudospectral method is here identical to the Galerkin method, and hence
skew-adjoint. Antialiasing makes pseudospectral methods conservative.
This is the case of the linear ${\operator D}$'s of the Euler fluid equations.
\end{example}
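The conservation statement can be seen in a few lines. The sketch below (illustrative only; the grid size and the random field are arbitrary) forms the quadratic term $u u_x$ both pointwise (aliased) and by 3/2-rule zero padding (dealiased, i.e., Galerkin), and evaluates the discrete energy rate $\sum_i u_i w_i$; it vanishes to rounding only for the dealiased product, as the skew-adjointness argument requires.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 64                                       # grid points (arbitrary)
m = np.fft.fftfreq(N, 1.0 / N)               # integer wavenumbers
uh = np.fft.fft(rng.standard_normal(N))      # full-spectrum test field
uh[N // 2] = 0.0                             # drop the Nyquist mode
u = np.fft.ifft(uh).real
uh, uxh = np.fft.fft(u), 1j * m * np.fft.fft(u)

def refine(ah, M):                           # zero-pad a spectrum to M points
    out = np.zeros(M, complex)
    out[:N // 2], out[-(N // 2):] = ah[:N // 2], ah[-(N // 2):]
    return out * (M / N)

M = 3 * N // 2                               # 3/2-rule fine grid
w_alias = u * np.fft.ifft(uxh).real          # aliased pointwise product u u_x
fine = np.fft.ifft(refine(uh, M)).real * np.fft.ifft(refine(uxh, M)).real
ph = np.fft.fft(fine) * (N / M)
wh = np.zeros(N, complex)
wh[:N // 2], wh[-(N // 2):] = ph[:N // 2], ph[-(N // 2):]
wh[N // 2] = 0.0                             # truncate to modes |m| < N/2
w_dealias = np.fft.ifft(wh).real             # Galerkin (dealiased) product

print(abs(u @ w_alias) / N, abs(u @ w_dealias) / N)  # only the 2nd is ~0
\end{verbatim}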
\begin{example}[\bf Fast Chebyshev Galerkin method\rm]\rm
Let ${\operator D}(u)$ be linear in $u$ and let $u,\ v\in{\functionspace F}_1={\functionspace P}_n$.
With respect to the cardinal basis on the Chebyshev grid with $n+1$
points, $\overline{{\operator D}}(u)v$ can be computed in time ${\mathcal O}(n \log n)$ as follows:
(i)
Using an FFT, express $u$ and $v$ as Chebyshev polynomial
series of degree $n$;
(ii) Pad with zeros to get Chebyshev polynomial series of formal
degree $2n$;
(iii) Transform back to a Chebyshev grid with $2n+1$ points;
(iv) Compute the pseudospectral approximation of ${\operator D}(u)v$ on the
denser grid. Being a polynomial of degree $\le 2n$, the
corresponding Chebyshev polynomial series is exact;
(v) Convert ${\operator D}(u)v$ to a Legendre polynomial series using a fast
transform \cite{al-ro};
(vi) Take the first $n+1$ terms. This produces
$\overline{{\operator D}}(u)v$, because the Legendre
polynomials are orthogonal.
(vii) Convert to a Chebyshev polynomial series with $n+1$ terms
using a fast transform;
(viii) Evaluate at the points of the original Chebyshev grid using an FFT.
\end{example}
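A direct Python transcription of steps (i)--(viii) is given below as a sketch for moderate $n$: it takes ${\operator D}(u)v = u v_x + (uv)_x$ as a concrete example and substitutes dense \texttt{numpy.polynomial} conversions for the fast transforms of \cite{al-ro}, so it runs in ${\mathcal O}(n^2)$ and round-trips through the monomial basis, which is only well-conditioned for small $n$.
\begin{verbatim}
import numpy as np
from numpy.polynomial import chebyshev as C, legendre as L

def dual_Dv(u, v, n, x):
    """Galerkin projection of D(u)v = u v_x + (u v)_x onto P_n (sketch)."""
    cu, cv = C.chebfit(x, u, n), C.chebfit(x, v, n)  # (i) interpolate
    w = C.chebadd(C.chebmul(cu, C.chebder(cv)),      # (ii)-(iv) exact
                  C.chebder(C.chebmul(cu, cv)))      #   product, deg <= 2n
    leg = L.poly2leg(C.cheb2poly(w))[:n + 1]         # (v)-(vi) truncate in
    cw = C.poly2cheb(L.leg2poly(leg))                #   the Legendre basis
    return C.chebval(x, cw)                          # (vii)-(viii) evaluate

n = 8
x = -np.cos(np.arange(n + 1) * np.pi / n)            # Chebyshev grid
print(dual_Dv(np.sin(x), np.cos(x), n, x))
\end{verbatim}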
\subsection*{3. Examples of the dual composition method}
\begin{example}[\bf The KdV equation\rm]\rm
$ \dot u + 6 u u_x + u_{xxx}=0$ with
periodic boundary conditions has features which can be used to illustrate
various properties of the dual composition method. Consider two of its
Hamiltonian forms,
$$
\dot u = {\operator D}_1\frac{\delta\H_1}{\delta u}\mbox{, } {\operator D}_1 =
\partial_x\mbox{, } {\operator H}_1 = \int\big( -u^3+\frac{1}{2} u_x^2\big)\,dx\mbox{,}$$
and
$$
\dot u = {\operator D}_2\frac{\delta {\operator H}_2}{\delta u}\mbox{, } {\operator D}_2 =
-(2u\partial_x + 2\partial_x u + \partial_{xxx})\mbox{, } {\operator H}_2 =
\frac{1}{2}\int u^2\,dx\mbox{.}$$
In the case ${\functionspace F}_0={\functionspace F}_1={\functionspace F}^{\rm trig}$, $v:=\overline{\frac{\delta\H_1}{\delta u}}$
is the orthogonal projection to ${\functionspace F}_0$ of $\frac{\delta\H_1}{\delta u}=-3u^2-u_{xx}$; this can be
computed by multiplying out the Fourier series and dropping all but
the first $n$ modes, or by antialiasing.
Then $\overline{{\operator D}}_1 v = v_x$, since
differentiation is exact in ${\functionspace F}^{\rm trig}$. Since ${\operator D}_1$
is constant, the discretization is a Hamiltonian system, and since
$\overline{{\operator D}}_1$ is exact on constants, it also
preserves the Casimir ${\mathcal C}=\int u\,dx$.
In this formulation, Prop. \ref{prop:galerkin} (ii) shows that
the dual composition and Galerkin approximations of ${\operator D}_1\frac{\delta\H_1}{\delta u}$ coincide,
for differentiation does not map high modes to lower modes, i.e.,
${\operator D}_1({\functionspace F}^{{\rm trig}\perp})\perp{\functionspace F}^{\rm trig}$.
In the second Hamiltonian form, $H_2 = \frac{1}{2}{\mathbf u}^{\mathrm T} S {\mathbf u}$, $\frac{\delta\H_2}{\delta u} =
S^{-1}\nabla H_2 = {\mathbf u},$ and the Galerkin approximation of $\frac{\delta\H_2}{\delta u}$ is exact,
so that Prop. \ref{prop:galerkin} (iii) implies that the composition
$\overline{\operator D}_2\overline{\frac{\delta\H_2}{\delta u}}$ {\em also} coincides with the Galerkin
approximation. $\overline{{\operator D}}_2v$ can be evaluated using antialiasing
as in Example \ref{fastfourier}. $\overline{{\operator D}}_2$ is
not a Hamiltonian operator, but still generates a skew-gradient
system with integral $H_2$. Thus in this (unusual) case,
the Galerkin and antialiased pseudospectral methods coincide and have
three conserved quantities,
$H_1$, $H_2$, and ${\mathcal C}|_{{\functionspace F}^{\rm trig}}$.
The situation for finite element methods with
${\functionspace F}_0={\functionspace F}_1={\functionspace F}^{\rm pp}(n,r)$ is different.
In the first form, we need $r\ge 1$ to ensure
that ${\functionspace F}_0$ is natural for ${\operator H}_1$; in the second form, naturality is
no restriction, but we need $r\ge2$ to ensure that ${\functionspace F}_1$ is a domain
of interior skewness. The first dual composition method
is still Hamiltonian with
integral $H_1$ and Casimir $C=u_i\int f_i\, dx$, but because
$\overline{\operator D}_1$ does not commute with projection to ${\functionspace F}_1$, it is {\em
not} a standard Galerkin method.
In the second form, $\frac{\delta\H_2}{\delta u}=u$ is still exact, so the
dual composition and Galerkin methods still coincide.
However, they are not Hamiltonian.
\end{example}
\begin{example}[\bf An inhomogeneous wave equation\rm]\rm
When natural and skew boundary conditions conflict, it is necessary
to take ${\functionspace F}_0\ne{\functionspace F}_1$. Consider
$ \dot q = a(x)p$, $\dot p = q_{xx}$, $q_x(\pm1,t)=0$.
This is a canonical Hamiltonian system with
$$ {\operator D} = \left(\begin{matrix}0 & 1 \\ -1 & 0 \\\end{matrix}\right),\
{\operator H} = \frac{1}{2}\int_{-1}^1 \big(a(x)p^2 + q_x^2\big)\, dx,\
\frac{\delta {\operator H}}{\delta q} = -q_{xx},\
\frac{\delta {\operator H}}{\delta p} = a(x)p.$$
Note that (i) the boundary condition is
natural for ${\operator H}$, and (ii)
no boundary conditions are required for ${\operator D}$ to be skew-adjoint in $L^2$.
Since $\overline{\frac{\delta\H}{\delta u}}$ is computed with trial functions in ${\functionspace F}_1$, we
should not include $q_x(\pm1)=0$ in ${\functionspace F}_1$, for this would be to
enforce $(-q_{xx})_x=0$.
In \cite{mc-ro} we show that a spectrally accurate dual composition method is
obtained with
$ {\functionspace F}_0 = \{ q\in {\functionspace P}_{n+2}: q_x(\pm 1)=0 \} \times {\functionspace P}_n$ and
$ {\functionspace F}_1 = {\functionspace P}_n\times {\functionspace P}_n$.
\end{example}
\subsection*{4. Quadrature of Hamiltonians}
\label{sec:quadrature}
Computing $\nabla H =\nabla{\operator H}(u_j f_j)$ is not always possible in closed form.
We would like to approximate ${\operator H}$ itself by quadratures in real space.
However, even if the discrete $H$ and its gradient are spectrally accurate
approximations, they cannot always be used to construct spectrally
accurate Hamiltonian discretizations.
In a cardinal basis,
let ${\operator H}=\int h(u)dx$ and define the
quadrature Hamiltonian $H_q:= h( u_j) w_j = {\mathbf w}^{\mathrm T} h({\mathbf u})$
where $w_j = \int f_j dx$ are the quadrature weights.
Since $\nabla H_q = W h'({\mathbf u})$, where $W:=\mathrm{diag}({\mathbf w})$,
we have $\frac{\delta\H}{\delta u}\approx W^{-1}\nabla H_q$. Unfortunately,
$DW^{-1}\nabla H_q$ is not a skew-gradient system, while
$D S^{-1} \nabla H_q$ is skew-gradient, but is not an accurate approximation.
$D W^{-1} \nabla H_q$ can only be a skew-gradient
system if $DW^{-1}$ is antisymmetric, which occurs in three general cases.
(i) On a constant grid, $W$ is a multiple of the identity, so
if $D$ is antisymmetric, $D W^{-1}$ is too.
(ii) On an arbitrary grid with $D=\left(
\begin{smallmatrix}
0 & I \\
-I & 0\\
\end{smallmatrix}\right)$,
$DW^{-1}$ is antisymmetric.
(iii) On a Legendre grid with ${\functionspace F}_0={\functionspace F}_1$,
$S=W$, and $D W^{-1} = W^{-1} A W^{-1}$ is antisymmetric.
The required compatibility between
$D$ and $W$ remains an intriguing and frustrating obstacle to the
systematic construction of conservative discretizations of strongly
nonlinear PDEs.
\section{Introduction: negative type and generalized roundness}\label{sec:1}
A notion of generalized roundness for metric spaces was introduced by Enflo \cite{En2} (see Definition \ref{NTGRDEF} (c)).
Enflo constructed a separable metric space of generalized roundness zero that is not uniformly
homeomorphic to any metric subspace of any Hilbert space. This showed that Hilbert spaces are not universal uniform
embedding spaces and thereby settled a question of Smirnov. Enflo's application of generalized roundness to the
uniform theory of Banach spaces remains unique and may, indeed, be regarded as an anomaly. The reason for this is
that generalized roundness is, for all intents and purposes, an isometric rather than uniform invariant. Indeed,
Lennard, Tonge and Weston \cite{LTW} have shown that the generalized roundness and supremal $p$-negative type of
any given metric space coincide. Negative type is a well-known classical isometric invariant whose
origin may be traced back to an 1841 paper of Cayley \cite{Cay}. We recall the relevant definitions here.
\begin{definition}\label{NTGRDEF} Let $p \geq 0$ and let $(X,d)$ be a metric space. Then:
\begin{enumerate}
\item[(a)] $(X,d)$ has $p$-{\textit{negative type}} if and only if for all integers $n \geq 2$,
all finite subsets $\{z_{1}, \ldots , z_{n} \} \subseteq X$, and all choices of real numbers $\zeta_{1},
\ldots, \zeta_{n}$ with $\zeta_{1} + \cdots + \zeta_{n} = 0$, we have:
\begin{eqnarray}\label{NT}
\sum\limits_{i,j = 1}^{n} d(z_{i},z_{j})^{p} \zeta_{i} \zeta_{j} & \leq & 0.
\end{eqnarray}
\item[(b)] $p$ is a \textit{generalized roundness exponent} of $(X,d)$ if and only if for all integers $n > 1$,
and all choices of points $x_{1}, \ldots , x_{n}, y_{1}, \ldots , y_{n} \in X$, we have:
\begin{eqnarray}\label{GR}
\sum\limits_{i,j = 1}^{n} \Bigl\{ d(x_{i},x_{j})^{p} + d(y_{i},y_{j})^{p} \Bigr\} & \leq &
2 \sum\limits_{i,j = 1}^{n} d(x_{i},y_{j})^{p}.
\end{eqnarray}
\item[(c)] The \textit{generalized roundness} of $(X,d)$ is defined to be the supremum of the set of all
generalized roundness exponents of $(X,d)$.
\end{enumerate}
\end{definition}
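Both conditions are finite-dimensional, so a given configuration can be tested numerically. The sketch below (our convenience code, not part of the cited literature) restricts the quadratic form in (\ref{NT}) to the hyperplane $\zeta_1+\cdots+\zeta_n=0$ for the sup metric on $\mathbb{R}^3$; a positive maximal eigenvalue certifies that the configuration violates $p$-negative type, hence that $p$ is not a generalized roundness exponent. Random configurations need not exhibit violations, which is precisely why the explicit construction of Section \ref{sec:3} is needed.
\begin{verbatim}
import numpy as np

def negtype_max_eig(z, p):
    """Largest eigenvalue of the form in (a) on {zeta : sum(zeta) = 0},
    for the sup metric on R^3; a positive value certifies failure of
    p-negative type for this configuration (p > 0 assumed)."""
    d = np.max(np.abs(z[:, None, :] - z[None, :, :]), axis=-1) ** p
    n = len(z)
    B = np.diff(np.eye(n), axis=1)   # columns e_{i+1}-e_i span the hyperplane
    return np.linalg.eigvalsh(B.T @ d @ B).max()

rng = np.random.default_rng(0)
z = rng.random((12, 3))              # a random configuration in l_inf^(3)
print(negtype_max_eig(z, 1.0))
\end{verbatim}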
There is a richly developed theory of negative type metrics that has stemmed from classical papers of
Cayley \cite{Cay}, Menger \cite{Me1, Me2, Me3} and Schoenberg \cite{Sc1, Sc2, Sc3}. Recently there
has been intense interest in negative type metrics due to their usefulness in algorithmic settings.
A prime illustration is given by the \textit{Sparsest Cut problem with relaxed demands} in combinatorial
optimization \cite{Lee, Nao}. There are a number of monographs that provide modern, in-depth, treatments
of the theory of negative type metrics, including Berg, Christensen and Ressel \cite{Ber}, Deza and
Laurent \cite{Dez}, and Wells and Williams \cite{Waw}.
We note that some natural embedding problems involve spaces such as $L_{p}$ with $0 < p < 1$. These spaces carry
the natural quasi-norm $\| \cdot \|_{L_{p}}$ together with the corresponding quasi-metric $d(x, y) = \| x - y \|_{L_{p}}$.
The terminology being used here, however, is not universal. By a \textit{quasi-metric} $d$ on a set $X$ we
mean a function $d$ that satisfies the usual conditions for a metric on $X$, save that the triangle inequality
is relaxed in the following manner: there is a constant $K \geq 1$ such that for all $x,y,z \in X$,
\[
d(x,y) \leq K \cdot \{ d(x,z) + d(z,y) \}.
\]
In the case of $L_{p}$ with $0 < p < 1$, the best possible (smallest) such constant $K$ is $2^{(1-p)/p}$.
The concepts of Definition \ref{NTGRDEF} apply equally well in the broader context of quasi-metric spaces.
It is an important result of Schoenberg \cite{Sc1} that a metric space can be isometrically embedded in some
Hilbert space if and only if it has $2$-negative type. The related problem of embedding Banach
spaces linearly and isometrically into $L_{p}$-spaces was raised by L\'{e}vy \cite{Lev} in 1937. In the case $0 < p \leq 2$,
Bretagnolle, Dacunha-Castelle and Krivine \cite[Theorem 2]{BDK} established that a real quasi-normed space $X$ is linearly
isometric to a subspace of some $L_{p}$-space if and only if $X$ has $p$-negative type. This result was applied
in \cite{BDK} to prove that $L_{q}$ embeds linearly and isometrically into $L_{p}$ if $0 < p < q \leq 2$.
However, in practice it is a hard task to determine if a given real quasi-normed space $X$ has $p$-negative type.
In 1938 Schoenberg
\cite{Sc3} raised the problem of determining those $p \in (0,2)$ for which $\ell_{q}^{(n)}$, $2 < q \leq \infty$, has
$p$-negative type. The cases $q = \infty$ and $2 < q < \infty$ were settled, respectively, by Misiewicz \cite{Mi1}
and Koldobsky \cite{Kol}: if $n \geq 3$ and $2 < q \leq \infty$, then $\ell_{q}^{(n)}$ is not linearly isometric to any
subspace of any $L_{p}$-space for which $0 < p \leq 2$. In other words, by \cite[Theorem 2]{BDK}, $\ell_{q}^{(n)}$
does not have $p$-negative type for any $p > 0$ if $n \geq 3$ and $2 < q \leq \infty$. The restriction $n \geq 3$ on the
dimension of $\ell_{q}^{(n)}$ is essential in this setting. It is well-known that every $2$-dimensional normed space is
linearly isometric to a subspace of $L_{1}$. (See, for example, Yost \cite{Yos}.) In particular, every $2$-dimensional
normed space has generalized roundness at least one.
For any $p \geq 0$, Lennard \textit{et al}.\ \cite[Theorem 2.4]{LTW} have shown that conditions (a) and (b) in
Definition \ref{NTGRDEF} are equivalent. Thus, as noted above, the generalized roundness and supremal $p$-negative type of
any given metric space coincide. It therefore follows from the results in \cite{BDK, Mi1, LTW} that $\ell_{\infty}^{(3)}$
has generalized roundness zero (see \cite[Theorem 2.8]{LTW}). Unfortunately, this indirect proof gives no
insight into the combinatorial geometry of $\ell_{\infty}^{(3)}$ that causes the inequalities (\ref{GR}) to fail
whenever $p > 0$. The main purpose of this paper is to rectify this situation by giving a direct proof that $\ell_{\infty}^{(3)}$
has generalized roundness zero.
The combinatorial geometry which forces $\ell_{\infty}^{(3)}$ to have generalized roundness zero
necessarily carries over to any real quasi-normed space $X$ that contains an isometric copy of $\ell_{\infty}^{(3)}$.
For example, Banach spaces such as $X = c_{0}$ or $C[0,1]$ have generalized roundness zero for the same geometric reasons as $\ell_{\infty}^{(3)}$.
In Section \ref{sec:2} we reformulate Definition \ref{NTGRDEF} (b) in terms of regular Borel measures of compact support.
This facilitates our direct proof that $\ell_{\infty}^{(3)}$ has generalized roundness zero. The argument proceeds by the analysis
of an explicit geometric construction. Equivalently, our arguments give an elementary new proof that $\ell_{\infty}^{(3)}$
does not have $p$-negative type for any $p > 0$. Section \ref{sec:4} completes the paper with a characterization of real
quasi-normed spaces of generalized roundness zero.
\section{A measure-theoretic reformulation of generalized roundness}\label{sec:2}
The purpose of this section is to introduce an equivalent formulation of generalized roundness
that is predicated in terms of measures. The equivalence of conditions (1) and (2) in the statement of
the following theorem is due to Lennard \textit{et al}.\ \cite[Theorem 2.2]{LTW}.
\begin{theorem}\label{equivalence}
Let $(X,d)$ be a metric space and suppose that $p \ge 0$. Then the following are equivalent:
\begin{enumerate}
\item $p$ is a generalized roundness exponent of $(X,d)$.
\item For all integers $N \geq 1$, all finite sets $\{ x_{1}, \ldots, x_{N} \} \subseteq X$,
and all collections of non-negative real numbers $m_{1}, \ldots, m_{N}, n_{1}, \ldots, n_{N}$ that
satisfy $m_{1} + \cdots + m_{N} = n_{1} + \cdots + n_{N}$, we have:
\[
\sum\limits_{i,j = 1}^{N} \bigl\{m_{i}m_{j} + n_{i}n_{j}\bigr\} d(x_{i},x_{j})^{p}
\leq 2 \sum\limits_{i,j =1}^{N} m_{i}n_{j}d(x_{i},x_{j})^{p}.
\]
\item For all regular Borel probability measures of compact support $\mu$ and $\nu$ on $X$, we have:
\begin{equation}\label{meas-p}
\iint_{X \times X} d(x,x')^p \, d\mu(x) d\mu(x')
+ \iint_{X \times X} d(y,y')^p \, d\nu(y) d\nu(y')
\le 2 \iint_{X \times X} d(x,y)^p \, d\mu(x) d\nu(y)
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof} Let $p > 0$.
Suppose there exists a finite set $\{ x_{1}, \ldots, x_{N} \} \subseteq X$,
and corresponding non-negative real numbers $m_{1}, \ldots, m_{N}, n_{1}, \ldots, n_{N}$ (not all zero)
that satisfy $m_{1} + \cdots + m_{N} = n_{1} + \cdots + n_{N}$, such that
\[
\sum\limits_{i,j = 1}^{N} \bigl\{m_{i}m_{j} + n_{i}n_{j}\bigr\} d(x_{i},x_{j})^{p}
> 2 \sum\limits_{i,j =1}^{N} m_{i}n_{j}d(x_{i},x_{j})^{p}.
\]
By normalization, we may assume that $m_{1} + \cdots + m_{N} = 1 = n_{1} + \cdots + n_{N}$. One may then define
probability measures $\mu$ and $\nu$ on the set $\{ x_{1}, \ldots, x_{N} \}$ as follows: $\mu(\{ x_{i} \}) = m_{i}$ and
$\nu(\{ x_{i} \}) = n_{i}$ for all $i$, $1 \leq i \leq N$. This provides a suitable instance of (\ref{meas-p})
failing.
Conversely, suppose that $\mu$ and $\nu$ are measures such that inequality (\ref{meas-p}) fails. Let $X_0$ be a compact set
containing the support of the two measures. For $n > 0$ let $S_n = \{x_j\}_{j=1}^N$ be a set of points so that the
balls $B(x_i,1/n)$ cover $X_0$. For $x \in X_0$, let $\alpha_n(x)$ equal the element of $S_n$ closest to $x$ (where,
in case of a tie, one takes the element with the smallest index). Let $f_n: X_0 \times X_0 \to \mathbb{R}$ be defined by $f_n(x,y)
= d(\alpha_n(x),\alpha_n(y))^p$. Since the map $(x,y) \mapsto d(x,y)^p$ is uniformly continuous on $X_0 \times X_0$
and $\alpha_n$ converges uniformly to the identity on $X_0$, it is easy to check that $f_n(x,y) \to d(x,y)^p$ uniformly
on $X_0 \times X_0$ and hence that the integrals of $f_n$ with respect to the product measures $\mu \times \nu$, $\nu \times \nu$
and $\mu \times \mu$ converge to the corresponding integrals of $d(\cdot,\cdot)^p$.
Since $f_n$ is a simple function, integrals of it are of the form $\sum_{i,j=1}^N c_{ij} d(x_i,x_j)^p$.
Indeed, if we set $m_i = \mu(\{x \,: \, \alpha_n(x) = x_i\})$ and $n_i = \nu(\{x \,: \, \alpha_n(x) = x_i\})$, then
\begin{multline}\label{f_n}
2 \iint_{X \times X} f_n(x,y) \, d\mu(x) d\nu(y)
- \iint_{X \times X} f_n(x,x') \, d\mu(x) d\mu(x')
- \iint_{X \times X} f_n(y,y') \, d\nu(y) d\nu(y') \\
= 2 \sum_{i,j=1}^N m_i n_j d(x_i,x_j)^p
- \sum_{i,j=1}^N m_i m_j d(x_i,x_j)^p
- \sum_{i,j=1}^N n_i n_j d(x_i,x_j)^p.
\end{multline}
Since inequality (\ref{meas-p}) fails, for large enough $n$, the left-hand side of (\ref{f_n}) is negative.
\end{proof}
In the formulation of Definition \ref{NTGRDEF} (b) one may assume that the sets $\{ x_{i} \}$ and $\{ y_{i} \}$ are disjoint.
This is due to cancellation of like terms. In the measure setting, this corresponds to the measures having disjoint support.
We note that analysis of negative type and hypermetric inequalities using measures is not novel. See, for example,
Nickolas and Wolf \cite[Theorem 3.2]{NIC}.
\section{A direct proof that $\ell_{\infty}^{(3)}$ has generalized roundness zero}\label{sec:3}
We now give a direct proof that $\ell_{\infty}^{(3)}$ has generalized roundness $0$ on the basis of a
geometric construction.
\begin{theorem}
For all $p > 0$, $p$ is not a generalized roundness exponent of $\ell_{\infty}^{(3)}$.
\end{theorem}
\begin{proof} Let $d$ denote the metric on $\mathbb{R}^{3}$ induced by the norm of $\ell_{\infty}^{(3)}$.
Fix $p > 0$. Fix $L > 2$ and define the sets
\begin{align*}
S_\mu &= \{(t,\pm 1,0) \, :\, -L \le t \le L\}, \text{ and} \\
S_\nu &= \{(t,0,\pm 1) \, :\, -L \le t \le L\}.
\end{align*}
So each set is made up of a pair of parallel lines of length $2L$. Define the measures $\mu = \mu_L$ and $\nu = \nu_L$
to be one-dimensional Lebesgue measure supported on the corresponding sets $S_\mu$ and $S_\nu$.
Clearly
\[ \iint_{\mathbb{R}^{3} \times \mathbb{R}^{3}} d(x,x')^p \, d\mu(x) d\mu(x')
= \iint_{\mathbb{R}^{3} \times \mathbb{R}^{3}} d(y,y')^p \, d\nu(y) d\nu(y'), \]
and so we just need to show that, for sufficiently large $L$,
\begin{equation}\label{eq1}
\iint_{\mathbb{R}^{3} \times \mathbb{R}^{3}} d(x,y)^p \, d\mu(x) d\mu(y) > \iint_{\mathbb{R}^{3} \times \mathbb{R}^{3}} d(x,y)^p \, d\mu(x) d\nu(y).
\end{equation}
Now
\begin{align*}
&\iint_{\mathbb{R}^{3} \times \mathbb{R}^{3}} d(x,y)^p \, d\mu(x) d\mu(y)\\
&\qquad = \int_{-L}^L \left( \int_{-L}^L \Vert (t,1,0) - (s,1,0) \Vert^p \, ds
+ \int_{-L}^L \Vert (t,1,0) - (s,-1,0) \Vert^p \, ds\right) \,dt \\
&\qquad\qquad +
\int_{-L}^L \left( \int_{-L}^L \Vert (t,-1,0) - (s,1,0) \Vert^p \, ds
+ \int_{-L}^L \Vert (t,-1,0) - (s,-1,0) \Vert^p \, ds \right) \,dt \\
&\qquad= 2 \Bigl( \int_{-L}^L \int_{-L}^L \Vert (t,1,0) - (s,1,0) \Vert^p \, ds\, dt
+\int_{-L}^L \int_{-L}^L \Vert (t,1,0) - (s,-1,0) \Vert^p \, ds\, dt
\Bigr) \\
&\qquad= 2 \Bigl( \int_{-L}^L \int_{-L}^L |t-s|^p \, ds\, dt
+ \int_{-L}^L \int_{-L}^L \max(|t-s|,2)^p \, ds\, dt \Bigr).
\end{align*}
For fixed $t$,
\[ \int_{-L}^L |t-s|^p \, ds = {\frac { \left( t+L \right) ^{p+1}}{p+1}}+{\frac { \left( L-t \right)
^{p+1}}{p+1}}, \]
and so
\[ T_1 = \int_{-L}^L \int_{-L}^L |t-s|^p \, ds\, dt
= 8\,{\frac {{2}^{p}{L}^{p+2}}{{p}^{2}+3\,p+2}}.
\]
The other term needs to be split into pieces. If $-L \le t \le -L+2$, then
\[ \int_{-L}^L \max(|t-s|,2)^p \, ds
= {2}^{p} \left( t+2+L \right) +
{\frac { \left( L-t \right)^{p+1} - {2}^{p+1}}{p+1}},
\]
and so
\begin{align*}
\int_{-L}^{-L+2} & \int_{-L}^L \max(|t-s|,2)^p \, ds\, dt
= \int_{L-2}^{L} \int_{-L}^L \max(|t-s|,2)^p \, ds\, dt \\
&= - \,{\frac{2^{p+1} \bigl( 2\,{L}^{2} (L-1)^{p}
-3\,{p}^{2}
-4\,L (L-1)^{p}
-{2}\,{L}^{p+2}
-7\,p
+2\,( L-1)^{p}
-2 \bigr)}
{{p}^{2}+3\,p+2}}.
\end{align*}
If $-L+2 \le t \le L-2$, then
\begin{align*}
\int_{-L}^L \max(|t-s|,2)^p \, ds
&= \int _{-L}^{t-2}\! \left( t-s \right) ^{p}{ds}+4\cdot{2}^{p}+\int _{t+2}^
{L}\! \left( s-t \right) ^{p}{ds}
\\
&= {\frac { \left( t+L \right) ^{p+1}-{2}^{p+1}}{p+1}}+4\cdot{2}^{p}+{\frac
{\left( L-t \right) ^{p+1}-{2}^{p+1}}{p+1}},
\end{align*}
and so
\begin{align*}
\int_{-L+2}^{L-2} & \int_{-L}^L \max(|t-s|,2)^p \, ds\, dt \\
&= {\frac{ 2^{p+3} \bigl(
( L-1)^{p}{L}^{2}
+ L {p}^{2}
- 2\,( L-1)^{p} L
+ 2\, L p
-2\,{p}^{2}
+(L-1)^{p}
- 4\,p
-1 \bigr)}
{{p}^{2}+3\,p+2}}.
\end{align*}
Adding the appropriate terms and simplifying gives
\[ T_2 = \int_{-L}^{L} \int_{-L}^L \max(|t-s|,2)^p \, ds \,dt
= {\frac{2^{p+2} \bigl(
{2}\,L{p}^{2}
+4\,L p
-{p}^{2}
+{2}\,{L}^{p+2}
-p \bigr)}
{{p}^{2}+3\,p+2}}.
\]
Combining $T_1$ and $T_2$, we get
\[ \iint_{\mathbb{R}^{3} \times \mathbb{R}^{3}} d(x,y)^p \, d\mu(x) d\mu(y)
= 2\,(T_1 + T_2)
={\frac{2^{p+3} \bigl(
2\,L{p}^{2}
+4\,Lp
-{p}^{2}
+4\,{L}^{p+2}
-p \bigr)}
{{p}^{2}+3\,p+2}}.
\]
We now turn to calculating the right-hand side of (\ref{eq1}). By symmetry
\[
\iint_{\mathbb{R}^{3} \times \mathbb{R}^{3}} d(x,y)^p \, d\mu(x) d\nu(y)
= 4 \, \Bigl(
\int_{-L}^{L} \int_{-L}^L \Vert (t,1,0) - (s,0,1) \Vert^p \,ds\,dt
\Bigr).
\]
If $-L \le t \le -L+1$, then
\begin{align*} \int_{-L}^L \Vert (t,1,0) - (s,0,1) \Vert^p \,ds
&= 1 \cdot (t+1 - (-L)) + \int_{t+1}^L (s-t)^p \, ds \\
&= t+1+L+{\frac {\left( L-t \right)^{p+1} - 1}{p+1}}.
\end{align*}
Thus
\begin{align*}
&\int_{-L}^{-L+1} \int_{-L}^L \Vert (t,1,0) - (s,0,1) \Vert^p \,ds\,dt
= \int_{L-1}^{L} \int_{-L}^L \Vert (t,1,0) - (s,0,1) \Vert^p \,ds\,dt \\
&\qquad= \frac {{2}^{p+3}{L}^{p+2}
- 8\, \left( 2\,L-1 \right)^{p}{L}^{2}
+ 8\, \left( 2\,L-1 \right)^{p} L
+3\,{p}^{2}
-2\, \left( 2\,L-1 \right)^{p}
+7\,p
+2}
{2({p}^{2}+3\,p+2)}.
\end{align*}
If $-L+1 \le t \le L-1$, then
\begin{align*}
\int_{-L}^L \Vert (t,1,0) - (s,0,1) \Vert^p \,ds
&= \int _{-L}^{t-1}\! \left( t-s \right)^{p}{ds}
+2\cdot 1
+\int _{t+1}^{L}\! \left( s-t \right)^{p}{ds} \\
&= \frac { \left( t+L \right)^{p+1}-1}{p+1}
+2
+ \frac {\left( L-t \right)^{p+1}-1 }{p+1},
\end{align*}
and so
\begin{align*}
&\int_{-L+1}^{L-1} \int_{-L}^L \Vert (t,1,0) - (s,0,1) \Vert^p \,ds\,dt \\
& \qquad
= 2\,{\frac {2\,{p}^{2}L
+ 4 \left( 2\,L-1 \right)^{p}{L}^{2}
- 4 \left( 2\,L-1 \right)^{p}L
+ \left( 2\,L-1 \right)^{p}
+ 4\,p L
- 1
- 2\,{p}^{2}
-4\,p}
{{p}^{2}+3\,p+2}}.
\end{align*}
Combining these gives
\begin{align*}
\iint_{\mathbb{R}^{3} \times \mathbb{R}^{3}} d(x,y)^p \, d\mu(x) d\nu(y)
&= 4 \int_{-L}^L \int_{-L}^L \Vert (t,1,0) - (s,0,1) \Vert^p \,ds\,dt \\
&= \frac{4 \bigl( 4\,L{p}^{2}+8\,Lp+8\,{2}^{p}{L}^{p+2}-{p}^{2}-p\bigr)}
{{p}^{2}+3\,p+ 2}.
\end{align*}
Let
\[ \Delta(L,p) = \frac{{p}^{2}+3\,p+ 2}{4p} \Bigl(
\iint_{\mathbb{R}^{3} \times \mathbb{R}^{3}} d(x,y)^p \, d\mu(x) d\nu(y)
- \iint_{\mathbb{R}^{3} \times \mathbb{R}^{3}} d(x,y)^p \, d\mu(x) d\mu(y) \Bigr).
\]
From the above calculations we see that
$\Delta(L,p) =
4\,Lp + {2}^{p+1}p + 8\,L + {2}^{p+1}
- 4\,L{2}^{p}p - 8\,L{2}^{p} - p - 1$.
It remains to show that no matter how small $p$ is, one can choose $L$ so that $\Delta(L,p) < 0$. (Of course, for any fixed $L$,
$\Delta(L,p) \ge 0$ for $p$ near zero.) It suffices to choose $L = 1/p$. To see this note that
\begin{align*} \Delta(1/p,p) &=
4 + {2}^{p+1}p + \frac{8}{p} + {2}^{p+1}
- 4 \cdot 2^p - \frac{8 \cdot 2^p}{p} - p - 1 \\
&= p\, \left( {2}^{p+1}-1 \right) + \left(3-{2}^{p+1}\right) +{\frac{8\,(1-{2}^{p})}{p}}.
\end{align*}
Elementary calculus shows that for $0 < p < 1$,
\[ p\, \left( {2}^{p+1}-1 \right) < 3,
\,
\left(3-{2}^{p+1}\right) < 1, \text{ and}
\,\,
\frac{8\,(1-{2}^{p})}{p} < -8 \ln 2 < -5,
\]
and hence for any small $p$, $\Delta(1/p,p) < 0$ as required.
\end{proof}
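The closed-form algebra above is easily double-checked by machine. The following sketch (ours) evaluates $\Delta(L,p)$ and confirms that the choice $L=1/p$ yields $\Delta<0$ for small $p$, while for fixed $L$ one has $\Delta(L,p)>0$ as $p\to0$, in line with the parenthetical remark in the proof.
\begin{verbatim}
def Delta(L, p):
    """Closed form of Delta(L, p) derived in the proof."""
    return (4*L*p + 2**(p + 1)*p + 8*L + 2**(p + 1)
            - 4*L*2**p*p - 8*L*2**p - p - 1)

for p in [0.5, 0.1, 0.01]:
    # True: the mu-mu integral exceeds the mu-nu integral, so (2.1) fails
    print(p, Delta(1.0 / p, p) < 0)
print(Delta(10.0, 1e-4) > 0)         # fixed L, p -> 0: no violation
\end{verbatim}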
\section{Quasi-normed spaces of generalized roundness zero}\label{sec:4}
In this brief section we return to the theme of isometrically embedding metric spaces into $L_{p}$-spaces.
The theory developed in the papers \cite{Sc2, BDK, LTW} gives rise to the following characterization of
real quasi-normed spaces of generalized roundness zero.
\begin{theorem}\label{qn:zero}
A real quasi-normed space $X$ has generalized roundness zero if and only
if it is not linearly isometric to any subspace of any $L_{p}$-space for which $0 < p \leq 2$.
\end{theorem}
\begin{proof}
$(\Rightarrow)$ Let $X$ be a real quasi-normed space of generalized roundness zero.
It is plain that a metric space of generalized roundness $\wp$ cannot be isometric to any metric space of generalized
roundness $p > \wp$. Thus $X$ is not isometric to any metric space of positive generalized roundness.
The generalized roundness of any metric subspace of any $L_{p}$-space for which $0 < p \leq 2$ is at
least $p$ by \cite[Corollary 2.6]{LTW}. The forward implication is now evident.
$(\Leftarrow)$ We argue the contrapositive. Let $X$ be a real quasi-normed space of positive generalized roundness.
The set of all $p$ for which a given metric space has $p$-negative type is always
an interval of the form $[0, \wp]$ for some $\wp \geq 0$ or $[0, \infty)$ by Schoenberg \cite[Theorem 2]{Sc2}.
So it follows from \cite[Theorem 2.4]{LTW} that $X$ has $p$-negative type for some $p \in (0,2]$.
Thus $X$ is linearly isometric to a subspace of some $L_{p}$-space by \cite[Theorem 2]{BDK}.
\end{proof}
It is worth noting that in the proof of the forward implication of Theorem \ref{qn:zero} the linear
structures of $X$ and $L_{p}$ play no role. We may therefore infer the following corollary from the
argument given above.
\begin{corollary}
If $X$ is a metric space of generalized
roundness zero, then $X$ is not isometric to any metric subspace of any $L_{p}$-space for which $0 < p \leq 2$.
\end{corollary}
\section*{Acknowledgments}
We thank the referee for detailed and thoughtful comments on the preliminary version of this paper.
\bibliographystyle{amsalpha}
\section{Introduction}
The three dimensional distribution of galaxies has the potential to tell us a lot about the physics governing our Universe.
However, the imprint of the composition and history of our Universe on its structure is usually quantified in terms of the linear power spectrum. From there it is still a long way to the observed distribution of luminous objects. Due to the stochastic nature of the initial conditions, the comparison between theory and observation has to be made at a statistical level. Thus, it has become common practice to reduce the data to $n$-spectra and to push the theory as far as possible in order to make predictions for the observed spectra. This means that the theoretical prediction needs to account for the fact that galaxies only sample the underlying matter distribution. While their distribution is clearly related to the matter distribution, there are a number of distinct features present in the galaxy distribution that are related to their discrete nature and to the fact that galaxies form preferentially in high-density regions.
\par
Due to the complicated nature of galaxy formation, cosmological constraints from galaxy surveys are usually obtained using a bias model \cite{Kaiser:1984on}. The simplest local bias models \cite{Fry:1992vr} assume a proportionality between the galaxy and matter overdensities. As we will review in detail below in \S \ref{sec:poiss}, the auto power spectrum of a sample of $N$ particles in a volume $V$ is expected to have an additional \emph{scale independent} shot noise component $V/N$. We will refer to this Poisson prediction as fiducial stochasticity. On top of the fiducial Poisson shot noise there are further contributions to the halo power spectrum that are white only over a limited range of wavenumbers and lead to modifications in the $k\to 0$ limit. The latter will be referred to as stochasticity corrections. For instance, the studies of \cite{Seljak:2009ho,Hamaus:2009op} found evidence for a sub-Poissonian noise in the halo distribution in $N$-body simulations (see also \cite{CasasMiranda:2002on,Manera:2011th}) and used this concept to increase the information content extractable from surveys by weighting haloes accordingly. Subsequently, this approach was used to improve constraints on primordial non-Gaussianity \cite{Hamaus:2011op} and redshift space distortions \cite{Hamaus:2012op}.
\par
Thus far, the origin of these stochasticity corrections has not been understood consistently. However, some authors noted that realistic bias models would at some point need to account for the finite size of haloes and the resulting exclusion effects \cite{Sheth:1999bi}.
The effect of halo exclusion on the power spectrum was previously discussed in \cite{Smith:2007sc} in an Eulerian setting. Here, we will argue that the exclusion effect cannot be seen in isolation but has to be combined with the non-linear clustering, which can lead to positive corrections on large scales. This approach partially alleviates the longstanding problem of non-vanishing contributions of the perturbative bias model on the largest scales, where perturbative corrections are considered unphysical. This paper aims at shedding light on the stochasticity properties of halo and galaxy samples and tries to quantify them where possible.
\par
The paper breaks down as follows: We begin in \S \ref{sec:disctrac} with a short review of the standard Poisson shot noise for a sample of discrete tracers. Then, in \S \ref{sec:toymod} we consider some simple toy models to understand the effects of exclusion on the power spectrum, before we go on to discuss more realistic models for the clustering of dark matter haloes in \S \ref{sec:quanti}. In \S \ref{sec:simul} we study the stochasticity and correlation function for a sample of dark matter haloes and a HOD galaxy sample in $N$-body simulations. Finally, we summarize our findings in \S \ref{sec:concl}.
\section{Discrete Tracers}\label{sec:disctrac}
\subsection{Correlation and Power Spectrum}\label{sec:poiss}
The overdensity of discrete tracer particles (dark matter haloes, galaxies etc.) can generically be written as
\begin{equation}
\delta^\text{(d)}(\vec r)=\frac{n(\vec r)}{\bar n}-1=\frac{1}{\bar n}\sum_i \delta^\text{(D)}(\vec r- \vec r_i)-1,
\end{equation}
where $\bar n$ is the mean number density of the point-like objects whereas $n(\vec r)$ is their local number
density. The two-point correlation of this fluctuation field is the expectation value
\begin{align}
\left\langle\delta^\text{(d)}(\vec r)\delta^\text{(d)}(\vec 0)\right\rangle &=
\frac{1}{{\bar n}^2}\Bigl\langle\sum_{i,j}\delta^\text{(D)}\left(\vec r-\vec r_i\right)
\delta^\text{(D)}\left(\vec r_j\right)\Bigr\rangle -\frac{1}{\bar n}
\Bigl\langle\sum_i\delta^\text{(D)}\left(\vec r-\vec r_i\right)\Bigr\rangle-\frac{1}{\bar n}
\Bigl\langle\sum_j\delta^\text{(D)}\left(\vec r_j\right)\Bigr\rangle + 1
\\
&= \frac{1}{{\bar n}^2}\delta^\text{(D)}\left(\vec r\right)
\Bigl\langle\sum_i\delta^\text{(D)}\left(\vec r-\vec r_i\right)\Bigr\rangle
+\frac{1}{{\bar n}^2}\Bigl\langle\sum_{i\ne j}\delta^\text{(D)}\left(\vec r-\vec r_i\right)
\delta^\text{(D)}\left(\vec r_j\right)\Bigr\rangle-1 \nonumber \\
&= \frac{1}{\bar n} \delta^\text{(D)}\left(\vec r\right)
+\frac{1}{{\bar n}^2}\Bigl\langle\sum_{i\ne j}\delta^\text{(D)}\left(\vec r-\vec r_i\right)
\delta^\text{(D)}\left(\vec r_j\right)\Bigr\rangle-1 \nonumber \\
&= \frac{1}{\bar n}\delta^\text{(D)}\left(\vec r\right) + \xi^\text{(d)}(r) \nonumber \;.
\end{align}
We split the sum into an $i=j$ and an $i\neq j$ part, corresponding to the correlation of the discrete particles
with themselves and the correlation between different particles, respectively.
The second term in the last equality is the reduced two-point correlation function of the tracers. The first term
arises owing to ``self-pairs'', which are usually ignored in the calculation of real space correlations. Taking
the Fourier transform of the last expression, the power spectrum of the discrete tracers is
\begin{equation}
P^\text{(d)}(k) = \frac{1}{\bar n} + \int \derd^3 r\, \xi^\text{(d)}(r) \eh{\text{i} \vec k \cdot \vec r} \;.
\end{equation}
Self-pairs contribute the usual Poisson white noise $1/\bar n$. The only requirement is that the power spectrum
be positive definite. This implies that the Fourier transform of the two-point correlation $\xi^\text{(d)}(r)$ can
be anything equal or greater than $-1/\bar n$. In the limit $k\to 0$ in particular, the power spectrum tends
towards
\begin{equation}
P^\text{(d)}(k) \xrightarrow{k\to 0} \frac{1}{\bar n} + \int \derd^3 r\, \xi^\text{(d)}(r) \;,
\end{equation}
where the intregal of $\xi^\text{(d)}(r)$ over the whole space can be positive, zero or negative (but greater than
$-1/\bar n$) depending on the nature of the discrete tracers. This can lead to super-Poisson or sub-Poisson white
noise in the low-$k$ limit.
At $k=0$, the power spectrum is $P^\text{(d)}(0)=0$ because the fluctuation field $\delta^\text{(d)}(\vec r)$ is
defined relative to the mean number density, hence $\langle\delta^\text{(d)}\rangle =0$. This implies that
$P^\text{(d)}(k)$ drops precipitously on very large scales (so it must be discontinuous at $k=0$) regardless the value
of $\int \derd^3 r\, \xi^\text{(d)}(r)$. To convince ourselves that this is indeed the case, we can write the
Fourier modes of the tracer fluctuation field as
\begin{equation}
\delta^\text{(d)}(\vec k) = \frac{1}{\bar n} \sum_i \eh{\text{i} \vec k \cdot \vec r_i} - \int \derd^3 r\,
\eh{\text{i} \vec k \cdot \vec r} \;.
\end{equation}
To calculate $\delta^\text{(d)}(k=0)$ (which is formally the difference between two infinite quantities), we first
assume $N,V\gg 1$ at fixed average number density $\bar n\equiv N/V$, and then take the limit $N,V \to\infty$.
We thus have for the Fourier transform of the density field
\begin{equation}
\delta^\text{(d)}(\vec k) = \frac{1}{\bar n} \sum_i \eh{\text{i} \vec k \cdot \vec r_i} - V\delta^\text{(K)}_{\vec k,\vec 0} \;,
\label{eq:finitevoldeltak}
\end{equation}
which for $\vec k=0$ yields
\begin{equation}
\delta^\text{(d)}(\vec 0)= \frac{V}{N} N - V = 0 \;.
\end{equation}
This obviously holds also for a finite number $N$ of tracers in a finite volume $V$. Therefore, the fact that
$\int \derd^3 r\, \xi^\text{(d)}(r)$ can be different from zero has nothing to do with the fact that
$\langle\delta^\text{(d)}\rangle=0$, nor with the so-called ``integral constraint'' that appears when measuring an
excess of pairs relative to a random distribution in a finite volume \cite{Labatie:2010un,Peacock:1991th}.
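These statements are straightforward to verify for a Poisson sample. The sketch below (box size, particle number and mode range are arbitrary choices) estimates the power spectrum directly from Eq.~(\ref{eq:finitevoldeltak}), recovering $P^\text{(d)}(0)=0$ exactly and $P^\text{(d)}(k)\approx 1/\bar n$ for $k\neq0$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, Lbox = 10000, 300.0                  # particles, box side (arbitrary)
V = Lbox**3
r = rng.uniform(0.0, Lbox, size=(N, 3))

n = np.arange(0, 16)                    # modes k = 2 pi n / Lbox along x
k = 2.0 * np.pi * n / Lbox
dk = (V / N) * np.exp(1j * np.outer(r[:, 0], k)).sum(0)
dk[0] -= V                              # subtract the k = 0 mean term
P = np.abs(dk)**2 / V
print(P[0], P[1:].mean(), V / N)        # 0, ~1/nbar, 1/nbar
\end{verbatim}
The residual scatter of the $k\neq0$ estimates is the usual sample variance of single modes.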
\subsection{The Effect of Exclusion with Clustering}
Let us now account for the fact that haloes are the centres of ensembles of particles, which by definition cannot overlap, and that these are clustered. Exclusion means that it is forbidden to have two haloes closer than the sum of their radii $R$. This fact can be accounted for by writing the correlation function of the discrete tracers as
\begin{equation}
\xi_\text{hh}^\text{(d)}(r)=
\begin{cases}
-1 & \text{for}\ r<R\\
\xi_\text{hh}^\text{(c)}(r) &\text{for}\ r\geq R,
\end{cases}
\end{equation}
where the fictitious continuous correlation function $\xi_\text{hh}^\text{(c)}(r)$ is defined for $r\in[0,\infty]$ and would for instance be related to the matter correlation function by the local bias model (see \S \ref{sec:biasmodel} below). Enforcing this step at the exclusion radius is certainly overly simplistic, since any triaxiality or variation of radius within the sample will smooth this step out. We will come back to this issue later.
\\
For generic continuous clustering models, we can write the Fourier transform of the correlation function as
\begin{align}
\int_0^\infty \derd^3 r \xi_\text{hh}^\text{(d)}(r) j_0(kr)=&-\int_0^R \derd^3 r j_0(kr)+\int_R^\infty\derd^3 r \xi_\text{hh}^\text{(c)}(r)j_0(kr)\nonumber\\
=&-V_\text{excl}W_R(k)-\int_0^R \derd^3 r \xi_\text{hh}^\text{(c)}(r)j_0(kr)+\int_0^\infty \derd^3 r \xi_\text{hh}^\text{(c)}(r)j_0(kr)\\
=&-V_\text{excl}W_R(k)-V_\text{excl} \left[W_R*P_\text{hh}^\text{(c)}\right](k)+P_\text{hh}^\text{(c)}(k)\nonumber,
\end{align}
where the exclusion volume is $V_\text{excl}=4\pi R^3/3$, $j_0$ is the zeroth order spherical Bessel function and the Fourier transform of the top-hat window is given by
\begin{equation}
W_R(k)=3\frac{\sin(kR)-kR \cos(kR)}{(kR)^3}
\end{equation}
and where the notation $[A*B](k)$ describes a convolution integral
\begin{equation}
[A*B](k)=\int \frac{\derivd^3q}{(2\pi)^3} A(q)B(|\vec k -\vec q|).
\end{equation}
We also defined the continuous power spectrum as the full Fourier transform of the continuous correlation function
\begin{equation}
P_\text{hh}^\text{(c)}(k)=\int_0^\infty \derd^3 r \xi_\text{hh}^\text{(c)}(r) j_0(kr).
\end{equation}
Combining the above results with the fiducial stochasticity contribution we finally have for the power spectrum of the discrete tracers
\begin{equation}
P_\text{hh}^\text{(d)}(k)=\frac{1}{\bar n}+P_\text{hh}^\text{(c)}(k)-V_\text{excl}W_R(k)-V_\text{excl}\left[W_R*P_\text{hh}^\text{(c)}\right](k).
\label{eq:discretepower}
\end{equation}
This equation is the basis of our paper and we will thus explore it in detail.
It is common practice to ignore the exclusion window and to approximate the continuous power spectrum by the linear local bias model, which yields for the power spectrum of the discrete tracers in the Poisson model
\begin{equation}
P_\text{hh}^\text{(d)}(k)=\frac{1}{\bar n}+b_1^2P_\text{lin}(k).
\end{equation}
This needs to be modified because of exclusion and non-linear effects.
In practice, for $k>0$, it is difficult to separate the effects. Here we will formally define the
stochasticity effects discussed in this paper as a stochasticity power spectrum \cite{Hamaus:2009op}
\begin{align}
(2\pi)^3 \delta^\text{(D)}(\vec k+\vec k')C(k)=&\Bigl\langle\bigl[\delta_\text{h}(\vec k)-b_1 \delta_\text{m}(\vec k)\bigr]\bigl[\delta_\text{h}(\vec k')-b_1 \delta_\text{m}(\vec k')\bigr]\Bigr\rangle\nonumber\\
=&
(2\pi)^3 \delta^\text{(D)}(\vec k+\vec k')\Bigl[P_\text{hh}(k)-2b_1 P_\text{hm}(k)+b_1^2 P_\text{mm}(k)\Bigr],
\label{eq:snmatrixdiag}
\end{align}
where $b_1=P_\text{hm}(k)/P_\text{mm}(k)$ is the first order bias from the cross-correlation in the low-$k$ limit.
We then have in the low-$k$ limit
\begin{equation}
P_\text{hh}^\text{(d)}(k)=C(k)+b_1^2P_\text{lin}(k).
\end{equation}
One could make this generally valid at all $k$ by defining $b(k)=P_\text{hm}(k)/P_\text{mm}(k)$, but we will
not do this here, and instead explore physically motivated models of non-linear bias.
In this paper we are interested in the stochasticity power spectrum $C(k)$ and in particular its
limit as $k \rightarrow 0$.
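In practice $C(k)$ is estimated from binned spectra measured on a grid; a minimal sketch of the estimator (our convenience code, with an arbitrary fitting range for $b_1$) reads:
\begin{verbatim}
import numpy as np

def stochasticity(k, P_hh, P_hm, P_mm, kfit=0.03):
    """C(k) = P_hh - 2 b1 P_hm + b1^2 P_mm, with b1 estimated from the
    low-k limit of P_hm / P_mm (all inputs are 1-d binned spectra)."""
    b1 = np.mean(P_hm[k < kfit] / P_mm[k < kfit])
    return P_hh - 2.0 * b1 * P_hm + b1**2 * P_mm, b1
\end{verbatim}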
What are the corrections arising from the exclusion and deviations from the local bias model?
In the low $k$-limit, the window function scales as $W_R(k)\xrightarrow{k\to0}1-k^2 R^2/10$. Hence, the convolution integral leads to a constant term plus corrections scaling as $k^2 R^2$ times moments of the continuous power spectrum
\begin{equation}
\left[W_R*P_\text{hh}^\text{(c)}\right](k) \xrightarrow{k\to0} \int \frac{\derivd^3q}{(2\pi)^3} P_\text{hh}^\text{(c)}(q)W_R(q)+k^2 R^2 \int \frac{\derivd^3q}{(2\pi)^3} P_\text{hh}^\text{(c)}(q) \left[W_R(q)\left(\frac{1}{(q R)^2}-\frac16\right)-\frac{\sin(q R)}{(qR)^3}\right].
\end{equation}
Thus, irrespective of the shape of the continuous power spectrum, exclusion always introduces a white ($k^0$) correction on large scales.\\
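The size of this white correction reduces to a single radial integral in the $k\to0$ limit. As an illustration, the sketch below evaluates the low-$k$ amplitude of Eq.~(\ref{eq:discretepower}) for a toy continuous spectrum; the spectral shape, amplitude, $R$ and $\bar n$ are all assumed for illustration and are not fits to any data.
\begin{verbatim}
import numpy as np

def P_c(q, A=2.0e4, k0=0.02, n=0.96):
    """Toy continuous spectrum with P(q) -> 0 as q -> 0 (illustrative)."""
    return A * (q / k0)**n / (1.0 + (q / k0)**3)

def W(x):                       # top-hat window, x = qR
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

R, nbar = 8.0, 3.0e-4           # exclusion radius and number density (assumed)
Vex = 4.0 * np.pi * R**3 / 3.0
q = np.logspace(-4, 2, 4000)
conv0 = np.trapz(q**2 * P_c(q) * W(q * R), q) / (2.0 * np.pi**2)
print(1.0 / nbar - Vex * (1.0 + conv0))   # low-k white-noise amplitude
\end{verbatim}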
Fig.~\ref{fig:sketch} illustrates the behaviour of the correlation function of discrete tracers.
In the very popular local bias model, the clustering of dark matter haloes is modeled at leading order as $P_\text{hh}^\text{(c)}(k)=b_1^2 P_\text{lin}(k)$. In configuration space this leads to $\xi_\text{hh}^\text{(c)}(r)=b_1^2 \xi_\text{lin}(r)$, shown by the black dashed line.
We will consider this linear bias model as the fiducial model on top of which we define corrections.
Non-linear halo clustering suggests an enhancement proportional to higher powers of the linear correlation function as exemplified by the red dashed line. Our above arguments suggest that this clustering model, if at all, can only be true outside the exclusion radius. Inside this radius the probability to find another halo is zero, leading to $\xi_\text{hh}^\text{(d)}(r<R)=-1$.\\
An intuitive understanding of the corrections can be obtained in the $k\to0$ limit, where the halo power spectrum is given by an integral over the correlation function and can thus be written as
\begin{equation}
P_\text{hh}^\text{(d)}(k)\xrightarrow{k\to 0}\frac{1}{\bar n}-V_\text{excl}-b_1^2 \int_0^R\derd^3 r\, \xi_\text{lin}(r) +\int_R^\infty \derd^3 r \left[\xi_\text{hh,NL}^\text{(c)}(r)-b_1^2 \xi_\text{lin}(r)\right],
\end{equation}
where we introduced $\xi_\text{hh,NL}^\text{(c)}(r)$ to account for generic non-linear continuous models of the halo clustering.
The red and blue shaded regions in Fig.~\ref{fig:sketch} show the negative and positive corrections with respect to the linear bias model for which we would have in absence of exclusion $P_\text{hh}^\text{(d)}(k)\xrightarrow{k\to 0}1/\bar n$.
Note that the non-linear halo-halo correlation function could in principle be smaller than the linear bias prediction. Our above notion of a positive correction arising from the non-linear correction outside the exclusion radius is solely based on local bias arguments. In general this statement should be relaxed (for an example see App.~\ref{app:peakeffects}) and the blue region could have either sign.
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{Figures/correl_4.pdf}
\caption{Cartoon version of the correlation function of discrete tracers. Continuous linear correlation function (black dashed) and non-linear correlation function (red dashed). The true correlation function of discrete tracers (green solid line) agrees with the non-linear continuous correlation function outside the exclusion scale and is -1 below, except for the delta function at the origin arising from discreteness. Thus, there are two corrections compared to the continuous linear bias model, a negative correction inside the exclusion radius (red shaded) and a positive one outside the exclusion radius due to non-linear clustering (blue shaded).}
\label{fig:sketch}
\end{figure}
\section{Toy Models}\label{sec:toymod}
To show that the exclusion can indeed lower the stochasticity we perform a simple numerical experiment. We consider a set of hard sphere haloes of radius $R/2$. For this purpose, we distribute $N$ particles randomly in a cubic box ensuring that $\left|\vec x_i-\vec
x_j\right|>R$ for all pairs of particles $(i,j)$.
The corresponding correlation function is expected to be zero except for scales $r<R$, where $\xi=-1$
due to exclusion. Thus we expect the fiducial stochasticity to be lowered by $4\pi R^3/3$ in the $k \to 0$ limit.
For an intuitive derivation of the corrections to the power spectrum we will consider a fixed number of particles $N$ in a finite volume $V$.
Using Eq.~\eqref{eq:finitevoldeltak} the auto-power spectrum of the tracer particles can be written as \footnote{{The finite grid leads to $\delta^\text{(D)}(\vec k-\vec k')=\frac{V}{(2\pi)^3}\delta^\text{(K)}_{\vec k,\vec k'}$ and consequently $V \delta^\text{(K)}_{\vec k,\vec k'} P(k)=\left\langle\delta(\vec k)\delta(-\vec k') \right\rangle$.}}
\begin{align}
P^\text{(d)}(k)=&\frac{1}{V}\Bigl\langle\delta^\text{(d)}(\vec k)\delta^\text{(d)}(-\vec k)\Bigr\rangle\nonumber\\
=&\frac{V}{N^2}\sum_{i=j} \Bigl\langle\eh{\text{i} \vec k \cdot (\vec r_i-\vec r_j)}\Bigr\rangle+\frac{V}{N^2}\sum_{i\neq j} \Bigl\langle\eh{\text{i} \vec k \cdot (\vec r_i-\vec r_j)}\Bigr\rangle-V\delta^\text{(K)}_{\vec k,\vec 0}\label{eq:equalsums}\\
=&\frac{1}{\bar n}+\frac{V}{N^2}\sum_{i\neq j}\left\langle\eh{\text{i} \vec k \cdot (\vec r_i-\vec r_j)}\right\rangle-V\delta^\text{(K)}_{\vec k,\vec 0}.\nonumber
\end{align}
This yields for the hard sphere sample, which we consider as a proxy for excluded haloes
\begin{equation}
P^\text{(d)}_\text{hh}(k)=\frac{1}{\bar{n}_\text{h}}-\frac{4\pi R^3}{3} W_R(k).
\end{equation}
In Fig.~\ref{fig:randomwgal} we show the power spectrum of this toy halo sample for $R=8 \ h^{-1}\text{Mpc}$, $N=800$ and
$V=300^3 \ h^{-3}\text{Mpc}^3$.
We clearly see that the measured power follows the exclusion corrected stochasticity.
The window is close to unity on large scales and decays at $k\approx 1/R$, i.e., the fiducial shot noise
is recovered for high $k$. This is a first indication for stochasticity not being scale independent.
Note that the above derivations are only true in the limit, where the total exclusion volume is small compared to the total volume and thus allows for a quasi random distribution (about 0.8\% volume coverage in our case).
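The essential steps of this experiment fit into a short script. The sketch below is schematic rather than production code; it assumes periodic boundaries and brute-force rejection sampling, which is adequate at the quoted low volume coverage. It draws the hard-sphere sample, measures a few modes with the direct estimator of Eq.~\eqref{eq:equalsums}, and evaluates the model.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
L, R, N = 300.0, 8.0, 800   # box size [Mpc/h], exclusion radius, particles

# rejection sampling of hard spheres with periodic minimum-image distances
pos = []
while len(pos) < N:
    x = rng.uniform(0.0, L, 3)
    if all(np.linalg.norm((x - p + L/2) % L - L/2) > R for p in pos):
        pos.append(x)
pos = np.array(pos)

def W_R(k):
    x = k * R
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

nbar = N / L**3
kmodes = 2.0 * np.pi / L * np.arange(1, 31)  # fundamental modes along x
# direct estimator P(k) = (V/N^2)|sum_j exp(i k x_j)|^2; one mode per k, noisy
P_meas = [L**3 / N**2 * np.abs(np.exp(1j * kk * pos[:, 0]).sum())**2
          for kk in kmodes]
P_model = 1.0 / nbar - 4.0 * np.pi / 3.0 * R**3 * W_R(kmodes)
\end{verbatim}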
\subsection{Satellite Galaxies}
Galaxies are believed to populate dark matter haloes. Let us consider the simple case that each of the dark matter haloes under consideration hosts a central galaxy that, as the name suggests, coincides with the halo centre, plus a fixed number $N_\text{s,h}$ of satellite galaxies, such that the total number of satellite galaxies is given by $N_\text{s}=N_\text{s,h}N_\text{h}$.
For simplicity, we will assume that the satellite galaxies are distributed according to a profile $\rho_\text{s}(r)$ with typical scale $R_\text{s}$ around the host halo centres.
For long wavelength modes $k<1/R_\text{s}$ the $N_\text{s,h}$ galaxies within one halo effectively act as one particle, which is why on large scales we expect the stochasticity of the satellite galaxy sample to equal that of the host haloes; only for scales $k>1/R_\text{s}$ can the modes probe the distinct nature of the particles, and the stochasticity approaches $1/\bar{n}_\text{s}$.
\par
We can evaluate our model Eq.~\eqref{eq:equalsums} to obtain the satellite-satellite power spectrum
\begin{align}
P^\text{(d)}_\text{ss}(k)=&\frac{V}{N_\text{s}}+\frac{V}{N_\text{s}^2}\sum_{\text{h}_i} \sum_{\text{s}_j\in \text{h}_i}\sum_{\text{s}_l\neq \text{s}_j\in \text{h}_i}\left\langle\eh{\text{i} \vec k\cdot (\vec r_j -\vec r_l)}\right\rangle+\frac{V}{N_\text{s}^2}\sum_{\text{h}_i} \sum_{\text{s}_j\in \text{h}_i} \sum_{\text{h}_m\neq\text{h}_i} \sum_{\text{s}_l \in \text{h}_m} \left\langle\eh{\text{i} \vec k\cdot (\vec r_j-\vec r_l)}\right\rangle\nonumber\\
=&\frac{V}{N_\text{s}}+\frac{V}{N_\text{s}}(N_\text{s,h}-1)\left\langle\eh{\text{i} k R_\text{s}\mu}\right\rangle^2-\frac{4\pi R^3}{3}u^2_\text{s}(k) W_R(k)\nonumber\\
=&\frac{1}{\bar{n}_\text{s}}\bigl[1+(N_\text{s,h}-1)u_\text{s}^2(k)\bigr]-\frac{4\pi R^3}{3}u_\text{s}^2(k) W_R(k).
\end{align}
Here $\mu$ is the cosine of the angle between $\vec k$ and $\Delta \vec r_{ij}=\vec r_i -\vec r_j$ that is averaged over, and $u_\text{s}(k)$ is the normalized Fourier transform of the galaxy profile $\rho_\text{s}(r)$. For definiteness we will assume a delta function profile $\rho(r)=\delta^\text{(D)}(r-R_\text{s})/r^2$ corresponding to $u_\text{s}(k)=j_0(k R_\text{s})$, where $j_0$ is the zeroth order spherical Bessel function. The terms in the above equation correspond to the one and two halo terms in the halo model \cite{Seljak:2000an,Cooray:2002ha}: the profile and the fiducial shot noise arise from correlations between particles in the same halo, whereas the exclusion term is dominated by correlations between distinct haloes.
The results of the numerical experiment are shown in Fig.~\ref{fig:randomwgal} as the green points. The model prediction is shown as the green solid line and describes the simulation measurement very well. On small scales the power is dominated by the fiducial galaxy shot noise and on large scales the host halo stochasticity dominates
\begin{align}
P_\text{ss}^\text{(d)}(k\ll 1/R_\text{s},1/R)=\frac{1}{\bar n_\text{h}}-\frac{4\pi R^3}{3},
&&
P_\text{ss}^\text{(d)}(k\gg 1/R_\text{s},1/R)=\frac{1}{\bar n_\text{s}}.
\end{align}
While the distribution of satellite galaxies on a sphere of fixed radius around the halo centre is very peculiar and unrealistic, the qualitative behaviour is the same for all profiles with finite support.
In the case studied above, the corrections to the fiducial galaxy shot noise $1/\bar{n}_\text{s}$ are always positive. This is due to the high satellite fraction.
As we will discuss below, this behaviour might be completely different for galaxy samples with small satellite fraction, where the exclusion effect can be more important than the enhancement due to the satellites.
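For concreteness, the satellite model above can be evaluated in a few lines. The sketch below uses the assumed toy parameters of the hard-sphere experiment and an illustrative shell radius $R_\text{s}=3\ h^{-1}\text{Mpc}$; it reproduces the two limits quoted above.
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn

R, R_s, N_sh = 8.0, 3.0, 2   # exclusion radius, shell radius, sats per halo
nbar_h = 800 / 300.0**3      # halo density of the toy sample above
nbar_s = N_sh * nbar_h
V_excl = 4.0 * np.pi / 3.0 * R**3

def W_R(k):
    x = k * R
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def u_s(k):
    return spherical_jn(0, k * R_s)  # delta-shell satellite profile

k = np.logspace(-2.0, 1.0, 100)
P_ss = (1.0 + (N_sh - 1) * u_s(k)**2) / nbar_s - V_excl * u_s(k)**2 * W_R(k)
# k -> 0: 1/nbar_h - V_excl;  k -> infinity: 1/nbar_s
\end{verbatim}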
\subsection{Central and Satellite Galaxies}
We can also consider the cross power spectrum between halo centers (central galaxies) and the satellite galaxies. In this case there is no Poisson shot noise, since the samples are non-overlapping and the power is dominated by a one-halo term describing the radial distribution of the satellites around the halo centre
\begin{align}
P_\text{cs}^\text{(d)}(k)=&\frac{V}{N_\text{h}N_\text{s}}\sum_{\text{h}_i}\sum_{\text{s}_j\in \text{h}_i}\left\langle \eh{\text{i} \vec k \cdot (\vec r_j-\vec r_i)}\right\rangle+\frac{V}{N_\text{h}N_\text{s}}\sum_{\text{h}_i}\sum_{\text{h}_j\neq \text{h}_i}\sum_{\text{s}_l\in \text{h}_j}\left\langle\eh{\text{i} \vec k\cdot (\vec r_{l}-\vec r_i)}\right\rangle\nonumber\\
=&\frac{1}{\bar{n}_\text{h}}u_\text{s}(k)-\frac{4\pi R^3}{3} u_\text{s}(k)W_R(k).
\end{align}
Here we have again a one halo term arising from correlations of the halo center with satellites in the same halo and a two halo term arising from the correlation of satellite galaxies in one halo with the center of another halo. The comparison with the result of the numerical experiment in Fig.~\ref{fig:randomwgal} shows very good agreement.\\
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{Figures/profilehires_nsat2_rsat3.pdf}
\caption{Power spectrum of a randomly distributed halo sample obeying exclusion (red points) and corresponding model with (red solid line) and without (red dashed line) exclusion. In a second step we populate these haloes with $N_\text{s,h}=2$ satellite galaxies and calculate the auto power spectrum of the satellite galaxies (green points) and their cross power spectrum with the halo centers (blue points). The blue and green solid lines show our model predictions, whereas the dashed lines show the naive expectation of Poisson shot noise.
}
\label{fig:randomwgal}
\end{figure}
The above discussion is overly simplified as we assume all haloes to be of the same mass and to host the same number of galaxies. Any realistic galaxy sample will be hosted by a range of halo masses (i.e. a range of exclusion radii) and the number of galaxies per halo will also be a function of mass.\\
The total galaxy power spectrum of the combined central and satellite samples can be obtained as a combination of the central-central, central-satellite and satellite-satellite contributions
\begin{equation}
P_\text{gg}(k)=(1-f_\text{s})^2P_\text{cc}(k)+2f_\text{s}(1-f_\text{s})P_\text{cs}(k)+f_\text{s}^2 P_\text{ss}(k),
\label{eq:weightedsumgg}
\end{equation}
where $f_\text{s}=N_\text{s}/(N_\text{c}+N_\text{s})$ is the satellite fraction.
For realistic satellite fractions for SDSS LRGs \cite{Eisenstein:2001sp} $f_\text{s}\approx0.1$, the weighting of the central-central power spectrum dominates over the contributions from the central-satellite and satellite-satellite power spectra by factors of $9$ and $81$, respectively. A more realistic galaxy sample based on a HOD population of dark matter haloes in a $N$-body simulation will be discussed in Sec.~\ref{sec:realgal}.
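In code, the weighting of Eq.~\eqref{eq:weightedsumgg} is a one-liner. The short sketch below (with toy constant spectra standing in for the measured components) also prints the per-spectrum weight ratios $(1-f_\text{s})/f_\text{s}=9$ and $(1-f_\text{s})^2/f_\text{s}^2=81$ quoted above, which ignore the combinatorial factor of two in the cross term.
\begin{verbatim}
import numpy as np

f_s = 0.1                       # assumed LRG-like satellite fraction
k = np.logspace(-2.0, 0.0, 50)
# toy component spectra on a common k-grid (stand-ins for measurements)
P_cc, P_cs, P_ss = 2.7e4 + 0*k, 1.0e4 + 0*k, 2.0e4 + 0*k

P_gg = (1-f_s)**2 * P_cc + 2*f_s*(1-f_s) * P_cs + f_s**2 * P_ss
print((1-f_s)/f_s, ((1-f_s)/f_s)**2)   # 9.0 and 81.0
\end{verbatim}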
\subsection{Toy Model with Clustering}
\label{ssec:toyclust}
Haloes are clustered, i.e., there is an enhanced probability to find two collapsed objects in the vicinity of each other compared to finding them widely separated. Let us discuss the influence of this phenomenological result on our toy model. For the sake of simplicity let us assume that haloes always come in pairs, i.e., that there is a second halo outside the exclusion scale at typical separation $R_\text{clust}$. Similarly to satellite galaxies residing in one halo, this will lead to a \emph{positive} $k^0$ term on large scales that decays for $k>1/R_\text{clust}$. In a more realistic setting, not all haloes will come in pairs; some of them will be single objects, others will come in clusters of $n$ haloes. Furthermore, not all of them will be separated by exactly the clustering scale.
\par
Some authors argued that any large scale $k^0$-behavior in the perturbation theory description of biasing is unphysical and should be suppressed by constant but aggressive smoothing \cite{Roth:2011te} or by a $k$-dependent smoothing \cite{Chan:2012ha}. Based on the above considerations, we argue that such terms are just a result of the clustering of haloes and thus \emph{not} unphysical. Whether the magnitude of these effects can be covered by a perturbative treatment such as second order bias combined with perturbation theory, is a different question, which we will pick up later in \S \ref{sec:clustexcl}.
For now, let us note that such a $k^0$ term is also predicted by biasing models that go beyond the local bias model, such as the correlation of thresholded regions discussed briefly in the next subsection and in \cite{Beltran:2011ef}.
Generally the clustering scale exceeds the exclusion scale and thus one should expect that the enhancement due to clustering decays at lower $k$ than the suppression due to exclusion. This is actually what happens in the simulations as we will show in \S \ref{sec:simul}.
\subsection{Density Threshold Bias}
\begin{figure}[t]
\includegraphics[width=0.48\textwidth]{Figures/correl_vis_4.pdf}
\includegraphics[width=0.50\textwidth]{Figures/resid_thresh_R4.pdf}
\caption{Kaiser bias \cite{Kaiser:1984on} in configuration and Fourier space. \emph{Left panel: }Un-smoothed (black dashed) and $R=4\ h^{-1}\text{Mpc}$ smoothed (black solid) linearly biased matter correlation functions $b_{1,\text{tr}}^2\xi(r)$ and continuous correlation function of the thresholded regions $\xi_\text{tr}(r)$ (red dashed). The red solid line shows a simple implementation of exclusion imposed on the correlation function of the thresholded regions. \emph{Right panel: }Power spectrum correction arising from the non-linear biasing (top line) and effect of increasing exclusion for $R=0,4,6,8 \ h^{-1}\text{Mpc}$ from top to bottom.}
\label{fig:thresholded}
\end{figure}
The spherical collapse model suggests that spherical Lagrangian regions exceeding the critical collapse density $\delta_\text{c}\approx 1.686$ segregate from the background expansion and form gravitationally bound objects. Hence, the clustering statistics of regions above threshold can tell us something about the clustering of dark matter haloes and galaxies. The study of \cite{Kaiser:1984on} considered the correlation function of regions whose density exceeds a certain value in a Gaussian random field, smoothed with a top-hat window of scale $R$. At the same time this paper pioneered bias models, which are nothing but a large scale expansion of the full correlation function of thresholded regions. Let us see how non-linear or non-perturbative clustering can affect the power spectrum on the largest scales in the full model.\\
The root mean square overdensity within the smoothed regions is given by
\begin{equation}
\sigma_R^2=\frac{1}{2\pi^2}\int \derd k\, k^2 W_R^2(k)P_\text{lin}(k).
\end{equation}
The correlation of thresholded regions can be calculated exactly employing the two point probability density function of Gaussian random fields. For simplicity, we will consider regions of a fixed overdensity rather than regions above threshold. The peak height $\nu=\delta_\text{c}/\sigma_R$ can be chosen based on the spherical collapse argument.
For the correlation function of regions of fixed overdensity one obtains \cite{Beltran:2011ef,Kaiser:1984on}
\begin{equation}
1+\xi_\text{tr}(r)=\frac{1}{\bar n_\text{tr}^2}
\frac{1}{(2\pi)\sigma_R^2\sqrt{1-\xi_R^2(r)/\sigma_R^4}}\exp{\left[-\nu^2 \frac{1-\xi_R(r)/\sigma_R^2}{1-\xi_R^2(r)/\sigma_R^4}\right]},
\label{eq:kaiserbias}
\end{equation}
where $\bar n_\text{tr}=\eh{-\nu^2/2}/\sqrt{(2\pi)\sigma_R^2}$. Here $\xi_R(r)$ is the linear correlation function smoothed on scale $R$.
In the large distance, small correlation limit, the correlation function of the thresholded regions can be approximated by a linearly biased version of the linear correlation function $\xi_\text{tr}(r)=b_{1,\text{tr}}^2 \xi_R(r)$ with $b_{1,\text{tr}}\approx\nu/\sigma_R$. If one is interested in an accurate description of the non-perturbative correlation function on smaller scales, higher orders in the expansion need to be considered. Comparing the expansion of the correlation of thresholded regions in powers of the smoothed correlation to the full non-perturbative result in Eq.~\eqref{eq:kaiserbias}, we can investigate the convergence properties of the linear bias model. The left panel of Fig.~\ref{fig:thresholded} shows the correlation function of thresholded regions for a Gaussian random field smoothed on $R=4\ h^{-1}\text{Mpc}$ and the linearly biased versions of the smoothed and un-smoothed linear correlation functions.\\
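Eq.~\eqref{eq:kaiserbias} is straightforward to evaluate once $\xi_R(r)$ and $\sigma_R$ are specified. Inserting $\bar n_\text{tr}$, it simplifies to $1+\xi_\text{tr}=\exp[\nu^2 w/(1+w)]/\sqrt{1-w^2}$ with $w=\xi_R/\sigma_R^2$, which the sketch below implements (with an assumed Gaussian toy form for the smoothed correlation function in place of the top-hat smoothed $\Lambda$CDM one).
\begin{verbatim}
import numpy as np

sigma_R = 0.8            # assumed rms overdensity at the smoothing scale
nu = 1.686 / sigma_R     # peak height from spherical collapse

def xi_R(r, r0=10.0):
    # toy smoothed correlation with the required xi_R(0) = sigma_R^2
    return sigma_R**2 * np.exp(-(r / r0)**2)

def xi_tr(r):
    # correlation of fixed-overdensity regions, simplified Eq. (kaiserbias)
    w = xi_R(r) / sigma_R**2
    return np.exp(nu**2 * w / (1.0 + w)) / np.sqrt(1.0 - w**2) - 1.0

b1_tr = nu / sigma_R
r = 40.0
print(xi_tr(r) / (b1_tr**2 * xi_R(r)))  # -> ~1 in the large-distance limit
\end{verbatim}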
We can now Fourier transform the correlation function of thresholded regions and subtract out the linearly biased power spectrum to obtain the correction introduced by the non-linear clustering
\begin{equation}
\Delta P_\text{tr}(k)=\text{FT}[\xi_\text{tr}](k)-b_{1,\text{tr}}^2 P_\text{lin}(k).
\end{equation}
As we show in Fig.~\ref{fig:thresholded}, there is a non-vanishing correction in the $k\to0$ limit that is approximately constant on large scales and goes to zero on small scales. The presence of such a correction was discussed in a slightly different context in \cite{Beltran:2011ef}. \\
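Since the field is isotropic, the Fourier transform reduces to a one dimensional integral, $\text{FT}[\xi](k)=4\pi\int\derd r\,r^2\xi(r)j_0(kr)$, so the correction can be computed as in the following self-contained sketch (same toy functions as above).
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn

sigma_R = 0.8
nu = 1.686 / sigma_R

def xi_R(r, r0=10.0):
    return sigma_R**2 * np.exp(-(r / r0)**2)   # toy smoothed correlation

def xi_tr(r):
    w = xi_R(r) / sigma_R**2
    return np.exp(nu**2 * w / (1.0 + w)) / np.sqrt(1.0 - w**2) - 1.0

def fourier(xi, k, rmin=1e-3, rmax=200.0):
    # P(k) = 4 pi int dr r^2 xi(r) j0(kr) for an isotropic xi
    val, _ = quad(lambda r: r**2 * xi(r) * spherical_jn(0, k * r),
                  rmin, rmax, limit=500)
    return 4.0 * np.pi * val

b1_tr = nu / sigma_R
dP = fourier(xi_tr, 0.01) - b1_tr**2 * fourier(xi_R, 0.01)
print(dP)   # non-vanishing low-k correction Delta P_tr
\end{verbatim}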
In its original form, the thresholded-region model describes a continuous field and thus does not include any exclusion. One could however imagine that the patches defining the smoothing scale do not overlap. In this case the correlation function of thresholded regions should go to $-1$ for $r<2R$.
To show how the exclusion scale affects the correction in the power spectrum we consider a few exclusion radii smaller than two smoothing radii.
As is obvious in Fig.~\ref{fig:thresholded}, increasing the exclusion radius first reduces the scale independent correction on large scales, then compensates it completely, and eventually leads to a negative scale independent correction for an exclusion radius of $2R$.
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{Figures/correl_binwidth.pdf}
\includegraphics[width=0.49\textwidth]{Figures/power_pk_corr_r2.pdf}
\caption{Clustering of peaks in a one dimensional skewer through a density field smoothed with a Gaussian filter of scale $R=2\ h^{-1}\text{Mpc}$ ($M\approx 8.6\tim{12}\ h^{-1} M_\odot$). \emph{Left panel: }For fixed peak height the correlation function flattens out on small scales (black), but with increasing bin width the exclusion becomes stronger. The width of the bin in peak height increases from dark to light red. The linear local bias expansion is the same for all of these models and is shown by the dashed line. For reference we overplot the Gaussian smoothing (dash-dotted) and the top-hat smoothing scale containing the same mass (dashed). \emph{Right panel: }Corresponding stochasticity correction $\Delta P_\text{pk}(k)=\text{FT}[\xi_\text{pk}](k)-b_{1,\text{pk}}^2 P_\text{lin}(k)$ for the fiducial bin width.}
\label{fig:peaks}
\end{figure}
\subsection{Peak Bias Model}\label{sec:peaks}
While the thresholded regions provide a continuous bias model, the peak model \cite{Bardeen:1985tr,Peacock:1985th} goes beyond in identifying a discrete set of points and providing the correlation function of these points. Most studies of the peak model to date have focused on the large separation limit \cite{Desjacques:2010mo,Desjacques:2008ba,Desjacques:2010re}, where closed form expressions for the peak correlation in terms of the underlying linear correlation function and its derivatives are possible.
However, \cite{Lumsden:1989th} calculated the one dimensional peak correlation function for a set of power law power spectra and
\cite{Heavens:1999th} computed the two dimensional peak correlation function for peaks in the CMB. The reason for the restriction to one or two dimensions owes to the dimensionality of the covariance matrix that needs to be inverted for the calculation of the peak correlation. In a one dimensional field the covariance matrix is a six by six matrix (field amplitude, first and second derivative at two points).\\
Here we consider realistic $\Lambda$CDM power spectra in three dimensions, smooth them on a realistic Lagrangian scale $R=2\ h^{-1}\text{Mpc}$ and evaluate the exact non-perturbative one dimensional correlation of peaks following the approach of \cite{Lumsden:1989th} (see App.~\ref{app:1dpeaks} for a brief review). Note that the correlation of field derivatives diverges for top-hat smoothing, which is why we follow common practice and employ a Gaussian smoothing. The Gaussian smoothing makes the correlators of field derivatives well behaved, but beyond that there is no physical motivation to employ this filter. We study a range of peak heights $\nu$ and also a range of bin widths in $\nu$. The peak correlation function for four different bin widths is shown in the left panel of
Fig.~\ref{fig:peaks}. \\
The first remarkable observation is that peaks of a fixed height don't seem to obey exclusion; only after considering a finite width in peak height do we observe that the correlation function goes to $-1$ on small scales. The transition scale to the fixed peak height case increases with bin width, i.e., wider bins have a larger exclusion region.
As above for the thresholded regions we can expand the peak correlation function in the large distance limit and obtain a bias expansion that has contributions from the underlying matter correlation function \emph{and} correlation functions of the derivatives. Doing so, it becomes obvious that the linear matter bias is only attained outside of the BAO scale and that there is a distinct scale dependent bias that is partially described by the derivative terms in the bias expansion. Fourier transforming the full peak correlation function and subtracting out the linearly biased power spectrum, we obtain the correction shown in the right panel of Fig.~\ref{fig:peaks}.
The qualitative behaviour agrees with the result obtained above for the thresholded sample with an ad hoc exclusion scale. On large scales there is a combination of clustering and exclusion effects, the clustering decays first and then also the exclusion correction goes to zero.
Note that the projected one dimensional matter power spectrum scales as $k^0$ on large scales and thus does not vanish in the $k\to0$ limit. This fact makes the distinction between clustering and stochasticity terms in the one dimensional peak model very difficult. We hope to report on results for the full three dimensional peak model in the near future.
\section{Quantifying the Corrections}\label{sec:quanti}
Let us now try to quantify the stochasticity corrections for a realistic halo sample. We expect the effect to be time independent in the
$k \rightarrow 0$ limit if the same sample of particles is evolved under gravity. Thus, to minimize the influence of non-linearities, we will consider the protohaloes in Lagrangian space. In numerical studies of the effect, we will later define the protohalo as the initial ensemble of particles that form the Friends-of-Friends (FoF) haloes in our final output at redshift $z_\text{f}=0$.
\subsection{Continuous Halo Power Spectrum from Local Bias}\label{sec:biasmodel}
The local Lagrangian bias model assumes that the initial halo density field can be written as a Taylor series in the matter fluctuations at the same Lagrangian position $\vec q$
\begin{equation}
\delta_\text{h}(\vec q,\eta_\text{i})=b_1^\text{(L)}(\eta_\text{i}) \delta(\vec q,\eta_\text{i}) +\frac{b_2^\text{(L)}(\eta_\text{i})}{2!} \delta^2(\vec q,\eta_\text{i}) +\frac{b_3^\text{(L)} (\eta_\text{i})}{3!}\delta^3(\vec q,\eta_\text{i})+\ldots\ ,
\end{equation}
here $\eta_\text{i}$ is the conformal time of the initial conditions and $\vec q$ is the Lagrangian coordinate.
We will follow the approach of \cite{McDonald:2006cl} where the smoothing scale is an unobservable scale, which should not affect $n$-point clustering statistics on scales exceeding the smoothing scale. The above model can be used as the starting point for a coevolution of haloes and dark matter, which finally leads to an Eulerian bias prescription. Recently, such a calculation was shown to correctly predict non-local Eulerian bias terms \cite{Baldauf:2012ev,Chan:2012gr}.
The peak-background-split (PBS) \cite{Mo:1996an} makes predictions for the Lagrangian bias parameters in the above equation, and the corresponding late time Eulerian local bias parameters can then be obtained based on the spherical collapse model. There is some evidence that the peak model yields a better description of some aspects of the initial halo clustering than the local Lagrangian bias model. While we will briefly discuss these effects in App.~\ref{app:peakeffects}, we refrain from using this model for the modelling of the stochasticity corrections, since the peak bias expansion beyond leading order has not been studied in great detail and its implementation goes beyond the scope of this study.
\par
For the continuous power spectrum in the initial conditions the local Lagrangian bias model predicts
\begin{equation}
P_\text{hh}^\text{(c)}(k,\eta_\text{i})=\left(b_1^\text{(L)}\right)^2D^2(\eta_\text{i}) P_\text{lin,0}(k)+\frac{1}{2} \left(b_2^\text{(L)}\right)^2 D^4(\eta_\text{i}) I_{22}(k),
\end{equation}
where $D(\eta)$ is the linear growth factor and the scale dependent bias correction is described by
\begin{align}
I_{22}(k)=\int \frac{\derivd^3q}{(2\pi)^3} P_\text{lin,0}(q)P_\text{lin,0}(\left|\vec k-\vec q\right|).
\end{align}
This term leads to a positive $k^0$ contribution in the low-$k$ regime.
In this sense it deviates from typical perturbative contributions to the power spectrum, which start to dominate on small scales.
For this reason, this term was partially absorbed into the shot noise by \cite{McDonald:2006cl}. We will explicitly consider the term, since it describes the effect of non-linear clustering and is responsible for super-Poissonian stochasticity.
The cross power spectrum between haloes and matter is given by
\begin{equation}
P_\text{hm}^\text{(c)}(k,\eta_\text{i})=b_1^\text{(L)}(\eta_\text{i}) D^2(\eta_\text{i}) P_\text{lin,0}(k)
\end{equation}
and obviously does not contain any second order bias corrections. This statement remains true if higher order biasing schemes are considered, since higher order biases only renormalise the bare bias parameters \cite{McDonald:2006cl}.
\par
Truncating the bias expansion is only valid if $\left\langle\delta^2\right\rangle\ll 1$, which is certainly satisfied on large scales in the initial conditions, but not necessarily on the scales relevant for halo clustering outside the exclusion radius.
On these scales one might have to consider all the higher order local bias parameters. It is beneficial to calculate this effect in configuration space where the local bias model leads to a power series in the linear correlation function
\begin{equation}
\xi^\text{(c)}(r)=\sum_i \frac{\left(b_i^\text{(L)}\right)^2}{i!} D^{2i}(\eta_\text{i}) \xi^i_\text{lin}(r)\xrightarrow{\nu \to \infty} \eh{\frac{\nu}{\delta_\text{c}}\xi_\text{lin}(r)}.
\end{equation}
This limiting form applies only for high peaks and Press-Schechter bias parameters \cite{Press:1974fo}.
We will restrict ourselves to the quadratic bias model, since it can account for the main effects and since higher order biasing schemes would require more parameters to be determined. The Press-Schechter and Sheth-Tormen prescriptions provide a rough guideline for the scaling of bias with mass and redshift, but fail to provide correct predictions for the bias amplitude. Thus we obtain the bias parameters from fits to observables not affected by stochasticity (such as the halo-matter cross power spectrum); obtaining higher order biases would require higher order spectra such as the bispectrum.
Since our discussion is mostly in Lagrangian space we will drop the superscripts E and L from now on and absorb the growth factors into the linear power spectra and correlation functions.
\subsection{Theory Including Clustering and Exclusion}\label{sec:clustexcl}
We can now use the bias model introduced above to evaluate the discrete power spectrum Eq.~\eqref{eq:discretepower}.
We have
\begin{equation}
P^\text{(d)}(k)=\frac{1}{\bar{n}}+b_1^2P_\text{lin}(k) +\frac12 b_2^2I_{22}(k)-b_1^2 V_\text{excl}[W_R*P_\text{lin}](k)-\frac12 b_2^2 V_\text{excl}[W_R*I_{22}](k)-V_\text{excl} W_R(k).
\end{equation}
The splitting of $I_{22}$, the non-linear clustering term arising from $b_2$, is somewhat counterintuitive, since we expect this term to be active only outside the exclusion scale. Furthermore, there is no corresponding term in the halo-matter or matter-matter power spectra that would cancel the continuous $I_{22}$. Thus we combine the continuous part and the exclusion correction for the non-linear clustering term into a positive correction whose small scale contributions have been removed.
\begin{equation}
P^\text{(d)}(k)=\frac{1}{\bar{n}}+b_1^2P_\text{lin}(k) +\frac12 b_2^2 I_{22}(k,R)-b_1^2 V_\text{excl}[W_R*P_\text{lin}](k)-V_\text{excl} W_R(k).
\label{eq:sncorrlocbias}
\end{equation}
Here we defined the correction term
\begin{equation}
I_{22}(k,R)=\int_R^\infty \derd^3 r\ \xi^2(r) j_0(kr).
\end{equation}
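A minimal sketch of the evaluation of this term (an assumed Gaussian toy correlation function replaces the Lagrangian $\xi(r)$, and a finite upper cutoff replaces infinity):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn

def xi_lin(r, r0=10.0, A=0.6):
    return A * np.exp(-(r / r0)**2)   # toy linear correlation function

def I22(k, R, rmax=300.0):
    # I_22(k,R) = 4 pi int_R^rmax dr r^2 xi^2(r) j0(kr)
    val, _ = quad(lambda r: r**2 * xi_lin(r)**2 * spherical_jn(0, k * r),
                  R, rmax, limit=500)
    return 4.0 * np.pi * val

print(I22(1e-3, R=8.0))  # low-k amplitude of the non-linear clustering term
\end{verbatim}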
A simpler version of Eq.~\eqref{eq:sncorrlocbias} has been presented in \cite{Smith:2007sc}, where the non-linear clustering is neglected and the results are presented in Eulerian rather than Lagrangian space.
In the $k\to 0$ limit the Fourier transform simplifies to a spatial average over the correlation function
\begin{equation}
P^\text{(d)}(k)\xrightarrow{k\to0}\frac{1}{\bar{n}}+\frac12 b_{2}^2 \int_R^\infty \derd^3 r\ \xi^2(r)-b_{1}^2\int_0^R \derd^3 r\ \xi(r)
-V_\text{excl},
\end{equation}
where the linear bias term vanishes due to $P_\text{lin}(k)\xrightarrow{k\to0}0$.
The fact that the integral over $\xi^2$ runs only from the exclusion scale to infinity mitigates the smoothing dependence of the correction, since smoothing on the scale of the halo affects the correlation function only on the halo scale, which is by definition smaller than the exclusion scale.
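Numerically, the competition between the corrections in this limit can be illustrated as follows (a minimal sketch with assumed toy values for $b_1$, $b_2$, $R$ and $\bar n$, and the same toy correlation function as above):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

b1, b2, R, nbar = 1.5, 1.0, 8.0, 3.72e-5   # assumed illustrative parameters

def xi_lin(r, r0=10.0, A=0.6):
    return A * np.exp(-(r / r0)**2)         # toy linear correlation function

V_excl = 4.0 * np.pi / 3.0 * R**3
clust = 0.5 * b2**2 * 4.0 * np.pi * quad(lambda r: r**2 * xi_lin(r)**2,
                                         R, 300.0)[0]
excl = b1**2 * 4.0 * np.pi * quad(lambda r: r**2 * xi_lin(r), 0.0, R)[0]

P0 = 1.0 / nbar + clust - excl - V_excl     # k -> 0 limit of P^(d)
print(P0 - 1.0 / nbar)                      # net stochasticity correction
\end{verbatim}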
\renewcommand{\tabcolsep}{0.2cm}
\begin{table}[t]
\begin{tabular}{lccccc}
\hline
\hline
bin&$M$&$R_\text{excl}^\text{(L)}$ & $b_1^\text{(L)}$ &$b_2$& $b_1^\text{(E)}$ \\
\hline
\input{fittab}
\hline
\hline
\end{tabular}
\caption{Mean masses, exclusion radii, first and second order Lagrangian and Eulerian bias parameters for our $z=0$ halo sample.
Masses are in units of $\ h^{-1} M_\odot$ and radii in units of $\ h^{-1}\text{Mpc}$. The mass dependence of these parameters is also plotted in
Fig.~\ref{fig:biasparameters}. Note that the second order bias parameter is just a phenomenological fitting parameter used to get a reasonable representation of the correlation function. We do not claim that our fitting procedure yields an accurate second order bias parameter for the sample. For this purpose one has to employ the bispectrum, which yields second order bias parameters that are in much better agreement with the peak-background split expectation but fail to describe the correlation function.}
\end{table}
\subsection{Stochasticity Matrix}
We will now consider the power spectrum for a set of non-overlapping halo mass bins. We will consider their auto-power spectra and cross power spectra between different halo mass bins $i$ and $j$ and denote this quantity $P_{ij}$, whereas the cross power between a certain mass bin and the matter is denoted $P_{i\delta}$.
The sum over equal pairs in Equation \eqref{eq:equalsums} is only present for the auto power spectra and thus the $1/\bar n$ shot noise affects only the diagonal entries of the power spectrum matrix $P_{ij}(k)$. On the other hand, exclusion also affects the off-diagonal matrix entries, since haloes of different mass are by definition distinct objects and thus cannot overlap. Furthermore, different mass haloes are affected by non-linear clustering, since the probability to find any sort of massive object ($M>M_*$) in the vicinity of a massive object is enhanced. For simplicity we will employ equal number density mass bins, which all have the same fiducial shot noise $1/\bar{n}$.
\par
When trying to extract the amplitude and scale dependence of the noise, we need to remove all the contributions due to linear bias from the halo power spectra.
For this purpose, we will employ the stochasticity matrix as defined in \cite{Hamaus:2009op} (see also Eq.~\eqref{eq:snmatrixdiag} for the definition of the diagonal)
\begin{equation}
(2\pi)^3 \delta^\text{(D)}(\vec k+\vec k') C_{ij}(k)=\left\langle\left[\delta_i(\vec k)-b_{1,i} \delta(\vec k)\right]\left[\delta_j(\vec k')-b_{1,j} \delta(\vec k')\right]\right\rangle,
\label{eq:snmatrix}
\end{equation}
such that we have in terms of the power spectra
\begin{equation}
C_{ij}(k)=P_{ij}(k)-b_{1,i} P_{\delta j}(k)-b_{1,j} P_{\delta i}(k)+b_{1,i} b_{1,j} P_{\delta \delta}(k).
\label{eq:snmatrixpower}
\end{equation}
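Given measured spectra on a common $k$-grid, this definition is a simple array operation, as in the sketch below (the array shapes are our assumed conventions for this illustration):
\begin{verbatim}
import numpy as np

def stochasticity_matrix(P_hh, P_hm, P_mm, b1):
    # C_ij(k) of Eq. (snmatrixpower)
    # P_hh: (nbin, nbin, nk) halo-halo spectra
    # P_hm: (nbin, nk) halo-matter cross spectra
    # P_mm: (nk,) matter auto spectrum, b1: (nbin,) linear biases
    return (P_hh
            - b1[:, None, None] * P_hm[None, :, :]
            - b1[None, :, None] * P_hm[:, None, :]
            + np.outer(b1, b1)[:, :, None] * P_mm[None, None, :])
\end{verbatim}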
The model introduced above can be straightforwardly generalized to multiple mass bins and their respective cross power spectra by the following replacements $b_1^2\to b_{1,i}b_{1,j}$, $b_2^2\to b_{2,i}b_{2,j}$ and $R\to R_{ij}=(R_i+R_j)/2$. The exact form of the combined exclusion radius is somewhat debatable, but for now we will employ the arithmetic mean. The resulting correction to the linear local bias model is given by
\begin{equation}
C_{ij}(k)=\frac{1}{\bar{n}}\delta^\text{(K)}_{ij}+\frac{1}{2}b_{2,i}b_{2,j} I_{22}(k,R_{ij})-V_{\text{excl},ij}W_{R_{ij}}(k)-b_{1,i}b_{1,j}V_{\text{excl},ij}[W_{R_{ij}}*P_\text{lin}](k),
\label{eq:sncorrmod}
\end{equation}
where $V_{\text{excl},ij}=4\pi R_{ij}^3/3$. We see that the definition of the stochasticity matrix removes all occurrences of the linearly biased power spectrum. In Eulerian space both $P_\text{hm}$ and $P_\text{hh}$ would have an additional contribution from $b_2 I_{12}$, where $I_{12}$ describes the cross correlation between non-linear bias and non-linear matter clustering. The term is defined as
\begin{equation}
I_{12}(k)=\int \frac{\derivd^3q}{(2\pi)^3} P_\text{lin,0}(q)P_\text{lin,0}(\left|\vec k-\vec q\right|)F_2(\vec q, \vec k-\vec q),
\end{equation}
where $F_2(\vec q_1,\vec q_2)$ is the standard perturbation mode coupling kernel \cite{Bernardeau2002}.
The definition of the stochasticity matrix also removes all occurrences of $I_{12}$.
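For a numerical evaluation of Eq.~\eqref{eq:sncorrmod} it is convenient to compute the convolution term in configuration space: since $V_{\text{excl},ij}W_{R_{ij}}(k)$ is the Fourier transform of the ball indicator $\Theta(R_{ij}-r)$, one has $V_{\text{excl},ij}[W_{R_{ij}}*P_\text{lin}](k)=4\pi\int_0^{R_{ij}}\derd r\,r^2\xi_\text{lin}(r)j_0(kr)$. A sketch (toy correlation function as before; all parameters are placeholders):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn

def xi_lin(r, r0=10.0, A=0.6):
    return A * np.exp(-(r / r0)**2)      # toy linear correlation function

def W_R(k, R):
    x = np.maximum(k * R, 1e-8)
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def ball_conv(k, R):
    # V_excl [W_R * P_lin](k) = 4 pi int_0^R dr r^2 xi_lin(r) j0(kr)
    return 4.0 * np.pi * quad(lambda r: r**2 * xi_lin(r)
                              * spherical_jn(0, k * r), 0.0, R)[0]

def I22(k, R, rmax=300.0):
    return 4.0 * np.pi * quad(lambda r: r**2 * xi_lin(r)**2
                              * spherical_jn(0, k * r), R, rmax)[0]

def C_model(k, b1i, b1j, b2i, b2j, Ri, Rj, nbar, auto=False):
    # one element of Eq. (sncorrmod) at wavenumber k
    Rij = 0.5 * (Ri + Rj)
    Vij = 4.0 * np.pi / 3.0 * Rij**3
    C = (0.5 * b2i * b2j * I22(k, Rij) - Vij * W_R(k, Rij)
         - b1i * b1j * ball_conv(k, Rij))
    return C + (1.0 / nbar if auto else 0.0)
\end{verbatim}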
\section{Evaluations and Comparison to Simulations}\label{sec:simul}
\subsection{The Simulations \& Halo Sample}
Our numerical results are based on the Z\"urich horizon zHORIZON simulations, a suite of 30 pure dissipationless dark matter simulations of the $\Lambda$CDM cosmology in which the matter density field is sampled by $N_\text{p} = 750^3$ dark matter particles. The box length of $1500 \ h^{-1}\text{Mpc}$, together with the WMAP3 \cite{Spergel:2007th} inspired cosmological parameters ($\Omega_\text{m}=0.25,\Omega_\Lambda=0.75,n_\text{s}=1,\sigma_8=0.8$), then implies a particle mass of $M_\text{p} = 5.55\tim{11}\ h^{-1} M_\odot$. The total simulation volume is $V\approx 100 \ h^{-3}\text{Gpc}^3$ and enables precision studies of the clustering statistics on scales up to a few hundred comoving megaparsecs.
\par
The simulations were carried out on the ZBOX2 and ZBOX3 computer-clusters of the Institute for Theoretical Physics at the University of Zurich using the publicly available GADGET-II code \cite{Springel:2005mi}. The force softening length of the simulations used for this work was set to $60\ h^{-1}\text{kpc}$, consequently limiting our considerations to larger scales. The transfer function at redshift $z_\text{f} = 0$ was calculated using the CMBFAST code of \cite{Seljak:1996cmb} and then rescaled to the initial redshift $z_\text{i} = 49$ using the linear growth factor. For each simulation, a realization of the power spectrum and the corresponding gravitational potential were calculated. Particles were then placed on a Cartesian grid of spacing $\Delta x = 2 \ h^{-1}\text{Mpc}$ and displaced according to second order Lagrangian perturbation theory. The displacements and initial conditions were computed with the 2LPT code of \cite{Crocce:2006tr}, which leads to slightly non-Gaussian initial conditions.
\par
Gravitationally bound structures are identified at redshift $z_\text{f}=0$ using the B-FoF algorithm kindly provided by Volker Springel with a linking length of $0.2$ mean inter-particle spacings. Haloes with less than 20 particles were rejected such that we resolve haloes with $M > 1.2\tim{13} \ h^{-1} M_\odot$. The halo particles are then traced back to the initial conditions at $z_\text{i}=49$ and the corresponding centre of mass is identified. We split the halo sample into $10$ equal number density bins with a number density of $\bar n=3.72\tim{-5} h^{3}\text{Mpc}^{-3}$ leading to a shot noise contribution to the power at the level of $P_\text{SN}\approx2.7\tim{4} \ h^{-3}\text{Mpc}^3$.
\subsection{The Correlation Function in Lagrangian Space}\label{sec:correlsn}
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{Figures/correl_bin5.pdf}
\includegraphics[width=0.49\textwidth]{Figures/correl_bin5_rc.pdf}\\
\caption{Example of the halo-halo correlation function of the traced back haloes for mass bin V. The vertical solid line is the fitted exclusion radius.
The dot-dashed line shows the linear bias contribution, whereas the dashed line shows linear plus quadratic bias. Note that the second order bias parameter was fitted to the correlation function and does deviate quite strongly from the PBS prediction.
The red solid line shows a simple model for halo exclusion Eq.~\eqref{eq:smoothedstep}. In the right panel we show the integrand of the Fourier transform $r^3 \xi_\text{hh}(r)$, which is of essential importance for the stochasticity modelling.}
\label{fig:correl_ic}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{Figures/excluradii_publ.pdf}
\includegraphics[width=0.49\textwidth]{Figures/initialbias.pdf}
\caption{\emph{Upper left panel: }Exclusion radii measured in the initial and final halo-halo correlation functions. The exclusion radius is defined as a fixed fraction of the maximum in the halo-halo correlation function (see Fig.~\ref{fig:correl_ic}). The red line shows the na\"{\i}ve estimate of the exclusion radius $R=(3M/4\pi \bar \rho)^{1/3}$, whereas the blue line shows $R=(3M/4\pi \bar \rho (1+\delta))^{1/3}$ with $\delta\approx 30$. This fitted overdensity is clearly distinct from the spherical collapse prediction of $\delta=180$. \emph{Lower left panel: }Ratio of the initial and final exclusion radii.
\emph{Right panel: }Bias parameters fitted from the halo-matter cross power spectrum ($b_1$), the halo-matter bispectrum and the correlation function ($b_2$). Note that $b_2$ fitted from the correlation function deviates strongly from the value predicted by the peak-background split (blue solid and dashed for positive and negative) and the value inferred from the bispectrum. The dash-dotted lines show a fit used for extrapolation purposes.}
\label{fig:exclrad}
\label{fig:biasparameters}
\end{figure}
The corrections to the halo power spectrum in our model are motivated by certain features in the halo-halo correlation function. While the fiducial stochasticity affects the correlation function only at the origin, the two other effects, exclusion and non-linear clustering, should be clearly visible in the correlation function at finite distances.
For this purpose we measure the correlation function of the traced back haloes for our $10$ halo mass bins using direct pair counting.
\par
In Fig.~\ref{fig:correl_ic} we show the correlation function for mass bin V. The log-linear plot clearly shows that the correlation function is $-1$ on small scales and shows a smooth transition to positive values around the exclusion scale visualized by the vertical black line. The exclusion scale, shown in Fig.~\ref{fig:exclrad}, is fitted both in the initial and final conditions as $0.8$ times the scale of the maximum of the correlation function. The ratio between the initial and final exclusion radii is roughly $3$ for all mass bins. The spherical collapse model suggests that haloes collapse by a factor $5$, but there is no reason to believe that protohaloes that are in direct contact in Lagrangian space are still touching each other in Eulerian space. Thus it is reasonable to expect a somewhat smaller reduction in the exclusion scale.
On large scales $30 \ h^{-1}\text{Mpc}<r<90 \ h^{-1}\text{Mpc}$ the correlation function is reasonably well described by linear bias shown in the Figure as a dot-dashed line.
We infer the linear bias parameter from the ratio of halo-matter cross power spectrum and matter power spectrum on scales $k< 1.5 \tim{-2} \ h\text{Mpc}^{-1}$
\begin{equation}
\hat{b}_\text{1,hm}=\frac{\hat{P}_\text{hm}}{\hat{P}_\text{mm}}.
\end{equation}
See Fig.~\ref{fig:scaledepbiasinit} and App.~\ref{app:peakeffects} for why we have to restrict the linear bias fitting to large scales even in the initial conditions.
The advantage of the cross power spectrum is that it should be free of stochasticity contributions and fully described by linear bias \cite{Frusciante:2012la} on large scales.\\
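The corresponding estimator is trivial to implement (a sketch; the $k$-cut mirrors the one quoted above):
\begin{verbatim}
import numpy as np

def b1_from_cross(k, P_hm, P_mm, kmax=1.5e-2):
    # large-scale linear bias from the stochasticity-free cross spectrum
    sel = k < kmax
    return np.mean(P_hm[sel] / P_mm[sel])
\end{verbatim}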
There is a clear enhancement of the data in Fig.~\ref{fig:correl_ic} compared to the linear bias model on small scales. Thus we consider the quadratic bias model
\begin{equation}
\xi_\text{hh}^\text{(c)}(r)=b_1^2 \xi_\text{mm,lin}(r)+\frac{1}{2}b_2^2 \xi_\text{mm,lin}^2(r)
\label{eq:xibiassecond}
\end{equation}
and fit for the quadratic bias parameter on scales exceeding the maximum. The resulting continuous correlation function is shown as the dashed line in Fig.~\ref{fig:correl_ic}. It does not fully account for the enhanced clustering outside the exclusion scale.
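The fit itself reduces to simple least squares, as in the sketch below (the input arrays \texttt{xi\_lin\_fit} and \texttt{xi\_hh\_fit} are hypothetical placeholders for the linear and measured correlation functions on the fitting range; as noted below, only the magnitude of $b_2$ is constrained by this functional form).
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def xi_model(xi_lin_r, b1, b2):
    # quadratic bias model of Eq. (xibiassecond)
    return b1**2 * xi_lin_r + 0.5 * b2**2 * xi_lin_r**2

def fit_b2(xi_lin_fit, xi_hh_fit, b1):
    # fit |b2| with b1 held fixed from the cross power spectrum
    popt, _ = curve_fit(lambda x, b2: xi_model(x, b1, b2),
                        xi_lin_fit, xi_hh_fit, p0=[1.0])
    return abs(popt[0])
\end{verbatim}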
The inferred bias parameters are shown in Fig.~\ref{fig:biasparameters} and will be discussed in more detail below. Note that the above fit is performed using an un-smoothed version of the linear correlation function, whereas the local bias model relies on an explicit smoothing scale. Here, we argue that the smoothing scale for the local bias model should be related to the Lagrangian scale of the haloes and thus be smaller than the exclusion scale. In this case, the smoothing scale typically does not affect the scale dependence of the correlation function (except around the BAO scale).
In our above discussion we have assumed a sharp transition between the exclusion regime and the clustering regime. This is certainly unphysical, as is obvious in Fig.~\ref{fig:correl_ic}. The smoothness is probably caused by a number of phenomena, for instance mass variation within the mass bins (this should be quite small, $<1.3 \ h^{-1}\text{Mpc}$ for the highest mass bin, which has $R\approx 8 \ h^{-1}\text{Mpc}$, and even smaller for the lower mass bins) or alignment of triaxial haloes.
The lack of a physically motivated, working model for the transition forces us to employ a somewhat ad hoc functional form for the step, which is based on a lognormal distribution of halo distances (see App.~\ref{app:exclkern})
\begin{equation}
\xi^\text{(d)}_\text{hh}(r)=\frac{1}{2}\left[1+\text{erf}\left(\frac{\log_{10}(r/R)}{\sqrt{2}\sigma}\right)\right]\bigl[\xi^\text{(c)}_\text{hh}(r)+1\bigr]-1.
\label{eq:smoothedstep}
\end{equation}
For alternative implementations of exclusion windows in the context of the halo model see \cite{Tinker:2005on,Valageas:2011co}.
The resulting shape of the correlation function is shown as the red solid line, where the smoothing was chosen to be $\sigma\approx 0.09$ and seems to be quite independent of halo mass. The model clearly underestimates the peak in the data in the log-linear plot.
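The kernel is a direct transcription of Eq.~\eqref{eq:smoothedstep} (a minimal sketch, with the fitted $\sigma\approx0.09$ as default):
\begin{verbatim}
import numpy as np
from scipy.special import erf

def exclusion_step(r, R, sigma=0.09):
    # lognormal step of Eq. (smoothedstep): -> 0 inside R, -> 1 outside
    return 0.5 * (1.0 + erf(np.log10(r / R) / (np.sqrt(2.0) * sigma)))

def xi_discrete(r, xi_cont, R, sigma=0.09):
    # xi_hh^(d) = step * (1 + xi_hh^(c)) - 1, approaching -1 well inside R
    return exclusion_step(r, R, sigma) * (1.0 + xi_cont) - 1.0
\end{verbatim}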
Our final goal is to construct an accurate model for the effect of exclusion and non-linear clustering on the power spectrum. Thus, we should not only check the validity of our model on plots of the correlation function itself but also on the integrand in the Fourier integrals $r^3 \xi_\text{hh}(r)\derd \ln r$. We do so in the right panel of Fig.~\ref{fig:correl_ic}, where it is obvious that the model does not reproduce the exact shape of the correlation function outside of the exclusion scale. However, we certainly improved over the naive linear biasing on all scales and obtained a reasonable parametrization of exclusion and non-linear clustering effects.
\par
Let us now come back to the mass dependence of the inferred bias and exclusion parameters. As we show in Fig.~\ref{fig:biasparameters}, the linear bias $b_1$ is in very good agreement with the bias parameters inferred from a Sheth-Tormen mass function \cite{Scoccimarro:2001ho} rescaled to the initial conditions at $z_\text{i}=49$. For the second order bias we compare the measurement from the correlation function to measurements from the bispectrum of the protohaloes and the second order bias inferred from the Sheth-Tormen mass function. The latter agrees reasonably well with the bispectrum measurement reproducing the zero crossing in the theoretical bias function. Note that the bispectrum measurement (for details of the approach see \cite{Baldauf:2012ev}) uses only large scale information and is thus a clean probe for second order bias. Isolating second order bias effects in the correlation function is less straightforward. With decreasing scale higher and higher bias parameters become important, and to our knowledge there is no established scale down to which a certain order of bias can be trusted to a given precision. Our fitting procedure was led by the goal of obtaining a good parametrization of the correlation function, which could subsequently be used to calculate the corresponding power spectra and the corrections to the linear bias model. Note that due to the functional form of Eq.~\eqref{eq:xibiassecond} this fitting approach allows inference of the magnitude of $b_2$, but not its sign. The second order bias parameters obtained in this way deviate significantly from the theoretical bias function and the bispectrum measurement. The most severe failure of the model is the imaginary $b_2$ for the highest mass bin. In this case the deviation is connected to corrections arising from the peak constraint, as we explain in App.~\ref{app:peakeffects}. We find that the initial Lagrangian second order bias parameter in Fig.~\ref{fig:biasparameters} can be roughly fitted as follows
\begin{align}
b_2=&1100\left(\frac{M}{10^{13} \ h^{-1} M_\odot}\right)^{0.35}\eh{-\left(\frac{M}{10^{14}\ h^{-1} M_\odot}\right)^2}.
\label{eq:b2Mrelation}
\end{align}
We will use this fitting function for extrapolation in mass and redshift in \S \ref{sec:rscorr}. The initial second order bias parameters for halo samples identified at higher redshifts ($z=0.5$ and $z=1$) are roughly the same as for the $z=0$ halo sample. \\
Although we will use this $b_2$ fit to predict the resulting stochasticity corrections, we do not believe that the observed scale dependence of the correlation function is solely a second order bias effect. We checked an expansion of Eq.~\eqref{eq:xibiassecond} to higher orders in the correlation function using peak-background split bias parameters. Even up to tenth order there is no considerable improvement in the fit. Thus we argue that the enhancement is a non-perturbative effect (e.g. peak bias) and consider the $\xi^2$ scale dependence as a reasonably well working phenomenological parametrization rather than a physical truth. We hope to shed more light on this issue in a forthcoming paper.
\subsection{Stochasticity Matrix}
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{Figures/snmat_minit_halinit_k.pdf}
\includegraphics[width=0.49\textwidth]{Figures/snmat_highk.pdf}
\caption{Diagonals of the stochasticity matrix $C_{ij}$. \emph{Left panel: }Initial conditions $z_\text{i}=49$. \emph{Right panel: }Final field $z_\text{f}=0$.
There is remarkable agreement in the large scale amplitude between initial conditions and final field besides the strong difference in the bias parameters and growth factors. We highlight this fact by the horizontal dashed lines that have the same amplitude in both panels and are matched to the large scale stochasticity matrix in the initial conditions.
For both panels, there is clear evidence for stochasticity going to fiducial $1/\bar{n}$ for high wavenumbers, and a modification due to exclusion and clustering for $k\leq 1/R$, where $R$ is the scale of the halo at the corresponding redshift. Note the different scaling of the $k$-scale in the two plots. The mass increases from blue to orange, i.e., top to bottom.}
\label{fig:snmatsim}
\end{figure}
In Fig.~\ref{fig:snmatsim} we show the diagonals of the stochasticity matrix measured in our simulations in Lagrangian space ($z_\text{i}=49$) and Eulerian space ($z_\text{f}=0$). The most remarkable observation in this plot is the agreement between the results, given the different amplitude of the growth factors and the linear bias parameters at these two times. This is a result of the fact that gravity cannot introduce or alter $k^0$ dependencies \cite{Peebles:1980th} due to mass and momentum conservation. This can for instance be seen in the low $k$-limit of standard perturbation theory \cite{Bernardeau2002}: The mode coupling term $P_{22}$ is a gravity-gravity correlator and thus scales as $k^4$, whereas the propagator term $P_{13}$ is a gravity-initial condition correlator and scales as $k^2 P_\text{lin}$.\\
For the highest mass bin there is a clear suppression of the noise level on the largest scales which then asymptotes to the fiducial value $1/\bar{n}_\text{h}$ at a scale $k\approx 1/R \approx 0.3 \ h\text{Mpc}^{-1}$. Since the radius of the halo shrinks during collapse, this scale is found at a higher wavenumber in Eulerian space. For the less massive haloes the behavior is not completely monotonic. On large scales we find a noise level slightly exceeding the fiducial value. Going to higher wavenumbers, the fiducial value is crossed, the residual reaches a minimum and finally asymptotes to $1/\bar{n}$. This behavior can be explained as follows: the clustering scale exceeds the exclusion scale and, as we have argued in \S \ref{ssec:toyclust}, the enhanced correlation on the clustering scale leads to a positive contribution on the largest scales which decays at lower wavenumbers than the negative exclusion correction.
\par
The wavenumber at which the stochasticity asymptotes to its fiducial value in the final field is at fairly high wavenumbers, exceeding the Nyquist frequencies of both $N_\text{c}=512$ and $N_\text{c}=1024$ grids. To probe smaller scales we employ a mapping technique \cite{Jenkins:1998ev,Smith:2003st} that allows us to resolve small scales without having to increase the grid size beyond $N_\text{c}=512$. The technique consists of splitting the box into $l$ parts per dimension and adding these parts to the same grid. This allows inference of every $l$-th mode but also increases the Nyquist frequency by a factor $l$. We use several different mapping factors $l=4,6,12,20,50$ to probe all the scales up to $k\approx 20 \ h\text{Mpc}^{-1}$.
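A schematic implementation of this folding (a sketch with nearest-grid-point assignment; the actual measurements use the standard gridding of the analysis pipeline):
\begin{verbatim}
import numpy as np

def fold_particles(pos, L, l, Nc=512):
    # box folding: remap x -> (l x) mod L before gridding; this keeps only
    # every l-th mode of the original box but raises the Nyquist frequency
    # of the Nc^3 grid by a factor l
    folded = (l * pos) % L
    grid, _ = np.histogramdd(folded, bins=(Nc, Nc, Nc),
                             range=((0, L), (0, L), (0, L)))
    return grid
\end{verbatim}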
\par
In Fig.~\ref{fig:snmatsim12bins} we show the diagonals of the stochasticity matrix for one and two mass bins respectively. For the one bin case we select all the haloes in our simulation, effectively combining all the ten mass bins resulting in $M_\text{1bin}=3.84\tim{13}\ h^{-1} M_\odot$. For the two bin case we combine the five lightest and the five heaviest mass bins resulting in masses $M_\text{2bin,I}=1.47\tim{13}\ h^{-1} M_\odot$ and $M_\text{2bin,II}=6.21\tim{13}\ h^{-1} M_\odot$. The plot shows both the initial condition and the final stochasticity and the two agree very well in the low-$k$ limit. The stochasticity correction does not depend on the fiducial stochasticity and thus the scale dependence of the total stochasticity is more pronounced in the wider mass bins due to their lower fiducial shot noise. Interestingly, the stochasticity correction for the one bin case vanishes in the low-$k$ limit, but is negative in the intermediate regime. This is a sign of a perfect cancellation between exclusion and non-linear clustering. The final stochasticity seems to be a $k$-rescaled version of the initial stochasticity. Finally, let us stress that the power spectrum of the wide bins cannot be obtained by a summation of the contributing bin-power spectra from the ten bin case, since one has to account for the off-diagonal components of the stochasticity matrix.
\par
In the left panel of Fig.~\ref{fig:biascorr} we compare our model Eq.~\eqref{eq:sncorrmod} to the measured stochasticity matrix of the ten bins in the initial conditions. The only modification to the model is that we replace the hard cutoff by the smoothed transition Eq.~\eqref{eq:smoothedstep}. We employ the parameters obtained in the fit to the corresponding halo correlation functions in \S \ref{sec:correlsn}. The data points in the plot are copied from Fig.~\ref{fig:snmatsim}. Given the differences between the model and the correlation functions in Fig.~\ref{fig:correl_ic}, there is a reasonably good agreement both in large scale amplitude and scale dependence of the stochasticity correction. We can conclude that while being a relatively crude fit to the correlation function, our model can account for the major effects, exclusion and non-linear clustering. The drawback is that this model lives in Lagrangian space and cannot be straightforwardly applied to the halo power spectrum in Eulerian space.\\
In Fig.~\ref{fig:crossterms} we show the off-diagonal terms of the stochasticity matrix in the initial conditions and our corresponding model predictions. As for the diagonals discussed above, the corrections can be either positive or negative, depending on whether exclusion or non-linear clustering dominates. The model predictions are in reasonable agreement with the measurements except for the highest mass bin. This failure is connected to the fact that the $b_2$ parameter for the highest mass bin is imaginary, i.e., we have to set the second order bias term in the cross correlations to zero. This is a severe problem of our overly simplistic model, which is related to the importance of the peak effects for the correlation function of the highest mass bin (see App.~\ref{app:peakeffects}).\\
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{Figures/scaledepsn_1bin_k}
\includegraphics[width=0.49\textwidth]{Figures/scaledepsn_2bin_k.pdf}
\caption{\emph{Left panel: }Diagonals of the stochasticity matrix for one mass bin containing all the haloes in our simulation. Open points show the initial condition measurement, whereas the filled points show the final value. The horizontal thick solid line shows the fiducial shot noise $1/\bar{n}=2680 \ h^{-3}\text{Mpc}^3$. The Eulerian bias is $b_1^\text{(E)}=1.49$. On the largest scales there seems to be a cancellation between the exclusion and non-linear clustering contributions resulting in no net correction to the fiducial shot noise. \emph{Right panel: }Same as left panel but for splitting our haloes into two mass bins with equal number density. The upper red points are the measurement for the lighter, lower bias bin $b_1^\text{(E)}=1.25$ and the lower blue points are for the more massive, higher bias bin $b_1^\text{(E)}=1.74$. The fiducial shot noise is $1/\bar{n}=5362 \ h^{-3}\text{Mpc}^3$. Note that both mass bins show a significant scale dependence of the stochasticity.}
\label{fig:snmatsim12bins}
\end{figure}
Let us try to gain some more insight on where the stochasticity corrections arise in Lagrangian and Eulerian space. For this purpose we consider the configuration space version of the diagonal of the stochasticity matrix defined in Eq.~\eqref{eq:snmatrixpower}
\begin{equation}
C_{ii}(r)=\xi_{ii}(r)-2b_{1,i}\xi_{i\delta}(r)+b_{1,i}^2\xi_{\delta\delta}(r).
\end{equation}
The stochasticity level in the $k\to 0$ limit is then given by
\begin{equation}
C_{ij}(k)\xrightarrow{k\to0} 4\pi \int_0^\infty \derd \ln r \ r^3 C_{ij}(r).
\label{eq:sncontrib}
\end{equation}
In Fig.~\ref{fig:snmatreal}, we show the above integral as a function of the upper integration boundary; the full large scale stochasticity correction would be obtained by taking this boundary to infinity. Comparing the contributions in the initial conditions and the final configuration, we clearly see that the large scale stochasticity arises on different scales at the two times: the negative stochasticity corrections are dominated by much smaller scales in the final configuration than in the initial conditions. At these scales the halo-matter and matter-matter correlation functions are dominated by the one halo term, i.e., the halo profile, which complicates quantitative predictions of the exclusion effect in the final configuration and motivates our Lagrangian approach.
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{Figures/sn_correl_sum_init_b.pdf}
\includegraphics[width=0.49\textwidth]{Figures/sn_correl_sum_b.pdf}
\caption{Cumulative contributions to the stochasticity correction up to scale $r$ (see Eq.~\eqref{eq:sncontrib}) in configuration space for our 10 mass bin sample. Open symbols show negative contributions and filled symbols positive contributions. \emph{Left panel: }Initial conditions. \emph{Right panel: }Final configuration at $z=0$. We clearly see that the corrections in the initial conditions are dominated by larger scales than in the final configuration, where the negative corrections are clearly in the one halo regime, in which both the halo-matter and matter-matter correlation functions are highly non-linear. This gives further motivation for the modelling of the effect in Lagrangian space.
The horizontal solid (dashed) lines show the positive (negative) stochasticity corrections inferred from Fig.~\ref{fig:snmatsim}.}
\label{fig:snmatreal}
\end{figure}
\subsection{Redshift and Mass Dependence of the Correction}\label{sec:rscorr}
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{Figures/sndiag_meas_scaledep.pdf}
\includegraphics[width=0.49\textwidth]{Figures/bias_corr.pdf}
\caption{\emph{Left panel: }Theoretical prediction for the diagonals of the stochasticity matrix for the ten mass bins. The parameters $b_2$ and $R$ are measured from the correlation function. The points are the coarsely binned simulation measurement taken from the left panel of Fig.~\ref{fig:snmatsim}.
\emph{Right panel: }Bias correction as defined in Eq.~\eqref{eq:fracbiascorr}. While there is clear scale dependence for weakly non-linear wavenumbers $k> 0.1 \ h\text{Mpc}^{-1}$ the correction is fairly flat on large scales, where the cosmic variance errors are largest. It could thus be easily interpreted as a higher or lower bias value. The points show the relative deviation of the bias inferred from the halo-halo power spectrum in the final configuration from the bias inferred from the halo-matter cross power spectrum.}
\label{fig:biascorr}
\end{figure}
The subtraction of the fiducial $1/\bar{n}$ shot noise from the power spectrum will lead to a biased estimate of the continuous halo power spectrum. Thus, estimating the bias as
\begin{equation}
\hat{b}_\text{1,hh}=\sqrt{\frac{\hat{P}_\text{hh}-1/\bar{n}}{\hat{P}_\text{mm}}}
\end{equation}
will lead to a flawed estimate of the bias. Indeed, it has been found in simulations that the biases estimated from the auto- and cross-power spectra are generally not in agreement \cite{Elia:2011th,Okumura:2012rs}.
Studying, for instance, Table I in \cite{Okumura:2012rs}, we see that $\hat{b}_\text{1,hh}$ exceeds $\hat{b}_\text{1,hm}$ for low mass haloes at redshift $0$, indicating that the fiducial shot noise subtraction underestimates the true noise level. For high mass objects the opposite happens: the bias from the cross-power exceeds the bias from the auto-power, indicating that the employed $1/\bar{n}$ shot noise subtraction overestimates the true noise level.\\
Let us try to understand this effect in more detail. Based on our model, subtraction of the fiducial shot noise on large scales leaves us with the linear bias term plus the stochasticity correction
\begin{equation}
\hat{P}_\text{hh}(k)-\frac{1}{\bar{n}}=\Delta P_\text{hh}(k)+b_1^2 P_\text{lin}(k)=\hat{b}_\text{1,hh}^2 \hat{P}_\text{mm}(k),
\end{equation}
where in the absence of shot noise in the cross-power spectrum $b_1=\hat{b}_\text{1,hm}$.
Thus the systematic error on the linear bias parameter is
\begin{equation}
\frac{\Delta b_1}{b_1}=\frac{\hat{b}_{1,\text{hh}}}{b_1}-1\approx \frac12 \frac{\Delta P_\text{hh} }{b_1^2 P_\text{mm}}.\label{eq:fracbiascorr}
\end{equation}
Consequently the ratio $\hat{b}_\text{1,hh}/\hat{b}_\text{1,hm}$ is a function of mass and redshift due to the mass and redshift dependence of the parameters of the model. We show the $k$-dependence of the bias correction in Fig.~\ref{fig:biascorr}. Since we do not have a reliable model relating the scale dependent stochasticity matrix in Lagrangian space to the one in Eulerian space, we employ the scale dependent stochasticity model from Lagrangian space but divide by the present day linear power spectrum. This procedure should provide a reasonable estimate of the bias corrections on large scales. The linear bias is usually estimated close to the peak of the linear power spectrum, where it is approximately flat and where non-linear corrections are believed to be negligible. As a result, the bias correction is also fairly flat and would lead to a $1\%$ overestimation of the bias for low mass objects and a $3-4\%$ underestimation for clusters. This behaviour can qualitatively explain the deviations found in \cite{Okumura:2012rs}.
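As a simple numerical illustration of Eq.~\eqref{eq:fracbiascorr} (a sketch with made-up input values, not a measurement from our simulations):
\begin{verbatim}
import numpy as np

def b1_hat_from_auto(P_hh, nbar, P_mm):
    # naive auto-power bias estimate with fiducial shot noise subtraction
    return np.sqrt((P_hh - 1.0 / nbar) / P_mm)

def bias_correction(delta_P_hh, b1, P_mm):
    # Delta b1 / b1 ~ 0.5 * Delta P_hh / (b1^2 P_mm)
    return 0.5 * delta_P_hh / (b1**2 * P_mm)

# e.g. a -1800 (Mpc/h)^3 stochasticity correction at b1 = 1.5 and
# P_mm = 3e4 (Mpc/h)^3 gives a roughly -1.3% bias shift
print(bias_correction(-1800.0, 1.5, 3.0e4))
\end{verbatim}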
\par
In Fig.~\ref{fig:sncorrmassdep} we show the amplitude of the low-$k$ limit of the stochasticity correction for ten equal halo mass bins at redshifts $z=0,0.5,1$. We overplot the theoretical expectation based on our model, with linear bias parameters from the peak background split and second order bias parameters obtained from our phenomenological $b_2$ relation in Eq.~\eqref{eq:b2Mrelation}. In particular, we calculate the Lagrangian bias parameters and exclusion radii corresponding to the halo samples at $z=0,0.5,1$ and use them to predict the stochasticity correction. As a general result we see that there is a negative correction for high masses and a positive correction for low masses, with a zero crossing scale that decreases with increasing redshift. The model captures the trends in the measurements relatively well. We also overplot the low-$k$ amplitude of the stochasticity for the one and two bin samples at $z=0$ as the squares and diamonds. Besides the fact that these bins are much wider and thus have lower fiducial shot noise, their low-$k$ amplitude is in accordance with both the model and narrower mass bins of the same mass. This supports our conjecture that the stochasticity correction does not depend on the fiducial shot noise, but rather on mass (via the exclusion scale and the linear and non-linear bias parameters).
The negative correction for high masses is dominated by the exclusion term, whose amplitude depends on the linear bias parameter and the exclusion scale. The latter is a function of mass but not a function of redshift, whereas the bias increases with redshift and thus the negative correction at high masses also increases with redshift. The positive correction at the low mass end depends on the second order bias parameter. In our fits to the correlation function we found that this parameter is roughly constant for the three different redshifts under consideration.
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{Figures/sncorr_fit_new.pdf}
\caption{Dependence of the stochasticity correction on halo mass and redshift. The lines are based on linear bias parameters from the peak-background split and a fitting function modelling $b_2$. The dashed lines and open points describe negative values. We also include the low-$k$ limits of the one and two bin splitting as the diamond and squares, respectively. The dotted line shows the positive low mass correction arising from non-linear biasing whereas the dash-dotted blue, green and red lines show the negative high mass exclusion corrections for the three different redshifts.}
\label{fig:sncorrmassdep}
\end{figure}
\subsection{Eigensystem and Combination of Mass Bins}
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{Figures/snev_meas_scaledep.pdf}
\includegraphics[width=0.49\textwidth]{Figures/snevec_meas.pdf}
\caption{ \emph{Left panel: }Eigenvalues of the stochasticity matrix for the traced back $z=0$ haloes. The lines show our prediction based on the modelling of the initial correlation function whereas the points are based on the diagonalization of the measured stochasticity matrix of the traced back haloes. \emph{Right panel: }Eigenvectors in the $k\to 0$ limit. The black and red filled points show the measured eigenvectors for the lowest and highest eigenvalue in the initial conditions, whereas the open points show the respective prediction of our model. The blue line shows mass weighting, the red line $b_2$-weighting and the magenta line shows the modified mass weighting proposed by \cite{Hamaus:2009op}.}
\label{fig:eigensys}
\end{figure}
The stochasticity matrix can be diagonalized as
\begin{equation}
\sum_j \sigma_{ij} V_j^{(l)}=\lambda^{(l)} V_i^{(l)}
\end{equation}
where $V_i^{(l)}$ are the eigenvectors and $\lambda^{(l)}$ the corresponding eigenvalues. The eigenvector corresponding to a low eigenvalue can be used as a weighting function in order to construct a halo sample that has the lowest possible stochasticity contamination \cite{Hamaus:2009op}. Furthermore, the eigenvalues allow for a clean separation of the exclusion and clustering contributions to the total noise correction.
We show the measurement and the comparison to our model in Fig.~\ref{fig:eigensys}. The data show pronounced low and high eigenvalues, with most of the remaining eigenvalues identical to the fiducial shot noise. The high eigenvalue is probably related to the non-linear clustering and the low eigenvalue to exclusion. The model likewise predicts eight of the ten eigenvalues to agree with the fiducial shot noise, as well as one high and one low eigenvalue. The agreement is not perfect, which is probably due to an imperfect representation of the off-diagonal stochasticity terms; the main difficulty with the off-diagonals is estimating the exclusion radii. The right panel of Fig.~\ref{fig:eigensys} shows the eigenvectors corresponding to these eigenvalues. The eigenvector corresponding to the low eigenvalue is clearly connected to the mass, i.e., the exclusion volume, whereas the eigenvector corresponding to the high eigenvalue is clearly connected to the second order bias.
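As an illustration of this diagonalization step, a short sketch (assuming the measured low-$k$ stochasticity matrix is available as a symmetric array, with a mass-like, consistently signed lowest eigenvector) is:
\begin{verbatim}
import numpy as np

def low_noise_weights(sigma_matrix):
    # eigh returns eigenvalues in ascending order for symmetric input
    lam, V = np.linalg.eigh(sigma_matrix)
    w = V[:, 0]              # eigenvector of the lowest eigenvalue
    return lam, w / w.sum()  # normalized weights for the halo bins
\end{verbatim}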
\par
So far we have concentrated on the stochasticity correction for narrow mass bins and quantified it in terms of the corresponding stochasticity matrix. If all the corrections were linear in the parameters of the model, all we would need to do is calculate the corresponding mean parameters of the sample and use them to calculate the stochasticity correction for the combined sample. However, the corrections are in general a non-linear function of the parameters. The wider the mass bins, the less exact a bulk description by a set of mean parameters becomes, and it is more accurate to consider subbins and combine them.
Thus we calculate the stochasticity correction for narrow mass bins $M \in [\underline{M}_i,\overline{M}_i]$, $i=1,\ldots,h$. Then, when considering samples that span a wide range of halo masses or realistic galaxy samples, we weight the prediction for the stochasticity correction accordingly.
The weighted density field is then
\begin{equation}
\tilde \delta=\frac{\sum_i w_i \delta_i}{\sum_i w_i}
\end{equation}
and the corresponding noise level of the combined sample is given by
\begin{equation}
\tilde \sigma=\frac{\sum_{ij}w_i \sigma_{ij}w_j}{\left(\sum_i w_i\right)^2}.
\end{equation}
For halo samples the weighting is given by the mean number density in a mass bin
\begin{equation}
w_i=\int_{\underline{M}_i}^{\overline{M}_i} \derd M\, n(M).
\end{equation}
A similar weighting scheme can be derived for galaxies for the two-halo term in the context of the halo model.
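In code, combining the bins then amounts to a weighted quadratic form (sketch; \texttt{sigma} is the binned stochasticity matrix and \texttt{w} the number-density weights):
\begin{verbatim}
import numpy as np

def combined_stochasticity(sigma, w):
    # sigma_tilde = (w^T Sigma w) / (sum_i w_i)^2
    w = np.asarray(w, dtype=float)
    return w @ np.asarray(sigma) @ w / w.sum()**2
\end{verbatim}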
\subsection{A realistic Galaxy Sample}\label{sec:realgal}
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{Figures/scaledepsn_LRG_k.pdf}
\includegraphics[width=0.49\textwidth]{Figures/scaledepsn_LRGx2_sat_cent_k.pdf}
\caption{Stochasticity of an HOD implementation of a Luminous Red Galaxy sample. We split the sample into the central-central (cyan), satellite-satellite (green) and central-satellite (black) contributions. The constituent stochasticity levels are weighted according to their contribution to the full galaxy power spectrum (see Eq.~\ref{eq:weightedsumgg}). The horizontal thick lines show the correspondingly weighted fiducial stochasticity.
\emph{Left panel: }Halo occupation distribution model with a satellite fraction of $f_\text{s}=4.9\%$ (\cite{Baldauf:2010al,Reyes:2010co}).
\emph{Right panel: }Same as left panel, but for a satellite fraction of $f_\text{s}=8.5\%$.}
\label{fig:galaxies}
\end{figure}
Let us now see how the stochasticity matrix behaves for a realistic galaxy sample.
In Halo Occupation Distribution (HOD) models \cite{Berlind:2002th,Zheng:2005th} the occupation number $N_\text{g}(M)$ is usually split into a central and a satellite component $N_\text{g}=N_\text{c}+N_\text{s}$. In Fig.~\ref{fig:galaxies} we show the stochasticity of the Luminous Red Galaxy (LRG) sample described in \cite{Baldauf:2010al,Reyes:2010co}.
The total number density of the LRGs is $\bar n_\text{g}=7.97\tim{-5}\ h^{3}\text{Mpc}^{-3}$, corresponding to a fiducial shot noise of $1/\bar n_\text{g}\approx 1.25\tim{4} \ h^{-3}\text{Mpc}^3$. The effective stochasticity level for the full sample is $\text{SN}_\text{eff}=1.09\tim{4}\ h^{-3}\text{Mpc}^3$, corresponding to a correction of $\Delta P_\text{gg}=-1.8\tim{3} \ h^{-3}\text{Mpc}^3$. The satellite fraction of the galaxy sample is $4.9 \%$.\\
Let us try to understand the total correction based on the constituent central, satellite and central-satellite cross-power spectra. The sum of these three components weighted according to Eq.~\eqref{eq:weightedsumgg} agrees with the measured stochasticity of the full sample.
The central-central power spectrum dominates the negative stochasticity correction on large scales with a weighted correction of $(1-f_\text{s})^2\Delta P_\text{cc}=-2100 \ h^{-3}\text{Mpc}^3$.
The satellite-satellite power spectrum has a positive one halo contribution on large scales that contributes a weighted correction of $f_\text{s}^2\Delta P_\text{ss}=+570 \ h^{-3}\text{Mpc}^3$.
The central-satellite cross power spectrum changes sign but contributes about $\Delta P_\text{cs}=-370 \ h^{-3}\text{Mpc}^3$ at $k=0.03 \ h\text{Mpc}^{-1}$.
The amplitude of these corrections could in principle be understood based on an accurate model for the stochasticity correction of the host haloes and the halo model. In this context the corrections are given as \cite{Seljak:2000an,Cooray:2002ha}
\begin{align}
P_\text{cc}^\text{(1h)}(k)=&\frac{1}{\bar n_\text{c}}\\
P_\text{ss}^\text{(1h)}(k)=&\frac{1}{\bar n_\text{s}}+\frac{1}{\bar n_\text{s}^2}\int \derd M n(M) N_\text{s,h}(M)\left[N_\text{s,h}(M)-1\right]u^2(k|M) \Theta(N_\text{s,h}-1)\\
P_\text{cs}^\text{(1h)}(k)=&\frac{1}{\bar n_\text{c}\bar n_\text{s}}\int \derd M n(M) N_\text{c,h}(M)N_\text{s,h}(M) u(k|M) \Theta(N_\text{s,h}-1)
\end{align}
\begin{align}
P_\text{cc}^\text{(2h)}(k)=&\frac{1}{\bar n_\text{c}^2} \int \derd M n(M)N_\text{c,h}(M) \int \derd M' n(M') N_\text{c,h}(M') P_\text{hh}(k|M,M')\\
P_\text{ss}^\text{(2h)}(k)=&\frac{1}{\bar n_\text{s}^2} \int \derd M n(M)N_\text{s,h}(M) u(k|M)\int \derd M' n(M') N_\text{s,h}(M') u(k|M')P_\text{hh}(k|M,M')\\
P_\text{cs}^\text{(2h)}(k)=&\frac{1}{\bar n_\text{s}\bar n_\text{c}}\int \derd M n(M)N_\text{c,h}(M) \int \derd M' n(M') N_\text{s,h}(M') u(k|M')P_\text{hh}(k|M,M')
\end{align}
On large scales we have $u(k|M)\to 1$. Furthermore, the halo-halo power spectra can again be split into a linear bias part $b(M)b(M')P(k)$ and a correction term accounting for the discreteness of the host haloes.
For the central galaxy sample our model yields a correction of $\Delta P_\text{cc}\approx -1000 \ h^{-3}\text{Mpc}^3$.
More accurate predictions would require a better model of the stochasticity corrections, which in turn requires a better model of exclusion and non-linear biasing. \\
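For orientation, a sketch of how such a one-halo term could be evaluated numerically (illustrative; all inputs are assumed to be precomputed arrays on a common mass grid, and the $\Theta$ factor is implemented as a boolean cut) is:
\begin{verbatim}
import numpy as np

def one_halo_cs(mass, n_of_M, Nc, Ns, u_of_kM, nbar_c, nbar_s):
    # P_cs^(1h)(k) = 1/(nbar_c nbar_s)
    #               * int dM n(M) Nc(M) Ns(M) u(k|M) Theta(Ns - 1)
    integrand = n_of_M * Nc * Ns * u_of_kM * (Ns >= 1.0)
    return np.trapz(integrand, mass) / (nbar_c * nbar_s)
\end{verbatim}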
We also consider a slightly modified galaxy sample with a larger satellite fraction $f_\text{s}=8.47\%$. For this purpose we create a copy of each satellite galaxy at twice its separation from the host halo centre. The resulting stochasticity properties are shown in the right panel of
Fig.~\ref{fig:galaxies}. In contrast to the previous case the actual stochasticity now exceeds the fiducial shot noise due to the strong positive contribution of the satellite-satellite one halo term.
\section{Conclusions}\label{sec:concl}
In this paper we discuss effects of the discreteness and non-linear clustering of haloes on their stochasticity in the
power spectrum. The standard model for stochasticity is the Poisson shot noise model with stochasticity given as the inverse of the
number density of galaxies, $1/\bar{n}$.
Motivated by the results in \cite{Hamaus:2009op}, we study the distribution of haloes in Lagrangian space and estimate the effect of exclusion
and non-linear clustering of protohaloes on the stochasticity. These induce corrections relative to $1/\bar{n}$ in the low-$k$ limit.
Exclusion lowers and non-linear clustering enhances the large scale stochasticity. The total value of the large scale stochasticity depends on which of the two effects is stronger but the amplitude of the correction does not directly depend on the abundance of the sample.
These stochasticity corrections must decay to zero for high-$k$, implying they are scale dependent in the intermediate regime. The transition scale is related to either
the exclusion scale of the halo sample under consideration or to the non-linear clustering scale. At the final time (Eulerian space) these transition scales shrink
due to the non-linear collapse, but the low-$k$ amplitude of the stochasticity agrees with Lagrangian space, as expected from mass and momentum conservation.
While the presented model can explain the observed trend of modified stochasticity at a qualitative level, the quantitative agreement is not perfect. This is related to our imperfect modelling of the Lagrangian halo correlation function with a local bias ansatz. A more realistic modelling might be possible in the full framework of peak biasing in three dimensions, as the one dimensional results in \S \ref{sec:peaks} indicate.
We also discuss the effects of satellite galaxies,
when a galaxy sample with a non-vanishing satellite fraction is considered. In this case the stochasticity can dramatically deviate from the fiducial $1/\bar{n}$ value on large scales. One then has to identify the number density of host haloes to infer the large scale stochasticity and account for the fact that on scales below the typical scale of the satellite profile there is a transition to the fiducial Poisson shot noise of the galaxy sample.
Finally, we consider the stochasticity matrix of haloes of different mass. We show that diagonalization of this matrix
gives rise to one eigenvalue with a low amplitude, with an eigenvector that
approximately scales with the halo mass.
This provides an explanation to the stochasticity suppression with mass weighting explored in \cite{Seljak:2009ho,Hamaus:2009op}.
It would be interesting to explore how the stochasticity corrections imprint themselves in the halo bispectrum, which is a promising probe of inflationary physics \cite{Baldauf2011a} and whose measurement becomes realistic in present and upcoming surveys \cite{Marin:2013th}.
\begin{acknowledgements}
The authors would like to thank Niayesh Afshordi, Kwan Chuen Chan, Donghui Jeong, Patrick McDonald, Teppei Okumura, Fabian Schmidt, Ravi Sheth, Zvonimir Vlah and Jaiyul Yoo for useful discussions.
The simulations were carried out on the ZBOX3 supercomputer at the Institute for Theoretical Physics of the University of Zurich.
This work is supported in part by NASA ATP Grant number NNX12AG71G, Swiss National Foundation (SNF) under contract
200021-116696/1, WCU grant R32-10130, National Science Foundation Grant No. 1066293. T.B. acknowledges the hospitality of the Aspen Center for Physics.
\end{acknowledgements}
\section{Introduction}
Image noise can cause performance degradation on various tasks \cite{Koziarski2017image}.
To address this, image denoising techniques are developed to recover the underlying clean image.
One of the early deep learning approaches for denoising is DnCNN \cite{zhang2017beyond}, which uses a residual learning strategy to solve AWGN denoising tasks.
However, simple statistical AWGN models cannot capture real-world noise.
This encourages development toward denoising real noisy images, where the key objective is to create a network that can adapt to, or generalize across, various types of noise \cite{Lin2019real,kim2019grdn,tian2021dualcnn}.
In the case of adapting to various types of degradations in various low-level vision tasks, several methods use an optimization-based meta-learning paradigm to enable test-time adaptation.
\cite{chi2021test} proposes to use meta-auxiliary learning for fine-tuning to enable test-time adaptation in deblurring.
Meanwhile, \cite{lee_2020} and \cite{soh2020meta} propose to use meta-learning for fine-tuning, i.e., meta-transfer learning \cite{sun2019meta}, to enable test-time adaptation in denoising and super-resolution tasks, respectively.
Different from previous works in various aspects, our method applies meta-auxiliary learning in pre-training and uses meta-transfer learning in fine-tuning to achieve better generalization and enable test-time adaptation.
We design two networks which are mask generation network and multi-task network.
Our goal is to utilize the two stages of learning for the multi-task network such that, when the head of the multi-task network is updated using the auxiliary loss, the denoising performance improves on any dataset.
We use a self-supervised masked reconstruction loss as the auxiliary loss, where the mask is generated by the mask generation network.
The motivation for using the masked reconstruction loss is to encourage the auxiliary head to produce only the noisy part of the image that can benefit the primary task.
Our motivation comes from the literature, e.g.: 1) \cite{zhang2017beyond}, which shows better performance when the network is trained with the noise as ground truth instead of the clean image, and 2) \cite{yang2017ct}, which shows the loss of high-frequency components due to over-smoothing when training uses only a reconstruction loss between the predicted clean image and the clean ground truth.
When the network knows the region of the noisy part of the image, it can focus on it instead of denoising unrelated parts, which could otherwise become over-smoothed.
This problem is particularly important for real-noise cases, since \cite{zhou2020awgn} shows that real noise is mostly spatially/channel-correlated and spatially/channel-variant.
Furthermore, we meta-learn the mask generation network in two stages.
The first stage uses meta-auxiliary learning to encourage the mask generation network to produce a mask that can improve the generalization of the multi-task network's primary task against various types of synthetic noise when trained using the auxiliary objective.
Meanwhile, the second stage uses meta-transfer learning to make the mask generation network produce a mask that benefits the primary task of the multi-task network on real noise.
The produced mask enables test-time adaptation of the multi-task network when it is trained using the masked reconstruction loss, which improves the denoising performance on the corresponding dataset without any ground truth.
The contributions of our paper are as follows:
\begin{itemize}[parsep=0pt,partopsep=-5pt]
\item We design a network architecture that gains larger improvements on the primary task when trained using the auxiliary objective.
\item We propose masked reconstruction loss as an auxiliary objective to improve the denoising task. Note that our masked reconstruction loss may also be used in other low-level vision tasks such as super-resolution or deblurring.
\item We propose to use the meta-auxiliary learning method to pre-train the network and meta-transfer learning to make the network adapt across various types of noise and enable test-time adaptation. In addition, we only update the heads of the multi-task network to enable faster adaptation.
\end{itemize}
\section{Related Work}
\subsection{CNN-based Image Denoising}
Image denoising aims to recover a clean image $x$ from a noisy image $y$ that follows the degradation model $y = x + n$.
The common assumption is that the noise $n$ is an additive white Gaussian noise (AWGN).
With the recent advances in deep learning, numerous deep learning-based methods have been proposed \cite{zhang2017beyond, zhang2017learning, liu2018non, zhang2018ffdnet, zhang2019residual, zhang2020residual}.
DnCNN \cite{zhang2017beyond} exploits a deep neural network to speed up training and boost the performance with residual learning.
FFDNet \cite{zhang2018ffdnet} takes cropped images and a noise level map to handle locally varying and different ranges of noise levels.
RNAN \cite{zhang2019residual} is a residual non-local attention network that can consider long-range dependencies among pixels.
RIDNet \cite{anwar2019real} uses a residual-in-residual structure to help low-frequency information flow and feature attention to exploit channel dependencies.
RDN \cite{zhang2020residual} is a deep residual dense network that can extract hierarchical local and global features.
MIRNet \cite{zamir2020learning} designs a novel network architecture that maintains spatially-precise high-resolution representations and strong contextual information throughout the network by using multi-scale residual blocks with residual connections and attention mechanisms.
However, these methods rely on a large number of training pairs of noisy and ground truth clean images and highly depend on the distribution of the training data.
The same set of trained weights is used for all test images, thereby failing under distribution shift of the data.
To overcome this limitation, zero-shot denoising has been proposed to learn image-specific internal structure.
\subsection{Zero-shot Denoising}
Zero-shot denoising aims to denoise images in the zero-shot setting so that the model can easily adapt to the condition of the test image.
To be less affected by the noise distribution of the training data, several works have proposed to train without true clean images with the assumption of zero-mean noise.
Noise2Noise \cite{lehtinen2018noise2noise} trains with pairs of noisy patches and is based on the reasoning that the expectation of the randomly corrupted signal is close to the clean image.
Noise2Void \cite{krull2019noise2void} masks the center pixel of the input patch and is trained to predict it from the surrounding pixels.
However, they do not exploit the large-scale external dataset and therefore show inferior performance compared to supervised methods where the distribution of the test input is identical to the training data distribution.
Different from previous methods, our method exploits a large-scale external dataset by training the network using meta-auxiliary learning.
Then, we enable test-time adaptation by training the network using meta-learning with the help of a self-supervised auxiliary loss so that it can learn image-specific internal structure.
\subsection{Meta-learning and Meta-auxiliary Learning}
Meta-learning aims to learn new concepts quickly with a few examples by learning to learn.
In this respect, meta-learning is considered together with few-shot and zero-shot learning.
Meta-learning is categorized into three groups; metric-based, memory network-based, and optimization-based.
Among them, MAML \cite{MAML_finn} which is one of the optimization-based methods has shown a great impact on the research community.
Meta-learning has two phases; meta-training and meta-test.
In meta-training, a task is sampled from a task distribution; training samples are used to optimize the base-learner with a task-specific loss, and test samples are used to optimize the meta-learner.
In the meta-test, the model adapts to a new task with the meta-learner.
\cite{MAML_finn} adopts a simple gradient descent algorithm to find an initial transferable point where a few updates can fast adapt to a new task.
MZSR \cite{soh2020meta} additionally leverages meta-transfer learning for zero-shot super-resolution.
Auxiliary learning has been integrated with meta-learning, so-called Meta AuXiliary Learning (MAXL) \cite{liu2019self}.
MAXL consists of a label-generation network to predict the auxiliary labels, and a multi-task network to train the primary task and the auxiliary task.
The interaction between the two networks is a form of meta-learning with a double gradient.
As the auxiliary task is self-supervised, it has shown a promising direction in zero-shot meta-learning.
MaXLDeblur \cite{chi2021test} incorporates meta-auxiliary learning for transfer learning in the dynamic scene deblurring task.
It uses a self-reconstruction auxiliary task that shares layers with the primary deblurring task, which gains performance via the auxiliary task.
The model is adapted to each input image to better capture the internal information, thereby allowing fast test-time adaptation.
One related work that applies meta-learning in denoising tasks is the work from \cite{lee_2020}.
This work proposes a self-supervised loss coupled with meta-learning to enable test-time adaptation.
However, the method and its self-supervised loss only work for synthetic noise and cannot be applied to the real-world noise case.
In our paper, we apply meta-auxiliary learning for pre-training and meta-learning for fine-tuning (meta-transfer learning) in image denoising, which enables test-time adaptation and achieves better generalization.
Our meta-learning problem is similar to \cite{chi2021test} and can be seen as zero-shot meta-learning that tries to make the network quickly adapt to the specific noise of one image using a few updates of the auxiliary loss.
\section{Proposed Methods}
\begin{figure*}[ht]
\vskip 0.1in
\begin{center}
\centerline{\includegraphics[width=0.65\textwidth]{MethodOverview.pdf}}
\caption{The overview of our learning method. Our network (multi-task and mask generation network) is trained from random initialization $\theta_{1}^{0}, \theta_{2}^{0}$ to $\theta_{1}^{T}, \theta_{2}^{T}$ using meta-auxiliary learning. Then, we use meta-transfer learning to learn representation $\theta_{1}^M, \theta_{2}^M$, where $\theta_{1}^{M}$ will have a good representation to denoise various noise models and the performance will be improved if trained using masked reconstruction loss. Then, for each test image, we adapt the denoising network using self-supervised masked-reconstruction loss.}
\label{fig:method-overview}
\end{center}
\vskip -0.2in
\end{figure*}
Given a noisy image $I_{n}$, our network (multi-task branch) outputs a predicted clean image $\hat{I}_{c}$ and a predicted noisy image $\hat{I}_{n}$.
In addition, the mask generation branch $g_{\theta_2}$ of our network also produces a mask $M$ to condition the reconstruction loss $L_{Rec}$ that is used as an auxiliary loss $L_{Aux}$ to train our multi-task network $f_{\theta_1}$.
The overview of our method can be seen in Figure \ref{fig:method-overview}.
First, we train the multi-task network and mask generation network using meta-auxiliary learning to provide better meta-initialization.
This is because meta-auxiliary learning can improve the generalization of the network for robustness against various synthetic noises.
Then, we use this pre-train network as meta-initialization for the meta-transfer learning.
In this stage of learning, the objective is to make the multi-task network improve the primary task performance when its parameters are updated by the auxiliary loss (i.e., the masked reconstruction loss) on real noise.
In addition, using these two stages of learning, we want the mask generation network to produce a better mask that helps the multi-task network adapt to various types of noise (synthetic and real) when trained using the masked reconstruction loss.
Then, for the test dataset of unseen data, we adapt the parameters of the multi-task network on each image example (i.e., zero-shot meta-learning) by using the masked reconstruction loss, which can be trained in a self-supervised manner without any ground truth.
\subsection{Network Architecture}
\begin{figure*}[ht]
\vskip 0.1in
\begin{center}
\centerline{\includegraphics[width=0.6\textwidth]{NetworkArchitecture.pdf}}
\caption{The architecture of our network.}
\label{fig:network-architecture}
\end{center}
\vskip -0.2in
\end{figure*}
Inspired by \cite{liu2019self}, we design a network that consists of a multi-task network and a mask generation network.
The architecture of the network can be seen in Figure \ref{fig:network-architecture}.
The goal of the multi-task network is to solve two tasks: denoising (primary task) and noisy image prediction (auxiliary task).
In the multi-task network, we use a single convolution layer and encoder-decoder with skip connection as the network body.
The network body will produce deep features which will be used by the primary head to refine the feature resulting in the residual image.
This residual image when added with the noisy image $I_{n}$ will produce the predicted clean image $\hat{I}_{c}$.
After that, we concatenate the predicted clean image and the residual.
The auxiliary head will use this concatenation to produce the predicted noisy image $\hat{I}_{n}$.
We design the network so that the auxiliary head uses the output of the primary head and the predicted clean image.
This is intended since we will only train the primary head and auxiliary head in the inner loop of the meta-transfer learning and meta-testing steps for test-time adaptation.
The rationale behind not updating the network body is that \cite{raghu2020rapid} shows that MAML-based optimization changes the network body parameters only slightly.
As a result, we can perform fast test-time adaptation with less memory and computation in the meta-testing stage when encountering unseen data.
In addition, we also observe that placing the auxiliary head after the primary head yields larger gains than a single feature extractor with a parallel multi-head architecture when trained with the auxiliary loss (Section \ref{subsec:ablation-study}).
The loss to train the multi-task network consists of auxiliary loss $L_{Aux}$ and primary loss $L_{Pri}$.
The primary loss $L_{Pri}$ is the reconstruction loss between predicted clean image $\hat{I}_{c}$ and clean ground truth image $I_{GT}$ which can be formulated as:
\begin{equation}
\label{eq:primary-loss}
L_{Pri}(\hat{I}_{c}, I_{GT}) = \lvert \lvert \hat{I}_{c} - I_{GT} \rvert \rvert_{1}
\end{equation}
Meanwhile, we use masked reconstruction loss $L_{MaskRec}$ as the auxiliary loss $L_{Aux}$ which can be formulated as:
\begin{equation}
\label{eq:aux-loss}
\begin{aligned}
L_{Aux}(\hat{I}_{n}, I_{n}, Mask) &= L_{MaskRec}(\hat{I}_{n}, I_{n}, Mask)\\
&= L_{Rec}(\hat{I}_{n}, I_{n}) \odot Mask\\
&=\lvert \lvert \hat{I}_{n} - I_{n} \rvert \rvert_{1} \odot Mask
\end{aligned}
\end{equation}
In this auxiliary loss $L_{Aux}$, we condition the reconstruction loss $L_{Rec}$ between the predicted noisy image $\hat{I}_{n}$ and the noisy image $I_{n}$ on the mask produced by the mask generation network.
By doing this, we compute the auxiliary loss only on the pixels that will improve the primary task performance.
In addition, this auxiliary loss is self-supervised since it does not require any ground truth, which makes it appropriate to be applied at test time.
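A minimal PyTorch-style sketch of the two losses in Eqs.~\eqref{eq:primary-loss} and \eqref{eq:aux-loss} (illustrative pseudocode, not the released implementation) is:
\begin{verbatim}
import torch

def primary_loss(pred_clean, clean_gt):
    # L_pri = || I_c_hat - I_GT ||_1
    return torch.abs(pred_clean - clean_gt).mean()

def masked_reconstruction_loss(pred_noisy, noisy, mask):
    # L_aux: the per-pixel L1 map || I_n_hat - I_n ||_1 is
    # weighted elementwise by the generated mask
    return (torch.abs(pred_noisy - noisy) * mask).mean()
\end{verbatim}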
\subsection{Pre-training using Meta-auxiliary Learning (MAXL)}
Similar to \cite{chi2021test, soh2020meta}, we train our network using an external dataset.
However, our method trains the network using a meta-auxiliary learning scheme similar to \cite{liu2019self}.
The goal of using this scheme is to improve the generalization power of our network so that it serves as a better meta-initialization for the meta-transfer learning.
In addition, we find that using only the masked reconstruction loss can make the mask collapse (one or zero at every pixel).
To solve this issue, we use the two-directional image gradient loss \cite{zhang2018densely} to regularize the mask.
By using this loss to regularize the mask, we force the mask generation network to produce a mask that has edges similar to those of the noisy image.
As an effect, it may help the multi-task network denoise the image better and prevent the loss of fine textural details, which is a common issue in denoising networks \cite{anwar2019real, zamir2020learning}.
The two-directional gradient loss can be formulated as:
\begin{equation}
\label{eq:two-drirectional-gradient-loss}
\begin{aligned}
L_G(M, I_n) = &\sum_{w,h}\lvert\lvert(H_x(M))_{w,h} - (H_x(I_n))_{w,h}\rvert\rvert \\
& + \lvert\lvert(H_y(M))_{w,h} - (H_y(I_n))_{w,h}\rvert\rvert
\end{aligned}
\end{equation}
where $M$,$w$,$h$ are respectively mask, width, and height.
$H_x$ and $H_y$ are image gradient operators along rows (horizontal) and columns (vertical).
However, we only use this loss in the pre-training stage.
This is because the collapse issue does not appear in the meta-transfer learning stage.
Moreover, the mask keeps changing throughout learning, which indicates that the multi-task network needs to reconstruct different regions of the noisy image to improve the denoising performance (Figure \ref{fig:mask-maxl-mtl} and Figure \ref{fig:mask-scratch-mtl} in supplementary).
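A sketch of this regularizer, Eq.~\eqref{eq:two-drirectional-gradient-loss}, with simple finite-difference gradient operators (an assumption on our part; the discrete form of $H_x$ and $H_y$ is not otherwise fixed above) is:
\begin{verbatim}
import torch

def gradient_loss(mask, noisy):
    # two-directional image gradient loss; H_x and H_y are
    # approximated by horizontal/vertical finite differences
    dx = lambda t: t[..., :, 1:] - t[..., :, :-1]
    dy = lambda t: t[..., 1:, :] - t[..., :-1, :]
    return (torch.abs(dx(mask) - dx(noisy)).mean()
            + torch.abs(dy(mask) - dy(noisy)).mean())
\end{verbatim}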
The algorithm to train the network follows the algorithm from \cite{liu2019self} in Algorithm \ref{alg:maxl-algorithm}.
\setlength{\textfloatsep}{10pt}%
\begin{algorithm}[th]
\caption{MAXL algorithm}
\label{alg:maxl-algorithm}
\begin{algorithmic}
\STATE {\bfseries Initialize:} Network parameters: $\theta_{1}^{T}$, $\theta_{2}^{T}$; Learning rate: $\alpha$, $\beta$; Two-way image gradient loss weight: $\lambda_G$
\WHILE{not converged}
\FOR{each training iteration i}
\STATE {\# sample one batch of training data}
\STATE {$(I_{n(i)}, I_{GT(i)}) \in (I_{n}, I_{GT})$}
\STATE {\# auxiliary-training step}
\STATE {$\hat{I}_{c(i)}, \hat{I}_{n(i)} = f_{\theta_{1}^{T}}(I_{n(i)})$; $M = g_{\theta_{2}^{T}}(I_{n(i)})$}
\STATE {$L = L_{Pri}(\hat{I}_{c(i)}, I_{GT(i)}) + L_{Aux}(\hat{I}_{n(i)}, I_{n(i)}, M)$}
\STATE {Update: $\theta_{1}^{T} \leftarrow \theta_{1}^{T} - \alpha \nabla_{\theta_{1}^{T}}L$}
\ENDFOR
\FOR{each training iteration i}
\STATE {\# sample one batch of training data}
\STATE ($I_{n(i)}$, $I_{GT(i)}$) $\in$ ($I_{n}$, $I_{GT}$)
\STATE {\# retain training computational graph}
\STATE {$\hat{I}_{c(i)}, \hat{I}_{n(i)} = f_{\theta_{1}^{T}}(I_{n(i)})$; $M = g_{\theta_{2}^{T}}(I_{n(i)})$}
\STATE {$L = L_{Pri}(\hat{I}_{c(i)}, I_{GT(i)}) + L_{Aux}(\hat{I}_{n(i)}, I_{n(i)}, M)$}
\STATE {$\theta_{1}^{T+} = \theta_{1}^{T} - \alpha \nabla_{\theta_{1}^{T}}L$; $\hat{I}_{c(i)}, \_ = f_{\theta_{1}^{T+}}(I_{n(i)})$}
\STATE {\# meta-training step}
\STATE {Update: $\theta_2^{T} \leftarrow \theta_2^{T} - \beta \nabla_{\theta_2^{T}} (L_{Pri}(\hat{I}_{c(i)}, I_{GT(i)}) +$}
\begin{ALC@g}
$\lambda_G L_G(M, I_n))$
\end{ALC@g}
\ENDFOR
\ENDWHILE
\end{algorithmic}
\end{algorithm}
By training the network using meta-auxiliary learning and an external dataset, the multi-task network $f_{\theta_1}$ obtains a representation that generalizes to various noises.
In addition, the mask generation network produces a mask that, when used in the auxiliary loss to train the multi-task network $f_{\theta_1}$, improves the generalization of the primary task.
We also validate the necessity of the pre-training stage in Section \ref{subsec:ablation-study}.
However, our goal is to enable the network to learn through internal training (i.e., image-specific learning) by enabling test-time adaptation using meta-transfer learning (Section \ref{sec:meta-transfer-learning}) and meta-test (Section \ref{sec:meta-test}).
\subsection{Meta-transfer Learning (MTL)}
\label{sec:meta-transfer-learning}
In this step, we fine-tune the pre-trained network using two different real-noise datasets to enable test-time adaptation of the network.
We use a MAML-based \cite{MAML_finn} algorithm for the fine-tuning, shown in Algorithm \ref{alg:meta-transfer-algorithm}.
In the inner loop of the meta-learning, we only update the primary head and auxiliary head of the multi-task network $f_{\theta_1}$.
Meanwhile, in the outer loop of the meta-learning stage, all the network parameters are updated, including the network body.
The multi-task network is updated with the gradient from the primary objective $L_{Pri}$.
In addition, the mask generation network is also updated with the gradient from the primary loss to make it produce a better mask.
We use unbiased sampling in our method because our goal is to improve the denoising performance when training with the self-supervised auxiliary loss on any example.
This means our method can be seen as zero-shot meta-learning (no training samples), and task-related sampling would hinder our goal of generalizing to arbitrary examples.
\begin{algorithm}[th]
\caption{Meta-transfer learning}
\label{alg:meta-transfer-algorithm}
\begin{algorithmic}
\STATE {\bfseries Input:} $\theta_{1}^{T}$, $\theta_{2}^{T}$; dataset $D = {D_1, D_2}$; number of inner-gradient update $K$; Auxiliary loss weight: $\lambda_{in}, \lambda_{out}$
\STATE {\bfseries Initialize:} Learning rate: $\alpha$, $\beta$; $\theta_{1}^{T}$, $\theta_{2}^{T}$ as $\theta_{1}$, $\theta_{2}$
\STATE $\theta_n = \{\theta_1^{Pri},\theta_1^{Aux}\}$
\WHILE {not done}
\STATE {Sample $N$ datapoints from $\mathcal{D}$, $\mathcal{B} = \{I_{n}, I_{GT}\}$}
\FOR{each sample $j$ in $\mathcal{B}$}
\STATE {Initialize $\theta_n^{\prime} = \theta_n$}
\FOR{$k$ in $K$}
\STATE {$\hat{I}_{c(j)}, \hat{I}_{n(j)} = f_{\theta_{1}^{\prime}}(I_{n(j)})$; $M = g_{\theta_2^{\prime}}({I_{n(j)}})$}
\STATE {Compute adapted parameter of $\theta_n^{\prime}$:}
\begin{ALC@g}
\STATE {$\theta_n^{\prime} = \theta_n^{\prime} - \alpha \nabla_{\theta_n^{\prime}} \lambda_{in} L_{Aux}(\hat{I}_{n(j)}, I_{n(j)}, M)$}
\end{ALC@g}
\ENDFOR
\STATE {Evaluate: $\hat{I}_{c(j)}, \hat{I}_{n(j)} = f_{\{\theta_1^{Body},\theta_{n}^{\prime}\}}(I_{n(j)})$}
\ENDFOR
\STATE {Update $\theta_{1}$ using primary loss:}
\begin{ALC@g}
\STATE {$\theta_{1} \leftarrow \theta_{1} - \beta \nabla_{\theta_{1}} \lambda_{out} \sum L_{Pri}(\hat{I}_{c(j)}, I_{GT(j)})$ for each sample in $\mathcal{B}$}
\end{ALC@g}
\STATE {Update $\theta_{2}$ using primary loss:}
\begin{ALC@g}
\STATE {$\theta_{2} \leftarrow \theta_{2} - \beta \nabla_{\theta_{2}} \lambda_{out} \sum L_{Pri}(\hat{I}_{c(j)}, I_{GT(j)})$}
\STATE{for each sample in $\mathcal{B}$}
\end{ALC@g}
\ENDWHILE
\STATE {\bfseries Output:} $\theta_1$, $\theta_2$ as $\theta_1^{M}$, $\theta_2^{M}$
\end{algorithmic}
\end{algorithm}
\subsection{Meta-test}
\label{sec:meta-test}
\begin{algorithm}[th]
\caption{Meta-test}
\label{alg:meta-test}
\begin{algorithmic}
\STATE {\textbf{Input:} Test dataset $\mathcal{D}_{Test}$; Network parameters: $\theta_{1}^{M}, \theta_{2}^{M}$}
\STATE {\textbf{Initialize:} Learning rate: $\alpha$; $\theta_{1}^{M}, \theta_{2}^{M}$ as $\theta_{1}, \theta_{2}$}
\FOR {each noisy image $I_{n}$ in $\mathcal{D}_{Test}$}
\STATE{Initialize: $\theta_{n} = \{\theta_{1}^{Pri}, \theta_{1}^{Aux}\}$}
\FOR {$K$ steps}
\STATE{$M = g_{\theta_2}(I_{n})$; $\hat{I}_{c}, \hat{I}_{n} = f_{\theta_{1}}(I_{n})$}
\STATE{Update: $\theta_{n} \leftarrow \theta_{n} - \alpha \nabla_{\theta_{n}}(L_{Aux}(\hat{I}_{n}, I_{n}, M))$}
\ENDFOR
\ENDFOR
\STATE {\textbf{Output:} $\hat{I}_{c}$ for each noisy image $I_{n}$ in $\mathcal{D}_{Test}$}
\end{algorithmic}
\end{algorithm}
\vskip 0.1in
Algorithm \ref{alg:meta-test} shows how the meta-test is performed in the testing stage for fast test-time adaptation.
The network body of the multi-task network is frozen at this stage.
In the meta-testing stage, given a noisy image $I_{n}$, we adapt the primary and auxiliary heads to denoise this image by using $K$ gradient steps.
We use the auxiliary loss $L_{Aux}$ as the objective to adapt the network parameters in the testing stage since we have encouraged the multi-task network to improve the primary task performance when trained with this loss.
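For clarity, a PyTorch-style sketch of this adaptation loop (illustrative; the attribute names \texttt{primary\_head} and \texttt{auxiliary\_head} are assumptions about the module layout, and the multi-task module is assumed to return the two predictions as a tuple) is:
\begin{verbatim}
import torch

def meta_test_adapt(multi_task, mask_net, noisy, K=5, lr=1e-5):
    # K plain gradient steps on the heads only; the body stays frozen
    heads = list(multi_task.primary_head.parameters()) + \
            list(multi_task.auxiliary_head.parameters())
    opt = torch.optim.SGD(heads, lr=lr)
    for _ in range(K):
        mask = mask_net(noisy).detach()
        pred_clean, pred_noisy = multi_task(noisy)
        loss = (torch.abs(pred_noisy - noisy) * mask).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        pred_clean, _ = multi_task(noisy)
    return pred_clean
\end{verbatim}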
\section{Experiments}
\subsection{Implementation Details}
In the pre-training stage, we train our network with the MAXL algorithm on the DIV2K \cite{agustsson2017ntire} dataset with synthetic degradations consisting of salt-and-pepper, Gaussian, and speckle noise.
We set $\alpha$ and $\beta$ in this stage to $0.001$ and optimize the network using the Adam optimizer.
For the two-way image gradient loss weight ($\lambda_G$), we use $\lambda_G = 0.01$.
After pre-training, we further fine-tune our network using the meta-transfer learning algorithm.
For this stage, we use SIDD \cite{abdelhamed2018high} following \cite{zamir2020learning}, specifically the small version consisting of 160 noisy-clean pairs, and the Poly \cite{xu2018real} dataset.
Similar to the pre-training, we use Adam as the optimizer and set $\alpha=\beta=0.00005$ and $K=5$.
Furthermore, we set auxiliary loss weight for the inner and outer update as $\lambda_{in}=\lambda_{out}=10$.
For the meta-testing, we use smaller $\alpha$ than the meta-transfer learning $\alpha=0.00001$.
In pre-training and fine-tuning, we use a batch size of $16$ and $2$ respectively with a patch size of $128\times128$.
For the evaluation, we conduct the validation using the validation split of the training dataset while using Nam \cite{nam2016holistic} as the testing dataset.
We use PSNR and SSIM as the evaluation metric.
In terms of PSNR, improvements of 0.05 dB and 1 dB can be considered a contribution and a significant contribution, respectively.
Meanwhile, an improvement of 0.01 in SSIM can be considered a contribution.
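For reference, PSNR follows the standard definition, sketched here for images scaled to $[0,1]$:
\begin{verbatim}
import numpy as np

def psnr(x, y, peak=1.0):
    # peak signal-to-noise ratio in dB for a given peak value
    mse = np.mean((np.asarray(x, float) - np.asarray(y, float))**2)
    return 10.0 * np.log10(peak**2 / mse)
\end{verbatim}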
\subsection{Pre-training Results}
\label{subsec:pretraining-results}
\begin{table*}[t]
\caption{The results of pre-training using different multi-task network architecture (top row) with reconstruction loss as the auxiliary loss $L_{Aux} = L_{Rec}$. We also investigate the results of using different auxiliary losses with MAXL (bottom row) using our multi-task network architecture. Aux stands for auxiliary. All of the experiments are trained with the primary loss $L_{Pri}$ and the specified auxiliary loss $L_{Aux}$.}
\label{table:pretraining-results}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\resizebox{0.68\textwidth}{!}{
\begin{tabular}{lc|cc|c}
\toprule
\multirow{2}{*}{Details} & \multicolumn{2}{c}{Validation} & \multicolumn{2}{c}{Testing} \\
& PSNR & SSIM & PSNR & SSIM \\
\midrule
Our Architecture + $L_{Aux} = L_{Rec}$ & \textbf{31.1540} & \textbf{0.8711} & \textbf{35.7237} & 0.9054 \\
\cite{chi2021test} + $L_{Aux} = L_{Rec}$ & 30.8964 & 0.8664 & 33.4407 & \textbf{0.9153} \\
\midrule
MAXL + $L_{Aux} = L_{Rec}$ & 31.1417 & 0.8691 & 33.4551 & 0.9207\\
MAXL + $L_{Aux} = L_{MaskRec}$ & \textbf{31.3130} & \textbf{0.8719} & 35.8208 & 0.9044 \\
MAXL + $L_{Aux} = L_{MaskRec}$ + $L_{G}$ (Ours) & 31.2193 & 0.8634 & \textbf{36.1182} & \textbf{0.9050} \\
\bottomrule
\end{tabular}}
\end{sc}
\end{small}
\end{center}
\vskip -0.18in
\end{table*}
In this experiment, we conduct two different experiments to demonstrate our contribution.
We train the network using DIV2K dataset with synthetic degradation.
For the evaluation, we conduct the validation using validation images from DIV2K synthetic degradation, while testing using images from the Nam dataset with real noise.
Quantitative results in terms of PSNR and SSIM metrics can be seen in Table \ref{table:pretraining-results}.
In the first experiment, we compare two different architectures of the multi-task network $f_{\theta_1}$ to evaluate which network benefits more when trained using the auxiliary loss.
The first architecture is our proposed architecture which uses a sequential design where the auxiliary head is placed after the primary head.
Meanwhile, the second architecture is the architecture from \cite{chi2021test} which is a single feature extractor with two parallel heads (primary \& auxiliary).
The details of this architecture can be seen in Figure \ref{fig:multi-task-baseline} in the supplementary material.
Both networks have a similar number of parameters.
To train the networks, we use the primary loss and replace the auxiliary loss with the plain reconstruction loss $L_{Rec}$.
We change the loss in order to measure the capability of the multi-task network in isolation, without any effect from the mask generation network $g_{\theta_2}$.
The results can be seen in the top row of Table \ref{table:pretraining-results}, where our proposed architecture outperforms the baseline architecture by 0.3 dB on validation and 2.3 dB on testing PSNR.
In terms of the SSIM metric, our architecture also achieves better validation SSIM but lower testing SSIM.
This shows that our architecture achieves better generalization than the baseline when trained with an auxiliary loss, especially in the case of unseen real noise.
The worse results of the architecture from \cite{chi2021test} may be due to the parallel placement of the auxiliary head directly after the shared feature extractor, i.e., a negative transfer issue.
After validating that our proposed architecture is a better option when trained with an auxiliary loss $L_{Aux}$, we compare different auxiliary loss functions trained with MAXL.
When trained with MAXL (bottom row of Table \ref{table:pretraining-results}), the validation and testing performance of the masked reconstruction loss ($L_{MaskRec}$) is consistently better than that of the reconstruction loss ($L_{Rec}$).
Since using MAXL alone cannot prevent the collapse situation, we use the two-way image gradient loss ($L_{G}$) to regularize the mask and prevent the collapse.
This loss further improves the generalization to unseen noise by improving both PSNR and SSIM scores, but decreases the validation performance.
Since our goal is to improve the generalization of the multi-task network on unseen real noise, we fine-tune the network that achieves the best testing performance using the meta-transfer learning algorithm.
\subsection{Meta-transfer Learning Results}
\begin{table*}[t]
\caption{The results of meta-transfer learning using the proposed algorithm compared with other SOTA methods.}
\label{table:meta-transfer-learning-result}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\resizebox{0.68\textwidth}{!}{
\begin{tabular}{lc|cc|cc}
\toprule
\multirow{2}{*}{Details} & \multicolumn{2}{c}{Validation} & \multicolumn{2}{c}{Testing} & Number of \\
& PSNR & SSIM & PSNR & SSIM & Parameters \\
\midrule
Ours without Meta-testing & 41.4792 & 0.9633 & 39.2499 & 0.9672 & \textbf{0.66 M} \\
Ours with Meta-testing & \textbf{41.5086} & \textbf{0.9636} & \textbf{39.2653} & 0.9685 & \textbf{0.66 M} \\
RIDNet \cite{anwar2019real} & 40.2185 & 0.9497 & 38.2214 & 0.9621 & 1.49 M \\
MIRNet \cite{zamir2020learning} & 41.0460 & 0.9609 & 38.9807 & \textbf{0.9705} & 31.79 M \\
\bottomrule
\end{tabular}}
\end{sc}
\end{small}
\end{center}
\vskip -0.18in
\end{table*}
\begin{figure*}[ht]
\vskip 0.1in
\begin{center}
\centerline{\includegraphics[width=0.68\textwidth]{QualitativeResults.pdf}}
\caption{Qualitative results of our method compared to others. BA and AA stand for before adaptation and after adaptation respectively, where the meta-testing is applied on each example of the After Adaptation (AA) result.}
\label{fig:qualitative-results}
\end{center}
\vskip -0.3in
\end{figure*}
In this experiment, we fine-tune the best pre-trained network in Section \ref{subsec:pretraining-results} using the meta-transfer learning algorithm and show the performance before adaptation (without meta-testing) and after adaptation (with meta-testing).
We also compare the result with the recent SOTA methods: RIDNet \cite{anwar2019real} and MIRNet \cite{zamir2020learning}.
We cannot compare our method with the similar denoising method of \cite{lee_2020} since there is no official code for it and it is not designed to handle real noise.
In addition, we only choose the best SOTA methods that can be run on our machine; MIRNet is the SOTA that achieves the fourth rank on the denoising task.
We use the dataset described in the implementation details of the meta-transfer learning stage to train all of the methods.
Due to the limitation of our machine, we can only conduct the meta-transfer learning using a patch size of $1024$, so the evaluation (validation and testing) is also conducted using a center crop of size $1024$.
In addition, we also tried various hyperparameters but found no meaningful improvement over the reported setting.
Quantitative comparison can be seen in Table \ref{table:meta-transfer-learning-result}.
The results show that using meta-testing to adapt the network to each image in the evaluation dataset consistently improves the validation and testing performance.
In addition, we can see that our method achieves the best results among the compared SOTA methods.
Compared to the MIRNet result, our method with meta-testing improves the validation performance by 0.46 dB and 0.0027 in terms of PSNR and SSIM respectively.
The testing performance in terms of PSNR also improves by 0.28 dB.
Interestingly, even though the number of parameters of our network is very small compared to the other SOTAs ($\sim$2x and $\sim$48x fewer than RIDNet and MIRNet, respectively), we still outperform the SOTA methods, although the testing SSIM is slightly lower than MIRNet's.
This shows a promising research direction where improvements in denoising can be pursued by modifying the training algorithm, whereas the current trend among SOTA methods is dominated by improvements in network architecture design.
The slight performance gain after adaptation (0.03 dB) is likely due to the small learning capacity of the adapted heads, which contain only 0.12 M parameters.
We conjecture that a larger improvement could be observed by increasing the number of parameters with a larger network.
However, due to the limitation of our machine, we cannot experiment with a larger network because of the multi-gradient computation.
We conduct an additional feature visualization study (Section \ref{app:sec-feature-map-visualization} in the supplementary material) which shows a high difference in feature maps after adaptation.
In terms of qualitative comparison, the visual comparison on challenging examples can be seen in Figure \ref{fig:qualitative-results}.
The first example (top row) shows that both of the baselines fail to effectively remove the real noise shown in the left patch and right patch of the example.
In addition, some leftover artifacts can be seen especially in the homogeneous region.
Our method successfully denoises the real noise and maintains the smoothness of the homogeneous region without any artifacts.
In the second example (bottom row), all of the methods fail to recover fine textural details of paper texture, especially in the right patch.
However, our method successfully denoises the real noise and produces visually pleasing images in the left patch and right patch of the example.
The baseline methods fail to denoise the real noise but interestingly try to maintain fine textural details (e.g., paper texture), which makes both methods produce noisy artifacts that are not visually pleasing.
\subsection{Ablation Study}
\label{subsec:ablation-study}
\begin{table*}[t]
\caption{The results of the ablation study. Top row: the results of fine-tuning with meta-transfer learning (MTL) using different losses on our MAXL pre-trained network. Middle row: the results of fine-tuning using MTL on a Randomly Initialized (RI) network. Bottom row: the results of updating all of the network parameters instead of primary and auxiliary heads only. All of the results are the result after running meta-testing.}
\label{table:ablation-study}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\resizebox{0.68\textwidth}{!}{
\begin{tabular}{lc|cc|c}
\toprule
\multirow{2}{*}{Details} & \multicolumn{2}{c}{Validation} & \multicolumn{2}{c}{Testing} \\
& PSNR & SSIM & PSNR & SSIM \\
\midrule
MAXL + MTL with Aux Loss (MRL) & \textbf{41.5086} & \textbf{0.9636} & \textbf{39.2653} & \textbf{0.9685} \\
MAXL + MTL with Aux Loss (RL) & 41.4466 & 0.9634 & 39.1280 & 0.9683 \\
\midrule
RI + MTL with Aux Loss (MRL) & 40.1913 & 0.9554 & \textbf{38.1708} & \textbf{0.9628} \\
RI + MTL with Aux Loss (RL) & \textbf{40.2040} & \textbf{0.9570} & 38.0766 & 0.9621 \\
\midrule
Updating Body + Head: MAXL + MTL with Aux Loss (MRL) & \textbf{41.4919} & \textbf{0.9632} & \textbf{39.1973} & \textbf{0.9691} \\
Updating Body + Head: RI + MTL with Aux Loss (MRL) & 40.0701 & 0.9536 & 37.6599 & 0.9553 \\
\bottomrule
\end{tabular}}
\end{sc}
\end{small}
\end{center}
\vskip -0.18in
\end{table*}
In this study, we investigate the impact of our proposed method such as the impact of MAXL pre-training, masked reconstruction loss, and updating only the head of the multi-task network.
The top row and middle row of Table \ref{table:ablation-study} show that training with the masked reconstruction loss objective consistently outperforms the reconstruction loss, especially on the testing performance.
The proposed masked reconstruction loss improves the testing performance by around 0.1 dB, which shows that the network achieves better generalization when trained using this objective, especially when coupled with the meta-transfer learning.
Different from training using MAXL, we observe no collapse throughout fine-tuning, on both the MAXL pre-trained network and the randomly initialized network.
Figure \ref{fig:mask-maxl-mtl} and Figure \ref{fig:mask-scratch-mtl} in the supplementary material show the evolution of the mask through training, where the mask constantly evolves in early training and then becomes stable when the multi-task network starts to converge (i.e., after epoch 100).
This shows the benefit of the masked reconstruction loss: it selects the regions that need to be reconstructed so as to improve the primary task at different stages of learning.
To study the impact of pre-training, the results in the top row and middle row of Table \ref{table:ablation-study} can be compared.
The results show that pre-training is indeed required to achieve better performance, both on validation and testing.
Note that the performance gap is not caused by a convergence issue since both settings (fine-tuning from the pre-trained network and from the randomly initialized network) have already converged.
The bottom row of Table \ref{table:ablation-study} again consolidates this fact, where the gap between fine-tuning from the pre-trained network and from the randomly initialized network is around 1.5 dB, both on validation and testing performance.
The last study is to investigate the impact of updating only the multi-task network's head in the inner loop of the fine-tuning stage.
The result can be seen in the top row and bottom row of Table \ref{table:ablation-study}.
The validation performance of updating only the head is similar to that of updating all of the network parameters.
However, updating only the head improves the testing PSNR by 0.07 dB, even though the SSIM score drops slightly.
This indicates that updating only the head and updating all of the network parameters in the inner loop of fine-tuning achieve similar results.
A similar observation can be seen in \cite{raghu2020rapid}.
\section{Conclusion}
In this paper, we propose a combination of algorithms to enable test-time adaptation on the problem of real image denoising.
We first design a network consisting of a multi-task and mask generation network.
Then, we propose a novel self-supervised masked reconstruction loss as the auxiliary loss to train the network.
To train the network, we propose to use two-stage learning.
The first stage pre-trains the network using a meta-auxiliary learning algorithm to obtain a better meta-initialization.
Meanwhile, the second stage further fine-tunes the network using meta-transfer learning.
This combination of meta-auxiliary learning and meta-transfer learning improves the generalization performance of the network against various unseen noise and enables test-time adaptation.
The adaptation enables the network to adapt to real noisy images within a few gradient updates.
Various experiments show the contribution of the components of our method and also show that our method can outperform other SOTA methods.
Yet, we still find it necessary to conduct more extensive experiments on other real-noise datasets and to use a larger version of our network to further validate our contribution.
\section{Introduction}
\label{sec1}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
In this paper, we develop a fifth order Hermite weighted essentially non-oscillatory (HWENO) scheme with artificial linear weights for one and two dimensional nonlinear hyperbolic conservation laws. The idea of HWENO schemes is similar to that of weighted essentially non-oscillatory (WENO) schemes, which have been widely applied in computational fluid dynamics. In 1994, the first WENO scheme was proposed by Liu, Osher and Chan \cite{loc}, building mainly on the ENO schemes \cite{ho,h1,heoc}; they combined all candidate stencils in a nonlinear convex manner to obtain higher order accuracy in smooth regions. Then, in 1996, Jiang and Shu \cite{js} constructed the third and fifth order finite difference WENO schemes in multiple space dimensions, where they gave a general definition of the smoothness indicators and nonlinear weights. Since then, WENO schemes have been further developed in \cite{hs,lpr,SHSN,ccd,ZQd}.
However, designing a WENO scheme of higher order accuracy requires enlarging the stencil. In order to keep the scheme compact, Qiu and Shu \cite{QSHw1,QSHw} gave a new option by evolving both the solution and its derivative, which led to the so-called Hermite WENO (HWENO) schemes.
HWENO schemes achieve higher order accuracy than WENO schemes on the same reconstruction stencils. Since the solutions of nonlinear hyperbolic conservation laws often contain discontinuities, the derivatives or first order moments can become relatively large near discontinuities. Hence, the HWENO schemes presented in \cite{QSHw1,QSHw,ZQHW,TQ,LLQ,ZA,TLQ,CZQ} used different stencils for the spatial discretization of the original and derivative equations, respectively. In one sense, these HWENO schemes can be seen as extensions of DG methods, and Dumbser et al. \cite{DBTM} gave a general and unified framework for the numerical schemes extended from DG methods, termed the $P_NP_M$ method. But the derivatives or first order moments were still used straightforwardly near the discontinuities, which is less robust for problems with strong shocks: for example, the first HWENO schemes \cite{QSHw1,QSHw} failed to simulate the double Mach and forward step problems. Zhu and Qiu \cite{ZQHW} then solved this problem by a new procedure to reconstruct the derivative terms, while Cai et al. \cite{CZQ} employed an additional positivity-preserving technique. Overall, merely using different stencils for the spatial discretization is not enough to overcome the effect of the derivatives or first order moments near the discontinuities. Hence, in \cite{ZCQHH} we adopted the idea of the limiter for the discontinuous Galerkin (DG) method \cite{DG2} to modify the first order moments near the discontinuities of the solution; meanwhile, we also noticed that many hybrid WENO schemes \cite{SPir,Hdp,cd1,cd2,Glj,ZZCQ} employ linear schemes directly in smooth regions while still using WENO schemes in discontinuous regions, which increases the efficiency considerably. Therefore, in \cite{ZCQHH} we directly used high order linear approximation in the smooth regions, modified the first order moments in the troubled-cells, and employed HWENO reconstruction on the interfaces. The hybrid HWENO scheme \cite{ZCQHH} has high efficiency and resolution without non-physical oscillations, but it still has the drawback that the linear weights depend on the geometry of the mesh and on the point where the reconstruction is performed, and they are not easy to compute, especially for multi-dimensional problems on unstructured meshes. For example, in \cite{ZCQHH} we needed to compute the linear weights at twelve points in one cell by a least squares methodology with eight small stencils for two dimensional problems, and the numerical accuracy was only fourth order. Moreover, on unstructured meshes the linear weights would be even more difficult to calculate, negative weights may appear, or the linear weights may not exist at all in some cases. In order to overcome this drawback, Zhu and Qiu \cite{WZQ} presented a new simple WENO scheme in the finite difference framework, which combines a fourth degree polynomial and two linear polynomials in a convex manner using arbitrary artificial positive linear weights (whose sum equals one). The method was then extended to finite volume methods on both structured and unstructured meshes \cite{bgs,vozq, DBSR, tezq,trzq,bgfb}.
In this paper, following the idea of the new type WENO schemes \cite{WZQ, vozq, DBSR, tezq, trzq}, the hybrid WENO schemes \cite{SPir,Hdp,cd1,cd2,Glj,ZZCQ} and the hybrid HWENO scheme \cite{ZCQHH}, we develop a new hybrid HWENO scheme in which we use a nonlinear convex combination of a high degree polynomial with several low degree polynomials, and the linear weights can be any artificial positive numbers with the only constraint that their sum equals one.
The procedures of the new hybrid HWENO scheme are as follows. Firstly, we modify the first order moments, using the new HWENO limiter methodology, in the troubled-cells, which are identified by the KXRCF troubled-cell indicator \cite{LJJN}. Then, for the spatial discretization, if a cell is identified as a troubled-cell, we use the new HWENO reconstruction at the points on its interface; otherwise we employ linear approximation at the interface points directly. At the internal points, we directly use high order linear approximation for all cells. Finally, the third order TVD Runge-Kutta method \cite{so1} is applied for the time discretization. In particular, only the new HWENO reconstructions need to be performed in local characteristic directions for systems. In addition, the new hybrid HWENO scheme inherits the advantages of \cite{ZCQHH}, such as the absence of non-physical oscillations thanks to the idea of the limiter for the discontinuous Galerkin (DG) method, high efficiency from employing linear approximation directly in the smooth regions, and compactness since only immediate neighbor information is needed; meanwhile, it yields smaller numerical errors on the same meshes and has higher order numerical accuracy for two dimensional problems.
The organization of the paper is as follows: in Section 2, we present the detailed implementation of the new hybrid HWENO scheme in the one and two dimensional cases. In Section 3, benchmark numerical tests are performed to illustrate the numerical accuracy, efficiency, resolution and robustness of the proposed scheme. Concluding remarks are given in Section 4.
\section{Description of Hermite WENO scheme with artificial linear weights}
\label{sec2}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
In this section, we present the construction procedures of the hybrid HWENO scheme with artificial linear weights for one and two dimensional hyperbolic conservation laws; the scheme has fifth order accuracy in both the one and two dimensional cases.
\subsection{One dimensional case}
At first, we consider one dimensional scalar hyperbolic conservation laws
\begin{equation}
\label{EQ} \left\{
\begin{array}
{ll}
u_t+ f(u)_x=0, \\
u(x,0)=u_0(x). \\
\end{array}
\right.
\end{equation}
For simplicity, the computing domain is divided into uniform cells $I_{i} =[x_{i-1/2},x_{i+1/2}]$, with cell center $x_i=\frac {x_{i-1/2}+x_{i+1/2}} 2$ and mesh size $\Delta x=x_{i+1/2}- x_{i-1/2}$.
As the variables of our HWENO scheme are the zeroth and first order moments, we multiply the governing equation (\ref{EQ}) by $\frac{1}{\Delta x}$ and $\frac{x-x_i}{{(\Delta x)}^2}$, respectively, integrate over $I_i$, and employ a numerical flux to approximate the values of the flux at the interfaces. The semi-discrete finite volume HWENO scheme then reads
\begin{equation}
\label{odeH}
\left \{
\begin{aligned}
\frac{d \overline u_i(t)}{dt} &=- \frac 1 {\Delta x} \left ( \hat f_{i+1/2}-\hat f_{i-1/2}\right ),\\
\frac{d \overline v_i(t)}{dt}&=- \frac 1 {2\Delta x} \left ( \hat f_{i-1/2} +\hat f_{i+1/2}\right ) +\frac 1 {\Delta x} F_i(u).
\end{aligned}
\right.
\end{equation}
The initial conditions are $\overline u_i(0) = \frac 1 {\Delta x} \int_{I_i} u_0(x) dx$ and $\overline v_i(0) = \frac 1 {\Delta x} \int_{I_i} u_0(x) \frac{x-x_i} { \Delta x} dx$. Here $\overline u_i(t)$ is the zeroth order moment in $I_i$, defined as $\frac 1 {\Delta x} \int_{I_i} u(x,t) dx$, and $\overline v_i(t)$ is the first order moment in $I_i$, defined as $\frac 1 {\Delta x} \int_{I_i} u(x,t) \frac{x-x_i} { \Delta x}dx $. $\hat f_{i+1/2}$ is the numerical flux approximating the flux $ f(u) $ at the interface point $x_{i+1/2}$; it is given by the Lax-Friedrichs method, with the explicit expression
\begin{equation*}
\label{Nflux} \hat f_{i+1/2}= \frac 1 2 \left (f(u^-_{i+1/2})+f(u^+_{i+1/2}) \right) -\frac \alpha 2\left( u^+_{i+1/2}-u^-_{i+1/2} \right),
\end{equation*}
in which $\alpha= \max_{u}|f'(u)|$, and $F_i(u)$ is the numerical integration of the flux $f(u)$ over $I_i$, approximated by a four-point Gauss-Lobatto quadrature formula:
\begin{equation*}
\label{Nfluxinte} F_i(u)=\frac 1 {\Delta x} \int_{I_i} f(u) dx \approx \sum_{l=1}^4 \omega_l f(u(x_l^G,t)),
\end{equation*}
where the weights are $\omega_1=\omega_4=\frac 1 {12}$ and $ \omega_2=\omega_3=\frac 5 {12}$, and the quadrature points on the cell $I_i$ are
\begin{equation*}
x_1^G=x_{i-1/2},\quad x_2^G=x_{i-\sqrt5/10}, \quad x_3^G=x_{i+\sqrt5/10}, \quad x_4^G=x_{i+1/2},
\end{equation*}
in which $x_{i+a}$ is $x_i +a \Delta x$.
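To make the flux and quadrature formulas above concrete, the following is a minimal Python sketch; the function names (\texttt{lf\_flux}, \texttt{flux\_cell\_average}, \texttt{rhs\_moments}) are our own, the bound $\alpha$ is assumed to be supplied by the caller, and \texttt{f} can be any vectorized flux function.
\begin{verbatim}
import numpy as np

def lf_flux(f, um, up, alpha):
    # Lax-Friedrichs flux; um, up are u^- and u^+ at the same interface.
    return 0.5 * (f(um) + f(up)) - 0.5 * alpha * (up - um)

# Four-point Gauss-Lobatto rule on I_i, matching the weights/nodes above.
GL_WEIGHTS = np.array([1.0, 5.0, 5.0, 1.0]) / 12.0
GL_OFFSETS = np.array([-0.5, -np.sqrt(5.0) / 10.0, np.sqrt(5.0) / 10.0, 0.5])

def flux_cell_average(f, u_gl):
    # F_i(u) ~ sum_l omega_l f(u(x_l^G, t)), with u_gl the four
    # Gauss-Lobatto point values of u in the cell.
    return np.dot(GL_WEIGHTS, f(u_gl))

def rhs_moments(fhat_l, fhat_r, F_i, dx):
    # Right-hand sides of the semi-discrete scheme for one cell;
    # fhat_l, fhat_r are the numerical fluxes at x_{i-1/2}, x_{i+1/2}.
    du = -(fhat_r - fhat_l) / dx
    dv = -(fhat_l + fhat_r) / (2.0 * dx) + F_i / dx
    return du, dv
\end{verbatim}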
We first present the detailed procedures of the spatial reconstruction for the HWENO scheme in Steps 1 and 2; the time discretization is then introduced in Step 3.
\textbf{Step 1.} Identify the troubled-cell and modify the first order moment in the troubled-cell.
A troubled-cell is a cell in which the solution may be discontinuous. We first use the KXRCF troubled-cell indicator \cite{LJJN} to identify the troubled-cells, following the procedures given for the hybrid HWENO scheme \cite{ZCQHH}. Then, if the cell $I_i$ is identified as a troubled-cell, we modify the first order moment $\overline v_i$ by the following procedures.
We use the idea of the HWENO limiter \cite{QSHw1} to modify the first order moment, with the modification based on a convex combination of a fourth degree polynomial with two linear polynomials. Firstly, we take a large stencil $S_0=\{I_{i-1},I_{i},I_{i+1}\}$ and two small stencils $S_1=\{I_{i-1},I_{i}\}$, $S_2=\{I_{i},I_{i+1}\}$; then we obtain a quartic polynomial $p_0(x)$ on $S_0$, such that
\begin{equation*}
\frac{1}{\Delta x} \int_{I_{i+j}} p_0(x)dx = \overline u_{i+j},\ j=-1,0,1, \quad \frac{1}{\Delta x} \int_{I_{i+j}} p_0(x)\frac{x-x_{i+j}} {\Delta x} dx = \overline v_{i+j}, \ j=-1,1,
\end{equation*}
and get two linear polynomials $p_1(x),p_2(x)$ on $S_1,S_2$, respectively, satisfying
\begin{equation*}
\begin{split}
\frac{1}{\Delta x} \int_{I_{i+j}} p_1(x)dx &= \overline u_{i+j}, \quad j=-1,0,\\
\frac{1}{\Delta x} \int_{I_{i+j}} p_2(x)dx &= \overline u_{i+j}, \quad j=0,1.
\end{split}
\end{equation*}
We use these three polynomials to reconstruct $\overline v_i=\frac{1}{\Delta x} \int_{I_{i}} u(x)\frac{x-x_{i}} {\Delta x} dx$, and their explicit results are
\begin{equation*}
\begin{split}
\frac{1}{\Delta x} \int_{I_{i}} p_0(x)\frac{x-x_{i}} {\Delta x}dx &= \frac 5 {76} \overline u_{i+1}-\frac 5 {76} \overline u_{i-1}-\frac {11}{38}\overline v_{i-1}-\frac {11}{38}\overline v_{i+1},\\
\frac{1}{\Delta x} \int_{I_{i}} p_1(x)\frac{x-x_{i}} {\Delta x}dx &=\frac 1 {12} \overline u_i-\frac 1 {12} \overline u_{i-1},\\
\frac{1}{\Delta x} \int_{I_{i}} p_2(x)\frac{x-x_{i}} {\Delta x}dx &= \frac 1 {12}\overline u_{i+1}-\frac 1 {12} \overline u_{i}.\\
\end{split}
\end{equation*}
For simplicity, we write $q_n$ for $\frac{1}{\Delta x} \int_{I_{i}} p_n(x)\frac{x-x_{i}} {\Delta x}dx$ in what follows. With the same idea as the central WENO schemes \cite{lpr,lpr2} and the new WENO schemes \cite{WZQ,vozq,tezq,trzq}, we rewrite $q_0$ as:
\begin{equation}\label{p_big}
q_0=\gamma_0 \left( \frac 1 {\gamma_0}q_0 -\frac {\gamma_1} {\gamma_0}q_1- \frac {\gamma_2} {\gamma_0} q_2 \right) + \gamma_1 q_1+ \gamma_2 q_2.
\end{equation}
Equation (\ref{p_big}) holds for any choice of $\gamma_0$, $\gamma_1$, $\gamma_2$ with $\gamma_0 \neq 0$. To make the subsequent WENO procedure stable, the linear weights are required to be positive with $\gamma_0+ \gamma_1+\gamma_2=1$. We then calculate the smoothness indicators $\beta_n$, which measure how smooth the functions $p_n(x)$ are in the cell $I_i$, using the same definition as in \cite{js},
\begin{equation}
\label{GHYZ}
\beta_n=\sum_{\alpha=1}^r\int_{I_i}{\Delta x}^{2\alpha-1}(\frac{d ^\alpha
p_n(x)}{d x^\alpha})^2dx, \quad n=0,1,2,
\end{equation}
where $r$ is the degree of the polynomial $p_n(x)$. The resulting expressions for the smoothness indicators are
\begin{equation*}
\left \{
\begin{aligned}
\beta_0=&(\frac{29}{38} \overline u_{i-1}-\frac{29}{38} \overline u_{i+1} +\frac{60}{19} \overline v_{i-1}+\frac{60}{19} \overline v_{i+1} )^2+( \frac94 \overline u_{i-1}-\frac92 \overline u_{i}+\frac94 \overline u_{i+1} +\frac{15}2\overline v_{i-1}-\frac{15}2\overline v_{i+1}
)^2+\\
&\frac{3905}{1444} ( \overline u_{i-1} - \overline u_{i+1}+12 \overline v_{i-1} + 12 \overline v_{i+1} )^2+\frac{1}{12}( \frac52 \overline u_{i-1}-5\overline u_{i}+\frac52 \overline u_{i+1}+9\overline v_{i-1}-9\overline v_{i+1} )^2+\\
&\frac{109341}{448}( \overline u_{i-1}-2 \overline u_{i}+ \overline u_{i+1} +\overline v_{i-1}-\overline v_{i+1} )^2,\\
\beta_1=& (\overline u_i -\overline u_{i-1})^2,\\
\beta_2=& (\overline u_{i+1} -\overline u_{i})^2.\\
\end{aligned}
\right.
\end{equation*}
Next, we use a new parameter $\tau$ to measure the absolute differences between $\beta_{0}$, $\beta_{1}$ and $\beta_{2}$, as is also done in the new WENO schemes \cite{WZQ,vozq,tezq,trzq},
\begin{equation}
\label{tao5}
\tau=(\frac{|\beta_{0}-\beta_{1}|+|\beta_{0}-\beta_{2}|}{2})^2,
\end{equation}
and the nonlinear weights are defined as
\begin{equation*}
\label{9}
\omega_n=\frac{\bar\omega_n}{\sum_{\ell=0}^{2}\bar\omega_{\ell}},
\ \mbox{with} \ \bar\omega_{n}=\gamma_{n}(1+\frac{\tau}{\beta_{n}+\varepsilon}), \ n=0,1,2,
\end{equation*}
where $\varepsilon = 10^{-6}$ is used to avoid a zero denominator.
Finally, the first order moment $\overline v_i$ is modified by
\begin{equation*}
\overline v_i = \omega_0 \left( \frac 1 {\gamma_0}q_0 - \sum_{n=1}^{2}\frac {\gamma_n} {\gamma_0} q_n\right) + \sum_{n=1}^{2}\omega_n q_n.
\end{equation*}
Notice that we simply replace the linear weights in equation (\ref{p_big}) by the nonlinear weights, and the accuracy of the modification depends on the accuracy of the high degree reconstructed polynomial. The modification of the first order moment $\overline v_i$ is therefore fifth order accurate in smooth regions; a more detailed derivation can be found in \cite{vozq}.
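The whole of Step 1 fits in a few lines of code. The following is a minimal Python sketch transcribing the printed formulas for $q_n$, $\beta_n$, (\ref{tao5}) and the nonlinear weights; the names are our own, and the default linear weights are only an illustration (any positive values summing to one are admissible).
\begin{verbatim}
import numpy as np

def znlw(betas, gammas, eps=1.0e-6):
    # Nonlinear weights: tau is the squared average of |beta_0 - beta_n|.
    betas = np.asarray(betas, dtype=float)
    tau = (np.sum(np.abs(betas[0] - betas[1:])) / (betas.size - 1)) ** 2
    wbar = np.asarray(gammas) * (1.0 + tau / (betas + eps))
    return wbar / wbar.sum()

def modify_first_moment(um1, u0, up1, vm1, vp1, gammas=(0.98, 0.01, 0.01)):
    # Step 1: modification of the first order moment v_i in a troubled cell.
    q0 = 5.0 / 76.0 * (up1 - um1) - 11.0 / 38.0 * (vm1 + vp1)
    q1 = (u0 - um1) / 12.0
    q2 = (up1 - u0) / 12.0
    b0 = (29.0 / 38.0 * (um1 - up1) + 60.0 / 19.0 * (vm1 + vp1)) ** 2 \
       + (2.25 * um1 - 4.5 * u0 + 2.25 * up1 + 7.5 * (vm1 - vp1)) ** 2 \
       + 3905.0 / 1444.0 * (um1 - up1 + 12.0 * (vm1 + vp1)) ** 2 \
       + (2.5 * um1 - 5.0 * u0 + 2.5 * up1 + 9.0 * (vm1 - vp1)) ** 2 / 12.0 \
       + 109341.0 / 448.0 * (um1 - 2.0 * u0 + up1 + vm1 - vp1) ** 2
    b1 = (u0 - um1) ** 2
    b2 = (up1 - u0) ** 2
    w = znlw([b0, b1, b2], gammas)
    g0, g1, g2 = gammas
    return w[0] * (q0 / g0 - g1 / g0 * q1 - g2 / g0 * q2) \
         + w[1] * q1 + w[2] * q2
\end{verbatim}
The helper \texttt{znlw} also covers the two dimensional case below, since the parameter $\tau$ there is the squared average of the four differences $|\beta_0-\beta_n|$.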
\textbf{Step 2.} Reconstruct the values of the solution $u$ at the four Gauss-Lobatto points.
We use the same stencils $S_0, S_1, S_2$ as in Step 1. If one of the cells in stencil $S_0$ is identified as a troubled-cell, we reconstruct $u^\pm_{i\mp1/2}$ using the HWENO methodology of Step 2.1; otherwise we directly reconstruct $u^\pm_{i\mp1/2}$ by the linear approximation described in Step 2.2. The reconstruction procedure for $u_{i\pm\sqrt5/10}$ is given in Step 2.3.
\textbf{Step 2.1.} The new HWENO reconstruction for $u^\pm_{i\mp1/2}$.
If one of the cells in stencil $S_0$ is identified as a troubled-cell, $u^\pm_{i\mp1/2}$ is reconstructed by the following HWENO procedure. For simplicity, we only present the detailed reconstruction of $u^-_{i+1/2}$; the reconstruction of $u^+_{i-1/2}$ is mirror symmetric with respect to $x_i$. Notice that the first order moment has been modified in the troubled-cells, and this modified information is used here. We now reconstruct three polynomials $p_0(x), p_1(x), p_2(x)$ on $S_0, S_1, S_2$, respectively, satisfying
\begin{equation*}
\label{HWEp1}
\begin{split}
\frac{1}{\Delta x} \int_{I_{i+j}} p_0(x)dx &= \overline u_{i+j}, \ \frac{1}{\Delta x} \int_{I_{i+j}} p_0(x)\frac{x-x_{i+j}} {\Delta x} dx = \overline v_{i+j},\quad j=-1,0,1,\\
\frac{1}{\Delta x} \int_{I_{i+j}} p_1(x)dx &= \overline u_{i+j},\ j=-1,0, \quad \frac{1}{\Delta x} \int_{I_{i}} p_1(x)\frac{x-x_{i}} {\Delta x}dx = \overline v_{i},\\
\frac{1}{\Delta x} \int_{I_{i+j}} p_2(x)dx &= \overline u_{i+j},\ j=0,1, \quad \frac{1}{\Delta x} \int_{I_{i}} p_2(x)\frac{x-x_{i}} {\Delta x}dx = \overline v_{i}.\\
\end{split}
\end{equation*}
From the above requirements, the values of these polynomials at the point $x_{i+1/2}$ are
\begin{equation*}
\begin{split}
p_0(x_{i+1/2})&=\frac {13} {108}\overline u_{i-1}+\frac 7{12}\overline u_i+\frac 8{27}\overline u_{i+1}+ \frac{25}{54}\overline v_{i-1}+\frac {241}{54}\overline v_i-\frac {28}{27}\overline v_{i+1},\\
p_1(x_{i+1/2})&=\frac16\overline u_{i-1}+\frac56\overline u_i+8 \overline v_i,\\
p_2(x_{i+1/2})&= \frac 5 6 \overline u_i+ \frac 1 {6} \overline u_{i+1}+4 \overline v_i.\\
\end{split}
\end{equation*}
With the new HWENO methodology below, we can use any positive linear weights satisfying $\gamma_0+ \gamma_1+\gamma_2=1$. We compute the smoothness indicators $\beta_n$ in the same way as in Step 1, via formula (\ref{GHYZ}); their expressions are
\begin{equation*}
\label{GHYZP}
\left \{
\begin{aligned}
\beta_0=& (\frac{19}{108} \overline u_{i-1}-\frac{19}{108} \overline u_{i+1} +\frac{31}{54}\overline v_{i-1}-\frac{241}{27}\overline v_{i}+\frac{31}{54}\overline v_{i+1})^2+(\frac94 \overline u_{i-1}-\frac92 \overline u_{i}+\frac94 \overline u_{i+1}+\\
&\frac{15}2\overline v_{i-1}-\frac{15}2\overline v_{i+1})^2+(\frac{70}{9} \overline u_{i-1}-\frac{70}{9} \overline u_{i+1} +\frac{200}{9}\overline v_{i-1}+\frac{1280}{9}\overline v_{i}+\frac{200}{9}\overline v_{i+1})^2+\\
&\frac{1}{12}(\frac52 \overline u_{i-1}-5\overline u_{i}+\frac52 \overline u_{i+1}+9\overline v_{i-1}-9\overline v_{i+1})^2+\frac{1}{12}(\frac{175}{18} \overline u_{i-1}-\frac{175}{18} \overline u_{i+1} +\frac{277}{9}\overline v_{i-1}+\\
&\frac{1546}{9}\overline v_{i}+\frac{277}{9}\overline v_{i+1})^2+\frac{1}{180}(\frac{95}{18} \overline u_{i-1}-\frac{95}{18} \overline u_{i+1} +\frac{155}{9}\overline v_{i-1}+\frac{830}{9}\overline v_{i}+\frac{155}{9}\overline v_{i+1})^2+\\
&\frac{109341}{175}(\frac58 \overline u_{i-1}-\frac54\overline u_{i}+\frac58 \overline u_{i+1}+\frac{15}{4}\overline v_{i-1}-\frac{15}{4}\overline v_{i+1})^2+\frac{27553933}{1764}(\frac{35}{36} \overline u_{i-1}-\frac{35}{36} \overline u_{i+1}+\\
&\frac{77}{18}\overline v_{i-1}+\frac{133}{9}\overline v_{i}+\frac{77}{18}\overline v_{i+1})^2,\\
\beta_1=&144\overline v_{i}^2+\frac {13}{3}(\overline u_{i-1}-\overline u_{i}+12\overline v_{i} )^2,\\
\beta_2=&144\overline v_{i}^2+\frac {13}{3}(\overline u_{i}-\overline u_{i+1}+12\overline v_{i} )^2.\\
\end{aligned}
\right.
\end{equation*}
We use the same parameter $\tau$, given by formula (\ref{tao5}), to measure the absolute differences between $\beta_{0}$, $\beta_{1}$ and $\beta_{2}$; the nonlinear weights are then computed as
\begin{equation*}
\label{90}
\omega_n=\frac{\bar\omega_n}{\sum_{\ell=0}^{2}\bar\omega_{\ell}},
\ \mbox{with} \ \bar\omega_{n}=\gamma_{n}(1+\frac{\tau}{\beta_{n}+\varepsilon}), \ n=0,1,2.
\end{equation*}
Here, $\varepsilon$ is a small positive number taken as $10^{-6}$. Finally, the value of $u^-_{i+1/2}$ is reconstructed by
\begin{equation*}
u^-_{i+1/2} =\omega_0 \left( \frac 1 {\gamma_0}p_0(x_{i+1/2}) - \sum_{n=1}^{2}\frac {\gamma_n} {\gamma_0} p_n(x_{i+1/2}) \right) + \sum_{n=1}^{2}\omega_n p_n(x_{i+1/2}).
\end{equation*}
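As a companion to the Step 1 sketch, a minimal Python sketch of this reconstruction (assuming the hypothetical \texttt{znlw} helper from there) transcribes the printed values $p_n(x_{i+1/2})$ and the indicators $\beta_n$:
\begin{verbatim}
def hweno_left_interface(um1, u0, up1, vm1, v0, vp1,
                         gammas=(0.98, 0.01, 0.01)):
    # Step 2.1: reconstruct u^-_{i+1/2} in a troubled region.
    p0 = 13.0 / 108.0 * um1 + 7.0 / 12.0 * u0 + 8.0 / 27.0 * up1 \
       + 25.0 / 54.0 * vm1 + 241.0 / 54.0 * v0 - 28.0 / 27.0 * vp1
    p1 = um1 / 6.0 + 5.0 / 6.0 * u0 + 8.0 * v0
    p2 = 5.0 / 6.0 * u0 + up1 / 6.0 + 4.0 * v0
    b0 = (19.0 / 108.0 * (um1 - up1) + 31.0 / 54.0 * (vm1 + vp1)
          - 241.0 / 27.0 * v0) ** 2 \
       + (2.25 * (um1 + up1) - 4.5 * u0 + 7.5 * (vm1 - vp1)) ** 2 \
       + (70.0 / 9.0 * (um1 - up1) + 200.0 / 9.0 * (vm1 + vp1)
          + 1280.0 / 9.0 * v0) ** 2 \
       + (2.5 * (um1 + up1) - 5.0 * u0 + 9.0 * (vm1 - vp1)) ** 2 / 12.0 \
       + (175.0 / 18.0 * (um1 - up1) + 277.0 / 9.0 * (vm1 + vp1)
          + 1546.0 / 9.0 * v0) ** 2 / 12.0 \
       + (95.0 / 18.0 * (um1 - up1) + 155.0 / 9.0 * (vm1 + vp1)
          + 830.0 / 9.0 * v0) ** 2 / 180.0 \
       + 109341.0 / 175.0 * (0.625 * (um1 + up1) - 1.25 * u0
          + 3.75 * (vm1 - vp1)) ** 2 \
       + 27553933.0 / 1764.0 * (35.0 / 36.0 * (um1 - up1)
          + 77.0 / 18.0 * (vm1 + vp1) + 133.0 / 9.0 * v0) ** 2
    b1 = 144.0 * v0 ** 2 + 13.0 / 3.0 * (um1 - u0 + 12.0 * v0) ** 2
    b2 = 144.0 * v0 ** 2 + 13.0 / 3.0 * (u0 - up1 + 12.0 * v0) ** 2
    w = znlw([b0, b1, b2], gammas)
    g0, g1, g2 = gammas
    return w[0] * (p0 / g0 - g1 / g0 * p1 - g2 / g0 * p2) \
         + w[1] * p1 + w[2] * p2
\end{verbatim}
With the symmetric default weights, the mirror symmetry gives $u^+_{i-1/2}$ as \texttt{hweno\_left\_interface(up1, u0, um1, -vp1, -v0, -vm1)}.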
\textbf{Step 2.2.} The linear approximation for $u^\mp_{i\pm1/2}$.
If no cell in stencil $S_0$ is identified as a troubled-cell, we use the linear approximation for $ u^\mp_{i\pm1/2}$, which means we only need the high degree polynomial $p_0(x)$ obtained in Step 2.1; then we have
\begin{equation*}
u^+_{i-1/2}= p_0(x_{i-1/2})=\frac 8{27}\overline u_{i-1}+\frac 7{12}\overline u_i+\frac {13} {108}\overline u_{i+1}+\frac {28}{27} \overline v_{i-1}-\frac {241}{54}\overline v_i-\frac{25}{54}\overline v_{i+1},
\end{equation*}
and
\begin{equation*}
u^-_{i+1/2}=p_0(x_{i+1/2})=\frac {13} {108}\overline u_{i-1}+\frac 7{12}\overline u_i+\frac 8{27}\overline u_{i+1}+ \frac{25}{54}\overline v_{i-1}+\frac {241}{54}\overline v_i-\frac {28}{27}\overline v_{i+1}.
\end{equation*}
\textbf{Step 2.3.} The linear approximation for $u_{i\pm\sqrt5/10}$.
We reconstruct $u_{i\pm\sqrt5/10}$ by linear approximation in all cells; $u_{i\pm\sqrt5/10}$ are approximated by
\begin{equation*}
\begin{split}
u_{i-\sqrt5/10}= p_0(x_{i-\sqrt5/10})&=-(\frac{101}{5400}\sqrt5+\frac{1}{24})\overline u_{i-1}+\frac{13}{12}\overline u_i+(\frac{101}{5400}\sqrt5-\frac{1}{24})\overline u_{i+1}-\\
&\quad (\frac{3}{20}+\frac{841}{13500}\sqrt5)\overline v_{i-1}-\frac{10289}{6750}\sqrt5\overline v_i+(\frac{3}{20}-\frac{841}{13500}\sqrt5)\overline v_{i+1},
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
u_{i+\sqrt5/10}= p_0(x_{i+\sqrt5/10})&=(\frac{101}{5400}\sqrt5-\frac{1}{24})\overline u_{i-1}+\frac{13}{12}\overline u_i-(\frac{101}{5400}\sqrt5+\frac{1}{24})\overline u_{i+1}+\\
&\quad (\frac{841}{13500}\sqrt5-\frac{3}{20})\overline v_{i-1}+\frac{10289}{6750}\sqrt5\overline v_i+(\frac{3}{20}+\frac{841}{13500}\sqrt5)\overline v_{i+1}.
\end{split}
\end{equation*}
\textbf{Step 3.} Discretize the semi-discrete scheme (\ref{odeH})
in time by the third order TVD Runge-Kutta method \cite{so1}
\begin{eqnarray}
\label{RK}\left \{
\begin{array}{lll}
u^{(1)} & = & u^n + \Delta t L(u^n),\\
u^{(2)} & = & \frac 3 4u^n + \frac 1 4u^{(1)}+\frac 1 4\Delta t L(u^{(1)}),\\
u^{n+1} & = &\frac 1 3 u^n + \frac 2 3u^{(2)} +\frac 2 3\Delta t L(u^{(2)}).
\end{array}
\right.
\end{eqnarray}
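One step of (\ref{RK}) is just three stage evaluations. A minimal Python sketch, where \texttt{u} stacks all evolved moments and \texttt{L} is the spatial operator assembled from the reconstructions above (names are our own):
\begin{verbatim}
def rk3_step(L, u, dt):
    # Third order TVD Runge-Kutta method (RK).
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))
\end{verbatim}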
{\bf \em Remark 1:} The KXRCF troubled-cell indicator can catch the discontinuities well. For the one dimensional scalar equation, the solution $u$ is taken as the indicator variable, and $\overrightarrow v$ is $f'(u)$. For the one dimensional Euler equations, the density $\rho$ and the energy $E$ are taken as the indicator variables, respectively, and $\overrightarrow v$ is the velocity $\mu$ of the fluid.
{\bf \em Remark 2:} For systems, such as the one dimensional compressible Euler equations, all HWENO procedures are performed in local characteristic directions to avoid oscillations near discontinuities, while the linear approximation procedures are computed directly on each component.
\subsection{Two dimensional case}
We first consider two dimensional scalar hyperbolic conservation laws
\begin{equation}
\label{EQ2} \left\{
\begin{array}
{ll}
u_t+ f(u)_x+g(u)_y=0, \\
u(x,y,0)=u_0(x,y), \\
\end{array}
\right.
\end{equation}
then, for simplicity, we divide the computing domain into uniform cells $I_{i,j}$=$ [x_{i-1/2},x_{i+1/2}] \times [y_{j-1/2},y_{j+1/2}]$. The mesh sizes are $ \Delta x =x_{i+1/2}- x_{i-1/2}$ in the $x$ direction and $ \Delta y =y_{j+1/2}- y_{j-1/2}$ in the $y$ direction, and the cell center is $(x_i,y_j)=(\frac{x_{i-1/2}+x_{i+1/2}}{2}, \frac {y_{j-1/2} +y_{j+1/2}}{2})$. We abbreviate $x_i+a\Delta x$ as $x_{i+a}$ and $y_j+b\Delta y$ as $y_{j+b}$.
Since the variables of the HWENO scheme are the zeroth and first order moments, we multiply the governing equation (\ref{EQ2}) by $\frac{1}{\Delta x\Delta y}$, $\frac {x-x_i} {(\Delta x)^2\Delta y}$ and $\frac {y-y_j} {\Delta x(\Delta y)^2}$, respectively, integrate over $I_{i,j}$ and apply integration by parts. The values of the flux at the points on the interface of $I_{i,j}$ are approximated by numerical fluxes. The semi-discrete finite volume HWENO scheme then reads
\begin{equation}
\label{ode2}
\left\{
\begin{aligned}
\frac{d \overline u_{i,j}(t)}{dt}&=-\frac{1} {\Delta x \Delta y} \int_{y_{j-1/2}}^{y_{j+1/2}}[\hat f(u(x_{i+1/2},y))-\hat f(u(x_{i-1/2},y))]dy\\
&-\frac{1} {\Delta x \Delta y} \int_{x_{i-1/2}}^{x_{i+1/2}}[\hat g(u(x,y_{j+1/2}))-\hat g(u(x,y_{j-1/2}))]dx,\\
\frac{d \overline v_{i,j}(t)}{dt}&=-\frac{1} {2\Delta x \Delta y} \int_{y_{j-1/2}}^{y_{j+1/2}}[\hat f(u(x_{i-1/2},y))+\hat f(u(x_{i+1/2},y))]dy+\frac 1{{\Delta x}^2 \Delta y}\int_{I_{i,j}}f(u)dxdy\\
&-\frac{1} {\Delta x \Delta y} \int_{x_{i-1/2}}^{x_{i+1/2}}\frac{(x-x_i)}{\Delta x}[\hat g(u(x,y_{j+1/2}))-\hat g(u(x,y_{j-1/2}))]dx,\\
\frac{d \overline w_{i,j}(t)}{dt}&=-\frac{1} {\Delta x \Delta y} \int_{y_{j-1/2}}^{y_{j+1/2}}\frac{(y-y_j)}{\Delta y}[\hat f(u(x_{i+1/2},y))-\hat f(u(x_{i-1/2},y))]dy\\
&-\frac{1} {2\Delta x \Delta y} \int_{x_{i-1/2}}^{x_{i+1/2}}[\hat g(u(x,y_{j-1/2}))+\hat g(u(x,y_{j+1/2}))]dx+\frac 1{{\Delta x \Delta y}^2}\int_{I_{i,j}}g(u)dxdy.
\end{aligned}
\right.
\end{equation}
The initial conditions are $ \overline u_{i,j}(0) =$$ \frac 1 {\Delta x \Delta y} \int_{I_{i,j}} u_0(x,y) dxdy$, $ \overline v_{i,j}(0) =$$ \frac 1 {\Delta x \Delta y} \int_{I_{i,j}} u_0(x,y) \frac{x-x_i} { \Delta x} dxdy$ and $ \overline w_{i,j}(0) =$$ \frac 1 {\Delta x \Delta y} \int_{I_{i,j}} u_0(x,y) \frac{y-y_j} { \Delta y} dxdy$. Here, $\overline u_{i,j}(t)$ is the zeroth order moment defined as $\frac 1 {\Delta x \Delta y} $$\int_{I_{i,j}} u(x,y,t)dxdy$; $\overline v_{i,j}(t)$ and $\overline w_{i,j}(t)$ are the first order moments in the $x$ and $y$ directions taken as $\frac 1 {\Delta x \Delta y}$$\int_{I_{i,j}} u(x,y,t)$$ \frac{x-x_i} { \Delta x} dxdy$ and $\frac 1 {\Delta x \Delta y}$$\int_{I_{i,j}} u(x,y,t)$$\frac{y-y_j}{\Delta y}dxdy$, respectively.
$\hat f(u(x_{i+1/2},y))$ and $ \hat g(u(x,y_{j+1/2}))$ are the numerical flux to approximate the values of $ f(u(x_{i+1/2},y)) $ and $g(u(x,y_{j+1/2}))$, respectively.
Now, we approximate the integral terms of equations (\ref{ode2}) by 3-point Gaussian numerical integration. More explicitly, the integral terms are approximated by
\begin{equation*}
\frac 1{\Delta x\Delta y} \int_{I_{i,j}}f(u)dxdy \approx \sum_{k=1}^{3}\sum_{l=1}^{3}\omega_k\omega_l f(u(x_{G_k},y_{G_l})),
\end{equation*}
\begin{equation*}
\int_{y_{j-1/2}}^{y_{j+1/2}}\hat f(u(x_{i+1/2},y))dy\approx \Delta y \sum_{k=1}^{3} \omega_k \hat f(u(x_{i+1/2},y_{G_k})),
\end{equation*}
in which $\omega_1=\frac5{18}$, $\omega_2=\frac4{9}$ and $\omega_3=\frac5{18}$ are the quadrature weights, and the coordinates of the Gaussian points are
\begin{equation*}
x_{G_1}=x_{i-\frac{\sqrt{15}}{10}},\ x_{G_2}=x_{i},\ x_{G_3}=x_{i+\frac{\sqrt{15}}{10}}; \quad y_{G_1}=y_{j-\frac{\sqrt{15}}{10}},\ y_{G_2}=y_{j},\ y_{G_3}=y_{j+\frac{\sqrt{15}}{10}}.
\end{equation*}
The numerical fluxes at the interface points in each direction are approximated by the Lax-Friedrichs method:
\begin{equation*}\label{2dflx}
\hat f(u(G_b))=\frac 1 2[f(u^-(G_b))+f(u^+(G_b))]-\frac{\alpha}2(u^+(G_b)-u^-(G_b)),
\end{equation*}
and
\begin{equation*}\label{2dfly}
\hat g(u(G_b))=\frac 1 2[g(u^-(G_b))+g(u^+(G_b))]-\frac{\beta}2(u^+(G_b)-u^-(G_b)).
\end{equation*}
Here, $\alpha= \max_u|f'(u)|$, $\beta= \max_u|g'(u)|$, and $G_b$ is a Gaussian point on the interface of the cell $I_{i,j}$.
We first present the detailed spatial reconstruction for the semi-discrete scheme (\ref{ode2}) in Steps 4 and 5; the time discretization is then introduced in Step 6.
\textbf{Step 4.} Identify the troubled-cell and modify the first order moments in the troubled-cell.
We again use the KXRCF troubled-cell indicator \cite{LJJN} to identify the discontinuities; the detailed implementation procedures for two dimensional problems were introduced in the hybrid HWENO scheme \cite{ZCQHH}.
If the cell $I_{i,j}$ is identified as a troubled-cell, we modify the first order moments $\overline v_{i,j}$ and $\overline w_{i,j}$ in a dimension-by-dimension manner, as shown in the sketch below. For example, we use the information $\overline u_{i-1,j}$, $\overline u_{i,j}$, $\overline u_{i+1,j}$, $\overline v_{i-1,j}$, $\overline v_{i+1,j}$ to modify $\overline v_{i,j}$, and employ $\overline u_{i,j-1}$, $\overline u_{i,j}$, $\overline u_{i,j+1}$, $\overline w_{i,j-1}$, $\overline w_{i,j+1}$ to reconstruct $\overline w_{i,j}$; the procedures are the same as in the one dimensional case.
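In terms of the one dimensional sketch from Step 1, the two modifications can be carried out independently; a minimal sketch, reusing the hypothetical \texttt{modify\_first\_moment} helper:
\begin{verbatim}
# Dimension-by-dimension modification in a troubled cell (i, j):
v[i, j] = modify_first_moment(u[i-1, j], u[i, j], u[i+1, j],
                              v[i-1, j], v[i+1, j])
w[i, j] = modify_first_moment(u[i, j-1], u[i, j], u[i, j+1],
                              w[i, j-1], w[i, j+1])
\end{verbatim}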
\textbf{Step 5.} Reconstruct the point values of the solution $u$ at the Gaussian points.
From the semi-discrete scheme (\ref{ode2}), we need to reconstruct the point values of $u^\pm(x_{i\mp1/2},y_{G_{1,2,3}})$, $u^\pm(x_{G_{1,2,3}},y_{j\mp1/2})$ and $u(x_{G_{1,2,3}},y_{G_{1,2,3}})$ in the cell $I_{i,j}$. If one of the cells in the big stencil is identified as a troubled-cell in Step 4, we reconstruct the point values of the solution $u$ at the interface points of the cell $I_{i,j}$ by the HWENO methodology of Step 5.1; otherwise we directly use linear approximation at these interface points, as in Step 5.2. For the internal reconstruction points we employ linear approximation directly, as introduced in Step 5.3.
\textbf{Step 5.1.} Reconstruct the point values of the solutions $u$ at the interface points by a new HWENO methodology.
\begin{figure}
\tikzset{global scale/.style={
scale=#1,every node/.append style={scale=#1}}}
\centering
\begin{tikzpicture}[global scale = 1]
\draw(0,0)rectangle+(1.8,1.8);\draw(1.8,0)rectangle+(1.8,1.8);\draw(3.6,0)rectangle+(1.8,1.8);
\draw(0,1.8)rectangle+(1.8,1.8);\draw(1.8,1.8)rectangle+(1.8,1.8);\draw(3.6,1.8)rectangle+(1.8,1.8);
\draw(0,3.6)rectangle+(1.8,1.8);\draw(1.8,3.6)rectangle+(1.8,1.8);\draw(3.6,3.6)rectangle+(1.8,1.8);
\draw(0.9,0.9)node{1};\draw(2.7,0.9)node{2};\draw(4.5,0.9)node{3};
\draw(0.9,2.7)node{4};\draw(2.7,2.7)node{5};\draw(4.5,2.7)node{6};
\draw(0.9,4.5)node{7};\draw(2.7,4.5)node{8};\draw(4.5,4.5)node{9};
\draw(0.9,-0.25)node{i-1};\draw(2.7,-0.25)node{i};\draw(4.5,-0.25)node{i+1};
\draw(5.8,0.9)node{j-1};\draw(5.8,2.7)node{j};\draw(5.8,4.5)node{j+1};
\end{tikzpicture}
\caption{The big stencil $S_0$ and its new labels.}
\label{2dbig}
\end{figure}
\begin{figure}
\tikzset{global scale/.style={
scale=#1,every node/.append style={scale=#1}}}
\centering
\begin{tikzpicture}[global scale = 1]
\draw(0+0,0+9.6)rectangle+(1.8,1.8);\draw(1.8+0,0+9.6)rectangle+(1.8,1.8);
\draw(0+0,1.8+9.6)rectangle+(1.8,1.8);\draw(1.8+0,1.8+9.6)rectangle+(1.8,1.8);
\draw(0.9+0,0.9+9.6)node{1};\draw(2.7+0,0.9+9.6)node{2};
\draw(0.9+0,2.7+9.6)node{4};\draw(2.7+0,2.7+9.6)node{5};
\draw(0.9+0,-0.25+9.6)node{i-1};\draw(2.7+0,-0.25+9.6)node{i};
\draw(4.0+0,0.9+9.6)node{j-1};\draw(4.0+0,2.7+9.6)node{j};
\draw(0+5.0,0+9.6)rectangle+(1.8,1.8);\draw(1.8+5.0,0+9.6)rectangle+(1.8,1.8);
\draw(0+5.0,1.8+0+9.6)rectangle+(1.8,1.8);\draw(1.8+5.0,1.8+9.6)rectangle+(1.8,1.8);
\draw(0.9+5.0,0.9+9.6)node{2};\draw(2.7+5.0,0.9+9.6)node{3};
\draw(0.9+5.0,2.7+9.6)node{5};\draw(2.7+5.0,2.7+9.6)node{6};
\draw(0.9+5.0,-0.25+9.6)node{i};\draw(2.7+5.0,-0.25+9.6)node{i+1};
\draw(4.0+5.0,0.9+9.6)node{j-1};\draw(4.0+5.0,2.7+9.6)node{j};
\draw(0+0,0+14.4)rectangle+(1.8,1.8);\draw(1.8+0,0+14.4)rectangle+(1.8,1.8);
\draw(1.8+0,1.8+14.4)rectangle+(1.8,1.8);\draw(0+0,1.8+14.4)rectangle+(1.8,1.8);
\draw(0.9+0,0.9+14.4)node{4};\draw(2.7+0,0.9+14.4)node{5};
\draw(0.9+0,2.7+14.4)node{7};\draw(2.7+0,2.7+14.4)node{8};
\draw(0.9+0,-0.25+14.4)node{i-1};\draw(2.7+0,-0.25+14.4)node{i};
\draw(4+0,0.9+14.4)node{j};\draw(4+0,2.7+14.4)node{j+1};
\draw(0+5.0,0+14.4)rectangle+(1.8,1.8);\draw(1.8+5.0,0+14.4)rectangle+(1.8,1.8);
\draw(0+5.0,1.8+14.4)rectangle+(1.8,1.8);\draw(1.8+5.0,1.8+14.4)rectangle+(1.8,1.8);
\draw(0.9+5.0,0.9+14.4)node{5}; \draw(2.7+5.0,0.9+14.4)node{6};
\draw(0.9+5.0,2.7+14.4)node{8}; \draw(2.7+5.0,2.7+14.4)node{9};
\draw(0.9+5.0,-0.25+14.4)node{i};\draw(2.7+5.0,-0.25+14.4)node{i+1};
\draw(4+5.0,0.9+14.4)node{j};\draw(4+5.0,2.7+14.4)node{j+1};
\end{tikzpicture}
\caption{The four small stencils and their respective labels. From left to right and bottom to top are the stencils $S_1,...,S_4$.}
\label{2dsmall}
\end{figure}
If one of the cells in the big stencil is identified as a troubled-cell, the point values of the solution $u$ at the interface points of the cell $I_{i,j}$ are reconstructed by the following new HWENO methodology.
We first give the big stencil $S_0$ in Figure \ref{2dbig}, and relabel the cell $I_{i,j}$ and its neighboring cells as $I_1,...,I_9$ for simplicity; in particular, the new label of the cell $I_{i,j}$ is $I_5$. In the following procedures, $G_k$ denotes the specific point at which the reconstruction is performed. We also give four small stencils $S_1,...,S_4$, shown in Figure \ref{2dsmall}. Notice that we only use five candidate stencils, whereas the hybrid HWENO scheme \cite{ZCQHH} needed eight small stencils. Now, we construct a quartic reconstruction polynomial $p_0(x,y)$ $\in span$ $\{1,x,y,x^2,xy,y^2,x^3,x^2y,xy^2,y^3,x^4,x^3y,x^2y^2,xy^3, y^4\}$ on the big stencil $S_0$ and four quadratic polynomials $p_1(x,y),...,p_4(x,y)$ $\in span\{1,x,y,x^2,xy,y^2\}$ on the four small stencils $S_1,...,S_4$, respectively. These polynomials satisfy the following conditions:
\begin{equation*}
\begin{array}{ll}
\frac{1}{\Delta x \Delta y}\int_{I_k}p_n(x,y)dxdy=\overline u_k, \\
\frac{1}{\Delta x \Delta y}\int_{I_{k_x}}p_n(x,y)\frac{(x-x_{k_x})}{\Delta x}dxdy=\overline v_{k_x}, \quad \frac{1}{\Delta x \Delta y}\int_{I_{k_y}}p_n(x,y)\frac{(y-y_{k_y})}{\Delta y}dxdy=\overline w_{k_y}, \\
\end{array}
\end{equation*}
for
\begin{equation*}
\begin{array}{ll}
n=0,\quad k=1,...,9,\ k_x=k_y=2,4,5,6,8;\\
n=1,\quad k=1,2,4,5,\ k_x=k_y=5; \quad n=2,\quad k=2,3,5,6,\ k_x=k_y=5;\\
n=3,\quad k=4,5,7,8,\ k_x=k_y=5; \quad n=4,\quad k=5,6,8,9,\ k_x=k_y=5.
\end{array}
\end{equation*}
For the quartic polynomial $p_0(x,y)$, we require that it matches the zeroth order moments on the cells $I_1,...,I_9$ and the first order moments on the cell $I_5$ exactly, with the remaining conditions imposed in a least squares sense \cite{hs}. For the four quadratic polynomials $p_n(x,y)$ $(n=1,...,4)$, we directly obtain their expressions from the corresponding requirements above.
As in the one dimensional case, the new HWENO method can use any artificial positive linear weights (whose sum equals 1), whereas the hybrid HWENO scheme \cite{ZCQHH} needed to calculate the linear weights at 12 points using 8 small stencils by a least squares methodology, and those linear weights were not easy to obtain, especially for high dimensional problems or unstructured meshes. In addition, that scheme only had fourth order accuracy in two dimensions, while the new HWENO methodology achieves fifth order numerical accuracy. Next, to measure how smooth the function $p_n(x, y)$ is in the target cell $I_{i,j}$, we compute the smoothness indicators $\beta_n$ in the same way as in \cite{hs}:
\begin{equation}\label{2dGHYZ}
\beta_n= \sum_{|l|=1}^r|I_{i,j}|^{|l|-1} \int_{I_{i,j}}\left( \frac {\partial^{|l|}}{\partial x^{l_1}\partial y^{l_2}}p_n(x,y)\right)^2 dxdy, \quad n=0,...,4,
\end{equation}
where $l=(l_1,l_2)$, $|l|=l_1+l_2$ and $r$ is the degree of $p_n(x, y)$. Similarly, we introduce a new parameter $\tau$ to measure the overall difference between the $\beta_{l}$, $l=0,...,4$, as
\begin{equation}
\label{tao4}
\tau=\left(\frac{|\beta_{0}-\beta_{1}|+|\beta_{0}-\beta_{2}|+|\beta_{0}-\beta_{3}|+|\beta_{0}-\beta_{4}|}{4}\right)^2,
\end{equation}
then, the nonlinear weights are defined as
\begin{equation}
\label{99}
\omega_n=\frac{\bar\omega_n}{\sum_{\ell=0}^{4}\bar\omega_{\ell}},
\ \mbox{with} \ \bar\omega_{n}=\gamma_{n}(1+\frac{\tau}{\beta_{n}+\varepsilon}), \ n=0,...,4,
\end{equation}
in which $\varepsilon$ is taken as $10^{-6}$. The final reconstruction of the solution $u$ at the interface point $G_k$ is
\begin{equation*}
u^*(G_k) =\omega_0 \left( \frac 1 {\gamma_0}p_0(G_k) - \sum_{n=1}^{4}\frac {\gamma_n} {\gamma_0} p_n(G_k) \right) + \sum_{n=1}^{4}\omega_n p_n(G_k).
\end{equation*}
where "*" is "+" when $G_k$ is located on the left or bottom interface of the cell $I_{i,j}$, while "*" is "-" on the right or top interface of $I_{i,j}$.
\textbf{Step 5.2.} Reconstruct the point values of the solution $u$ at the interface points using linear approximation.
If no cell in the big stencil $S_0$ is identified as a troubled-cell, the point value of the solution $u$ at the interface point $G_k$ is directly approximated by $p_0(G_k)$, using the same polynomial $p_0(x,y)$ given in Step 5.1.
\textbf{Step 5.3.} Reconstruct the point values of the solution $u$ at the internal points by linear approximation directly.
We use linear approximation for the point values of the solution $u$ at the internal points in all cells, directly employing the same quartic polynomial $p_0(x,y)$ obtained in Step 5.1 to approximate these point values.
\textbf{Step 6.} Discretize the semi-discrete scheme (\ref{ode2}) in time by the third order TVD Runge-Kutta method \cite{so1}.
The semi-discrete scheme (\ref{ode2}) is discretized by the third order TVD Runge-Kutta method in time, and the formula is given in (\ref{RK}) for the one dimensional case.
{\bf \em Remark 3:} The KXRCF indicator is also suitable for two dimensional hyperbolic conservation laws. For the two dimensional scalar equation, the solution $u$ is the indicator variable; $\overrightarrow{v}$ is set as $f'(u)$ in the $x$ direction and as $g'(u)$ in the $y$ direction. For the two dimensional Euler equations, the density $\rho$ and the energy $E$ are the indicator variables, respectively; $\overrightarrow{v}$ is the velocity $\mu$ in the $x$ direction of the fluid and the velocity $\nu$ in the $y$ direction.
{\bf \em Remark 4:} For systems, such as the two dimensional compressible Euler equations, all HWENO reconstruction procedures are performed on the local characteristic decompositions, while the linear approximation procedures are performed component by component.
\section{Numerical tests}
\label{sec3}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
In this section, we present the numerical results of the new hybrid HWENO scheme described in Section 2. In order to fully assess the influence of the modification of the first order moment on accuracy, all cells are marked as troubled-cells in Step 1 and Step 4 for the one and two dimensional cases, respectively, and we denote this method as the New HWENO scheme. The HWENO scheme and the Hybrid HWENO scheme denote the schemes presented in \cite{ZCQHH}. The CFL number is set as 0.6, except for the hybrid HWENO scheme in the two dimensional non-smooth tests.
\subsection{Accuracy tests}
We present the results of the HWENO, New HWENO, Hybrid HWENO and New hybrid HWENO schemes for the one and two dimensional accuracy tests. In addition, to evaluate whether the choice of the linear weights affects the order of the new HWENO methodology, we use random positive linear weights (summing to one) at each time step for the New HWENO and New hybrid HWENO schemes.
\noindent{\bf Example 3.1.} We solve the following scalar Burgers' equation:
\begin{equation}\label{1dbugers}
u_t+(\frac {u^2} 2)_x=0, \quad 0<x<2.
\end{equation}
The initial condition is $u(x,0)=0.5+\sin(\pi x)$ with periodic boundary condition. The computing time is $t=0.5/\pi$, at which the solution is still smooth. We give the numerical errors and orders in Table \ref{tburgers1d} on uniform meshes with $N$ cells for the HWENO, New HWENO, Hybrid HWENO and New hybrid HWENO schemes.
At first, we note that the Hybrid HWENO and New hybrid HWENO schemes give identical results here, since no cells are identified as troubled-cells and both directly use linear approximation for the spatial reconstruction. Although all these HWENO schemes attain the designed fifth order accuracy, the hybrid schemes perform better, with smaller numerical errors than the corresponding HWENO schemes; meanwhile, the New HWENO scheme has smaller numerical errors than the HWENO scheme starting from 80 cells, which illustrates that the new HWENO methodology performs better than the original HWENO method. In addition, the choice of the linear weights does not affect the order of the new HWENO methodology. Finally, we show numerical errors against CPU times for these HWENO schemes in Figure \ref{Fburges1d_smooth}: the two hybrid HWENO schemes have much higher efficiency than the other HWENO schemes, and the New HWENO scheme is also more efficient than the HWENO scheme.
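For reference, the orders reported in this and the following tables are consistent with the standard two-mesh estimate
\begin{equation*}
\mbox{order} \approx \frac{\log\left(E_{N_1}/E_{N_2}\right)}{\log\left(N_2/N_1\right)},
\end{equation*}
where $E_N$ is the $L^1$ or $L^\infty$ error on the mesh with $N$ cells; for instance, for the HWENO scheme the refinement from $N=40$ to $N=80$ gives $\log(4.23\times10^{-5}/1.24\times10^{-6})/\log 2 \approx 5.09$.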
\begin{table}
\begin{center}
\caption{1D-Burgers' equation: initial data
$u(x,0)=0.5+\sin(\pi x)$. HWENO schemes. $T=0.5/\pi$. $L^1$ and $L^\infty$ errors and orders. Uniform meshes with $N$ cells. }
\medskip
\begin{tabular} {lllllllll} \hline
$N$ cells & \multicolumn{4}{l}{HWENO scheme} & \multicolumn{4}{l}{New HWENO scheme}\\
\cline{2-5} \cline{6-9}
&$ L^1$ error & order & $L^\infty$error & order &$ L^1$ error & order & $L^\infty$ error &order\\ \hline
40 & 4.23E-05 & & 5.25E-04 & & 6.42E-04 & & 6.89E-03 & \\
80 & 1.24E-06 & 5.09 & 1.70E-05 & 4.95 & 4.20E-07 & 10.58 & 4.91E-06 & 10.45 \\
120 & 1.72E-07 & 4.88 & 2.08E-06 & 5.17 & 3.97E-08 & 5.82 & 6.04E-07 & 5.17 \\
160 & 4.26E-08 & 4.85 & 4.84E-07 & 5.08 & 8.83E-09 & 5.23 & 1.40E-07 & 5.08 \\
200 & 1.34E-08 & 5.17 & 1.72E-07 & 4.64 & 2.80E-09 & 5.15 & 4.47E-08 & 5.12 \\
240 & 5.21E-09 & 5.20 & 7.22E-08 & 4.76 & 1.10E-09 & 5.14 & 1.75E-08 & 5.16 \\
\hline
$N$ cells & \multicolumn{4}{l}{Hybrid HWENO scheme} & \multicolumn{4}{l}{New Hybrid HWENO scheme}\\
\cline{2-5} \cline{6-9}
&$ L^1$ error & order & $L^\infty$error & order &$ L^1$ error & order & $L^\infty$ error &order\\ \hline
40 & 8.51E-07 & & 1.14E-05 & & 8.51E-07 & & 1.14E-05 & \\
80 & 1.46E-08 & 5.87 & 2.26E-07 & 5.65 & 1.46E-08 & 5.87 & 2.26E-07 & 5.65 \\
120 & 1.39E-09 & 5.80 & 2.04E-08 & 5.94 & 1.39E-09 & 5.80 & 2.04E-08 & 5.94 \\
160 & 2.66E-10 & 5.75 & 3.59E-09 & 6.03 & 2.66E-10 & 5.75 & 3.59E-09 & 6.03 \\
200 & 7.46E-11 & 5.70 & 9.58E-10 & 5.92 & 7.46E-11 & 5.70 & 9.58E-10 & 5.92 \\
240 & 2.68E-11 & 5.62 & 3.27E-10 & 5.90 & 2.68E-11 & 5.62 & 3.27E-10 & 5.90 \\
\hline
\end{tabular}
\label{tburgers1d}
\end{center}
\end{table}
\begin{figure}
\centerline{
\psfig{file=Burger_L1.pdf,width=2.5 in}
\psfig{file=Burges_L_infinity.pdf,width=2.5 in}}
\caption{1D-Burgers' equation: initial data
$u(x,0)=0.5+\sin(\pi x)$. $T=0.5/\pi$. Computing times and errors. Triangle signs and a green solid line: the results of HWENO scheme; circle signs and a black solid line: the results of New HWENO scheme; plus signs and a blue solid line: the results of Hybrid HWENO scheme; rectangle signs and a red solid line: the results of New hybrid HWENO scheme.}
\label{Fburges1d_smooth}
\end{figure}
\smallskip
\noindent{\bf Example 3.2.} One dimensional Euler equations:
\begin{equation}
\label{euler1}
\frac{\partial}{\partial t}
\left(
\begin{array}{c}
\rho \\
\rho \mu \\
E
\end{array} \right )
+
\frac{\partial}{\partial x}
\left (
\begin{array}{c}
\rho \mu \\
\rho \mu^{2}+p \\
\mu(E+p)
\end{array}
\right )
=0,
\end{equation}
where $\rho$ is the density, $\mu$ is the velocity, $E$ is the total energy and $p$ is the pressure. The initial conditions are $\rho(x,0)=1+0.2\sin(\pi x)$, $\mu(x,0)=1$, $p(x,0)=1$ and $\gamma=1.4$ with periodic boundary condition. The computing domain is $ x \in [0, 2\pi]$. The exact solution is $\rho(x,t)=1+0.2\sin(\pi(x-t))$, $\mu(x,t)=1$, $p(x,t)=1$, and the computing time is $T=2$. We present the numerical errors and orders of the density for the HWENO schemes in Table \ref{tEluer1d}. All of these HWENO schemes achieve the fifth order accuracy, and the two hybrid HWENO schemes perform identically since both directly use linear approximation for the spatial reconstruction; meanwhile, the hybrid schemes have smaller numerical errors than the corresponding HWENO schemes. In addition, the New HWENO scheme has smaller errors than the HWENO scheme, which shows that the new HWENO methodology performs better than the original HWENO method, and random positive linear weights at each time step do not affect the order of accuracy of the New HWENO scheme. Finally, we give the numerical errors against CPU times for these HWENO schemes in Figure \ref{FEuler1d_smooth}: the hybrid HWENO schemes have much higher efficiency, with smaller numerical errors and less CPU time than the other HWENO schemes, and the New HWENO scheme is more efficient, with smaller errors, than the HWENO scheme.
\begin{table}
\begin{center}
\caption{1D-Euler equations: initial data
$\rho(x,0)=1+0.2\sin(\pi x)$, $\mu(x,0)=1$ and $p(x,0)=1$. HWENO schemes. $T=2$. $L^1$ and $L^\infty$ errors and orders. Uniform meshes with $N$ cells. }
\medskip
\begin{tabular} {lllllllll} \hline
$N$ cells & \multicolumn{4}{l}{HWENO scheme} & \multicolumn{4}{l}{New HWENO scheme}\\
\cline{2-5} \cline{6-9}
&$ L^1$ error & order & $L^\infty$error & order &$ L^1$ error & order & $L^\infty$ error &order\\ \hline
40 & 4.00E-06 & & 8.18E-06 & & 9.09E-07 & & 4.85E-06 & \\
80 & 1.22E-07 & 5.04 & 2.43E-07 & 5.08 & 7.89E-09 & 6.85 & 3.76E-08 & 7.01 \\
120 & 1.59E-08 & 5.03 & 3.05E-08 & 5.11 & 1.04E-09 & 5.01 & 2.44E-09 & 6.75 \\
160 & 3.73E-09 & 5.03 & 6.71E-09 & 5.26 & 2.46E-10 & 5.00 & 4.54E-10 & 5.84 \\
200 & 1.21E-09 & 5.04 & 2.12E-09 & 5.17 & 8.05E-11 & 5.00 & 1.37E-10 & 5.37 \\
240 & 4.82E-10 & 5.06 & 8.35E-10 & 5.10 & 3.23E-11 & 5.00 & 5.25E-11 & 5.26 \\\hline
$N$ cells & \multicolumn{4}{l}{Hybrid HWENO scheme} & \multicolumn{4}{l}{New hybrid HWENO scheme}\\
\cline{2-5} \cline{6-9}
&$ L^1$ error & order & $L^\infty$error & order &$ L^1$ error & order & $L^\infty$ error &order\\ \hline
40 & 1.02E-09 & & 1.60E-09 & & 1.02E-09 & & 1.60E-09 & \\
80 & 3.10E-11 & 5.05 & 4.86E-11 & 5.04 & 3.10E-11 & 5.05 & 4.86E-11 & 5.04 \\
120 & 4.06E-12 & 5.01 & 6.37E-12 & 5.01 & 4.06E-12 & 5.01 & 6.37E-12 & 5.01 \\
160 & 9.61E-13 & 5.01 & 1.51E-12 & 5.01& 9.61E-13 & 5.01 & 1.51E-12 & 5.01 \\
200 & 3.15E-13 & 5.00 & 4.94E-13 & 5.00& 3.15E-13 & 5.00 & 4.94E-13 & 5.00 \\
240 & 1.26E-13 & 5.00 & 1.98E-13 & 5.00 & 1.26E-13 & 5.00 & 1.98E-13 & 5.00 \\ \hline
\end{tabular}
\label{tEluer1d}
\end{center}
\end{table}
\begin{figure}
\centerline{
\psfig{file=Euler_L1.pdf,width=2.5 in}
\psfig{file=Euler_L_infinity.pdf,width=2.5 in}}
\caption{1D-Euler equations: initial data
$\rho(x,0)=1+0.2\sin(\pi x)$, $\mu(x,0)=1$ and $p(x,0)=1$. $T=2$. Computing times and errors. Triangle signs and a green solid line: the results of HWENO scheme; circle signs and a black solid line: the results of New HWENO scheme; plus signs and a blue solid line: the results of Hybrid HWENO scheme; rectangle signs and a red solid line: the results of New hybrid HWENO scheme.}
\label{FEuler1d_smooth}
\end{figure}
\smallskip
\noindent{\bf Example 3.3.} Two dimensional Burgers' equation:
\begin{equation}\label{2dbugers}
u_t+(\frac {u^2} 2)_x+(\frac {u^2} 2)_y=0, \quad 0<x<4, \ 0<y<4.
\end{equation}
The initial condition is $u(x,y,0)=0.5+\sin(\pi (x+y)/2)$, and periodic boundary conditions are applied in each direction. We compute the solution up to $T=0.5/\pi$, at which the solution is still smooth, and present the numerical errors and orders in Table \ref{tburgers2d}. The table shows that the New HWENO and New hybrid HWENO schemes achieve the fifth order accuracy, while the HWENO and Hybrid HWENO schemes only have the fourth order accuracy, and a different choice of the linear weights has no influence on the numerical accuracy of the new HWENO methodology. In addition, we present the numerical errors against CPU times for these HWENO schemes in Figure \ref{Fburges2d_smooth}: the New hybrid HWENO scheme is more efficient than the Hybrid HWENO scheme, with smaller numerical errors and higher order numerical accuracy, and both hybrid schemes need less CPU time than the corresponding non-hybrid schemes; meanwhile, the New HWENO scheme is more efficient than the HWENO scheme.
\begin{table}
\begin{center}
\caption{2D-Burgers' equation: initial data
$u(x,y,0)=0.5+\sin(\pi (x+y)/2)$. HWENO schemes. $T=0.5/\pi$. $L^1$ and $L^\infty$ errors and orders. Uniform meshes with $N_x\times N_y$ cells.}
\medskip
\begin{tabular} {lllllllll}
\hline
$N_x\times N_y$ cells & \multicolumn{4}{l}{HWENO scheme} & \multicolumn{4}{l}{New HWENO scheme}\\
\cline{2-5}\cline{6-9}
&$ L^1$ error & order & $L^\infty$error & order &$ L^1$ error & order & $L^\infty$ error &order\\ \hline
$ 40\times 40$ & 8.21E-05 & & 7.02E-04 & & 1.28E-04 & & 1.10E-03 & \\
$ 80\times 80$ & 4.67E-06 & 4.14 & 4.42E-05 & 3.99 & 2.86E-07 & 8.81 & 2.25E-06 & 8.94 \\
$ 120\times 120$ & 8.70E-07 & 4.15 & 7.76E-06 & 4.29 & 2.52E-08 & 5.99 & 3.04E-07 & 4.95 \\
$ 160\times 160$ & 2.66E-07 & 4.13 & 2.26E-06 & 4.29 & 5.60E-09 & 5.22 & 7.19E-08 & 5.00 \\
$ 200\times 200$ & 1.06E-07 & 4.12 & 8.73E-07 & 4.26 & 1.79E-09 & 5.12 & 2.39E-08 & 4.95 \\
$ 240\times 240$ & 5.02E-08 & 4.09 & 4.04E-07 & 4.23 & 7.12E-10 & 5.05 & 9.53E-09 & 5.03 \\ \hline
$N_x\times N_y$ cells & \multicolumn{4}{l}{Hybrid HWENO scheme} & \multicolumn{4}{l}{New Hybrid HWENO scheme}\\
\cline{2-5}\cline{6-9}
&$ L^1$ error & order & $L^\infty$error & order &$ L^1$ error & order & $L^\infty$ error &order\\ \hline
$ 40\times 40$ & 7.03E-05 & & 6.32E-04 & & 2.70E-06 & & 2.49E-05 & \\
$ 80\times 80$ & 3.93E-06 & 4.16 & 4.28E-05 & 3.88 & 5.01E-08 & 5.75 & 8.91E-07 & 4.81 \\
$ 120\times 120$ & 7.27E-07 & 4.16 & 7.61E-06 & 4.26 & 4.15E-09 & 6.14 & 7.81E-08 & 6.00 \\
$ 160\times 160$ & 2.18E-07 & 4.19 & 2.30E-06 & 4.16 & 7.00E-10 & 6.18 & 1.27E-08 & 6.32 \\
$ 200\times 200$ & 8.61E-08 & 4.16 & 8.95E-07 & 4.23 & 1.94E-10 & 5.74 & 3.26E-09 & 6.09 \\
$ 240\times 240$ & 4.05E-08 & 4.14 & 4.18E-07 & 4.18 & 7.65E-11 & 5.12 & 1.17E-09 & 5.63 \\
\hline
\end{tabular}
\label{tburgers2d}
\end{center}
\end{table}
\begin{figure}
\centerline{
\psfig{file=2DBurger_L1.pdf,width=2.5 in}
\psfig{file=2DBurges_L_infinity.pdf,width=2.5 in}}
\caption{2D-Burgers' equation: initial data
$u(x,y,0)=0.5+\sin(\pi (x+y)/2)$. $T=0.5/\pi$. Computing times and errors. Triangle signs and a green solid line: the results of HWENO scheme; circle signs and a black solid line: the results of New HWENO scheme; plus signs and a blue solid line: the results of Hybrid HWENO scheme; rectangle signs and a red solid line: the results of New hybrid HWENO scheme.}
\label{Fburges2d_smooth}
\end{figure}
\smallskip
\noindent {\bf Example 3.4.} Two dimensional Euler equations:
\begin{equation}
\label{euler2}
\frac{\partial}{\partial t}
\left(
\begin{array}{c}
\rho \\
\rho \mu \\
\rho \nu \\
E
\end{array} \right )
+
\frac{\partial}{\partial x}
\left (
\begin{array}{c}
\rho \mu \\
\rho \mu^{2}+p \\
\rho \mu \nu \\
\mu(E+p)
\end{array}
\right )
+
\frac{\partial}{\partial y}
\left (
\begin{array}{c}
\rho \nu \\
\rho \mu \nu \\
\rho \nu^{2}+p \\
\nu(E+p)
\end{array}
\right )=0,
\end{equation}
in which $\rho$ is the density, $(\mu,\nu)$ is the velocity, $E$ is the total energy and $p$ is the pressure. The initial conditions are $\rho(x,y,0)=1+0.2\sin(\pi(x+y))$, $\mu(x,y,0)=1$, $\nu(x,y,0)=1$, $p(x,y,0)=1$ and $\gamma=1.4$. The computing domain is $(x,y)\in [0,2] \times [0, 2]$ with periodic boundary conditions in the $x$ and $y$ directions. The exact solution of the density is $\rho(x,y,t)=1+0.2\sin(\pi(x+y-2t))$ and the computing time is $T=2$. We give the numerical errors and orders of the density for the HWENO, New HWENO, Hybrid HWENO and New hybrid HWENO schemes in Table \ref{tEluer2d}. We find that the New HWENO and New hybrid HWENO schemes achieve the fifth order accuracy, while the HWENO and Hybrid HWENO schemes only have the fourth order accuracy; meanwhile, random positive linear weights (summing to one) have no impact on the order of accuracy of the New HWENO scheme. Finally, we show the numerical errors against CPU times in Figure \ref{FEuler2d}, which illustrates that the New hybrid HWENO scheme has higher efficiency than the other three schemes, and that the New HWENO scheme performs better, with smaller numerical errors and higher order accuracy, than the HWENO scheme.
\begin{table}
\begin{center}
\caption{2D-Euler equations: initial data $\rho(x,y,0)=1+0.2\sin(\pi(x+y))$, $\mu(x,y,0)=1$,
$\nu(x,y,0)=1$ and $p(x,y,0)=1$. HWENO schemes. $T=2$. $L^1$ and $L^\infty$ errors and orders. Uniform meshes with $N_x\times N_y$ cells.}
\medskip
\begin{tabular} {lllllllll}
\hline
$N_x\times N_y$ cells & \multicolumn{4}{l}{HWENO scheme} & \multicolumn{4}{l}{New HWENO scheme}\\
\cline{2-5}\cline{6-9}
&$ L^1$ error & order & $L^\infty$error & order &$ L^1$ error & order & $L^\infty$ error &order\\ \hline
$ 30\times 30$ & 8.85E-05 & & 1.63E-04 & & 6.78E-06 & & 2.86E-05 & \\
$ 60\times 60$ & 4.39E-06 & 4.33 & 7.09E-06 & 4.52 & 6.64E-08 & 6.67 & 2.71E-07 & 6.72 \\
$ 90\times 90$ & 8.08E-07 & 4.17 & 1.29E-06 & 4.20 & 8.73E-09 & 5.00 & 2.04E-08 & 6.38 \\
$ 120\times 120$ & 2.48E-07 & 4.11 & 3.95E-07 & 4.11 & 2.07E-09 & 5.00 & 3.88E-09 & 5.77 \\
$ 150\times 150$ & 1.00E-07 & 4.07 & 1.59E-07 & 4.07 & 6.78E-10 & 5.00 & 1.14E-09 & 5.48 \\
\hline
$N_x\times N_y$ cells & \multicolumn{4}{l}{Hybrid HWENO scheme} & \multicolumn{4}{l}{New hybrid HWENO scheme}\\
\cline{2-5}\cline{6-9}
&$ L^1$ error & order & $L^\infty$error & order &$ L^1$ error & order & $L^\infty$ error &order\\ \hline
$ 30\times 30$ & 2.37E-05 & & 3.72E-05 & & 3.11E-07 & & 4.87E-07 & \\
$ 60\times 60$ & 7.76E-07 & 4.93 & 1.22E-06 & 4.93 & 4.55E-09 & 6.09 & 7.14E-09 & 6.09 \\
$ 90\times 90$ & 1.07E-07 & 4.89 & 1.68E-07 & 4.89 & 3.95E-10 & 6.03 & 6.20E-10 & 6.03 \\
$ 120\times 120$ & 2.67E-08 & 4.82 & 4.20E-08 & 4.82 & 7.01E-11 & 6.01 & 1.10E-10 & 6.00 \\
$ 150\times 150$ & 9.27E-09 & 4.75 & 1.46E-08 & 4.75 & 1.84E-11 & 5.99 & 3.00E-11 & 5.84 \\
\hline
\end{tabular}
\label{tEluer2d}
\end{center}
\end{table}
\begin{figure}
\centerline{
\psfig{file=Euler_2d_L1.pdf,width=2.5 in}\psfig{file=Euler_2d_L_in.pdf,width=2.5 in}}
\caption{2D-Euler equations: initial data
$\rho(x,y,0)=1+0.2\sin(\pi(x+y))$, $\mu(x,y,0)=1$,
$\nu(x,y,0)=1$ and $p(x,y,0)=1$. $T=2$. Computing times and errors. Triangle signs and a green solid line: the results of HWENO scheme; circle signs and a black solid line: the results of New HWENO scheme; plus signs and a blue solid line: the results of Hybrid HWENO scheme; rectangle signs and a red solid line: the results of New hybrid HWENO scheme.}
\label{FEuler2d}
\end{figure}
\smallskip
\subsection{Non-smooth tests}
We present the results of the new hybrid HWENO scheme here; the linear weights for the low degree polynomials are set as 0.01, and the linear weight for the high degree polynomial takes the rest (so that the sum of the linear weights equals one). For comparison, we also show the numerical results of the hybrid HWENO scheme \cite{ZCQHH}. From the results of the non-smooth tests, the two schemes have similar performance in one dimension, but the new hybrid HWENO scheme performs better in two dimensions since it has higher order numerical accuracy. In addition, the new hybrid HWENO scheme uses a simpler HWENO methodology, in which any artificial positive linear weights (summing to 1) can be used, making it easier to implement; it also uses fewer candidate stencils and a larger CFL number for two dimensional problems.
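In terms of the Python sketches of Section 2, this choice corresponds to the default linear weights used there, e.g.
\begin{verbatim}
gammas_1d = (0.98, 0.01, 0.01)              # p_0 takes the remainder
gammas_2d = (0.96, 0.01, 0.01, 0.01, 0.01)  # four quadratics at 0.01
\end{verbatim}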
\noindent{\bf Example 3.5.} We solve the one-dimensional Burgers' equation (\ref{1dbugers}) introduced in Example 3.1 with the same initial and boundary conditions, but the final computing time is $t=1.5/\pi$, at which the solution is discontinuous. In Figure \ref{Fburges1d}, we present the numerical solutions of the HWENO schemes and the exact solution; the two schemes give similar numerical results with high resolution.
\begin{figure}
\centerline{ \psfig{file=1d_Burges.pdf,width=3 in}}
\caption{1D-Burgers' equation: initial data
$u(x,0)=0.5+\sin(\pi x)$. $T=1.5/\pi$. Black solid line: exact solution; blue plus signs: the results of the hybrid HWENO scheme; red squares: the results of the new hybrid HWENO scheme. Uniform meshes with 80 cells.}
\label{Fburges1d}
\end{figure}
\smallskip
\noindent {\bf Example 3.6.} The Lax problem for the 1D Euler equations with the following Riemann initial condition:
\begin{equation*}
\label{lax} (\rho,\mu,p,\gamma)^T= \left\{
\begin{array}{ll}
(0.445,0.698,3.528,1.4)^T,& x \in [-0.5,0),\\
(0.5,0,0.571,1.4)^T,& x \in [0,0.5].
\end{array}
\right.
\end{equation*}
The computing time is $T=0.16$. In Figure \ref{laxfig}, we plot the exact solution against the computed density $\rho$ obtained with the HWENO schemes, together with a zoomed-in picture and the time history of the cells where the modification procedure is used in the new hybrid HWENO scheme. The results computed by the new hybrid HWENO scheme are closer to the exact solution, and only 13.41\% of the cells use the new HWENO methodology, which means that most regions directly use linear approximation, with no modification of the first order moments and no HWENO reconstruction for the spatial discretization. The new hybrid HWENO scheme keeps good resolution as well.
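As a minimal illustration of how such piecewise-constant Riemann data can be set up on a uniform mesh (cell-centre sampling; the variable names are ours and not taken from the scheme implementation), one may write:
\begin{verbatim}
import numpy as np

# Lax Riemann data on N uniform cells over [-0.5, 0.5]
N = 200
x = np.linspace(-0.5, 0.5, N, endpoint=False) + 0.5 / N  # cell centres
left = x < 0.0
rho = np.where(left, 0.445, 0.5)
mu  = np.where(left, 0.698, 0.0)
p   = np.where(left, 3.528, 0.571)
gamma = 1.4
E = p / (gamma - 1.0) + 0.5 * rho * mu**2  # total energy, ideal gas
\end{verbatim}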
\begin{figure}
\centering{\psfig{file=1d_lax.pdf,width=2 in}\psfig{file=1d_lax_zoom.pdf,width=2 in}\psfig{file=1d_lax_limiter.pdf,width=2 in}}
\caption{The Lax problem. T=0.16. From left to right: density; density zoomed in; the cells where the modification for the first order moments are computed in the new hybrid HWENO scheme. Black solid line: the exact solution; blue plus signs: the results of the hybrid HWENO scheme; red squares: the results of the new hybrid HWENO scheme. Uniform meshes with 200 cells.}
\label{laxfig}
\end{figure}
\smallskip
\noindent {\bf Example 3.7.} The Shu-Osher problem, which has a shock interaction with entropy waves \cite{s2}. The initial condition is
\begin{equation*}
\label{ShuOsher} (\rho,\mu,p,\gamma)^T= \left\{
\begin{array}{ll}
(3.857143, 2.629369, 10.333333,1.4)^T,& x \in [-5, -4),\\
(1 + 0.2\sin(5x), 0, 1,1.4)^T,& x \in [-4,5].
\end{array}
\right.
\end{equation*}
This is a typical example containing both shocks and complex smooth region structures: a moving Mach 3 shock interacts with sine waves in the density. The computing time is up to $T=1.8$. In Figure \ref{sin}, we plot the density $\rho$ computed by the HWENO schemes against the reference ``exact'' solution, together with a zoomed-in picture and the time history of the troubled-cells for the new hybrid HWENO scheme. The reference ``exact'' solution is computed by the fifth order finite difference WENO scheme \cite{js} with 2000 grid points. The two schemes give similar numerical results with high resolution, but the new hybrid HWENO scheme does not need to calculate the linear weights in advance. In addition, only 3.54\% of the cells are identified as troubled-cells, in which the first order moments need to be modified.
\begin{figure}
\centering{\psfig{file=1d_shock.pdf,width=2 in}\psfig{file=1d_shock_zoom.pdf,width=2 in}\psfig{file=1d_shock_limiter.pdf,width=2 in}}
\caption{The shock density wave interaction problem. T=1.8. From left to right: density; density zoomed in; the cells where the modification for the first order moments are computed in the new hybrid HWENO scheme. Black solid line: the exact solution; blue plus signs: the results of the hybrid HWENO scheme; red squares: the results of the new hybrid HWENO scheme. Uniform meshes with 400 cells.}
\label{sin}
\end{figure}
\smallskip
\noindent {\bf Example 3.8.} We solve the interaction of two blast waves. The initial conditions are:
\begin{equation*}
\label{blastwave} (\rho,\mu,p,\gamma)^T= \left\{
\begin{array}{ll}
(1,0,10^3,1.4)^T,& 0<x<0.1,\\
(1,0,10^{-2},1.4)^T,& 0.1<x<0.9,\\
(1,0,10^2,1.4)^T,& 0.9<x<1.
\end{array}
\right.
\end{equation*}
The computing time is $T=0.038$, and the reflective boundary condition is applied here. In Figure \ref{blast}, we again plot the computed density against the reference ``exact'' solution, together with a zoomed-in picture and the time history of the troubled-cells. The reference ``exact'' solution is also computed by the fifth order finite difference WENO scheme \cite{js} with 2000 grid points. We notice that the hybrid HWENO scheme performs better than the new hybrid HWENO scheme here. The reason may be that the modification of the first order moments uses more information provided by the two linear polynomials in this example; on the other hand, the new HWENO methodology is easier to implement in the computation. Similarly, only 13.94\% of the cells are identified as troubled-cells, and high order linear approximation is used directly on the other cells.
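One common way to realize the reflective boundary condition in a finite volume code is through ghost cells that mirror the thermodynamic variables and flip the sign of the normal momentum. The sketch below uses our own notation and is not necessarily the implementation used here:
\begin{verbatim}
import numpy as np

def apply_reflective_bc(rho, m, E, ng=2):
    # rho, m = rho*mu (momentum), E: 1D arrays with ng ghost cells per side
    for g in range(ng):
        rho[ng - 1 - g] = rho[ng + g]    # left wall: mirror density
        E[ng - 1 - g] = E[ng + g]        # mirror total energy
        m[ng - 1 - g] = -m[ng + g]       # flip normal momentum
        rho[-ng + g] = rho[-ng - 1 - g]  # right wall: mirror density
        E[-ng + g] = E[-ng - 1 - g]
        m[-ng + g] = -m[-ng - 1 - g]
\end{verbatim}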
\begin{figure}
\centering{\psfig{file=1d_blast.pdf,width=2 in}\psfig{file=1d_blast_zoom.pdf,width=2 in}\psfig{file=1d_blast_limiter.pdf,width=2 in}}
\caption{The blast wave problem. T=0.038. From left to right: density; density zoomed in; the cells where the modification of the first order moments are computed in the new hybrid HWENO scheme. Black solid line: the exact solution; blue plus signs: the results of the hybrid HWENO scheme; red squares: the results of the new hybrid HWENO scheme. Uniform meshes with 800 cells.}
\label{blast}
\end{figure}
\smallskip
\noindent{\bf Example 3.9.} We solve the two-dimensional Burgers' equation (\ref{2dbugers}) given in Example 3.3. The same initial and boundary conditions are applied here, but the computing time is up to $T=1.5/\pi$, at which the solution is discontinuous. In Figure \ref{Fburges2d}, we present the numerical solutions computed by the HWENO schemes against the exact solution, together with the surface of the numerical solution of the new hybrid HWENO scheme. Similarly, the HWENO schemes have high resolution.
\begin{figure}
\centerline{\psfig{file=2dBurgers.pdf,width=2.5 in}\psfig{file=2dBurgers_sur.pdf,width=2.5 in}}
\caption{2D-Burgers' equation: initial data
$u(x,y,0)=0.5+\sin(\pi (x+y)/2)$. $T=1.5/\pi$. From left to right: the numerical solution at $x=y$ computed by HWENO schemes; the surface of the numerical solution for the new hybrid HWENO scheme. Black solid line: exact solution; blue plus signs: the results of the hybrid HWENO scheme; red squares: the results of the new hybrid HWENO scheme. Uniform meshes with $80\times80$ cells.}
\label{Fburges2d}
\end{figure}
\smallskip
\noindent {\bf Example 3.10.} We now solve the double Mach reflection problem \cite{Wooc} modeled by the two-dimensional Euler equations (\ref{euler2}). The computational domain is
\( [0,4]\times[0,1]\). A reflecting wall lies at the bottom of the domain starting from $x=\frac{1}{6}$, $y=0$, and a Mach 10 shock initially makes a $60^{\circ}$ angle with the $x$-axis. For the bottom boundary, the reflection boundary condition is applied, except that the part from $x=0$ to $x=\frac{1}{6}$ imposes the exact post-shock condition. The top boundary follows the exact motion of the Mach 10 shock. $\gamma=1.4$ and the final computing time is $T=0.2$. In Figure \ref{smhfig}, we plot the region \( [0,3]\times[0,1]\), the locations of the troubled-cells at the final time, and a blow-up of the region around the double Mach stems. The new hybrid HWENO scheme has better density resolution than the hybrid HWENO scheme. In addition, the hybrid HWENO scheme needs a smaller CFL number, taken as 0.45, whereas the CFL number for the new hybrid HWENO scheme is 0.6; moreover, the new hybrid HWENO scheme uses fewer candidate stencils but has higher order numerical accuracy.
\begin{figure}
\centerline{\psfig{file=2d_smh_Den.pdf,width=3.5 in} \psfig{file=2d_smh_Den_new.pdf,width=3.5 in}}
\centerline{\psfig{file=2d_smh_Den_limiter.pdf,width=3.5 in} \psfig{file=2d_smh_Den_limiter_new.pdf,width=3.5 in}}
\centerline{\psfig{file=2d_smh_Den_zoom.pdf,width=2 in} \psfig{file=2d_smh_Den_zoom_new.pdf,width=2 in} }
\caption{ Double Mach reflection problem. T=0.2. From top to bottom: 30 equally spaced density contours from 1.5 to
22.7; the locations of the troubled-cells at the final time; zoom-in pictures around the Mach stem. The hybrid HWENO scheme (left); the new hybrid HWENO scheme (right). Uniform meshes with 1920 $\times$ 480 cells.}
\label{smhfig}
\end{figure}
\smallskip
\noindent {\bf Example 3.11.} We finally solve the problem of a Mach 3 wind tunnel with a step \cite{Wooc} modeled by the two-dimensional Euler equations (\ref{euler2}). The wind tunnel is 1 length unit wide and 3 length units long. The step is 0.2 length units high and is located 0.6 length units from the entrance of the tunnel, which is initialized with a right-going Mach 3 flow. Reflective boundary conditions are applied along the walls of the tunnel, and inflow and outflow boundary conditions are applied at the entrance and the exit, respectively. The computing time is up to $T=4$. We present the computed density and the locations of the troubled-cells at the final time in Figure \ref{stepfig}. The new hybrid HWENO scheme has higher resolution than the hybrid HWENO scheme, and it also has a bigger CFL number, fewer candidate stencils, higher order numerical accuracy and a simpler HWENO methodology. Similarly, only a small fraction of the cells are identified as troubled-cells, which means that most regions directly use linear approximation, and this increases the efficiency considerably.
\begin{figure}[!ht]
\centerline{\psfig{file=2d_step_Den.pdf,width=3.5 in} \psfig{file=2d_step_Den_new.pdf,width=3.5 in} }
\centerline{\psfig{file=2d_step_Den_limiter.pdf,width=3.5 in} \psfig{file=2d_step_Den_limiter_new.pdf,width=3.5 in} }
\caption{ Forward step problem. T=4.
From top to bottom: 30 equally spaced density contours from 0.32 to 6.15; the locations of the troubled-cells at the final time. The hybrid HWENO scheme (left); the new hybrid HWENO scheme (right). Uniform meshes with 960 $\times$ 320 cells.}
\label{stepfig}
\end{figure}
\smallskip
\section{Concluding remarks}
\label{sec4}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
In this paper, a new fifth-order hybrid finite volume Hermite weighted essentially non-oscillatory (HWENO) scheme with artificial linear weights is designed for solving hyperbolic conservation laws. Compared with the hybrid HWENO scheme \cite{ZCQHH}, we employ a nonlinear convex combination of a high degree polynomial with several low degree polynomials in the new HWENO reconstruction, and the associated linear weights can be any artificial positive numbers whose sum is one, which has the advantages of simplicity and easy extension to multi-dimensions. Different choices of the linear weights do not affect the numerical accuracy, and the new methodology yields smaller numerical errors than the original HWENO methodology. In addition, the new hybrid HWENO scheme has higher order numerical accuracy in two dimensions. Moreover, the scheme still keeps the non-oscillatory property, as we apply the limiter methodology to the first order moments in the troubled-cells and use the new HWENO reconstruction on the interface. In the implementation, only a small fraction of the cells are identified as troubled-cells, which means that most regions directly use linear approximation. In short, the new hybrid HWENO scheme simultaneously achieves high resolution, efficiency, non-oscillation and robustness, and the numerical results demonstrate these good performances.
\section{Introduction} \label{sec:1}
Isoscalar (IS) monopole and dipole excitations have been extensively
investigated by $\alpha$ inelastic scattering experiments.
Significant low-energy IS strengths
have been observed in various nuclei and have attracted great interest (see, for example,
Refs.~\cite{Harakeh-textbook,Paar:2007bk,Roca-Maza:2018ujj} and references therein).
A central issue is to reveal properties and origins of those low-energy dipole modes.
In order to understand the low-energy dipole modes,
the vortical dipole mode (called also the torus or toroidal mode) has been originally
proposed by hydrodynamical models \cite{semenko81,Ravenhall:1987thb}, and
later studied with microscopic frameworks such as mean-field approaches
\cite{Paar:2007bk,Vretenar:2001te,Ryezayeva:2002zz,Papakonstantinou:2010ja,Kvasil:2011yk,Repko:2012rj,Kvasil:2013yca,Nesterenko:2016qiw,Nesterenko:2017rcc},
antisymmetrized molecular dynamics (AMD) \cite{Kanada-Enyo:2017fps,Kanada-Enyo:2017uzz,Kanada-Enyo:2019hrm}, and a cluster model \cite{Shikata:2019wdx}.
The vortical dipole mode is characterized by the vorticity of the transition current and is strongly excited by the
toroidal dipole (TD) operator, as discussed by Kvasil {\it et al.}~\cite{Kvasil:2011yk}. These features are different from those of the
compressional dipole (CD) mode, which is excited by the standard IS dipole operator.
Following Ref.~\cite{Kvasil:2011yk},
we call the vortical dipole mode the ``TD mode'' to distinguish it from the CD mode.
The TD mode in deformed nuclei
has recently been investigated in various stable and unstable nuclei over a wide mass-number region, from light- to heavy-mass nuclei.
Cluster structures of the TD mode in $p$-shell nuclei such as $^{12}$C and $^{10}$Be have been studied
by the authors \cite{Kanada-Enyo:2017fps,Kanada-Enyo:2017uzz,Shikata:2019wdx}.
Very recently, Nesterenko {\it et al.} have investigated dipole excitations in $^{24}$Mg with the
Skyrme quasiparticle random-phase-approximation (QRPA) for axial-symmetric deformed nuclei,
and predicted that a TD state appears as the low-lying $K^\pi=1^-$ state \cite{Nesterenko:2017rcc}.
In the nuclear current density of the TD mode, they found
the vortex-antivortex type nuclear current in the deformed system and suggested its association with
the cluster structure of $^{24}$Mg.
For cluster structures of $^{24}$Mg,
one of the authors and his collaborators have studied positive-parity states of $^{24}$Mg
with the AMD framework
\cite{KanadaEnyo:1995tb,KanadaEnyo:1995ir,KanadaEn'yo:2001qw,KanadaEn'yo:2012bj},
and discussed roles of the cluster structures of $\Nea$, $\OBe$, and $\CC$ in the IS monopole excitations
\cite{Chiba:2015zxa}.
Kimura {\it et al.} have investigated negative-parity states of $^{24}$Mg with the AMD, and discussed
triaxial deformations of the ground and negative-parity bands \cite{Kimura:2012-ptp}.
Our aim is to clarify the nature of the low-lying dipole modes in $^{24}$Mg,
such as their vortical and cluster features, as well as the IS dipole transition strengths.
In order to describe dipole excitations,
we apply the
constraint AMD method combined with the generator coordinate method (GCM).
As for the constraint parameters for basis wave functions in the AMD+GCM,
the quadrupole deformations ($\beta\gamma$)
\cite{Suhara:2009jb,Kimura:2012-ptp}
and the inter-cluster distance ($d$) \cite{Taniguchi:2004zz}
are adopted.
This method is useful for analyzing cluster correlations as well as intrinsic deformations because various cluster structures are explicitly
taken into account in the $d$-constraint wave functions, as demonstrated in the application to
$^{28}$Si in Ref.~\cite{Chiba:2016zyz}, which discussed the role of cluster structures in IS monopole and dipole excitations.
In order to investigate properties of the low-lying $1^-$ states of $^{24}$Mg,
the transition strengths are calculated for the TD and CD operators, which probe the
vortical and compressional features, respectively. Nuclear vorticity is discussed through an analysis of
the intrinsic transition current density. Cluster correlations in the low-lying dipole excitations are also discussed.
The paper is organized as follows.
In Sect.~\ref{sec:2},
the framework of AMD+GCM with $\beta\gamma$- and $d$-constraints are explained.
Section \ref{sec:3} shows
the calculated results for basic properties of the dipole states, and
Sect. \ref{sec:4} gives detailed analysis focusing on the vortical and cluster features.
Finally, the paper is summarized in Sect.~\ref{sec:5}. In Appendix \ref{app:1},
definitions of the transition current density, dipole operators and transition strengths are explained.
\section{Framework} \label{sec:2}
We briefly explain the present framework of the AMD+GCM method with the $\beta\gamma$- and $d$-constraints.
The method is similar to that used in Ref.~\cite{Chiba:2016zyz}.
For the detail, the readers are directed to Refs.~\cite{KanadaEn'yo:2012bj,Suhara:2009jb,Kimura:2012-ptp,Taniguchi:2004zz,Chiba:2016zyz,Kimura:2003uf} and references therein.
\subsection{Hamiltonian and variational wave function}
The microscopic Hamiltonian for an $A$-nucleon system is given as
\begin{align}
{H} = \sum_{i}^{A}{{t}_i} - {t}_\textrm{c.m.} + \sum_{i<j}^{A}{{v}_{ij}^{NN}} +
\sum_{i<j}^{A}{{v}_{ij}^\textrm{Coul}}.
\label{hamiltonian}
\end{align}
Here, the first term is the kinetic energy, and the center-of-mass kinetic energy
${t}_\textrm{c.m.}$ is exactly subtracted. As for the effective nuclear interaction $v^{NN}_{ij}$,
we employ Gogny D1S interaction \cite{Berger:1991zza}. The Coulomb
interaction $v^\textrm{Coul}_{ij}$ is approximated by a sum of seven Gaussians.
The intrinsic wave function of AMD is given by a Slater determinant of single-nucleon wave functions
$\varphi_i$,
\begin{align}
\Phi_\textrm{int} &= {\mathcal A}\left\{\varphi_1\varphi_2 \cdots \varphi_A \right\},\\
\varphi_i&= \phi_i({\bm r}) \chi_i \xi_i, \label{eq:singlewf}\\
\phi_i({\bm r}) &= \exp\biggl\{-\sum_{\sigma=x,y,z}\nu_\sigma
\Bigl(r_\sigma -\frac{Z_{i\sigma}}{\sqrt{\nu_\sigma}}\Bigr)^2\biggr\}, \\
\chi_i &= a_i\chi_\uparrow + b_i\chi_\downarrow,
\end{align}
where $\chi_i$ is the spin part and $\xi_i$ is the isospin part fixed to be proton or neutron.
In the present version of AMD, the spatial part $\phi_i({\bm r})$ is expressed by a deformed Gaussian wave packet centered
at $\bm Z_i$ with the width parameters $\nu_\sigma$ $(\sigma=x,y,z)$ which are common for all nucleons.
The Gaussian center parameter ($\bm{Z}_i$) and the nucleon-spin direction ($a_i$ and $b_i$) for each
nucleon and the width parameters $\nu_\sigma$ are the variational parameters
optimized by the energy variation \cite{Kimura:2003uf}.
The energy variation is performed for the parity-projected intrinsic wave function
$\Phi^\pi=\frac{1+\pi{P}_r}{2}\Phi_\textrm{int}$ $(\pi=\pm)$.
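For concreteness, the deformed Gaussian wave packet $\phi_i(\bm r)$ can be evaluated numerically as in the following sketch (a schematic helper with our own naming; the normalisation is omitted, and $\bm Z_i$ may be complex):
\begin{verbatim}
import numpy as np

def phi(r, Z, nu):
    # r: (..., 3) real positions; Z: (3,) complex Gaussian centres;
    # nu: (3,) width parameters. Implements
    # phi(r) = exp{ -sum_s nu_s (r_s - Z_s/sqrt(nu_s))^2 }.
    s = r - Z / np.sqrt(nu)
    return np.exp(-np.sum(nu * s**2, axis=-1))
\end{verbatim}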
For the ground state, constraint of the quadrupole deformation ($\beta\gamma$-constraint) is imposed in the
energy variation of the positive-parity wave function.
We use the parametrization $\beta$ and $\gamma$ of the triaxial deformation
as described in Ref.~\cite{Kimura:2003uf} and get the $\beta\gamma$-deformed configuration for given
$\beta$ and $\gamma$ values after the energy variation.
For $1^-$ states, the $\beta\gamma$-constraint energy variation is performed for the negative-parity wave function.
In addition, cluster configurations are also obtained by
constraint on the inter-cluster distance ($d$-constraint) in the
energy variation of the negative-parity wave function,
and they are combined with the $\beta\gamma$-deformed configurations.
For the cluster configurations, we adopt {\it quasi-clusters} proposed
in Ref.~\cite{Taniguchi:2004zz}. Let us consider a $C_1+C_2$ configuration consisting of two quasi-clusters $C_1$ and $C_2$
with the mass numbers $A_1$ and $A_2$ $(A_1+A_2=A)$, respectively. Each quasi-cluster $C_j$ is defined as the group of $A_j$ nucleons,
and the constraint is imposed on the inter-cluster distance $d_{A_1+A_2}$ between two quasi-clusters $C_1$ and $C_2$, which
is defined as
\begin{align}
& d_{A_1+A_2} = |\bm R_{C_1} - \bm R_{C_2}|,\\
&(\bm R_{C_j})_\sigma = \frac{1}{A_j}\sum_{i\in C_j}\textrm{Re}\left[\frac{Z_{i\sigma}}{\sqrt{\nu_\sigma}}\right],
\end{align}
where $\bm R_{C_j}$ is the center-of-mass position of the quasi-cluster $C_j$.
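As a small illustration (with hypothetical helper names), the constrained distance can be computed directly from the variational parameters:
\begin{verbatim}
import numpy as np

def cluster_distance(Z, nu, idx1, idx2):
    # Z: (A, 3) complex Gaussian centres; nu: (3,) widths;
    # idx1, idx2: nucleon indices belonging to quasi-clusters C1, C2
    pos = np.real(Z / np.sqrt(nu))   # centroid positions of nucleons
    R1 = pos[idx1].mean(axis=0)      # centre of mass of C1
    R2 = pos[idx2].mean(axis=0)      # centre of mass of C2
    return np.linalg.norm(R1 - R2)
\end{verbatim}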
In the present work, we adopt the $\Nea$, $\OBe$, and $\CC$ configurations for the $C_1+C_2$ quasi-clusters.
After the energy variation under each constraint of $(\beta,\gamma)$, $d_{20+4}$, $d_{18+8}$, and $d_{12+12}$,
we obtain the basis wave functions optimized for various values of the constraint parameters,
and superpose them in the GCM calculation as explained later.
For simplicity, we number the obtained basis wave functions $\{\Phi^\pi(i)\}$ with the index $i$.
It should be stressed that the cluster wave function in the present framework is composed
not of inert (frozen) clusters but of quasi-clusters, which can contain cluster breaking effects such as core polarization, dissociation, and excitation.
These effects are taken into account in the energy variation at a given value of the quasi-cluster distance $d_{A_1+A_2}$.
Moreover, in the small distance limit, the cluster wave function
becomes equivalent to a deformed mean-field wave function because of the antisymmetrization of nucleons.
Along the distance parameter $d_{A_1+A_2}$, the $d$-constraint wave function describes the structure change from
the one-center system of a mean-field configuration to the two-center system of the spatially developed $C_1+C_2$ clustering via intermediate configurations with cluster correlation (or formation) at the nuclear surface.
\subsection{Angular momentum projection and generator coordinate method}
After the energy variation with the constraints, the obtained basis wave functions are projected
to the angular momentum eigenstates,
\begin{align}
\Phi^{J\pi}_{MK}(i) = \frac{2J+1}{8\pi^2}\int d\Omega D^{J*}_{MK}(\Omega)R(\Omega)\Phi^\pi(i),
\end{align}
where $D^{J}_{MK}(\Omega)$ and $R(\Omega)$ are Wigner's $D$
function and the rotation operator, respectively. They are superposed to describe the final GCM wave function for
the $J^\pi_{n}$ state,
\begin{align}
\Psi^{J\pi}_{M,n} = \sum_{K,i} c_n(K,i) \Phi^{J\pi}_{MK}(i)\label{eq:gcmwf}.
\end{align}
Here the coefficients $c_n(K,i)$ are determined by diagonalization of the norm and Hamiltonian matrices
so as to satisfy Hill-Wheeler (GCM) equation
\cite{Hill:1952jb,Griffin:1957zza}.
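Numerically, Eq.~(\ref{eq:gcmwf}) leads to the generalized eigenvalue problem $H c = E N c$ with the Hamiltonian and norm matrices evaluated in the nonorthogonal projected basis. A minimal sketch of its solution (a standard technique; the function and variable names are ours), including removal of near-zero norm eigenvalues that arise from an overcomplete GCM basis, reads:
\begin{verbatim}
import numpy as np

def solve_hill_wheeler(H, Nmat, eps=1e-8):
    # Diagonalise the norm matrix and discard near-singular directions.
    n_eig, U = np.linalg.eigh(Nmat)
    keep = n_eig > eps * n_eig.max()
    # Transform to the orthonormalised basis: T^+ N T = 1.
    T = U[:, keep] / np.sqrt(n_eig[keep])
    Ht = T.conj().T @ H @ T
    E, V = np.linalg.eigh(Ht)
    return E, T @ V   # energies and coefficients c_n in original basis
\end{verbatim}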
\section{Results} \label{sec:3}
\subsection{Result of energy variation} \label{sec:3.1}
We describe properties of the $\beta\gamma$-deformed and cluster configurations obtained by the
energy variation with the corresponding constraint.
For the $\beta\gamma$-deformed configurations, we obtain
almost the same result as those in the previous AMD study \cite{Kimura:2012-ptp}.
In the $J^\pi$-projected energy surface on the $\beta$-$\gamma$ plane
obtained from the $\beta\gamma$-deformed configurations,
we find the energy minimum state with triaxial deformation at ($\beta,\gamma)=(0.49,13^\circ)$ for $J^\pi=0^+$
and that at ($\beta,\gamma)=(0.5,25^\circ)$ for $J^\pi=1^-$.
These deformed states at the energy minima become the dominant components of the $0^+_1$ and $1^-_1$ states in the
final result of the GCM calculation.
For the cluster configurations, we adopt the $\Nea$, $\OBe$, and
$\CC$ quasi-clusters as described previously.
The calculated $J^\pi=1^-$ energies
are shown as functions of quasi-cluster distances in Fig.~\ref{fig:curve_d}, and
intrinsic density distributions are displayed in Fig.~\ref{fig:dens}.
For the $\Nea$($20+4$) quasi-cluster configuration, the energy curves are shown in Fig.~\ref{fig:curve_d} (a)
and the density distributions at $d_{20+4} =$ 2.5, 4.9 and 5.9 fm
are shown in the panels (a)$-$(c) of Fig.~\ref{fig:dens}.
In the $2.0 \leq d_{20+4} \leq 5.7$ fm region, the triaxially deformed $\Nea$ configurations
are obtained by the $d$-constraint energy variation and they
yield the $K=0$ and $K=1$ states by the $J^\pi=1^-$ projection.
The $K=1$ energy curve is always lower than the $K=0$ energy curve.
The energy difference between the $K=1$ and $K=0$ states is about 6 MeV at $d_{20+4} = 2.0$ fm
but it decreases to approximately 0 MeV at $d_{20+4}=5.7$ fm. In the $d_{20+4} \geq 5.7$ fm region,
almost axially symmetric states
with the dominant $K=0$ component are obtained for the $\Nea$ configuration
(see Fig.~\ref{fig:dens} (c) and the dotted line of Fig.~\ref{fig:curve_d} (a)).
Figure \ref{fig:curve_d} (b) and Fig.~\ref{fig:dens}(d)$-$(f) show the energy curves and intrinsic density distributions
for the $\OBe$($16+8$) quasi-cluster configuration. In the $d_{16+8} < 5.4$ fm region, the lowest $\OBe$ configuration
is the triaxially deformed one, because the two $\alpha$ clusters are oriented along the
$y$ axis, as shown in the intrinsic densities (d) and (e) of Fig.~\ref{fig:dens}.
It contains
only $K=\textrm{even}$ components because of the reflection symmetry with respect to the $\pi$ rotation around the $z$ (longitudinal) axis.
In the $d_{16+8} \geq 5.4$ fm region, the axially symmetric $\OBe$
configuration (Fig.~\ref{fig:dens} (f)) becomes lowest as shown by the dotted line of Fig.~\ref{fig:curve_d} (b).
In both cases of the $\Nea$ and $\OBe$ configurations,
the intrinsic wave functions in the small quasi-cluster distance ($d_{A_1+A_2}$) region
show no prominent cluster structure but have large overlap with the $\beta\gamma$-deformed configuration with triaxial
deformations.
As the distance $d_{A_1+A_2}$ increases, the energies of the
$^{20}$Ne+$\alpha$ and $^{16}$O+$^{8}\textrm{Be}$ configurations
increase gradually, indicating that the system is soft against the spatial development of the cluster structures.
Compared with the $^{20}$Ne+$\alpha$ and $^{16}$O+$^{8}\textrm{Be}$ configurations,
the energy of the $\CC$ configuration increases rapidly as $d_{12+12}$ increases, as shown in Fig.~\ref{fig:curve_d} (c),
because such a symmetric cluster configuration is relatively unfavored in the negative parity ($K^\pi=0^-$) state.
As a result, inclusion of $\CC$ cluster configurations gives
almost no contribution to the low-lying $1^-$ states in the GCM calculation.
\begin{figure*}
\includegraphics[width=1.0\hsize]{energy_curve.eps}
\caption{Energy curves for $\Nea(20+4)$, $\OBe(16+8)$, and $\CC(12+12)$ quasi-cluster configurations
obtained by the
$d$-constraint energy variation for negative parity states.
The $J^\pi=1^-$ projected energies are plotted as functions of the quasi-cluster distances $d_{A_1+A_2}$.
(a) $\Nea(20+4)$ configuration: energies of $K=0$ and $K=1$ states projected from triaxially deformed intrinsic states and
$K=0$ states projected from axially deformed intrinsic states.
(b) $\OBe(16+8)$ configuration: energies of $K=0$ states projected from triaxially deformed intrinsic states and
those projected from axially deformed intrinsic states.
(c) $\CC(12+12)$ configuration: energies of $K=0$ states.
}
\label{fig:curve_d}
\end{figure*}
\begin{figure}
\includegraphics[width=1.0\hsize]{dens-fig.eps}
\caption{
(Color online) Intrinsic matter density distributions of
the $\Nea$ and $\OBe$ quasi-cluster configurations obtained by the $d$-constraint energy variation for negative parity states.
The panels (a), (b), and (c) show the $\Nea$
configurations at $d_{20+4} =$ 2.5, 4.9, and 5.9 fm.
The panels (d), (e), and (f) show the
$\OBe$ configurations at $d_{16+8} =$ 2.5, 4.9, and 6.1 fm.
The densities sliced at $x=0$ plane ($y-z$ plane) are shown. The units of the horizontal($y$) and vertical($z$) axes are fm.}
\label{fig:dens}
\end{figure}
\subsection{GCM result of dipole excitations} \label{3.2}
We present the GCM result obtained using all
the $\beta\gamma$-deformed and cluster configurations. We focus
on the low-lying $1^-$ states and their isoscalar dipole strengths.
The definitions of the CD and TD operators and transition strengths
are explained in appendix \ref{app:1}.
\subsubsection{Spectra and transition strengths}
The calculated CD and TD transition strengths ($B(\textrm{CD})$ and $B(\textrm{TD})$)
are plotted with respect to the $1^-$ excitation energies ($E_x$) in Fig.~\ref{fig:strength}.
In the low-energy region $E_x\approx 10$ MeV,
we obtain two dipole states, the $1^-_1$ and $1^-_2$ states, which have quite different natures from each other.
One is the $1^-_1$ state at $E_x= 9.5$ MeV with the strongest TD transition,
and the other is the $1^-_2$ state at $E_x=11.2$ MeV with the significant CD transition strength.
Therefore, the $1^-_1$ state can be regarded as the TD mode, and the $1^-_2$ state is the low-lying CD mode.
\begin{figure}
\includegraphics[width=1.0\hsize]{Strength.eps}
\caption{Strength functions of dipole transitions.
The toroidal and compressional dipole strengths, $B(\textrm{TD})$ and $B(\textrm{CD})$,
are plotted as functions of $1^-$ excitation energies in the upper and bottom panels, respectively.}
\label{fig:strength}
\end{figure}
From the analysis of the dominant $K$ component of these two states, one can assign
the former (TD mode) to the band-head state of a $K=1$ band and
the latter (CD mode) to a $K=0$ band. This separation of the $K=1$ and $K=0$ components in the triaxially deformed intrinsic system
plays a key role in the low-lying TD and CD modes in $^{24}$Mg.
To emphasize this feature of $K$ quanta, we denote the $1^-_1$ and $1^-_2$ states as $1^-_{K=1}$ and $1^-_{K=0}$, respectively,
in the following.
The energy ordering of these two $1^-$ states in our result is consistent with
the results of $\beta\gamma$-AMD \cite{Kimura:2012-ptp} and QRPA \cite{Nesterenko:2017rcc} calculations.
In the experimental data, the $1^-$(7.56 MeV) and $1^-$(8.44 MeV) states were tentatively
assigned to $K=0$ and $K=1$, respectively~\cite{Fifield:1979gfv}.
Therefore, the theoretical $1^-_{K=1}$ and $1^-_{K=0}$ states in the present result may correspond
to the experimental $1^-$(8.44 MeV) and $1^-$(7.56 MeV) states, though the energy
ordering of the two states seems inconsistent with the observation.
\subsubsection{Cluster correlations}
We here discuss the roles of cluster correlations in the $1^-_{K=1}$ and $1^-_{K=0}$ states.
To assess the cluster correlation effect,
we perform the GCM calculation
using only the $\beta\gamma$-configurations, i.e., without the cluster configurations,
and compare the results obtained with and without the cluster configurations.
The excitation energies and transition strengths of the $1^-_{K=1}$ and $1^-_{K=0}$ states
calculated with and without cluster configurations are
summarized in Table \ref{tab:cc}.
The correlation energies induced by the cluster correlations can be evaluated from the
energy gain upon inclusion of the cluster configurations.
The energy gain is 0.3 MeV for the
$1^-_{K=1}$ state and 1.0 MeV for the $1^-_{K=0}$ state.
The large energy gain in the $1^-_{K=0}$ state
indicates significant cluster correlation, which mainly comes
from the $\OBe$ configuration.
The cluster correlation from the $\OBe$ configuration
also contributes to the CD transition strength of
the $1^-_{K=0}$ state, giving a 50\% enhancement of $B(\textrm{CD})$.
This result is understood by the general feature that
the low-energy ISD strengths can be enhanced by asymmetric clustering as discussed in
Ref.~\cite{Chiba:2015khu}.
Compared with the $1^-_{K=0}$ state,
the properties of the $1^-_{K=1}$ state are not affected much by inclusion of the cluster configurations.
\begin{table}[h]
\caption{The calculated values of excitation energies ($E_x$) and TD and CD strengths
of the $1^-_{K=1}$ and $1^-_{K=0}$ states
obtained by the GCM calculations with and without the cluster configurations (cc).
}
\label{tab:cc}
\begin{ruledtabular}
\begin{tabular}{ccccc}
&
\multicolumn{2}{c}{$K=1$ state}
&
\multicolumn{2}{c}{$K=0$ state} \\
& w/ cc & w/o cc & w/ cc & w/o cc \\
\hline
$E_x$ (MeV) & 9.52 & 9.85 & 11.21 & 12.18 \\
$B(\textrm{TD})$ ($10^{-3} {\rm fm}^4$) & 1.20 & 1.13 & 0.41 & 0.32 \\
$B(\textrm{CD})$ ($10^{-3} {\rm fm}^4 $) & 0.00 & 0.00 & 2.38 & 1.61 \\
\end{tabular}
\end{ruledtabular}
\end{table}
\section{Discussions}\label{sec:4}
\subsection{Cluster correlations in $1^-_{K=1}$ and $1^-_{K=0}$}
In the previous discussion, we showed that inclusion of cluster configurations gives
significant contributions to the $1^-_{K=0}$ state but relatively minor effect on the $1^-_{K=1}$ state.
However, this does not necessarily mean that there is no cluster correlation in the $1^-_{K=1}$ state,
because the $\beta\gamma$-deformed configurations can implicitly contain cluster correlations.
What the previous analysis
of the change upon inclusion of the cluster configurations shows is just the effect of prominent cluster structures,
which are beyond the $\beta\gamma$-constraint method.
For more detailed investigation of cluster components in the $1^-_{K=1}$ and $1^-_{K=0}$ states,
we calculate overlap of the GCM wave function with each basis of quasi-cluster configurations.
The $1^-_{K=1}$ state has 89\% overlap with the $\Nea$ configuration at $d_{20+4}=2.5$ fm
projected to $J^\pi=1^-(K=1)$, which indicates a significant $\Nea$ component.
Similarly, the $1^-_{K=0}$ state is dominantly described by the $K=0$ component of the $\OBe$ configuration at $d_{16+8}=2.5$ fm,
with 88\% overlap.
The $1^-_{K=0}$ state also has non-negligible
overlap with spatially developed $\OBe$ configurations, e.g., 23\% overlap at $d_{16+8}=4.9$ fm. These
developed $\OBe$ cluster components
contribute to the enhancement of the CD transition strength discussed previously.
\subsection{Vorticity of the nuclear current}
\begin{figure}[t!]
\includegraphics[width=\hsize]{current-fig.eps}
\caption{
The intrinsic transition current density $\delta \bm{j}(\bm{r})$ after the parity projection
for $0^+_1\to 1^-_{K=1}$ and $0^+_1\to 1^-_{K=0}$ transitions.
The arrows and color plots indicate $\delta \bm{j}(\bm{r})$ and $x$-component of the vorticity $\bm{\nabla} \times \delta \bm{j}(\bm{r})$.
The contours are intrinsic matter densities of the $1^-_{K=1}$ and $1^-_{K=0}$ states before the parity projection.
The densities sliced at $x=0$ plane ($y-z$ plane) are shown. The units of the horizontal($y$) and vertical($z$) axes are fm.}
\label{fig:current}
\end{figure}
In order to reveal the vortical nature of the two low-energy dipole modes,
we analyze the intrinsic transition current density $\delta\bm{j}(\bm{r})$
of $0^+_1\to1^-_{K=1}$ and $0^+_1\to 1^-_{K=0}$ transitions.
For simplicity, we take the dominant configuration of each state as an approximate
intrinsic state, and compute the transition current density in the intrinsic frame:
we choose the $\beta\gamma$-deformed configuration at $(\beta,\gamma)=(0.49,13^\circ)$
for the ground state,
the $\Nea$ configuration at $d_{20+4} = 2.5$ fm for the $1^-_{K=1}$ state, and
the $\OBe$ configuration at $d_{16+8} = 2.5$ fm for the $1^-_{K=0}$.
In Fig.~\ref{fig:current},
the transition current density $\delta \bvec{j}$ and vorticity $\nabla \times \delta \bvec{j}$ calculated
after the parity projection are displayed
by vector and color plots, respectively. The intrinsic
matter density distribution of the $1^-$ states before the parity projection is also shown
by contour plot.
In the transition current density of the $1^-_{K=1}$ excitation (Fig.~\ref{fig:current}(a)),
one can see two vortices with opposite directions in the upper and lower parts of the elongated matter distribution.
The opposite vorticity is a specific character of the vortical dipole mode with $K=1$
in an elongated deformation, and is consistent with the dipole mode called the
vortex-antivortex configuration in Ref.~\cite{Nesterenko:2017rcc}.
On the other hand, the transition current density of the $1^-_{K=0}$ excitation (Fig.~\ref{fig:current}(b))
shows no vortex but an irrotational flow with a compressional nature along the $z$-axis (the longitudinal direction),
which contributes to the CD strength.
The difference in the vortical nature between the $1^-_{K=0}$ and $1^-_{K=1}$ states can be
more clearly seen in color plots of the vorticity.
The $1^-_{K=1}$ excitation shows strong nuclear vorticity in the top and bottom
edge parts of the elongated shape, whereas the $1^-_{K=0}$ excitation shows much weaker vorticity.
\subsection{Cluster and single-particle natures of $1^-_{K=1}$ state}
As described previously,
the $1^-_{K=1}$ state is approximately described by the $^{20}$Ne+$\alpha$ cluster configuration
at $d_{20+4}=2.5$ fm, which shows not a spatially developed clustering but a cluster correlation in the triaxially deformed state.
As can be seen in the intrinsic density distribution shown in Fig.~\ref{fig:dens}(a),
the essential cluster correlation in the $1^-_{K=1}$ state
is the formation of $\alpha$ clusters caused by four-nucleon correlations at the nuclear surface.
In a schematic picture, the cluster correlation in the $1^-_{K=1}$ state is associated with
the $^{16}$O core with two $\alpha$ clusters on one side of the core.
The two $\alpha$ clusters are placed at the surface of the $^{16}$O core in a tilted configuration
and yield the $K=1$ component because of the asymmetry against the $\pi$ rotation around the $z$ axis.
On the other hand, the ground state has the triaxial deformation because of the $2\alpha$ correlation
aligned in a normal direction along the surface of $^{16}$O.
Then, the dipole excitation from the ground state to the $1^-_{K=1}$ state
can be understood by the vibrational (tilting) motion of the $2\alpha$ orientation
at the surface of the $^{16}$O core. This tilting motion of the $2\alpha$ clustering produces the nuclear vorticity.
The vortex is then duplicated on both sides because of the antisymmetrization effect and the parity projection.
It is also worth mentioning a link between the cluster and mean-field pictures of the
$1^-_{K=1}$ mode by considering the small limit of the inter-cluster distance, where
the cluster structure can be associated with a deformed harmonic oscillator configuration. We here use
the representation $(n_xn_yn_z)$ with oscillator quanta $n_\sigma$ along the $\sigma$ axis
for a single-particle orbit in the deformed harmonic oscillator.
In this limit, the ground state corresponds to the $(011)^4(002)^4$ configuration with triaxial deformation,
while the $1^-_{K=1}$ state is regarded as $(011)^3(002)^4(003)^1$. It means that
the $1^-_{K=1}$ transition is described by one-particle one-hole excitation of
$(011)^{-1}(003)^1$ on the triaxially deformed ground state, which
induces the vortical nuclear current and contributes to the TD strength.
This mechanism is similar to that discussed
with the deformed mean-field approach in Ref.~\cite{Nesterenko:2017rcc}. However, we should remark
that the present $1^-_{K=1}$ mode contains the cluster correlation
and corresponds to coherent one-particle one-hole excitations in the $LS$-coupling scheme. The
coherent contribution from four nucleons in the SU(4) symmetry (spin-isospin symmetry)
enhances the collectivity of the vortical dipole excitation beyond that of a $jj$-coupling configuration.
\section{Summary}\label{sec:5}
We investigated the low-lying $1^-$ states of $^{24}$Mg with the AMD+GCM framework
with the $\beta\gamma$-constraint for the quadrupole deformation and the
$d$-constraint for the $\Nea$, $\OBe$, and $\CC$ configurations.
We discussed properties of the $1^-$ states such as IS dipole transition strengths, cluster correlations, and vortical nature.
In the low-energy region $E_x\approx 10$ MeV, we obtained the
$1^-_{K=1}$ and $1^-_{K=0}$ states, which show quite different features from each other.
The $1^-_{K=1}$ state is the toroidal dipole mode,
which is characterized by the nuclear vorticity.
The $1^-_{K=0}$ state has the significant compressional dipole strength
and the weaker vorticity.
Effects of the cluster correlations on the excitation energy and transition strength of these two low-lying dipole states
were analyzed. It was found that the spatially developed cluster configurations
give a significant contribution to the $1^-_{K=0}$ state,
whereas the effect on the $1^-_{K=1}$ state is minor.
We should stress that the deformation and cluster correlations play important roles in the low-energy dipole modes of
$^{24}$Mg.
\begin{acknowledgments}
The authors thank Prof.~Nesterenko and Prof.~Kimura for fruitful discussions.
A part of the numerical calculations of this work was performed using the
supercomputer at Yukawa Institute for theoretical physics, Kyoto University. This work was supported by
MEXT/JSPS KAKENHI (Grant Nos. 16J03654, 18K03617, 18H05407, 18J20926, 18H05863, 19K21046).
\end{acknowledgments}
\section{Introduction}
Object recognition is a challenging problem, especially at a large scale, because of variability in object appearance, viewpoint, illumination and pose \cite{DengECCV2010,tim2011tpami,LempitskICCV2009}. Fully/strongly annotated data is thus typically required to learn a generalisable model for tasks such as object classification \cite{Nguyenweakly2011}, detection \cite{Felzenszwalb2012partbased,dollar08}, and segmentation \cite{LempitskICCV2009,Kuettel2012,Rubinstein13Unsupervised}. In fully annotated images, such as those in the PASCAL VOC object classification or detection challenges \cite{pascalvoc2007}, not only the presence of objects, but also their locations are labelled, typically in the form of bounding boxes. Such strong manual annotation of objects is time-consuming and laborious. Consequently, although media data is increasingly available with the prevalence of sharing websites such as Flickr, the lack of annotated images, particularly strongly annotated ones, becomes the new barrier that prevents tasks such as object detection from scaling to thousands of classes \cite{Guillaumin_cvpr12}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=\linewidth]{fig/fig1_jointlearning.pdf}
\end{center}
\caption{Different types of objects often co-exist in a single image. Our joint learning approach differs from previous approaches which localise each object class independently.}
\label{fig:concept}
\end{figure}
One approach to this challenge is weakly supervised object localisation (WSOL): simultaneously locating objects in images and learning their appearance using only weak labels indicating presence/absence of the objects of interest.
The WSOL problem has been tackled using various approaches \cite{Deselaers2012,Nguyenweakly2011,confeccvSivaRX12,Pandeyiccv2011,Guillaumin_cvpr12,Carolina2008eccv,TangCVPR14}.
Most of them address the task as a weakly supervised learning problem, particularly as a multi-instance learning (MIL) problem, where images are bags, and potential object locations are instances. These methods are typically discriminative in nature and attempt to localise each class of objects independently from the other classes. However, localising objects of different classes independently has a number of limitations:
(1) It fails to exploit the knowledge that different objects often co-exist within an image (see Fig.~\ref{fig:concept}). For instance, if it is known that some images contain both a horse and a person, then with a joint model for both classes the person can be ``explained away'' to reduce ambiguity about the horse's appearance, and vice versa. Ignoring this increases the ambiguity for each class.
(2) Although object classes vary in appearance, the background appearance is relevant to them all (e.g.~sky, tree, and grass are constant features of an image regardless of the foreground object classes). When different classes are modelled independently, the background must be re-learned repeatedly for each class, when it would be more statistically robust \cite{Salakhutdinov2011cvpr} to share this common knowledge.
In this paper, a novel framework based on Bayesian latent topic models is proposed to overcome the mentioned limitations. In our framework, both multiple object
classes and background types are modelled
jointly in a single generative model as latent topics, in order
to explicitly exploit their co-existence relationship (see Fig.~\ref{fig:concept}). As bag-of-words (BoW) models, conventional latent topic models
have no notion of localisation. We overcome this problem
by incorporating an explicit notion of object location.
Our generative model based framework has the following advantages over previous discriminative approaches:\\
\noindent \textbf{Joint vs. independent modelling}\quad
By jointly modelling different classes of objects and background, our model is able to exploit multiple object co-occurrence, so each object known to appear in an image can help disambiguate the location of the others by accounting for some of the pixels. This is illustrated by the left column of Fig.~\ref{fig:concept}, where modelling horse and person jointly helps the localisation of both objects since each single pixel can only be explained by one object, not both.
Meanwhile, a single set of shared background topics is learned once for all object classes. This is due to the nature of a generative model -- every pixel in the image must be accounted for. Even though learning background appearance can further disambiguate the location of objects, this appears to be an extremely hard task given that no labels are provided regarding background (people tend to focus on the foreground when annotating an image). However, by learning the background topics jointly with the foreground objects and using all training images available, this task can be fulfilled effectively by the proposed model.
\noindent \textbf{Integration of prior knowledge}\quad
Exploiting prior knowledge or top-down cues about appearance or geometry (e.g., position, size, aspect ratio) should be supported if available to offset the weak labels.
Our framework is able to incorporate, when available, prior knowledge about appearances of objects in a more systematic way as a Bayesian prior.
Specifically, we exploit the prior intuition that objects are spatially compact relative to the background. We can also optionally exploit an external human-provided or internal data-driven prior about typical object size, location and appearance as a Bayesian prior. Going beyond within-class priors, we also show that cross-class appearance similarity can be exploited. For instance, the model can exploit the fact that ``bike'' is more similar to ``motorbike'' than to ``aeroplane''.
\noindent \textbf{Bayesian domain adaptation}\quad
A central challenge for building generally useful recognition models is providing the capability to adapt models trained on one domain or dataset to new domains or datasets \cite{eth_biwi_00905}. This is important because any given domain or dataset is intentionally or unintentionally biased \cite{Torralba_cvpr11}, so transferring models directly across domains generally performs poorly \cite{Torralba_cvpr11}. However, with appropriate adaptation, source and target domain data can be combined to out-perform target domain data alone \cite{eth_biwi_00905}. We can leverage our model's Bayesian formulation to provide domain adaptation in a WSOL context.
\noindent \textbf{Semi-supervised learning}\quad
Since there are effectively unlimited quantity of unlabelled data available on the Internet (compared to limited quantity of manually annotated data), a valuable capability is to exploit this existing unlabelled data in conjunction with limited weakly labelled data to improve learning.
As a generative model, our framework is naturally suited for semi-supervised learning (SSL). Unlabelled data are included and the label variables for these instances are left unclamped (i.e.~no supervision is enforced). Importantly, unlike conventional SSL approaches \cite{zhu2007sslsurvey}, our model does not require that all the unlabelled data are instances of known classes, making it more applicable to realistic SSL applications.
\section{Related Work}
\label{relatedwork}
\noindent \textbf{Weakly supervised object localisation}\quad Weakly supervised learning (WSL) has attracted increasing attention as the volume of data which we are interested in learning from grows much faster than available annotations. Weakly supervised object localisation (WSOL) is of particular interest \cite{Deselaers2012,Sivaiccv2011,confeccvSivaRX12,TangCVPR14,Pandeyiccv2011,zhiyuan12,Nguyenweakly2011,Crandalleccv06,TangCVPR14,TangECCV14}, due to the onerous demands of annotating object location information. Many studies \cite{Nguyenweakly2011,Deselaers2012} have approached this task as a multi-instance learning \cite{Maron98aframework,Andrews03supportvector} problem. However, only relatively recently have localisation models capable of learning from challenging data such as the PASCAL VOC 2007 dataset been proposed \cite{Deselaers2012,Sivaiccv2011,confeccvSivaRX12,Pandeyiccv2011,zhiyuan12}. Such data is especially challenging because objects may occupy only a small proportion of an image, and multiple objects may occur in each image: corresponding to a multi-instance multi-label problem \cite{nguyen2010svm_miml}. Three types of cues are exploited in existing WSL object localisation approaches: (1) \textit{saliency} -- a region containing an object should look different from the majority of (background) regions. The object saliency model in \cite{Alexe_TPAMI_2012} is widely used in most recent work \cite{Deselaers2012,confeccvSivaRX12,Guillaumin_cvpr12,Sivaiccv2011,eth_biwi_00905} as a preprocessing step to propose a set of candidate object locations so that the subsequent computation is reduced to a tractable level, (2) \textit{intra-class} -- a region containing an object should look similar to the regions containing the same class of objects in other images \cite{Sivaiccv2011}, and (3) \textit{inter-class} -- the region should look dissimilar to any regions that are known to not contain the object of interest \cite{Deselaers2012,confeccvSivaRX12,Pandeyiccv2011}. One of the first studies to combine the three cues for WSOL was \cite{Deselaers2012} which employed a conditional random field (CRF) and generic prior object knowledge learned from a fully annotated dataset. Later, \cite{Pandeyiccv2011} presented a solution exploiting latent SVMs. Recent studies have explicitly examined the role of intra- and inter-class cues \cite{Sivaiccv2011,confeccvSivaRX12}, as well as transfer learning \cite{zhiyuan12,Guillaumin_cvpr12}, for this task. Similar to the above approaches for weakly labelled images, \cite{Tang_NIPS2012,eth_biwi_00905} proposed video based frameworks to deal with motion segmented tubes instead of bounding-boxes. In contrast to these studies, which are all based on discriminative models, we introduce a generative topic model based approach that exploits all three cues, as well as joint multi-label, semi-supervised and cross-domain adaptive learning.
\noindent \textbf{Exploiting prior knowledge} \quad Prior knowledge has been exploited in existing WSOL works \cite{Deselaers2012,confeccvSivaRX12,Pandeyiccv2011}. Recognition or detection priors can be broadly broken down into appearance and geometry (location, size, aspect) cues, and can be provided manually, or estimated from data. The most common use has been crude: to generate candidate object locations based on a pre-trained model for generic objectness \cite{Alexewhatisobject}, i.e.~the previously mentioned saliency cue. This reduces the search space for discriminative models. Beyond this, geometry priors have also been estimated during learning \cite{Deselaers2012}.
We can not only exploit such simple appearance and geometry cues as model priors, but also go beyond to exploit a richer object hierarchy, which has been widely exploited in classification \cite{danielcvpr2012,zweig07_iccv,Rohrbach2011cvpr,Salakhutdinov2011cvpr} and, to a lesser extent, detection \cite{Guillaumin_cvpr12,Kuettel2012}.
More specifically, we leverage WordNet, a large lexical database based on linguistics \cite{Pedersen2004}. WordNet provides a measure of prior appearance similarity/correlation between classes, and we use this prior to regularise appearance learning. Such cross-class appearance correlation information is harder to use in previous WSOL approaches because different classes are trained separately.
Interestingly, our model uniquely shows positive results for WordNet-based appearance correlation (see Sec.~\ref{soa}), in contrast to some recent studies \cite{Rohrbach2011cvpr,Salakhutdinov2011cvpr} that found no or limited benefit from exploiting WordNet-based cross-class appearance correlation for recognition. Compared to the classification task, this inter-class correlation information is more valuable for WSOL because the task is more ambiguous. Specifically, the interdependent localisation and appearance learning aspects of the task add an extra layer of ambiguity -- the model might be able to learn the appearance if
it knew the location, but it will never find the location without knowing the appearance.
Our work is related to \cite{Guillaumin_cvpr12} where hierarchical cross-class appearance similarity is used to help weakly supervised object localisation in ImageNet by transfer learning. However, a source dataset of fully annotated images are required in their work, whilst our model exploits the correlation directly for the target data which is only weakly labelled.
\noindent \textbf{Cross domain/dataset learning}\quad
Domain adaptation \cite{Cao2010cvpr} methods aim to exploit prior knowledge from a source domain/dataset to improve the performance and/or reduce the amount of annotation required in a target domain/dataset (see \cite{pan2009transfer_survey} for a review). Many conventional approaches are based on SVMs for which the target domain can be considered a perturbed version of the source domain, and thus learning proceeds in the target domain by regularising it toward the source \cite{yang2007crossDomainConcept}. More recently, transductive SVM \cite{BergamoTorresani10}, Multiple Kernel Learning (MKL) \cite{Luo2011iccv}, and instance constraints \cite{Donahue_CVPR2013} have been exploited. In contrast to these discriminative approaches, we exploit a simple and efficient Bayesian adaptation approach similar in spirit to \cite{Cao2010cvpr,Dai2007aaai}. Posterior parameters from the source domain are transferred as priors for the target, which are then adapted based on observed target domain data via Bayesian learning. Going beyond simple within-modality dataset bias, recent studies \cite{eth_biwi_00905,Tang_NIPS2012} have adapted object detectors from video to image or reverse. We show that our approach can achieve the image-video domain transfer within a single framework.
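As a schematic illustration of this posterior-to-prior transfer (not the exact update used in our model; the names are hypothetical), smoothed topic-word statistics learned on the source domain can serve as Dirichlet pseudo-counts for the target model:
\begin{verbatim}
import numpy as np

def adapted_prior(source_counts, beta0=0.01, strength=1.0):
    # source_counts: (n_topics, n_words) expected counts from the source
    norm = source_counts / source_counts.sum(axis=1, keepdims=True)
    return beta0 + strength * norm   # Dirichlet prior for the target
\end{verbatim}
The target model is then trained as usual on target-domain data, with these pseudo-counts regularising it toward the source.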
\noindent \textbf{Exploiting unlabelled data}\quad Semi-supervised learning \cite{zhu2007sslsurvey} methods aim to reduce labelling requirements and/or improve results compared to only using labelled data. Most existing SSL approaches assume a training set with a mix of fully labelled and weak or unlabelled \cite{blaschko10simultaneous,Guillaumin_cvpr12} data, while we use weak and unlabelled data alone. The existing (discriminative) line of work focusing on WSOL \cite{Deselaers2012,Pandeyiccv2011,Tang_CVPR2013,Glenn2012eccv} has not generally exploited unlabelled data, and cannot straightforwardly do so.
\noindent \textbf{Topic models for image understanding}\quad Latent topic models (LTMs) were originally developed for unsupervised text analysis \cite{BleiLDA2003}, and have been successfully adapted to both unsupervised \cite{Philbinijcv2010,Sivic05b} and supervised image understanding problems \cite{feifei2006one_shot,CaoFei-Fei2007,LiSocherFeiFei2009,wangbleifeifei08,Rasiwasia_TPAMI_2013}. Most studies have addressed the simpler tasks of classification \cite{wangbleifeifei08,Rasiwasia_TPAMI_2013} and annotation \cite{wangbleifeifei08,blei2003annotated_model}.
Our model differs from the existing ones in two main aspects: (i) Conventional topic models have no explicit notion of the spatial location and extent of an object in an image. This is addressed in our model by modelling the spatial distribution of each topic. Note that some topic model based methods \cite{LiSocherFeiFei2009,CaoFei-Fei2007} can also be applied to object localisation. However, the spatial location is obtained from a pre-segmentation step rather than being explicitly modelled. (ii) The other difference is more subtle -- in existing supervised topic models such as CorrLDA \cite{blei2003annotated_model}, SLDA \cite{wangbleifeifei08} and derivatives \cite{LiSocherFeiFei2009}, the supervision only weakly influences the learned topics. This is because the objective is the sum of visual words and label likelihoods, and visual words vastly outnumber annotations, thus dominating the result \cite{Rasiwasia_TPAMI_2013}. This limitation is serious for WSOL, as the labels are already weak and they must be used to their full strength. In this work, a learning algorithm with topic constraints similar to those of \cite{tim2011tpami} is formulated to provide stronger supervision, which is demonstrated to be much more effective than the conventional supervised topic models in our experiments (see supplementary material).
\noindent \textbf{Other joint learning approaches}\quad An approach similar in spirit to ours in the sense of jointly learning a model for all classes is that of Cabral \emph{et al}~ \cite{CabralDCB11}. This study formulates multi-label image classification as a matrix completion problem, which is also related to our factoring images into a mixture of topics. However, we add two key components: (i) a stronger notion of the spatial location and extent of each object, and (ii) the ability to encode human knowledge or transferred knowledge through a Bayesian prior. As a result, we are able to address more challenging data than \cite{CabralDCB11} such as PASCAL VOC.
Multi-instance multi-label (MIML) \cite{nguyen2010svm_miml} approaches provide a mechanism to jointly learn a model for all classes \cite{Zhou07multi-instancemultilabel,zha2008mlmi}. However, because these methods must search a discrete space (of positive instance subsets), their optimisation problem is harder than the smooth probabilistic optimisation used here.
Finally, while more elaborate joint generative learning methods \cite{sudderth2008tdp_visual,LiSocherFeiFei2009} exist, they are more complicated than necessary for WSOL and do not scale to the size of data required here.
\noindent \textbf{Feature fusion}\quad Combining multiple complementary cues has been shown to improve classification performance in object recognition \cite{GehlerN09,Gehler09_let,Orabona10,Luo2011iccv}. Two simple feature fusion methods have been widely used in existing work: early fusion, which concatenates low-level features \cite{Siva_2013_CVPR}, and late (score-level) fusion \cite{Deselaers2012,Sivaiccv2011}. Multiple kernel learning (MKL) approaches have attracted attention as a principled mid-level approach to combining features \cite{Orabona10,Gehler09_let}. Similarly to MKL, our framework provides a principled and jointly-learned mid-level probabilistic fusion via its generative process.
\noindent \textbf{Contributions}\quad In summary, this paper makes the following contributions: (1) We propose the novel concept of joint modelling of all object classes and background for weakly supervised object localisation. (2) We formulate a novel Bayesian topic model suitable for object localisation, which can use various types of prior knowledge including an inter-category appearance similarity prior. (3) Our Bayesian prior enables the model to easily borrow available domain knowledge from existing auxiliary datasets and adapt it to a target domain. (4) We further exploit unlabelled data to improve weakly supervised object localisation. (5) Extensive experiments on PASCAL VOC 2007 \cite{pascalvoc2007} and ImageNet \cite{imagenet_cvpr09} show that our model surpasses existing competitors and achieves state-of-the-art performance. A preliminary version of our work was described in \cite{shi_iccv_2013}.
\section{Joint Topic Model for Objects and Background}\label{sec:PGM}
In this section, we introduce our new latent topic model (LTM) \cite{BleiLDA2003} approach to the weakly-supervised object localisation task. Applied to images, conventional LTMs factor them into combinations of latent topics \cite{Philbinijcv2010,Sivic05b}. Without supervision, these topics may or may not correspond to anything of semantic relevance to humans. To address the WSOL task, we need to learn what is unique to all images sharing a particular label (object class), while explaining away both the pixels corresponding to other annotated objects as well as other shared visual aspects (background) which are irrelevant to the annotations of interest. We achieve this in an LTM framework by applying weak supervision to partially constrain the available topics for each image. This constraint is enforced by label/topic clamping to ensure that each foreground topic corresponds to an object class of interest.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\columnwidth]{fig/JTM2.pdf}
\end{center}
\caption{Graphical model for our WSOL joint topic model. Shaded nodes are observed.}
\label{fig:GM}
\end{figure}
More specifically, to address the WSOL task, we will factor images into unique combinations
of $K$ shared topics. If there are $C$ classes of objects to
be localised, $K^{fg}=C$ of these will represent the (foreground) classes,
and $K^{bg}=K-K^{fg}$ topics will model background
data to be explained away. Each topic thus corresponds to one object class or one type of background. Let $T^{fg}$ and $T^{bg}$ index foreground
and background topics respectively. An image is represented using a Bag-of-Words (BoW) representation for each of $f=1\dots F$ different types of features (see Sec.~\ref{datasets} for the specific appearance features used). After learning, each latent topic will encode both a distribution
over the $V_f$ sized appearance vocabulary of each feature $f$ and also over the spatial
location of these words within each image. Formally, given a set of $J$ training images, each labelled with any number of the $C$ foreground classes, and represented as bags of words $\mathbf{x}_{jf}$, the generative
process of our model (Fig.~\ref{fig:GM}) is as follows (notation is summarised in Table \ref{trainandtest} for convenience):
\\~\\
\noindent For each topic $k\in1\dots K$:
\begin{enumerate}
\item For each feature representation $f\in1\dots F$:
\begin{enumerate}
\item Draw an appearance distribution $\boldsymbol{\pi}_{kf}\sim\mbox{Dir}(\boldsymbol{\pi}_{kf}^{0})$ from its Dirichlet prior
\end{enumerate}
\end{enumerate}
For each image $j\in1\dots J$:
\begin{enumerate}
\item Draw foreground and background topic distribution $\boldsymbol{\theta}_{j}\sim\mbox{Dir}(\boldsymbol{\alpha}_{j})$,
$\boldsymbol{\alpha}_{j}=[\boldsymbol{\alpha}_{j}^{fg},\boldsymbol{\alpha}_{j}^{bg}]$, where the Dirichlet parameter $\boldsymbol{\alpha}_{j}$ reflects prior knowledge of the presence of each object class or background in image $j$. Both $\boldsymbol{\theta}_{j}$ and $\boldsymbol{\alpha}_{j}$ are $K$ dimensional.
\item For each foreground topic $k\in T^{fg}$ draw a location distribution:
$\{\boldsymbol{\mu}_{kj},\Lambda_{kj}\}\sim\mathcal{NW}(\boldsymbol{\mu}_{k}^{0},\Lambda_{k}^{0},\beta_{k}^{0},\nu_{k}^{0})$
\item For each observation (visual word) $i\in1\dots N_j$:
\begin{enumerate}
\item Draw topic $y_{ij}\sim\mbox{Multi}(\boldsymbol{\theta}_{j})$
\item Draw a location: \\$\mathbf{l}_{ij}\sim\mathcal{N}(\boldsymbol{\mu}_{y_{ij}j},\Lambda_{y_{ij}j}^{-1})$
if $y_{ij}\in T^{fg}$ or\\ $\mathbf{l}_{ij}\sim Uniform$ if $y_{ij}\in T^{bg}$
\item For each feature representation $f\in1\dots F$:
\begin{enumerate}
\item Draw visual word $x_{ijf}\sim\mbox{Multi}(\boldsymbol{\pi}_{y_{ij}f})$
\end{enumerate}
\end{enumerate}
\end{enumerate}
\noindent where Multi, Dir, $\mathcal{N}$, $\mathcal{NW}$ and $Uniform$ respectively indicate Multinomial, Dirichlet, Normal, Normal-Wishart and uniform distributions with the specified parameters. These prior distributions are chosen mainly because they are conjugate to the word, topic and location distributions, and hence enable efficient inference. For the visual word spatial location, the foreground and background distributions are of different forms -- normal for foreground and uniform for background. This is to reflect the intuition that foreground objects tend to be compact and background much less so.
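For concreteness, a minimal NumPy/SciPy simulation of this generative process is sketched below. It is purely illustrative -- the toy dimensions, random seed and helper names are assumptions rather than the settings used in our experiments -- but each sampling step mirrors the process above.
\begin{verbatim}
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(0)
K_fg, K_bg = 3, 2                       # toy topic counts (C fg + bg)
K, V, F, N = K_fg + K_bg, 50, 1, 200    # topics, vocab, features, words

# Topic-level appearance: pi_kf ~ Dir(pi0_kf)
pi0 = np.ones((K, F, V))
pi = np.array([[rng.dirichlet(pi0[k, f]) for f in range(F)]
               for k in range(K)])

def generate_image(labels, H=100, W=100):
    """Sample one image given weak labels (a subset of range(K_fg))."""
    alpha = np.concatenate([np.isin(np.arange(K_fg), labels) * 1.0,
                            np.ones(K_bg)])
    theta = rng.dirichlet(np.maximum(alpha, 1e-6))  # clamp absent topics
    # Per-image Normal-Wishart draw for each foreground topic location
    mu = rng.uniform([0, 0], [W, H], size=(K_fg, 2))
    Lam = np.stack([wishart.rvs(df=4, scale=0.01 * np.eye(2))
                    for _ in range(K_fg)])
    words, locs = [], []
    for _ in range(N):
        y = rng.choice(K, p=theta)
        if y < K_fg:   # foreground word: Gaussian location
            l = rng.multivariate_normal(mu[y], np.linalg.inv(Lam[y]))
        else:          # background word: uniform location
            l = rng.uniform([0, 0], [W, H])
        words.append([rng.choice(V, p=pi[y, f]) for f in range(F)])
        locs.append(l)
    return np.array(words), np.array(locs)
\end{verbatim}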
The joint distribution of all observed $O=\{\mathbf{x}_{jf},\mathbf{l}_{j}\}_{j,f=1}^{J,F}$
and latent $H=\{\{\boldsymbol{\pi}_{kf}\}_{k,f=1}^{K,F},\{\mathbf{y}_{j},\boldsymbol{\mu}_{kj},\Lambda_{kj},\boldsymbol{\theta}_{j}\}_{k,j=1}^{K,J}\}$
variables given parameters $\Pi=\{\{\boldsymbol{\pi}_{kf}^{0}\}_{k,f=1}^{K,F},\{\boldsymbol{\mu}_{k}^{0},\Lambda_{k}^{0},\beta_{k}^{0},\nu_{k}^{0}\}^K_{k=1},\{\boldsymbol{\alpha}_{j}\}_{j=1}^{J}\}$
in our model is therefore:
\setlength\arraycolsep{1.5pt}
\begin{eqnarray}
p(O,H|\Pi)&=&\prod_{k}^{K}\prod_{f}^{F}p(\boldsymbol{\pi}_{kf}|\boldsymbol{\pi}_{kf}^{0})\nonumber\\
&& \hspace{-2.1cm} \cdot \prod_{j}^{J}\Bigg[p(\boldsymbol{\theta}_{j}|\boldsymbol{\alpha}_{j})\prod_{k}^{K}p(\boldsymbol{\mu}_{jk},\Lambda_{jk}|\boldsymbol{\mu}_{k}^{0},\Lambda_{k}^{0},\beta_{k}^{0},\nu_{k}^{0}) \nonumber \\
&& \hspace{-2.1cm}\cdot\prod_{i}^{N_j}p(\mathbf{l}_{ij}|\boldsymbol{\mu}_{y_{ij}j},\Lambda_{y_{ij}j}^{-1})\,p(y_{ij}|\boldsymbol{\theta}_{j})\prod_{f}^{F}p(x_{ijf}|y_{ij},\boldsymbol{\pi}_{y_{ij}f})\Bigg].
\label{eq:Joint}
\end{eqnarray}
\begin{table}[t]
\setlength{\heavyrulewidth}{0.12em}
\centering
\begin{tabular}{ll}
\toprule
$x_{ijf}=1...V_f$ & Visual word $i$ in image $j$ for feature $f$ \\
\midrule
$\mathbf{l}_{ij}$ & Location of visual word $i$ in image $j$\\
\midrule
$y_{ij} =1\dots K$ & Topic (object) explaining visual word $x_{ijf}$ \\
\midrule
$\boldsymbol{\alpha}_j$ & Annotation / topic prior for image $j$\\
\midrule
$\boldsymbol{\theta}_{j}$ & Dirichlet topic proportions in image $j$\\
\midrule
$\boldsymbol{\pi}_{kf}^{0}$ & Appearance prior for topic/class $k$ in feature $f$\\
\midrule
$\boldsymbol{\pi}_{kf}$ & Dirichlet appearance for topic/class $k$ in feature $f$\\
\midrule
$\boldsymbol{\mu}_k^{0}, \Lambda_k^{0}$ & $\mathcal{NW}$ Location prior for class $k$\\
\midrule
$\boldsymbol{\mu}_{kj}, \Lambda^{-1}_{kj}$ & Gaussian location of object class $k$ in image $j$\\
\bottomrule
\end{tabular}
\caption{Summary of model variables and parameters}
\label{trainandtest}
\end{table}
\section{Model learning}\label{sec:learning}
\noindent \textbf{Inference via variational message passing}\quad
Learning our model involves inferring the following quantities: the
appearance of each object class for each feature type, $\boldsymbol{\pi}_{kf},k\in T^{fg}$ and each background
type, $\boldsymbol{\pi}_{kf},k\in T^{bg}$ for each feature type $f$; the word-topic distribution (soft segmentation) of each image $\mathbf{y}_{j}$,
the proportion of visual words (related to the proportion of pixels) in each image corresponding to each
class or background $\boldsymbol{\theta}_{j}$, and the location of each object
$\boldsymbol{\mu}_{jk},\Lambda_{jk}$ in each image (mean and covariance of a Gaussian). To learn the model and localise all the weakly
annotated objects, we wish to infer the posterior $p(H|O,\Pi)=p(\{\mathbf{y}_{j},\boldsymbol{\mu}_{jk},\Lambda_{jk},\boldsymbol{\theta}_{j}\}_{k,j}^{K,J},\{\boldsymbol{\pi}_{kf}\}_{k,f}^{K,F}|\{\mathbf{x}_{jf},\mathbf{l}_{j}\}_{j=1,f=1}^{J,F},\Pi)$.
This is intractable to solve directly; however a variational message passing
(VMP) \cite{winn2004vmp} strategy can be used to obtain a factored
approximation $q(H|O,\Pi)$ to the posterior:
\begin{eqnarray}
q(H|O,\Pi) & =\nonumber \\
 & & \hspace{-1.5cm}\prod_{k,f}q(\boldsymbol{\pi}_{kf})\prod_{j}\left(q(\boldsymbol{\theta}_{j})\prod_{k}q(\boldsymbol{\mu}_{jk},\Lambda_{jk})\prod_{i}q(y_{ij})\right).\label{eq:varApprox}
\end{eqnarray}
\noindent Under this approximation a VMP solution is obtained by deriving integrals of the
form $\ln q(\mathbf{h})=E_{H\backslash\mathbf{h}}\left[\ln p(H,O)\right]+\mathrm{const}$
for each group of hidden variables $\mathbf{h}$, thus obtaining the following
updates for the sufficient statistics (indicated by tilde) of each variable:
\begin{eqnarray}
\tilde{\theta}_{jk} & = & \alpha_{jk}+\sum_{i} \tilde{y}_{ijk},\label{eq:theta} \\
\tilde{y}_{ijk} & \propto & \int_{\boldsymbol{\mu}_{jk},\Lambda_{jk}}\mathcal{N}(\mathbf{l}_{ij}|\boldsymbol{\mu}_{jk},\Lambda_{jk}^{-1})q(\boldsymbol{\mu}_{jk},\Lambda_{jk}) \nonumber \\
 & & \hspace{-1cm}\cdot\prod_{f}^{F}\exp\left(\Psi(\tilde{\pi}_{x_{ijf}kf})-\Psi(\sum_{v}\tilde{\pi}_{vkf}) \right) \nonumber \\
 & & \hspace{-1cm} \cdot\exp\left(\Psi(\tilde{\theta}_{jk})\right), \label{eq:y} \\
\tilde{\pi}_{vkf} & = & \pi^0_{vkf}+\sum_{ij}\mathbf{I}(x_{ijf}=v) \tilde{y}_{ijk}, \label{eq:varUpdates}
\end{eqnarray}
\noindent where $\Psi$ is the digamma function, $v=1\dots V_f$ ranges over the BoW appearance vocabulary, $\mathbf{I}$ is the indicator function which returns 1 if its argument is true, and
the integral in the second line returns the student-t distribution over $\mathbf{l}_{ij}$, $\mathcal{S}(\mathbf{l}_{ij}|\tilde{\boldsymbol{\mu}}_{jk},\tilde{\Lambda}_{jk}^{-1}P,\tilde{\beta}_{jk},\tilde{\nu}_{jk})$. Within each image $j$, standard updates \cite{bishop2006prml} apply for the sufficient statistics $\{\tilde{\boldsymbol{\mu}}_{jk},\tilde{\Lambda}_{jk},\tilde{\beta}_{jk},\tilde{\nu}_{jk}\}$ of the Normal-Wishart parameter posterior $q(\boldsymbol{\mu}_{jk},\Lambda_{jk})$. The update in Eq.~(\ref{eq:y}) (estimating the object explaining each pixel) is the most non-standard for LTMs; this is because it contains a top-down contribution (the third term), and two bottom-up contributions from the location and appearance (the first and second terms respectively). The model is learned by iterating the updates of Eqs.~(\ref{eq:theta})-(\ref{eq:varUpdates}) for all images $j$, words $i$, topics $k$ and vocabulary $v$.
\noindent \textbf{Supervision via label-topic constraints}\quad
In conventional topic models, the $\alpha$ parameter encodes the expected proportion of words for each topic. In our weakly supervised topic model, we use $\alpha$ to encode the supervision from weak labels. In particular, we set $\alpha_{j}^{fg}$
as a binary vector with $\alpha_{jk}^{fg}=1$ if class $k$ is present
in image $j$ and $\alpha_{jk}^{fg}=0$ otherwise. $\alpha^{bg}$
is always set to $1$ to reflect the fact that background of different types can be shared across different images. That is, the foreground topics are clamped with the weak labels indicating the presence/absence of foreground object classes in each image, whilst all background types are assumed to be present a priori. With these partial constraints, iterating the
updates in Eqs.~(\ref{eq:theta})-(\ref{eq:varUpdates}) has the effect of factoring images
into combinations of latent topics, where $K^{bg}$ background
topics are always available to explain away backgrounds, and
$K^{fg}$ foreground topics are only available to images with
annotated classes. Note that this set-up assumes a 1:1 correspondence between object classes and topics. More topics can trivially be assigned to each object class (1:N correspondence), which has the effect of modelling multi-modality in object appearance, for a linear increase in computational cost.
\noindent \textbf{Probabilistic feature fusion}\quad
We combine multiple features probabilistically in our model: a single topic assignment ($y$) is estimated given the different low-level features ($f$) in Eq.~(\ref{eq:y}). Our fusion keeps the original low-level feature representations, each providing complementary information about the object location, rather than increasing ambiguity by concatenating them (early fusion). The shared topic ($y$) and Gaussian location distribution ($\boldsymbol{\mu},\Lambda^{-1}$) correlate the multiple features, which avoids domination by any single one. The appearance model in each modality is updated based on the consensus estimate of location; it thus learns a good appearance model in each view even if the particular category is hard to detect in that view (and as a result could drift if that view were used alone). Its advantage over early (feature concatenation) and late (score-level) fusion is demonstrated experimentally in Sec.~1 of the supplementary material.
\section{Object Localisation\label{sub:Object-Localisation}}
After learning, we extract the location of the objects in each image from the model, which can then be used to learn an object detector. Depending on whether the images are treated as individual images or consecutive video frames, our localisation method differs slightly.
\noindent \textbf{Individual images}\quad There are two possible strategies to localise objects in individual images, which we will compare later in Sec.~\ref{experiments}. In the first strategy (\textit{Our-Gaussian}), a bounding box
for class $k$ in image $j$ can be obtained directly from the Gaussian mean of the
parameter posterior $q(\boldsymbol{\mu}_{jk},\Lambda_{jk})$,
via aligning a bounding box to the two standard deviation ellipse. This has the advantage of being
clean and highly efficient. However, since there
is only one Gaussian per class (which will grow to cover all instances
of the class in an image), this is not ideal for images with more
than one object per class. In the second strategy (\textit{Our-Sampling}) we draw a heat-map for class $k$ by projecting $q(y_{ijk})$ (Eq.~(\ref{eq:y}))
back onto the image plane,
using the grid coordinates where visual words are computed. This heat-map is analogous to those
produced by many other approaches such as Hough transforms \cite{houghforest2012}.
Thereafter, any strategy for heat-map based localisation may be used.
We choose the non-maximum suppression (NMS) strategy of \cite{Felzenszwalb2012partbased}.
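Both strategies admit short implementations. The sketch below shows the Gaussian-to-box conversion used by \textit{Our-Gaussian} and the heat-map projection used by \textit{Our-Sampling}; the NMS step itself is delegated to the code of \cite{Felzenszwalb2012partbased}. The box is treated as axis-aligned (correlation in the covariance is ignored) and the names are illustrative.
\begin{verbatim}
import numpy as np

def gaussian_box(mu, cov, n_std=2.0):
    """Axis-aligned box covering n_std standard deviations
    of a 2-D Gaussian with mean mu and covariance cov."""
    std = np.sqrt(np.diag(cov))
    (x0, y0), (x1, y1) = mu - n_std * std, mu + n_std * std
    return x0, y0, x1, y1

def heat_map(y_k, coords, shape):
    """Project q(y_ijk) back onto the image plane.
    y_k: (N,) responsibilities for class k; coords: (N,2) grid
    locations (x, y) where visual words were computed."""
    hm = np.zeros(shape)
    for p, (cx, cy) in zip(y_k, coords.astype(int)):
        hm[cy, cx] += p
    return hm
\end{verbatim}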
\noindent \textbf{Video frames}\quad The above two strategies are directly applicable to video data if we treat each frame as an individual image. However, the temporal information in continuous videos is useful for smoothing the noise of individual frames. To this end, we apply a simple state space model to post-process object locations in video segments, smoothing them in time. Two diagonal points are sufficient to encode an object location (bounding box); these are estimated from $q(\boldsymbol{\mu}_{jk},\Lambda_{jk})$ above at every frame/time $t$ as the observation $\mathbf{c}_t$. We assume a four-dimensional latent state vector $\mathbf{z}_{t}^{T}=(z_{xt} \ z_{yt} \ \dot{z}_{xt} \ \dot{z}_{yt})$ denoting the (hidden) true coordinates and velocity of each corner. A Kalman smoother is then adopted to smooth the observation noise $\sigma_t$ in the system:
\begin{equation}
\mathbf{z}_{t} = \mathbf{A}\mathbf{z}_{t-1}+\epsilon_t,\qquad
\mathbf{c}_t = \mathbf{O}\mathbf{z}_t + \sigma_t,
\end{equation}
\noindent where $\mathbf{A}$ is the temporal transition matrix between true locations $\mathbf{z}$ in the video, and $\mathbf{O}$ is the observation matrix for each frame.
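As a concrete reference, a constant-velocity Kalman filter for one bounding-box corner is sketched below; the smoother adds a standard backward (RTS) pass on top of this forward recursion, and the noise covariances shown are assumed placeholders rather than tuned values from our experiments.
\begin{verbatim}
import numpy as np

dt = 1.0
A = np.array([[1, 0, dt, 0],   # constant-velocity transition
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], float)
O = np.array([[1, 0, 0, 0],    # we observe position only
              [0, 1, 0, 0]], float)
Q, R = np.eye(4) * 1e-2, np.eye(2)  # assumed process/observation noise

def kalman_filter(cs):
    """cs: (T,2) observed corner coordinates; returns (T,4) states."""
    z = np.array([cs[0, 0], cs[0, 1], 0.0, 0.0])
    P, out = np.eye(4), []
    for c in cs:
        z, P = A @ z, A @ P @ A.T + Q            # predict
        S = O @ P @ O.T + R
        G = P @ O.T @ np.linalg.inv(S)           # Kalman gain
        z = z + G @ (c - O @ z)                  # update
        P = (np.eye(4) - G @ O) @ P
        out.append(z.copy())
    return np.array(out)
\end{verbatim}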
\section{Bayesian Priors}
An important capability of our Bayesian approach is that top-down cues, either from human expertise or estimated from data, can be encoded. Various types of human knowledge about objects and their relationships with background are encoded in our model. As discussed earlier, such prior cues can cover both appearance and geometry.
\noindent \textbf{Encoding geometry prior}\quad For geometry, we already model the most general intuition that objects are compact relative to background by assigning them Gaussian and uniform distributions respectively (Sec.~\ref{sec:PGM}). Beyond this, prior knowledge about the typical image location and size of each class can be included via the prior parameters $\boldsymbol{\mu}^0_{k},\Lambda^0_k$; however, we found this did not noticeably improve results in our experiments, so we did not exploit it. This makes sense because in challenging datasets like PASCAL VOC, objects appear at highly variable scales and locations, so there is little regularity to learn.
\noindent \textbf{Encoding appearance prior}\quad If prior information is available about object category appearance, it can be included by setting $\boldsymbol{\pi}^0_{kf}$. (We will exploit this later for cross-domain adaptation in Sec.~\ref{sec:BDA}.) For within-domain learning, we can obtain an initial data-driven estimate of object appearance to use as a prior by exploiting the observation that, when aggregated across all images, the background is more dominant than any single object class in terms of size (and hence the number of visual words). Exploiting this intuition, for each object class $k$, we set the appearance prior $\boldsymbol{\pi}^0_{kf}$ as:
\begin{equation}
\boldsymbol{\pi}^0_{kf} = \left|\frac{1}{C}\sum_{j,c_j=k} h(\mathbf{x}_{jf})-\frac{1}{J}\sum_j h(\mathbf{x}_{jf}) \right|_++ \epsilon,\label{eq:appPrior}
\end{equation}
where $h(\cdot)$ denotes a histogram and $\epsilon$ is a small constant. That is, we set the appearance prior for each class to the mean histogram of the images containing that class minus the mean over all images. This results in a prior which reflects what is consistently unique about each particular class. It is related to the notion of saliency -- not within an image, but across all images. Saliency has been exploited in previous MIL based approaches to generate the instances/candidate object locations \cite{Deselaers2012,Sivaiccv2011,confeccvSivaRX12,Pandeyiccv2011,zhiyuan12}. However, in our model it is cleanly integrated as a prior.
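Eq.~(\ref{eq:appPrior}) amounts to a truncated difference of mean BoW histograms. A sketch follows; note that it normalises the class-specific term by the per-class image count, a minor variant of the $1/C$ constant in Eq.~(\ref{eq:appPrior}), and the names are illustrative.
\begin{verbatim}
import numpy as np

def appearance_prior(H, labels, eps=1e-3):
    """H: (J, V) per-image BoW histograms; labels: (J, C) binary
    weak labels. Returns (C, V) priors pi0 per Eq. (appPrior)."""
    mean_all = H.mean(axis=0)
    pi0 = np.empty((labels.shape[1], H.shape[1]))
    for k in range(labels.shape[1]):
        mean_k = H[labels[:, k] == 1].mean(axis=0)
        pi0[k] = np.maximum(mean_k - mean_all, 0) + eps  # |.|_+ + eps
    return pi0
\end{verbatim}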
\noindent \textbf{Encoding appearance similarity prior}\quad
Going beyond the direct unary appearance prior discussed above, we next consider exploiting the notion of prior \emph{inter-class appearance similarity}, rather than prior appearance per se. The prior similarity between object categories can be estimated by computing inter-category distances based on the WordNet structure \cite{Pedersen2004}. We compute a similarity matrix $\mathcal{M}$ whose element $\mathcal{M}_{m,n}$ indicates the relatedness between classes $m$ and $n$. The similarity matrix is then used to define how much appearance information from class $m$ contributes to class $n$ a priori.
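For reference, such a similarity matrix can be computed with NLTK's WordNet interface; the synset names below are illustrative stand-ins for the actual class names, and the NLTK corpora must be downloaded beforehand.
\begin{verbatim}
import numpy as np
from nltk.corpus import wordnet as wn, wordnet_ic

# Requires: nltk.download('wordnet'); nltk.download('wordnet_ic')
ic = wordnet_ic.ic('ic-brown.dat')   # Brown-corpus information content
synsets = [wn.synset(s) for s in
           ['dog.n.01', 'cat.n.01', 'car.n.01', 'bus.n.01']]
C = len(synsets)
M = np.array([[synsets[m].lin_similarity(synsets[n], ic)
               for n in range(C)] for m in range(C)])
\end{verbatim}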
We exploit this matrix by introducing an M-step into our learning algorithm (Eqs.~(\ref{eq:theta})-(\ref{eq:varUpdates})). Previously the appearance prior $\boldsymbol{\pi}_{kf}^0$ was considered fixed (e.g., from Eq.~(\ref{eq:appPrior})). As with any parameter learning in the presence of latent variables, $\boldsymbol{\pi}_{kf}^0$ could potentially be optimised by a maximum-likelihood M-step interleaved with E-step latent variable inference. However, rather than the conventional approach of optimising $\boldsymbol{\pi}_{kf}^0$ \emph{solely} given the data of class $k$, we define an update that exploits cross-class similarity by updating $\boldsymbol{\pi}_{kf}^0$ using \emph{all} the data, but weighted by its similarity to the target class $k$.
Denoting by $\hat{{\pi}}^0_{vkf}$ the new appearance prior to be learned, we introduce a regularised M-step. Specifically, the update for each class $k\in T^{fg}$ is as follows:
\begin{equation}
\hat{\pi}^0_{vkf}=\hspace{-0.3cm}\underbrace{\pi_{vkf}^{0}}_\text{fixed data driven prior}\hspace{-0.1cm}+\underbrace{\sum_{ij}\sum_{k' \in T^{fg}} \mathcal{M}_{k,k'} \cdot \mathbf{I}(x_{ijf}=v) \tilde{y}_{ijk'}}_\text{inter-class similarity prior}
\end{equation}
The first term $\pi_{vkf}^{0}$ is the original unary prior from Eq.~(\ref{eq:appPrior}). The second term is a data-driven update given the results of the E-step ($\tilde{y}$, Eqs.~(\ref{eq:theta})-(\ref{eq:varUpdates})). It includes a contribution from all images of all classes $k'$, weighted by the similarity of $k'$ to the target class $k$ -- given by $\mathcal{M}_{k,k'}$. The updated $\hat{\boldsymbol{\pi}}^0_{kf}$ then replaces $\boldsymbol{\pi}^0_{kf}$ in Eq.~(\ref{eq:varUpdates}) of the E-step.
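The regularised M-step is then a similarity-weighted accumulation of the E-step statistics; a single-feature sketch follows (array names follow the VMP sketch in Sec.~\ref{sec:learning} and are illustrative):
\begin{verbatim}
import numpy as np

def similarity_mstep(pi0, M, xs, ys):
    """pi0: (K_fg, V) fixed data-driven prior; M: (K_fg, K_fg)
    similarity; xs: list of (N_j,) word ids per image; ys: list of
    (N_j, K_fg) fg responsibilities. Returns updated prior pi0_hat."""
    counts = np.zeros_like(pi0)       # expected word counts per class
    for x, y in zip(xs, ys):
        np.add.at(counts.T, x, y)     # counts[k,v] += I(x_i=v) y_ik
    return pi0 + M @ counts           # inter-class similarity term
\end{verbatim}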
\section{Learning from additional data}
In this section, we discuss learning from additional data beyond the data for the WSOL task. This includes partially relevant data from other domains or datasets, and any additional but un-annotated data from the same domain.
\subsection{Bayesian Domain Adaptation}\label{sec:BDA}
Across different datasets or domains (such as images and video), the appearance of each object category will exhibit similarity, but vary sufficiently that directly using an appearance model learned in a source domain $s$ for inference in a target domain $t$ will perform poorly \cite{Torralba_cvpr11}. In our case this would correspond to directly applying a learned source appearance model $\pi^s_k$ to a new target domain $t$, $\boldsymbol{\pi}^t_k:=\boldsymbol{\pi}^s_k$. However, one hopes to be able to exploit similarities between the domains to learn a better model than using only the target domain alone \cite{yang2007crossDomainConcept,BergamoTorresani10,Tang_NIPS2012,Dai2007aaai}. In our case, the Bayesian (Multinomial-Dirichlet conjugate) form of our model is able to achieve this for WSOL by simply learning $\boldsymbol{\pi}^s_k$ for a source domain $s$ (Eq.~(\ref{eq:varUpdates})), and applying it as the prior ${\boldsymbol{\pi}_k^0}^t:=\boldsymbol{\pi}^s_k$ in the target $t$ -- which is then adapted to reflect the target domain statistics (Eq.~(\ref{eq:varUpdates})).
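In code, this Bayesian adaptation is a two-line pattern; \texttt{vmp\_learn} below is a hypothetical wrapper around the updates of Sec.~\ref{sec:learning}, not a function from any released implementation.
\begin{verbatim}
def adapt_domain(vmp_learn, src, tgt):
    """src/tgt: dicts of (words, locations, alpha) per domain.
    The source Dirichlet posterior pi_s becomes the target prior."""
    _, _, pi_s = vmp_learn(**src)        # learn appearance on source
    return vmp_learn(**tgt, pi0=pi_s)    # adapt: pi0^t := pi^s
\end{verbatim}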
\subsection{Semi-supervised learning (SSL)}
Beyond learning from annotated data in different but related domains, our framework can also be applied in a SSL context to learn from unlabelled data in the same domain to improve performance and/or reduce the annotation requirement. Specifically, images $j$ with known annotations are encoded as described in Sec.~\ref{sec:learning}, while each image without annotation is set to $\alpha_{j}^{fg}=0.1$, meaning that all topics/classes may potentially occur, but we expect few
simultaneously within one image. Unknown images can include those from
the same pool of classes but without annotation (for which the posterior
$q(\boldsymbol{\theta})$ will pick out the present classes), or those from a completely
disjoint pool of classes (for which $q(\boldsymbol{\theta})$ will
encode only background).
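Extending the clamping sketch of Sec.~\ref{sec:learning} to this semi-supervised setting is again a one-liner; the 0.1 value is the one quoted above, the rest is illustrative.
\begin{verbatim}
import numpy as np

def make_alpha_ssl(labels, labelled, K_bg, eps=0.1):
    """labels: (J, C) binary; labelled: (J,) bool mask. Unlabelled
    images get alpha_fg = eps: any class may occur, few expected."""
    fg = np.where(labelled[:, None], labels.astype(float), eps)
    return np.hstack([fg, np.ones((labels.shape[0], K_bg))])
\end{verbatim}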
\section{Experiments}
\label{experiments}
\subsection{Datasets, features and settings}
\label{datasets}
\noindent \textbf{Datasets}\quad We evaluate our model on three datasets, PASCAL VOC \cite{pascalvoc2007}, ImageNet \cite{imagenet_cvpr09} and YouTube-object video \cite{eth_biwi_00905}. The challenging PASCAL VOC 2007 dataset is now widely used for weakly supervised object localisation. A number of variants are used: \textit{VOC07-20} contains all 20 classes from VOC 2007 training set as defined in \cite{Sivaiccv2011} and was used in \cite{Sivaiccv2011,confeccvSivaRX12,zhiyuan12}; \textit{VOC07-6$\times$2} contains 6 classes with Left and Right poses considered as separate giving 12 classes in total and was used in \cite{Deselaers2012,Pandeyiccv2011,Sivaiccv2011,confeccvSivaRX12,zhiyuan12,TangCVPR14}.
The former obviously is more challenging than the latter. Note that \textit{VOC07-20} is different from the \textit{Pascal07-all} defined in \cite{Deselaers2012}, which actually contains 14 classes and uses the other 6 as fully annotated auxiliary data. We call it \textit{VOC07-14} for consistency, but do not use the other 6 auxiliary classes.
To evaluate our method in a larger-scale setting, we select all images with bounding box annotation in the ImageNet dataset containing 3624 object categories as in \cite{TangCVPR14}.
We also evaluate our model on videos, although it is designed primarily for individual images and does not exploit motion information during learning; only a simple temporal smoothing post-processing step is introduced (see Sec.~\ref{sub:Object-Localisation}). The YouTube-Object dataset \cite{eth_biwi_00905} is a weakly annotated dataset composed of 10 object classes in videos from YouTube. These 10 classes are a subset of the 20 VOC classes, which facilitates domain transfer experiments. \\
\noindent \textbf{Features}\quad By default, we use only a single appearance feature, namely SIFT, to compare directly with most prior WSOL work which uses the same feature. Given an image $j$, we compute $N_{j}$ 128-dimensional SIFT descriptors, regularly sampled every 5 pixels along both directions, and quantise them into a $2000$-word codebook using K-means clustering. Unlike other bag-of-words (BoW) approaches \cite{LiSocherFeiFei2009,wangbleifeifei08}, which discard spatial information entirely, we then represent
each image $j$ by the list of $N_j$ visual words and corresponding locations
$\{x_{i},l_{ai},l_{bi}\}_{i=1}^{N_j}$ where $\{l_{ai},l_{bi}\}$ are the coordinates of each word.
We additionally extract two more BoW features at the same regular grid locations to test the feature fusion performance. They are: (1) Colour-LAB: colour provides complementary information to SIFT gradients. We compute histograms over the three LAB channels with (8,16,16) bins respectively and concatenate them to produce a 40-dimensional feature vector. Visual words are then obtained by quantising the feature space using K-means with K=500.
(2) Local binary pattern (LBP) \cite{lbp2002}: 52-bin LBP feature vectors are computed and quantised into a 500-bin histogram.
\noindent \textbf{Settings and implementation details}\quad For our model, we set the foreground topic number $K^{fg}$ to be equal to the number of classes, and $K^{bg}=20$ for background topics. $\alpha$ is set to 0 or 1 as discussed in Sec.~\ref{sec:learning}, and $\pi^0$ is initialised by Eq.~(\ref{eq:appPrior}). $\mu^0$ is initialised to the centre of the image area and $\Lambda^0$ to half the image size. We run Eqs.~(\ref{eq:theta})-(\ref{eq:varUpdates}) for a fixed $100$ VMP iterations. The localisation performance is measured using CorLoc \cite{eth_biwi_00905,TangCVPR14}: an object is considered to be correctly localised in a given image if the overlap between the localisation box and the ground truth (any instance of the target class) is greater than 50\%. The CorLoc accuracy is then computed as the percentage (\%) of correctly localised images for each target class. The same measure is used for all methods compared in our experiments.
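For clarity, the CorLoc computation reduces to an intersection-over-union test per image; a sketch with boxes as (x0, y0, x1, y1) tuples:
\begin{verbatim}
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def corloc(preds, gts, thr=0.5):
    """preds: one box per image; gts: ground-truth boxes per image.
    Correct if the prediction overlaps any instance by more than thr."""
    hits = [any(iou(p, g) > thr for g in gt)
            for p, gt in zip(preds, gts)]
    return 100.0 * np.mean(hits)
\end{verbatim}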
\subsection{Comparison with state-of-the-art}
\label{soa}
\subsubsection{Results on VOC dataset}
\label{sec:cmpOnVOC}
\noindent \textbf{Competitors}\quad We compare our joint modelling approach to the following state-of-the-art competitors:
\vspace{5pt}
\hangafter=1
\setlength{\hangindent}{2em}
\noindent
\textit{Deselaers \emph{et al}~ \cite{Deselaers2012}} A CRF-based multi-instance approach that localises object instances while learning object appearance. They report performance both with a single feature (GIST) and four appearance features (GIST, colour histogram, BoW of SURF, and HOG).
\hangafter=1
\setlength{\hangindent}{2em}
\noindent
\textit{Pandey and Lazebnik \cite{Pandeyiccv2011}} They adapt the fully supervised deformable part-based models to address the weakly supervised localisation problem.
\hangafter=1
\setlength{\hangindent}{2em}
\noindent
\textit{Siva and Xiang \cite{Sivaiccv2011}} A greedy search method based on a genetic algorithm that localises the optimal object bounding box location against a cost function combining object saliency, intra-class and inter-class cues.
\hangafter=1
\setlength{\hangindent}{2em}
\noindent \textit{Siva \emph{et al}~ NM \cite{confeccvSivaRX12}} A simple negative mining (NM) approach which shows that inter-class is a stronger cue than the intra-class one when used properly.
\hangafter=1
\setlength{\hangindent}{2em}
\noindent \textit{Siva \emph{et al}~ OS \cite{Siva_2013_CVPR}} The negative mining approach above is extended to mine object saliency (OS) information from a large corpus of unlabelled images. This can be considered a hybrid of the object saliency approach in \cite{Alexe_TPAMI_2012} and the negative mining work in \cite{confeccvSivaRX12}.
\hangafter=1
\setlength{\hangindent}{2em}
\noindent \textit{Shi \emph{et al}~ \cite{zhiyuan12}} A ranking based transfer learning approach using an auxiliary dataset to score each candidate bounding box location in an image according to the degree of overlap with the unknown true location.
\hangafter=1
\setlength{\hangindent}{2em}
\noindent \textit{Zhu \emph{et al}~ \cite{zhu2014unsupervised}} An unsupervised saliency guided approach to localise an object in a weakly labelled image in a multiple instance learning framework.
\hangafter=1
\setlength{\hangindent}{2em}
\noindent \textit{Tang \emph{et al}~ \cite{TangCVPR14}} An optimisation-centric approach that uses a convex relaxation of the MIL formulation.
\vspace{0.2cm}
Note that a number of the competitors \cite{Deselaers2012,zhiyuan12,Sivaiccv2011,confeccvSivaRX12,TangCVPR14} required an objectness measure trained on an additional auxiliary dataset that we do not use. Although Shi \emph{et al}~\cite{zhiyuan12} evaluated all 20 classes, a randomly selected 10 were used as auxiliary data with bounding-box annotations. Pandey and Lazebnik \cite{Pandeyiccv2011} set the aspect ratio manually and/or performed cropping on the obtained bounding boxes.
\begin{table}[ht]
\scriptsize
\begin{center}
\begin{tabular}{l | l | l| l |l | l| l}
\hline
\multicolumn{1}{c|}{\multirow{2}{*}{Method} }&\multicolumn{3}{c|}{Initialisation} & \multicolumn{3}{c}{Refined by detector} \\
\hhline{~------}
& \textit{6$\times$2} & \ \textit{14} & \ \textit{20} & \textit{6$\times$2} & \ \textit{14}& \ \textit{20} \\
\hline
\hline
Deselaers \emph{et al}~ \cite{Deselaers2012} \\
\hhline{~------}
\hspace{10pt} a. single feature & 35 & 21 & - & 40 & 24 & - \\
\hhline{~------}
\hspace{10pt} b. all four features & 39 & 22 & - & 50 & 28 & - \\
\hline
Pandey and Lazebnik \cite{Pandeyiccv2011} $^*$ \\
\hhline{~------}
\hspace{10pt} a. before cropping & 36.7 & 20.0 & - & 59.3& 29.0 & - \\
\hhline{~------}
\hspace{10pt} b. after cropping & 43.7& 23.0 & - & 61.1 & 30.3 & - \\
\hline
Siva and Xiang \cite{Sivaiccv2011} & 40 & - & 28.9 & 49 & - & 30.4 \\
\hline
Siva \emph{et al}~ NM \cite{confeccvSivaRX12} & 37.1& - & 29.0 & 46 & - & - \\
\hline
Siva \emph{et al}~ OS \cite{Siva_2013_CVPR} & 42.4& - & 31.1 & 55 & - & 32.0\\
\hline
Shi \emph{et al}~ \cite{zhiyuan12} $^+$ & 39.7 & - & 32.1 & - & - & - \\
\hline
Zhu \emph{et al}~ \cite{zhu2014unsupervised} & - & - & - & - & 31 & - \\
\hline
Tang \emph{et al}~ \cite{TangCVPR14} & 39 & - & - & - & - & -\\
\hline
Cinbis \emph{et al}~ \cite{Cinbis_cinbis_2014} & - & - & - & - & - & \textbf{38.8}\\
\hline
\hline
Our-Sampling & 50.8 & \textbf{32.2} &\textbf{34.1}& 65.5 & \textbf{33.8} & 36.2 \\
\hline
Our-Gaussian & \textbf{51.5} & 30.5 & 31.2 & \textbf{66.1} & 32.5 & 33.4 \\
\hline
\hline
Our-Sampling+prior & 51.2 & \textbf{33.4}& \textbf{36.1} & 65.9 & \textbf{35.4} & 38.3 \\
\hline
Our-Gaussian+prior & \textbf{51.8} &31.1 & 33.5 & \textbf{66.7} & 33.0 & 35.8 \\
\hline
\end{tabular}
\end{center}
\caption{Comparison with state-of-the-art competitors on the three variations of the PASCAL VOC 2007 dataset. \footnotesize{$^*$~Requires aspect ratio to be set manually. $^+$~Require 10 out of the 20 classes fully annotated with bounding-boxes and used as auxiliary data.}}
\label{state-of-art}
\end{table}
\noindent \textbf{Initial localisation}\quad Table \ref{state-of-art} shows that for initial annotation accuracy our model consistently outperforms all competitors over all three VOC variants, sometimes by large margins. This is mainly due to the unique joint modelling approach taken by our method, and its ability to integrate prior spatial and appearance knowledge in a principled way. Note that the prior knowledge is either based on first principles (spatial and appearance) or computed from the data without any additional human intervention (appearance). Our two object localisation methods (Our-Sampling and Our-Gaussian) vary in performance over different-sized datasets. Our-Gaussian performs better on the relatively simple dataset (6$\times$2), where most images contain only one object, because our Gaussian location model fits such single compact objects easily. In contrast, Our-Sampling is better in the more complicated setting (20 classes), where it is more common for multiple objects to co-exist in one image.
\noindent \textbf{Refined by detector}\quad After the initial annotation of the weakly labelled images, a conventional strong object detector can be trained using these annotations as ground truth. The trained detector can then be used to iteratively refine the object location. We follow \cite{Pandeyiccv2011,Sivaiccv2011} in exploiting a deformable part-based model (DPM) detector\footnote{Version 3.0 is used for fair comparison against most published results obtained using the same version.} \cite{Felzenszwalb2012partbased} for one iteration to refine the initial annotation. Table \ref{state-of-art} shows that again our model outperforms almost all competitors by a clear margin for all three datasets (see the supplementary material for more detailed per-class comparisons). Very recently, \cite{Cinbis_cinbis_2014} achieved similar performance by training a multi-instance SVM with a more powerful fisher vector based representation.
\noindent \textbf{With appearance similarity prior}\quad As described before, the proposed framework can exploit the appearance similarity prior across classes. Although the actual appearance similarity between classes is hard to calculate, we can approximate it by computing relatedness over the WordNet semantic tree \cite{wordnetbook1998}. Fig.~\ref{fig:knowledge} shows the pairwise relatedness among the 20 classes, generated using the Lin distance of \cite{Pedersen2004}. The diagonal of the matrix verifies that classes are most similar to themselves. Leaf nodes (blue) correspond to the classes of VOC-20; classes that inherit from the same subtree should show more similar appearance. A pairwise similarity matrix is then calculated from the tree structure and used to correlate class appearances as explained earlier. The bottom two rows of Table~\ref{state-of-art} show the localisation accuracy with the appearance similarity prior. The prior clearly improves the performance of both variants of our model in all experiments. It is interesting to note that the performance improves more on VOC-20 than on VOC-6$\times$2. This is because there is more opportunity to share related appearance as the number of classes increases; categories in 6$\times$2 are generally more dissimilar, so there is less benefit from the correlation.
\begin{figure}[t]
\begin{minipage}[b]{4.3cm}
\begin{subtable}{1\textwidth}
\includegraphics[height=3.4cm]{fig/wordnet_structure}
\caption{}
\end{subtable}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[b]{3.7cm}
\begin{subtable}{1\textwidth}
\includegraphics[height=3.7cm]{fig/sim}
\caption{}
\end{subtable}
\end{minipage}
\caption{ (a) A hierarchical structure of the 20 PASCAL VOC classes using WordNet. (b) The class similarity matrix. }
\label{fig:knowledge}
\end{figure}
\begin{figure*}[ht!]
\begin{center}
\includegraphics[width=\linewidth]{fig/fig2_foreground.pdf}
\end{center}
\vspace{-0.3cm}
\caption{Top row in each subfigure: examples of object localisation using our-sampling and our-Gaussian. Bottom row: illustration of what is learned by the object (foreground) topics via heat map (brighter means object is more likely). The first four rows show some examples of PASCAL VOC and last two rows are selected from ImageNet. }
\label{fig:objecttopic}
\end{figure*}
\noindent \textbf{What has been learned}\quad Fig.~\ref{fig:objecttopic} gives examples of the localisation results and illustrates what has been learned for the foreground object classes. For the latter, we show the response of each learned object topic (i.e.~the posterior probability of the topic given the visual word) as a gray-level image, or heat map (the brighter, the higher the probability that the object is present at each image location). These examples show that the foreground topics indeed capture what each object class looks like and can distinguish it from the background and between different object classes. For instance, Fig.~\ref{fig:objecttopic}{(c)} shows that the motorbike heat map is quite accurately selective, with minimal response obtained on the other vehicular clutter. Fig.~\ref{fig:objecttopic}{(e)} indicates how the Gaussian can sometimes give a better bounding box. The opposite is observed in Fig.~\ref{fig:objecttopic}{(f)}, where the single Gaussian assumption is not ideal when the foreground topic has a less compact response. Selectivity is illustrated by Fig.~\ref{fig:objecttopic}{(c,d)}, Fig.~\ref{fig:objecttopic}{(h,i)} and Fig.~\ref{fig:objecttopic}{(g,k)}, which show the same images, but with detection results for different co-occurring objects. In each case, the relevant object has been successfully selected while ``explaining away'' the potentially distracting alternative. Our method may fail if background clutter or objects of no interest dominate the image (Fig.~\ref{fig:objecttopic}(l,m,u)). For example, in Fig.~\ref{fig:objecttopic}{(l)}, a bridge structure resembles the boat in Fig.~\ref{fig:objecttopic}{(a)}, resulting in a strong response from the boat topic, whilst the actual boat, although picked up, is small and overwhelmed by the false response.
A key strength of our framework is the explicit modelling of background without any supervision. This allows background pixels to be explained away, reducing confusion with foreground objects and hence improving localisation accuracy. Fig.~\ref{fig:backgroundtopic} illustrates this via plots of the background topic response (heat maps): some background topics are often correlated with common semantic background components such as sky, grass, road and water, despite none of these being annotated.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{fig/fig3_background.pdf}
\end{center}
\caption{Illustration of the learned background topics.}
\label{fig:backgroundtopic}
\end{figure}
\noindent \textbf{Weakly supervised detector}\quad The ultimate goal of weakly supervised object localisation is to learn a weakly supervised detector. This is achieved by feeding the localised objects into an off-the-shelf detector training model. The deformable part based model (DPM) in \cite{Felzenszwalb2012partbased} is used and this weakly supervised (WS) detector is compared against a fully supervised (FS) one with the same DPM model (version 3.0).
Specifically, Table \ref{tab:detection} compares the mean average precision (mAP) of detection performance on both the VOC-6$\times$2 and VOC-20 test datasets among previously reported WS detector results, ours and the fully supervised detector \cite{Felzenszwalb2012partbased}. Due to the better localisation performance on the weakly supervised training images, our approach is able to reduce the gap between the WS detector and the FS detector. The detailed per-class results are included in the supplementary material; they show that for classes with high localisation accuracy (e.g.~bicycle, car, motorbike, train), the WS detector is often as good as the FS one, whilst for those with very low localisation accuracy (e.g.~bottle and pottedplant), the WS detector fails completely.
\begin{table}[ht]
\footnotesize
\begin{center}
\setlength{\tabcolsep}{0.2em}
\begin{tabular}{l || l | l |l || l ||l}
\hline
Method & Deselaers \cite{Deselaers2012} & Pandey \cite{Pandeyiccv2011} & Siva \cite{Sivaiccv2011} & \textbf{Ours} & Fully Supervised \\
\hline
\hline
6$\times$2 & 21 & 20.8 & - & 26.1 &33.0\\
\hline
20 & - & - &13.9 & 17.2 &26.3\\
\hline
\end{tabular}
\end{center}
\caption{Performance of strong detectors trained using annotations obtained by different WSOL methods}
\label{tab:detection}
\end{table}
\subsubsection{Results on ImageNet dataset}
\begin{table}[ht]
\footnotesize
\begin{center}
\setlength{\tabcolsep}{0.2em}
\begin{tabular}{l || l}
\hline
Method & Initialisation \\
\hline
\hline
Alexe \emph{et al}~ \cite{Alexe_TPAMI_2012} & 37.4\\
\hline
Tang \emph{et al}~ \cite{TangCVPR14} & 53.2\\
\hline
Our-Sampling & \textbf{57.6}\\
\hline
\end{tabular}
\end{center}
\caption{Initial annotation accuracy on ImageNet dataset}
\label{tab:imagenet}
\end{table}
Table \ref{tab:imagenet} shows the initial annotation accuracy of different methods for the much larger $3624$-class ImageNet dataset. Note that the result of Alexe \emph{et al}~ \cite{Alexe_TPAMI_2012} is taken from Table 4 of \cite{TangCVPR14}. Although the annotation accuracy could be further improved by training an object detector to refine the annotation as shown in Table~\ref{state-of-art}, this step is omitted in our experiment as none of the competitors attempted it.
For such a large scale learning problem, loading all the image features into memory is a challenge for our joint learning method. A standard solution is taken, that is, to process in batches of 100 classes. Joint learning is performed within each batch but not across batches; our model is thus not used to its full potential. Table \ref{tab:imagenet} shows that our method achieves the best result (57.6\%). Note that \cite{Alexe_TPAMI_2012} is a very simple baseline as it simply takes the top-scoring objectness box. Recently, more sophisticated transfer-based techniques \cite{Guillaumin_cvpr12,Vezhnevets_CVPR_2014} were evaluated on ImageNet, but their results were obtained on a different subset of ImageNet and are thus not directly comparable here.
To investigate the effect of the similarity prior in this larger dataset, we randomly choose 500 small (containing around 100 images each) leaf-node classes from ImageNet for joint-learning with an inter-class similarity prior. This was the largest dataset size that could simultaneously fit in the memory of our platform\footnote{Our learning algorithm could potentially be modified to process all $3624$ classes in batches.}. Performing joint learning with inter-class correlation on this ImageNet subset, we achieve 58.8\% annotation accuracy on the 500 classes compared to 55.4\% without using the similarity prior.
\subsubsection{Results on YouTube-object dataset}
\label{sec:cmpOnVideo}
Our main competitors on YouTube-Object (YTO) are \cite{eth_biwi_00905} and \cite{TangECCV14}. Prest \emph{et al}~ \cite{eth_biwi_00905} first performed spatio-temporal segmentation of video into a set of 3D tubes, and subsequently searched for the best object location. Very recently, \cite{TangECCV14} simultaneously localised objects of the same class across a set of video clips (co-localisation) using the Frank-Wolfe algorithm. Note that there are some recently published studies on weakly supervised object segmentation from video \cite{Tang_CVPR2013}. These are not directly comparable as they did not report results based on the standard YTO bounding-box annotations. Two variants of our model are compared here: Our-Sampling is the method evaluated above for individual images; used here, it ignores the temporal continuity of the frames in a video. Our-Smooth is the simple extension of Our-Sampling to video object localisation. As described in Sec.~\ref{sub:Object-Localisation}, temporal information is used to enforce a smooth change of object location over consecutive frames. The way temporal information is exploited is thus much less elaborate than in \cite{eth_biwi_00905}. For all methods compared, we evaluate localisation performance on the key frames, which are provided with ground-truth labels by \cite{eth_biwi_00905}.
Table~\ref{tab:youtube} shows that even without using any temporal information and operating on key frames only, Our-sampling outperforms the method in \cite{eth_biwi_00905}. Our-Smooth further improves the performance and the localisation accuracy of 32.2\% is very close to the upper bound result (34.8\%) suggested by \cite{eth_biwi_00905}, which is the best possible result from oracle tube extraction. Fig.~\ref{ytb_result} shows some examples of video object localisation using Our-Smooth. We note that all these results have been exceeded (50.1\% accuracy) recently by a model purposefully designed for video segmentation \cite{Papazoglou_iccv13}, which performed much more intensive spatio-temporal modelling and used superpixel segmentation within each frame and motion segmentation across frames.
\begin{table}[h]
\footnotesize
\centering
\begin{tabular}{l| l| l |c|c|c}
\hline
Categories & \cite{eth_biwi_00905} & \cite{TangECCV14} & Our-Sampling & Our-Smooth & \cite{Papazoglou_iccv13} \\
\hline
\hline
aeroplane & 51.7& 27.5&40.6 &45.9 & 65.4 \\
\hline
bird & 17.5 & 33.3&39.8 & 40.6 & 67.3 \\
\hline
boat & 34.4 &27.8& 33.3& 36.4 & 38.9 \\
\hline
car & 34.7 & 34.1 & 34.1&33.9 & 65.2\\
\hline
cat & 22.3 & 42.0&35.3 &35.3 & 46.3 \\
\hline
cow & 17.9 & 28.4&18.9 &22.1 & 40.2 \\
\hline
dog & 13.5 &35.7& 27.0 &27.2 & 65.3 \\
\hline
horse &26.7 &35.6 & 21.9 &25.2 & 48.4 \\
\hline
motorbike & 41.2 & 22.0 & 17.6 &20.0 & 39.0\\
\hline
train & 25.0 & 25.0& 32.6 &35.8 & 25.0 \\
\hline
\hline
Average & 28.5 & 31.1& 30.1 & 32.2 & 50.1\\
\hline
\end{tabular}
\caption{Performance comparison on YouTube-object}
\label{tab:youtube}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig/YTB_result.pdf}
\caption{Examples of video object localisation}
\label{ytb_result}
\end{figure}
\subsection{Bayesian domain adaptation}
\label{sec_dt}
We next evaluate the potential of our model for weakly supervised cross-domain transfer learning using YouTube-Object and VOC07-10 as the two domains (we choose the same 10 classes from VOC07-20 as in YouTube-Object). One domain contains continuous and highly varying video data, and the other contains high resolution but cluttered still images. We consider the following two non-transfer baselines:
\vspace{0.5em}
\hangafter=1
\setlength{\hangindent}{2em}
\noindent \textit{YTO, VOC} The first baseline is the original performance on YouTube-Object and VOC07-10 classes, solely using target domain data. \textit{YTO} is exactly the same as Our-Sampling described in Sec.~\ref{sec:cmpOnVideo}, while \textit{VOC} is trained with 10 classes from VOC07-20 using the same setting described in Sec.~\ref{sec:cmpOnVOC}.
\hangafter=1
\setlength{\hangindent}{2em}
\noindent \textit{All$\rightarrow$YTO, All$\rightarrow$VOC} The second baseline simply combines the training data of YouTube-Object and VOC. One model trained with the data of both domains is used to localise objects on YouTube-Object (\textit{A$\rightarrow$Y}) and VOC07-10 (\textit{A$\rightarrow$V}).
\vspace{0.5em}
We consider two directions of knowledge transfer between YouTube-Object and VOC07-10, and compare the above baselines with our domain adaptation method: \textit{V$\rightarrow$Y} is initialised with an appearance prior transferred from VOC07-10 and adapted on the YTO data. Conversely, \textit{Y$\rightarrow$V} adapts the YTO appearance prior to VOC07-10. Table~\ref{tab:youtube_transfer} shows that our Bayesian domain adaptation method performs better than the baselines on both YouTube-Object and VOC07-10. In contrast, the standard combination (A$\rightarrow$Y and A$\rightarrow$V) shows little advantage over solely using target domain data. Note that unlike prior studies of video$\rightarrow$image \cite{eth_biwi_00905} or image$\rightarrow$video \cite{Tang_NIPS2012} that adapt detectors with fully labelled data, our task is to adapt weakly labelled data.
We also vary the amount of target domain data and evaluate its effect on the domain transfer performance. Fig.~\ref{fig:domaintransfer} shows that our model provides a bigger margin of benefit given less target domain data. This can be easily understood because with a small quantity of training examples there is insufficient data to learn the object appearance well and the impact of the knowledge transfer is thus more significant.
\begin{table}[t]
\setlength{\tabcolsep}{0.5em}
\footnotesize
\centering
\begin{tabular}{l||l| l| l|| l|l|l}
\hline
\multirow{2}{*}{Categories } & \multicolumn{3}{c||}{YTO} &\multicolumn{3}{c}{VOC} \\
\cline{2-7}
& \hspace{0.05cm} {Y} & {A$\rightarrow$Y} & {V$\rightarrow$Y} &\hspace{0.05cm} {V} & {A$\rightarrow$V} &{Y$\rightarrow$V} \\
\hline
\hline
aeroplane & 40.6 &40.8& \textbf{45.8} & 57.5& 58.1& \textbf{58.7} \\
\hline
bird & 39.8 &\textbf{40.3}& 38.8 & 29.8 &30.5& \textbf{33.7} \\
\hline
boat& 33.3 &33.4& \textbf{38.8} & 28.0 &27.9& \textbf{29.0} \\
\hline
car & \textbf{34.1} &33.9 &33.6 & 39.1 &39.1& \textbf{44.4} \\
\hline
cat & 35.3 &35.3& \textbf{38.8} & 59.0 &\textbf{59.3}& 58.6 \\
\hline
cow & 18.9 &19.0& \textbf{27.7} & 36.7 & 36.9&\textbf{38.9} \\
\hline
dog & 27.0 &\textbf{27.1}& 26.7 & 46.5 &47.4& \textbf{48.3} \\
\hline
horse &21.9 &22.1& \textbf{26.1} & 53.2 &53.5 &\textbf{55.5} \\
\hline
motorbike & 17.6 &\textbf{17.9}& 17.5 & 55.6 & 55.2&\textbf{58.1} \\
\hline
train & 32.6 & 32.6&\textbf{36.2} & 54.7 &54.5& \textbf{56.3} \\
\hline
\hline
\textbf{Average} &30.1 &30.2& \textbf{33.0} & 46.0 & 46.2&\textbf{48.1} \\
\hline
\end{tabular}
\caption{Cross-domain transfer learning results}
\label{tab:youtube_transfer}
\end{table}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.6\linewidth]{fig/fig_da.pdf}
\end{center}
\vspace{-0.3cm}
\caption{Domain adaptation provides more benefit with fewer target domain samples.}
\label{fig:domaintransfer}
\end{figure}
\subsection{Semi-supervised Learning}
\label{semi-supe}
One important advantage of our model is the ability to utilise unlabelled data to further reduce the manual annotation requirements. To demonstrate this we randomly select $10\%$ of the \textit{VOC07-6$\times$2} data as our weakly labelled training data, and then vary the additional unlabelled data used. Note that 10\% labelled data corresponds to around only \textit{5 weakly labelled images per class} for the \textit{VOC07-6$\times$2} dataset, which is significantly less than what any previous method has exploited. Two evaluation procedures are considered: (i) Evaluating localisation performance on the initially annotated $10\%$ (standard WSOL task); and (ii) WSOL performance on the held out \textit{VOC07-6$\times$2} test set\footnote{To localise objects in a test image, we only need to iterate Eqs.~(\ref{eq:theta})-(\ref{eq:y}) instead of (\ref{eq:theta})-(\ref{eq:varUpdates}). That is, the object appearance is considered fixed and does not need to be updated. This both reduces the cost of each iteration and also makes convergence more rapid.}.
The latter corresponds to an online application scenario where the localisation model is trained on one database and needs to be applied online to localise objects in incoming weakly labelled images. We vary the additional data across three conditions: (1) $6R$: add the remaining 90\% of data for the 6 target classes but without labels; (2) $100U$: add all images from 100 unrelated ImageNet classes without labels; (3) $6R+100U$: add both of the above. There are two questions to answer: whether the model can exploit the related data when it comes without labels (6R), and whether it can avoid being confused by a vast quantity of unrelated data (100U).
The results are shown in Table~\ref{tab:sslBar}, where the ratio of relevant to irrelevant data in the additional unlabelled samples is shown in the second column. From the results, we can draw the following conclusions: (1) As expected, the model performs poorly with little data (10\%L). However, it improves significantly with some relevant but unlabelled data (the standard SSL setting, 10\%L+6R). Moreover, this SSL result is almost as good as when all the data is labelled (100\%L). (2) If \emph{only} irrelevant data is added to the small labelled seed, not only does the performance not degrade, it increases noticeably (10\%L vs. 10\%L+100U). (3) If both relevant and irrelevant data are added -- corresponding to the realistic scenario where an automatic process gathers a pool of potentially relevant data which, without any screening, is a mix of relevant and irrelevant data for the target problem -- the performance improves to not far off the fully annotated case (10\%L vs. 10\%L+6R+100U vs. 100\%L). As expected, the performance of 10\%L+6R+100U is weaker than 10\%L+6R: if one manually went through the unlabelled data and removed the irrelevant images, leaving only the relevant ones, it would certainly benefit the model. But note that the decrease in performance is small (47.1\% to 43.5\%). (4) If the irrelevant data is added to the fully annotated dataset, the performance improves slightly (100\%L vs. 100\%L+100U), which shows that our model is robust to the potential distraction from a large amount of unlabelled and irrelevant data. This is expected in SSL, which typically benefits only when the amount of labelled data is small. These results show that our approach has good promise for effective use in realistic scenarios of learning from only a few weak annotations and a large volume of only partially relevant unlabelled data. This is illustrated visually in Fig.~\ref{fig:semilearned}, where unlabelled data helps to learn a better object model. Finally, the similarly good results on the held-out test set verify that our model is indeed learning a good generalisable localisation mechanism and is not over-fitted to the training data.
\begin{table}[ht]
\footnotesize
\begin{center}
\begin{tabular}{l|l || c | c }
\hline
\multicolumn{2}{c}{VOC07-$6\times2$} & \multicolumn{2}{c}{Data for Localisation}\\
\hline
Data for Training & ratio of R:U & \textit{10\%L} & \textit{Test set} \\
\hline
\hline
\textit{10\%L} & -& 27.1 & 28.0 \\
\hline
\textit{10\%L+6R} & 1 & 47.1 & 42.3 \\
\hline
\textit{10\%L+100U} & 0& 35.8 & 32.4 \\
\hline
\textit{10\%L+6R+100U} & 0.04& 43.5 & 38.1 \\
\hline
\hline
\textit{100\%L} & -& 50.3 & 46.2 \\
\hline
\textit{100\%L+100U} & 0& 50.7 & 47.5\\
\hline
\end{tabular}
\end{center}
\caption{Localisation performance of semi-supervised learning using \textit{Our-Sampling}}
\label{tab:sslBar}
\end{table}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{fig/fig4_semilearned.pdf}
\end{center}
\vspace{-0.3cm}
\caption{Unlabelled data improves foreground heat maps.}
\label{fig:semilearned}
\end{figure}
\subsection{Computational cost}
\label{cost}
Our model is efficient both in learning and inference, with a complexity $\mathcal{O}(NMK)$ for $N$ images, $M$ observations (visual words) per image, and $K$ classes. The experiments were done on a 2.6GHz PC with a single-threaded Matlab implementation. Training the model on all 5,011 VOC07 images required 3 hours and a peak of 6 GB of memory to learn a joint model for 20 classes.
Our Bayesian topic inference process not only enables prior knowledge to be used, but also achieves a 10-fold improvement in convergence time compared to the EM inference used by most conventional topic models with point-estimated Dirichlet topics. Online inference for a new test image took about 0.5 seconds. After model learning, for object localisation in training images, direct Gaussian localisation is effectively free and heat-map sampling took around 0.6 seconds per image. These statistics compare favourably to alternatives: \cite{Deselaers2012} reported 2 hours to train on 100 images, while our Matlab implementations of \cite{confeccvSivaRX12}, \cite{Sivaiccv2011} and \cite{blei2003annotated_model} took 10, 15 and 20 hours respectively to localise objects in all 5,011 images.
\section{Conclusion and Future work} We have presented an effective and efficient model for weakly-supervised object localisation (WSOL). Our approach surpasses the performance of prior methods and obtains state-of-the-art results on PASCAL VOC 2007 and ImageNet datasets. It can also be applied to the YouTube-Object dataset, and to domain transfer between these image and video datasets.
With joint multi-label modelling, instead of the independent learning of previous work, our model enables: (1) exploiting multiple object co-existence within images, (2) learning a single background shared across classes and (3) dealing with large scale data more efficiently than prior approaches. Our generative Bayesian formulation enables a number of novel features: (1) integrating appearance and geometry priors, (2) exploiting inter-category appearance similarity and (3) exploiting different but related datasets via domain adaptation. Furthermore, it is able to use (potentially easier to obtain) unlabelled data with a challenging mix of relevant and irrelevant images to obtain a reasonable localiser when labelled data are in short supply for the target classes.
In this study we showed the usefulness of top-down, cross-class and domain transfer priors -- demonstrating the model's potential to scale learning through transfer \cite{zhiyuan12,Guillaumin_cvpr12,Kuettel2012}. These contributions bring us significantly closer to the goal of scalable learning of strong models from weakly-annotated non-purpose collected data on the Internet.
It is worth pointing out that apart from adding a few new features (e.g.~foreground-background topic separation and effective supervision via topic clamping), our generative Bayesian topic model is not fundamentally different from existing topic models used for image understanding \cite{LiSocherFeiFei2009,Philbinijcv2010}. Nevertheless, state-of-the-art WSOL performance is obtained compared with more popular, more highly engineered and complex, and slower discriminative models. This not only shows the importance of the change of paradigm from independent discriminative learning to joint generative learning, but also suggests that sometimes it is not necessary to invent a completely new model; finding the missing ingredients that make an existing model work can be equally important.
Possible directions for future work include: automatically determining the optimal number of topics $K$ \cite{sudderth2008tdp_visual}, learning a deeper multi-layered \cite{sudderth2008tdp_visual} model by exploiting parts \cite{Crandalleccv06,sudderth2008tdp_visual} and attributes \cite{fu2012attribsocial} rather than the current flat model; learning rather than pre-defining object-appearance similarity \cite{Salakhutdinov2011cvpr}; and learning from realistically noisy non-purpose collected labels \cite{fu2012attribsocial}.
\bibliographystyle{IEEEtran}
\section{Introduction}
In this paper we review Shelah's strong covering property and its
applications, in particular, to pairs $(W,V)$ of models of set
theory with $V=W[r],$ for some real $r$. We also consider the
consistency results of Shelah and Woodin on the failure of $CH$
by adding a real and prove some related results. Some other
results are obtained too.
The structure of the paper is as follows: In $\S 2$ we present an
interesting result of Vanliere [6] on blowing up the continuum
with a real. In $ \S 3$ we give some applications of Shelah's
strong covering property. In $\S 4$ we consider the work of Shelah
and Woodin stated above and prove some new results. Finally in $\S
5$ we state some problems.
\section{On a theorem of Vanliere}
In this section we prove the following result of Vanliere [6]:
\begin{theorem} Assume $V=\mathbf{L}[X,a]$ where $X \subseteq \omega_n$
for some $n < \omega,$ and $a \subseteq \omega.$ If $\mathbf{L}[X] \models
ZFC+GCH $ and the cardinals of $\mathbf{L}[X]$ are the
true cardinals, then $GCH$ holds in $V$.
\end{theorem}
\begin{proof} Let $\kappa$ be an infinite cardinal. We prove the
following:
$(*)_{\kappa}:$ $\hspace{2.cm}$For any $Y \subseteq \kappa$ there
is an ordinal $\alpha < \kappa^+$ and
$\hspace{2.85cm}$ a set $Z \in \mathbf{L}[X], Z \subseteq \kappa$ such that $Y \in
\mathbf{L}_{\alpha}[Z,a].$
Then it will follow that $\mathcal{P}(\kappa) \subseteq \bigcup_{\alpha <
\kappa^+} \bigcup_{Z \in \mathcal{P}^{\mathbf{L}[X]}(\kappa)} \mathbf{L}_{\alpha}[Z,a],$ and
hence
\begin{center}
$2^{\kappa} \leq \sum_{\alpha < \kappa^+} \sum_{Z \in
\mathcal{P}^{\mathbf{L}[X]}(\kappa)}| \mathbf{L}_{\alpha}[Z,a]| \leq \kappa^{+} \cdot
(2^{\kappa})^{\mathbf{L}[X]} \cdot \kappa= \kappa^+$
\end{center}
which gives the result. Now we return to the proof of
$(*)_{\kappa}$.
{\bf Case 1}. $\kappa \geq \aleph_n$.
Let $Y \subseteq \kappa.$ Let $\theta$ be large enough regular
such that $Y \in \mathbf{L}_{\theta}[X,a].$ Let $N \prec \mathbf{L}_{\theta}[X,a]$
be such that $| N |= \kappa, N \cap \kappa^{+} \in
\kappa^{+}$ and $\kappa \cup \{Y, X, a\} \subseteq N$. By the
condensation lemma there are $\alpha < \kappa^+$ and $\pi$ such
that $\pi: N \cong \mathbf{L}_{\alpha}[X, a].$ Then $Y=\pi(Y) \in
\mathbf{L}_{\alpha}[X, a].$ Thus $(*)_{\kappa}$ follows.
{\bf Case 2.} $\kappa < \aleph_n$.
We note that the above argument does not work in this case. Thus
another approach is needed. To continue the work, we state a
general result (again due to Vanliere) which is of interest in its
own right.
\begin{lemma} Suppose $\mu \leq \kappa < \lambda \leq \nu$ are
infinite cardinals, $\lambda$ regular. Suppose that $a \subseteq
\mu, Y \subseteq \kappa, Z \subseteq \lambda,$ and $X \subseteq
\nu$ are such that $V=\mathbf{L}[X,a], Z \in \mathbf{L}[X], Y \in \mathbf{L}[Z,a]$ and
$\lambda_{\mathbf{L}[X]}^+= \lambda^+.$ Then there exists a proper initial
segment $Z^{'}$ of $Z$ such that $Z^{'} \in \mathbf{L}[X]$ and $Y \in
\mathbf{L}[Z^{'},a].$
\end{lemma}
\begin{proof} Let $\theta \geq \nu$ be regular such that $Y \in
\mathbf{L}_{\theta}[Z,a].$ Let $N \prec \mathbf{L}_{\theta}[Z,a]$ be such that $|
N |= \lambda, N \cap \lambda^{+} \in \lambda^{+}$ and $\lambda
\cup \{Y, Z, a\} \subseteq N$. By the condensation lemma we can
find $\delta < \lambda^+$ and $\pi$ such that $\pi: N \cong
\mathbf{L}_{\delta}[Z, a].$
In $V$, let $ \langle M_{i}: i < \lambda \rangle$ be a continuous
chain of elementary submodels of $\mathbf{L}_{\delta}[Z, a]$ with union
$\mathbf{L}_{\delta}[Z, a]$ such that for each $i < \lambda, M_{i}
\supseteq \kappa$, $| M_{i}| < \lambda$ and $M_{i} \cap
\lambda \in \lambda.$
In $\mathbf{L}[Z]$ let $ \langle W_{i}: i < \lambda \rangle$ be a
continuous chain of elementary submodels of $\mathbf{L}_{\delta}[Z]$ with
union $\mathbf{L}_{\delta}[Z]$ such that for each $i < \lambda, W_{i}
\supseteq \kappa$, $| W_{i}| < \lambda$ and $W_{i} \cap
\lambda \in \lambda.$
Now we work in $V$. Let $E= \{i < \lambda: M_{i}\cap
\mathbf{L}_{\delta}[Z]=W_{i} \}$. Then $E$ is a club of $\lambda.$ Pick $i
\in E$ such that $Y \in M_{i},$ and let $M=M_{i},$ and $W=W_{i}.$
By the condensation lemma let $\eta < \lambda$ and $\bar{\pi}$ be
such that $\bar{\pi}:M \cong \mathbf{L}_{\eta}[Z^{'}, a]$ where $Z^{'}=
\bar{\pi}[M \cap Z]=\bar{\pi}[(M \cap \lambda) \cap Z]=(M \cap
\lambda) \cap Z$, a proper initial segment of $Z$. Then
$Y=\bar{\pi}(Y) \in \mathbf{L}_{\eta}[Z^{'}, a]$ and $Z^{'} \subseteq \eta
< \lambda.$ It remains to observe that $Z^{'} \in \mathbf{L}[X]$ as $Z^{'}$
is an initial segment of $Z$. The lemma follows.
\end{proof}
We are now ready to complete the proof of Case 2. By Lemma 2.2 we
can find a bounded subset $X_n$ of $\omega_n$ such that $X_{n} \in
\mathbf{L}[X]$ and $Y \in \mathbf{L}[X_{n},a]$. Now trivially we can find a subset
$Z_{n-1}$ of $\omega_{n-1}$ such that $\mathbf{L}[X_{n}]=\mathbf{L}[Z_{n-1}],$ and
hence $Z_{n-1} \in \mathbf{L}[X]$ and $Y \in \mathbf{L}[Z_{n-1},a].$ Again by Lemma
2.2 we can find a bounded subset $X_{n-1}$ of $\omega_{n-1}$ such
that $X_{n-1} \in \mathbf{L}[X]$ and $Y \in \mathbf{L}[X_{n-1},a],$ and then we find
a subset $Z_{n-2}$ of $\omega_{n-2}$ such that
$\mathbf{L}[X_{n-1}]=\mathbf{L}[Z_{n-2}]$. In this way we can finally find a subset
$Z$ of $\kappa$ such that $Z \in \mathbf{L}[X]$ and $Y \in \mathbf{L}[Z,a]$. Then as
in case 1, for some $\alpha < \kappa^+, Y \in \mathbf{L}_{\alpha}[Z,a]$ and
$(*)_{\kappa}$ follows.
\end{proof}
\section{Shelah's strong covering property and its applications}
In this section we study Shelah's strong covering property and
give some of its applications.
By a pair $(W,V)$ we always mean a pair $(W,V)$ of
models of $ZFC$ with the same ordinals such that $W \subseteq V.$
Let us give the main definition.
\begin{definition}
$(1)$ $(W,V)$ satisfies the strong $(\lambda,
\alpha)-$covering property, where $\lambda$ is a regular cardinal
of $V$ and $\alpha$ is an ordinal, if for every model $M \in V$
with universe $\alpha$ (in a countable language) and $a \subseteq
\alpha, | a | < \lambda$ (in $V$), there is $b \in W$ such
that $a \subseteq b \subseteq \alpha, b \prec M,$ and $ | b
| < \lambda$ (in $V$). $(W,V)$ satisfies the strong
$\lambda-$covering property if it satisfies the strong $(\lambda,
\alpha)-$covering property for every $\alpha.$
$(2)$ $(W,V)$ satisfies the strong $(\lambda^*, \lambda, \kappa,
\mu)-$covering property, where $\lambda^{*} \geq \lambda \geq
\kappa$ are regular cardinals of $V$ and $\mu$ is an ordinal, if
player I has a winning strategy in the following game, called
the $(\lambda^*, \lambda, \kappa, \mu)-$covering game, of length
$\lambda$:
In the $i-$th move player I chooses $a_{i} \in V$ such that $a_{i}
\subseteq \mu, | a_{i} | < \lambda^{*}$ (in $V$) and
$\bigcup_{j < i}b_{j} \subseteq a_{i}$, and player II chooses
$b_{i} \in V$ such that $b_{i} \subseteq \mu, | b_{i} | <
\lambda^{*}$ (in $V$) and $\bigcup_{j \leq i}a_{j} \subseteq
b_{i}$.
Player I wins if there is a club $C \subseteq \lambda$ such that
for every $\delta \in C \cup \{ \lambda \}, cf(\delta)= \kappa
\Rightarrow \bigcup_{i < \delta}a_{i} \in W.$ $(W,V)$ satisfies
the strong $(\lambda^*, \lambda, \kappa, \infty)-$covering
property, if it satisfies the strong $(\lambda^*, \lambda, \kappa,
\mu)-$covering property for every $\mu.$
\end{definition}
The following theorem shows the importance of the first part of
this definition and plays an important role in this section.
\begin{theorem} Suppose $V=W[r], r$ a real and $(W,V)$
satisfies the strong $(\lambda, \alpha)-$covering property for
$\alpha < ([(2^{< \lambda})^{W}]^{+})^{V}$. Then $(2^{<
\lambda})^{V}=| (2^{< \lambda})^{W}| ^{V}.$
\end{theorem}
\begin{proof} Cf. [3, Theorem VII.4.5.].
\end{proof}
It follows from Theorem 3.2 that if $V=W[r], r$ a real and $(W,V)$
satisfies the strong $(\lambda^{+},
([(2^{\lambda})^{W}]^{+})^{V})-$covering property, then $(2^{
\lambda})^{V}=|(2^{\lambda})^{W}| ^{V}.$
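Indeed, applying Theorem 3.2 with $\lambda^{+}$ in place of $\lambda$, and noting that $2^{< \lambda^{+}}=2^{\lambda}$ holds in both models, we get
\begin{center}
$(2^{\lambda})^{V}=(2^{< \lambda^{+}})^{V}=| (2^{< \lambda^{+}})^{W}| ^{V}=| (2^{\lambda})^{W}| ^{V}.$
\end{center}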
We are now ready to give the applications of the strong covering
property. For a pair $(W,V)$ of models of $ZFC$ consider the
following conditions:
$\hspace{1cm} (1)_{\kappa}:\bullet$ $V=W[r], r$ a real,
$\hspace{2cm}\bullet$ $V$ and $W$ have the same cardinals $\leq
\kappa^{+},$
$\hspace{2cm}\bullet$ $W \models \forall \lambda \leq
\kappa, 2^{\lambda}= \lambda^{+} $
$\hspace{2cm}\bullet$ $V \models 2^{\kappa} > \kappa^{+}
$.
$\hspace{1cm} (2)_{\kappa}:$ $W \models GCH $.
$\hspace{1cm} (3)_{\kappa}:$ $V$ and $W$ have the same cardinals.
\begin{theorem} $(1)$ Suppose there is a pair $(W,V)$
satisfying $(1)_{\aleph_0}$ and $(2)_{\aleph_0}$. Then
$\aleph_2^{V}$ is inaccessible in $\mathbf{L}$.
$(2)$ Suppose there is a pair $(W,V)$ as in $(1)$ with $V \models
2^{\aleph_0}> \aleph_2$. Then $0^{\sharp} \in
V$.
$(3)$ Suppose there is a pair $(W,V)$ as in $(1)$ with $CARD^{W}
\cap (\aleph_1^{V}, \aleph_2^{V})= \emptyset$. Then $0^{\sharp}
\in V$.
$(4)$ Suppose $\kappa > \aleph_0$ and there is a pair $(W,V)$
satisfying $(1)_{\kappa}.$ Then $0^{\sharp} \in V.$
\end{theorem}
Before we give the proof of Theorem 3.3 we state some conditions
which imply Shelah's strong covering property. Suppose that in
$V,0^{\sharp}$ does not exist. Then:
$\hspace{1cm} (\alpha)$ If $\lambda^{*} \geq \aleph_2^{V}$ is
regular in $V$, then $(W,V)$ satisfies the strong
$\lambda^{*}-$covering
$\hspace{1.6cm}$property.
$\hspace{1cm} (\beta)$ If $CARD^{W} \cap (\aleph_1^{V},
\aleph_2^{V})= \emptyset$ then $(W,V)$ satisfies the strong
$\aleph_1^{V}-$covering
$\hspace{1.6cm}$property.
\begin{remark} For $\lambda^{*} \geq \aleph_3^{V}, (\alpha)$
follows from [3, Theorem VII.2.6], and $(\beta)$ follows from [3,
Theorem VII.2.8]. In order to obtain $(\alpha)$ for $\lambda^{*}=
\aleph_2^{V}$ we can proceed as follows: As in the proof of
[3, Theorem VII.2.6] proceed by induction on $\mu$ to show that
$(\mathbf{L},V)$ satisfies the strong $(\aleph_2^{V},\aleph_1^{V},
\aleph_0^{V}, \mu)-$covering property. For successor $\mu$ (in
$\mathbf{L}$) use [3, Lemma VII.2.2] and for limit $\mu$ use [3, Remark VII.2.4] (instead of [3, Lemma VII.2.3]). It then follows that $(\mathbf{L},V)$
and hence $(W,V)$ satisfies the strong $\aleph_2^{V}-$covering
property.
\end{remark}
\begin{proof} $(1)$ We may suppose that $0^{\sharp}
\notin V.$ Then by $(\alpha), (W,V)$ satisfies the strong
$\aleph_2^{V}-$covering property. On the other hand by Jensen's
covering lemma and [3, Claim VII.1.11], $W$ has squares. By [3, Theorem
VII.4.10], $\aleph_2^{V}$ is inaccessible in $W$, and hence in
$\mathbf{L}$.
$(2)$ Suppose not. Then by $(\alpha), (W,V)$ satisfies the strong
$\aleph_2^{V}-$covering property. By Theorem 3.2,
$(2^{\aleph_0})^{V} \leq (2^{\aleph_1})^{V}=
|(2^{\aleph_1})^{W}| ^{V}=|
\aleph_2^{W}|=\aleph_2^{V},$ which is a contradiction.
$(3)$ Suppose not. Then by $(\beta), (W,V)$ satisfies the strong
$\aleph_1^{V}-$covering property, hence by Theorem 3.2,
$(2^{\aleph_0})^{V}= |(2^{\aleph_0})^{W}|
^{V}=\aleph_1^{V},$ which is a contradiction.
$(4)$ Suppose not. Then by $(\alpha), (W,V)$ satisfies the strong
$\kappa^{+}-$covering property. By Theorem 3.2, $(2^{\kappa})^{V}=
|(2^{\kappa})^{W}| ^{V}=\kappa^{+},$ and we get a
contradiction.
\end{proof}
\begin{theorem} $(1)$ Suppose there is a pair $(W,V)$
satisfying $(1)_{\kappa},(2)_{\kappa}$ and $(3)_{\kappa}.$ Then
there is in $V$ an inner model with a measurable cardinal.
$(2)$ Suppose there is a pair $(W,V)$ satisfying $(1)_{\kappa},$
where $\kappa \geq \aleph_{\omega}.$ Further suppose that
$\kappa_{W}^{++}=\kappa_{V}^{++}$ and $(W,V)$ satisfies the
$\kappa^{+}-$covering property. Then there is in $V$ an inner
model with a measurable cardinal.
\end{theorem}
\begin{proof} $(1)$ Suppose not. Then by [3, conclusion VII.4.3(2)],
$(W,V)$ satisfies the strong $\kappa^{+}-$covering property, hence
by Theorem 3.2, $(2^{\kappa})^{V}=|(2^{\kappa})^{W} | ^{V}=
\kappa^{+},$ which is a contradiction.
$(2)$ Suppose not. Let $\kappa=\mu^{+n},$ where $\mu$ is a limit
cardinal, and $n < \omega.$ By [3, Theorem VII.2.6, Theorem VII.4.2(2) and
Conclusion VII.4.3(3)], we can show that $(W,V)$ satisfies the
strong $(\kappa^{+}, \kappa, \aleph_{1}, \mu)-$covering property.
On the other hand since $(W,V)$ satisfies the
$\kappa^{+}-$covering property and $V$ and $W$ have the same
cardinals $\leq \kappa^{+}, (W,V)$ satisfies the
$\mu^{+i}-$covering property for each $i \leq n+1.$ By repeated
use of [3, Lemma VII.2.2], $(W,V)$ satisfies the strong
$(\kappa^{+}, \kappa, \aleph_{1}, \kappa^{++})-$covering property,
and hence the strong $(\kappa^{+}, \kappa^{++})-$covering
property. By Theorem 3.2, $(2^{\kappa})^{V}=|(2^{\kappa})^{W}
| ^{V}= \kappa^{+},$ which is a contradiction.
\end{proof}
\begin{remark}
In [3] (see also [4]), Theorem 3.5(1), for $\kappa=
\aleph_0,$ is stated under the additional assumption $2^{\aleph_0}
> \aleph_{\omega}$ in $V$.
\end{remark}
Let us close this section by noting that the hypotheses in
[3, Conclusion VII.4.6] are inconsistent. In other words we are
going to show that the following hypotheses are not consistent:
$\hspace{1cm}(a)$ $V$ has no inner model with a measurable
cardinal.
$\hspace{1cm}(b)$ $V=W[r], r$ a real, $V$ and $W$ have the same
cardinals $\leq \lambda,$ where $(2^{\aleph_0})^{V} \geq$
$\hspace{1.5cm}$ $\lambda \geq\aleph_{\omega}^{V}, \lambda$ is a
limit cardinal.
$\hspace{1cm}(c)$ $W \models 2^{\aleph_0} < \lambda
$.
To see this, note that by $(b)$ and [3, Theorem VII.4.2],
$K_{\lambda}(W)=K_{\lambda}(V)$. Then by [3, conclusion VII.4.3(3)],
$(W,V)$ satisfies the strong $(\aleph_1, \lambda)-$covering
property. On the other hand $\lambda >
([(2^{\aleph_0})^{W}]^{+})^{V}$, and hence by Theorem 3.2,
$\lambda \leq (2^{\aleph_0})^{V}= | (2^{\aleph_0})^{W} |
^{V} < \lambda.$ Contradiction.
\section{Some consistency results}
In this section we consider the work of Shelah and Woodin in [4]
and prove some related results.
\begin{theorem} There is a generic extension $W$ of $L$ and two
reals $a$ and $b$ such that:
$\hspace{1cm}(a)$ Both of $W[a]$ and $W[b]$ satisfy $CH$.
$\hspace{1cm}(b)$ $CH$ fails in $W[a,b]$.
Furthermore, $2^{\aleph_0}$ can be arbitrarily large in $W[a,b]$.
\end{theorem}
\begin{proof} Let $\lambda \geq \aleph_{5}^{\mathbf{L}}$ be regular in $\mathbf{L}$.
By [4, Theorem 1] there is a pair $(W,V)$ of generic extensions
of $\mathbf{L}$ such that:
$\hspace{1cm}\bullet$ $(W,V)$ satisfies $(1)_{\aleph_0}$.
$\hspace{1cm}\bullet$ $V \models 2^{\aleph_0}= \lambda.
$
Let $V=W[r]$ where $r$ is a real. Working in $V$, let
$\textrm{P}=Col(\aleph_0, \aleph_1)$ and let $G$ be
$\textrm{P}-$generic over $V$. In $V[G]$ the set $\{D \in V: D$ is
open dense in $Add(\omega, 1) \}$ is countable, hence we can
easily find two reals $a$ and $b$ in $V[G]$ such that both of $a$
and $b$ are $Add(\omega, 1)-$generic over $V$, and $r \in \mathbf{L}[a,b].$
Then the model $W$ and the reals $a$ and $b$ are as
required.
\end{proof}
Note that for $\kappa > \aleph_0,$ by Theorem 3.3(4) we cannot
expect to obtain [4, Theorems 1 and 2] without assuming the
existence of $0^{\sharp}.$ However it is natural to ask whether it
is possible to extend them under the assumption ``$0^{\sharp}$
exists''. The following result (Cf. [1, Lemma 1.6]) shows that for
$\kappa > \aleph_0,$ there is no pair $(W,V)$ satisfying
$(1)_{\kappa}$ such that $0^{\sharp} \notin W \subseteq
\mathbf{L}[0^{\sharp}]$ and $W$ and $\mathbf{L}[0^{\sharp}]$ have the same cardinals
$\leq \kappa^{+}.$
\begin{theorem} Let $\kappa = \aleph_{1}^{V}.$ If $0^{\sharp}$
exists and $M$ is an inner model in which $\kappa_{\mathbf{L}}^{+}$ is
collapsed, then $0^{\sharp} \in M.$
\end{theorem}
\begin{proof} Let $I$ be the class of Silver indiscernibles. There
are constructible clubs $C_{n}, n < \omega,$ such that $I \cap
\kappa= \bigcap_{n< \omega}C_{n}.$ If $\kappa_{\mathbf{L}}^{+}$ is
collapsed in $M$, then in $M$ there is a club $C$ of $\kappa$,
almost contained in every constructible club. Hence $C$ is almost
contained in $\bigcap_{n< \omega}C_{n}$ and hence in $I \cap
\kappa.$ It follows that $0^{\sharp}\in M.$
\end{proof}
We now prove a strengthening of Theorem 4.1 under stronger
hypotheses.
\begin{theorem} Suppose $cf(\lambda) > \aleph_0$, there are
$\lambda-$many measurable cardinals and $GCH$ holds. Then there is
a cardinal preserving generic extension $W$ of the universe and
two reals $a$ and $b$ such that:
$\hspace{1cm}(a)$ The models $W, W[a], W[b]$ and $W[a,b]$ have the
same cardinals.
$\hspace{1cm}(b)$ $W[a]$ and $W[b]$ satisfy $GCH$.
$\hspace{1cm}(c)$ $W[a,b] \models 2^{\aleph_0}= \lambda
.$
\end{theorem}
\begin{proof} By [4, Theorem 4] there is a pair $(W,V)$ of
cardinal preserving generic extensions of the universe such that:
$\hspace{1cm}\bullet$ $(W,V)$ satisfies $(1)_{\aleph_0},
(2)_{\aleph_0}$ and $(3)_{\aleph_0}$.
$\hspace{1cm}\bullet$ $V \models 2^{\aleph_0}= \lambda
.$
Working in $V$, let $\textrm{P}=Col(\aleph_0, \aleph_1)$ and let
$G$ be $\textrm{P}-$generic over $V$. As in the proof of Theorem
4.1 we can find two reals $a^*$ and $b^*$ such that $a^*$ is
$Add(\omega, 1)-$generic over $V$ and $b^*$ is $Add(\omega,
1)-$generic over $V[a^*],$ where $Add(\omega, 1)$ is the Cohen
forcing for adding a new real. Note that $V[a^*]$ and $V[a^*,
b^*]$ are cardinal preserving generic extensions of $V$. Working
in $V[a^*, b^*]$ let $\langle k_N: N<\omega \rangle$ be an
increasing enumeration of $\{N: a^{*}(N)=0\}$ and let $a=a^*$ and
$b=\{N: b^{*}(N)=a^{*}(N)=1 \} \cup \{k_N:r(N)=1 \}$ where $V=W[r]
.$ Then clearly $r \in \mathbf{L}[\langle k_N: N<\omega \rangle,
b]\subseteq \mathbf{L}[a,b]$ as $r=\{N: k_N \in b \}.$
We show that $b$ is $Add(\omega, 1)-$generic over $V$. It suffices
to prove the following:
$\hspace{1.4cm}$ For any $(p,q) \in Add(\omega,
1)*\lusim{A}dd(\omega, 1)$ and any open dense subset $D$
$(*)$$\hspace{1cm}$ $ \in V$ of $Add(\omega, 1)$ there is
$(\bar{p},\bar{q}) \leq (p,q)$ such that $(\bar{p},\bar{q}) \vdash$ ``$\dot{b}$ extends
$\hspace{1.5cm}$ some element of $D$''.
Let $(p,q)$ and $D$ be as above. By extending one of $p$ or $q$ if
necessary, we can assume that $lh(p)=lh(q)$. Let $\langle k_N: N<M
\rangle$ be an increasing enumeration of $\{N<lh(p): p(N)=0\}.$
Let $s: lh(p) \rightarrow 2$ be such that considered as a subset
of $\omega,$
\begin{center}
$s=\{ N<lh(p): p(N)=q(N)=1 \} \cup \{k_N: N<M, r(N)=1 \}.$
\end{center}
Let $t \in D$ be such that $t \leq s.$
\begin{claim} There is an extension $(\bar{p},\bar{q})$ of $(p,q)$
such that $lh(\bar{p})=lh(\bar{q})=lh(t)$ and
\begin{center}
$t=\{ N<lh(t): \bar{p}(N)=\bar{q}(N)=1 \} \cup \{k_N: N<\bar{M},
r(N)=1 \},$
\end{center}
where $\langle k_N: N<\bar{M}
\rangle$ is an increasing enumeration of $\{N<lh(\bar{p}):
\bar{p}(N)=0\}$.
\end{claim}
\begin{proof} Extend $p,q$ to $\bar{p}, \bar{q}$ of length $lh(t)$
so that for $i$ in the interval $[lh(s),lh(t))$
\begin{itemize}
\item $\bar{p}(i)=1$, \item $\bar{q}(i)=1$ iff $i \in t.$
\end{itemize}
Then
\begin{center}
$t=\{ N<lh(t): \bar{p}(N)=\bar{q}(N)=1 \} \cup \{k_N: N<M, r(N)=1
\}.$
\end{center}
But (using our definitions) $M =\bar{M}$ so
\begin{center}
$t=\{ N<lh(t): \bar{p}(N)=\bar{q}(N)=1 \} \cup \{k_N: N<\bar{M},
r(N)=1 \}.$
\end{center}
as desired.
\end{proof}
It follows that
\begin{center}
$(\bar{p},\bar{q}) \vdash$ ``$\dot{b}$ extends $t$''
\end{center}
and $(*)$ follows.
It follows that $a$ and $b$ are $Add(\omega,1)-$generic over $W$
and $r \in \mathbf{L}[a,b].$ Hence the model $W$ and the reals $a$ and $b$
are as required and the theorem follows.
\end{proof}
\begin{remark} The above kind of argument is widely used in [2] to
prove the genericity of a $\lambda-$sequence of reals over $Add(\omega,
\lambda)$, the Cohen forcing for adding $\lambda-$many new reals.
\end{remark}
\section{Open problems}
We close this paper with some remarks and open problems.
Our first problem is related to Vanliere's Theorem.
\begin{problem} Find the least $\kappa$ such that there are $X
\subseteq \kappa$ and $a \subseteq \omega$ such that $\mathbf{L}[X] \models
ZFC+GCH , \mathbf{L}[X]$ and $\mathbf{L}[X,a]$ have the same
cardinals and $\mathbf{L}[X,a] \models 2^{\aleph_0} >
\aleph_1$.
\end{problem}
Now consider the following property:
(*):$\hspace{1cm}$ If $\textrm{P}$ is a non-trivial forcing notion
and $G$ is $\textrm{P}-$generic over $V$,
$\hspace{1.5cm}$ then for any cardinal $\chi \geq \aleph_{2}^{V}$
and $x \in H^{V[G]}(\chi),$ there is $N \prec$
$\hspace{1.5cm}$ $ \langle H^{V[G]}(\chi), \in, <_{\chi}^{*},
H^{V}(\chi) \rangle$, such that $x \in N$ and $N \cap H^{V}(\chi)
\in$
$\hspace{1.5cm}$ $V,$ where $<_{\chi}^{*}$ is a well-ordering of
$H^{V}(\chi).$
Note that if $(*)$ holds, and $p \in G$ is such that $p$ forces
``$x \in N$ and $N \cap H^{V}(\chi) \in V$'' and decides a value for
$N \cap H^{V}(\chi)$, then $p$ is $(N \cap H^{V}(\chi),
\textrm{P})-$generic. Using Shelah's work on the strong covering
property we can easily show that:
$\hspace{1cm}(a)$ If $0^{\sharp} \notin V,$ then $(*)$ holds.
$\hspace{1cm}(b)$ If in $V$ there is no inner model with a
measurable cardinal, then $(*)$ holds for
$\hspace{1.5cm}$any cardinal preserving forcing notion
$\textrm{P}.$
Now we state the following problem:
\begin{problem} Suppose $0^{\sharp} \in V.$ Does $(*)$ fail for
any non-trivial constructible forcing notion $\textrm{P}$ with
$o^{\mathbf{L}}(\textrm{P}) \geq \omega_{1}^{V},$ where $o^{\mathbf{L}}(\textrm{P})$
is the least $\beta$ such that forcing with $\textrm{P}$ over $\mathbf{L}$
adds a new subset to $\beta$?
\end{problem}
This problem is motivated by the fact that if $0^{\sharp} \in V,$
then for any non-trivial constructible forcing notion
$\textrm{P},$ forcing with $\textrm{P}$ over $V$ collapses
$o^{\mathbf{L}}(\textrm{P})$ to $\omega$ (Cf. [5]).
\begin{problem} Assume $V \models GCH ,
\lambda$ is a cardinal in $V, A \subseteq \lambda$ and $V[A]$ is a
model of set theory with the same cardinals as $V$. Can we have
more than $\lambda-$many reals in $H^{V}(\lambda^{+})[A]$?
\end{problem}
Let us note that if $\lambda$ is regular in $V$, then the answer
is no, and if $\lambda$ has countable cofinality in $V$, then the
answer is yes. Also if there is a stationary subset of
$[\lambda]^{\leq \aleph_0}$ in $V[A]$ of size $\leq \lambda,$ then
the answer is no (Cf. [3, Theorem VII.0.5(1)]). Let us note that the
Theorem as stated in [3] is wrong. The conclusion should be: There
are $\leq \lambda$ many reals in $H^{V}(\lambda^{+})[A]$.
Concerning Problem 5.3, the main question is for $\lambda=
\aleph_{\omega_1}.$ We restate it for this special case.
\begin{problem} (Mathias). Is the answer to Problem 5.3 positive for $\lambda=
\aleph_{\omega_1}$?
\end{problem}
\section*{ACKNOWLEDGMENT}
We would like to thank our colleagues for their kind help and support throughout the project. Yiqun Fu helped with data pre-processing. Leonard Liccardo, Wesley Reynolds, Christopher Heineman, and Douglas Wells helped with training data collection.
\bibliographystyle{IEEEtran}
\section{Method}
\subsection{Model Architecture}
\label{section:model}
\subsubsection{Model Input}
We form a bird's eye view (BEV) representation with multiple channels by scene rasterization, similar to those used in earlier planning \cite{bansal2019chauffeurnet,buhler2020driving,hecker2020learning} and prediction \cite{casas2018intentnet,chai20a,Djuric_2020_WACV} works.
The image size is \textit{W}~\texttimes~\textit{H} with $\rho$ meters per pixel in resolution.
All channels are ego-centered, which means an agent vehicle positioned at $\mathbf{x}_0\!=\![x_0, y_0]^T$ is always centered at image coordinate $\mathbf{p}_0\!=\![i_0, j_0]^T$ within the image and heading upward.
The input $\mathcal{I}$ consists of several different channels, as shown in Figure \ref{figure:model_inputs_representation}.
Both the predicted and historical trajectories of our ego-vehicle and other obstacles are sampled with a fixed time interval of $\Delta t$.
Besides the multi-channel images, the vehicle speed $v_0$ is also an input; it is incorporated into the network later by the kinematic motion model (detailed in Section~\ref{subsection_model_architecture}).
\subsubsection{Model Design} \label{subsection_model_architecture}
The model consists of three parts: a CNN backbone with a branched spatial attention module stemming from the intermediate features of the CNN; an LSTM decoder taking the feature embeddings from the CNN as inputs and outputting planning trajectories in the time domain; and a differentiable rasterizer module that is appended to the LSTM decoder during training, rasterizing output trajectories into images and feeding them to our loss functions. The model architecture is shown in Figure~\ref{fig:network}.
For the CNN backbone, we use MobileNetV2~\cite{sandler2018mobilenetv2} for its well-balanced accuracy and inference speed.
Then, its output features $\mathbf{F}_h$ are passed through a multilayer perceptron (MLP) layer, producing flattened feature output $\mathbf{h}_0$, which is the initial hidden state of our LSTM decoder.
Also, to lighten the computation workload, intermediate features $\mathbf{F}_{\mathcal{I}}$ from the backbone instead of raw image channels are fed to our spatial attention module.
Our attention implementation adopts the Atrous Spatial Attentional Bottleneck from~\cite{kim2020attentional}, which provides more interpretable attention maps, resulting in attentive features $\mathbf{F}_{\mathcal{A}}$.
After an encoding block $f_{encoding}$, consisting of a sequence of 3~\texttimes~3 convolutions and average poolings, the spatial attention module outputs a vector $\mathbf{A}_{\mathcal{I}}$.
It forms part of the input $\mathbf{i}$ of the succeeding LSTM decoder.
The cell state $c_0$ is initialized by the Glorot initialization~\cite{glorot2010understanding}.
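For illustration only -- this is a generic stand-in, not the exact Atrous bottleneck of~\cite{kim2020attentional} -- a minimal spatial-attention head over the intermediate features could be sketched in PyTorch as follows, where the channel count is a placeholder:
\begin{verbatim}
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    # Predicts a [B, 1, H, W] heatmap from features [B, C, H, W]
    # and reweights them, returning the attentive features and
    # the heatmap used for visualization.
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats):
        attn = torch.sigmoid(self.score(feats))
        return feats * attn, attn
\end{verbatim}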
Then, our LSTM decoder takes these inputs and generates a sequence of future vehicle steering angle and acceleration values $(\delta_{t-1}, a_{t-1})$, where $t=1,\dots,N$.
Similar to prior works in the fields of vehicle control~\cite{kong2015kinematic} and trajectory prediction~\cite{cui2020deep}, we add a kinematic model that is differentiable and acts as a network layer in the LSTM.
Given the latest states $(x_{t-1}, y_{t-1}, \phi_{t-1}, v_{t-1})$, it converts $(\delta_{t-1}, a_{t-1})$ to corresponding vehicle horizontal coordinates, heading angles, and velocities $(x_t, y_t, \phi_t, v_t)$, yielding kinematically feasible planning trajectories:
\begin{equation}
\label{eq:detailed_dynamic_model}
\begin{aligned}
& x_{t} = v_{t-1}\cos{(\phi_{t-1})}\Delta t + x_{t-1}, \\
& y_{t} = v_{t-1}\sin{(\phi_{t-1})}\Delta t + y_{t-1}, \\
& \phi_{t} = v_{t-1}\frac{\tan{(\delta_{t-1})}}{L}\Delta t + \phi_{t-1}, \\
& v_{t} = a_{t-1}\Delta t + v_{t-1}, \\
\end{aligned}
\end{equation}
where $L$ denotes the vehicle wheelbase length.
The output vehicle waypoints and states $(x_t, y_t, \phi_t, v_t)$ are also passed through an MLP layer.
The resulting embedding $\mathbf{S}_t$ is then concatenated with the aforementioned $\mathbf{A}_{\mathcal{I}}$, yielding the LSTM input $\mathbf{i}$ for the next LSTM iteration.
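For concreteness, the kinematic layer of Eq.~(\ref{eq:detailed_dynamic_model}) admits a direct differentiable implementation; the following is a minimal PyTorch sketch, where the wheelbase value and tensor layout are illustrative assumptions:
\begin{verbatim}
import torch

def kinematic_step(state, delta, a, dt=0.2, L=2.8):
    # state: tensor [..., 4] holding (x, y, phi, v);
    # delta, a: steering angle and acceleration commands.
    # Every op is differentiable, so gradients flow back into
    # the LSTM that predicted (delta, a).
    x, y, phi, v = state.unbind(dim=-1)
    x_next = x + v * torch.cos(phi) * dt
    y_next = y + v * torch.sin(phi) * dt
    phi_next = phi + v * torch.tan(delta) / L * dt
    v_next = v + a * dt
    return torch.stack([x_next, y_next, phi_next, v_next], dim=-1)
\end{verbatim}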
\subsubsection{Differentiable Rasterizer}
During training time, the output trajectory points are rasterized into $N$ images.
As shown in Figure~\ref{figure:rasterizer_visualization}, unlike~\cite{cui2020ellipse}, we represent our ego vehicle using three 2D Gaussian kernels instead of one, which better sketch its shape.
Given the vehicle waypoints $\mathbf{s}_t = [x_t, y_t, \phi_t, v_t]^T$, vehicle length $l$, and width $w$, our rasterization function $\mathcal{G}$ rasterizes images as:
\begin{equation}
\label{eq:rasterisation_equations}
\mathcal{G}_{i,j}(\mathbf{s}_t) = \max_{k = 1,2,3}(\mathcal{N}(\mathbf{\mu}^{(k)}, \mathbf{\Sigma}^{(k)})),
\end{equation}
where each cell $(i, j)$ in $\mathcal{G}$ is denoted as $\mathcal{G}_{i,j}$.
Let $\mathbf{x}^{(k)}_t = [x_t^{(k)}, y_t^{(k)}]^T$ denote the center of the $k$-th Gaussian kernel of our vehicle representation. Then,
\begin{equation}
\begin{aligned}
\mathbf{\mu}^{(k)} &= \frac{1}{\rho}(\mathbf{x}^{(k)}_t - \mathbf{x}_0) + \mathbf{p}_0, \\
\mathbf{\Sigma}^{(k)} &= \mathbf{R}(\phi_t)^T\mathrm{diag}(\sigma_l, \sigma_w)\mathbf{R}(\phi_t),
\end{aligned}
\end{equation}
where $(\sigma_l, \sigma_w) = (\frac{1}{3}\alpha l, \alpha w)$, $\alpha$ is a fixed positive scaling factor, and $\mathbf{R}(\phi)$ represents a rotation matrix constructed from the heading $\phi$.
Note that the Gaussian distributions are not truncated at the vehicle's box-like outline, allowing the model to learn how human drivers keep a safe distance from obstacles or road boundaries.
Our experiments also demonstrate that this design choice helps us achieve better performance, as shown in Section~\ref{subsubsec:rasterizer_design_analysis}.
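As an illustration, Eq.~(\ref{eq:rasterisation_equations}) reduces to a handful of differentiable tensor operations; the sketch below (PyTorch, with unnormalized Gaussian densities and pixel-space kernel parameters assumed) renders the per-cell maximum over kernels:
\begin{verbatim}
import torch

def rasterize(mus, sigmas, W=200, H=200):
    # mus: [K, 2] kernel centres in pixel coordinates;
    # sigmas: [K, 2, 2] invertible covariance matrices.
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=torch.float32),
        torch.arange(W, dtype=torch.float32),
        indexing="ij")
    grid = torch.stack([xs, ys], dim=-1)        # [H, W, 2]
    diff = grid[None] - mus[:, None, None, :]   # [K, H, W, 2]
    inv = torch.linalg.inv(sigmas)              # [K, 2, 2]
    # Mahalanobis distance for every cell and kernel.
    m = torch.einsum("khwi,kij,khwj->khw", diff, inv, diff)
    return torch.exp(-0.5 * m).max(dim=0).values  # [H, W]
\end{verbatim}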
\begin{figure}
\begin{minipage}{0.15\textwidth}
\centering
\subfloat[Vehicle shape\label{figure:agent_box_shape}]{\includegraphics[width=1\linewidth]{sections/figs/original_cropped.png}}
\end{minipage} \hfill
\begin{minipage}{0.15\textwidth}
\centering
\subfloat[Rasterized shape\label{figure:agent_box_rasterized_shape}]{\includegraphics[width=1\linewidth]{sections/figs/gaussian_cropped.png}}
\end{minipage} \hfill
\begin{minipage}{0.15\textwidth}
\centering
\subfloat[Gaussian kernels\label{figure:agent_box_rasterisation_comparison}]{\includegraphics[width=1\linewidth]{sections/figs/color_box_cropped.png}}
\end{minipage} \hfill
\caption{\small Visualization of rasterized images: (a) vehicle shape that we aim to rasterize; (b) the rasterized images by our differentiable rasterizer; (c) visualization of the Gaussian kernels used by the rasterizer in different colors.}
\label{figure:rasterizer_visualization}
\vspace{-0.4cm}
\end{figure}
\subsubsection{Loss}
Benefiting from the LSTM decoder and differentiable rasterizer, our trajectory imitation loss is simple and can be expressed in analytic form as:
\begin{equation}
\label{eq:imitation_loss}
\mathcal{L}_\mathrm{imit}= \sum_{t=0}^{N - 1} \mathbf{\lambda}\norm{\mathbf{s}_t - \hat{\mathbf{s}}_t}_2,
\vspace{-0.15cm}
\end{equation}
where $\norm{\cdot}_2$ is the L2 norm, $\mathbf{\lambda}$ are the weights, and $\hat{\mathbf{s}}_t$ denotes the corresponding ground truth vehicle waypoints.
Besides the imitation loss, four task losses are introduced to discourage undesirable behaviors such as obstacle collisions, off-route or off-road driving, and traffic signal violations. $\mathcal{T}^\mathrm{obs}$, $\mathcal{T}^\mathrm{route}$, $\mathcal{T}^\mathrm{road}$, and $\mathcal{T}^\mathrm{signal}$ are the corresponding binary masks, as shown in Figure~\ref{figure:task_loss_masks_representation}.
We assign ones to all the cells that our vehicle should avoid in our binary masks.
The following losses are included:
\begin{equation}
\begin{split}
\mathcal{L}_\mathrm{obs} = &\sum_{t=0}^{N-1}\frac{1}{WH}\sum_{i}\sum_{j} \mathcal{G}_{i,j}\mathcal{T}^\mathrm{obs}_{i,j}, \\
\mathcal{L}_\mathrm{route} = &\sum_{t=0}^{N-1}\frac{1}{WH}\sum_{i}\sum_{j} \mathcal{G}_{i,j}\mathcal{T}^\mathrm{route}_{i,j}, \\
\mathcal{L}_\mathrm{road} = &\sum_{t=0}^{N-1}\frac{1}{WH}\sum_{i}\sum_{j}\mathcal{G}_{i,j}\mathcal{T}^\mathrm{road}_{i,j}, \\
\mathcal{L}_\mathrm{signal} = &\sum_{t=0}^{N-1}\frac{1}{WH}\sum_{i}\sum_{j}\mathcal{G}_{i,j}\mathcal{T}^\mathrm{signal}_{i,j}. \\
\end{split}
\end{equation}
Overall, our total loss is given by:
\begin{equation}
\mathcal{L} =
\mathcal{L}_{imit} + \lambda_\mathrm{task}(\mathcal{L}_\mathrm{obs} + \mathcal{L}_\mathrm{route} +\mathcal{L}_\mathrm{road} + \mathcal{L}_\mathrm{signal}),
\end{equation}
where $\lambda_\mathrm{task}$ is an empirical parameter.
Like~\cite{bansal2019chauffeurnet}, we randomly drop out the imitation losses for some training samples to further benefit from the task losses.
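For reference, each task-loss term and the total loss above reduce to a few lines; the following PyTorch sketch assumes the rasterized horizon is stacked into an $[N, H, W]$ tensor, and the default weight is purely illustrative:
\begin{verbatim}
import torch

def task_loss(G_seq, mask):
    # G_seq: [N, H, W] rasterized agent over the horizon;
    # mask:  [H, W] binary, ones on forbidden cells.
    return (G_seq * mask.unsqueeze(0)).mean(dim=(1, 2)).sum()

def total_loss(imitation, G_seq, masks, lambda_task=1.0):
    # masks: obstacle / route / road / signal binary masks.
    return imitation + lambda_task * sum(
        task_loss(G_seq, m) for m in masks)
\end{verbatim}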
\begin{figure}
\begin{minipage}{0.232\textwidth}
\flushright
\subfloat[Obstacle mask
\label{figure:obstacle_loss_mask}]{\includegraphics[width=0.8\linewidth]{sections/figs/mask_obs_flat.png}}
\end{minipage}\hfill
\begin{minipage}{0.232\textwidth}
\flushleft
\subfloat[Route mask
\label{figure:routing_loss_mask}]{\includegraphics[width=0.8\linewidth]{sections/figs/mask_routing.png}}
\end{minipage}
\begin{minipage}{0.232\textwidth}
\flushright
\subfloat[Road mask \label{figure:off_road_loss_mask}]{\includegraphics[width=0.8\linewidth]{sections/figs/mask_offroad.png}}
\end{minipage}\hfill
\begin{minipage}{0.232\textwidth}
\flushleft
\subfloat[Traffic signal mask \label{figure:traffic_light_loss_mask }]{\includegraphics[width=0.8\linewidth]{sections/figs/mask_trafficlight.png}}
\end{minipage}
\caption{\small Visualization of the binary masks used in our task losses: (a) the obstacle masks that push our vehicle away from obstacles; (b) the route masks that keep our vehicle on the planned route by penalizing behaviors that leave any portion of our vehicle off the planned route; (c) the road masks that keep our vehicle within drivable areas; (d) the masks that make our vehicle stop in front of traffic signals when they are red.}
\label{figure:task_loss_masks_representation}
\vspace{-0.4cm}
\end{figure}
\subsection{Data Augmentation}
To address the aforementioned distributional shift issue, we further introduce an iterative feedback synthesizer to enrich the training dataset, in addition to the random horizontal perturbations of~\cite{bansal2019chauffeurnet}.
Both data augmentation methods help our agent broaden its experience beyond the demonstrations.
\subsubsection{Random Synthesizer}
Similar to \cite{bansal2019chauffeurnet}, our synthesizer perturbs trajectories randomly, creating off-road and collision scenarios.
The start and end points of a trajectory are anchored, while we perturb one of the points on the trajectory and smooth across all other points.
Only realistic perturbed trajectories are kept by thresholding on maximum curvature.
\subsubsection{Feedback Synthesizer}
The random synthesizer above achieves an impressive performance improvement, as we shall see in Section \ref{section:exp}.
However, due to its random nature, it fails to address one important issue: the bias in the probability distribution of the synthesized states.
To this end, following the principle of DAgger \cite{ross2011reduction}, we propose an iterative feedback synthesizer that generates the states that our agent is more likely to encounter by sampling the data based on the current policy.
Thus, we seek to improve our policy by adding these states and their corresponding actions into the training dataset.
If we denote our policy model at the $i$-th iteration as $\pi_i$, then given any start point $t_0$ in a trajectory, we can sample the future $T$ steps using $\pi_i$.
This constructs a few new states that our agent is likely to encounter.
To demonstrate how our agent should react to these states that deviate from the optimal behavior, we propose to fit a smooth trajectory with the necessary kinematic constraints.
Figure \ref{fig:feedback} illustrates the deviated trajectories given different $T$ values from 2 to 8. A complete step-by-step overview is detailed in Algorithm \ref{alg:iter_feedback}.
The whole process is fully automatic and does not rely on the participation of any expert agent.
Note that some synthesized samples may be discarded when the solver fails to compute feasible and smooth trajectories.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{sections/figs/feedback.png}
\caption{\small The illustration of our feedback synthesizer. The ``star'' marks where we construct a new state by sampling an action using the current policy. We fit a smooth trajectory with the necessary kinematic constraints after the ``star'' point, thereby avoiding any dependence on expert agents. (a)-(d) demonstrate the trajectories generated when $T=2, 4, 6, 8$.
}
\label{fig:feedback}
\vspace{-0.4cm}
\end{figure}
\begin{algorithm}
\caption{Feedback Synthesizer based Iterative Training}
\label{alg:iter_feedback}
\begin{algorithmic}[1]
\State Input $\mathcal{D} = \mathcal{D}_0$. \Comment {trajectories from human drivers}
\State Train policy $\pi_0$ on $\mathcal{D}$.
\For{$i=1$ to $K$}
\State Sample $T$-step trajectories using $\pi_{i-1}$ from $\mathcal{D}_0$.
\State Get dataset $D_i = \{(s, a)\}$ of visited states by $\pi_{i-1}$ and actions generated by smoothing.
\State Aggregate datasets $\mathcal{D} \leftarrow \mathcal{D} \cup \mathcal{D}_i$
\State Train policy $\pi_i$ on $\mathcal{D}$.
\EndFor
\end{algorithmic}
\end{algorithm}
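In code, one round of the synthesizer amounts to a short on-policy rollout followed by a smoothing fit. The schematic sketch below reuses the kinematic step from Section~\ref{subsection_model_architecture}; here \texttt{policy}, \texttt{demos}, and \texttt{fit\_recovery} are hypothetical placeholders rather than our actual implementation:
\begin{verbatim}
def feedback_augment(policy, demos, T=5, fit_recovery=None):
    # demos: logged expert trajectories (lists of states);
    # fit_recovery: smoother returning a kinematically feasible
    # rejoin trajectory, or None when the solver fails (such
    # samples are discarded, as noted above).
    augmented = []
    for demo in demos:
        state = demo[0]
        for _ in range(T):          # on-policy rollout of T steps
            delta, a = policy(state)
            state = kinematic_step(state, delta, a)
        recovery = fit_recovery(state, demo)
        if recovery is not None:
            augmented.append((state, recovery))
    return augmented
\end{verbatim}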
\section{Conclusion and Future Work}
We have presented an imitation learning based planner designed for autonomous driving applications.
We demonstrate that the overall ADS pass rate can be improved using necessary data augmentation techniques, including a novel feedback data synthesizer.
Benefiting from the differentiable rasterizer, our learning-based planner runs inference in real-time, yielding output either directly used by a downstream control module or further fed into an optional post-processing planner.
Task losses and spatial attention are introduced to help our system reason about critical traffic participants on the road.
Therefore, the full exploitation of the above modules enables our system to avoid collisions or off-road driving, smoothly drive through intersections with traffic signals, and even overtake slow vehicles by dynamically interacting with them.
In the future, we plan to collect more real-road driving data, further strengthening our imitation learning capabilities.
Also, introducing reinforcement learning may be another direction for raising the overall performance to a higher level.
\section{Related Works}
\subsubsection{Imitation Learning}
Imitation learning for motion planning was first introduced in the pioneering work \cite{pomerleau1988alvinn}, which directly learns a policy that maps sensor data to steering angle and acceleration. In recent years, an extensive literature \cite{bojarski2016end, hecker2018end, codevilla2018end, bewley2019learning, codevilla2019exploring, prakash2020exploring,hecker2020learning} has followed this end-to-end philosophy. Alternatively, our work adopts a mid-to-mid approach \cite{bansal2019chauffeurnet, buhler2020driving}, which allows us to augment data handily and go beyond pure imitation by having task-specific losses.
\begin{figure}
\begin{minipage}{0.16\textwidth}
\centering
\subfloat[Rasterized Scene \label{figure:input_in_rgb}]{\includegraphics[width=0.9\linewidth]{sections/figs/input_all.png}}
\end{minipage}\hfill
\begin{minipage}{0.16\textwidth}
\centering
\subfloat[Agent Box \label{figure:agent_current_box}]{\includegraphics[width=0.9\linewidth]{sections/figs/input_agent_box.png}}
\end{minipage}\hfill
\begin{minipage}{0.16\textwidth}
\centering
\subfloat[Past Agent Poses \label{figure:agent_trajectory_history}]{\includegraphics[width=0.9\linewidth]{sections/figs/input_agent_history.png}}
\end{minipage}\hfill
\begin{minipage}{0.16\textwidth}
\centering
\subfloat[Obs Prediction\label{figure:obstacles_prediction}]{\includegraphics[width=0.9\linewidth]{sections/figs/input_obs_future.png}}
\end{minipage}\hfill
\begin{minipage}{0.16\textwidth}
\centering
\subfloat[Obs History\label{figure:obstacles_history}]{\includegraphics[width=0.9\linewidth]{sections/figs/input_obs_history.png}}
\end{minipage}\hfill
\begin{minipage}{0.16\textwidth}
\centering
\subfloat[HD Map \label{figure:local_road_map}]{\includegraphics[width=0.9\linewidth]{sections/figs/input_roadmap.png}}
\end{minipage}\hfill
\begin{minipage}{0.16\textwidth}
\centering
\subfloat[Routing \label{figure:local_routing}]{\includegraphics[width=0.9\linewidth]{sections/figs/input_routing.png}}
\end{minipage}\hfill
\begin{minipage}{0.16\textwidth}
\centering
\subfloat[Speed Limit \label{figure:local_speed_limit}]{\includegraphics[width=0.9\linewidth]{sections/figs/input_speedlimit.png}}
\end{minipage}\hfill
\begin{minipage}{0.16\textwidth}
\centering
\subfloat[Traffic Lights \label{figure:traffic_light_status}]{\includegraphics[width=0.9\linewidth]{sections/figs/input_trafficlight.png}}
\end{minipage}\hfill
\caption{\small Scene rasterization in BEV as multi-channel image inputs for a scene shown in (a): (b) agent rasterized as a box; (c) past agent trajectory rasterized as a sequence of points with fading brightness (The oldest points are darkest); (d) prediction of obstacles rasterized as a sequence of boxes with fading brightness (The furthest boxes are brightest); (e) history of obstacles (The oldest boxes are darkest); (f) a color (3-channel) image rendered with surrounding road structures including lanes, intersections, crosswalks, etc; (g) the intended route rasterized in a constant white color; (h) lanes colored in proportion to known speed limit values; (i) traffic lights affected lanes colored in different grey levels that are corresponding to different states of traffic lights.}
\label{figure:model_inputs_representation}
\vspace{-0.4cm}
\end{figure}
\subsubsection{Loss and Differentiable Rasterization}
Imitation learning for motion planning typically applies a loss between inferred and ground truth trajectories~\cite{codevilla2018end,codevilla2019exploring}.
Therefore, the ideas of avoiding collisions or off-road situations are implicit and don't generalize well.
Some prior works~\cite{bhardwaj2020differentiable,bansal2019chauffeurnet} propose to introduce task-specific losses penalizing these unwanted behaviors.
With that in mind, we introduce task losses in our work and achieve real-time inference based on a differentiable rasterizer.
Wang et al.~\cite{wang2020improving} leverage a differentiable rasterizer that allows gradients to flow from a discriminator to a generator, enhancing a trajectory prediction network powered by GANs \cite{goodfellow2014generative}.
Different from us, the concurrent work~\cite{cui2020ellipse} tackles a trajectory prediction task for better scene compliance using similar rasterizers.
General-purpose differentiable mesh renderers~\cite{kato2018neural,liu2019soft} have also been employed to solve other computer vision tasks.
\subsubsection{Attention}
The attention mechanism has been used successfully in both computer vision~\cite{wang2017residual,hu2018squeeze,lu2019deepvcp} or natural language processing~\cite{vaswani2017attention} tasks.
Some works~\cite{kim2017interpretable,kim2019grounding,zhou2020da4ad} employ the attention mechanism to improve the model's interpretability by providing spatial attention heatmaps highlighting image areas that the network attends to in their tasks.
In this work, we introduce the Atrous Spatial Attentional Bottleneck from \cite{kim2020attentional}, providing easily interpretable attention heatmaps while also enhancing the network's performance.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{sections/figs/model.png}
\caption{\small The illustration of the network structure of the three main modules: (a) a CNN backbone with a branched spatial attention structure; (b) an LSTM trajectory decoder with a kinematical layer; (c) a vehicle shape differentiable rasterizer.
}
\label{fig:network}
\vspace{-0.4cm}
\end{figure*}
\subsubsection{Data Augmentation}
DAgger \cite{ross2011reduction} and its variants \cite{zhang2017query,blukis2018following,prakash2020exploring} propose to address the distributional shift issue by having more data with states that the agent is likely to encounter.
In particular, they sample new states at each iteration based on the actions inferred from the current policy, let expert agents demonstrate the actions they would take under these new states, and train the next policy on the aggregate of collected datasets.
They have been explored in the context of autonomous driving in prior works \cite{zhang2017query,pan2018agile,chen2020learning}.
ChauffeurNet \cite{bansal2019chauffeurnet} introduces a random synthesizer that augments the demonstration data by synthesizing perturbations to the trajectories.
In this work, we explore both ideas and propose a feedback synthesizer improving the overall performance.
\section{Introduction}
In recent years, autonomous driving technologies have seen dramatic advances in self-driving vehicles' commercial and experimental operations.
Due to complex road topologies across cities and dynamic interactions among vehicles and pedestrians, traditional planning approaches that involve heavy manual tuning are believed to be neither cost-effective nor scalable \cite{sadat2019jointly}.
An alternative is imitation learning, a data-driven approach that mimics expert agents' driving behavior and leverages recent advances in supervised learning.
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{sections/figs/overview.png}
\caption{\small Demonstration of the test scenarios that our learning-based planner (M2) successfully passed: (a) passing a static obstacle; (b) overtaking a dynamic obstacle; (c) cruising on a curved lane; (d) following a leading vehicle; (e) stopping for a red traffic light; (f) yielding to a vehicle.
}
\label{fig:intro}
\vspace{-0.4cm}
\end{figure}
Imitation learning suffers from a long-standing issue, distributional shift \cite{ross2011reduction,prakash2020exploring}.
Variation in input state distribution between training and testing sets causes accumulated errors in the learned policy output, which further leads the agent to an unseen environment where the system fails ultimately.
The key solution to this compounding problem is to increase the demonstrated data as much as possible.
Inspired by \cite{blukis2018following,bansal2019chauffeurnet,buhler2020driving,chen2020learning}, our work adopts a mid-to-mid approach where our system's input is constructed by building a top-down image representation of the environment that incorporates both static and dynamic information from our HD Map and perception system.
This gives us large freedom to manipulate the input to an imitation learning network \cite{bansal2019chauffeurnet}.
Therefore, it helps us handily increase the demonstrated data using appropriate data augmentation techniques and neatly avoid technical difficulties in building a high fidelity raw sensor data simulator.
Following the philosophy of DAgger \cite{ross2011reduction}, we introduce a feedback synthesizer that generates and perturbs on-policy data based on the current policy.
Then we train the next policy on the aggregate of collected datasets.
The feedback synthesizer addresses the distributional shift issue, thus improving the overall performance, as shown in Section~\ref{section:exp}.
Furthermore, pure imitation learning does not explicitly specify important goals and constraints involved in the task, which inevitably yields undesirable behaviors.
We introduce task losses to help regulate these unwanted behaviors, yielding better causal inference and scene compliance.
Inspired by \cite{wang2020improving}, the task losses are implemented by directly projecting the output trajectories onto top-down images using a differentiable vehicle rasterizer.
They effectively penalize behaviors, such as obstacle collision, traffic rule violation, and so on, by building losses between rasterized images and specific object masks.
Inspired by recent works \cite{cui2020deep}, our output trajectories are produced by a trajectory decoding module that includes an LSTM \cite{hochreiter1997long} and a kinematic layer that assures our output trajectories' feasibility.
On the whole, these designs help us avoid using the heavy AgentRNN network in Chauffeurnet \cite{bansal2019chauffeurnet}, which functions similarly to a Convolutional-LSTM network \cite{shi2015_convolutional}.
It is demonstrated that our proposed method achieves a much shorter network inference time compared to \cite{bansal2019chauffeurnet}, as shown in Section \ref{subsec:run_time}.
Moreover, to further improve the performance, similar to recent works \cite{hecker2020learning, kim2020attentional}, we introduce a spatial attention module in our network design.
The heatmap estimated by the attention module provides a tool for visualizing and reasoning about critical objects in the environment and reveals a causal relationship between them and the ego vehicle, as shown in Figure~\ref{figure:attention}.
Lastly, although a standard vehicle controller could directly use the trajectory waypoints generated by our network to drive the vehicle, we propose to add an optional post-processing planner as a gatekeeper, which interprets the waypoints as high-level decision guidance and composes a new trajectory that offers better comfort.
This architecture can be extended in the future, addressing the paramount safety concern caused by the fact that a neural network is typically considered a black box and hard to interpret and analyze.
Our models are trained with 400 hours of human driving data.
We evaluated our system using 70 autonomous driving test scenarios (ADS) that are specifically created for evaluating the fundamental driving capabilities of a self-driving vehicle.
We show that our learning-based planner (M2) trained via imitation learning achieves a 70.0\% ADS pass rate and can intelligently handle different challenging scenarios, including overtaking a dynamic vehicle, stopping for a red traffic light, and so on, as shown in Figure~\ref{fig:intro}.
\section{Experiments}
\label{section:exp}
\subsection{Implementation Details}
We use a square BEV image with $W\!=\!200$, $H\!=\!200$, and $\rho\!=\!0.2 m/\mathrm{pixel}$ in resolution.
In the image, our ego-vehicle is placed at $i_0\!=\!100$, $j_0\!=\!160$ in image coordinates.
For the training data, we use a perception system in Apollo \cite{baidu-apollo} that processes raw sensor measurements and produces object detection, tracking, and prediction results.
Two seconds of history or predicted trajectories are rasterized in the corresponding image channels.
Our learning-based planning module outputs trajectories with a time horizon of $N = 10$ steps and $\Delta t = 0.2s$.
Our models were trained using the Adam optimizer with an initial learning rate of 0.0003.
\subsection{Dataset and Augmented Data}
Unlike prior works \cite{codevilla2018end,codevilla2019exploring,prakash2020exploring} that exploit a driving simulator and collect simulated driving data, we used our self-driving test vehicles to collect a total of 400 hours of driving data demonstrated by human drivers in the southern San Francisco Bay Area.
After necessary data pre-processing, we keep about 250k frames as our original training data before any data augmentation, denoted as $D_o$.
Then, an additional 400k frames are generated by our random synthesizer, denoted as $D_r$. With feedback steps $T=5$, our feedback synthesizer generates about 465k frames in one iteration, denoted as $D_f$.
To keep the training procedure efficient, we let the number of iterations $K\!=\!1$ in the feedback synthesizer.
Table~\ref{table:ablation_tests} lists different configurations of training data when we evaluate different methods.
\begin{figure}
\includegraphics[width=0.95\columnwidth]{sections/figs/figure-passing-rate6.png}
\caption{\small Detailed analysis of failure reasons of different model configurations under different driving scenarios.}
\label{fig:closed_loop_results}
\vspace{-0.4cm}
\end{figure}
\subsection{Evaluation Scenarios}
\label{subsec:evaluation_scenarios}
To fully evaluate the proposed methods, we use the Apollo Dreamland simulator \cite{baidu-apollo} with 70 carefully created ADS.
Each ADS lasts about 30-90 seconds and includes one or more driving scenarios. The ADS are either handcrafted by our scene designers or taken from logs from real driving data (separate from the training data).
The driving scenarios in the 70 ADS can be roughly categorized into four types:
\begin{itemize}
\item \textbf{Cruising}: normal cruising in straight or curved roads without other traffic participants.
\item \textbf{Junction}: junction related scenarios including left or right turns, U-turns, stop before a traffic signal, etc.
\item \textbf{Static Interaction}: interaction with static obstacles, such as overtaking a stopped car.
\item \textbf{Dynamic Interaction}: interaction with dynamic obstacles, for example, overtaking a slow vehicle.
\end{itemize}
They constitute a comprehensive driving test challenge specifically designed to evaluate the fundamental driving capabilities of a self-driving vehicle.
Using these test scenarios, we carry out closed-loop evaluations where a control module executes our output waypoints, and the results are summarized in Table~\ref{table:ablation_tests}.
\subsection{Evaluation Metrics}
Besides the pass or the success rate that prior works \cite{bansal2019chauffeurnet,codevilla2019exploring} focus on,
we propose to evaluate how comfortable the driving is and formulate a new comfort score.
We assume that human drivers deliver a comfortable driving experience.
The comfort score is calculated based on how similar our driving states are to those of human drivers. Specifically, we record the probabilities of a set of angular velocity ($\mathrm{rad}/s$) and jerk ($m/s^3$) value pairs $(\omega, j)$ in our human-driving data $D_o$.
Given pairs $(\omega_i, j_i)$ from the driving data of an agent, we define the comfort score $c$ as:
\begin{equation}
c = \frac{1}{n}\sum_{i=1}^{n}P(\omega_i, j_i \mid D_o),
\end{equation}
where $P(\omega_i, j_i \mid D_o)$ is the probability of the pair $(\omega_i, j_i)$ occurring in the human-driving data $D_o$, $n$ is the number of frames, and $(\omega_i, j_i)$ are the corresponding values in our test driving data.
When we look up the corresponding probabilities, $\omega$ and $j$ are discretized into bins with sizes of 0.1 and 1.0, respectively.
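A minimal sketch of this metric, assuming the bin sizes above and hypothetical array inputs, is:
\begin{verbatim}
import numpy as np

def build_histogram(omega_h, j_h, w_bin=0.1, j_bin=1.0):
    # Empirical P(omega, j | D_o) over discretized bins of human data.
    keys = zip(np.floor(omega_h / w_bin).astype(int),
               np.floor(j_h / j_bin).astype(int))
    hist = {}
    for k in keys:
        hist[k] = hist.get(k, 0) + 1
    return {k: v / len(omega_h) for k, v in hist.items()}

def comfort_score(hist, omega_t, j_t, w_bin=0.1, j_bin=1.0):
    # Mean probability of the test drive's (omega, j) pairs under D_o.
    keys = zip(np.floor(omega_t / w_bin).astype(int),
               np.floor(j_t / j_bin).astype(int))
    return float(np.mean([hist.get(k, 0.0) for k in keys]))
\end{verbatim}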
Our ADS pass rate is based on a set of safety-related metrics, including collision, off-road, speeding, traffic-light violation, etc. When any of them is violated, we grade the entire ADS as failed. The agent must also reach the scheduled destination within a time limit for the ADS to be considered a success.
\subsection{Runtime Analysis}
\label{subsec:run_time}
We evaluated our method's runtime performance with an Nvidia Titan-V GPU, Intel Core i7-9700K CPU, and 16GB Memory.
The online inference time per frame is 10 ms in rendering, 22 ms in model inference, and 15 ms (optional) in the post-processing planner.
Note that our model inference time is much shorter than that of the prior work \cite{bansal2019chauffeurnet}.
\begin{figure}
\begin{minipage}{0.15\textwidth}
\centering
\subfloat[M0b\label{figure:M0b_attention_heatmap}]{\includegraphics[width=1\linewidth]{sections/figs/M1-entropy.png}}
\end{minipage}\hfill
\begin{minipage}{0.15\textwidth}
\centering
\subfloat[M1a\label{figure:M1a_attention_heatmap}]{\includegraphics[width=1\linewidth]{sections/figs/A1-entropy.png}}
\end{minipage}\hfill
\begin{minipage}{0.15\textwidth}
\centering
\subfloat[M1b\label{figure:M1b_attention_heatmap}]{\includegraphics[width=1\linewidth]{sections/figs/A2-entropy.png}}
\end{minipage}\hfill
\caption{\small Comparisons of the attention heatmaps generated by different models in a dynamic interaction scenario. The corresponding heatmap from M1 can be found in Figure~\ref{figure:attention}(a). Output and ground truth trajectories are illustrated as blue and yellow dots, respectively.}
\label{figure:attention_heatmap_comparison}
\vspace{-0.4cm}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{sections/figs/attention_showcase.png}
\caption{\small Visualization of the attention heatmaps from M1 in different scenarios: (a) dynamic nudge; (b) stopping at a red light; (c) following a leading vehicle; (d) right turn; (e) yielding to a vehicle; (f) U-turn.}
\label{figure:attention}
\vspace{-0.4cm}
\end{figure}
\subsection{Comparison and Analysis}
To demonstrate each of the above contributions' effectiveness, we show the ADS pass rate and comfort score with different methods in Table~\ref{table:ablation_tests}.
Also, the proportion of scenarios where we failed due to any violation of each particular rule in different driving scenarios using different methods are shown in Figure~\ref{fig:closed_loop_results}.
Detailed analysis is discussed as follows.
\subsubsection{Task Losses}
\label{subsubsec:task_loss_effect_analysis}
By comparing M0a with M0, it is observed that the numbers of scenarios where we failed due to off-road or collision reasons decrease.
A similar trend can be found by observing the pass rate from M0b and M1.
In particular, we add the differentiable rasterizer and task losses to our models M0a and M1.
They play an important role in keeping us within drivable areas and avoiding obstacles.
\subsubsection{Spatial Attention}
\label{subsubsec:attention_ads_pass_rate_analysis}
The spatial attention in M0b gives a slightly worse pass rate with pure imitation losses compared to M0.
It implies that attention cannot improve the performance by itself, without the important task losses.
It makes sense since the task losses involve obstacles, traffic signals, and other traffic participants in explicit forms.
They help the network locate the objects that it should pay attention to.
In Figure~\ref{figure:M0b_attention_heatmap}, we also find the attention heatmaps failed to converge in M0b.
By comparing M1 with M0a, we find slight improvements in pass rate by having the spatial attention module.
Also, we visualize the attention heatmaps from M1 in Figure~\ref{figure:attention}; they are easy to interpret and reveal the attentive objects.
To further verify our attention module design, we did experiments with M1a and M1b comparing with M1.
M1a applies attention weights directly on raw image pixels, resulting in higher computational complexity.
In our experiment, M1a concluded with a lower ADS pass rate and less interpretable attention heatmaps, as shown in Figure~\ref{figure:M1a_attention_heatmap}.
M1b feeds the attention features back to the CNN backbone, also resulting in worse ADS pass rate and wrongly focused attention heatmaps as shown in Figure~\ref{figure:M1b_attention_heatmap}.
The intuition behind our design is that keeping the original CNN features in a separate branch provides more meaningful features for downstream modules, and is therefore better for overall performance.
\subsubsection{Differentiable Rasterizer}
\label{subsubsec:rasterizer_design_analysis}
When rasterizing vehicle boxes, it is intuitive to truncate the Gaussian distributions at box boundaries to ensure a precise representation.
However, when we tested this in experiment M1c, the results suggested otherwise.
M1c's ADS pass rate drops slightly compared to M1, the model without the truncation step.
More importantly, M1 has fewer failed scenarios, especially in the collision category, as shown in Figure~\ref{fig:closed_loop_results}.
It implies the long tail in distributions helps with obstacle avoidance. Besides this, we also find that M1c has a higher failure rate in junction scenarios where our vehicles stopped before traffic lights but failed to restart when they turned green.
A possible reason is that the overlap between the long tail in distributions and the traffic signal mask provides more hints to the model and helps it learn the stop's causality.
\subsubsection{Feedback Synthesizer}
As we mentioned, our feedback synthesizer addresses the fundamental issue with imitation learning, the distributional shift.
As a result, M2 improved the overall performance compared to M1.
Specifically, M2 achieves 70.0\% ADS pass rate, about a 25\% improvement over M1.
The number of failed ADS decreases across almost all scenarios and failure reasons, as shown in Figure~\ref{fig:closed_loop_results}.
\subsubsection{Post-processing Planner}
Our post-processing planner (M3) is designed as a gatekeeper with safety constraints.
It's not surprising that it achieves a higher ADS pass rate, 94.29\%.
Better yet, the post-processing planner also significantly improves the comfort score with its smoothness constraints in the joint optimization framework.
Note that we find pure imitation learning methods (M0 or M0b) without any data augmentation achieve similar comfort scores to M3.
It seems that data augmentation hurts driving comfort, and our newly proposed comfort metric reveals this.
\subsubsection{Failure Modes}
From Figure~\ref{fig:closed_loop_results}, it is easy to tell that speeding issues have already become a major problem for our learning-based planner, M2.
These results seem to suggest that further improvements may be possible by adding a speeding loss.
Another important failure reason is non-arrival.
It happens when our ego vehicle gets stuck somewhere or moves at too low a speed.
This suggests that we could extend our work with a new loss that encourages the vehicle to keep moving forward at a steady speed, similar to a reward in reinforcement learning.
Although safety constraints are implemented in our post-processing planner M3, we examined two of its failure scenarios related to collision issues.
In both scenarios, our vehicle is hit by a car behind it, i.e., a rear-end collision.
Also note that, due to the limitation of our test environment, all other vehicles in the ADS only move forward along scheduled routes and are not intelligent enough to avoid our ego-vehicle.
This increases the chance of rear-end collisions, especially when our vehicle is slower than expected.
\subsection{Post-processing Planner}
This section introduces our post-processing planner, a new architecture that aims to offer better comfort and improve driving safety as a gatekeeper.
The implementation has been released in Apollo 6.0 at \url{https://github.com/ApolloAuto/apollo/tree/master/modules/planning}.
The traditional path planning task is typically formulated as a quadratic optimization problem where lane boundaries, collision, traffic lights, and other driving conditions are formulated as bounding constraints \cite{zhang2020optimal}.
Our new architecture is a joint optimization framework based on cues from both the kinematics and dynamics constraints and the above learning-based planner's output waypoints, as shown in Figure \ref{figure:hybrid_structure}.
In the optimization framework, the bounding constraints remain unchanged.
Meanwhile, the output waypoints of the learning-based planner are taken as its optimization objective.
Finally, the joint optimization framework produces eventual path trajectories that are fed to a control module.
Figure \ref{figure:hybrid_path} illustrates an example on a real road.
Our vehicle is approaching an intersection that generates a bound in front because of traffic signals.
Lane boundaries and dynamic obstacles on the road also compose safety bounds in our optimization problem.
Smoothness constraints ensure driving comfort.
As opposed to the centerline (the default optimization objective in a traditional planner), the learning-based planner's waypoints guide our ego vehicle to keep a safe distance from the obstacle and successfully overtake it, which is similar to human driving behaviors.
The final path, generated by the post-processing planner, follows our learning-based planner's high-level decision to overtake the obstacle vehicle while ensuring safety and comfort.
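As a minimal sketch of this joint optimization (not the released Apollo implementation), the path can be posed as a small QP whose objective tracks the learned waypoints and penalizes curvature, with the safety bounds entering as box constraints; the weights and inputs below are hypothetical.
\begin{verbatim}
import cvxpy as cp

def postprocess_path(l_ref, l_low, l_up, w_ref=1.0, w_smooth=10.0):
    # l_ref: learned lateral waypoints (optimization objective);
    # l_low, l_up: safety bounds from lanes, obstacles, signals.
    n = len(l_ref)
    l = cp.Variable(n)
    objective = (w_ref * cp.sum_squares(l - l_ref)
                 + w_smooth * cp.sum_squares(cp.diff(l, 2)))
    prob = cp.Problem(cp.Minimize(objective), [l >= l_low, l <= l_up])
    prob.solve()
    return l.value
\end{verbatim}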
\begin{figure}
\centering
\begin{minipage}{0.23\textwidth}
\centering
\captionsetup{justification=centering}
\subfloat[Planner architecture\label{figure:hybrid_structure}]{\includegraphics[width=\linewidth]{sections/figs/hybrid_flow.png}}
\end{minipage}
\begin{minipage}{0.23\textwidth}
\centering
\captionsetup{justification=centering}
\subfloat[Bounds and objectives\label{figure:hybrid_path}]{\includegraphics[width=\linewidth]{sections/figs/hybridpath-3s.png}}
\end{minipage}
\caption{\small The illustration of our post-processing planner design. (a) We formulate a joint optimization problem by incorporating safety and comfort bounds into the framework while taking the learning-based planner's trajectory output as the optimization objective. (b) An example illustrates how we build safety and comfort bounds and the optimization objective.}
\vspace{-0.4cm}
\end{figure}
\section{Introduction}
For continuous channels, Infinite Constellation (IC) codes are the natural coded-modulation scheme.
The encoding operation is a simple one-to-one mapping from the information messages to the IC codewords.
The decoding operation is equivalent to finding the ``closest''\footnotemark{} IC codeword to the corresponding channel output.
\footnotetext{Closest in the sense of the appropriate distance measure for that channel.}
It has been shown that codes based on linear ICs (a.k.a. Lattices) can achieve optimal error performance \cite{j:deBudaTheUpperError}, \cite{j:ErezAchievingCapacity}.
A widely accepted framework for lattice codes' error analysis is commonly referred to as Poltyrev's setting \cite{j:PoltyrevOnCoding}.
In Poltyrev's setting the code's shaping region, defined as the finite subset of the otherwise infinite set of lattice points, is ignored, and the lattice structure is analyzed for its coding (soft packing) properties only.
Consequently, the usual rate variable $R$ is infinite and replaced by the Normalized Log Density (NLD), $\delta$.
The lattice analogous to Gallager's random-coding error-exponent \cite{b:GallagerInformationTheory}, over a random linear codes ensemble, is Poltyrev's error-exponent over a set of lattices of constant density.
Both Gallager and Poltyrev's error-exponents are asymptotic upper bounds for the exponential behavior of the average error probability over their respective ensembles.
Since it is an average property, examining a specific code from the ensemble using these exponents is not possible.
Various upper error bounding techniques and bounds for specific codes and code families have been constructed for linear codes over discrete channels \cite{j:ShamaiVariationsOnThe}, while only a few have been devised for specific lattices over particular continuous channels \cite{j:HerzbergTechniquesofBounding}.
The Shulman-Feder Bound (SFB) \cite{j:ShulmaRandomCoding} is a simple, yet useful upper bound for the error probability of a specific linear code over a discrete-memoryless channel.
In its exponential form it states that the average error probability of a $q$-ary code $\mathcal{C}$ is upper bounded by
\begin{align}
&P_e(\mathcal{C}) \leq e^{-nE_r\left(R+\frac{\log\alpha}{n}\right)} \\
&\alpha = \max_{\tau} \frac{\mathcal{N}_{\mathcal{C}}(\tau)}{\mathcal{N}_r(\tau)} \frac{e^{nR}}{e^{nR}-1}
\end{align}
where $n$, $R$, and $E_r(R)$ are the dimension, rate, and random-coding exponent respectively, and $\mathcal{N}_{\mathcal{C}}(\tau)$ and $\mathcal{N}_r(\tau)$ are the number of codewords of type $\tau$ for $\mathcal{C}$ and for an average random code, respectively (i.e. distance-spectrum).
The SFB and its extensions have led to significant results in coding theory.
Amongst those is the error analysis for Maximum Likelihood (ML) decoding of LDPC codes \cite{j:MillerBoundsOnThe}.
The main motivation of this paper is to find the SFB analogue for lattices.
As such, it should be an expression that upper bounds the error probability of a specific lattice, depends on the lattice's distance spectrum, and resembles the lattice random-coding bound.
The main result of this paper is a simple upper bounding technique for the error probability of a specific lattice code or code family, as a function of its distance-spectrum.
The bound is constructed by replacing the well-known Minkowski-Hlawka theorem \cite{b:LekkerkerkerGeometryOfNumbers} with a non-random alternative.
An interesting outcome of the main result is an error-exponent for specific lattice \textit{sequences}.
A secondary result of this paper is a tight distance-spectrum based, upper bound for the error probability of a specific lattice of finite dimension.
The paper is organized as follows: Sections \Rmnum{2} and \Rmnum{3} present the derivation of the Minkowski-Hlawka non-random alternatives, section \Rmnum{4} outlines a well-known general ML decoding upper bound, section \Rmnum{5} applies the new techniques to the general bound of section \Rmnum{4}, and section \Rmnum{6} presents a new error-exponent for specific lattice \textit{sequences} over the AWGN channel.
\section{Deterministic Minkowski-Hlawka-Siegel}
Recall that a lattice $\Lambda$ is a discrete $n$-dimensional subgroup of the Euclidean space $\mathds{R}^n$ that is an Abelian group under addition.
A generating matrix $G$ of $\Lambda$ is an $n\times n$ matrix with real valued coefficients constructed by concatenation of a properly chosen set of $n$ linearly independent vectors of $\Lambda$.
The generating matrix $G$ defines the lattice $\Lambda$ by $\Lambda=\{\sbf{\lambda}: \sbf{\lambda}=G\mbf{u}, \mbf{u}\in\mathds{Z}^n\}$.
A fundamental parallelepiped of $\Lambda$, associated with $G$ is the set of all points $p=\sum_{i=1}^n u_i g_i$ where $0\leq u_i<1$ and $\{g_i\}_{i=1}^n$ are the basis vectors of $G$.
The lattice determinant, defined as $\det{\Lambda}\equiv\abs{\det{G}}$, is also the volume of the fundamental parallelepiped.
Denote by $\beta$ and $\delta$ the density and NLD of $\Lambda$ respectively; thus $\beta = e^{n\delta} = (\det{\Lambda})^{-1}$.
The lattice-dual of the random linear codes ensemble, in finite-alphabet codes, is a set of lattices originally defined by Siegel \cite{j:SiegelAMeanValue,j:MacbeathAModifiedForm}, for use in proving what he called the Mean-Value Theorem (MVT).
This theorem, often referred to as the Minkowski-Hlawka-Siegel (MHS) theorem, is a central constituent in upper error bounds on lattices.
The theorem states that for any dimension $n\geq 2$, and any bounded Riemann-integrable function $g(\sbf{\lambda})$ there exists a lattice $\Lambda$ of density $\beta$ for which
\begin{equation}
\label{eq_mhs}
\sum_{\sbf{\lambda}\in\Lambda\setminus\{0\}} g(\sbf{\lambda})
\leq \int_{\mathds{R}^n} g\left(\frac{\mbf{x}}{e^{\delta}}\right) d\mbf{x}
= \beta \int_{\mathds{R}^n} g(\mbf{x}) d\mbf{x}.
\end{equation}
Siegel proved the theorem by averaging over a fundamental set\footnotemark{} of all $n$-dimensional lattices of unit density.
The disadvantages of Siegel's theorem are similar to the disadvantages of the random-coding theorem.
Since the theorem is an average property of the ensemble, it can be argued that there exists at least a single specific lattice, from the ensemble, that obeys it; though finding that lattice cannot be aided by the theorem.
Neither can the theorem aid in analysis of any specific lattice.
Alternatives to \eqref{eq_mhs}, constructed for specific lattices, based on their distance-spectrum, are introduced later in this section.
\footnotetext{Let $\Upsilon$ denote the multiplicative group of all non-singular $n\times n$ matrices with determinant $1$ and let $\Phi$ denote the subgroup of integral matrices in $\Upsilon$.
Siegel's fundamental set is defined as the set of lattices whose generating matrices form a fundamental domain of $\Upsilon$ with regards to right multiplication by $\Phi$ (see section 19.3 of \cite{b:LekkerkerkerGeometryOfNumbers}).}
We begin with a few definitions, before stating our central lemma.
The lattice $\Lambda_0$ always refers to a specific known $n$-dimensional lattice of density $\beta$, rather than $\Lambda$ which refers to some unknown, yet existing $n$-dimensional lattice.
The lattice $\widetilde{\Lambda}_0$ is the normalized version of $\Lambda_0$ (i.e. $\det(\widetilde{\Lambda}_0) = 1$).
Define the distance series of $\Lambda_0$ as the ordered series of its unique norms $\{\lambda_j\}_{j=0}^\infty$, such that $\lambda_1$ is its minimal norm and $\lambda_0\triangleq 0$.
$\{\widetilde{\lambda}_j\}_{j=1}^\infty$ is defined for $\widetilde{\Lambda}_0$ respectively.
The normalized continuous distance-spectrum of $\Lambda_0$ is defined as
\begin{equation}
\label{eq_N}
N(x) = \sum_{j=1}^\infty \mathcal{N}_j \delta(x-\widetilde{\lambda}_j)
\end{equation}
where $\{\mathcal{N}_j\}_{j=1}^\infty$ is the ordinary distance-spectrum of $\Lambda_0$, and $\delta(\cdot)$ is the Dirac delta function.
Let $\Gamma$ denote the group\footnotemark{} of all orthogonal $n\times n$ matrices with determinant $+1$ and let $\mu(\gamma)$ denote its normalized measure so that $\int_{\Gamma} d\mu(\gamma) = 1$.
The notation $\gamma\Lambda_0$ is used to describe the lattice generated by $\gamma G$, where $G$ is a generating matrix of the lattice $\Lambda_0$.
\footnotetext{This group, consisting only of rotation matrices, is usually called the special orthogonal group.}
Our central lemma essentially expresses Siegel's mean-value theorem for a degenerate ensemble consisting of a specific known lattice $\Lambda_0$ and all its possible rotations around the origin.
\begin{lem}
\label{lem_mean_value_1}
Let $\Lambda_0$ be a specific $n$-dimensional lattice with NLD $\delta$, and $g(\sbf{\lambda})$ be a Riemann-integrable function, then there exists an orthogonal rotation $\gamma$ such that
\begin{equation}
\label{eq_mean_value_1}
\sum_{\sbf{\lambda}\in\gamma\Lambda_0\setminus\{0\}} g(\sbf{\lambda})
\leq \int_{\mathds{R}^n} \mathfrak{N}(\norm{\mbf{x}}) g\left(\frac{\mbf{x}}{e^{\delta}}\right) d\mbf{x}
\end{equation}
with
\begin{equation}
\label{eq_N_}
\mathfrak{N}(x)
\triangleq \left\{
\begin{array}{lr}
\frac{N(x)}{nV_n x^{n-1}} & : x > 0 \\
0 & : x \leq 0
\end{array}
\right.
\end{equation}
where $V_n$ is the volume of an $n$-dimensional unit sphere, and $\norm{\cdot}$ denotes the Euclidean norm.
\end{lem}
\begin{proof}
Let $\Theta$ denote the subspace of all points $\sbf{\theta}$ in the $n$-dimensional space with $\norm{\sbf{\theta}}=1$, so that $\Theta$ is the surface of the unit sphere. Let $\mu(\sbf{\theta})$ denote the ordinary solid-angle measure on this surface, normalized so that $\int_{\Theta} d\mu(\sbf{\theta}) = 1$.
We continue with the following set of equalities
\begin{align}
\label{eq_mean_value_1_proof}
\int_\Gamma \sum_{\sbf{\lambda}\in\gamma\Lambda_0\setminus\{0\}} &g(\sbf{\lambda}) d\mu(\gamma) \nonumber \\
&= \sum_{\sbf{\lambda}\in\Lambda_0\setminus\{0\}} \int_\Gamma g(\gamma\sbf{\lambda}) d\mu(\gamma) \nonumber \\
&= \sum_{\mbf{\widetilde{\lambda}}\in\widetilde{\Lambda}_0\setminus\{0\}} \int_\Gamma g\left(\frac{\gamma\sbf{\widetilde{\lambda}}}{e^{\delta}}\right) d\mu(\gamma) \nonumber \\
&= \sum_{\mbf{\widetilde{\lambda}}\in\widetilde{\Lambda}_0\setminus\{0\}} \int_\Theta g\left(\frac{\norm{\sbf{\widetilde{\lambda}}}\sbf{\theta}}{e^{\delta}}\right) d\mu(\sbf{\theta}) \nonumber \\
&= \int_{0^+}^\infty N(R) \int_\Theta g\left(\frac{R\sbf{\theta}}{e^{\delta}}\right) d\mu(\sbf{\theta}) dR \nonumber \\
&= \int_{0^+}^\infty \int_\Theta \frac{N(R)}{nV_nR^{n-1}} \cdot g\left(\frac{R\sbf{\theta}}{e^{\delta}}\right) d\mu(\sbf{\theta}) dV_nR^n \nonumber \\
&= \int_{\mathds{R}^n\setminus\{0\}} \frac{N(\norm{\mbf{x}})}{nV_n\norm{\mbf{x}}^{n-1}} \cdot g\left(\frac{\mbf{x}}{e^{\delta}}\right) d\mbf{x} \nonumber \\
&= \int_{\mathds{R}^n} \mathfrak{N}(\norm{\mbf{x}}) g\left(\frac{\mbf{x}}{e^{\delta}}\right) d\mbf{x}.
\end{align}
where the third equality follows from the definition of $\Gamma$ and $\Theta$ and the measures $\mu(\gamma)$ and $\mu(\sbf{\theta})$, the fourth equality is due to the circular-symmetry of the integrand, and the sixth equality is a transformation from generalized spherical polar coordinates to the cartesian system (see Lemma 2 of \cite{j:MacbeathAModifiedForm}).
Finally there exists at least one rotation $\gamma\in\Gamma$ for which the sum over $\gamma\Lambda_0$ is upper bounded by the average.
\end{proof}
The corollary presented below is a restricted version of lemma \ref{lem_mean_value_1} constrained to the case where the function $g(\sbf{\lambda})$ is circularly-symmetric, (i.e. $g(\sbf{\lambda})=g(\norm{\sbf{\lambda}})$).
To simplify the presentation, it is implicitly assumed that $g(\sbf{\lambda})$ is circularly-symmetric for the remainder of this paper.
It should be noted that all results presented hereafter apply also to a non-symmetric $g(\sbf{\lambda})$ with an appropriately selected rotation $\gamma$ of $\Lambda_0$.
\begin{cor}
\label{lem_mean_value_2}
Let $\Lambda_0$ be a specific $n$-dimensional lattice with NLD $\delta$, and $g(\sbf{\lambda})$ be a circularly-symmetric Riemann-integrable function, then
\begin{equation}
\label{eq_mean_value_2}
\sum_{\sbf{\lambda}\in\Lambda_0\setminus\{0\}} g(\sbf{\lambda})
= \int_{\mathds{R}^n} \mathfrak{N}(\norm{\mbf{x}}) g\left(\frac{\mbf{x}}{e^{\delta}}\right) d\mbf{x}
\end{equation}
with $\mathfrak{N}(x)$ as defined in lemma \ref{lem_mean_value_1}.
\end{cor}
\begin{proof}
When $g(\sbf{\lambda})$ is circularly-symmetric,
\begin{align}
\int_\Gamma \sum_{\sbf{\lambda}\in\gamma\Lambda_0\setminus\{0\}} g(\sbf{\lambda}) d\mu(\gamma)
&= \sum_{\sbf{\lambda}\in\Lambda_0\setminus\{0\}} \int_\Gamma g(\gamma\sbf{\lambda}) d\mu(\gamma) \nonumber \\
&= \sum_{\sbf{\lambda}\in\Lambda_0\setminus\{0\}} g(\sbf{\lambda}) \int_\Gamma d\mu(\gamma) \nonumber \\
&= \sum_{\sbf{\lambda}\in\Lambda_0\setminus\{0\}} g(\sbf{\lambda})
\end{align}
\end{proof}
The right-hand side of \eqref{eq_mean_value_2} can be trivially upper bounded by replacing $\mathfrak{N}(x)$ with a suitably chosen function $\alpha(x)$, so that
\begin{equation}
\label{eq_alphai}
\int_{\mathds{R}^n} \mathfrak{N}(\norm{\mbf{x}}) g\left(\frac{\mbf{x}}{e^{\delta}}\right) d\mbf{x}
\leq \int_{\mathds{R}^n} \alpha(\norm{\mbf{x}}) g\left(\frac{\mbf{x}}{e^{\delta}}\right) d\mbf{x}.
\end{equation}
Provided the substitution, it is possible to define the following upper bounds:
\begin{thm}[Deterministic Minkowski-Hlawka-Siegel (DMHS)]
\label{thm_nrmh1}
Let $\Lambda_0$ be a specific $n$-dimensional lattice of density $\beta$, $g(\sbf{\lambda})$ be a bounded Riemann-integrable circularly-symmetric function, and $\alpha(x)$ be defined such that \eqref{eq_alphai} is satisfied, then
\begin{equation}
\label{eq_nrmh1}
\sum_{\sbf{\lambda}\in\Lambda_0\setminus\{0\}} g(\sbf{\lambda})
\leq \beta \left[ \max_{x\leq e^{\delta}\lambda_{\textnormal{max}}} \alpha(x) \right] \int_{\mathds{R}^n} g(\mbf{x}) d\mbf{x}
\end{equation}
where $\lambda_{\textnormal{max}}$ is the maximal $\norm{\mbf{x}}$ for which $g(\mbf{x})\neq 0$.
\end{thm}
\begin{proof}
Substitute a specific $\alpha(x)$ for $\mathfrak{N}(x)$ in \eqref{eq_mean_value_2}, and upper bound by taking the maximum value of $\alpha(x)$ over the integrated region, outside the integral.
\end{proof}
\begin{thm}[extended DMHS (eDMHS)]
\label{thm_nrmh2}
Let $\Lambda_0$ be a specific $n$-dimensional lattice with density $\beta$, $g(\sbf{\lambda})$ be a bounded Riemann-integrable circularly-symmetric function, $\alpha(x)$ be defined such that \eqref{eq_alphai} is satisfied, and $M$ be a positive integer, then
\begin{align}
\label{eq_nrmh2}
\sum_{\sbf{\lambda}\in\Lambda_0/\{0\}} g(\sbf{\lambda})
\leq \beta \min_{\{R_j\}_{j=1}^M} \left( \sum_{j=1}^M \max_{\mathcal{R}_j} \alpha(x) \int_{\sbf{\mathcal{R}}_j} g(\mbf{x}) d\mbf{x} \right)
\end{align}
with
\begin{align}
&\mathcal{R}_j = \{x: x\geq 0, e^{\delta}R_{j-1}<x\leq e^{\delta}R_j\} \\
&\sbf{\mathcal{R}}_j = \{\mbf{x}: \mbf{x}\in\mathds{R}^n, R_{j-1}<\norm{\mbf{x}}\leq R_j\}
\end{align}
where $\{R_j\}_{j=1}^M$ is an ordered set of real numbers with $R_0\triangleq 0$ and $R_M=\lambda_{\textnormal{max}}$, where $\lambda_{\textnormal{max}}$ is the maximal $\norm{\mbf{x}}$ for which $g(\mbf{x})\neq 0$.
\end{thm}
\begin{proof}
Substitute a specific $\alpha(x)$ for $\mathfrak{N}(x)$ in \eqref{eq_mean_value_2} and break up the integral over a non-overlapping set of spherical shells whose union equals $\mathds{R}^n$. Upper bound each shell integral by taking the maximum value of $\alpha(x)$ over it, outside the integral. Finally, the set of shells, or rather shell-defining radii is optimized such that the bound is tightest.
\end{proof}
The eDMHS may be viewed as a generalization of the DMHS, since for $M=1$ it reduces to it.
In addition, when $M\rightarrow\infty$ the eDMHS tends to the original integral.
Clearly, both bounds are sensitive to the choice of $\alpha(x)$, though one should note that for the same choice of $\alpha(x)$ the second bound is always tighter.
The bounds shown above are general up to choice of $\alpha(x)$, and clarify the motivation for the substitution in \eqref{eq_alphai}.
Careful construction of $\alpha(x)$ along with the selected bound can provide a tradeoff between tightness and complexity.
The next section presents a few simple methods for construction of the function $\alpha(x)$, and their consequences.
\section{Construction of $\alpha(x)$}
Maximization of the right-hand-side of \eqref{eq_mean_value_2}, by taking out the maximum value of $\mathfrak{N}(x)$ outside the integral, is not well-defined.
This is because $\mathfrak{N}(x)$ is an impulse train.
The motivation of this section is to find a replacement for $\mathfrak{N}(x)$ that enables using this maximization technique whilst retaining a meaningful bound.
Assume that $g(\sbf{\lambda})$ is monotonically non-increasing\footnotemark{} in $\norm{\sbf{\lambda}}$, and define $N'(x)$ to be a smoothed version of the normalized continuous distance-spectrum, selected such that it satisfies
\footnotetext{This restriction holds for many continuous noise channels, such as AWGN. For other channels, it is possible to define similar heuristics.}
\begin{equation}
\label{eq_alpha1_2}
\int_{0^+}^r N(R) dR
\leq \int_{0^+}^r N'(R) dR
\qquad \forall r \in (0,e^\delta\lambda_{\textnormal{max}}].
\end{equation}
Given the above, $\alpha(x)$ can be defined by expanding \eqref{eq_alphai} as follows:
\begin{align}
\int_{\mathds{R}^n} \mathfrak{N}(\norm{\mbf{x}}) g\left(\frac{\mbf{x}}{e^{\delta}}\right) d\mbf{x}
&= \int_{0^+}^{e^\delta\lambda_{\textnormal{max}}} N(R) g\left(\frac{R}{e^\delta}\right) dR \nonumber \\
&\leq \int_{0^+}^{e^\delta\lambda_{\textnormal{max}}} N'(R) g\left(\frac{R}{e^\delta}\right) dR \nonumber \\
&= \int_{\mathds{R}^n} \alpha(\norm{\mbf{x}}) g\left(\frac{\mbf{x}}{e^{\delta}}\right) d\mbf{x}
\end{align}
with
\begin{equation}
\alpha(x)
\triangleq \left\{
\begin{array}{lr}
\frac{N'(x)}{nV_n x^{n-1}} & : x > 0 \\
0 & : x \leq 0
\end{array}
\right.
\end{equation}
where the equalities follow methods used in \eqref{eq_mean_value_1_proof} together with $g(\sbf{\lambda})$'s circular-symmetry, and the inequality follows from $g(\sbf{\lambda})$ being monotonically non-increasing together with $N'(x)$ obeying \eqref{eq_alpha1_2}.
Define $\{\alpha_i(x)\}_{i\in I}$ to be the set of all functions $\alpha(x)$ such that \eqref{eq_alpha1_2} is satisfied.
Specifically, let us define the following two functions:
\begin{itemize}
\item The first function $\alpha^{\textnormal{rng}}(x)$ is defined to be piecewise constant over shells defined by consecutive radii from the normalized distance series $\{\widetilde{\lambda}_j\}_{j=0}^\infty$, (i.e. a shell is defined as $\mathcal{S}_j = \{x: \widetilde{\lambda}_{j-1}<x\leq\widetilde{\lambda}_j\}$). The constant value for shell $j$ is selected such that the spectral mass $\mathcal{N}_j$ is spread evenly over the shell. Formally, this can be expressed as
\begin{equation}
\left[\alpha^{\textnormal{rng}}(x)\right]_{x\in\mathcal{S}_j} = \frac{\mathcal{N}_j}{V_n\left(\widetilde{\lambda}_j^n-\widetilde{\lambda}_{j-1}^n\right)}.
\end{equation}
\item The second function $\alpha^{\textnormal{opt}}(x)$ is selected to be $\alpha_j(x)$ such that the bound \eqref{eq_nrmh1} is tightest; thus by definition
\begin{equation}
\label{eq_alpha1_3}
j = \argmin_{i \in I} \max_{x\leq e^{\delta}\lambda_{\textnormal{max}}} \alpha_i(x).
\end{equation}
As it turns out $\alpha^{\textnormal{opt}}(x)$ is also piecewise constant over shells defined by consecutive radii from the normalized distance series, and can be obtained as the solution to a linear program presented in appendix \ref{app_linear_program_alpha1}.
One subtlety that can go unnoticed about $\alpha^{\textnormal{opt}}(x)$ is its dependence on $\lambda_{\textnormal{max}}$.
Careful examination reveals that $\alpha^{\textnormal{opt}}(x)$ is constructed from those spectral elements whose corresponding distances are less than or equal to $\lambda_{\textnormal{max}}$.
This is of relevance when optimization of $\lambda_{\textnormal{max}}$ is dependent on $\alpha^{\textnormal{opt}}(x)$, as is the case in some of the bounds discussed hereafter.
This technical subtlety can be overcome by construction of a suboptimal version of $\alpha^{\textnormal{opt}}(x)$ consisting of more spectral elements than necessary.
In many cases both the suboptimal and optimal versions coincide.
In the remainder of this paper, this technicality is ignored and left for the reader's consideration.
\end{itemize}
Figure \ref{fig_z2_alpha} illustrates $\mathfrak{N}(x)$, $\alpha^{\textnormal{rng}}(x)$, and $\alpha^{\textnormal{opt}}(x)$ for the rectangular lattice $\mathds{Z}^2$.
\begin{figure}[htp]
\center{\includegraphics[width=0.5\textwidth]{z2_alpha}}
\caption{\label{fig_z2_alpha} $\mathfrak{N}(x)$, $\alpha^{\textnormal{rng}}(x)$, and $\alpha^{\textnormal{opt}}(x)$ for the rectangular lattice $\mathds{Z}^2$.}
\end{figure}
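For concreteness, $\alpha^{\textnormal{rng}}(x)$ can be evaluated directly from a truncated spectrum; the following sketch (in Python, with the normalized distance series and spectrum supplied as lists) is illustrative only.
\begin{verbatim}
import math

def unit_ball_volume(n):
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def alpha_rng(x, dists, spectrum, n):
    # dists: normalized distance series lambda_1 < lambda_2 < ...
    # spectrum: corresponding counts N_1, N_2, ...
    Vn = unit_ball_volume(n)
    prev = 0.0
    for lam, N in zip(dists, spectrum):
        if prev < x <= lam:
            return N / (Vn * (lam ** n - prev ** n))
        prev = lam
    return 0.0  # x <= 0 or beyond the truncated spectrum
\end{verbatim}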
\section{A General ML Decoding Upper Bound}
Many tight ML upper bounds originate from a general bounding technique, developed by Gallager \cite{b:GallagerLowDensity}. Gallager's technique has been utilized extensively in literature \cite{j:ShamaiVariationsOnThe,j:YousefiANewUpper,j:TwittoTightenedUpperBounds}.
Similar forms of the general bound, displayed hereafter, have previously been presented in the literature \cite{j:HerzbergTechniquesofBounding,j:IngberFiniteDimensional}.
Playing a central role in our analysis, we present it as a theorem.
Before proceeding with the theorem, let us define an Additive Circularly-Symmetric Noise (ACSN) channel as an additive continuous noise channel whose noise is isotropically distributed with a density that is non-increasing in its norm.
\begin{thm}[General ML Upper Bound]
\label{thm_bound_general}
Let $\Lambda$ be an $n$-dimensional lattice, and $f_{\norm{\mbf{z}}}(\rho)$ the \pdf of an ACSN channel's noise vector's norm, then the error probability of an ML decoder is upper bounded by
\begin{align}
\label{eq_bound_general}
P_e(\Lambda)
\leq \min_{r} & \left(\sum_{\sbf{\lambda}\in\Lambda\setminus\{0\}} \int_0^r f_{\norm{\mbf{z}}}(\rho) P_2(\sbf{\lambda},\rho) d\rho \right. \\ \nonumber
& \qquad + \left. \int_r^\infty f_{\norm{\mbf{z}}}(\rho) d\rho \right)
\end{align}
where the pairwise error probability conditioned on $\norm{\mbf{z}}=\rho$ is defined as
\begin{equation}
\label{eq_p2}
P_2(\sbf{\lambda},\rho) = \Pr(\sbf{\lambda}\in \Ball(\mbf{z},\norm{\mbf{z}})|\norm{\mbf{z}}=\rho)
\end{equation}
where $\Ball(\mbf{z},\norm{\mbf{z}})$ is an $n$-dimensional ball of radius $\norm{\mbf{z}}$ centered around $\mbf{z}$.
\end{thm}
\begin{proof}
See appendix \ref{app_bound_general}.
\end{proof}
We call the first term of \eqref{eq_bound_general} the Union Bound Term (UBT) and the second term the Sphere Bound Term (SBT) for obvious reasons.
In general \eqref{eq_p2} is difficult to quantify.
One method to overcome this, which is limited for analysis of specific lattices, is by averaging over the MHS ensemble.
New methods for upper bounding \eqref{eq_bound_general} for specific lattices are presented in the next section.
\section{Applications of the General ML Bound}
In this section, the UBT of the ML decoding upper bound \eqref{eq_bound_general} is further bounded using different bounding methods. The resulting applications vary in purpose and simplicity, and exhibit different performance. We present the applications and discuss their differences.
\subsection{MHS}
Application of the MHS theorem from \eqref{eq_mhs}, leads to the random-coding error bound on lattices.
Since it is based on the MHS ensemble average, the random-coding bound proves the existence of a lattice bounded by it, but does not aid in finding such a lattice; neither does it provide tools for examining specific lattices.
\begin{thm}[MHS Bound, Theorem 5 of \cite{j:IngberFiniteDimensional}]
\label{thm_bound_mhs}
Let $f_{\norm{\mbf{z}}}(\rho)$ be the \pdf of an ACSN channel's noise vector's norm, then there exists an $n$-dimensional lattice $\Lambda$ of density $\beta$ for which the error probability of an ML decoder is upper bounded by
\begin{equation}
\label{eq_bound_mhs}
P_e(\Lambda)
\leq \beta V_n \int_0^{r^*} f_{\norm{\mbf{z}}}(\rho) \rho^n d\rho + \int_{r^*}^\infty f_{\norm{\mbf{z}}}(\rho) d\rho
\end{equation}
with
\begin{equation}
r^* = (\beta V_n)^{-1/n}.
\end{equation}
\end{thm}
\begin{proof}
Set $g(\sbf{\lambda})$ as
\begin{equation}
\label{eq_bound_mhs_g}
g(\sbf{\lambda})
= \int_0^r f_{\norm{\mbf{z}}}(\rho) P_2(\sbf{\lambda},\rho) d\rho,
\end{equation}
noting that it is a bounded function of $\sbf{\lambda}$ and continue to bound the UBT from \eqref{eq_bound_general} using \eqref{eq_mhs}.
The remainder of the proof is presented in appendix \ref{app_bound_mhs}.
\end{proof}
\subsection{DMHS}
Application of the DMHS theorem \eqref{eq_nrmh1} using $\alpha(x)=\alpha^{\textnormal{opt}}(x)$ provides a tool for examining specific lattices. The resulting bound is essentially identical to the MHS bound, excluding a scalar multiplier of the UBT. It is noteworthy that this is the best $\alpha(x)$-based bound of this form, since $\alpha^{\textnormal{opt}}(x)$ is optimized with regards to DMHS.
\begin{thm}[DMHS Bound]
\label{thm_bound_nrmh1}
Let a specific $n$-dimensional lattice $\Lambda_0$ of density $\beta$ be transmitted over an ACSN channel with $f_{\norm{\mbf{z}}}(\rho)$ the \pdf of its noise vector's norm, then the error probability of an ML decoder is upper bounded by
\begin{equation}
\label{eq_bound_nrmh1}
P_e(\Lambda_0)
\leq \min_r \left( \alpha \beta V_n \int_0^{r} f_{\norm{\mbf{z}}}(\rho) \rho^n d\rho + \int_{r}^\infty f_{\norm{\mbf{z}}}(\rho) d\rho \right)
\end{equation}
with
\begin{equation}
\label{eq_bound_nrmh1_alpha}
\alpha = \max_{x\leq e^{\delta}\cdot2r} \alpha^{\textnormal{opt}}(x)
\end{equation}
where $\alpha^{\textnormal{opt}}(x)$ is as defined by \eqref{eq_alpha1_3}.
\end{thm}
\begin{proof}
Set $g(\sbf{\lambda})$ as in \eqref{eq_bound_mhs_g}, noting that it is bounded by $\lambda_{\textnormal{max}}=2r$. The remainder is identical to the proof of theorem \ref{thm_bound_mhs} replacing $\beta$ with $\alpha\beta$.
\end{proof}
Optimization of $r$ can be performed in the following manner.
Since $\alpha^{\textnormal{opt}}(x)$ is a monotonically non-increasing function of $x$, $r$ can be optimized using an iterative numerical algorithm.
In the first iteration, set $r = (\beta V_n)^{-1/n}$ and calculate $\alpha$ according to \eqref{eq_bound_nrmh1_alpha}.
In each additional iteration, set $r = (\alpha\beta V_n)^{-1/n}$ and recalculate $\alpha$. The algorithm terminates at the first iteration in which $\alpha$ is unchanged.
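A minimal sketch of this fixed-point iteration, assuming a callable \texttt{alpha\_opt\_max} that returns $\max_{x\leq e^{\delta}\cdot2r} \alpha^{\textnormal{opt}}(x)$:
\begin{verbatim}
def optimize_r(alpha_opt_max, beta, n, Vn, max_iter=100):
    # DMHS radius optimization by fixed-point iteration on alpha.
    r = (beta * Vn) ** (-1.0 / n)
    alpha = alpha_opt_max(r)
    for _ in range(max_iter):
        r = (alpha * beta * Vn) ** (-1.0 / n)
        new_alpha = alpha_opt_max(r)
        if new_alpha == alpha:  # terminate when alpha is unchanged
            break
        alpha = new_alpha
    return r, alpha
\end{verbatim}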
\subsection{eDMHS}
Rather than maximizing the UBT using a single scalar factor (as was done in the DMHS), the eDMHS splits the UBT integral into several regions with boundaries defined by $\{\lambda_j\}_{j=0}^\infty$.
Maximizing each resulting region by its own scalar results in a much tighter, yet more complex, bound.
This is typically preferred for error bounding in finite-dimensional lattices.
For asymptotic analysis, the eDMHS would typically require an increasing (with the dimension) number of spectral elements, while the DMHS would still require only one, making the DMHS the favorable choice.
We choose $\alpha(x)=\alpha^{\textnormal{opt}}(x)$, rather than optimizing $\alpha(x)$ for this case.
The reason is that although this bound is tighter for the finite dimension case, it is considerably more complex than a competing bound (for the finite dimension case) presented in the next subsection.
The main motivation for showing this bound is as a generalization of DMHS.
\begin{thm}[eDMHS Bound]
\label{thm_bound_nrmh2}
Let a specific $n$-dimensional lattice $\Lambda_0$ of density $\beta$ be transmitted over an ACSN channel with $f_{\norm{\mbf{z}}}(\rho)$ the \pdf of its noise vector's norm, then the error probability of an ML decoder is upper bounded by
\begin{align}
\label{eq_bound_nrmh2}
P_e(\Lambda_0)
\leq \min_r & \left( \beta \sum_{j=1}^M \alpha_j \int_{\lambda_j/2}^{r} f_{\norm{\mbf{z}}}(\rho) h_j(\rho) d\rho \right. \nonumber \\
& \qquad + \left. \int_{r}^\infty f_{\norm{\mbf{z}}}(\rho) d\rho \right)
\end{align}
with
\begin{align}
&\alpha_j = \max_{\mathcal{S}_j} \alpha^{\textnormal{opt}}(x) = \alpha^{\textnormal{opt}}(e^{\delta}\lambda_j) \\
&h_j(\rho) = \int_{\sbf{\mathcal{S}}_j} \sigma\{\mbf{x}\in \Ball(\mbf{z},\norm{\mbf{z}})| \norm{\mbf{z}}=\rho \} d\mbf{x} \\
&\mathcal{S}_j = \{x: x\geq 0, e^{\delta}\lambda_{j-1}<x\leq e^{\delta}\lambda_j\} \\
&\sbf{\mathcal{S}}_j = \{\mbf{x}: \mbf{x}\in\mathds{R}^n, \lambda_{j-1}<\norm{\mbf{x}}\leq \lambda_j\}
\end{align}
where $\alpha^{\textnormal{opt}}(x)$ is as defined in \eqref{eq_alpha1_3}, $\sigma\{\mbf{x}\in \Ball(\mbf{z},\norm{\mbf{z}})\}$ is the characteristic function of $\Ball(\mbf{z},\norm{\mbf{z}})$, $\{\lambda_j\}_{j=0}^\infty$ is the previously defined distance series of $\Lambda_0$, and $M$ is the maximal index $j$ such that $\lambda_j\leq 2r$.
\end{thm}
\begin{proof}
We set $g(\sbf{\lambda})$ as in \eqref{eq_bound_mhs_g} noting that it is bounded by $\lambda_{\textnormal{max}}=2r$.
We continue by remembering that $\alpha^{\textnormal{opt}}(x)$ is piecewise constant in the shells $\mathcal{S}_j$, and therefore \eqref{eq_nrmh2} collapses to
\begin{equation}
\label{eq_nrmh2_alpha1}
\sum_{\sbf{\lambda}\in\Lambda_0/\{0\}} g(\sbf{\lambda})
\leq \beta \sum_{j=0}^M \alpha_j \int_{\sbf{\mathcal{S}}_j} g(\mbf{x}) d\mbf{x}.
\end{equation}
For the remainder we continue in a similar manner to the proof of theorem \ref{thm_bound_mhs} by upper bounding the UBT from \eqref{eq_bound_general} using \eqref{eq_nrmh2_alpha1}. See appendix \ref{app_bound_nrmh2}.
\end{proof}
A geometrical interpretation of $h_j(\rho)$ is presented in figure \ref{fig_h_j}.
\input{h_j.tpx}
\subsection{Sphere Upper Bound (SUB)}
Another bound involving several elements of the spectrum can be constructed by directly bounding the conditional pairwise error probability $P_2(\mbf{x},\rho)$.
This bound is considerably more complex than the DMHS, but is potentially much tighter for the finite dimension case.
When the channel is restricted to AWGN, the resulting bound is similar in performance to the SUB of \cite{j:HerzbergTechniquesofBounding}, hence the name.
The bounding technique for $P_2(\mbf{x},\rho)$, presented in the following lemma, is based on \cite{j:LomnitzCommunicationOver}.
\begin{lem}[Appendix D of \cite{j:LomnitzCommunicationOver}]
\label{lem_prob_ball}
Let $\mbf{x}$ be a vector point in $n$-space with norm $\norm{\mbf{x}}\leq2\rho$, $\mbf{z}$ an isotropically distributed $n$-dimensional random vector, and $\rho$ a real number then
\begin{equation}
\label{eq_prob_ball}
\Pr(\mbf{x}\in \Ball(\mbf{z},\norm{\mbf{z}})|\norm{\mbf{z}}=\rho)
\leq \left( 1- \left( \frac{\norm{\mbf{x}}}{2\rho} \right)^2 \right)^{\frac{n-1}{2}}
\end{equation}
\end{lem}
\begin{proof}
See appendix \ref{app_prob_ball}.
\end{proof}
The above lemma leads directly to the following theorem.
\begin{thm}[SUB]
\label{thm_bound_sub}
Let a specific $n$-dimensional lattice $\Lambda_0$ of density $\beta$ be transmitted over an ACSN channel with $f_{\norm{\mbf{z}}}(\rho)$ the \pdf of its noise vector's norm, then the error probability of an ML decoder is upper bounded by
\begin{align}
\label{eq_bound_sub}
P_e(\Lambda_0)
\leq \min_r & \left( \sum_{j=1}^M \mathcal{N}_j \int_{\lambda_j/2}^{r} f_{\norm{\mbf{z}}}(\rho) \left( 1- \left( \frac{\lambda_j}{2\rho} \right)^2 \right)^{\frac{n-1}{2}} d\rho \right. \nonumber \\
& \qquad + \left. \int_{r}^\infty f_{\norm{\mbf{z}}}(\rho) d\rho \right)
\end{align}
where $\{\lambda_j\}_{j=1}^\infty$ and $\{\mathcal{N}_j\}_{j=1}^\infty$ are the previously defined distance series and spectrum of $\Lambda_0$ respectively, and $M$ is the maximal index $j$ such that $\lambda_j\leq 2r$.
\end{thm}
\begin{proof}
Bound the UBT of \eqref{eq_bound_general} directly, using \eqref{eq_prob_ball}. See appendix \ref{app_bound_sub}.
\end{proof}
Optimization of $r$ can be performed in the following manner.
We begin by analyzing the function $f(\rho)=\left( 1- \left( \frac{\lambda_j}{2\rho} \right)^2 \right)^{\frac{n-1}{2}}$.
It is simple to verify that this function is positive and strictly increasing in the domain $\{\rho:\rho\geq\lambda_j/2\}$.
Since the UBT of \eqref{eq_bound_sub} is a positive sum of such functions, it is positive and monotonically nondecreasing in $r$.
Since additionally the SBT of \eqref{eq_bound_sub} is always positive, it suffices to search for the optimal $r$ between $\lambda_1/2$ and an $r_{\textnormal{max}}$, where $r_{\textnormal{max}}$ is defined such that the UBT is greater than or equal to $1$.
We continue by calculating $M_{\textnormal{max}}$ that corresponds to $r_{\textnormal{max}}$. By definition, each selection of $M$ corresponds to a domain $\{r:\lambda_M\leq 2r<\lambda_{M+1}\}$.
Instead of searching for the optimal $r$ over the whole domain $\{r:\lambda_1/2\leq r<r_{\textnormal{max}}\}$, we search over all sub-domains corresponding to $1\leq M\leq M_{\textnormal{max}}$.
When $M$ is constant, independent of $r$, the optimal $r$ is found by equating the differential of \eqref{eq_bound_sub} to $0$, in the domain $\{r:\lambda_M\leq 2r<\lambda_{M+1}\}$. Equating the differential to $0$ results in the following condition on $r$:
\begin{equation}
\sum_{j=1}^M \mathcal{N}_j \left( 1- \left( \frac{\lambda_j}{2r} \right)^2 \right)^{\frac{n-1}{2}}-1 = 0.
\end{equation}
The function on the left of the above condition is monotonically nondecreasing (as shown previously), so an optimal $r$ exists in $\{r:\lambda_M\leq 2r<\lambda_{M+1}\}$ iff its values on the domain edges are of opposite signs. If no optimum exists in the domain, it could exist on one of the domain edges.
The optimization algorithm proceeds as follows: set $M=1$ and examine the differential at the domain edges $\lambda_M/2$ and $\lambda_{M+1}/2$.
If the edges are of opposite signs, find the exact $r$ that zeros the differential in the domain and store it as $r_{\textnormal{zc}}^M$.
Otherwise set $r_{\textnormal{zc}}^M=\emptyset$, advance $M$ by $1$ and repeat the process.
The process is terminated at $M=M_{\textnormal{max}}$.
When done, evaluate \eqref{eq_bound_sub} at $r=r_{\textnormal{zc}}^1,\dots,r_{\textnormal{zc}}^M,\lambda_1/2,\dots,\lambda_{M_{\textnormal{max}}}/2$ and select the $r$ that minimizes the bound.
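Alternatively, for numerical experiments one can simply evaluate \eqref{eq_bound_sub} on a grid of candidate radii. The sketch below assumes an AWGN channel, for which $\norm{\mbf{z}}$ follows a scaled chi distribution; it is a brute-force illustration rather than the exact sign-change search described above.
\begin{verbatim}
import numpy as np
from scipy import integrate
from scipy.special import gammaln

def norm_pdf(rho, n, sigma):
    # pdf of the norm of an n-dim Gaussian, variance sigma^2 per component
    logf = ((n - 1) * np.log(rho) - rho ** 2 / (2 * sigma ** 2)
            - (n / 2 - 1) * np.log(2) - n * np.log(sigma) - gammaln(n / 2))
    return np.exp(logf)

def sub_bound(r, dists, spectrum, n, sigma):
    ubt = 0.0
    for lam, N in zip(dists, spectrum):
        if lam > 2 * r:  # only shells with lambda_j <= 2r contribute
            break
        f = lambda rho, lam=lam: (norm_pdf(rho, n, sigma)
            * (1 - (lam / (2 * rho)) ** 2) ** ((n - 1) / 2))
        ubt += N * integrate.quad(f, lam / 2, r)[0]
    sbt = integrate.quad(lambda rho: norm_pdf(rho, n, sigma), r, np.inf)[0]
    return ubt + sbt
\end{verbatim}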
We conclude this section with an example presented in figure \ref{fig_leech}, which illustrates the effectiveness of the new bounds for finite dimension.
The error probability of the Leech\footnotemark{} lattice $\Lambda_{24}$ is upper bounded by DMHS \eqref{eq_bound_nrmh1} and SUB \eqref{eq_bound_sub}.
The ordinary Union Bound (UB), the MHS bound for dimension $24$ \eqref{eq_bound_mhs}, and the Sphere Lower Bound (SLB) of \cite{j:TarokhUniversalBound} are added for reference.
The spectral data is taken from \cite{b:ConwaySpherePackings}.
The bounds are calculated for an AWGN channel with noise variance $\sigma^2$.
\footnotetext{The Leech lattice is the densest lattice packing in 24 dimensions.}
\begin{figure}[htp]
\center{\includegraphics[width=0.5\textwidth]{leech}}
\caption{\label{fig_leech} A comparison of DMHS and SUB for the Leech lattice. The UB, MHS and SLB are added for reference. The graph shows the error probability as a function of the Volume-to-Noise Ratio (VNR) for rates $\delta^*<\delta<\delta_{cr}$.}
\end{figure}
\section{Error Exponents for the AWGN Channel}
The previous section presented three new spectrum based upper bounds for specific lattices.
As stated there, only the DMHS bound seems suitable for asymptotical analysis.
This section uses the DMHS bound to construct an error-exponent for specific lattices over the AWGN channel.
Although this case is of prominent importance, it is important to keep in mind that it is only one extension of the DMHS bound.
In general, DMHS bound extensions are applicable wherever MHS bound extensions are.
When the channel is AWGN with noise variance $\sigma^2$, the upper error bound on the ML decoding of a ``good''\footnotemark{} lattice from the MHS ensemble \eqref{eq_bound_mhs} can be expressed in the following exponential form \cite{j:PoltyrevOnCoding}, \cite{j:IngberFiniteDimensional}
\footnotetext{Define a ``good'' lattice, from an ensemble, as one that is upper bounded by the ensemble's average.}
\begin{equation}
\label{eq_bound_mhs_exp}
P_e(\Lambda)
\leq e^{-n(E_r(\delta)+o(1))}
\end{equation}
with
\begin{equation}
E_r(\delta)
= \left\{
\begin{array}{ll}
(\delta^*-\delta) + \log{\frac{e}{4}}, & \delta\leq\delta_{cr} \\
\frac{e^{2(\delta^*-\delta)}-2(\delta^*-\delta)-1}{2},
& \delta_{cr}\leq\delta<\delta^* \\
0, & \delta\geq\delta^*
\end{array} \right.
\end{equation}
\begin{align}
\delta^* &= \frac{1}{2}\log{\frac{1}{2\pi e\sigma^2}} \\
\delta_{cr} &= \frac{1}{2}\log{\frac{1}{4\pi e\sigma^2}}
\end{align}
where $o(1)$ goes to zero asymptotically with $n$.
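As a numerical reference, $E_r(\delta)$ and the two critical densities can be evaluated directly from the expressions above; a short sketch:
\begin{verbatim}
import math

def poltyrev_exponent(delta, sigma2):
    d_star = 0.5 * math.log(1.0 / (2 * math.pi * math.e * sigma2))
    d_cr = d_star - 0.5 * math.log(2.0)
    if delta >= d_star:
        return 0.0
    if delta <= d_cr:
        return (d_star - delta) + math.log(math.e / 4.0)
    gap = d_star - delta  # delta_cr <= delta < delta_star
    return (math.exp(2 * gap) - 2 * gap - 1) / 2
\end{verbatim}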
This error-exponent can be directly deduced from the MHS bound \eqref{eq_bound_mhs}.
By applying similar methods to the DMHS bound \eqref{eq_bound_nrmh1}, it is possible to construct an error-exponent for a specific lattice \textit{sequence} based on its distance-spectrum.
\begin{thm}[Non-Random Coding Error Exponent]
Let $\Lambda_0[n]$ be a specific lattice \textit{sequence} transmitted over an AWGN channel with noise variance $\sigma^2$, then the error probability of an ML decoder is upper bounded by
\begin{equation}
\label{eq_bound_nrmh1_exp}
P_e(\Lambda_0[n])
\leq e^{-n(E_r(\delta+\nu[n])+o(1))}
\end{equation}
with
\begin{equation}
\nu[n] \triangleq \frac{1}{n}\log{\alpha[n]}.
\end{equation}
where $[n]$ indicates the $n$'th element of the sequence.
\end{thm}
\begin{proof}
It follows from the proof of theorem \ref{thm_bound_nrmh1} that replacing $\beta$ with $\beta\alpha[n]$ there is equivalent to replacing $\delta$ with $\delta+\nu[n]$ here.
\end{proof}
Clearly \eqref{eq_bound_nrmh1_exp} can be used to determine the exponential decay of the error probability of a specific lattice \textit{sequence}. This leads to the following corollary.
\begin{cor}[Gap to Capacity 1]
A lattice \textit{sequence} for which
\begin{equation}
\lim_{n\rightarrow\infty} \frac{1}{n}\log{\alpha[n]} = 0
\end{equation}
achieves the unrestricted channel error-exponent.
\end{cor}
\begin{proof}
Follows immediately from \eqref{eq_bound_nrmh1_exp} and the definition of $\nu[n]$.
\end{proof}
When certain stronger conditions on the lattice apply, the following simpler corollary can be used.
\begin{cor}[Gap to Capacity 2]
\label{cor_cap_gap_2}
A lattice \textit{sequence} for which $\alpha^{\textnormal{rng}}(x)$ (per dimension $n$) is monotonically non-increasing in $x$ and
\begin{equation}
\label{eq_cor_cap_gap_2}
\lim_{n\rightarrow\infty} \frac{1}{n} \log \left( \frac{\mathcal{N}_1[n]}{e^{n\delta}V_n(\lambda_1[n])^n} \right) = 0
\end{equation}
achieves the unrestricted channel error-exponent.
\end{cor}
\begin{proof}
When $\alpha^{\textnormal{rng}}(x)$ is monotonically non-increasing then $\alpha^{\textnormal{opt}}(x) = \alpha^{\textnormal{rng}}(x)$ and $\alpha[n] = \frac{\mathcal{N}_1[n]}{e^{n\delta}V_n(\lambda_1[n])^n}$.
\end{proof}
Let us examine corollary \ref{cor_cap_gap_2}.
Assume a sequence of lattices with minimum distance $\lambda_1[n]=e^{-\delta}V_n^{-1/n}$ is available (see \cite{p:IngberExpurgatedInfiniteConstellations} for proof of existence).
Plugging this back into \eqref{eq_cor_cap_gap_2} leads to the following necessary condition for the bound \eqref{eq_bound_nrmh1_exp} to achieve the unrestricted channel error-exponent
\begin{equation}
\label{eq_cor_cap_gap_2_eg}
\lim_{n\rightarrow\infty} \frac{1}{n} \log \left( \mathcal{N}_1[n] \right) = 0,
\end{equation}
whether the monotonicity condition applies or not.
Although this only amounts to conditioning on the bound (and not on the lattice sequence), condition \eqref{eq_cor_cap_gap_2_eg} gives an insight to the close relationship between the spectral distances and their enumeration.
We conjecture that at rates close to capacity, the bound is tight, leading to \eqref{eq_cor_cap_gap_2} being a necessary condition on the lattice sequence itself.
We conclude this section with an illustration of the exponential decay series $\nu[n]$ of the first three lattices of the Barnes-Wall lattice \textit{sequence} $BW_4=D_4$, $BW_8=E_8$, and $BW_{16}=\Lambda_{16}$.
Unfortunately, the distance-spectrum of $BW_n$ is generally unknown, preventing asymptotic analysis.
Nonetheless an interpolation for dimension $4$ to $16$ is presented in figure \ref{fig_bw_nu}.
The spectral data is taken from \cite{b:ConwaySpherePackings}.
\begin{figure}[htp]
\center{\includegraphics[width=0.5\textwidth]{bw_nu}}
\caption{\label{fig_bw_nu} The exponential decay series $\nu[n]$ for the lattice \textit{sequence} $BW_n$, calculated for dimensions $4$, $8$, and $16$ and interpolated in-between.}
\end{figure}
Examination of figure \ref{fig_bw_nu} shows that the upper bound on the gap to capacity decreases with the increase in $n$, at least for the first three lattices in the sequence.
Although full spectral data is not available for the remainder of the sequence, the minimal distance and its enumeration are known analytically.
Assuming momentarily that the condition on $\alpha^{\textnormal{rng}}(x)$ as presented in corollary \ref{cor_cap_gap_2} holds, we can try to examine $\frac{1}{n} \log \left( \frac{\mathcal{N}_1[n]}{e^{n\delta}V_n(\lambda_1[n])^n} \right)$ as an upper bound on the gap to capacity.
This examination is illustrated in figure \ref{fig_bw_nu_ext}, where the first three lattices coincide with the previous results in figure \ref{fig_bw_nu}.
Clearly, these results are a lower bound on an upper bound and cannot indicate one way or the other.
Nonetheless, it seems that the results are consistent with the well-known coding performance of Barnes-Wall lattices with increasing dimension.
\begin{figure}[htp]
\center{\includegraphics[width=0.5\textwidth]{bw_nu_ext}}
\caption{\label{fig_bw_nu_ext} $\frac{1}{n} \log \left( \frac{\mathcal{N}_1[n]}{e^{n\delta}V_n(\lambda_1[n])^n} \right)$ for the first few dimensions of the lattice \textit{sequence} $BW_n$. Coincides with $\frac{1}{n}\log{\alpha[n]}$ for the first three lattices.}
\end{figure}
\section{Introduction}
Consider the following discrete boundary value problem for a bounded open set $\Omega \subseteq \mathbb{R}^2$ with the exterior ball condition. For each integer $n > 0$, let $u_n : \mathbb{Z}^2 \to \mathbb{Z}$ be the point-wise least function that satisfies
\begin{equation}
\label{e.fde}
\begin{cases}
\Delta u_n \leq 2 & \mbox{in } \mathbb{Z}^2 \cap n \Omega \\
u_n \geq 0 & \mbox{in } \mathbb{Z}^2 \setminus n \Omega,
\end{cases}
\end{equation}
where $\Delta u(x) = \sum_{y \sim x} (u(y) - u(x))$ is the Laplacian on $\mathbb{Z}^2$. Were it not for the integer constraint on the range of $u_n$, this would be the standard finite difference approximation of the Poisson problem on $\Omega$. The integer constraint imposes a non-linear structure that drastically changes the scaling limit. In particular, the Laplacian $\Delta u_n$ is not constant in general. For example, in the case where $\Omega$ is a unit square, depicted in \fref{identity}, $\Delta u_n$ has a fractal structure reminiscent of a Sierpinski gasket. Upon close inspection one finds that the triangular regions of this image, displayed in more detail in \fref{zoom}, are filled by periodic patterns.
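For intuition, the discrete Laplacian above is straightforward to evaluate on a grid; a minimal sketch (interior points only):
\begin{verbatim}
import numpy as np

def laplacian(u):
    # Delta u(x) = sum over the four neighbors y ~ x of (u(y) - u(x))
    return (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
            - 4 * u[1:-1, 1:-1])
\end{verbatim}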
\begin{figure}[h]
\includegraphics[width=.23\textwidth]{fde_3.png}
\hfill
\includegraphics[width=.23\textwidth]{fde_4.png}
\hfill
\includegraphics[width=.23\textwidth]{fde_5.png}
\hfill
\includegraphics[width=.23\textwidth]{fde_6.png}
\caption{$\Delta u_n$ for $\Omega = (0,1)^2$ and $n = 3^3, 3^4, 3^5, 3^6$. The colors blue, cyan, yellow, red correspond to values -1,0,1, 2.}
\label{f.identity}
\end{figure}
We know from \cite{Pegden-Smart} that the quadratic rescalings
\begin{equation*}
\bar u_n(x) = n^{-2} u_n([ n x])
\end{equation*}
converge uniformly as $n \to \infty$ to the solution of a certain partial differential equation. As a corollary, the rescaled Laplacians $\bar s_n(x) = \Delta u_n([n x])$ converge weakly-$*$ in $L^\infty(\Omega)$ as $n \to \infty$. That is, the average of $\bar s_n$ over any fixed ball converges as $n \to \infty$. The arguments establishing this are relatively soft and apply in great generality. In this article we describe how, when $\bar u$ is sufficiently regular, the convergence of the $\bar s_n$ can be improved.
To get an idea of what we aim to prove, consider \fref{zoom}, which displays the triangular patches of \fref{identity} in greater detail. It appears that, once a patch is formed, it is filled by a doubly periodic pattern, possibly with low-dimensional defects. This phenomenon has been known experimentally since at least the works of Ostojic \cite{Ostojic} and Dhar-Sadhu-Chandra \cite{Dhar-Sadhu-Chandra}. The recent work of Kalinin-Shkolnikov \cite{Kalinin-Shkolnikov} identifies the defects, in a more restricted context, as tropical curves.
The shapes of the limiting patches are known in many cases. Exact solutions for some other choices of domain are constructed by Levine and the authors \cite{Levine-Pegden-Smart-1}; the key point is that the notion of convergence used in this previous work ignores small-scale structure, and thus does not address the appearance of patterns. The ansatz of Sportiello \cite{Sportiello} can be used to adapt these methods to the square with cutoff 3, which yields the continuum limit of the sandpile identity on the square. Meanwhile, work of Levine and the authors \cite{Levine-Pegden-Smart-1} did classify the patterns which should appear in the sandpile, in the course of characterizing the structure of the continuum limit of the sandpile. To establish that the patterns themselves appear in the sandpile process, it remains to show that this pattern classification is exhaustive, and that the patterns actually appear where they are supposed to. In this manuscript we complete this framework, and our results allow one to prove that the triangular patches are indeed composed of periodic patterns, up to defects whose size we can control.
\begin{figure}[h]
\includegraphics[width=\textwidth]{zoom.png}
\caption{The patterns in the upper corner of $\Delta u_n$ for $\Omega = (0,1)^2$ and $n = 600$. The colors blue, cyan, yellow, red correspond to value $-1$, $0$, $1$, $2$.}
\label{f.zoom}
\end{figure}
We describe our result for \eref{fde} with $\Omega = (0,1)^2$, leaving the more general results for later. A doubly periodic {\em pattern} $p : \mathbb{Z}^2 \to \mathbb{Z}$ is said to $R$-match an {\em image} $s : \mathbb{Z}^2 \to \mathbb{Z}$ at $x \in \mathbb{Z}^2$ if, for some $y \in \mathbb{Z}^2$,
\begin{equation*}
s(x+z) = p(y+z) \quad \mbox{for } z \in \mathbb{Z}^2 \cap B_R,
\end{equation*}
where $B_R$ is the Euclidean ball of radius $R$ and center $0$.
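In code, the matching test is a finite check. The following Python sketch (an illustration of the definition, not part of the article's computations) makes it explicit; since $p$ is doubly periodic, it suffices to let the candidate shifts $y$ range over any finite set of representatives of a fundamental domain of its period lattice, which the caller supplies.
\begin{verbatim}
def r_matches(s, p, x, R, offsets):
    """Does the doubly periodic pattern p (a callable Z^2 -> Z) R-match
    the image s (a callable Z^2 -> Z) at x?  Brute force over the given
    finite list `offsets` of shifts y covering a fundamental domain."""
    ball = [(a, b)
            for a in range(-int(R), int(R) + 1)
            for b in range(-int(R), int(R) + 1)
            if a * a + b * b <= R * R]
    return any(all(s(x[0] + z0, x[1] + z1) == p(y0 + z0, y1 + z1)
                   for (z0, z1) in ball)
               for (y0, y1) in offsets)
\end{verbatim}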
\begin{theorem}
\label{t.main}
Suppose $\Omega = (0,1)^2$. There are disjoint open sets $\Omega_k \subseteq \Omega$ and doubly periodic patterns $p_k : \mathbb{Z}^2 \to \mathbb{Z}$ for each $k\geq 1$, and constants $L > 1$ and $\alpha, \tau \in (0,1)$ such that the following hold for all $n\geq 1$:
\begin{enumerate}
\item $|\Omega \setminus \cup_{1 \leq k \leq n} \Omega_k| \leq \tau^n$.
\item For all $1 < r < n$, the pattern $p_k$ $r$-matches the image $\Delta u_n$ at a $1 - L^k (n^{-\alpha} r)^{1/3}$ fraction of points in $n \Omega_k$.
\end{enumerate}
\end{theorem}
We expect that the exponents in this theorem, while effective, are suboptimal. In simulations, the pattern defects appear to be one dimensional. This leads to the following problem.
\begin{problem}
Improve the above estimate to a $1 - L^k n^{-1} r$ fraction of points.
\end{problem}
In fact, we expect that pattern convergence can be further improved in certain settings. We see below that the corners of the triangular patches in the continuum limit of \eref{fde} all have triadic rational coordinates. Moreover, when we select $n = 3^m$, the patterns appear without any defects, as in \fref{identity}. We expect this is not a coincidence. These so-called ``perfect Sierpinski'' sandpiles have been investigated by Sportiello \cite{Sportiello} and appear in many experiments.
\begin{problem}
\label{q.perfect}
Show that, when $n$ is a power of three, the patterns in patches larger than a constant size have no defects.
\end{problem}
Our proof has three main ingredients. First, we prove that the patterns in the Abelian sandpile are in some sense stable. This is a consequence of the classification theorem for the patterns and the growth lemma for elliptic equations in non-divergence form. Second, we obtain a rate of convergence to the scaling limit of the Abelian sandpile when the limit enjoys some additional regularity. This is essentially a consequence of the Alexandroff-Bakelman-Pucci estimate for uniformly elliptic equations. Third, the limit of \eref{fde} when $\Omega = (0,1)^2$ has a piece-wise quadratic solution that can be explicitly computed by our earlier work. The combination of these three ingredients implies that the patterns appear as \fref{identity} suggests.
The Matlab/Octave code used to compute the figures for this article is included in the arXiv upload and may be freely used and modified.
\subsection*{Acknowledgments} Both authors are partially supported by the National Science Foundation and the Sloan Foundation. The second author wishes to thank Alden Research Laboratory in Holden, Massachusetts, for their hospitality while some of this work was completed.
\section{Preliminaries}
\subsection{Recurrent functions} We recall the notion of being a locally least solution of the inequality in \eref{fde}.
\begin{definition}
A function $v : \mathbb{Z}^2 \to \mathbb{Z}$ is recurrent in $X \subseteq \mathbb{Z}^2$ if $\Delta v \leq 2$ in $X$ and
\begin{equation*}
\sup_Y (v - w) \leq \sup_{X \setminus Y} (v - w)
\end{equation*}
holds whenever $w : \mathbb{Z}^2 \to \mathbb{Z}$ satisfies $\Delta w \leq 2$ in a finite $Y \subseteq X$.
\end{definition}
With this terminology, $u_n$ is characterized by being recurrent in $\mathbb{Z}^2 \cap n \Omega$ and zero outside. The word recurrent usually refers to a condition on configurations $s : X \to \mathbb{N}$ in the sandpile literature \cite{Levine-Propp}. These notions are equivalent for configurations of the form $s = \Delta v$. That is, $v$ is a recurrent function if and only if $\Delta v$ is a recurrent configuration.
\subsection{Scaling limit}
We recall the scaling limit of the Abelian sandpile.
\begin{proposition}[\cite{Pegden-Smart}]
The rescaled solutions $\bar u_n$ of \eref{fde} converge uniformly to the unique solution $\bar u \in C(\mathbb{R}^2)$ of
\begin{equation}
\label{e.pde}
\begin{cases}
D^2 \bar u \in \partial \Gamma & \mbox{in } \Omega \\
\bar u = 0 & \mbox{on } \mathbb{R}^2 \setminus \Omega,
\end{cases}
\end{equation}
where $\Gamma \subseteq \mathbb{R}^{2 \times 2}_{sym}$ is the set of $2 \times 2$ real symmetric matrices $A$ for which there is a function $v : \mathbb{Z}^2 \to \mathbb{Z}$ satisfying
\begin{equation}
\label{e.odometer}
v(x) \geq \tfrac12 x \cdot A x + o(|x|^2) \quad \mbox{and} \quad \Delta v(x) \leq 2 \quad \mbox{for all } x \in \mathbb{Z}^2.
\end{equation}
\end{proposition}
The partial differential equation \eref{pde} is interpreted in the sense of viscosity. This means that if a smooth test function $\varphi \in C^\infty(\Omega)$ touches $\bar u$ from below or above at $x \in \Omega$, then the Hessian $D^2 \varphi(x)$ lies in $\Gamma$ or the closure of its complement, respectively. That this makes sense follows from standard viscosity solution theory, see for example \cite{Crandall}, and the following basic properties of the set $\Gamma$.
\begin{proposition}[\cite{Pegden-Smart}]
The following holds for all $A, B \in \mathbb{R}^{2 \times 2}_{sym}$.
\begin{enumerate}
\item $A \in \Gamma$ implies $\operatorname{tr} A \leq 2$.
\item $\operatorname{tr} A \leq 1$ implies $A \in \Gamma$.
\item $A \in \Gamma$ and $B \leq A$ implies $B \in \Gamma$.
\end{enumerate}
\end{proposition}
These basic properties tell us, among other things, that the differential inclusion \eref{pde} is degenerate elliptic and that any solution $\bar u$ satisfies the bounds
\begin{equation*}
1 \leq \operatorname{tr} D^2 \bar u \leq 2
\end{equation*}
in the sense of viscosity. This implies enough a priori regularity that we have a unique solution $\bar u \in C^{1,\alpha}(\Omega)$ for all $\alpha \in (0,1)$.
\subsection{Notation}
Our results make use of several arbitrary constants which we do not bother to determine. We number these according to the result in which they are defined; e.g., the constant $C_{\ref{p.structure}}$ is defined in Proposition \ref{p.structure}. In proofs, we allow Hardy notation for constants, which means that the letter $C$ denotes a positive universal constant that may differ in each instance. We let $D \varphi : \mathbb{R}^d \to \mathbb{R}^d$ and $D^2 \varphi : \mathbb{R}^d \to \mathbb{R}^{d \times d}_{sym}$ denote the gradient and Hessian of a function $\varphi \in C^2(\mathbb{R}^d)$. We let $|x|$ denote the $\ell^2$ norm of a vector $x \in \mathbb{R}^d$ and $|A|$ denote the $\ell^2$ operator norm of a matrix $A \in \mathbb{R}^{d \times e}$.
\subsection{Pattern classification}
The main theorem from \cite{Levine-Pegden-Smart-2} states that $\Gamma$ is the closure of its extremal points and that the set of extremal points has a special structure. We recall the ingredients that we need.
\begin{proposition}[\cite{Levine-Pegden-Smart-2}]
If
\begin{equation*}
\Gamma^+ = \{ P \in \Gamma : \mbox{there is an $\varepsilon > 0$ such that $P - \varepsilon I \leq A \in \Gamma$ implies $A \leq P$} \},
\end{equation*}
then $A \in \Gamma$ if and only if $A = \lim_{n \to \infty} A_n$ for some $A_n \leq P_n \in \Gamma^+$.
\end{proposition}
For each $P \in \Gamma^+$, there is a recurrent $o : \mathbb{Z}^2 \to \mathbb{Z}$ witnessing $P \in \Gamma$. These functions $o$, henceforth called odometers, enjoy a number of special properties. The most important for us is that the Laplacians $\Delta o$ are doubly periodic with nice structure. Some of the patterns are on display in \fref{patterns}. We exploit the structure of these patterns to prove stability.
\begin{figure}[h]
\includegraphics[angle=90,width=.23\textwidth]{pat_9_1_6.pdf}
\hfill
\includegraphics[angle=90,width=.23\textwidth]{pat_12_7_12.pdf}
\hfill
\includegraphics[angle=90,width=.23\textwidth]{pat_25_1_20.pdf}
\hfill
\includegraphics[angle=90,width=.23\textwidth]{pat_28_7_20.pdf}
\caption{The Laplacian $\Delta o$ for several $P \in \Gamma^+$. The colors blue, cyan, yellow, and red correspond to values $-1,0,1,2$.}
\label{f.patterns}
\end{figure}
\begin{proposition}[\cite{Levine-Pegden-Smart-2}]
\label{p.structure}
There is a universal constant $C_{\ref{p.structure}} > 0$ such that, for each $P \in \Gamma^+$, there are $A, V \in \mathbb{Z}^{2 \times 3}$, $T \subseteq \mathbb{Z}^2$, and a function $o : \mathbb{Z}^2 \to \mathbb{Z}$, henceforth called an odometer function, such that the following hold.
\begin{enumerate}
\item $PV = A$,
\item $1 \leq |V|^2 \leq C_{\ref{p.structure}} \det(V)$, where $|V|$ is the $\ell^2$ operator norm of $V$,
\item $A^t Q V + V^t Q A = Q'$, where $Q = \smat{0 & 1 \\ -1 & 0}$ and $Q' = \smat{0 & 1 & -1 \\ -1 & 0 & 1 \\ 1 & -1 & 0}$,
\item $A \smat{1\\1\\1} = V \smat{1\\1\\1} = \smat{0\\0}$,
\item $o$ is recurrent and there is a quadratic polynomial $q$ such that $D^2 q = P$, $o - q$ is $V \mathbb{Z}^3$-periodic, and $|o - q| \leq C_{\ref{p.structure}} |V|^2$,
\item $\Delta o = 2$ on $\partial T = \{ x \in T : y \sim x \mbox{ for some } y \in \mathbb{Z}^2 \setminus T \}$.
\item If $x \sim y \in \mathbb{Z}^2$, then there is $z \in \mathbb{Z}^3$ such that $x, y \in T + V z$.
\item If $z, w \in \mathbb{Z}^3$, then $(T + V z) \cap (T + V w) \neq \emptyset$ if and only if $|z - w|_1 \leq 1$.
\end{enumerate}
\end{proposition}
This proposition implies that $\Delta o$ is $V \mathbb{Z}^3$-periodic and that the set $\{ \Delta o = 2 \}$ has a unique infinite connected component. Moreover, there is a fundamental tile $T \subseteq \mathbb{Z}^2$ whose boundary is contained in $\{ \Delta o = 2 \}$ and whose $V \mathbb{Z}^3$-translations cover $\mathbb{Z}^2$ with overlap exactly on the boundaries. This structure is apparent in the examples in \fref{patterns}.
\subsection{Toppling cutoff}
In \eref{fde} we've used the bound $2$ on the right-hand side. In the language of sandpile dynamics, this means that sites topple whenever there are three or more particles at a vertex. We have also used this bound in the literature review above, although the cited papers state their theorems with the bounds 3 or 1; in particular, the paper \cite{Levine-Pegden-Smart-2} uses the bound $1$ for its results. (In fact, the published version of the paper \cite{Levine-Pegden-Smart-2} is inconsistent in its use of the cutoff, so that in a few places, a value of 2 appears where 0 would be correct; this inconsistency has been corrected in the arXiv version of the paper.) Translation between the conventions is performed by observing that the quadratic polynomial
\begin{equation*}
q(x) = \tfrac12 x_1 (x_1 + 1)
\end{equation*}
is integer-valued on $\mathbb{Z}^2$, satisfies $\Delta q \equiv 1$, and has Hessian $D^2 q \equiv \smat{1 & 0 \\ 0 & 0}$. Since, for any $\alpha \in \mathbb{Z}$, we have $\Delta (u + \alpha q) = \Delta u + \alpha$, we can shift the right-hand side by a constant by adding the corresponding multiple of $q$. Our choice of $2$ in this manuscript makes the scaling limit of \eref{fde} have a particularly nice structure.
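For instance, the identity $\Delta q \equiv 1$ is the elementary second-difference computation
\begin{equation*}
\Delta q(x) = \tfrac12 (x_1+1)(x_1+2) + \tfrac12 (x_1-1)x_1 - x_1(x_1+1) = 1,
\end{equation*}
the increments of $q$ in the $x_2$ direction being zero.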
\section{Pattern Stability}
In this section we prove our main result, the stability of patterns. A translation of the odometer $o$ is any function of the form
\begin{equation*}
\tilde o(x) = o(x + y) + z \cdot x + w
\end{equation*}
for some $y,z \in \mathbb{Z}^2$ and $w \in \mathbb{Z}$. Note that $\tilde o$ also satisfies \pref{structure}. In particular, we have the following.
\begin{lemma}
\label{l.translations}
For any odometer $o$ and translation $\tilde o$, we have
\begin{equation}
\tilde o(x)=o(x)+b\cdot x+r(x),
\end{equation}
for some $b\in \mathbb{Z}^2$ and some $V\mathbb{Z}^3$-periodic function $r:\mathbb{Z}^2\to \mathbb{Z}$.\qed
\end{lemma}
Throughout the remainder of this section, we fix choices of $P,A,V,T,o$ from \pref{structure}. The following theorem says that, when a recurrent function is close to $o$, then it is equal to translations $\tilde o$ of $o$ in balls covering almost the whole domain. This is pattern stability.
\begin{theorem}
\label{t.stability}
There is a universal constant $C_{\ref{t.stability}} > 0$ such that if $h\geq C_{\ref{t.stability}}$, $r\geq C_{\ref{t.stability}}|V|$, $hr\geq C_{\ref{t.stability}}|V|^3$, $R \geq C_{\ref{t.stability}} h r$, and $v : \mathbb{Z}^2 \to \mathbb{Z}$ is recurrent and satisfies $|v - o| \leq h^2$ in $B_R$, then, for a $(1 - C_{\ref{t.stability}} R^{-2} rh)$-fraction of points $x$ in $B_{R - r}$, there is a translation $\tilde o_x$ of $o$ such that $v = \tilde o_x$ in $B_r(x)$.
\end{theorem}
We introduce the norms
\begin{equation*}
|x|_V = |V^t x|_\infty
\end{equation*}
and
\begin{equation*}
|x|_{V^{-1}} = \min \{ |y|_1 : V y = x \}.
\end{equation*}
We prove these norms are dual and comparable to Euclidean distance.
\begin{lemma}
\label{l.norms}
For all $x, y \in \mathbb{Z}^2$, we have
\begin{equation*}
|x \cdot y| \leq |x|_V |y|_{V^{-1}}
\end{equation*}
and
\begin{equation*}
C_{\ref{p.structure}}^{-1} |x| \leq |V| |x|_{V^{-1}} \leq C_{\ref{p.structure}} |x|.
\end{equation*}
\end{lemma}
\begin{proof}
If $f : \mathbb{R}^2 \to \mathbb{R}^3$ satisfies $V f(x) = x$ and $| x |_{V^{-1}} = |f(x)|_1$, then
\begin{equation*}
|x \cdot y| = |x \cdot V f(y)| = |V^t x \cdot f(y)| \leq |V^t x|_\infty |f(y)|_1 = |x|_V |y|_{V^{-1}}.
\end{equation*}
The latter two inequalities follow from $1 \leq |V|^2 \leq C_{\ref{p.structure}} \det(V)$.
\end{proof}
The ``web of twos'' provided by \pref{structure} allows us to show that, when a recurrent function differs from $o$, the difference must grow. This is a quantitative form of the maximum principle.
\begin{lemma}
\label{l.separation}
If $v : \mathbb{Z}^2 \to \mathbb{Z}$ is recurrent, $x_0 \sim y_0 \in \mathbb{Z}^2$, $v(x_0) = o(x_0)$, and $v(y_0) \neq o(y_0)$, then, for all $k \geq 0$,
\begin{equation*}
\max_{|x - x_0|_{V^{-1}} \leq k + 1} (o - v)(x) \geq k.
\end{equation*}
\end{lemma}
\begin{proof}
We inductively construct $T_k = T + V z_k$ such that
\begin{enumerate}
\item $x_0, y_0 \in T_0$,
\item $o - v$ is not constant on $T_k$,
\item $|z_{k+1} - z_k|_1 \leq 1$,
\item $\max_{T_{k+1}} (o - v) > \max_{T_k} (o - v)$.
\end{enumerate}
Since $|x - x_0|_{V^{-1}} \leq 1$ for all $x \in T_0$, this implies the lemma.
The base case is immediate from the fact that every lattice edge is contained in a single tile. For the induction step, we use the recurrence of $o$ and $v$. In particular, since $T_k \subseteq \mathbb{Z}^2$ is finite, the difference $o - v$ attains its extremal values in $T_k$ on the boundary $\partial T_k$. Since $o - v$ is not constant on $T_k$, it is not constant on $\partial T_k$. Therefore, we may select $x \in \partial T_k$ such that
\begin{equation*}
\max_{T_k} (o - v) = (o - v)(x)
\end{equation*}
and
\begin{equation*}
(o - v)(x) > (o - v)(y) \mbox{ for some } y \sim x.
\end{equation*}
Now, if $(o-v)(x) \geq (o-v)(y)$ for all $y \sim x$, then, using the fact from \pref{structure} that $\Delta o(x) = 2$, we compute
\begin{equation*}
- 1 \geq \Delta (o - v)(x) = \Delta o(x) - \Delta v(x) = 2 - \Delta v(x),
\end{equation*}
contradicting the recurrence of $v$. Thus we can find $y \sim x$ such that $(o - v)(x) < (o - v)(y)$. Choose $z_{k+1} \in \mathbb{Z}^3$ such that $|z_{k+1} - z_k|_1 \leq 1$ and $x, y \in T_{k+1} = T + V z_{k+1}$.
\end{proof}
On the other hand, we can approximate any linear separation of odometers, showing that the above lemma is nearly optimal.
\begin{lemma}
\label{l.translation}
For any $b \in \mathbb{R}^2$, there is a translation $\tilde o$ of $o$ such that
\begin{equation*}
|\tilde o(x) - o(x) - b \cdot x| \leq \tfrac23 |x|_{V^{-1}} + 2 C_{\ref{p.structure}}|V|^2 \quad \mbox{for } x \in \mathbb{Z}^2.
\end{equation*}
\end{lemma}
\begin{proof}
For $y, z \in \mathbb{Z}^2$ to be determined, let
\begin{equation*}
\tilde o(x) = o(x + y) + z \cdot x - o(y) + \tilde o(0).
\end{equation*}
Using the quadratic polynomial $q$ from \pref{structure}, compute
\begin{equation*}
\begin{aligned}
|\tilde o(x) - o(x) - b \cdot x| - 2 C_{\ref{p.structure}} |V|^2
& \leq |q(x + y) + z \cdot x - q(y) + q(0) - q(x) - b \cdot x| \\
& = |(P y + z - b) \cdot x | \\
& \leq |P y + z - b|_V |x|_{V^{-1}}.
\end{aligned}
\end{equation*}
We claim that we can choose $y, z \in \mathbb{Z}^2$ such that
\begin{equation*}
|P y + z - b|_V \leq \tfrac23.
\end{equation*}
Indeed, using \pref{structure}, we compute
\begin{equation*}
\begin{aligned}
\{ V^t (P y + z) : y, z \in \mathbb{Z}^2 \}
& = \{ A^t y + V^t z: y, z \in \mathbb{Z}^2 \} \\
& \supseteq \{ (A^t Q V + V^t Q A) w : w \in \mathbb{Z}^3 \} \\
& \supseteq \{ Q' w : w \in \mathbb{Z}^3 \} \\
& = \{ w \in \mathbb{Z}^3 : \smat{1 \\ 1 \\ 1} \cdot w = 0 \}.
\end{aligned}
\end{equation*}
Since $V^t b \in \{ w \in \mathbb{R}^3 : \smat{1 \\ 1 \\ 1} \cdot w = 0 \}$, the result follows.
\end{proof}
We prove the main ingredient of \tref{stability} by combining the previous two lemmas. Recall that a function $u : \mathbb{Z}^2 \to \mathbb{R}$ touches another function $v : \mathbb{Z}^2 \to \mathbb{R}$ from below in a set $X \subseteq \mathbb{Z}^2$ at the point $x \in X$ if $\min_X (v - u) = (v-u)(x) = 0$.
\begin{lemma}
\label{l.touch}
There is a universal constant $C_{\ref{l.touch}} > 1$ such that, if
\begin{enumerate}
\item $R \geq C_{\ref{l.touch}} |V|^3$,
\item $v : \mathbb{Z}^2 \to \mathbb{Z}$ is recurrent in $B_R$,
\item $\psi(x) = o(x) - \tfrac12 |V|^2 R^{-2} |x - y|^2 + k$ for some $k\in \mathbb{Z}$,
\item $\psi$ touches $v$ from below at $0$ in $B_R$,
\end{enumerate}
then there is a translation $\tilde o$ of $o$ such that
\begin{equation*}
v = \tilde o \quad \mbox{in } B_{C_{\ref{l.touch}}^{-1} R}.
\end{equation*}
\end{lemma}
\begin{proof}
Using \lref{translation}, we may choose a translation $\tilde o$ of $o$ such that
\begin{equation*}
|\psi(x) + \tfrac12 |V|^2 R^{-2} |x|^2 - \tilde o(x)| \leq \tfrac23 |x|_{V^{-1}} + 2 C_{\ref{p.structure}} |V|^2
\quad \mbox{for } x \in B_R.
\end{equation*}
Next, since $\psi$ touches $v$ from below at $0$, we obtain
\begin{equation}\label{ovlower}
\tilde o(0) - v(0) \geq -2 C_{\ref{p.structure}} |V|^2
\end{equation}
and
\begin{equation}
\label{ovupper}
\tilde o(x) - v(x) \leq (2 C_{\ref{p.structure}}+1)|V|^2 + \tfrac23 |x|_{V^{-1}} \quad \mbox{in } B_R.
\end{equation}
We show that, if $C_{\ref{l.touch}} > 0$ is a sufficiently large universal constant, then $\tilde o - v$ is constant in $B_{C_{\ref{l.touch}}^{-1} R}$. Suppose not. Then there are $x \sim y \in B_{C_{\ref{l.touch}}^{-1} R}$ with
\begin{equation*}
(\tilde o - v)(0) = (\tilde o - v)(x) \neq (\tilde o - v)(y).
\end{equation*}
Since $x\in B_{C_{\ref{l.touch}}^{-1}R}$ we have from \lref{norms} that
\begin{equation*}
|x|_{V^{-1}} \leq C_{\ref{p.structure}}C_{\ref{l.touch}}^{-1} |V|^{-1} R.
\end{equation*}
By \lref{norms}, $B_R$ contains all points $z$ with $|z-x|_{V^{-1}}\leq R|V|^{-1}(C_{\ref{p.structure}}^{-1}-C_{\ref{p.structure}}C_{\ref{l.touch}}^{-1})$. In particular, by \lref{separation}, there is a $z\in B_R$ such that
\begin{equation}\label{ovfromtiles}
(\tilde o - v)(z) \geq - C_{\ref{p.structure}}|V|^2 + R|V|^{-1}(C_{\ref{p.structure}}^{-1}-C_{\ref{p.structure}}C_{\ref{l.touch}}^{-1})-1.
\end{equation}
Combining \eqref{ovfromtiles} with \eqref{ovupper}, we obtain
\begin{equation}
R|V|^{-1}(C_{\ref{p.structure}}^{-1}-C_{\ref{p.structure}}C_{\ref{l.touch}}^{-1})- C_{\ref{p.structure}}|V|^2 -1\leq (C_{\ref{p.structure}}+1)|V|^2 + \tfrac23 |x|_{V^{-1}},
\end{equation}
which is impossible for $R\geq C_{\ref{l.touch}} |V|^3$ and $C_{\ref{l.touch}}$ large relative to $C_{\ref{p.structure}}$.
\end{proof}
We prove pattern stability by adapting the growth lemma for non-divergence form elliptic equations, see for example \cite{Savin}. The above lemma is used to show that the ``touching map'' is almost injective.
\begin{proof}[Proof of \tref{stability}]
Our proof assumes
\[
hr\geq C_{\ref{l.touch}}|V|^3,\quad h\geq 2C_{\ref{l.touch}},\quad r\geq 3 |V|,\quad R\geq 3hr+2C_{\ref{l.touch}}r.
\]
Step 1.
We construct a touching map. For $y \in B_{R - 4 h r}$, consider the test function
\begin{equation*}
\varphi_y(x) = o(x) - \tfrac12 |V|^2 r^{-2} |x - y|^2.
\end{equation*}
Observe that
\begin{equation*}
(v - \varphi_y)(y) = (v - o)(y) \leq h^2
\end{equation*}
and, for $z \in B_R \setminus B_{3hr}(y)$,
\begin{equation*}
(v - \varphi_y)(z) = (v - o)(z) + \tfrac12 |V|^2 r^{-2} |z - y|^2 \geq - h^2 + \tfrac92 |V|^2 h^2 \geq \tfrac{7}{2} h^2.
\end{equation*}
We see that $v - \varphi_y$ attains its minimum over $B_R$ at some point $x_y \in B_{3hr}(y)$. Assuming $hr \geq C_{\ref{l.touch}}|V|^3$ and $h\geq 2C_{\ref{l.touch}}$, and $R\geq 3hr+2C_{\ref{l.touch}}r$, we have that $R-3hr\geq 2C_{\ref{l.touch}}r$, and \lref{touch} gives a translation $o_y$ of $o$ such that
\begin{equation*}
v = o_y \quad \mbox{in } B_{2r}(x_y).
\end{equation*}
The map $y \mapsto x_y$ is the touching map.
Step 2. We know that $v$ matches a translation of $o$ in a small ball around every point in the range of the touching map. If we knew the touching map was injective, then the fraction of these good points would be $|B_{R-4hr}| / |B_R|$. While injectivity generally fails, we are able to show almost injectivity, in the following sense.
\textbf{Claim:} For every $y \in B_{R - 4 hr}$, there are sets $y \in T_y \subseteq B_R$ and $x_y \in S_y \subseteq B(x_y,|V|)$ such that $|T_y| \leq |S_y|$ and $S_y \cap S_{\tilde y} \neq \emptyset$ implies $S_y = S_{\tilde y}$ and $T_y = T_{\tilde y}$.
To prove this, observe first that, assuming $r \geq 3|V|$,
\begin{equation*}
|x_y - x_{\tilde y}| \leq 2 |V| \mbox{ implies } o_y = o_{\tilde y},
\end{equation*}
since $B_{2r}(x_y)\cap B_{2r}(x_{\tilde y})$ contains four $V\mathbb{Z}^3$-equivalent (not collinear) points, which is sufficient to determine an odometer translation uniquely.
Next, observe from \lref{translations} that for every $y_0 \in B_{R - 4 hr}$ there is a slope $b \in \mathbb{Z}^2$ such that, for all $y, x \in \mathbb{Z}^2$,
\begin{equation}\label{zy0y}
o_{y_0}(x) - \varphi_{y}(x) = r(x) + b \cdot x + \tfrac12 |V|^2 r^{-2}|x-y|^2.
\end{equation}
Now let $z_{y_0,y} = \operatorname{argmin} (o_{y_0} - \varphi_{y})$, and let $\mathcal X$ be any tiling of $\mathbb{Z}^2$ by the $V \mathbb{Z}^3$-translations of a fundamental domain with diameter bounded by $|V|$. We define
\begin{align*}
S_{y_0,y}&=\text{the unique } X\in \mathcal X\ \text{such that}\ z_{y_0,y}\in X,\\
T_{y_0,y}&= \{ \tilde y : z_{y_0,\tilde y} \in S_{y_0,y} \}.
\end{align*}
Note that $S_{y_0,y}$ and $T_{y_0,y}$ depend only on $o_{y_0}$ and $y$ (and not otherwise on $y_0$). Moreover, since $y \mapsto z_{y_0, y}$ commutes with $V \mathbb{Z}^3$-translation by \eqref{zy0y},
we have that no $T_{y_0,y}$ can contain two $V\mathbb{Z}^3$-equivalent points, and thus that $|T_{y_0,y}|\leq |S_{y_0,y}|=|V|$ for all $y_0,y$.
Finally, since $|x_{y_0} - x_{y_1}| \leq 2 |V|$ implies $o_{y_0} = o_{y_1}$, we see that $|x_{y_0} - x_{y_1}| \leq 2 |V|$ implies $S_{y_0,y} = S_{y_1,y}$ and $T_{y_0,y} = T_{y_1, y}$. Letting $S_y = S_{y,y}$ and $T_y = T_{y,y}$, we see that $S_y \cap S_{\tilde y} \neq \emptyset$ implies $|y - \tilde y| \leq 2 |V|$ and thus $S_{y} = S_{\tilde y}$ and $T_{y} = T_{\tilde y}$, as required.
Step 3. Let $\mathcal Y \subseteq B_{R - 4 hr}$ be maximal subject to $\{ S_y : y \in \mathcal Y \}$ being disjoint. By the implication in the claim, we must have $B_{R - 4 hr } \subseteq \cup \{ T_y : y \in \mathcal Y \}$. We compute
\begin{equation*}
| \cup_{y \in \mathcal Y} S_y | \geq \sum_{y \in \mathcal Y} |T_y| \geq \left| \cup_{y \in \mathcal Y} T_y \right| \geq |B_{R - 4 hr}|.
\end{equation*}
Finally, observe that at each point $x \in \cup \{ S_y : y \in \mathcal Y \}$, there is a translation $\tilde o_x$ of $o$ such that $v = \tilde o_x$ in $B_r(x)$. The theorem now follows from the estimate $|B_{R - 4 hr}| / |B_R| \geq 1 - C R^{-2} r h$.
\end{proof}
\section{Explicit Solution}
In this section, we describe the solution of
\begin{equation}
\label{e.pdesquare}
\begin{cases}
D^2 \bar u \in \partial \Gamma & \mbox{in } (-1,1)^2 \\
\bar u = 0 & \mbox{on } \mathbb{R}^2 \setminus (-1,1)^2.
\end{cases}
\end{equation}
As one might expect from \fref{identity}, the solution is piecewise quadratic and satisfies the stronger constraint $D^2 \bar u \in \Gamma^+$. The algorithm described here is implemented in the code attached to the arXiv upload.
\begin{theorem}[\cite{Levine-Pegden-Smart-1,Levine-Pegden-Smart-2}]
\label{t.piecewise}
There are disjoint open sets $\Omega_k \subseteq \Omega = (-1,1)^2$ and constants $L > 1 > \tau > 0$ such that the following hold.
\begin{enumerate}
\item $\sum_k |\Omega_k| = |\Omega|$ and $|\Omega_k| \leq \tau^k$.
\item $D^2 \bar u$ is constant in each $\Omega_k$ with value $P_k \in \Gamma^+$.
\item The $V_k \in \mathbb{Z}^{2 \times 3}$ corresponding to $P_k$ via \pref{structure} satisfies $|V_k| \leq L^k$.
\item For $r > 0$, $| \{ x : B_r(x) \subseteq \Omega_k \} | \geq |\Omega_k| - L |\Omega_k|^{1/2} r$.
\end{enumerate}
\end{theorem}
Since this result is essentially contained in \cite{Levine-Pegden-Smart-1} and \cite{Levine-Pegden-Smart-2}, we omit the proofs, giving only the explicit construction and a reminder of its properties. We construct a family of super-solutions $\bar v_n \in C(\mathbb{R}^2)$ of \eref{pdesquare} such that $\bar v_n \downarrow \bar u$ uniformly as $n \to \infty$. Each super-solution $\bar v_n$ is a piecewise quadratic function with finitely many pieces. The measure of the pieces whose Hessians do not lie in $\Gamma^+$ goes to zero as $n \to \infty$. The Laplacians of the first eight super-solutions are displayed in \fref{supersolutions}. The construction is similar to that of a Sierpinski gasket, and the pieces are generated by an iterated function system.
\begin{figure}[h]
\includegraphics[angle=90,width=.23\textwidth]{pde_0.pdf}
\hfill
\includegraphics[angle=90,width=.23\textwidth]{pde_1.pdf}
\hfill
\includegraphics[angle=90,width=.23\textwidth]{pde_2.pdf}
\hfill
\includegraphics[angle=90,width=.23\textwidth]{pde_3.pdf} \\
\bigskip
\includegraphics[angle=90,width=.23\textwidth]{pde_4.pdf}
\hfill
\includegraphics[angle=90,width=.23\textwidth]{pde_5.pdf}
\hfill
\includegraphics[angle=90,width=.23\textwidth]{pde_6.pdf}
\hfill
\includegraphics[angle=90,width=.23\textwidth]{pde_7.pdf}
\caption{The Laplacian of the supersolution $\bar v_n$ on $(0,1)^2$ for $n = 0, \dots, 7$.}
\label{f.supersolutions}
\end{figure}
The solution is derived from the following data.
\begin{definition}
Let $z_s, a_s, w_s, b_s \in \mathbb{C}^3$ for $s \in \{ 1,2,3 \}^{< \omega}$ satisfy
\begin{equation*}
z_{()} = \mat{1 \\ 1+i \\ i},
\
a_{()} = \mat{0 \\ -1 \\ i},
\
z_{s k} = Q R^k z_s,
\
a_{s k} = \overline Q R^k z_s,
\
w_s = S z_s,
\mbox{ and }
b_s = \overline S a_s,
\end{equation*}
where
\begin{equation*}
Q = \frac13 \mat{3 & 0 & 0 \\ 1 + i & 1 - i & 1 \\ 1 - i & 1 & 1 + i},
\
R = \mat{1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0},
\mbox{ and }
S = \frac13 \mat{1 & 1+i & 1-i \\ 1-i & 1 & 1+i \\ 1+i & 1-i & 1}.
\end{equation*}
\end{definition}
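The recursion above is easy to implement. The following Python sketch (our own illustration; the code attached to the arXiv upload is Matlab/Octave) generates the vertex triples $z_s$ and $a_s$ for all words $s$ of bounded length, together with $w_s = S z_s$ and $b_s = \overline S a_s$.
\begin{verbatim}
import numpy as np

Q = np.array([[3, 0, 0],
              [1 + 1j, 1 - 1j, 1],
              [1 - 1j, 1, 1 + 1j]]) / 3
R = np.array([[1, 0, 0],
              [0, 0, 1],
              [0, 1, 0]], dtype=complex)
S = np.array([[1, 1 + 1j, 1 - 1j],
              [1 - 1j, 1, 1 + 1j],
              [1 + 1j, 1 - 1j, 1]]) / 3

def vertex_data(depth):
    """Return {s: (z_s, a_s)} for all words s in {1,2,3} of length <= depth,
    using z_{sk} = Q R^k z_s and a_{sk} = conj(Q) R^k z_s."""
    data = {(): (np.array([1, 1 + 1j, 1j]), np.array([0, -1, 1j]))}
    frontier = [()]
    for _ in range(depth):
        nxt = []
        for s in frontier:
            z, _a = data[s]
            for k in (1, 2, 3):
                Rk = np.linalg.matrix_power(R, k)
                data[s + (k,)] = (Q @ Rk @ z, Q.conj() @ Rk @ z)
                nxt.append(s + (k,))
        frontier = nxt
    return data

data = vertex_data(4)
ws = {s: (S @ z, S.conj() @ a) for s, (z, a) in data.items()}  # (w_s, b_s)
\end{verbatim}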
The above iterated function system generates four families of triangles, which we use to define linear maps by interpolation.
\begin{definition}
For $z, a \in \mathbb{C}^3$, let $L_{z,a}$ be the linear interpolation of the map $z_k \mapsto a_k$. That is, $L_{z,a}$ has domain
\begin{equation*}
\triangle_z = \{ t_1 z_1 + t_2 z_2 + t_3 z_3 : t_1, t_2, t_3 \geq 0 \mbox{ and } t_1 + t_2 + t_3 = 1 \}
\end{equation*}
and satisfies
\begin{equation*}
L_{z,a}(t_1 z_1 + t_2 z_2 + t_3 z_3) = t_1 a_1 + t_2 a_2 + t_3 a_3.
\end{equation*}
\end{definition}
Identifying $\mathbb{C}$ and $\mathbb{R}^2$ in the usual way, $L_{z,a}$ is a map between triangles in $\mathbb{R}^2$. We glue together the linear maps $L_{z_s,a_s}$ and $L_{w_s,b_s}$ to construct the gradients of our super-solutions. The complication is that the triangles $\triangle_{z_s}$ and $\triangle_{w_s}$ are not disjoint. As we see in \fref{supersolutions}, the domains of later maps intersect the earlier ones. We simply allow the later maps to overwrite the earlier ones.
\begin{definition}
For integers $n \geq k \geq -1$, let $G_{n,k} : (0,1)^2 \to \mathbb{R}^2$ satisfy
\begin{equation*}
G_{n,-1}(x) = \smat{0 & 0 \\ 0 & 1}x,
\end{equation*}
\begin{equation*}
G_{n,n}(x) = \begin{cases}
L_{z_s,a_s}(x) & \mbox{if } x \in \triangle_{z_s} \mbox{ for } s \in \{ 1, 2, 3 \}^n \\
G_{n,n-1}(x) & \mbox{otherwise},
\end{cases}
\end{equation*}
and, for $n > k > -1$,
\begin{equation*}
G_{n,k}(x) = \begin{cases}
L_{w_s,b_s}(x) & \mbox{if } x \in \triangle_{w_s} \mbox{ for } s \in \{ 1, 2, 3 \}^k \\
G_{n,k-1}(x) & \mbox{otherwise}.
\end{cases}
\end{equation*}
For integers $n \geq 0$, let $G_n = G_{n,n}$.
\end{definition}
It follows by induction that $G_n$ is continuous and the gradient of a supersolution:
\begin{proposition}[\cite{Levine-Pegden-Smart-1}]
For $n \geq 0$, there is a $\bar v_n \in C(\mathbb{R}^2) \cap C^1((0,1)^2)$ such that
\begin{equation*}
\bar v_n(x_1,x_2) = \bar v_n(|x_1|,|x_2|),
\end{equation*}
\begin{equation*}
\bar v_n = 0 \quad \mbox{on } \mathbb{R}^2 \setminus (-1,1)^2,
\end{equation*}
and
\begin{equation*}
D \bar v_n(x) = G_n(x) + \smat{1 & 0 \\ 0 & 0} x \quad \mbox{for } x \in (0,1)^2.
\end{equation*}
Moreover, $\sup_n \sup_{(-1,1)^2} |D\bar v_n| < \infty$.
\end{proposition}
The above proposition implies that the gradients $D L_{z_s, a_s}$ and $D L_{w_s, b_s}$ are symmetric matrices. In fact, one can prove that
\begin{equation*}
D L_{z_s,a_s} + \smat{ 1 & 0 \\ 0 & 0 } \in \Gamma
\quad \mbox{and} \quad
D L_{w_s,b_s} + \smat{ 1 & 0 \\ 0 & 0 } \in \Gamma^+.
\end{equation*}
Passing to the limit $n \to \infty$, one obtains \tref{piecewise}.
\begin{remark}
Observe that the intersection points of the pieces of the explicit solution all have triadic rational coordinates. We expect this is connected to \qref{perfect}.
\end{remark}
\section{Quantitative Convergence}
In order to use \tref{stability} to prove appearance of patterns, we need a rate of convergence. Throughout this section, fix a bounded convex set $\Omega \subseteq \mathbb{R}^2$ and functions $u_n : \mathbb{Z}^2 \to \mathbb{Z}$ and $\bar u \in C(\mathbb{R}^2)$ that solve \eref{fde} and \eref{pde}, respectively. We know that rescalings $\bar u_n(x) = n^{-2} u_n(n x) \to \bar u(x)$ uniformly in $x \in \mathbb{R}^2$ as $n \to \infty$. We quantify this convergence using the additional regularity afforded by \tref{piecewise}. The additional regularity arrives in the form of local approximation by recurrent functions.
\begin{definition}
We say that $\bar u$ is $\varepsilon$-approximated if $\varepsilon \in (0,1/2)$ and there is a constant $K \geq 1$ such that the following holds for all $n \geq 1$: For a $1 - K n^{-\varepsilon}$ fraction of points $x \in \mathbb{Z}^2 \cap n \Omega$, there is a $u : \mathbb{Z}^2 \to \mathbb{Z}$ that is recurrent in $B_{n^{1-\varepsilon}}(x) \subseteq n \Omega$ and satisfies $\max_{y \in B_{n^{1-\varepsilon}}(x)} |u(y) - n^2 \bar u(n^{-1} y)| \leq K n^{2-3\varepsilon}$.
\end{definition}
Being $\varepsilon$-approximated implies quantitative convergence of $\bar u_n$ to $\bar u$.
\begin{theorem}
\label{t.convergence}
If $\bar u$ is $\varepsilon$-approximated, then there is an $L > 0$ such that
\begin{equation*}
\sup_{x \in \mathbb{Z}^2} |u_n(x) - n^2 \bar u(n^{-1} x)| \leq L n^{2 - \varepsilon/8}
\end{equation*}
holds for all $n \geq 1$.
\end{theorem}
A key ingredient of our proof of this theorem is a standard ``doubling the variables'' result from viscosity solution theory. This is analogous to ideas used in the convergence result of \cite{Caffarelli-Souganidis} for monotone difference approximations of fully nonlinear uniformly elliptic equations. In place of $\delta$-viscosity solutions, we use \cite[Lemma 6.1]{Armstrong-Smart} as a natural quantification of the Theorem On Sums \cite{Crandall} in the uniformly elliptic setting. In the following lemma, we abuse notation and use the Laplacian both for functions on the rescaled lattice $n^{-1} \mathbb{Z}^2$ and the continuum $\mathbb{R}^2$.
\begin{lemma}
\label{l.doubling}
Suppose that
\begin{enumerate}
\item $\Omega \subseteq \mathbb{R}^2$ is open, bounded, and convex,
\item $u : n^{-1} \mathbb{Z}^2 \to \mathbb{R}$ satisfies $|\Delta u| \leq 1$ in $n^{-1} \mathbb{Z}^2 \cap \Omega$ and $u = 0$ in $n^{-1} \mathbb{Z}^2 \setminus \Omega$,
\item $v \in C(\mathbb{R}^2)$ satisfies $|\Delta v| \leq 1$ in $\Omega$ and $v = 0$ in $\mathbb{R}^2 \setminus \Omega$,
\item $\max_{n^{-1} \mathbb{Z}^2} (u - v) = \varepsilon > 0$.
\end{enumerate}
There is a $\delta > 0$ depending only on $\Omega$ such that, for all $p, q \in B_{\delta \varepsilon}$, the function
\begin{equation*}
\Phi(x,y) = u(x) - v(y) - \delta \varepsilon |x|^2 - \delta^{-1} \varepsilon^{-1} |x - y|^2 - p \cdot x - q \cdot y
\end{equation*}
attains its maximum over $n^{-1} \mathbb{Z}^2 \times \mathbb{R}^2$ at a point $(x^*, y^*)$ such that $B_{\delta \varepsilon}(x^*) \subseteq \Omega$ and $B_{\delta \varepsilon}(y^*) \subseteq \Omega$. Moreover, the set of possible maxima $(x^*,y^*)$ as the slopes $(p,q)$ vary covers a $\delta^{10} \varepsilon^8$ fraction of $(n^{-1} \mathbb{Z}^2 \cap \Omega) \times \Omega$.
\end{lemma}
\begin{proof}
Step 1.
Standard estimates for functions with bounded Laplacian (both discrete and continuous) imply that $u$ and $v$ are Lipschitz with a constant depending only on the convex set $\Omega$. Estimate
\begin{equation*}
u(x) - v(x) \geq \Phi(x,x) \geq u(x) - v(x) - C \delta \varepsilon
\end{equation*}
and, using the Lipschitz estimates,
\begin{equation*}
\Phi(x,y) \leq \Phi(x,x) + C |x - y| - \delta^{-1} \varepsilon^{-1} |x-y|^2.
\end{equation*}
Thus, if $\max_{x,y} \Phi(x,y) = \Phi(x^*,y^*)$, then
\begin{equation*}
\varepsilon - C \delta \varepsilon \leq \Phi(x^*,y^*) \leq \varepsilon + C \delta \varepsilon - C \delta^{-1} \varepsilon^{-1} |x^* - y^*|^2.
\end{equation*}
In particular, if $\delta > 0$ is sufficiently small, then
\begin{equation*}
|x^* - y^*| \leq C \delta \varepsilon.
\end{equation*}
Using the boundary conditions in combination with the Lipschitz estimates, we see that, provided $\delta > 0$ is sufficiently small, $B_{\delta \varepsilon}(x^*) \subseteq \Omega$ and $B_{\delta \varepsilon}(y^*) \subseteq \Omega.$
Step 2. The final measure-theoretic statement is an immediate consequence of the fact that the touching map $(p,q) \mapsto (x^*,y^*)$ has a $\delta^{-5} \varepsilon^{-4}$-Lipschitz inverse. This is a consequence of the proof of \cite[Lemma 6.1]{Armstrong-Smart}. Here, one must substitute the discrete Alexandroff-Bakelman-Pucci inequality \cite{Lawler,Kuo-Trudinger} since we have the discrete Laplacian. The statement we obtain is that, if $\delta > 0$ is sufficiently small, $(p_i,q_i) \mapsto (x_i,y_i)$, and $|(x_1,y_1) - (x_2,y_2)| \leq \delta^2 \varepsilon$, then $|(p_1,q_1) - (p_2,q_2)| \leq \delta \varepsilon$. The result now follows by a covering argument.
\end{proof}
\begin{proof}[Proof of \tref{convergence}]
Suppose $\bar u$ is $\varepsilon$-approximated and $K \geq 1$ is the corresponding constant. For $L > 1$ to be determined, suppose for contradiction that
\begin{equation*}
\max_{n^{-1} \mathbb{Z}^2} (\bar u_n - \bar u) \geq L n^{-\varepsilon/8}.
\end{equation*}
(The case of the other inequality is symmetric.) Apply \lref{doubling} with $u = \bar u_n$, $v = \bar u$, and with $L n^{-\varepsilon/8}$ in place of the $\varepsilon$ of the lemma. As the slopes $(p,q)$ vary, the maximum $(x^*,y^*)$ of $\Phi$ satisfies $B_{\delta L n^{-\varepsilon/8}}(y^*) \subseteq \Omega$ and the set of possible $y^*$ covers a $L^8 \delta^{10} n^{-\varepsilon}$ fraction of $n^{-1} \mathbb{Z}^2 \cap \Omega$. Thus, if $L > 1$ is large enough, we may choose $(p,q)$ such that there is a function $w : \mathbb{Z}^2 \to \mathbb{Z}$ that is recurrent in $B_{n^{1-\varepsilon}}(n y^*)$ and satisfies $\max_{y \in B_{n^{1-\varepsilon}}(n y^*)} |w(y) - n^2 \bar u(n^{-1} y)| \leq K n^{2-3\varepsilon}$.
Consider
\begin{multline*}
z \mapsto \Phi(x^* + z, y^* + z) - \Phi(x^*,y^*) = \\ \bar u_n(x^* + z) - \bar u_n(x^*) - \bar u(y^* + z) - \bar u(y^*) - (2 \delta \varepsilon x^* + p + q) \cdot z - \delta \varepsilon |z|^2,
\end{multline*}
which attains its maximum at $0$. Let $r \in \mathbb{Z}^2$ denote the integer rounding of $2 \delta \varepsilon x^* + p + q$. Observe that
\begin{equation*}
(2 \delta \varepsilon x^* + p + q) \cdot z + \delta \varepsilon |z|^2 \geq (K+1) n^{2 - 3 \varepsilon} \quad \mbox{for } |z| \geq n^{1 - \varepsilon},
\end{equation*}
provided that $L > 1$ is large enough. In particular,
\begin{equation*}
z \mapsto \bar u_n(x^* + z) - w(z) - r \cdot z
\end{equation*}
attains a strict local maximum in $B_{n^{1-\varepsilon}}$. This contradicts the maximum principle for recurrent functions.
\end{proof}
\section{Convergence of Patterns}
We prove \tref{main} by combining \tref{stability}, \tref{convergence}, and \tref{piecewise}.
\begin{proof}
First observe that \tref{piecewise} implies that $\bar u$ is $\alpha$-approximated for some $\alpha > 0$. Making $\alpha > 0$ smaller, \tref{convergence} implies
\begin{equation*}
\sup_{x \in \mathbb{Z}^2} |u_n(x) - n^2 \bar u(n^{-1} x)| \leq C n^{2 - \alpha}.
\end{equation*}
Let us now consider what happens inside an individual piece $\Omega_k$. For $1 < r < R < n$, \tref{piecewise} implies that a
\begin{equation*}
1 - C \tau^{-k/2} n^{-1} R
\end{equation*}
fraction of points $x$ in $n \Omega_k$ satisfy $B_R(x) \subseteq n \Omega_k$. There is an odometer $o_k$ for $P_k$ such that
\begin{equation*}
h^2 = \max_{B_R(x)} |u_n - o_k| \leq C n^{2 - \alpha} + C R.
\end{equation*}
Assuming $r \geq C L^k \geq C |V_k|$, \tref{stability} implies that $\Delta o_k$ $r$-matches $\Delta u_n$ at a
\begin{equation*}
1 - C R^{-2} r h
\end{equation*}
fraction of points in $B_R(x)$. Assuming that $r \leq n^{\alpha}$ and setting
\begin{equation*}
R = n^{1 - \alpha/3} \tau^{k/6} r^{1/3},
\end{equation*}
these together imply that $\Delta o_k$ $r$-matches $\Delta u_n$ at a
\begin{equation*}
1 - C \tau^{-k/3} n^{-\alpha/3} r^{1/3}
\end{equation*}
fraction of points in $n \Omega_k$. Replacing $C \tau^{-k/3}$ by a larger $L^k$, we can remove the restrictions on $r$, as the estimate becomes trivial at the edges.
\end{proof}
\begin{bibdiv}
\begin{biblist}
\bib{Armstrong-Smart}{article}{
author={Armstrong, Scott N.},
author={Smart, Charles K.},
title={Quantitative stochastic homogenization of elliptic equations in nondivergence form},
journal={Arch. Ration. Mech. Anal.},
volume={214},
date={2014},
number={3},
pages={867--911},
issn={0003-9527},
review={\MR{3269637}},
doi={10.1007/s00205-014-0765-6},
}
\bib{Caffarelli-Souganidis}{article}{
author={Caffarelli, Luis A.},
author={Souganidis, Panagiotis E.},
title={A rate of convergence for monotone finite difference approximations to fully nonlinear, uniformly elliptic PDEs},
journal={Comm. Pure Appl. Math.},
volume={61},
date={2008},
number={1},
pages={1--17},
issn={0010-3640},
review={\MR{2361302}},
doi={10.1002/cpa.20208},
}
\bib{Crandall}{article}{
author={Crandall, Michael G.},
title={Viscosity solutions: a primer},
conference={
title={Viscosity solutions and applications},
address={Montecatini Terme},
date={1995},
},
book={
series={Lecture Notes in Math.},
volume={1660},
publisher={Springer, Berlin},
},
date={1997},
pages={1--43},
review={\MR{1462699}},
doi={10.1007/BFb0094294},
}
\bib{Dhar-Sadhu-Chandra}{article}{
title={Pattern formation in growing sandpiles},
author={Dhar, Deepak},
author={Sadhu, Tridib},
author={Chandra, Samarth},
journal={Europhysics Letters},
volume={85},
number={4},
pages={48002},
year={2009},
publisher={IOP Publishing},
note={\arxiv{0808.1732}}
}
\bib{Kalinin-Shkolnikov}{article}{
author={Kalinin, Nikita},
author={Shkolnikov, Mikhail},
title={Tropical curves in sandpiles},
note={Preprint (2015) \arxiv{1509.02303}}
}
\bib{Kuo-Trudinger}{article}{
author={Kuo, Hung-Ju},
author={Trudinger, Neil S.},
title={A note on the discrete Aleksandrov-Bakelman maximum principle},
booktitle={Proceedings of 1999 International Conference on Nonlinear
Analysis (Taipei)},
journal={Taiwanese J. Math.},
volume={4},
date={2000},
number={1},
pages={55--64},
issn={1027-5487},
review={\MR{1757983}},
}
\bib{Lawler}{article}{
author={Lawler, Gregory F.},
title={Weak convergence of a random walk in a random environment},
journal={Comm. Math. Phys.},
volume={87},
date={1982/83},
number={1},
pages={81--87},
issn={0010-3616},
review={\MR{680649}},
}
\bib{Levine-Pegden-Smart-1}{article}{
author={Levine, Lionel},
author={Pegden, Wesley},
author={Smart, Charles K},
title={Apollonian structure in the Abelian sandpile},
note={Preprint (2012) \arxiv{1208.4839}}
}
\bib{Levine-Pegden-Smart-2}{article}{
author={Levine, Lionel},
author={Pegden, Wesley},
author={Smart, Charles K},
title={The Apollonian structure of integer superharmonic matrices},
note={Preprint (2013) \arxiv{1309.3267}}
}
\bib{Levine-Propp}{article}{
author={Levine, Lionel},
author={Propp, James},
title={What is $\dots$ a sandpile?},
journal={Notices Amer. Math. Soc.},
volume={57},
date={2010},
number={8},
pages={976--979},
issn={0002-9920},
review={\MR{2667495}},
}
\bib{Ostojic}{article}{
title={Patterns formed by addition of grains to only one site of an abelian sandpile},
author={Ostojic, Srdjan},
journal={Physica A: Statistical Mechanics and its Applications},
volume={318},
number={1},
pages={187--199},
year={2003},
publisher={Elsevier}
}
\bib{Pegden-Smart}{article}{
author={Pegden, Wesley},
author={Smart, Charles K},
title={Convergence of the Abelian Sandpile},
journal={Duke Mathematical Journal, to appear},
note={\arxiv{1105.0111}}
}
\bib{Savin}{article}{
author={Savin, Ovidiu},
title={Small perturbation solutions for elliptic equations},
journal={Comm. Partial Differential Equations},
volume={32},
date={2007},
number={4-6},
pages={557--578},
issn={0360-5302},
review={\MR{2334822}},
doi={10.1080/03605300500394405},
}
\bib{Sportiello}{article}{
author={Sportiello, Andreas},
title={The limit shape of the Abelian Sandpile identity},
note={Limit shapes, ICERM 2015}
}
\end{biblist}
\end{bibdiv}
\end{document}
|
2,869,038,154,099 | arxiv | \section{Introduction}
\par The Boussinesq equation, introduced by the French mathematician Joseph Boussinesq, has the form
\begin{equation}
u_{tt} + p u_{xx} + q \left( u^2\right)_{xx} + r u_{xxxx} = 0, \label{1}
\end{equation}
where $p$, $q$ and $r$ are constants. The Boussinesq equation has several applications in the real world. It plays an important role in modeling various phenomena such as long waves in shallow water \cite{BE}, one-dimensional nonlinear lattice waves \cite{lattices}, vibrations in a nonlinear string \cite{string}, electromagnetic waves in dielectric materials \cite{dielectric}, and so on. Many researchers have used analytical methods to solve the Boussinesq equation, such as the variational iteration method \cite{VIM}, the modified variational iteration method \cite{MVIM1,MVIM2}, the Adomian decomposition method and the homotopy perturbation method \cite{Mohyud,ADM1}. Recently, Malek et al. \cite{Malek} have used the potential symmetries method to solve the Boussinesq equation.
\par The Daftardar-Gejji and Jafari method (DJM) \cite{GEJJI1} is a simple and efficient technique used to solve various equations such as fractional differential equations \cite{GEJJI3}, partial differential equations \cite{GEJJI4}, boundary value problems \cite{GEJJI5}, evolution equations \cite{GEJJI6}, systems of nonlinear functional equations \cite{GEJJI8}, algebraic equations \cite{Noor}, and so on. The method has been successfully employed to solve the Newell-Whitehead-Segel equation \cite{pb1}, Fisher's equation \cite{pb8}, the Ambartsumian equation \cite{pb5}, the fractional-order logistic equation \cite{GEJJI7} and some nonlinear dynamical systems \cite{GEJJI9,pb4}. In \cite{pb2,pb3} we provided series solutions of the pantograph equation in terms of new special functions. Recently, the DJM has been used to generate new numerical methods \cite{pb6,pb7} for solving differential equations.
\par In this manuscript we consider the solution of the Boussinesq equation using the modified DJM. We organize the paper as follows:
\par The DJM is described briefly in Section \ref{djm} and the modified DJM is described in Section \ref{mdjm}. A general technique for solving the Boussinesq equation using the modified DJM is described in Section \ref{appl}. Section \ref{Ex} deals with illustrative examples and the conclusions are summarized in Section \ref{concl}.
\section{Daftardar-Gejji and Jafari Method}\label{djm}
In this section we describe the Daftardar-Gejji and Jafari method (DJM) \cite{GEJJI1}, which is useful for solving nonlinear equations of the form
\begin{equation}
u= f + L(u) + N(u),\label{2.1}
\end{equation}
where $f$ is a source term, and $L$ and $N$ are linear and nonlinear operators, respectively. It is assumed that the DJM solution of Eq.(\ref{2.1}) has the form:
\begin{equation}
u= \sum_{i=0}^\infty u_i. \label{2.2}
\end{equation}
The convergence of series (\ref{2.2}) is proved in \cite{CONV}.
\par Since $L$ is linear
\begin{equation}
L\left(\sum_{i=0}^\infty u_i\right) = \sum_{i=0}^\infty L(u_i).\label{2.3}
\end{equation}
The nonlinear operator $N$ in Eq.(\ref{2.1}) is decomposed by Daftardar-Gejji and Jafari as below:
\begin{eqnarray}
N\left(\sum_{i=0}^\infty u_i\right) &=& N(u_0) + \sum_{i=1}^\infty \left\{N\left(\sum_{j=0}^i u_j\right)- N\left(\sum_{j=0}^{i-1} u_j\right)\right\} \nonumber\\
&=& \sum_{i=0}^\infty G_i,\label{2.4}
\end{eqnarray}
\\
where $G_0=N(u_0)$ and $G_i = \left\{N\left(\sum_{j=0}^i u_j\right)- N\left(\sum_{j=0}^{i-1} u_j\right)\right\}$, $i\geq 1$.\\
\\
Using equations (\ref{2.2}), (\ref{2.3}) and (\ref{2.4}) in Eq.(\ref{2.1}), we get
\begin{equation}
\sum_{i=0}^\infty u_i= f + \sum_{i=0}^\infty L(u_i) + \sum_{i=0}^\infty G_i.\label{2.5}
\end{equation}
From Eq.(\ref{2.5}), the DJM series terms are generated as below:
\begin{eqnarray}
u_0 &=& f,\nonumber\\
u_{m+1} &=& L(u_m) + G_m,\quad m=0,1,2, \cdots.\label{2.6}
\end{eqnarray}
In practice, we take the approximation
\begin{eqnarray}
u = \sum_{i=0}^{k-1} u_i
\end{eqnarray}
for suitable integer $k$.\\
The convergence results for DJM are described in \cite{CONV}.
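To see the recursion Eq.(\ref{2.6}) in action, consider the following toy computation (our own illustration, not taken from the references): for the linear integral equation $u(t) = 1 + \int_0^t u(s)\, ds$, whose solution is $e^t$, the DJM reproduces the Taylor series of the exponential.
\begin{verbatim}
import sympy as sp

t, s = sp.symbols('t s')

def L(u):                        # L(u) = integral of u from 0 to t
    return sp.integrate(u.subs(t, s), (s, 0, t))

terms = [sp.Integer(1)]          # u_0 = f = 1; here N = 0, so G_m = 0
for m in range(5):
    terms.append(sp.expand(L(terms[-1])))   # u_{m+1} = L(u_m)

print(sum(terms))   # 1 + t + t**2/2 + t**3/6 + ... : partial sum of exp(t)
\end{verbatim}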
\section{Modified Daftardar-Gejji and Jafari Method}\label{mdjm}
\par In \cite{Wazwaz}, Wazwaz proposed a modification of the ADM to generate a rapidly converging solution series. Using the same technique, we modify the Daftardar-Gejji and Jafari method as follows:\\
We assume that
\begin{equation}
f = f_1 + f_2.\label{3.1}
\end{equation}
Then Eq.(\ref{2.1}) can be written as
\begin{equation}
u= f_1 + f_2 + L(u) + N(u).\label{3.2}
\end{equation}
The modified DJM is described as:
\begin{eqnarray}
u_0 &=& f_1,\nonumber\\
u_1 &=& f_2 + L(u_0) + G_0,\nonumber\\
u_{m+1} &=& L(u_m) + G_m,\quad m=1,2,3, \cdots.\label{3.3}
\end{eqnarray}
It is obvious that the simpler form of $u_0$ in the MDJM results in a reduction of computations and accelerates the convergence rate.
The convergence results for the MDJM are similar to those of the DJM \cite{CONV}.
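The effect of the splitting can be seen on a toy example (again our own illustration): for $u(t) = 1 + t - \int_0^t u(s)\, ds$, whose solution is $u \equiv 1$, the choice $f_1 = 1$, $f_2 = t$ makes the MDJM series terminate after two terms, whereas the DJM started from $u_0 = 1 + t$ produces an infinite series.
\begin{verbatim}
import sympy as sp

t, s = sp.symbols('t s')
L = lambda u: -sp.integrate(u.subs(t, s), (s, 0, t))

f1, f2 = sp.Integer(1), t
u0 = f1
u1 = sp.expand(f2 + L(u0))   # = t - t = 0, and all later terms vanish
print(u0, u1)                # the MDJM sum is exactly u = 1
\end{verbatim}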
\section{Applications}\label{appl}
A bidirectional solitary wave solution of Eq.(\ref{1}) is discussed in \cite{Clarkson} and is given by
\begin{equation}
u(x,t) = - \frac{3(\alpha^2+p)}{2q} \sech^2\left[\frac{1}{2}\left(\frac{\alpha^2+p}{-r}\right)^{\frac{1}{2}}(x\pm\alpha t) + \beta\right],
\end{equation}
where $\alpha$ and $\beta$ are constants and the initial conditions are
\begin{eqnarray}
u(x,0) &=& -\frac{3(\alpha^2+p)}{2q} \sech^2\left[\frac{1}{2}\left(\frac{\alpha^2+p}{-r}\right)^{\frac{1}{2}}x +\beta\right],\\
u_t(x,0) &=& \mp\frac{3\alpha(\alpha^2+p)^{\frac{3}{2}}}{2q(-r)^{\frac{1}{2}}} \sech^2\left[\frac{1}{2}\left(\frac{\alpha^2+p}{-r}\right)^{\frac{1}{2}}x +\beta\right]\nonumber\\
&&\tanh\left[\frac{1}{2}\left(\frac{\alpha^2+p}{-r}\right)^{\frac{1}{2}}x +\beta\right].
\end{eqnarray}
The equivalent integral equation of (\ref{1}) is
\begin{eqnarray}
u(x,t) &=& -\frac{3(\alpha^2+p)}{2q} \sech^2\left[\frac{1}{2}\left(\frac{\alpha^2+p}{-r}\right)^{\frac{1}{2}}x +\beta\right]\nonumber\\
&&\mp\frac{3\alpha(\alpha^2+p)^{\frac{3}{2}}}{2q(-r)^{\frac{1}{2}}} \sech^2\left[\frac{1}{2}\left(\frac{\alpha^2+p}{-r}\right)^{\frac{1}{2}}x +\beta\right]\tanh\left[\frac{1}{2}\left(\frac{\alpha^2+p}{-r}\right)^{\frac{1}{2}}x +\beta\right] t\nonumber\\
&& - \int_0^t\!\!\int_0^s\left(p\,u_{xx} + r\,u_{xxxx}\right) d\tau\,ds - q\int_0^t\!\!\int_0^s \left(u^2\right)_{xx}\, d\tau\,ds. \label{4.1}
\end{eqnarray}
This Eq.(\ref{4.1}) is of the form of Eq.(\ref{3.2}). Using Eq.(\ref{3.3}), the modified DJM series terms are generated as below:
\begin{equation}
u_0(x,t) = - \frac{3(\alpha^2+p)}{2q} \sech^2\left[\frac{1}{2}\left(\frac{\alpha^2+p}{-r}\right)^{\frac{1}{2}}x +\beta\right],
\end{equation}
\begin{eqnarray}
u_1(x,t) &=& \mp\frac{3\alpha(\alpha^2+p)^{\frac{3}{2}}}{2q(-r)^{\frac{1}{2}}} \sech^2\left[\frac{1}{2}\left(\frac{\alpha^2+p}{-r}\right)^{\frac{1}{2}}x +\beta\right]\nonumber\\
&&\tanh\left[\frac{1}{2}\left(\frac{\alpha^2+p}{-r}\right)^{\frac{1}{2}}x +\beta\right]t\nonumber\\
&& - \int_0^t\!\!\int_0^s\left(p\,(u_0)_{xx} + r\,(u_0)_{xxxx}\right) d\tau\,ds - q\int_0^t\!\!\int_0^s \left(u_0^2\right)_{xx}\, d\tau\,ds,\nonumber\\
u_{n+1}(x,t)&=& -\int_0^t\!\!\int_0^s\left(p\left(\sum_{i=0}^{n} u_i\right)_{xx} + r\left(\sum_{i=0}^{n}u_i\right)_{xxxx} \right) d\tau\,ds\nonumber\\
&& - q\int_0^t\!\!\int_0^s \left(\left(\sum_{i=0}^{n}u_i\right)^{2}\right)_{xx} d\tau\,ds + q\int_0^t\!\!\int_0^s \left(\left(\sum_{i=0}^{n-1}u_i\right)^{2}\right)_{xx} d\tau\,ds,\quad n=1,2,3,\cdots.
\end{eqnarray}
\section{Illustrative examples}\label{Ex}
Besides Eq.(\ref{1}), there are a few more PDEs which are called Boussinesq equations. In this section, we solve such equations using the MDJM.
\begin{Ex}\label{ex1}
Consider the Boussinesq equation \cite{Mohyud}
\begin{equation}
u_{tt} - u_{xx} + 3 \left( u^2\right)_{xx} + u_{xxxx} = 0, \label{5.1}
\end{equation}
with initial condition
\begin{eqnarray}
u(x,0) &=& \frac{c}{2} \sech^2\left[\frac{\sqrt{c}}{2}(x+1)\right],\\
u_t(x,0) &=& -\frac{c^\frac{5}{2}}{2} \sech^2\left[\frac{\sqrt{c}}{2}(x+1)\right]\tanh\left[\frac{\sqrt{c}}{2}(x+1)\right] .
\end{eqnarray}
\end{Ex}
The equivalent integral equation is
\begin{eqnarray}
u(x,t) &=& \frac{c}{2} \sech^2\left[\frac{\sqrt{c}}{2}(x+1)\right] - \frac{c^\frac{5}{2}}{2} \sech^2\left[\frac{\sqrt{c}}{2}(x+1)\right]\tanh\left[\frac{\sqrt{c}}{2}(x+1)\right] t \nonumber\\
&& + \int_0^t\!\!\int_0^s(u_{xx} - u_{xxxx})\, d\tau\, ds - 3 \int_0^t\!\!\int_0^s \left(u^2\right)_{xx}\, d\tau\, ds. \label{5.2}
\end{eqnarray}
This Eq.(\ref{5.2}) is of the form of Eq.(\ref{3.2}). Using Eq.(\ref{3.3}), the MDJM series terms are generated as below:
\begin{eqnarray}
u_0(x,t) &=& \frac{c}{2} \sech^2[\frac{\sqrt{c}}{2}(x+1)],\\
u_1(x,t) &=& -\frac{1}{8} c^2 t^2 \sech^4\left[\frac{1}{2} \sqrt{c} (1+x)\right]-\frac{1}{4} c^3 t^2 \sech^6\left[\frac{1}{2}\sqrt{c} (1+x)\right]\nonumber\\
&&-\frac{1}{4} c^{5/2} t \sech^2\left[\frac{1}{2} \sqrt{c} (1+x)\right] \tanh\left[\frac{1}{2} \sqrt{c} (1+x)\right]+\cdots,\\
u_2(x,t) &=& \frac{1}{48} c^3 t^4 \sech^6\left[\frac{1}{2} \sqrt{c} (1+x)\right]+\frac{13}{192} c^4 t^4 \sech^8\left[\frac{1}{2}\sqrt{c} (1+x)\right]\nonumber\\
&& -\frac{17}{192} c^5 t^4 \sech^{10}\left[\frac{1}{2} \sqrt{c} (1+x)\right]-\cdots,\\
u_3(x,t) &=& \frac{7}{64} c^4 t^4 \sech^8\left[\frac{1}{2} \sqrt{c} (1+x)\right]\nonumber\\
&&-\frac{17 c^4 t^6 \sech^8\left[\frac{1}{2}\sqrt{c} (1+x)\right]}{5760}\nonumber\\
&&+\frac{47}{64} c^5 t^4 \sech^{10}\left[\frac{1}{2} \sqrt{c} (1+x)\right]\nonumber\\
&&-\frac{77 c^5 t^6 \sech^{10}\left[\frac{1}{2}\sqrt{c} (1+x)\right]}{1920}+\cdots.
\end{eqnarray}
and so on.
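These terms can be reproduced symbolically. The following short sympy sketch (our own check; it uses the fact that the double time integral of a $t$-independent function $g(x)$ equals $g(x)\,t^2/2$) computes $u_1$ for comparison with the display above.
\begin{verbatim}
import sympy as sp

x, t, c = sp.symbols('x t c', positive=True)
th = sp.sqrt(c) / 2 * (x + 1)
u0 = c / 2 * sp.sech(th) ** 2
f2 = -c ** sp.Rational(5, 2) / 2 * sp.sech(th) ** 2 * sp.tanh(th) * t

lin = sp.diff(u0, x, 2) - sp.diff(u0, x, 4)   # u_xx - u_xxxx at u = u_0
non = -3 * sp.diff(u0 ** 2, x, 2)             # -3 (u_0^2)_xx
u1 = sp.simplify(f2 + (lin + non) * t ** 2 / 2)
print(u1)
\end{verbatim}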
The exact solution of Eq.(\ref{5.1}) is
\begin{equation}
u(x,t) = \frac{c}{2} \sech^2\left[\frac{\sqrt{c}}{2}x+\frac{\sqrt{c}}{2}\sqrt{1+ct}\right]
\end{equation}
\\
We compare the 4-term solutions for $c=1$ and $c=2$ in Fig.1 and Fig.2, where the MDJM solution and the exact solution are shown in red and green, respectively. From these figures, it can be observed that the modified DJM solution is in good agreement with the exact solution.
\begin{tabular}{c}
\includegraphics[scale=1]{BE1.pdf} \\
Fig.1: Comparison of solutions of Eq.(\ref{5.1}) for $c=1$
\end{tabular}
\begin{tabular}{c}
\includegraphics[scale=1]{BE2.pdf} \\
Fig.2: Comparison of solutions of Eq.(\ref{5.1}) for $c=2$
\end{tabular}
The 6-term ADM and HPM solutions of (\ref{5.1}) are given in \cite{Mohyud} as\\
\begin{eqnarray}
u(x,t) &=& \frac{1}{2} \sech^2\left[\frac{1}{2} \sqrt{c} (1+x)\right]\nonumber\\
&&-\frac{1}{8}(-1+c) c^2 t^2\left(-2+\cosh[\sqrt{c}(x+1)]\sech^4\left[\frac{1}{2} \sqrt{c} (1+x)\right]\right.\nonumber\\
&&-\frac{1}{1280}\left((-1+c)^2 c^5 t^6 (-140 + 157 \cosh[\sqrt{c}(x+1)])\right) -26 \cosh[2\sqrt{c}(x+1)] \nonumber\\
&&+ \cosh[3\sqrt{c}(x+1)]\sech^{10}\left[\frac{\sqrt{c}}{2}(1+x)\right]+\cdots\label{5.4}
\end{eqnarray}
\\
and
\begin{eqnarray}
u(x,t) &=& \frac{1}{2} \sech^2\left[\frac{1}{2} \sqrt{c} (1+x)\right]\nonumber\\
&&-\frac{1}{8}(-1+c) c^2 t^2\left(-2+\cosh[\sqrt{c}(x+1)]\sech^4\left[\frac{1}{2} \sqrt{c} (1+x)\right]\right.\nonumber\\
&&-\frac{1}{30720}\left((-1+c) c^3 t^4 \left(-475 - 6125c - 9480 c^2 - 3360c^2t^2 +\cdots\right.\right.\label{5.5}
\end{eqnarray}
respectively. We compare the errors in all these solutions in table 1 and 2.
\\
\\
Table 1: Absolute error between the 4-term MDJM solution and the exact solution for $c=1$.\\
\\
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$t/x$ & $20$ & $25$ & $30$ & $35$ & $40$\\
\hline
$0.1$ & $3.58376\times10^{-12}$ & $2.41472\times10^{-14}$ & $1.62702\times10^{-16}$ & $1.09628\times10^{-18}$ & $7.38668\times10^{-21}$ \\
\hline
$0.2$ & $1.36005\times10^{-11}$ & $9.16392\times10^{-14}$ & $6.1746\times10^{-16}$ & $4.16041\times10^{-18}$ & $2.80326\times10^{-20}$ \\
\hline
$0.3$ & $2.91257\times10^{-11}$ & $1.96248\times10^{-13}$ & $1.32231\times10^{-15}$ & $8.90963\times10^{-18}$ & $6.00326\times10^{-20}$ \\
\hline
$0.4$ & $4.94206\times10^{-11}$ & $3.32993\times10^{-13}$ & $2.24369\times10^{-15}$ & $1.51179\times10^{-17}$ & $1.01863\times10^{-19}$ \\
\hline
$0.5$ & $7.38844\times10^{-11}$ & $4.97829\times10^{-13}$ & $3.35435\times10^{-15}$ & $2.26014\times10^{-17}$ & $1.52287\times10^{-19}$ \\
\hline
$0.6$ & $1.02022\times10^{-10}$ & $6.8742\times10^{-13}$ & $4.6318\times10^{-15}$ & $3.12088\times10^{-17}$ & $2.10283\times10^{-19}$ \\
\hline
$0.7$ & $1.33421\times10^{-10}$ & $8.98981\times10^{-13}$ & $6.05728\times10^{-15}$ & $4.08137\times10^{-17}$ & $2.7500\times10^{-19}$ \\
\hline
$0.8$ & $1.67731\times10^{-10}$ & $1.13017\times10^{-12}$ & $7.61499\times10^{-15}$ & $5.13094\times10^{-17}$ & $3.4572\times10^{-19}$ \\
\hline
$0.9$ & $2.04658\times10^{-10}$ & $1.37898\times10^{-12}$ & $9.29146\times10^{-15}$ & $6.26054\times10^{-17}$ & $4.21832\times10^{-19}$ \\
\hline
$1$ & $2.43946\times10^{-10}$ & $1.64369\times10^{-12}$ & $1.10751\times10^{-14}$ & $7.46236\times10^{-17}$ & $5.0281\times10^{-19}$ \\
\hline
\end{tabular}
\\
\\
Table 2: Absolute error between the 4-term MDJM solution and the exact solution for $c=2$.\\
\\
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$t/x$ & $20$ & $25$ & $30$ & $35$ & $40$\\
\hline
$0.1$ & $1.24823\times10^{-14}$ & $1.06015\times10^{-17}$ & $9.00415\times10^{-21}$ & $7.64746\times10^{-24}$ & $6.49518\times10^{-27}$ \\
\hline
$0.2$ & $4.58263\times10^{-14}$ & $3.89215\times10^{-17}$ & $3.3057\times10^{-20}$ & $2.80762\times10^{-23}$ & $2.38458\times10^{-26}$ \\
\hline
$0.3$ & $9.50542\times10^{-14}$ & $8.0732\times10^{-17}$ & $6.85678\times10^{-20}$ & $5.82364\times10^{-23}$ & $4.94616\times10^{-26}$ \\
\hline
$0.4$ & $1.56166\times10^{-13}$ & $1.32635\times10^{-16}$ & $1.12651\times10^{-19}$ & $9.56772\times10^{-23}$ & $8.12611\times10^{-26}$ \\
\hline
$0.5$ & $2.25727\times10^{-13}$ & $1.91716\times10^{-16}$ & $1.62829\times10^{-19}$ & $1.38295\times10^{-22}$ & $1.17458\times10^{-25}$ \\
\hline
$0.6$ & $3.00672\times10^{-13}$ & $2.55368\times10^{-16}$ & $2.16891\times10^{-19}$ & $1.84211\times10^{-22}$ & $1.56455\times10^{-25}$ \\
\hline
$0.7$ & $3.78205\times10^{-13}$ & $3.21219\times10^{-16}$ & $2.72819\times10^{-19}$ & $2.31713\times10^{-22}$ & $1.96799\times10^{-25}$ \\
\hline
$0.8$ & $4.55765\times10^{-13}$ & $3.87093\times10^{-16}$ & $3.28768\times10^{-19}$ & $2.79231\times10^{-22}$ & $2.37158\times10^{-25}$ \\
\hline
$0.9$ & $5.31029\times10^{-13}$ & $4.51017\times10^{-16}$ & $3.8306\times10^{-19}$ & $3.25343\times10^{-22}$ & $2.76322\times10^{-25}$ \\
\hline
$1$ & $6.01928\times10^{-13}$ & $5.11233\times10^{-16}$ & $4.34203\times10^{-19}$ & $3.6878\times10^{-22}$ & $3.13214\times10^{-25}$ \\
\hline
\end{tabular}
\\
\\
It is observed that the MDJM solution has a smaller error than the other iterative methods described above. Moreover, we have used fewer terms of the MDJM series than the other methods require to approximate the solution.
\\
\section{Conclusions}\label{concl}
The Boussinesq equation is an important class of PDEs arising in applied science. Various authors have proposed different methods to solve these equations. In this article, we proposed a modification of the DJM, viz. the MDJM, and used it to solve a few Boussinesq equations. It is observed that the MDJM is a simpler iterative method than the other iterative methods considered. Further, the solutions obtained by using this new method are good approximations to the exact solutions. This new method can be used to solve different nonlinear problems in a more efficient way.
\subsection{Acknowledgements}
S. Bhalekar acknowledges the Science and Engineering Research Board (SERB), New Delhi, India, for the Research Grant (Ref. MTR/2017/000068) under the Mathematical Research Impact Centric Support (MATRICS) Scheme.
\section{References}
\section{Introduction}
\setcounter{equation}{0}
The goal of this paper is to study the nonexistence of nontrivial sign-changing solutions to a class of fractional elliptic inequalities and systems with variable-exponent nonlinearities, namely
\begin{equation}\label{1}
(-\Delta)^{\frac{\alpha}{2}} u+\lambda\, \Delta u \geq |u|^{p(x)}, \quad x\in\mathbb{R}^N
\end{equation}
and
\begin{eqnarray}\label{2}
\left\{\begin{array}{lll}
(-\Delta)^{\frac{\alpha}{2}} u+\lambda\, \Delta u &\geq & |v|^{q(x)},\quad x\in\mathbb{R}^N,\\ \\
(-\Delta)^{\frac{\beta}{2}} v+\mu\, \Delta v &\geq &|u|^{p(x)},\quad x\in\mathbb{R}^N,
\end{array}
\right.
\end{eqnarray}
where $N\geq 1$, $\alpha,\beta\in (0,2)$, $\lambda,\mu\in\mathbb{R}$ are constants, $p,q: \mathbb{R}^N\to (1,\infty)$ are measurable functions, and $(-\Delta)^{\frac{\kappa}{2}}$, $\kappa\in \{\alpha,\beta\}$, is the fractional Laplacian
operator of order $\frac{\kappa}{2}$. We mention below some motivations for studying problems of types \eqref{1} and \eqref{2}.
In \cite{GS}, Gidas and Spruck considered the corresponding equation
to \eqref{1} with $\alpha=2$, $\lambda=0$, $p(\cdot)\equiv p$ and $u\geq 0$, namely
\begin{eqnarray}\label{1-GS}
\left\{\begin{array}{lllll}
-\Delta u &= & u^p &\mbox{in}& \mathbb{R}^N,\\
u &\geq & 0 &\mbox{in}& \mathbb{R}^N.
\end{array}
\right.
\end{eqnarray}
It was shown that,
\begin{itemize}
\item[(a)] if $N\geq 3$ and $1<p<\frac{N+2}{N-2}$, then \eqref{1-GS} admits no positive classical solution;
\item[(b)] if $N\geq 3$ and $p\geq \frac{N+2}{N-2}$, then \eqref{1-GS} admits positive classical solutions.
\end{itemize}
Consider the corresponding system of equations to \eqref{2} with $\alpha=\beta=2$, $\lambda=\mu=0$, $p(\cdot)\equiv p>0$, $q(\cdot)\equiv q>0$ and $u,v\geq 0$, namely the Lane-Emden system
\begin{eqnarray}\label{2-LES}
\left\{\begin{array}{lllll}
-\Delta u &= & v^q &\mbox{in}& \mathbb{R}^N,\\
-\Delta v &= &u^p&\mbox{in}& \mathbb{R}^N,\\
u &\geq & 0 &\mbox{in}& \mathbb{R}^N,\\
v &\geq & 0 &\mbox{in}& \mathbb{R}^N,
\end{array}
\right.
\end{eqnarray}
where $N\geq 3$. The famous Lane-Emden conjecture
states that, if
$$
\frac{1}{p+1}+\frac{1}{q+1}>1-\frac{2}{N},
$$
then \eqref{2-LES} admits no positive classical solution. This conjecture was proved only in the cases $N\in \{3,4\}$
(see \cite{PQS,SZ,Souplet}).
In the case $\alpha=2$, $\lambda=0$, $p(\cdot)\equiv p>1$ and $u\geq 0$, \eqref{1} reduces to
\begin{eqnarray}\label{1-NS}
\left\{\begin{array}{lllll}
-\Delta u &\geq & u^p &\mbox{in}& \mathbb{R}^N,\\
u &\geq & 0 &\mbox{in}& \mathbb{R}^N.
\end{array}
\right.
\end{eqnarray}
Ni and Serrin \cite{NS} investigated the radial case of \eqref{1-NS}. Namely, it was shown that, if $N\geq 3$ and $1<p\leq \frac{N}{N-2}$, then \eqref{1-NS} has no positive radial solution such that
$\displaystyle\lim_{|x|\to \infty} u(|x|)=0$.
Mitidieri and Pohozaev \cite{MP} studied sign-changing solutions to the differential inequality
\begin{equation}\label{1-MP}
-\Delta u \geq |u|^{p} \quad \mbox{in }\,\, \mathbb{R}^N.
\end{equation}
In the case $N\geq 3$, it was shown that, if
\begin{equation}\label{cdMP}
1<p\leq \frac{N}{N-2},
\end{equation}
then \eqref{1-MP} has no nontrivial weak solution. Note that \eqref{cdMP} is sharp, in the sense that, if $N\geq 3$ and $p>\frac{N}{N-2}$, then \eqref{1-MP} admits positive classical solutions. Indeed, one can check easily that in this case,
$$
u(x)=\epsilon \left(1+|x|^2\right)^{\frac{1}{1-p}},\quad x\in \mathbb{R}^N,
$$
is a positive solution to \eqref{1-MP} for sufficiently small $\epsilon>0$.
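For the reader's convenience, let us record the elementary computation behind this claim. Writing $\delta=\frac{1}{p-1}$, so that $u(x)=\epsilon\left(1+|x|^2\right)^{-\delta}$, a direct calculation gives
$$
-\Delta u(x)=2\delta\epsilon \left(1+|x|^2\right)^{-\delta-2}\left[N+(N-2\delta-2)|x|^2\right],\quad x\in \mathbb{R}^N.
$$
Since $p>\frac{N}{N-2}$ is equivalent to $2\delta+2<N$, the bracket is bounded below by $c\left(1+|x|^2\right)$ with $c=\min\{N,N-2\delta-2\}>0$. Moreover, $\delta p=\delta+1$, so $|u(x)|^p=\epsilon^p\left(1+|x|^2\right)^{-\delta-1}$, and hence $-\Delta u\geq |u|^{p}$ in $\mathbb{R}^N$ as soon as $\epsilon^{p-1}\leq 2\delta c$.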
Mitidieri and Pohozaev \cite{MP} studied also the corresponding system to \eqref{1-MP}, namely
\begin{eqnarray}\label{2-MP}
\left\{\begin{array}{lll}
-\Delta u &\geq & |v|^{q},\quad x\in\mathbb{R}^N,\\ \\
-\Delta v &\geq &|u|^{p},\quad x\in\mathbb{R}^N.
\end{array}
\right.
\end{eqnarray}
It was shown that, if
\begin{equation}\label{sysMP}
N\leq \max\left\{\frac{2q(p+1)}{pq-1},\frac{2p(q+1)}{pq-1}\right\},
\end{equation}
then \eqref{2-MP} admits no nontrivial weak solution. Moreover, condition \eqref{sysMP} is sharp, in the sense that, if $N> \max\left\{\frac{2q(p+1)}{pq-1},\frac{2p(q+1)}{pq-1}\right\}$, then \eqref{2-MP} admits positive classical solutions ($u,v>0$). Indeed, it can be easily seen that in this case,
$$
(u(x),v(x))=\left(\epsilon \left(1+|x|^2\right)^{\frac{q+1}{1-pq}},\epsilon \left(1+|x|^2\right)^{\frac{p+1}{1-pq}}\right),\quad x\in \mathbb{R}^N,
$$
is a positive solution to \eqref{2-MP} for sufficiently small $\epsilon>0$.
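The exponents here are dictated by scaling: setting $\delta_1=\frac{q+1}{pq-1}$ and $\delta_2=\frac{p+1}{pq-1}$, one checks that
$$
q\,\delta_2=\delta_1+1\quad\mbox{and}\quad p\,\delta_1=\delta_2+1,
$$
so that $v^q$ decays exactly like $-\Delta u$ and $u^p$ exactly like $-\Delta v$. Arguing as in the computation above, the positivity of $-\Delta u$ and $-\Delta v$ requires $N>2\delta_1+2=\frac{2q(p+1)}{pq-1}$ and $N>2\delta_2+2=\frac{2p(q+1)}{pq-1}$, which is precisely the complement of \eqref{sysMP}.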
A large class of differential inequalities and systems generalizing \eqref{1-MP} and \eqref{2-MP} was systematically investigated by Mitidieri and Pohozaev (see e.g. \cite{MP98,MP1,MP2,MP}), who developed the nonlinear capacity method. Next, this approach was used by many authors in the study of different types of problems (see e.g. \cite{BP,CAM,DYZ,Fi0,Fi,Sun}
).
Nonlocal operators have been receiving increased attention in recent years
due to their usefulness in modeling complex systems with long-range interactions or memory effects, which cannot be described properly via standard differential operators. In particular, the fractional Laplacian $(-\Delta)^\frac{\alpha}{2}$, $0<\alpha<2$, has been used to describe anomalous diffusion \cite{MM}, turbulent flows \cite{GU}, stochastic dynamics \cite{BBC,Chen}, finance \cite{CO}, and many other phenomena. Due to the above facts, the study of mathematical problems involving the fractional Laplacian operator has attracted significant attention recently. In particular, many interesting results related to Liouville-type theorems for nonlocal elliptic problems have been obtained.
To overcome the difficulty caused by the nonlocal property of the fractional Laplacian operator, Caffarelli and Silvestre \cite{CS} introduced an extension method which consists of localizing the fractional Laplacian by constructing a Dirichlet to Neumann operator of a degenerate elliptic equation. Using the mentioned approach, Brandle et al. \cite{BR} established a nonexistence result for the fractional version of
\eqref{1-GS}, namely
\begin{eqnarray}\label{F-1-GS}
\left\{\begin{array}{lllll}
(-\Delta)^{\frac{\alpha}{2}} u &= & u^p &\mbox{in}& \mathbb{R}^N,\\
u &\geq & 0 &\mbox{in}& \mathbb{R}^N.
\end{array}
\right.
\end{eqnarray}
It was shown that, if $1\leq \alpha<2$, $N\geq 2$ and $1<p<\frac{N+\alpha}{N-\alpha}$, then \eqref{F-1-GS} has no nontrivial bounded solution.
In \cite{ZH}, Zhuo et al. investigated \eqref{F-1-GS} using an equivalent integral representation to \eqref{F-1-GS}. They obtained the same result as in \cite{BR} but under weaker conditions. Namely, they proved that, if $0<\alpha<2$, $N\geq 2$ and $1<p<\frac{N+\alpha}{N-\alpha}$, then \eqref{F-1-GS} has no nontrivial locally bounded solution.
In \cite{QX}, Quaas and Xia studied the fractional Lane-Emden system
\begin{eqnarray}\label{F-2-LES}
\left\{\begin{array}{lllll}
(-\Delta)^{\frac{\alpha}{2}} u &= & v^q &\mbox{in}& \mathbb{R}^N,\\
(-\Delta)^{\frac{\alpha}{2}}v &= &u^p&\mbox{in}& \mathbb{R}^N,\\
u &\geq & 0 &\mbox{in}& \mathbb{R}^N,\\
v &\geq & 0 &\mbox{in}& \mathbb{R}^N,
\end{array}
\right.
\end{eqnarray}
where $0<\alpha<2$, $N>\alpha$ and $p,q>0$. Using the method of moving planes, it was shown that, if $pq>1$,
$$
\beta_1,\beta_2\in \left[\frac{N-\alpha}{2},N-\alpha\right)\quad\mbox{and}\quad (\beta_1,\beta_2)\neq \left(\frac{N-\alpha}{2},\frac{N-\alpha}{2}\right),
$$
where $\beta_1=\frac{\alpha(q+1)}{pq-1}$ and $\beta_2=\frac{\alpha(p+1)}{pq-1}$, then for some $\sigma > 0$, there exists no positive solution to
\eqref{F-2-LES} in $X_{\alpha,\sigma}(\mathbb{R}^N)$, where
$$
X_{\alpha,\sigma}(\mathbb{R}^N)=
\left\{\begin{array}{lll}
C^{\alpha+\sigma}(\mathbb{R}^N) &\mbox{if}& 0<\alpha<1,\\
C^{1,\alpha+\sigma-1}(\mathbb{R}^N) &\mbox{if}& 1\leq \alpha<2.
\end{array}
\right.
$$
In the case $\lambda=0$, $p(\cdot)\equiv p$ and $u\geq 0$, \eqref{1} reduces to
\begin{eqnarray}\label{1-FFNS}
\left\{\begin{array}{lllll}
(-\Delta)^{\frac{\alpha}{2}} u &\geq & u^p &\mbox{in}& \mathbb{R}^N,\\
u &\geq & 0 &\mbox{in}& \mathbb{R}^N.
\end{array}
\right.
\end{eqnarray}
Using the extension method \cite{CS}, Wang and Xiao \cite{WX} proved the following results for \eqref{1-FFNS}: Let $p>1$, $0<\alpha<2$ and $N\geq 1$. Then
\begin{itemize}
\item[(a)] if $N\leq \alpha$, then \eqref{1-FFNS} has no nontrivial weak solution;
\item[(b)] if $N>\alpha$, then \eqref{1-FFNS} has no nontrivial weak solution if and only if $p\leq \frac{N}{N-\alpha}$.
\end{itemize}
In the special case $\lambda=\mu=0$, $p(\cdot)\equiv p$, $q(\cdot)\equiv q$ and $u,v\geq 0$, \eqref{2} reduces to
\begin{eqnarray}\label{2-testf}
\left\{\begin{array}{lllll}
(-\Delta)^{\frac{\alpha}{2}} u &\geq & v^{q} &\mbox{in}& \mathbb{R}^N,\\
(-\Delta)^{\frac{\beta}{2}} v &\geq & u^{p}&\mbox{in}& \mathbb{R}^N,\\
u &\geq & 0 &\mbox{in}& \mathbb{R}^N,\\
v &\geq & 0 &\mbox{in}& \mathbb{R}^N.
\end{array}
\right.
\end{eqnarray}
Using the nonlinear capacity method and Ju's inequality \cite{Ju}, Dahmani et al. \cite{DKK} proved that, if $0<\alpha,\beta<2$, $p,q>1$ and
\begin{equation}\label{Nest}
N<\max\left\{\beta+\frac{\alpha}{q},\alpha+\frac{\beta}{p}\right\} \frac{pq}{pq-1},
\end{equation}
then \eqref{2-testf} has no nontrivial weak solution. Observe that in the case $p=q>1$, $0<\alpha=\beta<2$, $N>\alpha$ and $u=v\geq 0$, \eqref{Nest} reduces to $p<\frac{N}{N-\alpha}$, which is the sufficient condition for the nonexistence of nontrivial weak solution to \eqref{1-FFNS} obtained in the statement (b) (without the limit case $p=\frac{N}{N-\alpha}$).
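For the record, this reduction is a one-line computation: with $p=q$ and $\alpha=\beta$, the right-hand side of \eqref{Nest} equals
$$
\left(\alpha+\frac{\alpha}{p}\right)\frac{p^2}{p^2-1}=\frac{\alpha p}{p-1},
$$
and $N<\frac{\alpha p}{p-1}$ is equivalent to $p<\frac{N}{N-\alpha}$ when $N>\alpha$.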
In all the above mentioned results, the positivity of solutions is essential. In particular, the standard nonlinear capacity method used in \cite{DKK} cannot be applied to \eqref{1} and \eqref{2}, where solutions can change sign. Namely, the main difficulty consists in constructing a function $\theta\in C_0^\infty(\mathbb{R}^N)$, $\theta\geq 0$, so that
$$
\left|(-\Delta)^{\frac{\kappa}{2}}\theta^\ell(x)\right|\leq C \theta^{\ell-1}(x) \left|(-\Delta)^{\frac{\kappa}{2}}\theta(x)\right|,\quad x\in \mathbb{R}^N,
$$
where $\kappa\in (0,2)$, $\ell>1$ and $C>0$ is a constant (independent of $x$).
The originality of this work resides in considering sign-changing solutions
to fractional elliptic inequalities and systems, as well as variable exponents. As in \cite{DKK}, we use the nonlinear capacity method, but with a different choice of the test function, allowing us to treat the sign-changing solutions case. This choice is motivated by the recent work of
Dao and Reissig \cite{DR}, where a blow-up result for semi-linear structurally damped $\sigma$-evolution equations was derived.
Before stating our main results, we recall some basic notions related to the fractional Laplacian operator and Lebesgue spaces with variable exponents, and define weak solutions to \eqref{1} and \eqref{2}. For more details about these notions, we refer to \cite{DHHMS,DHHMS2,Kwanicki,Silvestre} and the references therein.
Let $s\in (0,1)$. The fractional Laplacian operator $(-\Delta)^s$ is defined as
\begin{equation}\label{fracL}
(-\Delta)^s f(x)=C_{N,s}\, P.V. \, \int_{\mathbb{R}^N}\frac{f(x)- f(y)}{|x-y|^{N+2s}}\, dy,\quad x\in \mathbb{R}^N,
\end{equation}
where $f$ belongs to a suitable set of functions, $P.V.$ stands for Cauchy's principal value, and $C_{N,s}>0$ is a normalization constant that depends only on $N$ and $s$.
Let $p: \mathbb{R}^N\to (1,\infty)$ be a measurable function such that
\begin{equation}\label{asm1}
1<p^-:=\mbox{ess} \inf_{x\in \mathbb{R}^N}p(x)\leq p(x)\leq p^+:=\mbox{ess} \sup_{x\in \mathbb{R}^N}p(x)<\infty,\quad x\in \mathbb{R}^N\, a.e.
\end{equation}
The variable exponent Lebesgue space $L^{p(\cdot)}(\mathbb{R}^N)$ is defined by
$$
L^{p(\cdot)}(\mathbb{R}^N)=\left\{f:\mathbb{R}^N\to \mathbb{R}: f\mbox{ is measurable},\, \varrho_{p(\cdotp)}(\lambda f)<\infty \mbox{ for some }\lambda>0 \right\},
$$
where the modular $\varrho_{p(\cdotp)}$ is defined by
$$
\varrho_{p(\cdotp)}(f)=\int_{\mathbb{R}^N}| f(x)|^{p(x)}\,dx,
$$
with a Luxemburg-type norm
$$
\|f\|_{p(\cdotp)}=\inf\left\{\lambda>0:\,\varrho_{p(\cdotp)}\left(\frac{f}{\lambda}\right)\leq 1\right\},\quad f\in L^{p(\cdot)}(\mathbb{R}^N).
$$
Equipped with this norm, $L^{p(\cdotp)}(\mathbb{R}^N)$ is a Banach space. Moreover, one has
\begin{equation}\label{esve}
\min\left\{\varrho_{p(\cdotp)}(f)^{\frac{1}{p^-}},\varrho_{p(\cdotp)}(f)^{\frac{1}{p^+}}\right\}\leq \|f\|_{p(\cdotp)}\leq \max\left\{\varrho_{p(\cdotp)}(f)^{\frac{1}{p^-}},\varrho_{p(\cdotp)}(f)^{\frac{1}{p^+}}\right\},\quad f\in L^{p(\cdot)}(\mathbb{R}^N).
\end{equation}
Problem \eqref{1} is investigated under the following assumptions: $N\geq 1$, $0<\alpha<2$, $\lambda\in \mathbb{R}$, and $p: \mathbb{R}^N\to (1,\infty)$ is a measurable function satisfying \eqref{asm1}.
\begin{definition}[Weak solution for \eqref{1}]\label{weakequation}
We say that $u \in L^{2}(\mathbb{R}^N)\cap L^{2p(\cdot)}(\mathbb{R}^N)$ is a weak solution to \eqref{1} if
$$
\int_{\mathbb{R}^N}|u(x)|^{p(x)}\varphi(x)\,dx\leq \int_{\mathbb{R}^N}u(x)(-\Delta)^{\frac{\alpha}{2}}\varphi(x)\,dx+\lambda \int_{\mathbb{R}^N}u(x)\Delta\varphi(x)\,dx,
$$
for all $\varphi\in H^2(\mathbb{R}^N)$, $\varphi\geq 0$.
\end{definition}
Problem \eqref{2} is investigated under the following assumptions: $N\geq 1$, $0<\alpha,\beta<2$, $\lambda,\mu\in \mathbb{R}$, and $p,q: \mathbb{R}^N\to (1,\infty)$ are measurable functions satisfying respectively \eqref{asm1} and
$$
1<q^-:=\mbox{ess} \inf_{x\in \mathbb{R}^N}q(x)\leq q(x)\leq q^+:=\mbox{ess} \sup_{x\in \mathbb{R}^N}q(x)<\infty,\quad x\in \mathbb{R}^N\, a.e.
$$
\begin{definition}[Weak solution for \eqref{2}]
We say that
$$
(u,v) \in (L^{2}(\mathbb{R}^N)\cap L^{2p(\cdotp)}(\mathbb{R}^N))\times (L^{2}(\mathbb{R}^N)\cap L^{2q(\cdotp)}(\mathbb{R}^N))
$$
is a weak solution to \eqref{2} if
\begin{equation}\label{weaksystem1}
\int_{\mathbb{R}^N}|v(x)|^{q(x)}\varphi(x)\,dx\leq \int_{\mathbb{R}^N}u(x)(-\Delta)^{\frac{\alpha}{2}}\varphi(x)\,dx+\lambda \int_{\mathbb{R}^N}u(x)\Delta\varphi(x)\,dx
\end{equation}
and
\begin{equation}\label{weaksystem2}
\int_{\mathbb{R}^N}|u(x)|^{p(x)}\varphi(x)\,dx\leq \int_{\mathbb{R}^N}v(x)(-\Delta)^{\frac{\beta}{2}}\varphi(x)\,dx+\mu \int_{\mathbb{R}^N}v(x)\Delta\varphi(x)\,dx,
\end{equation}
for all $\varphi\in H^2(\mathbb{R}^N)$, $\varphi\geq 0$.
\end{definition}
Now, we are ready to state the main results of this paper.
\begin{theorem}\label{theorem1}
If
\begin{equation}\label{cdnonexist1}
1<p^-\leq p^+< p^*(N),
\end{equation}
where
$$
p^{*}(N)=\left\{\begin{array}{lll}
\infty &\mbox{if}& N\leq\alpha,\\
\frac{N}{N-\alpha} &\mbox{if}& N>\alpha,
\end{array}
\right.
$$
then the only weak solution to \eqref{1} is the trivial one.
\end{theorem}
\begin{rmk}
When $N>\alpha$, it is still an open question whether or not \eqref{1} admits nontrivial sign-changing weak solutions in the following cases:
\begin{itemize}
\item[(a)] $1<p^-\leq p^*(N)\leq p^+$;
\item[(b)] $1<p^*(N)<p^-$.
\end{itemize}
Note that in the case $\lambda=0$ and $p(\cdot)\equiv p$ (i.e. $p^-=p^+=p$), (a) reduces to
\begin{itemize}
\item[(a')] $p=p^*(N)$,
\end{itemize}
while (b) reduces to
\begin{itemize}
\item[(b')] $p>p^*(N)$.
\end{itemize}
From \cite[Theorem 1.1 (ii)]{WX}, in the case (a'), the only nonnegative weak solution to this special case of \eqref{1} is the trivial one; while in the case (b'), positive strong solutions exist.
\end{rmk}
\begin{rmk}
(i) The exponent $p^*(N)$ depends only on the dimension and the lower order power of $(-\Delta)$, i.e. on $\alpha$.\\
(ii) In the case $\alpha=2$, $p^*(N)$ is the critical exponent for problem \eqref{1-MP}.
\end{rmk}
\begin{theorem}\label{theorem2}
If
\begin{equation}\label{condsyst}
N< \frac{p^+q^+}{p^+q^+-1} \max\left\{\beta+\frac{\alpha}{q^+},\alpha+\frac{\beta}{p^+}\right\},
\end{equation}
then the only weak solution to \eqref{2} is the trivial one, i.e. $(u,v)\equiv (0,0)$.
\end{theorem}
\begin{rmk}
(i) In the case $\alpha=\beta=2$, $p(\cdot)\equiv p$ and $q(\cdot)\equiv q$, \eqref{condsyst} reduces to \eqref{sysMP} (with strict inequality), which is the obtained condition in \cite{MP}, under which \eqref{2-MP} admits no nontrivial weak solution.\\ (ii) In the case $p(\cdot)\equiv p$ and $q(\cdot)\equiv q$, \eqref{condsyst} reduces to \eqref{Nest}, which is the obtained condition in \cite{DKK}, under which \eqref{2-testf} has no positive weak solution.
\end{rmk}
The proofs of our main results are given in the next section.
\section{Proofs}\label{sec2}
In this section, we give the proofs of Theorems \ref{theorem1} and \ref{theorem2}. We shall use the nonlinear capacity method combined with the following pointwise estimate (see Fujiwara \cite{Fuj} and Dao and Reissig \cite{DR}).
\begin{lemma}\label{lemma1}
Let
$$
\langle x\rangle:=(1+|x|^2)^{\frac{1}{2}},\quad x\in \mathbb{R}^N.
$$
Let $s \in (0,1]$ and $\theta: \mathbb{R}^N\to (0,\infty)$ be the function defined by
\begin{equation}\label{testfOK}
\theta(x)=\langle x\rangle^{-\rho},\quad x\in \mathbb{R}^N,
\end{equation}
where $N<\rho\leq N+2s$. Then $\theta\in L^1(\mathbb{R}^N)\cap H^2(\mathbb{R}^N)$, and the following estimate holds:
\begin{equation}\label{3}
\left|(-\Delta)^s\theta(x)\right|\leq C \theta(x), \quad x\in\mathbb{R}^N,
\end{equation}
where $C>0$ is a constant (independent of $x$).
\end{lemma}
\begin{proof}[Proof of Theorem \ref{theorem1}]
Let $u \in L^{2}(\mathbb{R}^N)\cap L^{2p(\cdot)}(\mathbb{R}^N)$ be a weak solution to \eqref{1}. By Definition \ref{weakequation}, for all $\varphi\in H^2(\mathbb{R}^N)$, $\varphi\geq 0$, one has
\begin{equation}\label{gest}
\int_{\mathbb{R}^N}|u(x)|^{p(x)}\varphi(x)\,dx\leq \int_{\mathbb{R}^N}|u(x)| \left|(-\Delta)^{\frac{\alpha}{2}}\varphi(x)\right|\,dx+|\lambda| \int_{\mathbb{R}^N}|u(x)|\left|\Delta\varphi(x)\right|\,dx.
\end{equation}
On the other hand, for all $0<\epsilon<1$ and $x\in \mathbb{R}^N$ a.e., writing
$$
|u(x)| \left|(-\Delta)^{\frac{\alpha}{2}}\varphi(x)\right|=\left[\left(\epsilon p(x)\varphi(x)\right)^{\frac{1}{p(x)}} |u(x)|\right]\left[\left(\epsilon p(x)\varphi(x)\right)^{\frac{-1}{p(x)}} \left|(-\Delta)^{\frac{\alpha}{2}}\varphi(x)\right|\right]
$$
and using Young's inequality, it holds that
$$
|u(x)| \left|(-\Delta)^{\frac{\alpha}{2}}\varphi(x)\right|\leq \epsilon |u(x)|^{p(x)}\varphi(x)+\epsilon^{\frac{-1}{p(x)-1}}\left(\frac{p(x)-1}{p(x)}\right) p(x)^{\frac{-1}{p(x)-1}}\varphi(x)^{\frac{-1}{p(x)-1}}\left|(-\Delta)^{\frac{\alpha}{2}}\varphi(x)\right|^{\frac{p(x)}{p(x)-1}}.
$$
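The precise form of Young's inequality used here is
$$
ab\leq \frac{a^{m}}{m}+\frac{m-1}{m}\,b^{\frac{m}{m-1}},\quad a,b\geq 0,\ m>1,
$$
applied with $m=p(x)$ to the two bracketed factors above.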
Next, using \eqref{asm1}, one obtains
$$
|u(x)| \left|(-\Delta)^{\frac{\alpha}{2}}\varphi(x)\right|\leq \epsilon |u(x)|^{p(x)}\varphi(x)+\epsilon^{\frac{-1}{p^--1}} (p^+)^{\frac{-1}{p^+-1}}\varphi(x)^{\frac{-1}{p(x)-1}}\left|(-\Delta)^{\frac{\alpha}{2}}\varphi(x)\right|^{\frac{p(x)}{p(x)-1}},
$$
which yields
\begin{equation}\label{firstes}
\int_{\mathbb{R}^N}|u(x)| \left|(-\Delta)^{\frac{\alpha}{2}}\varphi(x)\right|\,dx\leq \epsilon \int_{\mathbb{R}^N} |u(x)|^{p(x)}\varphi(x)\,dx+C \int_{\mathbb{R}^N} \varphi(x)^{\frac{-1}{p(x)-1}}\left|(-\Delta)^{\frac{\alpha}{2}}\varphi(x)\right|^{\frac{p(x)}{p(x)-1}}\,dx.
\end{equation}
Throughout, $C$ denotes a positive constant, whose value may change from line to line. Similarly, one obtains
\begin{equation}\label{secondes}
\int_{\mathbb{R}^N}|u(x)|\left|\Delta\varphi(x)\right|\,dx\leq \epsilon \int_{\mathbb{R}^N} |u(x)|^{p(x)}\varphi(x)\,dx+C \int_{\mathbb{R}^N} \varphi(x)^{\frac{-1}{p(x)-1}}\left|\Delta \varphi(x)\right|^{\frac{p(x)}{p(x)-1}}\,dx.
\end{equation}
Next, taking
$$
0<\epsilon<\frac{1}{1+|\lambda|},
$$
it follows from \eqref{gest}, \eqref{firstes} and \eqref{secondes} that
\begin{equation}\label{es3}
\int_{\mathbb{R}^N}|u(x)|^{p(x)}\varphi(x)\,dx\leq C \left(I_1(\varphi)+|\lambda| I_2(\varphi)\right),
\end{equation}
where
$$
I_1(\varphi):=\int_{\mathbb{R}^N} \varphi(x)^{\frac{-1}{p(x)-1}}\left|(-\Delta)^{\frac{\alpha}{2}}\varphi(x)\right|^{\frac{p(x)}{p(x)-1}}\,dx\quad\mbox{and}\quad
I_2(\varphi):=\int_{\mathbb{R}^N} \varphi(x)^{\frac{-1}{p(x)-1}}\left|\Delta\varphi(x)\right|^{\frac{p(x)}{p(x)-1}}\,dx.
$$
Now, for $R>1$, we take
$$
\varphi(x)=\theta\left(\frac{x}{R}\right),\quad x\in \mathbb{R}^N,
$$
where $\theta$ is the function defined by \eqref{testfOK} with $\rho=N+\alpha$. Using \eqref{fracL}, the change of variable $x=Ry$, and \cite[Lemma 2.4]{DR}, one obtains
$$
I_1(\varphi)=R^N \int_{\mathbb{R}^N} R^{\frac{-\alpha p(Ry)}{p(Ry)-1}}\theta(y)^{\frac{-1}{p(Ry)-1}} \left|(-\Delta)^{\frac{\alpha}{2}}\theta(y)\right|^{\frac{p(Ry)}{p(Ry)-1}}\,dy\quad\mbox{and}\quad
I_2(\varphi)=R^{N} \int_{\mathbb{R}^N} R^{\frac{-2 p(Ry)}{p(Ry)-1}}\theta(y)^{\frac{-1}{p(Ry)-1}} \left|\Delta\theta(y)\right|^{\frac{p(Ry)}{p(Ry)-1}}\,dy.
$$
On the other hand, by Lemma \ref{lemma1}, one has
$$
\left|(-\Delta)^{\frac{\kappa}{2}}\theta(y)\right|\leq C \theta(y),\quad y\in \mathbb{R}^N,
$$
where $\kappa\in \{\alpha,2\}$. Hence, one deduces that
\begin{eqnarray}\label{sim1}
\nonumber I_1(\varphi) & \leq & C R^{N} \int_{\mathbb{R}^N}R^{\frac{-\alpha p(Ry)}{p(Ry)-1}}\theta(y)\,dy\\
\nonumber &\leq & C R^{N} R^{\frac{-\alpha p^+}{p^+-1}} \int_{\mathbb{R}^N}\theta(y)\,dy\\
&=& C R^{N-\frac{\alpha p^+}{p^+-1}}.
\end{eqnarray}
Similarly, one has
\begin{equation}\label{sim2}
I_2(\varphi) \leq C R^{N-\frac{2 p^+}{p^+-1}}.
\end{equation}
Therefore, using \eqref{es3}, one deduces that
$$
\int_{\mathbb{R}^N}|u(x)|^{p(x)}\varphi(x)\,dx\leq C
\left(R^{N-\frac{\alpha p^+}{p^+-1}}+|\lambda| R^{N-\frac{2 p^+}{p^+-1}}\right),\quad R>1.
$$
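Observe that \eqref{cdnonexist1} makes both exponents of $R$ negative. Indeed, if $N>\alpha$, then
$$
p^+<\frac{N}{N-\alpha}\Longleftrightarrow N(p^+-1)<\alpha p^+\Longleftrightarrow N-\frac{\alpha p^+}{p^+-1}<0,
$$
while if $N\leq \alpha$, then $N-\frac{\alpha p^+}{p^+-1}<N-\alpha\leq 0$ since $\frac{p^+}{p^+-1}>1$; in both cases, $N-\frac{2p^+}{p^+-1}<N-\frac{\alpha p^+}{p^+-1}<0$ as well, because $\alpha<2$.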
Finally, passing to the infimum limit as $R\to \infty$ in the above inequality, using Fatou's Lemma and \eqref{cdnonexist1}, one obtains
$$
\varrho_{p(\cdotp)}(u)=\int_{\mathbb{R}^N}|u(x)|^{p(x)}\,dx=0,
$$
which implies by \eqref{esve} that $\|u\|_{p(\cdotp)}=0$, i.e. $u$ is the trivial solution. This completes the proof of Theorem \ref{theorem1}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{theorem2}]
Let
$$
(u,v) \in (L^{2}(\mathbb{R}^N)\cap L^{2p(\cdotp)}(\mathbb{R}^N))\times (L^{2}(\mathbb{R}^N)\cap L^{2q(\cdotp)}(\mathbb{R}^N))
$$
be a weak solution to \eqref{2}. By \eqref{weaksystem1} and \eqref{weaksystem2}, for all $\varphi\in H^2(\mathbb{R}^N)$, $\varphi\geq 0$, one has
\begin{equation}\label{weaksystem1P}
Y\leq \int_{\mathbb{R}^N}|u(x)| \left|(-\Delta)^{\frac{\alpha}{2}}\varphi(x)\right|\,dx+|\lambda| \int_{\mathbb{R}^N}|u(x)||\Delta\varphi(x)|\,dx
\end{equation}
and
\begin{equation}\label{weaksystem2P}
X \leq \int_{\mathbb{R}^N}|v(x)| \left|(-\Delta)^{\frac{\beta}{2}}\varphi(x)\right|\,dx+|\mu| \int_{\mathbb{R}^N}|v(x)||\Delta\varphi(x)|\,dx,
\end{equation}
where
$$
X:=\int_{\mathbb{R}^N}|u(x)|^{p(x)}\varphi(x)\,dx\quad\mbox{and}\quad Y:=\int_{\mathbb{R}^N}|v(x)|^{q(x)}\varphi(x)\,dx.
$$
On the other hand, using H\"older's inequality, one obtains
\begin{eqnarray*}
\int_{\mathbb{R}^N}|u(x)| \left|(-\Delta)^{\frac{\alpha}{2}}\varphi(x)\right|\,dx&=&\int_{\{x\in \mathbb{R}^N:\, |u(x)|<1\}}|u(x)|\left|(-\Delta)^{\frac{\alpha}{2}}\varphi(x)\right|\,dx+ \int_{\{x\in \mathbb{R}^N:\, |u(x)|\geq 1\}}|u(x)|\left|(-\Delta)^{\frac{\alpha}{2}}\varphi(x)\right|\,dx\\
&\leq & \left(\int_{\{x\in \mathbb{R}^N:\, |u(x)|<1\}}|u(x)|^{p^+}\varphi(x)\,dx\right)^{\frac{1}{p^+}}\left(\int_{\{x\in \mathbb{R}^N:\, |u(x)|<1\}}\varphi(x)^{-\frac{1}{p^+-1}}\left|(-\Delta)^{\frac{\alpha}{2}}\varphi(x)\right|^{\frac{p^+}{p^+-1}}\,dx\right)^{\frac{p^+-1}{p^+}}\\
&&+ \left(\int_{\{x\in \mathbb{R}^N:\, |u(x)|\geq 1\}}|u(x)|^{p^-}\varphi(x)\,dx\right)^{\frac{1}{p^-}}\left(\int_{\{x\in \mathbb{R}^N:\, |u(x)|\geq 1\}}\varphi(x)^{-\frac{1}{p^--1}}\left|(-\Delta)^{\frac{\alpha}{2}}\varphi(x)\right|^{\frac{p^-}{p^--1}}\,dx\right)^{\frac{p^--1}{p^-}}\\
&\leq & X^{\frac{1}{p^+}} \left(\int_{\mathbb{R}^N}\varphi(x)^{-\frac{1}{p^+-1}}\left|(-\Delta)^{\frac{\alpha}{2}}\varphi(x)\right|^{\frac{p^+}{p^+-1}}\,dx\right)^{\frac{p^+-1}{p^+}}+X^{\frac{1}{p^-}}
\left(\int_{\mathbb{R}^N}\varphi(x)^{-\frac{1}{p^--1}}\left|(-\Delta)^{\frac{\alpha}{2}}\varphi(x)\right|^{\frac{p^-}{p^--1}}\,dx\right)^{\frac{p^--1}{p^-}},
\end{eqnarray*}
i.e.
\begin{equation}\label{aya1}
\int_{\mathbb{R}^N}|u(x)| \left|(-\Delta)^{\frac{\alpha}{2}}\varphi(x)\right|\,dx \leq [F(\alpha,p^+,\varphi)] ^{\frac{p^+-1}{p^+}} X^{\frac{1}{p^+}} + [F(\alpha,p^-,\varphi)] ^{\frac{p^--1}{p^-}} X^{\frac{1}{p^-}},
\end{equation}
where
$$
F(\kappa,r,\psi):=\int_{\mathbb{R}^N}\psi(x)^{-\frac{1}{r-1}}\left|(-\Delta)^{\frac{\kappa}{2}}\psi(x)\right|^{\frac{r}{r-1}}\,dx,
$$
for all $\kappa\in (0,2]$, $r>1$ and $\psi\in H^2(\mathbb{R}^N)$, $\psi\geq 0$. Similarly, one obtains
\begin{equation}\label{aya2}
\int_{\mathbb{R}^N}|u(x)| |\Delta\varphi(x)|\,dx \leq [F(2,p^+,\varphi)] ^{\frac{p^+-1}{p^+}} X^{\frac{1}{p^+}} + [F(2,p^-,\varphi)] ^{\frac{p^--1}{p^-}} X^{\frac{1}{p^-}},
\end{equation}
\begin{equation}\label{aya3}
\int_{\mathbb{R}^N}|v(x)| \left|(-\Delta)^{\frac{\beta}{2}}\varphi(x)\right|\,dx \leq [F(\beta,q^+,\varphi)] ^{\frac{q^+-1}{q^+}} Y^{\frac{1}{q^+}} + [F(\beta,q^-,\varphi)] ^{\frac{q^--1}{q^-}} Y^{\frac{1}{q^-}},
\end{equation}
and
\begin{equation}\label{aya4}
\int_{\mathbb{R}^N}|v(x)| |\Delta\varphi(x)|\,dx \leq [F(2,q^+,\varphi)] ^{\frac{q^+-1}{q^+}} Y^{\frac{1}{q^+}} + [F(2,q^-,\varphi)] ^{\frac{q^--1}{q^-}} Y^{\frac{1}{q^-}}.
\end{equation}
Next, it follows from \eqref{weaksystem1P}--\eqref{aya4} that
\begin{eqnarray}\label{NestI}
\left\{\begin{array}{lll}
X &\leq & A(\varphi) Y^{\frac{1}{q^+}}+B(\varphi)Y^{\frac{1}{q^-}},\\
Y &\leq & \overline{A(\varphi)} X^{\frac{1}{p^+}}+\overline{B(\varphi)}X^{\frac{1}{p^-}},
\end{array}
\right.
\end{eqnarray}
where
$$
A(\varphi):=[F(\beta,q^+,\varphi)] ^{\frac{q^+-1}{q^+}}+|\mu|[F(2,q^+,\varphi)] ^{\frac{q^+-1}{q^+}},\quad B(\varphi):=[F(\beta,q^-,\varphi)] ^{\frac{q^--1}{q^-}}+|\mu|[F(2,q^-,\varphi)] ^{\frac{q^--1}{q^-}}
$$
and
$$
\overline{A(\varphi)}:=[F(\alpha,p^+,\varphi)] ^{\frac{p^+-1}{p^+}}+|\lambda|[F(2,p^+,\varphi)] ^{\frac{p^+-1}{p^+}},\quad \overline{B(\varphi)}:=[F(\alpha,p^-,\varphi)] ^{\frac{p^--1}{p^-}}+|\lambda|[F(2,p^-,\varphi)] ^{\frac{p^--1}{p^-}}.
$$
Thanks to the inequality
$$
(a+b)^m \leq 2^m (a^m+b^m),\quad a,b\geq 0,\, m>0,
$$
from \eqref{NestI}, one deduces that
$$
Y^r\leq C \left(\overline{A(\varphi)}^rX^{\frac{r}{p^+}}+\overline{B(\varphi)}^rX^{\frac{r}{p^-}}\right),\quad r^{-1}\in \{q^+,q^-\},
$$
which yields
\begin{equation}\label{esXGG}
X \leq C\left( A(\varphi) \overline{A(\varphi)}^{\frac{1}{q^+}}X^{\frac{1}{q^+p^+}}+A(\varphi)\overline{B(\varphi)}^{\frac{1}{q^+}}X^{\frac{1}{q^+p^-}}+
B(\varphi)\overline{A(\varphi)}^{\frac{1}{q^-}} X^{\frac{1}{q^-p^+}}+B(\varphi)\overline{B(\varphi)}^{\frac{1}{q^-}}X^{\frac{1}{q^-p^-}}\right).
\end{equation}
On the other hand, using $\varepsilon$-Young inequality with $0<\varepsilon\ll 1$, one obtains
\begin{eqnarray}
A(\varphi) \overline{A(\varphi)}^{\frac{1}{q^+}}X^{\frac{1}{q^+p^+}}\leq \varepsilon X +C \left[A(\varphi) \overline{A(\varphi)}^{\frac{1}{q^+}}\right]^{\frac{q^+p^+}{q^+p^+-1}},\\
A(\varphi)\overline{B(\varphi)}^{\frac{1}{q^+}}X^{\frac{1}{q^+p^-}}\leq \varepsilon X +C \left[A(\varphi) \overline{B(\varphi)}^{\frac{1}{q^+}}\right]^{\frac{q^+p^-}{q^+p^--1}},\\
B(\varphi)\overline{A(\varphi)}^{\frac{1}{q^-}} X^{\frac{1}{q^-p^+}}\leq \varepsilon X +C \left[B(\varphi) \overline{A(\varphi)}^{\frac{1}{q^-}}\right]^{\frac{q^-p^+}{q^-p^+-1}},\\
B(\varphi)\overline{B(\varphi)}^{\frac{1}{q^-}} X^{\frac{1}{q^-p^-}}\leq \varepsilon X +C \left[B(\varphi) \overline{B(\varphi)}^{\frac{1}{q^-}}\right]^{\frac{q^-p^-}{q^-p^--1}}.\label{IneqV}
\end{eqnarray}
Therefore, it follows from \eqref{esXGG}--\eqref{IneqV} that
\begin{equation}\label{XESG}
X\leq C\left(\left[A(\varphi) \overline{A(\varphi)}^{\frac{1}{q^+}}\right]^{\frac{q^+p^+}{q^+p^+-1}}+\left[A(\varphi) \overline{B(\varphi)}^{\frac{1}{q^+}}\right]^{\frac{q^+p^-}{q^+p^--1}}+\left[B(\varphi) \overline{A(\varphi)}^{\frac{1}{q^-}}\right]^{\frac{q^-p^+}{q^-p^+-1}}+\left[B(\varphi) \overline{B(\varphi)}^{\frac{1}{q^-}}\right]^{\frac{q^-p^-}{q^-p^--1}}\right).
\end{equation}
Similarly, one obtains
\begin{equation}\label{YESG}
Y\leq C\left(\left[\overline{A(\varphi)} A(\varphi)^{\frac{1}{p^+}}\right]^{\frac{q^+p^+}{q^+p^+-1}}+\left[\overline{A(\varphi)} B(\varphi)^{\frac{1}{p^+}}\right]^{\frac{p^+q^-}{p^+q^--1}}+\left[\overline{B(\varphi)}A(\varphi)^{\frac{1}{p^-}}\right]^{\frac{p^-q^+}{p^-q^+-1}}+\left[\overline{B(\varphi)}B(\varphi)^{\frac{1}{p^-}}\right]^{\frac{q^-p^-}{q^-p^--1}}\right).
\end{equation}
For $R>1$, we take
$$
\varphi(x)=\theta\left(\frac{x}{R}\right),\quad x\in \mathbb{R}^N,
$$
where $\theta$ is the function defined by \eqref{testfOK} with $\rho=N+\min\{\alpha,\beta\}$. Next, we have to estimate the terms $A(\varphi)$, $B(\varphi)$, $\overline{A(\varphi)}$ and $\overline{B(\varphi)}$. Similarly to \eqref{sim1} and \eqref{sim2}, one has
$$
F(\beta,q^+,\varphi)\leq C R^{N-\frac{\beta q^+}{q^+-1}}\quad\mbox{and}\quad
F(2,q^+,\varphi)\leq C R^{N-\frac{2 q^+}{q^+-1}}.
$$
Using the above estimates, one deduces that
\begin{equation}\label{esAphi}
A(\varphi)\leq C R^{N\left(\frac{q^+-1}{q^+}\right)-\beta}.
\end{equation}
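Here we also used that $R^{N-\frac{2 q^+}{q^+-1}}\leq R^{N-\frac{\beta q^+}{q^+-1}}$ for $R>1$, since $\beta<2$, so the contribution of $F(2,q^+,\varphi)$ is absorbed into that of $F(\beta,q^+,\varphi)$.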
Similarly, one obtains
\begin{equation}\label{esBphi}
B(\varphi)\leq C R^{N\left(\frac{q^--1}{q^-}\right)-\beta},
\end{equation}
\begin{equation}\label{esAphibar}
\overline{A(\varphi)}\leq C R^{N\left(\frac{p^+-1}{p^+}\right)-\alpha}\end{equation}
and
\begin{equation}\label{esBphibar}
\overline{B(\varphi)}\leq C R^{N\left(\frac{p^--1}{p^-}\right)-\alpha}.
\end{equation}
Therefore, using \eqref{XESG} and the estimates \eqref{esAphi}--\eqref{esBphibar}, one deduces that
\begin{equation}\label{FestCompX}
X\leq C \sum_{i=1}^4 R^{\sigma_i},
\end{equation}
where
$$
\sigma_1:= \left(\frac{q^+p^+}{q^+p^+-1}\right)\left[N\left(\frac{p^+q^+-1}{p^+q^+}\right)-\beta-\frac{\alpha}{q^+}\right]:=\left(\frac{q^+p^+}{q^+p^+-1}\right) \overline{\sigma_1},
$$
$$
\sigma_2 := \left(\frac{q^+p^-}{q^+p^--1}\right) \left[N\left(\frac{p^-q^+-1}{p^-q^+}\right)-\beta-\frac{\alpha}{q^+}\right]:=\left(\frac{q^+p^-}{q^+p^--1}\right) \overline{\sigma_2},
$$
$$
\sigma_3:= \left(\frac{q^-p^+}{q^-p^+-1}\right) \left[N\left(\frac{p^+q^--1}{p^+q^-}\right)-\beta-\frac{\alpha}{q^-}\right]:=\left(\frac{q^-p^+}{q^-p^+-1}\right) \overline{\sigma_3}
$$
and
$$
\sigma_4:= \left(\frac{q^-p^-}{q^-p^--1}\right) \left[N\left(\frac{p^-q^--1}{p^-q^-}\right)-\beta-\frac{\alpha}{q^-}\right]:=\left(\frac{q^-p^-}{q^-p^--1}\right) \overline{\sigma_4}.
$$
One observes easily that
\begin{equation}\label{sigma1}
\overline{\sigma_1}=\max\{\overline{\sigma_i}:\, i=1,2,3,4\}.
\end{equation}
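This follows from the fact that, for fixed $N$, $\alpha$, $\beta$, the function $(r,s)\mapsto N\left(\frac{rs-1}{rs}\right)-\beta-\frac{\alpha}{s}$ is increasing in each variable on $(1,\infty)^2$, since
$$
\frac{\partial}{\partial r}\left[N\left(\frac{rs-1}{rs}\right)-\beta-\frac{\alpha}{s}\right]=\frac{N}{r^2s}>0
\quad\mbox{and}\quad
\frac{\partial}{\partial s}\left[N\left(\frac{rs-1}{rs}\right)-\beta-\frac{\alpha}{s}\right]=\frac{1}{s^2}\left(\frac{N}{r}+\alpha\right)>0,
$$
while $p^-\leq p^+$ and $q^-\leq q^+$.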
Similarly, using \eqref{YESG} and the estimates \eqref{esAphi}--\eqref{esBphibar}, one deduces that
\begin{equation}\label{secondCompY}
Y\leq C \sum_{i=1}^4 R^{\nu_i},
\end{equation}
where
$$
\nu_1:= \left(\frac{q^+p^+}{q^+p^+-1}\right)\left[N\left(\frac{p^+q^+-1}{p^+q^+}\right)-\alpha-\frac{\beta}{p^+}\right]:=\left(\frac{q^+p^+}{q^+p^+-1}\right)\overline{\nu_1},
$$
$$
\nu_2:= \left(\frac{p^+q^-}{p^+q^--1}\right) \left[N\left(\frac{q^-p^+-1}{q^-p^+}\right)-\alpha-\frac{\beta}{p^+}\right]:=\left(\frac{p^+q^-}{p^+q^--1}\right) \overline{\nu_2},
$$
$$
\nu_3:= \left(\frac{p^-q^+}{p^-q^+-1}\right) \left[N\left(\frac{q^+p^--1}{q^+p^-}\right)-\alpha-\frac{\beta}{p^-}\right]:=\left(\frac{p^-q^+}{p^-q^+-1}\right)\overline{\nu_3}
$$
and
$$
\nu_4:= \left(\frac{p^-q^-}{p^-q^--1}\right) \left[N\left(\frac{q^-p^--1}{q^-p^-}\right)-\alpha-\frac{\beta}{p^-}\right]:=\left(\frac{p^-q^-}{p^-q^--1}\right) \overline{\nu_4}.
$$
Moreover, one has
\begin{equation}\label{nu1}
\overline{\nu_1}=\max\{\overline{\nu_i}:\, i=1,2,3,4\}.
\end{equation}
Note that condition \eqref{condsyst} is equivalent to
$$
\overline{\sigma_1}<0\quad\mbox{or}\quad\overline{\nu_1}<0.
$$
If $ \overline{\sigma_1}<0$, passing to the infimum limit as $R\to \infty$ in
\eqref{FestCompX}, using \eqref{sigma1} and Fatou's Lemma, one deduces that
$$
\varrho_{p(\cdotp)}(u)=\int_{\mathbb{R}^N}|u(x)|^{p(x)}\,dx=0,
$$
which implies by \eqref{esve} that $\|u\|_{p(\cdotp)}=0$, i.e. $u\equiv 0$.
Then, by \eqref{weaksystem1P}, one obtains $Y=0$, which yields $v\equiv 0$. Therefore, $(u,v)\equiv (0,0)$. Similarly, if $ \overline{\nu_1}<0$, passing to the infimum limit as $R\to \infty$ in \eqref{secondCompY},
using \eqref{nu1} and Fatou's Lemma, one deduces that
$$
\varrho_{q(\cdotp)}(v)=\int_{\mathbb{R}^N}|v(x)|^{q(x)}\,dx=0,
$$
which implies that $v\equiv 0$. Then, by \eqref{weaksystem2P}, one obtains $X=0$, which yields $u\equiv 0$. Therefore, $(u,v)\equiv (0,0)$.
Hence, we established that under condition \eqref{condsyst}, the only weak solution to \eqref{2} is $(u,v)\equiv (0,0)$. This completes the proof of Theorem \ref{theorem2}.
\end{proof}
\section*{Acknowledgments}
The third author extends his appreciation to the Deanship of Scientific Research at King Saud University, Saudi Arabia, for funding this work through research group no. RGP-237.
\section{Introduction}\label{intro}
The classical \lq \lq no hair\rq \rq theorem stated that all
information about the collapsing body was lost from the outside
region except the three conserved quantities: the mass, the
angular momentum, and the electric charge. This loss of
information was not a serious problem in the classical theory, because the information could be thought of as preserved inside the black hole, just not very accessible. However, the situation changes once quantum effects are taken into account: black holes can shrink and eventually evaporate away completely by emitting a thermal quantum spectrum \cite{one,two}.
Since radiation with a pure thermal spectrum can carry no
information, the information carried by a physical system falling toward the black hole singularity cannot be recovered after the black hole has disappeared completely. This is the so-called \lq
\lq information loss paradox\rq \rq \cite{three,four}. It means
that pure quantum states (the original matter that forms the black
hole) can evolve into mixed states (the thermal spectrum at
infinity). This type of evolution violates the fundamental
principles of quantum theory which prescribe a unitary time
evolution of basis states.
The information paradox can perhaps be ascribed to the
semi-classical nature of the investigations of Hawking radiation.
However, researches in string theory indeed support the idea that
Hawking radiation can be described within a manifestly unitary
theory, but it still remains a mystery regarding the recovery of
information. Although a complete resolution of the information loss
paradox might be within a unitary theory of quantum gravity or
string/M-theory, Hawking argued that the information could come out
if the outgoing radiation were not pure thermal but had subtle
corrections \cite{four}.
Besides, there is some degree of mystery in the mechanism of black
hole radiation. In the original derivation of black hole
evaporation, Hawking described the thermal radiation as a quantum
tunneling process created by vacuum fluctuations near the event
horizon \cite{five}. But in this theory the mechanism by which the tunneling barrier is created remains unclear. The related references do not use the language of quantum tunneling to discuss Hawking radiation, and hence theirs is not truly a quantum tunneling method. In order to derive the radiant spectrum from the black hole horizon, one must resolve two difficulties: first, how the potential barrier is formed; and second, how the coordinate singularity is eliminated.
Recently, Kraus and Wilczek
\cite{six,seven,eight,nine,ten,eleven,twelve} did the pioneer work
for a program that implemented Hawking radiation as a tunneling
process. Parikh and Wilczek \cite{thirteen,fourteen,fifteen}
developed the program by carrying out a dynamical treatment of
black hole radiance in the static spherically symmetric black hole
geometries. More specifically, they took into account the effects
of a positive energy matter shell propagating outwards through the
horizon of the Schwarzschild and Reissner-Nordstr\"om black holes,
and incorporated the self-gravitation correction of the radiation.
In particular, they considered the energy conservation and allowed
the background geometry to fluctuate in their dynamical
description of the black hole background. In this model, they
allowed the black hole to lose mass while radiating, but
maintained a constant energy for the total system (consisting of
the black hole and the surrounding). The self-gravitation action
among the particles creates the tunneling barrier with turning
points at the location of the black hole horizon before and after
the particle with energy emission. The radiation spectrum that
they derived for the Schwarzschild and Reissner-Nordstr\"om black
holes gives a leading-order correction to the emission rate
arising from loss of mass of the black hole, which corresponds to
the energy carried by the radiated quantum. This result shows that the radiant spectrum of the black hole is not purely thermal once energy conservation and the unfixed spacetime background are taken into account. This may be a correct amendment to the Hawking radiation spectrum.
In addition to the consideration of the energy conservation and the
particle's self-gravitation, a crucial point in the analysis of
Kraus-Parikh-Wilczek (KPW) is to introduce a coordinate system that
is well-behaved at the event horizon in order to calculate the
emission probability. In this regard, the so-called \lq \lq
Painlev\'e-Gullstrand coordinates\rq \rq rediscovered by Kraus and
Wilczek \cite{sixteen} are not only time independent and regular at the horizon, but also manifestly asymmetric under time reversal; that is, the coordinates are stationary but not static. Following this method, many authors
\cite{seventeen,eighteen,nineteen,twenty,twenty one,twenty
two,twenty three,twenty four,twenty five,twenty six,twenty
seven,twenty eight,twenty nine,thirty} investigated Hawking
radiation as tunneling from various spherically symmetric black
holes, and the derived results strongly support KPW's picture. However, all these studies are limited to spherically symmetric black holes, and most of them are confined to investigating the tunneling effect of uncharged massless particles. There are some attempts to extend this model to the case
of the stationary axisymmetric geometries \cite{thirty one,thirty
two,thirty three,thirty four,thirty five,thirty six,thirty
seven,thirty eight,thirty nine}. Recently, following KPW's approach,
some authors investigated the massive charged particles' tunneling
from the static spherically symmetric \cite{fourty,fourty one,fourty
two,fourty three,fourty four} as well as stationary axisymmetric
(e.g., Kerr-Newman black hole \cite{fourty five,fourty six})
geometries. They all obtained satisfying results.
In this paper we apply KPW's method to a more general spacetime. We calculate the emission rate of a charged massive particle from stationary axisymmetric Kerr-Newman black hole spacetime in the de Sitter universe endowed with NUT (magnetic mass) and magnetic monopole parameters. The
metric of the spacetime can be written as
\begin{eqnarray}
\textrm{d}s^2&=&\frac{\Sigma}{\Delta_\theta}\textrm{d}\theta^2
+\frac{\Sigma}{
\Delta_r}\textrm{d}r^2-\frac{\Delta_r}{\Sigma}\left(\textrm{d}t_{HNK}-\frac{
\mathcal A}{\chi}\textrm{d}\varphi\right)^2\nonumber\\
&&+\frac{\Delta_\theta\,\textrm{sin}^2\theta}{
\Sigma}\left(a\,\textrm{d}t_{HNK}-\frac{\rho}{\chi}\textrm{d}\varphi
\right)^2\label{eq1},
\end{eqnarray}
where
\begin{eqnarray}
\Sigma&=&r^2+(n+a\,
\textrm{cos}\theta)^2,\hspace{0.5cm}\Delta_\theta=1+\frac{a^2}{\ell^2
}\,\textrm{cos}^2\theta,\hspace{0.5cm}\ell^2=\frac{3}{\Lambda},\nonumber\\
\Delta_r&=&\rho\left[1-\frac{1}{\ell^2}\,(r^2+5\,n^2)\right]-2\,(M\,r+n^2)
+Q^2+P^2,\nonumber\\
\rho&=&r^2+a^2+n^2,\hspace{0.5cm}\chi=1+\frac{a^2}{\ell^2},\hspace{0.5cm}\mathcal A=a\,\textrm{sin}^2\theta-2\,n\,\textrm{cos}\theta,\label{eq2}
\end{eqnarray}
$t_{HNK}$ being the coordinate time of the spacetime. Beside the
cosmological parameter $\Lambda$, the metric (\ref{eq1}) possesses
five parameters: $M$ the mass parameter, $a$ the angular momentum
per unit mass parameter, $n$ the NUT (magnetic mass) parameter, $Q$
the electric charge parameter, and $P$ the magnetic monopole
parameter. The metric (\ref{eq1}) solves the Einstein-Maxwell field
equations with an electromagnetic vector potential
\begin{equation}
A=-\frac{Q\,r}{\sqrt{\Sigma\,\Delta_r}}\,e^0-\frac{P\textrm{
cos}\theta}
{\sqrt{\Sigma\,\Delta_\theta}\,\textrm{sin}\theta}\,e^3\label{eq3},
\end{equation}
and an associated field strength tensor given by
\begin{eqnarray}
F&=&-\frac{1}{\Sigma^2}\left[Q\,(r^2-a^2\textrm{cos}^2\theta)
+2P\,r\,a\,\textrm{cos}\theta\right]e^0\wedge e^1\nonumber\\
&&+\frac{1}{\Sigma^2}\left[P\,(r^2-a^2\textrm{cos}^2\theta)
-2Q\,r\,a\,\textrm{cos}\theta\right]e^2\wedge e^3,\label{eq4}
\end{eqnarray}
where we have defined the vierbein field
\begin{eqnarray}
e^0&=&\sqrt{\frac{\Delta_r}{\Sigma}}\left(\textrm{d}t_{HNK}-\frac{
\mathcal A}{\chi}\textrm{d}\varphi\right),\hspace{0.5cm}e^1=\sqrt{\frac{
\Sigma}{\Delta_r}}\,\textrm{d}r,\nonumber\\
e^2&=&\sqrt{\frac{\Sigma}{\Delta_\theta}}\,\textrm{d}\theta,
\hspace{0.5cm}e^3
=\sqrt{\frac{\Delta_\theta}{\Sigma}}\,\textrm{sin}\theta\left(a\,
\textrm{d}t_{HNK}-\frac{\rho}{\chi
}\textrm{d}\varphi\right)\label{eq5}.
\end{eqnarray}
The metric (\ref{eq1}) describes the NUT-Kerr-Newman-Kasuya-de
Sitter spacetime. Since the de Sitter spacetime has been interpreted
as being hot \cite{fourty seven}, we call the spacetime a
Hot-NUT-Kerr-Newman-Kasuya (H-NUT-KN-K, for brevity) spacetime.
There is a renewed interest in the cosmological parameter as it is
found to be present in the inflationary scenario of the early
universe. In this scenario the universe undergoes a stage where it
is geometrically similar to de Sitter space \cite{fourty eight}.
Among other things inflation has led to the cold dark matter. If the
cold dark matter theory proves correct, it would shed light on the
unification of forces \cite{fourty nine,fifty}. The monopole
hypothesis was propounded by Dirac relatively long ago. The
ingenious suggestion by Dirac that the magnetic monopole exists was neglected due to the failure to detect such an object. However, in
recent years, the development of gauge theories has shed new light
on it.
The H-NUT-KN-K spacetime includes, among others, the physically
interesting black hole spacetimes as well as the NUT spacetime which
is sometimes considered as unphysical. The curious properties of the
NUT spacetime induced Misner \cite{fifty one} to consider it \lq \lq
as a counter example to almost anything\rq \rq . This spacetime
plays a significant role in exhibiting the type of effects that can
arise in strong gravitational fields.
If we set $\ell\rightarrow\infty,\,a=0,\,Q=0=P$ in Eq.(\ref{eq1}),
it then results the NUT metric which is singular along the axis of
symmetry $\theta=0$ and $\theta=\pi$. Because of the axial
singularities the metric admits different physical interpretations.
Misner \cite{fifty two} introduced a periodic time coordinate to
remove the singularity, but this makes the metric an uninteresting
particle-like solution. To avoid a periodic time coordinate, Bonnor
\cite{fifty three} removed the singularity at $\theta=0$ and related
the singularity at $\theta=\pi$ to a semiinfinite massless source of
angular momentum along the axis of symmetry. This is analogous to
representing the magnetic monopole in electromagnetic theory by
semiinfinite solenoid \cite{fifty four}. The singularity along
$z$-axis is analogous to the Dirac string.
McGuire and Ruffini \cite{fifty five} suggested that the spaces
endowed with the NUT parameter should never be directly physically
interpreted. To make a physically reasonable solution Ahmed
\cite{fifty six} used Bonnor's interpretation of the NUT parameter,
i.e., the NUT parameter $n$ is due to the strength of the physical
singularity on $\theta=\pi$, and further considered that $n=a$. That means the angular momentum of the mass $M$ and the angular momentum of the massless rod coalesce; in this case, the metric (\ref{eq1}) gives a new black hole solution which offers a way to address an outstanding problem of thermodynamics and black hole physics.
In view of all the above considerations the work of this paper is
interesting. Since we are investigating charged particles' tunneling
from the charged H-NUT-KN-K spacetime, not only energy conservation but also electric charge conservation should be considered. In particular, two significant points of this paper are
as follows. The first is that we need to find the equation of motion
of a charged massive tunneling particle. We can treat the massive
charged particle as a de Broglie wave, and then its equation of
motion can be obtained by calculating the phase velocity of the de
Broglie wave corresponding to the outgoing particle. Secondly, we
should also consider the effect of the electromagnetic field outside
the H-NUT-KN-K spacetime when a charged particle tunnels out. The
Lagrangian function of the electromagnetic field corresponding to
the generalized coordinates described by $A_\mu$ is
$-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}$. But these are ignorable
coordinates in the dragged coordinate system. To eliminate the freedoms
corresponding to these coordinates, we modify the Lagrangian
function. Using WKB method we then derive the emission rate for a
charged massive particle with revised Lagrangian function.
We organize the paper as follows. In section \ref{sec:2} we
introduce the Painlev\'e-H-NUT-KN-K coordinate system, and obtain
the radial geodesic equation of a charged massive particle. In
section \ref{sec:3} we use KPW's tunneling framework to calculate
the emission spectrum. Finally, in section \ref{sec:4} we present
our concluding remarks. Throughout the paper, the geometrized units
($G=c=\hbar=1$) have been used.
\section{Painlev\'e-like Coordinate Transformation and the Radial Geodesics}
\label{sec:2} The null surface equation
${\textrm g}^{\mu\nu}\partial_\mu f\partial_\nu f=0$ gives
\begin{equation}
r^4+(a^2+6n^2-\ell^2)r^2+2M\ell^2r-\Xi=0\label{eq6},
\end{equation}
where
\begin{equation}
\Xi=\left\{(a^2-n^2+Q^2 +P^2)\ell^2-5n^2(a^2+n^2)\right\}\label{eq7}.
\end{equation}
Equation (\ref{eq6}) has four real roots: one is negative root $r_-$
and three roots $r_0,\,r_H,\,r_C$ corresponding to the inner, outer
(event) and cosmological horizon of the H-NUT-KN-K spacetime
respectively, can be given by \cite{fifty seven}
\begin{eqnarray}
&&r_0=-t_1+t_2+t_3,\nonumber\\
&&r_H=t_1-t_2+t_3,\nonumber\\
&&r_C=t_1+t_2-t_3\label{eq8},
\end{eqnarray}
where
\begin{eqnarray}
&&t_1=\left[\frac{1}{6}(\ell^2-6n^2-a^2)
+\frac{1}{6}\sqrt{\{(\ell^2-6n^2-a^2)^2-12\Xi\}}
\textrm{cos}\frac{\psi}{3}\right]^{\frac{1}{2}},\nonumber\\
&&t_2=\left[\frac{1}{6}(\ell^2-6n^2-a^2)
-\frac{1}{6}\sqrt{\{(\ell^2-6n^2-a^2)^2-12\Xi\}
}\textrm{cos}\frac{\psi+\pi}{3}
\right]^{\frac{1}{2}},\nonumber\\
&&t_3=\left[\frac{1}{6}(\ell^2-6n^2-a^2)
-\frac{1}{6}\sqrt{\{(\ell^2-6n^2-a^2)^2-12\Xi\}
}\textrm{cos}\frac{\psi-\pi}{3}
\right]^{\frac{1}{2}}\label{eq9},\\
&&\textrm{cos}\psi=-\frac{(\ell^2-6n^2-a^2)\{(\ell^2-6n^2-a^2)^2
+36\Xi\}-54M^2\ell^4
}{\{(\ell^2-6n^2-a^2)^2-12\Xi\}^{3/2}}\label{eq10},
\end{eqnarray}
under the conditions
\begin{eqnarray}
\{(\ell^2-6n^2-a^2)^2-12\Xi\}^3&>&\{(\ell^2-6n^2-a^2)^3
+36\Xi(\ell^2-6n^2-a^2)\nonumber\\
&&-54M^2\ell^4\}^2,\nonumber\\
(\ell^2-6n^2-a^2)&>&0\label{eq11},
\end{eqnarray}
$\Xi$ being given by Eq.(\ref{eq7}).
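Note that, since Eq.(\ref{eq6}) contains no cubic term, the four roots sum to zero, so the negative root is simply
$$
r_-=-(r_0+r_H+r_C)=-(t_1+t_2+t_3).
$$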
To investigate the tunneling process, we should adopt the dragged
coordinate system. The line element in the dragging coordinate
system is \cite{fifty seven}
\begin{equation}
\textrm{d}s^2=\hat{\textrm g}_{00}\textrm{d}t_d^2
+\frac{\Sigma}{\Delta_r}\textrm{d}r^2
+\frac{\Sigma}{\Delta_\theta}\textrm{d}\theta^2\label{eq12},
\end{equation}
where
\begin{equation}
\hat{\textrm g}_{00}={\textrm g}_{00}
-\frac{({\textrm g}_{03})^2}{{\textrm g}_{33}}
=-\frac{\Delta_\theta\Delta_r(\rho-a\mathcal A)^2\textrm{sin}^2\theta}
{\Sigma(\Delta_\theta\rho^2\textrm{sin}^2\theta
-\Delta_r\mathcal A^2)}\label{eq13}.
\end{equation}
In fact, the line element (\ref{eq12}) represents a three-dimensional hypersurface in the four-dimensional H-NUT-KN-K spacetime. The components of the electromagnetic potential in the dragged coordinate system can be given by
\begin{equation}
A^\prime_0=A_a(\partial_{t_d})^a=-\frac{Qr}
{\Sigma}\left(1-\frac{\mathcal A}{\chi}\Omega\right)
-\frac{P\textrm{cos}\theta}{\Sigma}
\left(a-\frac{\rho}{\chi}\Omega\right), \quad
A^\prime_1=0=A^\prime_2\label{eq14},
\end{equation}
where
\begin{equation}
(\partial_{t_d})^a=(\partial_{t_{HNK}})^a+
\Omega(\partial_\varphi)^a\label{eq15},
\end{equation}
$\Omega=-{\textrm g}_{03}/{\textrm g}_{33}$ being the dragged angular
velocity. The metric (\ref{eq12}) has a coordinate singularity at
the horizon, which is inconvenient for investigating the tunneling process across the horizon.
In order to eliminate the coordinate singularity from the metric
(\ref{eq12}), we perform general Painlev\'e coordinate
transformation \cite{fifty eight}
\begin{equation}
\textrm{d}t_d=\textrm{d}t+F(r,\,\theta)\textrm{d}r
+G(r,\,\theta)\textrm{d}\theta\label{eq16},
\end{equation}
which reduces the metric in the Painlev\'e-H-NUT-KN-K coordinate
system \cite{fifty seven}
\begin{eqnarray}
\textrm{d}s^2&=&\hat{{\textrm g}}_{00}\textrm{d}t^2
+\textrm{d}r^2\pm2\sqrt{\hat{\textrm g}_{00}(1-{\textrm g}_{11})}\,
\textrm{d}t\textrm{d}r+\left[\hat{\textrm g}_{00}\{G(r,\theta)\}^2
+{\textrm g}_{22}\right]\textrm{d}\theta^2\nonumber\\
&&+2\sqrt{\hat{\textrm g}_{00}(1-{\textrm g}_{11})}\,G(r,
\theta)\textrm{d}r\textrm{d}\theta+2\hat{\textrm g}_{00}G(r,
\theta)\textrm{d}t\textrm{d}\theta\label{eq17},
\end{eqnarray}
where $F(r,\,\theta)$ satisfies
\begin{equation}
{\textrm g}_{11}+\hat{{\textrm g}}_{00}\{F(r,
\theta)\}^2=1\label{eq18},
\end{equation}
and $G(r,\,\theta)$ is determined by
\begin{equation}
G(r, \theta)=\int\frac{\partial
F(r,\,\theta)}{\partial\theta}\textrm{d}r+C(\theta)\label{eq19},
\end{equation}
where $C(\theta)$ is an arbitrary analytic function of $\theta$. The
plus(minus) sign in (\ref{eq17}) denotes the spacetime line element
of the charged massive outgoing(ingoing) particles at the horizon.
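Explicitly, Eq.(\ref{eq18}) gives $F(r,\theta)=\pm\sqrt{(1-{\textrm g}_{11})/\hat{{\textrm g}}_{00}}$, and substituting (\ref{eq16}) into (\ref{eq12}) makes the structure of (\ref{eq17}) transparent: for $\textrm{d}\theta=0$,
$$
\hat{{\textrm g}}_{00}\left(\textrm{d}t+F\,\textrm{d}r\right)^2+{\textrm g}_{11}\textrm{d}r^2
=\hat{{\textrm g}}_{00}\textrm{d}t^2+2\hat{{\textrm g}}_{00}F\,\textrm{d}t\textrm{d}r+\textrm{d}r^2,
$$
where $\hat{{\textrm g}}_{00}F=\pm\sqrt{\hat{{\textrm g}}_{00}(1-{\textrm g}_{11})}$ depending on the branch chosen for $F$, and the coefficient of $\textrm{d}r^2$ equals $\hat{{\textrm g}}_{00}F^2+{\textrm g}_{11}=1$ by (\ref{eq18}).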
According to Landau's theory of coordinate clock synchronization \cite{fifty nine} in a spacetime decomposed in $(3+1)$ form, the coordinate time difference of two events that take place simultaneously in different locations is
\begin{equation}
\Delta T=-\int\frac{{\textrm g}_{0i}}{{\textrm g}_{00}}\textrm{d}x^i,\quad
(i=1,\,2,\,3)\label{eq20}.
\end{equation}
If the simultaneity of coordinate clocks can be transmitted from one
location to another and has nothing to do with the integration path,
then
\begin{equation}
\frac{\partial}{\partial
x^i}\left(-\frac{{\textrm g}_{0j}}{\hat{{\textrm g}}_{00}}\right)
=\frac{\partial}{\partial
x^j}\left(-\frac{{\textrm g}_{0i}}{\hat{{\textrm g}}_{00}}\right),\quad
(i,\,j=1,\,2,\,3)\label{eq21}.
\end{equation}
Condition (\ref{eq21}) with the metric (\ref{eq17}) gives
$\partial_\theta F(r,\,\theta)=\partial_r G(r,\,\theta)$, which is
the condition (\ref{eq19}). Thus the Painlev\'e-H-NUT-KN-K metric
(\ref{eq17}) satisfies the condition of coordinate clock
synchronization. Apart from that, the new line element has many
other attractive features: firstly, the metric is regular at the
horizons; secondly, the infinite red-shift surface and the horizons
are coincident with each other; thirdly, spacetime is stationary;
and fourthly, constant-time slices are just flat Euclidean space in the radial direction. All these characteristics provide favorable conditions for studying quantum tunneling radiation.
The component of the electromagnetic potential in the
Painlev\'e-H-NUT-KN-K coordinate system is
\begin{equation}
A_0=-\frac{Qr} {\Sigma}\left(1-\frac{\mathcal A}{\chi}\Omega\right)
-\frac{P\textrm{cos}\theta}{\Sigma}
\left(a-\frac{\rho}{\chi}\Omega\right), \quad
A_1=0=A_2\label{eq22},
\end{equation}
which on the event horizon becomes
\begin{equation}
\left.A_0\right|_{r_H}=-V_0=-\frac{Q\,r_H}{r_H^2+a^2+n^2}, \quad
A_1=0=A_2\label{eq23}.
\end{equation}
Now let us derive the geodesics for the charged massive particles.
Since the world-line of a massive charged quantum is not light-like,
it does not follow radial light-like geodesics when it tunnels
across the horizon. For the sake of simplicity, we consider the
outgoing massive charged particle as a massive charged shell (de
Broglie $s$-wave). According to the WKB formula, the approximate
wave function is
\begin{equation}
\phi(r,\,t)=N\textrm{exp}\left[\textrm{i}
\left(\int_{r_i-\varepsilon}^rp_r\textrm{d}r -\omega
t\right)\right]\label{eq24},
\end{equation}
where $r_i-\varepsilon$ denotes the initial location of the
particle. If
\begin{equation}
\int_{r_i-\varepsilon}^rp_r\textrm{d}r -\omega t=\phi_0\label{eq25},
\end{equation}
then, we have
\begin{equation}
\frac{\textrm{d}r}{\textrm{d}t}=\dot{r}=\frac{\omega}{k}\label{eq26},
\end{equation}
where $k$ is the de Broglie wave number. By definition, $\dot{r}$ in
(\ref{eq26}) is the phase velocity of the de Broglie wave. Unlike
the electromagnetic wave, the phase velocity $v_p$ of the de Broglie
wave is not equal to the group velocity $v_g$. They have the
following relationship
\begin{equation}
v_p=\frac{\textrm{d}r}{\textrm{d}t}=\frac{\omega}{k},\quad
v_g=\frac{\textrm{d}r_c}{\textrm{d}t}
=\frac{\textrm{d}\omega}{\textrm{d}k}=2v_p\label{eq27},
\end{equation}
where $r_c$ denotes the location of the tunneling particle. Since
tunneling across the barrier is an instantaneous process, there are
two events that take place simultaneously in different places during
the process. One is the particle tunneling into the barrier, and the
other is the particle tunneling out of the barrier. In terms of
Landau's condition of coordinate clock synchronization, the
coordinate time difference of these two simultaneous events is
\begin{equation}
\textrm{d}t=-\frac{{\textrm g}_{0i}}{{\textrm g}_{00}}\textrm{d}x^i
=-\frac{{\textrm g}_{01}}{{\textrm g}_{00}}\textrm{d}r_c,\quad
(\textrm{d}\theta=0=\textrm{d}\varphi)\label{eq28}.
\end{equation}
So the group velocity is
$$
v_g=\frac{\textrm{d}r_c}{\textrm{d}t}
=-\frac{{\textrm g}_{00}}{{\textrm g}_{01}},
$$
and therefore, using (\ref{eq17}), the phase velocity (the radial
geodesics) can be expressed as
\begin{equation}
\dot{r}=v_p=\frac{1}{2}v_g =\mp\frac{\Delta_r}{2}\left[\frac{
\Delta_\theta(\rho-a\mathcal A)^2\textrm{sin}^2\theta}
{\Sigma(\Sigma-\Delta_r)(\Delta_\theta\rho^2
\textrm{sin}^2\theta-\Delta_r\mathcal A^2)}\right]
^{\frac{1}{2}}\label{eq29},
\end{equation}
where the upper sign corresponds to the geodesic of the outgoing
particle near the event horizon, and the lower sign corresponds to
that of the ingoing particle near the cosmological horizon.
Moreover, if we take into account the self-gravitation of the
tunneling particle with energy $\omega$ and electric charge $q$,
then $M$ and $Q$ should be replaced with $M\mp\omega$ and $Q\mp q$
in (\ref{eq17}) and (\ref{eq29}), respectively, with the
upper(lower) sign corresponding to outgoing(ingoing) motion of the
particle.
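Two small steps above deserve to be spelled out (a brief sketch of
ours, in units $\hbar=1$). First, differentiating the
stationary-phase condition (\ref{eq25}) with respect to $t$ gives
$p_r\dot{r}-\omega=0$, and the de Broglie relation $p_r=k$ then
yields (\ref{eq26}). Second, assuming the nonrelativistic dispersion
relation $\omega=k^2/2m_0$ for a quantum of mass $m_0$, one finds
$$
v_p=\frac{\omega}{k}=\frac{k}{2m_0}, \qquad
v_g=\frac{\textrm{d}\omega}{\textrm{d}k}=\frac{k}{m_0}=2v_p,
$$
in accordance with (\ref{eq27}).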
In the subsequent section, we shall discuss Hawking radiation from
the event and cosmological horizons, and calculate the emission rate
from each horizon by the tunneling process. Since the overall picture of
tunneling radiation for the metric is very involved, we shall
consider for simplification the outgoing radiation from the event
horizon and ignore the incoming radiation from the cosmological
horizon, when we deal with the event horizon. In a similar manner,
we shall only consider the incoming radiation from the cosmological
horizon and ignore the outgoing radiation from the event horizon for
the moment when we deal with the cosmological horizon. Of course,
this assumption is reasonable as long as the two horizons are widely
separated from each other. The radius of the cosmological
horizon is very large due to a very small cosmological constant
$\Lambda$, while the event horizon considered here is relatively
very small because Hawking radiation can have an important effect
only for tiny black holes with typical energy of $1\sim10\,TeV$ in
the brane-world scenario. Hawking radiation is a quantum effect; it
can be neglected and may not be observable for an astronomical black
hole with a typical stellar mass of about $10M_\odot$.
\section{Tunneling Process of Massive Charged Particles
from H-NUT-KN-K Spacetime}\label{sec:3}
In the investigation of charged massive particles' tunneling, the
effect of the electromagnetic field outside the black hole should be
taken into consideration. So the matter-gravity system consists of
the black hole and the electromagnetic field outside the black hole.
The Lagrangian of the matter-gravity system can be written as
\begin{equation}
L=L_m+L_e\label{eq30},
\end{equation}
where $L_e=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}$ is the Lagrangian
function of the electromagnetic field corresponding to the
generalized coordinate described by $A_\mu=(A_t,\,0,\,0)$ in the
Painlev\'e-H-NUT-KN-K coordinate system. In the case of a charged
particle's tunneling out, the system transits from one state to
another. But the expression of $L_e$ tells us that
$A_\mu=(A_t,\,0,\,0)$ is an ignorable coordinate. Furthermore, in
the dragging coordinate system, the coordinate $\varphi$ does not
appear in the metric expressions (\ref{eq12}) and (\ref{eq17}). That
is, $\varphi$ is also an ignorable coordinate in the Lagrangian
function $L$. In order to eliminate these two degrees of freedom
completely, the action for the classically forbidden trajectory
should be written as
\begin{equation}
S=\int_{t_i}^{t_f}(L-p_{A_t}\dot{A}_t-
p_\varphi\dot{\varphi})\textrm{d}t\label{eq31},
\end{equation}
which is related to the tunneling rate of the emitted particle by
\begin{equation}
\Gamma\sim e^{-2\,\textrm{Im}\,S}\label{eq32}.
\end{equation}
The imaginary part of the action is
\begin{eqnarray}
\textrm{Im}\,S&=&\textrm{Im}\left\{\int_{r_i}^{r_f}
\left[p_r-\frac{\dot{A}_t}{\dot{r}}p_{A_t}
-\frac{\dot{\varphi}}{\dot{r}}p_\varphi\right]
\textrm{d}r\right\}\nonumber\\
&=&\textrm{Im}\left\{\int_{r_i}^{r_f}
\left[\int_{(0,\,0,\,0)}^{(p_r,\,p_{A_t},\,p_\varphi)}
\textrm{d}p^\prime_r-\frac{{\dot{A}_t}}{\dot{r}}
\textrm{d}p^\prime_{A_t}-\frac{\dot{\varphi}}{\dot{r}}
\textrm{d}p^\prime_\varphi\right] \textrm{d}r\right\}\label{eq33},
\end{eqnarray}
where $p_{A_t}$ and $p_\varphi$ are the canonical momenta conjugate
to $A_t$ and $\varphi$, respectively.
\subsection{Tunneling from the Event Horizon}\label{sec:3.1}
If the black hole is treated as a rotating sphere and the particle
self-gravitation is taken into account, one then finds
\begin{equation}
\dot{\varphi}=\Omega^\prime_H\label{eq34},
\end{equation}
and
\begin{equation}
J^\prime=(M-\omega^\prime)a=p^\prime_\varphi\label{eq35},
\end{equation}
where $\Omega^\prime_H$ is the dragged angular velocity of the event
horizon. The imaginary part of the action for the charged massive
particle can be written as
\begin{equation}
\textrm{Im}\,S=\textrm{Im}\left\{\int_{r_{Hi}}^{r_{Hf}}
\left[\int_{(0,\,0,\,J)}^{(p_r,\,p_{A_t},\,J-\omega a)}
\textrm{d}p^\prime_r-\frac{{\dot{A}_t}}{\dot{r}}
\textrm{d}p^\prime_{A_t}-\frac{\Omega^\prime_H}{\dot{r}}
\textrm{d}J^\prime_\varphi\right] \textrm{d}r\right\}\label{eq36},
\end{equation}
where $r_{Hi}$ and $r_{Hf}$ represent the locations of the event
horizon before and after the particle with energy $\omega$ and
charge $q$ tunnels out. We now eliminate the momentum in favor of
energy by applying Hamilton's canonical equations of motion
\begin{equation}
\dot{r}=\left.\frac{\textrm{d}H}{\textrm{d}p_r}\right|_{(r;\,
A_t,\,p_{A_t};\,\varphi,\,p_\varphi)}
=\frac{1}{\chi^2}\frac{\textrm{d}(M-\omega^\prime)}
{\textrm{d}p_r}=\frac{1}{\chi^2}\frac{\textrm{d}M^\prime}
{\textrm{d}p_r},\label{eq37}
\end{equation}
\begin{equation}
\dot{A}_t=\left.\frac{\textrm{d}H}{\textrm{d}p_{A_t}}\right|_{(A_t;\,
r,\,p_r;\,\varphi,\,p_\varphi)}
=\frac{1}{\chi}V^\prime_0\frac{\textrm{d}(Q-q^\prime)}
{\textrm{d}p_{A_t}}=\frac{1}{\chi^2}\frac{(Q-q^\prime)r_H}
{r_H^2+a^2+n^2}\frac{\textrm{d}(Q-q^\prime)}
{\textrm{d}p_{A_t}}\label{eq38},
\end{equation}
where $\frac{M}{\chi^2}$ is the total energy and
$\frac{Q}{\chi}$ is the total electric charge of the H-NUT-KN-K
spacetime, and we have treated the black hole as a charged
conducting sphere to derive (\ref{eq38}) \cite{sixty}.
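Note also that, since $M^\prime=M-\omega^\prime$ and
$Q^\prime=Q-q^\prime$, we have
$\textrm{d}M^\prime=-\textrm{d}\omega^\prime$ and
$\textrm{d}Q^\prime=-\textrm{d}q^\prime$, so that integrations over
the emitted energy and charge may equivalently be carried out over
$M^\prime$ and $Q^\prime$; this accounts for the limits of
integration appearing in (\ref{eq41}) below.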
Similarly to \cite{nine,ten}, it follows directly that a massive
charged particle tunneling across the event horizon also sees the
effective metric (\ref{eq17}), but with the replacements
$M\rightarrow M-\omega^\prime$ and $Q\rightarrow Q-q^\prime$. We
have to perform the same substitutions in Eqs.(\ref{eq8}),
(\ref{eq23}) and (\ref{eq29}). Equation (\ref{eq29}) then gives the
desired expression of $\dot{r}$ as a function of $\omega^\prime$ and
$q^\prime$. Equation (\ref{eq36}) can now be written explicitly as
follows:
\begin{eqnarray}
\textrm{Im}\,S&=&\textrm{Im}\int_{r_{Hi}}^{r_{Hf}} \left[\int
-\frac{1}{\chi^2}\frac{2\,\sqrt{\Sigma(\Sigma
-\Delta_r^\prime)(\Delta_\theta\rho^2
\textrm{sin}^2\theta-\Delta_r^\prime\mathcal A^2)}
}{\Delta_r^\prime\,\sqrt{
\Delta_\theta(\rho-a\mathcal A)^2\textrm{sin}^2\theta}}
\right.\nonumber\\
&&\times\left.\left(\textrm{d}M^\prime-\frac{Q^\prime r_H^\prime}
{{r_H^\prime}^2+a^2+n^2}\textrm{d}Q^\prime-\Omega^\prime_H
\textrm{d}J^\prime\right)\right] \textrm{d}r\label{eq39},
\end{eqnarray}
where
\begin{eqnarray}
\Delta_r^\prime&=&(r^2+a^2+n^2)\left[1-\frac{1}
{\ell^2}\,(r^2+5\,n^2)\right]-2\,(M^\prime\,r+n^2)
+{Q^\prime}^2+P^2\nonumber\\
&=&\frac{1}{\ell^2}(r-r^\prime_-)(r-r^\prime_0)
(r-r^\prime_H)(r-r^\prime_C)\label{eq40}.
\end{eqnarray}
The above integral can be evaluated by deforming the contour around
the single pole at $r=r^\prime_H$ so as to ensure that positive
energy solutions decay in time. In this way, we carry out the $r$
integral and obtain
\begin{eqnarray}
\textrm{Im}\,S&=&-\frac{1}{2}\int_{(\frac{M}{\chi^2},\,\frac{Q}
{\chi})}^{(\frac{M-\omega}{\chi^2},\,\frac{Q-q} {\chi})}
\frac{1}{\chi}\frac{4\pi\ell^2({r_H^\prime}^2+a^2+n^2)}{
(r^\prime_H-r^\prime_-)(r^\prime_H-r^\prime_0)
(r^\prime_C-r^\prime_H)}\nonumber\\
&&\times\left(\textrm{d}M^\prime-\frac{Q^\prime
r_H^\prime}{{r_H^\prime}^2+a^2+n^2}
\textrm{d}Q^\prime-\Omega^\prime_H \textrm{d}J^\prime\right)
\label{eq41}.
\end{eqnarray}
Completing this integration and using the entropy expression
$S_{EH}=\pi(r_H^2+a^2+n^2)/\chi$, we obtain
\begin{equation}
\textrm{Im}\,S=-\frac{1}{2}\Delta S_{EH}\label{eq42},
\end{equation}
where $\Delta S_{EH}=S^\prime_{EH}-S_{EH}$ is the difference of
Bekenstein-Hawking entropies of the H-NUT-KN-K spacetime before and
after the emission of the particle. In fact, if one bears in mind
that
\begin{equation}
T^\prime=\frac{(r^\prime_H-r^\prime_-)(r^\prime_H-r^\prime_0)
(r^\prime_C-r^\prime_H)}{4\pi\ell^2({r_H^\prime}^2+a^2+n^2)}\label{eq43},
\end{equation}
one can get
\begin{equation}
\frac{1}{T^\prime}(\textrm{d}M^\prime-V_0^\prime
\textrm{d}Q^\prime-\Omega^\prime_H
\textrm{d}J^\prime)=\textrm{d}S^\prime\label{eq44}.
\end{equation}
This means that (\ref{eq42}) is a natural consequence of the first law of
black hole thermodynamics. Therefore, the emission rate of the
tunneling particle is
\begin{equation}
\Gamma\sim e^{-2\,\textrm{Im}\,S}=e^{\Delta S_{EH}}\label{eq45}.
\end{equation}
Obviously, the emission spectrum (\ref{eq45}) deviates from the
thermal spectrum.
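Indeed, expanding $\Delta S_{EH}$ to first order in $\omega$ and $q$
(a heuristic expansion of ours, using the first law (\ref{eq44})
together with $\Delta J=-\omega a$ from (\ref{eq35})) gives
$$
\Gamma\sim e^{\Delta S_{EH}}\approx
\textrm{exp}\left[-\frac{\omega-V_0\,q-\Omega_H\,\omega a}{T}\right],
$$
with $T$ the horizon temperature of (\ref{eq43}); this is just the
familiar Boltzmann factor, and the terms of higher order in $\omega$
and $q$ encode the deviation from thermality.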
In quantum mechanics, the tunneling rate is obtained by
\begin{equation}
\Gamma(i\rightarrow f)\sim |a_{if}|^2\cdot\alpha_n\label{eq46},
\end{equation}
where $a_{if}$ is the amplitude for the tunneling action and
$\alpha_n=n_f/n_i$ is the phase space factor with $n_i$ and $n_f$
being the number of the initial and final states, respectively.
Since $S_j\sim \textrm{ln}\,n_j$, i.e., $n_j\sim e^{S_j}$,
$(j=i,\,f)$, then
\begin{equation}
\Gamma\sim \frac{e^{S_f}}{e^{S_i}}=e^{S_f-S_i}=e^{\Delta
S}\label{eq47}.
\end{equation}
Equation (\ref{eq47}) is consistent with our result obtained by
applying KPW's semi-classical quantum tunneling process. Hence
equation (\ref{eq45}) satisfies the underlying unitary theory in
quantum mechanics, and takes the same functional form as that of
uncharged massless particles \cite{fifty seven}.
\subsection{Tunneling at the Cosmological Horizon}\label{sec:3.2}
The particle tunnels into the cosmological horizon in a manner
different from its tunneling behavior at the event horizon. When the
particle with energy $\omega$ and charge $q$ tunnels into the
cosmological horizon, Eqs. (\ref{eq8}), (\ref{eq17}), (\ref{eq23})
and (\ref{eq29}) have to be modified by replacing $M$ with
$(M+\omega)$ and $Q$ with $(Q+q)$ after taking the self-gravitation
action into account. Thus, after the particle with energy $\omega$
and charge $q$ tunnels into the cosmological horizon, the radial
geodesic takes the form
\begin{equation}
\dot{r}=\frac{\Delta^{\prime\prime}_r}{2}\left[\frac{
\Delta_\theta(\rho-a\mathcal A)^2\textrm{sin}^2\theta}
{\Sigma(\Sigma-\Delta^{\prime\prime}_r)(\Delta_\theta\rho^2
\textrm{sin}^2\theta-\Delta^{\prime\prime}_r\mathcal A^2)}\right]
^{\frac{1}{2}}\label{eq48},
\end{equation}
where
\[
\Delta_r^{\prime\prime}=(r^2+a^2+n^2)\left[1-\frac{1}
{\ell^2}(r^2+5n^2)\right]-2\left\{(M+\omega)r+n^2\right\}
+(Q+q)^2+P^2.
\]
Different from the event horizon,
($-\frac{M}{\chi^2}$,\,$-\frac{Q}{\chi}$) and
($-\frac{M+\omega}{\chi^2}$,\,$-\frac{Q+q}{\chi}$) are,
respectively, the total mass and electric charge of the H-NUT-KN-K
spacetime before and after the particle with energy $\omega$ and
charge $q$ tunnels into. We treat the spacetime as a charged
conducting sphere.
The imaginary part of the action for the charged massive particle
incoming from the cosmological horizon can be expressed as follows:
\begin{eqnarray}
\textrm{Im}\,S&=&\textrm{Im}\int_{r_{Ci}}^{r_{Cf}} \left[\int
-\frac{1}{\chi^2}\frac{2\,\sqrt{\Sigma(\Sigma
-\tilde{\Delta}_r^{\prime\prime})(\Delta_\theta\rho^2
\textrm{sin}^2\theta-\tilde{\Delta}_r^{\prime\prime}\mathcal A^2)}
}{\tilde{\Delta}_r^{\prime\prime}\,\sqrt{
\Delta_\theta(\rho-a\mathcal A)^2\textrm{sin}^2\theta}}
\right.\nonumber\\
&&\times\left.\left(\textrm{d}M^\prime-\frac{Q^\prime r_C^\prime}
{{r_C^\prime}^2+a^2+n^2}\textrm{d}Q^\prime-\Omega^\prime_C
\textrm{d}J^\prime\right)\right] \textrm{d}r\label{eq49},
\end{eqnarray}
where
\begin{eqnarray*}
\tilde{\Delta}_r^{\prime\prime}&=&(r^2+a^2+n^2)\left[1-\frac{1}
{\ell^2}\,(r^2+5\,n^2)\right]-2\,(M^\prime\,r+n^2)
+{Q^\prime}^2+P^2\\
&=&\frac{1}{\ell^2}(r-r^\prime_-)(r-r^\prime_0)
(r-r^\prime_H)(r-r^\prime_C),
\end{eqnarray*}
$r_{Ci}$ and $r_{Cf}$ are the locations of the cosmological horizon
before and after the particle of energy $\omega$ and charge $q$
tunnels into it. There exists a single pole at the cosmological
horizon in (\ref{eq49}). Carrying out the $r$ integral, we have
\begin{eqnarray}
\textrm{Im}\,S&=&-\frac{1}{2}\int_{(-\frac{M}{\chi^2},\,-\frac{Q}
{\chi})}^{(-\frac{M+\omega}{\chi^2},\,-\frac{Q+q} {\chi})}
\frac{1}{\chi}\frac{4\pi\ell^2({r_C^\prime}^2+a^2+n^2)}{
(r^\prime_C-r^\prime_-)(r^\prime_C-r^\prime_0)
(r^\prime_C-r^\prime_H)}\nonumber\\
&&\times\left(\textrm{d}M^\prime-\frac{Q^\prime
r_C^\prime}{{r_C^\prime}^2+a^2+n^2}
\textrm{d}Q^\prime-\Omega^\prime_C \textrm{d}J^\prime\right)\nonumber\\
&=&-\frac{1}{2}\Delta S_{CH}\label{eq50},
\end{eqnarray}
where $\Delta S_{CH}$ is the change in Bekenstein-Hawking entropy
during the process of emission. The tunneling rate from the
cosmological horizon is therefore
\begin{equation}
\Gamma\sim e^{-2\,\textrm{Im}\,S}=e^{\Delta S_{CH}}\label{eq51}.
\end{equation}
It also deviates from the pure thermal spectrum and is consistent
with the underlying unitary theory, and takes the same functional
form as that of uncharged massless particles \cite{fifty seven}.
\section{Concluding Remarks}\label{sec:4}
In this paper we present our investigation of tunneling radiation
characteristics of massive charged particles from a more general
spacetime, namely, the NUT-Kerr-Newman-Kasuya-de Sitter spacetime,
which we call the Hot-NUT-Kerr-Newman-Kasuya (H-NUT-KN-K, for
briefness) spacetime, since the de Sitter spacetime has the
interpretation of being hot \cite{fourty seven}. We apply KPW's
framework
\cite{six,seven,eight,nine,ten,eleven,twelve,thirteen,fourteen,fifteen}
to calculate the emission rate at the event/cosmological horizon. We
first introduce a simple but useful Painlev\'e coordinate
\cite{fifty eight} which transforms the line element in a convenient
form with having many superior features in favor of our study.
Secondly, we treat the charged massive particle as a de Broglie
wave, and derive the equation of motion by computing the phase
velocity. Thirdly, we take into account the particle's
self-gravitation and treat the background spacetime as dynamical.
Then the energy conservation and the angular momentum conservation
as well as the electric charge conservation are enforced in a
natural way. Adopting this tunneling picture, we were able to compute
the tunneling rate and the radiant spectrum for a massive charged
particle with revised Lagrangian function and WKB approximation. The
result shows that the tunneling rate is related to the change of
Bekenstein-Hawking entropy and depends on the emitted particle's
energy and electric charge. Meanwhile, this implies that the
emission spectrum is not perfectly thermal but is in agreement with an
underlying unitary theory.
The result obtained by us reduces to the Kerr-Newman black hole case
for $\ell\rightarrow\infty$, $P=0=n$, and gives the result of Zhang
et al. \cite{fourty five}. For $\ell\rightarrow\infty$, $a=0=n$, our
result reduces to that of the Reissner-Nordstr\"om black hole, as
was obtained in \cite{twenty nine}. Moreover, by suitably choosing
the parameters of the spacetime, the result of this paper can be
specialized for all the interesting black hole spacetimes, de Sitter
spacetimes as well as the NUT spacetime which has curious properties
\cite{fifty one}. The NUT spacetime is a generalization of the Schwarzschild spacetime and plays an important role in the conceptual development of general relativity and in the construction of brane solutions in string theory and M-theory \cite{sixty one,sixty two,sixty three}. The NUT spacetime has been of particular interest in recent years because of the role it plays in furthering our understanding of the AdS-CFT correspondence \cite{sixty four,sixty five,sixty six}. Solutions of Einstein equations with negative cosmological constant $\Lambda $ and a nonvanishing NUT charge have a boundary metric that has closed timelike curves. The behavior of quantum field theory is significantly different in such spacetimes. It is of interest to understand how AdS-CFT correspondence works in these sorts of cases \cite{sixty seven}. Our result can directly be extended to the AdS case (as was obtained in Taub-NUT-AdS spacetimes in \cite{sixty eight}) by changing the sign of the cosmological parameter $\ell^2$ to a negative one. In view of these attractive features, the study of this paper is interesting.
Our study indicates that the emission process satisfies the first
law of black hole thermodynamics, which is, in fact, a combination
of the energy conservation law:
$\textrm{d}M-V_0\textrm{d}Q-\Omega_H\textrm{d}J=dQ_h$ and the second
law of thermodynamics: $\textrm{d}S=dQ_h/T$, $Q_h$ being the heat
quantity. Indeed, the equation of energy conservation is suitable
for any process, reversible or irreversible, but
$\textrm{d}S=dQ_h/T$ is only reliable for the reversible process;
for an irreversible process, $\textrm{d}S>dQ_h/T$. The emission
process in the KPW tunneling framework is thus a reversible one. In
this picture, the background spacetime and the outside approach
thermal equilibrium by the process of entropy flux
$\textrm{d}S=dQ_h/T$. As the H-NUT-KN-K spacetime radiates, its
entropy decreases but the total entropy of the system remains
constant, and the information is preserved. But in fact, the
existence of the negative heat capacity makes an evaporating black
hole a highly unstable system, and the thermal equilibrium between
the black hole and the outside becomes unstable, so a temperature
difference will arise. Then the process is irreversible,
$\textrm{d}S>dQ_h/T$, and the underlying unitary theory is not
satisfied. There will be information loss during the evaporation and
the KPW's tunneling framework will fail to prove the information
conservation. Further, the preceding study is still a semi-classical
analysis -- the radiation is treated as point particles. The
validity of such an approximation can only exist in the low energy
regime. To properly address the information loss problem, a better
understanding of physics at the Planck scale is a necessary
prerequisite, especially that of the last stages or the endpoint of
Hawking evaporation.
Using the interesting method of complex paths, Shankanarayanan et al.
\cite{sixty nine,seventy,seventy one} investigated the Hawking
radiation by tunneling approach, considering the amplitude for pair
creation both inside and outside the horizon. In their formalism the
tunneling of particles produced just inside the horizon also
contributes to the Hawking radiation.
Akhmedov et al. \cite{seventy two,seventy three} investigated Hawking
radiation in the quasi-classical tunneling picture by the
Hamilton-Jacobi equations, using
$\Gamma\propto\textrm{exp}\{\textrm{Im}(\oint p\textrm{d}r)\}$
\cite{seventy four}, rather than
$\Gamma\propto\textrm{exp}\{2\,\textrm{Im}(\int p\textrm{d}r)\}$,
and argued that the former expression for $\Gamma$ is correct since
$\oint p\textrm{d}r$ is invariant under canonical transformation,
while $\int p\textrm{d}r$ is not. According to their argument the
temperature of the Hawking radiation should be twice as large as
originally calculated.
Wu et al. \cite{seventy five} studied the tunneling effect near a
weakly isolated horizon \cite{seventy six} by applying the null
geodesic method of KPW and the Hamilton-Jacobi method \cite{thirty
one}, both of which lead to the same result. However, there are subtle
differences, e.g., in KPW's method, only the canonical time
direction can define the horizon mass and lead to the first law of
black hole mechanics, while the thermal spectrum exists for any
choice of time direction in the Hamilton-Jacobi method. Berezin et al. \cite{seventy seven} used a self-consistent canonical quantization of self-gravitating spherical shell to describe Hawking radiation as tunneling. Their work is analogous to KPW but due to the fact that they took into account back reaction of the shell on the metric they did not have a singular potential at $r_g$ (it is smoothed in their case between $r_{\rm in}$ and $r_{\rm out}$) and the use of the semi-classical approximation to describe tunneling seems more justified.
Since the discovery of the first exact solution of Einstein's field
equations, the study of the properties of black holes has always been
a highlight of gravitational physics. Providing mechanisms to fuel the
most powerful engines in the cosmos, black holes are playing a major
role in relativistic astrophysics. Indeed, the famous Hawking
radiation from the event horizon of black holes is one of the most
important achievements of quantum field theory in curved
spacetimes. In fact, due to Hawking evaporation, classical general
relativity, statistical physics, and quantum field theory are
connected in quantum black hole physics. It is generally believed
that the deep investigation of black hole physics would be helpful
to set up a satisfactory quantum theory of gravity. In view of this,
tunneling process of Hawking radiation deserves more investigations
in a wider context.
\vspace{1.0cm}
\noindent
{\large\bf Acknowledgement}\\
I am thankful to SIDA as well as the Abdus Salam International
Centre for Theoretical Physics, Trieste, Italy, where this paper was
produced during my Associateship visit.
\newpage
\section{\@ifstar{\origsection*}{\@startsection{section}{1}\z@{.7\linespacing\@plus\linespacing}{.5\linespacing}{\normalfont\scshape\centering\S}}}
\def\@startsection{section}{1}\z@{.7\linespacing\@plus\linespacing}{.5\linespacing}{\normalfont\scshape\centering\S}{\@startsection{section}{1}\z@{.7\linespacing\@plus\linespacing}{.5\linespacing}{\normalfont\scshape\centering\S}}
\makeatother
\usepackage{amsmath,amssymb,amsthm}
\usepackage{mathrsfs}
\usepackage{mathabx}\changenotsign
\usepackage{dsfont}
\usepackage{xcolor}
\usepackage[backref]{hyperref}
\hypersetup{
colorlinks,
linkcolor={red!60!black},
citecolor={green!60!black},
urlcolor={blue!60!black}
}
\usepackage{graphicx}
\usepackage[open,openlevel=2,atend]{bookmark}
\usepackage[abbrev,msc-links,backrefs]{amsrefs}
\usepackage{doi}
\renewcommand{\doitext}{DOI\,}
\renewcommand{\PrintDOI}[1]{\doi{#1}}
\renewcommand{\eprint}[1]{\href{http://arxiv.org/abs/#1}{arXiv:#1}}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\usepackage[babel]{microtype}
\usepackage[english]{babel}
\linespread{1.3}
\usepackage{geometry}
\geometry{left=27.5mm,right=27.5mm, top=25mm, bottom=25mm}
\numberwithin{equation}{section}
\numberwithin{figure}{section}
\usepackage{enumitem}
\def\upshape({\itshape \roman*\,}){\upshape({\itshape \roman*\,})}
\def\upshape(\Roman*){\upshape(\Roman*)}
\def\upshape({\itshape \alph*\,}){\upshape({\itshape \alph*\,})}
\def\upshape({\itshape \Alph*\,}){\upshape({\itshape \Alph*\,})}
\def\upshape({\itshape \arabic*\,}){\upshape({\itshape \arabic*\,})}
\let\polishlcross=\l
\def\l{\ifmmode\ell\else\polishlcross\fi}
\def\ \text{and}\ {\ \text{and}\ }
\def\quad\text{and}\quad{\quad\text{and}\quad}
\def\qquad\text{and}\qquad{\qquad\text{and}\qquad}
\let\emptyset=\varnothing
\let\setminus=\smallsetminus
\let\backslash=\smallsetminus
\let\sm=\setminus
\makeatletter
\def\mathpalette\mov@rlay{\mathpalette\mov@rlay}
\def\mov@rlay#1#2{\leavevmode\vtop{ \baselineskip\z@skip \lineskiplimit-\maxdimen
\ialign{\hfil$\m@th#1##$\hfil\cr#2\crcr}}}
\newcommand{\charfusion}[3][\mathord]{
#1{\ifx#1\mathop\vphantom{#2}\fi
\mathpalette\mov@rlay{#2\cr#3}
}
\ifx#1\mathop\expandafter\displaylimits\fi}
\makeatother
\newcommand{\charfusion[\mathbin]{\cup}{\cdot}}{\charfusion[\mathbin]{\cup}{\cdot}}
\newcommand{\charfusion[\mathop]{\bigcup}{\cdot}}{\charfusion[\mathop]{\bigcup}{\cdot}}
\DeclareFontFamily{U} {MnSymbolC}{}
\DeclareSymbolFont{MnSyC} {U} {MnSymbolC}{m}{n}
\DeclareFontShape{U}{MnSymbolC}{m}{n}{
<-6> MnSymbolC5
<6-7> MnSymbolC6
<7-8> MnSymbolC7
<8-9> MnSymbolC8
<9-10> MnSymbolC9
<10-12> MnSymbolC10
<12-> MnSymbolC12}{}
\DeclareMathSymbol{\powerset}{\mathord}{MnSyC}{180}
\usepackage{tikz}
\usetikzlibrary{calc,arrows,decorations.pathmorphing}
\pgfdeclarelayer{background}
\pgfdeclarelayer{foreground}
\pgfdeclarelayer{front}
\pgfsetlayers{background,main,foreground,front}
\usepackage{adjustbox}
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{problem}[theorem]{Problem}
\theoremstyle{remark}
\newtheorem*{remark}{Remark}
\newenvironment{poc}{\renewcommand{\qedsymbol}{$\blacksquare$}\begin{proof}[Proof of claim]}{\end{proof}}
\def{C}{{C}}
\newcommand{\ensuremath{\varepsilon}}{\ensuremath{\varepsilon}}
\def\mathds{P}{\mathds{P}}
\def\mathds{E}{\mathds{E}}
\def\mathds{N}{\mathds{N}}
\def\mathds{Z}{\mathds{Z}}
\def\mathds{R}{\mathds{R}}
\def\mathcal{P}{\mathcal{P}}
\def\mathds{S}{\mathds{S}}
\def\mathscr{S}{\mathscr{S}}
\def\mathcal{H}{\mathcal{H}}
\def\mathcal{G}{\mathcal{G}}
\defG^{(r)}{G^{(r)}}
\def\mathrm{e}{\mathrm{e}}
\DeclareMathOperator\Deg{d}
\DeclareMathOperator\ex{ex}
\newcommand*{\abs}[1]{\lvert#1\rvert}
\newcommand{33}{33}
\newcommand{\ensuremath{\varepsilon}}{\ensuremath{\varepsilon}}
\newcommand{\alpha}{\alpha}
\let\originalleft\left
\let\originalright\right
\renewcommand{\left}{\mathopen{}\mathclose\bgroup\originalleft}
\renewcommand{\right}{\aftergroup\egroup\originalright}
\makeatletter
\def\imod#1{\allowbreak\mkern10mu({\operator@font mod}\,\,#1)}
\makeatother
\newcommand{}{}
\begin{document}
\title{Sharp thresholds for nonlinear Hamiltonian cycles in hypergraphs}
\author{Bhargav Narayanan}
\address{Department of Mathematics, Rutgers University, Piscataway, NJ 08854, USA}
\email{[email protected]}
\author{Mathias Schacht}
\address{Department of Mathematics, Yale University, New Haven, USA}
\email{[email protected]}
\subjclass[2010]{Primary 05C80; Secondary 05C65, 05C45}
\begin{abstract}
For positive integers $r > \ell$, an $r$-uniform hypergraph is called an \emph{$\ell$-cycle} if there exists a cyclic ordering of its vertices such that each of its edges consists of~$r$ consecutive vertices, and such that every pair of consecutive edges (in the natural ordering of the edges) intersect in precisely $\ell$ vertices. Such cycles are said to be \emph{linear} when~$\ell = 1$, and \emph{nonlinear} when $\ell > 1$. We determine the sharp threshold for nonlinear Hamiltonian cycles and show that for all $r > \ell > 1$, the threshold $p^*_{r, \ell} (n)$ for the appearance of a Hamiltonian $\ell$-cycle in the random $r$-uniform hypergraph on $n$ vertices is sharp and \mbox{is~$p^*_{r, \ell} (n) = \lambda(r,\ifmmode\ell\else\polishlcross\fi) (\frac{\mathrm{e}}{n})^{r - \ell}$} for an explicitly specified function $\lambda$.
This resolves several questions raised by Dudek and Frieze in 2011.
\end{abstract}
\maketitle
\section{Introduction}
A basic problem in probabilistic combinatorics concerns locating the critical density at which a substructure of interest appears inside a random structure (with high probability). In the context of random graph theory, the question of when a random graph contains a \emph{Hamiltonian cycle} has received considerable attention.
Indeed, from the foundational works of P\'osa~\cite{ham1}, Koml\'os and Szemer\'edi~\cite{ham2}, Bollob\'as~\cite{ham3}, and Ajtai, Koml\'os and Szemer\'edi~\cite{ham4}, we have a very complete picture, understanding not only the \emph{sharp threshold} for this problem but the \emph{hitting time} as well. Since these early breakthroughs, there have been a number of papers locating thresholds, both asymptotic and sharp, for various spanning subgraphs of interest (see, e.g.,~\cites{oliver,richard} and the references therein for various related results).
In contrast, threshold results for spanning structures in the context of random hypergraph theory have been somewhat harder to come by. Indeed, even the basic question of locating the asymptotic threshold at which a random $r$-uniform hypergraph (or \emph{$r$-graph}, for short) contains a matching, i.e., a spanning collection of disjoint edges, proved to be a major challenge, resisting the efforts of a number of researchers up until the breakthrough work of Johansson, Kahn and Vu~\cite{jkv}; more recently, both the sharp threshold as well as the hitting time for this problem have been obtained by Kahn~\cite{jeff}. In the light of these developments for matchings, we study what is perhaps the next most natural question in this setting, namely that of when a random $r$-graph contains a Hamiltonian cycle; our main contribution is to resolve the sharp threshold problem for nonlinear Hamiltonian cycles.
There are multiple notions of cycles in hypergraphs, so let us recall the relevant definitions: given positive integers $r > \ell \ge 1$, an $r$-graph is called an \emph{$\ell$-cycle} if there exists a cyclic ordering of its vertices such that each of its edges consists of $r$ consecutive vertices in the ordering, and such that every pair of consecutive edges (in the natural ordering of the edges) intersect in precisely $\ell$ vertices (see Figure~\ref{pic:cycle} for an example). A \emph{Hamiltonian $\ell$-cycle} is then an $\ell$-cycle spanning the entire vertex set; of course, an $r$-graph on $n$ vertices may only contain a Hamiltonian $\ell$-cycle when $(r-\ell) \,|\, n$, and such a cycle then has precisely $n / (r - \ell)$ edges. Finally, by convention, an $\ell$-cycle is called \emph{linear} (or \emph{loose}) when $\ell = 1$, \emph{nonlinear} when $\ell > 1$, and \emph{tight} when $\ell = r-1$.
Given $r > \ell \ge 1$, we set
\[\lambda(r,\ifmmode\ell\else\polishlcross\fi) = t!\cdot(s-t)!,\]
where $s = r - \ell$ and $1 \le t \le s$ is the unique integer satisfying $t = r \imod{s}$, and define
\[ p^*_{r, \ell} (n) = \frac{\lambda(r,\ifmmode\ell\else\polishlcross\fi)\mathrm{e}^s}{ n^{s}}.
\]
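For concreteness, two examples computed directly from these definitions: for tight cycles, where $\ell = r-1$, we have $s = t = 1$ and $\lambda(r, r-1) = 1$, so that $p^*_{r, r-1}(n) = \mathrm{e}/n$; for the $7$-uniform $4$-cycle of Figure~\ref{pic:cycle}, we have $s = 3$ and $t = 1$, so that $\lambda(7, 4) = 1!\cdot 2! = 2$ and $p^*_{7, 4}(n) = 2\mathrm{e}^3/n^3$.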
Writing $G^{(r)}(n,p)$ for the binomial random $r$-graph on $n$ vertices, where each possible $r$-set of vertices appears as an edge independently with probability $p$, our main result is as follows.
\begin{theorem}\label{mainthm}
For all integers $r > \ell > 1$ and all $\ensuremath{\varepsilon} > 0$, as $n \to \infty$ with $(r-\ell) \,|\, n$, we have
\[
\mathds{P} \left( G^{(r)}(n,p) \text{ contains a Hamiltonian $\ell$-cycle} \right) \to
\begin{cases}
1 & \mbox{if } p > (1+ \ensuremath{\varepsilon})p^*, \text{ and} \\
0 & \mbox{if } p < (1 -\ensuremath{\varepsilon})p^*, \\
\end{cases}
\]
where we abbreviate $p^* = p^*_{r, \ell} (n)$.
\end{theorem}
\begin{figure}
\begin{center}
\trimbox{0cm 0cm 0cm 0cm}{
\begin{tikzpicture}[scale = 0.6]
\foreach \x in {0,1,2,3,4,5,6,7}
\node at (3*\x, 0) [inner sep=0.5mm, circle, fill=black!100] {};
\foreach \x in {0,1,2,3,4,5,6}
\node at (3*\x+1, 0) [inner sep=0.5mm, circle, fill=black!100] {};
\foreach \x in {0,1,2,3,4,5,6}
\node at (3*\x+2, 0) [inner sep=0.5mm, circle, fill=black!100] {};
\foreach \x in {0,3,6,9,12,15}
\draw[shift={(\x,0)}] (3,0) ellipse (3.5cm and 0.5cm);
\draw[dotted, thick] (-1.5,0)--(-0.5,0);
\draw[dotted, thick] (21.5,0)--(22.5,0);
\end{tikzpicture}
}
\end{center}
\caption{A $7$-uniform $4$-cycle.}\label{pic:cycle}
\end{figure}
The critical density $p^*_{r, \ell}$ appearing in our result corresponds to the so-called `expectation threshold', namely the density above which the expected number of Hamiltonian $\ell$-cycles in~$G^{(r)}(n,p)$ begins to diverge. A moment's thought should convince the reader that, unlike in the case of linear Hamiltonian cycles where one has to worry about isolated vertices, there are no `coupon collector type' obstacles to the presence of nonlinear Hamiltonian cycles; therefore, the conclusion of Theorem~\ref{mainthm} should not come as a surprise. Indeed, the problem of whether something like Theorem~\ref{mainthm} ought to hold was raised by Dudek and Frieze~\cites{loose, tight}. Towards such a result, they showed that $p^*_{r, \ell}$ is an asymptotic threshold for all nonlinear Hamiltonian cycles, that $p^*_{r, \ell}$ is a sharp threshold for tight Hamiltonian cycles when $r\ge 4$, and that $p^*_{r, \ell}$ is a semi-sharp threshold for all $r > \ell \ge 3$.
The main difficulty in proving Theorem~\ref{mainthm} is that, with the exception of the case of tight Hamiltonian cycles with $r \ge 4$ mentioned earlier, the second moment method is in itself not sufficient to prove the result; for instance, it is easy to verify, even in the simple case of $r = 3$ and $\ell = 2$ (i.e., tight Hamiltonian cycles in $3$-graphs) that the requisite second moment is too large to yield our result. To prove Theorem~\ref{mainthm}, we shall combine a careful second moment estimate, which necessitates working modulo various symmetries, with a powerful theorem of Friedgut~\cite{friedgut1} characterising coarse thresholds.
This paper is organised as follows. We gather the tools we require in Section~\ref{prelim}. The proof of Theorem~\ref{mainthm} follows in Section~\ref{proof}. We conclude in Section~\ref{conc} with a discussion of some open problems.
\section{Preliminaries}\label{prelim}
We begin with some background on thresholds. Recall that a \emph{monotone $r$-graph property~$W$} is a sequence $(W_n)_{n \ge 0}$ of families of $r$-graphs, where $W_n$ is a family of $r$-graphs on~$n$ vertices closed under the addition of edges and invariant under $r$-graph isomorphism.
Given a monotone $r$-graph property~$W = (W_n)_{n \ge 0}$, a function $p^*(n)$ is said to be a \emph{threshold} or \emph{asymptotic threshold} for $W$ if $\mathds{P}(G^{(r)}(n,p) \in W_n)$ tends, as $n \to \infty$, either to $1$ or $0$ as $p / p^*$ tends either to $\infty$ or $0$ respectively, and a function $p^*(n)$ is said to be a \emph{sharp threshold} for $W$ if $\mathds{P}(G^{(r)}(n,p) \in W_n)$ tends, as $n \to \infty$, either to $1$ or $0$ as $p / p^*$ remains bounded away from $1$ either from above or below respectively. Of course, thresholds and sharp thresholds are not unique, but following common practice, we will often say `the' threshold or sharp threshold when referring to the appropriate equivalence class of functions. Finally, a function $p^*(n)$ is said to be a \emph{semi-sharp threshold} for $W$ if there exist constants $C_0 \le 1 \le C_1$ such that $\mathds{P}(G^{(r)}(n,p) \in W_n)$ tends, as $n \to \infty$, either to $1$ or $0$ as~$p / p^*$ remains bounded below by $C_1$ or above by $C_0$ respectively; while we do not need this notion ourselves, we give this definition to place existing results around our main result in the appropriate context.
That every monotone property has an asymptotic threshold follows from a (much more general) result of Bollob\'as and Thomason~\cite{thresh}. Unlike with asymptotic thresholds, a monotone property need not necessarily have a sharp threshold; such properties are said to have \emph{coarse thresholds}. We shall make use of a powerful characterisation of monotone properties that have coarse thresholds due to Friedgut~\cite{friedgut1}, which says, roughly, that such properties are `approximable by a local property'; a concrete formulation, at a level of generality sufficient for our purposes (see~\cite{friedgut2}), is as follows.
\begin{proposition}\label{st}
Fix $r \in \mathds{N}$ and let $W = (W_n)_{n \ge 0}$ be a monotone $r$-graph property that has a coarse threshold. Then there exists a constant $\alpha > 0$, a threshold function $\hat p = \hat p(n)$ with
\[ \alpha < \mathds{P}\big(G^{(r)}(n, \hat p) \in W_n\big) < 1-3\alpha \]
for all $n \in \mathds{N}$, a constant $\beta > 0$ and a fixed $r$-graph $F$ such that the following holds: for infinitely many $n \in \mathds{N}$, there exists an $r$-graph on $n$ vertices $H_n \notin W_n$ such that
\[ \mathds{P}\big(H_n \cup G^{(r)}(n,\beta \hat p) \in W_n\big) < 1-2\alpha, \]
where the random $r$-graph $G^{(r)}(n, \beta \hat p)$ is taken to be on the same vertex set as $H_n$, and
\[ \mathds{P}\big(H_n \cup \tilde F \in W_n\big) > 1-\alpha, \]
where $\tilde F$ denotes a random copy of $F$ on the same vertex set as $H_n$. \qed
\end{proposition}
We shall also require the Paley--Zygmund inequality.
\begin{proposition}\label{pz}
If $X$ is a non-negative random variable, then
\[ \mathds{P}(X > 0) \ge \frac{\mathds{E}[X]^2}{\mathds{E}[X^2]}. \eqno\qed \]
\end{proposition}
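For completeness, we recall the one-line proof: by the Cauchy--Schwarz inequality, $\mathds{E}[X]^2 = \mathds{E}\big[X\cdot\mathds{1}_{\{X>0\}}\big]^2 \le \mathds{E}[X^2]\,\mathds{P}(X>0)$.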
Finally, we collect together some standard estimates for factorials and binomial coefficients.
\begin{proposition}\label{stirling}
For all $n \in \mathds{N}$, we have
\[ \sqrt{2\pi n} \left(\frac{n}{\mathrm{e}}\right)^n \le n! \le \mathrm{e} \sqrt{ n} \left(\frac{n}{\mathrm{e}}\right)^n,\]
and for all positive integers $1\le k \le n$, we have
\[ \binom{n}{k} \le \left(\frac{\mathrm{e} n}{k}\right)^k. \eqno \qed \]
\end{proposition}
\section{Proof of the main result}\label{proof}
In this section, we shall prove Theorem~\ref{mainthm}. We begin by setting up some notational conventions that we shall adhere to in the sequel.
In what follows, we fix $r, \ell \in \mathds{N}$ with $r > \ell > 1$, set $s = r - \ell$, take $t$ to be the unique integer satisfying $t = r \imod{s}$ with $1 \le t \le s$, and set $\lambda = t!(s-t)!$. We shall henceforth assume that $n$ is a large integer divisible by $s$, and we set $m = n / s$ so that $m$ is the number of edges in an $\ell$-cycle on $n$ vertices. Finally, all $r$-graphs on $n$ vertices in the sequel will implicitly be assumed to be on the vertex set $[n] = \{1, 2, \dots, n\}$.
To deal with $r$-graph cycles on the vertex set $[n]$, we shall define an equivalence relation on $S_n$, the symmetric group of permutations of $[n] = \{1, 2,\dots, n\}$; we shall ignore the group structure of $S_n$ for the most part, so for us, a permutation $\sigma \in S_n$ is just an arrangement $\sigma(1), \sigma(2), \dots, \sigma(n)$ of the elements of $[n]$ (namely, vertices), at locations indexed by $[n]$.
We divide $[n]$ into $m$ \emph{blocks} of size $s$, where for $0\le i < m$, the $i$-th such block is comprised of the interval $\{is+1, is+2, \dots, is + s\}$ of vertices, and we further divide each such block into two \emph{subblocks}, where the $t$-subblock of a block consists of the first $t$ vertices in the block, and the $(s-t)$-subblock of a block consists of the last $s-t$ vertices in the block. Now, define an equivalence relation on $S_n$ by saying that two permutations $\sigma$ and $\tau$ are \emph{subblock equivalent} if $\tau$ may be obtained from $\sigma$ by only rearranging vertices within subblocks; in other words, an equivalence class of this equivalence relation may be viewed as an element of the quotient $Q_n = S_n / (S_t \times S_{s-t})^m$.
\begin{figure}
\begin{center}
\trimbox{0cm 0cm 0cm 0cm}{
\begin{tikzpicture}[scale = 0.6]
\draw[fill = gray!20] (5.7,-1) rectangle (8.3,1);
\draw[shift={(3,0)},fill = gray!20] (5.7,-1) rectangle (8.3,1);
\draw[shift={(6,0)}, fill = gray!20] (5.7,-1) rectangle (8.3,1);
\foreach \x in {0,1,2,3,4,5,6,7}
\node at (3*\x, 0) [inner sep=0.5mm, circle, fill=red!100] {};
\foreach \x in {0,1,2,3,4,5,6}
\node at (3*\x+1, 0) [inner sep=0.5mm, circle, fill=blue!100] {};
\foreach \x in {0,1,2,3,4,5,6}
\node at (3*\x+2, 0) [inner sep=0.5mm, circle, fill=blue!100] {};
\foreach \x in {0,3,6,9,12,15}
\draw[shift={(\x,0)}] (3,0) ellipse (3.5cm and 0.5cm);
\draw[dotted, thick] (-1.5,0)--(-0.5,0);
\draw[dotted, thick] (21.5,0)--(22.5,0);
\end{tikzpicture}
}
\end{center}
\caption{Blocks and subblocks of a $7$-uniform $4$-cycle.}\label{pic:blockstruc}
\end{figure}
The definition of the above equivalence relation is motivated by the natural $\ell$-cycle associated with a permutation: given $\sigma \in S_n$, consider the $r$-graph $H_\sigma$ on $[n]$ with $m$ edges, where for $0\le i < m$, the $i$-th edge of $H_\sigma$ is the $r$-set $\{\sigma(is+1), \sigma(is+2), \dots, \sigma(is + r)\}$, of course with indices being considered cyclically modulo $n$. It is easy to verify both that $H_\sigma$ is an $\ell$-cycle for each $\sigma \in S_n$, and that if $\sigma$ and $\tau$ are subblock equivalent, then $H_\sigma = H_\tau$; see Figure~\ref{pic:blockstruc} for an illustration. Hence, in what follows, we shall abuse notation and call the elements of $Q_n$ permutations (when strictly speaking, they are equivalence classes of permutations), and for $\sigma \in Q_n$, we write $H_\sigma$ for the natural $\ell$-cycle associated with $\sigma$.
We parameterise $p = Cp^*_{r,\ell}(n) = C \lambda \mathrm{e}^s / n^s$ for some constant $C > 0$, and work with $G = G^{(r)}(n,p)$, where we take the vertex set of $G$ to be $[n]$. Therefore, our goal is to show that $G$ contains a Hamiltonian $\ell$-cycle with high probability when $C > 1$ (namely, the $1$-statement), and that $G$ does not contain a Hamiltonian $\ell$-cycle with high probability when $C < 1$ (namely, the $0$-statement).
In what follows, constants suppressed by asymptotic notation are allowed to depend on fixed parameters (quantities depending only on $r$, $\ell$, $C$, etc.) but not on variables that depend on $n$, which we send to infinity along multiples of $s$. We also adopt the standard convention that an event holds with high probability if the probability of the event in question is $1-o(1)$ as $n \to \infty$.
We shall focus our attention on the random variable $X$ that counts the number of $\sigma \in Q_n$ for which the $\ell$-cycle $H_\sigma$ is contained in $G$, noting that $G$ contains a Hamiltonian $\ell$-cycle if and only if $X > 0$.
We start by computing the first moment of $X$.
\begin{lemma}\label{expectation}
We have $\mathds{E}[X] = |Q_n|p^m = n! (p / \lambda)^m$, so that
\[
\mathds{E}[X] \longrightarrow
\begin{cases}
\infty & \mbox{if } C > 1, \text{ and} \\
0 & \mbox{if } C < 1. \\
\end{cases}
\]
\end{lemma}
\begin{proof}
This follows from noting that $|Q_n| = n! / \lambda^m$, estimating $n!$ using Proposition~\ref{stirling}, and using the fact that $n = ms$.
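In more detail: since $p/\lambda = C\mathrm{e}^s/n^s$ and $sm = n$, we have
\[ \mathds{E}[X] = n!\left(\frac{C\mathrm{e}^s}{n^s}\right)^m = \frac{n!\,\mathrm{e}^n}{n^n}\,C^m = \Theta\left(\sqrt{n}\,C^m\right), \]
and since $m = n/s \to \infty$, the factor $C^m$ dominates, diverging for $C > 1$ and vanishing for $C < 1$.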
\end{proof}
In particular, the above first moment estimate, combined with Markov's inequality, establishes the $0$-statement. To establish the $1$-statement, the following second moment estimate will be crucial.
\begin{lemma}\label{variance}
For $C > 1$, we have $\mathds{E}[X^2] = O(\mathds{E}[X]^2)$.
\end{lemma}
Let us point out that Lemma~\ref{variance} does not make the stronger promise that \[\mathds{E}[X^2] = (1+o(1))\mathds{E}[X]^2\,,\] and indeed, such an estimate does not hold generally for an arbitrary pair of integers $r > \ell > 1$.
\begin{proof}[Proof of Lemma~\ref{variance}]
To estimate the second moment of $X$, it will be convenient to make the following definition: for $0 \le b \le m$, let $N(b)$ denote, for any fixed permutation $\sigma \in Q_n$, the number of permutations $\tau \in Q_n$ meeting $\sigma$ in $b$ edges, by which we mean that $H_\tau$ intersects $H_\sigma$ in exactly $b$ edges. With this definition in place, using the trivial fact that $N(0) \le |Q_n|$, we have
\begin{align*}
\mathds{E}[X^2] & = \sum_{\sigma, \tau \in Q_n} \mathds{P} (H_\sigma \cup H_\tau \subset G) \\
& = |Q_n|p^m \sum_{b = 0}^m N(b) p^{m-b} \\
& \le |Q_n|^2p^{2m} + |Q_n|p^m \sum_{b = 1}^m N(b) p^{m-b} \\
& =\mathds{E}[X]^2 + \mathds{E}[X]\sum_{b = 1}^m N(b) p^{m-b},
\end{align*}
whence it follows that
\[
\frac{\mathds{E}[X^2] }{ \mathds{E}[X] ^2} \le 1 + \sum_{b = 1}^m \frac{N(b)p^{-b}} {|Q_n|},
\]
so to prove Lemma~\ref{variance}, it suffices to show that the sum
\[\Gamma = \sum_{b = 1}^m \frac{N(b)p^{-b}}{ |Q_n|} \]
satisfies the estimate $\Gamma = O(1)$ when $C > 1$.
The rough plan of attack now is similar to that adopted by Dudek and Frieze~\cite{tight}, but we shall require a more careful two-stage analysis since we require stronger estimates: first, we shall control the `canonical' contributions to $\Gamma$, and subsequently bound the `non-canonical' contributions in terms of the aforementioned `canonical' ones; we make precise these notions below.
An $r$-graph is called an \emph{$\ell$-path} if there exists a linear ordering of its vertices such that each of its edges consists of $r$ consecutive vertices, and such that every pair of consecutive edges (in the natural ordering of the edges) intersect in precisely $\ell$ vertices. Given a permutation $\sigma \in Q_n$, we say that $\tau \in Q_n$ meets $\sigma$ \emph{canonically} if $H_\tau$ meets $H_\sigma$ in a family of vertex-disjoint $\ell$-paths, and we otherwise say that $\tau$ meets $\sigma$ \emph{non-canonically}.
For $1 \le b \le m$, let $N_c(b)$ be the number of permutations $\tau \in Q_n$ which canonically meet a fixed permutation $\sigma \in Q_n$ in $b$ edges, set $N'(b) = N(b) - N_c(b)$, and decompose $\Gamma = \Gamma_c + \Gamma'$, where naturally
\[\Gamma_c = \sum_{b = 1}^m \frac{N_c(b)p^{-b}} {|Q_n|}\] and
\[\Gamma' = \Gamma - \Gamma_c = \sum_{b = 1}^m \frac{N'(b)p^{-b} }{ |Q_n|}.\]
First, we bound the canonical contributions to $\Gamma$.
\begin{claim}\label{canonical}
For $C > 1$, we have $\Gamma_c = O(1)$.
\end{claim}
\begin{proof}
Fix a permutation $\sigma \in Q_n$ and for $1 \le a \le b$, write $N_c (b, a)$ for the number of permutations $\tau \in Q_n$ which meet $\sigma$ canonically in $b$ edges which together form $a$ vertex-disjoint $\ell$-paths in $H_\sigma$. We now proceed to estimate $N_c (b,a)$.
Given $\sigma$, a $(b,a)$-configuration in $\sigma$ is a collection of $b$ edges in~$H_\sigma$ which together form~$a$ vertex-disjoint $\ell$-paths; clearly, a $(b,a)$-configuration covers $sb + \ell a$ vertices. The number of ways to choose a $(b,a)$-configuration in $\sigma$ is clearly at most \[\binom{m}{a} \binom{b}{a},\] since there are at most $\binom{m}{a}$ ways of locating the leftmost edge in each of the $a$ $\ell$-paths in~$H_\sigma$, and the number of ways to subsequently choose the number of edges in each of these $a$ paths so that there are $b$ edges in total is clearly at most the number of solutions to the equation $x_1 + x_2 + \dots + x_a = b$ over the positive integers, which is $\binom{b-1}{a-1} \le \binom{b}{a}$.
Next, given a $(b,a)$-configuration $\mathcal{P}$ in $\sigma$, let us count the number of choices for $\tau \in Q_n$ for which $H_\tau$ contains $\mathcal{P}$. We do this in two steps. First, we count the number of ways in which the vertices covered by $\mathcal{P}$ can be embedded into $\tau$, and then estimate the number of ways in which the vertices not covered by $\mathcal{P}$ can be ordered in $\tau$, ensuring at all times that we only count up to subblock equivalence.
\begin{figure}
\begin{center}
\trimbox{0cm 0cm 0cm 0cm}{
\begin{tikzpicture}[scale = 0.6]
\draw[thick,right hook-latex] (0,1.5) --(4,1.5);
\draw[fill = gray!20] (5.7,-1) rectangle (14.3,1);
\foreach \x in {0,1,2,3,4,5,6,7}
\node at (3*\x, 0) [inner sep=0.5mm, circle, fill=red!100] {};
\foreach \x in {0,1,2,3,4,5,6}
\node at (3*\x+1, 0) [inner sep=0.5mm, circle, fill=blue!100] {};
\foreach \x in {0,1,2,3,4,5,6}
\node at (3*\x+2, 0) [inner sep=0.5mm, circle, fill=blue!100] {};
\foreach \x in {0,3,6,9,12,15}
\draw[shift={(\x,0)}] (3,0) ellipse (3.5cm and 0.5cm);
\draw[shift={(0,-5)},thick,left hook-latex] (21,1.5) --(17,1.5);
\draw[shift={(1,-5)},fill = gray!20] (5.7,-1) rectangle (14.3,1);
\foreach \x in {0,1,2,3,4,5,6,7}
\node at (3*\x, -5) [inner sep=0.5mm, circle, fill=red!100] {};
\foreach \x in {0,1,2,3,4,5,6}
\node at (3*\x+1, -5) [inner sep=0.5mm, circle, fill=blue!100] {};
\foreach \x in {0,1,2,3,4,5,6}
\node at (3*\x+2, -5) [inner sep=0.5mm, circle, fill=blue!100] {};
\foreach \x in {0,3,6,9,12,15}
\draw[shift={(\x,-5)}] (3,0) ellipse (3.5cm and 0.5cm);
\end{tikzpicture}
}
\end{center}
\caption{The rigid interior of a $7$-uniform $4$-path in the two possible directions of embedding.}\label{pic:rigid}
\end{figure}
Now, there are at most $a! \binom{m}{a}$ ways to choose the starting blocks of the leftmost edges of the $a$ distinct $\ell$-paths of $\mathcal{P}$ in $\tau$. Once the left endpoint of one of these $\ell$-paths has been fixed in $\tau$, we observe that there are only $O(1)$ ways, up to subblock equivalence, to embed the remaining vertices of this $\ell$-path into $\tau$; indeed, the relative ordering of all the vertices in an $\ell$-path, with $O(1)$ exceptions at the left and right extremes, is rigid up to subblock equivalence, up to a reversal of the direction of embedding (left-to-right or right-to-left), as shown in Figure~\ref{pic:rigid}. Consequently, once the location of the $a$ leftmost edges have been determined in $\tau$, the number of ways of embedding the rest of $\mathcal{P}$ into $\tau$ is at most $L^a$ for some $L = L(r,\ell)$. We conclude that the number of ways to embed $\mathcal{P}$ is at most
\[ \binom{m}{a} a! L^a.\]
Once we have embedded $\mathcal{P}$ into $\tau$, there are $(n-sb-\ell a)!$ ways to arrange the remaining vertices uncovered by $\mathcal{P}$, without accounting for subblock equivalence. It is easy to check that any embedding of $\mathcal{P}$ covers at most $b + \ell a$ blocks in $\tau$, so the number of choices for~$\tau \in Q_n$ with a given embedding of $\mathcal{P}$ is at most
\[ \frac{(n-sb-\ell a)!}{\lambda^{m-b-\ell a}}. \]
From the above estimates and using Proposition~\ref{stirling} to bound binomial coefficients, we conclude that
\begin{align}\label{canoncount}
N_c (b,a) & \le \binom{m}{a} \binom{b}{a} \binom{m}{a} a! L^a \frac{(n-sb-\ell a)!}{\lambda^{m-b-\ell a}} \nonumber \\
& \le \exp{(O(a))} \frac{n^{2a}b^a}{a^{2a}} \frac{(n-sb-\ell a)!}{\lambda^{m-b}},
\end{align}
where, as remarked upon before, constants suppressed by the asymptotic notation depend only on $r$ and $\ell$.
To finish the proof of the lemma, we now use, in order, the above bound~\eqref{canoncount}, the fact that $\ell \ge 2$, Proposition~\ref{stirling}, and the fact that $1+x \le \mathrm{e}^x$ for all $x \in \mathds{R}$ to show that
\begin{align*}
\Gamma_c & = \sum_{b = 1}^m \sum_{a = 1}^b \frac{N_c(b,a)p^{-b}}{|Q_n|} \\
& \le \sum_{b = 1}^m \sum_{a = 1}^b \exp{(O(a))} \frac{n^{2a}b^a}{a^{2a}} \frac{(n-sb-\ell a)! }{\lambda^{m-b}} \frac{\lambda^m}{n!} \frac{n^{sb}}{C^b\lambda^b \mathrm{e}^{sb}} \\
& \le \sum_{b = 1}^m \sum_{a = 1}^b C^{-b}\exp{(O(a))} \frac{n^{2a}b^a}{a^{2a}} \frac{(n-sb-2a)! }{n!} \frac{n^{sb}}{ \mathrm{e}^{sb}} \\
& \le \sum_{b = 1}^m \sum_{a = 1}^b C^{-b}\exp{(O(a))} \frac{n^{sb + 2a}b^a}{a^{2a}} \frac{(n-sb-2a)^{n -sb-2a} }{n^n} \\
& \le \sum_{b = 1}^m \sum_{a = 1}^b C^{-b}\exp{(O(a))} \frac{b^a}{a^{2a}} \left( 1 - \frac{sb+2a}{n} \right)^{n -sb-2a} \\
& \le \sum_{b = 1}^m \sum_{a = 1}^b C^{-b} \exp{(a\log b - 2a \log a -sb -2a + (sb + 2a)^2/n + O(a) )}.
\end{align*}
We uniformly have $(4a^2 + 4sab)/n = O(a)$, since $a \le b \le m$ and $sb \le sm = n$, so the above estimate reduces to
\begin{equation}\label{est1}
\Gamma_c \le \sum_{b = 1}^m \left( C^{-b} \exp{(-sb + (sb)^2/n)} \sum_{a = 1}^b \exp{(a\log b - 2a \log a + O(a) )}\right).
\end{equation}
Finally, since $sb \le sm = n$, we uniformly have \[ \exp{(-sb + (sb)^2/n)} \le 1\] for all $1 \le b \le m$, and it is straightforward to verify that we uniformly have
\[\exp{(a\log b - 2a \log a + O(a) )} = \exp(o(b))\] for all $1 \le a \le b$. Using these two bounds in~\eqref{est1}, it follows that for $C > 1$, we have
\[ \Gamma_c \le \sum_{b = 1}^m C^{-b} b\exp(o(b)) = \sum_{b = 1}^m C^{-b + o(b)} = O(1), \]
proving the claim.
\end{proof}
The second and final step in the proof of Lemma~\ref{variance} is to estimate the non-canonical contributions to $\Gamma$.
\begin{claim}\label{non-canonical}
For $C > 1$, we have $\Gamma' = O(1)$.
\end{claim}
\begin{proof}
We shall prove the claim by means of a comparison argument: we shall demonstrate how we may group summands in $\Gamma'$ so as to get estimates analogous to those that we obtained for $\Gamma$ in the proof of Claim~\ref{canonical}.
For any $\sigma, \tau \in Q_n$, we may decompose the intersection of $H_\sigma$ and $H_\tau$ into a collection of vertex-disjoint \emph{weak paths}, where a \emph{weak path} is just a sequence of edges in which every consecutive pair of edges intersect.
We fix a permutation $\sigma \in Q_n$ for the rest of the argument. Given a weak path $P'$ in $H_\sigma$, notice that there is a unique minimal $\ell$-path $P$ in $H_\sigma$ covering precisely the same set of vertices as $P'$; we call $P$ the \emph{minimal cover} of $P'$. Now, given any $\tau \in Q_n$, there is a unique minimal \emph{covering configuration in $\sigma$} associated with $\tau$ obtained by taking the minimal covers of each of the weak paths in which $H_\tau$ meets $H_\sigma$. To prove the claim, we shall show that the contributions to $\Gamma'$ from all those $\tau \in Q_n$ whose covering configuration is $\mathcal{P}$ is comparable to the contributions to $\Gamma_c$ from all those $\tau \in Q_n$ meeting $\sigma$ canonically in $\mathcal{P}$.
We fix a $(b,a)$-configuration $\mathcal{P}$ in $\sigma$ consisting of $b$ edges in total distributed across $a$ $\ell$-paths, and we consider the set of permutations $\tau \in Q_n$ with minimal cover $\mathcal{P}$ that meet $\sigma$ non-canonically; we additionally parametrise this set by $1 \le k \le b$, writing $Q(\mathcal{P}, k)$ for the set of such permutations $\tau$ for which there are $k$ edges of $\mathcal{P}$ missing from the intersection~$\mathcal{P}'$ of $H_\tau$ and $H_\sigma$.
We claim that the number of ways to select a configuration $\mathcal{P}'$ as above, and then embed the vertices covered by $\mathcal{P}'$ into a permutation $\tau \in Q(\mathcal{P}, k)$ in such a way that $\mathcal{P}'$ is contained in $H_\tau$ is, up to subblock equivalence, at most
\[\binom{b}{k} \binom{m}{a}a! R^{a + k}\]
for some $R = R(r, \ell)$.
We may verify the estimate above as follows. The number of possible choices for $\mathcal{P}'$, namely the number of ways to choose $k$ edges from $\mathcal{P}$ such that each of the $\ell$-paths of~$\mathcal{P}$ remains a weak path after these $k$ edges are removed, may be crudely bounded above by $\binom{b}{k}$. As in the proof of Claim~\ref{canonical}, there are $\binom{m}{a}a!$ ways to choose the starting blocks of the leftmost edges of the $a$ distinct weak paths of $\mathcal{P}'$ in $\tau$. Assume now that we have fixed $\mathcal{P}'$, the starting blocks in $\tau$ of the leftmost edges of the $a$ weak paths of $\mathcal{P}'$, and the directions of embedding of these weak paths into $\tau$ (for which there are $2^a$ choices). Now, it is easy to see from the linear structure of a weak path that the relative order of vertices in disjoint edges of a weak path must be preserved in any embedding of that weak path into $\tau$, so in particular, there are only $O(1)$ choices for the location in $\tau$ of any particular vertex covered by $\mathcal{P}'$ (once endpoints and directions of embedding have been fixed, as we have assumed). Furthermore, it follows from the rigidity of an $\ell$-path (as in the proof of Claim~\ref{canonical}) that any vertex covered by $\mathcal{P}'$, with $O(1)$ exceptions at the left and right extremities of each of the $a$ weak paths, which possesses potential embedding locations in more than one subblock must necessarily be within $O(1)$ distance (in $\sigma$) of some edge present in $\mathcal{P}$ but not in $\mathcal{P}'$; clearly there are $O(k)$ such vertices in total. These facts taken together demonstrate the validity of the bound claimed above.
Now, noting that the contribution of any $\tau \in Q(\mathcal{P}, k)$ to $\Gamma'$ is a factor of $p^k$ times the contribution to $\Gamma_c$ from any $\tau \in Q_n$ meeting
$\sigma$ canonically in $\mathcal{P}$, we may mimic the proof of Claim~\ref{canonical} to show that
\[\Gamma' \le \sum_{b = 1}^m \sum_{a = 1}^b \left(\exp{(O(a))} \frac{n^{2a}b^a}{a^{2a}} \frac{(n-sb-\ell a)! }{\lambda^{m-b}} \frac{\lambda^m}{n!} \frac{n^{sb}}{C^b\lambda^b \mathrm{e}^{sb}} \sum_{k=1}^b\binom{b}{k}R^kp^k \right). \]
Observing that
\[\sum_{k=1}^b\binom{b}{k}R^kp^k = \sum_{k=1}^b \exp(O(k)) \frac{b^k}{k^kn^{sk}} = O(1), \]
we are left with an estimate for $\Gamma'$ of the same form as the one for $\Gamma_c$, which we showed to be $O(1)$ when $C>1$ in the proof of Claim~\ref{canonical}; the claim follows.
\end{proof}
The two claims above together imply that $\Gamma = O(1)$ when $C > 1$, from which it follows that $\mathds{E}[X^2] = O(\mathds{E}[X]^2)$ when $C > 1$; the result follows.
\end{proof}
With our moment estimates in hand, we are now ready to prove our main result.
\begin{proof}[Proof of Theorem~\ref{mainthm}]
As mentioned earlier, the $0$-statement, namely that $G^{(r)}(n,p)$ does not contain a Hamiltonian $\ell$-cycle with high probability if $p < (1 - \ensuremath{\varepsilon})p^*_{r,\ell} (n)$, follows immediately from Lemma~\ref{expectation} and Markov's inequality.
We prove the $1$-statement, namely that $G^{(r)}(n,p)$ contains a Hamiltonian $\ell$-cycle with high probability if $p > (1 + \ensuremath{\varepsilon})p^*_{r,\ell} (n)$, by showing that the property of containing a Hamiltonian $\ell$-cycle has a sharp threshold, and that this sharp threshold must (asymptotically) necessarily be $p^*_{r,\ell}(n)$.
If $p > (1+\ensuremath{\varepsilon})p^*_{r,\ell} (n)$, then it follows from Lemma~\ref{variance} and the Paley--Zygmund inequality, i.e., Proposition~\ref{pz}, that $G^{(r)}(n,p)$ contains a Hamiltonian $\ell$-cycle with probability at least $\delta>0$ for some $\delta = \delta(\ensuremath{\varepsilon}, r, \ell)$; consequently, if the property of containing a Hamiltonian $\ell$-cycle has a sharp threshold, this sharp threshold is necessarily asymptotic to $p^*_{r, \ell}(n)$.
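For clarity, we recall the form of the Paley--Zygmund inequality being used here (the standard second-moment bound for a nonnegative random variable, which is what Proposition~\ref{pz} supplies):
\[\mathds{P}(X > 0) \ge \frac{\mathds{E}[X]^2}{\mathds{E}[X^2]},\]
so the estimate $\mathds{E}[X^2] = O(\mathds{E}[X]^2)$ from Lemma~\ref{variance} indeed yields a uniform lower bound $\delta > 0$ on the probability above.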
It remains to prove that the monotone $r$-graph property $W = (W_n)_{n \ge 0}$ of containing a Hamiltonian $\ell$-cycle has a sharp threshold, so suppose for the sake of a contradiction that~$W$ has a coarse threshold.
It follows from Proposition~\ref{st} that there is a fixed $r$-graph $F$ and a threshold function $\hat p = \hat p(n)$ with the property that for infinitely many $n \in \mathds{N}$, there is an $r$-graph $H_n \notin W_n$ on $n$ vertices such that adding a random copy of $F$ to $H_n$ is significantly more likely to make the resulting graph contain a Hamiltonian $\ell$-cycle than adding a random collection of edges of density about $\hat p$; concretely, for some universal constants $\alpha, \beta > 0$, we have
\begin{equation}\label{notboost} \mathds{P}(H_n \cup G^{(r)}(n,\beta \hat p) \in W_n ) < 1-2\alpha, \end{equation}
where the random $r$-graph $G^{(r)}(n, \beta \hat p)$ is taken to be on the same vertex set as $H_n$, and
\begin{equation}\label{boost} \mathds{P}(H_n \cup \tilde F \in W_n ) > 1-\alpha, \end{equation}
where $\tilde F$ denotes a random copy of $F$ on the same vertex set as $H_n$.
Now, the only way $F$ can help induce a Hamiltonian $\ell$-cycle in $H_n$ is through some sub-hypergraph of itself that appears in all large enough $\ell$-cycles, so by pigeonholing (and adding extra edges if necessary), we conclude from~\eqref{boost} that there exists a fixed $\ell$-path $P$, say with $k$ edges on $\ell + sk$ vertices, with the property that, for some universal constant $\gamma > 0$, we have
\[ \mathds{P}(H_n \cup \tilde P \in W_n ) > \gamma, \]
where $\tilde P$ again denotes a random copy of $P$ on the same vertex set as $H_n$. In other words, a positive fraction of all the possible ways to embed $P$ into the vertex set of $H_n$ are \emph{useful} and end up completing Hamiltonian $\ell$-cycles.
Since $\hat p$ is an asymptotic threshold for $W$, and $p^*_{r, \ell}$ is also an asymptotic threshold for $W$ (as can be read off from the proof of Lemma~\ref{variance}), we clearly have $\hat p(n) = \Theta(p^*_{r,\ell} (n)) = \Theta(n^{-s})$. On the other hand, the expected number of useful copies of $P$ created by the addition of a~$\beta \hat p = \Theta(n^{-s})$ density of random edges to $H_n$ is
\[\Omega\left(\binom{n}{\ell + sk}(n^{-s})^k\right) = \Omega\left(n^\ell\right),
\] and a routine application of the second moment method (indeed, $\ell$-paths are suitably `balanced') shows that adding a $\beta \hat p$ density of random edges to $H_n$ must, with high probability, create at least one useful copy of $P$ in $H_n$ and complete a Hamiltonian $\ell$-cycle, contradicting~\eqref{notboost}.
We have now shown that $W$ has a sharp threshold, and that this threshold must be asymptotic to $p^*_{r,\ell}(n)$; the $1$-statement follows, completing the proof.
\end{proof}
\section{Conclusion}\label{conc}
There are two basic questions that our work raises; we conclude this paper by discussing these problems.
First, now that we have identified the sharp threshold for the appearance of nonlinear Hamiltonian cycles, one can and should ask about the `width' of the critical window. Since the sharp threshold corresponds to the expectation threshold, we do not expect the hitting time to be of interest. Nonetheless, it is plausible that the expectation threshold is much sharper than what we have shown, and we conjecture the following.
\begin{conjecture}\label{window}
For all integers $r > \ell > 1$, if $p = p(n)$ is such that, as $n \to \infty$, we have $\mathds{E}[X_\ell] \to \infty$, then
\[ \mathds{P} \left( G^{(r)}(n,p) \text{ contains a Hamiltonian $\ell$-cycle} \right) \to 1, \]
where $X_\ell$ is the random variable counting the number of Hamiltonian $\ell$-cycles in $G^{(r)}(n,p)$.
\end{conjecture}
Second, it is natural to ask what happens for linear Hamiltonian cycles. The proof of Theorem~\ref{mainthm} shows that the appearance of a linear Hamiltonian cycle in $G^{(r)}(n,p)$ has a sharp threshold, and we expect this sharp threshold to coincide with the sharp threshold for the disappearance of isolated vertices (i.e., vertices not contained in any edges). For $r \ge 3$, writing
\[p^{\mathrm{deg}}_r(n) = \frac{(r-1)!\log n} { n^{r-1}}\] to denote the sharp threshold for the disappearance of isolated vertices in $G^{(r)}(n,p)$, we predict the following.
\begin{conjecture}\label{linear}
For each $r \ge 3$, $p^{\mathrm{deg}}_r(n)$ is the sharp threshold for the appearance of a linear Hamiltonian cycle in $G^{(r)}(n,p)$.
\end{conjecture}
In the case where $r=3$, Frieze~\cite{alan} showed that $p^{\mathrm{deg}}_3$ is a semi-sharp threshold for the appearance of a linear Hamiltonian cycle, and Dudek and Frieze~\cite{loose} showed that $p^{\mathrm{deg}}_r$ is an asymptotic threshold for the appearance of a linear Hamiltonian cycle for all $r\ge 3$. Of course, we expect much more than Conjecture~\ref{linear} to be true and naturally expect the hitting time for the appearance of a linear Hamiltonian cycle to coincide with the hitting time for the disappearance of isolated vertices, but even Conjecture~\ref{linear} appears to be out of the reach of existing techniques.
\section*{Acknowledgements}
The first author wishes to acknowledge support from NSF grant DMS-1800521.
\begin{bibdiv}
\begin{biblist}
\bib{ham4}{article}{
author={Ajtai, M.},
author={Koml\'os, J.},
author={Szemer\'edi, E.},
title={First occurrence of Hamilton cycles in random graphs},
conference={
title={Cycles in graphs},
address={Burnaby, B.C.},
date={1982},
},
book={
series={North-Holland Math. Stud.},
volume={115},
publisher={North-Holland, Amsterdam},
},
date={1985},
pages={173--178},
review={\MR{821516}},
}
\bib{ham3}{article}{
author={Bollob\'as, B\'ela},
title={The evolution of sparse graphs},
conference={
title={Graph theory and combinatorics},
address={Cambridge},
date={1983},
},
book={
publisher={Academic Press, London},
},
date={1984},
pages={35--57},
review={\MR{777163}},
}
\bib{thresh}{article}{
author={Bollob\'{a}s, B.},
author={Thomason, A.},
title={Threshold functions},
journal={Combinatorica},
volume={7},
date={1987},
number={1},
pages={35--38},
issn={0209-9683},
review={\MR{905149}},
doi={10.1007/BF02579198},
}
\bib{tight}{article}{
author={Dudek, Andrzej},
author={Frieze, Alan},
title={Tight Hamilton cycles in random uniform hypergraphs},
journal={Random Structures Algorithms},
volume={42},
date={2013},
number={3},
pages={374--385},
issn={1042-9832},
review={\MR{3039684}},
doi={10.1002/rsa.20404},
}
\bib{loose}{article}{
author={Dudek, Andrzej},
author={Frieze, Alan},
title={Loose Hamilton cycles in random uniform hypergraphs},
journal={Electron. J. Combin.},
volume={18},
date={2011},
number={1},
pages={Paper 48, 14},
issn={1077-8926},
review={\MR{2776824}},
}
\bib{friedgut1}{article}{
author={Friedgut, Ehud},
title={Sharp thresholds of graph properties, and the $k$-sat problem},
note={With an appendix by Jean Bourgain},
journal={J. Amer. Math. Soc.},
volume={12},
date={1999},
number={4},
pages={1017--1054},
issn={0894-0347},
review={\MR{1678031}},
doi={10.1090/S0894-0347-99-00305-7},
}
\bib{friedgut2}{article}{
author={Friedgut, Ehud},
title={Hunting for sharp thresholds},
journal={Random Structures Algorithms},
volume={26},
date={2005},
number={1-2},
pages={37--51},
issn={1042-9832},
review={\MR{2116574}},
doi={10.1002/rsa.20042},
}
\bib{alan}{article}{
author={Frieze, Alan},
title={Loose Hamilton cycles in random 3-uniform hypergraphs},
journal={Electron. J. Combin.},
volume={17},
date={2010},
number={1},
pages={Note 28, 4},
issn={1077-8926},
review={\MR{2651737}},
}
\bib{jkv}{article}{
author={Johansson, Anders},
author={Kahn, Jeff},
author={Vu, Van},
title={Factors in random graphs},
journal={Random Structures Algorithms},
volume={33},
date={2008},
number={1},
pages={1--28},
issn={1042-9832},
review={\MR{2428975}},
doi={10.1002/rsa.20224},
}
\bib{jeff}{misc}{
author={Kahn, J.},
note={Personal communication},
date={February 2018},
}
\bib{ham2}{article}{
author={Koml\'{o}s, J\'{a}nos},
author={Szemer\'{e}di, Endre},
title={Limit distribution for the existence of Hamiltonian cycles in a
random graph},
journal={Discrete Math.},
volume={43},
date={1983},
number={1},
pages={55--63},
issn={0012-365X},
review={\MR{680304}},
doi={10.1016/0012-365X(83)90021-3},
}
\bib{richard}{article}{
author={Montgomery, R.},
title={Spanning trees in random graphs},
note={Submitted},
eprint={1810.03299},
}
\bib{ham1}{article}{
author={P\'osa, L.},
title={Hamiltonian circuits in random graphs},
journal={Discrete Math.},
volume={14},
date={1976},
number={4},
pages={359--364},
issn={0012-365X},
review={\MR{0389666}},
}
\bib{oliver}{article}{
author={Riordan, Oliver},
title={Spanning subgraphs of random graphs},
journal={Combin. Probab. Comput.},
volume={9},
date={2000},
number={2},
pages={125--148},
issn={0963-5483},
review={\MR{1762785}},
}
\end{biblist}
\end{bibdiv}
\end{document}
\section{Introduction}
Normalization techniques~\cite{ba2016layer,ioffe2015batch,ulyanov2017instance,wu2018group} such as batch normalization (BN) \cite{ioffe2015batch} are indispensable components in deep neural networks (DNNs) \cite{he2016deep,huang2017densely}. They improve both learning and generalization capacity of DNNs.
Different normalizers have different properties.
For example, BN~\cite{ioffe2015batch} acts as a regularizer and improves generalization of a deep network \cite{luo2018understanding}.
Layer normalization (LN)~\cite{ba2016layer} accelerates the training of recurrent neural networks (RNNs) by stabilizing the hidden states in them.
Instance normalization (IN)~\cite{ulyanov2017instance} is able to filter out complex appearance variances~\cite{pan2018two}.
Group normalization (GN)~\cite{wu2018group} achieves stable accuracy in a wide range of batch sizes.
To further boost performance of DNNs, the recently-proposed Switchable Normalization (SN)
\cite{luo2018differentiable} offers a new viewpoint in deep learning: it learns importance ratios to compute the weighted average statistics of IN, BN and LN, so as to learn different combined normalizers for different convolution layers of a DNN.
SN is applicable in various computer vision problems and robust to a wide range of batch sizes.
Although SN has achieved great success,
it slows down inference, because each normalization layer is a combination of multiple normalizers.
To address the above issue, this work proposes \textit{Sparse Switchable Normalization} (SSN) that learns to select a single normalizer from a set of normalization methods for each convolution layer.
Instead of using $\ell_1$ and $\ell_0$ regularization to learn such sparse selection, which increases the difficulty of training deep networks, SSN turns this constrained optimization problem into feed-forward computations, making auto-differentiation applicable in most popular deep learning frameworks to train deep models with sparse constraints in an end-to-end manner.
In general, this work has three main \textbf{contributions}.
(1) We present Sparse Switchable Normalization (SSN) that learns to select a single normalizer for each normalization layer of a deep network to improve generalization ability and speed up inference compared to SN.
SSN inherits all advantages from SN, for example, it is applicable to many different tasks and robust to various batch sizes without any sensitive hyper-parameter.
(2) SSN is trained using a novel SparsestMax function that turns the sparse optimization problem into a simple forward propagation of a deep network. SparsestMax is an extension of softmax with a sparsity guarantee and is designed as a general technique to learn one-hot distributions. We provide geometric interpretations of it compared to its counterparts such as softmax and sparsemax~\cite{MartinsA16}.
(3) SSN is demonstrated in multiple computer vision tasks including image classification in ImageNet \cite{russakovsky2015imagenet}, semantic segmentation in Cityscapes \cite{cordts2016cityscapes} and ADE20K \cite{zhou2017scene}, and action recognition in Kinetics \cite{kay2017kinetics}.
Systematic experiments show that SSN with SparsestMax achieves performance comparable to or better than the other normalization methods.
\section{Sparse Switchable Normalization (SSN)}
\label{sec:second}
This section introduces SSN and SparsestMax.
\subsection{Formulation of SSN}
\label{sec:SSN}
We formulate SSN as
\begin{eqnarray}\label{eqn:sn}
&&\hspace{25pt}\hat{h}_{ncij}=\gamma\frac{h_{ncij}-\sum_{k=1}^{|\Omega|}p_k\mu_k}{\sqrt{\sum_{k=1}^{|\Omega|}p_k^\prime\sigma_k^2+\epsilon}}+\beta, \\\nonumber
&&\mathrm{ s.t.}~~\sum_{k=1}^{|\Omega|}p_k=1,~~\sum_{k=1}^{|\Omega|}p_k^\prime=1,~~\forall p_k, p_k^\prime \in\{0,1\}
\end{eqnarray}
where $h_{ncij}$ and $\hat{h}_{ncij}$ indicate a hidden pixel before and after normalization. The subscripts represent a pixel $(i,j)$ in the $c$-th channel of the $n$-th sample in a minibatch.
$\gamma$ and $\beta$ are a scale and a shift parameter respectively.
$\Omega=\{\mathrm{IN},\mathrm{BN},\mathrm{LN}\}$ is a set of normalizers.
$\mu_{k}$ and $\sigma_{k}^2$ are their means and variances, where $k\in\{1,2,3\}$ corresponds to different normalizers.
Moreover, $p_k$ and $p^\prime_k$ are importance ratios of mean and variance respectively.
We denote $\mathbf{p}=(p_1,p_2,p_3)$ and $\mathbf{p}^\prime=(p_1^\prime,p_2^\prime,p_3^\prime)$ as two vectors of ratios.
According to Eqn.\eqref{eqn:sn}, SSN is a normalizer with three constraints: $\|\mathbf{p}\|_1 = 1 $, $\|\mathbf{p}^\prime\|_1 = 1$, and $p_k, p_k^\prime\in\{0,1\}$ for all $k$.
These constraints encourage SSN to choose a single normalizer from $\Omega$ for each normalization layer.
If the sparse constraint $\forall p_k,p_k^\prime\in\{0,1\}$ is relaxed to a soft constraint $\forall p_k,p_k^\prime\in(0,1)$, SSN degenerates to SN \cite{luo2018differentiable}.
For example, the importance ratios $\mathbf{p}$ in SN can be learned using $\mathbf{p}=\mathrm{softmax}(\mathbf{z})$, where $\mathbf{z}$ are the learnable control parameters of a softmax function\footnote{The softmax function is defined by $p_k=\mathrm{softmax}_k(\mathbf{z})=\exp (z_k)/\sum_{k=1}^{|\Omega|}\exp (z_k)$.} and $\mathbf{z}$ can be optimized using back-propagation (BP).
Such slackness has been extensively employed in existing works
\cite{jang2016categorical,liu2018darts}.
However, softmax does not satisfy the sparse constraint in SSN.
\begin{figure}
\begin{center}
\includegraphics[width=1.0\linewidth]{figure1_5.pdf}
\end{center}
\caption{\textbf{Comparisons of softmax, sparsemax and SparsestMax.}
$\mathbf{O}$ is the origin of $\mathbb{R}^3$. The regular triangle denotes a 2-D simplex $\triangle^2$ embedded into $\mathbb{R}^3$. $\mathbf{u}$ is the center of the simplex. The cubes represent feature maps whose dimension is $N\times C\times H\times W$.
We represent IN, BN and LN by coloring different dimensions of those cubes. Each vertex represents one of the three normalizers. As shown in the upper plot, the output of softmax is closer to $\mathbf{u}$ than those of sparsemax and SparsestMax. SparsestMax makes the importance ratios converge to one of the vertices of the simplex in an end-to-end manner, selecting only one normalizer from the three normalization methods.}
\label{fig:figure1}
\end{figure}
\textbf{Requirements.} Let $\mathbf{p}=f(\mathbf{z})$ be the function that produces $\mathbf{p}$ in SSN. Before presenting its formulation, we introduce four requirements on $f(\mathbf{z})$ that make SSN as effective and easy to use as possible.
(1) \textbf{\emph{Unit length.}} The $\ell_1$ norm of $\mathbf{p}$ is $1$ and $p_k\geq0$ for all $k$.
(2) \textbf{\emph{Completely sparse ratios.}} $\mathbf{p}$ is completely sparse. In other words, $f(\mathbf{z})$ is required to return a one-hot vector where only one entry is $1$ and the others are $0$s.
(3) \textbf{\emph{Easy to use.}} SSN can be implemented as a module and easily plugged into any network and task. To achieve this, all the constraints of $\mathbf{p}$ have to be satisfied and implemented in a forward pass of a network.
This is different from adding an $\ell_0$ or $\ell_1$ penalty to the loss function, which makes model development cumbersome because the coefficients of these penalties are often sensitive to batch sizes, network architectures, and tasks.
(4) \textbf{\emph{Stability.}} The optimization of $\mathbf{p}$ should be stable, meaning that
$f(\cdot)$ should be capable of maintaining sparsity in the training phase.
For example, training is difficult if $f(\cdot)$ returns one normalizer in the current step and another one in the next step.
\noindent\textbf{Softmax and sparsemax?}
Two related functions are softmax and sparsemax, but they do not satisfy all the above requirements.
Firstly, $\mathrm{softmax}(\mathbf{z})$ is employed in SN \cite{luo2018differentiable}.
However, its output always has full support, that is, $p_k=\mathrm{softmax}_k(\mathbf{z})\neq 0$, where $\mathrm{softmax}_k(\cdot)$ indicates the $k$-th element, implying that the selection of normalizers in SN is not sparse.
Secondly, another candidate is sparsemax~\cite{MartinsA16}, which extends softmax to produce a sparse distribution.
The $\mathrm{sparsemax}(\mathbf{z})$ projects $\mathbf{z}$ to its closest point $\mathbf{p}$ on a ($K$-1)-dimensional simplex by minimizing the Euclidean distance between $\mathbf{p}$ and $\mathbf{z}$,
\begin{equation}\label{eqn:sparsemax}
\mathrm{sparsemax}(\mathbf{z}):=\underset{\mathbf{p}\in\triangle^{K-1}}{\mathrm{argmin}}\left\|\mathbf{p}-\mathbf{z}\right\|_2^2,
\end{equation}
where $\triangle^{K-1}$ denotes a ($K$-1)-D simplex that is a convex polyhedron containing $K$ vertices.
We have $\triangle^{K-1}:=\{\mathbf{p} \in \mathbb{R}^K|\mathbf{1}{^{\mkern-1.5mu\mathsf{T}}} \mathbf{p}=1,\mathbf{p}\geq \mathbf{0}\}$ where $\mathbf{1}$ is a vector of ones.
For example, when $K=3$, $\triangle^{2}$ represents a 2-D simplex that is
a regular triangle.
The vertices of the triangle indicate BN, IN, and LN respectively as shown in Fig.\ref{fig:figure1}.
Comparing softmax and sparsemax at the top of Fig.\ref{fig:figure1} for the same $\mathbf{z}$, the output $\mathbf{p}$ of softmax (yellow dot) is closer to $\mathbf{u}$ (the center of the simplex) than that of sparsemax (blue dot). In other words, sparsemax produces $\mathbf{p}$ closer to the boundary of the simplex than softmax does, implying that sparsemax produces sparser ratios. Take $\mathbf{z}=(0.8,0.6,0.1)$ as an example: $\mathrm{softmax}(\mathbf{z})=(0.43,0.35,0.21)$ while $\mathrm{sparsemax}(\mathbf{z})=(0.6,0.4,0)$, showing that sparsemax is likely to make some elements of $\mathbf{p}$ exactly zero. For compactness, the evaluation procedure of sparsemax \cite{held1974validation,MartinsA16} is provided in Section A of the Appendix.
However, completely sparse ratios cannot be guaranteed because every point on the simplex could be a solution of Eqn.\eqref{eqn:sparsemax}.
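As a concrete reference, the following NumPy sketch implements this Euclidean projection with the standard $\mathcal{O}(K\mathrm{log}\,K)$ sort-based procedure of \cite{held1974validation} and reproduces the example above; the function name and the use of NumPy are our own illustrative choices rather than part of any released implementation.
\begin{verbatim}
import numpy as np

def sparsemax(z):
    # Euclidean projection of z onto the probability simplex
    # (sort-based O(K log K) algorithm).
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]           # sort in decreasing order
    cssv = np.cumsum(z_sorted) - 1.0      # cumulative sums minus total mass 1
    k = np.arange(1, z.size + 1)
    rho = k[z_sorted - cssv / k > 0][-1]  # size of the support
    tau = cssv[rho - 1] / rho             # threshold subtracted from z
    return np.maximum(z - tau, 0.0)

print(sparsemax([0.8, 0.6, 0.1]))         # [0.6 0.4 0. ]
\end{verbatim}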
\subsection{SparsestMax}
To satisfy all the constraints as discussed above, we introduce {\textit{SparsestMax}}, which is a novel sparse version of the softmax function.
The SparsestMax function is defined by
\begin{equation}\label{eqn:SparsestMax}
\mathrm{SparsestMax}(\mathbf{z};r):=\underset{\mathbf{p}\in\triangle^{K-1}_r}{\mathrm{argmin}}\left\|\mathbf{p}-\mathbf{z}\right\|_2^2,
\end{equation}
where $\triangle^{K-1}_r:=\{\mathbf{p} \in \mathbb R^K|\mathbf{1}{^{\mkern-1.5mu\mathsf{T}}} \mathbf{p}=1, \left\|\mathbf{p}-\mathbf{u}\right\|_2\geq r, \mathbf{p}\geq \mathbf{0}\}$ is a simplex with the circular constraint $\left\|\mathbf{p}-\mathbf{u}\right\|_2\geq r, \mathbf{1}{^{\mkern-1.5mu\mathsf{T}}} \mathbf{p}=1$. Here $\mathbf{u}=\frac{1}{K}\mathbf{1}$ is the center of the simplex, $\mathbf{1}$ is a vector of ones, and $r$ is the radius of the circle.
Compared to sparsemax, SparsestMax introduces the circular constraint $\left\|\mathbf{p}-\mathbf{u}\right\|_2\geq r,\, \mathbf{1}{^{\mkern-1.5mu\mathsf{T}}} \mathbf{p}=1$, which has an intuitive geometric meaning. Unlike sparsemax, whose solution space is $\triangle^{K-1}$, the solution space of SparsestMax is the simplex with the open disk of radius $r$ centered at $\mathbf{u}$ removed.
In order to satisfy the completely sparse requirement, we linearly increase $r$ from zero to $r_c$ in the training phase. $r_c$ is the radius of a circumcircle of the simplex.
To understand the important role of $r$, we emphasize two cases.
If $r\leq \left\|\mathbf{p}_0-\mathbf{u}\right\|_2$, where $\mathbf{p}_0$ is the output of sparsemax, then $\mathbf{p}_0$ is also the solution of Eqn.(\ref{eqn:SparsestMax}), because $\mathbf{p}_0$ already satisfies the circular constraint.
When $r=r_c$,
the solution space of Eqn.(\ref{eqn:SparsestMax}) contains only $K$ vertices of the simplex, making $\mathrm{SparsestMax}(\mathbf{z};r_c)$ completely sparse.
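The critical radii have closed forms. For a vertex $\mathbf{e}_i$ of $\triangle^{K-1}$ (a standard basis vector) and the center $\mathbf{u}=\frac{1}{K}\mathbf{1}$, we have
\[r_c^2=\left\|\mathbf{e}_i-\mathbf{u}\right\|_2^2=\Big(1-\frac{1}{K}\Big)^2+\frac{K-1}{K^2}=\frac{K-1}{K},\]
while the inradius (the distance from $\mathbf{u}$ to the center of a facet) satisfies $r_i^2=\frac{1}{K(K-1)}$. For $K=3$ this gives $r_c=\sqrt{2/3}\approx0.816$ and $r_i=\sqrt{6}/6\approx0.408$, the values used later in this paper.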
\textbf{An example.} Fig.\ref{fig:figure2}(a-f) illustrate a concrete example in the case of $K=3$ and $\mathbf{z}=(0.5,0.3,0.2)$.
We can see that the output of softmax is more uniform than sparsemax and SparsestMax, and SparsestMax produces increasingly sparse output as $r$ grows.
With the radius $r$ gradually increasing in the training phase, the computation of SparsestMax proceeds through the following stages.
\textbf{Stage 1.} As shown in Fig.\ref{fig:figure2}(b,c), the solution of sparsemax is $\mathbf{p}_0=(0.5,0.3,0.2)$ given $\mathbf{z}=(0.5,0.3,0.2)$. When $r=0.15$,
$\mathbf{p}_0$ satisfies the constraint $\left\|\mathbf{p}_0-\mathbf{u}\right\|_2 \geq r$. Therefore, $\mathbf{p}_0$ is also the solution of SparsestMax.
In this case, SparsestMax is computed the same as sparsemax to return the optimal ratios.
\textbf{Stage 2.} As illustrated in Fig.\ref{fig:figure2}(d), when $r$ increases to $0.3$ and thus $\left\|\mathbf{p}_0-\mathbf{u}\right\|_2 < r$ when $\mathbf{p}_0=(0.5,0.3,0.2)$, it implies that the circular constraint is not satisfied.
In this case, SparsestMax returns the point $\mathbf{p}_1$ on the circle, which is computed by projecting $\mathbf{p}_0$ onto the circle, that is, $\mathbf{p}_1=r\frac{\mathbf{p}_0-\mathbf{u}}{\left\|\mathbf{p}_0-\mathbf{u}\right\|_2}+\mathbf{u}=(0.56,0.29,0.15)$ as the output.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{figure3_5.pdf}
\end{center}
\caption{Illustration of (a) softmax, (b) sparsemax, (c-f) SparsestMax when $K=3$ and (g-i) SparsestMax when $K=4$. $\mathbf{u}$ is the center of the simplex. $\mathbf{u}=(\frac{1}{3},\frac{1}{3},\frac{1}{3})$ for $K=3$ and $\mathbf{u}=(\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{1}{4})$ for $K=4$. Given $\mathbf{z}=(0.5,0.3,0.2)$, (a) and (b) show that the outputs of softmax and sparsemax are $\mathbf{p}=(0.39,0.32,0.29)$ and $\mathbf{p}=(0.5,0.3,0.2)$ respectively. (c-f) show that the results of SparsestMax for $r=0.15, 0.3, 0.6$ and $0.816$ are $\mathbf{p}_0=(0.5,0.3,0.2)$, $\mathbf{p}_1=(0.56,0.29,0.15)$, $\mathbf{p}_3=(0.81,0.19,0)$ and $\mathbf{p}_3=(1,0,0)$ respectively when $K=3$; concrete calculations are given in \textbf{Stages 1-4}. When $K=4$, given $\mathbf{z}=(0.3,0.25,0.23,0.22)$, the outputs of (g-i) are $\mathbf{p}_1=(0.49,0.25,0.15,0.11)$, $\mathbf{p}_3=(0.75,0.23,0.02,0)$ and $\mathbf{p}_3=(1,0,0,0)$ for $r=0.3, 0.6$ and $0.866$ respectively. All $\mathbf{p}_2$s are obtained by $\mathbf{p}_2=\mathrm{sparsemax} (\mathbf{p}_1)$. (e) and (f) show that when $\mathbf{p}_1$ is outside of the simplex $\triangle^{K-1}$, the projection space reduces to $\triangle^{K-2}$ for $K=3$ and $K=4$.}
\label{fig:figure2}
\end{figure}
\textbf{Stage 3.} As shown in Fig.\ref{fig:figure2}(e), when $r=0.6$, $\mathbf{p}_1$ moves out of the simplex. In this case, $\mathbf{p}_1$ is projected back to the closest point on the simplex, that is $\mathbf{p}_2$, which is then pushed to $\mathbf{p}_3$ by the SparsestMax function using
\begin{equation}\label{eqn:reprjball}
\mathbf{p}_3=r^\prime\frac{\mathbf{p}_2-\mathbf{u}^\prime}{\left\|\mathbf{p}_2-\mathbf{u}^\prime\right\|_2}+\mathbf{u}^\prime,
\end{equation}
where $u_i^\prime =\frac{1}{2}\max\{\mathrm{sign}((\mathbf{p}_1)_i),0\},\, i=1,2,3$ (so that $\mathbf{u}^\prime$ has entries $\frac{1}{2}$ on the positive support of $\mathbf{p}_1$ and $0$ elsewhere), $\mathbf{p}_2=\mathrm{sparsemax} (\mathbf{p}_1)$ and $r^\prime=\sqrt{r^2-\left\|\mathbf{u}-\mathbf{u}^\prime\right\|_2^2}$.
In fact, $\mathbf{p}_2$ lies on $\triangle^1$, $\mathbf{u}^\prime$ is the center of $\triangle^1$ and $\triangle^1$ is one of the three edges of $\triangle^2$. Eqn.(\ref{eqn:reprjball}) represents the projection from $\mathbf{p}_2$ to $\mathbf{p}_3$.
We have $\mathbf{p}_3=(0.81,0.19,0)$ as the output. It is noteworthy that when $\mathbf{p}_1$ is out of the simplex, $\mathbf{p}_3$ is a point of intersection of the simplex and the circle, and $\mathbf{p}_3$ can be determined by sorting $\mathbf{p}_0$. In this way, Eqn.(\ref{eqn:reprjball}) could be equivalently replaced by an argmax function. However, Eqn.(\ref{eqn:reprjball}) shows a great advantage in differentiable learning of the parameter $\mathbf{z}$ when $K>3$.
\textbf{Stage 4.} As shown in Fig.\ref{fig:figure2}(f), the circle becomes the circumcircle of the simplex when $r=r_c=0.816$ for $K=3$, and $\mathbf{p}_3$ moves to one of the three vertices, namely the vertex closest to $\mathbf{p}_0$. In this case, we have $\mathbf{p}_3=(1,0,0)$ as the output.
\textbf{Implementation.}
In fact, Eqn.(\ref{eqn:SparsestMax}) is an optimization problem with both linear and nonlinear constraints.
The above four stages can be rigorously derived from KKT conditions of the optimization problem.
The concrete evaluation procedure of $\mathrm{SparsestMax}$ in the case where $K=3$ is presented in Algorithm \ref{alg:algone}.
The runtime of Algorithm \ref{alg:algone} mainly depends on the evaluation of sparsemax \cite{van2008probing} (line 1).
As for SSN, we adopt a $\mathcal{O}(K\mathrm{log}\,K)$ algorithm \cite{held1974validation} to evaluate sparsemax.
SparsestMax can be easily implemented using popular deep learning frameworks such as PyTorch~\cite{paszke2017automatic}.
\subsection{Discussions}
\textbf{Properties of SparsestMax.}
The SparsestMax function satisfies all four requirements discussed before.
Since the radius $r$ increases from 0 to $r_c$ as training progresses,
the solution space of Eqn.(\ref{eqn:SparsestMax}) shrinks to the three vertices of the simplex, returning the ratios as a one-hot vector.
The first two requirements are therefore guaranteed once training converges.
For the third requirement, SparsestMax is performed in a single forward pass of a deep network, instead of introducing an additional sparse regularization term into the loss function, whose strength would be difficult to tune.
\textbf{Stability of Sparsity.} We explain that training SSN with SparsestMax is stable, satisfying the fourth requirement.
In general, once $p_k=\mathrm{SparsestMax}_k(\mathbf{z};r)=0$ for some $k$, the derivative of the loss function wrt. $z_k$ is zero by the chain rule, as shown in Section B of the Appendix.
This property explicitly reveals that
once an element of $\mathbf{p}$ becomes 0, it will not `wake up' in the succeeding training phase, which is a great advantage for maintaining sparsity during training.
We examine the above property for different stages as discussed before.
Here, we denote $\mathbf{p}-\mathbf{u}$ and $\left\|\mathbf{p}-\mathbf{u}\right\|_2$ as `\textit{sparse direction}' and `\textit{sparse distance}' respectively.
The situation where $p_k = 0$ occurs only in Stage 1 and Stage 3.
In stage 1, SparsestMax becomes sparsemax \cite{MartinsA16}, which indicates that if $p_k=0$, the $k$-th component in $\mathbf{p}$ is much less important than the others. Therefore, stopping learning $p_k$ is reasonable.
In stage 3, $p_k=0$ occurs when $\mathbf{p}_0$ moves to $\mathbf{p}_1$ and then $\mathbf{p}_2$.
In this case, we claim that $\mathbf{p}_1$ has learned a good sparse direction before it moves out of the simplex.
To see this, suppose $\left\|\mathbf{p}_0-\mathbf{u}\right\|_2<r$ and $\mathbf{p}_1\geq \mathbf{0}$, and let $\mathbf{g}_1$ be the gradient of the loss function with respect to $\mathbf{p}_1$ during back-propagation.
We can compute $g_0^d$, the directional derivative of the loss at $\mathbf{p}_0$ in the direction $\mathbf{p}_0-\mathbf{u}$. We have
\begin{equation}\label{eqn:dirg}
\begin{split}
g_0^d &=\mathbf{g}_1{^{\mkern-1.5mu\mathsf{T}}}\frac{\partial \mathbf{p}_1}{\partial \mathbf{p}_0}\, (\mathbf{p}_0-\mathbf{u})\\
&=r\,\mathbf{g}_1{^{\mkern-1.5mu\mathsf{T}}}\frac{\left\|\mathbf{p}_0-\mathbf{u}\right\|_2^2\,\mathbf{I}-(\mathbf{p}_0-\mathbf{u}) (\mathbf{p}_0-\mathbf{u}){^{\mkern-1.5mu\mathsf{T}}}}{\left\|\mathbf{p}_0-\mathbf{u}\right\|_2^{3}} (\mathbf{p}_0-\mathbf{u})\\
&=0.
\end{split}
\end{equation}
Eqn.(\ref{eqn:dirg}) suggests that SGD would learn the sparse direction regardless of the sparse distance.
In other words, the importance ratios in SSN do not need to learn the sparse distance. They focus on updating the sparse direction to regulate the relative magnitudes of IN, BN, and LN in each training step.
This property intuitively reduces the difficulty of training the importance ratios.
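The geometric reason is simply that the Stage-2 projection is constant along each ray emanating from $\mathbf{u}$; the following small NumPy check (our own illustrative sketch) makes this concrete:
\begin{verbatim}
import numpy as np

u  = np.full(3, 1.0 / 3.0)
p0 = np.array([0.5, 0.3, 0.2])
r  = 0.3

def project_to_circle(p, r, u):
    # Stage 2: p1 = r * (p - u) / ||p - u|| + u
    d = p - u
    return r * d / np.linalg.norm(d) + u

p1 = project_to_circle(p0, r, u)
# perturb p0 radially, i.e., along p0 - u: the projection is unchanged,
# so the directional derivative of any loss in this direction vanishes
p1_shifted = project_to_circle(p0 + 1e-3 * (p0 - u), r, u)
print(np.allclose(p1, p1_shifted))   # True
\end{verbatim}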
\begin{algorithm}[t]
\caption{SSN with SparsestMax when $K=3$.}
\label{alg:algone}
\begin{algorithmic}[1]
\Require
$\mathbf{z}, \mathbf{z}^\prime, \mathbf{u}, r, \mu_k, \sigma_k^2$
\Comment{$r$ increases from zero to $r_c$ in the training stage; $\mu_k$ and $\sigma_k^2$ denote the means and variances of the different normalizers, $k\in\{1,2,3\}$}
\Ensure
$\mu, \sigma^2$
\Comment{mean and variance in SSN}
\State $\mathbf{p}_0=\mathrm{sparsemax} (\mathbf{z})$
\State if $\left\|\mathbf{p}_0-\mathbf{u}\right\|_2 \geq r$ \, then
\State \quad $\mathbf{p}=\mathbf{p}_0$
\State else $\mathbf{p}_1=r\frac{\mathbf{p}_0-\mathbf{u}}{\left\|\mathbf{p}_0-\mathbf{u}\right\|_2}+\mathbf{u}$
\State \quad if $\mathbf{p}_1\geq \mathbf{0}$, \, then
\State \qquad $\mathbf{p}=\mathbf{p}_1$
\State \quad else compute $\mathbf{u}^\prime, r^\prime$ and $\mathbf{p}_2$
\Comment{see Stage 3}
\State \qquad $\mathbf{p}=r^\prime\frac{\mathbf{p}_2-\mathbf{u}^\prime}{\left\|\mathbf{p}_2-\mathbf{u}^\prime\right\|_2}+\mathbf{u}^\prime$
\State \quad end\, if
\State end\,if\\
\Return $\mu=\sum_{k=1}^{3}p_k\mu_k, \sigma^2=\sum_{k=1}^{3}p_k^\prime\sigma_k^2$
\Comment{$\mathbf{p}^\prime$ is computed the same as $\mathbf{p}$}
\end{algorithmic}
\end{algorithm}
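As a sanity check, a direct NumPy transcription of Algorithm \ref{alg:algone} (our own sketch, reusing the \texttt{sparsemax} function from the earlier sketch and taking $\mathbf{u}^\prime$ as the center of the positive support of $\mathbf{p}_1$) reproduces the values reported in Fig.\ref{fig:figure2}(c-f):
\begin{verbatim}
import numpy as np   # assumes sparsemax() from the earlier sketch

def sparsestmax3(z, r):
    u = np.full(3, 1.0 / 3.0)
    p0 = sparsemax(z)
    if np.linalg.norm(p0 - u) >= r:
        return p0                                # Stage 1
    p1 = r * (p0 - u) / np.linalg.norm(p0 - u) + u
    if np.all(p1 >= 0):
        return p1                                # Stage 2
    p2 = sparsemax(p1)                           # Stage 3
    u2 = (p1 > 0) / np.count_nonzero(p1 > 0)     # center of the positive face
    r2 = np.sqrt(r ** 2 - np.linalg.norm(u - u2) ** 2)
    return r2 * (p2 - u2) / np.linalg.norm(p2 - u2) + u2

z = np.array([0.5, 0.3, 0.2])
for r in (0.15, 0.3, 0.6, np.sqrt(2.0 / 3.0)):   # the last value is r_c
    print(np.round(sparsestmax3(z, r), 2))
# [0.5 0.3 0.2], [0.56 0.29 0.15], [0.81 0.19 0.], [1. 0. 0.]
\end{verbatim}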
\textbf{Efficient Computations.} Let $L$ be the total number of normalization layers of a deep network. In the training phase, the computational complexity of SparsestMax over the entire network is $\mathcal{O}(LK\mathrm{log}\, K)$, which is comparable to that of softmax, $\mathcal{O}(LK)$, in SN when $K=3$.
However, SSN learns a completely sparse selection of normalizers, making it faster than SN in the testing phase.
Unlike SN, which needs to estimate the statistics of IN, BN, and LN in every normalization layer, SSN computes the statistics of only one normalizer.
On the other hand, the BN layers selected by SSN become linear transformations at inference time and can be merged into the preceding convolution layers, which further reduces computation, as sketched below.
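For the merging step, the standard rewrite is to absorb BN's affine transform into the preceding convolution; a minimal PyTorch-style sketch (assuming the usual \texttt{Conv2d}/\texttt{BatchNorm2d} attributes and inference-time running statistics) is:
\begin{verbatim}
import torch

@torch.no_grad()
def fold_bn_into_conv(conv, bn):
    # y = gamma * (conv(x) - mu) / sqrt(var + eps) + beta
    #   = conv'(x), with rescaled weights and a new bias
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    conv.weight.mul_(scale.view(-1, 1, 1, 1))
    bias = conv.bias if conv.bias is not None \
        else torch.zeros_like(bn.running_mean)
    conv.bias = torch.nn.Parameter((bias - bn.running_mean) * scale + bn.bias)
    return conv
\end{verbatim}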
\textbf{Extend to $\mathbf{K=n}$.}
SparsestMax can be generalized to the case where $K=n$. As discussed before, it produces a one-hot vector under the guidance of an increasing circle. SparsestMax works by inheriting the merits of sparsemax and learning a good sparse direction.
By repeating this step, the projection space degenerates, ultimately leading to a one-hot distribution. Fig.\ref{fig:figure2} (g-i) visualizes Stages 2--4 when $K=4$. We list skeleton pseudo-code for SparsestMax when $K=n$ in Algorithm \ref{algsec}.
\section{Relation with Previous Work}
As one of the most significant components of deep neural networks, normalization techniques~\cite{ioffe2015batch,ba2016layer,ulyanov2017instance,wu2018group,luo2018differentiable} have attracted much attention in recent years.
These methods can be categorized into two groups:
methods normalizing activation over feature space such as \cite{ioffe2015batch,ba2016layer,ulyanov2017instance,wu2018group}
and methods normalizing weights over the parameter space like \cite{salimans2016weight, miyato2018spectral}.
All of them show that normalization methods make great contributions to stabilizing training and boosting the performance of DNNs.
A recent study of IBN~\cite{pan2018two} shows that a hybrid of multiple normalizers in a neural network can greatly strengthen its generalization ability.
A more general case named Switchable Normalization (SN)~\cite{luo2018differentiable} is also proposed to select different normalizer combinations for different normalization layers.
Inspired by this work, we propose \textit{\textbf{SSN}} where the importance ratios are constrained to be completely sparse, while inheriting all benefits from SN at the same time.
Moreover, the one-hot output of importance ratios alleviates overfitting in the training stage and removes the redundant computations in the inference stage.
Other work focusing on the sparsity of parameters in DNN is also related to this article.
In \cite{scardapane2017group}, a group Lasso penalty is adopted to impose group-level sparsity on the network's connections, but this approach can hardly satisfy our standardization constraint, \ie, that the sum of the importance ratios in each layer equals one.
Bayesian compression~\cite{louizos2017learning} includes a set of non-negative stochastic gates to determine which weights are zero, making a re-parameterized $\ell_0$ penalty differentiable. However, such a regularization term makes the model less accurate when applied to our setting, where the required $\ell_0$ norm is exactly one.
Alternatively, sparsemax that preserves most of the attractive properties of softmax is proposed in~\cite{MartinsA16} to generate sparse distribution, but this distribution is usually not completely sparse.
This paper introduces \textit{SparsestMax}, which adds a circular constraint on sparsemax to achieve the goal of SSN.
It learns the sparse direction regardless of sparse distance in the training phase,
and guarantees to activate only one control parameter.
It can be embedded as a general component to any end-to-end training architectures to learn one-hot distribution.
\begin{algorithm}[t]
\caption{SparsestMax for $K=n$}
\label{algsec}
\begin{algorithmic}[1]
\Require
$\mathbf{z}, \, \mathbf{u}, \, r$
\Ensure
$\mathbf{p}=\mathrm{SparsestMax} (\mathbf{z},r,\mathbf{u})$
\State $\mathbf{p}_0=\mathrm{sparsemax} (\mathbf{z})$
\State if $\left\|\mathbf{p}_0-\mathbf{u}\right\|_2 \geq r$ \, then
\State \quad $\mathbf{p}=\mathbf{p}_0$
\State else $\mathbf{p}_1=r\frac{\mathbf{p}_0-\mathbf{u}}{\left\|\mathbf{p}_0-\mathbf{u}\right\|_2}+\mathbf{u}$
\State \quad if $\mathbf{p}_1\geq \mathbf{0}$, \, then
\State \qquad $\mathbf{p}=\mathbf{p}_1$
\State \quad else compute $\mathbf{u}^\prime, r^\prime$ and $\mathbf{p}_2$
\Comment{see Stage 3}
\State \qquad $\mathbf{z}=\mathbf{p}_2$, $\mathbf{p}=\mathrm{SparsestMax} (\mathbf{z},r^\prime,\mathbf{u}^\prime)$
\State \quad end\, if
\State end\,if\\
\Return $\mathbf{p}$
\end{algorithmic}
\end{algorithm}
\section{Experiments}\label{sec:experiment}
In this section, we apply SSN to several benchmarks including image classification, semantic segmentation and action recognition. We show its advantages in both performance and inference speed compared with existing normalization methods.
\subsection{Image Classification in ImageNet}
In our experiments, we first evaluate SSN in the ImageNet classification dataset~\cite{russakovsky2015imagenet}, which has 1.28M training images and 50k validation images with 1000 categories. All classification results are evaluated on the 224$\times$224 pixels center crop of images in validation set, whose short sides are rescaled to 256 pixels.
\textbf{Implementation details.} All models are trained using 8 GPUs; we denote the batch size as the number of images on a single GPU, and the mean and variance of BN are calculated within each GPU. For convolution layers, we follow the initialization method used by \cite{he2016deep}. Following~\cite{goyal2017accurate}, we initialize $\gamma$ to 0 for the last normalization layer in each residual block and to 1 for all other normalization layers. The learnable control parameters $\mathbf{z}$ in SSN are initialized as 1. SGD with momentum is used for all parameters, while the learning rate of $\mathbf{z}$ is 1/10 that of the other parameters. We also apply a weight decay of 0.0001 to all parameters except $\mathbf{z}$. We train all models for 100 epochs and decrease the learning rate by 10$\times$ at 30, 60 and 90 epochs. By default, the radius $r$ of the circular constraint increases linearly from 0 to 1 during the whole training process, and $\mathbf{z}$ stops updating once its importance ratios become completely sparse.
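For reference, a minimal PyTorch-style sketch of the forward pass of Eqn.(\ref{eqn:sn}) is given below; it is only illustrative (our own variable names; running statistics for inference and the SparsestMax computation of $(\mathbf{p},\mathbf{p}^\prime)$ are omitted):
\begin{verbatim}
import torch

def ssn_forward(x, p, p_var, gamma, beta, eps=1e-5):
    # x: (N, C, H, W); p, p_var: importance ratios over (IN, BN, LN),
    # which become one-hot once SparsestMax is completely sparse
    stats = []
    for dims in ((2, 3), (0, 2, 3), (1, 2, 3)):   # IN, BN, LN axes
        mu = x.mean(dim=dims, keepdim=True)
        var = x.var(dim=dims, keepdim=True, unbiased=False)
        stats.append((mu, var))
    mu  = sum(pk * m for pk, (m, _) in zip(p, stats))
    var = sum(pk * v for pk, (_, v) in zip(p_var, stats))
    x_hat = (x - mu) / torch.sqrt(var + eps)
    return gamma.view(1, -1, 1, 1) * x_hat + beta.view(1, -1, 1, 1)
\end{verbatim}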
\textbf{Comparison with other normalization methods.} We evaluate all normalization methods using ResNet-50~\cite{he2016deep} with a regular batch size of 32 images per GPU. Table~\ref{tab:different_norms} shows that IN and LN achieve 71.6\% and 74.7\% top-1 accuracy respectively, indicating that they are unsuitable for the image classification task. BN works quite well in this setting, reaching 76.4\% top-1 accuracy. SN combines the advantages of IN, LN and BN and outperforms BN by 0.5\%. Different from SN, SSN selects exactly one normalizer for each normalization layer, introducing stronger regularization and outperforming SN by 0.3\%. Fig.\ref{fig:SSNcurve} shows that SSN has lower training accuracy than SN while maintaining higher validation accuracy.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\linewidth]{ssn_sn_curve.pdf}
\end{center}
\caption{\textbf{Training and validation curves of SN and SSN} with a batch size of 32 images/GPU.}
\label{fig:SSNcurve}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{c | c c c c c c c c}
\hline
& IN & LN & BN & GN & SN & SSN \\
\hline
top-1 & 71.6 & 74.7 & 76.4 & 75.9 & 76.9 & \textbf{77.2} \\
$\Delta$ \vs BN & -4.8 & -1.7 & - & -0.5 & 0.5 & \textbf{0.8} \\
\hline
\end{tabular}
\end{center}
\caption{\textbf{Comparisons of top-1 accuracy(\%)} of ResNet-50 in ImageNet validation set. All models are trained with a batch size of 32 images/GPU. The second row shows the accuracy differences between BN and other normalization methods.}
\label{tab:different_norms}
\end{table}
\textbf{Different batch sizes.} For training with different batch sizes, we adopt the learning rate scaling rule from \cite{goyal2017accurate}: the initial learning rate is 0.1 for a batch size of 32, and 0.1N/32 for a batch size of N. The performance of BN decreases from $76.4\%$ to $65.3\%$ when the batch size decreases from 32 to 2 because of the larger uncertainty of its statistics. While GN and SN are less sensitive to the batch size, SSN outperforms both in all batch size settings, indicating that SSN is robust to the batch size. The top-1 accuracies are reported in Table~\ref{tab:different_batchsize}. In Fig.\ref{fig:SSN-normselect}, we visualize the normalizer selection distribution of SSN for different batch sizes. Our results show that the network prefers BN at larger batch sizes and LN at smaller ones. We also observe that the importance ratio distributions of $\mu$ and $\sigma$ are generally different, which is consistent with the studies in \cite{luo2018understanding,teye2018bayesian}. At the same time, SSN has a more distinct importance ratio distribution than SN.
\begin{figure}
\begin{center}
\includegraphics[width=1\linewidth]{mean_weight.pdf}
(a) importance ratios distribution of $\mu$
\includegraphics[width=1\linewidth]{var_weight.pdf}
(b) importance ratios distribution of $\sigma$
\end{center}
\caption{\textbf{Comparison of importance ratio distributions} between SN and SSN. The model here is ResNet-50 with different batch sizes. (a) visualizes the importance ratio distributions of the mean and (b) shows those of the variance. The x-axis denotes the batch size. The SSN distributions are shaded.}
\label{fig:SSN-normselect}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{c|c c c c c}
\hline
batch size & 32 & 16 & 8 & 4 & 2 \\
\hline
BN & 76.4 & 76.3 & 75.2 & 72.7 & 65.3 \\
GN & 75.9 & 75.8 & 76.0 & 75.8 & \textbf{75.9} \\
SN & 76.9 & 76.7 & 76.7 & 75.9 & 75.6 \\
SSN & \textbf{77.2} & \textbf{77.0} & \textbf{76.8} & \textbf{76.1} & \textbf{75.9} \\
\hline
\end{tabular}
\end{center}
\caption{\textbf{Top-1 accuracy in different batch sizes.} We show ResNet-50's validation accuracy in ImageNet. SSN achieves higher performance in all batch size settings.}
\label{tab:different_batchsize}
\end{table}
\textbf{Fast inference.} Different from SN, SSN needs only one normalizer in each normalization layer, saving computation and GPU memory. We test inference speed using a batch size of 32 images on a single GTX 1080. For fair comparison, we implement all normalization layers in PyTorch.
All BN operations are merged into the previous convolution operations. As shown in Table~\ref{tab:throughput},
BN is the fastest. SSN uses BN, IN, or LN in each layer and is the second fastest: it is faster than IN, LN, GN and SN with both the ResNet-50 and ResNet-101 backbones. GN is slower than IN because it divides channels into groups. SN softly combines BN, IN, and LN, making it slower than SSN.
\begin{table}
\begin{center}
\begin{tabular}{c | c c}
\hline
& ResNet-50 & ResNet-101 \\% & I3D \\
\hline
BN & 259.756~$\pm$~2.136 & 157.461~$\pm$~0.482 \\% & 24.527$\pm$0.093 \\
\hline
IN & 186.238~$\pm$~0.698 & 116.841~$\pm$~0.289 \\% & 18.270$\pm$0.065 \\
LN & 184.506~$\pm$~0.054 & 115.070~$\pm$~0.028 \\% & 18.158$\pm$0.001 \\
GN & 183.131~$\pm$~0.277 & 113.332~$\pm$~0.023 \\% & 14.927$\pm$0.007 \\
SN & 183.509~$\pm$~0.026 & 113.992~$\pm$~0.015 \\% & 18.102$\pm$0.001 \\
SSN & \textbf{216.254~$\pm$~0.376} & \textbf{133.721~$\pm$~0.106} \\% & \textbf{18.917$\pm$0.006} \\
\hline
\end{tabular}
\end{center}
\caption{\textbf{Throughput (images/second) in inference time} over different normalization methods with ResNet-50 and ResNet-101 as backbone. Larger is better. The mean and standard deviation are calculated over 1000 batches.}
\label{tab:throughput}
\end{table}
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\linewidth]{ssn_sn_mean_32.pdf}
\includegraphics[width=0.9\linewidth]{ssn_sn_var_32.pdf}
\end{center}
\caption{\textbf{Comparison of normalizer selection} between SSN and SN with a batch size of 32 images per GPU. The network we use is ResNet-50, which has 53 normalization layers. The top and bottom plots denote the importance ratios of the mean and variance respectively. We shade normalizers after $3\times3$ conv layers and mark normalizers after the downsampling shortcut with ``$\blacksquare$''.}
\label{fig:SSN-SN_layernorm}
\end{figure*}
\textbf{Comparison of normalizer selection between SSN and SN.} Fig.\ref{fig:SSN-SN_layernorm} compares the breakdown results of normalizer selection between SSN and SN for all normalization layers in ResNet-50 with a batch size of 32. Almost all dominating normalizers in SN are selected by SSN.
By our analysis, those normalization layers with uniform importance ratios in SN are expected to focus on learning the sparse direction in SSN and converge to a more appropriate normalizer.
\textbf{Learning sparse direction.}
As mentioned for Eqn.(\ref{eqn:dirg}), SparsestMax focuses on learning the sparse direction regardless of the sparse distance of the importance ratios. To verify this property, we visualize the convergence trajectories of the importance ratios of several normalization layers across the network. As shown in Fig.\ref{fig:SSN-SN-sparsetraj}, the importance ratios in SSN adjust their sparse directions under the guidance of an increasing circle at each iteration and keep the direction stable until completely sparse, whereas the convergence behavior of those ratios in SN is much less orderly.
%
\textbf{Insensitiveness to $r$'s increasing schedule.}
SSN has an important hyperparameter $r$, the radius of the increasing circle. Here we examine whether SSN is sensitive to the schedule by which $r$ increases. By our analysis, once the increasing circle is bigger than the inscribed circle of the simplex (i.e., $r>r_i=\sqrt{6}/6$ in the case of three normalizers), the sparse direction is likely to stop updating. In this case, the normalizer selection is determined, since the gradients wrt. the control parameters become zero.
Therefore, the time at which $r$ reaches $r_i$ matters most in the increasing schedule.
In our default setting, $r$ reaches $r_i$ at about epoch 41 when training for 100 epochs. In our experiment, we make $r$ reach $r_i$ at epoch 40, 50, 60 and 70 respectively. The performance remains at 77.2$\pm$0.04\%, showing that the schedule contributes little to the final performance.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\linewidth]{SN-SSN-sparsetrajectory.pdf}
\end{center}
\caption{\textbf{Comparison of convergence of importance ratios} in some normalization layers across the network. These plots visualize the variance importance ratios in (layer3.0.norm2), (layer3.1.norm1), (layer3.1.norm2), (layer3.2.norm1), (layer3.3.norm1) and (layer3.4.norm1) of ResNet-50 respectively.}
\label{fig:SSN-SN-sparsetraj}
\end{figure}
\textbf{One Stage \vs Two Stage.} We use argmax to derive a sparse normalizer architecture from a pretrained SN model and compare it with SSN. For comparison, we continue to train the argmaxed SN for 20 epochs with an initial learning rate of 0.001 and a cosine annealing learning rate decay schedule. As a result, the sparse structure derived from the SN model reaches only 76.8\%, which is not comparable to our one-stage SSN.
In all, SSN obtains a sparse structure and shows better performance without introducing additional computation.
\textbf{Four normalizers in $\Omega$.} To evaluate the extensibility of SparsestMax, we add GN~\cite{wu2018group} to the initial $\Omega$, which contains IN, BN and LN.
For GN, we use a group number of 32, the same as the default setting in \cite{wu2018group}. We apply both SN and SSN with the new $\Omega$ to ResNet-50 with a batch size of 32.
In this setting, SSN obtains a higher accuracy (77.3\%) than SN (76.8\%), demonstrating the potential extensibility of SparsestMax in a more general scenario.
\subsection{Semantic Segmentation in ADE and Cityscapes}
To investigate the generalization ability of SSN in various computer vision tasks, we evaluate SSN on semantic segmentation with two standard benchmarks, \ie ADE20K~\cite{zhou2017scene} and Cityscapes~\cite{cordts2016cityscapes}.
For both of these two datasets, we use 2 samples per GPU.
For fair comparison with SN~\cite{luo2018differentiable}, we also adopt DeepLab~\cite{chen2018deeplab} with ResNet-50 as the backbone network,
where output\_stride=8 and
the last two blocks in the original ResNet contain atrous convolution layers with rate=2 and rate=4, respectively.
Bilinear interpolation is then used to upsample the score maps to the size of the ground truth.
In the training phase, we use the `poly' learning rate policy on both datasets with power=0.9 and an auxiliary loss with weight 0.4.
The same setting is also used in~\cite{zhao2017pspnet}.
We compare the proposed SSN with Synchronized BN (SyncBN), GN and SN.
For the former three normalization methods, we adopt their ImageNet pretrained models.
For SSN, we employ the SN ImageNet pretrained model~\cite{luo2018differentiable} and use SparsestMax to make the importance ratios completely sparse.
Note that Synchronized BN is adopted in neither SN nor SSN.
For ADE20K, we resize the input image to 450$\times$450 and train for 100,000 iterations with the initial $lr$ 0.02.
For multi-scale testing, we set input\_size=\{300, 400, 500, 600\}.
Table~\ref{tab:segmentation} reports the experiment result in the ADE20K validation set.
SSN outperforms SyncBN and GN by a margin without any bells and whistles in the training phase.
It also achieves 0.2\% higher mIoU than SN in the multi-scale testing.
For Cityscapes, we use random crops of size 713$\times$713 for all models, and train them for 400 epochs.
The initial $lr$ is 0.01.
The multiple inference scales are \{1.0, 1.25, 1.5, 1.75\}.
According to Table~\ref{tab:segmentation}, SSN performs much better than SyncBN and GN.
It achieves a result comparable to SN (75.7~\vs~75.8) on this benchmark.
\begin{table}
\begin{center}
\begin{tabular}{c| c| c}
\hline
& ADE20K mIoU$\%$ & Cityscapes mIoU$\%$ \\
\hline
SyncBN & 37.7 & 72.7 \\
GN & 36.3 & 72.2 \\
SN & 39.1 &\textbf{75.8} \\
SSN & \textbf{39.3} & 75.7 \\
\hline
\end{tabular}
\end{center}
\caption{\textbf{Experiment results in the ADE20K validation set and the Cityscapes test set.}
The backbone network is ResNet-50 with dilated convolution layers.
We use multi-scale inference in the test phase.
SyncBN denotes multi-GPU synchronization of BN.}
\label{tab:segmentation}
\end{table}
\subsection{Action Recognition in Kinetics}
We also apply SSN to the action recognition task on the Kinetics dataset~\cite{kay2017kinetics}. Here we use Inflated 3D (I3D) convolutional networks~\cite{carreira2017quo} with ResNet-50 as the backbone. The network structure and training/validation settings follow the ResNet-50 I3D in \cite{NonLocal2018,wu2018group}. We use 32 frames as input for each video; these frames are sampled sequentially with a one-frame gap between each other and randomly resized to [256,320]. A 224$\times$224 random crop is then applied to the rescaled frames, and the cropped frames are passed through the network. To evaluate SSN, we use two types of pretrained models: ResNet-50 SSN with all normalizer selections fixed, and ResNet-50 SN with combined normalizers, which is trained using SparsestMax to learn a sparse normalizer selection on Kinetics. All models are trained on the Kinetics training set using 8 GPUs, with batch sizes of 8 and 4 videos.
During evaluation, for each video we average the softmax scores of 10 clips as its final prediction. These clips are sampled evenly from the whole video, and each of them contains 32 frames. The evaluation accuracies on the Kinetics validation set are shown in Table~\ref{tab:kinetics}. Both SSN$^1$ and SSN$^2$ outperform BN and GN at the batch size of 8 videos per GPU, and SSN$^1$ achieves the highest top-1 accuracy, 0.26\% higher than SN and 0.46\% higher than BN. For the smaller batch size setting, the performance of SSN lies between SN and GN.
\begin{table}
\begin{center}
\begin{tabular}{c| c c | c c }
\hline
& \multicolumn{2}{c}{batch=8, length=32} & \multicolumn{2}{|c}{batch=4, length=32} \\
\hline
& top1 & top5 & top1 & top5 \\
\hline
BN & 73.3 & 90.7 & 72.1 & 90.0 \\
GN & 73.0 & 90.6 & 72.8 & 90.6 \\
SN & 73.5 & \textbf{91.2} & \textbf{73.3} & \textbf{91.2} \\
SSN$^1$ & \textbf{73.8} & \textbf{91.2} & 72.8 & 90.6 \\
SSN$^2$ & 73.4 & 91.1 & 73.0 & \textbf{91.2} \\
\hline
\end{tabular}
\end{center}
\caption{\textbf{Result of ResNet-50 I3D in Kinetics} with different normalization layers and batch sizes. SSN$^1$ is finetuned from ResNet-50 SSN ImageNet pretrained model, and SSN$^2$ is from ResNet-50 SN ImageNet pretrained model.}
\label{tab:kinetics}
\end{table}
\section{Conclusion}
In this work, we propose SSN for both performance boosting and inference acceleration.
SSN inherits all advantages of SN, such as robustness to a wide range of batch sizes and applicability to various tasks, while avoiding the redundant computations in SN.
This work has demonstrated SSN's superiority in multiple computer vision tasks such as classification and segmentation.
To achieve SSN, we propose a novel sparse learning algorithm, SparsestMax, which turns a constrained optimization problem into differentiable feed-forward computation.
We show that SparsestMax can be used as a building block for learning one-hot distributions in any deep learning architecture and can be trained end-to-end without any sensitive hyperparameter.
Applying the proposed SparsestMax more broadly can be a fruitful direction for future research.
{\small
\bibliographystyle{ieee}
|
\section{Motivation}
In the paper, we deal with the so-called mild bounded ancient solutions to the 2D Navier-Stokes equations in a half-space with homogeneous Dirichlet boundary conditions. As explained in \cite{KNSS}, \cite{SS}, and \cite{SS1}, such solutions appear as a result of rescaling solutions to the Navier-Stokes equations around a possible singular point. If they are, in a sense, ``trivial'', then this point is not singular.
Liouville-type theorems for ancient solutions to the Navier-Stokes equations turn out to be true in several interesting cases, and their proofs are based on a reduction to a scalar equation, followed by an application of the strong maximum principle. For example, in the 2D case, such a scalar equation is just the 2D vorticity equation. Unfortunately, this approach does not work in a half-plane, since the no-slip boundary condition in terms of the velocity does not imply the homogeneous Dirichlet boundary condition for the vorticity. However, there are some interesting results coming out of this approach; see the paper \cite{Giga} and the references therein.
In the paper, we exploit a different approach, related to the long-time behaviour of solutions to a conjugate system. It has already been used in the proof of the Liouville-type theorem for the Stokes system in a half-space; see the paper \cite{JSS2}, and the paper \cite{JSS1} for another approach.
Let $u$ be a mild bounded ancient solution to the Navier-Stokes equations in a half space, i.e., $u\in L_\infty(Q_-^+)$ ($|u|\leq 1$ a.e. in $Q_-^+=\{x\in\mathbb R^2_+, \,t<0\}$, where $\mathbb R^2_+=\{x=(x_1,x_2)\in\mathbb R^2:\,\,x_2>0\}$) and there exists a scalar function $p$ such that, for any $t<0$, $p=p^1+p^2$, where
\begin{equation}\label{pressure1equation}
\triangle p^1=-{\rm div}\,{\rm div} \,u\otimes u
\end{equation}
in $Q_-^+$ with $p^1_{,2}=0$ and $p^2(\cdot,t)$ is a harmonic function in $\mathbb R^2_+$ whose gradient obeys the inequality
\begin{equation}\label{p2bounded}
|\nabla p^2(x,t)|\leq c\ln (2+1/x_2)
\end{equation}
for $(x,t)\in Q_-^+$ and has the property
\begin{equation}\label{pressure2property}
\sup\limits_{x_1\in\mathbb R}|\nabla p^2(x,t)|\to 0
\end{equation}
as $x_2\to\infty$; $u$ and $p$ satisfy the classical Navier-Stokes system and boundary condition $u(x_1,0,t)=0$
in the following weak sense
\begin{equation}\label{weakformNSS}
\int\limits_{ Q_-^+}\Big(u\cdot(\partial_t\varphi+\triangle\varphi)+u\otimes u:\nabla\varphi+p\,{\rm div}\,\varphi\Big)dx dt=0
\end{equation}
for any $\varphi\in C^\infty_0(Q_-)$ with $\varphi(x_1,0,t)=0$ for $x_1\in \mathbb R$ and
\begin{equation}\label{weakdivergencefree}
\int\limits_{ Q_-^+}u\cdot \nabla q dx dt=0
\end{equation}
for any $q\in C^\infty_0(Q_-)$.
Here, $Q_-=\mathbb R^2\times\{t<0\}$.
We are going to prove the following fact:
\begin{theorem}\label{2dLiouville} Let $u$ be a mild bounded ancient solution to the Navier-Stokes equations in a half space. Assume in addition that
\begin{equation}\label{kineticenergy}
u\in L_{2,\infty}(Q_-^+).
\end{equation}
Then $u$ is identically equal to zero.
\end{theorem}
\begin{remark}\label{remarkaboutadditional} Motivation for additional condition (\ref{kineticenergy}) is as follows. The norm of the space $L_{2,\infty}(Q_-^+)$ is invariant with respect to the Navier-Stokes scaling
$$u(x,t)\to u_\lambda(x,t)=\lambda u(\lambda x, \lambda^2t).$$ So, if we study the smoothness of energy solutions in 2D, the corresponding norm stays bounded under this scaling and under the limiting procedure leading to a mild bounded ancient solution, and thus condition (\ref{kineticenergy}) holds. For details, see \cite{SS1}. \end{remark}
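For the reader's convenience, this invariance can be verified directly: with $u_\lambda(x,t)=\lambda u(\lambda x,\lambda^2 t)$ and the change of variables $y=\lambda x$, $s=\lambda^2 t$ (so that $dy=\lambda^2dx$ in two space dimensions),
$$\sup\limits_{t<0}\int\limits_{\mathbb R^2_+}|u_\lambda(x,t)|^2dx=\sup\limits_{t<0}\int\limits_{\mathbb R^2_+}\lambda^2|u(\lambda x,\lambda^2t)|^2dx=\sup\limits_{s<0}\int\limits_{\mathbb R^2_+}|u(y,s)|^2dy,$$
so the $L_{2,\infty}(Q^+_-)$-norm is unchanged.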
\begin{lemma}\label{dissipationisbounded} Under assumptions of Theorem \ref{2dLiouville},
\begin{equation}\label{dissipation}
\nabla u\in L_2(Q^+_-).
\end{equation}
\end{lemma}
\textsc{Proof}
For fixed $A<0$, we can construct $\widetilde{u}$ as a solution
to the initial boundary value problem
$$\partial_t\widetilde{u}-\triangle\widetilde{u}+\nabla\widetilde{p}^2= -{\rm div H}$$
in $\mathbb R^2_+\times ]A,0[$, where $H=u\otimes u+p^1\mathbb I$,
$$\widetilde{u}(x_1,0,t)=0,$$
$$\widetilde{u}(x,A)=u(x,A)$$
with the help of the Green function $G$ and the kernel $K$ introduced by Solonnikov in \cite{Sol2003}, i.e.,
$$\widetilde{u}(x,t)=\int\limits_{\mathbb R^2_+}G(x,y,t-A)u(y,A)dy+\int\limits^t_A\int\limits_{\mathbb R^2_+}K(x,y,t-\tau)F(y,\tau)dy d\tau.$$
For the further details, we refer the reader to the paper \cite{SS1}.
Let us describe the properties of $\widetilde{u}$. Our first observation is that
$$
{\rm div}\,u\otimes u=u\cdot \nabla u\in L_{2,\infty}(Q^+_-)
$$
since $u\in L_{2,\infty}(Q^+_-)$ and $\nabla u\in L_\infty(Q^+_-)$. The last fact has been proven in \cite{SS1}. Hence,
$${\rm div}\, H\in L_{2,\infty}(Q^+_-).$$
By the properties of the kernels $G$ and $K$, such a solution $\widetilde{u}$ is bounded and satisfies the energy identity
$$\int\limits_{\mathbb R^2_+}|\widetilde{u}(x,t)|^2dx +2\int\limits_A^t\int\limits_{\mathbb R^2_+}|\nabla \widetilde{u}(x,\tau)|^2dx d\tau=\int\limits_{\mathbb R^2_+}|{u}(x,A)|^2dx-$$
$$-2\int\limits_A^t\int\limits_{\mathbb R^2_+}{\rm div}\, H(x,\tau)\cdot \widetilde{u}(x,\tau)dx d\tau$$
for all $A\leq t\leq 0$. In addition,
we can state that
for any $\delta >0$,
\begin{equation}\label{tildap2}
\int\limits_{A+\delta}^0\int\limits_{\mathbb R^2_+}|\nabla \widetilde{p}^2|^2dx dt<C( \delta,A)<\infty.
\end{equation}
Our aim is to show that $u=\widetilde{u}$ in $\mathbb R^2_+\times ]A,0[$.
It is easy to see that,
for any $R>0$,
$$\|v(\cdot, t)\|_{2,B_+(R)}\to 0$$
as $t\to A$, where $v=u-\widetilde{u}$. This follows from the facts that $u$ is continuous on the closure of the set $Q_+(R)$ for any $R>0$, see details in \cite{SS1}, and that $\widetilde{u}\in C([A,0];L_2(\mathbb R^2_+))$.
The latter property allows us to
show that $v$ satisfies the identity
$$\int\limits^0_A\int\limits_{\mathbb R^2_+}(v\cdot \partial_t\varphi+v\cdot \triangle\varphi)dxdt=0$$
for any $\varphi\in C^\infty_0(Q_-)$ such that $\varphi(x_1,0,t)=0$ for any $x_1\in\mathbb R$ and any $t\in ]-\infty,0[$ and ${\rm div}\, \varphi=0$ in $Q^+_-$.
If we extend $v$ by zero for $t<A$, this field will be a bounded ancient solution to the Stokes system and therefore has the form $v=(v_1(x_2,t),0)$, see \cite{JSS1} and \cite{JSS2}. The gradient of the corresponding pressure $p^2-\widetilde{p}^2$ depends only on $t$. However, by (\ref{pressure2property}) and by (\ref{tildap2}), this gradient must be zero.
Then the Liouville theorem for the heat equation in the half-space implies that $v=0$.
Now, since $u=\widetilde{u}$, the energy identity implies
$$\int\limits_{\mathbb R^2_+}|{u}(x,0)|^2dx +2\int\limits_A^0\int\limits_{\mathbb R^2_+}|\nabla {u}(x,\tau)|^2dx d\tau=\int\limits_{\mathbb R^2_+}|{u}(x,A)|^2dx$$
for any $A<0$. This completes the proof of the lemma. $\Box$
\begin{remark}\label{uniformboundedness}
In fact, we have proven that
$$\int\limits^0_{A}\int\limits_{\mathbb R^2_+}|\nabla p^2|^2dx dt\leq c<\infty$$
for any $A<0$.
\end{remark}
Given a tensor-valued function $F\in C^\infty_0(Q_-^+)$,
let us consider the following initial boundary value problem:
\begin{equation}\label{perturbproblem}
\partial_tv+u\cdot \nabla v+\triangle v+\nabla q={\rm div}\, F,\qquad {\rm div}\,v=0
\end{equation}
in $Q^+_-=\mathbb R^2_+\times ]-\infty,0[$,
\begin{equation}\label{boundarypertproblem}
v(x_1,0,t)=0
\end{equation}
for any $x_1\in \mathbb R$ and $t\leq 0$, and
\begin{equation}\label{initialpertproblem}
v(x,0)=0
\end{equation}
for $x\in \mathbb R^2_+$. Here, the vector-valued field $v$ and the scalar function $q$ are the unknowns.
Why do we consider this system?
At least formally, we have the following identity
$$\int\limits_{Q_-^+}u\cdot {\rm div}\,{F}dx dt=$$
$$=\int\limits_{Q_-^+}u\cdot \Big( \partial_t{v}+u\cdot \nabla {v}+\triangle {v}+\nabla {q}\Big)dx dt =$$
$$=\int\limits_{Q_-^+}u\cdot \Big( \partial_t{v}+u\cdot \nabla {v}+\triangle {v}\Big)dx dt =$$
$$=\int\limits_{Q_-^+}\Big(-\partial_tu-{\rm div}\,u\otimes u+\triangle u\Big)\cdot {v}dx dt=$$
$$=\int\limits_{Q_-^+}\Big(-\partial_tu-{\rm div}\,u\otimes u+\triangle u-\nabla p\Big)\cdot {v}dx dt=0.$$
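Indeed, since $F$ is compactly supported, an integration by parts in the first integral gives
$$0=\int\limits_{Q_-^+}u\cdot {\rm div}\,{F}\,dx dt=-\int\limits_{Q_-^+}\nabla u:F\,dx dt$$
for every admissible $F$, and hence $\nabla u=0$ in $Q_-^+$.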
This would imply that $u$ is a function of $t$ only and thus, since $u$ is a mild bounded ancient solution, $u$ must be identically zero.
\pagebreak
\setcounter{equation}{0}
\section{Properties of Solutions to Dual Problem}
\begin{pro}\label{2p1} There exists a unique solution $v$ to (\ref{perturbproblem}), (\ref{boundarypertproblem}), and (\ref{initialpertproblem}) with the following properties:
$$v\in L_{2,\infty}(Q_-^+),\qquad \nabla v\in L_2(Q_-^+),$$
and, for all $T<0$,
$$\partial_tv, \nabla^2v, \nabla q\in L_2(\mathbb R^2_+\times ]T,0[).$$
\end{pro}
\textsc{Proof}
First of all, there exists a unique energy solution. This follows from the identity
$$\int\limits_{Q_-^+}(u\cdot\nabla v)\cdot vdx dt=0$$
and from the inequality
$$\Big|-\int\limits_{Q_-^+}{\rm div}\,F\cdot vdx dt\Big|=\Big|\int\limits_{Q_-^+}F:\nabla v dx dt\Big|\leq \Big(\int\limits_{Q_-^+}|F|^2dx dt\Big)^\frac 12\Big(\int\limits_{Q_-^+}|\nabla v|^2dx dt\Big)^\frac 12.$$
So, we can state that
\begin{equation}\label{energyestimate}
v\in L_{2,\infty}(Q_-^+),\qquad \nabla v\in L_2(Q_-^+).
\end{equation}
The latter means that $u\cdot \nabla v\in L_2(Q_-^+)$. So, the statements of Proposition \ref{2p1} follow from the theory of the Stokes system.
\begin{flushright}
$\Box$
\end{flushright}
\pagebreak
\setcounter{equation}{0}
\section{Main Formula, Integration by Parts}
For smooth function $\psi\in C^\infty_0(\mathbb R^2\times \mathbb R)$, we have
$$\int\limits_{Q_-^+}u\cdot \psi{\rm div}\,{F}dx dt=$$
$$=\int\limits_{Q_-^+}u\cdot \psi\Big(\partial_t{v}+u\cdot\nabla {v}+\triangle {v}+\nabla {q}\Big)dx dt=$$$$=\int\limits_{Q_-^+}\Big(-u\cdot {v}\partial_t\psi-u\cdot {v}u\cdot\nabla\psi-u_i{v}_{i,j}\psi_{,j}+u_{i,j}
{v}_i\psi_{,j}-{q}u\cdot\nabla\psi\Big)dx dt-$$
$$-\int\limits_{Q_-^+}{v}\cdot\Big(\partial_tu+u\cdot\nabla u-\triangle u\Big)\psi\,dx dt=$$
$$=\int\limits_{Q_-^+}\Big(-u\cdot {v}\partial_t\psi-u\cdot {v}u\cdot\nabla\psi-2u_i{v}_{i,j}\psi_{,j}+
(u_{i,j}{v}_i+u_i{v}_{i,j})\psi_{,j}-{q}u\cdot\nabla\psi\Big)dx dt+$$$$+\int\limits_{Q_-^+}{v}\psi\cdot \nabla pdx dt=$$
$$=-\int\limits_{Q_-^+}\Big(u\cdot {v}\partial_t\psi+u\cdot {v}u\cdot\nabla\psi+2u_i{v}_{i,j}\psi_{,j}+u\cdot {v} \triangle\psi
+({q}u+p{v})\cdot\nabla\psi\Big)dx dt.$$
We pick $\psi(x,t)=\chi(t)\varphi(x)$. Using simple arguments and the smoothness of $u$ and $v$, we can get rid of $\chi$ and obtain
$$J(T)=\int\limits_T^0\int\limits_{\mathbb R^2_+}u\cdot \varphi{\rm div}\,{F}dx dt=-\int\limits_{\mathbb R^2_+}\varphi(x)u(x,T)\cdot {v}(x,T)dx+$$$$+\int\limits_T^0\int\limits_{\mathbb R^2_+}\Big(u\cdot {v}u\cdot\nabla\varphi+2u_i{v}_{i,j}\varphi_{,j}+u\cdot {v} \triangle\varphi
+({q}u+p{v})\cdot\nabla\varphi\Big)dx dt.$$
Fix a cut-off function
$\varphi(x)=\xi(x/R)$, where $\xi\in C^\infty_0(\mathbb R^2)$ with the following properties: $0\leq \xi\leq 1$, $\xi(x)=1$ if $|x|\leq 1$, and $\xi(x)=0$ if $|x|\geq 2$.
Our aim is to show that
$$J_R=\int\limits_T^0\int\limits_{\mathbb R^2_+}\Big(u\cdot {v}u\cdot\nabla\varphi+2u_i{v}_{i,j}\varphi_{,j}+u\cdot {v} \triangle\varphi
+({q}u+p{v})\cdot\nabla\varphi\Big)dx dt$$
tends to zero if $R\to \infty$.
We start with
$$\Big|\int\limits_T^0\int\limits_{\mathbb R^2_+}2u_i{v}_{i,j}\varphi_{,j}dx dt\Big|\leq $$$$\leq \frac cR\Big(\int\limits_T^0\int\limits_{B_+(2R)}|u|^2dx dt\Big)^\frac 12\Big(\int\limits_T^0\int\limits_{B_+(2R)\setminus B_+(R)}|\nabla {v}|^2dx dt\Big)^\frac 12\leq $$$$\leq c\sqrt {-T}\Big(\int\limits_T^0\int\limits_{B_+(2R)\setminus B_+(R)}|\nabla {v}|^2dx dt\Big)^\frac 12 \to 0$$
as $R\to \infty$.
Next, since $|u|\leq 1$, we have
$$\Big|\int\limits_T^0\int\limits_{\mathbb R^2_+}u\cdot {v} \triangle\varphi dx dt \Big|\leq \frac c{R^2}\Big(\int\limits_T^0\int\limits_{B_+(2R)}|u|^2dx dt\Big)^\frac 12\Big(\int\limits_T^0\int\limits_{\mathbb R^2_+}| {v}|^2dx dt\Big)^\frac 12\leq $$$$\leq c \frac {-T}{R}\|{v}\|_{2,\infty,Q_-^+}\to 0$$
as $R\to \infty$.
The third term is estimated as follows (by boundedness of $u$):
$$\Big|\int\limits_T^0\int\limits_{\mathbb R^2_+}u\cdot {v}u\cdot\nabla\varphi dx dt\Big|\leq$$$$\leq \frac c{R}\Big(\int\limits_T^0\int\limits_{B_+(2R)}|u|^4dx dt\Big)^\frac 12\Big(\int\limits_T^0\int\limits_{B_+(2R)\setminus B_+(R)}| {v}|^2dx dt\Big)^\frac 12 \leq$$
$$\leq \frac c{R}\Big(\int\limits_T^0\int\limits_{B_+(2R)}|u|^2dx dt\Big)^\frac 12\Big(\int\limits_T^0\int\limits_{B_+(2R)\setminus B_+(R)}| {v}|^2dx dt\Big)^\frac 12 \leq$$$$\leq \frac c{R}\Big(\int\limits_T^0\int\limits_{\mathbb R^2_+}|u|^2dx dt\Big)^\frac 12\Big(\int\limits_T^0\int\limits_{\mathbb R^2_+}| {v}|^2dx dt\Big)^\frac 12 \leq$$$$\leq c\frac {-T}R\|u\|_{2,\infty,Q^+_-}\|v\|_{2,\infty,Q^+_-}\to 0$$
as $R\to \infty$.
The first term containing the pressure is estimated as follows. We have
$$\int\limits_T^0\int\limits_{\mathbb R^2_+}p{v}\cdot\nabla\varphi dx dt=
\int\limits_T^0\int\limits_{\mathbb R^2_+}p_R{v}\cdot\nabla\varphi dx dt,$$
where
$$p_R=p^1_R+p^2_R$$
with $p^1_R=p^1-[p^1]_{B_+(2R)}$ and $p^2_R=p^2-[p^2]_{B_+(2R)}$. By the assumptions,
after even extension, the function $p^1$ belongs to $L_\infty(-\infty, 0;BMO)$ and thus
$$\frac 1{R^2}\int\limits_{B_+(2R)}|p^1_R(x,t)|^2 dx\leq c$$
for all $t\leq 0$. As to $p^2_R$, we use Poincar\'{e} inequality
$$\frac 1{R^2}\int\limits_{B_+(2R)}|p^2_R(x,t)|^2 dx\leq c\int\limits_{B_+(2R)}|\nabla p^2(x,t)|^2dx\leq c\int\limits_{\mathbb R^2_+}|\nabla p^2(x,t)|^2dx.$$
So, by Remark \ref{uniformboundedness} and by the Lebesgue theorem on dominated convergence,
$$\Big|\int\limits_T^0\int\limits_{\mathbb R^2_+}p{v}\cdot\nabla\varphi dx dt\Big|\leq $$$$
\leq c\int\limits_T^0d\tau\Big(\int\limits_{B_+(2R)\setminus B_+(R)}|v(x,\tau)|^2dx\Big)^\frac 12+$$$$
+\frac cR\Big(R^2\int\limits^0_T\int\limits_{\mathbb R^2_+}|\nabla p^2|^2dx dt\Big)^\frac 12\Big(\int\limits^0_T\int\limits_{B_+(2R)\setminus B_+(R)}|v|^2dx dt\Big)^\frac 12
\to 0$$
as $R\to \infty$.
The last term is treated with the help of Poincar\'{e} inequality in the same way as $p^2_R$. Indeed,
$$\Big|\int\limits_T^0\int\limits_{\mathbb R^2_+}qu\cdot\nabla\varphi dx dt\Big|
=\Big|\int\limits_T^0\int\limits_{\mathbb R^2_+}(q-[q]_{B_+(2R)})u\cdot\nabla\varphi dx dt\Big|\leq$$
$$\leq \frac cR\Big(R^2\int\limits^0_T\int\limits_{B_+(2R)}|\nabla q|^2dx dt\Big)^\frac 12\Big(\int\limits^0_T\int\limits_{B_+(2R)\setminus B_+(R)}|u|^2dx dt\Big)^\frac 12.$$
The right hand side of the latter inequality tends to zero as $R\to\infty$ by the assumption that $u\in L_{2,\infty}(Q_-^+)$.
So, finally, we have
$$\int\limits^0_T\int\limits_{\mathbb R^2_+} u\cdot {\rm div}\,F dx dt=
-\lim\limits_{R\to\infty}\int\limits_{\mathbb R^2_+}\varphi(x)u(x,T)\cdot {v}(x,T)dx=$$
$$=-\int\limits_{\mathbb R^2_+}u(x,T)\cdot {v}(x,T)dx.$$
Now, our aim is to see what happens as $T\to-\infty$.
\pagebreak
\setcounter{equation}{0}
\section{$t\to-\infty$}
We shall show that
\begin{equation}\label{l2decay}
\|v(\cdot,t)\|_{2,\mathbb R^2_+}\to 0.
\end{equation}
as $t\to-\infty$.
Indeed, we also know
\begin{equation}\label{dissipto0}
\int\limits_{-\infty}^t\int\limits_{\mathbb R^2_+}|\nabla v|^2dx d\tau\to0
\end{equation}
as $t\to-\infty$.
By Ladyzhenskaya's inequality,
$$v\in L_4(Q_-^+)$$
and thus
\begin{equation}\label{L}
\int\limits_{-\infty}^t\int\limits_{\mathbb R^2_+}| v|^4dx d\tau\to0
\end{equation}
as $t\to-\infty$.
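Here we use the two-dimensional Ladyzhenskaya inequality, which we recall for the reader's convenience:
$$\|v(\cdot,t)\|^2_{4,\mathbb R^2_+}\leq c\,\|v(\cdot,t)\|_{2,\mathbb R^2_+}\|\nabla v(\cdot,t)\|_{2,\mathbb R^2_+};$$
combined with $v\in L_{2,\infty}(Q_-^+)$ and (\ref{dissipto0}), it yields (\ref{L}).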
Now, for sufficiently large $-t_0$, we have
$$v=v^1+v^2,$$
where
$$\partial_tv^1 +\triangle v^1+\nabla q^1=0,\qquad {\rm div}\,v^1=0$$
in $\mathbb R^2_+\times ]-\infty,t_0[$,
$$v^1(x_1,0,t)=0$$
for any $x_1\in\mathbb R$ and for any $t\leq t_0$, and
$$v^1(x,t_0)=v(x,t_0)$$
for any $x\in \mathbb R^2_+$.
As to $v^2$, it satisfies
$$\partial_tv^2 +\triangle v^2+\nabla q^2=-{\rm div}\, v\otimes u,\qquad {\rm div}\,v^2=0$$
in $\mathbb R^2_+\times ]-\infty,t_0[$,
$$v^2(x_1,0,t)=0$$
for any $x_1\in\mathbb R$ and for any $t\leq t_0$, and
$$v^2(x,t_0)=0$$
for any $x\in \mathbb R^2_+$.
Then, it is well known that
\begin{equation}\label{l2decay1} \|v^1(\cdot,t)\|_{2,\mathbb R^2_+}\to 0\end{equation}
as $t\to-\infty$.
On the other hand, by the energy inequality,
$$ \frac 12 \|v^2(\cdot,t)\|_{2,\mathbb R^2_+}^2+\int\limits^{t_0}_t\int\limits_{\mathbb R^2_+}|\nabla v^2|^2dx d\tau=$$$$= \int\limits_t^{t_0}\int\limits_{\mathbb R^2_+}v^2\cdot{\rm div} \,v\otimes u dx d\tau=
- \int\limits_t^{t_0}\int\limits_{\mathbb R^2_+} \,v\otimes u:\nabla v^2 dx d\tau\leq$$
$$\leq \Big(\int\limits^{t_0}_t\int\limits_{\mathbb R^2_+}|\nabla v^2|^2dx d\tau\Big)^\frac 12\Big(\int\limits^{t_0}_t\int\limits_{\mathbb R^2_+}|u|^4dx d\tau\Big)^\frac 14\Big(\int\limits^{t_0}_t\int\limits_{\mathbb R^2_+}| v|^4dx d\tau\Big)^\frac 14$$
for $t<t_0$.
For the same reason as for $v$, we have
\begin{equation}\label{essential}
\int\limits^{0}_{-\infty}\int\limits_{\mathbb R^2_+}|u|^4dx d\tau\leq c
\end{equation}
and thus, by the Cauchy inequality,
\begin{equation}\label{l2decay2}\|v^2(\cdot,t)\|_{2,\mathbb R^2_+}^2\leq c\Big(\int\limits^{t_0}_t\int\limits_{\mathbb R^2_+}| v|^4dx d\tau\Big)^\frac 12\leq c\Big(\int\limits^{t_0}_{-\infty}\int\limits_{\mathbb R^2_+}| v|^4dx d\tau\Big)^\frac 12\end{equation}
for all $t<t_0$.
It is not so difficult to deduce (\ref{l2decay}) from (\ref{dissipto0}), (\ref{l2decay1}), and (\ref{l2decay2}).
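Indeed, given $\varepsilon>0$, by (\ref{L}) we can fix $t_0$ so that the right-hand side of (\ref{l2decay2}) is less than $\varepsilon^2$, which gives
$$\|v(\cdot,t)\|_{2,\mathbb R^2_+}\leq \|v^1(\cdot,t)\|_{2,\mathbb R^2_+}+\varepsilon$$
for all $t<t_0$; letting $t\to-\infty$ and using (\ref{l2decay1}), we arrive at (\ref{l2decay}).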
The only assumption we really need is (\ref{essential}), and it holds if $u\in L_{2,\infty}(Q^+_-)$ and $\nabla u\in L_2(Q^+_-)$, by Ladyzhenskaya's inequality.
\pagebreak
\section*{Acknowledgments}
This research is partly supported by NSF Grants DMS-11-07012, DMS-09-15139, SES-08-51521
and NSA Grant H98230-08-1-0104.
\section{Introduction}
Model-based Design (MBD) of embedded systems is nowadays a standard, easy and efficient way of capturing and verifying the functional requirements of embedded software. The main idea is to move away from manual coding and, with the help of mathematical models, to create executable specifications using a certain modeling framework. These frameworks typically provide automatic code generators which generate consistent imperative code ready to be deployed in real environments.
Matlab/Simulink \cite{mat1} is one of the most widespread tools for model-based design of embedded systems, combining the above features in a single framework. Simulink utilizes block diagrams to represent system models at the algorithmic level. For instance, in the case of a control system, the model consists of the controller algorithm block which controls the environment block (i.e., the process to be controlled, typically modeled as a set of differential equations).
A translation of Simulink models to Synchronous Dataflow Graphs (SDFGs) \cite{LeeM87}, which are, as opposed to Simulink, formally grounded, is beneficial. Such a translation would pave the way towards the application of several optimization and formal verification techniques well established in the SDFG domain. For example, in recent work \cite{fakihJSA2015}, the formal real-time verification (based on model checking) of SDF applications running on Multi-Processor Systems-on-Chip (MPSoCs) with shared communication resources was shown to be more viable than the real-time (RT) verification of generic tasks. Moreover, deadlocks and bounded-buffer properties are decidable for SDFGs \cite{LeeM87}. In addition, with the help of mathematical methods, easy-to-analyze compile-time schedules can be constructed for SDFGs. Furthermore, memory-efficient code optimizations are available \cite{bhattacharyya_clustering, bhattacharyya_synthesis_1999} to enable efficient implementations of embedded systems.
In this paper, we present a translation procedure from a defined subset of Simulink models to SDFGs based on the work in \cite{simulink2sdfWarsitz2016}. We extend the approach in \cite{simulink2sdfWarsitz2016} by enabling the translation of Simulink models with multi-rate features to SDFGs. In addition, we integrate the translation procedure within Matlab/Simulink and utilize the automatic code-generation feature to generate SDF-based code from Simulink models. Moreover, we enable an automatic setup of a verification flow which allows a Software-in-the-Loop (SIL) simulation showing the functional equivalence of the generated code to the reference model.
The paper is structured as follows. We first recap the basic concepts of synchronous dataflow graphs and Simulink models, identifying their main differences. Afterwards, we discuss related work in Sect.~\ref{sec:RelatedWorkDF}, mainly addressing translation approaches from Simulink models to SDFGs. Next, we elaborate on our translation procedure in Sect.~\ref{subsec:Simulink2sdf}, starting with a description of the set of constraints on the Simulink model that enable the translation. In addition, we discuss the code-generation and SIL verification features. Sect.~\ref{Evaluation} demonstrates the viability of our translation approach with the help of a Transmission Controller Unit (TCU) and a Climate Controller case study. Finally, we conclude our work and give an outlook on open issues and future work.
\section{Background}
\subsection{Synchronous Dataflow Graphs}
\label{sec:bg:sdfgs}
A \textit{synchronous (or static) data-flow graph (SDFG)} \cite{LeeM87} is a directed graph (see Fig.~\ref{fig:sdfg}) which, similar to general data-flow graphs (DFGs), consists mainly of nodes (called \textit{actors}) modeling atomic functions/computations and arcs modeling the data flow (called \textit{channels}). In contrast to DFGs, the actors of an SDFG consume/produce a static number of data samples (\textit{tokens}) each time an actor executes (\textit{fires}).
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.35\textwidth]{jpeg-sdfg}
\end{center}
\caption{SDFG of a \textit{JPEG Encoder}}
\label{fig:sdfg}
\end{figure}
An SDFG is well suited to modeling multi-rate streaming applications and DSP algorithms and also allows static scheduling and easy parallelization. A port \textit{rate} denotes the number of tokens produced or consumed in every activation of an actor. The data flow across a channel (which represents a FIFO buffer) follows a First-In-First-Out (FIFO) discipline. Channels can also store initial tokens (called delays, indicated by bullets on the edges) in their initial state, which help to resolve cyclic dependencies (see \cite{LeeM87}).
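To make the firing rule concrete, the following minimal C sketch models a channel as a bounded FIFO and tests whether an actor with a single input and a single output may fire; the names and the capacity bound are our own illustration and not part of any particular SDF library.
\begin{verbatim}
#include <stdbool.h>

#define CAP 16                    /* assumed channel capacity    */

typedef struct {                  /* FIFO channel storing tokens */
    int tokens[CAP];
    int head, count;
} Channel;

/* An SDF actor may fire iff every input channel holds at least its
 * consumption rate and every output channel has room for its
 * production rate (shown here for one input and one output).     */
bool can_fire(const Channel *in, int cons_rate,
              const Channel *out, int prod_rate)
{
    return in->count >= cons_rate && out->count + prod_rate <= CAP;
}
\end{verbatim}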
Despite the analyzability advantage of SDFGs, this comes at the cost of their expressiveness. One of the main limitations of the SDF Model of Computation (MoC) is that dynamism cannot be handled, e.g., in the case where the application rates change depending on the current scenario (cf. \cite{SDFImplementation2013}). Another limitation (cf. \cite{LeeM87}) of the SDF MoC is that conditional control flow is only allowed within an actor's functionality but not among the actors. However, emulating control flow within the SDFG is possible, even though not always efficient (cf. \cite{SDFImplementation2013}).
Due to the above limitations, stopping and restarting an SDFG, for example, is not possible, since an SDFG can have only two states: either running or waiting for input. In addition, reconfiguring an SDFG to (de)activate different parts depending on specific modes is not possible. Moreover, different rates depending on run-time conditions are not supported. Also, modeling exceptions which might require deactivating some parts of the graph is not possible. An additional issue is that the SDF model does not reflect the real-time nature of the connections to the real-time environment.
\subsection{Simulink}
\label{sec:bg:Simulink}
Simulink is a framework for modeling \textit{dynamic systems} and simulating them in virtual time. Such systems are modeled graphically in an editor consisting mainly of blocks and arrows (\textit{connections}) between them representing signals. Each block has its input, output and, optionally, state variables. The relationship between the inputs, the old state variables and the updated outputs is realized through mathematical functions.
One of the powerful features of Simulink is the ability to combine multiple simulation domains (continuous and discrete). This is very useful for embedded systems, where in general the controller has a discrete model and the environment often needs to be modeled as a continuous one. Simulink also supports a state-based MoC, \textit{Stateflow} \cite{stateflow}, which is widely used to model discrete controllers.
Simulink allows a fast \textit{Model-in-the-Loop (MIL)} verification, where the functional model (of the controller, for example) is simulated and the results are documented to be compared with further refinements. In addition, a \textit{Software-in-the-Loop (SIL)} verification is also possible, in which the controller model is replaced by the code generated by the \textit{Embedded Coder} \cite{Simulink-coder} (usually embedded in an S-function) and the behavior of the code is compared with the reference data obtained from MIL (described above).
In \cite{Simulinksdf2} a method was presented to automatically transform SDFGs into SBDs (Synchronous Block Diagrams) such that the semantics of SDF are preserved, and it was proven that Simulink can be used to capture and simulate SDF models. The authors in \cite{gajski} also support the fact that dataflow models fit well with the block-diagram concepts used by Simulink. In general, the MoC of Simulink is much more expressive than that of SDF, having the advantage of relaxing all limitations of the SDF MoC, but at the cost of its analyzability.
\section{Related Work}
\label{sec:RelatedWorkDF}
In the last decade, several research efforts \cite{caspi_Simulink_2003, miller_formal_2005, zhang_bridging_2013, buker_automated_2013} have been conducted to enable a translation of Simulink models to other formal models for the purpose of formal analysis. In the following, we merely discuss previous work enabling the translation of Simulink models to SDFGs.
In \cite{GitHubSimulink2SDF}, only the source code of a so-called \textit{Simulink2SDF} tool was published, which enables a very simple translation of Simulink models to SDFGs. In that work, all Simulink blocks, without any distinction, were translated to dataflow actors, and similarly all connections were translated into dataflow channels, which makes the translation incomplete, as we will see in Sec.~\ref{subsec:Simulink2sdf}. In addition, our approach allows the generation of executable SDF code, which is not possible in that approach.
In \cite{Simulink2sdfBsc_2011} a translation of Simulink models to homogeneous SDFGs (HSDFGs) was pursued with the objective of analyzing concurrency. HSDFGs are SDFGs with the restriction that the number of consumed and produced tokens of each actor must be equal to 1 \cite{lee_synchronous_1987}. The translation has been done for a fixed number of functional blocks, but important attributes, such as the data type of a connection between blocks, have not been taken into consideration by the translation.
In \cite{Sim2Modal2Ptolmy2SDF2011} it was shown how a case study of a vehicle climate control modeled in Simulink is imported into a tool (MoDAL) supporting the SDF MoC. MoDAL, in turn, exports the model in a format which can be imported by the Ptolemy tool \cite{Ptolemy2011}. Ptolemy is then used to generate code from the SDF model. In \cite{Sim2Modal2Ptolmy2SDF2011}, only the use-case model has been translated to an SDFG, without defining a general translation concept applicable at least to a subset of Simulink models.
In \cite{bostrom_contract-based_2015} a translation from Simulink models to SDFGs was described. The aim of this work was to apply a methodology for the functional verification of Simulink models based on \textit{contracts}. \textit{Contracts} define pre- and postconditions to be fulfilled for programs or program fragments.
In \cite{rtasKlikpoKM16}, the ability of SDFGs to model multi-periodic Simulink systems was formally proved. There, in addition to systems with harmonic periods, non-harmonic periods are also supported (unlike our work and that of \cite{bostrom_contract-based_2015}, where only harmonic periods are supported). However, the authors of the above work give no clear classification of critical Simulink functional blocks (e.g. the \textit{switch} block with dynamic rates, see Sec.~\ref{subsec:Simulink2sdf}) which cannot be supported in the translation. In addition, \textit{Triggered/Enabled} subsystems and other important attributes such as the data type of a connection are not supported. Furthermore, SDF-based code generation was not considered.
Unlike the above work, we present a general translation concept based on a classification of blocks and connections in Simulink models. Our approach enables the translation of critical blocks (such as \textit{Enabled/Triggered} subsystems), including the enrichment of the translated SDFG with important attributes such as the data types of tokens, token sizes and sampling rates of actors (in the case of multi-rate models). This enables a seamless code generation of the model into SDF-based embedded software ready to be deployed on a target architecture. We also provide an automation of the SDF-based code-generation process together with the SIL verification to prove the soundness of the translation.
\section{Simulink to SDFG Translation}
\label{subsec:Simulink2sdf}
As already stated (see Sec.~\ref{sec:bg:Simulink}), the Simulink MoC is much more expressive than the SDFG MoC. Unlike SDFGs, Simulink supports the following additional features:
\begin{description}
\item[U1] \textbf{Hierarchy} (e.g. \textit{subsystem} blocks): While in
Simulink multiple functional blocks can be grouped into a subsystem, in
SDFGs each actor is atomic and therefore no hierarchy is supported. \label{U1}
\item[U2] \textbf{Control-flow logic/Conditional} (e.g. the \textit{switch} block or \textit{triggered subsystems}, see \cite{mat1}): In Simulink, control flow is supported at the block level. This means that, depending on the value of a control signal at a block, different data rates could be output by the block. In contrast, in SDFGs the data rates at the input and output ports of an actor are fixed, and control structures are only allowed within the functional code of an actor; they cannot be represented in an SDFG.
\label{U2}
\item[U3] \textbf{Connections}: \label{U3}
\begin{enumerate}
\item \textbf{Dataflow without connections} (e.g.
\textit{Goto/From} blocks):
In contrast to Simulink, there is no dataflow without a channel connection in
connected
and \textit{consistent}\footnote {Inconsistent SDFGs require
unlimited storage or lead to deadlocks during
execution\cite{lee_synchronous_1987}.}
SDFGs considered in this paper.
\item \textbf{Grouping of connections} (e.g. \textit{BusCreator} block
for
\textit{bus} signals):
In Simulink, connections with different properties (e.g. different data types) can
be grouped into one connection. This is not possible in an SDFG since the tokens transferred over a channel must have the same properties.
\item \textbf{Connection style}: While in Simulink the storage of data between blocks behaves like a register whose data can be overwritten (in the case of multi-rate models), the inter-actor communication via channels in SDFGs follows a (dataflow) FIFO-buffer fashion, where tokens must first be consumed before new ones can be buffered.
\end{enumerate}
\item[U4] \textbf{Sampling rates}: In addition to the amount of data transported over a connection at every block activation, a periodic sampling rate is assigned to each block in Simulink to mark its periodic activation at this specific frequency. If all blocks of a model exhibit the same sampling period, the model is called a \textit{single-rate} model; otherwise it is a \textit{multi-rate} model. In SDFGs, however, an actor is only activated based on the availability of inputs. Actors do not have explicit sampling periods, and therefore data rates can only be represented by the rates assigned to their (input/output) ports. \label{Multi-rate}
\end{description}
Because of the above differences, some constraints must be imposed on the
Simulink input model in order to enable its translation to an equivalent SDFG,
which we
will discuss in the following section.
\subsection{Constraints on the Simulink Model}
\label{subsec:constraints}
Only Simulink models with a fixed-step solver are supported in the translation. In the case of multi-rates, rate transitions should be inserted into the Simulink model and the rates should be harmonic (divisible). These constraints are indispensable for enabling deterministic code generation \cite{Simulink-coder-guidlines, bostrom_contract-based_2015}, since we aim to generate SDF-compatible executable code for the translated SDF application with the help of Simulink's built-in code generator.
Even though it is possible to translate a Simulink model to multiple SDFGs, we deal only with one application (implemented in Simulink) at a time in this paper, which results after translation in one equivalent SDFG. This application is considered to be a control application having the general structure depicted in Fig.~\ref{fig:codegeneration}. Moreover, a correct functional simulation of the Simulink model is a prerequisite for the translation in order to obtain an executable SDFG.
In addition to the above general prerequisites, the following constraints are imposed on the input Simulink model to enable the translation:
\begin{description}
\item[E1] \textbf{Hierarchy:}
\noindent
Hierarchical blocks (e.g. \textit{subsystems}) in which one or more functional blocks of the types described in \textit{U3-1} and \textit{U3-2} exist are not allowed to be translated into atomic actors. Either these blocks should be removed from the input Simulink model (if they serve only to improve visualization), or the model should be dissolved at the hierarchy level at which these components exist, so that these blocks are translated and connected in accordance with the rest of the SDFG. This constraint is mandatory: if we allowed an atomic translation of such hierarchical functional blocks, their contained functional blocks of the form \textit{U3-1} and \textit{U3-2}, which may be connected with functional blocks at different hierarchical levels, would disappear in the target SDFG. A translation of these blocks would thus no longer be possible and would cause a malfunction of the target SDFG (see restriction \textit{E3}).
\item[E2] \textbf{Control-flow logic/Conditional:}
Blocks such as \textit{Triggered/Enabled} subsystems can be translated just like general subsystems. Upon dissolving the hierarchy of such subsystems, the control flow takes place within the atomic functionality of the actors, without contradicting SDFG semantics (cf. Sec.~\ref{sec:bg:sdfgs}). In such a translation, however, additional control channels must be defined (see Sec.~\ref{subsec:translationsteps}).
Yet, the case described in \textit{U2} must still be prohibited. To this end, Simulink provides an option ``allowing different data input sizes'' for such blocks which, when disabled, prohibits outputs of variable sizes of a control block\footnote{According to \cite{mat1}, blocks having this option are: \textit{ActionPort, Stateflow, Enable/Trigger Subsystems, Switch, Multiport Switch} and \textit{Manual Switch}.}.
A special case of these blocks is the powerful Stateflow chart supported by Simulink. In our translation we do not flatten the Stateflow block; we always translate it into one atomic actor.
\item[E3] \textbf{Connections}
\begin{enumerate}
\item \textbf{Dataflow without connections:}
For blocks having the behavior described in \textit{U3-1} (such as From/Goto or DataStoreRead/DataStoreWrite blocks), we assume that the source block (e.g. a DataStoreWrite block), the intermediate block (e.g. a DataStoreMemory block) and the target block (e.g. a DataStoreRead block), which communicate without connections, are all available in the input Simulink model. This constraint is important since Simulink allows instantiating a source block without, for example, instantiating the corresponding sink block.
\item \textbf{Grouping of connections:}
In order to support the translation of Simulink models with blocks having the same behavior as those described in \textit{U3-2}\footnote{e.g. \textit{BusCreator/BusSelector}, \textit{Bus Assignment} and \textit{Merge} blocks \cite{mat1}.}, two constraints must be imposed. The first one is that every block which groups multiple signals into one signal (e.g. BusCreator) must be directly connected to a block which has the opposite functionality (e.g. BusSelector). The second constraint is imposed on the block (e.g. BusSelector) which takes the grouped signals and splits them again: the ``Output as bus'' option of this block must be disabled. In this way, grouping signals for better visibility in the Simulink model is still allowed (with the above limitation), while grouping signals of different parameters into one signal is prohibited in the target translation.
\end{enumerate}
\end{description}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{Simulink-Model}
\caption{Original Simulink model: \textit{Red} blocks having a sample time of 2, \textit{Green} blocks having a sample time of 4}
\label{fig:transOriginal}
\end{figure}
\subsection{Translation Procedure}
\label{subsec:translationsteps}
In the following, we briefly describe the translation procedure implemented to extract an SDFG from a Simulink model with the help of an academic multi-rate Simulink example in Fig.~\ref{fig:transOriginal}. The translation requires two main phases: the \textit{pre-translation} phase, where the original Simulink model is prepared and checked against the constraints defined above, and the \textit{translation} phase, where the actual translation takes place.
\begin{enumerate}
\item \textbf{Pre-Translation phase:}
\begin{enumerate}
\item \textbf{Checking Requirements:}
Here the Simulink model is checked as to whether it fulfills the constraints described above. If this is not the case, the translation is aborted and the list of unfulfilled constraints is output.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{ueb_dissolving_hierarchy}
\caption{Dissolving hierarchy to the desired level}
\label{fig:transhierachy}
\end{figure}
\item \textbf{Dissolving hierarchy:}
In this step, a top-down flattening of the Simulink model (respecting \textit{E1}) is performed until the required depth level is reached (see Fig.~\ref{fig:transhierachy}).
\item \textbf{Removing connecting blocks of type U3-1/U3-2:}
Here, blocks respecting the \textit{E3-1/E3-2} constraints are removed. When doing this, the predecessor of the source block (e.g. a DataStoreWrite block) is directly connected either to the intermediate block, if present (e.g. a DataStoreMemory block), or to the successor of the target block (e.g. a DataStoreRead block), and these connecting blocks (source and target) are removed (see Fig.~\ref{fig:transBus}, where the BusCreator/BusSelector and Goto/From blocks are removed).
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{ueb_removing_bus}
\caption{Removing connecting blocks of type \textit{U3-1/U3-2}}
\label{fig:transBus}
\end{figure}
\item \textbf{Inserting rate-transition blocks:}
Here, rate-transition blocks are inserted between connected blocks having different sample rates (see Fig.~\ref{fig:transRateTrans}).
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{ueb_adding_rate_transitions}
\caption{Inserting rate-transition blocks between blocks of different sample rates}
\label{fig:transRateTrans}
\end{figure}
\end{enumerate}
\item \textbf{Translation phase:}
In this step, the modified Simulink model is directly translated into an SDFG (see
Fig.~\ref{fig:sdfg_representation}) according to the following procedure:
\begin{enumerate}
\item \textbf{Translation of blocks:}
If $B$ is the set of all blocks in the Simulink model $M$, then each block $b_l \in B$ in $M$ is translated into a unique actor $a_l\in \mathcal{A}$ of the translated SDFG (where $\mathcal{A}$ is the set of actors, see Fig.~\ref{fig:sdfg_representation}).
\item \textbf{Translation of connections:}
Each output port $b_l.o$ is translated into a unique output port
$a_l.p_o$ and
each input port $b_l.i$ is translated into a unique input port $a_l.p_i$.
In case multiple connections $t_1, t_2, \cdots, t_n$ go out from an output port $p_{o1}$ in Simulink (which is permitted in Simulink but not in an SDFG, see the connections of the \textit{statechart} before translation in Fig.~\ref{fig:transOriginal} and after translation in Fig.~\ref{fig:sdfg_representation}), the output port is replicated for each of these connections, $p_{o11}, p_{o12}, \cdots, p_{o1m}$ (each replica having the same properties), in the resulting SDFG, in order to guarantee that every channel $d\in \mathcal{D}$ (the set of all channels in an SDFG) has unique input and output ports. Now, each connection $t \in M$ in the Simulink model is translated into a channel $d\in \mathcal{D}$ in the SDFG (see Fig.~\ref{fig:sdfg_representation}).
\item \textbf{Extraction of token sizes and types}: The amount of data transferred over a connection represents the size of a token produced/consumed when an actor fires (e.g. the \texttt{Constant} actor produces a token of size $2$ in Fig.~\ref{fig:sdfg_representation}), and its data type represents the data type of that token (e.g. \texttt{double} in Fig.~\ref{fig:sdfg_representation}). These parameters can be extracted from the model for every connection.
\item \textbf{Handling multi-rates}: The following method for handling multi-rates was inspired by \cite{bostrom_contract-based_2015, zhang_bridging_2013}. To determine the rates of the actors' input and output ports, we must differentiate between three cases: \textit{fast-to-slow} transitions, \textit{slow-to-fast} transitions and transitions between blocks having the same rates. In the latter case, the source and destination actors are annotated with a rate of \texttt{1} on their ports, indicating the production/consumption of one token (of a specific size per channel) whenever they are activated.
In the case of a \textit{slow-to-fast} transition (see e.g. Fig.~\ref{fig:Multirate-s2f} and Fig.~\ref{fig:TransSim2SDF}), the rate of the output port of the rate-transition actor is computed as
\begin{equation}
R.p_o.rate = b_{src}.sp/b_{dst}.sp,
\end{equation}
where $p_o$ is the output port of the actor $R$, $b_{src}$ and $b_{dst}$ are the source and destination blocks connected via the rate-transition block, and $sp$ denotes the sample time of the corresponding block. The rate of the input port of $R$ is set to \texttt{1}. This essentially creates multiple copies of the slower actor's token so that the faster actor can run.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{Multirate-s2f}
\caption{Example of a Simulink slow-to-fast multirate model shown in (a). By adding a
rate-transition actor $R$, a valid translation to SDFG can be achieved in (b).}
\label{fig:Multirate-s2f}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{Multirate-f2s}
\caption{Example of a Simulink fast-to-slow multirate model (a) and its equivalent
SDFG in (b).}
\label{fig:Multirate-f2s}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=0.45\textwidth]{SDFG-Darstellung}
\caption{Translating modified Simulink model with \textit{fast-to-slow} transitions into
SDF}
\label{fig:sdfg_representation}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth]{code_gen_diagram}
\caption{Structure of the model transformation and code-generation framework.
\textit{Simulink Model:} consisting mainly of a controller and the environment model.
\textit{Code
generator:} implementing the translation of Simulink models to SDFGs, generating SDF
code and the verification model. In this figure, the transformation is applied, as an example, to the sub-block \texttt{B3} of the controller Simulink model. \textit{SDF-based C code:} executable code
of the generated SDFG. \textit{Verification model:} consists of the reference
controller and
environment models with an extra \textit{S-function builder} block in which the generated
SDF code
is embedded and connected to the environment allowing SIL verification.}
\label{fig:codegeneration}
\end{figure*}
In case of \textit{fast-to-slow}
transition, the rate $R.p_i.rate$ of the input port ($p_i$) of the rate-transition actor
$R$ can be calculated as follows:
\begin{equation}
R.p_i.rate = b_{dst}.sp/b_{src}.sp,
\end{equation}
The output port rate is set to 1. This essentially accumulates tokens on the rate-transition actor and outputs the freshest token of the faster actor.
Furthermore, in this case a number of delay tokens equal to
\begin{equation}
d.delay = (b_{dst}.sp/b_{src}.sp)-1
\end{equation}
is placed on the input channel $d\in \mathcal{D}$ of the rate-transition actor in order to account for the initial tokens produced by the fast actor at the first firing (see Fig.~\ref{fig:Multirate-f2s} and Fig.~\ref{fig:sdfg_representation}; a code sketch of these rate computations is given after this enumeration).
\item \textbf{Adding event channels:} if the subsystem is a triggered one, then, depending on the hierarchy level chosen, extra connections are added in this step for handling the (enabling/triggering) events. These edges are needed when the hierarchy of an enabled/triggered subsystem is dissolved. In this case, each block belonging to the triggered or enabled subsystem has to be sensitive to the (triggering/enabling) event and is thus connected with the event source.
\end{enumerate}
\end{enumerate}
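The following C fragment summarizes the three rate cases above as a sketch; it assumes harmonic (divisible) sample times, and the type and function names are hypothetical helpers of our own, not part of the generated code.
\begin{verbatim}
typedef struct { int in_rate, out_rate, delay; } RateInfo;

/* src_sp, dst_sp: Simulink sample times of the source and the
 * destination block connected via the rate-transition actor R   */
RateInfo rate_transition(int src_sp, int dst_sp)
{
    RateInfo r = {1, 1, 0};           /* same rates: 1/1, no delay */
    if (src_sp > dst_sp) {            /* slow-to-fast transition   */
        r.out_rate = src_sp / dst_sp; /* replicate the slow token  */
    } else if (src_sp < dst_sp) {     /* fast-to-slow transition   */
        r.in_rate = dst_sp / src_sp;  /* accumulate fast tokens    */
        r.delay   = r.in_rate - 1;    /* initial tokens on channel */
    }
    return r;
}
\end{verbatim}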
Finally, the actors in the resulting SDF graph can be statically scheduled to
obtain a minimal periodic static schedule (\texttt{Constant} \texttt{Constant1}
\texttt{Product} )$^2$ (\texttt{RateTransition} \texttt{UnitDelay} \texttt{Chart}
\texttt{Out1} \texttt{Out2}).
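Rendered as code, one period of this schedule could look as follows; the actor function names anticipate the ones generated in Sect.~\ref{subsec:sil}, and the exact shape is only a sketch:
\begin{verbatim}
void sdfg_step(void)                /* one period of the schedule */
{
    for (int i = 0; i < 2; ++i) {   /* fast actors fire twice     */
        Constant_actor();
        Constant1_actor();
        Product_actor();
    }
    RateTransition_actor();         /* slow actors fire once      */
    UnitDelay_actor();
    Chart_actor();
    Out1_actor();
    Out2_actor();
}
\end{verbatim}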
\subsection{Code-generation and SIL Simulation}
\label{subsec:sil}
After describing the translation procedure of Simulink models into SDFGs, we describe in the following the corresponding implementation on top of Simulink and how Simulink's code generator is utilized to enable SDF code generation and SIL verification. Generating equivalent SDF-compatible C code is useful, on the one hand, to verify the functional equivalence between Simulink models and the generated SDFGs and, on the other hand, to enable direct code deployment on target hardware platforms.
Fig.~\ref{fig:codegeneration} shows the different steps involved in the model transformation process within our code-generation framework. The code generator constitutes the major part of our model transformation, taking the Simulink model as input and generating the SDF code and the verification (SIL) model as output. We implemented the code generator as a Matlab script making use of the Matlab API to manipulate Simulink models and extract the needed information from them. The implemented code generator consists mainly of the following functions:
\begin{itemize}
\item \textit{Check Requirements:} the Simulink model is checked as to whether it fulfills the constraints described in Sect.~\ref{subsec:constraints}. For example, in the case of multi-rates, the rates are checked as to whether they are integers and divisible.
\item \textit{Clean Model:} in this step, the chosen subsystem (to be translated) is restructured according to the pre-translation phase (see Sect.~\ref{subsec:translationsteps}): hierarchies are dissolved, routing blocks are removed and rate-transition blocks are inserted. In addition, every block at the desired hierarchy level is packaged into a subsystem and the connections are updated, since code generation is only possible for subsystems.
\item \textit{Generate SDF Code:} this function uses the \textit{Simulink Embedded Coder} and an SDF API to generate SDF-based embedded C code from the modified model of the previous step (see the example at the right of Fig.~\ref{fig:codegeneration}). In this case, embedded C code is first generated for each block at the chosen hierarchy level. The SDF-based C code is generated by using the predefined SDF library files (\texttt{SDFLib.h, SDFLib.c}, implemented according to the description in \cite{SDFImplementation2013}) that have already been loaded into the folder structure. The output consists of two files (\texttt{sdfg\_<Name>.h, sdfg\_<Name>.c}) for every SDFG, in which the actors and channels are defined and instantiated according to the translation concept. For each actor a corresponding function is generated (e.g. \texttt{Product\_actor()}, see Fig.~\ref{fig:codegeneration}), in which the data availability of every input channel is checked (implemented as a FIFO \texttt{queue}) and, if all inputs are read (e.g. \texttt{dequeue(q1, P\_U.In1)}), the actor executes its internal computation behavior (implemented in a step function, e.g. \texttt{Product\_step()}) and the results are written into its output channels (e.g. \texttt{enqueue(q3, P\_Y.Out1)}). In addition, a basic valid static schedule is generated and implemented for the SDFG (see \texttt{sdfg\_step()} in Fig.~\ref{fig:codegeneration}).
\item \textit{Generate Verification Model:} the last step targets the realization of a SIL simulation (see bottom right of Fig.~\ref{fig:codegeneration}). For this, we further enhance the code generator to allow the automatic integration of the generated SDF-compatible code into the C file of an \textit{S-function} block. The S-function block is then automatically generated and inserted into a newly created verification model. The verification model also includes the original subsystem (controller) with the environment model. The S-function has the same interfaces as the original subsystem, which allows a seamless SIL simulation with the environment model. In this way, the functional equivalence of the translated model and the original one can be verified automatically.
\end{itemize}
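To illustrate the shape of the generated code, the following sketch assembles the function and queue names quoted above into a complete actor function; the availability check \texttt{tokens\_available()} is a placeholder for the corresponding SDFLib primitive, and the port structures are assumptions on our side.
\begin{verbatim}
void Product_actor(void)
{
    /* fire only when each input queue holds a token */
    if (tokens_available(q1, 1) && tokens_available(q2, 1)) {
        dequeue(q1, P_U.In1);     /* read input tokens           */
        dequeue(q2, P_U.In2);
        Product_step();           /* internal block computation  */
        enqueue(q3, P_Y.Out1);    /* write the result token      */
    }
}
\end{verbatim}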
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=0.75\textwidth]{SIL-Transmission}
\end{center}
\caption[Motorcontrol]{SDF code-generation of the transmission controller
model \cite{SIL_Transmision} (with \textit{slow-to-fast} transitions)}
\label{fig:TransSim2SDF}
\end{figure*}
\begin{figure*}[!ht]
\subfloat[Model-In-the-Loop Simulation Results\label{subfig-1:MIL-Trans}]{%
\includegraphics[width=0.5\textwidth]{TransSILResultsRef}
}
\hfill
\subfloat[Software-In-the-Loop Simulation Results\label{subfig-2:SIL-Trans}]{%
\includegraphics[width=0.5\textwidth]{TransSILResultsSDF}
}
\caption{Verification results of the Transmission control model showing equivalent
outputs of the SIL (see Fig.~\ref{subfig-2:SIL-Trans}) and the MIL (see
Fig.~\ref{subfig-1:MIL-Trans}) simulations.}
\label{fig:TransmissionResults}
\end{figure*}
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=0.97\textwidth]{SIL-ClimateCtrl}
\end{center}
\caption[Motorcontrol]{SDF code-generation of the single-rate \textit{triggered}
Climate controller model \cite{SIL_Heat}}
\label{fig:HeatSim2SDF}
\end{figure*}
\begin{figure*}[!ht]
\subfloat[Model-In-the-Loop Simulation Results\label{subfig-1:MIL-Heat}]{%
\includegraphics[width=0.5\textwidth]{HeatSILResultsRef}
}
\hfill
\subfloat[Software-In-the-Loop Simulation Results\label{subfig-2:SIL-Heat}]{%
\includegraphics[width=0.5\textwidth]{HeatSILResultsSDF}
}
\caption{Verification results of the Heat control model showing equivalent outputs of
the SIL
(see
Fig.~\ref{subfig-2:SIL-Heat}) and the MIL (see
Fig.~\ref{subfig-1:MIL-Heat}) simulations.}
\label{fig:HeatResults}
\end{figure*}
\section{Evaluation}\label{Evaluation}
We have conducted two experiments to demonstrate the viability of our approach by translating a Transmission Controller Unit (TCU) model (cf. \cite{SIL_Transmision}) and a Climate Controller model (cf. \cite{SIL_Heat}) each into a corresponding SDFG and by generating in each case equivalent SDF C code.
The TCU model depicted in Fig.~\ref{fig:TransSim2SDF} is a typical model exhibiting multi-rates. The translation of the TCU subsystem (seen at the bottom of Fig.~\ref{fig:TransSim2SDF}) was straightforward, since the model respected (by construction) the constraints stated in Sect.~\ref{subsec:constraints}. Fig.~\ref{fig:TransmissionResults} shows that the outputs (impeller torque, output torque) of both the reference TCU and the generated SDF-compatible TCU code are equivalent.
More complexity is exhibited by the Automatic Climate Control System (seen in Fig.~\ref{fig:HeatSim2SDF}), of which the \textit{Heater controller} subsystem was translated. In addition to the variety of blocks used, the Heater subsystem is a triggered subsystem which only executes when the enable signal is true. As seen in the generated SDFG (see Fig.~\ref{fig:HeatSim2SDF}), the \texttt{Enable} actor is connected via extra-created channels to all actors within the \textit{Heater\_Control} SDFG. Only if a true value arrives on these dedicated channels is the corresponding actor activated to perform its internal computation. If this is not the case, the actor reads its input queues, skips the computation part (the step function) and updates its output queues with the values of the previous step's results. The SIL and MIL results of this experiment also show equivalent values, as depicted in Fig.~\ref{fig:HeatResults}, demonstrating a functionally equivalent SDF code generation.
\section{Conclusion}
In this work, a translation approach from Simulink models (respecting defined rules) to SDFGs was presented. Thanks to the automated generation of SDF code from the original Simulink model and the Software-in-the-Loop simulation, tests can be automated to show the functional equivalence of this translation. The translation was demonstrated successfully with a medium-sized Transmission Controller Unit model from the automotive domain and with a Climate Controller use case.
In future work, we will look into the possibility of optimizing the code generation of Simulink models for MPSoCs. For this, we can make use of the generated SDF code and of mature optimization/parallelization techniques from the SDF research domain \cite{bhattacharyya_clustering, bhattacharyya_synthesis_1999} to enable efficient implementations of embedded systems.
\section{Acknowledgments}
This work has been partially supported by the SAFEPOWER project with funding
from the European Union's Horizon 2020 research and innovation programme under
grant agreement No 646531.
\newpage
\bibliographystyle{abbrv}
\section{Introduction} \label{Abschn: Einleitung}
The center vortex picture is one of the most intuitive and prolific explanations of colour confinement in strong interactions. It was first proposed by Mack and Petkova \cite{mack}, but lay dormant until the advent of new gauge fixing techniques which permitted the detection of center vortex structures directly within lattice Yang-Mills configurations \cite{DelDebbio}. These numerical studies have revealed a large amount of evidence in favour of a center vortex picture of confinement: The center vortex density detected on the lattice in the maximal center gauge after center projection properly scales with the lattice constant in the continuum limit and therefore must be considered a physical quantity \cite{Langfeld:1997jx}.
string tension is lost in the temporal Wilson loop. Conversely, keeping the center vortex
configurations only, the static quark potential extracted from the temporal Wilson loop
is linearly rising at all distances \cite{DelDebbio}. Center vortices also seem to
carry the non-trivial topological content of gauge fields: the Pontryagin index can
be understood as self-intersection number of center vortex sheets in four Euclidean
dimensions \cite{Engelhardt:1999xw,Reinhardt:2001kf} or in terms of the writhing number
of their 3-dimensional projection which are loops \cite{Reinhardt:2001kf}.
For the colour group $SU(2)$, attempts to restore the structure of the underlying (fat) vortices
suggest that the topological charge also receives contributions from the colour structure of
self-intersection regions of such fat vortices \cite{Nejad:2015aia,Nejad:2016fcl}.
Removing the center vortex content of the gauge fields makes the field configuration topologically trivial and simultaneously restores chiral symmetry. The Pontryagin index
\cite{Bertle:2001xd} as well as the quark condensate \cite{Gattnar:2004gx,Hollwieser:2008tq}
are both lost when center vortices are removed, see also
\cite{Reinhardt:2003ku,*Reinhardt:2002cm}. In the case of $SU(3)$,
this link of center vortices to both confinement and chiral symmetry breaking
has also been observed directly in lattice simulations of the low lying hadron
spectrum \cite{OMalley:2012}.
Finally, the center vortex picture also gives
a natural explanation of the deconfinement phase transition which appears as a
depercolation transition from a confined phase of percolating vortices to a smoothly
interacting gas of small vortices winding dominantly around the compactified
Euclidean time axis \cite{Engelhardt:1999fd}.
Center vortices detected on the lattice after center projection form loops in $D = 3$ dimensions
and surfaces in $D = 4$; in both cases, they live on the \emph{dual} lattice and are closed
due to Bianchi's identity. While a gas of closed loops can be treated analytically, see
e.g.~\cite{Oxman:2017tel}, an ensemble of closed sheets is described by string theory,
which has to be treated numerically. The main features of $D = 4$ center vortices detected
on the lattice after center projection, such as the emergence of the string tension or
the order of the deconfinement transition, can all be reproduced in an effective
\emph{random center vortex model}: in this approach, vortices are described on a rather
coarse dual lattice (to account for the finite vortex thickness), with the action given
by the vortex area (Nambu-Goto term) plus a penalty for the curvature of the vortex sheets
to account for vortex stiffness \cite{Engelhardt:1999wr,engelhardt,Quandt:2004gy}.
The model was originally formulated for the gauge group $SU(2)$ \cite{Engelhardt:1999wr}
and later extended to $SU(3)$ in Ref.~\cite{engelhardt}.
The $SU(3)$ group has two non-trivial center elements
$z_{1/2} = e^{\pm i 2 \pi/3}$ which are related by $z^2_1 = z_2 \, , \, z^2_2 = z_1$.
Due to this property two $z_1$ center vortices can fuse to a single $z_2$ vortex
sheet and vice versa (see Fig.~\ref{fig:2} below).
This vortex branching is a new element absent in the gauge group $SU(2)$. In Ref.~\cite{engelhardt} it was found within the random center vortex model that the deconfinement phase transition is accompanied by a strong reduction of vortex branching and fusion. In the present paper we investigate the branching of center projected lattice vortices found in the maximal center gauge.
This paper is organized as follows: In section \ref{sec:branch} we describe the geometrical and physical properties of vortex branching and develop the necessary quantities to study this new phenomenon on the lattice. Section \ref{sec:setup} gives details on our numerical setup and the lattice parameters and techniques used in the simulations. The results are presented and discussed in section \ref{sec:results}, and we close with a short summary and an outlook on future investigations.
\section{Center vortex branching points}
\label{sec:branch}
On the lattice, center vortices are detected by first fixing all links $U_\mu(x)$ to
a suitable center gauge, preferably the so-called \emph{maximal center gauge} (MCG),
cf.~eq.~(\ref{mcg}) below.
This condition attempts to find a gauge transformation which brings each link, on average,
as close as possible to a center element. The transformed links are then projected on
the nearest center-element, $U_\mu(x) \to Z_\mu(x) \in \mathbb{Z}_N$, and since it was
already close, we can hope that the resulting $\mathbb{Z}_N$ theory preserves the relevant
features of the original Yang-Mills theory. In fact, it has been shown that the
string tension is retained to almost $100\%$ under center projection for the colour
group $G=SU(2)$ and still to about $62\%$ for $G=SU(3)$ \cite{langfeld}, while the
string tension disappears for all $G$ if vortices are removed \cite{DelDebbio, forcrand}.
Also the near-zero modes of the Dirac operator relevant for chiral symmetry breaking disappear
if vortices are removed from the physical ensemble \cite{Gattnar:2004gx,Hollwieser:2008tq}.
The center projected theory is much simpler to analyze. Since all links are center-valued
after projection, so are the plaquettes. If such a center-valued plaquette happens
to be non-trivial, it is said to be pierced by a center vortex, i.e.~the corresponding
\emph{dual} plaquette is considered part of a center vortex world sheet. For $G=SU(3)$,
in particular, we associate a center projected plaquette $Z_{\mu\nu}(x)$
in the original lattice with a \emph{triality} $q_{\alpha\beta}(x^\ast) \in \{0,1,2\}$
on the dual lattice, where
\begin{align}
Z_{\mu\nu}(x) = \exp\left[i\,\frac{\pi}{3}\,\epsilon_{\mu\nu\alpha\beta}\,
q_{\alpha\beta}(x^\ast)\right]\,.
\label{triality}
\end{align}
Here, the usual sum convention over Greek indices is in effect, and the footpoint of the dual
plaquette is defined as $x^\ast = x + (\mathbf{e}_\mu + \mathbf{e}_\nu -
\mathbf{e}_\alpha-\mathbf{e}_\beta)/2$. As the reader may convince herself, this assignment
is such that the original and the dual plaquette link with each other. The triality can be viewed
as a quantized flux of field strength flowing through the original plaquette. It is,
however, only defined modulo $N=3$ so that a $q=1$ vortex is equivalent to $q=-2$, which
in turn is a $q=2$ vortex with opposite direction of flux. This ambiguity gives rise to different
geometrical interpretations (see Fig.~\ref{fig:2}), but it does not affect the quantities
studied in the present work. The vortex world sheet itself is now composed of all
connected non-trivial dual plaquettes. This world sheet may \emph{branch} along
links of the dual lattice where three or more vortex plaquettes join, cf.~the left
panel of Fig.~\ref{fig:1}.
For the actual measurement, we study the branching in the original
lattice, where the branching link is dual to an elementary cube, while the plaquettes
attached to the branching link are dual to the plaquettes on the surface of the cube.
Geometrically, this can be visualized in the 3D slice\footnote{Such slices are obtained
by holding either the Euclidean time coordinate $x_0$ (\emph{time slice}) or a
space coordinate $x_i$ (\emph{space slice}) fixed.} of the original lattice which
contains the cube, cf.~Fig.~\ref{fig:1}: in this slice, the vortex plaquettes
are projected onto links which are dual to the non-trivial plaquettes and represent the
center flux through the plaquettes. Vortex matter thus appears as a network of closed lines
composed of non-trivial dual links. These thin lines are the projection vortices in which
the center flux of the unprojected (thick) vortex is compressed into a narrow tube with
a cross section of only a single plaquette.
Vortex branching in a 3D slice occurs at \emph{branching points} which
are the projection of the branching links in the 4D
lattice.
Geometrically, the branching points are located in the middle of the cubes dual to
the branching links as illustrated in Fig.~\ref{fig:1}: the vortex lines
entering an elementary cube must pierce the plaquettes on its surface,
and so up to six vortices can join at any given point of the dual 3D
slice.\footnote{Equivalently, up to six vortex plaquettes in $D=4$ can join a common branching link.}
We call this number $\nu(x^\ast) \in \{0,\ldots,6\}$ of vortex lines joining at a site
$x^\ast$ of the dual 3D slice its \emph{branching genus}. Clearly, $\nu=0$ means that
no vortex passes through $x^\ast$, while $\nu=2$ means that a vortex goes in and out
without branching (but possibly changing its direction).
The cases $\nu=4$ and $\nu=6$ correspond to vortex self-intersections (or osculation
points), which are also present in the case of $G=SU(2)$.
The odd numbers $\nu=3$ and $\nu=5$, however, are genuine vortex branchings
which cannot be observed in $SU(2)$ and are thus a new feature
of the center projected theory for the more complex colour group $SU(3)$.
In the present study, we investigate the distribution of branching points in
3D slices across the deconfinement phase transition.
\begin{figure}[t]
\begin{center}
\includegraphics[width=4cm]{branch3D}
\hspace*{2cm}
\includegraphics[width=4cm]{cube}
\hspace*{2cm}
\includegraphics[width=3cm]{branch2b}
\end{center}
\caption{Illustration of vortex branching. The single and double arrows on the lines
represent triality $q=1$ and $q=2$, respectively. The left figure represents a $\nu=3$
vortex branching in the full $4D$ lattice. The graphic in the middle shows the same
situation from a 3D slice, where the vortex plaquettes are replaced by
three flux tubes joining at a branching point $x^\ast$. The tubes enter the elementary
cube surrounding $x^\ast$ by piercing three of its six surface plaquettes. The
right figure gives a simplified picture where only the branching vortex lines are displayed.}
\label{fig:1}
\end{figure}
It should also be mentioned that the case $\nu=1$ would represent a vortex end-point
which is forbidden by Bianchi's identity, i.e.~flux conservation modulo 3. More precisely,
Bianchi's identity in the present case states that the sum of the trialities of
all plaquettes in an elementary cube of a 3D slice must vanish modulo $N$ (the number of
colours). This holds even for cubes on the edge of the lattice if periodic boundary
conditions are employed. Clearly, this rule is violated if the cube has only $\nu=1$
non-trivial plaquette, which is hence forbidden. In our numerical study, the number of
$\nu=1$ branching points must then be exactly zero, which is a good test on our
algorithmic book-keeping.
Finally, we must also stress that $\nu=6$ branchings for the colour group $G=SU(2)$
are \emph{always} self-intersections or osculation points, while they can also be interpreted
as \emph{double vortex branchings} in the case of $G=SU(3)$. With the present technique,
we cannot keep track of the orientation of vortices (i.e.~the direction of vortex flux),
and hence are unable to distinguish double branchings from complex self-intersections.
Fortunately, $\nu=6$ branching points are so extremely rare that they can be neglected
entirely for our numerical analysis. If we speak of vortex branching, we thus always
mean the cases $\nu=3$ and $\nu=5$, which only exist for $G=SU(3)$, and for which all
possible interpretations involve a single vortex branching.
Table \ref{tab:2} summarizes again the different sorts of
vortex branchings and their geometrical meaning.
\renewcommand{\arraystretch}{1.2}
\begin{table}[h!]
\centering
\begin{tabular}{c|l}
\toprule
$\nu=0$ & no vortex\\
$\nu=1$ & vortex endpoint, forbidden by Bianchi's identity\\
$\nu=2$ & non-branching vortex\\
$\nu=3$ & simple vortex branching\\
$\nu=4$ & vortex self-intersection/osculation\\
$\nu=5$ & complex vortex branching\\
$\nu=6$ & complex vortex self-intersection/osculation/double branching
\\ \botrule
\end{tabular}
\caption{Possible vortex branching types and their geometrical interpretation.
As explained in Fig.~\ref{fig:2}, there is some arbitrariness in
the geometrical picture, while the branching genus $\nu$ is independent of all conventions.}
\label{tab:2}
\end{table}
\section{Numerical setup}
\label{sec:setup}
We simulate $SU(3)$ Yang-Mills theory on a hypercubic lattice using the standard Wilson
action as a sum over all plaquettes $U_P \equiv U_{\mu \nu}(x)$
\begin{align}
S=\sum_P \left[1-\frac{1}{2N}\,\mathrm{tr}(U_P + U_P^\dagger )\right].
\end{align}
Configurations are updated with the pseudo-heatbath algorithm due to Cabibbo and Marinari
\cite{su3heatbath} applied to a full set of $SU(2)$ subgroups. To study finite temperature,
we reduce the extent $L_t$ of the Euclidean time direction, while keeping the spatial
extent $L_s \gg L_t$ to eliminate possible finite size effects,
\begin{align}
T = \frac{1}{a(\beta)\,L_t}\,.
\end{align}
Since the variation of $L_t$ only allows for a rather coarse temperature grid, we have
also varied the lattice spacing $a(\beta)$ by considering three different couplings
$\beta$ within the scaling window.\footnote{Finer temperature resolutions through the
use of anisotropic lattices proved to be unnecessary for the present investigation.}
Table \ref{tab:1} lists the lattice extents and coupling constants used
in our simulations.
For each run, the lattice was thermalized using at least 100 heatbath sweeps, and
measurements were then taken on $70$ to $200$ thermalized configurations (depending on $L_t$),
with $10$ sweeps between measurements to reduce auto-correlations. For each measurement,
the following sequence of steps was performed:
\begin{figure}[t]
\begin{center}
\includegraphics[width=2.5cm]{branch2b}
\hspace*{2cm}
\includegraphics[width=2.5cm]{branch2a}
\\[2mm]
\includegraphics[width=2.5cm]{branch4a}
\hspace*{1cm}
\includegraphics[width=2.5cm]{branch4b}
\hspace*{1cm}
\includegraphics[width=2.5cm]{branch4c}
\end{center}
\caption{Ambiguities in the interpretation of $SU(3)$ vortex branching.
In the top line, the simple branching of a $q=2$ vortex on the left can be
equivalently described as three $q=1$ vortices emanating from a common source,
i.e.~as a $\mathbb{Z}_3$ center monopole. Similarly, the
self-intersection of a $q=1$ vortex in the bottom line (left), is equivalent to
an osculation point of e.g. two $q=1$ vortices (middle) or a $q=1$ and a
$q=2$ vortex (right).}
\label{fig:2}
\end{figure}
\medskip
\noindent\paragraph{Gauge fixing to maximal center gauge (MCG):} This is achieved by maximizing
the functional
\begin{align}
F=\frac{1}{V} \sum\limits_{\{x,\mu\}} \left|\frac{1}{N}\,\mathrm{tr}\,U_\mu(x)\right|^2\,,
\label{mcg}
\end{align}
under gauge rotations, where $N=3$ is the number of colours and $V=\prod_\mu L_\mu$ is the
lattice volume. The main gauge fixing algorithm used in this study is iterated
overrelaxation \cite{overrelax} in which the local quantity
\begin{align}
F_x = \sum_\mu \left( \bigl|\mathrm{tr}\, \big\{\Omega (x) U_\mu (x)\big\} \bigr|^2 +
\bigl|\mathrm{tr} \, \big\{U_\mu (x-\hat{\mu})\Omega^\dagger (x)\big\} \bigr|^2\right)
\end{align}
is maximized with respect to a local gauge rotation $\Omega(x) \in SU(3) $
at each lattice site $x$. We stop this process when the largest relative change of $F_x$
at all sites $x$ falls below $10^{-6}$. More advanced gauge fixing techniques such as
simulated annealing \cite{gf_anneal} or Landau gauge preconditioners
\cite{gf_landau} from multiple random initial gauge copies have also been
tested. While such methods are known to have a significant effect on the
propagators of the theory in any gauge \cite{gf_green, gf_green2, *gf_green3},
we found that they have very little effect, at our lattice sizes, on the gauge
fixing functional and the vortex geometry investigated here. For the production
runs, we have therefore reverted to simple overrelaxation with random starts.
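\medskip
For illustration, the outer structure of this procedure can be summarized in a few lines of Python. The following is a minimal sketch rather than our production code: the function names are ours, the local $SU(3)$ maximization over $SU(2)$ subgroups is assumed to be supplied through \texttt{local\_update}, and for brevity the sketch monitors the relative change of the global functional (\ref{mcg}) instead of the per-site changes of $F_x$.
\begin{verbatim}
import numpy as np

def mcg_functional(links, N=3):
    # F = (1/V) sum_{x,mu} |tr U_mu(x) / N|^2, cf. eq. (mcg)
    # (up to the overall normalization); `links` is an array of
    # SU(N) matrices of shape (..., N, N).
    traces = np.trace(links, axis1=-2, axis2=-1)
    return np.mean(np.abs(traces / N)**2)

def fix_mcg(links, local_update, tol=1e-6, max_sweeps=10000):
    # Iterate overrelaxation sweeps until the relative change of
    # the gauge fixing functional falls below `tol`.
    F_old = mcg_functional(links)
    for _ in range(max_sweeps):
        links = local_update(links)  # one sweep of local maximizations
        F_new = mcg_functional(links)
        if abs(F_new - F_old) <= tol * abs(F_old):
            break
        F_old = F_new
    return links
\end{verbatim}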
\medskip
\noindent\paragraph{Center projection:} Once a configuration is fixed to MCG, each link
is projected to its closest center element $U_\mu(x) \rightarrow Z_\mu (x)$ by first
splitting off the phase
\begin{equation}
\mathrm{tr}\, U_\mu(x) = \left|\mathrm{tr}\, U_\mu(x)\right| \cdot
e^{ 2 \pi i \delta_\mu / N},
\end{equation}
which defines $\delta_\mu \in \mathbb{R}$ modulo $N$. After rounding $(\delta_\mu \,\mathrm{mod}\, N)$
to the closest integer $q_\mu \in [0,N-1]$, we can then extract the center projected link as
\begin{align}
Z_\mu(x) \equiv \exp\left(i\,\frac{2\pi}{N}\,q_\mu\right)\mathbb{1} \in \mathbb{Z}_N\,.
\end{align}
In the case of $SU(3)$, we will call the integer $q_\mu \in \{0,1,2\} $ the \emph{triality} of
a center element. As mentioned earlier, the triality is only defined modulo 3,
i.e.~$q_\mu= -2$ is identical to $q_\mu=1$. While this ambiguity alters the geometric interpretation
of a given vortex distribution (cf.~Fig.~\ref{fig:2}),
both the existence of a vortex branching point and its genus (the number of vortex lines meeting at
the point) are independent of the triality assignment.
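As a minimal illustration, the projection of a single link can be written as follows in Python, assuming \texttt{U} is given as a numerical $SU(3)$ matrix:
\begin{verbatim}
import numpy as np

def center_project(U, N=3):
    # tr U = |tr U| exp(2 pi i delta / N) defines delta modulo N;
    # round delta to the nearest integer to get the triality q.
    delta = np.angle(np.trace(U)) * N / (2.0 * np.pi)
    q = int(np.rint(delta)) % N          # q in {0, ..., N-1}
    Z = np.exp(2j * np.pi * q / N) * np.eye(N)
    return Z, q
\end{verbatim}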
\medskip
\noindent\paragraph{Vortex identification:} After center projection, all links are center valued,
and so are the projected plaquettes. If such a center-valued plaquette happens
to be non-trivial, we interpret this as a center vortex piercing the plaquette, i.e. the
corresponding dual plaquette is part of the center vortex world sheet. The exact formula for
the triality assignment of the vortex plaquettes was given in eq.~(\ref{triality}) above. For the
computation of the area density of vortices, it is sufficient to consider a 2D plane in the
original lattice and count the number of non-trivial plaquettes after center projection.
\medskip
\noindent\paragraph{Branching points:} As explained earlier, center vortices appear within a
time or space slice as a network of links on the lattice dual to the slice. At each
point $x^\ast$ of this dual 3D slice, between $\nu=0,2,\ldots,6$ vortex lines may join.
Since the point $x^\ast$ is the center of an elementary cube of the original time or
space slice, the vortices joining in $x^\ast$ must enter or exit the cube and hence
pierce some or all of the six plaquettes on its surface. We can thus determine
$\nu(x^\ast)$ simply by counting the number of non-trivial plaquettes on elementary
cubes in 3D slices of the lattice, and assign it to the possible branching point
$x^\ast$ in the middle of the cube.
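In code, this measurement reduces to a simple counting loop. The sketch below is schematic: it assumes that the six face trialities of every elementary cube of the slice have already been collected into an iterable \texttt{cubes}, and it uses the absence of $\nu=1$ cubes as the Bianchi book-keeping check described in section \ref{sec:branch}.
\begin{verbatim}
import numpy as np

def branching_genus(faces):
    # nu(x*): number of non-trivial plaquettes on the cube surface
    return int(np.count_nonzero(np.asarray(faces) % 3))

def genus_distribution(cubes):
    # Tally N_nu over all elementary cubes of a 3D slice;
    # Bianchi's identity forbids nu = 1 (vortex endpoints).
    N = np.zeros(7, dtype=int)
    for faces in cubes:
        nu = branching_genus(faces)
        assert nu != 1, "vortex endpoint: book-keeping error"
        N[nu] += 1
    return N   # branching points in the slice: N[3] + N[5]
\end{verbatim}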
\renewcommand{\arraystretch}{1.2}
\setlength{\tabcolsep}{8pt}
\begin{table}[t!]
\centering
\begin{tabular}{c||ccccc|ccccc|ccccc}
\toprule
$\beta$& \multicolumn{5}{c|}{$5.8$} & \multicolumn{5}{c|}{$ 5.85 $} &
\multicolumn{5}{c}{$ 5.9 $}\\
$ L_t$ & 3 & 4 & 5 & 6 & 9 & 4 & 5 & 6 & 7 & 10 & 4 & 5 & 6 & 7 & 10 \\
\# configs &
140 & 102 & 106 & 90 & 92 & 122 & 98 & 80 & 73 & 73 & 119 & 107 & 75 & 74 & 78
\\ \botrule
\end{tabular}
\caption{\label{tab:1} Parameters for the finite temperature simulations. The last
row gives the number of configurations used for measurements, and the spatial
lattice size was $L_s = 24$ in all cases.}
\end{table}
\section{Results}
\label{sec:results}
The vortex area density is known to be a physical quantity in the sense that it scales
properly with the lattice spacing $a(\beta)$ (see below) \cite{Langfeld:1997jx}. This entails that the overall
amount of vortex matter quickly decays with increasing coupling $\beta$. To improve the
statistics, we therefore choose coupling constants $\beta$ near the lower end of the scaling
window $ 5.7 \lesssim\beta \lesssim 7 $, cf.~table \ref{tab:1}. Since this implies a
rather coarse lattice, we must ensure that the lattice size in the short time direction
does not become too small. For the values of $\beta$ chosen in our simulation,
$L_t = \big[a(\beta)\,T\big]^{-1} \gg 1$ for temperatures at least up to $T\lesssim 2 T^\ast$, which is entirely
sufficient for the present purpose. We have also checked that increasing the spatial volume
from $L_s=16$ to $L_s=24$ has only marginal effects on the results, so that finite volume
errors are also under control. In the final results, we only include the findings for the
larger lattice extent $L_s = 24$.
\begin{figure}[t!]
\centering
\includegraphics[width = 0.6 \textwidth]{fldichte.pdf}
\caption{Vortex area density near the phase transition}
\label{fig:3}
\end{figure}
The properties of vortex matter are intimately related to the choice and implementation
of the gauge condition, as well as to the absence of lattice artifacts. In particular,
the vortex area density only survives the continuum limit if MCG is chosen and implemented
accurately, and the lattice spacing is sufficiently small to suppress artifacts.
As an independent test of these conditions, we have therefore re-analyzed the area
density $\rho$ of vortex matter. In lattice units, this is defined as the ratio
\begin{align}
\hat{\rho}(\beta) = a(\beta)^2\,\rho =
\frac{\# \text{non-trivial\, center plaquettes}}{\# \text{total\, plaquettes}}
\label{xvdens}
\end{align}
in every 2D plane within the lattice. (We average over all planes in the full lattice
or in appropriate 3D slices in order to improve the statistics.) After gauge fixing
and center projection, the measurement of the vortex density is therefore a simple
matter of counting non-trivial plaquettes. If we assume that the vortex area density
is a physical quantity that survives the continuum limit, we should have
$\rho = c\,\sigma$, where $\sigma$ is the physical string tension and $c$ is a
dimensionless numerical constant. A random vortex scenario \cite{mack} entails
$\sigma = \frac{3}{2} \rho$ for $G=SU(3)$ which corresponds to $c=0.67$.
Previous lattice studies found a somewhat smaller value of about $c=0.5$ instead,
indicating that the random vortex picture for MCG vortices at $T=0$ is not always
justified \cite{langfeld}. In lattice units, these findings translate into
\begin{align}
\frac{\hat{\rho}(\beta)}{\hat{\sigma}(\beta)} =
\frac{a(\beta)^2\,\rho}{a(\beta)^2\,\sigma} = \frac{\rho}{\sigma} = c \simeq 0.5
\qquad\qquad\text{indep.~of $\beta$ in scaling window}\,.
\label{vdens}
\end{align}
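Operationally, eq.~(\ref{xvdens}) amounts to a plain count of non-trivial plaquettes. A minimal Python sketch, assuming the projected trialities $q_{\mu\nu}(x)$ of one 2D plane are stored in an integer array:
\begin{verbatim}
import numpy as np

def vortex_area_density(triality_plane):
    # eq. (xvdens): fraction of non-trivial center plaquettes
    q = np.asarray(triality_plane) % 3
    return np.count_nonzero(q) / q.size
\end{verbatim}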
For our values of the coupling as in table \ref{tab:1}, we have not measured the
area density $\hat{\rho}(\beta)$ at $T=0$ directly, but instead took the data from the
largest temporal extent $L_t = 10$ which corresponds to a temperature
$T/T^\ast \approx 0.55$ deep within the confined phase. Since the string tension and
the vortex density do not change significantly until very close to the phase transition,
the $L_t=10$ data should still be indicative for the values at $T=0$.
From these results and the string tension data $\hat{\sigma}(\beta)$ in Ref.~\cite{lucini},
the ratio (\ref{vdens}) can then be determined as follows:
\renewcommand{\arraystretch}{1.2}
\setlength{\tabcolsep}{10pt}
\begin{table}[h]
\centering
\begin{tabular}{c|ccc}
\toprule
$\beta$& $5.8 $ &\ $ 5.85 $ & $ 5.9 $\\
$c$ & $0.558$ & $0.573$ & $0.591$
\\ \botrule
\end{tabular}
\label{tab:3}
\end{table}
\noindent
As can be seen from this table, the ratio (\ref{vdens}) is indeed roughly constant
in the considered coupling range, and also in fair agreement with previous lattice
studies \cite{langfeld}, given the fact that we did not really make a $T=0$ simulation.
In addition, inadequately gauge fixed configurations would show increased randomness which
would lead to a significant drop in the vortex density as compared to the string tension
data from Ref.~\cite{lucini}, and hence a much smaller value of $c$.
We thus conclude that our chosen lattice setup and gauge fixing algorithm are sufficient for
the present investigation.
\medskip
Next we study the finite temperature behaviour of vortex matter.
The critical deconfinement temperature for $G=SU(3)$ is given by
$T^\ast / \sqrt{\sigma} \approx 0.64$ \cite{lucini}. Since we do not measure the string tension
independently, we can use eq.~(\ref{vdens})
\begin{align}
\frac{T^\ast}{\sqrt{\rho_0}} = \frac{T^\ast}{\sqrt{\sigma}}\,\sqrt{\frac{\sigma}{\rho_0}}
= \frac{T^\ast / \sqrt{\sigma}}{\sqrt{c}} \approx 0.90
\end{align}
to determine the critical temperature in units of the zero-temperature vortex density
$\rho_0 \equiv \rho(T=0)$ which sets the scale in our simulations. In absolute units,
\begin{align}
\sqrt{\rho_0} = \sqrt{c\,\sigma} \approx 330\,\mathrm{MeV}\,.
\end{align}
From the results in Fig.~\ref{fig:3} we see that there is roughly a 50\% drop
in the vortex density at the critical temperature, which is consistent with
the findings of Ref.~\cite{langfeld}. A complete loss of
vortex matter at $T^\ast$ would mean that both the temporal \emph{and} spatial string
tension would vanish in the deconfined phase, contrary to lattice results \cite{sigma_spatial}.
What happens instead is a \emph{percolation phase transition} in which the geometric
arrangement of vortices changes from a mostly random ensemble to a configuration in
which most vortices are aligned along the short time direction \cite{Engelhardt:1999fd}.
Since this leads to a
nearly vanishing vortex density in space slices while the average density only
drops mildly, the density in time slices and the associated spatial string tension
must even increase for $T > T^\ast$.
These considerations imply that a good order parameter for confinement in the
vortex picture should be sensitive to the randomness or order in the geometric
arrangement of vortex matter and, as a consequence, should behave differently
in temporal or spatial 3D slices of the lattice. A prime candidate in
$SU(3)$ Yang-Mills theory is the 3-volume density of \emph{branching points},
since it is directly defined in 3D slices and describes deviations of the
vortex cluster from a straight aligned ensemble. This has previously been
studied in the effective center vortex model \cite{engelhardt} where indeed
a significant drop of vortex branching was observed in the deconfined phase,
but not directly in lattice Yang-Mills theory.
\begin{figure}[t]
\begin{center}
\includegraphics[width=8.7cm]{rho_b_space}
\includegraphics[width=8.7cm]{rho_b_hut_space}
\end{center}
\caption{Scaling of the volume density of vortex branching in space slices of the lattice.
The physical density (\ref{rbx}) (\emph{left}) shows no apparent scaling violations.
For comparison, the dimensionless density (\ref{rb}) (\emph{right}) shows
the amount of scaling violations to be expected for the present range of couplings.
Error bars for the physical density are much larger since they also include
uncertainties in the physical scale taken from Ref.~\cite{lucini}.}
\label{fig:4}
\end{figure}
Since vortex branching implies a deviation from a straight vortex flow,
we expect that it is suppressed in the deconfined phase where most vortices
wind directly around the short time direction. In addition, the residual
branching for $T > T^\ast$ should be predominantly in a space direction
(since the vortices are already temporally aligned) and should hence be
mostly visible in \emph{time slices}, where the vortex matter is expected to
still form large percolating clusters. In space slices, by contrast,
vortices are mostly aligned (along the time axis) in the deconfined phase,
and the suppression of the remnant branching for $T > T^\ast$ should be
much more pronounced.
\bigskip\noindent
To test these expectations, we have measured the (dimensionless)
volume density of branching points
\begin{align}
\hat{\rho}_B \equiv \frac{\text{\# branching points in lattice dual to 3D slice}}
{\text{\# total sites in lattice dual to 3D slice}} =
\frac{\text{\# elementary cubes in 3D slice with $\nu \in \{3,5\}$ }}
{\text{\# all elementary cubes in 3D slice}}\,,
\label{rb}
\end{align}
by assigning the vortex genus $\nu \in \{0,\ldots,6\} $ to all elementary cubes
in a 3D slice, cf.~section \ref{sec:branch}, and counting them. (To improve the
statistics, we have averaged over space- and time slices separately using the
same thermalized configurations.) Generally, we find
\begin{enumerate}
\item vortex endpoints with $\nu=1$ do not appear, i.e.~vortices are closed
in accordance with Bianchi's identity;
\item vortex branchings are rare as compared to $\nu=2$ non-branching vortex matter;
\item complex vortex branchings with $\nu=5$ are very rare and significantly
reduced as compared to the simple branchings with $\nu=3$; numerically, the
$\nu=5$ branchings contribute with only $0.1\ldots 1.0 \%$ to the total
branching probability.
\end{enumerate}
To construct a quantity which has the chance of scaling to the continuum, we
must express the branching density in physical units,
\begin{align}
\rho_B(T,\beta) \equiv \frac{\hat{\rho}_B(T,\beta)}{a(\beta)^3}\,,
\label{rbx}
\end{align}
where $a(\beta)$ is the lattice spacing at coupling $\beta$, which we take from
Ref.~\cite{lucini}. Eq.~(\ref{rbx}) is
indeed a physical quantity, as can be seen directly from the result in Fig.~\ref{fig:4}
where the data for all $\beta$ considered here fall on a common curve. Since we only
considered a limited range of couplings $\beta$, one could be worried that
possible scaling violations in $\rho_B$ would not be very pronounced. As can be seen from
the right panel of Fig.~\ref{fig:4}, this is not the case: the dimensionless density (\ref{rb}),
for instance, exhibits large scaling violations which are clearly visible even for our
restricted range of couplings. This gives a strong indication that the branching density
$\rho_B(T)$ really survives the continuum limit, even though further simulations at large
couplings would be helpful to corroborate this fact.
\begin{figure}[t]
\begin{center}
\includegraphics[width=8.7cm]{rho_b_space}
\includegraphics[width=8.7cm]{rho_b_time}
\end{center}
\caption{Volume density of vortex branching points in physical units, measured in
space slices (\emph{left}) and time slices (\emph{right}).
Error bars include statistical errors and uncertainties
in the physical scale taken from Ref.~\cite{lucini}.}
\label{fig:5}
\end{figure}
From Fig.~\ref{fig:5}, the physical branching density indeed shows a
rapid drop at the critical temperature $T=T^\ast$, while it stays roughly
constant below and above $T^\ast$. In particular, the maximal value is
expected at $T \to 0$. We have not made independent measurements
at $T=0$, but the available data from $L_t=9$ and $L_t=10$ corresponding to
$T/T^\ast = 0.55$ should still be indicative for the value at zero temperature
since the vortex properties are known to show no significant change until very close
to the phase transition. With this assumption, we find, in absolute units,
\begin{align}
\rho_B(0) \approx 5.86 \,\mathrm{fm}^{-3} = (0.56\,\mathrm{fm})^{-3}\,.
\label{rb0}
\end{align}
There is also a remnant branching density in the deconfined phase, but this
is much smaller in space slices ($20\%$ of $\rho_B(0)$) than in time slices
($60\%$), in agreement with our geometrical discussion of vortex branching
above. In fact, the branching density in time slices even increases slightly
with the temperature within the deconfined phase.
Next, we want to demonstrate that the steep drop in the branching density
is \emph{not} due to an overall reduction of vortex matter itself, but
rather signals a geometrical re-arrangement. Instead of studying $\rho_B / \rho$
directly, we make a small detour and first introduce the
\emph{branching probability}
\begin{align}
q_B \equiv \frac{\text{\# elementary cubes in 3D slice with $\nu \in \{3,5\}$ }}
{\text{\# all elementary cubes in 3D slice with $\nu \neq 0$}}\,,
\label{qb}
\end{align}
which gives the likelihood that a vortex which enters an elementary cube of
edge length equal to the lattice spacing $a(\beta)$ will actually branch
within that cube. The branching probability $q_B$ itself cannot be a physical
quantity since it is expected to be proportional to the lattice spacing $a$ near the
continuum limit.\footnote{To see this, assume that the probability of branching
in a cube of edge length $a \ll 1$ is $q \ll 1$, and consider a cube of length $n a$
composed of $n^3$ sub-cubes of length $a$. Since vortices are stiff, most non-branching
vortices do not change their direction if $a \ll 1$ and just pass straight through $n$
sub-cubes. The probability of non-branching within the $n a$-cube is therefore
$(1-q)^n$ at small spacing, so that the branching probability in the $n a$-cube
becomes $1-(1-q)^n \approx n q$, i.e.~it is proportional to the edge length of the cube.}
This entails that the \emph{branching probability per unit length}
\begin{align}
w_B(T,\beta) \equiv \frac{q_B(T,\beta)}{a(\beta)}
\label{wb}
\end{align}
could be a physical quantity. As can be seen from Fig.~\ref{fig:6}, this is indeed the
case, since the data for $w_B$ at all available couplings fall on a common curve.
The temperature dependence of the physical quantity $w_B(T)$ is very similar to the
branching density in Fig.~\ref{fig:4}, with the drop at $T=T^\ast$ being reduced from
$75\%$ to about $50\%$. The qualitative features of the branching probability per unit
length are, however, very similar to the branching point density, and both are physical
quantities that scale to the continuum.
\begin{figure}[t]
\begin{center}
\includegraphics[width=8.7cm]{w_space}
\includegraphics[width=8.7cm]{w_time}
\end{center}
\caption{Branching probability per unit length (\ref{wb}) in physical units, measured
in space slices (\emph{left}) and time slices (\emph{right}). Error bars include
statistical errors and uncertainties in the physical scale taken from Ref.~\cite{lucini}.}
\label{fig:6}
\end{figure}
Next we want to show that the branching probability per unit length $w_B$ is actually
related to the ratio $\rho_B / \rho$ of branching points and vortex matter density.
To see this, we consider an arbitrary 3D slice containing $V$ sites and thus also
$V$ elementary cubes. The number of cubes of branching genus $\nu$ is denoted by $N_\nu$,
and obviously $\sum_{\nu=0}^6 N_\nu = V$. Then the dimensionless branching density (\ref{rb})
can be expressed with eq.~(\ref{qb}) as
\begin{align}
\hat{\rho}_B &= \frac{N_3 + N_5}{V}
= q_B \cdot \frac{\sum\limits_{\nu=2}^6 N_\nu}{V}
= 3 q_B\,\frac{\sum\limits_{\nu=2}^6 \big[\nu + (2-\nu)\big] N_\nu}{6V}
= 3 q_B\,\frac{\sum\limits_{\nu=2}^6 \nu N_\nu}{6V}\cdot \left \{ 1 -
\frac{\sum_{\nu=2}^6 (\nu-2) N_\nu}{\sum_{\nu=2}^6 \nu N_\nu}\right\}
= 3 q_B\,\hat{\rho}\,\lambda
\label{rel0}
\end{align}
with the dimensionless factor
\begin{align}
\lambda \equiv 1 - \frac{\sum\limits_{\nu=2}^6 (\nu-2) N_\nu}{\sum\limits_{\nu=2}^6 \nu N_\nu}
\in [0,1]\,.
\label{lambda}
\end{align}
In the last step in eq.~(\ref{rel0}), we have used the fact that a cube with branching
genus $\nu$ has $\nu$ non-trivial plaquettes on its surface, each of which is shared with an
adjacent cube. Thus, the sum $\sum_\nu \nu N_\nu$ counts every non-trivial plaquette twice,
and the dimensionless vortex area density eq.~(\ref{xvdens}) becomes, after averaging
over all planes in the 3D slice,
\[
\hat{\rho} = \frac{\frac{1}{2}\,\sum\limits_{\nu=0}^6 \nu N_\nu}{3 V}
= \frac{\sum\limits_{\nu=2}^6 \nu N_\nu}{6V}\,,
\]
since a 3D slice with $V$ sites and periodic boundary conditions contains
a total of $3V$ plaquettes. After inserting appropriate factors of the lattice
spacing in eq.~(\ref{rel0}), we obtain the exact relation
\begin{align}
\rho_B(T) = 3\,w_B(T)\,\rho(T) \,\lambda(T, a)\,.
\label{exa}
\end{align}
As indicated, the coefficient $\lambda$ may depend on the temperature and the
lattice spacing, but it must fall in the range $[0,1]$. As a consequence, we obtain
an exact inequality between physical quantities,
\begin{align}
\rho_B(T) \le 3 \,w_B(T)\, \rho(T)\,,
\label{ineq}
\end{align}
which must be valid at all temperatures. Moreover, the deviation from unity in
the coefficient $\lambda$ can be estimated, from eq.~(\ref{lambda}),
\begin{align*}
\lambda = 1 - \frac{\sum\limits_{\nu=2}^6 (\nu-2)N_\nu}{\sum\limits_{\nu=2}^6 \nu N_\nu}
= 1 - \frac{N_3 + N_5}{\sum\limits_{\nu=2}^6 \nu N_\nu} +
2 \,\frac{N_4 + N_5 + 2 N_6}{\sum\limits_{\nu=2}^6 \nu N_\nu} = 1 -
\frac{1}{6}\,\frac{\hat{\rho}_B}{\hat{\rho}} + \mathcal{O}\big(\frac{N_4}{N_2}\big)
= 1 - \frac{1}{6}\,\frac{\rho_B(T)}{\rho(T)}\,a + \mathcal{O}\big(\frac{N_4}{N_2}\big)\,.
\end{align*}
Here, the leading correction to unity vanishes in the continuum limit $a \to 0$ since both
$\rho_B$ and $\rho$ are physical. Furthermore, the next-to-leading term has the simple
branching $\nu=3$ removed and starts with the probability of self-intersection or osculation,
which is small and presumably also proportional to $a$, by the same argument that led from
eq.~(\ref{qb}) to eq.~(\ref{wb}) above. Thus, it is conceivable that
$\lambda(T,a) = 1 + \mathcal{O}(a)$ and eq.~(\ref{exa}) turns into the relation
\begin{align}
w_B(T) = \frac{1}{3}\,\frac{\rho_B(T)}{\rho(T)}
\label{conject}
\end{align}
for $a \to 0$. This relation is renormalization group invariant. We have tested this conjecture
numerically by computing the relevant coefficient $\lambda(T, a)$ from eq.~(\ref{lambda}).
The result is presented in Fig.~\ref{fig:7}, where we accumulate all available data for all
temperatures and lattice spacings. As can be seen, $\lambda$ is indeed in the range $[0,1]$,
independent of temperature and very close to unity. Since the overall statistical uncertainty
is about $5\%$ and our calculations were all done at the lower end of the scaling window
with a relatively large lattice spacing $a$, our numerics are at least compatible with
$\lambda=1$ and hence eq.~(\ref{conject}) in the continuum limit. Further calculations with
larger and finer lattices are clearly necessary to corroborate this conjecture.
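The algebra behind eq.~(\ref{rel0}) can also be cross-checked numerically for arbitrary genus counts. The following Python sketch uses made-up values of $N_\nu$, chosen only to verify the identity and not taken from our measurements:
\begin{verbatim}
V = 24**3                          # cubes in a 3D slice
N = {0: 10000, 2: 3500, 3: 300, 4: 20, 5: 3, 6: 1}  # hypothetical
assert sum(N.values()) == V

occ   = sum(n for nu, n in N.items() if nu != 0)
rho_B = (N[3] + N[5]) / V                            # eq. (rb)
q_B   = (N[3] + N[5]) / occ                          # eq. (qb)
rho   = sum(nu*n for nu, n in N.items()) / (6*V)     # area density
lam   = 1 - sum((nu - 2)*n for nu, n in N.items() if nu >= 2) \
          / sum(nu*n for nu, n in N.items())         # eq. (lambda)

assert abs(rho_B - 3*q_B*rho*lam) < 1e-12            # eq. (rel0)
\end{verbatim}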
\begin{figure}[t]
\begin{center}
\includegraphics[width=12cm]{lambda}
\end{center}
\caption{The ratio $\lambda$ of physical quantities from eq.~(\ref{lambda}).
Data comprises all available couplings and temperatures. Statistical errors
are generally at the $5\%$ level, but no error bars have been displayed to
improve the readability of the plot.}
\label{fig:7}
\end{figure}
Eq.~(\ref{conject}) shows that the drop of the branching density $\rho_B$ at the
phase transition is \emph{not} due to an overall reduction of vortex matter $\rho$,
since the branching probability per unit length, $w_B \sim \rho_B / \rho$ shows
the same qualitative behaviour as $\rho_B$, even after scaling out the overall vortex
density. The conclusion is that both the branching point density
$\rho_B(T)$ from eq.~(\ref{rbx}) and the branching probability $w_B(T)$ per unit length
eq.~(\ref{wb}) can be used as a reliable indicator for the phase transition,
and as a signal for the change in geometrical order of the vortices at the deconfinement
transition. Our findings in full Yang-Mills theory match the general expectations
discussed above and also comply with the predictions made in the random vortex
world-surface model \cite{engelhardt}.
\section{Conclusion}
In this work, we have studied the probability of center vortex branching within $SU(3)$
Yang-Mills theory on the lattice. The general expectation, confirmed only in models
so far, was that the branching probability should be sensitive to the geometry of
vortex clusters and thus provide an alternative indicator for the deconfinement
phase transition. We were able to corroborate this conjecture: both the branching
point density $\rho_B(T)$ and the branching probability per unit length $w_B(T)$
are independent of the lattice spacing and exhibit a steep drop at the critical
temperature, though a remnant branching probability remains even above $T^\ast$.
This effect is much more pronounced in space slices of the original lattice,
which clearly indicates a dominant alignment of vortices along the short time
direction within the deconfined phase. The same conclusion can be drawn from
the renormalization group invariant relation $w_B \sim \rho_B / \rho$, which
proves that the drop in the branching density is \emph{not} due to an overall
reduction of the vortex matter $\rho$, but instead must be caused by the change
in the geometry of the vortex cluster.
In future studies, it would be interesting to directly control the branching of
vortices and study its effect on confinement and on chiral symmetry breaking,
e.g.~through the Dirac spectrum in the background of such branching-free configurations.
The control over vortex branching could also address the obvious conjecture that the
different (first) order of the phase transition for $G=SU(3)$ as compared to the weaker
second order transition of $G=SU(2)$ is a result of the new geometrical feature
of vortex branching.
\section*{Acknowledgment}
This work was supported by Deutsche Forschungsgemeinschaft (DFG) under
contract Re 856/9-2.
\begin{center}
\section*{Summary}
\end{center}
\justify
One of the challenges of theoretical physics today is the unification of general relativity and quantum field theory or, equivalently, the formulation of a theory of quantum gravity. Both theories, well tested experimentally during the last century, present fundamental incompatibilities that have their origin in the role that spacetime plays in them (it is a dynamical variable in general relativity, and a static frame in quantum field theory). There have been numerous attempts to formulate a theory of quantum gravity, such as string theory, loop quantum gravity, causal set theory, etc. In some of these frameworks, spacetime acquires a fundamental and characteristic structure, very different from the notion of continuous spacetime of special relativity. However, neither is the dynamics of these theories fully understood, nor are they easily testable against experimental observations. At the beginning of this century a theory which is still germinating started to be developed, doubly special relativity. The starting point of this theory is completely different from that of the other approaches: it is not a fundamental theory, but rather it is considered a low-energy limit of a theory of quantum gravity that tries to study its possible residual elements. In particular, in doubly special relativity the relativity principle of Einstein is generalized, adding to the speed of light $c$ another relativistic invariant, the Planck length $l_p$. This idea may have experimental evidence, giving rise to what is known as quantum gravity phenomenology. On the other hand, doubly special relativity implies the existence of a deformed composition law for energy and momentum, which leads to a spacetime with nonlocal ingredients, an element that also appears in other approaches to quantum gravity.
In this thesis, after presenting the motivations for considering deformations of special relativity, we will study the role that changes of momentum variables play in a deformed relativistic kinematics, observing that there is a simple way to define a deformation by merely using a change of variables. We will see that one of the kinematics most studied in doubly special relativity models, $\kappa$-Poincaré, can be obtained through this method order by order. This leads to too many deformed composition laws, making us think that a mathematical or physical criterion might be necessary in order to restrict the possible kinematics.
In many works in the literature, a connection between the $\kappa$-Poincaré model and a curved momentum space has been explored. We will see that by considering a maximally symmetric momentum space one can construct a deformed relativistic kinematics, and that among the possible kinematics $\kappa$-Poincaré is obtained as a particular case when the curvature of momentum space is positive.
This deformed composition law modifies the behaviour of spacetime. As we will see, in the framework of doubly special relativity there is a loss of the notion of locality of interactions due to the deformed composition law for momenta. We will study how this loss appears and how a new spacetime, which is noncommutative, can be considered in order to make interactions local. We will also see that there is a relation between the locality and geometry frameworks.
Afterwards, we will consider two phenomenological studies. In the first one, we will analyze the possible time delay in the flight of photons as a consequence of a deformed kinematics. This will be done considering that the observables are defined either in a commutative or in a noncommutative spacetime. We will find that, while in the first case a time delay could exist, depending on the choice of momentum variables one works with, in the latter scheme there is no time delay, independently of the choice of variables. Since time-of-flight delay measurements could be the only phenomenological test of doubly special relativity at energies small compared with the Planck scale, the absence of time delays would imply that the constraints on the high-energy scale characterizing doubly special relativity could be orders of magnitude smaller than the Planck energy. With this observation in mind, we will carry out some computations in quantum field theory with a simple assumption for the modified Feynman rules corresponding to particle processes, concluding that a deformed kinematics with an energy scale of a few TeV is compatible with the experimental data.
However, the previous studies of time delays are carried out in flat spacetime, which is not the correct way to consider the propagation of a photon in an expanding universe. In the last part of the thesis, we will study the generalization of the geometric approach to a curved spacetime. We will develop the construction of a metric in the cotangent bundle that takes into account a deformed relativistic kinematics in the presence of a nontrivial spacetime geometry. Through a generalization of the usual procedures of general relativity, we will study the phenomenological consequences of a momentum-dependent metric in the cotangent bundle for an expanding universe and for a stationary black hole.
\chapter{Change of variables at first and second order}
\label{appendix_second_order_a}
In this Appendix, we will obtain the DCL and DLT from a generic change of variables up to second order. We start by taking into account the terms proportional to $(1/\Lambda)^2$ that the first-order change of variables of Eq.~\eqref{p,q} produces in $p^2$ and $q^2$:
\begin{equation}
\begin{split}
P^2 & \,=\, p^2 + \frac{v_1^L v_1^L}{\Lambda^2} \left[q^2 (n\cdot p)^2 - 2 (p\cdot q)(n\cdot p)(n\cdot q) + (p\cdot q)^2n^2\right] \\ & + \frac{v_2^L v_2^L}{\Lambda^2} \left[p^2 q^2 n^2 + 2 (p\cdot q)(n\cdot p) (n\cdot q) - p^2 (n\cdot q)^2 - q^2 (n\cdot p)^2 - (p\cdot q)^2n^2\right]\,, \\
Q^2 &\,= \,q^2 + \frac{v_1^R v_1^R}{\Lambda^2} \left[p^2 (n\cdot q)^2 - 2 (p\cdot q)(n\cdot p)(n\cdot q) + (p\cdot q)^2n^2\right] \\ & + \frac{v_2^R v_2^R}{\Lambda^2} \left[p^2 q^2 n^2 + 2 (p\cdot q) (n\cdot p) (n\cdot q) - p^2 (n\cdot q)^2 - q^2 (n\cdot p)^2 - (p\cdot q)^2n^2\right]\,.
\end{split}
\end{equation}
The following change of variables is compatible with $p^2=P^2$ up to second order
{\small
\begin{equation}
\begin{split}
P_\mu \,&=\, p_\mu + \frac{v_1^L}{\Lambda} \left[q_\mu (n\cdot p) - n_\mu (p\cdot q)\right] + \frac{v_2^L}{\Lambda} \epsilon_{\mu\nu\rho\sigma} p^\nu q^\rho n^\sigma - \frac{v_1^L v_1^L}{2 \Lambda^2} \left[n_\mu q^2 (n\cdot p) -\right. \\
& \left. 2 n_\mu (p\cdot q)(n\cdot q) +q_\mu (p\cdot q) n^2 \right]
- \frac{v_2^L v_2^L}{2 \Lambda^2} \left[p_\mu q^2 n^2 + 2 n_\mu (p\cdot q) (n\cdot q) - p_\mu (n\cdot q)^2 - n_\mu q^2 (n\cdot p)\right.\\
& \left. - q_\mu (p\cdot q) n^2\right]+ \frac{v_3^L}{\Lambda^2} \left[p_\mu (n\cdot p) - n_\mu p^2\right] (n\cdot q) + \frac{v_4^L}{\Lambda^2} \left[q_\mu (n\cdot p) - n_\mu (p\cdot q)\right] (n\cdot p) +\\
&
\frac{v_5^L}{\Lambda^2} \left[q_\mu (n\cdot p) - n_\mu (p\cdot q)\right] (n\cdot q) +
\frac{v_6^L}{\Lambda^2} (n\cdot p) \epsilon_{\mu\nu\rho\sigma} p^\nu q^\rho n^\sigma +
\frac{v_7^L}{\Lambda^2} (n\cdot q) \epsilon_{\mu\nu\rho\sigma} p^\nu q^\rho n^\sigma \,,
\end{split}
\label{P->p}
\end{equation}}
while for the variable $Q$ we obtain
{\small
\begin{equation}
\begin{split}
Q_\mu \,&=\, q_\mu + \frac{v_1^R}{\Lambda} \left[p_\mu (n\cdot q) - n_\mu (p\cdot q)\right] + \frac{v_2^R}{\Lambda} \epsilon_{\mu\nu\rho\sigma} q^\nu p^\rho n^\sigma - \frac{v_1^R v_1^R}{2 \Lambda^2} \left[n_\mu p^2 (n\cdot q) \right. \\
& \left. - 2 n_\mu (p\cdot q)(n\cdot p) + p_\mu (p\cdot q) n^2 \right]
- \frac{v_2^R v_2^R}{2 \Lambda^2} \left[q_\mu p^2 n^2 + 2 n_\mu (p\cdot q) (n\cdot p) - q_\mu (n\cdot p)^2 -\right. \\
& \left. n_\mu p^2 (n\cdot q)- p_\mu (p\cdot q) n^2\right] + \frac{v_3^R}{\Lambda^2} \left[q_\mu (n\cdot q) - n_\mu q^2\right] (n\cdot p) + \frac{v_4^R}{\Lambda^2} \left[p_\mu (n\cdot q) - n_\mu (p\cdot q)\right] (n\cdot q)
\\ &+\frac{v_5^R}{\Lambda^2} \left[p_\mu (n\cdot q) - n_\mu (p\cdot q)\right] (n\cdot p) +
\frac{v_6^R}{\Lambda^2} (n\cdot q) \epsilon_{\mu\nu\rho\sigma} q^\nu p^\rho n^\sigma +
\frac{v_7^R}{\Lambda^2} (n\cdot p) \epsilon_{\mu\nu\rho\sigma} q^\nu p^\rho n^\sigma \,.
\end{split}
\label{Q->q}
\end{equation}}
We see that there is a total of 14 parameters $(v_1^L,\ldots,v_7^L;v_1^R,\ldots,v_7^R)$ characterizing a generic change of variables up to second order. In order to obtain the DCL in the new variables $(p, q)$, we apply this change of variables to the composition law of the variables $(P, Q)$. As these variables transform linearly, the composition law must be built out of terms that are covariant under linear Lorentz transformations:
\begin{equation}
\left[P\oplus Q\right]_\mu \,= \,P_\mu + Q_\mu + \frac{c_1}{\Lambda^2} P_\mu Q^2 + \frac{c_2}{\Lambda^2} Q_\mu P^2 + \frac{c_3}{\Lambda^2} P_\mu (P\cdot Q) + \frac{c_4}{\Lambda^2} Q_\mu (P\cdot Q) \,.
\label{ccl2a}
\end{equation}
Then, applying~\eqref{P->p}-\eqref{Q->q} to Eq.~\eqref{ccl2a} one obtains the DCL
\begin{equation}
\begin{split}
\left[p\oplus q\right]_\mu &\,= \,p_\mu + q_\mu + \frac{v_1^L}{\Lambda} \left[q_\mu (n\cdot p) - n_\mu (p\cdot q)\right] + \frac{v_1^R}{\Lambda} \left[p_\mu (n\cdot q) - n_\mu (p\cdot q)\right] + \\
&\frac{(v_2^L-v_2^R)}{\Lambda} \epsilon_{\mu\nu\rho\sigma} p^\nu q^\rho n^\sigma + \frac{c_1}{\Lambda^2} p_\mu q^2 + \frac{c_2}{\Lambda^2} q_\mu p^2 + \frac{c_3}{\Lambda^2} p_\mu (p\cdot q) + \frac{c_4}{\Lambda^2} q_\mu (p\cdot q) \\
& - \frac{v_1^L v_1^L}{2 \Lambda^2} \left[n_\mu q^2 (n\cdot p) - 2 n_\mu (p\cdot q)(n\cdot q) + q_\mu (p\cdot q) n^2\right] - \frac{v_1^R v_1^R}{2 \Lambda^2} \left[n_\mu p^2 (n\cdot q) -\right. \\
& \left. 2 n_\mu (p\cdot q)(n\cdot p) +p_\mu (p\cdot q) n^2\right] - \frac{v_2^L v_2^L}{2 \Lambda^2} \left[p_\mu q^2 n^2 + 2 n_\mu (p\cdot q) (n\cdot q) - p_\mu (n\cdot q)^2\right. \\
&\left. - n_\mu q^2 (n\cdot p) - q_\mu (p\cdot q) n^2 \right] - \frac{v_2^R v_2^R}{2 \Lambda^2} \left[q_\mu p^2 n^2 + 2 n_\mu (p\cdot q) (n\cdot p) - q_\mu (n\cdot p)^2 \right.\\
& \left. - n_\mu p^2 (n\cdot q) - p_\mu (p\cdot q) n^2\right] + \frac{v_3^L}{\Lambda^2} \left[p_\mu (n\cdot p) - n_\mu p^2\right] (n\cdot q) + \\ & \frac{v_3^R}{\Lambda^2} \left[q_\mu (n\cdot q) - n_\mu q^2\right] (n\cdot p)+ \frac{v_4^L}{\Lambda^2} \left[q_\mu (n\cdot p) - n_\mu (p\cdot q)\right] (n\cdot p) + \\
& \frac{v_4^R}{\Lambda^2} \left[p_\mu (n\cdot q) - n_\mu (p\cdot q)\right] (n\cdot q) + \frac{v_5^L}{\Lambda^2} \left[q_\mu (n\cdot p) - n_\mu (p\cdot q)\right] (n\cdot q) + \\
& \frac{v_5^R}{\Lambda^2} \left[p_\mu (n\cdot q) - n_\mu (p\cdot q)\right] (n\cdot p)+ \frac{(v_6^L - v_7^R)}{\Lambda^2} (n\cdot p) \epsilon_{\mu\nu\rho\sigma} p^\nu q^\rho n^\sigma \\
&+ \frac{(v_7^L - v_6^R)}{\Lambda^2} (n\cdot q) \epsilon_{\mu\nu\rho\sigma} p^\nu q^\rho n^\sigma \,.
\end{split}
\label{cl2}
\end{equation}
In order to consider the rotationally invariant case, we take $n_\mu=(1, 0, 0, 0)$ in Eq.~\eqref{cl2}, obtaining Eq.~\eqref{generalCL}.
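As a consistency check of the structure of these expressions, one can verify with a computer algebra system that the $\mathcal{O}(1/\Lambda)$ part of the change of variables leaves $p^2$ invariant. The following SymPy sketch checks only the first-order terms of Eq.~\eqref{P->p}, with the metric convention $\eta = \mathrm{diag}(+,-,-,-)$ assumed:
\begin{verbatim}
import sympy as sp
from sympy import LeviCivita

La, v1, v2 = sp.symbols('Lambda v1 v2')
g = (1, -1, -1, -1)              # diagonal entries of eta
p = sp.symbols('p0:4'); q = sp.symbols('q0:4'); n = (1, 0, 0, 0)

def dot(a, b):                   # a.b = eta^{mu nu} a_mu b_nu
    return sum(g[m]*a[m]*b[m] for m in range(4))

def P(mu):                       # O(1/Lambda) part of Eq. (P->p)
    eps = sum(LeviCivita(mu, a, b, c) * g[a]*p[a] * g[b]*q[b] * g[c]*n[c]
              for a in range(4) for b in range(4) for c in range(4))
    return p[mu] + v1/La*(q[mu]*dot(n, p) - n[mu]*dot(p, q)) + v2/La*eps

Pv = [P(mu) for mu in range(4)]
diff = sp.expand(dot(Pv, Pv) - dot(p, p))
assert diff.coeff(La, -1) == 0   # P^2 = p^2 + O(1/Lambda^2)
\end{verbatim}
The same check applies to $Q$ of Eq.~\eqref{Q->q} upon exchanging $p\leftrightarrow q$.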
In order to obtain the DLT in the two-particle system, $(p,q) \to (p',q')$, one can follow the same procedure used to obtain Eqs.~\eqref{p'1}-\eqref{q'1} in Sec.~\ref{sec:covariant}; after some algebra, one obtains $p'$
{\small
\begin{equation}
\begin{split}
p'_\mu &\,=\, \tilde{p}_\mu + \omega^{\alpha\beta} n_\beta \left[\frac{v_1^L}{\Lambda} p_\alpha q_\mu - \frac{v_1^L}{\Lambda} \eta_{\alpha\mu} (p\cdot q) - \frac{v_2^L}{\Lambda} \epsilon_{\alpha\mu\nu\rho} p^\nu q^\rho\right] \\
& + \omega^{\alpha\beta} n_\beta \left[-\frac{v_1^L v_1^R}{\Lambda^2} \left(q_\alpha p_\mu - \eta_{\alpha\mu} (p\cdot q)\right) (n\cdot p) - \frac{v_1^L v_2^R}{\Lambda^2} \epsilon_{\alpha\mu\nu\rho} p^\nu q^\rho (n\cdot p) - \frac{v_1^L v_1^L}{\Lambda^2} p_\alpha q_\mu (n\cdot q) \right. \\
& \left. + \frac{v_1^L v_2^L}{\Lambda^2} \epsilon_{\alpha\nu\rho\sigma} q_\mu p^\nu q^\rho n^\sigma + \frac{v_1^L v_1^L}{\Lambda^2} \left(p_\alpha q^2 - q_\alpha (p\cdot q)\right) n_\mu + \frac{v_1^L v_1^R}{\Lambda^2} \left(q_\alpha p^2 - p_\alpha (p\cdot q)\right) n_\mu \right. \\
& \left. - \frac{v_2^L v_1^L}{\Lambda^2} \epsilon_ {\alpha\mu\nu\rho} q^\nu n^\rho (p\cdot q) + \frac{v_2^L v_1^R}{\Lambda^2} \epsilon_ {\alpha\mu\nu\rho} p^\nu n^\rho (p\cdot q) + \frac{v_2^L v_2^L}{\Lambda^2} \left[q_\alpha \left(p_\mu (n\cdot q) - q_\mu (n\cdot p)\right) - \right.\right. \\
& \left. \left.
\eta_{\alpha\mu} \left((n\cdot q)(p\cdot q) - (n\cdot p) q^2\right)\right] - \frac{v_2^L v_2^R}{\Lambda^2} \left[p_\alpha \left(q_\mu (n\cdot p) - p_\mu (n\cdot q)\right) - \eta_{\alpha\mu} \left((n\cdot p)(p\cdot q)+ \right. \right. \right. \\
& \left.\left.\left.- (n\cdot q) p^2\right)\right] \frac{v_2^L v_2^L}{\Lambda^2} q_\alpha p_\mu (n\cdot q) - \frac{(v_1^L v_1^L - v_2^L v_2^L)}{2 \Lambda^2} \left(p_\alpha n_\mu + \eta_{\alpha\mu} (n\cdot p)\right) q^2 + \right. \\
& \left. \frac{(v_1^L v_1^L - v_2^L v_2^L - v_5^L)}{\Lambda^2} \left(q_\alpha n_\mu + \eta_{\alpha\mu} (n\cdot q)\right) (p\cdot q) + \frac{v_3^L}{\Lambda^2} \left(q_\alpha (n\cdot p) + p_\alpha (n\cdot q)\right) p_\mu - \right. \\
& \left. \frac{v_3^L}{\Lambda^2} \left(q_\alpha n_\mu + \eta_{\alpha\mu} (n\cdot q)\right) p^2 + \frac{v_4^L}{\Lambda^2} 2 p_\alpha q_\mu (n\cdot p) - \frac{v_4^L}{\Lambda^2} \left(p_\alpha n_\mu + \eta_{\alpha\mu} (n\cdot p)\right) (p\cdot q) \right. \\
& \left. + \frac{v_5^L}{\Lambda^2} \left(q_\alpha (n\cdot p) + p_\alpha (n\cdot q)\right) q_\mu + \frac{v_6^L}{\Lambda^2} \left[p_\alpha \epsilon_{\mu\nu\rho\sigma} p^\nu q^\rho n^\sigma - \epsilon_{\alpha\mu\nu\rho} p^\nu q^\rho (n\cdot p)\right] \right. \\
& \left.+ \frac{v_7^L}{\Lambda^2} \left[q_\alpha \epsilon_{\mu\nu\rho\sigma} p^\nu q^\rho n^\sigma - \epsilon_{\alpha\mu\nu\rho} p^\nu q^\rho (n\cdot q)\right]\right] \,,
\end{split}
\label{p'2}
\end{equation}}\normalsize
and for the second momentum variable ($q'$), one gets the same expression but interchanging $p\leftrightarrow q$ and $v^i_L \leftrightarrow v^i_R$
{\small
\begin{equation}
\begin{split}
q'_\mu &\,= \,\tilde{q}_\mu + \omega^{\alpha\beta} n_\beta \left[\frac{v_1^R}{\Lambda} q_\alpha p_\mu - \frac{v_1^R}{\Lambda} \eta_{\alpha\mu} (p\cdot q) - \frac{v_2^R}{\Lambda} \epsilon_{\alpha\mu\nu\rho} q^\nu p^\rho\right] \\
& + \omega^{\alpha\beta} n_\beta \left[-\frac{v_1^R v_1^L}{\Lambda^2} \left(p_\alpha q_\mu - \eta_{\alpha\mu} (p\cdot q)\right) (n\cdot q) - \frac{v_1^R v_2^L}{\Lambda^2} \epsilon_{\alpha\mu\nu\rho} q^\nu p^\rho (n\cdot q) - \frac{v_1^R v_1^R}{\Lambda^2} q_\alpha p_\mu (n\cdot p) \right. \\
& \left. + \frac{v_1^R v_2^R}{\Lambda^2} \epsilon_{\alpha\nu\rho\sigma} p_\mu q^\nu p^\rho n^\sigma + \frac{v_1^R v_1^R}{\Lambda^2} \left(q_\alpha p^2 - p_\alpha (p\cdot q)\right) n_\mu + \frac{v_1^R v_1^L}{\Lambda^2} \left(p_\alpha q^2 - q_\alpha (p\cdot q)\right) n_\mu \right. \\
& \left. - \frac{v_2^R v_1^R}{\Lambda^2} \epsilon_ {\alpha\mu\nu\rho} p^\nu n^\rho (p\cdot q) + \frac{v_2^R v_1^L}{\Lambda^2} \epsilon_ {\alpha\mu\nu\rho} q^\nu n^\rho (p\cdot q) + \frac{v_2^R v_2^R}{\Lambda^2} \left[p_\alpha \left(q_\mu (n\cdot p) - p_\mu (n\cdot q)\right) \right. \right. \\
& \left. \left. - \eta_{\alpha\mu} \left((n\cdot p)(p\cdot q) - (n\cdot q) p^2\right)\right] - \frac{v_2^Rv_2^L}{\Lambda^2} \left[q_\alpha \left(p_\mu (n\cdot q) - q_\mu (n\cdot p)\right) - \eta_{\alpha\mu} \left((n\cdot q)(p\cdot q)\right.\right.\right. \\
& \left.\left.\left. - (n\cdot p) q^2\right)\right] + \frac{v_2^R v_2^R}{\Lambda^2} p_\alpha q_\mu (n\cdot p) - \frac{(v_1^R v_1^R - v_2^R v_2^R)}{2 \Lambda^2} \left(q_\alpha n_\mu + \eta_{\alpha\mu} (n\cdot q)\right) p^2 \right. \\
& \left. + \frac{(v_1^R v_1^R- v_2^R v_2^R - v_5^R)}{\Lambda^2} \left(p_\alpha n_\mu + \eta_{\alpha\mu} (n\cdot p)\right) (p\cdot q)+ \frac{v_3^R}{\Lambda^2} \left(p_\alpha (n\cdot q) + q_\alpha (n\cdot p)\right) q_\mu \right. \\
& \left. - \frac{v_3^R}{\Lambda^2} \left(p_\alpha n_\mu + \eta_{\alpha\mu} (n\cdot p)\right) q^2+ \frac{v_4^R}{\Lambda^2} 2 q_\alpha p_\mu (n\cdot q) - \frac{v_4^R}{\Lambda^2} \left(q_\alpha n_\mu + \eta_{\alpha\mu} (n\cdot q)\right) (p\cdot q) \right. \\
& \left. + \frac{v_5^R}{\Lambda^2} \left(p_\alpha (n\cdot q) + q_\alpha (n\cdot p)\right) p_\mu+ \frac{v_6^R}{\Lambda^2} \left[q_\alpha \epsilon_{\mu\nu\rho\sigma} q^\nu p^\rho n^\sigma - \epsilon_{\alpha\mu\nu\rho} q^\nu p^\rho (n\cdot q)\right] \right. \\
&\left.+ \frac{v_7^R}{\Lambda^2} \left[p_\alpha \epsilon_{\mu\nu\rho\sigma} q^\nu p^\rho n^\sigma - \epsilon_{\alpha\mu\nu\rho} q^\nu p^\rho (n\cdot p)\right]\right] \,.
\end{split}
\label{q'2}
\end{equation}}
\normalsize
As one could expect, the coefficients of the DLT are determined by the 14 parameters $(v_i^L;v_i^R),\, i=1,\ldots 7$, appearing in the change of variables.
If we take again $n_\mu=(1, 0, 0, 0)$ in Eqs.~\eqref{p'2} and~\eqref{q'2} we find
\begin{equation}
\begin{split}
p_{0}^{\prime}&\,=\,p_{0}+\vec{p}\cdot \vec{\xi}-\frac{v_{1}^{L}}{\Lambda}q_{0}\left(\vec{p}\cdot \vec{\xi}\right)+\frac{v_{2}^{L}}{\Lambda}\left(\vec{p}\wedge\vec{q}\right)\cdot \vec{\xi}+\frac{v_{1}^{L}v_{1}^{L}-v_{2}^{L}v_{2}^{L}-2v_{5}^{L}}{2\Lambda^{2}}q_{0}^{2}\left(\vec{p}\cdot \vec{\xi}\right) \\
&+\frac{v_{1}^{L}v_{1}^{L}+v_{2}^{L}v_{2}^{L}}{2\Lambda^{2}}\vec{q}^{2}\left(\vec{p}\cdot \vec{\xi}\right) +\frac{v_{1}^{L}v_{1}^{R}-v_{3}^{L}}{\Lambda^{2}}\vec{p}^{2}\left(\vec{q}\cdot \vec{\xi}\right) +
\frac{v_{1}^{L}v_{1}^{R}-v_{3}^{L}-v_{4}^{L}}{\Lambda^{2}}p_{0}q_{0}\left(\vec{p}\cdot \vec{\xi}\right) \\
& -\frac{v_{1}^{L}v_{1}^{R}+v_{4}^{L}}{\Lambda^{2}}\left(\vec{p}\cdot \vec{q}\right)\left(\vec{p}\cdot \vec{\xi}\right)-\frac{v_{2}^{L}v_{2}^{L}+v_{5}^{L}}{\Lambda^{2}}\left(\vec{p}\cdot\vec{q}\right)\left(\vec{q}\cdot\vec{\xi}\right)+\frac{v_{1}^{L}v_{2}^{R}+v_{6}^{L}}{\Lambda^{2}}p_{0}\left(\vec{p}\wedge\vec{q}\right)\vec{\xi}\\
&+\frac{-v_{1}^{L}v_{2}^{L}+v_{7}^{L}}{\Lambda^{2}}q_{0}\left(\vec{p}\wedge\vec{q}\right)\vec{\xi} \, ,
\end{split}
\label{generaltr1}
\end{equation}
\begin{equation}
\begin{split}
p_{i}^{\prime}&\,=\,p_{i}+p_{0}\xi_{i}-\frac{v_{1}^{L}}{\Lambda}\left[q_{i}\left(\vec{p}\cdot \vec{\xi}\right)+\left(p\cdot q\right)\xi_{i}\right]-\frac{v_{2}^{L}}{\Lambda}\left(q_{0}\epsilon_{ijk}p_{j}\xi_{k}-p_{0}\epsilon_{ijk}q_{j}\xi_{k}\right)+ \\
&\frac{v_{1}^{L}v_{1}^{R}-v_{3}^{L}-v_{4}^{L}}{\Lambda^{2}}p_{0}^{2}q_{0}\xi_{i}+
\frac{v_{1}^{L}v_{1}^{L}-v_{2}^{L}v_{2}^{L}-2v_{5}^{L}}{2\Lambda^{2}}p_{0}q_{0}^{2}\xi_{i}+\frac{-v_{1}^{L}v_{1}^{R}-v_{2}^{L}v_{2}^{R}+v_{4}^{L}}{\Lambda^{2}}\left(\vec{p}\cdot \vec{q}\right)p_{0}\xi_{i}+ \\
&\frac{-v_{1}^{L}v_{1}^{L}+2v_{2}^{L}v_{2}^{L}+v_{5}^{L}}{\Lambda^{2}}\left(\vec{p}\cdot \vec{q}\right)q_{0}\xi_{i}+\frac{v_{2}^{L}v_{2}^{R}+v_{3}^{L}}{\Lambda^{2}}\vec{p}^{2}q_{0}\xi_{i} +
\frac{v_{1}^{L}v_{1}^{L}-3v_{2}^{L}v_{2}^{L}}{2\Lambda^{2}}p_{0}\vec{q}^{2}\xi_{i}+
\\ &\frac{v_{2}^{L}v_{2}^{R}-2v_{4}^{L}}{\Lambda^{2}}p_{0}q_{i}\left(\vec{p}\cdot \vec{\xi}\right)-\frac{v_{2}^{L}v_{2}^{R}+v_{3}^{L}}{\Lambda^{2}}p_{i}q_{0}\left(\vec{p}\cdot \vec{\xi}\right)+\frac{v_{2}^{L}v_{2}^{L}-v_{5}^{L}}{\Lambda^{2}}p_{0}q_{i}\left(\vec{q}\cdot \vec{\xi}\right)-\\
&\frac{2v_{2}^{L}v_{2}^{L}}{\Lambda^{2}}p_{i}q_{0}\left(\vec{q}\cdot \vec{\xi}\right) + \frac{v_{1}^{L}v_{1}^{R}-v_{3}^{L}}{\Lambda^{2}}p_{0}p_{i}\left(\vec{q}\cdot \vec{\xi}\right)+\frac{v_{1}^{L}v_{1}^{L}-v_{5}^{L}}{\Lambda^{2}}q_{0}q_{i}\left(\vec{p}\cdot \vec{\xi}\right)-\frac{v_{1}^{L}v_{2}^{L}}{\Lambda^{2}}q_{i}\left(\vec{p}\wedge\vec{q}\right)\vec{\xi}+ \\
&\frac{v_{1}^{L}v_{2}^{R}+v_{6}^{L}}{\Lambda^{2}}p_{0}^{2}\epsilon_{ijk}q_{j}\xi_{k}-\frac{v_{6}^{L}}{\Lambda^{2}}\left(\vec{p}\cdot\vec{\xi}\right)\epsilon_{ijk}p_{j}q_{k}+\frac{v_{7}^{L}}{\Lambda^{2}}p_{0}q_{0}\epsilon_{ijk}q_{j}\xi_{k}-\frac{v_{7}^{L}}{\Lambda^{2}}\left(\vec{q}\cdot \vec{\xi}\right)\epsilon_{ijk}p_{j}q_{k}-\\
&\frac{v_{7}^{L}}{\Lambda^{2}}q_{0}^{2}\epsilon_{ijk}p_{j}\xi_{k}-\frac{v_{1}^{L}v_{2}^{R}+v_{6}^{L}}{\Lambda^{2}}p_{0}q_{0}\epsilon_{ijk}p_{j}\xi_{k}+\frac{v_{1}^{L}v_{2}^{L}}{\Lambda^{2}}\left(p\cdot q\right)\epsilon_{ijk}q_{j}\xi_{k} -\frac{v_{1}^{R}v_{2}^{L}}{\Lambda^{2}}\left(p\cdot q\right)\epsilon_{ijk}p_{j}\xi_{k} \,,
\end{split}
\end{equation}
\begin{equation}
\begin{split}
q_{0}^{\prime}&\,=\,q_{0}+\vec{q}\cdot \vec{\xi}-\frac{v_{1}^{R}}{\Lambda}p_{0}\left(\vec{q}\cdot \vec{\xi}\right)+\frac{v_{2}^{R}}{\Lambda}\left(\vec{q}\wedge\vec{p}\right)\cdot \vec{\xi}+\frac{v_{1}^{R}v_{1}^{R}-v_{2}^{R}v_{2}^{R}-2v_{5}^{R}}{2\Lambda^{2}}p_{0}^{2}\left(\vec{q}\cdot \vec{\xi}\right)+ \\
& \frac{v_{1}^{R}v_{1}^{R}+v_{2}^{R}v_{2}^{R}}{2\Lambda^{2}}\vec{p}^{2}\left(\vec{q}\cdot \vec{\xi}\right)+\frac{v_{1}^{L}v_{1}^{R}-v_{3}^{R}}{\Lambda^{2}}\vec{q}^{2}\left(\vec{p}\cdot \vec{\xi}\right) +
\frac{v_{1}^{L}v_{1}^{R}-v_{3}^{R}-v_{4}^{R}}{\Lambda^{2}}q_{0}p_{0}\left(\vec{q}\cdot \vec{\xi}\right) - \\
&
\frac{v_{1}^{L}v_{1}^{R}+v_{4}^{R}}{\Lambda^{2}}\left(\vec{p}\cdot \vec{q}\right)\left(\vec{q}\cdot \vec{\xi}\right) -
\frac{v_{2}^{R}v_{2}^{R}+v_{5}^{R}}{\Lambda^{2}}\left(\vec{p}\cdot\vec{q}\right)\left(\vec{p}\cdot\vec{\xi}\right)-\frac{v_{1}^{R}v_{2}^{L}+v_{6}^{R}}{\Lambda^{2}}q_{0}\left(\vec{p}\wedge\vec{q}\right)\vec{\xi}\\
&+\frac{v_{1}^{R}v_{2}^{R}-v_{7}^{R}}{\Lambda^{2}}p_{0}\left(\vec{p}\wedge\vec{q}\right)\vec{\xi} \, ,
\end{split}
\end{equation}
\begin{equation}
\begin{split}
q_{i}^{\prime}&\,=\,q_{i}+q_{0}\xi_{i}-\frac{v_{1}^{R}}{\Lambda}\left[p_{i}\left(\vec{q}\cdot \vec{\xi}\right)+\left(p\cdot q\right)\xi_{i}\right]-\frac{v_{2}^{R}}{\Lambda}\left(p_{0}\epsilon_{ijk}q_{j}\xi_{k}-q_{0}\epsilon_{ijk}p_{j}\xi_{k}\right)+\\
&\frac{v_{1}^{L}v_{1}^{R}-v_{3}^{R}-v_{4}^{R}}{\Lambda^{2}}q_{0}^{2}p_{0}\xi_{i} +
\frac{v_{1}^{R}v_{1}^{R}-v_{2}^{R}v_{2}^{R}-2v_{5}^{R}}{2\Lambda^{2}}q_{0}p_{0}^{2}\xi_{i}+\frac{-v_{1}^{L}v_{1}^{R}-v_{2}^{R}v_{2}^{L}+v_{4}^{R}}{\Lambda^{2}}\left(\vec{p}\cdot \vec{q}\right)q_{0}\xi_{i}+ \\
&\frac{-v_{1}^{R}v_{1}^{R}+2v_{2}^{R}v_{2}^{R}+v_{5}^{R}}{\Lambda^{2}}\left(\vec{p}\cdot \vec{q}\right)p_{0}\xi_{i}+\frac{v_{2}^{L}v_{2}^{R}+v_{3}^{R}}{\Lambda^{2}}\vec{q}^{2}p_{0}\xi_{i} +
\frac{v_{1}^{R}v_{1}^{R}-3v_{2}^{R}v_{2}^{R}}{2\Lambda^{2}}q_{0}\vec{p}^{2}\xi_{i}+
\\ &\frac{v_{2}^{L}v_{2}^{R}-2v_{4}^{R}}{\Lambda^{2}}q_{0}p_{i}\left(\vec{q}\cdot \vec{\xi}\right)-\frac{v_{2}^{L}v_{2}^{R}+v_{3}^{R}}{\Lambda^{2}}q_{i}p_{0}\left(\vec{q}\cdot \vec{\xi}\right)+\frac{v_{2}^{R}v_{2}^{R}-v_{5}^{R}}{\Lambda^{2}}q_{0}p_{i}\left(\vec{p}\cdot \vec{\xi}\right)- \\
&
\frac{2v_{2}^{R}v_{2}^{R}}{\Lambda^{2}}q_{i}p_{0}\left(\vec{p}\cdot \vec{\xi}\right)+\frac{v_{1}^{L}v_{1}^{R}-v_{3}^{R}}{\Lambda^{2}}q_{0}q_{i}\left(\vec{p}\cdot \vec{\xi}\right)+\frac{v_{1}^{R}v_{1}^{R}-v_{5}^{R}}{\Lambda^{2}}p_{0}p_{i}\left(\vec{q}\cdot \vec{\xi}\right)+\frac{v_{1}^{R}v_{2}^{R}}{\Lambda^{2}}p_{i}\left(\vec{p}\wedge\vec{q}\right)\cdot\vec{\xi}+ \\
&\frac{v_{1}^{R}v_{2}^{L}+v_{6}^{R}}{\Lambda^{2}}q_{0}^{2}\epsilon_{ijk}p_{j}\xi_{k}+\frac{v_{6}^{R}}{\Lambda^{2}}\left(\vec{q}\cdot\vec{\xi}\right)\epsilon_{ijk}p_{j}q_{k}+\frac{v_{7}^{R}}{\Lambda^{2}}p_{0}q_{0}\epsilon_{ijk}p_{j}\xi_{k}+
\frac{v_{7}^{R}}{\Lambda^{2}}\left(\vec{p}\cdot \vec{\xi}\right)\epsilon_{ijk}p_{j}q_{k}- \\
&\frac{v_{7}^{R}}{\Lambda^{2}}p_{0}^{2}\epsilon_{ijk}q_{j}\xi_{k}-\frac{v_{1}^{R}v_{2}^{L}+v_{6}^{R}}{\Lambda^{2}}p_{0}q_{0}\epsilon_{ijk}q_{j}\xi_{k}+\frac{v_{1}^{R}v_{2}^{R}}{\Lambda^{2}}\left(p\cdot q\right)\epsilon_{ijk}p_{j}\xi_{k}-\frac{v_{1}^{L}v_{2}^{R}}{\Lambda^{2}}\left(p\cdot q\right)\epsilon_{ijk}q_{j}\xi_{k} \,.
\end{split}
\label{generaltr4}
\end{equation}
These are the DLT that generalize Eq.~\eqref{DLT-1st} up to order $(1/\Lambda)^2$ while leaving $p^2$ and $q^2$ invariant; their coefficients depend on the 14 parameters that characterize a generic change of variables in the two-particle system.
\chapter{Momentum space geometry}
\section{Translations in de Sitter}
\label{appendix_translations}
We are going to obtain the translations of the de Sitter space of Sec.~\ref{subsection_kappa_desitter} when the tetrad is the one proposed in Eq.~\eqref{bicross-tetrad}. The condition~\eqref{T(a,k)} can be written in terms of the composition law as
\begin{equation}
e_\mu^\alpha(p \oplus q) \,=\, \frac{\partial (p \oplus q)_\mu}{\partial q_\nu} \,e_\nu^\alpha(q)\,.
\end{equation}
Then, the system of equations we need to solve is
\begin{equation}
\begin{split}
\text{For}\quad \mu\,&=\,0,\quad \nu\,=\,0 \qquad 1\,=\,\frac{\partial (p\oplus q)_0}{\partial q_0} \,.\\
\text{For}\quad \mu\,&=\,i,\quad \nu\,=\,0 \qquad 0\,=\,\frac{\partial (p\oplus q)_0}{\partial q_j}\,\delta^i_j \,e^{\pm q_0/\Lambda} \,.\\
\text{For}\quad \mu\,&=\,0,\quad \nu\,=\,i \qquad 0\,=\,\frac{\partial (p\oplus q)_i}{\partial q_0}\,.\\
\text{For}\quad \mu\,&=\,i,\quad \nu\,=\,j \qquad \delta^i_j\, e^{\pm (p \oplus q)_0/\Lambda}\,=\,\frac{\partial (p\oplus q)_j}{\partial q_k} \delta^i_k\, e^{\pm q_0/\Lambda}\,.\\
\end{split}
\end{equation}
The first equation implies that
\begin{equation}
(p\oplus q)_0\,=\,p_0\, f \left(\frac{p_0}{\Lambda},\frac{\vec{p}^2}{\Lambda^2},\frac{\vec{q}^2}{\Lambda^2},\frac{\vec{p}\cdot \vec{q}}{\Lambda^2}\right)+q_0\,,
\end{equation}
but the second one requires the zero component of the composition law to be independent of the spatial components of the momentum $q$, so by condition~\eqref{eq:cl0}, $f=1$ and then
\begin{equation}
(p\oplus q)_0\,=\,p_0+q_0\,.
\end{equation}
The fourth equation can be now written as
\begin{equation}
\delta^i_j\, e^{\pm (p_0+q_0)/\Lambda}\,=\,\frac{\partial (p\oplus q)_j}{\partial q_k} \delta^i_k\, e^{\pm q_0/\Lambda}\,,
\end{equation}
so
\begin{equation}
(p\oplus q)_i\,=\,p_i\, g\left(\frac{p_0}{\Lambda},\frac{q_0}{\Lambda},\frac{\vec{p}^2}{\Lambda^2}\right)+q_i e^{\pm p_0/\Lambda}\,,
\end{equation}
but by virtue of the third equation, $g$ cannot depend on $q_0$, and the condition~\eqref{eq:cl0} then gives $g=1$; taking the lower sign, the composition law finally is
\begin{equation}
(p\oplus q)_0\,=\,p_0+q_0\,,\qquad (p\oplus q)_i\,=\,p_i+q_i e^{-p_0/\Lambda}\,.
\label{DCL-bcb}
\end{equation}
\section{Algebra of isometry generators in de Sitter and anti-de Sitter spaces}
\label{appendix_algebra}
In this appendix, we want to study the algebra of isometries of the maximally symmetric de Sitter and anti-de Sitter spaces. We can start in de Sitter space with the Lorentz covariant algebra of the generators of translations and Lorentz transformations $(T^\alpha_S, J^{\beta\gamma})$
\begin{equation}
\begin{split}
&\{T^\alpha_S, T^\beta_S\} \,=\, \frac{J^{\alpha\beta}}{\Lambda^2}\,, \quad\quad \{T^\alpha_S, J^{\beta\gamma}\} \,=\, \eta^{\alpha\beta} T^\gamma_S - \eta^{\alpha\gamma} T^\beta_S\,,\\
&\lbrace J^{\alpha\beta},J^{\gamma\delta}\rbrace\,=\, \eta^{\beta\gamma}J^{\alpha\delta} - \eta^{\alpha\gamma}J^{\beta\delta} - \eta^{\beta\delta}J^{\alpha\gamma} + \eta^{\alpha\delta}J^{\beta\gamma}.
\label{cov_generators}
\end{split}
\end{equation}
In order to have an associative composition law, we saw in Ch.~\ref{chapter_curved_momentum_space} that the translation generators must form a four-dimensional subalgebra, so we can consider the following change of basis of the spatial translations
\begin{equation}
T^0_\kappa \,=\, T^0_S, \quad\quad\quad T^i_\kappa \,=\, T^i_S \pm \frac{J^{0i}}{\Lambda},
\label{eq:change_generators}
\end{equation}
and then, the new algebra of the translations is
\begin{equation}
\lbrace T_\kappa^0, T_\kappa^i\rbrace\,=\, \mp \frac{T_\kappa^i}{\Lambda}\,,
\end{equation}
which is the algebra from which one can deduce the $\kappa$-Poincar\'e kinematics, as we saw in Sec.~\ref{sec:examples}.
However, one cannot obtain a closed subalgebra of the generators of translations for the anti-de Sitter algebra. The difference resides in the minus sign appearing in the algebra of anti-de Sitter space isometries (which corresponds to substituting $\Lambda^2$ by $-\Lambda^2$ in (\ref{cov_generators})). If one tries a change of basis similar to Eq.~\eqref{eq:change_generators} for these generators, one sees that one cannot find a closed subalgebra (see the explicit check at the end of this section). We can then conclude that there is no way to obtain a DRK with an associative composition law in anti-de Sitter momentum space.
From a generic change of translation generators (always maintaining isotropy)
\begin{equation}
T^0_H \,=\, T^0_S, \quad\quad\quad T^i_H \,=\, T^i_S + \alpha \frac{J^{0i}}{\Lambda},
\end{equation}
where $\alpha$ is an arbitrary parameter, one can find the DCL corresponding to the hybrids models which have a Lorentz covariant term as in Snyder kinematics and a non-covariant one as in $\kappa$-Poincar\'e kinematics.
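One can check these statements explicitly (a sketch, assuming the brackets of Eq.~\eqref{cov_generators} with signature $\eta=\text{diag}(1,-1,-1,-1)$, so that $\{T^0_S, J^{0i}\}=\eta^{00}T^i_S=T^i_S$): for generic $\alpha$,
\begin{equation}
\{T^0_H, T^i_H\}\,=\,\{T^0_S, T^i_S\}+\frac{\alpha}{\Lambda}\,\{T^0_S, J^{0i}\}\,=\,\pm\frac{J^{0i}}{\Lambda^2}+\frac{\alpha}{\Lambda}\,T^i_S\,,
\end{equation}
with the $+$ ($-$) sign corresponding to de Sitter (anti-de Sitter). This bracket is proportional to $T^i_H=T^i_S+\alpha J^{0i}/\Lambda$ only if $\pm 1/\Lambda^2=\alpha^2/\Lambda^2$: in de Sitter one finds the two real solutions $\alpha=\pm 1$ of Eq.~\eqref{eq:change_generators}, while in anti-de Sitter one would need $\alpha^2=-1$, which has no real solution, so the translation generators cannot close a subalgebra.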
\chapter{Locality and noncommutative spacetime}
\label{appendix:locality}
\section{Different representations of a noncommutative spacetime}
\label{ncst-rep}
One can make a canonical transformation in phase space $(x, k) \to (x', k')$
\begin{equation}
k_\mu \,=\, f_\mu(k')\,,\quad x^\mu \,=\, x^{\prime\nu} g^\mu_\nu(k')\,,
\end{equation}
for any set of momentum dependent functions $f_\mu$, with
\begin{equation}
g^\mu_\rho(k') \frac{\partial f_\nu(k')}{\partial k'_\rho} \,=\, \delta^\mu_\nu \,.
\end{equation}
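This condition is just the statement that the transformation preserves the canonical Poisson brackets: using $\{x^{\prime\rho}, k'_\sigma\}=\delta^\rho_\sigma$, one has
\begin{equation}
\{x^\mu, k_\nu\}\,=\,\{x^{\prime\rho}\, g^\mu_\rho(k'), f_\nu(k')\}\,=\,g^\mu_\rho(k')\,\frac{\partial f_\nu(k')}{\partial k'_\rho}\,=\,\delta^\mu_\nu\,.
\end{equation}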
One can write the noncommutative space-time coordinates as a function of the new canonical phase-space coordinates
\begin{equation}
\tilde{x}^\mu \doteq x^\nu \varphi^\mu_\nu(k) \,=\, x^{\prime\rho} g^\nu_\rho(k') \varphi^\mu_\nu(f(k')) \,.
\end{equation}
Introducing
\begin{equation}
\varphi^{\prime\mu}_\rho(k') \doteq g^\nu_\rho(k') \varphi^\mu_\nu(f(k')) \,=\, \frac{\partial k'_\rho}{\partial k_\nu} \varphi^\mu_\nu(k)\,,
\end{equation}
the noncommutative spacetime is
\begin{equation}
\tilde{x}^\mu \,=\, x^{\prime\rho} \varphi^{\prime \mu}_\rho(k') \,.
\end{equation}
We see then that different canonical coordinates related by the choice of momentum variables lead to different representations of the same noncommutative spacetime with different functions $\varphi^\mu_\nu(k)$.
\section{SR spacetime from the locality condition for a commutative DCL}
\label{append-commut}
In subsection~\ref{sec:firstattempt} we saw that the first way to try to implement locality requires the condition of Eq.~\eqref{eq:limitsym}, which is not valid for a generic DCL (in particular, it is valid for a commutative DCL). We show here that the spacetime defined by Eq.~\eqref{eq:firstspt} is indeed the SR spacetime. We start by checking that the new space-time coordinates $\tilde{x}^\mu$ are in fact commutative coordinates. If one derives with respect to $p_\sigma$ the first equality of Eq.~\eqref{loc1}, one gets
\begin{equation}
\frac{\partial \varphi^\mu_\nu(p\oplus q)}{\partial p_\sigma} \,=\, \frac{\partial}{\partial q_\rho} \left(\frac{\partial [p\oplus q]_\nu}{\partial p_\sigma}\right) \,\varphi^\mu_\rho(q) \,.
\label{eq:pre1}
\end{equation}
Using that the left hand side of the previous equation is in fact (applying the chain rule)
\begin{equation}
\frac{\partial \varphi^\mu_\nu(p\oplus q)}{\partial p_\sigma} \,=\, \frac{\partial \varphi^\mu_\nu(p\oplus q)}{\partial [p\oplus q]_\rho} \,\frac{\partial [p\oplus q]_\rho}{\partial p_\sigma}\,,
\label{eq:pre2}
\end{equation}
and taking the limit $p\to 0$ of the right hand sides of Eqs.~\eqref{eq:pre1} and Eq.~\eqref{eq:pre2}, one gets, using also Eq.~\eqref{eq:limit2},
\begin{equation}
\frac{\partial \varphi^\mu_\nu(q)}{\partial q_\rho} \,\varphi^\sigma_\rho(q) \,=\, \frac{\partial \varphi^\sigma_\nu(q)}{\partial q_\rho} \,\varphi^\mu_\rho(q) \,.
\label{eq:pre3}
\end{equation}
Comparing Eq.~\eqref{eq:pre3} with Eq.~\eqref{eq:commNCspt}, we see that $\{\tilde{x}^\mu, \tilde{x}^\sigma\}=0.$
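For completeness, we spell out the computation behind this identification (a short check, using the canonical brackets $\{x^\nu, k_\mu\}=\delta^\nu_\mu$):
\begin{equation}
\{\tilde{x}^\mu, \tilde{x}^\sigma\}\,=\,\{x^\nu \varphi^\mu_\nu(k), x^\lambda \varphi^\sigma_\lambda(k)\}\,=\,x^\lambda \left(\frac{\partial \varphi^\sigma_\lambda(k)}{\partial k_\nu}\,\varphi^\mu_\nu(k)-\frac{\partial \varphi^\mu_\lambda(k)}{\partial k_\nu}\,\varphi^\sigma_\nu(k)\right)\,,
\end{equation}
which vanishes precisely when Eq.~\eqref{eq:pre3} holds.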
As the space-time coordinates $\tilde{x}$ commute, one can define new momentum variables $\tilde{p}_\mu=g_\mu(p)$ such that $(\tilde{x},\tilde{p})$ are canonically conjugate variables, as in the case of $(x,p)$; that is, there exists a canonical transformation relating both sets of phase-space coordinates. Indeed, one can prove that the composition of the new momentum variables is in fact the sum
\begin{equation}
[\tilde{p}\,\tilde{\oplus} \,\tilde{q}]_\mu \,\doteq\, g_\mu(p\oplus q) \,=\, \tilde{p}_\mu + \tilde{q}_\mu\,,
\label{lcl}
\end{equation}
(where the DCL $\tilde{\oplus}$ has been defined as in Eq.~\eqref{eq:DCLdef}), so that $(\tilde{x},\tilde{p})$ is in fact the phase space of SR.
In order to do that, we will first check that Eq.~\eqref{loc1} is invariant under different choices of momentum variables. If we consider new momentum variables $\tilde{p}_\mu = g_\mu(p)$, we can calculate the derivative of the DCL with respect to $\tilde{p}_\rho$
\begin{equation}
\frac{\partial[\tilde{p}\,\tilde{\oplus}\, \tilde{q}]_\nu}{\partial \tilde{p}_\rho} \,=\, \frac{\partial g_\nu (p\oplus q)}{\partial g_\rho (p)}\,=\,\frac{\partial g_\nu (p\oplus q)}{\partial [p\oplus q]_\lambda}\,\frac{\partial [p\oplus q]_\lambda}{\partial p_\sigma }\,\frac{\partial p_\sigma}{\partial g_\rho (p)}\,,
\end{equation}
and also
\begin{equation}
\tilde{\varphi}^\mu _\rho (\tilde{p}) \,= \, \lim\limits_{g(l)\rightarrow 0}\,\frac{\partial g_\rho (l\oplus p)}{\partial g_\mu (l)}\,=\,\lim\limits_{l\rightarrow 0}\,\frac{\partial g_\rho (l\oplus p)}{\partial [l\oplus p]_\xi}\,\frac{\partial [l\oplus p]_\xi }{\partial l_\eta}\,\frac{\partial l_\eta}{\partial g_\mu (l)}\,=\,\,\frac{\partial g_\rho (p)}{\partial p_\xi}\,\varphi _\xi ^\mu (p)\,,
\label{phi_transformation}
\end{equation}
where we used Eq.~\eqref{eq:limit2} and also the condition in the change of variables $\tilde{p}=0\implies p=0$. Taking both results and using Eq.~\eqref{loc1} one finds
\begin{equation}
\frac{\partial[\tilde{p}\,\tilde{\oplus}\, \tilde{q}]_\nu}{\partial \tilde{p}_\rho} \,\tilde{\varphi}^\mu _\rho (\tilde{p}) \,= \, \frac{\partial g_\nu (p\oplus q)}{\partial [p\oplus q]_\lambda} \,\frac{\partial[p\oplus q]_\lambda}{\partial p_\sigma} \,\varphi^\mu _\sigma (p) \,.
\end{equation}
One can do the same for the third term of Eq.~\eqref{loc1} obtaining
\begin{equation}
\frac{\partial[\tilde{p}\,\tilde{\oplus}\, \tilde{q}]_\nu}{\partial \tilde{q}_\rho} \,\tilde{\varphi}^\mu _\rho (\tilde{q}) \,= \, \frac{\partial g_\nu (p\oplus q)}{\partial [p\oplus q]_\lambda} \,\frac{\partial[p\oplus q]_\lambda}{\partial q_\sigma} \,\varphi^\mu _\sigma (q) \,.
\end{equation}
And finally, the first term of Eq.~\eqref{loc1}, following Eq.~\eqref{phi_transformation}, transforms as
\begin{equation}
\tilde{\varphi}^\mu _\nu (\tilde{p}\,\tilde{\oplus}\, \tilde{q}) \,= \,\frac{\partial g_\nu (p\oplus q)}{\partial [p\oplus q]_\lambda}\,\varphi _\lambda ^\mu (p\oplus q)\,.
\end{equation}
To summarize, we have seen that if Eq.~\eqref{loc1} is satisfied by the variables ($p$, $q$), it will also be satisfied by ($\tilde{p}$, $\tilde{q}$), obtained through $\tilde{p}_\mu = g_\mu(p)$, $\tilde{q}_\mu = g_\mu(q)$. This means that one can always choose this change of basis in such a way that $\tilde{\varphi}^\mu_\nu(\tilde{k}) = \delta^\mu_\nu$, leading to
\begin{equation}
\frac{\partial[\tilde{p}\,\tilde{\oplus}\, \tilde{q}]_\nu}{\partial \tilde{p}_\mu} \,=\, \frac{\partial[\tilde{p}\,\tilde{\oplus}\, \tilde{q}]_\nu}{\partial \tilde{q}_\mu} \,=\, \delta^\mu_\nu\,,
\end{equation}
and then $[\tilde{p}\,\tilde{\oplus}\, \tilde{q}]_\nu = \tilde{p}_\nu + \tilde{q}_\nu$, as it was stated in Sect.~\ref{sec:firstattempt}.
\section{DCL of \texorpdfstring{$\kappa$}{k}-Poincaré in the bicrossproduct basis through locality}
\label{append-bicross}
As we did in Appendix~\ref{appendix_translations}, we will compute the composition law obtained from the locality condition when $\varphi_{L\,\nu}^{\:\:\mu}(p, q)\,=\, \varphi^\mu_\nu(p)$, taking the $\varphi^\mu_\nu(p)$ of Eq.~\eqref{eq:phibicross}, which corresponds to $\kappa$-Poincaré in the bicrossproduct basis. From Eq.~\eqref{varphi-oplus}, we have the following system of equations to solve
\begin{equation}
\begin{split}
\text{For}\quad \mu\,&=\,0,\quad \nu\,=\,0 \qquad 1\,=\,\frac{\partial (p\oplus q)_0}{\partial p_0}-\frac{\partial (p\oplus q)_0}{\partial p_i}\,\frac{p_i}{\Lambda} \,.\\
\text{For}\quad \mu\,&=\,i,\quad \nu\,=\,0 \qquad 0\,=\,\frac{\partial (p\oplus q)_0}{\partial p_j}\,\delta^i_j \,.\\
\text{For}\quad \mu\,&=\,0,\quad \nu\,=\,i \qquad -\frac{(p \oplus q)_i}{\Lambda}\,=\,\frac{\partial (p\oplus q)_i}{\partial p_0}-\frac{\partial (p\oplus q)_i}{\partial p_j}\,\frac{p_j}{\Lambda}\,.\\
\text{For}\quad \mu\,&=\,i,\quad \nu\,=\,j \qquad \delta^i_j\,=\,\frac{\partial (p\oplus q)_j}{\partial p_i}\,.\\
\end{split}
\end{equation}
We follow a similar strategy to find the composition law. The first two equations give the result
\begin{equation}
(p\oplus q)_0\,=\,p_0+q_0\,.
\end{equation}
The fourth equation requires the composition law to be linear in the spatial components of $p$, so the composition can be written as
\begin{equation}
(p\oplus q)_i\,=\,p_i+q_i\, f\left(\frac{p_0}{\Lambda},\frac{q_0}{\Lambda},\frac{\vec{q}^2}{\Lambda^2}\right)\,.
\end{equation}
Introducing this into the third equation, the terms in $p_i$ cancel and we are left with the differential equation $\partial f/\partial p_0 \,=\, -f/\Lambda$, whose solution is
\begin{equation}
f\,=\,e^{-p_0/\Lambda}\,g\left(\frac{q_0}{\Lambda},\frac{\vec{q}^2}{\Lambda^2}\right)\,.
\end{equation}
Taking into account the condition~\eqref{eq:cl0}, we conclude that $g\,=\,1$ and then the composition reads
\begin{equation}
(p\oplus q)_0\,=\,p_0+q_0\,,\qquad (p\oplus q)_i\,=\,p_i+q_i e^{-p_0/\Lambda}\,.
\label{DCL-bcb-locality}
\end{equation}
\section{Lorentz transformation in the bicrossproduct basis from locality}
\label{append-2pLT}
In order to obtain the Lorentz transformations in the one-particle system, we will impose that the Lorentz generators, together with the noncommutative space-time coordinates, close a ten-dimensional algebra. The modified Poisson brackets are the ones involving the boost generators $J^{0i}$:
\begin{equation}
\lbrace \tilde{x}^0, J^{0i} \rbrace \,=\, \tilde{x}^i + \frac{1}{\Lambda} J^{0i} \,,\qquad \lbrace \tilde{x}^j, J^{0i}\rbrace \,=\, \delta^i_j \tilde{x}^0 + \frac{1}{\Lambda} J^{ji}\,.
\label{eq:boost_pb}
\end{equation}
Using the notation
\begin{equation}
\tilde{x}^\mu \,=\, x^\nu \varphi^\mu_\nu(k) \,,\quad J^{0i} \,=\, x^\mu {\cal J}^{0i}_\mu(k) \,, \quad J^{ij} \,=\, x^\mu {\cal J}^{ij}_\mu (k) = x^j k_i - x^i k_j\,,
\end{equation}
Eq.~\eqref{eq:boost_pb} leads to the system of equations
\begin{equation}
\frac{\partial\varphi^0_\mu}{\partial k_\nu} {\cal J}^{0i}_\nu - \frac{\partial {\cal J}^{0i}_\mu}{\partial k_\nu} \varphi^0_\nu \,=\, \varphi^i_\mu + \frac{1}{\Lambda} {\cal J}^{0i}_\mu \,,\,\,\,
\frac{\partial\varphi^j_\mu}{\partial k_\nu} {\cal J}^{0i}_\nu - \frac{\partial {\cal J}^{0i}_\mu}{\partial k_\nu} \varphi^j_\nu \,=\, \delta^i_j \varphi^0_\mu + \frac{1}{\Lambda} \left(\delta^i_\mu k_j - \delta^j_\mu k_i\right)\,.
\label{J0i-xtilde}
\end{equation}
Inserting the expression of $\varphi^\mu_\nu(k)$ of Eq.~\eqref{eq:phibicross}, which corresponds to $\kappa$-Poincaré in the bicrossproduct basis, we can obtain ${\cal J}^{0i}_\mu$ from (\ref{J0i-xtilde}). One finally obtains
\begin{equation}
{\cal J}^{0i}_0 \,=\, - k_i \,,\qquad {\cal J}^{0i}_j \,=\, \delta^i_j \,\frac{\Lambda}{2} \left[e^{-2 k_0/\Lambda} - 1 - \frac{\vec{k}^2}{\Lambda^2}\right] + \,\frac{k_i k_j}{\Lambda}\,.
\label{psi(0i)}
\end{equation}
Now we can write the Poisson brackets of $k_\mu$ and $J^{0i}$
\begin{equation}
\lbrace k_0, J^{0i}\rbrace \,=\, - k_i \,,\qquad \lbrace k_j, J^{0i}\rbrace \,=\, \delta^i_j \,\frac{\Lambda}{2} \left[e^{-2 k_0/\Lambda} - 1 - \frac{\vec{k}^2}{\Lambda^2}\right] + \,\frac{k_i k_j}{\Lambda} \,.
\end{equation}
\section{Lorentz transformation in the one-particle system of the local DCL1 kinematics}
\label{LT-one-particle}
We determine the Lorentz generators $J^{\alpha\beta}$ by imposing that, together with the space-time coordinates $\tilde{x}^\alpha$, they form a ten-dimensional Lie algebra. From the Lorentz algebra generated by $J^{\alpha\beta}$ and the algebra of the space-time coordinates $\tilde{x}^\alpha$ (when there is no mixing of phase-space coordinates in $\tilde{y}^\alpha$),
\begin{equation}
\{\tilde{x}^i, \tilde{x}^0\} \,=\, - (\epsilon/\Lambda) \,\tilde{x}^i\,, \quad\quad\quad \{\tilde{x}^i, \tilde{x}^j\} \,=\, 0\,,
\end{equation}
one can determine the rest of the Poisson brackets through Jacobi identities:
\begin{equation}
\begin{split}
\{\tilde{x}^0, J^{0j}\} \,&=\, \tilde{x}^j - (\epsilon/\Lambda) J^{0j}\,, \quad\quad \{\tilde{x}^i, J^{0j}\} \,=\, \delta^{ij} \tilde{x}^0 - (\epsilon/\Lambda) J^{ij}\,,\\ \{\tilde{x}^0, J^{jk}\} \,&=\, 0\,, \quad\quad \{\tilde{x}^i, J^{jk}\} \,=\, \delta^{ik} \tilde{x}^j - \delta^{ij} \tilde{x}^k\,.
\end{split}
\label{xtilde-J}
\end{equation}
Using
\begin{equation}
\{\tilde{x}^\alpha, J^{\beta\gamma}\} \,=\, \{x^\nu \varphi^\alpha_\nu(k), x^\rho {\cal J}^{\beta\gamma}_\rho(k)\} \,=\, x^\mu \left(\frac{\partial\varphi^\alpha_\mu(k)}{\partial k_\rho} {\cal J}^{\beta\gamma}_\rho(k) - \frac{\partial {\cal J}^{\beta\gamma}_\mu(k)}{\partial k_\nu} \varphi^\alpha_\nu(k)\right)
\end{equation}
in the algebra (\ref{xtilde-J}), one obtains
\begin{align}
\left(\frac{\partial\varphi^0_\mu(k)}{\partial k_\rho} {\cal J}^{0j}_\rho(k) - \frac{\partial {\cal J}^{0j}_\mu(k)}{\partial k_\nu} \varphi^0_\nu(k)\right) \,=& \varphi^j_\mu(k) - (\epsilon/\Lambda) {\cal J}^{0j}_\mu(k)\,, \nonumber \\
\left(\frac{\partial\varphi^i_\mu(k)}{\partial k_\rho} {\cal J}^{0j}_\rho(k) - \frac{\partial {\cal J}^{0j}_\mu(k)}{\partial k_\nu} \varphi^i_\nu(k)\right) \,=& \delta^{ij} \varphi^0_\mu(k) - (\epsilon/\Lambda) {\cal J}^{ij}_\mu(k)\,, \nonumber \\
\left(\frac{\partial\varphi^0_\mu(k)}{\partial k_\rho} {\cal J}^{jk}_\rho(k) - \frac{\partial {\cal J}^{jk}_\mu(k)}{\partial k_\nu} \varphi^0_\nu(k)\right) \,=& 0\,, \nonumber \\
\left(\frac{\partial\varphi^i_\mu(k)}{\partial k_\rho} {\cal J}^{jk}_\rho(k) - \frac{\partial {\cal J}^{jk}_\mu(k)}{\partial k_\nu} \varphi^i_\nu(k)\right) \,=& \delta^{ik} \varphi^j_\mu(k) - \delta^{ij} \varphi^k_\mu(k)\,.
\end{align}
From the functions determining the generalized space-time coordinates for the one-particle system of the local DCL1 kinematics according to Eq.~\eqref{magicformula},
\begin{align}
& \varphi^0_0(k) \,=\, \lim_{l\to 0} \frac{\partial(l\oplus k)_0}{\partial l_0} \,=\, 1 + \epsilon k_0/\Lambda\,, &\quad\quad &\varphi^j_0(k) \,=\, \lim_{l\to 0} \frac{\partial(l\oplus k)_0}{\partial l_j} \,=\, 0 \,,\nonumber \\
& \varphi^0_i(k) \,=\, \lim_{l\to 0} \frac{\partial(l\oplus k)_i}{\partial l_0} \,=\, \epsilon k_i/\Lambda\,, &\quad\quad &\varphi^j_i(k) \,=\, \lim_{l\to 0} \frac{\partial(l\oplus k)_i}{\partial l_j} \,=\, \delta_i^j\,,
\end{align}
we find the following system of equations for ${\cal J}^{\alpha\beta}_\mu(k)$:
\begin{align}
& \frac{\partial{\cal J}^{0j}_0}{\partial k_0} \,=\, (\epsilon/\Lambda) \,\left[2 {\cal J}^{0j}_0 - k_0 \frac{\partial{\cal J}^{0j}_0}{\partial k_0} - k_k \frac{\partial{\cal J}^{0j}_0}{\partial k_k}\right]\,, \nonumber \\
& \frac{\partial{\cal J}^{0j}_0}{\partial k_i} \,=\, - \delta^{ij} (1 + \epsilon k_0/\Lambda) + (\epsilon/\Lambda) {\cal J}^{ij}_0\,, \nonumber \\
& \frac{\partial{\cal J}^{0j}_l}{\partial k_0} \,=\, - \delta^j_l + (\epsilon/\Lambda) \,\left[2 {\cal J}^{0j}_l - k_0 \frac{\partial{\cal J}^{0j}_l}{\partial k_0} - k_k \frac{\partial{\cal J}^{0j}_l}{\partial k_k}\right]\,, \nonumber \\
& \frac{\partial{\cal J}^{0j}_l}{\partial k_i} \,=\, - \delta^{ij} \epsilon k_l/\Lambda + (\epsilon/\Lambda) {\cal J}^{ij}_l\,, \nonumber \\
& \frac{\partial{\cal J}^{jk}_0}{\partial k_0} \,=\, (\epsilon/\Lambda) \,\left[{\cal J}^{jk}_0 - k_0 \frac{\partial{\cal J}^{jk}_0}{\partial k_0} - k_m \frac{\partial{\cal J}^{jk}_0}{\partial k_m}\right]\,, \quad\quad
\frac{\partial{\cal J}^{jk}_0}{\partial k_i} \,=\, 0\,, \nonumber \\
& \frac{\partial{\cal J}^{jk}_l}{\partial k_0} \,=\, (\epsilon/\Lambda) \,\left[{\cal J}^{jk}_l - k_0 \frac{\partial{\cal J}^{jk}_l}{\partial k_0} - k_m \frac{\partial{\cal J}^{jk}_l}{\partial k_m}\right]\,, \quad\quad
\frac{\partial{\cal J}^{jk}_l}{\partial k_i} \,=\, \delta^{ij} \delta^k_l - \delta^{ik} \delta^j_l\,.
\end{align}
Imposing the condition that in the limit $(k_0^2/\Lambda^2)\to 0$, $(\vec{k}^2/\Lambda^2)\to 0$ we should recover the linear Lorentz transformations,
\begin{equation}
{\cal J}^{0j}_0 \to - k_j\,, \quad\quad {\cal J}^{0j}_k \to - \delta^j_k \,k_0\,, \quad\quad {\cal J}^{jk}_0 \to 0\,, \quad\quad {\cal J}^{jk}_l \to (\delta^k_l k_j - \delta^j_l k_k)\,,
\end{equation}
we find a unique solution:
\begin{align}
& {\cal J}^{ij}_0(k) \,=\, 0\,, &\quad&{\cal J}^{ij}_k(k) \,=\, \delta^j_k \, k_i - \delta^i_k \, k_j\,, \nonumber \\
& {\cal J}^{0j}_0(k) \,=\, - k_j (1+\epsilon k_0/\Lambda)\,, &\quad&{\cal J}^{0j}_k(k) \,=\, \delta^j_k \left[-k_0 - \epsilon k_0^2/2\Lambda\right] + (\epsilon/\Lambda) \left[\vec{k}^2/2 - k_j k_k\right]\,.
\end{align}
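As a quick check of this solution, consider the second equation of the system above, with ${\cal J}^{ij}_0=0$:
\begin{equation}
\frac{\partial {\cal J}^{0j}_0}{\partial k_i}\,=\,\frac{\partial}{\partial k_i}\left[-k_j\left(1+\epsilon k_0/\Lambda\right)\right]\,=\,-\delta^{ij}\left(1+\epsilon k_0/\Lambda\right)\,=\,-\delta^{ij}\left(1+\epsilon k_0/\Lambda\right)+\frac{\epsilon}{\Lambda}\,{\cal J}^{ij}_0\,;
\end{equation}
the remaining equations can be verified analogously.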
\chapter{Resonances and cross sections with a DRK}
\section{BSR Extension of the Breit--Wigner Distribution}
\label{appendix:B-W}
We start from Eq.~\eqref{eq:BW-BSR},
\begin{equation}
f_{\text{BSR}}(m^2)\,=\,\frac {K}{\left[\mu^{2}(m^2)-M_X^{2}\right]^{2}+M_X^{2}\Gamma_X ^{2}}.
\label{eqapp:BW-BSR}
\end{equation}
In order to simplify future expressions, we introduce the dimensionless variables
\begin{equation}\label{dimensionless}
\tau:=\frac{m^2}{M_X^2}\,,\quad\quad \gamma:=\frac{\Gamma_X^2}{M_X^2}\,,\quad\quad \lambda:=\frac{\Lambda^2}{M_X^2}\,,
\end{equation}
so we can write Eq.~\eqref{eqapp:BW-BSR} as
\begin{equation}
\label{simplification}
f_{\text{BSR}}(m^2)=\frac{K}{M_X^4F(\tau)}\,,
\end{equation}
where
\begin{equation}
F(\tau):=\left[\tau\left(1+\epsilon\frac{\tau}{\lambda}\right)-1\right]^2+\gamma\,,
\end{equation}
for the BSR we are considering, Eq.~\eqref{eq:DCL_tp}.
For a resonance, we need $\gamma\ll 1$, so in order to have a peak we need the following equation to hold:
\begin{equation}
\tau\left(1+\epsilon\frac{\tau}{\lambda}\right)-1=0\,,
\end{equation}
with solutions
\begin{equation}
\tau^*=-\frac{\lambda}{2\epsilon}\left[1\pm\left(1+4\frac{\epsilon}{\lambda}\right)^{1/2}\right].
\end{equation}
We can make a Taylor expansion at $\tau^*$ in order to study the distribution close to the peaks. Evaluating the derivatives of $F(\tau)$ up to second order
\begin{equation}
\left.\frac{dF}{d\tau}\right\vert_{\tau=\tau^*}=2\left[\tau^*\left(1+\epsilon\frac{\tau^*}{\lambda}\right)-1 \right]\left(1+2\frac{\epsilon}{\lambda}\tau^*\right)=0\,,
\end{equation}
\begin{equation}
\left.\frac{d^2F}{d\tau^2}\right\vert_{\tau=\tau^*}=2\left(1+2\frac{\epsilon}{\lambda}\tau^* \right)^2=2\left(1+4\frac{\epsilon}{\lambda}\right)\,,
\end{equation}
one obtains
\begin{equation}
F(\tau)\approx \gamma+\left(1+4\frac{\epsilon}{\lambda}\right)(\tau-\tau^*)^2\,,
\end{equation}
and substituting in Eq.~\eqref{simplification}, and using Eq.~\eqref{dimensionless}, one finds
\begin{equation}
f_{\text{BSR}}(m^2)\approx \frac{1}{\left(1+4\epsilon M_X^2/\Lambda^2 \right)}\cdot \frac{K}{(m^2-{m^*}^2)^2+M_X^2\Gamma_X^2 \left(1+4\epsilon M_X^2/\Lambda^2 \right)^{-1}}\,,
\label{BSR exp}
\end{equation}
where
\begin{equation}\label{m_*}
{m^*}^2:=M_X^2\tau^*=\frac{\Lambda^2}{2\epsilon}\left[-1\pm\left(1+4\epsilon\frac{M_X^2}{\Lambda^2}\right)^{1/2} \right]\,.
\end{equation}
One can see that the maximum value of the distribution~\eqref{BSR exp} is reached at $m^2={m^*}^2$, and one must study separately whether the value of $\epsilon$ of Eq.~\eqref{eq:DCL_tp} is positive or negative.
For $\epsilon=+1$, one obtains a unique solution for the pole
\begin{equation}\label{epsilon+}
{m^*}^2=\frac{\Lambda^2}{2}\left[\left(1+4\frac{M_X^2}{\Lambda^2}\right)^{1/2}-1 \right].
\end{equation}
For this case, one can see that the shape of the distribution of Eq.~\eqref{BSR exp} is the same as in SR; the difference is that the position of the peak, $m^2={m^*}^2$, is no longer the squared mass of the resonance. One can easily see from Eq.~\eqref{epsilon+} that one recovers the SR peak when $M_X\ll \Lambda$,
\begin{equation}
{m^*}^2\approx\frac{\Lambda^2}{2}\left[1+2\frac{M_X^2}{\Lambda^2}-1 \right]=M_X^2.
\end{equation}
The width of the peak can be computed from Eq.~\eqref{BSR exp} (to be compared with the Breit--Wigner distribution~\eqref{eq:BW}):
\begin{equation}
{\Gamma^*}^2=\frac{M_X^2\Gamma_X^2}{{m^*}^2\left(1+4{M_X^2}/{\Lambda^2}\right)}=\Gamma_X^2\frac{2{M_X^2}/{\Lambda^2}}{\left(1+4{M_X^2}/{\Lambda^2}\right)\left[\left(1+4{M_X^2}/{\Lambda^2}\right)^{1/2}-1 \right]},
\label{eq:width+}
\end{equation}
which also leads to the SR decay width of the resonance when $M_X^2\ll \Lambda^2$.
The $\epsilon=-1$ case is much more interesting. If
\begin{equation}
1-4\frac{M_X^2}{\Lambda^2}>0,
\end{equation}
that is, when $M_X<\Lambda/2$, Eq.~\eqref{m_*} gives two solutions for ${m^*}^2>0$,
\begin{equation}\label{epsilon-}
{m^*_{\pm}}^2=\frac{\Lambda^2}{2}\left[1\pm\left(1-4\frac{M_X^2}{\Lambda^2}\right)^{1/2} \right].
\end{equation}
Then, one finds two peaks (at $m^2={m^*_{\pm}}^2$) in the squared mass distribution, in contrast to the SR case, where there is only one. One can read off $\Lambda^2$ and $M_X^2$ from the positions of these two peaks using Eq.~\eqref{epsilon-}:
\begin{equation}\label{changes}
\Lambda^2=({m^*_+}^2+{m^*_-}^2)\,, \quad \quad\quad M_X^2=\frac{{m^*_+}^2{m^*_-}^2}{({m^*_+}^2+{m^*_-}^2)}\,.
\end{equation}
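These identifications follow directly from Eq.~\eqref{epsilon-}: the sum and the product of the two solutions are
\begin{equation}
{m^*_+}^2+{m^*_-}^2\,=\,\Lambda^2\,,\qquad {m^*_+}^2\,{m^*_-}^2\,=\,\frac{\Lambda^4}{4}\left[1-\left(1-4\frac{M_X^2}{\Lambda^2}\right)\right]\,=\,\Lambda^2 M_X^2\,,
\end{equation}
so that $M_X^2={m^*_+}^2{m^*_-}^2/({m^*_+}^2+{m^*_-}^2)$.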
As in Eq.~\eqref{eq:width+}, the widths of the two peaks are
\begin{equation}\label{gamma1}
{\Gamma_\pm^*}^2=\frac{M_X^2\Gamma_X^2}{{m_\pm^*}^2\left(1-4{M_X^2}/{\Lambda^2}\right)}.
\end{equation}
From $M_X^2$ and $\Lambda^2$ of Eq.~\eqref{changes}, we get
\begin{equation}
1-4\frac{M_X^2}{\Lambda^2}=\frac{({m_+^*}^2-{m_-^*}^2)^2}{({m_+^*}^2+{m_-^*}^2)^2}.
\end{equation}
Substituting in Eq.~\eqref{gamma1} one finds
\begin{equation}\label{gamma}
{\Gamma_\pm^*}^2=\Gamma_X^2\frac{({m_+^*}^2+{m_-^*}^2){m_\mp^*}^2}{({m_+^*}^2-{m_-^*}^2)^2}.
\end{equation}
From Eq.~\eqref{gamma}, one can find the decay width of the resonance $X$:
\begin{equation} \label{eq:width}
\Gamma_X^2={\Gamma_+^*}^2\frac{({m_+^*}^2-{m_-^*}^2)^2}{{m_-^*}^2({m_+^*}^2+{m_-^*}^2)}={\Gamma_-^*}^2\frac{({m_+^*}^2-{m_-^*}^2)^2}{{m_+^*}^2({m_+^*}^2+{m_-^*}^2)}=({\Gamma_+^*}^2+{\Gamma_-^*}^2)\left[\frac{{m_+^*}^2-{m_-^*}^2}{{m_+^*}^2+{m_-^*}^2}\right]^2.
\end{equation}
When $M_X^2\ll\Lambda^2$, the expressions for the poles (Eq.~\eqref{epsilon-}) and the widths (Eq.~\eqref{gamma1}) are
\begin{equation}
{m^*_{\pm}}^2\approx\frac{\Lambda^2}{2}\left[1\pm \left(1-2\frac{M_X^2}{\Lambda^2}\right)\right],
\end{equation}
\begin{equation}
{\Gamma_\pm^*}^2\approx\Gamma_X^2\frac{2{M_X^2}/{\Lambda^2}}{\left[1\pm\left(1-2{M_X^2}/{\Lambda^2}\right) \right]},
\end{equation}
so in this limit
\begin{equation}
\begin{array}{ll}
{m_+^*}^2\approx \Lambda^2 \,,& {\Gamma_+^*}^2\approx \Gamma_X^2 {M_X^2}/{\Lambda^2}\,, \\
{m_-^*}^2 \approx M_X^2 \,,& {\Gamma_-^*}^2\approx \Gamma_X^2\,,
\end{array}
\end{equation}
and one can see that for one of the peaks ($-$) one finds the result of SR, while the other peak ($+$) is shifted by a factor $\Lambda/M_X$, and its width reduced by a factor $M_X/\Lambda$ with respect to the SR peak.
We can note that for $M_X>\Lambda/2$ there is no peak at all, since the argument of the square root in Eq.~\eqref{epsilon-} becomes negative. This would lead to an ``invisible'' resonance. In the limit $M_X\rightarrow \Lambda/2$ one can see that the two poles coincide, with their widths tending to infinity.
We have not considered the dependence on $m^2$ of the factor $K$ (which encodes how the resonance is produced and how it decays into the two particles) because we assume that the analysis is carried out near the peaks, where $K\approx K({m^*}^2)$, so the variation of $K$ with $m^2$ can be neglected.
\section{Cross sections with a DCL}
\label{appendix_cross_sections}
In this part of the appendix we show how to obtain the cross section of the process $e^-(k) e^+(\overline{k}) \rightarrow Z \rightarrow \mu^-(p) \mu^+(\overline{p})$ in the BSR case. Firstly, we need to compute the two-particle phase-space integral
\begin{equation}
\widehat{F}^{(\alpha)}(E_0) \,\doteq\, \overline{PS}_2^{(\alpha)} F(k, \overline{k}, p, \overline{p})
\end{equation}
for different Lorentz invariant functions $F$ of the four momenta $k$, $\overline{k}$, $p$, $\overline{p}$. We first use the Dirac delta function $\delta_\alpha^{(4)}(k, \overline{k}; p, \overline{p})$ that takes into account the conservation law of each channel $\alpha$, which lets us express $\overline{p}$ as a function $\overline{p}^{(\alpha)}(k, \overline{k}, p)$ of the remaining three momenta $k$, $\overline{k}$, $p$. Therefore, we have
\begin{equation}
\widehat{F}^{(\alpha)}(E_0) \,=\, \frac{1}{(2\pi)^2} \int d^4p \,\delta(p^2) \theta(p_0) \,\delta\left(\overline{p}^{(\alpha)\,2}(k, \overline{k}, p)\right) \theta\left(\overline{p}_0^{(\alpha)}(k, \overline{k}, p)\right) F_\alpha(k, \overline{k}, p)\,,
\end{equation}
where
\begin{equation}
F_\alpha(k, \overline{k}, p) \,=\, F(k, \overline{k}, p, \overline{p}^{(\alpha)}(k, \overline{k}, p))\,.
\end{equation}
Then, integrating over $p_0$ and $|\vec{p}|$ with the remaining two Dirac delta functions we find
\begin{equation}
\widehat{F}^{(\alpha)}(E_0) \,=\, \frac{1}{8\pi^2} \int d\Omega_{\hat{p}} \frac{E^{(\alpha)}(k, \overline{k}, \hat{p})}{|\frac{\partial\overline{p}^{(\alpha)\,2}}{\partial p_0}|_{p_0=E^{(\alpha)}(k, \overline{k}, \hat{p})}} \,F_\alpha(k, \overline{k}, p)|_{|\vec{p}|=p_0=E^{(\alpha)}(k, \overline{k}\,, \hat{p})}\,,
\end{equation}
where $E^{(\alpha)}(k, \overline{k}, \hat{p})$ is the positive value of $p_0$ such that $\overline{p}^{(\alpha)\,2}=0$. Due to the rotational invariance and the choice $\vec{\overline{k}}=-\vec{k}$ one can show that $E^{(\alpha)}$ is a function of the energy $E_0$ of the particles in the initial state and the angle $\theta$ between the directions of $\vec{k}$ and $\vec{p}$. So we have
\begin{equation}
\widehat{F}^{(\alpha)}(E_0) \,=\, \frac{1}{4\pi} \int d\cos\theta \frac{E^{(\alpha)}(E_0, \cos\theta)}{|\frac{\partial\overline{p}^{(\alpha)\,2}}{\partial p_0}|_{p_0=E^{(\alpha)}(E_0, \cos\theta)}} F_\alpha(k, \overline{k}, p)|_{|\vec{p}|=p_0=E^{(\alpha)}(E_0, \cos\theta)}\, .
\label{eq:Falpha}
\end{equation}
In order to obtain the expression of $E^{(\alpha)}(E_0, \cos\theta)$ and $|\frac{\partial\overline{p}^{(\alpha)\,2}}{\partial p_0}|_{p_0=E^{(\alpha)}(E_0, \cos\theta)}$, we need to use the explicit form of the conservation law for each channel.
For the first channel, we have $k\oplus\overline{k}=p\oplus\overline{p}$, and then
\begin{equation}
k_\mu + \overline{k}_\mu + \frac{k\cdot \overline{k}}{2\overline{\Lambda}^2} k_\mu \,=\,
p_\mu + \overline{p}_\mu + \frac{p\cdot \overline{p}}{2\overline{\Lambda}^2} p_\mu\,.
\end{equation}
This leads to
\begin{equation}
p\cdot \overline{p}^{(1)} \,=\, k\cdot p + \overline{k}\cdot p + \frac{(k\cdot \overline{k})(k\cdot p)}{2\overline{\Lambda}^2}\,,
\end{equation}
and, neglecting terms proportional to $(1/\overline{\Lambda}^4)$, one has
\begin{equation}
\overline{p}^{(1)}_\mu \,=\, k_\mu + \overline{k}_\mu - p_\mu + \frac{k\cdot \overline{k}}{2\overline{\Lambda}^2} k_\mu - \frac{(k\cdot p+\overline{k}\cdot p)}{2\overline{\Lambda}^2} p_\mu\,,
\end{equation}
and
\begin{align}
\overline{p}^{(1)\,2} &\,=\, 2 k\cdot \overline{k} - 2 k\cdot p - 2 \overline{k}\cdot p + \frac{(k\cdot \overline{k})^2}{\overline{\Lambda}^2} - \frac{(k\cdot \overline{k})(k\cdot p)}{\overline{\Lambda}^2} - \frac{(k\cdot p + \overline{k}\cdot p)^2}{\overline{\Lambda}^2}\,, \\
k\cdot \overline{p}^{(1)} &\,=\, k\cdot \overline{k} - k\cdot p - \frac{(k\cdot p+\overline{k}\cdot p)k\cdot p}{2\overline{\Lambda}^2}\,, \\
\overline{k}\cdot \overline{p}^{(1)} &\,=\, k\cdot \overline{k} - \overline{k}\cdot p + \frac{(k\cdot \overline{k})^2}{2\overline{\Lambda}^2} - \frac{(k\cdot p+\overline{k}\cdot p)\overline{k}\cdot p}{2\overline{\Lambda}^2} \,.
\end{align}
In the reference frame where $k_\mu = E_0 (1, \hat{k})$, $\overline{k}_\mu = E_0 (1, -\hat{k})$, one finds
\begin{align}
\overline{p}^{(1)\,2} &\,=\, 4 E_0^2 - 4 E_0 p_0 + \frac{4E_0^4}{\overline{\Lambda}^2} - \frac{2E_0^3}{\overline{\Lambda}^2} p_0 (1-\cos\theta) - \frac{4E_0^2}{\overline{\Lambda}^2} p_0^2\,, \\
k\cdot \overline{p}^{(1)} &\,=\, 2 E_0^2 - E_0 p_0 (1-\cos\theta) - \frac{E_0^3}{\overline{\Lambda}^2} p_0 (1-\cos\theta)\,, \\
\overline{k}\cdot \overline{p}^{(1)} &\,=\, 2E_0^2 - E_0 p_0 (1+\cos\theta) + \frac{2E_0^4}{\overline{\Lambda}^2} - \frac{E_0^2}{\overline{\Lambda}^2} p_0^2 (1+\cos\theta)\,.
\end{align}
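In obtaining these expressions we have used that, with the $(+,-,-,-)$ signature and massless momenta $p_\mu=p_0(1,\hat{p})$, $\hat{k}\cdot\hat{p}=\cos\theta$, the basic scalar products reduce to
\begin{equation}
k\cdot \overline{k}\,=\,2E_0^2\,,\qquad k\cdot p\,=\,E_0 p_0\,(1-\cos\theta)\,,\qquad \overline{k}\cdot p\,=\,E_0 p_0\,(1+\cos\theta)\,.
\end{equation}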
From the expression of $\overline{p}^{(1)\,2}$, one arrives at
\begin{equation}
\frac{\partial \overline{p}^{(1)\,2}}{\partial p_0} \,=\, - 4E_0 - \frac{2E_0^3}{\overline{\Lambda}^2} (1-\cos\theta) - \frac{8E_0^2}{\overline{\Lambda}^2} p_0,\quad\quad\quad E^{(1)}\,=\,E_0\left(1 - \frac{E_0^2}{2\overline{\Lambda}^2} (1-\cos\theta)\right)\,.
\end{equation}
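The expression for $E^{(1)}$ follows by writing $p_0=E_0(1+\delta)$, with $\delta$ of order $1/\overline{\Lambda}^2$, in the condition $\overline{p}^{(1)\,2}=0$: at first order,
\begin{equation}
0\,=\,-4E_0^2\,\delta - \frac{2E_0^4}{\overline{\Lambda}^2}(1-\cos\theta)\quad\Longrightarrow\quad \delta\,=\,-\frac{E_0^2}{2\overline{\Lambda}^2}(1-\cos\theta)\,.
\end{equation}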
One can do a similar analysis for the other channels.
In order to compute the cross section, we consider these four invariant functions
\begin{equation}
\label{eq:Finv}
F_{\pm}\,=\,t^2 \pm u^2\,=\,(k\cdot p + \overline{k}\cdot \overline{p})^2 \pm (k\cdot \overline{p} + \overline{k}\cdot p)^2\, ,
\end{equation}
\begin{equation}
\label{eq:F-inv}
\begin{split}
\overline{F}_{\pm}\,=\,\overline{t}^2 \pm \overline{u}^2\,=\,&\left[(k\cdot p + \overline{k}\cdot \overline{p})^2 - (k\cdot p + \overline{k}\cdot \overline{p})\left[(k\cdot p)^2+(\overline{k}\cdot \overline{p})^2\right]/\overline{\Lambda}^2\right] \\
&\pm \left[(k\cdot \overline{p} + \overline{k}\cdot p)^2 - (k\cdot \overline{p} + \overline{k}\cdot p)\left[(k\cdot \overline{p})^2+(\overline{k}\cdot p)^2\right]/\overline{\Lambda}^2\right] \,,
\end{split}
\end{equation}
and the corresponding phase-space integrals $\widehat{F}_{\pm}^{(\alpha)}(E_0)$, $\widehat{\overline{F}}_{\pm}^{(\alpha)}(E_0)$. Then, the two cross sections computed with the two assumptions of the dynamical factor $A$ proposed in Sec.~\ref{sec:dynamical} are
\begingroup\makeatletter\def\f@size{9}\check@mathfonts
\def\maketag@@@#1{\hbox{\m@th\fontsize{10}{10}\selectfont \normalfont#1}}%
\begin{align}
\overline{\sigma}^{(1)} &\,=\, \frac{e^4}{256 \sin^4\theta_W\cos^4\theta_W E_0^2\left(1+\frac{E_0^2}{\Lambda^2}\right)} \,\frac{1}{\left[(\overline{s} - \overline{M}_Z^2)^2 + \overline{\Gamma}_Z^2 \overline{M}_Z^2\right]} \, \left[(C_V^2 + C_A^2)^2 \sum_\alpha \widehat{F}_+^{(\alpha)}(E_0) - 4 C_V^2 C_A^2 \sum_\alpha \widehat{F}_-^{(\alpha)}(E_0)\right]\,, \\
\overline{\sigma}^{(2)} &\,=\, \frac{e^4}{256 \sin^4\theta_W\cos^4\theta_W E_0^2\left(1+\frac{E_0^2}{\Lambda^2}\right)} \,\frac{1}{\left[(\overline{s} - \overline{M}_Z^2)^2 + \overline{\Gamma}_Z^2 \overline{M}_Z^2\right]} \, \left[(C_V^2 + C_A^2)^2 \sum_\alpha \widehat{\overline{F}}_+^{(\alpha)}(E_0) - 4 C_V^2 C_A^2 \sum_\alpha \widehat{\overline{F}}_-^{(\alpha)}(E_0)\right] \,.
\end{align}
\endgroup
Substituting Eqs.~\eqref{eq:Finv} and \eqref{eq:F-inv} in Eq.~\eqref{eq:Falpha}, one obtains the cross sections of Eqs.~\eqref{eq:cross_section_final_1}-\eqref{eq:cross_section_final_2}.
\chapter{Scalar of curvature of momentum space}
\label{appendix:cotangent}
We show in this appendix that when one considers a metric in the cotangent bundle starting from a maximally symmetric momentum space with the proposal of Ch.~\ref{ch:cotangent}, the scalar of curvature of the momentum space is also constant. The momentum curvature tensor for flat spacetime is defined in Eq.~\eqref{eq:Riemann_p},
\begin{equation}
S_{\sigma}^{\mu\nu\rho}(k)\,=\, \frac{\partial C^{\mu\nu}_\sigma(k)}{\partial k_\rho}-\frac{\partial C^{\mu\rho}_\sigma(k)}{\partial k_\nu}+C_\sigma^{\lambda\nu}(k) \,C^{\mu\rho}_\lambda(k)-C_\sigma^{\lambda\rho}(k)\,C^{\mu\nu}_\lambda(k)\,,
\end{equation}
which can be rewritten using Eq.~\eqref{eq:affine_connection_p}
\begin{equation}
\begin{split}
S^{\sigma\kappa\lambda\mu}(k)\,&=\, \frac{1}{2}\left(\frac{\partial^2 g_k^{\sigma\mu}(k)}{\partial k_\kappa \partial k_\lambda}+\frac{\partial^2 g_k^{\kappa\lambda}(k)}{\partial k_\sigma \partial k_\mu}-\frac{\partial^2 g_k^{\sigma\lambda}(k)}{\partial k_\kappa \partial k_\mu}-\frac{\partial^2 g_k^{\kappa\mu}(k)}{\partial k_\sigma \partial k_\lambda}\right)\\
&+g_k^{\nu\tau}(k)\left(C_\nu^{\kappa\lambda}(k)\,C^{\sigma\mu}_\tau(k)-C_\nu^{\kappa\mu}(k)\,C^{\sigma\lambda}_\tau(k)\right)\,,
\end{split}
\end{equation}
where we have raised the lower index. We have proposed in Ch.~\ref{ch:cotangent} that a possible way to consider a curvature in spacetime in the MRK and in the metric is to replace $k\rightarrow \bar{k}=\bar{e}k$, so the momentum curvature tensor is
\begin{equation}
\begin{split}
S^{\sigma\kappa\lambda\mu}(\bar{k})\,&=\, \frac{1}{2}\left(\frac{\partial^2 g_{\bar{k}}^{\sigma\mu}(\bar{k})}{\partial \bar{k}_\kappa \partial \bar{k}_\lambda}+\frac{\partial^2 g_{\bar{k}}^{\kappa\lambda}(\bar{k})}{\partial \bar{k}_\sigma \partial \bar{k}_\mu}-\frac{\partial^2 g_{\bar{k}}^{\sigma\lambda}(\bar{k})}{\partial \bar{k}_\kappa \partial \bar{k}_\mu}-\frac{\partial^2 g_{\bar{k}}^{\kappa\mu}(\bar{k})}{\partial \bar{k}_\sigma \partial \bar{k}_\lambda}\right)\\
&+g_{\bar{k}}^{\nu\tau}(\bar{k})\left(C_\nu^{\kappa\lambda}(\bar{k})\,C^{\sigma\mu}_\tau(\bar{k})-C_\nu^{\kappa\mu}(\bar{k})\,C^{\sigma\lambda}_\tau(\bar{k})\right)\,,
\end{split}
\end{equation}
which, upon contraction, gives
\begin{equation}
S^{\sigma\kappa\lambda\mu}(\bar{k})g^{\bar{k}}_{\sigma\lambda}(\bar{k})g^{\bar{k}}_{\kappa\mu}(\bar{k})\,=\,\text{const}\,,
\end{equation}
due to the fact that the starting momentum space was a maximally symmetric space, and where $g^{\bar{k}}_{\kappa\nu}(\bar{k})$ is the inverse of the metric
\begin{equation}
g^{\bar{k}}_{\kappa\nu}(\bar{k})g_{\bar{k}}^{\kappa\mu}(\bar{k})\,=\,\delta^\mu_\nu\,.
\end{equation}
From this, one can obtain the scalar of curvature in momentum space from the cotangent bundle metric
\begin{equation}
g_{\mu\nu}(x,k)\,=\,e^\rho_\mu(x)g^{\bar{k}}_{\rho\sigma}(\bar{k})e^\sigma_\nu(x)\,,
\end{equation}
with the momentum curvature tensor depending on momentum and spacetime coordinates
\begin{equation}
\begin{split}
S^{\sigma\kappa\lambda\mu}(x,k)\,&=\, \frac{1}{2}\left(\frac{\partial^2 g^{\sigma\mu}(x,k)}{\partial k_\kappa \partial k_\lambda}+\frac{\partial^2 g^{\kappa\lambda}(x,k)}{\partial k_\sigma \partial k_\mu}-\frac{\partial^2 g^{\sigma\lambda}(x,k)}{\partial k_\kappa \partial k_\mu}-\frac{\partial^2 g^{\kappa\mu}(x,k)}{\partial k_\sigma \partial k_\lambda}\right)\\
&+g^{\nu\tau}(x,k)\left(C_\nu^{\kappa\lambda}(x,k)\,C^{\sigma\mu}_\tau(x,k)-C_\nu^{\kappa\mu}(x,k)\,C^{\sigma\lambda}_\tau(x,k)\right)\,.
\end{split}
\end{equation}
After some computations one arrives at
\begin{equation}
S^{\sigma\kappa\lambda\mu}(x,k)g_{\sigma\lambda}(x,k)g_{\kappa\mu}(x,k)\,=\,S^{\sigma\kappa\lambda\mu}(\bar{k})g^{\bar{k}}_{\sigma\lambda}(\bar{k})g^{\bar{k}}_{\kappa\mu}(\bar{k})\,=\,\text{const}\,.
\end{equation}
Thus, with the prescription proposed here, if the starting momentum space is maximally symmetric, the resulting metric in the cotangent bundle has a constant momentum scalar of curvature. Now we can understand why we have found that there are 10 transformations for the momentum (momentum isometries of the metric) for a fixed point $x$: 4 related with translations and 6 which leave the momentum origin invariant (the phase-space point $(x,0)$).
\chapter{Introduction}
\label{chapter_intro}
\ifpdf
\graphicspath{{Chapter1/Figs/Raster/}{Chapter1/Figs/PDF/}{Chapter1/Figs/}}
\else
\graphicspath{{Chapter1/Figs/Vector/}{Chapter1/Figs/}}
\fi
\epigraph{I study myself more than any other subject. That is my metaphysics, that is my physics.}{Michel de Montaigne}
\nomenclature[z-DSR]{DSR}{Doubly Special Relativity}%
\nomenclature[z-PDG]{PDG}{Particle Data Group}%
\nomenclature[z-BSR]{BSR}{Beyond Special Relativity}%
\nomenclature[z-UHECR]{UHECR}{ultra-high Energy Cosmic Rays}%
\nomenclature[z-CMB]{CMB}{Cosmic Microwave Background}%
\nomenclature[z-pp]{pp}{proton-proton}%
\nomenclature[z-SR]{SR}{Special Relativity}%
\nomenclature[z-QFT]{QFT}{Quantum Field Theory}%
\nomenclature[z-LEP]{LEP}{Large Electron-Positron collider}%
\nomenclature[z-GR]{GR}{General Relativity}%
\nomenclature[z-QT]{QT}{Quantum Theory}%
\nomenclature[z-LIV]{LIV}{Lorentz Invariance Violation}%
\nomenclature[z-LI]{LI}{Lorentz Invariance}%
\nomenclature[z-EFT]{EFT}{Effective Field Theory}%
\nomenclature[z-SME]{SME}{Standard Model Extension}%
\nomenclature[z-SM]{SM}{Standard Model}%
\nomenclature[z-mSME]{mSME}{minimal Standard Model Extension}%
\nomenclature[z-GRB]{GRB}{Gamma Ray Burst}%
\nomenclature[z-AGN]{AGN}{Active Galactic Nuclei}%
\nomenclature[z-DCL]{DCL}{Deformed Composition Law}%
\nomenclature[z-DDR]{DDR}{Deformed Dispersion Relation}%
\nomenclature[z-DRK]{DRK}{Deformed Relativistic Kinematics}%
\nomenclature[z-DLT]{DLT}{Deformed Lorentz Transformations}%
\nomenclature[g-lambda]{$\Lambda$}{High energy scale}%
\nomenclature[g-varphi]{$\varphi$}{Tetrad in momentum space/ function characterizing the noncommutative spacetime}%
\nomenclature[g-vvarphi]{$\bar{\varphi}$}{Inverse function of $\varphi$}%
\nomenclature[x-oplus]{$\oplus$}{Composition law}%
\nomenclature[r-hat]{$\hat{}$}{Antipode}%
\nomenclature[x-n]{$n^\mu$}{Unitary timelike vector $(1,0,0,0)$}%
Since the dawn of humanity, human beings have tried to explain all observed phenomena. Scientific research has produced new theories, which in turn have explained more phenomena, and the technology this research provides has made it possible to probe ever smaller scales, revealing new processes that had to be understood. The problem arises when one knows that there are missing parts or inconsistencies in a theory, while no new experimental observations are available to guide its development.
This is the undesirable situation in which theoretical physics finds itself nowadays.
One source of inconsistencies appears when one tries to unify general relativity (GR) and quantum theory (QT). One of the issues that impede the unification of these two theories is the role that spacetime plays in them. While in quantum field theory (QFT) spacetime is given from the very beginning, as a framework in which the processes of interactions can be described, in GR spacetime is understood as the deformation of a flat 4-dimensional space molded by matter and radiation. Of course, one can consider a quantum theory of gravitation where the interaction is mediated by the graviton, a spin-2 particle, leading to Einstein's equations~\cite{Feynman:1996kb}. The problem with this approach is that the theory is not renormalizable, so it gives well-defined predictions only for energies below the Planck scale.
With the huge machinery that these two theories provide we can describe, on the one hand, massive objects (GR), and on the other, the lightest particles (QFT), so one could naively say that a theory containing both, a quantum gravity theory (QGT), would be completely unnecessary. But this is not the case if one wants to study the propagation and interaction of very tiny and energetic particles: the kinematics of such processes should take into account quantum and gravitational effects together, something unthinkable if QFT and GR cannot be studied in the same framework. Interactions of this kind did take place at the beginning of the universe, when a huge amount of matter was concentrated in a minute region of space. So in order to describe the first instants of the universe, a complete understanding of a QGT is indispensable.
Besides this, we do not know what happens inside a black hole, which is a source of contradiction between GR and QT~\cite{Hawking:1976ra}. What happens with the information when it crosses the event horizon? If one considers that the information is lost, one contradicts QT. If, on the other hand, the information remains encoded on the horizon surface, the evaporation of the black hole~\cite{Hawking:1974sw} leads to a contradiction between pure and mixed states. In fact, one of the possible solutions to the information paradox, named the firewall~\cite{Almheiri:2012rt} because it proposes that, due to the existence of mixed states, there would be particles ``burning'' an observer in free fall into the black hole, violates the equivalence principle, which states that one should not feel anything while crossing the event horizon.
Another question is: what happens when one reaches the singularity? To answer all these questions, we need a QGT.
Another problem that one finds is that in QT one assumes that spacetime is given and studies in full detail the properties and motion of particles in it, including matter and radiation. In GR, and especially in cosmology, one takes the opposite way: the properties of matter and radiation are given (through equations of state) and one describes the resulting spacetime.
There is also a difficulty in defining spacetime. Einstein thought about describing the space-time coordinates through the exchange of light signals~\cite{Einstein1905}, but when one uses this procedure, one neglects all information about the energy of the photons and assumes that the same spacetime is rebuilt by the exchange of light signals of different frequencies. However, what would happen if the speed of light depended on the energy of the photon, as happens in many theoretical frameworks which try to unify GR with QT? In this case, the energy of the photon would affect the very structure of spacetime. Also, this procedure of identifying points of spacetime assumes that interactions are local events, happening at the same point of spacetime. This is no longer valid when one has a deformed relativistic kinematics~\cite{AmelinoCamelia:2011bm,AmelinoCamelia:2011pe}, as we will see.
Presumably, all these paradoxes and inconsistencies could be avoided if a QGT were known. Despite our ignorance about the possible consequences and implications of a complete QGT, we can postulate the main properties that such a theory should have, and the characteristic phenomenological effects that may result.
\section{Towards a QGT: main ideas and ingredients}
\label{sec:QGT}
In the last 60 years, numerous theories have tried to avoid the inconsistencies that appear when one tries to put GR and QFT in the same scheme: string theory~\cite{Mukhi:2011zz,Aharony:1999ks,Dienes:1996du}, loop quantum gravity~\cite{Sahlmann:2010zf,Dupuis:2012yw}, supergravity~\cite{VanNieuwenhuizen:1981ae,Taylor:1983su}, or causal set theory~\cite{Wallden:2010sh,Wallden:2013kka,Henson:2006kf}. The main problem is the lack of experimental observations that could tell us which is the correct QGT~\cite{LectNotes702}.
In most of these theories, as they try to generalize the classical notion of spacetime, a minimum length appears~\cite{Gross:1987ar,Amati:1988tn,Garay1995}, which is usually considered to be the Planck length, and the Planck energy is then taken as the characteristic energy scale. In order to obtain the Planck length $l_p$, time $t_p$, mass $M_p$ and energy $E_p$, one only needs the physical constants of quantum mechanics, $\hbar$, relativity, $c$, and gravitation, $G$,
\begin{eqnarray}
l_p\,&=&\, \sqrt{\frac{\hbar G}{c^3}}\,=\, 1.6 \times \,10^{-35}\,\text{m}\,, \nonumber \\
t_p\,&=&\, \sqrt{\frac{\hbar G}{c^5}}\,=\, 5.4 \times\,10^{-44}\,\text{s}\,, \nonumber \\
\frac{E_p}{c^2}\,&=&\,M_p\,=\, \sqrt{\frac{\hbar c}{G}}\,=\, 2.2 \times\,10^{-8}\,\text{kg}\,=\, 1.2\times\,10^{19} \,\text{GeV}/c^2\,.
\end{eqnarray}
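These combinations are fixed by dimensional analysis alone. As an illustration, writing $l_p=\hbar^a c^b G^c$ and matching dimensions ($[\hbar]=\mathrm{kg\,m^2\,s^{-1}}$, $[c]=\mathrm{m\,s^{-1}}$, $[G]=\mathrm{kg^{-1}\,m^3\,s^{-2}}$) gives the system
\begin{equation}
a-c\,=\,0\,,\qquad 2a+b+3c\,=\,1\,,\qquad -a-b-2c\,=\,0\,,
\end{equation}
whose solution $a=c=1/2$, $b=-3/2$ reproduces $l_p=\sqrt{\hbar G/c^3}$; $t_p=l_p/c$ and $M_p=\hbar/(l_p c)$ follow in the same way.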
The existence of a minimum length endows spacetime with some very particular features that must be studied in order to know what kind of theories we are facing.
Throughout this thesis (although, exceptionally, not in this chapter), we will use natural units, in which $\hbar$, $c$ and $G$ are set to 1.
\subsection{Minimum length scenario}
There are numerous consequences of having a minimal length (see Ref.~\cite{Schiller:1996fw} for more information). We can enumerate some of them:
\begin{itemize}
\item The concept of a space-time manifold disappears. SR, QFT and GR are developed under the idea that time is a continuous concept (it admits a description in terms of real numbers). But due to the presence of a minimum length and a minimum time, we have an uncertainty in the measurement of distances and times that prevents us from synchronizing two clocks with a precision better than the Planck time. Due to this impossibility of synchronizing clocks in a precise way, the idea of a unique time coordinate for a reference frame is only approximate, and it cannot be maintained in an accurate description of nature. We do not have a way to order events separated by less than the Planck time either. One is then forced to abandon the idea of time as a unique ``point''. For example, at Planckian scales the concept of proper time disappears.
\item In this way one has a quantized spacetime, in the sense that it is discrete and non-continuous. Due to this quantization, the concepts of a point in space and of an instant of time are lost, as a consequence of the impossibility of measuring with a resolution greater than the Planck scale. This also gives rise to a modification of the commutation rules (as we will see below), because the measurement of space and time leads to non-vanishing uncertainties for position and time, $\Delta x \, \Delta t \geq l_p \, t_p$.
\item Since one cannot determine the metric at these scales, the notion of curvature is lost. That is, the impossibility of measuring lengths is exactly equivalent to curvature fluctuations. One can then imagine that spacetime is like a foam~\cite{Wheeler:1955zz,Ng:2011rn} at very small scales. Particles would notice these effects due to the quantum fluctuations of spacetime, which become more and more relevant at higher energies.
\item
Due to this imprecision of measurements at Planckian scales, the concepts of spatial order, translational invariance, vacuum isotropy and global coordinate systems lose all experimental support at these dimensions. Moreover, spacetime is invariant neither under Lorentz transformations, nor under diffeomorphisms or dilation transformations, so all fundamental symmetries of SR and GR are only valid approximations for scales larger than the Planck one.
\item At the Planck scale we lose the naive sense of dimensions. The number of dimensions of a space can be obtained by determining how many points can be chosen such that the distances between them are all equal. If one can find $n$ such points, the space has $n-1$ dimensions: in 1D one has two points, in 2D three points, and so on. The lack of precise measurements makes it impossible to determine the number of dimensions at Planckian scales with this method. With all this, we see that the physical spacetime cannot be a set of mathematical points. We are also not able to distinguish at small scales whether a distance is timelike or spacelike: at Planckian scales, space and time cannot be distinguished. Summarizing, spacetime at these scales is neither continuous, nor ordered, nor endowed with a metric, nor four-dimensional, nor made of points.
\item Since space and time are not continuous, observables do not vary continuously either. This means that at Planckian scales, observables cannot be described by real numbers with (potentially) infinite precision, nor can physical fields be described as continuous functions.
\item The concept of a point particle also disappears; in fact, it becomes completely meaningless. Of course, the existence of a minimum length, both for empty space and for objects, is related to this fact. If the notion of a point is meaningless, the concept of a point particle is lost as well.
\item If one takes as valid that the size of an elementary particle is always smaller than its Compton wavelength and always bigger than the Planck length, one can prove that the mass of particles must be less than the Planck mass. In QFT we know that the difference between a real and a virtual particle is whether it is on-shell or off-shell. Due to these uncertainties in measurements, at Planckian scales one cannot know whether a particle is real or virtual. As antimatter can be described as matter moving backwards in time, and since the difference between backwards and forwards cannot be determined at Planckian scales, one cannot distinguish between matter and antimatter at these scales. Since we do not have well-defined rotations, the spin of a particle cannot be properly defined at Planck scales, and then we cannot distinguish between bosons and fermions or, in other words, between matter and radiation at these scales.
\item Finally, let us think about the inertial mass of a tiny object. In order to determine it, we must push it, that is, perform a scattering experiment. In order to determine the inertial mass inside a region of size $R$, a wavelength smaller than $R$ must be used, so one needs high energies. This means that the particle will feel the gravitational attraction of the probe (as we will see in the next subsection). Then, at Planckian scales, inertial and gravitational mass cannot be distinguished. To determine the mass in a Planck volume, a wavelength of Planck size has to be used. But, as the minimal error in the wavelength is also the Planck length, the error in the mass becomes as big as the Planck energy itself.
In this way, one cannot differentiate between matter and vacuum, and then, when a particle with Planck energy travels through spacetime, it can be scattered by the fluctuations of spacetime itself, making it impossible to say whether it has been scattered by vacuum or matter.
\end{itemize}
With all these examples, we see that physics at Planck scales is completely different from what we are used to and even from what we can imagine.
In the following subsection we will consider some gedanken experiments in order to shed some light on how new physics effects may arise.
\subsection{GUP: generalized uncertainty principle}
\label{sec:gup}
We have seen many consequences of having a minimum length, but we have not yet explored, through physical intuition, how this minimum length could appear. In this subsection we study a thought experiment in which the Planck scale arises.
First of all, we consider the Heisenberg microscope gedanken experiment in QT. According to classical optics, the wavelength of a photon with momentum ``$\omega$'' establishes a limit on the possible resolution $\Delta x$ in the position of the particle which interacts with the photon,
\begin{equation}
\Delta x \gtrsim \frac{1}{2 \pi \omega \sin{\epsilon}}\,,
\label{eq:delta_x}
\end{equation}
where $\epsilon$ is the aperture angle of the microscope lens. But the photon used to measure the position of the particle recoils when it is scattered and transfers momentum to the particle. Since one does not know the direction of the photon to a precision better than $\epsilon$, this leads to an uncertainty in the momentum of the particle in the $x$ direction,
\begin{equation}
\Delta p_x \gtrsim \omega \sin{\epsilon}\,.
\end{equation}
Taking all this together, one obtains the uncertainty
\begin{equation}
\Delta x\, \Delta p_x \gtrsim \frac{1}{2 \pi}\,.
\end{equation}
This is a fundamental property of the quantum nature of matter.
We can recreate the Heisenberg microscope thought experiment including the gravitational attraction between the particle whose position one wants to know and the probe used for that aim~\cite{Hossenfelder:2012jw,Garay1995}. As we have seen, the interaction of the photon with the particle does not take place at a well-defined point, but in a region of size $R$. For the interaction to take place and the measurement to be possible, the time elapsed between the interaction and the measurement has to be at least of order $\tau \gtrsim R$. The photon carries an energy that, however small, exerts a gravitational attraction on the particle whose position we want to measure. The gravitational acceleration acting on the particle is at least of the order of
\begin{equation}
a \approx \frac{G \omega}{R^2}\,,
\end{equation}
and, assuming that the particle is non-relativistic and much slower than the photon, the acceleration acts approximately during the time the photon is in the interaction region, so the particle acquires a speed
\begin{equation}
v\approx a\,R\,=\, \frac{G \omega}{R}\,.
\end{equation}
So, in a time $R$ the acquired velocity allows the particle to travel a distance
\begin{equation}
L\approx G\omega\,.
\end{equation}
However, since the direction of the photon is unknown within an angle $\epsilon$, the direction of the acceleration and of the motion of the particle are also unknown. The projection onto the $x$ axis gives an additional uncertainty of
\begin{equation}
\Delta x \gtrsim G\omega\,\sin{\epsilon}\,.
\label{eq:delta_x2}
\end{equation}
Combining Eq.~\eqref{eq:delta_x} and Eq.~\eqref{eq:delta_x2} we see that
\begin{equation}
\Delta x \gtrsim \sqrt{G}\,=\,l_p\,.
\end{equation}
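The way Eq.~\eqref{eq:delta_x} and Eq.~\eqref{eq:delta_x2} combine can be made explicit by a minimization over the unknown combination $\omega \sin{\epsilon}$ (a sketch, dropping factors of order one as in the rest of the argument):
\begin{equation*}
\Delta x \,\gtrsim\, \max\left\{\frac{1}{2 \pi\, \omega \sin{\epsilon}}\,,\; G\, \omega \sin{\epsilon}\right\}\,,
\end{equation*}
and the right-hand side is smallest when both arguments are equal, i.e. for $\omega \sin{\epsilon}=1/\sqrt{2\pi G}$, which gives $\Delta x \gtrsim \sqrt{G/2\pi}\sim l_p$: increasing the photon momentum beyond this point no longer improves the resolution, because the gravitational disturbance grows.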
One can refine this argument by taking into account that, strictly speaking, during the experiment the photon momentum is increased by
\begin{equation}
\frac{Gm\omega}{R}\,,
\end{equation}
where $m$ is the mass of the particle. This increases the uncertainty of the momentum of the particle
\begin{equation}
\Delta p_x \gtrsim \omega\left(1+\frac{G m}{R}\right)\,\sin{\epsilon}\,,
\end{equation}
and during the time in which the photon is in the interaction region, the particle is displaced by
\begin{equation}
\Delta x \approx \frac{R \,\Delta p_x}{m}\,, \qquad \text{so} \qquad \Delta x \gtrsim \omega\left(G+\frac{R}{m}\right)\,\sin{\epsilon}\,,
\end{equation}
which is bigger than the previous uncertainty, so the bound obtained when gravity is not considered is still satisfied.
Assuming that the regular uncertainty and the gravitational one add linearly, one gets
\begin{equation}
\Delta x\gtrsim \frac{1}{\Delta p_x}+G\,\Delta p_x\,.
\end{equation}
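A direct consequence of this inequality is an absolute minimum position uncertainty: minimizing the right-hand side over $\Delta p_x$ (a one-line check, again up to factors of order one),
\begin{equation*}
\frac{d}{d(\Delta p_x)}\left(\frac{1}{\Delta p_x}+G\,\Delta p_x\right)\,=\,0 \;\;\Longrightarrow\;\; \Delta p_x\,=\,\frac{1}{\sqrt{G}}\,, \qquad \Delta x\Big|_{\rm min}\,=\,2\sqrt{G}\,=\,2\, l_p\,,
\end{equation*}
so no measurement can resolve distances parametrically below the Planck length.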
This result is also obtained in string theory through completely different assumptions~\cite{Kato:1990bd,Susskind:1993ki}.
With this thought experiment, we see that when one adds the gravitational interaction to the usual Heisenberg microscope, one obtains a generalized uncertainty principle, which leads to a modification of the commutation rules. This could then be considered as an ingredient that a QGT should have. From the fact that we have different commutation rules, one can guess that a completely different notion of spacetime is necessary, and then the symmetries acting on it should also be different.
\subsection{Spacetime and symmetries in a QGT}
As we have seen in the previous subsection, the introduction of a minimum length leads to nontrivial commutation rules, which could be considered as a way to parametrize the quantum nature of spacetime. The idea of a quantum spacetime was first proposed by Heisenberg and Ivanenko as an attempt to avoid the ultraviolet divergences of QFT. This idea passed from Heisenberg to Peierls and to Robert Oppenheimer, and finally to Snyder, who published the first concrete example in 1947~\cite{Snyder:1946qz}. This is a Lorentz covariant model in which the commutator of two coordinates is proportional to the Lorentz generator,
\begin{equation}
\left[x^\mu,x^\nu\right]\,=\,i\frac{J^{\mu \nu}}{\Lambda^2} \,,
\end{equation}
where $\Lambda$ has dimensions of energy by dimensional arguments\footnote{Remember that we are using natural units, in which the inverse of a length is an energy.}. But this model, originally proposed to try to avoid the ultraviolet divergences in QFT, was forgotten when renormalization appeared as a systematic way to avoid the divergences at the level of the relations between observables. More recently, the model has been reconsidered, since noncommutativity is now regarded as a possible way to go towards a QGT.
Another widely studied model is the canonical noncommutativity~\cite{Szabo:2001kg,Douglas:2001ba},
\begin{equation}
\left[x^\mu,x^\nu\right]\,=\, i \Theta^{\mu \nu} \,,
\end{equation}
where $\Theta^{\mu \nu}$ is a constant matrix with dimensions of length squared. In this particularly simple case of noncommutativity, it has been possible to study a QFT with the standard perturbative approach.
The last model we mention here, named $\kappa$-Minkowski\footnote{We will study it in more detail in Sec.~\ref{sec:DSR} and Sec.~\ref{sec:examples}, as it is included in the scheme of $\kappa$-Poincar\'{e}.}~\cite{Smolinski1994}, has the following non-vanishing commutation rules,
\begin{equation}
\left[x^0,x^i\right]\,=\,- i \frac{x^i}{\Lambda} \,,
\end{equation}
where $\Lambda$ has also dimensions of energy.
The Snyder noncommutative spacetime is very peculiar from the point of view of symmetries, since the usual Lorentz transformations of SR are still valid. But in general, in the other models of noncommutativity, linear Lorentz invariance is not a symmetry of the new spacetime, in agreement with what we have seen previously: the classical concept of a continuum spacetime has to be replaced somehow at Planckian scales, where new effects due to the quantum nature of gravity (for example, the creation and evaporation of virtual black holes~\cite{Kallosh:1995hi}) should appear. So, while SR postulates Lorentz invariance as an exact symmetry of Nature (every experimental test up to date is in accordance with it~\cite{Kostelecky:2008ts,Long:2014swa,Kostelecky:2016pyx,Kostelecky:2016kkn}; see also the papers in Ref.~\cite{LectNotes702}), a QGT is expected to modify this symmetry in some way. Many theories which try to describe a QGT include a modification of Lorentz invariance in one form or another (for a review, see Ref.~\cite{AmelinoCamelia:2008qg}), and possible experimental observations confirming or refuting this hypothesis would be very important in order to constrain such theories. One way to go beyond Lorentz invariance is to consider that this symmetry is violated at energies comparable to the high energy scale. This is precisely what is studied in the so-called Lorentz-invariance violation (LIV) theories. In this approach, the SR symmetries are only low energy approximations of the true symmetries of spacetime. We will study in the next subsection the usual theoretical framework in which such theories are formulated, and the main experiments where a LIV effect could show up.
\subsection{Lorentz Invariance Violation}
As we have previously mentioned, the symmetries of the ``classical'' spacetime have to be broken or deformed at high energies due to the possible new effects of the quantum spacetime. LIV theories consider that Lorentz symmetry is violated at high energies, establishing that there is a preferred frame of reference (normally an observer aligned with the cosmic microwave background (CMB), in such a way that this radiation is isotropic). A conservative way to consider this theory is to assume the validity of the field theory framework. Then, all the terms that violate Lorentz invariance (LI) are added to the standard model (SM), leading to an effective field theory (EFT) known as the standard model extension (SME)~\cite{Colladay:1998fq} (in the simplest model one considers only operators of dimension 4 or less, known as the minimal SME, or mSME), with the condition that they do not change the field content and that the gauge symmetry is not violated.
Historically, in the middle of the past century researchers realized that LIV could have some phenomenological consequences~\cite{Dirac:1951:TA,Bjorken:1963vg,Phillips:1966zzc,Pavlopoulos:1967dm,Redei:1967zz}, and in the seventies and eighties the theoretical bases were settled, pointing out how LI could be established at low energies without being an exact symmetry at all scales~\cite{Nielsen:1978is,Ellis:1980jm,Zee:1981sy,Nielsen:1982kx,Chadha:1982qq,Nielsen:1982sz}. However, this possible way to go beyond SR did not draw much attention, since it was thought that effects of new physics would only appear at energies comparable to the Planck mass. It seems impossible to talk about the phenomenology of such a theory, the Planck energy being of the order of $10^{19}$ GeV while we only have access to energies of $10^4$ GeV from particle accelerators and $10^{11}$ GeV from particles coming from cosmic rays. But over the past few years it has been realized that there could be some effects at low energy that could reveal evidence of a LIV thanks to amplification processes~\cite{Mattingly:2005re}. These effects were baptized as ``Windows on Quantum Gravity''. A partial list of these {\em windows on QG} includes (see Refs.~\cite{Mattingly:2005re,Liberati2013} for a review):
\begin{itemize}
\item \textbf{Change in the results of the experiment as the laboratory moves}
Due to the existence of a preferred frame of reference, the outcome of a measurement changes as the laboratory moves (due to the rotation and translation of the Earth), and then different results should be obtained depending on the spatial location where the experiment takes place and the time when it is done. To carry out the experiment, two ``clocks'', i.e. two atomic transition frequencies of different materials or with different orientations, are placed at the same point of space. As the ``clocks'' move, they pick up different components of the tensors appearing in the Lorentz-violating EFT of the mSME. This supposed difference between the clock frequencies should be measured over a long time in order to be appreciable, and its absence puts constraints on the parameters of the model (generally for protons and neutrons~\cite{Kostelecky:1999mr}).
\item \textbf{Cumulative effects}
There are two important effects that experiments try to measure. On the one hand, if there is a deformed dispersion relation (DDR) with Lorentz invariance violating terms, the velocity of particles (and in particular of photons) would depend on their energy (this effect was considered for the first time in Ref.~\cite{Amelino-Camelia1998}). This could be measured for photons coming from a gamma-ray burst (GRB), pulsars, or active galactic nuclei (AGN), due to the long distance they travel, which amplifies the possible effect\footnote{This effect will be seen in more detail in Sec.~\ref{sec:DSR}.}. On the other hand, some terms in the mSME would produce a time delay for photons due to a helicity dependence of the velocity, a phenomenon baptized as birefringence~\cite{Maccione:2008tq}.
\item \textbf{Threshold of allowed (SR forbidden) reactions}
Due to the existence of a preferred reference frame, some reactions forbidden in SR become allowed above some threshold energy. For example, photon decay, $\gamma \rightarrow e^+e^- $, is kinematically forbidden in usual QFT, but it could become possible in a LIV scenario above some threshold energy~\cite{Jacobson:2002hd}.
\item \textbf{Shifting of existing threshold reactions}
The GZK cutoff~\cite{Greisen:1966jv,Zatsepin:1966jv} is a theoretical limit on the energy of the ultra-high energy cosmic rays (UHECR) that reach our galaxy, due to their interactions with CMB photons, i.e. interactions like $\gamma_{CMB}+p \rightarrow \Delta^{+} \rightarrow p + \pi^0$ or $\gamma_{CMB}+p \rightarrow \Delta^{+} \rightarrow n + \pi^{+}$. In the LI case these interactions have a threshold energy of $5\times 10^{19}$ eV (a sketch of this estimate is given after this list). Experimentally, the suppression of the UHECR flux was confirmed only recently~\cite{Roth:2007in,Thomson:2006mm} by the Auger and HiRes experiments, and also by the AGASA collaboration~\cite{Takeda:1998ps}. Even though this cutoff could be expected due to the finite acceleration power of the UHECR sources, the fact that the maximum energies of UHECR coincide with the proposed GZK cutoff makes plausible its explanation as due to the interaction with CMB photons. However, in a LIV scenario this cutoff could be modified. The GZK cutoff is a good arena for constraining LIV, since the threshold of the interaction of high energy protons with CMB photons is very sensitive to a LIV in the kinematics~\cite{Jacobson:2002hd}.
\end{itemize}
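As an illustration of the last point, the SR threshold quoted above follows from elementary relativistic kinematics. A sketch of the estimate, assuming a head-on collision of a proton of energy $E$ with a CMB photon of energy $\varepsilon$ (we take $\varepsilon\simeq 1.4\times 10^{-3}$ eV as a typical CMB photon energy): the squared invariant mass must reach that of the lightest final state,
\begin{equation*}
(E+\varepsilon)^2-(\vec{p}+\vec{k})^2\,\simeq\, m_p^2+4E\varepsilon\,\geq\,(m_p+m_\pi)^2 \;\;\Longrightarrow\;\; E_{\rm th}\,=\,\frac{m_\pi(2m_p+m_\pi)}{4\varepsilon}\,\approx\, 5\times 10^{19}\,\text{eV}\,,
\end{equation*}
using $m_p\simeq 0.94$ GeV and $m_\pi\simeq 0.14$ GeV. Since the threshold is controlled by the huge ratio $m_p m_\pi/\varepsilon$, even a tiny Planck-suppressed modification of the kinematics can shift it appreciably.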
Despite the efforts of the scientific community, up to now there is no clear evidence of LIV. Current experiments have only been able to put constraints on the SME parameters~\cite{Kostelecky:2008ts}.
In this thesis, the main field of research is a different way to go beyond SR. In this framework there is also a high energy scale parameterizing departures from SR, but preserving a relativity principle.
\section{DSR: Doubly Special Relativity}
\label{sec:DSR}
In the previous subsection we briefly summarized the most important features of LIV. Now we can wonder whether there is another option for going beyond SR (BSR) instead of violating Lorentz symmetry. One could consider that Lorentz symmetry is not violated at Planckian scales but deformed. This is nothing new in physics; some symmetries have been deformed when another, more complete theory which encompasses the previous one is considered. For example, the Poincar\'{e} transformations, which are the symmetries of SR, are a deformation of the Galilean transformations of classical mechanics. In this deformation, a new invariant parameter appears: the speed of light. Similarly, in a theory beyond SR (thought of as some approximation of a QGT), one could have a deformed Poincar\'{e} symmetry with a new parameter. This is what doubly special relativity (DSR) considers (see Ref.~\cite{AmelinoCamelia:2008qg} for a review).
In this theory, the Einstein relativity principle is generalized by adding a new relativistic invariant to the speed of light $c$: the Planck length $l_P$. This is why this theory is also called doubly special relativity. The Planck length is normally considered as a minimum length. Of course, it is assumed that in the limit in which $l_P$ tends to 0, DSR becomes standard SR.
\subsection{Introduction of the theory}
We start this subsection by summarizing the first papers~\cite{Amelino-Camelia2001,Amelino-Camelia2002a}, in which DSR was formulated as a low energy limit of a QGT that could have some experimental consequences. These papers formulate DSR as the result of the introduction of a new invariant scale in SR, the Planck length $l_P$, in a way parallel to how the speed of light $c$ is introduced as a fundamental scale to obtain SR from the Galilean relativity principle.
One starts with the relativity principle (R.P.) introduced by Galileo:
\begin{itemize}
\item (R.P.): The laws of physics take the same form in all inertial frames, i.e. these laws are the same for every inertial observer.
\end{itemize}
In this approach there is no fundamental scale. Using this postulate, one can derive the composition law of velocities $v^\prime=v_0+v$ that describes the velocity of a projectile measured by an observer, when a second observer, moving with velocity $v_0$ with respect to the former, sees it moving with velocity $v$. One just imposes that $v^\prime=f(v_0,v)$, where $f$ must satisfy $f(0, v) = v$, $f(v_0, 0) = v_0$, $f(v, v_0) = f(v_0, v)$ and $f(-v_0,-v) = -f(v_0, v)$, and by dimensional analysis the known law is obtained.
In SR, Einstein introduced a fundamental velocity scale in such a way that it is consistent with the relativity principle. Then in SR, every observer agrees that the speed of light is $c$. The new relativity principle (Einstein laws, E.L.) can be written as
\begin{itemize}
\item (E.L.): The laws of physics involve a fundamental velocity scale $c$, corresponding to the speed of light measured by each inertial observer.
\end{itemize}
From (R.P.) and (E.L.) one can obtain the new expression for the composition of velocities, $v^\prime=f(v_0,v;c)=(v_0+v)/(1+v_0 v /c^2)$.
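One can check directly that this composition law is consistent with (E.L.): composing any velocity with the speed of light returns the speed of light,
\begin{equation*}
f(v_0,c;c)\,=\,\frac{v_0+c}{1+v_0 c/c^2}\,=\,c\,\frac{v_0+c}{c+v_0}\,=\,c\,,
\end{equation*}
so $c$ is indeed the same for all inertial observers, as the postulate requires.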
If one wants to include a new invariant scale one can proceed as before:
\begin{itemize}
\item (L.1.): The laws of physics involve a fundamental velocity scale $c$, and a fundamental length scale $l_P$.
\item (L.1.b): The value of the fundamental velocity scale $c$ can be measured by each
inertial observer as the speed of light with wavelength $\lambda$ much larger than $l_P$
(more rigorously, $c$ is obtained as the $\lambda/ l_P \rightarrow \infty $ limit of the speed of light of wavelength $\lambda$).
\end{itemize}
(L.1.b) appears because the addition of a new length scale would, in principle, introduce a dependence of the speed of light on the energy of the photon or, equivalently, the speed of the photon would depend on the quotient $\lambda/ l_P $. In this scenario, a new addendum to the relativity principle can be written:
\begin{itemize}
\item (L.1.c): Each inertial observer can establish the value of $l_p$ (same value for all
inertial observers) by determining the dispersion relation for photons, which takes
the form $E^2- c^2p^2 + f(E, p; l_p) = 0$, where the function $f$ is the same for all
inertial observers. In particular, all inertial observers agree on the leading $l_p$
dependence of $f$: $ f(E, p; l_p) \simeq \eta l_p c \vec{p}^2 E$.
\end{itemize}
Here $\eta$ is the dimensionless coefficient of the first term of an infinite series expansion. This expression was used in Refs.~\cite{Amelino-Camelia2001,Amelino-Camelia2002a} only as an example in order to study the possible effects of this new deformed relativity principle.
\subsection{Deformed relativistic kinematics}
\label{sub_DRK}
Since the dispersion relation has changed, the usual Lorentz transformations are no longer valid and, in order to save the relativity principle, one has to consider deformed transformation rules ensuring that every inertial observer uses the same dispersion relation\footnote{This is a crucial difference between LIV and DSR. In LIV scenarios there is a deformed dispersion relation, different for each observer, while in DSR the dispersion relation is the same for every observer, so deformed transformation rules are needed.}.
Normally, it is considered that the isotropy of the space remains unaltered (so rotations are not deformed), but there is a modification of the differential operators
\begin{equation}
B_i\,=\, i c p_i \frac{\partial}{\partial E} +i [E/c- \eta l_p ( E^2/c^2-\vec{p}^2)] \frac{\partial}{\partial p_i}-i\eta l_p p_i p_j \frac{\partial}{\partial p_j}\,,
\label{eq:boosts}
\end{equation}
which represent the generators of ``boosts'' acting on momentum space. The quotation marks are added to emphasize that these transformations are no longer the SR boosts.
Now that we have seen how the kinematics is changed in the one-particle sector, we can wonder what happens when considering a simple scattering process, $a+b\rightarrow c+d$. The conservation law has to be consistent with the deformed transformation rules in order to be valid in every inertial frame (in particular, all observers must agree on whether or not a certain process is allowed). In SR the conservation law is the sum but, in the case we are considering, the deformed composition law is
\begin{equation}
\begin{split}
E_a\oplus E_b\,&=\,E_a+E_b+l_p c\, p_a p_b\,,\\
p_a\oplus p_b\,&=\,p_a+p_b+l_p (E_a p_b+p_a E_b)/c\,.
\end{split}
\end{equation}
One can check that this composition rule is compatible with the deformed transformations~\eqref{eq:boosts}\footnote{Note that in fact this example can be obtained from the SR kinematics through a change of momentum basis (in Ch.~\ref{chapter_second_order} we will study this in detail).}.
We see that the main ingredients of a DSR relativistic kinematics are: a deformed dispersion relation, a deformed composition law for the momenta, and nonlinear Lorentz transformations making the two previous ingredients compatible with a relativity principle\footnote{There are DSR models where there is no modification in the dispersion relation, and the Lorentz transformations in the one-particle system are linear~\cite{Borowiec2010}.}.
\subsection{Thought experiments in DSR}
\label{sec:thought_experiments}
We have seen in Sec.~\ref{sec:QGT} that one of the ingredients that a QGT should have is a minimum length, and that this implies an uncertainty in the measurement of time and position. If DSR incorporates a minimum length, these uncertainties should somehow appear.
As in Sec.~\ref{sec:QGT}, where we saw how a minimum length, deformed commutation rules and uncertainties in measurements emerge when the gravitational interaction plays a role in the Heisenberg microscope, in this subsection we will study some thought experiments and see what the consequences are of having a new fundamental length scale introduced in the dispersion relation as before, leading to a momentum dependent velocity for photons~\cite{Amelino-Camelia2002,Amelino-Camelia2002a,AmelinoCamelia:2010pd}.
\subsubsection{Minimum length uncertainty}
Starting from the dispersion relation for photons $E^2\simeq c^2 p^2+\eta l_p c E p^2$ one can obtain the velocity as
\begin{equation}
\frac{d E}{d p}\,=\,v_\gamma (p)\,\simeq \,c \left(1+\eta l_p \frac{|p|}{2} \right)\,.
\end{equation}
Since the velocity is momentum dependent, any uncertainty in the momentum of a photon used as a probe for length measurements induces an uncertainty in its speed. The first uncertainty comes from the position of the photon, which is related to the momentum uncertainty through $\Delta x_1 \geq 1/\Delta p$ (since we are using units in which $\hbar=1$). As there is a velocity uncertainty $\Delta v_\gamma\sim |\eta| \Delta p\, l_p c$ and the distance traveled during the flight is $L = v_\gamma T$, the uncertainty in the distance is $\Delta x_2\sim |\eta| \Delta p\, l_p L$. As $\Delta x_1$ decreases with $\Delta p$ while $\Delta x_2$ increases with it, one easily finds that $\Delta L\geq \sqrt{\eta L\, l_p}$. If $|\eta|\simeq 1$, this procedure is only meaningful for $L>l_p$, and one finds $\Delta L>l_p$. This result is observer independent by construction. One can see that postulating a deformed dispersion relation including a length scale leads to the interpretation of this scale as a minimum length.
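The bound quoted above follows from minimizing the total uncertainty over $\Delta p$ (a sketch, dropping factors of order one):
\begin{equation*}
\Delta L\,\gtrsim\,\frac{1}{\Delta p}+|\eta|\,\Delta p\, l_p L\,, \qquad \frac{d(\Delta L)}{d(\Delta p)}\,=\,0\;\;\Longrightarrow\;\;\Delta p\,=\,\frac{1}{\sqrt{|\eta|\, l_p L}}\,,\qquad \Delta L\Big|_{\rm min}\,=\,2\sqrt{|\eta|\, l_p L}\,.
\end{equation*}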
\subsubsection{Minimum length and time}
In SR, the Lorentz contraction implies that, given a length measured by an observer, it is always possible to find another observer for whom the measured length is arbitrarily small. In DSR this may no longer be valid so, in order to understand better what happens in this framework, let us consider a simple thought experiment.
Let us imagine two observers with their own spaceships moving in the same direction with different velocities, i.e. one is at rest and the other is moving with respect to the first one with a velocity $V$. In order to measure the distance between $A$ and $B$, two points on the ship at rest, there is a mirror at $B$, and the distance is measured as half of the time needed by a photon with momentum $p_0$ emitted at $A$ to come back to the initial position after reflection by the mirror. Timing is provided by a digital light clock based on the same system: a mirror placed at $C$ (on the same ship at rest, in a direction transverse to $AB$) and a photon with the same energy emitted from $A$ measure the distance between these two points. The observer at rest measures the distance between $A$ and $B$ to be $AB= v_\gamma (p_0) N \tau_0/2$, where $N$ is the number of ticks made by the digital light clock during the journey of the photon traveling from $A\rightarrow B \rightarrow A$, and $\tau_0$ is the interval of time corresponding to each tick of the clock $(\tau_0=2\,AC/v_\gamma (p_0))$. The observer on the second ship, moving with velocity $V$ with respect to the one at rest, will see that the time elapsed for the photon going from $A\rightarrow C \rightarrow A$ is given by
\begin{equation}
\tau\,=\,\frac{v_\gamma(p_0)}{\sqrt{v^2_\gamma (p^\prime)-V^2}}\tau_0\,,
\label{eq:tau_int}
\end{equation}
where $p^\prime$ is related to $p_0$ through the formula for boosts in a direction orthogonal to that of the motion of the photon. On the other hand, the observer who sees the ship moving finds that the time in which the photon goes from $A$ to $B$ and comes back is given by
\begin{equation}
N \tau\,=\,\frac{AB'}{v_\gamma(p)-V}+\frac{AB'}{v_\gamma(p)+V}\,=\,\frac{2 AB'v_\gamma(p) }{v^2_\gamma(p)-V^2}\,,
\end{equation}
and then,
\begin{equation}
AB'\,=\,\frac{v^2_\gamma(p)-V^2}{v_\gamma (p)}N\frac{\tau}{2}\,,
\label{eq:AB}
\end{equation}
where $p$ is given by the action of a finite boost, Eq.~\eqref{eq:boosts}, on $p_0$. Combining Eq.~\eqref{eq:AB} and Eq.~\eqref{eq:tau_int}, one obtains
\begin{equation}
AB'\,=\,\frac{\left(v^2_\gamma(p)-V^2\right)v_\gamma(p_0)}{v_\gamma (p) \sqrt{v^2_\gamma (p^\prime)-V^2}}N\frac{\tau_0}{2}\,=\,\frac{v^2_\gamma(p)-V^2}{v_\gamma (p) \sqrt{v^2_\gamma (p^\prime)-V^2}}AB\,.
\end{equation}
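As a quick consistency check of this expression, in the limit $l_p \to 0$ all the photon velocities reduce to $c$ (and $p^\prime$, $p$ reduce to $p_0$), so the formula collapses to
\begin{equation*}
AB'\,=\,\frac{c^2-V^2}{c\,\sqrt{c^2-V^2}}\,AB\,=\,\sqrt{1-\frac{V^2}{c^2}}\;AB\,,
\end{equation*}
which is the usual Lorentz contraction of SR.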
This derivation is analogous to the SR one, but taking into account a momentum dependent velocity of the photon. As the previous check shows, in the small $V$ and small momentum limit one recovers the result of SR. For large $V$, $AB'$ has two important contributions,
\begin{equation}
AB'\,> \frac{\sqrt{c^2-V^2}}{c}\,AB+\frac{\eta c l_p \,AB}{ \sqrt{c^2-V^2}\,AB'}\,,
\end{equation}
where $|p|>|\Delta p|>1/AB'$ is imposed (the probe wavelength must be shorter than the distance being measured). The first term is the usual Lorentz contribution, and the second one implies that, for $\eta$ positive and of order 1, $AB'>l_p$ for all values of $V$. This study only takes into account leading order corrections, so when $V$ is large enough, the correction term is actually bigger than the zeroth order contribution to $AB'$. But we see that there is a modification of the Lorentz contraction in such a way that a minimum length appears, unlike in the SR case, where for photons ($V=c$) the measured length is zero.
\subsubsection{Spacetime fuzziness for classical particles}
The necessity of imposing nonlinear boosts in order to keep the relativity principle has important consequences for the propagation of particles and for the identification of intervals of time.
Let us consider an observer $O$ who sees two different particles of masses $m_1$ and $m_2$ moving with the same speed and following the same trajectory. Another observer $O'$, boosted with respect to $O$, would see that these particles are ``near'' each other only for a limited amount of time: the particles would become more and more separated as they move. According to this, the concept of ``trajectory'' should be removed from this picture.
A similar effect occurs when considering Eqs.~\eqref{eq:boosts} and \eqref{eq:tau_int}. For simplicity, let us consider two photons with energies $E_2$ and $E_1$ such that $v(E_2)=2 v(E_1)$, i.e. the difference in energy is large enough to induce a doubling of the speed, making the time needed by the first photon to describe the same trajectory twice that of the second, $\tau_1=2 \tau_2$. But this relation no longer holds when one considers another observer moving with a speed $V$ with respect to the first one:
\begin{equation}
\tau_1^\prime \,=\,\frac{v(E_1)}{\sqrt{v(E_1^\prime)^2-V^2}}\tau_1\,\neq\, 2\tau_2^\prime\,=\,2 \tau_2 \frac{v(E_2)}{\sqrt{v(E_2^\prime)^2-V^2}}\,.
\end{equation}
These implications of the DSR kinematics lead one to think that the usual concept of spacetime has to be modified in this scheme, giving rise to a new spacetime where these ``paradoxes'' do not appear.
\subsection{Relation with Hopf algebras}
\label{sec:Hopf_algebras}
The example considered in (L.1.c) takes into account only the first order modification in $l_p$ of the dispersion relation. In order to construct a relativistic kinematics at all orders with a fundamental length scale, one needs a new ingredient, a mathematical tool. In this context, the use of Hopf algebras was introduced~\cite{Majid:1995qg}, and a particular example is considered: the deformation of the Poincar\'{e} symmetries through quantum algebras, known as $\kappa$-Poincar\'{e}~\cite{Lukierski:1992dt,Lukierski:1993df,Majid1994,Lukierski1995}.
We have previously seen that there are modifications of the kinematics when a minimum length is considered. For the one-particle sector, one has a deformed dispersion relation and a deformed Lorentz transformation. For the two-particle system, a deformed composition law (DCL) for the momenta appears. In $\kappa$-Poincar\'{e} there is a modification of the dispersion relation and of the Lorentz symmetries in the one-particle system, and a coproduct of momenta and of Lorentz transformations in the two-particle system\footnote{The coproduct of the boost does not appear in the example considered in Refs.~\cite{Amelino-Camelia2001,Amelino-Camelia2002a}, due to the fact that the composition law is just the sum expressed in other variables.}. The coproduct of momenta is considered as a deformed composition law, and the coproduct of the boosts tells us how one momentum changes under Lorentz transformations in the presence of another momentum. One of the most studied bases of $\kappa$-Poincar\'{e} is the bicrossproduct basis\footnote{For this and other bases see Refs.~\cite{KowalskiGlikman:2002we,Lukierski:1991pn}.}. All the ingredients of this basis are
\begin{equation}
\begin{split}
m^2\,&=\,\left(2 \kappa \sinh{\left(\frac{p_0}{2 \kappa}\right)} \right)^2-\vec{p}^2 e^{p_0/\kappa}\,,\\
\left[N_i,p_j\right]\,&=\, i \delta_{ij}\left(\frac{\kappa}{2}\left(1-e^{-2p_0/\kappa}\right)+\frac{\vec{p}^2}{2\kappa}\right) -i \frac{p_i p_j}{\kappa}\,, \qquad \left[N_i,p_0\right]\,=\, i p_i\,,\\
\Delta\left(M_i\right)\,&=\,M_i \otimes \mathbb{I}+ \mathbb{I}\otimes M_i\,, \qquad \Delta\left(N_i\right)\,=\,N_i \otimes \mathbb{I}+ e^{-p_0/\kappa}\otimes N_i +\frac{1}{\kappa}\epsilon_{i\,j\,k} p_j\otimes M_k\,, \\
\Delta\left(p_0\right)\,&=\,p_0 \otimes \mathbb{I}+ \mathbb{I}\otimes p_0\,, \qquad
\Delta\left(p_i\right)\,=\,p_i \otimes \mathbb{I}+ e^{-p_0/\kappa}\otimes p_i\,.
\end{split}
\label{eq:coproducts}
\end{equation}
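As a check that this deformed kinematics reduces to the SR one when the deformation is removed, one can expand the Casimir in powers of $1/\kappa$ (a sketch keeping only the leading correction):
\begin{equation*}
\left(2 \kappa \sinh{\left(\frac{p_0}{2 \kappa}\right)} \right)^2-\vec{p}^2\, e^{p_0/\kappa}\,=\,p_0^2-\vec{p}^2-\frac{p_0\,\vec{p}^2}{\kappa}+\mathcal{O}\left(\frac{1}{\kappa^2}\right)\,,
\end{equation*}
so at leading order one recovers a DDR of the generic first-order form discussed in Sec.~\ref{sec:DSR}, with the role of the high energy scale played by $\kappa$.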
Besides giving a deformed relativistic kinematics (DRK), as we have seen previously in this section through thought experiments, a minimum length appears, and we know from Sec.~\ref{sec:QGT} that this fact is related to a noncommutativity of the space-time coordinates. Hopf algebras also give the commutators of the phase space coordinates; in particular, in the bicrossproduct basis of $\kappa$-Poincar\'{e}, the commutators are
\begin{equation}
\begin{split}
\left[x^0,x^i\right]\,&=\,-i\,\frac{x^i}{\kappa} \,,\qquad \left[x^0,p_0\right]\,=\,-i\,,\\
\left[x^0,p_i\right]\,&=\,i\,\frac{p_i}{\kappa} \,,\qquad \left[x^i,p_j\right]\,=\,-i\,\delta^i_j \,,\qquad \left[x^i,p_0\right]\,=\,0\,.
\end{split}
\label{eq:pairing_intro}
\end{equation}
We see that Hopf algebras provide all the ingredients that a deformed relativistic kinematics should have, and also give nontrivial commutators in phase space, which makes them an attractive framework for studying DSR theories.
\subsection{Phenomenology}
\label{sec_phenomenology_DSR}
We mentioned in the subsection dedicated to LIV that there are numerous ways to look for possible LIV effects. But in the context of DSR, the phenomenology is completely different. In LIV there is no equivalence of inertial frames so, in order to observe an effect on the threshold of a reaction, the particles involved in the process must have enough energy. The first order correction to the threshold of a reaction is of order $E^3/m^2 \Lambda$, where $E$ is the energy of a particle involved in the process measured in our Earth-based laboratory frame, and $m$ is a mass that controls the corresponding SR threshold, so the energy has to be high enough in order to have a non-negligible correction. In contrast, in DSR there is a relativity principle, so the threshold of a reaction cannot depend on the observer; there is no new threshold for particle decays at a certain energy of the decaying particle: the energy of the initial particle is not relativistic invariant, so the threshold of such a reaction cannot depend on it. Moreover, as a consequence of having a relativity principle, cancellations of effects between the deformed dispersion relation and the conservation law appear~\cite{Carmona:2010ze,Carmona:2014aba}, so many of the effects that can be observed in the LIV case are completely invisible in this context.
Then, in principle, the only experiments that can report some observations are time delay measurements of astroparticles\footnote{This is true if the parameter that characterizes the deformation is of the order of the Planck energy but, as we will see in Ch.~\ref{chapter_twin}, if one gives up this restriction, other possible observations could appear in the next generation of particle accelerators. This in principle seems absurd, because time delay experiments put strong constraints on a first order deviation from SR~\cite{Vasileiou:2015wja,Abdalla:2019krx,Ellis:2018lca}, but in Refs.~\cite{Carmona:2017oit,Carmona:2018xwm,Carmona:2019oph} it is shown that such a modification does not necessarily imply a time delay. We will study in more detail how this possibility appears in Ch.~\ref{chapter_time_delay}.}, so many models of emission, propagation and detection of photons and neutrinos have been studied~\cite{AmelinoCamelia:2011cv,Freidel:2011}. In those works, a time delay for photons could appear due to a DDR, which leads to an energy-dependent velocity. For energies much smaller than the Planck one, the DDR can be written as a power series,
\begin{equation}
E^2-\vec{p}^2-m^2\,\approx\,\zeta_n\, E^2\left(\frac{E}{\Lambda}\right)^n\,,
\end{equation}
where the coefficients $\zeta_n$ parametrize the $n$-th order modification of the dispersion relation. Considering the speed as
\begin{equation}
v\,=\,\frac{d E}{d p}\,,
\end{equation}
one can check that this leads to a flight time delay
\begin{equation}
\Delta t \sim \frac{d}{c} \zeta_n \left(\frac{E}{\Lambda}\right)^n\,,
\end{equation}
where $d$ is the distance between emission and detection. This time delay could be measured, as in the LIV case, for photons with different energies coming from a very distant source. But unlike the LIV scenario, the time delay depends on the model of propagation and detection, which makes the choice of momentum space-time variables important (see Refs.~\cite{Carmona:2017oit,Carmona:2018xwm}). In Ch.~\ref{chapter_time_delay} we will study this phenomenology in more detail.
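A sketch of how this estimate arises (for a photon, $m=0$, in natural units, keeping only the leading power of $E/\Lambda$): solving the DDR perturbatively and differentiating,
\begin{equation*}
E\,\simeq\, p\left[1+\frac{\zeta_n}{2}\left(\frac{p}{\Lambda}\right)^n\right]\,, \qquad v\,=\,\frac{dE}{dp}\,\simeq\, 1+\frac{(n+1)\,\zeta_n}{2}\left(\frac{E}{\Lambda}\right)^n\,,
\end{equation*}
so that the difference in arrival times between a high energy photon and a low energy one emitted simultaneously, after a distance $d$, is $|\Delta t| \simeq (d/c)\,[(n+1)/2]\,|\zeta_n|\,(E/\Lambda)^n$, which reproduces the estimate above up to an $\mathcal{O}(1)$ factor.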
\subsection{Curved momentum space}
In the 1930s, Born considered a duality between spacetime and momentum space, leading to a curved momentum space~\cite{Born:1938} (this idea was also discussed by Snyder some years later~\cite{Snyder:1946qz}). This proposal was postulated as an attempt to avoid the ultraviolet divergences in QFT and, until a few years ago, it was not considered as a way to go beyond SR.
In that work, it was shown that a ``reciprocity'' (a name chosen from the lattice theory of crystals) between space-time and momentum variables appears in physics. For example, in the description of a free particle in quantum theory through a plane wave,
\begin{equation}
\psi (x)\,=\, e^{i p_\mu \,x^\mu}\,,
\end{equation}
the roles played by the $x$'s and the $p$'s are completely identical. As usual, $p_\mu$ can be seen as the generator of translations in the space-time coordinates, $-i \partial /\partial x^\mu$, or, conversely, $x^\mu$ can be treated as the generator of translations in momentum space, $-i \partial /\partial p_\mu$. As in GR, the line element is written as
\begin{equation}
ds^2\,=\, dx^\mu \,g_{\mu \nu} \,dx^\nu\,.
\end{equation}
Now, thanks to this duality, a line element in momentum space can be considered:
\begin{equation}
ds^2\,=\, dp_\mu\, \gamma^{\,\mu \nu}\, dp_\nu\,.
\end{equation}
Also, it is proposed that this new metric in momentum space, which does not have to be the same as the space-time metric, can also be a Riemannian metric, leading then to the Einstein equations
\begin{equation}
P^{\mu\nu}-\left(\frac{1}{2} \,P+\lambda^\prime\right)\gamma^{\,\mu\nu}\,=\, -\kappa^\prime T^{\mu\nu}\,,
\end{equation}
where $P^{\mu\nu}$ is the Ricci tensor, $P$ the curvature scalar, and $\lambda^\prime$ and $\kappa^\prime$ play the dual roles of the cosmological constant and of $8\pi G$ in GR. We can understand the $T^{\mu\nu}$ term if one keeps in mind that the integrals $\int{T_{\mu 0}\,dx\, dy\, dz}$ give the four-momentum of the system considered, so $\int{T^{\mu 0}\,dp_x\, dp_y\, dp_z}$ would give the space-time coordinates. The interpretation of $\lambda^\prime$ and $\kappa^\prime$, and their possible connection with their space-time counterparts, was not clear to Born.
An anti-de Sitter momentum space is considered, in such a way that there is a maximum value for the momentum (note that the aim of this proposal is to get rid of the ultraviolet divergences in QFT, so a cutoff would be necessary). With this assumption, the author finds that a lattice structure for spacetime appears, an ingredient that, as we have seen, a QGT should have.
In Refs.~\cite{AmelinoCamelia:2008qg,AmelinoCamelia:2011nt,Lobo:2016blj} a way was proposed to establish a connection between a geometry in momentum space and a deformed relativistic kinematics. In Ch.~\ref{chapter_curved_momentum_space} we will explain this work in depth and make another proposal to relate a curved momentum space and DSR, in such a way that a noncommutative spacetime appears naturally (this will be studied in Ch.~\ref{chapter_locality}), leading to a conclusion similar to the lattice structure obtained in Ref.~\cite{Born:1938}.
\subsection{About momentum variables in DSR}
Since the first papers laying the foundations of the theory, DSR has been criticized on the grounds that it would be just SR in a complicated choice of coordinates~\cite{Jafari:2006rr}. The point is that the deformed dispersion relation and the deformed composition law of momenta proposed in Sec.~\ref{sub_DRK} can be obtained through a change of momentum basis. But in general a DRK cannot be obtained in such a way; $\kappa$-Poincar\'{e} is an example. Any deformed kinematics with a non-symmetric composition law cannot be SR, because there is no change of momentum variables that reproduces it. In this sense, DSR is safe.
In a collision of particles in DSR, the energy and momentum of the initial and final state particles do not fix the total initial and final momenta, since there are different channels for the reactions due to the non-symmetric DCL (see for example Ref.~\cite{Albalate:2018kcf}), i.e. different total momentum states are characterized by different orderings of the momenta in the DCL. This will be studied in detail in Ch.~\ref{chapter_twin}.
Also, there is a controversy about what the physical momentum variables are~\cite{AmelinoCamelia:2010pd}. In SR we use the variables in which the conservation law for momenta is the sum and the dispersion relation is quadratic in momentum. We could wonder why no other coordinates are used. It seems a silly question, in the sense that every study in SR is easier in the usual coordinates, and the use of other (more complicated) ones would be a mess and a waste of time. But in the DSR scheme, this naive and useful argument is no longer valid. There are a lot of representations of $\kappa$-Poincar\'{e}~\cite{KowalskiGlikman:2002we} and, in some of them, the dispersion relation is the usual one but the DCL takes a non-simple form (the so-called ``classical basis'' is an example); in other bases, the DCL is a simple expression but, conversely, the DDR is not trivial (the ``bicrossproduct'' basis). So the criteria used in SR to choose the physical variables cannot be used in these schemes. From the point of view of the algebra, any basis is completely equivalent but, from the point of view of physics, only one should be the choice of Nature (supposing $\kappa$-Poincar\'{e} is the correct deformation of SR). Ideally, one could use any momentum variable if it were possible to identify the momentum variable from a certain signal in the detector. The problem resides in the fact that the physics involved in the detection is too complicated for one to be able to take into account the effect of a change of momentum variables on the detector signal. Maybe some physical criterion could identify the physical momentum variables.
We have discussed in this section the fact that there are many ways to represent the kinematics of $\kappa$-Poincar\'{e} in different momentum variables. But besides this particular model, there are also many other models characterizing a DRK (this will be studied in Ch.~\ref{chapter_curved_momentum_space}). In the next chapter, we will study how to construct a generic DRK from a simple trick, and how $\kappa$-Poincar\'{e} is contained in it as a particular example.
\chapter{Beyond Special Relativity kinematics}
\label{chapter_second_order}
\ifpdf
\graphicspath{{Chapter2/Figs/Raster/}{Chapter2/Figs/PDF/}{Chapter2/Figs/}}
\else
\graphicspath{{Chapter2/Figs/Vector/}{Chapter2/Figs/}}
\fi
\epigraph{Science, my lad, is made up of mistakes, but they are mistakes which it is useful to make, because they lead little by little to the truth.}{Jules Verne, A Journey to the Center of the Earth}
As we have discussed at the end of Sec.~\ref{sec:DSR}, there are many ways to consider a relativistic kinematics that goes beyond the usual SR framework. In order to satisfy a relativity principle, all the ingredients of the kinematics must be related, i.e. given a DCL and a DDR, some deformed Lorentz transformations (DLT) making them compatible have to exist.
In Ref.~\cite{AmelinoCamelia:2011yi}, a generic DRK with only first order terms in the inverse of the high energy scale was studied, analyzing how the ingredients of the kinematics must be related in order to have a relativity principle. In that work, a particular simple process (the decay of a photon into an electron-positron pair) was used in order to obtain what are called ``golden rules'', relationships between the coefficients of the DDR and the DCL. Due to the considered simplifications (the particular choice of the process), only one such rule was obtained. In Ref.~\cite{Carmona:2012un}, a generalization of the previous work was carried out, without restricting to a particular process, and including a new term proportional to the Levi-Civita symbol in the DCL. In that work, the most general DRK at first order is obtained, and it is shown that there are in fact two golden rules.
In the previous work there was also a discussion about the choice of momentum variables. It was found that the one-particle sector, i.e. the DDR and the DLT of the one-particle system, can be reduced to the SR kinematics just through a nonlinear change of momentum basis. But when the two-particle sector is considered, one cannot carry out this simplification in the general case. Only when the composition law is symmetric can one find such a change of basis, which indicates that the considered kinematics is just SR in a fancy choice of momentum variables. The question of whether different bases reproduce different physics will be treated along this thesis. We consider here that the main criterion characterizing a kinematics as just SR in other variables is the fact that one can reproduce the SR kinematics through a change of basis\footnote{We will see strong motivations for such an assumption in Chs.~\ref{chapter_curved_momentum_space}-\ref{chapter_locality}.}. Obviously, with this prescription, any non-symmetric composition law gives a kinematics which goes beyond SR.
In Ref.~\cite{Carmona:2016obd}, we developed a systematic way to consider different kinematics compatible with the relativity principle. Starting from the SR kinematics, one can obtain the most general DRK at first order, studied in Ref.~\cite{Carmona:2012un}. With this method, a DRK up to second order is worked out, in such a way that (by construction) it is relativistic invariant, i.e. the appropriate relationships between the ingredients of the kinematics hold. A DRK with only second order additional terms had been previously considered in the DSR literature in relation to the Snyder noncommutativity~\cite{Battisti:2010sr}, but it had not been studied in such detail before this work. Obviously, possible new physics effects at second order will be less relevant than first order effects if both corrections are present, which is what is usually considered in the literature for near-future experiments~\cite{AmelinoCamelia:2008qg}. The motivation to go beyond a first order DRK is that there are strong limits (constraining the high energy scale to values larger than the Planck one) on this possible modification, imposed by experiments that try to measure photon time delays coming from astrophysical sources, like gamma-ray bursts and blazars~\cite{Vasileiou:2015wja,Abdalla:2019krx,Ellis:2018lca}. Also, in Ref.~\cite{Stecker:2014oxa}, a possible explanation for the apparent cutoff in the energy spectrum of the neutrinos observed by IceCube is offered assuming second order Planckian physics instead of a first order modification (see Ref.~\cite{Carmona:2019xxp} for an analytic version of the same study). From a theoretical point of view, as we have seen in Sec.~\ref{sec:gup}, a generalized uncertainty principle of the Heisenberg microscope incorporates corrections at second order in the characteristic length. Moreover, in the supersymmetric framework, $d=6$ Lorentz-violating operators (terms proportional to $\Lambda^{-2}$) can suppress Lorentz violation effects at low energies generated through radiative corrections, while these effects appear for $d=5$ Lorentz-violating operators ($\Lambda^{-1}$ corrections to SR)~\cite{Collins2004,Bolokhov2005,Kislat2015}.
In this chapter, we will start by summarizing the results obtained in Ref.~\cite{Carmona:2012un} and showing how they can be reproduced through a change of variables. After that, we will compute the second order DRK with the same procedure, and we will see that the particular example of the $\kappa$-Poincar\'{e} kinematics in the classical basis~\cite{Borowiec2010} is obtained through this method.
\section{Beyond SR at first order}
\label{sec:firstorder}
\subsection{A summary of previous results}
\label{sec:summary}
We will start by reviewing the results obtained in Ref.~\cite{Carmona:2012un}. The most general expression for a first-order (in a series expansion in the inverse of the high energy scale) DDR compatible with rotational invariance as a function of the components of the momentum is parametrized by two adimensional coefficients $\alpha_1, \alpha_2$:
\begin{equation}
C(p)\,=\,p_0^2-\vec{p}^2+\frac{\alpha_1}{\Lambda}p_0^3+\frac{\alpha_2}{\Lambda}p_0\vec{p}^2=m^2\,,
\label{eq:DDR}
\end{equation}
while the DCL is parametrized by five adimensional coefficients $\beta_1, \beta_2, \gamma_1, \gamma_2, \gamma_3$,
\begin{equation}
\left[p\oplus q\right]_0 \,=\, p_0 + q_0 + \frac{\beta_1}{\Lambda} \, p_0 q_0 + \frac{\beta_2}{\Lambda} \, \vec{p}\cdot\vec{q}\,, \,\,\,\,\, \left[p \oplus q\right]_i \,=\, p_i + q_i + \frac{\gamma_1}{\Lambda} \, p_0 q_i + \frac{\gamma_2}{\Lambda} \, p_i q_0
+ \frac{\gamma_3}{\Lambda} \, \epsilon_{ijk} p_j q_k \,,
\label{eq:DCL}
\end{equation}
where $\epsilon_{ijk}$ is the Levi-Civita symbol. By definition, a DCL has to satisfy the following conditions
\begin{equation}
(p \oplus q)|_{q=0} \,=\, p\,, \quad \quad \quad \quad (p \oplus q)|_{p=0} \,=\, q\,,
\label{eq:cl0}
\end{equation}
leaving room only for these five parameters when one assumes a linear implementation of rotational invariance.
The most general form of the Lorentz transformations in the one-particle system is
\begin{eqnarray}
\left[T(p)\right]_0 &\,=\,& p_0 + (\vec{p} \cdot \vec{\xi}) + \frac{\lambda_1}{\Lambda} \, p_0 (\vec{p} \cdot \vec{\xi})\,, \nonumber \\
\left[T(p)\right]_i &\,=\,& p_i + p_0 \xi_i + \frac{\lambda_2}{\Lambda} \, p_0^2 \xi_i + \frac{\lambda_3}{\Lambda} \, {\vec p}^{\,2} \xi_i + \frac{(\lambda_1 + 2\lambda_2 + 2\lambda_3)}{\Lambda} \, p_i ({\vec p} \cdot {\vec \xi}) \,,
\label{T-one}
\end{eqnarray}
where $\vec{\xi}$ is the vector parameter of the boost, and the $\lambda_i$ are dimensionless coefficients. These expressions are obtained after imposing that these transformations must satisfy the Lorentz algebra, i.e. the commutator of two boosts corresponds to a rotation.
The invariance of the dispersion relation under this transformation, $C(T(p))=C(p)$, requires the coefficients of the DDR to be a function of those of the boosts
\begin{equation}
\alpha_1 \,=-\,2(\lambda_1+\lambda_2+2\lambda_3)\,, \quad\quad \alpha_2\,=\,2(\lambda_1+2\lambda_2+3\lambda_3)\,.
\label{alphalambda}
\end{equation}
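These relations can be verified directly. A sketch of the computation: inserting Eq.~\eqref{T-one} in the dispersion relation and keeping only the terms linear in $\vec{\xi}$ and in $1/\Lambda$, all the variation collects into
\begin{equation*}
C(T(p))-C(p)\,=\,\frac{(\vec{p}\cdot\vec{\xi})}{\Lambda}\left\{\left[2(\lambda_1-\lambda_2)+3\alpha_1+2\alpha_2\right]p_0^2+\left[\alpha_2-2(\lambda_1+2\lambda_2+3\lambda_3)\right]\vec{p}^2\right\}\,,
\end{equation*}
and demanding that both square brackets vanish for arbitrary momenta reproduces Eq.~\eqref{alphalambda}.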
As we have mentioned previously, a modification of the transformations in the two-particle system is needed in order to have a relativity principle, making the DLT depend on both momenta. Then, we are looking for a DLT such that $(p,q) \to (T^L_q(p),T^R_p(q))$, where
\begin{equation}
T^L_q(p) \,=\, T(p) + {\bar T}^L_q(p)\,, {\hskip 1cm} T^R_p(q) \,=\, T(q) + {\bar T}^R_p(q) \,.
\label{eq:boost2}
\end{equation}
When one considers the most general transformation in the two-particle system and imposes that they are Lorentz transformations and that they leave the DDR invariant, the final form for the DLT in the two-particle system is obtained:
\begin{eqnarray}
\left[{\bar T}^L_q(p)\right]_0 &\,=\,& \frac{\eta_1^L}{\Lambda} \, q_0 ({\vec p} \cdot {\vec \xi}) + \frac{\eta_2^L}{\Lambda} \, ({\vec p} \wedge {\vec q}) \cdot {\vec \xi}\,, \nonumber \\
\left[{\bar T}^L_q(p)\right]_i &\,=\,& \frac{\eta_1^L}{\Lambda} \, p_0 q_0 \xi_i + \frac{\eta_2^L}{\Lambda}\left( \, q_0 \epsilon_{ijk} p_j \xi_k - p_0 \epsilon_{ijk} q_j \xi_k\right)+\frac{\eta_1^L}{\Lambda}\left(q_i ({\vec p} \cdot {\vec \xi})-({\vec p} \cdot {\vec q}) \xi_i \right) \,,
\nonumber \\
\left[{\bar T}^R_p(q)\right]_0 &\,=\,& \frac{\eta_1^R}{\Lambda} \, p_0 ({\vec q} \cdot {\vec \xi}) + \frac{\eta_2^R}{\Lambda} \, ({\vec q} \wedge {\vec p}) \cdot {\vec \xi}\,, \nonumber \\
\left[{\bar T}^R_p(q)\right]_i &\,=\,& \frac{\eta_1^R}{\Lambda} \, q_0 p_0 \xi_i-\frac{\eta_2^R}{\Lambda} \left( p_0 \epsilon_{ijk} q_j \xi_k-q_0 \epsilon_{ijk} p_j \xi_k\right)+\frac{\eta_1^R}{\Lambda} \left(p_i ({\vec q} \cdot {\vec \xi}) - ({\vec q} \cdot {\vec p}) \xi_i\right) \,.\nonumber \\
\label{eq:gen2pboost}
\end{eqnarray}
The last step is to find the relationship between the boosts and the DCL coefficients. In order to do so, we have to impose the relativity principle for a simple process, a particle with momentum $(p \oplus q)$ decays into two particles of momenta $p$ and $q$. The relativity principle imposes that the conservation law for the momenta must be satisfied for every inertial observer, that is,
\begin{equation}
T(p\oplus q)\,=\,T_q^L(p)\oplus T_p^R(q) \,.
\label{eq:RP-1}
\end{equation}
This imposes the following relations between the coefficients of the DLT and DCL:
\begin{alignat}{3}
\beta_1 &\,=\, 2 \,(\lambda_1 + \lambda_2 + 2\lambda_3)\,, \quad\quad &
\beta_2 &\,=\, -2 \lambda_3 - \eta_1^L - \eta_1^R\,, \quad\quad & \label{betalambda}\\
\gamma_1 &\,=\, \lambda_1 + 2\lambda_2 + 2\lambda_3 - \eta_1^L\,, \quad\quad &
\gamma_2 &\,=\, \lambda_1 + 2\lambda_2 + 2\lambda_3 - \eta_1^R\,, \quad\quad & \gamma_3 \,=\, \eta_2^L - \eta_2^R \,.
\label{gammalambda}
\end{alignat}
Now one can establish a relationship between the DDR and the DCL through the DLT coefficients, i.e. the two conditions imposed by the relativity principle, the ``golden rules'', on the adimensional coefficients of the DDR and DCL:
\begin{equation}
\alpha_1\,=\,-\beta_1\,, \quad \quad \quad \alpha_2\,=\,\gamma_1+\gamma_2-\beta_2\,.
\label{eq:GR}
\end{equation}
\subsection{Change of variables and change of basis}
\label{sec:change}
In Ref.~\cite{Carmona:2016obd} we introduced a new mathematical trick in order to avoid the previous tedious computation that leads to Eqs.~\eqref{alphalambda}, \eqref{betalambda}, \eqref{gammalambda}. The main idea is that one can construct the same DRK in an easy way through a change of variables in the two-particle system. We will distinguish here between two different changes of momentum variables. The first one is what we will call a \emph{change of basis} $(p_0,\vec{p}) \to (P_0,\vec{P})$, following Ref.~\cite{Carmona:2012un}, where the new momentum variables are just a function of the old ones (preserving rotational invariance),
\begin{equation}
\begin{split}
p_0& \,=\,P_0+\frac{\delta_1}{\Lambda}P_0^2+\frac{\delta_2}{\Lambda}\vec{P}^2\equiv \mathcal{B}_0(P_0,\vec{P})\equiv \mathcal{B}_0(P)\,, \\
p_i&=P_i+\frac{\delta_3}{\Lambda} P_0 P_i \equiv \mathcal{B}_i(P)\,.
\end{split}
\label{eq:ch-base}
\end{equation}
This change of basis is the same for all particles involved in a process. The name ``change of basis'' is taken from the Hopf algebra context, where different coproducts (DCL) of the $\kappa$-Poincar\'{e} deformation are related through this kind of transformation. From a geometrical point of view, different bases would correspond to different momentum coordinates on a curved momentum space. So, from the algebraic and geometric perspectives, a change of basis has no content. However, this is not the case from the point of view of DSR, where different bases are supposed to be physically nonequivalent\footnote{This will be treated from a theoretical point of view in Ch.~\ref{chapter_locality}, and the possible phenomenological consequences of this non-equivalence in Ch.~\ref{chapter_time_delay}.}.
Let us consider that $(P_0,\vec{P})$ are the SR momentum variables with a linear conservation law, i.e. the sum. Let us see what new kinematics one gets with the change of basis~\eqref{eq:ch-base} (this was done systematically in Ref.~\cite{Carmona:2012un}). In order to compute the DCL in the new variables, one has to use the inverse of~\eqref{eq:ch-base},
\begin{equation}
\begin{split}
P_0& \,=\,p_0-\frac{\delta_1}{\Lambda}p_0^2-\frac{\delta_2}{\Lambda}\vec{p}^2\equiv \mathcal{B}^{-1}_0(p)\,, \\
P_i&\,=\,p_i-\frac{\delta_3}{\Lambda} p_0 p_i \equiv \mathcal{B}^{-1}_i(p)\,,
\end{split}
\label{eq:invch-base}
\end{equation}
and then
\begin{equation}
(P+Q)_0\,=\,P_0+Q_0=\mathcal{B}^{-1}_0(p)+\mathcal{B}^{-1}_0(q)\,=\,p_0+q_0-\frac{\delta_1}{\Lambda} (p_0^2+q_0^2)-\frac{\delta_2}{\Lambda}(\vec{p}^2+\vec{q}^2)\,.
\label{eq:DCL-1}
\end{equation}
We see that this cannot define a DCL since it does not satisfy~\eqref{eq:cl0}. However, this condition can be implemented as
\begin{equation}
(p\oplus q)_\mu \,\equiv\, \mathcal{B}_\mu\left(\mathcal{B}^{-1}(p)+\mathcal{B}^{-1}(q)\right)\,.
\label{eq:DCLdef}
\end{equation}
This procedure is used in the DSR literature in order to obtain the ``physical variables'' from the ``auxiliary variables'' (the SR variables, which compose and transform linearly)~\cite{Judes:2002bw}. One then gets
\begin{align}
(p\oplus q)_0&\, =\,p_0+q_0+\frac{2\delta_1}{\Lambda}p_0 q_0+\frac{2\delta_2}{\Lambda}\vec{p}\cdot\vec{q}\,,\\
(p\oplus q)_i& \,=\,p_i+q_i+\frac{\delta_3}{\Lambda}p_0 q_i+\frac{\delta_3}{\Lambda}q_0 p_i\,.
\label{eq:DCL-2}
\end{align}
We see that there is a relationship between the $\delta$'s and $\beta$'s and $\gamma$'s, in such a way that only the symmetric part of the composition law is reproduced. In particular, one sees that $\beta_1=2\delta_1$, $\beta_2=2\delta_2$, $\gamma_1=\gamma_2=\delta_3$, and $\gamma_3=0$.
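As a concrete illustration of this correspondence, choosing $\delta_1=0$, $\delta_2=1/2$, $\delta_3=1$ and identifying $\Lambda=1/l_p$ (in units $c=1$), the composition law above becomes
\begin{equation*}
(p\oplus q)_0\,=\,p_0+q_0+l_p\, \vec{p}\cdot\vec{q}\,, \qquad (p\oplus q)_i\,=\,p_i+q_i+l_p\,(p_0\, q_i+q_0\, p_i)\,,
\end{equation*}
which is the (rotationally invariant version of the) first-order DCL used as an example in Sec.~\ref{sub_DRK}, making explicit the claim made there that that kinematics is just SR in another choice of momentum variables.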
Besides the modification in the conservation law, a change of basis produces a modification in the dispersion relation and in the Lorentz transformation in the one-particle system. In particular, if $(P_0,\vec{P})$ transform as in SR under a Lorentz boost, $P'_0=P_0+\vec{P}\cdot\vec{\xi}\,$, $\vec{P}'=\vec{P}+P_0\vec{\xi}\,$, the new boosts for the new momentum coordinates are
\begin{align}
[T(p)]_0 \equiv \mathcal{B}_0(P')&\,=\,P_0+\vec{P}\cdot\vec{\xi}+\frac{\delta_1}{\Lambda}(p_0+\vec{p}\cdot\vec{\xi})^2+\frac{\delta_2}{\Lambda}(\vec{p}+p_0\vec{\xi})^2\nonumber \\
&=\,p_0+\vec{p}\cdot\vec{\xi}+\frac{(2\delta_1+2\delta_2-\delta_3)}{\Lambda}p_0\,\vec{p}\cdot\vec{\xi}\,, \\
[T(p)]_i \equiv \mathcal{B}_i(P')&\,=\,P_i+P_0\xi_i+\frac{\delta_3}{\Lambda}(p_0+\vec{p}\cdot\vec{\xi})(p_i+p_0\xi_i) \nonumber \\
&=\,p_i+p_0 \xi_i+\frac{(\delta_3-\delta_1)}{\Lambda}p_0^2 \xi_i-\frac{\delta_2}{\Lambda}\vec{p}^2\xi_i+\frac{\delta_3}{\Lambda}p_i \vec{p}\cdot\vec{\xi} \, .
\label{eq:ch-LT}
\end{align}
Again, we can compare these results with Eq.~\eqref{T-one},
\begin{equation}
\lambda_1\,=\,2\delta_1+2\delta_2-\delta_3\,,\qquad \lambda_2\,=\,\delta_3-\delta_1 \,,\qquad \lambda_3\,=-\,\delta_2\,.
\label{eq:lambdafromdelta}
\end{equation}
For the DDR, one obtains
\begin{equation}
C(p)\equiv P_0^2-\vec{P}^2\,=\,p_0^2-\vec{p}^2-\frac{2\delta_1}{\Lambda}p_0^3+\frac{2(\delta_3-\delta_2)}{\Lambda}p_0\vec{p}^2\,=\,m^2,
\label{eq:ch-DR}
\end{equation}
and, comparing with the results of Eq.~\eqref{eq:DDR},
\begin{equation}
\alpha_1\,=\,-2\delta_1=-\beta_1\,, \quad \quad \alpha_2\,=\,2(\delta_3-\delta_2)=\gamma_1+\gamma_2-\beta_2\,.
\label{eq:alphafromdelta}
\end{equation}
By construction, all the results agree with Eqs.~\eqref{eq:lambdafromdelta} and~\eqref{alphalambda}, and with the golden rules~\eqref{eq:GR}.
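These crossed relations can also be checked automatically. The following minimal \texttt{sympy} sketch (our own cross-check, not part of the original derivation) is restricted to one spatial dimension, so that $\vec{p}\cdot\vec{q}\to p_1 q_1$; the symbol \texttt{eps} stands for $1/\Lambda$, and terms of order $1/\Lambda^2$ are dropped. It verifies the DCL of Eq.~\eqref{eq:DCL-2} and the DDR of Eq.~\eqref{eq:ch-DR}:
\begin{verbatim}
import sympy as sp

eps, d1, d2, d3 = sp.symbols('eps d1 d2 d3')   # eps = 1/Lambda, di = delta_i
p0, p1, q0, q1 = sp.symbols('p0 p1 q0 q1')

# change of basis, Eq. (eq:ch-base), and its first-order inverse, Eq. (eq:invch-base)
B    = lambda K0, K1: (K0 + eps*(d1*K0**2 + d2*K1**2), K1 + eps*d3*K0*K1)
Binv = lambda k0, k1: (k0 - eps*(d1*k0**2 + d2*k1**2), k1 - eps*d3*k0*k1)

first = lambda e: sp.series(sp.expand(e), eps, 0, 2).removeO()  # drop O(eps^2)

P0, P1 = Binv(p0, p1)
Q0, Q1 = Binv(q0, q1)
cl0, cl1 = B(P0 + Q0, P1 + Q1)   # DCL, Eq. (eq:DCLdef)

# beta_1 = 2 d1, beta_2 = 2 d2, gamma_1 = gamma_2 = d3, Eq. (eq:DCL-2)
print(sp.expand(first(cl0) - (p0 + q0 + 2*eps*(d1*p0*q0 + d2*p1*q1))))  # -> 0
print(sp.expand(first(cl1) - (p1 + q1 + eps*d3*(p0*q1 + q0*p1))))       # -> 0

# alpha_1 = -2 d1, alpha_2 = 2 (d3 - d2), Eq. (eq:ch-DR)
ddr = first(P0**2 - P1**2)
print(sp.expand(ddr - (p0**2 - p1**2 - 2*eps*d1*p0**3
                       + 2*eps*(d3 - d2)*p0*p1**2)))                    # -> 0
\end{verbatim}
The three printed expressions vanish, confirming the identifications above.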
The other kind of transformations will be denoted \emph{change of variables}, which is in fact a change of variables in the two-particle system, where
\begin{equation}
\begin{split}
(P,Q) \to (p,q)=(\mathcal{F}^L(P,Q),\mathcal{F}^R(P,Q)) \,\text{ such that } \mathcal{F}^L(P,0)\,=\,P \,,\, \mathcal{F}^L(0,Q)\,=\,0\,, \\
\text{ and } \mathcal{F}^R(0,Q)\,=\,Q \,,\, \mathcal{F}^R(P,0)\,=\,0\,.
\label{eq:restrictedch}
\end{split}
\end{equation}
While a change of basis has a clear interpretation in the algebraic or geometric approaches, this is not the case for this kind of transformation. It is only used as a mathematical trick in order to compute a DRK from the SR kinematics in a simple way, without doing all the tedious work shown in Ref.~\cite{Carmona:2012un} for the first order case. In fact, we are going to see that this trick reproduces the most general DRK at first order.
Let us consider again that $(P,Q)$ are the SR variables with linear Lorentz transformations and conservation law. Instead of considering a change of basis as in Eq.~\eqref{eq:ch-base}, we are going to see what DRK is obtained from the change of variables of Eq.~\eqref{eq:restrictedch}, which will be compatible with a relativity principle by construction. Since we are only considering a change of variables and not a change of basis, we will reproduce the DRK and its relation with the $\eta$'s coefficients obtained in Eqs.~\eqref{betalambda}-\eqref{gammalambda} without the $\lambda$'s, as these only appear when a change of basis is explored. We will call this kind of basis, in which there is no modification of the Lorentz transformations in the one-particle system, the \emph{classical basis}, as this is the name it usually receives in the Hopf algebra scheme (see Sec.~\ref{sec:Hopf}). So through a change of variables, we can only construct a DRK in the classical basis.
In order to check that the kinematics obtained through a change of variables is compatible with the relativity principle, we start by defining
\begin{equation}
p\oplus q \equiv P+Q\,,
\label{eq:DCL-ch}
\end{equation}
which satisfies the condition~\eqref{eq:cl0},
\begin{equation}
(p \oplus q)|_{q=0} \,=\,\left[\left(\mathcal{F}^{-1}\right)^L (p,q)+ \left(\mathcal{F}^{-1}\right)^R (p,q)\right]_{q=0}\,=\,p+0\,=\,p\,,
\label{eq:demo1}
\end{equation}
which makes it a good definition of a DCL. Here we have used the inverse of the change of variables, $P=\left(\mathcal{F}^{-1}\right)^L (p,q)$, $Q=\left(\mathcal{F}^{-1}\right)^R (p,q)$, and the features of Eq.~\eqref{eq:restrictedch}. As we are not changing the basis, the total momentum also transforms linearly, as a single momentum does.
The transformations of $(p,q)$ are given by
\begin{equation}
T_q^L(p)\equiv\mathcal{F}^L(P',Q')\,, \quad \quad T_p^R(q)\equiv\mathcal{F}^R(P',Q')\,,
\label{eq:MTL-ch}
\end{equation}
where $P'$, $Q'$, are the linear Lorentz transformed momenta of $P$, $Q$. Then,
\begin{equation}
T_q^L(p) \oplus T_p^R(q)\,= \,\mathcal{F}^L(P',Q') \oplus \mathcal{F}^R(P',Q') \,= \,P'+Q' \,= \,(P+Q)'\, =\, T(p\oplus q)\, ,
\label{eq:demo2}
\end{equation}
where we have used Eq.~\eqref{eq:MTL-ch}, that $P$ and $Q$ transform linearly, and the definition appearing in Eq.~\eqref{eq:DCL-ch}. We can see that condition~\eqref{eq:RP-1} is automatically satisfied when a change of variables is made.
Since we have not deformed the one-particle transformations, the Casimir is the SR one, $C(p)=p_0^2-\vec{p}^2=m^2$ (this is a particular characteristic of the classical basis; the Casimir is different in any other basis).
One cannot take the most general expression of the change of variables compatible with rotational invariance because, as was explained in Sec.~\ref{sec:summary}, the Casimir of the two-particle system must be $P_0^2-\vec{P}^2=p_0^2-\vec{p}^2$, $Q_0^2-\vec{Q}^2=q_0^2-\vec{q}^2$. Once this condition is implemented at first order in $1/\Lambda$, one obtains
\begin{equation}
\begin{split}
P_{0}\,=\,p_{0}+\frac{v_{1}^{L}}{\Lambda}\vec{p}\cdot\vec{q}\,,\qquad & P_{i}\,=\,p_{i}+\frac{v_{1}^{L}}{\Lambda}p_{0}q_{i}+\frac{v_{2}^{L}}{\Lambda}\epsilon_{ijk}p_{j}q_{k}\,,
\\
Q_{0}\,=\,q_{0}+\frac{v_{1}^{R}}{\Lambda}\vec{p}\cdot\vec{q}\,,\qquad & Q_{i}\,=\,q_{i}+\frac{v_{1}^{R}}{\Lambda}q_{0}p_{i}+\frac{v_{2}^{R}}{\Lambda}\epsilon_{ijk}q_{j}p_{k}\,.
\label{ch-var}
\end{split}
\end{equation}
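One can also check directly that this change of variables leaves the one-particle dispersion relations unmodified at first order (the term proportional to $v_2^L$ drops out because $\vec{p}\cdot(\vec{p}\wedge\vec{q})=0$). A minimal \texttt{sympy} sketch for the left momentum, with \texttt{eps} $=1/\Lambda$ (the check for $Q$ is identical after exchanging $L\leftrightarrow R$ and $p\leftrightarrow q$):
\begin{verbatim}
import sympy as sp

eps, v1L, v2L = sp.symbols('eps v1L v2L')
p0 = sp.Symbol('p0')
p = sp.Matrix(sp.symbols('p1 p2 p3'))
q = sp.Matrix(sp.symbols('q1 q2 q3'))

# Eq. (ch-var), left momentum
P0 = p0 + eps*v1L*p.dot(q)
P  = p + eps*v1L*p0*q + eps*v2L*p.cross(q)

diff = sp.expand(P0**2 - P.dot(P) - (p0**2 - p.dot(p)))
print(sp.series(diff, eps, 0, 2).removeO())   # -> 0: P^2 = p^2 at first order
\end{verbatim}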
Using definition~\eqref{eq:DCL-ch}, one gets
\begin{equation}
\begin{split}
\left[p\oplus q\right]_0&\,=\,P_0+Q_0=p_{0}+q_{0}+\frac{v_{1}^{L}+v_{1}^{R}}{\Lambda}\vec{p}\cdot\vec{q}\,,
\\
\left[p\oplus q\right]_i&\,=\,P_i+Q_i=p_{i}+q_{i}+\frac{v_{1}^{L}}{\Lambda}p_{0}q_{i}+\frac{v_{1}^{R}}{\Lambda}q_{0}p_{i}+\frac{v_{2}^{L}-v_{2}^{R}}{\Lambda}\epsilon_{ijk}p_{j}q_{k}\,,
\end{split}
\label{chvar-cl-1st}
\end{equation}
so we can establish a correspondence with Eq.~\eqref{eq:DCL}, obtaining
\begin{align}
\beta_{1}\,=\,0\,,{\hskip1cm}\beta_{2}\,=\,v_{1}^{L}+v_{1}^{R}\,,
{\hskip1cm}\gamma_{1}\,=\,v_{1}^{L}\,,{\hskip1cm}\gamma_{2}\,=\,v_{1}^{R}\,,{\hskip1cm}\gamma_{3}\,=\,v_{2}^{L}-v_{2}^{R}\,.
\label{DCLpar-ch}
\end{align}
We see that we have obtained the general solution of the golden rules appearing in Eq.~\eqref{eq:GR} when $\alpha_1=\alpha_2=0$ (note that this is the main property of the classical basis).
One can obtain from Eq.~\eqref{eq:MTL-ch} the transformation law in the two-particle system:
\begin{align}
\eta_{1}^{L,R}\,=\,-v_{1}^{L,R}\,,\qquad \eta_{2}^{L,R}\,=\,v_{2}^{L,R}\,.
\label{eta-v}
\end{align}
In this way, we obtain Eqs.~\eqref{betalambda}-\eqref{gammalambda} in the particular case when all $\lambda$'s vanish. Then, one can combine a change of basis and a change of variables in order to obtain the most general DRK at first order in $1/\Lambda$, as there is a one-to-one correspondence between the parameters of the kinematics and those of the change of basis~\eqref{eq:ch-base} and the change of variables~\eqref{ch-var}. We will now assume that this property is true at higher orders, so that through this mathematical trick we can obtain the most general relativistic kinematics at any order from what we will denote a covariant composition law.\footnote{There is no proof of this assumption but, even if it were not true, this would still be a way to produce examples of DRKs without tedious calculations.}
\subsection{Covariant notation}
\label{sec:covariant}
Before studying the kinematics at second order, it is very convenient to use a covariant notation in order to simplify the calculations. We will now study how this can be done for the change of variables, again satisfying
\begin{equation}
(P, 0) \to (P, 0)\,, {\hskip 1cm} (0, Q) \to (0, Q) \,,{\hskip 1cm} P^2 \,=\, p^2\,, {\hskip 1cm} Q^2\, =\, q^2 \,.
\end{equation}
In order for the Casimir in the two-particle system to be the same as the one appearing in the one-particle system, the change of variables must be such that the terms proportional to $(1/\Lambda)$ appearing in $P$ are orthogonal to the momentum $p$ (and those appearing in $Q$, to $q$). We will introduce a fixed vector $n$ and the Levi-Civita tensor with $\epsilon_{0123}=-1$ in order to rewrite the deformed kinematics. The most general change of variables with this requirement written in covariant notation is
\begin{align}
P_\mu &\,=\, p_\mu + \frac{v_1^L}{\Lambda} \left[q_\mu (n\cdot p) - n_\mu (p\cdot q)\right] + \frac{v_2^L}{\Lambda} \epsilon_{\mu\nu\rho\sigma} p^\nu q^\rho n^\sigma\,, \\
Q_\mu &\,= \,q_\mu + \frac{v_1^R}{\Lambda} \left[p_\mu (n\cdot q) - n_\mu (p\cdot q)\right] + \frac{v_2^R}{\Lambda} \epsilon_{\mu\nu\rho\sigma} q^\nu p^\rho n^\sigma \,.
\label{p,q}
\end{align}
One obtains the previous results of~\eqref{ch-var} when $n_\mu=(1,0,0,0)$.\footnote{The expression obtained is not really covariant, because $n$ does not transform under Lorentz transformations like a vector: it is a fixed vector.}
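The orthogonality requirement can be verified explicitly. In the following \texttt{sympy} sketch (a cross-check we add here, with signature $(+,-,-,-)$; the overall sign convention of the Levi-Civita symbol is immaterial for this purpose, and \texttt{eps} stands for $1/\Lambda$) one sees that the first-order terms of Eq.~\eqref{p,q} are orthogonal to $p$, so that $P^2=p^2$ receives no first-order correction:
\begin{verbatim}
import sympy as sp
from sympy import LeviCivita

eps, v1L, v2L = sp.symbols('eps v1L v2L')
eta = sp.diag(1, -1, -1, -1)
p = sp.Matrix(sp.symbols('p0 p1 p2 p3'))
q = sp.Matrix(sp.symbols('q0 q1 q2 q3'))
n = sp.Matrix(sp.symbols('n0 n1 n2 n3'))

dot = lambda a, b: (a.T*eta*b)[0]   # Minkowski product (indices raised with eta)
up  = lambda a: eta*a               # contravariant components

# eps_{mu nu rho sigma} a^nu b^rho c^sigma, as a column of lower-index components
lc = lambda a, b, c: sp.Matrix([sum(LeviCivita(m, i, j, k)
     * up(a)[i]*up(b)[j]*up(c)[k]
     for i in range(4) for j in range(4) for k in range(4)) for m in range(4)])

# Eq. (p,q), left momentum
P = p + eps*v1L*(dot(n, p)*q - dot(p, q)*n) + eps*v2L*lc(p, q, n)

diff = sp.expand(dot(P, P) - dot(p, p))
print(sp.expand(diff.coeff(eps, 1)))   # -> 0: no first-order correction to p^2
\end{verbatim}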
As before, we suppose that the $P$ variables compose additively, and then we obtain the composition law in the $p$ variables
\begin{equation}
\begin{split}
\left(p\oplus q\right)_\mu \equiv \left[P \bigoplus Q\right]_\mu \,=\, P_\mu + Q_\mu & \,= \,p_\mu + q_\mu + \frac{v_1^L}{\Lambda} (n\cdot p) q_\mu + \frac{v_1^R}{\Lambda} (n\cdot q) p_\mu \\ & - \frac{(v_1^L+v_1^R)}{\Lambda} n_\mu (p\cdot q) +
\frac{(v_2^L-v_2^R)}{\Lambda} \epsilon_{\mu\nu\rho\sigma} p^\nu q^\rho n^\sigma \,.
\end{split}
\label{cl1}
\end{equation}
Taking $n_\mu = (1, 0, 0, 0)$ in this composition law we obtain
\begin{equation}
\begin{split}
\left[p\oplus q\right]_0 &\,= \,p_0 + q_0 + \frac{(v_1^L + v_1^R)}{\Lambda} \vec{p}\cdot \vec{q}\,, \\
\left[p\oplus q\right]_i &\,= \,p_i + q_i + \frac{v_1^L}{\Lambda} p_0 q_i + \frac{v_1^R}{\Lambda} q_0 p_i + \frac{(v_2^L - v_2^R)}{\Lambda} \epsilon_{ijk} p_j q_k \,,
\end{split}
\end{equation}
giving the same result appearing in Eq.~\eqref{chvar-cl-1st}.
In order to simplify the notation for the Lorentz transformations of Secs.~\ref{sec:summary}-\ref{sec:change}, we will denote by $(p',q')$ or $(P',Q')$ the transformed momenta of $(p,q)$ or $(P,Q)$, instead of the previous convention of $T_q^L(p)$ or $T_p^R(q)$. Also, we will use the notation $\tilde{X}_\mu \equiv \Lambda_\mu^{\:\nu} X_\nu$, where the $\Lambda_\mu^{\:\nu}$ are the usual Lorentz transformation matrices. Then, we see that
\begin{equation}
\begin{split}
&p'_\mu + \frac{v_1^L}{\Lambda} \left[q'_\mu (n\cdot p') - n_\mu (p'\cdot q')\right] + \frac{v_2^L}{\Lambda} \epsilon_{\mu\nu\rho\sigma} p^{\prime\,\nu} q^{\prime\,\rho} n^\sigma \equiv P'_\mu \,, \\
& =\, \Lambda_\mu^{\:\nu} P_\nu \,= \,\tilde{p}_\mu + \frac{v_1^L}{\Lambda} \left[\tilde{q}_\mu (\tilde{n}\cdot\tilde{p}) - \tilde{n}_\mu (\tilde{p}\cdot\tilde{q})\right] + \frac{v_2^L}{\Lambda} \epsilon_{\mu\nu\rho\sigma} \tilde{p}^\nu \tilde{q}^\rho \tilde{n}^\sigma \,.
\end{split}
\end{equation}
We note that $p'$ is equal to $\tilde{p}$ when one neglects terms proportional to $(1/\Lambda)$, which is obvious since the Lorentz transformations of the new momentum variables are a consequence of the change of variables. So at first order we have
\begin{equation}
p'_\mu \,= \,\tilde{p}_\mu + \frac{v_1^L}{\Lambda} \left[\tilde{q}_\mu \left((\tilde{n}-n)\cdot\tilde{p}\right) - \left(\tilde{n}_\mu - n_\mu\right) (\tilde{p}\cdot\tilde{q})\right] + \frac{v_2^L}{\Lambda} \epsilon_{\mu\nu\rho\sigma} \tilde{p}^\nu \tilde{q}^\rho (\tilde{n}^\sigma - n^\sigma) \,.
\end{equation}
For an infinitesimal Lorentz transformation, we have
\begin{equation}
\tilde{X}_\mu \,= \,X_\mu + \omega^{\alpha\beta} \eta_{\mu\alpha} X_\beta \,,
\end{equation}
where $\omega^{\alpha\beta}=-\omega^{\beta\alpha}$ are the infinitesimal transformation parameters, and then
\begin{equation}
p'_\mu \,= \,\tilde{p}_\mu + \omega^{\alpha\beta} n_\beta \left[\frac{v_1^L}{\Lambda} \left(q_\mu p_\alpha - \eta_{\mu\alpha} (p\cdot q)\right) + \frac{v_2^L}{\Lambda} \epsilon_{\mu\alpha\nu\rho} p^\nu q^\rho\right] \,.
\label{p'1}
\end{equation}
A similar procedure leads to the transformation for the second variable $q$
\begin{equation}
q'_\mu \,=\, \tilde{q}_\mu + \omega^{\alpha\beta} n_\beta \left[\frac{v_1^R}{\Lambda} \left(p_\mu q_\alpha - \eta_{\mu\alpha} (p\cdot q)\right) + \frac{v_2^R}{\Lambda} \epsilon_{\mu\alpha\nu\rho} q^\nu p^\rho\right]\,.
\label{q'1}
\end{equation}
Eqs.~(\ref{p'1})-(\ref{q'1}) are the new Lorentz transformations of the variables $(p, q)$ at first order.
For $n_\mu = (1, 0, 0, 0)$, one obtains
\begin{equation}
\omega^{0\beta} n_\beta \,= \,0\,, {\hskip 1cm} \omega^{i\beta} n_\beta \,= \,\omega^{i0}=\xi^i = -\xi_i \,,
\end{equation}
and hence, (\ref{p'1})-(\ref{q'1}) become
\begin{equation}
\begin{split}
p'_0 &\,= \,p_0 + (\vec{p} \cdot \vec{\xi}) - \frac{v_1^L}{\Lambda} q_0 (\vec{p} \cdot \vec{\xi}) + \frac{v_2^L}{\Lambda} ({\vec p} \wedge {\vec q}) \cdot {\vec \xi}\,, \\
p'_i &\,= \,p_i + p_0 \xi_i - \frac{v_1^L}{\Lambda} \left(q_i (\vec{p} \cdot \vec{\xi}) - (p\cdot q) \xi_i\right) - \frac{v_2^L}{\Lambda} \left(q_0 (\vec{p} \wedge \vec {\xi})_i - p_0 (\vec{q} \wedge \vec {\xi})_i\right)\,, \\
q'_0 &\,= \,q_0 + (\vec{q} \cdot \vec{\xi}) - \frac{v_1^R}{\Lambda} p_0 (\vec{q} \cdot \vec{\xi}) + \frac{v_2^R}{\Lambda} ({\vec q} \wedge {\vec p}) \cdot {\vec \xi}\,, \\
q'_i &\,= \,q_i + q_0 \xi_i - \frac{v_1^R}{\Lambda} \left(p_i (\vec{q} \cdot \vec{\xi}) - (p\cdot q) \xi_i\right) - \frac{v_2^R}{\Lambda} \left(p_0 (\vec{q} \wedge \vec {\xi})_i - q_0 (\vec{p} \wedge \vec {\xi})_i\right) \,,
\end{split}
\label{DLT-1st}
\end{equation}
leading to the same result of Eq.~\eqref{eta-v}.
With this, we conclude the discussion of how a DRK at first order can be obtained from a change of variables in covariant notation. Now we can consider a change of basis written in covariant notation in order to reproduce the previous results. The most general expression of a change of basis has also three terms,
\begin{equation}
X_\mu \,=\, \hat{X}_\mu + \frac{b_1}{\Lambda} \hat{X}_\mu (n\cdot\hat{X}) + \frac{b_2}{\Lambda} n_\mu \hat{X}^2 + \frac{b_3}{\Lambda} n_\mu (n\cdot\hat{X})^2
\label{eq:basiscov}
\end{equation}
(where $X$ stands for $p$ or $q$), which leads to the dispersion relation
\begin{equation}
m^2 \,=\, p^2 \,= \,\hat{p}^2 + \frac{2(b_1+b_2)}{\Lambda} \hat{p}^2 (n\cdot\hat{p}) + \frac{2 b_3}{\Lambda} (n\cdot\hat{p})^3 \,,
\label{drhat1}
\end{equation}
and also to a new composition law at first order $\hat{p}\,\hat{\oplus}\,\hat{q}$ obtained from Eqs.~\eqref{eq:DCLdef} and~\eqref{eq:basiscov}
\begin{equation}
\left[p\oplus q\right]_\mu \,=\, \left[\hat{p}\,\hat{\oplus}\,\hat{q}\right]_\mu + \frac{b_1}{\Lambda} (\hat{p}+\hat{q})_\mu \left(n\cdot(\hat{p}+\hat{q})\right) + \frac{b_2}{\Lambda} n_\mu (\hat{p}+\hat{q})^2 + \frac{b_3}{\Lambda} n_\mu \left(n\cdot(\hat{p}+\hat{q})\right)^2 \,.
\end{equation}
In the hat variables, the DCL finally reads
\begin{equation}
\begin{split}
\left[\hat{p}\,\hat{\oplus}\, \hat{q}\right]_\mu & \,=\, \hat{p}_\mu + \hat{q}_\mu + \frac{v_1^L - b_1}{\Lambda} (n\cdot\hat{p}) \hat{q}_\mu + \frac{v_1^R - b_1}{\Lambda} (n\cdot\hat{q}) \hat{p}_\mu \\ & - \frac{(v_1^L+v_1^R) + 2 b_2}{\Lambda} n_\mu (\hat{p}\cdot\hat{q}) - \frac{2 b_3}{\Lambda} n_\mu (n\cdot\hat{p}) (n\cdot\hat{q}) + \frac{(v_2^L-v_2^R)}{\Lambda} \epsilon_{\mu\nu\rho\sigma} \hat{p}^\nu \hat{q}^\rho n^\sigma \,.
\end{split}
\label{clhat1}
\end{equation}
Taking $n_\mu=(1, 0, 0, 0)$ in Eqs.~\eqref{drhat1} and~\eqref{clhat1}, we obtain the new DDR for these variables
\begin{equation}
\begin{split}
m^2 \,&=\, \hat{p}^2 + \frac{2(b_1+b_2)}{\Lambda} \hat{p}^2 \hat{p}_0 + \frac{2 b_3}{\Lambda} \left(\hat{p}_0\right)^3\\
\,&=\,
\hat{p}_0^2 - {\vec{\hat{p}}}^2 + \frac{2(b_1+b_2+b_3)}{\Lambda} \left(\hat{p}_0\right)^3 - \frac{2(b_1+b_2)}{\Lambda}\hat{p}_0 \vec{\hat{p}}^2 \,,
\end{split}
\end{equation}
to be compared with Eq.~\eqref{eq:DDR}, and
\begin{equation}
\begin{split}
& \left[\hat{p}\,\hat{\oplus}\,\hat{q}\right]_0 \,=\, \hat{p}_0 + \hat{q}_0 - \frac{2(b_1+b_2+b_3)}{\Lambda} \hat{p}_0 \hat{q}_0 + \frac{(v_1^L + v_1^R) + 2 b_2}{\Lambda}\, \vec{\hat{p}}\cdot \vec{\hat{q}}\,, \\
& \left[\hat{p}\,\hat{\oplus}\, \hat{q}\right]_i \,=\, \hat{p}_i + \hat{q}_i
+ \frac{v_1^L - b_1}{\Lambda} \hat{p}_0 \hat{q}_i + \frac{v_1^R - b_1}{\Lambda} \hat{q}_0 \hat{p}_i + \frac{(v_2^L - v_2^R)}{\Lambda} \epsilon_{ijk} \hat{p}_j \hat{q}_k \,,
\end{split}
\end{equation}
to be compared with Eq.~\eqref{eq:DCL}. As expected, the golden rules~\eqref{eq:GR} are satisfied, obtaining the same results appearing in Sec.~\ref{sec:summary} for a DRK at first order (DDR and DCL), compatible with rotational invariance.
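Reading off the coefficients above, the golden rules can be checked in two lines. The sketch below assumes, as read off from Eq.~\eqref{eq:alphafromdelta}, that at first order they take the form $\alpha_1=-\beta_1$ and $\alpha_2=\gamma_1+\gamma_2-\beta_2$:
\begin{verbatim}
import sympy as sp

b1, b2, b3, v1L, v1R = sp.symbols('b1 b2 b3 v1L v1R')

a1, a2   = 2*(b1 + b2 + b3), -2*(b1 + b2)           # DDR coefficients above
be1, be2 = -2*(b1 + b2 + b3), (v1L + v1R) + 2*b2    # DCL coefficients above
g1, g2   = v1L - b1, v1R - b1

print(sp.simplify(a1 + be1))               # alpha_1 + beta_1 -> 0
print(sp.simplify(a2 - (g1 + g2 - be2)))   # alpha_2 - (gamma_1+gamma_2-beta_2) -> 0
\end{verbatim}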
\section{Beyond SR at second order}
\label{sec:second}
In this section we will obtain a deformed kinematics at second order by performing a change of variables from momentum variables which transform linearly. As we will see, this does not imply that the composition law of the original variables is just the sum. We could get a general kinematics by making a change of basis over the obtained kinematics, but since we want to compare our results with those of the literature, and in particular with the kinematics derived from Hopf algebras, this is not mandatory. We will compare our kinematics with the one obtained in the Hopf algebra framework in the classical basis, where the one-particle momentum variable transforms linearly.
\subsection{Change of variables up to second order}
We proceed as in Sec.~\ref{sec:covariant}, finding the most general expression of a change of variables at second order, $(P, Q) \to (p, q)$, compatible with $p^2=P^2$, $q^2=Q^2$. The complete calculation is developed in Appendix~\ref{appendix_second_order_a}; here we only summarize the main results and procedures. As in the first order case, we start by obtaining the most general change of variables compatible with $p^2=P^2$ up to second order, obtaining Eqs.~\eqref{P->p}-\eqref{Q->q}, which have a total of 14 parameters $(v_1^L,\ldots,v_7^L;v_1^R,\ldots,v_7^R)$.
As we are applying a change of variables to momenta which transform linearly, the starting composition law (which we will call a \textit{covariant} composition law, as also considered in Ref.~\cite{Ivetic:2016qtz}) will be a sum of terms covariant under linear Lorentz transformations:
\begin{equation}
\left[P\bigoplus Q\right]_\mu \,= \,P_\mu + Q_\mu + \frac{c_1}{\Lambda^2} P_\mu Q^2 + \frac{c_2}{\Lambda^2} Q_\mu P^2 + \frac{c_3}{\Lambda^2} P_\mu (P\cdot Q) + \frac{c_4}{\Lambda^2} Q_\mu (P\cdot Q) \,.
\label{ccl2}
\end{equation}
Then, we can obtain a generic DCL and DLT in the two-particle system by applying a generic change of variables to this covariant composition law.
As is shown in Appendix~\ref{appendix_second_order_a}, a generic DCL obtained through a change of variables up to second order has coefficients depending on 16 parameters: the four parameters appearing in the covariant composition law~\eqref{ccl2}, and 12 combinations of the 14 parameters of the change of variables \eqref{P->p}-\eqref{Q->q}. For the case $n_\mu=(1, 0, 0, 0)$, the composition law reads
\begin{equation}
\begin{split}
&\left[p\oplus q\right]_0 \,=\, p_0 + q_0 + \frac{(v_1^L + v_1^R)}{\Lambda} \vec{p}\cdot\vec{q} + \frac{(2 c_1 - v_1^L v_1^L- 2v_3^R)}{2 \Lambda^2} p_0 q^2 + \frac{(2 c_2 - v_1^R v_1^R - 2 v_3^L)}{2 \Lambda^2} q_0 p^2 \\ & +
\frac{(2 c_3 + v_1^R v_1^R - v_2^R v_2^R - 2 v_4^L - 2 v_5^R)}{2 \Lambda^2} p_0 (p\cdot q) + \frac{(2 c_4 + v_1^L v_1^L - v_2^L v_2^L - 2 v_5^L - 2 v_4^R)}{2 \Lambda^2} q_0 (p\cdot q) \\ & + \frac{(v_2^R v_2^R + 2 v_3^L+ 2 v_4^L + 2 v_5^R)}{2 \Lambda^2} p_0^2 q_0 + \frac{(v_2^L v_2^L + 2 v_3^R + 2 v_5^L + 2 v_4^R)}{2 \Lambda^2} p_0 q_0^2\,, \\
&\left[p\oplus q\right]_i\,= \,p_i + q_i + \frac{v_1^L}{\Lambda} p_0 q_i + \frac{v_1^R}{\Lambda} q_0 p_i + \frac{(v_2^L - v_2^R)}{\Lambda} \epsilon_{ijk}p_{j} q_{k} + \frac{(2c_1 - v_2^L v_2^L)}{2 \Lambda^2} p_i q^2 + \\ & \frac{(2 c_2 - v_2^R v_2^R)}{2 \Lambda^2} q_i p^2 + \frac{(2c_3 - v_1^R v_1^R + v_2^R v_2^R)}{2 \Lambda^2} p_i (p\cdot q) + \frac{(2c_4 - v_1^L v_1^L + v_2^L v_2^L)}{2 \Lambda^2} q_i (p\cdot q) + \\ & \frac{(v_2^R v_2^R + 2 v_4^L)}{2 \Lambda^2} p_0^2 q_i + \frac{(v_2^L v_2^L + 2 v_4^R)}{2 \Lambda^2} p_i q_0^2 + \frac{(v_3^L+ v_5^R)}{\Lambda^2} p_i p_0 q_0 + \frac{(v_3^R + v_5^L)}{\Lambda^2} q_i p_0 q_0 \\ & + \frac{(v_6^L-v_7^R)}{\Lambda^2} p_0\, \epsilon_{ijk}p_{j} q_{k} + \frac{(v_7^L-v_6^R)}{\Lambda^2} q_0\, \epsilon_{ijk}p_{j} q_{k} \,,
\end{split}
\label{generalCL}
\end{equation}
where we can identify, following the same notation used at first order, the dimensionless coefficients:
{\footnotesize
\begin{align}
\beta_1 & = 0 & \beta_2 & = v_1^L + v_1^R & 2\beta_3 &= 2 c_1 - v_1^L v_1^L- 2v_3^R \nonumber \\
2\beta_4 &= 2 c_2 - v_1^R v_1^R - 2 v_3^L & 2\beta_5 &= 2 c_3 + v_1^R v_1^R - v_2^R v_2^R - 2 v_4^L - 2 v_5^R & 2\beta_6 &= 2 c_4 + v_1^L v_1^L - v_2^L v_2^L - 2 v_5^L - 2 v_4^R \nonumber \\
2\beta_7 &= v_2^R v_2^R + 2 v_3^L+ 2 v_4^L + 2 v_5^R & 2\beta_8 & =v_2^L v_2^L + 2 v_3^R + 2 v_5^L + 2 v_4^R & \gamma_1 & = v_1^L \nonumber \\
\gamma_2 & =v_1^R & \gamma_3 &=v_2^L - v_2^R & 2\gamma_4 &= 2c_1 - v_2^L v_2^L \nonumber \\
2\gamma_5 &= 2 c_2 - v_2^R v_2^R & 2\gamma_6 &=2c_3 - v_1^R v_1^R + v_2^R v_2^R & 2\gamma_7 &=2c_4 - v_1^L v_1^L + v_2^L v_2^L\nonumber \\
2\gamma_8 &=v_2^R v_2^R + 2 v_4^L & 2\gamma_9 &=v_2^L v_2^L + 2 v_4^R & \gamma_{10} & =v_3^L+ v_5^R \nonumber \\
\gamma_{11} &=v_3^R + v_5^L & \gamma_{12} &=v_6^L-v_7^R & \gamma_{13} &=v_7^L-v_6^R\,.
\label{DCLpar-ch-2nd}
\end{align}}
\normalsize
These are the generalization of Eq.~\eqref{DCLpar-ch} at second order. As in the first-order case, we can obtain the golden rules at second order by using the relations in Eq.~\eqref{DCLpar-ch-2nd}
\begin{equation}
\begin{split}
\beta_1\,=\,\beta_2 - \gamma_1 - \gamma_2 &\,=\,0\\
\beta_3 + \beta_6 - \gamma_4 -\gamma_7 + \gamma_9 +\gamma_{11} -\frac{\gamma_1^2}{2}&\,=\,0\\
\beta_4 + \beta_5 - \gamma_5 -\gamma_6 + \gamma_8 +\gamma_{10} -\frac{\gamma_2^2}{2}&\,=\,0\\
\beta_7 - \gamma_8 -\gamma_{10}=\beta_8 - \gamma_9 -\gamma_{11} &\,=\,0\,.
\end{split}
\label{gr-upto-2nd}
\end{equation}
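These relations are straightforward but tedious to verify by hand. A minimal \texttt{sympy} sketch that checks them from the identifications~\eqref{DCLpar-ch-2nd}:
\begin{verbatim}
import sympy as sp

v1L, v1R, v2L, v2R, v3L, v3R, v4L, v4R, v5L, v5R = sp.symbols(
    'v1L v1R v2L v2R v3L v3R v4L v4R v5L v5R')
c1, c2, c3, c4 = sp.symbols('c1 c2 c3 c4')

b2 = v1L + v1R
b3 = c1 - v1L**2/2 - v3R
b4 = c2 - v1R**2/2 - v3L
b5 = c3 + (v1R**2 - v2R**2)/2 - v4L - v5R
b6 = c4 + (v1L**2 - v2L**2)/2 - v5L - v4R
b7 = v2R**2/2 + v3L + v4L + v5R
b8 = v2L**2/2 + v3R + v5L + v4R
g1, g2 = v1L, v1R
g4 = c1 - v2L**2/2
g5 = c2 - v2R**2/2
g6 = c3 + (v2R**2 - v1R**2)/2
g7 = c4 + (v2L**2 - v1L**2)/2
g8, g9 = v2R**2/2 + v4L, v2L**2/2 + v4R
g10, g11 = v3L + v5R, v3R + v5L

print(sp.simplify(b2 - g1 - g2))                              # -> 0
print(sp.simplify(b3 + b6 - g4 - g7 + g9 + g11 - g1**2/2))    # -> 0
print(sp.simplify(b4 + b5 - g5 - g6 + g8 + g10 - g2**2/2))    # -> 0
print(sp.simplify(b7 - g8 - g10), sp.simplify(b8 - g9 - g11)) # -> 0 0
\end{verbatim}
All four outputs vanish identically, as required.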
Also, in Appendix~\ref{appendix_second_order_a}, we obtain the DLT in the two-particle system, $(p,q) \to (p',q')$, using the same procedure employed to obtain Eqs.~\eqref{p'1}-\eqref{q'1} in Sec.~\ref{sec:covariant}. Their coefficients depend on the 14 parameters that characterize a generic change of variables in the two-particle system. The 4 parameters $c_i$ of the covariant composition law do not appear, since a covariant DCL is compatible with linear LT. Note also that only at first order are the coefficients of the DCL determined by those of the DLT; this is no longer true at second order, due to the Lorentz covariant terms that can be present in the DCL.
The previous kinematics can be generalized by means of a change of basis, which will modify the DDR (left undeformed by the change of variables) at second order. In the following subsection we will consider a simplified case, in which the corrections to SR start directly at second order.
\subsection{Change of variables and change of basis starting at second order}
\label{sec:second order}
As we saw at the beginning of the chapter, there are some phenomenological indications, and also theoretical arguments, that seem to suggest that the corrections in a deformed kinematics could start at second order. In this subsection, we will study this case, finding the most general DRK in the same way we did in Sec.~\ref{sec:covariant} for the first order case.
The DRK obtained from a change of variables can be easily obtained by making $v_1^L, v_1^R , v_2^L, v_2^R$ equal to zero in Eqs.~\eqref{cl2}-\eqref{q'2}.
The change of basis starting at second order is
\begin{equation}
X_\mu \,= \,\hat{X}_\mu + \frac{b_4}{\Lambda^2} n_\mu \hat{X}^2 (n\cdot\hat{X}) + \frac{b_5}{\Lambda^2} \hat{X}_\mu (n\cdot\hat{X})^2 + \frac{b_6}{\Lambda^2} n_\mu (n\cdot\hat{X})^3 \,,
\label{eq:basiscov2}
\end{equation}
that leads to the DDR
\begin{equation}
m^2 \,= \,p^2\, =\, \hat{p}^2 + \frac{2(b_4+b_5)}{\Lambda^2} \hat{p}^2 (n\cdot\hat{p})^2 + \frac{2 b_6}{\Lambda^2} (n\cdot\hat{p})^4 \,.\
\label{drhat2}
\end{equation}
Choosing $n_\mu=(1, 0, 0, 0)$ in Eq.~\eqref{drhat2}, we obtain
\begin{equation}
m^{2}=\hat{p}_{0}^{2}-{\vec{\hat{p}}}^{2}+\frac{\alpha_{3}}{\Lambda^2}\left(\hat{p}_{0}\right)^{4}+\frac{\alpha_{4}}{\Lambda^2}(\hat{p}_{0})^{2}\vec{\hat{p}}^{2} \,,
\end{equation}
with
\begin{equation}
\alpha_{3}=2(b_{4}+b_{5}+b_{6})\,,\qquad\alpha_{4}=-2(b_{4}+b_{5})\,,
\end{equation}
which is the DDR that generalizes Eq.~\eqref{eq:DDR} when the corrections to SR start at second order.
The DCL coefficients in this particular case, obtained from Eq.~\eqref{DCLpar-ch-2nd}, are now:
\begin{align}
\beta_3 &= c_1 - v_3^R -b_4 & \beta_4 &= c_2 - v_3^L -b_4 & \beta_5 &= c_3 - v_4^L - v_5^R-2b_4 \nonumber \\
\beta_6 &= c_4 - v_5^L - v_4^R -2 b_4 & \beta_7 &= v_3^L+ v_4^L + v_5^R- 3b_5- 3b_6 & \beta_8 & = v_3^R + v_5^L + v_4^R -3b_5 - 3b_6 \nonumber \\
\gamma_4 &= c_1 & \gamma_5 &= c_2 & \gamma_6 &=c_3 \nonumber \\ \gamma_7 &=c_4 & \gamma_8 &= v_4^L-b_5 & \gamma_9 &= v_4^R-b_5 \nonumber \\
\gamma_{10} & =v_3^L+ v_5^R-2b_5 &
\gamma_{11} &=v_3^R + v_5^L-2b_5 & \gamma_{12} &=v_6^L-v_7^R \nonumber \\ \gamma_{13} &=v_7^L-v_6^R\,.
\label{DCLpar-ch-only-2nd}
\end{align}
As we did for the first-order case in Eq.~\eqref{eq:GR}, we can find the golden rules at second order:
\begin{equation}
\begin{split}
\beta_3 + \beta_6 - \gamma_4 -\gamma_7 + \gamma_9 +\gamma_{11}\, =\,
\beta_4 + \beta_5 - \gamma_5 -\gamma_6 + \gamma_8 +\gamma_{10} &\,=\, \frac{3}{2} \, \alpha_4\,,\\
\beta_7 - \gamma_8 -\gamma_{10}=\beta_8 - \gamma_9 -\gamma_{11} &\,=\, -\frac{3}{2} \, (\alpha_3 + \alpha_4)\,.
\end{split}
\label{gr-at-2nd}
\end{equation}
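Again, these relations can be verified automatically from Eqs.~\eqref{DCLpar-ch-only-2nd}; a minimal \texttt{sympy} sketch:
\begin{verbatim}
import sympy as sp

b4, b5, b6 = sp.symbols('b4 b5 b6')
c1, c2, c3, c4 = sp.symbols('c1 c2 c3 c4')
v3L, v3R, v4L, v4R, v5L, v5R = sp.symbols('v3L v3R v4L v4R v5L v5R')

a3, a4 = 2*(b4 + b5 + b6), -2*(b4 + b5)        # DDR coefficients
B3 = c1 - v3R - b4
B4 = c2 - v3L - b4
B5 = c3 - v4L - v5R - 2*b4
B6 = c4 - v5L - v4R - 2*b4
B7 = v3L + v4L + v5R - 3*b5 - 3*b6
B8 = v3R + v5L + v4R - 3*b5 - 3*b6
g4, g5, g6, g7 = c1, c2, c3, c4
g8, g9 = v4L - b5, v4R - b5
g10, g11 = v3L + v5R - 2*b5, v3R + v5L - 2*b5

r = sp.Rational(3, 2)
print(sp.simplify(B3 + B6 - g4 - g7 + g9 + g11 - r*a4))   # -> 0
print(sp.simplify(B4 + B5 - g5 - g6 + g8 + g10 - r*a4))   # -> 0
print(sp.simplify(B7 - g8 - g10 + r*(a3 + a4)))           # -> 0
print(sp.simplify(B8 - g9 - g11 + r*(a3 + a4)))           # -> 0
\end{verbatim}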
\subsection{Generalized kinematics and the choice of momentum variables}
\label{sec:choice}
In the preceding subsections, we have constructed a DRK at second order in $(1/\Lambda)$ through a change of variables. At the beginning of the chapter, we mentioned that there is a controversy about the physical meaning of the momentum variables. While there is a clear distinction between kinematics related through a change of variables, kinematics related through a change of basis are completely equivalent from the algebraic and geometric points of view. However, from a physical point of view, this may not be the case.
Whatever the situation is, one can wonder whether there are DRKs that cannot be obtained from SR with the procedure proposed in the previous subsections. As we saw in Sec.~\ref{sec:firstorder}, the most general DRK at first order can be obtained following this prescription. We will see now that this is not the case for a general DRK at second order. The difference lies in the covariant terms of the composition law~\eqref{ccl2}, which cannot be generated by a covariant change of basis.
In order to do so, we start with the additive composition law in the variables $\left\{ \hat{P}\,,\hat{Q}\right\}$. We can make a covariant change of basis
\begin{equation}
\tilde{P}_{\mu}\,=\,\hat{P}_{\mu}\left(1+\frac{b}{\Lambda^{2}}\hat{P}^{2}\right)\,,
\end{equation}
that leaves
the dispersion relation invariant since $\hat{P}^{2}$ is an invariant. As we did in Eq.~\eqref{eq:DCLdef}, we can find the DCL generated by this change of basis
\begin{equation}
\left[\tilde{P}\tilde{\bigoplus} \tilde{Q}\right]_{\mu}\,=\,\tilde{P}_{\mu}+\tilde{Q}_{\mu}-\frac{b}{\Lambda^{2}}\tilde{P}_{\mu}\tilde{Q}^{2}-\frac{b}{\Lambda^{2}}\tilde{Q}_{\mu}\tilde{P}^{2}-\frac{2b}{\Lambda^{2}}\tilde{P}_{\mu}(\tilde{P}\cdot\tilde{Q})-\frac{2b}{\Lambda^{2}}\tilde{Q}_{\mu}(\tilde{P}\cdot\tilde{Q}) \,.
\end{equation}
Moreover, in order to obtain a generic DCL from the procedure proposed in this chapter, we need to make a covariant change of variables in such a way that momentum variables do not mix in the dispersion relations
\begin{equation}
\tilde{P}_{\mu}\,=\,P_{\mu}+\frac{v^{L}}{\Lambda^{2}}\left(Q_{\mu}P^{2}-P_{\mu}(P\cdot Q)\right)\,,\qquad\tilde{Q}_{\mu}\,=\,Q_{\mu}+\frac{v^{R}}{\Lambda^{2}}\left(P_{\mu}Q^{2}-Q_{\mu}(P\cdot Q)\right) \,.
\end{equation}
We finally obtain
\begin{equation}
\begin{split}
\left[P\bigoplus Q\right]_{\mu}\,=&\,P_{\mu}+Q_{\mu}+\frac{v^{R}-b}{\Lambda^{2}}P_{\mu}Q^{2}+\frac{v^{L}-b}{\Lambda^{2}}Q_{\mu}P^{2}\\&-\frac{v^{L}+2b}{\Lambda^{2}}P_{\mu}(P\cdot Q)-\frac{v^{R}+2b}{\Lambda^{2}}Q_{\mu}(P\cdot Q) \,.
\end{split}
\label{cclcv}
\end{equation}
As one can see comparing Eq.~\eqref{cclcv} with Eq.~\eqref{ccl2}, there are three parameters in the DCL obtained through a change of basis and a change of variables, while in a generic covariant composition law there are four. This shows the impossibility of obtaining the most general covariant composition law with the methods we used for the first order case.
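The counting can be made precise with a short \texttt{sympy} check: with the conventions of Eq.~\eqref{cclcv}, the map $(b, v^{L}, v^{R})\mapsto(c_1,c_2,c_3,c_4)$ has a Jacobian of rank 3, and its image satisfies the constraint $c_1-c_2=c_3-c_4$; the combination transverse to this constraint is the one that cannot be generated:
\begin{verbatim}
import sympy as sp

b, vL, vR = sp.symbols('b vL vR')
c = sp.Matrix([vR - b, vL - b, -(vL + 2*b), -(vR + 2*b)])   # Eq. (cclcv)

print(c.jacobian([b, vL, vR]).rank())              # -> 3 (< 4 parameters)
print(sp.simplify((c[0] - c[1]) - (c[2] - c[3])))  # -> 0: c1 - c2 = c3 - c4
\end{verbatim}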
Summarizing, 17 out of the 18 parameters $(v_i^L, v_i^R , c_i)$ can be reproduced by a change of basis and variables. This means that not every DRK can be obtained through this procedure from the linear composition (the sum) but, at least up to second order, it is possible to obtain the most general composition law applying a change of basis and a change of variables to a generic covariant composition. The parameter that cannot be generated is a combination of the coefficients $c_i$ of the covariant composition law.
\section{Relation with the formalism of Hopf algebras}
\label{sec:Hopf}
We can compare now our results of the previous subsections with the kinematics obtained in the formalism of Hopf algebras. Since we have obtained the most general kinematics up to second order with linear Lorentz transformations in the one-particle system, we are able to see if there is a correspondence with the so-called classical basis of $\kappa$-Poincaré~\cite{Borowiec2010}:
\begin{equation}
\Delta\left(N_{i}\right)\,=\,N_{i}\otimes \mathbb{1}+\left(\mathbb{1}-\frac{P_{0}}{\Lambda}+\frac{P_{0}^{2}}{2\Lambda^{2}}+\frac{\vec{P}^{2}}{2\Lambda^{2}}\right)\otimes N_{i}-\frac{1}{\Lambda}\epsilon_{ijk}P_{j}\left(\mathbb{1}-\frac{P_{0}}{\Lambda}\right)\otimes J_{k}\,,
\label{co-boost}
\end{equation}
\begin{align}
\Delta\left(P_{0}\right)\,=\,&P_{0}\otimes\left(\mathbb{1}+\frac{P_{0}}{\Lambda}+\frac{P_{0}^{2}}{2\Lambda^{2}}-\frac{\vec{P}^{2}}{2\Lambda^{2}}\right)+\left(\mathbb{1}-\frac{P_{0}}{\Lambda}+\frac{P_{0}^{2}}{2\Lambda^{2}}+\frac{\vec{P}^{2}}{2\Lambda^{2}}\right)\otimes P_{0}\nonumber\\&+\frac{1}{\Lambda}P_{m}\left(\mathbb{1}-\frac{P_{0}}{\Lambda}\right)\otimes P_{m} \,,
\label{co-p0} \\
\Delta\left(P_{i}\right)\,=\,&P_{i}\otimes\left(\mathbb{1}+\frac{P_{0}}{\Lambda}+\frac{P_{0}^{2}}{2\Lambda^{2}}-\frac{\vec{P}^{2}}{2\Lambda^{2}}\right)+\mathbb{1}\otimes P_{i}\, .
\label{co-pi}
\end{align}
One can see that $[\Delta(N_j),C\otimes\mathbb{1}]=[\Delta(N_j),\mathbb{1}\otimes C]=0$, since the Casimir of the algebra, $C$, commutes with the $(P_0,P_i,J_i,N_i)$ generators. This shows that the Casimir is trivially extended to the tensor product of the algebras (or, in the language of Sec.~\ref{sec:summary}, that the DDR does not mix momentum variables).
In order to find the relation between these algebraic expressions and the kinematical language used in this thesis, we can consider that the generators of the Poincaré algebra act as operators on the basis of the momentum operator, $P_\mu |p\rangle=p_\mu |p\rangle$. The boost generators $N_j$ in SR satisfy
\begin{equation}
|p'\rangle \,=\, (\mathbb{1}-i\xi_j N_j+\mathcal{O}(\xi^2)) |p\rangle\,,
\end{equation}
where $|p'\rangle\equiv |p\rangle'$ is the transformed state from $|p\rangle$ with a boost. Neglecting terms of order $\mathcal{O}(\xi^2)$, we find
\begin{align}
-i\xi_j [N_j,P_\mu]|p\rangle &\,=\, - i\xi_j (N_j P_\mu-P_\mu N_j)|p\rangle \,=\, p_\mu (|p'\rangle-|p\rangle)-p'_\mu|p'\rangle+p_\mu|p\rangle \nonumber \\
& \,=\, (p-p')_\mu|p'\rangle\,=\,(p-p')_\mu|p\rangle+\mathcal{O}(\xi^2) \,.
\label{deriv}
\end{align}
From here we obtain
\begin{equation}
p'_\mu \,=\, p_\mu + i\xi_j [f_j(p)]_\mu\,,
\label{relac-1}
\end{equation}
where $[f_j(p)]_\mu$ are the eigenvalues of $[N_j,P_\mu]$, being a function of the $P_\mu$:
\begin{equation}
f_j(P_\mu)|p\rangle\,\equiv\, [N_j,P_\mu]|p\rangle \,= \,[f_j(p)]_\mu |p\rangle\,.
\end{equation}
These relations can be extended to the two-particle system. Then, we can define
\begin{equation}
(P_\mu \otimes \mathbb{1})|p';q'\rangle \,=\, p'_\mu|p';q'\rangle\,, \quad \quad
(\mathbb{1} \otimes P_\mu)|p';q'\rangle\, =\, q'_\mu|p';q'\rangle\,,
\end{equation}
and the generators of co-boosts, $\Delta(N_j)$, satisfying
\begin{equation}
|p';q'\rangle \,=\, (\mathbb{1}-i\xi_j \Delta(N_j)+\mathcal{O}(\xi^2)) |p;q\rangle\,.
\end{equation}
So Eq.~\eqref{relac-1} is generalized to
\begin{equation}
p'_\mu \,=\, p_\mu + i\xi_j [f^{(1)}_j(p,q)]_\mu\,, \quad \quad q'_\mu\,=\,q_\mu+i\xi_j[f^{(2)}_j(p,q)]_\mu\,,
\label{co-transformed}
\end{equation}
where $[f^{(1)}_j(p,q)]_\mu$ and $[f^{(2)}_j(p,q)]_\mu$ are the eigenvalues of $[\Delta(N_j),P_\mu\otimes \mathbb{1}]$ and $[\Delta(N_j),\mathbb{1}\otimes P_\mu]$, respectively:
\begin{equation}
[\Delta(N_j),P_\mu\otimes \mathbb{1}] |p;q\rangle \,=\, [f^{(1)}_j(p,q)]_\mu |p;q\rangle\,, \quad \quad [\Delta(N_j),\mathbb{1}\otimes P_\mu]|p;q\rangle \,=\, [f^{(2)}_j(p,q)]_\mu|p;q\rangle\,.
\label{co-transformed2}
\end{equation}
Finally, the coproduct $\Delta(P_\mu)$ acting in the two-particle system momentum space is
\begin{equation}
\Delta(P_\mu)|p;q\rangle\,=\, (p\oplus q)_\mu|p;q\rangle\,.
\label{coprod-CL}
\end{equation}
With the previous relations, we can now make explicit the correspondence between our language and that of $\kappa$-Poincaré. From Eq.~\eqref{coprod-CL} and Eqs.~\eqref{co-p0} and \eqref{co-pi}, the DCL of $\kappa$-Poincaré in the classical basis is
\begin{equation}
\begin{split}
(p\oplus q)_0&\,=\,p_0+q_0+\frac{\vec{p}\cdot\vec{q}}{\Lambda}+\frac{p_0}{2\Lambda^2}\left(q_0^2-\vec{q}^2\right) + \frac{q_0}{2\Lambda^2}\left(p_0^2+\vec{p}^2\right) - \frac{p_0}{\Lambda^2}(\vec{p}\cdot \vec{q})\,,\\
(p\oplus q)_i&\,=\,p_i+q_i+\frac{q_0 p_i}{\Lambda} + \frac{p_i}{2\Lambda^2}\left(q_0^2-\vec{q}^2\right)\,.
\end{split}
\label{eq:kappa-CL}
\end{equation}
From the coproduct of the boost, Eq.~\eqref{co-boost}, and using Eqs.~\eqref{co-transformed}-\eqref{co-transformed2}, together with the usual commutation relations $[N_i,P_0]=-iP_i$, $[N_i,P_j]=i\delta_{ij}P_0$, $[J_i,P_j]=i\epsilon_{ijk}P_k$ (we are working in the classical basis, where the Lorentz transformations in the one-particle system are linear), we obtain
\begin{equation}
\begin{split}
p'_0&\,=\,p_0+\vec{p}\cdot\vec{\xi}\,, \quad \quad \quad \quad p'_i\,=\,p_i+p_0\xi_i \,, \\
q'_0&\,=\,q_{0}+\vec{q}\cdot\vec{\xi}\left(1-\frac{p_{0}}{\Lambda}+\frac{p_{0}^{2}}{2\Lambda^{2}}+\frac{\vec{p}^{2}}{2\Lambda^{2}}\right)\,,
\\
q'_{i}&\,=\,q_{i}+q_{0}\xi_{i}\left(1-\frac{p_{0}}{\Lambda}+\frac{p_{0}^{2}}{2\Lambda^{2}}+\frac{\vec{p}^{2}}{2\Lambda^{2}}\right)+(\vec{p}\cdot\vec{q})\xi_{i}\left(\frac{1}{\Lambda}-\frac{p_{0}}{\Lambda^{2}}\right)+\vec{q}\cdot\vec{\xi}\left(-\frac{p_{i}}{\Lambda}+\frac{p_{0}p_{i}}{\Lambda^{2}}\right) \,.
\end{split}
\label{eq:kappa-transformed}
\end{equation}
Comparing Eq.~\eqref{eq:kappa-CL} with Eq.~\eqref{generalCL}, and Eq.~\eqref{eq:kappa-transformed} with Eqs.~\eqref{generaltr1}-\eqref{generaltr4}, we see that the choice of the coefficients that reproduces $\kappa$-Poincaré in the classical basis is
\[
v_{1}^{R}\,=\,1\,,\qquad c_{1}\,=\,c_{3}\,=\,\frac{1}{2} \,,
\]
with the rest of the parameters equal to zero. We can see that, as expected, $\kappa$-Poincaré is a particular case of our general framework, which includes a DRK beyond SR up to second order in the power expansion in $(1/\Lambda)$. In fact, we can reproduce the covariant terms of the $\kappa$-Poincaré kinematics with $b=v^L=-v^R/2=-1/6$.
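This identification can be cross-checked with \texttt{sympy}. In the sketch below (our own verification; we take $p^2$, $q^2$ and $(p\cdot q)$ in Eq.~\eqref{generalCL} to denote the Minkowski squares and product) we substitute $v_1^R=1$, $c_1=c_3=1/2$, with all other parameters set to zero, into the surviving terms of Eq.~\eqref{generalCL} and compare with Eq.~\eqref{eq:kappa-CL}:
\begin{verbatim}
import sympy as sp

L = sp.symbols('Lambda', positive=True)
p0, q0 = sp.symbols('p0 q0')
p = sp.Matrix(sp.symbols('p1 p2 p3'))
q = sp.Matrix(sp.symbols('q1 q2 q3'))
pp  = p0**2 - p.dot(p)       # Minkowski squares and product (assumption)
qq  = q0**2 - q.dot(q)
pdq = p0*q0 - p.dot(q)

v1R, c1, c3 = 1, sp.Rational(1, 2), sp.Rational(1, 2)   # all other parameters 0

# surviving terms of Eq. (generalCL):
cl0 = (p0 + q0 + v1R*p.dot(q)/L + 2*c1*p0*qq/(2*L**2)
       - v1R**2*q0*pp/(2*L**2) + (2*c3 + v1R**2)*p0*pdq/(2*L**2))
cli = p + q + v1R*q0*p/L + 2*c1*p*qq/(2*L**2) + (2*c3 - v1R**2)*p*pdq/(2*L**2)

# classical-basis DCL read off from the coproducts, Eq. (eq:kappa-CL):
kp0 = (p0 + q0 + p.dot(q)/L + p0*(q0**2 - q.dot(q))/(2*L**2)
       + q0*(p0**2 + p.dot(p))/(2*L**2) - p0*p.dot(q)/L**2)
kpi = p + q + q0*p/L + p*(q0**2 - q.dot(q))/(2*L**2)

print(sp.simplify(cl0 - kp0))   # -> 0
print(sp.simplify(cli - kpi))   # -> zero column vector
\end{verbatim}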
We have found a systematic way to obtain all the possible DRKs up to second order, and this work can be generalized order by order. Then, we have plenty of ways to go beyond SR. A physical criterion is needed in order to constrain the possible kinematics, an additional ingredient that is still not clear. In this sense, considering a different framework might lead to a better understanding of how a DRK appears and what it represents from a physical point of view. This will be the aim of the next chapter, where we will study how a DRK naturally emerges from the geometrical properties of a curved momentum space.
\chapter{Curved momentum space}
\label{chapter_curved_momentum_space}
\ifpdf
\graphicspath{{Chapter3/Figs/Raster/}{Chapter3/Figs/PDF/}{Chapter3/Figs/}}
\else
\graphicspath{{Chapter3/Figs/Vector/}{Chapter3/Figs/}}
\fi
\epigraph{Equations are just the boring part of mathematics. I attempt to see things in terms of geometry.}{Stephen Hawking}
As we have mentioned in the previous chapters, Hopf algebras are a mathematical tool which has been used as a way to characterize a DRK, considering the DDR as the Casimir of the Poincar\'{e} algebra in a certain basis, and the DCL as given by the coproduct operation. The description of symmetries in terms of Hopf algebras introduces a noncommutative spacetime~\cite{Majid:1999tc} that can be understood as the dual of a curved momentum space. In the particular case of the deformation of $\kappa$-Poincar\'{e}~\cite{Lukierski:1991pn}, the noncommutative spacetime that arises is $\kappa$-Minkowski, as we have shown in the Introduction, from which one can deduce a momentum geometry corresponding to de Sitter~\cite{KowalskiGlikman:2002ft}.
In Refs.~\cite{AmelinoCamelia:2011bm,Amelino-Camelia:2013sba,Lobo:2016blj} there are other proposals that try to establish a relation between a geometry in momentum space and a deformed kinematics. In Ref.~\cite{AmelinoCamelia:2011bm}, the DDR is defined as the square of the distance in momentum space from the origin to a point $p$, and the DCL is associated with a non-metrical connection. The main problem of this work is that there is no mention of Lorentz transformations, and hence of a relativity principle, the fundamental ingredient of a DRK.
Another proposal was presented in Ref.~\cite{Amelino-Camelia:2013sba}, taking a different path to establish a relation between a DCL and a curved momentum space through a connection, which in this case may (but need not) be the affine connection of the metric that defines the DDR in the same way as before. This link is carried out by parallel transport, implemented by a connection in momentum space, which indicates how momenta must compose. They found a way to implement some DLT implementing the relativity principle; with this procedure, any connection could be considered, giving any possible DRK, and then this would reduce to the study of a generic DRK, as we did in the previous chapter.
In Ref.~\cite{Lobo:2016blj}, a possible correspondence between a DCL and the isometries of a curved momentum space related to translations (transformations that do not leave the origin invariant) is considered. The Lorentz transformations are the homogeneous transformations (leaving the origin invariant), in such a way that a relativity principle holds if the DDR is compatible with the DCL, and the DCL with the DLT. As one wants 10 isometries (6 Lorentz transformations and 4 translations), one should consider only maximally symmetric spaces. Then, there is only room for three options: Minkowski, de Sitter or anti-de Sitter momentum space.
However, in Ref.~\cite{Lobo:2016blj} there is no clear way to obtain the DCL because, in fact, there are many isometries that do not leave the origin invariant, so a new ingredient is mandatory. Moreover, the relativity principle argument is not really clear, since one needs to talk about the transformed momenta of a set of two particles, as we saw in Ch.~\ref{chapter_second_order}.
In this chapter, we will first review the geometrical framework proposed in Ref.~\cite{AmelinoCamelia:2011bm}. After that, we will present our proposal~\cite{Carmona:2019fwf}: a precise way to understand a DCL, which is associated with translations; in order to find the correct ones, we must require their generators to form a concrete subalgebra inside the algebra of isometries of the momentum space metric.
We will see how the much studied $\kappa$-Poincar\'{e} kinematics can be obtained from our proposal. In fact, the method we propose can be used in order to obtain other DRKs, such as Snyder~\cite{Battisti:2010sr} and the so-called hybrid models~\cite{Meljanac:2009ej}.
Finally, we will see the correspondence between our prescription and the one proposed in Refs.~\cite{AmelinoCamelia:2011bm,Amelino-Camelia:2013sba}.
\section{Momentum space geometry in relative locality}
In Ref.~\cite{AmelinoCamelia:2011nt}, a physical observer who can measure the energies and momenta of particles in her vicinity is considered.
This observer can define a metric in momentum space by performing measurements in a one-particle system, and a (non-metrical) connection by performing measurements in a multi-particle system.
The one-particle system measurement allows the observer to determine the geometry of momentum space through the dispersion relation, considering it as the square of the geodesic distance from the origin to a point $p$ in momentum space, which corresponds to the momentum of the particle,
\begin{equation}
D^2(p)\,\equiv \,D^2(0,p)\,=\,m^2\,.
\end{equation}
The kinetic energy measurement defines the geodesic distance between the momenta of two particles of mass $m$: $p$, which is at rest, and $p'$, with kinetic energy $K$, i.e.\ $D(p)=D(p')=m$, and
\begin{equation}
D^2(p,p')\,=\,-2 m K\,,
\end{equation}
where the minus sign appears since we are considering a Lorentzian momentum manifold. From both measurements she can reconstruct a metric in momentum space
\begin{equation}
dk^2\,=\, h^{\mu\nu}(k)dk_\mu dk_\nu\,.
\end{equation}
This metric must reduce to the Minkowski metric in the limit $\Lambda\rightarrow \infty$. Also, they argued that this metric must possess 10 isometries (transformations that leave the form of the metric invariant), 6 related with Lorentz transformations and 4 with translations, so the only possible metrics are those corresponding to a maximally symmetric space, leaving only three options: Minkowski, de Sitter or anti-de Sitter momentum space.
From the measurement of a system of particles, she can deduce the composition law of momenta, an operation that joins two momenta; in order to consider more particles, the total momentum is computed by gathering momenta in pairs. The authors also define a momentum called the \textit{antipode} $\hat{p}$ (previously introduced in the context of Hopf algebras~\cite{Majid:1995qg}) in such a way that $\hat{p}\oplus p=0$.
This composition law, which in principle need not be linear, commutative or associative, defines the geometry of momentum space related to the algebra of combinations of momenta. The connection at the origin is
\begin{equation}
\Gamma^{\tau \lambda}_\nu (0)\,=\,-\left.\frac{\partial^2 (p\oplus q)_\nu}{\partial p_\tau \partial q_\lambda}\right\rvert_{p,q \rightarrow 0}\,,
\end{equation}
and the torsion
\begin{equation}
T^{\tau \lambda}_\nu (0)\,=\,-\left.\frac{\partial^2 \left((p\oplus q)-(q\oplus p)\right)_\nu}{\partial p_\tau \partial q_\lambda}\right\rvert_{p,q \rightarrow 0}\,.
\end{equation}
The curvature tensor is determined from the lack of associativity of the composition law
\begin{equation}
R^{\mu\nu\rho}_\sigma (0)\,=\,2 \left.\frac{\partial^3 \left((p\oplus q)\oplus k-p\oplus (q\oplus k)\right)_\sigma}{\partial p_{[\mu} \partial q_{\nu]} \partial k_\rho}\right\rvert_{p,q,k \rightarrow 0}\,,
\end{equation}
where the bracket denotes the anti-symmetrization. They suggested that the non-associativity of the composition law, giving a non-vanishing curvature tensor in momentum space, could be tested with experiments.
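As an illustration of these definitions (anticipating the bicrossproduct-basis composition law of Eq.~\eqref{kappa-DCL}, with the minus sign and restricted to $1+1$ dimensions for brevity), the following \texttt{sympy} sketch evaluates the connection and the torsion at the origin:
\begin{verbatim}
import sympy as sp

L = sp.symbols('Lambda', positive=True)
p0, p1, q0, q1 = sp.symbols('p0 p1 q0 q1')
P, Q = (p0, p1), (q0, q1)

oplus = lambda a, b: (a[0] + b[0], a[1] + b[1]*sp.exp(-a[0]/L))
pq, qp = oplus(P, Q), oplus(Q, P)

at0 = {p0: 0, p1: 0, q0: 0, q1: 0}
Gamma = [[[-sp.diff(pq[n], P[t], Q[l]).subs(at0)
           for l in (0, 1)] for t in (0, 1)] for n in (0, 1)]
Tors  = [[[-sp.diff(pq[n] - qp[n], P[t], Q[l]).subs(at0)
           for l in (0, 1)] for t in (0, 1)] for n in (0, 1)]
print(Gamma)  # only Gamma^{01}_1 = 1/Lambda survives
print(Tors)   # Tors[1][0][1] = 1/Lambda = -Tors[1][1][0]
\end{verbatim}
The non-commutativity of this composition law shows up as a non-vanishing torsion while, since this particular DCL is associative, the curvature tensor defined above vanishes.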
In order to obtain the connection at any point, they defined a new composition depending on another momentum $k$
\begin{equation}
(p\oplus_k q) \,\doteq\, k\oplus\left((\hat{k}\oplus p)\oplus(\hat{k}\oplus q)\right)\,.
\label{k-DCL}
\end{equation}
Then, they claimed that the connection at a point $k$ can be determined by
\begin{equation}
\Gamma^{\tau \lambda}_\nu (k)\,=\,-\left.\frac{\partial^2 (p\oplus_{k}q)_\nu}{\partial p_\tau \partial q_\lambda}\right\rvert_{p,q \rightarrow k}\,.
\label{k-connection}
\end{equation}
In principle, the connection is not metrical, in the sense that it is not the affine connection given by the metric defining the DDR. This follows from their construction, which separates the dispersion relation from the composition law from the very beginning.
But a DRK is not only composed of a DDR and a DCL. In order to have a relativity principle, a DLT for the one and two-particle systems must make the previous ingredients compatible. The Lorentz transformations of the one-particle system are proposed to be determined by the metric, being directly compatible with the DDR (the explicit expression of the distance is invariant under isometries). However, it is not clear how to implement the two-particle transformations, making all the ingredients of the kinematics compatible with each other.
In the next section, we present another proposal which tries to avoid these problems and puts all the ingredients of the kinematics in the same framework.
\section{Derivation of a DRK from the momentum space geometry}
\label{sec:derivation}
As we have commented previously, a DRK is composed of a DDR, a DCL and, in order to have a relativity principle, a DLT for the one and two-particle systems, making the previous ingredients compatible. In this section we will explain how we propose to construct a DRK from the geometry of a maximally symmetric momentum space.
\subsection{Definition of the deformed kinematics}
In a maximally symmetric space, there are 10 isometries. We will denote our momentum space metric as $g_{\mu\nu}(k)$\footnote{There is a particular choice of coordinates in momentum space which leads the metric to take the simple form $g_{\mu\nu}(k)=\eta_{\mu\nu} \pm k_\mu k_\nu/\Lambda^2$, where the de Sitter (anti-de Sitter) space corresponds with the positive (negative) sign.}. By definition, an isometry is a transformation $k\to k'$ satisfying
\begin{equation}
g_{\mu\nu}(k') \,=\, \frac{\partial k'_\mu}{\partial k_\rho} \frac{\partial k'_\nu}{\partial k_\sigma} g_{\rho\sigma}(k)\, .
\end{equation}
One can always take a system of coordinates in such a way that $g_{\mu\nu}(0)=\eta_{\mu\nu}$, and we write the isometries in the form
\begin{equation}
k'_\mu \,=\, [T_a(k)]_\mu \,=\, T_\mu(a, k)\,, \quad\quad\quad k'_\mu \,=\, [J_\omega(k)]_\mu \,=\,J_\mu(\omega, k)\,,
\end{equation}
where $a$ is a set of four parameters and $\omega$ of six, and
\begin{equation}
T_\mu(a, 0) \,=\, a_\mu\,, \quad\quad\quad J_\mu(\omega, 0) \,=\, 0\,,
\end{equation}
so $J_\mu(\omega, k)$ are the 6 isometries forming a subgroup that leave the origin in momentum space invariant, and $T_\mu(a, k)$ are the other 4 isometries which transform the origin and that one can call translations.
We will identify the isometries $k'_\mu = J_\mu(\omega, k)$ with the DLT of the one-particle system, being $\omega$ the six parameters of a Lorentz transformation. The dispersion relation is defined, rather than as the square of the distance from the origin to a point $k$ (which was the approach taken in the previous section), as any arbitrary function of this distance with the SR limit when the high energy scale tends to infinity\footnote{This disquisition can be avoided with a redefinition of the mass with the same function $f$ that relates the Casimir with the distance $C(k)=f(D(0,k))$.}. Then, under a Lorentz transformation, the equality $C(k)=C(k')$ holds, allowing us to determine the Casimir directly from $J_\mu(\omega, k)$. In this way we avoid the computation of the distance and obtain in a simple way the dependence on $k$ of $C(k)$.
The other 4 isometries $k'_\mu = T_\mu(a, k)$ related with translations define the composition law $p\oplus q$ of two momenta $p$, $q$ through
\begin{equation}
(p\oplus q)_\mu \doteq T_\mu(p, q)\,.
\label{DCL-translations}
\end{equation}
One can easily see that the DCL is related to the translation composition through
\begin{equation}
p\oplus q=T_p(q)=T_p(T_q(0))=(T_p \circ T_q)(0)\,.
\label{T-composition}
\end{equation}
Note that the equation above implies that $T_{(p\oplus q)}$ differs from $(T_p \circ T_q)$ by a Lorentz transformation, since $T_{(p\oplus q)}^{-1}\circ T_p \circ T_q$ is a transformation that leaves the origin invariant.
From this perspective, a DRK (in Sec.~\ref{sec:diagram} we will see that with this construction a relativity principle holds) can be obtained by identifying the isometries $T_a$, $J_\omega$ with the composition law and the Lorentz transformations, which fixes the dispersion relation.
Then, starting from a metric, we can deduce the DRK by obtaining $T_a$, $J_\omega$ through
\begin{equation}
g_{\mu\nu}(T_a(k)) \,=\, \frac{\partial T_\mu(a, k)}{\partial k_\rho} \frac{\partial T_\nu(a, k)}{\partial k_\sigma} g_{\rho\sigma}(k), \quad\quad
g_{\mu\nu}(J_\omega(k)) \,=\, \frac{\partial J_\mu(\omega, k)}{\partial k_\rho} \frac{\partial J_\nu(\omega, k)}{\partial k_\sigma} g_{\rho\sigma}(k)\,.
\label{T,J}
\end{equation}
The previous equations have to be satisfied for any choice of the parameters $a$, $\omega$. From the limit $k\to 0$ in (\ref{T,J})
\begin{equation}
\begin{split}
g_{\mu\nu}(a) \,=&\, \left[\lim_{k\to 0} \frac{\partial T_\mu(a, k)}{\partial k_\rho}\right] \,
\left[\lim_{k\to 0} \frac{\partial T_\nu(a, k)}{\partial k_\sigma}\right] \,\eta_{\rho\sigma}\,, \\
\eta_{\mu\nu} \,=&\, \left[\lim_{k\to 0} \frac{\partial J_\mu(\omega, k)}{\partial k_\rho}\right] \,
\left[\lim_{k\to 0} \frac{\partial J_\nu(\omega, k)}{\partial k_\sigma}\right] \,\eta_{\rho\sigma}\,,
\end{split}
\end{equation}
one can identify
\begin{equation}
\lim_{k\to 0} \frac{\partial T_\mu(a, k)}{\partial k_\rho} \,=\, \delta^\rho_\alpha e_\mu^\alpha(a)\,, \quad\quad\quad
\lim_{k\to 0} \frac{\partial J_\mu(\omega, k)}{\partial k_\rho} \,=\, L_\mu^\rho(\omega)\,,
\label{e,L}
\end{equation}
where $e_\mu^\alpha(k)$ is the (inverse of\footnote{Note that the metric $g_{\mu\nu}$ is the inverse of $g^{\mu\nu}$.} the) tetrad of the momentum space, and $L_\mu^\rho(\omega)$ is the standard Lorentz transformation matrix with parameters $\omega$. From Eq.~\eqref{DCL-translations} and Eq.~\eqref{e,L}, one obtains
\begin{equation}
\lim_{k\to 0} \frac{\partial(a\oplus k)_\mu}{\partial k_\rho} \,=\, \delta^\rho_\alpha e_\mu^\alpha(a)\,,
\label{magicformula}
\end{equation}
which leads to a fundamental relationship between the DCL and the momentum space tetrad.
For infinitesimal transformations, we have
\begin{equation}
T_\mu(\epsilon, k) = k_\mu + \epsilon_\alpha {\cal T}_\mu^\alpha(k)\,, \quad\quad\quad
J_\mu(\epsilon, k) = k_\mu + \epsilon_{\beta\gamma} {\cal J}^{\beta\gamma}_\mu(k)\,,
\label{infinit_tr}
\end{equation}
and Eq.~(\ref{T,J}) leads to the equations
\begin{equation}
\frac{\partial g_{\mu\nu}(k)}{\partial k_\rho} {\cal T}^\alpha_\rho(k) \,=\, \frac{\partial{\cal T}^\alpha_\mu(k)}{\partial k_\rho} g_{\rho\nu}(k) +
\frac{\partial{\cal T}^\alpha_\nu(k)}{\partial k_\rho} g_{\mu\rho}(k)\,,
\label{cal(T)}
\end{equation}
\begin{equation}
\frac{\partial g_{\mu\nu}(k)}{\partial k_\rho} {\cal J}^{\beta\gamma}_\rho(k) \,=\,
\frac{\partial{\cal J}^{\beta\gamma}_\mu(k)}{\partial k_\rho} g_{\rho\nu}(k) +
\frac{\partial{\cal J}^{\beta\gamma}_\nu(k)}{\partial k_\rho} g_{\mu\rho}(k)\,,
\label{cal(J)}
\end{equation}
which allow us to obtain the Killing vectors ${\cal J}^{\beta\gamma}$, but do not completely determine ${\cal T}^\alpha$. This is due to the fact that if ${\cal T}^\alpha$, ${\cal J}^{\beta\gamma}$ are a solution of the Killing equations \eqref{cal(T)}-\eqref{cal(J)}, then ${\cal T}^{\prime \alpha} = {\cal T}^\alpha + c^\alpha_{\beta\gamma} {\cal J}^{\beta\gamma}$ is also a solution of Eq.~(\ref{cal(T)}) for any arbitrary constants $c^\alpha_{\beta\gamma}$, and then $T'_\mu(\epsilon, 0)=T_\mu(\epsilon, 0)=\epsilon_\mu$. This observation is completely equivalent to the comment after Eq.~\eqref{T-composition}. In order to eliminate this ambiguity, since we know that the isometry generators close an algebra, we can choose them as
\begin{equation}
T^\alpha \,=\, x^\mu {\cal T}^\alpha_\mu(k), \quad\quad\quad J^{\alpha\beta} \,=\, x^\mu {\cal J}^{\alpha\beta}_\mu(k)\,,
\label{generators_withx}
\end{equation}
so that their Poisson brackets
\begin{align}
&\{T^\alpha, T^\beta\} \,=\, x^\rho \left(\frac{\partial{\cal T}^\alpha_\rho(k)}{\partial k_\sigma} {\cal T}^\beta_\sigma(k) - \frac{\partial{\cal T}^\beta_\rho(k)}{\partial k_\sigma} {\cal T}^\alpha_\sigma(k)\right)\,, \\
&\{T^\alpha, J^{\beta\gamma}\} \,=\, x^\rho \left(\frac{\partial{\cal T}^\alpha_\rho(k)}{\partial k_\sigma} {\cal J}^{\beta\gamma}_\sigma(k) - \frac{\partial{\cal J}^{\beta\gamma}_\rho(k)}{\partial k_\sigma} {\cal T}^\alpha_\sigma(k)\right)\,,
\end{align}
close a particular algebra. Then, we see that this ambiguity in defining the translations is just the ambiguity in the choice of the isometry algebra, i.e., in the basis. Every choice of the translation generators will lead to a different DCL, and then to a different DRK.
\subsection{Relativistic deformed kinematics}
\label{sec:diagram}
In this subsection we will prove that the kinematics obtained as proposed before is in fact a DRK. The proof can be sketched in the next diagram:
\begin{center}
\begin{tikzpicture}
\node (v1) at (-2,1) {$q$};
\node (v4) at (2,1) {$\bar q$};
\node (v2) at (-2,-1) {$p \oplus q$};
\node (v3) at (2,-1) {$(p \oplus q)^\prime$};
\draw [->] (v1) edge (v2);
\draw [->] (v4) edge (v3);
\draw [->] (v2) edge (v3);
\node at (-2.6,0) {$T_p$};
\node at (2.7,0) {$T_{p^\prime}$};
\node at (0,-1.4) {$J_\omega$};
\end{tikzpicture}
\end{center}
where a primed momentum indicates the transformation through $J_\omega$, and $T_p$, $T_{p'}$ are the translations with parameters $p$ and $p'$. One can define $\bar{q}$ as the point that satisfies
\begin{equation}
(p\oplus q)' \,=\, (p' \oplus \bar{q})\,.
\label{qbar1}
\end{equation}
One sees that in the case $q=0$, also $\bar{q}=0$, and in any other case with $q\neq 0$, the point $\bar{q}$ is obtained from $q$ by an isometry, which is a composition of the translation $T_p$, a Lorentz transformation $J_\omega$, and the inverse of the translation $T_{p'}$ (since the isometries are a group of transformations, any composition of isometries is also an isometry). So we have found that there is an isometry $q\rightarrow \bar{q}$, that leaves the origin invariant, and then
\begin{equation}
C(q) \,=\, C(\bar{q})\,,
\label{qbar2}
\end{equation}
since they are at the same distance from the origin. Eqs.~\eqref{qbar1}-\eqref{qbar2} imply that the deformed kinematics with ingredients $C$ and $\oplus$ is a DRK if one identifies the momenta $(p', \bar{q})$ as the two-particle Lorentz transformation of $(p, q)$. In particular, Eq.~(\ref{qbar1}) tells us that the DCL is invariant under the previously defined Lorentz transformation and Eq.~(\ref{qbar2}), together with $C(p)=C(p')$, that the DDR of both momenta is also Lorentz invariant. We can see that with this definition of the two-particle Lorentz transformations, one of the particles ($p$) transforms as a single momentum, but the transformation of the other one ($q$) depends of both momenta. This computation will be carried out in the next subsection in the particular example of $\kappa$-Poincaré.
\section{Isotropic relativistic deformed kinematics}
\label{sec:examples}
In this section we derive the construction in detail for two simple isotropic kinematics, $\kappa$-Poincaré and Snyder. Also, we will show how to construct a DRK beyond these two simple cases, the kinematics known as hybrid models.
If the DRK is isotropic, the general form of the algebra of the generators of isometries is
\begin{equation}
\{T^0, T^i\} \,=\, \frac{c_1}{\Lambda} T^i + \frac{c_2}{\Lambda^2} J^{0i}, \quad\quad\quad \{T^i, T^j\} \,=\, \frac{c_2}{\Lambda^2} J^{ij}\,,
\label{isoRDK}
\end{equation}
where we assume that the generators $J^{\alpha\beta}$ satisfy the standard Lorentz algebra and, due to the fact that isometries are a group, the Poisson brackets of $T^\alpha$ and $J^{\beta\gamma}$ are fixed by Jacobi identities\footnote{The coefficients proportional to the Lorentz generators in Eq.~\eqref{isoRDK} are the same also due to Jacobi identities.}. For each choice of the coefficients $(c_1/\Lambda)$ and $(c_2/\Lambda^2)$ (and then for the algebra), and for each choice of a metric of a maximally symmetric momentum space in isotropic coordinates, one has to find the isometries of that metric whose generators close the chosen algebra, in order to obtain a DRK.
\subsection{\texorpdfstring{$\kappa$}{k}-Poincaré relativistic kinematics}
\label{subsection_kappa_desitter}
We can consider the simple case where $c_2=0$ in Eq.~\eqref{isoRDK}, so the generators of translations close a subalgebra\footnote{We have reabsorbed the coefficient $c_1$ in the scale $\Lambda$.}
\begin{equation}
\{T^0, T^i\} \,=\, \pm \frac{1}{\Lambda} T^i\,.
\label{Talgebra}
\end{equation}
A well-known result of differential geometry (see Ch.~6 of Ref.~\cite{Chern:1999jn}) is that if the generators of left-translations $T^\alpha$, transforming $k \to T_a(k) = (a\oplus k)$, form a Lie algebra, then the generators of right-translations $\tilde{T}^\alpha$, transforming $k \to (k\oplus a)$, close the same algebra but with a different sign
\begin{equation}
\{\tilde{T}^0, \tilde{T}^i\} \,=\, \mp \frac{1}{\Lambda} \tilde{T}^i \,.
\label{Ttildealgebra}
\end{equation}
We have found the explicit relation between the infinitesimal right-translations and the tetrad of the momentum metric in Eq.~\eqref{magicformula}, which gives
\begin{equation}
(k\oplus\epsilon)_\mu=k_\mu+\epsilon_\alpha e^\alpha_\mu\equiv \tilde{T}_\mu(k,\epsilon).
\end{equation}
Comparing with Eq.~\eqref{infinit_tr} and Eq.~\eqref{generators_withx}, we see that right-translation generators are given by
\begin{equation}
\tilde{T}^\alpha \,=\, x^\mu e^\alpha_\mu(k)\,.
\label{Ttilde}
\end{equation}
Since both algebras \eqref{Talgebra}-\eqref{Ttildealgebra} are of the $\kappa$-Minkowski type, the problem of finding a tetrad $e^\alpha_\mu(k)$ compatible with the algebra of Eq.~(\ref{Ttildealgebra}) is equivalent to that of obtaining a representation of this noncommutativity in terms of canonical coordinates of the phase space. One can easily confirm that the choice of the tetrad
\begin{equation}
e^0_0(k) \,=\, 1\,, \quad\quad\quad e^0_i(k) \,=\, e^i_0(k) \,=\, 0\,, \quad\quad\quad e^i_j (k) \,=\, \delta^i_j e^{\mp k_0/\Lambda}\,,
\label{bicross-tetrad}
\end{equation}
leads to a representation of $\kappa$-Minkowski noncommutativity.
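This can be verified with a short symbolic computation. The following \texttt{sympy} sketch is only an illustrative cross-check (it is not part of the derivation): it is restricted to $1+1$ dimensions, takes the upper sign in Eq.~\eqref{bicross-tetrad}, and assumes canonical Poisson brackets $\{k_\nu, x^\mu\}=\delta^\mu_\nu$; the overall sign of the resulting bracket depends on these sign conventions.
\begin{verbatim}
import sympy as sp

# 1+1 dimensional phase space (x^0, x^1; k_0, k_1)
x0, x1, k0, k1, L = sp.symbols('x0 x1 k0 k1 Lambda')

def pb(A, B):
    # Poisson bracket realizing {k_nu, x^mu} = delta^mu_nu
    return sum(sp.diff(A, k)*sp.diff(B, x) - sp.diff(A, x)*sp.diff(B, k)
               for k, x in [(k0, x0), (k1, x1)])

# tetrad of Eq. (bicross-tetrad), upper sign choice
e = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): sp.exp(-k0/L)}

# right-translation generators T~^alpha = x^mu e^alpha_mu(k)
T0 = x0*e[(0, 0)] + x1*e[(0, 1)]
T1 = x0*e[(1, 0)] + x1*e[(1, 1)]

# kappa-Minkowski algebra: {T~^0, T~^1} = (1/Lambda) T~^1
# (the overall sign is fixed by the conventions above)
assert sp.simplify(pb(T0, T1) - T1/L) == 0
\end{verbatim}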
In order to obtain the finite translations $T_\mu(a,k)$, which in this case form a group, one can try to generalize Eq.~\eqref{e,L} to define a transformation that does not change the form of the tetrad:
\begin{equation}
e_\mu^\alpha(T(a, k)) \,=\, \frac{\partial T_\mu(a, k)}{\partial k_\nu} \,e_\nu^\alpha(k)\,.
\label{T(a,k)}
\end{equation}
If $T_\mu(a,k)$ is a solution of the previous equation, the translation leaves the tetrad, and hence the metric, invariant, so it is an isometry. Translations then form a group, since the composition of two transformations leaving the tetrad invariant also leaves the tetrad invariant. Indeed, Eq.~\eqref{T(a,k)} can be explicitly solved in order to obtain the finite translations. For the particular choice of the tetrad in Eq.~\eqref{bicross-tetrad}, the translations read (see Appendix~\ref{appendix_translations})
\begin{equation}
T_0(a, k) \,=\, a_0 + k_0, \quad\quad\quad T_i(a, k) \,=\, a_i + k_i e^{\mp a_0/\Lambda}\,,
\end{equation}
and then the DCL is
\begin{equation}
(p\oplus q)_0 \,=\, T_0(p, q) \,=\, p_0 + q_0\,, \quad\quad\quad
(p\oplus q)_i \,=\, T_i(p, q) \,=\, p_i + q_i e^{\mp p_0/\Lambda}\,,
\label{kappa-DCL}
\end{equation}
which is the one obtained in the bicrossproduct basis of $\kappa$-Poincaré kinematics (up to a sign depending on the choice of the initial sign of $\Lambda$ in Eq.~\eqref{bicross-tetrad}).
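As a consistency check, one can verify symbolically that these translations leave the tetrad invariant, Eq.~\eqref{T(a,k)}, and that the resulting DCL is associative, as corresponds to a group of translations. The following sketch is illustrative (restricted to $1+1$ dimensions and to the upper sign choice):
\begin{verbatim}
import sympy as sp

k0, k1, a0, a1, L = sp.symbols('k0 k1 a0 a1 Lambda')
p = sp.symbols('p0 p1'); q = sp.symbols('q0 q1'); r = sp.symbols('r0 r1')

def tetrad(m0, m1):        # e^alpha_mu, upper sign choice
    return sp.Matrix([[1, 0], [0, sp.exp(-m0/L)]])

# finite translations T(a, k) for this tetrad
T = [a0 + k0, a1 + k1*sp.exp(-a0/L)]

# invariance of the tetrad, Eq. (T(a,k)):
# e^alpha_mu(T(a,k)) = (dT_mu/dk_nu) e^alpha_nu(k)
eT, ek = tetrad(T[0], T[1]), tetrad(k0, k1)
for al in range(2):
    for mu in range(2):
        rhs = sum(sp.diff(T[mu], [k0, k1][nu])*ek[al, nu] for nu in range(2))
        assert sp.simplify(eT[al, mu] - rhs) == 0

# the induced DCL, Eq. (kappa-DCL), and its associativity (group law)
def oplus(a, b):
    return (a[0] + b[0], a[1] + sp.exp(-a[0]/L)*b[1])

lhs, rhs = oplus(oplus(p, q), r), oplus(p, oplus(q, r))
assert all(sp.simplify(l - s) == 0 for l, s in zip(lhs, rhs))
\end{verbatim}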
From the equation
\begin{equation}
\frac{\partial C(k)}{\partial k_\mu} \,{\cal J}^{\alpha\beta}_\mu(k) \,=\, 0 \,,
\label{eq:casimir_J}
\end{equation}
one can obtain the DDR, where ${\cal J}^{\alpha\beta}$ are the infinitesimal Lorentz transformations satisfying Eq.~\eqref{cal(J)} with the metric $g_{\mu\nu}(k)=e^\alpha_\mu(k)\eta_{\alpha\beta}e^\beta_\nu(k)$ defined by the tetrad~\eqref{bicross-tetrad}:
\begin{equation}
\begin{split}
&0 \,=\, \frac{\partial{\cal J}^{\alpha\beta}_0(k)}{\partial k_0}\,, \quad
0 \,=\, - \frac{\partial{\cal J}^{\alpha\beta}_0(k)}{\partial k_i} e^{\mp 2k_0/\Lambda} + \frac{\partial{\cal J}^{\alpha\beta}_i(k)}{\partial k_0}\,, \\
&\pm \frac{2}{\Lambda} {\cal J}^{\alpha\beta}_0(k) \delta_{ij} \,=\, - \frac{\partial{\cal J}^{\alpha\beta}_i(k)}{\partial k_j} - \frac{\partial{\cal J}^{\alpha\beta}_j(k)}{\partial k_i}\,.
\end{split}
\end{equation}
One gets finally
\begin{equation}
{\cal J}^{0i}_0(k) \,=\, -k_i\,, \quad \quad \quad {\cal J}^{0i}_j(k)\,=\, \pm \delta^i_j \,\frac{\Lambda}{2} \left[e^{\mp 2 k_0/\Lambda} - 1 - \frac{\vec{k}^2}{\Lambda^2}\right] \pm \,\frac{k_i k_j}{\Lambda}\,,
\label{eq:j_momentum_space}
\end{equation}
and then
\begin{equation}
C(k) \,=\, \Lambda^2 \left(e^{k_0/\Lambda} + e^{-k_0/\Lambda} - 2\right) - e^{\pm k_0/\Lambda} \vec{k}^2 \,,
\label{eq:casimir_momentum_space}
\end{equation}
which is the same function of the momentum which defines the DDR of $\kappa$-Poincaré kinematics in the bicrossproduct basis (up to the sign in $\Lambda$).
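A quick cross-check of Eqs.~\eqref{eq:j_momentum_space}-\eqref{eq:casimir_momentum_space} can be done symbolically. The following sketch (illustrative; $1+1$ dimensions, upper sign choice) verifies that the Casimir is annihilated by the boost generator, Eq.~\eqref{eq:casimir_J}:
\begin{verbatim}
import sympy as sp

k0, k1, L = sp.symbols('k0 k1 Lambda')

# boost generator of Eq. (eq:j_momentum_space), upper sign, 1+1 dim.
J0 = -k1
J1 = (L/2)*(sp.exp(-2*k0/L) - 1) + k1**2/(2*L)

# Casimir of Eq. (eq:casimir_momentum_space), upper sign
C = L**2*(sp.exp(k0/L) + sp.exp(-k0/L) - 2) - sp.exp(k0/L)*k1**2

# Eq. (eq:casimir_J): (dC/dk_mu) J^{01}_mu = 0
assert sp.simplify(sp.diff(C, k0)*J0 + sp.diff(C, k1)*J1) == 0
\end{verbatim}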
The last ingredient we need in order to complete the discussion of the kinematics is the two-particle Lorentz transformations. Using the diagram in Sec.~\ref{sec:diagram}, one has to find $\bar{q}$ so that
\begin{equation}
(p\oplus q)' \,=\, p'\oplus \bar{q}\,.
\end{equation}
Equating both expressions and taking only the linear terms in $\epsilon_{\alpha\beta}$ (the parameters of the infinitesimal Lorentz transformation), one arrives at the equation
\begin{equation}
\epsilon_{\alpha\beta} {\cal J}^{\alpha\beta}_\mu(p\oplus q) \,=\, \epsilon_{\alpha\beta} \frac{\partial(p\oplus q)_\mu}{\partial p_\nu} {\cal J}^{\alpha\beta}_\nu(p) + \frac{\partial(p\oplus q)_\mu}{\partial q_\nu} (\bar{q}_\nu - q_\nu)\,.
\end{equation}
From the DCL of \eqref{kappa-DCL} with the minus sign, we find
\begin{align}
& \frac{\partial(p\oplus q)_0}{\partial p_0} \,=\, 1\,, \quad\quad
\frac{\partial(p\oplus q)_0}{\partial p_i} \,=\, 0\,, \quad\quad
\frac{\partial(p\oplus q)_i}{\partial p_0} \,=\, - \frac{q_i}{\Lambda} e^{-p_0/\Lambda}\,, \quad\quad
\frac{\partial(p\oplus q)_i}{\partial p_j} \,=\, \delta_i^j\,, \\
& \frac{\partial(p\oplus q)_0}{\partial q_0} \,=\, 1\,, \quad\quad \frac{\partial(p\oplus q)_0}{\partial q_i} \,=\, 0\,, \quad\quad
\frac{\partial(p\oplus q)_i}{\partial q_0} \,=\, 0\,, \quad\quad \frac{\partial(p\oplus q)_i}{\partial q_j} \,=\, \delta_i^j e^{-p_0/\Lambda}\,.
\end{align}
Then, we obtain
\begin{equation}
\begin{split}
\bar{q}_0 \,&=\, q_0 + \epsilon_{\alpha\beta} \left[{\cal J}^{\alpha\beta}_0(p\oplus q) - {\cal J}^{\alpha\beta}_0(p)\right]\,, \\
\bar{q}_i \,&=\, q_i + \epsilon_{\alpha\beta} \, e^{p_0/\Lambda} \, \left[{\cal J}^{\alpha\beta}_i(p\oplus q) - {\cal J}^{\alpha\beta}_i(p) + \frac{q_i}{\Lambda} e^{-p_0/\Lambda} {\cal J}^{\alpha\beta}_0(p)\right]\,,
\end{split}
\label{eq:jr_momentum_space}
\end{equation}
and one can check that this is the Lorentz transformation of the two-particle system of $\kappa$-Poincaré in the bicrossproduct basis~\eqref{eq:coproducts}.
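One can also verify Eq.~\eqref{eq:jr_momentum_space} symbolically. The sketch below (illustrative; $1+1$ dimensions, minus sign in the DCL, and a single boost parameter \texttt{epsilon} standing for $\epsilon_{\alpha\beta}{\cal J}^{\alpha\beta}$) checks, at first order in the boost parameter, that the total momentum transforms as a single momentum, $(p'\oplus \bar{q})_\mu = (p\oplus q)_\mu + \epsilon\, {\cal J}^{01}_\mu(p\oplus q)$:
\begin{verbatim}
import sympy as sp

p0, p1, q0, q1, L, eps = sp.symbols('p0 p1 q0 q1 Lambda epsilon')

def J(a):    # boost of Eq. (eq:j_momentum_space), upper sign, 1+1 dim.
    return (-a[1], (L/2)*(sp.exp(-2*a[0]/L) - 1) + a[1]**2/(2*L))

def oplus(a, b):   # DCL of Eq. (kappa-DCL), minus sign
    return (a[0] + b[0], a[1] + sp.exp(-a[0]/L)*b[1])

p, q = (p0, p1), (q0, q1)
P = oplus(p, q)

# p -> p' as a single momentum, and qbar of Eq. (eq:jr_momentum_space)
pp = (p0 + eps*J(p)[0], p1 + eps*J(p)[1])
qb = (q0 + eps*(J(P)[0] - J(p)[0]),
      q1 + eps*sp.exp(p0/L)*(J(P)[1] - J(p)[1]
                             + (q1/L)*sp.exp(-p0/L)*J(p)[0]))

# check (p' (+) qbar) = (p (+) q)' at first order in epsilon
lhs = oplus(pp, qb)
rhs = (P[0] + eps*J(P)[0], P[1] + eps*J(P)[1])
for l, r in zip(lhs, rhs):
    assert sp.simplify(sp.series(l - r, eps, 0, 2).removeO()) == 0
\end{verbatim}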
For the choice of the tetrad in Eq.~\eqref{bicross-tetrad}, the metric in momentum space reads~\footnote{This is the de Sitter metric written in the comoving coordinate system used in Ref.~\cite{Gubitosi:2013rna}.}
\begin{equation}
g_{00}(k) \,=\, 1\,, \quad\quad\quad g_{0i}(k) \,=\, g_{i0}(k) \,=\, 0\,, \quad\quad\quad g_{ij}(k) \,=\, - \delta_{ij} e^{\mp 2k_0/\Lambda}\,.
\label{bicross-metric}
\end{equation}
Computing the Riemann-Christoffel tensor, one can check that it corresponds to a de Sitter momentum space with curvature $(12/\Lambda^2)$.\footnote{In Appendix~\ref{appendix_algebra} it is shown that the way we have constructed the DRK as imposing the invariance of the tetrad cannot be followed for the case of anti-de Sitter space.}
To summarize, we have found the $\kappa$-Poincaré kinematics in the bicrossproduct basis~\cite{KowalskiGlikman:2002we} from geometric ingredients of a de Sitter momentum space with the choice of the tetrad of Eq.~\eqref{bicross-tetrad}. For different choices of tetrad (such that the generators of Eq.~\eqref{Ttilde} close the algebra of Eq.~\eqref{Ttildealgebra}), one will find the $\kappa$-Poincaré kinematics in different bases. Then, the different bases of the deformed kinematics are just different choices of coordinates in de Sitter space. Note that when the generators of right-translations constructed from the momentum space tetrad close the algebra of Eq.~\eqref{Ttildealgebra}, the DCL obtained is associative (this can be easily understood since, as the generators of translations close the algebra of Eq.~\eqref{Talgebra}, translations form a group).
\subsection{Beyond \texorpdfstring{$\kappa$}{k}-Poincaré relativistic kinematics}
The other simple choice in the algebra of the translation generators is $c_1=0$, leading to the Snyder algebra explained in the introduction. As the generators of translations do not close an algebra by themselves, we cannot follow the same procedure we used in the previous case for obtaining the $\kappa$-Poincaré kinematics. But considering the simple covariant form of the de Sitter metric, $g_{\mu\nu}(k) =\eta_{\mu\nu} + k_\mu k_\nu/\Lambda^2$, one can find the DCL just by requiring it to be covariant,
\begin{equation}
(p\oplus q)_\mu \,=\, p_\mu f_L\left(p^2/\Lambda^2, p\cdot q/\Lambda^2, q^2/\Lambda^2\right) + q_\mu f_R\left(p^2/\Lambda^2, p\cdot q/\Lambda^2, q^2/\Lambda^2\right)\,,
\label{DCLSnyder-1}
\end{equation}
asking for the following equation to hold:
\begin{equation}
\eta_{\mu\nu} + \frac{(p\oplus q)_\mu (p\oplus q)_\nu}{\Lambda^2} \,=\, \frac{\partial(p\oplus q)_\mu}{\partial q_\rho} \frac{\partial(p\oplus q)_\nu}{\partial q_\sigma} \left(\eta_{\rho\sigma} + \frac{q_\rho q_\sigma}{\Lambda^2}\right)\,.
\end{equation}
Then, one can solve for the two functions $f_L$, $f_R$ of three variables, obtaining
\begin{equation}
\begin{split}
f_L\left(p^2/\Lambda^2, p\cdot q/\Lambda^2, q^2/\Lambda^2\right) \,=&\, \sqrt{1+\frac{q^2}{\Lambda^2}}+\frac{p\cdot q}{\Lambda^2\left(1+\sqrt{1+p^2/\Lambda^2}\right)}\,,\\
f_R\left(p^2/\Lambda^2, p\cdot q/\Lambda^2, q^2/\Lambda^2\right) \,=&\, 1\,,
\label{DCLSnyder-2}
\end{split}
\end{equation}
which is the DCL of Snyder kinematics in the Maggiore representation previously derived in Ref.~\cite{Battisti:2010sr} (the first order terms were obtained also in Ref.~\cite{Banburski:2013jfa}).
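As a cross-check of Eqs.~\eqref{DCLSnyder-1}-\eqref{DCLSnyder-2}, the following sketch (illustrative; $1+1$ dimensions, with a numerical spot-check at a generic rational point instead of a slow symbolic simplification of the nested square roots) verifies that this composition law satisfies the isometry condition written above:
\begin{verbatim}
import sympy as sp

p0, p1, q0, q1, L = sp.symbols('p0 p1 q0 q1 Lambda', positive=True)
eta = sp.diag(1, -1)

def sq(a):          # a^2 = a.a with signature (+,-)
    return a[0]**2 - a[1]**2

def dot(a, b):
    return a[0]*b[0] - a[1]*b[1]

p, q = sp.Matrix([p0, p1]), sp.Matrix([q0, q1])
fL = sp.sqrt(1 + sq(q)/L**2) \
     + dot(p, q)/(L**2*(1 + sp.sqrt(1 + sq(p)/L**2)))
comp = p*fL + q     # (p (+) q)_mu = p_mu f_L + q_mu  (f_R = 1)

def G(a):           # metric g_{mu nu}(k) = eta_{mu nu} + k_mu k_nu/L^2
    return eta + a*a.T/L**2

Jq = comp.jacobian(q)            # d(p (+) q)_mu / d q_rho
diff = G(comp) - Jq*G(q)*Jq.T    # isometry condition

# numerical spot-check at a generic rational point
vals = {p0: sp.Rational(3, 7), p1: sp.Rational(1, 5),
        q0: sp.Rational(2, 9), q1: sp.Rational(1, 3), L: 2}
assert all(abs(v) < 1e-12 for v in diff.subs(vals).evalf())
\end{verbatim}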
From the infinitesimal generators of translations
\begin{equation}
{\cal T}^\mu_\nu(p)\,=\,\left.\frac{\partial \left(k\oplus p\right)_\nu}{\partial k_\mu} \right\rvert_{k \rightarrow 0}\,=\,\delta^\mu_\nu \sqrt{1+\frac{p^2}{\Lambda^2}} \,,
\end{equation}
one can see that $T^\alpha=x^\nu{\cal T}^\alpha_\nu$ form the Snyder algebra
\begin{equation}
\{T^\alpha, T^\beta\} \,=\, \frac{1}{\Lambda^2} J^{\alpha\beta}\,.
\end{equation}
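This can be verified directly. The sketch below (illustrative; $1+1$ dimensions, signature $(+,-)$, canonical brackets $\{p_\nu, x^\mu\}=\delta^\mu_\nu$) checks the Snyder bracket in the Maggiore representation, where $J^{01}=x^0p^1-x^1p^0=-(x^0p_1+x^1p_0)$:
\begin{verbatim}
import sympy as sp

x0, x1, p0, p1, L = sp.symbols('x0 x1 p0 p1 Lambda', positive=True)

def pb(A, B):   # Poisson bracket with {p_nu, x^mu} = delta^mu_nu
    return sum(sp.diff(A, k)*sp.diff(B, x) - sp.diff(A, x)*sp.diff(B, k)
               for k, x in [(p0, x0), (p1, x1)])

# Maggiore representation: T^alpha = x^alpha sqrt(1 + p^2/Lambda^2),
# with p^2 = p_0^2 - p_1^2 in signature (+,-)
f = sp.sqrt(1 + (p0**2 - p1**2)/L**2)
T0, T1 = x0*f, x1*f

# Snyder algebra: {T^0, T^1} = J^{01}/Lambda^2
J01 = -(x0*p1 + x1*p0)
assert sp.simplify(pb(T0, T1) - J01/L**2) == 0
\end{verbatim}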
From linear Lorentz covariance, one can deduce that the dispersion relation $C(p)$ is just a function of $p^2$, and the Lorentz transformations both in the one and two-particle systems are linear (the same Lorentz transformations used in SR).
Different choices of momentum coordinates in which the metric takes a covariant form will lead to different representations of the Snyder kinematics. For the anti-de Sitter case, the DCL is the one obtained in Eq.~\eqref{DCLSnyder-2} with $(1/\Lambda^2)$ replaced by $-(1/\Lambda^2)$, since the anti-de Sitter metric is obtained from the de Sitter metric proposed at the beginning of this subsection by the same replacement.
When both coefficients $c_1$, $c_2$ are non-zero, the algebras of the generators of translations are those of the so-called hybrid models~\cite{Meljanac:2009ej}. The DCL in these cases can be obtained as a power expansion in $(1/\Lambda)$, requiring the translations to be isometries and their generators to close the desired algebra. With this procedure, one gets the same kinematics found in Ref.~\cite{Meljanac:2009ej}.
The DCL obtained when the generators of translations close a subalgebra (the case of $\kappa$-Poincaré) is the only associative one. The other compositions, obtained when the algebra is Snyder or any hybrid model, do not have this property (see Eqs.~\eqref{DCLSnyder-1} and~\eqref{DCLSnyder-2}). This is an important difference between the algebraic and geometric approaches: the only isotropic DRK obtained from the Hopf algebra approach is $\kappa$-Poincaré, since one asks the generators of translations to close an algebra (and then one finds an associative composition of momenta), eliminating any other option. With this proposal, identifying a correspondence between a DCL and the translations of a maximally symmetric momentum space whose generators close a certain algebra, we open up the possibility to construct more DRK in a simple way. It is clear that associativity is a crucial property for studying processes with a DRK, so somehow the $\kappa$-Poincaré scenario seems special. Note also that the two different perspectives (algebraic and geometrical approaches) have only one common DRK, which might indicate that $\kappa$-Poincaré is a preferred kinematics.
\section{Comparison with previous works}
\label{relative_locality_comparison}
In this section, we will compare the prescription followed in the previous sections with the one proposed in Ref.~\cite{AmelinoCamelia:2011bm}. This comparison can only be carried out for the $\kappa$-Poincaré kinematics since, as we will see, the associativity property of the composition law plays a crucial role. In order to make the comparison, we can differentiate with respect to $p_\tau$ the equation of the invariance of the tetrad under translations, Eq.~\eqref{T(a,k)}, written in terms of the DCL,
\begin{equation}
\frac{\partial e^\alpha_\nu(p\oplus q)}{\partial p_\tau} \,=\, \frac{\partial e^\alpha_\nu(p\oplus q)}{\partial (p\oplus q)_\sigma} \frac{\partial(p\oplus q)_\sigma}{\partial p_\tau} \,=\, \frac{\partial^2(p\oplus q)_\nu}{\partial p_\tau \partial q_\rho} e^\alpha_\rho(q)\,.
\end{equation}
One can find the second derivative of the DCL
\begin{equation}
\frac{\partial^2(p\oplus q)_\nu}{\partial p_\tau \partial q_\rho} \,=\, e^\rho_\alpha(q) \frac{\partial e^\alpha_\nu(p\oplus q)}{\partial (p\oplus q)_\sigma} \frac{\partial(p\oplus q)_\sigma}{\partial p_\tau}\,,
\end{equation}
where $e^\nu_\alpha$ is the inverse of $e^\alpha_\nu$, $e^\alpha_\nu e^\mu_\alpha=\delta^\mu_\nu$.
But also using Eq.~\eqref{T(a,k)}, one has
\begin{equation}
e^\rho_\alpha(q) \,=\, \frac{\partial(p\oplus q)_\mu}{\partial q_\rho} e^\mu_\alpha(p\oplus q)\,,
\label{magicformula2}
\end{equation}
and then
\begin{equation}
\frac{\partial^2(p\oplus q)_\nu}{\partial p_\tau \partial q_\rho} + \Gamma^{\sigma\mu}_\nu(p\oplus q) \,\frac{\partial(p\oplus q)_\sigma}{\partial p_\tau} \,\frac{\partial(p\oplus q)_\mu}{\partial q_\rho} \,=\, 0\,,
\label{geodesic_tetrad}
\end{equation}
where
\begin{equation}
\Gamma^{\sigma\mu}_\nu(k) \,\doteq\, - e^\mu_\alpha(k) \, \frac{\partial e^\alpha_\nu(k)}{\partial k_\sigma}\,.
\label{e-connection}
\end{equation}
It can be checked that the combination of tetrads and derivatives appearing in Eq.~\eqref{e-connection} in fact transforms like a connection under a change of momentum coordinates.
Ref.~\cite{Amelino-Camelia:2013sba} proposes another way to define a connection and a DCL in momentum space through parallel transport, establishing a link between these two ingredients. It is easy to check that the DCL obtained in this way satisfies Eq.~\eqref{geodesic_tetrad}. This equation determines the DCL for a given connection only if one imposes the associativity property of the composition. Comparing with that reference, one then concludes that the DCL obtained from translations leaving the form of the tetrad invariant is the associative composition law one finds by parallel transport, with the connection constructed from a tetrad and its derivatives as in Eq.~\eqref{e-connection}.
Finally, if the DCL is associative, then Eq.~\eqref{k-DCL} reduces to
\begin{equation}
(p\oplus_k q) \,=\, p\oplus\hat{k}\oplus q.
\end{equation}
Replacing $q$ by $(\hat{k}\oplus q)$ in Eq.~\eqref{geodesic_tetrad}, which is valid for any momenta ($p, q$), one obtains
\begin{equation}
\frac{\partial^2 (p \oplus \hat{k} \oplus q)_\nu}{\partial p_\tau \partial(\hat{k} \oplus q)_\rho}+\Gamma^{\sigma \mu}_\nu (p \oplus \hat{k} \oplus q) \frac{\partial (p \oplus \hat{k} \oplus q)_\sigma}{\partial p_\tau}\frac{\partial (p \oplus \hat{k} \oplus q)_\mu}{\partial(\hat{k} \oplus q)_\rho}\,=\,0\,.
\end{equation}
Multiplying by $\partial(\hat{k} \oplus q)_\rho/\partial q_\lambda$, one finds
\begin{equation}
\frac{\partial^2 (p \oplus \hat{k} \oplus q)_\nu}{\partial p_\tau \partial q_\lambda}+\Gamma^{\sigma \mu}_\nu (p \oplus \hat{k} \oplus q) \frac{\partial (p \oplus \hat{k} \oplus q)_\sigma}{\partial p_\tau}\frac{\partial (p \oplus \hat{k} \oplus q)_\mu}{\partial q_\lambda}\,=\,0\,.
\label{connection_1}
\end{equation}
Taking $p=q=k$ in Eq.~\eqref{connection_1}, one finally gets
\begin{equation}
\Gamma^{\tau \lambda}_\nu (k)\,=\,-\left.\frac{\partial^2 (p\oplus_{k}q)_\nu}{\partial p_\tau \partial q_\lambda}\right\rvert_{p,q \rightarrow k}\,,
\end{equation}
which is the same expression of Eq.~\eqref{k-connection} proposed in Ref.~\cite{AmelinoCamelia:2011bm}. This concludes that the connection of Eq.~\eqref{e-connection} constructed from the tetrad is the same connection given by the prescription developed in Ref.~\cite{AmelinoCamelia:2011bm} when the DCL is associative.
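The equality of the two connections can be checked explicitly in the $\kappa$-Poincaré example. The sketch below (illustrative; $1+1$ dimensions, minus-sign bicrossproduct DCL, antipode $\hat{k}=(-k_0,\,-k_1 e^{k_0/\Lambda})$ so that $k\oplus\hat{k}=0$) compares the connection of Eq.~\eqref{e-connection} with $-\partial^2(p\oplus\hat{k}\oplus q)_\nu/\partial p_\tau\partial q_\lambda$ evaluated at $p=q=k$:
\begin{verbatim}
import sympy as sp

k0, k1, p0, p1, q0, q1, L = sp.symbols('k0 k1 p0 p1 q0 q1 Lambda')

def oplus(a, b):               # bicrossproduct DCL, minus sign
    return (a[0] + b[0], a[1] + sp.exp(-a[0]/L)*b[1])

khat = (-k0, -k1*sp.exp(k0/L))                 # antipode of k
comp = oplus((p0, p1), oplus(khat, (q0, q1)))  # p (+) khat (+) q

# connection from the tetrad, Eq. (e-connection):
# Gamma^{tau lambda}_nu(k) = -e^lambda_alpha(k) d e^alpha_nu(k)/d k_tau
e = sp.Matrix([[1, 0], [0, sp.exp(-k0/L)]])    # e[alpha, mu]
einv = e.inv()                                 # einv[mu, alpha]
kv, pv, qv = [k0, k1], [p0, p1], [q0, q1]
at_k = {p0: k0, p1: k1, q0: k0, q1: k1}

for tau in range(2):
    for lam in range(2):
        for nu in range(2):
            g_tetrad = -sum(einv[lam, a]*sp.diff(e[a, nu], kv[tau])
                            for a in range(2))
            g_dcl = -sp.diff(comp[nu], pv[tau], qv[lam]).subs(at_k)
            assert sp.simplify(g_tetrad - g_dcl) == 0
\end{verbatim}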
\begin{comment}
In this chapter we discussed the modification of the kinematics due to a curvature of the momentum space, but we did not mention at all the possible effects that spacetime suffers from it. As we will see in the next chapter, the presence of a DRK produces nontrivial consequences in spacetime.
\end{comment}
\chapter{Spacetime from local interactions}
\label{chapter_locality}
\ifpdf
\graphicspath{{Chapter4/Figs/Raster/}{Chapter4/Figs/PDF/}{Chapter4/Figs/}}
\else
\graphicspath{{Chapter4/Figs/Vector/}{Chapter4/Figs/}}
\fi
\epigraph{Like the physical, the psychical is not necessarily in reality what it appears to us to be.}{Sigmund Freud}
In the previous chapter we have seen that a DRK can be understood from the geometry of a curved (maximally symmetric) momentum space. However, we have not yet discussed the effects that this curvature, and hence the kinematics, has on spacetime. A possible consequence of a DCL considered in numerous works is a noncommutative spacetime. In particular, in Refs.~\cite{Meljanac:2009ej,Battisti:2010sr}, a composition law is obtained from the product of plane waves for $\kappa$-Minkowski, Snyder and hybrid-model noncommutativities. Also, from the Hopf algebra perspective, it is possible to obtain a modified Heisenberg algebra with $\kappa$-Minkowski spacetime from a DCL through the ``pairing'' operation~\cite{Kosinski_paring}.
In all these works, however, a physical understanding of the relation between a DCL and a noncommutative spacetime is lacking. In this chapter, we will try to show how these ingredients are related, giving a physical intuition. As we will see in Sec.~\ref{sec:relative_locality_intro}, a DCL produces a loss of locality in canonical spacetime. From an action of free relativistic particles, the authors of Ref.~\cite{AmelinoCamelia:2011bm} derived such an effect by including an interaction term defined by the energy-momentum conservation, which is determined by the DCL. It is possible to understand this nonlocality from the following argument: since the total momentum can be viewed as the generator of translations in spacetime, its modification as a function of all the momenta will produce nontrivial translations. This means that only an observer placed where the interaction takes place will see the interaction as local, but not any other observer related to the first by a translation. Along this chapter, we will see that there are different ways to choose space-time coordinates (depending on momentum), which we call ``physical'' coordinates~\cite{Carmona:2017cry}, in which the interactions are local. We will see that there is a relationship between this approach and the results obtained through Hopf algebras, and also with the momentum space geometry studied in the previous chapter.
\section{Relative Locality}
\label{sec:relative_locality_intro}
In this section we explain the main results of Ref.~\cite{AmelinoCamelia:2011nt}. We start from the following action
\begin{equation}
S_{\text{total}}\,=\, S_{\text{free}}^{\text{in}}+S_{\text{free}}^{\text{out}} +S_{\text{int}}\,,
\label{eq:action}
\end{equation}
where the first part describes the free propagation of the $N$ incoming worldlines
\begin{equation}
S_{\text{free}}^{\text{in}}\,=\,\sum_{J=1}^{N}\int^{0}_{-\infty} ds \left(x^{\mu}_J \dot k^{J}_{\mu}+\mathcal{N}_J\left(C(k^{J})-m^2_J\right)\right)\,,
\label{eq:action_in}
\end{equation}
and the outgoing worldlines are given by the second term
\begin{equation}
S_{\text{free}}^{\text{out}}\,=\,\sum_{J=N+1}^{2N}\int_{0}^{\infty} ds \left(x^{\mu}_J \dot k^{J}_{\mu}+\mathcal{N}_J\left(C(k^{J})-m^2_J\right)\right)\,.
\label{eq:action_out}
\end{equation}
In the previous expressions $s$ plays the role of an arbitrary parameter characterizing the worldline of the particle and $\mathcal{N}_J$ is the Lagrange multiplier imposing on all particles the condition to be on mass shell
\begin{equation}
C(k^J)\,=\,m^2_J\,,
\end{equation}
for a Casimir $C(k^J)$ (which in principle is deformed).
The interaction term appearing in the action is the conservation law of momenta times a Lagrange multiplier
\begin{equation}
S_{\text{int}}\,=\,\left( \bigoplus\limits_{N+1\leq J\leq 2N} k^J_\nu(0)\,\,- \bigoplus\limits_{1\leq J\leq N} k^J_\nu(0)\right) \xi^\nu\,.
\label{eq:action_int}
\end{equation}
The parametrization $s$ is chosen in such a way that the interaction occurs at $s=0$ for every particle, and $\xi$ can be seen as a Lagrange multiplier imposing the momentum conservation at that point.
Varying the action and integrating by parts one finds
\begin{equation}
\delta S_{\text{total}}\,=\,\sum_J \int_{s_1}^{s_2} ds \left(\delta x^{\mu}_J \dot k^J_{\mu} - \delta k^{J}_{\mu}\left[\dot x^{\mu}_J-\mathcal{N}_J\frac{\partial C(k^J)}{\partial k^{J}_{\mu}} \right]\right)+\mathcal{R}\,,
\label{deltaS}
\end{equation}
where the term $\mathcal{R}$ contains the variation of $S_{\text{int}}$ and also the boundary terms appearing after the integration by parts, and $(s_1, s_2)$ is $(-\infty, 0)$ or $(0, \infty)$ depending on the incoming or outgoing character of the terms.
One finds
{\small
\begin{equation}
\begin{split}
\mathcal{R} \,=\,\left(\bigoplus\limits_{N+1\leq J\leq 2N}k^J_\nu(0)\,\,- \bigoplus\limits_{1\leq J\leq N} k^J_\nu(0)\right)& \delta\xi^\nu+ \sum_{J=1}^{N} \left(x^{\mu}_J (0) - \xi^{\nu} \frac{\partial}{\partial k^J_{\mu}} \left[ \bigoplus\limits_{1\leq I\leq N} k^I_\nu\right](0)\right)\delta k^J_{\mu}(0)\\- &
\sum_{J=N+1}^{2N} \left(x^{\mu}_J (0) -\xi^{\nu} \frac{\partial}{\partial k^J_{\mu}} \left[ \bigoplus\limits_{N+1\leq I\leq 2N} k^I_\nu\right](0)\right) \delta k^J_{\mu}(0)\,,
\end{split}
\end{equation}}
\normalsize
where the $x^{\mu}_J (0)$ are the space-time coordinates of the worldline at its final (initial) point for $1\leq J\leq N$ ($N+1\leq J\leq 2N$).
The worldlines of particles must obey the variational principle $\delta S_{\text{total}}=0$ for any variation $\delta\xi^\mu$, $\delta x_J^\mu$, $\delta k^J_\mu$. From the variation of the Lagrange multiplier of the interaction term, $\delta \xi^\nu$, one obtains the momentum conservation at the interaction point, and from the variation with respect to $\delta k^J_{\mu}(0)$ one finds\footnote{The variation of $\delta x^\mu_J(s)$ in Eq.~(\ref{deltaS}) implies constant momenta along each worldline.}
\begin{equation}
\begin{split}
x^{\mu}_J (0)\,&=\, \xi^{\nu} \frac{\partial}{\partial k^J_{\mu}} \left[\bigoplus\limits_{1\leq I\leq N} k^I_\nu\right] \, \text{for } J=1,\ldots N
\,,\\
x^{\mu}_J (0)\,&=\, \xi^{\nu} \frac{\partial}{\partial k^J_{\mu}} \left[\bigoplus\limits_{N+1\leq I\leq 2N} k^I_\nu\right] \, \text{for } J=N+1,\ldots 2N \,.
\label{eq:endWL}
\end{split}
\end{equation}
The transformation
\begin{equation}
\begin{split}
\delta \xi^\mu&\,=\,a^\mu, \quad
\delta x^\mu_J\,=\,a^\nu\frac{\partial}{\partial k^J_{\mu}} \left[\bigoplus\limits_{1\leq I\leq N} k^I_\nu\right] (J=1,\ldots N)\,,\\
\delta x^\mu_J&\,=\,a^\nu \frac{\partial}{\partial k^J_{\mu}} \left[\bigoplus\limits_{N+1\leq I\leq 2N} k^I_\nu\right] (J=N+1,\ldots 2N)\,,
\quad \delta k^J_\mu\,=\,0\,,
\label{eq:translation}
\end{split}
\end{equation}
is a symmetry of the action (translational invariance) connecting different solutions of the variational principle. We see from Eq.~\eqref{eq:endWL} that only an observer placed at the interaction point ($\xi^\mu=0$) will see the interaction as local (all the $x^\mu_J(0)$ coincide, being zero). One can choose the Lagrange multiplier $\xi^\mu$ so that the interaction is local for one observer, but any other observer will see it as nonlocal. This shows the loss of absolute locality, an effect baptized as relative locality.
In the next sections we will see that we can avoid this nonlocality choosing new space-time coordinates, which, in fact, do not commute.
\subsection{Construction of noncommutative spacetimes}
In the literature, a noncommutative spacetime is usually introduced through new space-time coordinates $\tilde{x}$ constructed from the canonical phase-space coordinates ($x$, $k$).\,\footnote{See Refs.~\cite{Meljanac:2016jwk},\cite{Loret:2016jrg},\cite{Carmona:2017oit} for recent references where this construction is used.} One can write these coordinates as linear functions of the canonical space-time coordinates, with momentum-dependent coefficients $\varphi^\mu_\nu(k)$, through the combination
\begin{equation}
\tilde{x}^\mu \,=\, x^\nu \,\varphi^\mu_\nu(k)\,.
\label{eq:NCspt}
\end{equation}
This set of functions has to reduce to the Kronecker delta when the momentum tends to zero (or when the high-energy scale tends to infinity) in order to recover the SR result. Using the usual Poisson brackets
\begin{equation}
\left\lbrace k_{\nu}\,,\,x^{\mu} \right\rbrace \,=\,\delta^{\mu}_{\nu}\,,
\end{equation}
the Poisson brackets of these new noncommutative space-time coordinates are
\begin{equation}
\begin{split}
\{\tilde{x}^\mu, \tilde{x}^\sigma\} &\,=\, \{ x^\nu \varphi^\mu_\nu(k), x^\rho \varphi^\sigma_\rho(k)\} \,=\, x^\nu \frac{\partial\varphi^\mu_\nu(k)}{\partial k_\rho} \,\varphi^\sigma_\rho(k) \,-\, x^\rho \frac{\partial\varphi^\sigma_\rho(k)}{\partial k_\nu} \,\varphi^\mu_\nu(k) \\
&=\, x^\nu \,\left(\frac{\partial\varphi^\mu_\nu(k)}{\partial k_\rho} \,\varphi^\sigma_\rho(k) \,-\, \frac{\partial\varphi^\sigma_\nu(k)}{\partial k_\rho} \,\varphi^\mu_\rho(k)\right)\,,
\end{split}
\label{eq:commNCspt}
\end{equation}
and in phase space leads to the modified Heisenberg algebra
\begin{equation}
\{k_\nu, \tilde{x}^\mu\}\,=\, \varphi^\mu_\nu(k)\,.
\end{equation}
Note that different choices of $\varphi^\mu_\nu(k)$ can lead to the same spacetime noncommutativity, leading to different representations of the same algebra. As it is shown in Appendix~\ref{ncst-rep}, different choices of canonical phase-space coordinates corresponding to different choices of momentum variables give different representations of the same space-time noncommutativity.
\subsection{Noncommutative spacetime in \texorpdfstring{$\kappa$}{k}-Poincaré Hopf algebra}
The formalism of Hopf algebras gives a DCL, which is referred to as the ``coproduct'', and from it one can obtain, through the mathematical procedure known as ``pairing'', the resulting modified phase space. For the case of $\kappa$-Poincaré in the bicrossproduct basis, as we have seen in the introduction, the coproduct of momenta $P_\mu$ is
\begin{equation}
\Delta(P_0)\,=\,P_0\otimes \mathbb{1}+ \mathbb{1}\otimes P_0 \,,\qquad \Delta(P_i)\,=\,P_i\otimes \mathbb{1} + e^{-P_0/\Lambda} \otimes P_i\,,
\label{coproduct}
\end{equation}
which leads to the composition law~\eqref{kappa-DCL} obtained in the previous chapter
\begin{equation}
(p\oplus q)_0\,=\,p_0 + q_0 \,, \qquad (p\oplus q)_i \,=\,p_i + e^{-p_0/\Lambda} q_i\,.
\label{composition}
\end{equation}
The ``pairing'' operation of the Hopf algebra formalism allows us to determine the modified Heisenberg algebra for a given coproduct~\cite{KowalskiGlikman:2002we,KowalskiGlikman:2002jr}. The bracket (pairing) $\langle *,* \rangle$ between momentum and position variables is defined as
\begin{equation}
\langle p_\nu , x^\mu\rangle \,=\, \delta^\mu_\nu\,.
\end{equation}
This bracket has the following properties
\begin{equation}
\langle p, x y\rangle \,=\, \langle p_{(1)}, x \rangle \langle p_{(2)}, y \rangle\,,\qquad \langle pq, x \rangle \,=\, \langle p, x_{(1)} \rangle \langle q, x_{(2)}\rangle\,,
\label{eq:pairing_pxx}
\end{equation}
where we have used the notation
\begin{equation}
\Delta\, t\,=\,\sum \,t_{(1)}\otimes t_{(2)}\,.
\end{equation}
One can see that by definition
\begin{equation}
\langle \mathbb{1} , \mathbb{1}\rangle \,=\, \mathbb{1}\,.
\end{equation}
Also, since momenta commute, the position coproduct is
\begin{equation}
\Delta\, x^\mu\,=\,\mathbb{1}\otimes x^\mu+ x^\mu \otimes \mathbb{1}\,.
\end{equation}
In order to determine the Poisson brackets between the momentum and position one uses
\begin{equation}
\lbrace p,x \rbrace\,=\,x_{(1)}\langle p_{(1)}, x_{(2)}\rangle p_{(2)}- x\,p \,,
\label{eq:parentesis_xpp}
\end{equation}
where $x\,p$ is the usual multiplication.
For the bicrossproduct basis, one obtains from Eq.~\eqref{eq:pairing_pxx}
\begin{equation}
\langle k_i, x^0 x^j \rangle\,=\,-\frac{\delta^j_i}{\Lambda}\,,\qquad \langle k_i, x^j x^0 \rangle\,=\,0 \,,
\label{eq:parentesis_pxx}
\end{equation}
and then
\begin{equation}
\lbrace x^0 , x^i \rbrace\,=\,-\frac{1}{\Lambda}x^i \,.
\label{eq:parentesis_xx}
\end{equation}
From Eq.~\eqref{eq:parentesis_xpp} one can deduce the rest of the phase-space Poisson brackets (which are the ones shown in~\eqref{eq:pairing_intro})
\begin{equation}
\lbrace\tilde{x}^0, k_0\rbrace \,=\,-1\,,\qquad \lbrace\tilde{x}^0, k_i\rbrace \,=\,\frac{k_i}{\Lambda}\,,\qquad \lbrace\tilde{x}^i, k_j\rbrace \,=\,-\delta^i_j\,,
\qquad \lbrace\tilde{x}^i, k_0\rbrace \,=\,0\,.
\label{eq:parentesis_xp}
\end{equation}
In the case of $\kappa$-Minkowski spacetime, the set of functions $\varphi^\mu_\nu(k)$ that leads to Eqs.~\eqref{eq:parentesis_xx}-\eqref{eq:parentesis_xp} is
\begin{equation}
\varphi^0_0(k)=1 \,,\qquad \varphi^0_i(k)=-\frac{k_i}{\Lambda} \,,\qquad \varphi^i_j(k)=\delta^i_j \,,\qquad \varphi^i_0(k)=0\,.
\label{eq:phibicross}
\end{equation}
We can also use a covariant notation, as we did in Ch.~\ref{chapter_second_order}, and rewrite Eq.~\eqref{eq:phibicross} as
\begin{equation}
\varphi^\mu_\nu(k)\,=\,\delta^\mu_\nu-\frac{1}{\Lambda} n^\mu k_\nu + \frac{k\cdot n}{\Lambda} n^\mu n_\nu\,,
\label{phi}
\end{equation}
where, as in Ch.~\ref{chapter_second_order}, $n^\mu$ is a fixed timelike vector with components $n^\mu=(1,0,0,0)$.
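The noncommutativity~\eqref{eq:parentesis_xx} and the deformed Heisenberg algebra~\eqref{eq:parentesis_xp} can be recovered from this set of functions with a short symbolic computation (illustrative sketch, $1+1$ dimensions):
\begin{verbatim}
import sympy as sp

x0, x1, k0, k1, L = sp.symbols('x0 x1 k0 k1 Lambda')

def pb(A, B):   # Poisson bracket with {k_nu, x^mu} = delta^mu_nu
    return sum(sp.diff(A, k)*sp.diff(B, x) - sp.diff(A, x)*sp.diff(B, k)
               for k, x in [(k0, x0), (k1, x1)])

# physical coordinates from Eq. (eq:phibicross), 1+1 dimensions
xt0 = x0 - x1*k1/L      # xtilde^0 = x^0 - x^i k_i/Lambda
xt1 = x1                # xtilde^i = x^i

# kappa-Minkowski noncommutativity, Eq. (eq:parentesis_xx)
assert sp.simplify(pb(xt0, xt1) + xt1/L) == 0

# deformed Heisenberg algebra, Eq. (eq:parentesis_xp)
assert pb(xt0, k0) == -1
assert sp.simplify(pb(xt0, k1) - k1/L) == 0
assert pb(xt1, k1) == -1 and pb(xt1, k0) == 0
\end{verbatim}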
We have seen that, in the context of Hopf algebras, a DCL defines a noncommutative spacetime through the pairing operation. This connection is established from a purely mathematical perspective. As we have shown, an action involving an interaction term with a DCL leads to nonlocal effects. What we propose is that the spacetime associated with a DCL could be defined by asking for locality of interactions in such spacetime~\cite{Carmona:2017cry,Carmona:2019vsh}. In the next sections, we will discuss different possibilities to find such a spacetime.
\section{First attempt to implement locality}
\label{sec:firstattempt}
For the sake of simplicity, let us consider the process of two particles in the initial state with momenta $k$, $l$ and a total momentum $k\oplus l$, giving two particles in the final state with momenta $p$, $q$ and total momentum $p\oplus q$, i.e. we are considering the particular case $N=2$ of the relative locality model presented at the beginning of Sec.~\ref{sec:relative_locality_intro}. From Eq.~\eqref{eq:endWL} we find
\begin{equation}
\begin{split}
w^\mu(0) \,=&\, \xi^\nu \frac{\partial(k\oplus l)_\nu}{\partial k_\mu}\,,\qquad x^\mu(0) \,=\, \xi^\nu \frac{\partial(k\oplus l)_\nu}{\partial l_\mu}\,,\\
y^\mu(0) \,=&\, \xi^\nu \frac{\partial(p\oplus q)_\nu}{\partial p_\mu} \,, \qquad z^\mu(0) \,=\, \xi^\nu \frac{\partial(p\oplus q)_\nu}{\partial q_\mu}\,,
\end{split}
\end{equation}
where $w^\mu(0)$, $x^\mu(0)$ are the space-time coordinates of the end points of the worldlines of the initial state particles with momenta $k$, $l$ and $y^\mu(0)$, $z^\mu(0)$ the coordinates of the starting points of the worldlines of the final state particles with momenta $p$, $q$.
When the composition law is the sum $p\oplus q = p + q$, which is the case of SR, the interaction is local $w^\mu(0) = x^\mu(0) = y^\mu(0) = z^\mu(0) = \xi^\mu$, so one can define events in spacetime through the interaction of particles. This is no longer possible when the DCL is nonlinear in momenta.
In order to implement locality, one can introduce new space-time coordinates $\tilde{x}$ as in Eq.~\eqref{eq:NCspt}:
\begin{equation}
\tilde{x}^\mu \,=\, x^\nu \,\varphi^\mu_\nu(k) ,
\label{eq:firstspt}
\end{equation}
the functions $\varphi^\mu_\nu(k)$ being the same for all particles, so that in these coordinates the end and starting points of the worldlines are
\begin{align}
& \tilde{w}^\mu(0) \,=\, \xi^\nu \frac{\partial (k\oplus l)_\nu}{\partial k_\rho} \, \varphi^\mu_\rho(k) \,,\qquad \tilde{x}^\mu(0) \,=\, \xi^\nu \frac{\partial (k\oplus l)_\nu}{\partial l_\rho} \, \varphi^\mu_\rho(l) \,,
\nonumber \\ & \tilde{y}^\mu(0) \,=\, \xi^\nu \frac{\partial (p\oplus q)_\nu}{\partial p_\rho} \, \varphi^\mu_\rho(p)\,,\qquad \tilde{z}^\mu(0) \,=\, \xi^\nu \frac{\partial (p\oplus q)_\nu}{\partial q_\rho} \, \varphi^\mu_\rho(q) \,.
\end{align}
Therefore, the interaction will be local if one can find for a given DCL a set of functions $\varphi^\mu_\nu$ such that\footnote{Note that the conservation of momenta implies that $k\oplus l = p\oplus q$.}
\begin{equation}
\frac{\partial(k\oplus l)_\nu}{\partial k_\rho} \, \varphi^\mu_\rho(k)
\,=\, \frac{\partial(k\oplus l)_\nu}{\partial l_\rho} \, \varphi^\mu_\rho(l) \,=\, \frac{\partial(p\oplus q)_\nu}{\partial p_\rho} \,\varphi^\mu_\rho(p) \,=\, \frac{\partial(p\oplus q)_\nu}{\partial q_\rho} \, \varphi^\mu_\rho(q) \,,
\label{loc0}
\end{equation}
and then having $\tilde{w}^\mu(0)=\tilde{x}^\mu(0)=\tilde{y}^\mu(0)=\tilde{z}^\mu(0)$, making possible the definition of an event in this new spacetime. We can now consider the limit in which one of the momenta, $l$, goes to zero, using that $\lim_{l\to 0} (k\oplus l) = k$,\footnote{Remember the consistency condition of the DCL Eq.~\eqref{eq:demo1}.} leading to the conservation law $k=p\oplus q$; then, Eq.~(\ref{loc0}) implies that
\begin{equation}
\boxed{\varphi^\mu_\nu(p\oplus q) \,=\, \frac{\partial (p\oplus q)_\nu}{\partial p_\rho} \,\varphi^\mu_\rho(p) \,=\, \frac{\partial (p\oplus q)_\nu}{\partial q_\rho} \, \varphi^\mu_\rho(q)} \,.
\label{loc1}
\end{equation}
Taking the limit $p\to 0$ of the previous equation one finds
\begin{equation}
\varphi^\mu_\nu(q) \,=\, \lim_{p\to 0} \frac{\partial (p\oplus q)_\nu}{\partial p_\mu} \,,
\label{eq:limit1}
\end{equation}
where we have taken into account that $\lim_{p\to 0} \varphi^\mu_\rho(p) = \delta^\mu_\rho$.\footnote{Remember the conditions on the small momentum limit over $\varphi^\mu_\nu(p)$ explained after Eq.~\eqref{eq:NCspt}.} Moreover, taking the limit $q\to 0$ one has
\begin{equation}
\varphi^\mu_\nu(p) \,=\, \lim_{q\to 0} \frac{\partial (p\oplus q)_\nu}{\partial q_\mu} \,.
\label{eq:limit2}
\end{equation}
If we change the labels $p$ and $q$ in Eq.~\eqref{eq:limit2} and compare with Eq.~\eqref{eq:limit1}, we can conclude that
\begin{equation}
\lim_{p\to 0} \frac{\partial (p\oplus q)_\nu}{\partial p_\mu} \,=\,\lim_{p\to 0} \frac{\partial (q\oplus p)_\nu}{\partial p_\mu}\,.
\label{eq:limitsym}
\end{equation}
This condition is not satisfied by every DCL. In fact, one can see that a symmetric DCL
\begin{equation}
p\oplus q = q\oplus p\,,
\end{equation}
satisfies Eq.~\eqref{eq:limitsym}. However, we know that the $\kappa$-Poincaré Hopf algebra composition law does not fulfill this requirement (see Eq.~\eqref{composition}).
Moreover, in Appendix~\ref{append-commut} it is proven that this way of implementing locality, Eq.~\eqref{loc1}, gives rise to $\tilde{x}$ coordinates which are in fact commutative, so one can identify new variables $\tilde{p}_\mu=g_\mu(p)$ satisfying $\{\tilde{p}_\nu, \tilde{x}^\mu\} = \delta^\mu_\nu$, which correspond to a linear DCL, $[\tilde{p}\,\tilde{\oplus}\, \tilde{q}]_\mu = \tilde{p}_\mu + \tilde{q}_\mu$. This tells us that this implementation is related by a canonical transformation to the SR variables $(\tilde{x},\tilde{p})$, and then the new spacetime obtained by asking for locality is just the spacetime of SR. Since the previous procedure does not allow us to study a generic (noncommutative) composition law (as is the case of $\kappa$-Poincaré), we have to find some way to go beyond Eq.~\eqref{loc1}.
\section{Second attempt: two different spacetimes in the two-particle system}
\label{section_second_attempt}
We have seen in the previous chapters that, in order to implement a relativity principle in a DRK with a generic composition law, nontrivial Lorentz transformations in the one- and two-particle systems are required. In general, the two-particle transformations mix both momentum variables. It is then not strange that the noncommutative coordinates one should introduce in order to have local interactions for a generic DCL should also mix momenta. One possible way to proceed is to consider the simple case
\begin{equation}
\tilde{y}^\mu \,=\, y^\nu \,\varphi_{L\,\nu}^{\,\mu}(p, q) \,,\quad
\tilde{z}^\mu \,=\, z^\nu \, \varphi_{R\,\nu}^{\,\mu}(p, q) \,.
\label{y-z-coordinates}
\end{equation}
The interactions will be local if the following equation holds
\begin{equation}
\boxed{\varphi^\mu_\nu(p\oplus q) \,=\, \frac{\partial (p\oplus q)_\nu}{\partial p_\rho} \,\varphi_{L\,\rho}^{\,\mu}(p, q) \,=\, \frac{\partial (p\oplus q)_\nu}{\partial q_\rho} \, \varphi_{R\,\rho}^{\,\mu}(p, q)} \,.
\label{loc2}
\end{equation}
We now define the functions $\phi_L$, $\phi_R$ through the composition law $p\oplus q$ as
\begin{equation}
\phi_{L\,\sigma}^{\:\:\nu}(p, q) \,\frac{\partial(p\oplus q)_\nu}{\partial p_\rho} \,=\, \delta^\rho_\sigma\,,\quad
\phi_{R\,\sigma}^{\:\:\nu}(p, q) \,\frac{\partial(p\oplus q)_\nu}{\partial q_\rho} \,=\, \delta^\rho_\sigma \,.
\end{equation}
These functions allow us to write the spacetime of a two-particle system given the spacetime of a one-particle system (i.e. $\varphi$):
\begin{equation}
\varphi_{L\,\sigma}^{\:\:\mu}(p, q) \,=\, \phi_{L\,\sigma}^{\:\:\nu}(p, q) \,\, \varphi^\mu_\nu(p\oplus q) \,,\quad
\varphi_{R\,\sigma}^{\:\:\mu}(p, q) \,=\, \phi_{R\,\sigma}^{\:\:\nu}(p, q) \,\, \varphi^\mu_\nu(p\oplus q)\,.
\label{phiL-phiR-phi}
\end{equation}
As $\phi_{L\,\sigma}^{\:\:\nu}(p, 0) = \phi_{R\,\sigma}^{\:\:\nu}(0, q) = \delta^\nu_\sigma$, then
\begin{equation}
\varphi_{L\,\sigma}^{\:\:\mu}(p, 0) = \varphi^\mu_\sigma(p)\,, \quad \varphi_{R\,\sigma}^{\:\:\mu}(0, q) = \varphi^\mu_\sigma(q)\,,
\end{equation}
which is the result of taking the limits $q\to 0$, $p\to 0$ in Eq.~(\ref{loc2}).
Then, given a function $\varphi$ and a DCL, without any relation between them, locality can always be implemented. However, if the DCL is constructed from the multiplication of plane waves with a noncommutative spacetime~\cite{Meljanac:2009ej},\cite{Battisti:2010sr} or in the Hopf algebra framework~\cite{Lukierski:1991pn}, this is not the case: given a specific representation of a particular noncommutativity, one and only one DCL is obtained. There is therefore an ambiguity in how to select these two ingredients from the perspective we are considering here. This shows that an additional criterion should be looked for in order to establish such a connection.
A possible way to restrict these two ingredients is to consider the relation given by the geometrical interpretation studied in Ch.~\ref{chapter_curved_momentum_space}. In order to reproduce the result obtained in the previous chapter, and then the relation between DCL and spacetime given by the Hopf algebras formalism, we ask that in the two-particle system the spacetime of one of the particles should not depend on the other momentum,
\begin{equation}
\varphi^{\:\:\mu}_{R\rho}(p, q) \,=\, \varphi^{\:\:\mu}_{R\rho}(0, q) \,=\,\varphi^\mu_\rho(q) \,,
\label{eq:restriction_R}
\end{equation}
and Eq.~(\ref{loc2}) implies that
\begin{equation}
\varphi^\mu_\nu(p\oplus q) \,=\, \frac{\partial (p\oplus q)_\nu}{\partial q_\rho} \,\varphi^\mu_\rho(q)\,,
\label{varphi-oplus}
\end{equation}
which determines the DCL for a given noncommutativity (i.e., for a given function $\varphi$). Taking the limit $p\to 0$ one has
\begin{equation}
\varphi^\mu_\nu(p) \,=\, \lim_{q\to 0} \frac{\partial (p\oplus q)_\nu}{\partial q_\mu}\,,
\label{magic_formula}
\end{equation}
and therefore, it is possible to determine the one-particle spacetime given a certain composition law. Eq.~(\ref{magic_formula}) can be interpreted in a simple way: the infinitesimal change of the momentum variable $p_\mu$ generated by the noncommutative space-time coordinates $\tilde{x}$ with parameters $\epsilon$ is
\begin{equation}
\delta p_\mu \,=\, \epsilon_\nu \lbrace\tilde{x}^\nu, p_\mu\rbrace \,=\, - \epsilon_\nu \varphi^\nu_\mu(p) \,=\, - \epsilon_\nu \lim_{q\to 0} \frac{\partial(p\oplus q)_\mu}{\partial q_\nu} \,=\, - \left[(p\oplus \epsilon)_\mu - p_\mu\right] \,.
\label{deltap}
\end{equation}
The noncommutative coordinates can be interpreted as the translation generators in momentum space defined by the DCL. The interpretation of the DCL as the generator of (right-) translations in momentum space is the same one found in the previous chapter, which is obvious since we have imposed the same relation, obtained in the geometrical context, between the composition law and the tetrad of the momentum space (observe that in Eqs.~\eqref{varphi-oplus}, \eqref{T(a,k)} the set of functions giving the noncommutativity plays the same role as the tetrad of the momentum space)~\footnote{Note that in fact the $\varphi$ functions transform under a canonical transformation~\eqref{ncst-rep} as a tetrad does under a change of momentum coordinates (see Appendix~\ref{ncst-rep}).}. Then, one can construct the physical coordinates by multiplying the canonical space-time coordinates by the momentum space tetrad. This restricts the possible noncommutativities if one wants to keep a relativistic kinematics obtained from a geometrical interpretation, since only $\kappa$-Minkowski is allowed (see the discussion of Sec.~\ref{subsection_kappa_desitter}). In order to study other possible DRK in the geometrical framework one has to lift the restriction imposed in Eq.~\eqref{eq:restriction_R}.
Similar results would have been obtained if we had considered instead the spacetime of the particle with momentum $p$ to be independent of the particle with momentum $q$. In this case one would have
\begin{equation}
\varphi^{\:\:\mu}_{L\rho}(p, q) \,=\, \varphi^{\:\:\mu}_{L\rho}(p, 0) \,=\,\varphi^\mu_\rho(p)\,,
\end{equation}
\begin{equation}
\varphi^\mu_\nu(p\oplus q) \,=\, \frac{\partial (p\oplus q)_\nu}{\partial p_\rho} \,\varphi^\mu_\rho(p) \,,
\label{varphi-oplus2}
\end{equation}
\begin{equation}
\varphi^\mu_\nu(q) \,=\, \lim_{p\to 0} \frac{\partial (p\oplus q)_\nu}{\partial p_\mu}\,,
\label{eq:phi-tau}
\end{equation}
and
\begin{equation}
\delta p_\mu \,=\,- \left[(\epsilon\oplus p)_\mu - p_\mu\right] \,.
\label{eq:TP}
\end{equation}
Then, for the $\kappa$-Minkowski noncommutativity with the proposed prescription, given a DCL one obtains two possible different noncommutative spacetimes (different representations of the same algebra up to a sign) given by Eqs.~\eqref{magic_formula}, \eqref{eq:phi-tau}\footnote{This ambivalence materializes also in the geometrical framework since the relation between tetrad and translations (composition law) is the same as in the locality framework, causing that the same DCL leads to two different coordinate representations of de Sitter space (and then different tetrads).}. This ambiguity, together with the possible existence of a privileged choice of (physical) momentum variables, are open problems deserving further study.
\subsection{Application to \texorpdfstring{$\kappa$}{k}-Poincaré}
In this subsection we will see how to implement locality in the particular case of $\kappa$-Poincaré kinematics. We start by considering the noncommutativity of $\kappa$-Minkowski
\begin{equation}
\lbrace\tilde{x}^\mu, \tilde{x}^\nu\rbrace \,=\, \frac{1}{\Lambda} \,\left( \tilde{x}^\mu n^\nu - \tilde{x}^\nu n^\mu \right) \,,
\end{equation}
where $\varphi^\mu_\nu(k)$ is such that (using \eqref{eq:NCspt} and \eqref{eq:commNCspt})
\begin{equation}
\frac{\partial\varphi^\mu_\alpha(k)}{\partial k_\beta} \varphi^\nu_\beta(k) - \frac{\partial\varphi^\nu_\alpha(k)}{\partial k_\beta} \varphi^\mu_\beta(k) \,=\, \frac{1}{\Lambda} \,\left( \varphi^\mu_\alpha(k) n^\nu - \varphi^\nu_\alpha(k) n^\mu \right)\,.
\end{equation}
For the sake of simplicity, we will take $\varphi^\mu_\nu(k)$ to be the one appearing in the bicrossproduct basis, Eq.~\eqref{phi}. Imposing $\varphi_{L\,\nu}^{\:\:\mu}(p, q)\,=\, \varphi^\mu_\nu(p)$, we can find unequivocally the DCL from Eq.~(\ref{varphi-oplus2}). The result (see Appendix~\ref{append-bicross}) is the DCL obtained in that basis,~\eqref{composition}. As we saw in Ch.~\ref{chapter_curved_momentum_space}, if we consider the functions $\varphi^\mu_\nu(k)$ to be the tetrad in momentum space of Eq.~\eqref{bicross-tetrad} and we impose $\varphi_{R\,\nu}^{\:\:\mu}(p, q)\,=\, \varphi^\mu_\nu(q)$, we obtain exactly the same DCL, the one we understood in that chapter as the translations in a de Sitter momentum space. With all this we see that the framework of Hopf algebras is contained as a particular case in our proposal of implementation of locality.
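The statement that the bicrossproduct pair $(\varphi,\oplus)$ satisfies Eq.~\eqref{varphi-oplus2} can be checked componentwise. The following sketch is an illustrative verification in $1+1$ dimensions:
\begin{verbatim}
import sympy as sp

p0, p1, q0, q1, L = sp.symbols('p0 p1 q0 q1 Lambda')

def oplus(a, b):    # bicrossproduct DCL, Eq. (composition), 1+1 dim.
    return (a[0] + b[0], a[1] + sp.exp(-a[0]/L)*b[1])

def phi(a):         # phi^mu_nu of Eq. (eq:phibicross): phi(a)[mu][nu]
    return [[1, -a[1]/L], [0, 1]]

p, q = (p0, p1), (q0, q1)
P = oplus(p, q)
pv = [p0, p1]

# Eq. (varphi-oplus2):
# phi^mu_nu(p (+) q) = d(p (+) q)_nu/dp_rho  phi^mu_rho(p)
for mu in range(2):
    for nu in range(2):
        rhs = sum(sp.diff(P[nu], pv[rho])*phi(p)[mu][rho]
                  for rho in range(2))
        assert sp.simplify(phi(P)[mu][nu] - rhs) == 0
\end{verbatim}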
In Ch.~\ref{chapter_curved_momentum_space} we have shown how to implement the relativity principle from geometrical considerations. Here we will follow another approach without any mention of geometry, which leads to another implementation of the relativity principle. The DLT in the one-particle system is obtained, given the function $\varphi^\mu_\nu(k)$ of \eqref{phi}, by asking the noncommutative spacetime to form a ten-dimensional Lie algebra (see Appendix~\ref{append-2pLT}), obtaining~\eqref{eq:j_momentum_space}; therefore, the Casimir and the Lorentz transformation in the two-particle system are the ones obtained in Sec.~\ref{subsection_kappa_desitter}, Eqs.~\eqref{eq:casimir_momentum_space} and~\eqref{eq:jr_momentum_space} respectively.
In order to complete the discussion of the $\kappa$-Poincaré algebra from the point of view of locality of interactions, one can determine $\varphi_{R\,\nu}^{\:\:\mu}(p, q)$ through Eq.~\eqref{phiL-phiR-phi}:
\begin{equation}
\varphi_{R\,\nu}^{\:\:\mu}(p, q)\,=\,\delta^\mu_\nu \,e^{pn/\Lambda}+\frac{1}{\Lambda}n^\mu\left(n_\nu(e^{pn/\Lambda}\,pn+qn+(1-e^{pn/\Lambda})\,\Lambda)-e^{pn/\Lambda}\,p_\nu-q_\nu\right) ,
\end{equation}
or, in components,
\begin{equation}
\varphi_{R\,0}^{\:\:0}(p, q)\,=\,1 \,,\quad \varphi_{R\,0}^{\:\:i}(p, q)\,=\,0 \,,\quad \varphi_{R\,i}^{\:\:0}(p, q)\,=\,-\frac{e^{p_0/\Lambda}p_i+q_i}{\Lambda}\,, \quad \varphi_{R\,j}^{\:\:i}(p, q)\,=\,e^{p_0/\Lambda}\delta^i_j \,.
\end{equation}
As expected,
\begin{equation}
\varphi_{R\,\nu}^{\:\:\mu}(0, q)\,=\, \varphi^\mu_\nu(q)\,.
\end{equation}
Now, we can compute the two-particle phase-space Poisson brackets that are different from zero:
\begin{equation}
\begin{split}
\lbrace\tilde{y}^0, \tilde{y}^i\rbrace \,=\,-\frac{\tilde{y}^i}{\Lambda}\,,\quad \lbrace\tilde{y}^0, p_0\rbrace \,=\,-1\,,\quad \lbrace\tilde{y}^0, p_i\rbrace \,=\,\frac{p_i}{\Lambda}\,,\quad \lbrace\tilde{y}^i, p_j\rbrace \,=\,-\delta^i_j\,, \quad \lbrace\tilde{y}^0, \tilde{z}^i\rbrace \,=\,-\frac{\tilde{z}^i}{\Lambda},\\
\lbrace\tilde{y}^i, \tilde{z}^0\rbrace \,=\,\frac{\tilde{z}^i}{\Lambda}\,,\quad \lbrace\tilde{z}^0, \tilde{z}^i\rbrace \,=\,-\frac{\tilde{z}^i}{\Lambda}\,,\quad \lbrace\tilde{z}^0, q_0\rbrace \,=\,-1\,,\quad \lbrace\tilde{z}^0, q_i\rbrace \,=\,\frac{e^{p_0/\Lambda}p_i+q_i}{\Lambda}\,,\quad \lbrace\tilde{z}^i, q_j\rbrace \,=\,-e^{p_0/\Lambda}\delta^i_j\,.
\end{split}
\end{equation}
Note that all the Poisson brackets of the two space-time coordinates close an algebra, being independent of momenta.
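These brackets follow from a direct computation. The sketch below (illustrative, $1+1$ dimensions) reproduces them, together with the mixed bracket $\lbrace\tilde{y}^i,\tilde{z}^0\rbrace=\tilde{z}^i/\Lambda$ obtained in the same way:
\begin{verbatim}
import sympy as sp

y0, y1, z0, z1, p0, p1, q0, q1, L = sp.symbols(
    'y0 y1 z0 z1 p0 p1 q0 q1 Lambda')

def pb(A, B):   # brackets with {p_nu, y^mu} = {q_nu, z^mu} = delta
    pairs = [(p0, y0), (p1, y1), (q0, z0), (q1, z1)]
    return sum(sp.diff(A, k)*sp.diff(B, x) - sp.diff(A, x)*sp.diff(B, k)
               for k, x in pairs)

# two-particle coordinates, 1+1 dim.: ytilde from phi(p),
# ztilde from phi_R(p, q) given in the text
yt0, yt1 = y0 - y1*p1/L, y1
zt0 = z0 - z1*(sp.exp(p0/L)*p1 + q1)/L
zt1 = z1*sp.exp(p0/L)

assert sp.simplify(pb(yt0, yt1) + yt1/L) == 0
assert sp.simplify(pb(zt0, zt1) + zt1/L) == 0
assert sp.simplify(pb(yt0, zt1) + zt1/L) == 0
assert sp.simplify(pb(yt1, zt0) - zt1/L) == 0   # mixed bracket
assert sp.simplify(pb(zt0, q1) - (sp.exp(p0/L)*p1 + q1)/L) == 0
assert pb(zt0, q0) == -1
assert sp.simplify(pb(zt1, q1) + sp.exp(p0/L)) == 0
\end{verbatim}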
The one-particle noncommutative spacetime we get from locality when we impose $\varphi_{L\,\nu}^{\:\:\mu}(p, q)\,=\, \varphi^\mu_\nu(p)$ is the one obtained through the pairing operation in the Hopf algebra framework. This leads us to interpret that algebraic procedure from the physical criteria of imposing locality of interactions, understanding how a noncommutative spacetime crops up (and thus, a modification of the Poisson brackets of phase-space coordinates) in a natural way when a DCL is considered.
\section{Third attempt: mixing of space-time coordinates}
\label{sec_st_locality_third}
We have seen in the previous section that, in the way proposed to implement locality, there is no restriction on the noncommutative spacetime, nor on the composition law. Any combination of both ingredients admits the implementation of locality.
In this section, we pose another way to implement locality, imposing that the noncommutative coordinates are defined as a sum of two terms, each one having only the phase-space coordinates of one of the particles,
\begin{equation}
\tilde{y}^\alpha \,=\, y^\mu \varphi^\alpha_\mu(p) + z^\mu \varphi^{(2)\alpha}_{(1)\mu}(q)\,, \quad\quad
\tilde{z}^\alpha \,=\, z^\mu \varphi^\alpha_\mu(q) + y^\mu \varphi^{(1)\alpha}_{(2)\mu}(p)\,.
\end{equation}
We impose that $\varphi^{(2)\alpha}_{(1)\mu}(0) = \varphi^{(1)\alpha}_{(2)\mu}(0) = 0$ so, when one of the momenta tends to zero, the one-particle coordinates are $\tilde{x}^\alpha = x^\mu \varphi^\alpha_\mu(k)$.
Locality in the generalized spacetime requires to find a set of functions $\varphi^\alpha_\mu(k)$, $\varphi^{(2)\alpha}_{(1)\mu}(k)$ and $\varphi^{(1)\alpha}_{(2)\mu}(k)$ satisfying the set of equations
{\small \begin{equation}
\boxed{\varphi^\alpha_\mu(p\oplus q)\,=\,\frac{\partial(p\oplus q)_\mu}{\partial p_\nu} \varphi^\alpha_\nu(p) +
\frac{\partial(p\oplus q)_\mu}{\partial q_\nu} \varphi^{(2)\alpha}_{(1)\nu}(q) =\frac{\partial(p\oplus q)_\mu}{\partial q_\nu} \varphi^\alpha_\nu(q) +
\frac{\partial(p\oplus q)_\mu}{\partial p_\nu} \varphi^{(1)\alpha}_{(2)\nu}(p)}\,.
\label{loc-eq}
\end{equation}}
\normalsize
The set of functions $\varphi^{(2)\alpha}_{(1)\mu}(k)$ and $\varphi^{(1)\alpha}_{(2)\mu}(k)$ can be obtained given $\varphi^\alpha_\mu(k)$ and the DCL taking the limit $p\to 0$ or $q\to 0$ in \eqref{loc-eq}
\begin{equation}
\varphi^{(2)\alpha}_{(1)\mu}(q) \,=\, \varphi^\alpha_\mu(q) - \lim_{k\to 0} \frac{\partial(k\oplus q)_\mu}{\partial k_\alpha}, \quad\quad
\varphi^{(1)\alpha}_{(2)\mu}(p) \,=\, \varphi^\alpha_\mu(p) - \lim_{k\to 0} \frac{\partial(p\oplus k)_\mu}{\partial k_\alpha}\,.
\label{eq:phi12}
\end{equation}
Inserting the functions $\varphi^{(2)}_{(1)}$, $\varphi^{(1)}_{(2)}$ into the locality equations, we find
\begin{align}
& \frac{\partial(p\oplus q)_\mu}{\partial q_\nu} \, \lim_{l\to 0} \frac{\partial(l\oplus q)_\nu}{\partial l_\alpha} \,=\, \frac{\partial(p\oplus q)_\mu}{\partial p_\nu} \, \lim_{l\to 0} \frac{\partial(p\oplus l)_\nu}{\partial l_\alpha} \,=\, \nonumber \\ & \:\:\: \frac{\partial(p\oplus q)_\mu}{\partial p_\nu} \varphi^\alpha_\nu(p) +
\frac{\partial(p\oplus q)_\mu}{\partial q_\nu} \varphi^\alpha_\nu(q) - \varphi^\alpha_\mu(p\oplus q)\,.
\label{loc-oplus-varphi}
\end{align}
The first equality imposes a condition on the DCL in order to be compatible with locality, while the second one establishes a relation between the functions $\varphi^\alpha_\mu$ and the DCL.
We can introduce the relative coordinate
\begin{align}
\tilde{x}^\alpha_{(12)} \,\doteq\, \tilde{y}^\alpha - \tilde{z}^\alpha & = y^\mu \left[\varphi^\alpha_\mu(p) - \varphi^{(1)\alpha}_{(2)\mu}(p)\right] - z^\mu
\left[\varphi^\alpha_\mu(q) - \varphi^{(2)\alpha}_{(1)\mu}(q)\right] \nonumber \\
&= y^\mu \,\lim_{l\to 0} \frac{\partial(p\oplus l)_\mu}{\partial l_\alpha} - z^\mu \,\lim_{l\to 0} \frac{\partial(l\oplus q)_\mu}{\partial l_\alpha}\,.
\label{xtilde}
\end{align}
Now, one can use the total momentum to compute the effect of an infinitesimal translation with parameters $\epsilon^\mu$ on the relative coordinate,
\begin{align}
\delta\tilde{x}^\alpha_{(12)} \,&=\, \epsilon^\mu \{\tilde{x}^\alpha_{(12)}, (p\oplus q)_\mu\} \nonumber \\
&=\, \epsilon^\mu \left[- \frac{\partial(p\oplus q)_\mu}{\partial p_\nu} \lim_{l\to 0} \frac{\partial(p\oplus l)_\nu}{\partial l_\alpha} + \frac{\partial(p\oplus q)_\mu}{\partial q_\nu} \lim_{l\to 0} \frac{\partial(l\oplus q)_\nu}{\partial l_\alpha}\right]\,.
\end{align}
The right-hand side of the previous expression vanishes as a consequence of the condition (the first equality of Eq.~\eqref{loc-oplus-varphi}) that the DCL must satisfy in order to make it possible to implement locality. This shows the invariance of the relative coordinate under translations, implying that if one observer sees the interaction as local, it will be local for any other observer translated with respect to the former.
It is easy to check that the following identities hold,
\begin{align}
& \frac{\partial(p\oplus q)_\mu}{\partial q_\nu} \, \lim_{l\to 0} \frac{\partial(l\oplus q)_\nu}{\partial l_\alpha} \,=\, \lim_{l\to 0} \frac{\partial(p\oplus (l\oplus q))_\mu}{\partial (l\oplus q)_\nu} \, \frac{\partial(l\oplus q)_\nu}{\partial l_\alpha} \,=\, \lim_{l\to 0} \frac{\partial(p\oplus (l\oplus q))_\mu}{\partial l_\alpha}\,, \\ \nonumber
& \frac{\partial(p\oplus q)_\mu}{\partial p_\nu} \, \lim_{l\to 0} \frac{\partial(p\oplus l)_\nu}{\partial l_\alpha} \,=\, \lim_{l\to 0} \,\frac{\partial((p \oplus l)\oplus q)_\mu}{\partial (p\oplus l)_\nu} \, \frac{\partial(p\oplus l)_\nu}{\partial l_\alpha} \,=\, \lim_{l\to 0} \frac{\partial((p\oplus l)\oplus q)_\mu}{\partial l_\alpha}\,,
\end{align}
and then, from the first equality of~\eqref{loc-oplus-varphi} one can find
\begin{equation}
\lim_{l\to 0} \frac{\partial(p\oplus (l\oplus q))_\mu}{\partial l_\alpha} \,=\, \lim_{l\to 0} \frac{\partial((p\oplus l)\oplus q)_\mu}{\partial l_\alpha}\,,
\end{equation}
which leads to
\begin{equation}
(p\oplus \epsilon)\oplus q \,=\, p\oplus(\epsilon\oplus q)\,.
\label{eq:associativity}
\end{equation}
Now we can check that in fact, any associative composition law is compatible with the implementation of locality. Making the choice $\varphi^{(2)\alpha}_{(1)\mu}(q)=0$ in Eq.~\eqref{eq:phi12}\footnote{This can be done also for the alternative choice $\varphi^{(1)\alpha}_{(2)\mu}(p)=0$.}, one has
\begin{equation}
\begin{split}
\frac{\partial(p\oplus q)_\mu}{\partial p_\nu} \,\varphi^\alpha_\nu(p) &\,=\, \frac{\partial(p\oplus q)_\mu}{\partial p_\nu} \,\lim_{l\to 0} \frac{\partial(l\oplus p)_\nu}{\partial l_\alpha}\\& =\, \lim_{l\to 0} \left[\frac{\partial((l\oplus p)\oplus q)_\mu}{\partial(l\oplus p)_\nu} \,\frac{\partial(l\oplus p)_\nu}{\partial l_\alpha}\right]
\,=\, \lim_{l\to 0} \frac{\partial((l\oplus p)\oplus q)_\mu}{\partial l_\alpha}\,, \\
\frac{\partial(p\oplus q)_\mu}{\partial q_\nu} \,\varphi^\alpha_\nu(q) &\,=\, \frac{\partial(p\oplus q)_\mu}{\partial q_\nu} \,\lim_{l\to 0} \frac{\partial(l\oplus q)_\nu}{\partial l_\alpha} \\ & =\, \lim_{l\to 0} \left[\frac{\partial(p\oplus(l\oplus q))_\mu}{\partial(l\oplus q)_\nu} \,\frac{\partial(l\oplus q)_\nu}{\partial l_\alpha}\right]\,=\, \lim_{l\to 0} \frac{\partial(p\oplus(l\oplus q))_\mu}{\partial l_\alpha}\,, \\
\varphi^\alpha_\mu(p\oplus q) &\,=\, \lim_{l\to 0} \frac{\partial(l\oplus(p\oplus q))_\mu}{\partial l_\alpha}\,.
\end{split}
\end{equation}
It is easy to verify that Eqs.~\eqref{loc-oplus-varphi} hold
\begin{equation}
\begin{split}
\lim_{l\to 0} \frac{\partial(p\oplus (l\oplus q))_\mu}{\partial l_\alpha} \,&=\, \lim_{l\to 0} \frac{\partial((p\oplus l)\oplus q)_\mu}{\partial l_\alpha} \,=\, \lim_{l\to 0} \frac{\partial((l\oplus p)\oplus q)_\mu}{\partial l_\alpha} \\ &\,+\, \lim_{l\to 0} \frac{\partial(p\oplus(l\oplus q))_\mu}{\partial l_\alpha} \,-\,\lim_{l\to 0} \frac{\partial(l\oplus(p\oplus q))_\mu}{\partial l_\alpha}\,,
\end{split}
\label{eq:associativity_locality_proof}
\end{equation}
proving that any associative composition law is locality compatible.
\subsection{First-order deformed composition law of four-momenta (DCL1)}
\label{sec_first_order}
In this subsection, we consider a DCL with only linear terms in the inverse of $\Lambda$, and we see what conditions the implementation of locality enforces. At first order, we can write the most general isotropic composition law (DCL1) in a covariant way,
\begin{equation}
(p\oplus q)_\mu \,=\, p_\mu + q_\mu + \frac{c_\mu^{\nu\rho}}{\Lambda} p_\nu q_\rho\,,
\end{equation}
where $c_\mu^{\nu\rho}$ is
\begin{equation}
c_\mu^{\nu\rho} \,=\, c_1 \,\delta_\mu^\nu n^\rho + c_2 \,\delta_\mu^\rho n^\nu + c_3 \,\eta^{\nu\rho} n_\mu + c_4 \,n_\mu n^\nu n^\rho + c_5 \,\epsilon_\mu^{\:\:\nu\rho\sigma} n_\sigma\,,
\end{equation}
being $n_\mu = (1, 0, 0, 0)$ and $c_i$ arbitrary constants. This leads to the general composition law~\eqref{cl1} studied in Ch.~\ref{chapter_second_order}.
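For later reference, one can write the components of this composition law explicitly (with $n_\mu=(1,0,0,0)$, $\eta={\rm diag}(1,-1,-1,-1)$, and up to the sign convention chosen for the Levi-Civita symbol in the $c_5$ term):
\begin{equation}
(p\oplus q)_0 \,=\, p_0+q_0+\frac{(c_1+c_2+c_3+c_4)}{\Lambda}\,p_0 q_0-\frac{c_3}{\Lambda}\,\vec{p}\cdot\vec{q}\,, \quad\quad (p\oplus q)_i \,=\, p_i+q_i+\frac{c_1}{\Lambda}\,q_0 p_i+\frac{c_2}{\Lambda}\,p_0 q_i+\frac{c_5}{\Lambda}\,(\vec{p}\times\vec{q})_i\,.
\end{equation}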
We can wonder now if this composition can satisfy the conditions imposed by locality. We have
\begin{align}
& \frac{\partial(p\oplus q)_\mu}{\partial q_\nu} \,=\, \delta^\nu_\mu + \frac{c_\mu^{\rho\nu}}{\Lambda} p_\rho\,, \quad\quad \lim_{l\to 0} \frac{\partial(l\oplus q)_\nu}{\partial l_\alpha} \,=\, \delta^\alpha_\nu + \frac{c^{\alpha\sigma}_\nu}{\Lambda} q_\sigma\,, \\ \nonumber
& \frac{\partial(p\oplus q)_\mu}{\partial p_\nu} \,=\, \delta^\nu_\mu + \frac{c_\mu^{\nu\sigma}}{\Lambda} q_\sigma\,, \quad\quad \lim_{l\to 0} \frac{\partial(p\oplus l)_\nu}{\partial l_\alpha} \,=\, \delta^\alpha_\nu + \frac{c_\nu^{\rho\alpha}}{\Lambda} p_\rho\,,
\end{align}
and
\begin{align}
& \frac{\partial(p\oplus q)_\mu}{\partial q_\nu} \, \lim_{l\to 0} \frac{\partial(l\oplus q)_\nu}{\partial l_\alpha} \,=\, \delta^\alpha_\mu + \frac{c_\mu^{\rho\alpha}}{\Lambda} p_\rho + \frac{c^{\alpha\sigma}_\mu}{\Lambda} q_\sigma + \frac{c_\mu^{\rho\nu} c^{\alpha\sigma}_\nu}{\Lambda^2} p_\rho q_\sigma\,, \nonumber \\
& \frac{\partial(p\oplus q)_\mu}{\partial p_\nu} \, \lim_{l\to 0} \frac{\partial(p\oplus l)_\nu}{\partial l_\alpha} \,=\, \delta^\alpha_\mu + \frac{c_\mu^{\rho\alpha}}{\Lambda} p_\rho + \frac{c^{\alpha\sigma}_\mu}{\Lambda} q_\sigma + \frac{c_\mu^{\nu\sigma} c^{\rho\alpha}_\nu}{\Lambda^2} p_\rho q_\sigma\,.
\end{align}
Then, a DCL1 will be compatible with locality if the following equality holds
\begin{equation}
c_\mu^{\rho\nu} c^{\alpha\sigma}_\nu \,=\, c_\mu^{\nu\sigma} c^{\rho\alpha}_\nu\,.
\end{equation}
This requirement is equivalent to demanding that the composition be associative, which is the condition~\eqref{eq:associativity} when the composition law has only first-order terms. We obtain four possible cases for the DCL1
\begin{equation}
c_\mu^{\nu\rho} \,=\, \delta_\mu^\rho n^\nu, \quad c_\mu^{\nu\rho} \,=\, \delta_\mu^\nu n^\rho, \quad c_\mu^{\nu\rho} \,=\, \delta_\mu^\nu n^\rho + \delta_\mu^\rho n^\nu - n_\mu n^\nu n^\rho, \quad c_\mu^{\nu\rho} \,=\, \eta^{\nu\rho} n_\mu - n_\mu n^\nu n^\rho\,.
\end{equation}
The last two cases are not relevant because it is easy to check that these compositions are obtained by a change of basis ($k'_\mu = f_\mu(k)$) from the sum ($(p'\oplus' q')_\mu \doteq (p\oplus q)'_\mu=p'_\mu + q'_\mu$), so that they are just SR written in obscure momentum variables\footnote{For the first of them the function is $f_0(k)=\Lambda \log(1+k_0/\Lambda)$, $f_i(k)=(1+k_0/\Lambda) k_i$, while for the last one $f_0(k)=k_0+\vec{k}^2/(2\Lambda)$, $f_i(k)=k_i$.}.
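As an explicit illustration for the last case, with $f_0(k)=k_0+\vec{k}^2/(2\Lambda)$, $f_i(k)=k_i$, one finds at first order in $1/\Lambda$
\begin{equation}
(p\oplus q)_\mu \,=\, f^{-1}_\mu\left(f(p)+f(q)\right) \quad \Rightarrow \quad (p\oplus q)_0 \,=\, p_0 + q_0 + \frac{\vec{p}^2+\vec{q}^2}{2\Lambda} - \frac{(\vec{p}+\vec{q})^2}{2\Lambda} \,=\, p_0 + q_0 - \frac{\vec{p}\cdot\vec{q}}{\Lambda}\,, \quad\quad (p\oplus q)_i \,=\, p_i + q_i\,,
\end{equation}
which indeed reproduces $c_\mu^{\nu\rho}=\eta^{\nu\rho} n_\mu - n_\mu n^\nu n^\rho$, since $(\eta^{\nu\rho} - n^\nu n^\rho)\, p_\nu q_\rho = -\vec{p}\cdot\vec{q}$.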
In the first two cases, we have a noncommutative composition law (in fact, the latter is obtained from the former by exchanging the roles of the two momenta). As the composition is not commutative, it is not possible to find a change of basis in which momenta compose additively.
The explicit form of the first DCL1 is
\begin{equation}
(p\oplus q)_0 \,=\, p_0 + q_0 + \epsilon\, \frac{p_0 q_0}{\Lambda} \, ,\quad\quad\quad\quad
(p\oplus q)_i \,=\, p_i + q_i + \epsilon\, \frac{p_0 q_i}{\Lambda}\,,
\label{DCL(1)}
\end{equation}
where $\epsilon = \pm 1$ is an overall sign for the modification in the composition law and an arbitrary constant can be reabsorbed in the definition of the scale $\Lambda$. We will see in Sec.~\ref{sec_comparison} that this composition law corresponds in fact to $\kappa$-Poincaré.
When $\epsilon=-1$, one has
\begin{equation}
\left(1-\frac{(p\oplus q)_0}{\Lambda}\right) \,=\, \left(1-\frac{p_0}{\Lambda}\right) \left(1-\frac{q_0}{\Lambda}\right)\,,
\end{equation}
which makes the scale $\Lambda$ play the role of a cutoff in the energy; this is therefore the choice of sign that reproduces the DCL of the DSR framework, as we will see in Sec.~\ref{sec_comparison}. With the other choice of sign, $\epsilon=+1$, the scale $\Lambda$ is not a maximum energy, going beyond the DSR scenario.
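The factorization above is a one-line consequence of the explicit form~\eqref{DCL(1)}: for $\epsilon=-1$,
\begin{equation}
\left(1-\frac{p_0}{\Lambda}\right)\left(1-\frac{q_0}{\Lambda}\right) \,=\, 1 - \frac{p_0+q_0-p_0 q_0/\Lambda}{\Lambda} \,=\, 1-\frac{(p\oplus q)_0}{\Lambda}\,.
\end{equation}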
From the explicit form of the local DCL1~\eqref{DCL(1)} we can obtain the expression for the relative generalized space-time coordinates
\begin{align}
& \lim_{l\to 0} \frac{\partial(p\oplus l)_0}{\partial l_0} \,=\, 1 + \epsilon \frac{p_0}{\Lambda}\,,& \quad\quad &\lim_{l\to 0} \frac{\partial(p\oplus l)_0}{\partial l_i} \,=\, 0\,,\nonumber \\
&\lim_{l\to 0} \frac{\partial(p\oplus l)_i}{\partial l_0} \,=\, 0\,,&\quad\quad & \lim_{l\to 0} \frac{\partial(p\oplus l)_i}{\partial l_j} \,=\, \delta_i^j \left(1 + \epsilon \frac{p_0}{\Lambda}\right)\,,\nonumber \\
& \lim_{l\to 0} \frac{\partial(l\oplus q)_0}{\partial l_0} \,=\, 1 + \epsilon \frac{q_0}{\Lambda}\,,& \quad\quad &\lim_{l\to 0} \frac{\partial(l\oplus q)_0}{\partial l_i} \,=\, 0\,, \nonumber \\
&\lim_{l\to 0} \frac{\partial(l\oplus q)_i}{\partial l_0} \,=\, \epsilon \frac{q_i}{\Lambda}\,,&\quad\quad &\lim_{l\to 0} \frac{\partial(l\oplus q)_i}{\partial l_j} \,=\, \delta^j_i\,,
\end{align}
and then
\begin{equation}
\tilde{x}^0_{(12)} \,=\, y^0 (1+\epsilon p_0/\Lambda) - z^0 (1+\epsilon q_0/\Lambda) - z^j \epsilon q_j/\Lambda\,, \quad
\tilde{x}^i_{(12)} \,=\, y^i (1+\epsilon p_0/\Lambda) - z^i\,.
\end{equation}
Therefore, one can check that the relative space-time coordinates of the two-particle system are in fact the coordinates of a $\kappa$-Minkowski spacetime with $\kappa=\epsilon/\Lambda$
\begin{align}
\{\tilde{x}^i_{(12)}, \tilde{x}^0_{(12)}\} \,=&\,\{y^i (1+\epsilon p_0/\Lambda), y^0 (1+\epsilon p_0/\Lambda)\} + \{z^i, z^j \epsilon q_j/\Lambda\} \nonumber \\ =& (\epsilon/\Lambda) \,\left[y^i (1+\epsilon p_0/\Lambda) - z^i\right] \,=\, (\epsilon/\Lambda) \,\tilde{x}^i_{(12)}\,.
\end{align}
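The computation above assumes the canonical Poisson brackets $\{y^\mu, p_\nu\} \,=\, \{z^\mu, q_\nu\} \,=\, -\delta^\mu_\nu$ (the sign convention that reproduces the brackets as written); with them, one finds
\begin{equation}
\{y^i (1+\epsilon p_0/\Lambda), y^0 (1+\epsilon p_0/\Lambda)\} \,=\, \frac{\epsilon}{\Lambda}\, y^i \left(1+\epsilon\,\frac{p_0}{\Lambda}\right)\,, \quad\quad \{z^i, z^j \epsilon q_j/\Lambda\} \,=\, -\frac{\epsilon}{\Lambda}\, z^i\,,
\end{equation}
while all the crossed brackets between the phase-space coordinates of the two particles vanish.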
In order to obtain the generalized space-time coordinates of the two-particle system, one has to solve \eqref{loc-oplus-varphi}, using the local DCL1, for the functions $\varphi^\alpha_\mu(k)$. The main issue is that these equations do not completely determine the explicit form of $\varphi^\alpha_\mu(k)$, and therefore the generalized space-time coordinates of the one-particle system are not completely determined either. To fix them, we need an additional requirement.
One can observe that the local DCL1~\eqref{DCL(1)}
\begin{equation}
\left(p\oplus q\right)_\mu \,=\, p_\mu + \left(1 + \epsilon p_0/\Lambda\right) \,q_\mu\,,
\end{equation}
is a sum of $p_\mu$ (independent of $q$) and a term proportional to $q_\mu$ depending on $p$. Then one can consider an ad hoc prescription in which the generalized space-time coordinates $\tilde{y}^\mu$ depends only on its phase-space coordinates ($y, p$), while $\tilde{z}^\mu$ depends on the phase-space coordinates of both particles ($y, p, z, q$), making
\begin{equation}
\varphi^{(2)\alpha}_{(1)\mu}(q) \,=\, 0\,, \quad\quad \rightarrow \quad\quad \varphi^\alpha_\mu(p) \,=\, \lim_{l\to 0} \frac{\partial(l\oplus p)_\mu}{\partial l_\alpha}\,.
\label{simplphi}
\end{equation}
This can be done since, as we have proven in Eq.~\eqref{eq:associativity_locality_proof}, any associative composition law (as is the case for DCL1) is compatible with locality with the choice $\varphi^{(2)\alpha}_{(1)\mu}(q)=0$.
We can obtain the generalized space-time coordinates for the one-particle system through the explicit expression of the DCL1
\begin{align}
\tilde{x}^0 \,=&\, x^\mu \lim_{l\to 0} \frac{\partial(l\oplus k)_\mu}{\partial l_0} \,=\, x^0 (1 + \epsilon k_0/\Lambda) + x^j \epsilon k_j/\Lambda\,, \nonumber \\
\tilde{x}^i \,=&\, x^\mu \lim_{l\to 0} \frac{\partial(l\oplus k)_\mu}{\partial l_i} \,=\, x^i\,,
\label{eq:tilde_1}
\end{align}
and then
\begin{equation}
\{\tilde{x}^i, \tilde{x}^0\} \,=\, \{x^i, x^j \epsilon k_j/\Lambda\} \,=\, - (\epsilon/\Lambda) x^i \,=\, - (\epsilon/\Lambda) \tilde{x}^i\,.
\label{xtilde-}
\end{equation}
This is obvious from the fact that, as we saw in the previous sections, if the relation between the functions $ \varphi^\alpha_\mu(k)$ and the (associative) composition law is the one given in Eq.~\eqref{simplphi}, the resultant spacetime is $\kappa$-Minkowski.
One can proceed in the same way with the other composition law which allows one to implement locality
\begin{equation}
\left(p\oplus q\right)_\mu \,=\, \left(1 + \epsilon q_0/\Lambda\right) \,p_\mu + q_\mu\,,
\label{DCL(1')}
\end{equation}
considering now that the generalized space-time coordinates $\tilde{z}^\mu$ depend only on the phase-space coordinates ($z, q$), while $\tilde{y}^\mu$ depend on the phase-space coordinates of both particles ($y, p, z, q$). This leads to
\begin{equation}
\varphi^{(1)\alpha}_{(2)\mu}(p) \,=\, 0\,, \quad\quad \rightarrow \quad\quad \varphi^\alpha_\mu(p) \,=\, \lim_{l\to 0} \frac{\partial(p\oplus l)_\mu}{\partial l_\alpha}\,.
\end{equation}
In this case we have
\begin{align}
\tilde{x}^0 \,=&\, x^\mu \lim_{l\to 0} \frac{\partial(k\oplus l)_\mu}{\partial l_0} \,=\, x^0 (1 + \epsilon k_0/\Lambda) + x^j \epsilon k_j/\Lambda\,, \nonumber \\
\tilde{x}^i \,=&\, x^\mu \lim_{l\to 0} \frac{\partial(k\oplus l)_\mu}{\partial l_i} \,=\, x^i\,.
\label{eq:tilde_2}
\end{align}
We see that, by construction, we obtain the same expressions for the generalized space-time coordinates of the one-particle system.
\subsection{Local DCL1 as a relativistic kinematics}
\label{sec_rel_kinematics}
As we discussed previously, any kinematics has three ingredients: a DCL, a DDR and, in order to have a relativity principle, a DLT in the two-particle system, making the former constituents compatible. This can be done in the same way as we did in the previous section for the second attempt to implement locality, obtaining for the one-particle system (see Appendix~\ref{LT-one-particle}):
\begin{align}
& {\cal J}^{ij}_0(k) \,=\, 0\,, \quad\quad\quad {\cal J}^{ij}_k(k) \,=\, \delta^j_k \, k_i - \delta^i_k \, k_j\,, \nonumber \\
& {\cal J}^{0j}_0(k) \,=\, - k_j (1+\epsilon k_0/\Lambda)\,, \quad\quad\quad {\cal J}^{0j}_k(k) \,=\, \delta^j_k \left[-k_0 - \epsilon k_0^2/2\Lambda\right] + (\epsilon/\Lambda) \left[\delta^j_k\,\vec{k}^2/2 - k_j k_k\right]\,,
\label{LT1}
\end{align}
and for the two-particle system, we impose the condition ${\cal J}_{(1)\mu}^{\,\alpha \beta}(p,q)={\cal J}_{\mu}^{\,\alpha \beta}(p)$ for the first momentum, so the second momentum must transform as
\begin{align}
{\cal J}_{(2)0}^{\,0i}(p, q)\,=&\left(1+ \epsilon p_0/\Lambda\right) {\cal J}_0^{0i}(q)\,, \nonumber \\
{\cal J}_{(2)j}^{\,0i}(p, q)\,=&\left(1+\epsilon p_0/\Lambda\right) {\cal J}_j^{0i}(q) + (\epsilon/\Lambda) \,\left(p_j q_i - \delta^i_j\, \vec{p}\cdot\vec{q}\right)\,, \nonumber\\
{\cal J}_{(2)0}^{\,ij}(p, q)\,=&\, {\cal J}^{ij}_0(q)\,, \quad\quad\quad
{\cal J}_{(2)k}^{\,ij}(p, q)\,=\, {\cal J}^{ij}_k(q)\,,
\label{calJ(2)}
\end{align}
so that the composition is invariant under the DLT, i.e.
\begin{equation}
{\cal J}^{\alpha\beta}_\mu(p\oplus q) \,=\, \frac{\partial(p\oplus q)_\mu}{\partial p_\nu} \, {\cal J}^{\alpha\beta}_{\nu}(p) + \frac{\partial(p\oplus q)_\mu}{\partial q_\nu} \, {\cal J}^{\alpha\beta}_{(2)\nu}(p, q)\,.
\end{equation}
From the DLT of the one-particle system~\eqref{LT1}, we are able to determine the DDR from
\begin{equation}
\{C(k), J^{\alpha\beta}\} \,=\, \frac{\partial C(k)}{\partial k_\mu} \, {\cal J}^{\alpha\beta}_\mu(k) \,=\, 0\,,
\label{LI-DDR}
\end{equation}
obtaining
\begin{equation}
C(k) \,=\, \frac{k_0^2 - \vec{k}^2}{(1 + \epsilon k_0/\Lambda)}\,.
\end{equation}
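As a quick consistency check at first order in $1/\Lambda$, where $C(k)\simeq k_0^2-\vec{k}^2-\epsilon k_0(k_0^2-\vec{k}^2)/\Lambda$, one can contract $\partial C/\partial k_\mu$ with the boost sector of~\eqref{LT1}:
\begin{equation}
\frac{\partial C}{\partial k_\mu}\,{\cal J}^{0j}_\mu(k) \,=\, \left[2k_0-\frac{\epsilon(3k_0^2-\vec{k}^2)}{\Lambda}\right]\left[-k_j\left(1+\frac{\epsilon k_0}{\Lambda}\right)\right] + \left[-2k_i+\frac{2\epsilon k_0 k_i}{\Lambda}\right]{\cal J}^{0j}_i(k) \,=\, \mathcal{O}(1/\Lambda^2)\,,
\end{equation}
with the terms of order $1/\Lambda$ canceling between the two contributions.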
In order to have a relativistic kinematics, we need to see that
\begin{equation}
J^{\alpha\beta}_{(2)} \,=\, y^\mu \,{\cal J}^{\alpha\beta}_\mu(p) + z^\mu \,{\cal J}^{\alpha\beta}_{(2)\mu}(p, q)
\end{equation}
is a representation of the Lorentz algebra and that
\begin{equation}
\frac{\partial C(q)}{\partial q_\mu} \,{\cal J}^{\alpha\beta}_{(2)\mu}(p, q) \,=\, 0
\end{equation}
holds. One can check that both statements are true from the expressions~\eqref{LT1}-\eqref{LI-DDR}. So we have shown that one can implement locality and a relativity principle with the composition law~(\ref{DCL(1)}), with $\tilde{y}^\alpha$ depending on the phase-space coordinates ($y, p$) and $\tilde{z}^\alpha$ depending on all the phase-space coordinates. The relativity principle is obtained by requiring that the Lorentz transformation of the first momentum does not depend on the second one, which implies that the Lorentz transformation of the second momentum depends on both momenta. This is a particular (simple) way to implement locality and the relativity principle with the local DCL1~(\ref{DCL(1)}).
\subsection{Local DCL1 and \texorpdfstring{$\kappa$}{k}-Poincaré kinematics}
\label{sec_comparison}
We can now ask how the previous results depend on the momentum basis chosen at the starting point of the section. This can be analyzed by considering new momentum coordinates $k'_\mu$ related nonlinearly to $k_\nu$, obtaining a new dispersion relation $C'$ and a new deformed composition law $\oplus'$ given by
\begin{equation}
C(k) \,=\, C'(k')\,, \quad\quad\quad (p'\oplus' q')_\mu \,=\, (p\oplus q)'_\mu\,.
\end{equation}
Then we have
\begin{equation}
\begin{split}
\varphi^{\prime \alpha}_\mu(k') \,=\, \lim_{l'\to 0} \frac{\partial(l'\oplus' k')_\mu}{\partial l'_\alpha} \,&=\, \lim_{l'\to 0} \frac{\partial(l\oplus k)'_\mu}{\partial l'_\alpha} \,=\, \lim_{l\to 0} \frac{\partial l_\beta}{\partial l'_\alpha} \frac{\partial(l\oplus k)'_\mu}{\partial l_\beta}\\
&=\, \lim_{l\to 0} \frac{\partial(l\oplus k)'_\mu}{\partial(l\oplus k)_\nu} \frac{\partial(l\oplus k)_\nu}{\partial l_\alpha} \,=\, \frac{\partial k'_\mu}{\partial k_\nu} \varphi^\alpha_\nu(k)\,,
\end{split}
\label{varphi'-varphi}
\end{equation}
where we have used that $\partial l_\beta/\partial l'_\alpha=\delta_{\beta}^\alpha$ when $l\to 0$.
Moreover, a nonlinear change of momentum basis $k\to k'$ defines a change on canonical phase-space coordinates
\begin{equation}
x^{\prime \mu} \,=\, x^\rho \frac{\partial k_\rho}{\partial k'_\mu}\,,
\end{equation}
and then
\begin{equation}
x^{\prime \mu} \varphi^{\prime \alpha}_\mu(k') \,=\, x^\rho \frac{\partial k_\rho}{\partial k'_\mu} \varphi^{\prime \alpha}_\mu(k') \,=\, x^\rho \frac{\partial k_\rho}{\partial k'_\mu} \frac{\partial k'_\mu}{\partial k_\nu} \varphi^\alpha_\nu(k) \,=\, x^\nu \varphi^\alpha_\nu(k)\,,
\end{equation}
where we have used Eq.~\eqref{varphi'-varphi} in the second equality.
This falls into the same result obtained in the previous attempts: the non-commutative coordinates are invariant under canonical transformations $\tilde{x}^{\prime \alpha}=\tilde{x}^\alpha$.
For the two-particle system, one has
\begin{equation}
\varphi^{\prime (2)\alpha}_{(1)\mu}(q') \,=\, \varphi^{\prime \alpha}_\mu(q') - \lim_{l'\to 0} \frac{\partial(l'\oplus' q')_\mu}{\partial l'_\alpha}\,,
\end{equation}
and the same argument used in (\ref{varphi'-varphi}) leads to
\begin{equation}
\lim_{l'\to 0} \frac{\partial(l'\oplus' q')_\mu}{\partial l'_\alpha} \,=\, \frac{\partial q'_\mu}{\partial q_\nu} \,\lim_{l\to 0} \frac{\partial(l\oplus q)_\nu}{\partial l_\alpha}\,,
\end{equation}
and then one finds
\begin{equation}
\varphi^{\prime (2) \alpha}_{(1)\mu}(q') \,=\, \frac{\partial q'_\mu}{\partial q_\nu} \,\varphi^{(2)\alpha}_{(1)\nu}(q)\,.
\end{equation}
The space-time coordinates of the two-particle system change as
\begin{equation}
y^{\prime \mu} \,=\, y^\nu \,\frac{\partial p_\nu}{\partial p'_\mu}\,, \quad\quad
z^{\prime \mu} \,=\, z^\nu \,\frac{\partial q_\nu}{\partial q'_\mu}\,,
\end{equation}
and as in the one-particle system, we find that the generalized space-time coordinates of the two-particle system are invariant under canonical transformations
\begin{equation}
\tilde{y}^{\prime \alpha} \,=\, \tilde{y}^\alpha\,, \quad\quad\quad \tilde{z}^{\prime \alpha} \,=\, \tilde{z}^\alpha\,.
\end{equation}
This implies that all the obtained results for the local DCL1 (\ref{DCL(1)}) (crossing of worldlines, a $\kappa$-Minkowski noncommutative spacetime, and a DRK) do not depend on the phase-space coordinates (momentum basis) one uses.
If we consider the change of momentum basis $k_\mu \to k'_\mu$
\begin{equation}
k_i \,=\, k'_i\,, \quad\quad\quad (1 + \epsilon k_0/\Lambda) \,=\, e^{\epsilon k'_0/\Lambda}\,,
\end{equation}
on the kinematics of the local DCL1, one finds the composition law
\begin{equation}
(p'\oplus' q')_0 \,=\, p'_0 + q'_0, \quad\quad\quad
(p'\oplus' q')_i \,=\, p'_i + e^{\epsilon p'_0/\Lambda} \,q'_i\,,
\end{equation}
and the dispersion relation
\begin{equation}
\frac{k_0^2 - \vec{k}^2}{(1+\epsilon k_0/\Lambda)} \,=\, \Lambda^2 \left(e^{\epsilon k'_0/\Lambda} + e^{-\epsilon k'_0/\Lambda} -2\right) - \vec{k}^{\prime 2} \,e^{-\epsilon k'_0/\Lambda}\,,
\label{C(p)-bcb}
\end{equation}
obtained in Ch.~\ref{chapter_curved_momentum_space}, which is $\kappa$-Poincaré in the bicrossproduct basis (when $\epsilon=-1$). Then we can conclude that the local DCL1 kinematics is the $\kappa$-Poincaré kinematics.
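The identity~\eqref{C(p)-bcb} can be checked directly: inverting the change of basis, $k_0=(\Lambda/\epsilon)\left(e^{\epsilon k'_0/\Lambda}-1\right)$, and using $\epsilon^2=1$,
\begin{equation}
\frac{k_0^2}{(1+\epsilon k_0/\Lambda)} \,=\, \Lambda^2\, \frac{\left(e^{\epsilon k'_0/\Lambda}-1\right)^2}{e^{\epsilon k'_0/\Lambda}} \,=\, \Lambda^2 \left(e^{\epsilon k'_0/\Lambda} + e^{-\epsilon k'_0/\Lambda} - 2\right)\,, \quad\quad \frac{-\vec{k}^2}{(1+\epsilon k_0/\Lambda)} \,=\, -\vec{k}^{\prime 2}\, e^{-\epsilon k'_0/\Lambda}\,.
\end{equation}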
As we also found in the geometry section, there is a new kinematics corresponding to the case $\epsilon=1$, in which $\Lambda$ cannot be identified as a cutoff on the energy. This is a possibility that has been overlooked in DSR scenarios and that should be considered.
In the second attempt of Sec.~\ref{section_second_attempt}, we found that the $\kappa$-Poincar\'e kinematics is compatible with locality. This new way to implement locality allows us to determine the general form of a DCL1 compatible with locality, and to identify $\kappa$-Minkowski as the generalized spacetime of the relative coordinates of the two-particle system.
\subsection{Associativity of the composition law of momenta, locality and relativistic kinematics}
\label{sec:associativity}
In Sec.~\ref{sec_first_order} we saw that a DCL1 must be associative in order to be able to implement locality, and then any kinematics related to it by a change of basis will also be associative. At the beginning of Sec.~\ref{sec_st_locality_third}, we also proved that any associative DCL is compatible with locality. This raises the question of whether associativity is a necessary condition to have local interactions.
Using the notation
\begin{equation}
L^\alpha_\nu(q) \,\doteq\, \lim_{l\to 0} \frac{\partial(l\oplus q)_\nu}{\partial l_\alpha}\,, \quad\quad\quad
R^\alpha_\nu(p) \,\doteq\, \lim_{l\to 0} \frac{\partial(p\oplus l)_\nu}{\partial l_\alpha}\,,
\end{equation}
we can differentiate both sides of Eq.~\eqref{loc-oplus-varphi} with respect to $p_\rho$, finding
\begin{equation}
\frac{\partial^2(p\oplus q)_\mu}{\partial q_\nu \partial p_\rho} \,L^\alpha_\nu(q) \,=\,
\frac{\partial^2(p\oplus q)_\mu}{\partial p_\nu \partial p_\rho}\, R^\alpha_\nu(p) + \frac{\partial(p\oplus q)_\mu}{\partial p_\nu} \frac{\partial R_\nu^\alpha(p)}{\partial p_\rho}\,.
\end{equation}
Taking the limit $p\to 0$ one has
\begin{equation}
\frac{\partial L^\rho_\mu(q)}{\partial q_\nu} \, L^\alpha_\nu(q) \,=\, \frac{L^{\alpha\rho}_\mu(q)}{\Lambda} + L^\nu_\mu(q) \, \frac{c^{\rho\alpha}_\nu}{\Lambda}\,,
\label{L}
\end{equation}
being
\begin{equation}
\frac{L^{\alpha\rho}_\mu(q)}{\Lambda} \,\doteq\, \lim_{p\to 0} \frac{\partial^{2}(p\oplus q)_\mu}{\partial p_\alpha \partial p_\rho}
\end{equation}
the coefficient of the term proportional to $p_\alpha p_\rho$ in $(p\oplus q)_\mu$, and
\begin{equation}
\frac{c^{\rho\alpha}_\nu}{\Lambda} \,\doteq\, \lim_{p, q \to 0} \frac{\partial^2(p\oplus q)_\nu}{\partial p_\rho \partial q_\alpha}\,,
\end{equation}
the coefficient of the term proportional to $p_\rho q_\alpha$ in $(p\oplus q)_\nu$. Due to the symmetry under the exchange $\alpha\leftrightarrow \rho$ of $L^{\alpha\rho}_\mu$ in Eq.~\eqref{L}, one finds
\begin{equation}
\frac{\partial L^\rho_\mu(q)}{\partial q_\nu} \, L^\alpha_\nu(q) - \frac{\partial L^\alpha_\mu(q)}{\partial q_\nu} \, L^\rho_\nu(q) \,=\, \frac{(c^{\rho\alpha}_\nu - c^{\alpha\rho}_\nu)}{\Lambda} \, L^\nu_\mu(q)\,.
\end{equation}
This implies that the generators
\begin{equation}
T_L^\mu \,\doteq\,z^\rho \,L^\mu_\rho(q)\,,
\end{equation}
form a Lie algebra
\begin{equation}
\{T_L^\mu, T_L^\nu \} \,=\, \frac{(c^{\mu\nu}_\rho - c^{\nu\mu}_\rho)}{\Lambda} \, T_L^\rho\,.
\end{equation}
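For instance, for the local DCL1~\eqref{DCL(1)}, for which $c_\mu^{\nu\rho}=\epsilon\, n^\nu \delta_\mu^\rho$, the structure constants are
\begin{equation}
\frac{(c^{\mu\nu}_\rho - c^{\nu\mu}_\rho)}{\Lambda} \,=\, \frac{\epsilon}{\Lambda}\left(n^\mu \delta^\nu_\rho - n^\nu \delta^\mu_\rho\right) \quad \Rightarrow \quad \{T_L^0, T_L^i\} \,=\, \frac{\epsilon}{\Lambda}\, T_L^i\,, \quad\quad \{T_L^i, T_L^j\} \,=\, 0\,,
\end{equation}
i.e. the generators of translations close the $\kappa$-Minkowski algebra.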
Therefore, the infinitesimal transformation of the momentum $q$ with parameter $\epsilon$ is given by
\begin{equation}
\delta q_\mu \,=\, \epsilon_\nu \{q_\mu, T_L^\nu \} \,=\, \epsilon_\nu L^\nu_\mu(q) \,=\, \epsilon_\nu \lim_{l\to 0} \frac{\partial (l\oplus q)_\mu}{\partial l_\nu} \,=\, (\epsilon\oplus q)_\mu - q_\mu\,.
\end{equation}
If the composition law is associative, this allows us to define the finite transformation starting from the infinitesimal one generated by the $T_L^\mu$, as
\begin{equation}
q_\mu \to q'_\mu \,=\, (a \oplus q)_\mu\,,
\end{equation}
for a transformation with parameter $a$.
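Associativity guarantees that these finite transformations compose consistently: acting first with parameter $a$ and then with parameter $b$,
\begin{equation}
q_\mu \,\to\, (a\oplus q)_\mu \,\to\, \left(b\oplus(a\oplus q)\right)_\mu \,=\, \left((b\oplus a)\oplus q\right)_\mu\,,
\end{equation}
so the composition of two translations is again a translation, with parameter $(b\oplus a)$.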
Proceeding in the same way, we can differentiate the first equality of Eq.~\eqref{loc-oplus-varphi} with respect to $q_\rho$ instead of $p_\rho$; taking the limit $q\to 0$, one obtains that
\begin{equation}
T_R^\mu \doteq y^\nu R^\mu_\nu(p)
\end{equation}
are the generators of a Lie algebra
\begin{equation}
\{T_R^\mu, T_R^\nu \} \,=\, - \,\frac{(c^{\mu\nu}_\rho - c^{\nu\mu}_\rho)}{\Lambda} \, T_R^\rho\,,
\end{equation}
which is the same Lie algebra we have found for $T_L$ up to a sign\footnote{This is what we mentioned in Ch.~\ref{chapter_curved_momentum_space}: if the generators of left-translations form a Lie algebra, the generators of right-translations form the same algebra but with a different sign (see Ch.6 of Ref.~\cite{Chern:1999jn}).}. The infinitesimal transformation of the momentum $p$ with parameter $\epsilon$ is
\begin{equation}
\delta p_\mu \doteq \epsilon_\nu \{p_\mu, T_R^\nu \} \,=\, \epsilon_\nu R^\nu_\mu(p) \,=\, \epsilon_\nu \,\lim_{l\to 0} \frac{\partial(p\oplus l)_\mu}{\partial l_\nu} \,=\, (p\oplus \epsilon)_\mu - p_\mu\,,
\end{equation}
and this leads to a finite transformation if the composition law is associative
\begin{equation}
p_\mu \to p'_\mu \,=\, (p\oplus a)_\mu\,.
\end{equation}
In Ch.~\ref{chapter_curved_momentum_space} we saw that $\kappa$-Poincaré kinematics is the only DRK obtained from geometry whose generators of translations form a Lie algebra. This is why the local DCL1 is compatible with locality, since we proved that it is the $\kappa$-Poincaré kinematics in a different basis.
Any other relativistic kinematics obtained from the geometrical procedure (Snyder and hybrid models) leads to generators $T_{L,R}^\mu$ which do not close a Lie algebra, and then does not lead to locality of interactions. So in this scheme, locality selects the $\kappa$-Poincar\'e kinematics as the only isotropic relativistic kinematics going beyond the SR framework.
In this chapter we have seen a new ingredient, a noncommutative spacetime, that arises naturally from a DRK when locality is imposed. This is of vital importance since, as we saw in Sec.~\ref{sec:QGT}, a noncommutativity of the space-time coordinates is a main ingredient of a QGT, giving rise to a possible minimal length.
From the implementation of locality, we can solve the apparent paradox we saw in Sec.~\ref{sec:thought_experiments}, where we showed that there is a spacetime fuzziness for classical particles. If one observer $O$ sees two different particles of masses $m_1$ and $m_2$ moving with the same speed and following the same trajectory, another observer $O^\prime$ boosted with respect to $O$ would see that these particles also follow the same trajectory in the physical coordinates, but not in the canonical variables. This suggests that the physical coordinates may be the appropriate arena for considering physical processes and interactions.
Now that we understand better how a DCL affects the spacetime of all the particles involved in an interaction, we can try to study some phenomenological aspects that could be observed in order to test the theory. In Sec.~\ref{sec_phenomenology_DSR}, we saw that the time delay in the flight of particles is in principle the only experimental observation in the DSR framework for energies low in comparison with the Planck scale. This will be the subject of the next chapter, in which the use of these privileged coordinates will be indispensable.
\chapter{Time delay for photons in the DSR framework}
\label{chapter_time_delay}
\ifpdf
\graphicspath{{Chapter5/Figs/Raster/}{Chapter5/Figs/PDF/}{Chapter5/Figs/}}
\else
\graphicspath{{Chapter5/Figs/Vector/}{Chapter5/Figs/}}
\fi
\epigraph{Truth is confirmed by inspection and delay; falsehood by haste and uncertainty.}{Publius Tacitus}
In this chapter we will study the time delay for photons in the DSR context. This is a very important phenomenological study since, as we saw in Sec.~\ref{sec:DSR}, the only window to test DSR theories with a high energy scale of the order of the Planck energy is precisely the time delay of astroparticles.
There are some studies of time delays in the DSR framework in the literature~\cite{AmelinoCamelia:2011cv,Loret:2014uia,Mignemi:2016ilu}. In the first two works the study is carried out with the noncommutativity of $\kappa$-Minkowski, while in the third one the Snyder spacetime is considered. In the former cases a time delay for photons is found, which differs from the result obtained in the latter, where there is no such effect. Apparently, the results vary depending on the noncommutativity of the spacetime in which photons propagate.
Along this chapter, we will consider three different models. In the first one, the results depend on the basis of $\kappa$-Poincar\'{e} one works with, so that the final conclusion about the existence or not of a time delay is basis dependent~\cite{Carmona:2017oit}. We will also study a generic space-time noncommutativity and find the necessary conditions for an absence of time delay.
But one could think that something is wrong in the previous analysis since the results depend on the basis we are choosing. This means that the physics in the DSR framework would depend on the choice of coordinates on momentum space, making the results coordinate dependent. This leads us to study another formulation of time delays in such a way that the observables are defined in the physical coordinates of Ch.~\ref{chapter_locality}, and we will see that in this framework, the result of absence of time delay is basis independent in both noncommutative spacetimes, $\kappa$-Minkowski and Snyder~\cite{Carmona:2018xwm}.
Finally, we will consider another model where the time delay is studied in the framework of interactions, considering the emission and detection of a photon, and not only its free propagation~\cite{Carmona:2019oph}. In this context, we will see that one should consider not only the particles involved in the emission and detection processes, but also any other particles related to them, making this model in principle intractable. This is why a cluster decomposition principle will be suggested as a way to avoid these inconsistencies.
\section{First approach: relative locality framework}
\label{sec:td_first}
In this section we will study the first model mentioned above, previously considered in the literature~\cite{AmelinoCamelia:2011cv,Loret:2014uia}. We will see that the presence or absence of a time delay of flight for photons depends on the realization of the noncommutative spacetime (the choice of momentum basis) and also on the considered noncommutativity~\cite{Carmona:2017oit}.
\subsection{Determination of time delays}
Let us consider two photons emitted simultaneously from a source at a distance $L$ from our laboratory, and let us suppose a DDR for the high energy photon, as we can neglect this modification for the low energy one. We can consider that the DDR is such that the speed of the high energy photon is lower than $1$, so a detector in our laboratory would measure a time delay $\tilde{T}$ between them.
But this is not the only contribution to the time delay: there is another correction due to the fact that photons see different (momentum dependent) spacetimes, characterized by the functions $\varphi^\mu_\nu(k)$, as we saw in the previous chapter. In a noncommutative spacetime, translations (given by the DCL) act non-trivially, as they depend on the momentum of the particle. This is the main difference between the DSR model and the corresponding LIV one. In LIV, the only contribution to the time delay is the DDR, but in DSR, as the relativity principle has to be maintained, one needs to include the effect of non-trivial (momentum dependent) translations, whose effect in the one-particle system is encoded in a noncommutative spacetime.
Due to the effect of translations (we saw in Sec.~\ref{sec:relative_locality_intro} that they lead to non-local effects), we should qualify the statement made at the beginning of the section. When we said that the two photons are emitted simultaneously, they are so only for an observer at the source, and then not for us, placed at the laboratory. Hence, in order to study the time delay we need to consider two observers: $A$, placed at the source, who sees the two photons emitted at the same time and at the same point, and $B$, placed at the detection point.
For simplicity, and without any loss of generality, we can treat the problem in $1+1$ dimensions, so we will write for the photon its energy $E$ and its momentum $k\equiv |\vec{k}|$. Note that we will use the same notation ($k$) for the four-momentum in 3+1 and for the momentum in 1+1. Since we have neglected the modification in the dispersion relation for the low energy photon, we can also consider that it propagates in a commutative spacetime (neglecting the contribution due to the $\varphi^\mu_\nu(k)$), so the low energy photon will behave as in SR, traveling at speed $1$.
We can compute the translations relating the noncommutative coordinates of observers $A$ and $B$ directly from the usual translations of the commutative ones, $x^B=x^A-L$, $t^B=t^A-L$:
\begin{align}
\tilde{t}^B& \,=\,\varphi^0_0 t^B+\varphi^0_1 x^B=\tilde{t}^A-L(\varphi^0_0 + \varphi^0_1) \label{traslaciont} \,,\\
\tilde{x}^B& \,=\,\varphi^1_0 t^B+\varphi^1_1 x^B=\tilde{x}^A-L(\varphi^1_0 + \varphi^1_1) \label{traslacionx} \,.
\end{align}
The worldline of the high energy particle for observer $A$ is
\begin{equation}
\tilde{x}^A\,=\,\tilde{v}\,\tilde{t}^A\,,
\label{eq:AWL}
\end{equation}
since $\tilde{x}^A=0$, $\tilde{t}^A=0$ are the initial conditions of the worldline, and $\tilde{v}$ is obtained through
\begin{equation}
\tilde{v}\,=\,\frac{\lbrace{C, \tilde{x}\rbrace}}{\lbrace{C, \tilde{t}\rbrace}}\,=\,\frac{\varphi^1_0 (\partial C/\partial E)-\varphi^1_1(\partial C/\partial k)}{\varphi^0_0 (\partial C/\partial E)-\varphi^0_1(\partial C/\partial k)}\,,
\label{eq:v_tilde}
\end{equation}
where the minus signs appear due to the fact that $k_1=-k^1=-k$, and so $\partial C/\partial k_1=-\partial C/\partial k$.
We can now compute the observer $B$ worldline by applying Eqs.~\eqref{traslaciont}-\eqref{traslacionx} to Eq.~\eqref{eq:AWL}:
\begin{equation}
\tilde{x}^B=\tilde{x}^A-L(\varphi^1_0 + \varphi^1_1)=\tilde{v}\,[\tilde{t}^B+L(\varphi^0_0 + \varphi^0_1)]-L(\varphi^1_0 + \varphi^1_1)\,.
\label{eq:BWL}
\end{equation}
The worldline for observer $B$ ends at $\tilde{x}^B=0$.\footnote{We assume that the detector is at rest, so that the spatial location of the detection is the same for both photons.} The time delay $\tilde{T}\equiv \tilde{t}^B(\tilde{x}^B=0)$ can be obtained from Eq.~\eqref{eq:BWL}, giving
\begin{equation}
\begin{split}
\tilde{T}\,=\,&\tilde{v}^{-1} \,L(\varphi^1_0 + \varphi^1_1)-L(\varphi^0_0 + \varphi^0_1)\\
=\,&L\left[(\varphi^1_0 + \varphi^1_1)\frac{\varphi^0_0 (\partial C/\partial E)-\varphi^0_1(\partial C/\partial k)}{\varphi^1_0 (\partial C/\partial E)-\varphi^1_1(\partial C/\partial k)}-(\varphi^0_0 + \varphi^0_1)\right]\,,
\end{split}
\label{eq:time-delay}
\end{equation}
since the low energy photon arrives at $\tilde{t}^B=0$. This equation is valid not only for photons but for any relativistic particle. Also, one can check that the SR results are recovered in all cases just by taking the limit $\Lambda\to \infty$.
\subsection{Momenta as generators of translations in spacetime}
We can write the following Poisson brackets with the functions $\varphi^\mu_\nu$
\begin{equation}
\{E,\tilde{t}\}\,=\,\varphi^0_0\,, \quad \quad \{E,\tilde{x}\}=\varphi^1_0\,, \quad \quad
\{k,\tilde{t}\}\,=\,-\varphi^0_1\,, \quad \quad \{k,\tilde{x}\}=-\varphi^1_1\,,
\end{equation}
where again the minus signs appear since $k_1=-k^1=-k$.
Then, we can express Eqs.~\eqref{traslaciont}-\eqref{traslacionx} in the following way
\begin{align}
\tilde{t}^B&\,=\,\tilde{t}^A-L\{E,\tilde{t}\}+L\{k,\tilde{t}\} \,,\\
\tilde{x}^B&\,=\,\tilde{x}^A-L\{E,\tilde{x}\}+L\{k,\tilde{x}\} \,.
\label{eq:transl}
\end{align}
These transformations are the translations generated by the momentum in the noncommutative spacetime, even if the $(\tilde{x},k)$ phase space is non-canonical. This is the procedure used in~\cite{AmelinoCamelia:2011cv,Loret:2014uia,Mignemi:2016ilu}.
Now we can write the time delay formula of Eq.~\eqref{eq:time-delay} in terms of Poisson brackets
\begin{equation}
\tilde{T}\,=\,\left(L\{E,\tilde{x}\}-L\{k,\tilde{x}\}\right) \cdot \left(\frac{(\partial C/\partial E)\{E,\tilde{t}\}+(\partial C/\partial k)\{k,\tilde{t}\}}{(\partial C/\partial E)\{E,\tilde{x}\}+(\partial C/\partial k)\{k,\tilde{x}\}}\right)-L\{E,\tilde{t}\}+L\{k,\tilde{t}\}\,.
\label{eq:genTD}
\end{equation}
For the simple case of a commutative spacetime, as we have $\{E,t\}=1$, $\{E,x\}=0$, $\{k,t\}=0$, $\{k,x\}=-1$, Eq.~\eqref{eq:genTD} gives
\begin{equation}
T=-L\left(1+\frac{\partial C/\partial E}{\partial C/\partial k}\right)\,.
\label{eq:canTD}
\end{equation}
When the dispersion relation is $C(k)=E^2-k^2$, one obtains the result of SR, $T=-L(1-E/k)$, which is zero for photons.
In order to obtain the first order approximation of Eq.~\eqref{eq:genTD}, keeping only the leading terms, one can write the Poisson brackets as their usual values plus an infinitesimal deformation of order $\epsilon$, $\{E,\tilde{t}\}=1+(\{E,\tilde{t}\}-1)=1+\mathcal{O}(\epsilon)$, $\{k,\tilde{x}\}=-1+(\{k,\tilde{x}\}+1)=-1+\mathcal{O}(\epsilon)$, $\{E,\tilde{x}\}=\mathcal{O}(\epsilon)$, $\{k,\tilde{t}\}=\mathcal{O}(\epsilon)$, and also $(\partial C/\partial E)/(\partial C/\partial k)=-E/k+\mathcal{O}(\epsilon)$, giving
\begin{equation}
\frac{\tilde{T}}{L} \approx -\left(1-\frac{E}{k}\right)-\left(\frac{\partial C/\partial E}{\partial C/\partial k}+\frac{E}{k}\right)-
\left(1-\frac{E}{k}\right)\left(\{E,\tilde{t}\}-1\right) + \left(1-\frac{E}{k}\right)\frac{E}{k} \,\{E,\tilde{x}\}\,.
\label{eq:genTDaprox}
\end{equation}
The first term is the usual time delay in SR, the second one takes into account the effect due to the DDR, and the last two contributions reflect the deformed Heisenberg algebra involving $E$. There are no contributions of the Poisson brackets involving $k$ because they cancel out in the computation.
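As a check, in the commutative case ($\{E,\tilde{t}\}=1$, $\{E,\tilde{x}\}=0$) the last two terms of Eq.~\eqref{eq:genTDaprox} vanish and one recovers Eq.~\eqref{eq:canTD},
\begin{equation}
\frac{\tilde{T}}{L} \,=\, -\left(1-\frac{E}{k}\right)-\left(\frac{\partial C/\partial E}{\partial C/\partial k}+\frac{E}{k}\right) \,=\, -\left(1+\frac{\partial C/\partial E}{\partial C/\partial k}\right)\,.
\end{equation}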
In the next two subsections, we will use this formula for different bases of $\kappa$-Minkowski and Snyder spacetimes.
\subsection{Photon time delay in \texorpdfstring{$\kappa$}{Lg}-Minkowski spacetime}
As we saw in Sec.~\ref{sec:QGT}, $\kappa$-Minkowski spacetime is defined by
\begin{equation}
[\tilde{x}^0,\tilde{x}^i]\,=\,-\frac{i}{\Lambda}\tilde{x}^i \,, \quad \quad [\tilde{x}^i,\tilde{x}^j]\,=\,0\,,
\label{eq:kM}
\end{equation}
and the non-vanishing Poisson bracket in $(1+1)$-dimensional spacetime is
\begin{equation}
\{\tilde{t},\tilde{x}\}\,=\,-\frac{1}{\Lambda}\tilde{x} \,.
\end{equation}
Now we are going to calculate the time delay for three different (well known) choices of momentum coordinates: the bicrossproduct, the classical and the Magueijo-Smolin basis.
\subsubsection{Bicrossproduct basis}
The DDR in this basis at leading order in $\Lambda^{-1}$ is
\begin{equation}
C(k)\,=\,k_0^2-\vec{k}^2-\frac{1}{\Lambda} k_0 \vec{k}^2\equiv m^2\,,
\label{eq:bicrossCasimir}
\end{equation}
and the Heisenberg algebra in $1+1$ dimensions is given by~\eqref{eq:pairing_intro}
\begin{equation}
\{E,\tilde{t}\}\,=\,1 \,,\quad \quad \{E,\tilde{x}\}\,=\,0\,, \quad \quad \{k,\tilde{t}\}\,=\,-\frac{k}{\Lambda}\,, \quad \quad \{k,\tilde{x}\}\,=\,-1\,.
\end{equation}
Using Eq.~\eqref{eq:bicrossCasimir} one finds
\begin{equation}
\frac{\partial C/\partial E}{\partial C/\partial k}+\frac{E}{k}\,=\,\frac{1}{\Lambda}\left(\frac{E^2}{k}+\frac{k}{2}\right),
\label{eq:bicrossdispterm}
\end{equation}
and then Eq.~\eqref{eq:genTDaprox} gives
\begin{equation}
\frac{\tilde{T}[\text{bicross}]}{L}\,=\,-\left(1-\frac{E}{k}\right)-\frac{1}{\Lambda}\left(\frac{E^2}{k}+\frac{k}{2}\right)\,.
\label{eq:bicrossTD2}
\end{equation}
In the case of photons, Eq.~\eqref{eq:bicrossCasimir} leads to $E=k\,(1+k/2\Lambda)$ at first order, so one obtains a time delay of order $Lk/\Lambda$ for the high energy photon with respect to the low energy one. This result was previously obtained in Refs.~\cite{AmelinoCamelia:2011cv,Loret:2014uia}, leading to the conclusion that there is an energy dependent time delay for photons in $\kappa$-Minkowski spacetime.
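The first-order relation between the energy and the momentum of the photon used above follows from setting $C(k)=0$ in Eq.~\eqref{eq:bicrossCasimir}:
\begin{equation}
E^2-k^2 \,=\, \frac{E k^2}{\Lambda} \,\simeq\, \frac{k^3}{\Lambda} \quad \Rightarrow \quad E-k \,\simeq\, \frac{k^2}{2\Lambda}\,, \quad \text{i.e.} \quad E \,=\, k\left(1+\frac{k}{2\Lambda}\right)\,.
\end{equation}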
But we are going to see that this result depends on the choice of basis one is working with.
\subsubsection{Classical basis}
Another choice of basis in $\kappa$-Poincaré is the classical basis (studied in Ch.~\ref{chapter_second_order}), with the same dispersion relation of SR
\begin{equation}
C(k)\,=\,k_0^2-\vec{k}^2\,,
\label{eq:classCasimir}
\end{equation}
and the Poisson brackets in $1+1$ dimensions are at leading order~\cite{KowalskiGlikman:2002jr}
\begin{equation}
\{E,\tilde{t}\}\,=\,1 \,,\quad \quad \{E,\tilde{x}\}\,=\,-\frac{k}{\Lambda} \,,\quad \quad \{k,\tilde{t}\}\,=\,0 \,,\quad \quad \{k,\tilde{x}\}\,=\,-\left(1+\frac{E}{\Lambda}\right)\,.
\label{eq:PP}
\end{equation}
Eq.~\eqref{eq:genTDaprox} gives in this case
\begin{equation}
\frac{\tilde{T}[\text{class}]}{L}\,=\,-\left(1-\frac{E}{k}\right)\left(1+\frac{E}{\Lambda}\right)\,.
\label{eq:classTD}
\end{equation}
Then, for massless particles ($E=k$), there is an absence of time delay in this basis, despite the noncommutativity of the spacetime.
\subsubsection{Magueijo-Smolin basis}
Another basis described in Ref.~\cite{KowalskiGlikman:2002jr} is the Magueijo-Smolin basis. The DDR at first order in this basis is
\begin{equation}
C(k)\,=\,k_0^2-\vec{k}^2+\frac{1}{\Lambda}k_0^3-\frac{1}{\Lambda}k_0\vec{k}^2\,,
\label{eq:MG-SCasimir}
\end{equation}
and the Heisenberg algebra in $1+1$ dimensions at leading order is
\begin{equation}
\{E,\tilde{t}\}\,=\,\left(1-\frac{2E}{\Lambda}\right)\,, \quad \quad \{E,\tilde{x}\}\,=\,-\frac{k}{\Lambda}\,, \quad \quad \{k,\tilde{t}\}\,=\,-\frac{k}{\Lambda}\,, \quad \quad \{k,\tilde{x}\}\,=\,-1\,.
\label{eq:PPM-S}
\end{equation}
From Eq.~\eqref{eq:MG-SCasimir} one can see
\begin{equation}
\frac{\partial C/\partial E}{\partial C/\partial k}+\frac{E}{k}\,=\,\left(1-\frac{E}{k}\right)\frac{E+k}{2\Lambda}\,,
\end{equation}
and then
\begin{equation}
\begin{split}
\frac{\tilde{T}[\text{M-S}]}{L}&\,=\,-\left(1-\frac{E}{k}\right)-\left(1-\frac{E}{k}\right)\frac{E+k}{2\Lambda}+\left(1-\frac{E}{k}\right)\frac{2E}{\Lambda}-\left(1-\frac{E}{k}\right)\frac{E}{\Lambda}\\
&\,=\,-\left(1-\frac{E}{k}\right)\left[1-\frac{E-k}{2\Lambda}\right]\,.
\label{eq:M-STD2}
\end{split}
\end{equation}
We find that, as in the previous basis, there is no time delay for massless particles ($E=k$).
\subsection{Photon time delay in Snyder spacetime}
In Sec.~\ref{sec:QGT} we showed that the noncommutative Snyder spacetime is
\begin{equation}
[\tilde{x}_\mu,\tilde{x}_\nu]\,=\,\frac{i}{\Lambda^2} J_{\mu\nu}\,,
\end{equation}
with $J_{\mu\nu}$ the generators of the Lorentz algebra.
Also in this case, there are different bases (or realizations) of the same algebra in phase space. Here, we will discuss the time delay effect in the Snyder and Maggiore representations.
\subsubsection{Snyder representation}
In the original representation proposed by Snyder, the Heisenberg algebra in $1+1$ dimensions is
\begin{equation}
\{E,\tilde{t}\}\,=\,\left(1+\frac{E^2}{\Lambda^2}\right)\,, \quad \quad \{E,\tilde{x}\}\,=\,\frac{Ek}{\Lambda^2}\,, \quad \quad \{k,\tilde{t}\}\,=\,\frac{Ek}{\Lambda^2}\,, \quad \quad \{k,\tilde{x}\}\,=\,-\left(1-\frac{k^2}{\Lambda^2}\right),
\label{eq:PPSny}
\end{equation}
and as the Casimir is $(E^2-k^2)$, one finds
\begin{equation}
\frac{\partial C/\partial E}{\partial C/\partial k}+\frac{E}{k}\,=\,0\,.
\end{equation}
Then Eq.~\eqref{eq:genTDaprox} gives
\begin{equation}
\frac{\tilde{T}[\text{Snyder}]}{L}\,=\,-\left(1-\frac{E}{k}\right)-\left(1-\frac{E}{k}\right)\frac{E^2}{\Lambda^2}+\left(1-\frac{E}{k}\right)\frac{E}{k}\frac{Ek}{\Lambda^2}\,=\,-\left(1-\frac{E}{k}\right),
\label{eq:TD-Snyder}
\end{equation}
so for the case of photons there is no time delay.
\subsubsection{Maggiore representation}
The Heisenberg algebra in the Maggiore representation~\cite{Maggiore:1993kv} at leading order is
\begin{equation}
\{E,\tilde{t}\}\,=\,1+\frac{E^2-k^2}{2\Lambda^2}\,,\quad \quad \{E,\tilde{x}\}\,=\,0\,, \quad \quad \{k,\tilde{t}\}\,=\,0\,, \quad \quad \{k,\tilde{x}\}\,=\,-1-\frac{E^2-k^2}{2\Lambda^2}\,.
\label{eq:PPMagg}
\end{equation}
Also in this representation the Casimir is $(E^2-k^2)$, so again
\begin{equation}
\frac{\partial C/\partial E}{\partial C/\partial k}+\frac{E}{k}\,=\,0,
\end{equation}
and then from Eq.~\eqref{eq:genTDaprox} one obtains
\begin{equation}
\frac{\tilde{T}[\text{Maggiore}]}{L}\,\,=\,\,-\left(1-\frac{E}{k}\right)-\left(1-\frac{E}{k}\right)\frac{E^2-k^2}{2\Lambda^2}\,\,=\,\,-\left(1-\frac{E}{k}\right)\left[1+\frac{E^2-k^2}{2\Lambda^2}\right] ,
\label{eq:TD-Maggiore}
\end{equation}
so in the Maggiore representation there is no time delay for photons either.
The absence of time delay for photons was also claimed in a previous paper~\cite{Mignemi:2016ilu} through a different procedure. These results are particular cases of our general expression, Eq.~\eqref{eq:genTD}.
\subsection{Interpretation of the results for time delays}
One can see that in all cases considered before the time delay is proportional to
\begin{equation}
L\left[1+(\partial C/\partial E)/(\partial C/\partial k)\right]\,,
\label{eq:factor}
\end{equation}
i.e. to $(L/v-L)$, where $v$ is the velocity of propagation of the high energy particle in the commutative spacetime,
\begin{equation}
v\,\,=\,\,-\frac{\partial C/\partial k}{\partial C/\partial E}\,.
\label{velocidadC}
\end{equation}
This result can be read from Eq.~\eqref{eq:time-delay}:
\begin{equation}
\begin{split}
\tilde{T}\,\,=\,\,L\left[(\varphi^1_0+\varphi^1_1)\frac{\varphi^0_0+\varphi^0_1 v}{\varphi^1_0+\varphi^1_1 v}-(\varphi^0_0+\varphi^0_1)\right]\,\,=\,&\,\frac{L(\varphi^0_0\varphi^1_1-\varphi^1_0\varphi^0_1)}{\varphi^1_0+\varphi^1_1 v}(1-v)\\
\,=\,&\,\frac{\varphi^0_0\varphi^1_1-\varphi^1_0\varphi^0_1}{\varphi^1_1+\varphi^1_0/v}L\left(\frac{1}{v}-1\right)\,.
\end{split}
\end{equation}
Then, the leading contribution to the time delay comes only from the first terms in the power expansion of the DDR. This is in agreement with what we have found in the previous subsections: the only basis considered here where a time delay is present is the bicrossproduct realization of $\kappa$-Poincar\'{e}, which is the only one with an energy dependent velocity for photons.
\section{Second approach: locality of interactions}
\label{sec:NC}
We have seen in the previous section that the existence or absence of a time delay is basis dependent, and also depends on the noncommutativity of the considered spacetime. The first dependence is particularly problematic, since one would expect the same results to be obtained independently of the choice of momentum coordinates. This leads us to consider another model for the propagation of particles.
\subsection{Presentation of the model}
The main ingredient of this model is that all observables are defined in the local (physical) coordinates of Ch.~\ref{chapter_locality}. This means that, instead of defining the translations through the canonical coordinates as in the previous section,
\begin{equation}
\xi^\mu_B\,=\,\xi^\mu_A+a^\mu\,,
\label{eq:xi}
\end{equation}
we consider the new noncommutative coordinates defined by
\begin{equation}
\zeta^\mu\,=\,\xi^\nu \varphi^\mu_\nu(\mathcal{P}/\Lambda),
\label{eq:zeta-xi}
\end{equation}
where $\mathcal{P}$ is the total momentum of the interaction\footnote{In principle, one should consider all the momenta that intervene in the processes of emission and detection. Nonetheless, we can make a simplification by considering that the $\varphi$ function depends only on the momentum of the detected particle. This will be treated in more detail in the next section, in which a cluster decomposition principle will be considered.}, and define the translations in these coordinates: the two observers $A$ and $B$ are connected by a translation with parameter $b$
\begin{equation}
\zeta^\mu_B\,=\,\zeta^\mu_A+b^\mu\,,
\label{eq:zeta}
\end{equation}
where
\begin{equation}
a^\nu \varphi^\mu_\nu(\mathcal{P}/\Lambda) \,=\, b^\mu\,.
\end{equation}
It is obvious that the results obtained with this relation between observers will be different from those obtained in the previous section.
\subsection{Computation of the time delay expression}
\label{sec:absence_time_delay}
In this subsection, we will compute the time delay defined by Eq.~\eqref{eq:zeta}. We consider again that both observers are separated by a distance $L$, but in contrast with the previous model, $L$ is the distance in the noncommutative space. We have then (we are still working in 1+1 dimensions)
\begin{equation}
\zeta^1_B\,=\,\zeta^1_A-L\,.
\label{eq:zetaTD}
\end{equation}
We can now compute the time delay. For observer $B$, the detection of the photon takes place at $\tilde{x}^1_d=0$ and the emission at $\tilde{x}^1_{e}=-L$, which is at the origin of spatial coordinates for observer $A$, according to Eq.~\eqref{eq:zetaTD}. In fact, we see that, since interactions are local, we do not need to consider two different observers.
Then, the difference in time coordinates from the detection to the emission is
\begin{equation}
\tilde{x}^0_{d}\,=\,\tilde{x}^0_{e}+\frac{L}{\tilde{v}}\,,
\label{eq:TD1}
\end{equation}
where $\tilde{v}$ is the velocity of the photon in the noncommutative spacetime given by Eq.~\eqref{eq:v_tilde}. So the time delay $\tilde{T}$ is given by
\begin{equation}
\tilde{T}\doteq \tilde{x}^0_d - \tilde{x}^0_e - L \,=\,L\left(\frac{1}{\tilde{v}}-1\right)\,.
\label{eq:TD-2}
\end{equation}
One can check that the velocity defined in the noncommutative spacetime is independent of the choice of basis:
\begin{equation}
\lbrace{C(k),\tilde{x}^\mu\rbrace}\,=\,\frac{\partial C(k)}{\partial k_\nu}\varphi^\mu_\nu(k)\,=\,\frac{\partial C^{\prime}(k^{\prime})}{\partial k^{\prime}_\sigma}\frac{\partial k^{\prime}_\sigma}{\partial k_\nu}\varphi^\mu_\nu(k)\,=\,\frac{\partial C^{\prime}(k^{\prime})}{\partial k^{\prime}_\sigma}\varphi^{\prime \mu}_\sigma(k^{\prime})\,=\,\lbrace{C^{\prime}(k^{\prime}),\tilde{x}^{\prime \mu}\rbrace}'\,,
\label{eq:velocity_independence}
\end{equation}
where we have used the relation between $x$ and $x'$ given by the canonical transformation
\begin{equation}
k_\mu \,=\, f_\mu(k')\,,\quad x^\mu \,=\, x^{\prime\nu} g^\mu_\nu(k')\,,
\end{equation}
for any set of momentum dependent functions $f_\mu$, with
\begin{equation}
g^\mu_\rho(k') \frac{\partial f_\nu(k')}{\partial k'_\rho} \,=\, \delta^\mu_\nu \,,
\end{equation}
the transformation rule of $\varphi$ (see Appendix~\ref{appendix:locality})
\begin{equation}
\varphi^{\prime\mu}_\rho(k') \doteq g^\nu_\rho(k') \varphi^\mu_\nu(f(k')) \,=\, \frac{\partial k'_\rho}{\partial k_\nu} \varphi^\mu_\nu(k)\,,
\end{equation}
the fact that $C(k)=C^{\prime}(k^{\prime})$, and we have denoted by
\begin{equation}
\lbrace{A\,,\,B\rbrace}^{\prime}\,=\,\frac{\partial A}{\partial k^{\prime}_\rho}\frac{\partial B}{\partial x^{\prime^\rho}}-\frac{\partial A}{\partial x^{\prime \rho}}\frac{\partial B}{\partial k^{\prime}_\rho}
\end{equation}
the Poisson brackets in the new canonical coordinates. This reveals that, in the physical coordinates, the velocity is the same independently of the canonical coordinates we use. Since the time delay is only a function of $L$ and $\tilde{v}$, whether there is a time delay or not will be independent of the basis in which one makes the computation.
In particular, one can obtain $\tilde{v}$ in the bicrossproduct basis with Eq.~\eqref{eq:v_tilde}, finding that $\tilde{v}=1$,\footnote{We will understand why this happens from a geometrical point of view in Ch.~\ref{ch:cotangent}.} and then that there is no time delay in $\kappa$-Poincar\'{e}. This differs from the results of the previous section, where the time delay was basis dependent, which supports the use of this model against the previous one, since physics should not depend on the variables one works with.
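Explicitly, in the bicrossproduct basis one has $\varphi^0_0=1$, $\varphi^1_0=0$, $\varphi^0_1=k/\Lambda$, $\varphi^1_1=1$, and Eq.~\eqref{eq:v_tilde}, together with the DDR~\eqref{eq:bicrossCasimir}, gives for a photon ($E=k(1+k/2\Lambda)$)
\begin{equation}
\tilde{v} \,=\, \frac{-\,\partial C/\partial k}{\partial C/\partial E-(k/\Lambda)\,\partial C/\partial k} \,=\, \frac{2k+2Ek/\Lambda}{2E+k^2/\Lambda} \,=\, 1+\mathcal{O}(1/\Lambda^2)\,.
\end{equation}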
Also, one can compute $\tilde{v}$ for the Snyder noncommutativity, obtaining the same result, $\tilde{v}=1$. The fact that we obtain the same result for these two different noncommutativities can be easily understood, since in both models the functions $\varphi^\mu_\nu(k)$, viewed as a tetrad in a de Sitter momentum space as in Ch.~\ref{chapter_curved_momentum_space}, represent the same curved momentum space. This leads to the result that there is no observable effect on the propagation of free particles in a flat spacetime due to a de Sitter momentum space.
\section{Third approach: multi-interaction process}
\label{sec:multi-interaction_td}
In the previous sections, we have considered that particles propagate in the physical spacetime and that there is a simple way to relate observers, Eqs.~\eqref{traslaciont}-\eqref{traslacionx} in Sec.~\ref{sec:td_first} and Eq.~\eqref{eq:zetaTD} in Sec.~\ref{sec:NC}. This is somewhat tricky: the physical coordinates were introduced in order to have local interactions for all observers, but in the previous models we have studied only the propagation, not the interactions leading to the emission and detection of the photon. This approximation can be made if one assumes a cluster decomposition principle~\cite{Carmona:2019oph}, eliminating the dependence on all the other momenta involved in the emission and detection of the particle.
Moreover, in the model considered in Sec.~\ref{sec:NC}, the translation Eq.~\eqref{eq:zetaTD} is not a symmetry of the action
\begin{equation}
S \,=\, \int d\tau \left[\dot{x}^\mu k_\mu - N(\tau) C(k)\right] \,=\, \int d\tau \left[-\tilde{x}^\alpha \varphi^\mu_\alpha(k) \dot{k}_\mu - N(\tau) C(k)\right]\,.
\end{equation}
The translations can be identified with $x^\mu \to x^\mu + a^\mu$, which leave the action invariant, as considered in Sec.~\ref{sec:td_first} (previously studied in Refs.~\cite{AmelinoCamelia:2011cv,Loret:2014uia}). This differs from our second model, where we used $\tilde{x}^\mu \to \tilde{x}^\mu + a^\mu$ as a translation between observers. This is not a symmetry of the action, but it is a transformation that leaves invariant the set of equations of motion, and then the set of solutions.
Another way to study the propagation of particles is to consider a multi-interaction process: one interaction defines the emission of a particle and another one the detection. This is the subject of this section.
\subsection{Two interactions}
A model with multi-interactions was proposed in Ref.~\cite{AmelinoCamelia:2011nt} as a way to study a possible time delay. A first interaction was considered, the emission of a high energy photon, then its propagation, and finally another interaction defining the detection. A simplified model was considered, in which a pion decays into two photons, and one of them (the high energy one) interacts with a particle in the detector producing two particles.
Here, we will consider the model we proposed in Ref.~\cite{Carmona:2019oph}: a process with two two-particle interactions, with three particles in the ingoing state with phase-space coordinates $(x_{-(i)}, p^{-(i)})$ and another three particles in the outgoing state with phase-space coordinates $(x_{+(j)}, p^{+(j)})$. The two particles participating in the first interaction are labeled with $i=1,2$, and the other two particles involved in the second interaction with $j=2,3$. There is another particle produced in the first interaction, with phase-space coordinates $(y, q)$ and participating in the second interaction, which will play the role of the detected photon. The action of Eq.~\eqref{eq:action} we used in Sec.~\ref{sec:relative_locality_intro} in order to see how a DRK modifies the nature of spacetime is particularized to
\begin{align}
S \,=&\, \int_{-\infty}^{\tau_1} d\tau \sum_{i=1,2} \left[x_{-(i)}^\mu(\tau) \dot{p}_\mu^{-(i)}(\tau) + N_{-(i)}(\tau) \left[C(p_{-(i)}(\tau)) - m_{-(i)}^2\right]\right] \nonumber \\
& + \int_{-\infty}^{\tau_2} d\tau \left[x_{-(3)}^\mu(\tau) \dot{p}_\mu^{-(3)}(\tau) + N_{-(3)}(\tau) \left[C(p_{-(3)}(\tau)) - m_{-(3)}^2\right]\right] \nonumber \\ & + \int_{\tau_1}^{\tau_2} d\tau \left[y^\mu(\tau) \dot{q}_\mu(\tau) + N(\tau) \left[C(q(\tau)) - m^2\right]\right] \nonumber \\
& + \int_{\tau_1}^{\infty} d\tau \left[x_{+(1)}^\mu(\tau) \dot{p}_\mu^{+(1)}(\tau) + N_{+(1)}(\tau) \left[C(p_{+(1)}(\tau)) - m_{+(1)}^2\right]\right] \nonumber \\ & + \int_{\tau_2}^{\infty} d\tau \sum_{j=2,3} \left[x_{+(j)}^\mu(\tau) \dot{p}_\mu^{+(j)}(\tau) + N_{+(j)}(\tau) \left[C(p_{+(j)}(\tau)) - m_{+(j)}^2\right]\right] \nonumber \\
& + \xi^\mu \left[\left(p^{+(1)}\oplus q\oplus p^{-(3)}\right)_\mu - \left(p^{-(1)}\oplus p^{-(2)}\oplus p^{-(3)}\right)_\mu\right](\tau_1) \nonumber \\ & + \chi^\mu \left[\left(p^{+(1)}\oplus p^{+(2)}\oplus p^{+(3)}\right)_\mu - \left(p^{+(1)}\oplus q\oplus p^{-(3)}\right)_\mu\right](\tau_2),
\end{align}
where we denote by $(k\oplus p\oplus q)$ the total four-momentum of a three-particle system with four-momenta $(k, p, q)$.
The extrema of the action satisfy the set of equations
\begin{equation}
\dot{p}^{-(i)} \,=\, \dot{p}^{+(j)} \,=\, \dot{q} \,=\, 0, \quad
\frac{\dot{x}_{-(i)}^\mu}{N_{-(i)}} \,=\, \frac{\partial C(p^{-(i)})}{\partial p^{-(i)}_\mu}, \quad \frac{\dot{x}^\mu_{+(j)}}{N_{+(j)}} \,=\, \frac{\partial C(p^{+(j)})}{\partial p_\mu^{+(j)}}, \quad \frac{\dot{y}^\mu}{N} \,=\, \frac{\partial C(q)}{\partial q_\mu}.
\end{equation}
The DRK is given by
\begin{align}
& C(p^{-(i)}) \,=\, m_{-(i)}^2, \quad\quad C(p^{+(j)}) \,=\, m_{+(j)}^2, \quad\quad C(q) \,=\, m^2, \nonumber \\
& p^{-(1)}\oplus p^{-(2)}\oplus p^{-(3)} \,=\, p^{+(1)}\oplus q\oplus p^{-(3)} \,=\, p^{+(1)}\oplus p^{+(2)} \oplus p^{+(3)},
\label{dk2}
\end{align}
and we also have
\begin{align}
& x^\mu_{-(i)}(\tau_1) \,=\, \xi^\nu \frac{\partial(p^{-(1)}\oplus p^{-(2)}\oplus p^{-(3)})_\nu}{\partial p^{-(i)}_\mu}, \:\: (i=1,2)\,, \nonumber \\
&x^\mu_{-(3)}(\tau_2) \,=\, \chi^\nu \frac{\partial(p^{+(1)}\oplus q\oplus p^{-(3)})_\nu}{\partial p^{-(3)}_\mu}\,,\quad x^\mu_{+(1)}(\tau_1) \,=\, \xi^\nu \frac{\partial(p^{+(1)}\oplus q\oplus p^{-(3)})_\nu}{\partial p^{+(1)}_\mu}\,,\nonumber \\
& x_{+(j)}^\mu(\tau_2) \,=\, \chi^\nu \frac{\partial(p^{+(1)}\oplus p^{+(2)} \oplus p^{+(3)})_\nu}{\partial p^{+(j)}_\mu}, \:\: (j=2,3), \nonumber \\
& y^\mu(\tau_1) \,=\, \xi^\nu \frac{\partial(p^{+(1)}\oplus q\oplus p^{-(3)})_\nu}{\partial q_\mu}, \quad\quad y^\mu(\tau_2) \,=\, \chi^\nu \frac{\partial(p^{+(1)}\oplus q\oplus p^{-(3)})_\nu}{\partial q_\mu}.
\end{align}
With these equations we can determine the four-momentum $q$ and impose some restrictions between the other momenta. We also have relations between the four-velocities of the particles and their momenta.
There is a new ingredient due to the presence of two interactions: on the one hand, from the equation for the four-velocity of the photon, one finds
\begin{equation}
y^\mu(\tau_2) - y^\mu(\tau_1) \,=\, \frac{\partial C(q)}{\partial q_\mu} \,\int_{\tau_1}^{\tau_2} d\tau\, N(\tau),
\end{equation}
and, from the conservation laws of the emission and detection interactions for the photon, one has
\begin{equation}
y^\mu(\tau_2) - y^\mu(\tau_1) \,=\, \left(\chi^\nu - \xi^\nu\right) \,\frac{\partial(p^{+(1)}\oplus q\oplus p^{-(3)})_\nu}{\partial q_\mu}.
\end{equation}
We find, combining both expressions,
\begin{equation}
\left(\chi^\nu - \xi^\nu\right) \,\frac{\partial(p^{+(1)}\oplus q\oplus p^{-(3)})_\nu}{\partial q_\mu} \,=\, \frac{\partial C(q)}{\partial q_\mu} \,\int_{\tau_1}^{\tau_2} d\tau\, N(\tau).
\end{equation}
Therefore, the difference of coordinates of the two interaction vertices is fixed and then, we only have a set of solutions depending on four arbitrary constants ($\xi^\mu$) as in the single-interaction process case, which reflects the invariance under translations.
There is one observer placed at the emission of the photon (for which $\xi^\mu=0$) that sees the emission as local, but not the detection, and another observer at the detection (for which $\chi^\mu=0$), related to the former by a translation, that sees the detection as local but not the emission. For any other observer, neither of the two interactions is local.
\subsection{Comparison with the previous models}
This proposal has in common with the first model that the velocity of the photon $v^i \,=\, \dot{y}^i/\dot{y}^0 \,=\, (\partial C(q)/\partial q_i)/(\partial C(q)/\partial q_0)$ has the same momentum dependence, which is determined by the DDR. But in order to determine the time of flight, one has to take into account that the emission and detection points of the photon ($y^\mu(\tau_1)$ and $y^\mu(\tau_2)$ respectively), and then the trajectory, depend on all the momenta involved in both interactions. So the spectral and timing distribution of photons coming from a short GRB would differ from what one expects in SR, but in a very complicated and unpredictable way, since we do not have access to all the details of the detection and emission interactions. Moreover, there is a consistency problem: if we consider the emission and propagation of the photon, we should also consider the processes in which all the particles involved were produced, and so on, so that one would need to know the conditions of every particle in the universe.
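To make this momentum dependence explicit, the following Python sketch (an illustration added here, not part of the original analysis) evaluates $v=(\partial C/\partial q_1)/(\partial C/\partial q_0)$ in $1{+}1$ dimensions for a toy DDR $C(q)=q_0^2-q_1^2+q_0 q_1^2/\Lambda$; both the form of the DDR and the numerical values are illustrative assumptions of the sketch.
\begin{verbatim}
import sympy as sp

q0, q1, Lam = sp.symbols('q0 q1 Lambda', positive=True)

# toy deformed dispersion relation (an illustrative assumption)
C = q0**2 - q1**2 + q0*q1**2/Lam

# photon velocity as in the text: v = (dC/dq_1)/(dC/dq_0)
v = sp.diff(C, q1) / sp.diff(C, q0)

# put the photon on shell, C(q0, q1) = 0, keeping the positive-energy root
roots = sp.solve(sp.Eq(C, 0), q0)
E = max(float(r.subs({q1: 1, Lam: 100})) for r in roots)

# the modulus of v deviates from 1 by a term of order q1/Lambda
print(float(v.subs({q0: E, q1: 1, Lam: 100})))  # ~ -0.990
\end{verbatim}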
If a cluster decomposition principle holds in the DSR framework, one can take the second model as valid, since the emission takes place very far from the detection. This is, on the other hand, the most natural choice, since we have seen that in this approach the same velocity holds for all photons, independently of their energy, when one uses the physical coordinates in which the interactions are local.
We conclude that there are different perspectives on the time delay problem, which deserve further investigation.
As we have seen, there are different models in the DSR framework which do not produce a time delay for photons, so the restrictions on the high energy scale that parametrizes DSR based on such experiments are inconclusive. Since this kind of observation is the only possible measurable effect for energies smaller than the high energy scale, this opens up the possibility that the scale is orders of magnitude lower than expected, so that observable consequences could appear in high energy particle accelerators. In the next chapter we will consider such a possibility, imposing constraints on the scale from the data obtained in them.
\chapter{Twin Peaks: beyond SR production of resonances}
\label{chapter_twin}
\ifpdf
\graphicspath{{Chapter6/Figs/Raster/}{Chapter6/Figs/PDF/}{Chapter6/Figs/}}
\else
\graphicspath{{Chapter6/Figs/Vector/}{Chapter6/Figs/}}
\fi
\epigraph{Harry, I’m going to let you in on a little secret. Every day, once a day, give yourself a present. Don’t plan it, don’t wait for it, just let it happen.}{Dale Cooper, Twin Peaks}
We have seen in the last chapter that there is no time delay of photons with different energies in many models inside the DSR context. Since this is the only phenomenological window to quantum gravity effects due to a deformed relativistic kinematics, the strong constraints based on this kind of experiment may lose their validity, and then one can consider that the high energy scale parametrizing a deviation from SR could be orders of magnitude smaller than expected, i.e. than the Planck energy. In this chapter, we consider the possibility of a very low energy (with respect to the Planck energy) scale that characterizes modifications to SR in the framework of DSR, and that this modification could be observed in accelerator physics depending on the value of the scale~\cite{Albalate:2018kcf}. This has been done previously in the canonical noncommutativity we mentioned in the introduction for linear accelerators~\cite{Hewett:2000zp,Hewett:2001im,Mathews:2000we} and for hadron colliders~\cite{Alboteanu:2006hh,Ohl:2010zf,YaserAyazi:2012ni}, obtaining in the latter works a lower bound for the high energy scale of the order of TeV (for a review of canonical noncommutative phenomenology see~\cite{Hinchliffe:2002km}).
In this chapter, we will study the simple process of scattering of two particles, taking $Z$ production at the Large Electron-Positron collider (LEP) as an example. We will obtain a remarkable effect: two correlated peaks, which we have baptized as \emph{twin peaks}, associated with a single resonance. We study this possible phenomenology using recent experimental data in order to constrain the scale parametrizing the deviation, obtaining a bound of the order of the TeV. Therefore, this effect might be observable in a future very high energy (VHE) proton collider. Also, we will present a more detailed analysis computing the total cross section of the process \mbox{$f_i \overline{f}_i \to X \to f_j \overline{f}_j$} with some prescriptions to include the effects of a particular DCL.
\section{Twin Peaks}
\label{sec:twin-peaks}
In this section, we start by considering that deviations from SR are characterized by an energy scale $\Lambda$ much smaller than the Planck energy, $\Lambda\ll M_P$. Then, we will try to look for its possible signals in the production of a resonance at a particle accelerator. In fact, we will see a new effect if the mass of the resonance is of the order of this scale.
We will start by modifying the standard expression of the Breit--Wigner distribution
\begin{equation}
f(q^2)\,=\,\frac{K}{(q^2-M^2_X)^2+M_X^2\Gamma_X^2}\,,
\label{eq:BW}
\end{equation}
where $q^2$ is the four-momentum squared of the resonance $X$, $M_X$ and $\Gamma_X$ are respectively its mass and decay width, and $K$ is a kinematic factor that can be considered constant in the region $q^2\sim M_X^2$ (i.e. $K$ is a smooth function of $q^2$ near $M_X^2$).
For a resonance produced by the scattering of two particles, and which decays into two particles, $q^2$ is the squared invariant mass of the two particles producing the resonance or of the two particles into which it decays. In SR, for two particles with four-momenta $p$ and $\overline{p}$, the squared invariant mass is
\begin{equation}
\begin{split}
m^2&\,=\,(p+\overline{p})_\mu (p+\overline{p})^\mu\,=\,(p+\overline{p})_0^2-\sum_i(p+\overline{p})_i^2\\
&\,=\,E^2+\overline{E}^2+2E\overline{E}-\sum_i p_i^2-\sum_i \overline{p}_i^2-2p\overline{p}\cos\theta \approx 2E\overline{E}(1-\cos\theta)\,,
\end{split}
\label{eq:s2}
\end{equation}
where $\theta$ is the angle between the directions of the particles, and in the last expression we have used the ultra-relativistic limit ($E\sim p$).
As we have seen in the previous chapters, the main ingredients of a DRK in DSR theories are a DDR and a DCL. In order to study the modification of the production of a resonance, we are going to maintain the usual form of the Breit--Wigner distribution of Eq.~\eqref{eq:BW}, but we will modify the squared invariant mass of the process of Eq.~\eqref{eq:s2} using a DCL. This corresponds to the case in which the dispersion relation is the one of SR. We have seen that in the Hopf algebra frameworks, one can work in the classical basis of $\kappa$-Poincar\'e~\cite{Borowiec2010}, in which the dispersion relation is the usual one. Throughout this chapter, we will use a simpler case (although it is not included in the Hopf algebras scheme) which satisfies our requirement about the dispersion relation and where the DCL is covariant, corresponding to the composition law for the Snyder algebra~\cite{Battisti:2010sr}. The simplest example is
\begin{equation}\label{eq:DCL_tp}
\mu^2\,\coloneqq\,(p\oplus\overline{p})^2\,=\,m^2\left(1+\epsilon\frac{m^2}{\Lambda^2}\right),
\end{equation}
where $\mu^2$ is the new invariant mass squared of the two-particle system, in which a new additional term to the one in SR appears, $\epsilon m^2/\Lambda^2$, where the parameter $\epsilon\,=\,\pm 1$ represents the two possible signs of the correction\footnote{This is related with the curvature sign of the maximally symmetric momentum space, i.e. if de Sitter or anti-de Sitter is considered, as we saw in Ch.~\ref{chapter_curved_momentum_space}.}.
The modification of the expression of the Breit--Wigner distribution of Eq.~\eqref{eq:BW} leads to consider $f_{\text{BSR}}=f(q^2=\mu^2)$ instead of $f_{\text{SR}}=f(q^2=m^2)$ in the production of the resonance~\footnote{This simple modification of the Breit--Wigner distribution is due to the fact that we are considering a modification of SR in which the dispersion relation (and then the propagator) is not modified. Then the only modification appears in the invariant mass and not in the explicit form of the distribution.}. As we do not have a dynamical theory with a DRK, the modification of the Breit--Wigner distribution as a function of $m^2$ is an ansatz, although in Sec.~\ref{sec:cross-section} we will see through a set of prescriptions a way to compute it. Then, we have
\begin{equation}
\label{eq:BW-BSR}
f_{\text{BSR}}(m^2)\,=\,\frac {K}{\left[\mu^{2}(m^2)-M_X^{2}\right]^{2}+M_X^{2}\Gamma_X ^{2}}\,.
\end{equation}
In Appendix \ref{appendix:B-W}, we show the conditions for a resonance to take place. From them, one can see that the choice $\epsilon\,=\,-1$ leads to a double peak with masses
\begin{equation}
{m^*_{\pm}}^2\,=\,\frac{\Lambda^2}{2}\left[1\pm\left(1-4\frac{M_X^2}{\Lambda^2}\right)^{1/2} \right]\,,
\end{equation}
and widths
\begin{equation}
{\Gamma_\pm^*}^2\,=\,\frac{M_X^2\Gamma_X^2}{{m_\pm^*}^2\left(1-4{M_X^2}/{\Lambda^2}\right)}\,.
\end{equation}
From the previous equation, we can find the following relationship between the widths of the two peaks,
\begin{equation}\label{eq:rel-peaks}
\frac{{\Gamma_+^*}^2}{{\Gamma_-^*}^2}\,=\,\frac{{m_-^*}^2}{{m_+^*}^2}\,,
\end{equation}
which provides a crucial way to distinguish a possible double peak (\emph{twin peaks}) in a BSR scenario from the production of two unrelated resonances.
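As a numerical illustration, the following Python sketch computes ${m^*_\pm}^2$ and ${\Gamma^*_\pm}^2$ from the expressions above and checks the consistency relation of Eq.~\eqref{eq:rel-peaks}; the chosen values of $M_X$, $\Gamma_X$ and $\Lambda$ are arbitrary and serve only to exhibit the twin-peak structure.
\begin{verbatim}
import numpy as np

M_X, Gamma_X = 0.0911876, 0.0024952   # TeV (Z-like resonance, illustrative)
Lam = 1.0                             # TeV, hypothetical BSR scale

disc = 1.0 - 4.0 * M_X**2 / Lam**2    # requires Lambda > 2 M_X
m2_p = 0.5 * Lam**2 * (1.0 + np.sqrt(disc))
m2_m = 0.5 * Lam**2 * (1.0 - np.sqrt(disc))
G2_p = M_X**2 * Gamma_X**2 / (m2_p * disc)
G2_m = M_X**2 * Gamma_X**2 / (m2_m * disc)

# twin-peak consistency check of Eq. (rel-peaks): both ratios coincide
print(G2_p / G2_m, m2_m / m2_p)
\end{verbatim}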
In the next section, we will study the $Z$-boson production with the previous prescription, which leads us to a lower bound on the high energy scale $\Lambda$, and to consider the possibility to have observable effects in a future VHE hadron collider.
\section{Searches for BSR resonances in colliders}
\label{sec:limits}
\subsection{Bounds on \texorpdfstring{$\Lambda$}{Lg} using LEP data}
\label{sec:limits_1}
The precision obtained in the measurement of the mass and decay width at the LEP collider~\cite{PhysRevD.98.030001} makes the $Z$ boson a perfect candidate for our study:
\begin{equation}
M_Z^{\text{exp}}\,=\,91.1876\pm 0.0021\,\mathrm{GeV}\,,\quad\quad \Gamma_Z^{\text{exp}}\,=\,2.4952\pm 0.0023\,\mathrm{GeV}\,,
\end{equation}
where the superscript ``exp'' in the previous expressions indicates that the mass and the decay width are obtained (experimentally) by fitting the standard Breit--Wigner distribution. In a BSR scenario, these values would differ from the true $M_Z$ and $\Gamma_Z$. Then, $M_Z^2$ is the value of $\mu^2$, and $(M_Z^{\text{exp}})^2$ is the value of $m^2$ at the peak of the distribution. According to Eq.~\eqref{eq:DCL_tp} one has
\begin{equation}
M_Z^2\,=\,(M_Z^{\text{exp}})^2\left(1+\epsilon\frac{(M_Z^{\text{exp}})^2}{\Lambda^2} \right)\,.
\end{equation}
If one assumes a maximum modification $\delta M_Z=M_Z-M_Z^{\text{exp}}$ of the $Z$ mass determination from LEP compatible with other observations, one can put a limit to the scale $\Lambda$
\begin{equation}\label{Lambda0}
\Lambda\,\geq\, \Lambda_0\,=\,M_Z^{\text{exp}}\left(\frac{M_Z^{\text{exp}}}{2\,\delta M_Z}\right)^{1/2} \,=\, 3.55 \,\left(\frac{30\,\text{MeV}}{\delta M_Z}\right)^{1/2} \,\text{TeV}\,.
\end{equation}
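The numerical factor in Eq.~\eqref{Lambda0} can be reproduced with a couple of lines, shown here only as a cross-check of the arithmetic:
\begin{verbatim}
MZ_exp, dMZ = 91.1876, 0.030          # GeV
Lambda0 = MZ_exp * (MZ_exp / (2.0 * dMZ)) ** 0.5
print(Lambda0 / 1e3)                  # ~ 3.55 TeV for delta M_Z = 30 MeV
\end{verbatim}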
For the determination of the bound on $\Lambda$, we have compared LEP data with the energy dependence of the first of the peaks of the modified Breit--Wigner distribution cross section. Due to the small contribution of the tail of the second peak, we have not taken into account its effect.
We can see that an energy scale of the order of a few TeV could be compatible with LEP data bounds, which is of the order of magnitude found for the scale in the case of a canonical noncommutativity~\cite{Alboteanu:2006hh,Ohl:2010zf,YaserAyazi:2012ni}. As this energy can be reached in a future VHE hadron collider, in the following subsection we will study how to implement this modification for such a case.
\subsection{Searches for BSR resonances in a VHE hadron collider}
\label{sec:limits_2}
There is no evidence of a modification of SR due to the corrections considered in our model in LEP observations of the $Z$ boson. This can be understood since the mass of the $Z$ boson is not comparable to the high energy scale: the energy of the resonance is not high enough (see Eq.~\eqref{Lambda0}), making the proposed effect of a double peak completely unobservable.
Since the lower bound on $\Lambda$ from LEP is of a few TeV, a future electron-positron collider like the ILC will not have enough energy to observe the two peaks we are proposing. So let us consider that some future VHE hadron (proton--proton, pp) collider will be able to reach such an energy and will then observe the two peaks of a new resonance at $m^2={m_\pm^*}^2$ (we are considering the interesting case $\epsilon=-1$). Let us suppose that the resonance is due to the annihilation of a quark-antiquark pair of momenta $p$ and $\overline{p}$ respectively, and that it decays to two fermions of momenta $q$ and $\overline{q}$, although other particles will be produced due to the hadron scattering. From Eq.~\eqref{BSR exp}, the differential cross section with respect to $m^2=(q+\overline{q})^2$ for each peak is
\begin{equation} \label{eq:difcs}
\frac{d\sigma}{dm^2}\approx \mathcal{F}_\pm(s,{m_\pm^*}^2) \frac{K_\pm}{(m^2-{m_\pm^*}^2)^2+{m_\pm^*}^2{\Gamma_\pm^*}^2}\,,
\end{equation}
where the function $\mathcal{F}_\pm(s,{m_\pm^*}^2)$ can be obtained from the parton model as follows.
We start with the usual Mandelstam variable $s$ of the pp system in the ultra-relativistic case (head-on collision, $\theta=\pi$),
\begin{equation}
s\,=\,(P+\overline{P})^2\,=\,2E\overline{E}(1-\cos\theta)\,=\, 4E\overline{E}\,,
\end{equation}
where $P$ and $\overline{P}$ are the momenta of the two protons in the initial state. One can write
\begin{equation}
P^\mu\,=\,\frac{\sqrt{s}}{2}(1,0,0,1)\,,\quad\quad\quad\quad \overline{P}^\mu\,=\,\frac{\sqrt{s}}{2}(1,0,0,-1)\,,
\end{equation}
for the momenta of the protons, and
\begin{equation}
p^\mu\,=\,xP^\mu\,,\quad\quad\quad \overline{p}^\mu\,=\,\overline{x}\overline{P}^\mu\,,
\end{equation}
($0<x,\overline{x}<1$) for the momenta of the quark--antiquark pair producing the resonance.
The squared invariant mass of the quark--antiquark system in SR is, according to Eq.~\eqref{eq:s2},
\begin{equation}
m^2\,=\,(p+\overline{p})^2\,=\,4E_p E_{\overline{p}}\,=\,4 x \overline{x} E \overline{E} \,=\, x \overline{x} s\,,
\label{eq:mxs}
\end{equation}
where we have used the same symbol ($m^2$) for the initial and final states since it is a conserved quantity, i.e. $(p+\overline{p})^2=(q+\overline{q})^2$. In the case of a DCL, the energy-momentum conservation law requires $\mu^2=(p\oplus\overline{p})^2=(q\oplus\overline{q})^2$ to be conserved, but one can easily see from Eq.~\eqref{eq:DCL_tp} that the conservation of $\mu^2$ implies the conservation of $m^2$.
Using the relation $m^2=x\overline{x}s$ from Eq.~\eqref{eq:mxs} we can write $\mathcal{F}_\pm(s,{m_\pm^*}^2)$ as
\begin{equation}\label{eq:F}
\mathcal{F}_\pm(s,{m_\pm^*}^2)\,=\,\int_0^1\int_0^1 dx\, d\overline{x} \,f_q(x,{m_\pm^*}^2) \, f_{\overline{q}}(\overline{x},{m_\pm^*}^2) \, \delta\left(x\overline{x}-\frac{{m_\pm^*}^2}{s}\right)\,,
\end{equation}
where $f_q(x,{m_\pm^*}^2)$ is the parton distribution function. It is defined as the probability density of finding a parton (quark) in a hadron (proton) with a fraction $x$ of its momentum when one probes the hadron at an energy scale $m^2 \sim {m_\pm^*}^2$, where ${m_\pm^*}^2$ is given by Eq.~\eqref{epsilon-}.
The $K_\pm$ factors in Eq.~\eqref{eq:difcs} take into account the coupling dependence and all the details of the annihilation of the quark--antiquark pair. Using the previous expressions, one can estimate the expected number of events for different pp colliders, the mass values ($M_X$) of different resonances and the characteristic scale ($\Lambda$). Also, the observation of a double peak would lead us to extract the true mass and width of the resonance through Eqs.~\eqref{changes} and~\eqref{eq:width}.
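As an illustration of how Eq.~\eqref{eq:F} can be evaluated, the following Python sketch performs the delta-function integration analytically (leaving a one-dimensional integral with Jacobian $1/x$) and uses a toy parton distribution; the function \texttt{f\_toy} is an assumption of the sketch, not a realistic PDF fit.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def f_toy(x, Q2):
    # toy parton distribution (the Q2 dependence is ignored here)
    return 0.5 * x**(-0.5) * (1.0 - x)**3

def F(tau, Q2):
    # Eq. (eq:F) after integrating the delta: xbar = tau/x, Jacobian 1/x
    integrand = lambda x: f_toy(x, Q2) * f_toy(tau / x, Q2) / x
    val, err = quad(integrand, tau, 1.0)
    return val

print(F(1e-3, Q2=1.0))   # tau = m*^2 / s
\end{verbatim}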
\section{Cross section calculation in a QFT approach BSR}
\label{sec:cross-section}
In this section we try to justify the ansatz of Eq.~\eqref{eq:BW-BSR}. In order to do so, we will consider the process $e^-(k) e^+(\overline{k})\rightarrow Z \rightarrow \mu^-(p) \mu^+(\overline{p})$ and study its modification through a DCL in DSR scenarios. We will use modified dynamical squared matrix elements in which the ingredient of a DCL is introduced through new Mandelstam variables which replace the SR invariants. It is unavoidable to use some ad hoc prescriptions, since we do not know how to introduce a DCL in the QFT framework. Nevertheless, this analysis suggests that Eq.~\eqref{eq:BW-BSR} can be a good approximation, which is expected since the main modification of the peak of the resonance is due to the variation of the propagator. It also helps us to show how to handle the problem of different \emph{channels}, since a generic DCL is not symmetric.
\subsection{Phase-space momentum integrals}
\label{integral}
Let us start with the modification of the two-particle phase-space integral in SR for the massless case:
\begin{equation}\label{PS2}
PS_2 \,\,=\,\, \int \frac{d^4p}{(2\pi)^3} \delta(p^2) \theta(p_0) \,\frac{d^4\overline{p}}{(2\pi)^3} \delta(\overline{p}^2) \theta(\overline{p}_0) \,(2\pi)^4 \delta^{(4)}[(k+\overline{k}) - (p+\overline{p})] \,.
\end{equation}
For simplicity, we will consider a DCL related to the Snyder algebra~\cite{Battisti:2010sr} (obtained by geometrical arguments in Ch.~\ref{chapter_curved_momentum_space})
\begin{equation}\label{BSRDCL}
\left[l\oplus q\right]^{\mu}\,=\,l^{\mu}\sqrt{1+\frac{q^{2}}{\overline{\Lambda}^{2}}}+\frac{1}{\overline{\Lambda}^{2}\left(1+\sqrt{1+{l^{2}}/{\overline{\Lambda}^{2}}}\right)}l^{\mu}\left(l\cdot q\right)+q^{\mu}\approx l^{\mu}+q^{\mu}+\frac{1}{2\overline{\Lambda}^{2}}l^{\mu}\left(l \cdot q\right)\,,
\end{equation}
where we have used the fact that the particles involved in the process are relativistic particles. One gets the relation used in Sec.~\ref{sec:twin-peaks} with $\epsilon\,=\,+1$ from $(l\oplus q)^2$ of Eq.~\eqref{BSRDCL}. The negative sign of the parameter $\epsilon\,=\,-1$ can be also found in Eq.~\eqref{BSRDCL} if one considers the two possible signs in the commutator of space-time coordinates of the Snyder algebra: $\left[x^\mu, x^\nu \right]\,=\,\pm J^{\mu\nu}/\overline{\Lambda}^2$. This is understood as well from the geometrical perspective, considering the DCL related to de Sitter or anti-de Sitter momentum spaces. From now on, we will use Eq.~\eqref{BSRDCL}. Note that Eq.~\eqref{eq:DCL_tp} can be recovered with $\Lambda^2\,=\,2\overline{\Lambda}^2$. We can now justify the proposed model of the previous section from this covariant DCL.
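The exact composition law of Eq.~\eqref{BSRDCL} is straightforward to implement; the following Python sketch (with arbitrary numerical momenta and scale, both assumptions of the sketch) evaluates $(l\oplus q)^2$, whose deviation from $m^2=(l+q)^2$ is of order $m^4/\overline{\Lambda}^2$, and also verifies numerically the antipode property used below in Sec.~\ref{sec:dynamical}.
\begin{verbatim}
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ eta @ b        # Minkowski product
Lam = 100.0                           # hypothetical scale (arbitrary units)

def compose(l, q):
    # exact DCL of Eq. (BSRDCL)
    sq = np.sqrt(1.0 + dot(q, q) / Lam**2)
    corr = dot(l, q) / (Lam**2 * (1.0 + np.sqrt(1.0 + dot(l, l) / Lam**2)))
    return l * sq + l * corr + q

l = np.array([3.0, 0.0, 0.0,  3.0])   # illustrative massless momenta
q = np.array([2.0, 0.0, 0.0, -2.0])
lq = compose(l, q)
print(dot(lq, lq), dot(l + q, l + q)) # mu^2 vs m^2: O(m^4/Lam^2) shift

p = np.array([3.0, 0.0, 0.0, 2.0])    # a massive momentum, p^2 = 5
print(compose(p, -p))                 # antipode check: p oplus (-p) = 0
\end{verbatim}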
As the DRK we are considering has the same dispersion relation as SR, the only modification of the phase-space integral is due to the DCL which, not being symmetric, leads to four possible conservation laws (channels) in which the process\\* \mbox{$e^-(k) e^+(\overline{k}) \to Z \to \mu^-(p) \mu^+(\overline{p})$} can be produced. Then, we have four phase-space integrals, one for each channel ($\alpha\,=\,1,2,3,4$)
\begin{equation}\label{PS2bar}
\overline{PS}_2^{(\alpha)} \,=\, \int \frac{d^4p}{(2\pi)^3} \delta(p^2) \theta(p_0) \,\frac{d^4\overline{p}}{(2\pi)^3} \delta(\overline{p}^2) \theta(\overline{p}_0) \,(2\pi)^4 \delta^{(4)}_\alpha(k, \overline{k}; p, {\overline p}) \,,
\end{equation}
where
\begin{equation}
\begin{split}
\delta^{(4)}_1(k, \overline{k}; p, {\overline p}) &\,=\, \delta^{(4)}[(k\oplus\overline{k}) - (p\oplus\overline{p})]\,, \qquad
\delta^{(4)}_2(k, \overline{k}; p, {\overline p}) \,=\, \delta^{(4)}[(\overline{k}\oplus k) - (p\oplus\overline{p})]\,, \\
\delta^{(4)}_3(k, \overline{k}; p, {\overline p}) &\,=\, \delta^{(4)}[(k\oplus\overline{k}) - (\overline{p}\oplus p)]\,, \qquad
\delta^{(4)}_4(k, \overline{k}; p, {\overline p}) \,=\, \delta^{(4)}[(\overline{k}\oplus k) - (\overline{p}\oplus p)]\,.
\end{split}
\end{equation}
\subsection{Choice of the dynamical factor with a DCL}
\label{sec:dynamical}
As the scattering process is produced in a collider, one can assume that both particles of the initial state have the same modulus of the momentum (where we have neglected the masses since we are working in the ultra-relativistic limit)
\begin{equation}
k_\mu\,=\,\left(E_{0},\,\vec{k}\right)\,,\, \overline{k}_\mu\,=\,\left(E_{0},\,-\vec{k}\right)\,,\mbox{ with }E_{0}\,=\,|\vec{k}|\,.
\end{equation}
We take as our starting point the SR cross section at the lowest order, which is a product of four factors: the kinematic factor of the initial state, the two-particle phase-space integral, the propagator of the $Z$-boson and the dynamical factor $A$
\begin{equation}
\sigma \,=\,\frac{1}{8 E_0^2} \, PS_2 \,\frac{1}{\left[(s - M_Z^2)^2 + \Gamma_Z^2 M_Z^2\right]} \,A\,.
\label{eq:sigma}
\end{equation}
The SM dynamical factor for this process is~\cite{Atsue2015}
\begin{equation}\label{A}
A \,=\, \frac{e^4}{2 \sin^4\theta_W \cos^4\theta_W} \, \left(\left[C_V^2+C_A^2\right]^2 \left[\left(\frac{t}{2}\right)^2 + \left(\frac{u}{2}\right)^2\right] - 4 C_V^2 C_A^2 \left[\left(\frac{t}{2}\right)^2 - \left(\frac{u}{2}\right)^2\right]\right)\,,
\end{equation}
where $C_V$ and $C_A$ are the vector and axial weak charges respectively, $\theta_W$ is the Weinberg angle, and $s$, $t$, $u$ are the Mandelstam variables
\begin{align}
s&\,=\,\left(k+\overline{k}\right)^{2}\,=\,\left(p+\overline{p}\right)^{2}\,\,, \\
t&\,=\,\left(k-p\right)^{2}\,=\,\left(\overline{p}-\overline{k}\right)^{2}\,, \\
u&\,=\,\left(k-\overline{p}\right)^{2}\,=\,\left(p-\overline{k}\right)^{2}\,.
\end{align}
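For concreteness, the dynamical factor of Eq.~\eqref{A} can be evaluated as in the following sketch; all numerical inputs ($C_V$, $C_A$, $\sin^2\theta_W$, $\alpha$) are illustrative values and not part of the analysis.
\begin{verbatim}
import numpy as np

def A(t, u, CV=-0.04, CA=-0.5, sw2=0.2312, alpha=1.0/137.0):
    # dynamical factor of Eq. (A); numerical inputs are illustrative
    e4 = (4.0 * np.pi * alpha) ** 2
    pref = e4 / (2.0 * sw2**2 * (1.0 - sw2)**2)
    return pref * ((CV**2 + CA**2)**2 * ((t/2)**2 + (u/2)**2)
                   - 4.0 * CV**2 * CA**2 * ((t/2)**2 - (u/2)**2))

# massless 2 -> 2 kinematics: t = -s(1-cos(th))/2, u = -s(1+cos(th))/2
s = 91.1876**2                        # GeV^2, at the Z pole
print(A(-s/2.0, -s/2.0))              # example at th = 90 degrees
\end{verbatim}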
As we do not have a BSR-QFT, we do not know how the SR cross section of Eq.~\eqref{eq:sigma} is modified. But we can assume that the generalization should be compatible with Lorentz invariance and in the limit $\Lambda \to \infty$ we should recover the SR cross section $\sigma$ of Eq.~\eqref{eq:sigma}. So we can consider that, since the two-particle phase-space integral is modified as in Eq.~\eqref{PS2bar}, the generalization of the cross section will be\footnote{Note that this expression is corrected from the one used in~\cite{Albalate:2018kcf}, as the factor used there of $8 E_0^2$ is not relativistic invariant when a DCL is considered, while $2s$ is. However, the results barely change since the main contribution is due to the modification of the propagator.}
\begin{equation}\label{sigmaalpha}
\overline{\sigma}_\alpha \,=\, \frac{1}{2\,s} \, \overline{PS}_2^{(\alpha)} \,\frac{1}{\left[(\overline{s} - \overline{M}_Z^2)^2 + \overline{\Gamma}_Z^2 \overline{M}_Z^2\right]} \,\overline{A}_\alpha\,,
\end{equation}
for each channel $\alpha$. In our simple choice of the DCL, the squared total mass is the same for every channel
\begin{equation}
(k\oplus\overline{k})^2 \,=\, (\overline{k}\oplus k)^2 \,=\, (p\oplus\overline{p})^2 \,=\, (\overline{p}\oplus p)^2 \doteq \overline{s}\,.
\end{equation}
As we do not have a dynamical framework, we will consider two different assumptions for the dynamical factor $\overline{A}_\alpha$:
\begin{enumerate}[leftmargin=*,labelsep=4.9mm]
\item One can consider that the dynamical factor does not depend on the DCL, and then $\overline{A}_\alpha=A$. However, since the DCL implies that $(k-p)\neq (\overline{p}-\overline{k})$, $(k-\overline{p})\neq (p-\overline{k})$, one has to consider
\begin{align}
t &\,=\, \frac{1}{2} \left[(k-p)^2 + (\overline{p} - \overline{k})^2\right] \,=\, - k\cdot p - \overline{k}\cdot \overline{p}\,, \\
u &\,=\, \frac{1}{2} \left[(k-\overline{p})^2 + (p-\overline{k})^2\right] \,=\, - k\cdot \overline{p} - \overline{k}\cdot p\,.
\end{align}
\item Also, one can assume that the generalization of $A$ is carried out by the replacement of the usual Mandelstam variables $t$, $u$ by new invariants $\overline{t}$, $\overline{u}$. Since we do not find a way to associate new invariants with each channel, we will consider that the dynamical factor is channel independent ($\overline{A}_\alpha=\overline{A}$), obtained from $A$ by just replacing the usual Mandelstam variables $t$, $u$ by
\begin{align}
\overline{t} &\,=\, \frac{1}{2} \left[(k\oplus\hat{p})^2 + (\overline{p}\oplus \hat{\overline{k}})^2\right] \,=\, - k\cdot p - \overline{k}\cdot \overline{p} + \frac{(k\cdot p)^2}{2\overline{\Lambda}^2} + \frac{(\overline{k}\cdot \overline{p})^2}{2\overline{\Lambda}^2} \,,\\
\overline{u} &\,=\, \frac{1}{2} \left[(k\oplus\hat{\overline{p}})^2 + (p\oplus \hat{\overline{k}})^2\right] \,=\, - k\cdot \overline{p} - \overline{k}\cdot p + \frac{(k\cdot \overline{p})^2}{2\overline{\Lambda}^2} + \frac{(\overline{k}\cdot p)^2}{2\overline{\Lambda}^2}\,,
\end{align}
(note that in the particular case we are considering, the square of a composition of two momenta is symmetric, regardless of the asymmetry of the DCL), where we have used the \textit{antipode} $\hat{p}$ defined in Ch.~\ref{chapter_curved_momentum_space}. One can check that for our choice of the DCL of Eq.~\eqref{BSRDCL}, the antipode is the usual one of SR, i.e. just $-p$:
\[
\begin{split}
\left[p\oplus \hat{p}\right]^{\mu}\,=\,\left[p\oplus -p\right]^{\mu}&\,=\, p^\mu \left\lbrace \sqrt{1+\frac{p^{2}}{\overline{\Lambda}^{2}}}-\frac{p^2}{\overline{\Lambda}^{2}\left(1+\sqrt{1+{p^{2}}/{\overline{\Lambda}^{2}}}\right)}-1 \right\rbrace \\
&\,=\, p^\mu\left\lbrace \frac{\overline{\Lambda}^2\left( \sqrt{1+p^2/\overline{\Lambda}^2}+1\right)+\overline{\Lambda}^2 p^2/\overline{\Lambda}^2 -p^2}{\overline{\Lambda}^{2}\left(1+\sqrt{1+p^2/\overline{\Lambda}^2}\right)}-1 \right\rbrace \,=\, 0\,.
\end{split}
\]
\end{enumerate}
We will consider that the scattering can be produced in all the possible channels with the same probability, so in order to compute the whole cross section we will take the average over all channels
\begin{equation}\label{sigmabar}
\overline{\sigma} \,\doteq\, \frac{1}{4}\sum_\alpha \overline{\sigma}_\alpha\,,
\end{equation}
with $\overline{\sigma}_\alpha$ given in Eq.~\eqref{sigmaalpha}. In fact, one has two possibilities, corresponding to the two choices of the modified dynamical factor $\overline{A}$
\begin{align}
\overline{\sigma}^{(1)} &\,=\, \frac{1}{32 E_0^2} \,\frac{1}{\left[(\overline{s} - \overline{M}_Z^2)^2 + \overline{\Gamma}_Z^2 \overline{M}_Z^2\right]} \,\sum_\alpha \overline{PS}_2^{(\alpha)} \,A(t,u), \\
\overline{\sigma}^{(2)} &\,=\, \frac{1}{32 E_0^2} \,\frac{1}{\left[(\overline{s} - \overline{M}_Z^2)^2 + \overline{\Gamma}_Z^2 \overline{M}_Z^2\right]} \,\sum_\alpha \overline{PS}_2^{(\alpha)} \,A(\overline{t},\overline{u}).
\end{align}
In Appendix~\ref{appendix_cross_sections} it is shown how to obtain these two cross sections, finding
\begingroup\makeatletter\def\f@size{9}\check@mathfonts
\def\maketag@@@#1{\hbox{\m@th\fontsize{10}{10}\selectfont \normalfont#1}}%
\begin{align}
\overline{\sigma}^{(1)} &\,=\, \frac{e^4}{48 \pi \sin^4\theta_W \cos^4\theta_W\left(1+\frac{E_0^2}{\overline{\Lambda}^2}\right)} \,\frac{E_0^2}{\left[\left(4E_0^2(1+E_0^2/\overline{\Lambda}^2) - \overline{M}_Z^2\right)^2 + \overline{\Gamma}_Z^2 \overline{M}_Z^2\right]} \, \left((C_V^2+C_A^2)^2 \left[1 -\frac{E_0^2}{2\overline{\Lambda} ^2}\right] \right)\,,
\label{eq:cross_section_final_1}\\
\overline{\sigma}^{(2)} &\,=\, \frac{e^4}{48 \pi \sin^4\theta_W \cos^4\theta_W\left(1+\frac{E_0^2}{\overline{\Lambda}^2}\right)} \,\frac{E_0^2}{\left[\left(4E_0^2(1+E_0^2/\overline{\Lambda}^2) - \overline{M}_Z^2\right)^2 + \overline{\Gamma}_Z^2 \overline{M}_Z^2\right]} \,\left((C_V^2+C_A^2)^2 \left[1 -\frac{2 E_0^2}{\overline{\Lambda} ^2}\right] \right)\,.
\label{eq:cross_section_final_2}
\end{align}
\endgroup
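A direct numerical evaluation of Eq.~\eqref{eq:cross_section_final_1} is sketched below (natural units, GeV$^{-2}$); the couplings and the value of $\overline{\Lambda}$ are illustrative stand-ins, and $\overline{M}_Z$, $\overline{\Gamma}_Z$ are approximated by the PDG values. Eq.~\eqref{eq:cross_section_final_2} is obtained analogously by replacing the last factor.
\begin{verbatim}
import numpy as np

MZ, GZ = 91.1876, 2.4952              # GeV (stand-ins for the barred values)
Lam = 2.4e3                           # GeV, hypothetical value of Lambda-bar
sw2, CV, CA = 0.2312, -0.04, -0.5     # illustrative couplings
e4 = (4.0 * np.pi / 137.0) ** 2       # e^4 with alpha ~ 1/137

def sigma1(E0):
    # Eq. (eq:cross_section_final_1), in GeV^-2
    pref = e4 / (48.0 * np.pi * sw2**2 * (1.0 - sw2)**2
                 * (1.0 + E0**2 / Lam**2))
    prop = E0**2 / ((4.0 * E0**2 * (1.0 + E0**2 / Lam**2) - MZ**2)**2
                    + GZ**2 * MZ**2)
    return pref * prop * (CV**2 + CA**2)**2 * (1.0 - E0**2 / (2.0 * Lam**2))

print(sigma1(np.linspace(44.0, 47.0, 4)))   # scan around the Z peak
\end{verbatim}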
\subsection{Constraints on \texorpdfstring{$\overline{\Lambda}$}{Lg}}
Now we are able to find the constraints on $\overline{\Lambda}$ due to the modified cross section, taking into account the Particle Data Group (PDG) data~\cite{PhysRevD.98.030001}. We require the cross section $\overline{\sigma}$ to be compatible with the PDG data for a value of $\overline{M}_Z$ and $\overline{\Gamma}_Z$ in an interval $\pm \,30\,\text{MeV}$ around their central values given by the PDG\footnote{It can be seen that for bigger values of $\delta\overline{M}_Z$ and $\delta\overline{\Gamma}_Z$ there is no significant variation in the constraint on $\overline{\Lambda}$.}. As the SM is very successful, we will use the SR cross section, with the PDG values of $M_Z$ and $\Gamma_Z$ at one or two standard deviations, as a good approximation to the experimental data.
In Table~\ref{table:two} the obtained results are shown, where we have denoted by $\overline{\sigma}^{(j)}_{i}$ the cross section taking $i$ standard deviations in the data. Note that the constraints are independent of the sign in the DCL.
\begin{table}[H]
\caption{Bounds on the scale of new physics $\overline{\Lambda}$ from LEP
data of the $Z$ boson.}
\centering
\begin{tabular}{ccccc}
\toprule
{\bf Constraints} & \boldmath{$\overline{\sigma}^{(1)}_{1}$} & \boldmath{$\overline{\sigma}^{(1)}_{2}$} & \boldmath{$\overline{\sigma}^{(2)}_{1}$} & \boldmath{$\overline{\sigma}^{(2)}_{2}$} \\
\midrule
$\overline{\Lambda}${[}TeV{]} & 2.4 &1.8 & 2.7 & 1.9 \\
\bottomrule
\end{tabular}
\label{table:two}
\end{table}
As we can see from the previous table, although the computations developed throughout this section help us to better understand the BSR framework of QFT, the results show that the simple approximation used in Eq.~\eqref{eq:DCL_tp} gives a good estimate.
Having studied in the last two chapters some of the phenomenological consequences of a DRK in flat spacetime, we can try to see how such a modification of SR can be generalized to the context of a curved spacetime. This is necessary in order to study the possible existence or absence of time delays, since the expansion of the universe is not negligible over the long distances photons travel from where they are emitted to our telescopes. In the next chapter we will see a new proposal incorporating a curvature of spacetime in the discussion carried out in Ch.~\ref{chapter_curved_momentum_space}.
\chapter{Cotangent bundle geometry}
\label{ch:cotangent}
\ifpdf
\graphicspath{{Chapter8/Figs/Raster/}{Chapter8/Figs/PDF/}{Chapter8/Figs/}}
\else
\graphicspath{{Chapter8/Figs/Vector/}{Chapter8/Figs/}}
\fi
\epigraph{Only those who risk going too far can possibly find out how far one can go.}{T. S. Eliot}
When we have considered the DRK as a way to go beyond SR, we have not taken into account its possible effects on the space-time metric (we studied how locality can be implemented when there is a DCL using some noncommutative coordinates, but we did not mention the space-time geometry). In fact, in Ch.~\ref{chapter_curved_momentum_space} we have seen how a DRK can be understood through a curved momentum space with a flat spacetime.
There are a lot of works in the literature studying the space-time consequences of a DDR in LIV scenarios~\cite{Kostelecky:2011qz,Barcelo:2001cp,Weinfurtner:2006wt,Hasse:2019zqi,Stavrinos:2016xyg}. Most of them have been developed by considering Finsler geometries, formulated by Finsler in 1918~\cite{zbMATH02613491} (these geometries are a generalization of Riemannian spaces where the space-time metric can also depend on vectors of the tangent space). However, in those works the introduction of a velocity-dependent metric is considered outside the DSR context, since there is no mention of a DLT or a DCL, hence precluding the possibility of having a relativity principle.
In the DSR framework the starting point is also a DDR. However, there is a crucial difference between the LIV and DSR scenarios, since the latter implements deformed Lorentz transformations (in the one-particle system) which make the DDR invariant for different observers related by such transformations. The case of Finsler geometries in this context was considered for flat~\cite{Girelli:2006fw,Amelino-Camelia:2014rga} and curved spacetimes~\cite{Letizia:2016lew}, leading to a velocity dependence of the space-time metric. Besides Finsler geometries, which start from a Lagrangian (in fact, Finsler geometries are particular realizations of Lagrange spaces~\cite{2012arXiv1203.4101M}), there is another possible approach to define a deformed metric, starting from the Hamiltonian. This leads to Hamilton geometry~\cite{2012arXiv1203.4101M}, considered in~\cite{Barcaroli:2015xda}. In this kind of approach, the space-time metric depends on the phase-space coordinates (momenta and positions), instead of the tangent bundle coordinates (velocities and positions). Both geometries are particular cases of geometries in the tangent and cotangent bundle respectively.
Moreover, in~\cite{Rosati:2015pga} another way to consider possible phenomenology based on time delays in an expanding universe due to deformations of SR is explored. In that paper, both LIV and DSR scenarios are studied, starting with a DDR and considering nontrivial translations (in a similar way to how such an effect was studied in the first model of Ch.~\ref{chapter_time_delay}). In order to do so, the authors considered the expansion of the universe by gluing slices of de Sitter spacetimes, finding it difficult to formulate such a study in a direct way.
As we have mentioned, the DDR and the one-particle DLT are the only ingredients in all previous works. But as we saw in Ch.~\ref{chapter_second_order}, there is a particular basis in $\kappa$-Poincaré, the classical basis, in which the DDR and DLT are just the ones of SR, so, following the prescription used in these works, there would be no effect on the space-time metric.
Another geometrical interpretation was considered in~\cite{Freidel:2018apz}. In that work, considering a Born geometry of a double phase space leads to a modified action of a quantum model describing the propagation of a free relativistic particle, what the authors called a metaparticle. There, the DDR is obtained from the poles of the momentum integral representation of the metaparticle quantum propagator, instead of reading it from the constraint in the classical action and interpreting it as the squared distance in a curved momentum space, which is the approach we have discussed along this work.
Here we will study the case of curved spacetime and momentum spaces by considering a geometry in the cotangent bundle, i.e. a geometrical structure for all the phase-space coordinates~\cite{Relancio:2020zok}. As we will see, this is mandatory in order to make compatible a de Sitter momentum space and a generic space-time geometry. With our prescription, we find a nontrivial (momentum dependent) metric for whatever form of the DDR (as long as there is a nontrivial geometry for the momentum space), being able to describe the propagation of a free particle in the canonical variables. This differs from the perspective of~\cite{Rosati:2015pga}, where the considered metric for the spacetime is the one given by GR. However, as we will see, the existence or not of a time delay for photons in an expanding universe is still an open problem that deserves further research.
We will start by making clear our proposal of constructing a metric in the cotangent bundle, in such a way that the resulting momentum curvature tensor corresponds to a maximally symmetric momentum space. As in the flat space-time case, we will see that one can also identify 10 transformations in momentum space for a fixed point of spacetime. After that, we will introduce the main ingredients of the cotangent geometry~\cite{2012arXiv1203.4101M} that we will use along this chapter. Then, we will show the connection between this formalism and the common approach followed in the literature that considers an action with a DDR. Finally, we will study the phenomenological implications in two different space-time geometries, a Friedmann-Robertson-Walker universe and a Schwarzschild black hole.
\section{Metric in the cotangent bundle}
\label{sec:metric}
In this section, we will present a simple way to generalize the results obtained in Ch.~\ref{chapter_curved_momentum_space} taking into account a curvature of spacetime, characterized by a metric $g_{\mu\nu}^x(x)$, and we will explain, from our point of view, how to deal with a metric in the cotangent bundle depending on both momentum and space-time coordinates.
\subsection{Curved momentum and space-time spaces}
We start with the action of a free particle in SR
\begin{equation}
S\,=\,\int{\dot{x}^\mu k_\mu-\mathcal{N} \left(C(k)-m^2\right)}\,,
\label{eq:SR_action}
\end{equation}
where $C(k)=k_\alpha \eta^{\alpha\beta }k_\beta$. It is easy to check that the same equations of motion derived in GR by solving the geodesic equation can be obtained just by replacing $\bar{k}_\alpha=\bar{e}^\nu_\alpha (x) k_\nu$\footnote{To avoid confusions, we will use the symbol $\bar{e}$ in order to denote the inverse of the tetrad.} in Eq.~\eqref{eq:SR_action}
\begin{equation}
S\,=\,\int{\dot{x}^\mu k_\mu-\mathcal{N} \left(C(\bar{k})-m^2\right)}\,,
\label{eq:GR_action}
\end{equation}
where $\bar{e}^\nu_\alpha(x)$ is the inverse of the tetrad of the space-time metric $e^\nu_\alpha(x)$, so that
\begin{equation}
g^x_{\mu\nu}(x)\,= \, e^\alpha_\mu (x) \eta_{\alpha\beta} e^\beta_\nu (x)\,,
\label{eq:metric-st}
\end{equation}
and then, the dispersion relation is
\begin{equation}
C(\bar{k})\,=\,\bar{k}_\alpha \eta^{\alpha\beta }\bar{k}_\beta\,=\,k_\mu g_x^{\mu\nu}(x) k_\nu\,.
\label{eq:cass_GR}
\end{equation}
As we saw in Ch.~\ref{chapter_curved_momentum_space}, the dispersion relation can be interpreted as the distance in momentum space from the origin to a point $k$, so one can consider the following line element for momenta\footnote{We will use along the chapter the symbol $\varphi$ as the inverse of the momentum tetrad $\bar{\varphi}$ (remember the relation found in Ch.~\ref{chapter_locality} between the inverse of the momentum space tetrad and the functions allowing us to implement locality).}
\begin{equation}
d\sigma^2\,=\,dk_{ \alpha}g_k^{\alpha\beta}(k)dk_{ \beta}\,=\,dk_{ \alpha}\bar{\varphi}^\alpha_\gamma(k)\eta^{\gamma\delta}\bar{\varphi}^\beta_\delta(k)dk_{ \beta}\,,
\label{eq:line_m1}
\end{equation}
where $\bar{\varphi}^\alpha_\beta(k)$ is the inverse of the tetrad in momentum space $\varphi^\alpha_\beta(k)$. This can be easily extended to the curved space-time case introducing the variables $\bar{k}$ in the previous momentum line element, obtaining
\begin{equation}
d\sigma^2\,\coloneqq\,d \bar{k}_{\alpha}g_{\bar{k}}^{\alpha\beta}( \bar{k})d \bar{k}_{ \beta}\,=\,dk_{\mu}g^{\mu\nu}(x,k)dk_{\nu}\,,
\end{equation}
where in the second equality we have taken into account that the distance is computed along a fiber, i.e. the Casimir is viewed as the squared distance from the point $(x,0)$ to $(x,k)$ (we will see this in more detail in Sec.~\ref{sec:fb_properties}). The metric tensor $g^{\mu\nu}(x,k)$ in momentum space depending on space-time coordinates is constructed with the tetrad of spacetime and the original metric in momentum space, explicitly
\begin{equation}
g_{\mu\nu}(x,k)\,=\,\Phi^\alpha_\mu(x,k) \eta_{\alpha\beta}\Phi^\beta_\nu(x,k)\,,
\end{equation}
where
\begin{equation}
\Phi^\alpha_\mu(x,k)\,=\,e^\lambda_\mu(x)\varphi^\alpha_\lambda(\bar{k})\,.
\end{equation}
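The construction of $g_{\mu\nu}(x,k)$ from the two tetrads can be made explicit with computer algebra. The following sketch works in $1{+}1$ dimensions with an FRW-like space-time tetrad and a bicrossproduct-like momentum tetrad; both choices are illustrative assumptions of the sketch, not the general construction.
\begin{verbatim}
import sympy as sp

t, Lam = sp.symbols('t Lambda', positive=True)
k0 = sp.Symbol('k0')
a = sp.Function('a', positive=True)(t)

eta = sp.diag(1, -1)
e = sp.diag(1, a)                    # space-time tetrad e^alpha_mu

# barred momenta kbar_alpha = ebar^nu_alpha k_nu; here kbar_0 = k0
# illustrative momentum tetrad phi^alpha_lambda(kbar)
phi = sp.diag(1, sp.exp(-k0 / Lam))

Phi = phi * e                        # Phi^alpha_mu = e^lambda_mu phi^alpha_lambda
g = sp.simplify(Phi.T * eta * Phi)   # g_{mu nu}(x, k)
print(g)                             # diag(1, -a(t)**2*exp(-2*k0/Lambda))
\end{verbatim}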
We can check that, in the way we have constructed this metric, it is invariant under space-time diffeomorphisms. A canonical transformation in phase space $(x, k) \to (x',k')$ of the form
\begin{equation}
x^{\prime \mu}\,=\,f^\mu(x)\,,\qquad k'_\mu\,=\,\frac{\partial x^\nu}{\partial x^{\prime \mu}} k_\nu\,,
\label{eq:canonical_transformation}
\end{equation}
makes the tetrad transform as
\begin{equation}
\Phi^{\prime \mu}_\rho(x',k')\,=\, \frac{\partial x^\nu}{\partial x^{\prime \rho}} \Phi^\mu_\nu(x,k)\,,
\label{eq:complete_tetrad_transformation}
\end{equation}
because
\begin{equation}
\frac{\partial x^\mu}{\partial x^{\prime \rho}} e^\lambda_\mu(x)\varphi^\alpha_\lambda(\bar{k})\,=\,e^{\prime \kappa}_\rho(x')\varphi^{\prime \alpha}_\kappa(\bar{k}')
\label{eq:tetrad_transformation}
\end{equation}
holds, since the barred momentum variables are invariant under space-time diffeomorphisms
\begin{equation}
\bar{k}'_ \mu\,=\,k'_\nu \bar{e}^{\prime \nu}_\mu(x')\,=\,k_\nu \bar{e}^\nu_\mu(x) \,=\,\bar{k}_ \mu\,,
\end{equation}
due to the fact that the space-time tetrad transforms as
\begin{equation}
\bar{e}^{\prime \nu}_\mu(x')\,=\,\frac{\partial x^{\prime \nu}}{\partial x^\rho}\bar{e}^\rho_\mu(x)\,,
\end{equation}
and then the tetrad of momentum space is invariant under this kind of transformation, as its argument does not change. Therefore, the metric is invariant under the same space-time diffeomorphisms as in GR.
In the flat space-time case, we have seen that the momentum metric is invariant under momentum coordinate transformations. In the way we propose to construct the momentum metric when a curvature in both momentum and coordinate spaces is present, one loses this kind of invariance, i.e. a canonical transformation
\begin{equation}
k_\mu \,=\, h_\mu(k')\,,\quad x^\mu \,=\, x^{\prime \nu} j^\mu_\nu(k')\,,
\label{eq:primes}
\end{equation}
with
\begin{equation}
j^\mu_\rho(k') \frac{\partial h_\nu(k')}{\partial k'_\rho} \,=\, \delta^\mu_\nu \,,
\end{equation}
does not leave the metric invariant (as it happens in the GR case, where the metric is not invariant under these transformations). However, we have seen that this metric is invariant under space-time diffeomorphisms.
With this proposal, we are somehow selecting a particular choice of momentum variables over others, in contrast with the flat space-time formulation, where there was independence of this choice. As was shown in Ch.~\ref{chapter_intro}, there is a vast discussion about the possible existence of some ``physical'' momentum variables, the ones that nature ``prefers''. Within this framework it seems natural to think that, since the model is not invariant under the choice of momentum coordinates, there should be a preferred basis in which to formulate the physics.
In the rest of this subsection we will see that, from the definition of the metric, one can easily generalize for a curved spacetime the momentum transformations obtained in Ch.~\ref{chapter_curved_momentum_space} for a maximally symmetric momentum space: as in the flat space-time case, there are still 10 momentum isometries for a fixed space-time point $x$, 4 translations and 6 transformations leaving the point in phase space $(x,0)$ invariant, and we can also understand the dispersion relation as the distance from the point $(x,0)$ to $(x,k)$. The fact that with this procedure we also have 10 momentum isometries can be understood since, if one considers as the starting point a momentum space with a constant scalar of curvature, then the new metric in momentum space will also have a constant momentum scalar of curvature (see Appendix~\ref{appendix:cotangent}).
\subsubsection{Modified translations}
In Ch.~\ref{chapter_curved_momentum_space} we have found the translations from Eq.~\eqref{T(a,k)}
\begin{equation}
\varphi^\mu_\nu(p\oplus q) \,=\, \frac{\partial (p\oplus q)_\nu}{\partial q_\rho} \, \varphi^\mu_\rho(q)\,,
\end{equation}
so we should find the new translations by replacing $p\rightarrow \bar{p}_\mu=\bar{e}_\mu^\nu(x) p_\nu$, $q\rightarrow \bar{q}_\mu=\bar{e}_\mu^\nu(x) q_\nu$ in it
\begin{equation}
\varphi^\mu_\nu(\bar{p} \oplus \bar{q}) \,=\, \frac{\partial (\bar{p} \oplus \bar{q})_\nu}{\partial \bar{q}_\rho} \, \varphi_\rho^{\,\mu}( \bar{q})\,.
\label{eq:tetrad_composition2}
\end{equation}
This leads us to introduce a generalized composition law ($\bar{\oplus}$) for a curved spacetime such that
\begin{equation}
(\bar{p} \oplus \bar{q})_\mu \,=\, \bar{e}_\mu^\nu(x) (p \bar{\oplus} q)_\nu\,.
\label{eq:composition_cotangent}
\end{equation}
Then, one has
\begin{equation}
\begin{split}
e^\tau_\nu(x)\varphi^\mu_\tau(\bar{p} \oplus \bar{q}) \,=&\,e^\tau_\nu(x)\frac{\partial (\bar{p} \oplus \bar{q})_\tau}{\partial \bar{q}_\sigma}\varphi_\sigma^{\,\mu}(\bar{q})\,=\,e^\tau_\nu(x) \bar{e}^\lambda_\tau(x) \,\frac{\partial (p \bar{\oplus} q)_\lambda}{\partial \bar{q}_\sigma}\varphi_\sigma^{\,\mu}(\bar{q})\\
=&\,\frac{\partial (p \bar{\oplus} q)_\nu}{\partial\bar{q}_\sigma} \varphi_\sigma^{\,\mu}(\bar{q})\,=\,
\frac{\partial (p \bar{\oplus} q)_\nu}{\partial q_\rho}\frac{\partial q_\rho}{\partial\bar{q}_\sigma} \varphi_\sigma^{\,\mu}(\bar{q})\,=\,
\frac{\partial (p \bar{\oplus} q)_\nu}{\partial q_\rho} e_\rho^\sigma(x)
\varphi_\sigma^{\,\mu}(\bar{q})\,,
\end{split}
\end{equation}
i.e.
\begin{equation}
\Phi^\mu_\nu(x,(p \bar{\oplus} q)) \,=\, \frac{\partial (p \bar{\oplus} q)_\nu}{\partial q_\rho} \, \Phi_\rho^{\,\mu}(x,q)\,.
\label{eq:tetrad_cotangent}
\end{equation}
We have obtained, for a fixed $x$, the momentum isometries of the metric leaving the form of the tetrad invariant in the same way we did in Ch.~\ref{chapter_curved_momentum_space}.
As we saw in the same chapter, the translations defined in this way form by construction a group, so the composition law must be associative. We can now show that the barred composition law is also associative from the fact that the composition law $\oplus$ is associative. If we define $\bar{r}=(\bar{k}\oplus \bar{q})$ and $\bar{l}=(\bar{p}\oplus \bar{k})$, then we have $r=(k \bar{\oplus} q)$ and $l=(p \bar{\oplus} k)$. Hence
\begin{equation}
(\bar{p}\oplus \bar{r})_\mu\,=\,\bar{e}^\alpha_\mu(x)(p\bar{\oplus}r)_\alpha\,=\,\bar{e}^\alpha_\mu(x)(p\bar{\oplus}(k\bar{\oplus}q))_\alpha\,,
\end{equation}
and
\begin{equation}
(\bar{l}\oplus \bar{q})_\mu\,=\,\bar{e}^\alpha_\mu(x)(l\bar{\oplus}q)_\alpha\,=\,\bar{e}^\alpha_\mu(x)((p\bar{\oplus}k)\bar{\oplus}q)_\alpha\,,
\end{equation}
but due to the associativity of $\oplus$
\begin{equation}
(\bar{p}\oplus \bar{r})_\mu\,=\,(\bar{l}\oplus \bar{q})_\mu\,,
\end{equation}
and then
\begin{equation}
(p\bar{\oplus}(k\bar{\oplus}q))_\alpha\,=\,((p\bar{\oplus}k)\bar{\oplus}q)_\alpha\,,
\end{equation}
we conclude that $\bar{\oplus}$ is also associative. We have then shown that in the cotangent bundle with a constant scalar of curvature in momentum space, one can also define associative momentum translations.
\subsubsection{Modified Lorentz transformations}
One can also replace $k$ by $\bar{k}_\mu=\bar{e}_\mu^\nu(x) k_\nu$ in Eq.~\eqref{cal(J)}
\begin{equation}
\frac{\partial g^k_{\mu\nu}(k)}{\partial k_\rho} {\cal J}^{\beta\gamma}_\rho(k) \,=\,
\frac{\partial{\cal J}^{\beta\gamma}_\mu(k)}{\partial k_\rho} g^k_{\rho\nu}(k) +
\frac{\partial{\cal J}^{\beta\gamma}_\nu(k)}{\partial k_\rho} g^k_{\mu\rho}(k)\,,
\end{equation}
obtaining
\begin{equation}
\frac{\partial g^{\bar{k}}_{\mu\nu}(\bar{k})}{\partial \bar{k}_\rho} {\cal J}^{\beta\gamma}_\rho(\bar{k}) \,=\,
\frac{\partial{\cal J}^{\beta\gamma}_\mu(\bar{k})}{\partial \bar{k}_\rho} g^{\bar{k}}_{\rho\nu}(\bar{k}) +
\frac{\partial{\cal J}^{\beta\gamma}_\nu(\bar{k})}{\partial \bar{k}_\rho} g^{\bar{k}}_{\mu\rho}(\bar{k})\,.
\end{equation}
From here, we have
\begin{equation}
\frac{\partial g^{\bar{k}}_{\mu\nu}(\bar{k})}{\partial k_\sigma}e^\rho_\sigma(x) {\cal J}^{\alpha\beta}_\rho(\bar{k}) \,=\,
\frac{\partial{\cal J}^{\alpha\beta}_\mu(\bar{k})}{\partial k_\sigma}e^\rho_\sigma(x) g^{\bar{k}}_{\rho\nu}(\bar{k}) +
\frac{\partial{\cal J}^{\alpha\beta}_\nu(\bar{k})}{\partial k_\sigma}e^\rho_\sigma(x) g^{\bar{k}}_{\mu\rho}(\bar{k})\,.
\end{equation}
Multiplying by $e_\lambda^\mu(x)e_\tau^\nu(x)$ both sides of the previous equation one finds
\begin{equation}
\frac{\partial g_{\lambda \tau}(x,k)}{\partial k_\rho} \bar{{\cal J}}^{\alpha\beta}_\rho(x,k) \,=\,
\frac{\partial\bar{{\cal J}}^{\alpha\beta}_\lambda(x,k)}{\partial k_\rho} g_{\rho\tau}(x,k) +
\frac{\partial\bar{{\cal J}}^{\alpha\beta}_\tau (x,k)}{\partial k_\rho}g_{\lambda\rho}(x,k)\,,
\end{equation}
where we have defined
\begin{equation}
\bar{{\cal J}}^{\alpha\beta}_\mu(x,k) \,=\,e^\mu_\nu(x){\cal J}^{\alpha\beta}_\nu(\bar{k})\,.
\end{equation}
We see that $ \bar{{\cal J}}^{\alpha\beta}_\mu(x,k)$ are the new isometries of the momentum metric that leave the momentum origin invariant for a fixed point $x$.
\subsubsection{Modified dispersion relation}
With our prescription, the generalization to Eq.~\eqref{eq:casimir_J}
\begin{equation}
\frac{\partial C(k)}{\partial k_\mu} \,{\cal J}^{\alpha\beta}_\mu(k) \,=\, 0\, ,
\end{equation}
in presence of a curved spacetime is
\begin{equation}
\frac{\partial C(\bar{k})}{\partial \bar{k}_\mu}{\cal J}^{\alpha\beta}_\mu(\bar{k})\,=\,0\,.
\end{equation}
The generalized infinitesimal Lorentz transformation in curved spacetime, defined by $\bar{{\cal J}}^{\alpha\beta}_\lambda(x,k)$, when acting on $C(\bar{k})$ is
\begin{equation}
\begin{split}
\delta C(\bar{k}) \,=&\, \omega_{\alpha\beta} \frac{\partial C(\bar{k})}{\partial k_\lambda}\,\bar{{\cal J}}^{\alpha\beta}_\lambda(x,k) \,=\, \omega_{\alpha\beta} \frac{\partial C(\bar{k})}{\partial \bar{k}_\rho}\,\frac{\partial\bar{k}_\rho}{\partial k_\lambda}\,\bar{{\cal J}}^{\alpha\beta}_\lambda(x,k) \\
=&\,\omega_{\alpha\beta} \frac{\partial C(\bar{k})}{\partial \bar{k}_\rho}\,\bar{e}^\lambda_\rho(x)\,\bar{{\cal J}}^{\alpha\beta}_\lambda(x,k) \,=\,
\omega_{\alpha\beta} \frac{\partial C(\bar{k})}{\partial \bar{k}_\rho}\,{\cal J}^{\alpha\beta}_\rho(\bar{k}) \,=\, 0.
\end{split}
\end{equation}
At the beginning of this section we proposed to take into account the space-time curvature in an action with a deformed Casimir by considering the substitution $k\rightarrow \bar{k}=\bar{e}k$ in the DDR, as this works for the transition from SR to GR. We have just seen that if $C(k)$ can be viewed as the distance from the origin to a point $k$ of the momentum metric $g^k_{\mu\nu} (k)$, then $C(\bar{k})$ is the distance from $(x,0)$ to $(x,k)$ of the momentum metric $g_{\mu\nu} (x,k)$, since the new DLT leave the new DDR invariant, so it can be considered as (a function of) the squared distance calculated with the new metric. This is in accordance with our initial assumption of taking $C(\bar{k})$ as the DDR in the presence of a curved spacetime. We will study in more depth the relationship between the action with $C(\bar{k})$ and this metric in Sec.~\ref{subsec_action_metric}.
\subsection{Main properties of the geometry in the cotangent bundle}
\label{sec:fb_properties}
We have proposed a way to generalize the momentum metric studied in Ch.~\ref{chapter_curved_momentum_space} including a nontrivial curvature in spacetime. This metric can be considered as a metric in the whole cotangent bundle (also for the particular case of flat spacetime) following the formalism presented in Ch.~4 of~\cite{2012arXiv1203.4101M}. In this subsection we summarize the main ingredients of this prescription that we will use along this chapter.
We denote by $H^\rho_{\mu\nu}$ the space-time affine connection of the metric, defined by requiring that the covariant derivative of the metric vanishes
\begin{equation}
g_{\mu\nu;\rho}(x,k)\,=\, \frac{\delta g_{\mu\nu}(x,k)}{\delta x^\rho}-g_{\sigma\nu}(x,k)H^\sigma_{\rho\mu}(x,k)-g_{\sigma\mu}(x,k)H^\sigma_{\rho\nu}(x,k)\,=\,0\,,
\label{eq:cov_der}
\end{equation}
where we use a new derivative
\begin{equation}
\frac{\delta}{\delta x^\mu}\, \doteq \,\frac{\partial}{\partial x^\mu}+N_{\rho\mu}(x,k)\frac{ \partial}{\partial k_\rho}\,,
\label{eq:delta_derivative}
\end{equation}
and $N_{\mu\nu}(x,k)$ are the coefficients of the \textit{nonlinear connection} $N$, which defines the horizontal distribution. The cotangent bundle manifold can be decomposed into vertical and horizontal distributions, generated by $\partial/\partial k_\mu$ and $\delta/\delta x^\mu$ respectively, the latter being constructed so as to be supplementary to the vertical distribution (the fiber). In the GR case, the \textit{nonlinear connection} coefficients are given by
\begin{equation}
N_{\mu\nu}(x,k)\, = \, k_\rho H^\rho_{\mu\nu}(x)\,.
\label{eq:nonlinear_connection}
\end{equation}
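In the GR limit, the coefficients of Eq.~\eqref{eq:nonlinear_connection} can be computed explicitly; the following sketch does so for a $1{+}1$-dimensional FRW metric, which is an illustrative choice of the sketch.
\begin{verbatim}
import sympy as sp

t, x, k0, k1 = sp.symbols('t x k0 k1')
a = sp.Function('a')(t)

X = [t, x]
g = sp.diag(1, -a**2)                # momentum-independent FRW metric
ginv = g.inv()

def H(r, m, n):
    # Christoffel symbols H^rho_{mu nu} (GR limit of the affine connection)
    return sp.simplify(sum(ginv[r, s] * (sp.diff(g[s, n], X[m])
                                         + sp.diff(g[s, m], X[n])
                                         - sp.diff(g[m, n], X[s])) / 2
                           for s in range(2)))

k = [k0, k1]
# nonlinear connection: N_{mu nu} = k_rho H^rho_{mu nu}
N = sp.Matrix(2, 2, lambda m, n: sum(k[r] * H(r, m, n) for r in range(2)))
print(N)   # N_{01} = N_{10} = k1 a'/a , N_{11} = k0 a a'
\end{verbatim}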
As in GR, the relation between the metric and the affine connection
\begin{equation}
H^\rho_{\mu\nu}(x,k)\,=\,\frac{1}{2}g^{\rho\sigma}(x,k)\left(\frac{\delta g_{\sigma\nu}(x,k)}{\delta x^\mu} +\frac{\delta g_{\sigma\mu}(x,k)}{\delta x^\nu} -\frac{\delta g_{\mu\nu}(x,k)}{\delta x^\sigma} \right)\,,
\label{eq:affine_connection_st}
\end{equation}
is still satisfied. The \textit{d-curvature tensor} is defined as
\begin{equation}
R_{\mu\nu\rho}(x,k)\,=\,\frac{\delta N_{\nu\mu}(x,k)}{\delta x^\rho}-\frac{\delta N_{\rho\mu}(x,k)}{\delta x^\nu}\,.
\label{eq:dtensor}
\end{equation}
This tensor represents the curvature of the phase space, measuring the integrability of spacetime as a subspace of the cotangent bundle, and can be defined through the commutator of the horizontal vector fields
\begin{equation}
\left\{ \frac{\delta}{\delta x^\mu}\,,\frac{\delta}{\delta x^\nu}\right\}\,=\, R_{\mu\nu\rho}(x,k)\frac{\partial}{\partial k_\rho}\,.
\end{equation}
Also, it is easy to see that
\begin{equation}
R_{\mu\nu\rho}(x,k)\,=\,k_\sigma R^{*\sigma}_{\mu\nu\rho}(x,k)\,,
\end{equation}
where
\begin{equation}
R^{*\sigma}_{\mu\nu\rho}(x,k)\,=\,\left(\frac{\delta H_{\mu\nu}^\sigma(x,k)}{\delta x^\rho} -\frac{\delta H_{\mu \rho}^\sigma(x,k)}{\delta x^\nu} +H^\sigma_{\lambda\rho}(x,k)H_{\mu\nu}^\lambda(x,k)-H^\sigma_{\lambda\nu}(x,k)H_{\mu\rho}^\lambda(x,k)\right)\,.
\end{equation}
In the GR case, $R_{\mu\nu\rho}(x,k)=k_\sigma R^{\sigma}_{\mu\nu\rho}(x)$, being $R^{\sigma}_{\mu\nu\rho}(x)$ the Riemann tensor. The horizontal bundle would be integrable if and only if $R_{\mu\nu\rho}=0$ (see Refs.~\cite{2012arXiv1203.4101M}-\cite{Barcaroli:2015xda} for more details).
The momentum affine connection is defined as
\begin{equation}
C_\rho^{\mu\nu}(x,k)\,=\,\frac{1}{2}g_{\rho\sigma}\left(\frac{\partial g^{\sigma\nu}(x,k)}{\partial k_ \mu}+\frac{\partial g^{\sigma\mu}(x,k)}{\partial k_ \nu}-\frac{\partial g^{\mu \nu}(x,k)}{\partial k_ \sigma}\right)\,,
\label{eq:affine_connection_p}
\end{equation}
and then, the following covariant derivative can be defined
\begin{equation}
v_{\nu}^{\,;\mu}\,=\, \frac{\partial v_\nu}{\partial k_\mu}-v_\rho C^{\rho\mu}_\nu(x,k)\,.
\label{eq:k_covariant_derivative}
\end{equation}
The space-time curvature tensor is
\begin{equation}
R^{\sigma}_{\mu\nu\rho}(x,k)\,=\,R^{*\sigma}_{\mu\nu\rho}(x,k)+C^{\sigma\lambda}_\mu (x,k)R_{\lambda\nu\rho}(x,k)\,,
\label{eq:Riemann_st}
\end{equation}
and the corresponding one in momentum space is
\begin{equation}
S_{\sigma}^{\mu\nu\rho}(x,k)\,=\, \frac{\partial C^{\mu\nu}_\sigma(x,k)}{\partial k_\rho}-\frac{\partial C^{\mu\rho}_\sigma(x,k)}{\partial k_\nu}+C_\sigma^{\lambda\nu}(x,k)C^{\mu\rho}_\lambda(x,k)-C_\sigma^{\lambda\rho}(x,k)C^{\mu\nu}_\lambda(x,k)\,.
\label{eq:Riemann_p}
\end{equation}
The line element in the cotangent bundle is defined as
\begin{equation}
\mathcal{G}\,=\, g_{\mu\nu}(x,k) dx^\mu dx^\nu+g^{\mu\nu}(x,k) \delta k_\mu \delta k_\nu\,,
\end{equation}
where
\begin{equation}
\delta k_\mu \,=\, d k_\mu - N_{\nu\mu}(x,k)\,dx^\nu\,.
\end{equation}
Then, a vertical path is defined as a curve in the cotangent bundle with a fixed space-time point and the momentum satisfying the geodesic equation characterized by the affine connection of momentum space,
\begin{equation}
x^\mu\left(\tau\right)\,=\,x^\mu_0\,,\qquad \frac{d^2k_\mu}{d\tau^2}+C_\mu^{\nu\sigma}(x,k)\frac{dk_\nu}{d\tau}\frac{dk_\sigma}{d\tau}\,=\,0\,,
\end{equation}
while for a horizontal curve one has
\begin{equation}
\frac{d^2x^\mu}{d\tau^2}+H^\mu_{\nu\sigma}(x,k)\frac{dx^\nu}{d\tau}\frac{dx^\sigma}{d\tau}\,=\,0\,,\qquad \frac{\delta k_\lambda}{\delta \tau}\,=\,\frac{dk_\lambda}{d\tau}-N_{\sigma\lambda} (x,k)\frac{dx^\sigma}{d\tau}\,=\,0\,.
\label{eq:horizontal_geodesics}
\end{equation}
These equations are a generalization of the ones appearing in GR, obtaining them in the limit where the momentum affine connection vanishes and there is no momentum dependence in the space-time affine connection.
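As a simple consistency check, in the GR case the second condition of Eq.~\eqref{eq:horizontal_geodesics}, with the coefficients of Eq.~\eqref{eq:nonlinear_connection}, reads
\begin{equation}
\frac{dk_\lambda}{d\tau}-k_\rho H^{\rho}_{\sigma\lambda}(x)\frac{dx^\sigma}{d\tau}\,=\,0\,,
\end{equation}
which is the usual parallel-transport equation of the momentum covector along the space-time geodesic.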
\begin{comment}
Also, one can obtain the commutator of two space-time covariant derivatives
\begin{equation}
u^\mu_{;\nu;\rho}-u^\mu_{;\rho;\nu}\,=\,u^\sigma R^\mu_{\sigma\nu\rho}-u^{\mu;\sigma}R_{\sigma\nu\rho}\,.
\label{eq:com_der}
\end{equation}
\end{comment}
\subsection{Modified Killing equation}
Here we derive the deformed Killing equation for a generic metric in the cotangent bundle. The variation of the space-time coordinates $x^\alpha$ along a vector field $\chi^\alpha$ is
\begin{equation}
\left(x'\right)^\alpha\,=\,x^\alpha+\chi^\alpha \Delta\lambda\,,
\label{eq:x_variation}
\end{equation}
where $\Delta\lambda$ is the infinitesimal parameter characterizing the variation. This variation of $x^\alpha$ induces a variation of $k_\alpha$ given by
\begin{equation}
\left(k'\right)_\alpha\,=\,k_\beta \frac{\partial x^\beta}{\partial x^{\prime \alpha}}\,=\,k_\alpha-\frac{\partial\chi^\beta}{\partial x^\alpha}k_\beta \Delta\lambda\,,
\end{equation}
since $k$ transforms as a covector. The variation for a generic vector field depending on the phase-space variables $X^\alpha\left(x,k\right)$ is
\begin{equation}
\Delta X^\alpha\,=\,\frac{\partial X^\alpha}{\partial x^\beta} \Delta x^\beta+\frac{\partial X^\alpha}{\partial k_\beta} \Delta k_\beta\,=\,\frac{\partial X^\alpha}{\partial x^\beta}\chi^\beta \Delta\lambda-\frac{\partial X^\alpha}{\partial k_\beta}\frac{\partial\chi^\gamma}{\partial x^\beta}k_\gamma \,\Delta\lambda \,.
\label{eq:vector_variation}
\end{equation}
We obtain the Killing equation by imposing the invariance of the line element along a vector field $\chi^\alpha$
\begin{equation}
\Delta\left(ds^2\right)\,=\, \Delta (g_{\mu\nu}dx^\mu dx^\nu)\,=\, \Delta(g_{\mu\nu})dx^\mu dx^\nu+g_{\mu\nu} \Delta(dx^\mu) dx^\nu +g_{\mu\nu} \Delta(dx^\nu) dx^\mu\,=\,0\,.
\label{eq:line_variation}
\end{equation}
From Eq.~\eqref{eq:vector_variation} we obtain the variation of the metric tensor
\begin{equation}
\Delta(g_{\mu\nu})\,=\,\frac{\partial g_ {\mu\nu}}{\partial x^\alpha} \chi^\alpha \Delta\lambda -\frac{\partial g_ {\mu\nu}}{\partial k_\alpha}\frac{\partial \chi^\gamma}{\partial x^\alpha}k_\gamma \,\Delta\lambda\,,
\end{equation}
while from Eq.~\eqref{eq:x_variation}
\begin{equation}
\Delta(dx^\alpha)\,=\,d(\Delta x^\alpha)\,=\,d(\chi^\alpha \Delta\lambda)\,=\,\frac{\partial\chi^\alpha}{\partial x^\beta}dx^\beta\Delta\lambda\,,
\end{equation}
and then, Eq.~\eqref{eq:line_variation} becomes
\begin{equation}
\begin{split}
\Delta\left(ds^2\right)\,=\,\left(\frac{\partial g_ {\mu\nu}}{\partial x^\alpha} \chi^\alpha -\frac{\partial g_ {\mu\nu}}{\partial k_\alpha}\frac{\partial \chi^\gamma}{\partial x^\alpha}k_\gamma \right) dx^\mu dx^\nu \Delta\lambda \\+ g_{\mu\nu}\left(\frac{\partial\chi^\mu}{\partial x^\beta}dx^\beta dx^\nu+\frac{\partial\chi^\nu}{\partial x^\beta}dx^\beta dx^\mu\right)\Delta\lambda\,,
\end{split}
\end{equation}
so the Killing equation is
\begin{equation}
\frac{\partial g_ {\mu\nu}}{\partial x^\alpha} \chi^\alpha -\frac{\partial g_ {\mu\nu}}{\partial k_\alpha}\frac{\partial \chi^\gamma}{\partial x^\alpha}k_\gamma + g_{\alpha\nu}\frac{\partial\chi^\alpha}{\partial x^\mu}+ g_{\alpha\mu}\frac{\partial\chi^\alpha}{\partial x^\nu}\,=\,0\,,
\label{eq:killing}
\end{equation}
which is the same result obtained in Ref.~\cite{Barcaroli:2015xda}.
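As a check of the undeformed limit: when the metric does not depend on the momentum, $\partial g_{\mu\nu}/\partial k_\alpha=0$, Eq.~\eqref{eq:killing} reduces to
\begin{equation}
\frac{\partial g_{\mu\nu}}{\partial x^\alpha}\chi^\alpha+g_{\alpha\nu}\frac{\partial \chi^\alpha}{\partial x^\mu}+g_{\alpha\mu}\frac{\partial \chi^\alpha}{\partial x^\nu}\,=\,\mathcal{L}_\chi\, g_{\mu\nu}\,=\,0\,,
\end{equation}
the standard Killing equation of GR.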
\begin{comment}
One can rewrite this equation making it manifestly covariant just remembering that $\chi^\alpha$ does not depend on $k$, and hence,
\begin{equation}
\frac{\partial \chi^\alpha}{\partial x^\beta}\,=\, \frac{\delta\chi^\alpha}{\delta x^\beta}\,.
\end{equation}
Then, Eq.~\eqref{eq:killing} can be rewritten as
\begin{equation}
\begin{split}
0\,=&\, \left(\frac{\delta g_{\mu\nu}}{\delta x^\alpha}-\frac{\partial g_{\mu\nu}}{\partial k_\rho}H^\gamma_{\rho\alpha}k_\gamma\right) g^{\alpha\beta}\chi_\beta-\frac{\partial g_ {\mu\nu}}{\partial k_\alpha}\frac{\delta\chi^\gamma}{\delta x^\alpha}k_\gamma+\\
& g_{\lambda\nu}\left(\frac{\delta g^{\lambda\alpha}}{\delta x^\mu}\chi_\alpha+ g^{\lambda\alpha} \frac{\delta \chi_\alpha}{\delta x^\mu} \right)+g_{\lambda\mu}\left(\frac{\delta g^{\lambda\alpha}}{\delta x^\nu}\chi_\alpha+ g^{\lambda\alpha} \frac{\delta \chi_\alpha}{\delta x^\nu} \right)\,,
\end{split}
\end{equation}
and from Eqs.~\eqref{eq:affine_connection_st}-\eqref{eq:cov_der} one finally arrives to
\begin{equation}
\mathcal{L}_\chi g_{\mu\nu}\,=\,\chi_{\nu;\mu}+\chi_{\mu;\nu}-\frac{\partial g_ {\mu\nu}}{\partial k_\alpha} \chi^\gamma_{\,;\alpha} k_\gamma\,=\,0\,.
\label{eq:lie_metric}
\end{equation}
The same computation can be followed for a contravariant vector, which leads to the following Killing equation
\begin{equation}
\mathcal{L}_\chi u^\mu\,=\,\chi^\nu u^\mu_{\,;\nu}-u^\nu \chi^\mu_{\,;\nu}-\frac{\partial u^\mu}{\partial k_\alpha} \chi^\gamma_{\,;\alpha} k_\gamma\,.
\label{eq:lie_vec}
\end{equation}
\end{comment}
\subsection{Relationship between metric and action formalisms}
\label{subsec_action_metric}
In this subsection we will start by examining the relationship between the metric and the distance from the origin to a point $k$. We will study this relation for the momentum metric, but the same can be done in GR for space-time coordinates instead of momentum variables. After that, we will prove that there is a direct relation between the free action of a particle with a DDR and the line element of a momentum dependent metric for spacetime.
In~\cite{Bhattacharya2012RelationshipBG} it is shown that the following relation holds for the distance on a Riemannian manifold
\begin{equation}
\frac{\partial D(0,k)}{\partial k_\mu}\,=\,\frac{k_\nu g_k^{\mu\nu}(k)}{\sqrt{k_\rho g_k^{\rho\sigma}(k) k_\sigma}}\,,
\end{equation}
where $D(0,k)$ is the distance from a fixed point $0$ to $k$. This leads to
\begin{equation}
\frac{\partial D(0,k)}{\partial k_ \mu}g^k_{\mu\nu}(k) \frac{\partial D(0,k)}{\partial k_ \nu}\,=\,1\,.
\end{equation}
Moreover, this property is also checked in Ch.~3 of~\cite{Petersen2006} for the Minkowski space (inside the light cone, and extended to the light cone by continuity); by the Whitney embedding theorem~\cite{Burns1985}, it is then valid for any pseudo-Riemannian manifold of dimension $n$, since such manifolds can be embedded in a Minkowski space of dimension at most $2n+1$. Through this property, we can establish a direct relationship between the metric and the Casimir defined as the square of the distance
\begin{equation}
\frac{\partial C(k)}{\partial k_ \mu}g^k_{\mu\nu}(k) \frac{\partial C(k)}{\partial k_ \nu}\,=\,4 C(k)\,.
\label{eq:casimir_definition}
\end{equation}
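A trivial example illustrating Eq.~\eqref{eq:casimir_definition} is the flat case, $g^k_{\mu\nu}(k)=\eta_{\mu\nu}$: the squared distance from the origin is $C(k)=k_\mu\eta^{\mu\nu}k_\nu$, and one indeed finds
\begin{equation}
\frac{\partial C(k)}{\partial k_\mu}\,\eta_{\mu\nu}\,\frac{\partial C(k)}{\partial k_\nu}\,=\,4\,k^\mu \eta_{\mu\nu} k^\nu\,=\,4\,C(k)\,.
\end{equation}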
\begin{comment}
Starting by
\begin{equation}
d\sigma^2\,=\,dk_\mu g^{\mu\nu}(k) dk_\nu\,,
\end{equation}
we can parametrize a geodesic by the natural parameter $\sigma$,
\begin{equation}
1\,=\,\frac{dk(\sigma)_\mu}{d \sigma} g^{\mu\nu}(k(\sigma)) \frac{dk(\sigma)_\nu}{d \sigma}\,.
\end{equation}
One can compute the distance from the origin to a point $k$ along a geodesic $\gamma$ from
\begin{equation}
D(0,k)\,=\,\sigma(k)\,=\,\int_\gamma{\sqrt{\frac{dk(\sigma)_\mu}{d \sigma} g^{\mu\nu}(k(\sigma)) \frac{dk(\sigma)_\nu}{d \sigma}}}\,.
\end{equation}
Then, it is easy to check that the following equation holds
\begin{equation}
\frac{\partial D(0,k)}{\partial k_ \mu}g_{\mu\nu}(k) \frac{\partial D(0,k)}{\partial k_ \nu}\,=\,1\,.
\end{equation}
We can establish a direct relationship between the metric and the Casimir defined as the square of the distance
\begin{equation}
\frac{\partial C(k)}{\partial k_ \mu}g_{\mu\nu}(k) \frac{\partial C(k))}{\partial k_ \nu}\,=\,4 C(k)\,.
\label{eq:casimir_definition}
\end{equation}
being $\mathcal{N}=1/2m, 1$ when the geodesic is timelike or null respectively.
\end{comment}
On the other hand, from the action with a generic DDR
\begin{equation}
S\,=\,\int{\left(\dot{x}^\mu k_\mu-\mathcal{N} \left(C(k)-m^2\right)\right)d\tau}\,,
\label{eq:DSR_action}
\end{equation}
one can find
\begin{equation}
\dot{x}^\mu\,=\,\mathcal{N}\frac{\partial C(k)}{\partial k_\mu}\,,
\label{eq:velocity_action}
\end{equation}
where $\mathcal{N}=1/2m$ or $1$ when the geodesic is timelike or null, respectively.
As we have seen in the previous subsection, the momentum metric can be considered as a metric for the whole cotangent bundle, so one can take the following line element in spacetime for a horizontal curve
\begin{equation}
ds^2\,=\, dx^\mu g^k_{\mu\nu}(k) dx^\nu\,.
\end{equation}
For the timelike case, one can choose the parameter of the curve to be the natural parameter $s$, and then
\begin{equation}
1\,=\, \dot{x}^\mu g^k_{\mu\nu}(k) \dot{x}^\nu\,.
\end{equation}
Substituting Eq.~\eqref{eq:velocity_action} in the previous equation one finds
\begin{equation}
\left. \frac{1}{4 m^2} \frac{\partial C(k)}{\partial k_\mu} g^k_{\mu\nu}(k) \frac{\partial C(k)}{\partial k_\nu}\right\rvert_{C(k)=m^2}\,=\, \frac{1}{4 m^2} 4 m^2\,=\,1 \,,
\end{equation}
where Eq.~\eqref{eq:casimir_definition} has been used. For a null geodesic one has
\begin{equation}
0\,=\, \dot{x}^\mu g^k_{\mu\nu}(k) \dot{x}^\nu\,,
\end{equation}
and therefore, using Eq.~\eqref{eq:velocity_action} one finds
\begin{equation}
\left. \frac{\partial C(k)}{\partial k_\mu} g^k_{\mu\nu}(k) \frac{\partial C(k)}{\partial k_\nu}\right\rvert_{C(k)=0}\,=\,0 \,,
\end{equation}
where again Eq.~\eqref{eq:casimir_definition} was used in the last step. One can see that considering an action with a DDR and the line element of spacetime with a momentum dependent metric whose squared distance is the DDR gives the same results\footnote{One also arrives at the same equations if a function of the squared distance is considered as the Casimir (for timelike geodesics one would have to redefine the mass with the same function).}.
One can also arrive at the same relation between these two formalisms for the proposed generalization of the cotangent bundle metric. The relation Eq.~\eqref{eq:casimir_definition} is generalized to
\begin{equation}
\frac{\partial C(\bar{k})}{\partial \bar{k}_ \mu}g^{\bar{k}}_{\mu\nu}(\bar{k}) \frac{\partial C(\bar{k})}{\partial \bar{k}_ \nu}\,=\,4 C(\bar{k})\,=\,\frac{\partial C(\bar{k})}{\partial k_ \mu}g_{\mu\nu}(x,k) \frac{\partial C(\bar{k})}{\partial k_\nu}\,.
\label{eq:casimir_definition_cst}
\end{equation}
From the action
\begin{equation}
S\,=\,\int{\left(\dot{x}^\mu k_\mu-\mathcal{N} \left(C(\bar{k})-m^2\right)\right)d\tau}\,,
\label{eq:DGR_action}
\end{equation}
with the same Casimir function, but now with the barred momenta as its argument, one can find
\begin{equation}
\dot{x}^\mu\,=\,\mathcal{N}\frac{\partial C(\bar{k})}{\partial k_\mu}\,,
\label{eq:velocity_action_curved}
\end{equation}
where again $\mathcal{N}=1/2m$ or $1$ when the geodesic is timelike or null respectively. We see that the same relation found for the flat space-time case holds also for curved spacetime.
\subsection{Velocity in physical coordinates}
\label{sec:velocity_curved_physical}
From the previous subsection, we know that the photon trajectory is given by
\begin{equation}
ds^2\,=\, dx^\mu g^k_{\mu\nu}(k) dx^\nu \,=\, dx^\mu \varphi_\mu^\alpha(k) \eta_{\alpha \beta}\varphi_\nu^\beta(k) dx^\nu\,=\,0 \,.
\end{equation}
Then, since $\dot{k}=0$ along the trajectory, we have
\begin{equation}
d\tilde{x}^\alpha \eta_{\alpha \beta} d\tilde{x}^\beta\,=\,0 \,,
\end{equation}
with
\begin{equation}
\tilde{x}^\alpha\,=\,x^\mu \varphi^\alpha_\mu (k)\,,
\end{equation}
which are the physical coordinates found in Ch.~\ref{chapter_locality}. Now we can understand the result shown in Ch.~\ref{chapter_time_delay}: the absence of a momentum dependence in the times of flight of massless particles.
\section{Friedmann-Robertson-Walker metric}
\label{sec:rw}
Once we have defined our proposal of considering a nontrivial geometry for momentum and space-time coordinates, we can study its phenomenological implications. We will start by looking for possible effects on the Friedmann-Robertson-Walker universe. First of all, we will compute the velocity and the time dependence of momenta for photons both from the action of Eq.~\eqref{eq:GR_action} and through the line element of spacetime, checking that the same results are obtained. After that, we will study some phenomenological results in the Friedmann-Robertson-Walker universe.
In order to construct the metric in the cotangent bundle, we will use the tetrad for momentum space considered in Ch.~\ref{chapter_curved_momentum_space}
\begin{equation}
\varphi^0_0(k)\,=\,1\,,\qquad \varphi^0_i(k)\,=\,\varphi^i_0(k)\,=\,0\,,\qquad \varphi^i_j(k)\,=\,\delta^i_j e^{-k_0/\Lambda}\,,
\label{eq:RW_tetrad_p}
\end{equation}
and the space-time tetrad
\begin{equation}
e^0_0(x)\,=\,1\,,\qquad e^0_i(x)\,=\,e^i_0(x)\,=\,0\,,\qquad e^i_j(x)\,=\,\delta^i_j R(x^0)\,,
\label{eq:RW_tetrad_st}
\end{equation}
where $R(x^0)$ is the scale factor. With these ingredients, we can construct the cotangent bundle metric
\begin{equation}
g_{00}(x,k)\,=\,1\,,\qquad g_{0i}(x,k)\,=\,0\,, \qquad g_{ij}(x,k)\,=\,\eta_{ij}\, R^2(x^0) e^{-2k_0/\Lambda}\,.
\label{eq:RW_metric}
\end{equation}
One can easily check from Eq.~\eqref{eq:Riemann_p} that the scalar of curvature in momentum space is constant, $S=12/\Lambda^2$, and that the momentum curvature tensor corresponds to a maximally symmetric space
\begin{equation}
S_{\rho\sigma\mu\nu}\,\propto \, g_{\rho\mu}g_{\sigma\nu}-g_{\rho\nu}g_{\sigma\mu}\,,
\end{equation}
which is obvious from the result of Appendix~\ref{appendix:cotangent}.
\subsection{Velocities for photons}
We will compute the velocity of photons from the action
\begin{equation}
S\,=\,\int{\left(\dot{x}^\mu k_\mu-\mathcal{N} C(\bar{k})\right) }d\tau
\label{eq:action1}
\end{equation}
with the deformed Casimir of the bicrossproduct basis depending on $x$ and $k$
\begin{equation}
C(\bar{k})\,=\,\Lambda^2\left(e^{\bar{k}_0/\Lambda}+e^{-\bar{k}_0/\Lambda}-2\right)- \vec{\bar{k}}^2e^{\bar{k}_0/\Lambda}\,=\,\Lambda^2\left(e^{k_0/\Lambda}+e^{-k_0/\Lambda}-2\right)-\frac{ \vec{k}^2e^{k_0/\Lambda}}{R^2(x^0)} \,.
\label{eq:Casimir_RW}
\end{equation}
Setting $\dot{x}^0=1$, i.e. using the coordinate time as the evolution parameter, we can obtain the value of $\mathcal{N}$ as a function of position and momenta, and then the velocity for massless particles (in 1+1 dimensions) as
\begin{equation}
v\,=\,\dot{x}^1\,=\,-\frac{4 \Lambda ^3 k_1 e^{2 k_0/\Lambda} \left(e^{k_0/\Lambda}-1\right) R(x^0)^2}
{\left(k_1^2 e^{2 k_0/\Lambda}-\Lambda ^2 e^{2k_0/\Lambda} R(x^0)^2+\Lambda ^2
R(x^0)^2\right)^2}\,.
\label{eq:velocity_RW_1}
\end{equation}
When one uses the Casimir in order to obtain $k_1$ as a function of $k_0$, one finds
\begin{equation}
k_1\,= -\,\Lambda \, e^{-k_0/\Lambda}
\left(e^{k_0/\Lambda }-1\right) R(x^0)\,,
\label{eq:RW_k}
\end{equation}
and then, by substitution of Eq.~\eqref{eq:RW_k} in Eq.~\eqref{eq:velocity_RW_1}, one can see that the velocity is
\begin{equation}
v\,=\, \frac{e^{k_0/\Lambda}}{R(x^0)}\,,
\label{eq:velocity_RW_casimir}
\end{equation}
so we see an energy dependent velocity in these coordinates. When $\Lambda$ goes to infinity one gets $v=1/R(x^0)$, which is the GR result.
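The algebra behind this simplification is straightforward (a sketch): squaring Eq.~\eqref{eq:RW_k} gives $k_1^2 e^{2k_0/\Lambda}=\Lambda^2 R^2(x^0)\left(e^{k_0/\Lambda}-1\right)^2$, so the denominator of Eq.~\eqref{eq:velocity_RW_1} reduces to $4\Lambda^4 R^4(x^0)\left(e^{k_0/\Lambda}-1\right)^2$, while the numerator becomes $4\Lambda^4 R^3(x^0)\, e^{k_0/\Lambda}\left(e^{k_0/\Lambda}-1\right)^2$, and their quotient is precisely Eq.~\eqref{eq:velocity_RW_casimir}.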
This result can also be derived directly from the line element of the metric
\begin{equation}
0\,=\, (dx^0)^2-R^2(x^0)e^{-2 k_0/\Lambda}(dx^1)^2\,,
\end{equation}
which agrees with the discussion of the previous subsection: the same velocity is obtained from the action and from the line element of the metric.
Whether or not Eq.~\eqref{eq:velocity_RW_casimir} implies a time delay would require considering the propagation in a generalization to curved spacetime of the ``physical'' spacetime studied in Ch.~\ref{chapter_locality}.
\begin{comment}
In order to assure the possible existence of a time delay of flight for massless particles with different energies, we should analyze their free propagation in a generalization of the "physical" spacetime we have studied in Ch.~\ref{chapter_locality}, as we did in Ch.~\ref{chapter_time_delay}.
\end{comment}
\subsection{Momenta for photons}
We can obtain the momentum as a function of time by looking for the extrema of the action~\eqref{eq:action1}
\begin{equation}
\dot{k}_0\,=\, - \frac{\Lambda\left(e^{k_0/\Lambda}-1\right)R'(x^0)}{R(x^0)}\,, \qquad \dot{k}_1\,=\,0\,.
\label{eq:momenta_RW}
\end{equation}
Solving the differential equation, one obtains the energy as a function of time
\begin{equation}
k_0\,=\, -\Lambda \log\left(1+\frac{e^{-E/\Lambda}-1}{R(x^0)}\right) \,,
\label{eq:energy_RW}
\end{equation}
where the constant of integration is the conserved energy along the geodesic: in the limit $\Lambda\to\infty$ one has $E=k_0\, R(x^0)$, which is the barred energy.
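One can verify this solution by direct substitution (a sketch of the check): Eq.~\eqref{eq:energy_RW} is equivalent to $e^{-k_0/\Lambda}=1+(e^{-E/\Lambda}-1)/R(x^0)$, from which
\begin{equation}
\dot{k}_0\,=\,\frac{\Lambda\left(e^{-E/\Lambda}-1\right)R'(x^0)}{R(x^0)\left(R(x^0)+e^{-E/\Lambda}-1\right)}\,=\,-\frac{\Lambda\left(e^{k_0/\Lambda}-1\right)R'(x^0)}{R(x^0)}\,,
\end{equation}
in agreement with Eq.~\eqref{eq:momenta_RW}, where we used $e^{k_0/\Lambda}-1=\left(1-e^{-E/\Lambda}\right)/\left(R(x^0)+e^{-E/\Lambda}-1\right)$.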
\subsection{Redshift}
From the line element for photons we see that
\begin{equation}
0\,=\,(dx^0)^2-R^2(x^0)e^{-2k_0/\Lambda}d\vec{x}^2\,,
\end{equation}
and then
\begin{equation}
\int^{t_0}_{t_1}{\frac{dx^0\, e^{k_0/\Lambda}}{R(x^0)}}\,=\,\int^x_0 dx\,=\,x\,.
\label{eq:step}
\end{equation}
We can now write Eq.~\eqref{eq:step} as a function of $x^0$ using Eq.~\eqref{eq:energy_RW}, obtaining the quotient in frequencies (see Ch.~14 of Ref.~\cite{Weinberg:1972kfs})
\begin{equation}
\frac{\nu_0}{\nu_1}\,=\,\frac{\delta t_1}{\delta t_0}\,=\,\frac{R(t_1)\left(1+(e^{-E/\Lambda}-1)/R(t_1)\right)}{R(t_0)\left(1+(e^{-E/\Lambda}-1)/R(t_0)\right)}\,=\,\frac{R(t_1)+e^{-E/\Lambda}-1}{R(t_0)+e^{-E/\Lambda}-1}\,,
\end{equation}
and then, the redshift is
\begin{equation}
z\,=\,\frac{R(t_0)+e^{-E/\Lambda}-1}{R(t_1)+e^{-E/\Lambda}-1}-1\,.
\label{eq:redshift}
\end{equation}
Taking the limit $\Lambda\rightarrow \infty$ in the redshift one recovers the usual expression of GR for a Friedmann-Robertson-Walker space~\cite{Weinberg:1972kfs}. This equation reveals an energy dependence of the redshift, so that two particles with different energies suffer different redshifts. To illustrate this, let us suppose two particles are emitted from a distant source with energies zero and $E$, with $E\ll\Lambda$; the redshift at the detection point will then be different for each of them. Taking only the first term in the series expansion in $1/\Lambda$ we find
\begin{equation}
\frac{1+z(0)}{1+z(E)}\,=\, 1+\frac{E}{\Lambda}\left(\frac{1}{R(t_0)}-\frac{1}{R(t_1)}\right)\,.
\end{equation}
That is, for the more energetic particle there is more redshift:
\begin{equation}
1+z(E)\,=\,(1+z(0))\left( 1-\frac{E}{\Lambda }\left(\frac{1}{R(t_0)}-\frac{1}{R(t_1)}\right)\right)\,,
\end{equation}
since the last factor is always greater than unity because, as the universe is expanding, $R(t_1)<R(t_0)$.
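The intermediate step of the expansion (a sketch): using $e^{-E/\Lambda}-1\approx -E/\Lambda$ in Eq.~\eqref{eq:redshift},
\begin{equation}
1+z(E)\,\approx\,\frac{R(t_0)-E/\Lambda}{R(t_1)-E/\Lambda}\,\approx\,\frac{R(t_0)}{R(t_1)}\left(1-\frac{E}{\Lambda R(t_0)}+\frac{E}{\Lambda R(t_1)}\right)\,,
\end{equation}
from which the quotient $(1+z(0))/(1+z(E))$ given above follows at first order in $1/\Lambda$.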
\subsection{Luminosity distance}
Following the procedure shown in Ch.~14 of Ref.~\cite{Weinberg:1972kfs}, we will obtain the luminosity distance for this metric. We start by considering a circular telescope mirror of radius $b$, with its center placed at the origin and its normal along the line of sight $x$ to the light source. The light rays that reach the edge of the mirror form a cone that, in a system of locally inertial coordinates at the source, has a half-angle $|\epsilon|$ given by
\begin{equation}
b\,\approx\, R(t_0) e^{-k_0/\Lambda} x |\epsilon|\,,
\end{equation}
where $b$ is expressed as a proper distance. The solid angle subtended by this cone is
\begin{equation}
\pi |\epsilon|^2\,=\, \frac{\pi b^2}{R^2(t_0) e^{-2 k_0/\Lambda} x^2}\,,
\end{equation}
and then, the fraction of the isotropically emitted photons that arrive at the mirror is the ratio of this solid angle to $4\pi$, i.e.
\begin{equation}
\frac{|\epsilon|^2}{4}\,=\, \frac{ A}{4 \pi R^2(t_0) e^{-2k_0/\Lambda}x^2}\,,
\label{eq:fract}
\end{equation}
where $A$ is the proper area of the mirror
\begin{equation}
A\,=\,\pi b^2\,.
\end{equation}
But if a photon is emitted with an energy $h \nu_1$, it will be red-shifted to an energy
\begin{equation}
h \nu_1 \frac{R(t_1)+e^{-E/\Lambda}-1}{R(t_0)+e^{-E/\Lambda}-1}\,,
\end{equation}
and photons emitted at time intervals $\delta t_1$ will arrive at time intervals given by
\begin{equation}
\delta t_1\frac{R(t_0)+e^{-E/\Lambda}-1}{R(t_1)+e^{-E/\Lambda}-1}\,,
\end{equation}
where $t_1$ is the time when the photon is emitted from the source and $t_0$ is the time of arrival at the mirror. Then the power received by the mirror, $P$, is given by the absolute luminosity $L$ times the factor
\begin{equation}
\left(\frac{R(t_1)+e^{-E/\Lambda}-1}{R(t_0)+e^{-E/\Lambda}-1}\right)^2\,,
\end{equation}
(one power of this ratio accounting for the redshift of each photon's energy and one for the dilution of the photon arrival rate), multiplied by the fraction in Eq.~\eqref{eq:fract}:
\begin{equation}
P\,=\,L\,A \frac{\left(R(t_1)+e^{-E/\Lambda}-1\right)^2}{4 \pi R^2(t_0)\left(R(t_0)+e^{-E/\Lambda}-1\right)^2 e^{-2 k_0/\Lambda}x^2}\,.
\end{equation}
The apparent luminosity $l$ is defined as the power per unit mirror area, so, using Eq.~\eqref{eq:energy_RW} evaluated at the detection time to express $e^{-2k_0/\Lambda}$, we obtain
\begin{equation}
l\,\equiv\,\frac{P}{A}\,=\, L \frac{\left(R(t_1)+e^{-E/\Lambda}-1\right)^2}{4 \pi \left(R(t_0)+e^{-E/\Lambda}-1\right)^4 x^2}\,.
\label{eq:apk_luminosity}
\end{equation}
The apparent luminosity of a source at rest placed at distance $d$ in Euclidean space is given by $L/4\pi d^2$, so in general one may define the luminosity distance $d_L$ of a light source as
\begin{equation}
d_L\,=\,\left(\frac{L}{4\pi l}\right)^{1/2}\,,
\end{equation}
and then Eq.~\eqref{eq:apk_luminosity} can be written as
\begin{equation}
d_L\,=\, \frac{\left(R(t_0)+e^{-E/\Lambda}-1\right)^2\,x}{R(t_1)+e^{-E/\Lambda}-1}\,.
\end{equation}
We can find from the previous equation that
\begin{equation}
d_L\,=\,\left( \frac{R(t_0)+e^{-E/\Lambda}-1}{R(t_1)+e^{-E/\Lambda}-1}\right)^2 d\,,
\end{equation}
where
\begin{equation}
d\,=\,\left(R(t_1)+e^{-E/\Lambda}-1\right)x\,,
\end{equation}
is the proper distance between the source and us. From here, using Eq.~\eqref{eq:redshift}, we can write the luminosity distance in terms of the redshift
\begin{equation}
d_L\,=\,\left(1+z\right)^2 d\,,
\end{equation}
finding the same expression as in GR.
As we did for the redshift, we can calculate the difference in the luminosity distance for photons with different energies, obtaining
\begin{equation}
\frac{d_L (0)}{d_L (E)}\,=\,\left(\frac{1+z(0)}{1+z(E)}\right)^2\,,
\end{equation}
so the luminosity distance will be greater for higher energies. This is an interesting result that perhaps could be tested in cosmographic analyses.
\subsection{Congruence of geodesics}
We study in this subsection the congruence of null geodesics for the metric of the cotangent bundle from the definition~\cite{Poisson:2009pwt}
\begin{equation}
\theta \,=\,\frac{1}{\delta S}\frac{d }{d \lambda}\delta S\,,
\end{equation}
where $\delta S$ is the infinitesimal cross-sectional area of the congruence. For the metric~\eqref{eq:RW_metric} one obtains
\begin{equation}
\theta\,=\,2\frac{e^{k_0/\Lambda}R^\prime(t)}{R^2(t)}\,.
\label{eq:theta_RW}
\end{equation}
Making a series expansion in $1/\Lambda$ we get
\begin{equation}
\frac{\theta(0)}{\theta(E)} \,=\, 1-\frac{E}{R(t) \Lambda }\,.
\end{equation}
The expansion of the congruence is energy dependent, in such a way that it is greater for larger energies.
Note that, as mentioned in Ch.~\ref{chapter_curved_momentum_space}, there are two possible choices for the sign of $\Lambda$ in the de Sitter metric~\eqref{bicross-metric}; for the other sign, all the previous results are reversed: the speed, redshift, luminosity distance and congruence expansion of high energy photons would be smaller than those of low energy ones.
\section{Schwarzschild metric}
\label{sec:sch}
In this section, we study the Schwarzschild black hole with a curvature in momentum space. We will use the tetrad corresponding to Lemaître coordinates~\cite{Landau:1982dva}
\begin{equation}
e^t_t\,=\,1\,,\qquad e^x_x\,=\, \sqrt{\frac{2M}{r}}\,,\qquad e^\theta_\theta(x)\,=\, r\,,\qquad e^\phi_\phi(x)\,=\, r \sin{\theta}\,,
\label{eq:Sch_tetrad}
\end{equation}
where
\begin{equation}
r\,=\,\left(\frac{3}{2}\left(x-t\right)\right)^{(2/3)}\left(2M\right)^{(1/3)}\,.
\end{equation}
Using the same momentum tetrad of Sec.~\ref{sec:rw}, one obtains the metric in the cotangent bundle
\begin{equation}
\begin{split}
g_{tt}(x,k)\,&=\,1\,, \qquad g_{xx}(x,k)\,=\,-\frac{2M}{r}e^{-2 k_0/\Lambda}\,, \\
g_{\theta\theta}(x,k)\,&=\, -r^2e^{-2 k_0/\Lambda}\,, \qquad g_{\phi\phi}(x,k)\,=\,- r^2 \sin^2{\theta}e^{-2 k_0/\Lambda}\,.
\label{eq:Sch_metric}
\end{split}
\end{equation}
As for the Friedmann-Robertson-Walker case, one can check that the momentum scalar of curvature is constant, $S=12/\Lambda^2$, and that the momentum curvature tensor corresponds to a maximally symmetric momentum space.
The purpose of this section is to study the event horizon of the Schwarzschild black hole when there is a curvature in momentum space. In order to do so, we first compute the conserved energy along geodesics. After that, we will represent graphically the null geodesics, obtaining the same event horizon for every particle, independently of its energy. Besides this fact, we will find an energy dependent surface gravity, pointing to a possible energy dependence of the Hawking radiation.
\subsection{Energy from Killing equation}
Using Eq.~\eqref{eq:killing} for this metric one obtains
\begin{equation}
\chi^0\,=\,1\,,\qquad \chi^1\,=\,1\,,
\end{equation}
which gives the same Killing vector obtained in GR~\footnote{This can be easily understood from Eq.~\eqref{eq:killing}. If in GR a constant Killing vector exists for a given space-time geometry, then the same vector will be a Killing one for the deformed cotangent metric.}.
The same result can be obtained from the action Eq.~\eqref{eq:action1} with the Casimir
\begin{equation}
C(\bar{k})\,=\,\Lambda^2\left(e^{\bar{k}_0/\Lambda}+e^{-\bar{k}_0/\Lambda}-2\right)- \vec{\bar{k}}^2e^{\bar{k}_0/\Lambda}\,=\,\Lambda^2\left(e^{k_0/\Lambda}+e^{-k_0/\Lambda}-2\right)- \vec{k}^2e^{k_0/\Lambda}\frac{r}{2M} \,.
\label{eq:Casimir_Sch}
\end{equation}
Choosing $\tau=x^0$, $\mathcal{N}$ of Eq.~\eqref{eq:action1} can be expressed as a function of the phase-space variables, and then one can check that the derivatives of the momenta satisfy (in 1+1 dimensions)
\begin{equation}
\dot{k}_0+\dot{k}_1\,=\,0\,.
\end{equation}
From the Casimir one can obtain the relation between the spatial and zero momentum component for massless particles
\begin{equation}
k_1\,=\,\sqrt{\frac{2M}{r}}\Lambda \left(1-e^{-k_0/\Lambda}\right)\,,
\end{equation}
so the conserved energy is
\begin{equation}
E\,=\,k_0+k_1\,=\,k_0+\sqrt{\frac{2M}{r}}\Lambda \left(1-e^{-k_0/\Lambda}\right)\,.
\label{eq:energy_Sch}
\end{equation}
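The intermediate algebra (a sketch): setting $C(\bar{k})\,=\,0$ in Eq.~\eqref{eq:Casimir_Sch} gives
\begin{equation}
k_1^2\,\frac{r}{2M}\,e^{k_0/\Lambda}\,=\,\Lambda^2\left(e^{k_0/\Lambda}+e^{-k_0/\Lambda}-2\right)\,=\,\Lambda^2\, e^{k_0/\Lambda}\left(1-e^{-k_0/\Lambda}\right)^2\,,
\end{equation}
whose positive root is the expression of $k_1$ used in Eq.~\eqref{eq:energy_Sch}.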
\subsection{Event horizon}
We can obtain the event horizon from the representation of the null ingoing and outgoing geodesics. In GR, the horizon in the Lemaître coordinates is at $x-t=4M/3$ and the singularity is at $x=t$~\cite{Landau:1982dva}. From the line element of the metric Eq.~\eqref{eq:Sch_metric}, one can solve the differential equation
\begin{equation}
ds^2\,=\,0\,\implies\,\frac{dx}{dt}\,=\,\pm \left(\frac{3(x-t)}{4M}\right)^{(1/3)}e^{k_0/\Lambda}\,,
\label{eq:eq_geo}
\end{equation}
where $+$ stands for outgoing and $-$ for ingoing geodesics. Solving this differential equation numerically, expressing $k_0$ as a function of the conserved energy (by inverting Eq.~\eqref{eq:energy_Sch}), we can plot the geodesics for different energies, observing that there is no modification of the horizon: all particles see the same horizon independently of their energy\footnote{Different trajectories appear in the following plots because we are using different initial conditions for different energies.}. Despite the momentum dependence of the metric, we find that there is a common horizon for all particles, even though the velocity is energy dependent. For $M=1$ we plot the ingoing geodesics in Fig.~\ref{fig:ingoing}.
\begin{figure}[H]
\includegraphics[width=\linewidth]{Figs/ingoing}
\caption{Particles with three different velocities coming from outside the horizon, crossing it and finally arriving at the singularity.}
\label{fig:ingoing}
\end{figure}
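For reproducibility, the following minimal numerical sketch (our own illustrative code, not the one used to produce the figures; the value of $\Lambda$ and the initial condition are arbitrary) shows how Eq.~\eqref{eq:eq_geo} can be integrated, inverting Eq.~\eqref{eq:energy_Sch} at each step:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

M, Lam = 1.0, 10.0        # mass and high-energy scale (illustrative values)

def k0_of_E(E, r):
    # invert E = k0 + sqrt(2M/r)*Lam*(1 - exp(-k0/Lam)); the root lies in (0, E)
    f = lambda k0: k0 + np.sqrt(2*M/r)*Lam*(1 - np.exp(-k0/Lam)) - E
    return brentq(f, 0.0, E)

def rhs(t, y, E, sign):
    # null geodesic equation; sign = -1 for ingoing, +1 for outgoing geodesics
    r = (1.5*(y[0] - t))**(2/3) * (2*M)**(1/3)
    k0 = k0_of_E(E, r)
    return [sign * (3*(y[0] - t)/(4*M))**(1/3) * np.exp(k0/Lam)]

def singularity(t, y, E, sign):   # stop the integration at x = t (r = 0)
    return y[0] - t - 1e-6
singularity.terminal = True

# ingoing geodesic of energy E = 1 starting outside the horizon x - t = 4M/3
sol = solve_ivp(rhs, (0.0, 10.0), [3.0], args=(1.0, -1.0),
                events=singularity, max_step=0.01)
\end{verbatim}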
Null particles emitted outside the horizon but near it will escape in a finite time, see Fig.~\ref{fig:outgoing}.
\begin{figure}[H]
\includegraphics[width=\linewidth]{Figs/outgoing}
\caption{Outgoing null geodesics from outside the horizon.}
\label{fig:outgoing}
\end{figure}
One can also represent the geodesics starting inside the horizon and falling into the singularity, as in Fig.~\ref{fig:outgoing_2}.
\begin{figure}[H]
\includegraphics[width=\linewidth]{Figs/outgoing_2}
\caption{Null geodesics from inside the horizon falling into the singularity.}
\label{fig:outgoing_2}
\end{figure}
In Refs.~\cite{Dubovsky:2006vk,Barausse:2011pu,Blas:2011ni,Bhattacharyya:2015gwa} it is shown that in a LIV scenario, particles with different energies see different horizons. It can be proved that, due to this effect, there is a violation of the second law of black hole thermodynamics, leading to the possibility of constructing a perpetuum mobile (see however~\cite{Benkel:2018abt} for a possible resolution of this problem). With our prescription, we have found that there is a unique horizon for all particles, which is in agreement with the fact that in DSR theories there is a relativity principle, in contrast with the LIV scenarios.
\subsection{Surface gravity}
As was pointed out in~\cite{Cropp:2013zxi}, there are different procedures to obtain the surface gravity. One of them is related to the peeling-off properties of null geodesics near the horizon
\begin{equation}
\frac{d |x_1(t)-x_2(t)| }{d t}\,\approx \, \kappa_{\text{peeling}}(t) |x_1(t)-x_2(t)|\,,
\label{eq:surface_gravity}
\end{equation}
where $x_1(t)$ and $x_2(t)$ represent two null geodesics on the same side of the horizon, and the absence of extra factors is due to $\kappa_{\text{peeling}}=\kappa_{\text{inaffinity}}$ in the GR limit. From Eq.~\eqref{eq:eq_geo}, linearizing the factor $\left(3(x-t)/4M\right)^{1/3}$ around the horizon $x-t=4M/3$, we obtain for two null geodesics with the same energy $k_0$:
\begin{equation}
\frac{d |x_1(t)-x_2(t)| }{d t}\,\approx \,\frac{e^{k_0/\Lambda}}{4M} |x_1(t)-x_2(t)|\,,
\end{equation}
and then,
\begin{equation}
\kappa_{\text{peeling}}\,=\,\frac{e^{k_0/\Lambda}}{4M}\,,
\end{equation}
with a dependence on the energy. This points to the possibility that the Hawking temperature~\cite{Poisson:2009pwt}
\begin{equation}
T\,=\,\frac{\kappa}{2 \pi}\,,
\end{equation}
could generally depend on the energy of the outgoing particles.
\chapter{Conclusions}
\ifpdf
\graphicspath{{Chapter9/Figs/Raster/}{Chapter9/Figs/PDF/}{Chapter9/Figs/}}
\else
\graphicspath{{Chapter9/Figs/Vector/}{Chapter9/Figs/}}
\fi
\epigraph{Cry in the dojo, laugh on the battlefield.}{Japanese proverb}
In this thesis we have studied a possible deviation from special relativity, baptized as Doubly Special Relativity (DSR), in which a deformed relativistic kinematics, parametrized by a high energy scale, appears. This theory is proposed not as a fundamental quantum gravity theory, but as a low energy limit of it, trying to provide some phenomenological observations that could point towards the correct approach to such a theory.
Firstly, we have extended to second order, in an expansion in powers of the inverse of a high energy scale, a previous first-order study of a deformed kinematics compatible with the relativity principle (DRK). We have shown that the results can be obtained from a simple trick: a change of basis (modifying the Casimir and the Lorentz transformation in the one-particle system) and a change of momentum variables in the two-particle system (changing the composition law and the Lorentz transformation in the two-particle system). The same method can be easily generalized to higher orders, providing a systematic way to obtain a DRK. But in doing so, one finds an enormous arbitrariness in going beyond special relativity, so that an additional requirement, physical or mathematical, may be needed.
In order to look for some additional ingredient to reduce this arbitrariness, we have considered two different perspectives. From a geometrical point of view, we have arrived at the conclusion that only a maximally symmetric momentum space could lead to a deformed relativistic kinematics when one identifies the composition law and the Lorentz transformations as the isometries of the metric. Since one wants 4 translations and 6 Lorentz generators, the momentum space must have 10 isometries, leaving room only for a maximally symmetric space. We have found that the most common examples of deformed kinematics appearing in the literature ($\kappa$-Poincaré, Snyder and hybrid models) can be reproduced and understood from this geometrical perspective by choosing properly the algebra of the generators of translations.
The previous study does not show the possible implications of a deformed kinematics for spacetime. In previous works it is shown that, from an action in which the conservation of momentum is imposed through a deformed composition law, there is a non-locality of interactions for any observer not placed at the interaction point, an effect that becomes larger the farther away the observer sees the interaction. However, it is possible to find new noncommutative coordinates (we call them ``physical'') in which interactions are local. We have found different ways to impose locality and, in one of them, associativity of the composition law is required in order to have local interactions. This seems to select, among the previous kinematics obtained from a geometrical perspective, the $\kappa$-Poincaré kinematics. Also, we have found a relation between the different perspectives, observing that the tetrad characterizing the momentum space curvature can be used to define the functions of momentum that determine the physical coordinates.
Once the spacetime consequences of a deformed kinematics are better understood, we have explored two of its phenomenological consequences. In DSR, the only current phenomenological observation due to a deformed kinematics is a possible time delay for massless particles with different energies. We have proposed three different models in the DSR context, showing that time delay is not necessarily a possible effect in this framework, depending on the model and on the kinematics (and basis) used. This removes the strong constraints on the high energy scale that parametrizes the kinematics of DSR, which could be many orders of magnitude below the Planck scale.
Considering this possibility, we have analyzed a process in QFT when a deformed kinematics is present. Despite the lack of a dynamical theory, the computations shown here can shed some light on how the usual QFT should be modified. On the one hand, we have shown that in the presence of a covariant composition law, instead of one peak associated to a resonance, another peak could appear, correlated to the former, allowing us to determine not only the mass of the particle, but also the high energy scale. We have baptized this effect as \emph{twin peaks}. If this scale is sufficiently small, this effect could be observed in a future high energy particle accelerator.
Besides, we have considered a simple process: an electron-positron pair going to a $Z$ boson and decaying into a muon-antimuon pair. In order to do so, we have introduced simple prescriptions to take into account the effects of a (covariant) deformed composition law. We have shown that a scale of the order of some TeV could be compatible with the experimental data of such a process.
Finally, we have investigated how a curvature of spacetime could modify the kinematics. This is a crucial ingredient in the study of time delays, since the expansion of the universe must be taken into account for photons coming from astrophysical sources. In order to do so, we have studied, from a geometrical point of view, how to consider simultaneously a curvature in spacetime and in momentum space. This can be done in the so-called cotangent bundle geometry, taking into account a nontrivial geometry for the whole phase space. In this framework, we have analyzed the phenomenological consequences of a maximally symmetric momentum space combined with the space-time geometries of an expanding universe (Friedmann-Robertson-Walker) and of a black hole (Schwarzschild). In the first case, we have computed the velocity for massless particles, the redshift, the luminosity distance and the congruence of geodesics. Whether or not there is a time delay in this proposal is still an open question which deserves further study. For the black hole space-time geometry, we have seen that there is a common event horizon for all particles, in contrast with the result of a Lorentz violating scenario, where particles with different energies would see different horizons. This is in agreement with the relativity principle imposed in DSR. However, the surface gravity is energy dependent, which seems to indicate that the Hawking temperature could also show such a behavior.
|
2,869,038,154,109 | arxiv | \section{Introduction}
The research of control and planning in multi-agent systems (MASs) has been developing rapidly during the past decade with the growing demands from areas such as cooperative vehicles~\cite{Mahony_RAM_2012, Cichella_CSM_2016}, Internet of Things~\cite{Ota_TSG_2012}, intelligent infrastructures~\cite{Blaabjerg_TIE_2006, Guerrero_TIE_2013, Dorfler_TCNS_2016}, and smart manufacturing~\cite{Leitao_EAAI_2009}. Distinct from other control problems, control of MASs is characterized by issues and challenges that include, but are not limited to, a great diversity of possible planning and execution schemes, limited information and resources of the local agents, constraints and randomness of communication networks, and optimality and robustness of joint performance. A good summary of recent progress in multi-agent control can be found in~\cite{Amato_CDC_2013, Bensoussan_2013, Cao_TII_2013, Frank_2013, Oh_Auto_2015, Qin_TIE_2017, Zhang_arxiv_2019}. Building upon these results and challenges, this paper puts forward a distributed optimal control scheme for stochastic MASs by extending the linearly-solvable optimal control algorithms to MASs subject to stochastic dynamics in the presence of an explicit communication network and limited feedback information.
Linearly-solvable optimal control (LSOC) generally refers to the model-based stochastic optimal control (SOC) problems that can be linearized and solved with the facilitation of the Cole-Hopf transformation, \textit{i.e.} an exponential transformation of the value function~\cite{Fleming_AMO_1977, Todorov_NIPS_2007}. Compared with other model-based SOC techniques, since LSOC formulates the optimality equations in linear form, it enjoys the advantages of an analytical solution~\cite{Pan_NIPS_2015} and the superposition principle~\cite{Todorov_NIPS_2009}, which makes LSOC a popular control scheme for robotics~\cite{Kupcsik_AI_2017, Williams_TRO_2018}. The LSOC technique was first introduced to linearize and solve the Hamilton–Jacobi–Bellman (HJB) equation for continuous-time SOC problems~\cite{Fleming_AMO_1977}, and the application to discrete-time SOC, also known as the linearly-solvable Markov decision process (LSMDP), was initially studied in~\cite{Todorov_NIPS_2007}. More recent progress on single-agent LSOC problems can be found in~\cite{Peters_CAI_2010, Theodorou_JMLR_2010, Gomez_KDD_2014, Guan_TAC_2014, Pan_NIPS_2015, Williams_JGCD_2017}.
Different from many prevailing distributed control algorithms~\cite{Cao_TII_2013, Frank_2013, Oh_Auto_2015, Qin_TIE_2017}, such as consensus and synchronization that usually assume a given behavior, multi-agent SOC allows agents to have different objectives and optimizes the action choices for more general scenarios~\cite{Amato_CDC_2013}. Nevertheless, it is not straightforward to extend the single-agent SOC methods to multi-agent problems. The exponential growth of dimensionality in MASs and the consequent surges in computation and data storage demand more sophisticated and preferably distributed planning and execution algorithms. The involvement of communication networks (and constraints) requires the multi-agent SOC algorithms to achieve stability and optimality subject to local observation and more involved cost function. While the multi-agent Markov decision process (MDP) problem has received plenty of attention from both the fields of computer science and control engineering \cite{Guestrin_2002_NIPS, Becker_JAIR_2004, Amato_CDC_2013, Zhang_ICML_2018, Zhang_arxiv_2019, Zhang_arxiv_2019b}, there are relatively fewer results focused on multi-agent LSMDP. A recent result on multi-agent LSMDP represented the MAS problem as a single-agent problem by stacking the states of all agents into a joint state vector, and the scalability of the problem was addressed by parameterizing the value function~\cite{Daniel_2017}; however, since the planning and execution of the control action demand the knowledge of global states as well as a centralized coordination, the parallelization scheme of the algorithm was postponed in that paper. While there are more existing results focused on the multi-agent LSOC in continuous-time setting, most of these algorithms still depend on the knowledge of the global states, \textit{i.e.} a fully connected communication network, which may not be feasible or affordable to attain in practice. Some multi-agent LSOC algorithms also assume that the joint cost function can be factorized over agents, which basically simplifies the multi-agent control problem into multiple single-agent problems, and some features and advantages of MASs are therefore forfeited. Broek \textit{et al.} investigated the multi-agent LSOC problem for continuous-time systems governed by It\^o diffusion process~\cite{Broek_JAIR_2008}; a path integral formula was put forward to approximate the optimal control actions, and a graphical model inference approach was adopted to predict the optimal path distribution; nonetheless, the optimal control law assumed an accurate and complete knowledge of global states, and the inference was performed on the basis of mean-field approximation, which assumes that the cost function can be disjointly factorized over agents and ignores the correlations between agents. A distributed LSOC algorithm with infinite-horizon and discounted cost was studied in~\cite{Anderson_Robotica_2014} for solving a distance-based formation problem of nonholonomic vehicular network without explicit communication topology. The multi-agent LSOC problem was also recently discussed in~\cite{Williams_JGCD_2017} as an accessory result for a novel single-agent LSOC algorithm; an augmented dynamics was built by piling up the dynamics of all agents, and a single-agent LSOC algorithm was then applied to the augmented system. 
Similar to the discrete-time result in~\cite{Daniel_2017}, the continuous-time result resorting to augmented dynamics also presumes a fully connected network and faces the challenge that the computation and sampling schemes that originated from the single-agent problem may become inefficient and possibly fail as the dimensions of the augmented state and control grow with the number of agents.
To address the aforementioned challenges, this paper investigates distributed LSOC algorithms for discrete-time and continuous-time MASs with consideration of local observation, correlations between neighboring agents, efficient sampling and parallel computing. A distributed framework is put forward to partition the connected network into multiple factorial subsystems, each of which comprises a (central) agent and its neighboring agents, such that the local control action of each agent, depending on the local observation, optimizes the joint cost function of a factorial subsystem, and the sampling and computational complexities of each agent are related to the size of the factorial subsystem instead of the entire network. Sampling and computation are parallelized to expedite the algorithms and exploit the resources in the network, with state measurements, intermediate solutions, and sampled data exchanged over the communication network. For the discrete-time multi-agent LSMDP problem, we linearize the joint Bellman equation of each factorial subsystem into a system of linear equations, which can be solved with parallel programming, making both the planning and execution phases fully decentralized. For the continuous-time multi-agent LSOC problem, instead of adopting the mean-field assumption and ignoring the correlations between neighboring agents, joint cost functions are permitted in the subsystems; the joint optimality equation of each subsystem is first cast into a joint stochastic HJB equation, and then solved with a distributed path integral control method and a sample-efficient relative entropy policy search (REPS) method, respectively. The compositionality of LSOC is utilized to efficiently generate a composite controller for an unlearned task from the existing controllers for learned tasks. Illustrative examples of coordinated UAV teams are presented to verify the effectiveness and advantages of multi-agent LSOC algorithms. Building upon our preliminary work on distributed path integral control for continuous-time MASs~\cite{Wan_arXiv_2020}, this paper not only integrates the distributed LSOC algorithms for both discrete-time and continuous-time MASs, but also supplements the previous result with a distributed LSMDP algorithm for discrete-time MASs, a distributed REPS algorithm for continuous-time MASs, a compositionality algorithm for task generalization, and more illustrative examples.
The paper is organized as follows: \hyperref[sec2]{Section~2} introduces the preliminaries and formulations of multi-agent LSOC problems; \hyperref[sec3]{Section~3} presents the distributed LSOC algorithms for discrete-time and continuous-time MASs, respectively; \hyperref[sec4]{Section~4} shows the numerical examples, and \hyperref[sec5]{Section~5} draws the conclusions. Some notations used in this paper are defined as follows: For a set $\mathcal{S}$, $|\mathcal{S}|$ represents the cardinality of the set $\mathcal{S}$; for a matrix $X$ and a vector $v$, $\det X$ denotes the determinant of matrix $X$, and weighted square norm $\|v\|^2_X := v^\top X v$.
\section{Preliminaries and Problem Formulation}\label{sec2}
Preliminaries on MASs and LSOC are introduced in this section. The communication networks underlying MASs are represented by graphs, and the discrete-time and continuous-time LSOC problems are extended from single-agent scenario to MASs under a distributed planning and execution framework.
\titleformat*{\subsection}{\MySubSubtitleFont}
\titlespacing*{\subsection}{0em}{0.75em}{0.75em}[0em]
\subsection{Multi-Agent Systems and Distributed Framework}\label{sec2.1}
For a MAS consisting of $N \in \mathbb{N}$ agents, $\mathcal{D} = \{1, 2, \cdots, N\}$ denotes the index set of agents, and the communication network among agents is described by an undirected graph $\mathcal{G} = \{\mathcal{V}, \mathcal{E}\}$, which implies that the communication channel between any two agents is bilateral. The communication network $\mathcal{G}$ is assumed to be connected. An agent $i \in \mathcal{D}$ in the network is denoted as a vertex $v_i \in \mathcal{V} = \{v_1, v_2, \cdots, v_N\}$, and an undirected edge $(v_i, v_j) \in \mathcal{E} \subset \mathcal{V} \times \mathcal{V}$ in graph $\mathcal{G}$ implies that the agents $i$ and $j$ can measure the states of each other, where the state of agent $i \in \mathcal{D}$ is denoted by $x_i = [x_{i(1)}, x_{i(2)}, \cdots, x_{i(M)}]^\top \in \mathbb{R}^{M}$. Agents $i$ and $j$ are neighboring or adjacent agents if there exists a communication channel between them, and the index set of agents neighboring agent $i$ is denoted by $\mathcal{N}_i$ with $\bar{\mathcal{N}}_i = \mathcal{N}_i \cup \{ i \}$. We use column vectors $\bar{x}_i = [x_i^\top, x^\top_{j \in \mathcal{N}_i}]^\top \in \mathbb{R}^{M \cdot |\mathcal{\bar N}_i| }$ to denote the joint state of agent $i$ and its adjacent agents $\mathcal{N}_i$, which together make up the factorial subsystem $\bar{\mathcal{N}}_i$ of agent $i$, and $x = [x_1^\top, x_2^\top, \cdots, x^\top_N]^\top \in \mathbb{R}^{M \cdot N}$ denotes the global state of the MAS. \hyperref[fig1]{Figure~1} shows a MAS and all its factorial subsystems.
To optimize the correlations between neighboring agents while not intensifying the communication and computational complexities of the network or each agent, our paper proposes a distributed planning and execution framework, as a trade-off scheme between the ease of implementation and the optimality of performance, for multi-agent LSOC problems. Under this distributed framework, the local control action $u_i$ of agent $i \in \mathcal{D}$ is computed by solving the local LSOC problem defined in factorial subsystem $\bar{\mathcal{N}}_i$. Instead of requiring the cost functions to be fully factorized over agents \cite{Broek_JAIR_2008} or the knowledge of global states \cite{Daniel_2017, Williams_JGCD_2017}, joint cost functions are permitted in every factorial subsystem, which captures the correlations and cooperation between neighboring agents, and the local control action $u_i$ only relies on the local observation $\bar{x}_i$ of agent $i$, which also simplifies the structure of the communication network. Meanwhile, the global computational complexity is no longer exponential with respect to the total number of agents $|\mathcal{D}|$ in the network, but becomes linear with respect to it in general, and the computational complexity of local agent $i$ is only related to the number of agents in factorial subsystem $\bar{\mathcal{N}}_i$. However, since the local control actions are computed from the local observations with only partial information, the distributed LSOC law obtained under this framework is usually a sub-optimal solution, although it coincides with the global optimal solution when the communication network is fully connected. More explanations and discussions on this will be given at the end of this section. Before that, we first reformulate the discrete-time and continuous-time LSOC problems from the single-agent scenario to a multi-agent setting that is compatible with our distributed framework.
\begin{figure}[htpb]
\centering
\includegraphics[width=0.75\textwidth]{figure/Factored_Subsystems}
\caption{An example of MAS and factorial subsystems. MAS $\mathcal{G}$ with four agents can be partitioned into four factorial subsystems $\bar{\mathcal{N}}_1, \bar{\mathcal{N}}_2, \bar{\mathcal{N}}_3$, and $\bar{\mathcal{N}}_4$, and each subsystem is assumed to be fully connected.}\label{fig1}
\end{figure}
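To make the subsystem notation concrete, the following minimal Python sketch builds the neighbor sets $\mathcal{N}_i$ and factorial subsystems $\bar{\mathcal{N}}_i$ from an edge list; the four-agent topology and all variable names are illustrative assumptions mirroring \hyperref[fig1]{Figure~1}, not part of the formulation.
\begin{verbatim}
# Minimal sketch: factorial subsystems from an undirected graph.
edges = [(1, 2), (2, 3), (3, 4)]     # undirected edges (v_i, v_j)
agents = {1, 2, 3, 4}                # index set D

neighbors = {i: set() for i in agents}   # neighbor sets N_i
for i, j in edges:
    neighbors[i].add(j)
    neighbors[j].add(i)

# factorial subsystems: bar_N_i = N_i U {i}
subsystems = {i: neighbors[i] | {i} for i in agents}

for i in sorted(agents):
    print(f"agent {i}: N = {sorted(neighbors[i])}, "
          f"bar_N = {sorted(subsystems[i])}")
\end{verbatim}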
\subsection{Discrete-Time Dynamics}\label{Sec2_2}
Discrete-time SOC for single-agent systems, also known as the single-agent MDP, is briefly reviewed and then generalized to the networked MAS scenario. We consider single-agent MDPs with finite state space and continuous control space in the first-exit setting. For a single agent $i \in \mathcal{D}$, the state variable $x_i$ belongs to a finite set $\mathcal{S}_i = \{s^i_1, s^i_2, \cdots\} = \mathcal{I}_i \cup \mathcal{B}_i$, which may be generated from an infinite-dimensional state problem by an appropriate coding scheme~\cite{Sutton_2018}. $\mathcal{I}_i \subset \mathcal{S}_i$ denotes the set of interior states of agent $i$, and $\mathcal{B}_i \subset \mathcal{S}_i$ denotes the set of boundary states. Without communication and interference from other agents, the passive dynamics of agent $i$ follows the probability distribution
\begin{equation*}
x'_i \sim p_i( \cdot | x_i),
\end{equation*}
where $x_i, x'_i \in \mathcal{S}_i$, and $p_i(x'_i | x_i)$ denotes the transition probability from state $x_i$ to $x'_i$. When taking control action $u_i$ at state $x_i$, the controlled dynamics of agent $i$ is described by the distribution mapping
\begin{equation}\label{single_agent_controlled}
x_i' \sim u_i ( \cdot | x_i) = p_i( \cdot | x_i, u_i),
\end{equation}
where $u_i(x'_i | x_i)$ or $p_i(x'_i | x_i, u_i)$ denotes the transition probability from state $x_i$ to state $x'_i$ subject to control $u_i$, which belongs to a continuous space. We require that $u_i(x'_i | x_i) = 0$ whenever $p_i(x'_i | x_i) = 0$, so that the controller cannot realize transitions forbidden by the passive dynamics, \textit{e.g.} jumping directly to the goal states. When $x_i \in \mathcal{I}_i \subset \mathcal{S}_i$, the immediate or running cost function of LSMDPs is designed as:
\begin{equation}\label{eq3}
c_i(x_i, u_i) = q_i(x_i) + \textrm{KL}( u_i(\cdot | x_i) \parallel p_i( \cdot | x_i)),
\end{equation}
where the state cost $q_i(x_i)$ can be an arbitrary function encoding how (un)desirable different states are, and the KL-divergence\footnote{The KL-divergence (relative entropy) between two discrete probability mass functions $p(x)$ and $q(x)$ is defined as\begin{equation*}\label{KLD}
\textrm{KL}(p \parallel q) = \sum_{x \in \mathcal{X}} p(x) \log[{p(x)} / {q(x)}],
\end{equation*}which has an absolute minimum $0$ when $p(x) = q(x), \forall x \in \mathcal{X}$. For two continuous probability density functions $p(x)$ and $q(x)$, the KL-divergence is defined as
\begin{equation*}
\textrm{KL}(p \parallel q) = \int_{x\in \mathcal{X}} p(x) \log[p(x) / q(x)] dx.
\end{equation*}} measures the cost of control actions. When $x_i \in \mathcal{B}_i \subset \mathcal{S}_i$, the final cost function is defined as $\phi_i(x_i) \geq 0$. The cost-to-go function of the first-exit problem starting at state-time pair $(x_i^{t_0}, t_0)$ is defined as
\begin{equation}\label{eq7}
J^{u_i}_i(x^{t_0}_i, t_0) = \mathbb{E}^{u_i} \bigg[ \phi_i(x_{i}^{t_f}) + \sum_{\tau = t_0}^{t_f - 1} c_i(x_i^\tau, u^\tau_i) \bigg],
\end{equation}
where $(x^\tau_i, u_i^\tau)$ is the state-action pair of agent $i$ at time step $\tau$, $x^{t_f}_i$ is the terminal or exit state, and the expectation $\mathbb{E}^{u_i}$ is taken with respect to the probability measure under which $x_i$ satisfies~\eqref{single_agent_controlled} given the control law $u_i = (u^{t_0}_i, u^{t_1}_i, \cdots, u^{t_f - 1}_i)$ and initial condition $x_i^{t_0}$. The objective of the discrete-time stochastic optimal control problem is to find the optimal policy $u_i^*$ and value function $V_i(x_i)$ by solving the Bellman equation
\begin{equation}\label{eq8}
V_i(x_i) = \min_{u_i} \left\{ c_i(x_i, u_i) + \mathbb{E}_{x'_i \sim u_i(\cdot | x_i)}[V_i(x'_i)] \right\},
\end{equation}
where the value function $V_i(x_i)$ is defined as the expected cumulative cost for starting at state $x_i$ and acting optimally thereafter, \textit{i.e.} $V_i(x_i) = \min_{u_i} J^{u_i}_i(x^{t_0}_i, t_0)$.
Based on the preceding formulation of the single-agent LSMDP, we introduce a multi-agent LSMDP formulation subject to the distributed framework of factorial subsystems. For simplicity, we assume that the passive dynamics of agents in the MAS are homogeneous and mutually independent, \textit{i.e.} agents without control are governed by identical dynamics and do not interfere or collide with each other. This assumption is also posited in many previous papers on multi-agent LSOC or distributed control~\cite{Olfati-Saber_PIEEE_2007, Broek_JAIR_2008, Frank_2013, Williams_JGCD_2017}. Since agent $i \in \mathcal{D}$ can only observe the states of neighboring agents $\mathcal{N}_i$, we are interested in the subsystem $\bar{\mathcal{N}}_i = \mathcal{N}_i \cup \{i\}$ when computing the control law of agent $i$. Hence, the autonomous dynamics of subsystem $\bar{\mathcal{N}}_i$ follow the distribution mapping
\begin{equation}\label{eq1}
\bar{x}'_i \sim \bar{p}_i( \cdot | \bar{x}_i) =\prod_{j \in {\mathcal{\bar{N}}}_i } p_j( \cdot | x_j),
\end{equation}
where the joint states $\bar{x}_i, \bar{x}'_i \in \prod_{j\in\mathcal{\bar{N}}_i} \mathcal{S}_j = \bar{\mathcal{S}}_i = \{\bar{s}^i_1, \bar{s}^i_2, \cdots \}= \bar{\mathcal{I}}_i \cup \bar{\mathcal{B}}_i$, and the distribution $\bar{p}_i( \cdot | \bar{x}_i)$ is generally accessible only to agent $i$. Similarly, the global state of all agents evolves as $x' \sim p( \cdot | x) = \prod_{i=1}^{N} p_i( \cdot | x_i)$, which is usually not available to a local agent $i$ unless it can receive information from all other agents $\mathcal{D} \backslash \{i\}$, \textit{e.g.} agent 3 in~\hyperref[fig1]{Figure~1}. Since the local control action $u_i$ relies only on the local observation of agent $i$, we assume that the joint posterior state $\bar{x}'_i$ of subsystem $\bar{\mathcal{N}}_i$ is exclusively determined by the joint prior state $\bar{x}_i$ and joint control $\bar{u}_i$ when computing the optimal control actions in subsystem $\bar{\mathcal{N}}_i$. More intuitively, under this assumption, the local LSOC algorithm in subsystem $\bar{\mathcal{N}}_i$ only requires the measurement of joint state $\bar{x}_i$ and treats subsystem $\bar{\mathcal{N}}_i$ as a fully connected network, as shown in~\hyperref[fig1]{Figure~1}. When each agent in $\mathcal{\bar{N}}_i$ samples its control action independently, the joint controlled dynamics of the factorial subsystem $\bar{\mathcal{N}}_i$ satisfy
\begin{equation}\label{eq2}
\bar{x}'_i \sim \bar{u}_i( \cdot | \bar{x}_i) = \prod_{j \in \mathcal{\bar{N}}_i} u_j ( \cdot | \bar{x}_i) = \prod_{j\in\mathcal{\bar{N}}_i} p_j( \cdot | x_i, x_{j\in\mathcal{N}_i}, u_j),
\end{equation}
where the joint state $\bar{x}_i$ and joint distribution $\bar{u}_i(\cdot | \bar{x}_i)$ are only accessible to agent $i$ in general. Once we figure out the joint control distribution $\bar{u}_i(\cdot | \bar{x}_i)$ for subsystem $\bar{\mathcal{N}}_i$, the local control distribution $u_i(\cdot | \bar{x}_i)$ of agent $i$ can be retrieved by calculating the marginal distribution. The joint immediate cost function for subsystem $\bar{\mathcal{N}}_i$ when $\bar{x}_i \in \mathcal{\bar{I}}_i$ is defined as follows
\begin{equation}\label{eq4}
c_i(\bar{x}_i, \bar{u}_i) =q_i(\bar{x}_i) + \textrm{KL}(\bar{u}_i( \cdot | \bar{x}_i) \parallel \bar{p}_i( \cdot | \bar{x}_i)) = q_i(\bar{x}_i) + \sum_{j \in \bar{\mathcal{N}}_i}\textrm{KL}(u_j( \cdot | \bar{x}_i) \parallel p_j( \cdot | x_j)) ,
\end{equation}
where the state cost $q_i(\bar{x}_i)$ can be an arbitrary function of the joint state $\bar{x}_i$, \textit{e.g.} a constant or the norm of a disagreement vector, and the second equality follows from \eqref{eq1} and \eqref{eq2}, which implies that the joint control cost is the cumulative sum of the local control costs. When $\bar{x}_i \in \mathcal{\bar{B}}_i$, the exit cost function is $\phi_i(\bar{x}_i) = \sum_{j\in\mathcal{\bar{N}}_i} \omega^i_{j} \cdot \phi_j(x_j)$, where $\omega^i_{j} > 0$ is a weight measuring the priority of the assignment on agent $j$. In order to improve the success rate in applications, it is preferable to assign $\omega_i^i$ the largest weight when computing the control distribution $\bar{u}_i$ in subsystem $\mathcal{\bar{N}}_i$. Subsequently, the joint cost-to-go function of the first-exit problem in subsystem $\bar{\mathcal{N}}_i$ becomes
\begin{equation}\label{J_CTG_Discrete_Time}
J^{\bar{u}_i}_i(\bar{x}^{t_0}_i, t_0) = \mathbb{E}^{\bar{u}_i} \bigg[ \phi_i(\bar{x}_{i}^{t_f}) + \sum_{\tau = t_0}^{t_f - 1} c_i(\bar{x}_i^\tau, \bar{u}^\tau_i) \bigg].
\end{equation}
Some abuse of notation occurs when we define the cost functions $c_i$ and $\phi_i$, cost-to-go function $J_i$, and value function $V_i$ in both the single-agent setting and the factorial subsystem; the two settings can be distinguished from the arguments of these functions. Derived from the single-agent Bellman equation~\eqref{eq8}, the joint optimal control action $\bar{u}_i^*( \cdot | \bar{x}_i)$ subject to the joint cost function~\eqref{eq4} can be solved from the following joint Bellman equation in subsystem $\bar{\mathcal{N}}_i$
\begin{equation}\label{CompBellman}
V_i(\bar{x}_i) = \min_{\bar u_i} \left\{ c_i(\bar{x}_i, \bar{u}_i) + \mathbb{E}_{\bar{x}'_i \sim \bar{u}_i(\cdot | \bar{x}_i)}[V_i(\bar{x}'_i)] \right\},
\end{equation}
where $V_i(\bar{x}_i)$ is the (joint) value function of joint state $\bar{x}_i$. A linearization method as well as a parallel programming method for solving~\eqref{CompBellman} will be discussed in~\hyperref[sec3]{Section~3}.
\subsection{Continuous-Time Dynamics}
For continuous-time LSOC problems, we first consider the dynamics of single agent $i$ described by the following It\^o diffusion process
\begin{equation}\label{eq10}
dx_i = f_i(x_i, t)dt + B_i(x_i) [u_i(x_i, t)dt + \sigma_i d w_i],
\end{equation}
where $x_i\in \mathbb{R}^{M}$ is agent $i$'s state vector from an uncountable state space; $f_i(x_i, t) + B_i(x_i) \cdot u _i(x_i, t) \in \mathbb{R}^{M}$ is the deterministic drift term with passive dynamics $f_i(x_i, t)$, control matrix $B_i(x_i) \in \mathbb{R}^{M\times P}$ and control action $u_i(x_i, t) \in \mathbb{R}^P$; noise $dw_i \in \mathbb{R}^P$ is a vector of possibly correlated\footnote{When the components of $d{\tilde w}_i = [d{\tilde w}_{i, (1)}, \cdots, d{\tilde w}_{i, (P)}]^\top$ are correlated and satisfy a multi-variate normal distribution $N(0, \Sigma_i)$, by using the Cholesky decomposition $\Sigma_i = \sigma_i \sigma^\top_i$, we can rewrite $d\tilde{w}_i = \sigma_i dw_i$, where $dw_i$ is a vector of Brownian components with zero drift and unit-variance rate.} Brownian components with zero mean and unit rate of variance, and the positive semi-definite matrix $\sigma_i \in \mathbb{R}^{P \times P}$ scales the noise, so that $\sigma_i\sigma_i^\top$ is the covariance of the scaled noise $\sigma_i dw_i$. When $x_i \in \mathcal{I}_i$, the running cost function is defined as
\begin{equation}\label{eq11}
c_i(x_i, u_i) = q_i(x_i) + \dfrac{1}{2}u_i(x_i, t)^\top R_i u_i(x_i, t),
\end{equation}
where $q_i(x_i) \geq 0$ is the state-related cost, and $u_i^\top R_iu_i$ is the control-quadratic term with matrix $R_i \in \mathbb{R}^{P\times P}$ being positive definite. When $x^{t_f}_i \in \mathcal{B}_i$, the terminal cost function is $\phi_i(x^{t_f}_i)$, where $t_f$ is the exit time. Hence, the cost-to-go function of first-exit problem is defined as
\begin{equation}\label{J_CTG_Cont_Time}
J^{u_i}_i(x_i^t, t) = \mathbb{E}^{u_i}_{x_i^t, t} \left[ \phi_i(x_{i}^{t_f}) + \int_{t}^{t_f} c_i(x_i(\tau), u_i(\tau)) \ d\tau \right],
\end{equation}
where the expectation is taken with respect to the probability measure under which $x_i$ is the solution to~\eqref{eq10} given the control law $u_i$ and initial condition $x_i(t)$. The value function is defined as the minimal cost-to-go function $V_i(x_i, t) = \min_{u_i} J_i^{u_i}(x_i^t, t)$. Subject to the dynamics~\eqref{eq10} and running cost function~\eqref{eq11}, the optimal control action $u_i^*$ can be solved from the following single-agent stochastic Hamilton–Jacobi–Bellman (HJB) equation:
\begin{align}\label{singleHJB}
- \partial_t V_i(x_i, t) = \min_{u_i} \Big\{ c_i(x_i, u_i) + [f_i(x_i, t) & + B_i(x_i)u_i(x_i,t)]^\top \cdot \nabla_{x_i} V_i(x_i, t) \\
& + \frac{1}{2} \textrm{tr} \left[B_i(x_i)\sigma_i\sigma^\top_iB_i(x_i)^\top \cdot \nabla^2_{x_ix_i} V_i(x_i, t) \right] \Big\}, \nonumber
\end{align}
where $\nabla_{x_i}$ and $\nabla^2_{x_i x_i}$ respectively refer to the gradient and Hessian matrix with $\nabla_{x_i} V_i = [ \partial V_i / \partial x_{i(1)}, \break \cdots, \partial V_i / \partial x_{i(M)}]^\top$ and elements $[\nabla^2_{x_ix_i}V_i]_{m,n} = \partial^2 V_i / \partial x_{i(m)} \partial x_{i(n)}$. A few methods have been proposed to solve the stochastic HJB equation~\eqref{singleHJB}, such as approximation methods via discrete-time MDPs or eigenfunctions~\cite{Todorov_IEEESADPRL_2009} and path integral approaches~\cite{Kappen_PRL_2005, Broek_JAIR_2008, Theodorou_JMLR_2010}.
Similar to the extension of discrete-time LSMDP from single-agent setting to MAS, the joint continuous-time dynamics for factorial subsystem $\bar{\mathcal{N}}_i$ is described by
\begin{equation}\label{eq13}
d \bar{x}_i = \bar{f}_i(\bar{x}_i,t) dt + \bar{B}_i(\bar{x}_i) \left[ \bar{u}_i(\bar{x}_i,t) dt + \bar{\sigma}_i d\bar{w}_i \right],
\end{equation}
where the joint passive dynamics vector is denoted by $\bar{f}_i(\bar{x}_i,t) = [f_i(x_i, t)^\top, f_{j\in\mathcal{N}_i}(x_j,t)^\top]^\top \in \mathbb{R}^{M \cdot |\mathcal{\bar N}_i|}$, the joint control matrix is denoted by $\bar{B}_i(\bar{x}_i) = \textrm{diag}\{ B_i(x_i), B_{j\in\mathcal{N}_i}(x_{j}) \} \in \break \mathbb{R}^{M \cdot |\bar{\mathcal{N}}_i| \times P \cdot |\bar{\mathcal{N}}_i| }$, $\bar{u}_i(\bar{x}_i,t) = [u_i(\bar{x}_i,t)^\top, u_{j \in \mathcal{N}_i}(\bar{x}_i, t)^\top]^\top \in \mathbb{R}^{P \cdot |\bar{\mathcal{N}}_i|}$ is the joint control action, $d\bar{w}_i = [dw^\top_i, dw^\top_{j\in\mathcal{N}_i}]^\top \in \mathbb{R}^{P\cdot|\bar{\mathcal{N}}_i|}$ is the joint noise vector, and the joint covariance matrix is denoted by $\bar{\sigma}_i = \textrm{diag}\{ \sigma_i, \sigma_{j\in\mathcal{N}_i} \} \in \mathbb{R}^{P \cdot |\bar{\mathcal{N}}_i| \times P \cdot |\bar{\mathcal{N}}_i|}$. Analogous to the discrete-time scenario, we assume that the passive dynamics of agents are homogeneous and mutually independent, and that, for the local planning algorithm of agent $i$ or subsystem $\bar{\mathcal{N}}_i$, which computes the local control action $u_i(\bar{x}_i, t)$ for agent $i$ and the joint control action $\bar{u}_i(\bar{x}_i, t)$ in subsystem $\bar{\mathcal{N}}_i$, the evolution of the joint state $\bar{x}_i$ depends only on the current values of $\bar{x}_i$ and the joint control $\bar{u}_i(\bar{x}_i, t)$. When $\bar{x}_i \in \mathcal{\bar I}_i$, the joint immediate cost function for subsystem $\bar{\mathcal{N}}_i$ is defined as
\begin{equation}\label{cont_cost}
c_i(\bar{x}_i, \bar{u}_i) = q_i(\bar{x}_i) + \frac{1}{2}\bar{u}_i(\bar{x}_i, t)^\top \bar{R}_i \bar{u}_i(\bar{x}_i, t),
\end{equation}
where the state-related cost $q_i(\bar{x}_i)$ can be an arbitrary function measuring the (un)desirability of different joint states $\bar{x}_i \in \mathcal{\bar S}_i$, and $\bar{u}_i^\top \bar{R}_i \bar{u}_i$ is the control-quadratic term with matrix $\bar{R}_i \in \mathbb{R}^{P \cdot |\bar{\mathcal{N}}_i| \times P \cdot |\bar{\mathcal{N}}_i|}$ being positive definite. When $\bar{R}_i = \textrm{diag}\{R_i, R_{j \in \mathcal{N}_i} \}$ with $R_i$ and $R_j$ defined in~\eqref{eq11}, the joint control cost term in~\eqref{cont_cost} satisfies $\bar{u}_i^\top \bar{R}_i \bar{u}_i = \sum_{j\in\mathcal{\bar{N}}_i}u_j^\top R_ju_j$, which parallels the relationship between the discrete-time control costs in~\eqref{eq4}. When $\bar{x}_i \in \bar{\mathcal{B}}_i$, the terminal cost function is defined as $\phi_i(\bar{x}_i) = \sum_{j\in\mathcal{\bar{N}}_i}\omega_j^i \cdot \phi_j(x_j)$, where $\omega_j^i > 0$ is the weight measuring the priority of the assignment on agent $j$, and we let the weight $\omega_i^i$ dominate the other weights $\omega_{j \in \mathcal{N}_i}^i$ to improve the success rate. Compared with cost functions fully factorized over agents, the joint cost functions in~\eqref{cont_cost} can gauge and facilitate the correlation and cooperation between neighboring agents. Subsequently, the joint cost-to-go function of the first-exit problem in subsystem $\bar{\mathcal{N}}_i$ is defined as
\begin{equation*}
J^{\bar{u}_i}_i(\bar{x}_i^t, t) = \mathbb{E}^{\bar{u}_i}_{\bar{x}_i^t, t} \left[ \phi_i(\bar{x}_{i}^{t_f}) + \int_{t}^{t_f} c_i(\bar{x}_i(\tau), \bar{u}_i(\tau)) \ d\tau \right].
\end{equation*}
Let the (joint) value function $V_i(\bar{x}_i, t)$ be the minimal cost-to-go function, \textit{i.e.} $V_i(\bar{x}_i, t) = \min_{\bar{u}_i} J_i^{\bar{u}_i}(\bar{x}_i^t, t)$. We can then compute the joint optimal control action $\bar{u}_i^*$ of subsystem $\mathcal{\bar{N}}_i$ by solving the following joint optimality equation
\begin{equation}\label{ContBellman}
V_i(\bar{x}_i, t) = \min_{\bar{u}_i} \mathbb{E}^{\bar{u}_i}_{\bar{x}_i^t, t} \left[ \phi_i(\bar{x}_{i}^{t_f}) + \int_{t}^{t_f} c_i(\bar{x}_i(\tau), \bar{u}_i(\tau)) \ d\tau \right].
\end{equation}
A distributed path integral control algorithm and a distributed REPS algorithm for solving~\eqref{ContBellman} will both be discussed in~\hyperref[sec3]{Section~3}. Discussion on the relationship between discrete-time and continuous-time LSOC dynamics can be found in~\cite{Todorov_NIPS_2007, Todorov_PNAS_2009}.
\begin{remark}
Although each agent $i \in \mathcal{D}$ under this framework acts optimally to minimize a joint cost-to-go function defined in its subsystem $\mathcal{\bar{N}}_i$, the distributed control law obtained from solving the local problem \eqref{CompBellman} or \eqref{ContBellman} is still a sub-optimal solution unless the communication network $\mathcal{G}$ is fully connected. Two main reasons account for this sub-optimality. First, when solving \eqref{CompBellman} or \eqref{ContBellman} for the joint (or local) optimal control $\bar{u}_i^*$ (or $u_i^*$), we ignore the connections of agents outside the subsystem $\bar{\mathcal{N}}_i$ and assume that the evolution of joint state $\bar{x}_i$ only relies on the current values of $\bar{x}_i$ and joint control $\bar{u}_i$. This simplification is reasonable and almost accurate for the central agent $i$ of subsystem $\bar{\mathcal{N}}_i$, but not for the non-central agents $j \in \mathcal{N}_i$, which are usually adjacent to other agents in $\mathcal{N}_j \backslash \mathcal{N}_i$. Second, the local optimal control actions $\bar{u}_j^*$ of the other agents $j \in \mathcal{D} \backslash \{i\}$ are respectively computed from their own subsystems $\bar{\mathcal{N}}_j$, which may contradict the joint optimal control $\bar{u}_i^*$ solved in subsystem $\bar{\mathcal{N}}_i$ and result in a sub-optimal solution. Similar conflicts widely exist in distributed control and optimization problems subject to limited communication and partial observation, and several rigorous and heuristic studies on the global- and sub-optimality of distributed subsystems have been conducted in~\cite{Johari_MOR_2004, Nedic_TAC_2010, Frank_2013, Voulgaris_CDC_2017}. We will not dive into those technical details in this paper, as we believe that the executability of a sub-optimal plan with moderate communication and computational complexities should outweigh the performance gain of a global-optimal but computationally intractable plan in practice. In this regard, the distributed framework built upon factorial subsystems, which captures the correlations between neighboring agents while ignoring the further connections outside subsystems, provides a trade-off between optimality and complexity and is analogous to the structured prediction framework in supervised learning.
\end{remark}
\section{Distributed Linearly-Solvable Optimal Control}\label{sec3}
Subject to the multi-agent LSOC problems formulated in~\hyperref[sec2]{Section~2}, the linearization methods and distributed algorithms for solving the joint discrete-time Bellman equation~\eqref{CompBellman} and joint continuous-time optimality equation~\eqref{ContBellman} are discussed in this section.
\subsection{Discrete-Time Systems}\label{Sec3_1}
We first consider the discrete-time MAS with dynamics~\eqref{eq2} and immediate cost function~\eqref{eq4}, which give the joint Bellman equation~\eqref{CompBellman} in subsystem $\mathcal{\bar{N}}_i$
\begin{equation*}
V_i(\bar{x}_i) = \min_{\bar u_i} \left\{ c_i(\bar{x}_i, \bar{u}_i) + \mathbb{E}_{\bar{x}'_i \sim \bar{u}_i(\cdot | \bar{x}_i)}[V_i(\bar{x}'_i)] \right\}.
\end{equation*}
In order to compute the value function $V_i(\bar{x}_i)$ and optimal control action $\bar{u}_i^*( \cdot | \bar{x}_i)$ from equation~\eqref{CompBellman}, an exponential or Cole-Hopf transformation is employed to linearize~\eqref{CompBellman} into a system of linear equations, which can be cast as a decentralized program and solved in parallel. The local optimal control distribution $u_i^*( \cdot | \bar{x}_i)$, which depends on the local observation of agent $i$, is then derived by marginalizing the joint control distribution $\bar{u}_i^*( \cdot | \bar{x}_i)$.
\subsubsection*{A. Linearization of Joint Bellman Equation}
Motivated by the exponential transformation employed in~\cite{Todorov_PNAS_2009} for single-agent system, we define the desirability function $Z_i(\bar{x}_i)$ for joint state $\bar{x}_i \in \mathcal{\bar I}_i$ in subsystem $\mathcal{\bar{N}}_i$ as
\begin{equation}\label{ExpTrans}
Z_i(\bar{x}_i) = \exp[-V_i(\bar{x}_i)],
\end{equation}
which implies that the desirability function $Z_i(\bar{x}_i)$ is negatively correlated with the value function $V_i(\bar{x}_i)$, and the value function can also be written conversely as a logarithm of the desirability function, $V_i(\bar{x}_i) = \log 1 / Z_i(\bar{x}_i)$. For boundary states $\bar{x}_i \in \mathcal{\bar B}_i$, the desirability functions are defined as $Z_i(\bar{x}_i) = \exp[-\phi_i(\bar{x}_i)]$. Based on the transformation~\eqref{ExpTrans}, a linearized joint Bellman equation~\eqref{CompBellman} along with the joint optimal control distribution is presented in \hyperref[thm1]{Theorem~1}.
\setcounter{theorem}{0}
\begin{theorem}\label{thm1}
With exponential transformation~\eqref{ExpTrans}, the joint Bellman equation~\eqref{CompBellman} for subsystem $\mathcal{\bar{N}}_i$ is equivalent to the following linear equation with respect to the desirability function
\begin{equation}\label{eq_prop1}
Z_i(\bar{x}_i) = \exp(-q_i(\bar{x}_i)) \cdot \sum_{\bar{x}'_i}\bar{p}_i(\bar{x}'_i|\bar{x}_i)Z_i(\bar{x}'_i),
\end{equation}
where $q_i(\bar{x}_i)$ is the state-related cost defined in~\eqref{eq4}, and $\bar{p}_i(\bar{x}_i'|\bar{x}_i)$ is the transition probability of the passive dynamics in~\eqref{eq1}. The joint optimal control action $\bar{u}_i^*( \cdot | \bar{x}_i)$ solving~\eqref{CompBellman} satisfies
\begin{equation}\label{OptimalControl}
\bar{u}_i^*( \cdot | \bar{x}_i) = \frac{\bar{p}_i( \cdot | \bar{x}_i)Z_i(\cdot)}{\sum_{\bar{x}'_i}\bar{p}_i(\bar{x}'_i|\bar{x}_i)Z_i(\bar{x}'_i)},
\end{equation}
where $\bar{u}_i^*(\bar{x}_i' | \bar{x}_i)$ is the transition probability from $\bar{x}_i$ to $\bar{x}_i'$ in controlled dynamics~\eqref{eq2}.
\end{theorem}
\begin{proof}
See \hyperref[appA]{Appendix~A} for the proof.
\end{proof}
Joint optimal control action $\bar{u}^*_i(\bar{x}_i' | \bar{x}_i)$ in~\eqref{OptimalControl} only relies on the joint state $\bar{x}_i$ of subsystem $\mathcal{\bar{N}}_i$, \textit{i.e.} local observation of agent $i$. To execute this control action~\eqref{OptimalControl}, we still need to figure out the values of desirability functions or value functions, which can be solved from~\eqref{eq_prop1}. A conventional approach for solving~\eqref{eq_prop1}, which was adopted in~\cite{Todorov_NIPS_2007, Todorov_NIPS_2009, Todorov_PNAS_2009} for single-agent LSMDP, is to rewrite~\eqref{eq_prop1} as a recursive formula and approximate the solution by iterations. We can approximate the desirability functions of interior states $\bar{x}_i \in \mathcal{\bar{I}}_i$ by recursively executing the following update law
\begin{equation}\label{Z_update}
Z_{\mathcal{I}} = \Theta Z_{\mathcal{I}} + \Omega Z_\mathcal{B},
\end{equation}
where $Z_\mathcal{I}$ and $Z_\mathcal{B}$ are respectively the desirability vectors of interior states and boundary states; the diagonal matrix $\Theta = \textrm{diag}\{\exp(-q_{\mathcal{I}})\} \cdot P_{\mathcal{II}}$ with $q_{\mathcal{I}}$ denoting the state-related cost of interior states, $P_{\mathcal{II}} = [p_{mn}]$ denoting the transition probability matrix between interior states, and $p_{mn} = \bar{p}_i(\bar{x}_i' = \bar{s}^i_n \ | \ \bar{x}_i = \bar{s}^i_m)$ for $\bar{s}^i_n, \bar{s}^i_m \in \mathcal{\bar I}_i$; the matrix $\Omega = \textrm{diag}\{ \exp(-q_{\mathcal{I}}) \} \cdot P_{\mathcal{IB}}$ with $P_{\mathcal{IB}} = [p_{mn}]$ denoting the transition probability matrix from interior states to boundary states, and $p_{mn} = \bar{p}_i(\bar{x}_i' = \bar{s}^i_n \ | \ \bar{x}_i = \bar{s}^i_m)$ for $\bar{s}^i_m \in \mathcal{\bar I}_i$ and $\bar{s}^i_n \in \mathcal{\bar{B}}_i$. Starting from any initial value of the desirability vector $Z_{\mathcal{I}}$, the recursion~\eqref{Z_update} is guaranteed to converge to the unique solution, since the spectral radius of matrix $\Theta$ is less than $1$. More detailed convergence analysis on this iterative solver of LSMDPs has been given in~\cite{Todorov_NIPS_2007}. However, this centralized solver is inefficient when dealing with MAS problems with high-dimensional state spaces, which motivates the development of a distributed solver that exploits the resources of the network and expedites the computation.
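As a concrete illustration of the update law~\eqref{Z_update}, the following Python sketch iterates $Z_{\mathcal{I}} \leftarrow \Theta Z_{\mathcal{I}} + \Omega Z_{\mathcal{B}}$ on a toy subsystem; the transition probabilities and costs are fabricated for illustration only.
\begin{verbatim}
import numpy as np

# Toy subsystem: 3 interior and 2 boundary joint states (illustrative).
rng = np.random.default_rng(0)
P = rng.random((3, 5))
P /= P.sum(axis=1, keepdims=True)       # passive transition probabilities
P_II, P_IB = P[:, :3], P[:, 3:]         # interior-to-interior / -boundary
q_I = np.array([0.5, 1.0, 0.2])         # state costs of interior states
phi_B = np.array([0.0, 2.0])            # terminal costs of boundary states

Theta = np.diag(np.exp(-q_I)) @ P_II
Omega = np.diag(np.exp(-q_I)) @ P_IB
Z_B = np.exp(-phi_B)

Z_I = np.ones(3)                        # any initialization works
for _ in range(10_000):
    Z_next = Theta @ Z_I + Omega @ Z_B  # the update law above
    if np.max(np.abs(Z_next - Z_I)) < 1e-12:
        break
    Z_I = Z_next

V_I = -np.log(Z_I)                      # recover the value function
\end{verbatim}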
\subsubsection*{B. Distributed Planning Algorithm}
While most of the distributed SOC algorithms are executed by local agents in a decentralized manner, a great number of these algorithms still demand a centralized solver in the planning phase, which becomes a bottleneck for their implementation when the number of agents scales up~\cite{Amato_CDC_2013}. For a fully connected MAS with $N$ agents, when each agent has $|\mathcal{I}|$ interior states, the dimension of the vector $Z_\mathcal{I}$ in \eqref{Z_update} is $|\mathcal{I}|^N$, and as the number of agents $N$ grows, it will become more intractable for a central computation unit to store all the data and execute all the computation required by~\eqref{Z_update} due to the \textit{curse of dimensionality}. Although the subsystem-based distributed framework can alleviate this problem by demanding a less complex network and making the dimension and computational complexity only related to the sizes of factorial subsystems, it is still preferable to utilize the resources of the MAS by distributing the data and computational task of~\eqref{Z_update} to each local agent in $\bar{\mathcal{N}}_i$, instead of relying on a central planning agent. Hence, we rewrite the linear equation~\eqref{Z_update} in the following form
\begin{equation}\label{eq22}
(I - \Theta) Z_{\mathcal{I}} = \Omega Z_{\mathcal{B}},
\end{equation}
and formulate~\eqref{eq22} as a parallel programming problem. In order to solve the desirability vector $Z_{\mathcal{I}}$ from~\eqref{eq22} via a distributed approach, each agent in subsystem $\mathcal{\bar{N}}_i$ only needs to know (store) a subset (rows) of the partitioned matrix $\left[ I - \Theta, \hspace{5pt} \Omega Z_\mathcal{B} \right]$. Subject to the equality constraints imposed by its portion of coefficients, agent $j \in \mathcal{\bar{N}}_i$ first initializes its own version of the solution $Z_{\mathcal{I}, j}^{(0)}$ to~\eqref{eq22}. Every agent has access to the solutions of its neighboring agents, and the central agent $i$ can access the solutions of agents in $\mathcal{N}_i$. A consensus of $Z_{\mathcal{I}}$ in \eqref{eq22} can be reached among $Z_{\mathcal{I}, j\in\mathcal{\bar{N}}_i}$ when implementing the following synchronous distributed algorithm on each computational agent in $\mathcal{\bar{N}}_i$~\cite{Mou_TAC_2015}:
\begin{equation}\label{dist_alge}
Z^{(n+1)}_{\mathcal{I}, j} = Z^{(n)}_{\mathcal{I}, j} - P_j\left( Z^{(n)}_{\mathcal{I}, j} - \dfrac{1}{d_j} \sum_{k \in \mathcal{N}_j \cap \mathcal{\bar{N}}_i } Z^{(n)}_{\mathcal{I}, k} \right),
\end{equation}
where $P_j$ is the orthogonal projection matrix onto the kernel of $[I- \Theta]_j$, the rows of the matrix $I-\Theta$ stored by agent $j$; $d_j = |\mathcal{N}_j \cap \mathcal{\bar{N}}_i|$ is the number of neighboring agents of agent $j$ in subsystem $\mathcal{\bar{N}}_i$; and $n$ is the iteration index. An asynchronous distributed algorithm~\cite{Liu_TAC_2018}, which does not require agents to concurrently update their solution $Z_{\mathcal{I}, i}$, can also be invoked to solve~\eqref{eq22}. Under these distributed algorithms, different versions of the solution $Z_{\mathcal{I}, j}$ are exchanged across the network, and the requirements of data storage and computation can be allocated evenly to each agent in the network, which improves the overall efficiency of the algorithm. Meanwhile, these fully parallelized planning algorithms can be optimized and boosted further by naturally incorporating parallel computing and Internet of Things (IoT) techniques, such as edge computing~\cite{Shi_IoT_2016}. Nonetheless, for MASs with a massive population, \textit{i.e.} $N \rightarrow \infty$, most of the control schemes and algorithms introduced in this paper will become intractable, and we may resort to mean-field theory~\cite{Bensoussan_2013, Bakshi_arxiv_2018, Bakshi_TCNS_2019}, which describes MASs by a probability density model rather than a connected graph model.
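The following Python sketch illustrates the synchronous update~\eqref{dist_alge} under the stated assumptions: each agent stores a block of rows of $\left[ I - \Theta, \hspace{3pt} \Omega Z_\mathcal{B} \right]$, initializes a solution consistent with its own rows, and projects its disagreement with the neighbor average onto the kernel of its row block. Function and variable names are illustrative.
\begin{verbatim}
import numpy as np

def distributed_solve(A, b, rows_of, neighbors_of, iters=5000):
    """Synchronous projected-consensus sketch for A z = b, where
    A = I - Theta and b = Omega Z_B.  rows_of[j] lists the rows of
    [A, b] stored by agent j; neighbors_of[j] lists the agents whose
    current estimates agent j can read."""
    n = A.shape[1]
    z, P = {}, {}
    for j, rows in rows_of.items():
        Aj_pinv = np.linalg.pinv(A[rows])
        z[j] = Aj_pinv @ b[rows]              # a solution of agent j's rows
        P[j] = np.eye(n) - Aj_pinv @ A[rows]  # projector onto ker(A_j)
    for _ in range(iters):
        avg = {j: sum(z[k] for k in nbrs) / len(nbrs)
               for j, nbrs in neighbors_of.items()}
        z = {j: z[j] - P[j] @ (z[j] - avg[j]) for j in z}
    return z
\end{verbatim}
Since the correction term lies in the kernel of $[I-\Theta]_j$, every iterate of agent $j$ remains feasible for its own rows, and the averaging step drives the local estimates to consensus.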
\subsubsection*{C. Local Control Action}
After we figure out the desirability functions $Z_i(\bar{x}_i)$ for all the joint states $\bar{x}_i \in \bar{\mathcal{I}}_i \cup \bar{\mathcal{B}}_i$ in subsystem $\bar{\mathcal{N}}_i$, the local optimal control distribution $u_i^*(x'_i|\bar{x}_i)$ for central agent $i$ is derived by calculating the marginal distribution of $\bar{u}^*_i(\bar{x}'_i|\bar{x}_i)$
\begin{equation*}
u_i^*(x'_i|\bar{x}_i) = \sum_{x'_{j \in \mathcal{N}_i}} \bar{u}_i^{*}(x'_i, x'_{j\in\mathcal{N}_i} | \bar{x}_i),
\end{equation*}
where both the joint distribution $\bar{u}^*_i(\bar{x}'_i|\bar{x}_i)$ and the local distribution $u_i^*(x'_i|\bar{x}_i)$ rely on the local observation of the central agent $i$. By sampling control actions from the marginal distribution $u_i^*(x'_i|\bar{x}_i)$, agent $i$ behaves optimally to minimize the joint cost-to-go function~\eqref{J_CTG_Discrete_Time} defined in subsystem $\bar{\mathcal{N}}_i$, and the local optimal control distributions $u_k^*(x'_k|\bar{x}_k)$ of the other agents $k \in \mathcal{D} \backslash \{i\}$ in network $\mathcal{G}$ can be derived by repeating the preceding procedures in subsystems $\bar{\mathcal{N}}_{k}$. The procedure of the multi-agent LSMDP algorithm is summarized as \hyperref[alg1]{Algorithm~1} in~\hyperref[appE]{Appendix~E}.
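For a finite joint state space, the marginalization above is a plain sum over the neighbors' next states; a minimal Python sketch with a fabricated three-agent joint transition array:
\begin{verbatim}
import numpy as np

# Illustrative joint transition u_bar[x_i', x_j1', x_j2'] for a 3-agent
# subsystem with 4 local states per agent, at a fixed current joint state.
rng = np.random.default_rng(1)
u_bar = rng.random((4, 4, 4))
u_bar /= u_bar.sum()               # normalize the joint distribution

u_i = u_bar.sum(axis=(1, 2))       # marginal over the central agent's x_i'
assert np.isclose(u_i.sum(), 1.0)
\end{verbatim}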
\subsection{Continuous-Time Systems}\label{Sec3_2}
We now consider the LSOC for continuous-time MASs subject to joint dynamics~\eqref{eq13} and joint immediate cost function~\eqref{cont_cost}, which can be formulated into the joint optimality equation~\eqref{ContBellman} as follows
\begin{equation*}
V_i(\bar{x}_i, t) = \min_{\bar{u}_i} \mathbb{E}^{\bar{u}_i}_{\bar{x}_i^t, t} \left[ \phi_i(\bar{x}_{i}^{t_f}) + \int_{t}^{t_f} c_i(\bar{x}_i(\tau), \bar{u}_i(\tau)) \ d\tau \right].
\end{equation*}
In order to solve~\eqref{ContBellman} and derive the local optimal control action $u^*_i(\bar{x}_i, t)$ for agent $i$, we first cast the joint optimality equation~\eqref{ContBellman} into a joint stochastic HJB equation that gives an analytic form for the joint optimal control action $\bar{u}^*_i(\bar{x}_i, t)$. To solve for the value function, the stochastic HJB equation is subsequently linearized via the Cole-Hopf transformation into a partial differential equation (PDE) with respect to the desirability function. The Feynman-Kac formula is then invoked to express the solution of the linearized PDE and the joint optimal control action as path integral formulae forward in time, which are later approximated respectively by a distributed Monte Carlo (MC) sampling method and a sample-efficient distributed REPS algorithm.
\subsubsection*{A. Linearization of Joint Optimality Equation}
Similar to the transformation~\eqref{ExpTrans} for discrete-time systems, we adopt the following Cole-Hopf or exponential transformation in continuous-time systems
\begin{equation}\label{ContTrans}
Z_i(\bar{x}_i, t) = \exp[-V_i(\bar{x}_i, t) / \lambda_i],
\end{equation}
where $\lambda_i \in \mathbb{R}$ is a scalar, and $Z_i(\bar{x}_i, t)$ is the desirability function of joint state $\bar{x}_i$ at time $t$. Conversely, we also have $V_i(\bar{x}_i, t) = -\lambda_i \log Z_i(\bar{x}_i, t)$ from~\eqref{ContTrans}. In the following theorem, we convert the optimality equation~\eqref{ContBellman} into a joint stochastic HJB equation, which reveals an analytic form of the joint optimal control action $\bar{u}^*_i(\bar{x}_i, t)$, and linearize the HJB equation into a linear PDE that has a closed-form solution for the desirability function.
\begin{theorem}\label{thm2}
Subject to the joint dynamics~\eqref{eq13} and immediate cost function~\eqref{cont_cost}, the joint optimality equation~\eqref{ContBellman} in subsystem $\mathcal{\bar{N}}_i$ is equivalent to the joint stochastic HJB equation
\begin{align}\label{sto_HJB}
-\partial_t V_i(\bar{x}_i, t) =& \min_{\bar{u}_i} \mathbb{E}_{\bar{x}_i,t}^{\bar{u}_i} \bigg[ \sum_{j \in \bar{\mathcal{N}}_i} [f_j(x_j,t) + B_j(x_j) u_j(\bar{x}_i, t)]^\top \cdot \nabla_{x_j} V_i(\bar{x}_i,t) + q_i(\bar{x}_i, t) \\
& + \frac{1}{2} \bar{u}_i(\bar{x}_i, t)^\top\bar{R}_i\bar{u}_i(\bar{x}_i, t) + \frac{1}{2} \sum_{j \in \bar{\mathcal{N}}_i} \textrm{tr} \left( B_j(x_j)\sigma_j \sigma_j^\top B_j(x_j)^\top \cdot \nabla_{x_jx_j}V_i(\bar{x}_i, t) \right) \bigg], \nonumber
\end{align}
with boundary condition $V_i(\bar{x}_i, t_f) = \phi_i(\bar{x}_i)$, and the optimum of~\eqref{sto_HJB} can be attained with the joint optimal control action
\begin{equation}\label{OptimalAction}
\bar{u}^*_i(\bar{x}_i, t) = -\bar{R}_i^{-1}\bar{B}_i(\bar{x}_i)^\top \nabla_{\bar{x}_i}V_i(\bar{x}_i, t).
\end{equation}
Subject to the transformation~\eqref{ContTrans}, control action~\eqref{OptimalAction} and condition $\bar{R}_i = (\bar{\sigma}_i\bar{\sigma}_i^\top / \lambda_i)^{-1}$, the joint stochastic HJB equation~\eqref{sto_HJB} can be linearized as
\begin{equation}\label{Z_Function}
\partial_{t} Z_i(\bar{x}_i, t) = \bigg[\frac{q_i(\bar{x}_i, t)}{\lambda_i} - \sum_{j\in\bar{\mathcal{N}}_i} f_j(x_j, t)^\top \nabla_{x_j} - \frac{1}{2}\sum_{j \in \bar{\mathcal{N}}_i} \textrm{tr}\left( B_j(x_j)\sigma_j \sigma_j^\top B_j(x_j)^\top \nabla_{x_jx_j} \right) \bigg]Z_i(\bar{x}_i,t)
\end{equation}
with boundary condition $Z_i(\bar{x}_i, t_f) = \exp[- \phi_i(\bar{x}_i) / \lambda_i]$, which has a solution
\begin{equation}\label{Z_Solution}
Z_i(\bar{x}_i, t) = \mathbb{E}_{\bar{x}_i,t}\left[ \exp\left( -\frac{1}{\lambda_i} \phi_i(\bar{y}^{t_f}_i) -\frac{1}{\lambda_i} \int_{t}^{t_f}q_i(\bar{y}_i, \tau) \ d\tau \right) \right],
\end{equation}
where the diffusion process $\bar{y}_i(\tau)$ satisfies the uncontrolled dynamics $d \bar{y}_i(\tau) = \bar{f}_i(\bar{y}_i,\tau) d\tau + \bar{B}_i(\bar{y}_i) \bar{\sigma}_i \cdot d\bar{w}_i(\tau)$ with initial condition $\bar{y}_i(t) = \bar{x}_i(t)$.
\end{theorem}
\begin{proof}
See \hyperref[appB]{Appendix~B} for the proof.
\end{proof}
Based on the transformation~\eqref{ContTrans}, the gradient of value function satisfies $\nabla_{\bar{x}_i} V_i(\bar{x}_i, t) = - \lambda_i \cdot \nabla_{\bar{x}_i} Z_i(\bar{x}_i,t) / Z_i(\bar{x}_i,t)$. Hence, the joint optimal control action~\eqref{OptimalAction} can be rewritten as
\begin{equation}\label{OptimalAction2}
\bar{u}^*_i(\bar{x}_i, t) = \lambda_i \bar{R}_i^{-1}\bar{B}^\top_i(\bar{x}_i) \cdot \frac{\nabla_{\bar{x}_i} Z_i(\bar{x}_i,t)}{Z_i(\bar{x}_i,t)} = \bar{\sigma}_i\bar{\sigma}_i^\top \bar{B}_i(\bar{x}_i)^\top \cdot \frac{\nabla_{\bar{x}_i} Z_i(\bar{x}_i,t)}{Z_i(\bar{x}_i,t)},
\end{equation}
where the second equality follows from the condition $\bar{R}_i = (\bar{\sigma}_i\bar{\sigma}_i^\top / \lambda_i)^{-1}$. Meanwhile, by virtue of the Feynman-Kac formula, the stochastic HJB equation~\eqref{sto_HJB}, which must be solved backward in time, can now be solved via an expectation of a diffusion process evolving forward in time. While a closed-form solution of the desirability function $Z_i(\bar{x}_i,t)$ is given in~\eqref{Z_Solution}, the expectation $\mathbb{E}_{\bar{x}_i, t}(\cdot)$ is defined on the sample space consisting of all possible uncontrolled trajectories initialized at $(\bar{x}_i, t)$, which makes this expectation intractable to compute. A common approach in statistical physics and quantum mechanics is to first formulate this expectation as a path integral~\cite{Moral_2004, Kappen_PRL_2005, Theodorou_Entropy_2015}, and then approximate the integral or optimal control action with various techniques, such as MC sampling~\cite{Kappen_PRL_2005} and policy improvement~\cite{Theodorou_JMLR_2010}. In the following subsections, we first formulate the desirability function~\eqref{Z_Function} and control action~\eqref{OptimalAction2} as path integrals and then approximate them with a distributed MC sampling algorithm and a sample-efficient distributed REPS algorithm, respectively.
\subsubsection*{B. Path Integral Formulation}
Before we show a path integral formula for the desirability function $Z_i(\bar{x}_i, t)$ in \eqref{Z_Solution}, some manipulations on the joint dynamics~\eqref{eq13} are performed to avoid singularity problems \cite{Theodorou_JMLR_2010}. By rearranging the components of the joint state $\bar{x}_i$ in \eqref{eq13}, the joint state vector $\bar{x}_i$ in subsystem $\mathcal{\bar{N}}_i$ can be partitioned as $[\bar{x}^\top_{i(n)}, \bar{x}^\top_{i(d)}]^\top$, where $\bar{x}_{i(n)} \in \mathbb{R}^{U\cdot |\bar{\mathcal{N}}_i|}$ and $\bar{x}_{i(d)} \in \mathbb{R}^{D \cdot |\bar{\mathcal{N}}_i|}$ respectively indicate the joint non-directly actuated states and joint directly actuated states of subsystem $\mathcal{\bar{N}}_i$; $U$ and $D$ denote the dimensions of non-directly actuated states and directly actuated states for a single agent. Consequently, the joint passive dynamics term $\bar{f}_i(\bar{x}_i, t)$ and the joint control transition matrix $\bar{B}_i(\bar{x}_i)$ in \eqref{eq13} are partitioned as $[\bar{f}^{ \tiny\ \top}_{i(n)}, \bar{f}^{ \tiny\ \top}_{i(d)}]^\top$ and $[0, \bar{B}^\top_{i(d)}(\bar{x}_i)]^\top$, respectively. Hence, the joint dynamics~\eqref{eq13} can be rewritten in a partitioned vector form as follows
\begin{equation}\label{Partitioned_dynamics}
\left(\begin{matrix}
d\bar{x}_{i(n)}\\
d\bar{x}_{i(d)}
\end{matrix}\right) = \left(\begin{matrix}
\bar{f}_{i(n)}(\bar{x}_i,t)\\
\bar{f}_{i(d)}(\bar{x}_i,t)
\end{matrix}\right)dt + \left(\begin{matrix}
0\\
\bar{B}_{i(d)}(\bar{x}_i)
\end{matrix}\right)\left[ \bar{u}_i(\bar{x}_i,t) dt + \bar{\sigma}_i d\bar{w}_i \right].
\end{equation}
With the partitioned dynamics~\eqref{Partitioned_dynamics}, the path integral formulae for the desirability function~\eqref{Z_Solution} and joint optimal control action~\eqref{OptimalAction2} are given in~\hyperref[prop3]{Proposition~3}.
\begin{proposition}\label{prop3} Partition the time interval from $t$ to $t_f$ into $K$ intervals of equal length $\varepsilon > 0$, $t = t_0 < t_1 < \cdots < t_K = t_f$, and let the trajectory variable $\bar{x}_i^{(k)} = [\bar{x}_{i(n)}^{(k)\top}, \bar{x}_{i(d)}^{(k)\top}]^\top$ denote the segments of joint uncontrolled trajectories on time interval $[t_{k-1}, t_k)$, governed by joint dynamics~\eqref{Partitioned_dynamics} with $\bar{u}_i(\bar{x}_i,t) = 0$ and initial condition $\bar{x}_i(t) = \bar{x}_i^{(0)}$. The desirability function~\eqref{Z_Solution} in subsystem $\bar{\mathcal{N}}_i$ can then be reformulated as a path integral
\begin{equation}\label{Prop3E1}
Z_i(\bar{x}_i,t) = \lim_{\varepsilon \downarrow 0} \int \exp\left( -\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) - K D |\mathcal{\bar N}_i| / 2 \cdot \log (2\pi\lambda_i \varepsilon) \right) d\bar{\ell}_i,
\end{equation}
where the integral is over path variable $\bar \ell_i = (\bar{x}^{(1)}_i, \cdots, \bar{x}^{(K)}_i)$, \textit{i.e.} set of all uncontrolled trajectories initialized at $(\bar{x}_i, t)$, and the generalized path value
\begin{align}\label{Prop3E2}
\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) = \frac{\phi_i(\bar{x}^{(K)}_i)}{\lambda_i} + \frac{\varepsilon}{\lambda_i} & \sum_{k=0}^{K-1}q_i(\bar{x}^{(k)}_i, t_k) + \frac{1}{2}\sum_{k=0}^{K-1} \log \det(H_i^{(k)}) \allowdisplaybreaks \\
&+ \frac{\varepsilon}{2\lambda_i} \sum_{k=0}^{K-1} \left\| \frac{ \bar{x}_{i(d)}^{(k+1)} - \bar{x}_{i(d)}^{(k)}}{\varepsilon} - \bar{f}_{i(d)}(\bar{x}^{(k)}_i, t_k) \right\|^2_{\left(H_i^{(k)}\right)^{-1}} \nonumber
\end{align}
with $H_i^{(k)} = \bar{B}_{i(d)}(\bar{x}^{(k)}_i) \bar{\sigma}_i\bar{\sigma}_i^\top \bar{B}_{i(d)}(\bar{x}^{(k)}_i)^\top = \lambda_i\bar{B}_{i(d)}(\bar{x}^{(k)}_i)\bar{R}_i^{-1} \bar{B}_{i(d)}(\bar{x}^{(k)}_i)^\top$. Hence, the joint optimal control action in subsystem $\bar{\mathcal{N}}_i$ can be reformulated as a path integral
\begin{equation}\label{OptCtrlPath}
\bar{u}^*_i(\bar{x}_i, t) = \lambda_i \bar{R}_i^{-1} \bar{B}_{i(d)}(\bar{x}_i)^\top \cdot \lim_{\varepsilon \downarrow 0} \int \tilde{p}^*_i(\bar{\ell}_i | \bar{x}_i^{(0)}, t_0) \cdot \tilde{u}_i(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) \ d\bar{\ell}_i,
\end{equation}
where
\begin{equation}\label{OptPathDist}
\tilde{p}^*_i(\bar{\ell}_i | \bar{x}_i^{(0)}, t_0) = \frac{\exp( -\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) )}{\int \exp( -\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) ) \ d\bar{\ell}_i }
\end{equation}
is the optimal path distribution, and
\begin{equation}\label{inictrl}
\tilde{u}_i(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) = -\frac{\varepsilon}{\lambda_i}\nabla_{\bar{x}^{(0)}_{i(d)}}q_i(\bar{x}^{(0)}_i, t_0) + \left(H_i^{(0)}\right)^{-1} \left(\frac{\bar{x}_{i(d)}^{(1)} - \bar{x}_{i(d)}^{(0)}}{\varepsilon} - \bar{f}_{i(d)}(\bar{x}_i^{(0)}, t_0) \right)
\end{equation}
is the initial control variable.
\end{proposition}
\begin{proof}
See \hyperref[appC]{Appendix~C} for the proof.
\end{proof}
While \hyperref[prop3]{Proposition~3} gives the path integral formulae for the desirability (value) function $Z_i(\bar{x}_i,t)$ and joint optimal control action $\bar{u}^*_i(\bar{x}_i, t)$, one of the challenges when implementing these formulae is the approximation of the optimal path distribution~\eqref{OptPathDist} and the integrals in~\eqref{Prop3E1} and~\eqref{OptCtrlPath}, since these integrals are defined on the set of all possible uncontrolled trajectories initialized at $(\bar{x}_i, t)$ or $(\bar{x}_i^{(0)}, t_0)$, which are intractable to exhaust and might become computationally expensive to sample as the state dimension and the number of agents scale up. An intuitive solution is to formulate this problem as a statistical inference problem and predict the optimal path distribution $\tilde{p}^*_i(\bar{\ell}_i | \bar{x}_i^{(0)}, t_0)$ via various inference techniques, such as plain MC sampling~\cite{Kappen_PRL_2005} or Metropolis-Hastings sampling~\cite{Broek_JAIR_2008}. Provided a batch of uncontrolled trajectories $\mathcal{Y}_i = \{ (\bar{x}_i^{(0)}, \bar{\ell}_i^{[y]}) \}_{y = 1, \cdots, Y}$ and a fixed $\varepsilon$, the Monte Carlo estimation of the optimal path distribution in~\eqref{OptPathDist} is
\begin{equation}\label{MC_Estimator}
\tilde{p}^*_i(\bar{\ell}_i^{[y]} | \bar{x}_i^{(0)}, t_0) \approx \frac{\exp( -\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i^{[y]}, t_0) )}{\sum_{y=1}^{Y} \exp( -\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}^{[y]}_i, t_0) ) },
\end{equation}
where $\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i^{[y]}, t_0)$ denotes the generalized path value~\eqref{Prop3E2} of trajectory $(\bar{x}_i^{(0)}, \bar{\ell}_i^{[y]})$, and the estimation of joint optimal control action in~\eqref{OptCtrlPath} is
\begin{equation}\label{MC_Estimato2}
\bar{u}^*_i(\bar{x}_i, t) \approx \lambda_i \bar{R}_i^{-1} \bar{B}_{i(d)}(\bar{x}_i)^\top \cdot \sum_{y=1}^{Y} \tilde{p}^*_i(\bar{\ell}^{[y]}_i | \bar{x}_i^{(0)}, t_0) \cdot \tilde{u}_i(\bar{x}_i^{(0)}, \bar{\ell}^{[y]}_i, t_0),
\end{equation}
where $\tilde{u}_i(\bar{x}_i^{(0)}, \bar{\ell}^{[y]}_i, t_0)$ is the initial control of sample trajectory $(\bar{x}_i^{(0)}, \bar{\ell}_i^{[y]})$. To expedite the sampling process, the sampling tasks of $\mathcal{Y}_{i\in \mathcal{D}}$ can be distributed to different agents in the network, \textit{i.e.} each agent $j$ in MAS $\mathcal{G}$ only samples its local uncontrolled trajectories $\{(x_j^{(0)}, \ell_j^{[y]})\}_{y = 1, \cdots, Y}$, and the optimal path distributions of each subsystem $\mathcal{\bar{N}}_i$ are approximated via~\eqref{MC_Estimator} after the central agent $i$ assembles $\mathcal{Y}_i$ by collecting the samples from its neighbors $j \in \mathcal{\bar{N}}_i$. Meanwhile, the local trajectories $\{(x_j^{(0)}, \ell_j^{[y]})\}_{y = 1, \cdots, Y}$ of agent $j$ not only can be utilized by the local control algorithm of agent $j$ in subsystem $\bar{\mathcal{N}}_j$, but also can be used by the local control algorithms of the neighboring agents $k \in \mathcal{N}_j$ in subsystem $\bar{\mathcal{N}}_k$, and the parallel computation of GPUs can further facilitate the sampling processes by allowing each agent to concurrently generate multiple sample trajectories~\cite{Williams_JGCD_2017}. The continuous-time multi-agent LSOC algorithm based on estimator~\eqref{MC_Estimator} is summarized as \hyperref[alg2]{Algorithm~2} in~\hyperref[appE]{Appendix~E}.
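A minimal Python sketch of the estimators~\eqref{MC_Estimator} and~\eqref{MC_Estimato2}, assuming the generalized path values and initial control variables of the samples have already been computed; subtracting the minimum path value before exponentiating is a numerical-stability choice on our part, not part of the estimator.
\begin{verbatim}
import numpy as np

def mc_optimal_control(S, U, lam, R_inv, B_d):
    """Monte Carlo estimate of the joint optimal control.
    S     : (Y,) generalized path values of Y uncontrolled trajectories
    U     : (Y, d) initial control variables u~ of the samples
    lam   : scalar lambda_i
    R_inv : (m, m) inverse joint control-cost matrix
    B_d   : (d, m) directly actuated control matrix block"""
    w = np.exp(-(S - S.min()))        # unnormalized optimal path weights
    w = w / w.sum()                   # sample estimate of p*_i(l | x, t0)
    u_tilde = w @ U                   # weighted initial control, shape (d,)
    return lam * R_inv @ (B_d.T @ u_tilde)

# Illustrative call with made-up sample data:
# Y, d, m = 500, 4, 2
# rng = np.random.default_rng(0)
# u_star = mc_optimal_control(rng.random(Y), rng.normal(size=(Y, d)),
#                             1.0, np.eye(m), rng.normal(size=(d, m)))
\end{verbatim}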
However, since the aforementioned auxiliary techniques are still built on the pure sampling estimator~\eqref{MC_Estimator}, the total number of samples required for a good approximation is not reduced, which hinders the implementation of \hyperref[prop3]{Proposition~3} and estimator~\eqref{MC_Estimator} when the sample trajectories are expensive to generate. Various investigations have been conducted to mitigate this issue. For a sample-efficient approximation of the desirability (value) function $Z_i(\bar{x}_i,t)$, a parameterized desirability (value) function, Laplace approximation~\cite{Kappen_PRL_2005}, approximation based on the mean-field assumption~\cite{Broek_JAIR_2008}, or a forward-backward framework based on Gaussian processes~\cite{Pan_NIPS_2015} can be used. Meanwhile, it is more common in practice to only efficiently predict the optimal path distribution $\tilde{p}^*_i(\bar{\ell}_i | \bar{x}_i^{(0)}, t_0)$ and optimal control action $\bar{u}^*_i(\bar{x}_i, t)$, while omitting the value of the desirability function. Numerous statistical inference algorithms, such as Bayesian inference and variational inference~\cite{Boutselis_arxiv_2018}, can be adopted to predict the optimal path distribution, and diverse policy improvement and policy search algorithms, such as PI$^2$~\cite{Theodorou_JMLR_2010} and REPS~\cite{Gomez_KDD_2014}, can be employed to update the parameterized optimal control policy. Hence, in the following subsection, a sample-efficient distributed REPS algorithm is introduced to update the parameterized joint control policy for each subsystem and the MAS.
\subsubsection*{C. Relative Entropy Policy Search in MAS}
Since the initial control $\tilde{u}_i(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0)$ can be readily calculated from sample trajectories and network communication, we only need to determine the optimal path distribution $\tilde{p}^*_i(\bar{\ell}_i | \bar{x}_i^{(0)}, t_0)$ before evaluating the joint optimal control action from~\eqref{OptCtrlPath}. To approximate this distribution more efficiently, we extend the REPS algorithm~\cite{Peters_CAI_2010, Gomez_KDD_2014} to path integral control in MASs. REPS is a model-based algorithm with a competitive convergence rate, whose objective is to search for a parametric policy $\pi^{(k)}_i(\bar{u}_i^{(k)} | \bar{x}_i^{(k)})$ that generates a path distribution $\tilde{p}^\pi_i(\bar{\ell}_i| \bar{x}_i^{(0)}, t_0)$ approximating the optimal path distribution $\tilde{p}^*_i(\bar{\ell}_i | \bar{x}_i^{(0)}, t_0)$. Compared with model-free reinforcement learning and policy improvement or search algorithms, REPS is built on a model, so the policies and trajectories it generates automatically satisfy the constraints of the model. Moreover, for MAS problems, the REPS algorithm allows us to consider a more general scenario in which the initial condition $\bar{x}_i(t) = \bar{x}^{(0)}_i$ is stochastic and satisfies a distribution $\mu_i(\bar{x}^{(0)}_i, t_0)$. The REPS algorithm alternates between two steps: a) a \textit{learning step} that approximates the optimal path distribution from samples generated by the current or initial policy, \textit{i.e.} $\tilde{p}^\pi_i(\bar{\ell}_i| \bar{x}_i^{(0)}, t_0) \rightarrow \tilde{p}^*_i(\bar{\ell}_i | \bar{x}_i^{(0)}, t_0)$, and b) an \textit{updating step} that updates the parametric policy $\pi^{(k)}_i(\bar{u}_i^{(k)} | \bar{x}_i^{(k)})$ to reproduce the desired path distribution $\tilde{p}^\pi_i(\bar{\ell}_i| \bar{x}_i^{(0)}, t_0)$ generated in step a). The algorithm terminates when the policy and the approximate path distribution converge.
During the learning step, we approximate the joint optimal distribution $\tilde{p}_i^*(\bar{x}^{(0)}_i, \bar{\ell}_i) = \tilde{p}_i^*(\bar{\ell}_i | \bar{x}_i^{(0)}, t_0) \cdot \mu_i(\bar{x}^{(0)}_i, t_0)$ by minimizing the relative entropy (KL-divergence) between an approximate distribution $\tilde{p}_i(\bar{x}^{(0)}_i, \bar{\ell}_i)$ and $\tilde{p}_i^*(\bar{x}^{(0)}_i, \bar{\ell}_i)$, subject to a few constraints and a batch of sample trajectories $\mathcal{Y}_i = \{ (\bar{x}_i^{(0)}, \allowbreak \bar{\ell}_i^{[y]})\}_{y = 1, \cdots, Y}$ generated by the current or initial policy associated with the old distribution $\tilde{q}_i(\bar{x}^{(0)}_i, \bar{\ell}_i)$ from the prior iteration. Similar to the distributed MC sampling method, the computation of sample trajectories $\mathcal{Y}_i$ can either be exclusively assigned to the central agent $i$ or distributed among the agents in subsystem $\mathcal{\bar N}_i$ and then collected by the central agent $i$. However, unlike the local uncontrolled trajectories $\{(x_i^{(0)}, \ell_i^{[y]})\}_{y = 1, \cdots, Y}$ in the MC sampling method, which can be interchangeably used by the control algorithms of all agents in subsystem $\bar{\mathcal{N}}_i$, since the trajectory sets $\mathcal{Y}_{i \in \mathcal{D}}$ of the REPS approach are generated subject to different policies $\pi^{(k)}_i(u^{(k)}_i | \bar{x}^{(k)}_i)$ for $i\in\mathcal{D}$, the sample data in $\mathcal{Y}_{i}$ can only be used by the local REPS algorithm in subsystem $\bar{\mathcal{N}}_i$. We can then update the approximate path distribution $\tilde{p}_i(\bar{x}^{(0)}_i, \bar{\ell}_i)$ by solving the following optimization problem
\begin{align}\label{LearningStep}
\arg \max_{\tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i)} & \int \tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i) \left[ -\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) - \log \tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i) \right] \ d\bar{x}_i^{(0)} d\bar{\ell}_i, \allowdisplaybreaks \\
\textrm{s.t.} & \int \tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i) \log\frac{\tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i)}{\tilde{q}_i(\bar{x}_i^{(0)}, \bar{\ell}_i)} \ d\bar{x}_i^{(0)} d\bar{\ell}_i \leq \delta, \nonumber \allowdisplaybreaks\\
& \int \tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i) \psi_i(\bar{x}_i^{(0)}) \ d\bar{x}_i^{(0)} d\bar{\ell}_i = \hat{\psi}_i^{(0)}, \nonumber \allowdisplaybreaks \\
& \int \tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i) \ d\bar{x}_i^{(0)} d\bar{\ell}_i = 1,\nonumber
\end{align}
where $\delta > 0$ is a parameter confining the update rate of approximate path distribution; $\psi_i(\bar{x}_i^{(0)})$ is a state feature vector of the initial condition; and $\hat{\psi}^{(0)}_i$ is the expectation of state feature vector subject to initial distribution $\mu_i(\bar{x}^{(0)}_i)$. When the initial state $\bar{x}_i^{(0)}$ is deterministic, \eqref{LearningStep} degenerates to an optimization problem for path distribution $\tilde{p}_i(\bar{\ell}_i | \bar{x}_i^{(0)}, t_0)$, and the second constraint in~\eqref{LearningStep} can be neglected. The optimization problem~\eqref{LearningStep} can be solved analytically with the method of Lagrange multipliers, which gives
\begin{equation}\label{distribution}
\tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i) = \exp\left( -\frac{1+\kappa + \eta}{1 + \kappa} \right) \cdot \tilde{q}_i(\bar{x}_i^{(0)}, \bar{\ell}_i) ^{\frac{\kappa}{1 + \kappa}} \cdot \exp\left( - \frac{\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) + \theta^\top \psi_{i}(\bar{x}_i^{(0)}) }{1 + \kappa} \right),
\end{equation}
where $\kappa$ and $\theta$ are the Lagrange multipliers that can be solved from the dual problem
\begin{equation}\label{DualProb}
\arg \min_{\kappa, \theta} \ g(\kappa, \theta), \qquad \textrm{s.t. }\kappa > 0,
\end{equation}
with objective function
\begin{align}\label{DualFun}
g(\kappa, \theta) &= \kappa \delta + \theta^\top \hat\psi_i^{(0)} + (1 + \kappa) \cdot \log \int \tilde{q}_i(\bar{x}_i^{(0)}, \bar{\ell}_i)^{\frac{\kappa}{1 + \kappa}} \\
&\hspace{116pt} \times \exp\left( - \frac{\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) + \theta^\top \psi_{i}(\bar{x}_i^{(0)}) }{1 + \kappa} \right) \ d\bar{x}_i^{(0)} d\bar{\ell}_i; \nonumber
\end{align}
and the dual variable $\eta$ can then be determined by substituting~\eqref{distribution} into the normalization (last) constraint in~\eqref{LearningStep}. The approximation of the dual function $g(\kappa, \theta)$ from sample trajectories $\mathcal{Y}_i$, the calculation of the old (or initial) path distribution $\tilde{q}_i(\bar{\ell}_i, \bar{x}_i^{(0)})$ from the current (or initial) policy $\pi^{(k)}_i(\bar{u}_i^{(k)} | \bar{x}_i^{(k)})$, and a brief interpretation of the formulation of optimization problem~\eqref{LearningStep} are given in \hyperref[appD]{Appendix~D}.
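A sketch of how the dual problem~\eqref{DualProb} might be minimized numerically from sample trajectories: since the samples are drawn from $\tilde{q}_i$, the integral in~\eqref{DualFun} is replaced here by an importance-weighted empirical mean, which is our assumption for illustration; the exact sample-based estimator is the one derived in \hyperref[appD]{Appendix~D}.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def dual_value(params, S, psi, log_q, delta, psi_hat):
    """Sample-based estimate of g(kappa, theta).
    S: (Y,) path values; psi: (Y, F) initial-state features;
    log_q: (Y,) log densities of the old distribution at the samples."""
    kappa, theta = max(params[0], 1e-6), params[1:]
    # integrand q^{k/(1+k)} exp(-(S + theta^T psi)/(1+k)) divided by q,
    # averaged over samples from q; stabilized with log-sum-exp
    expo = -(log_q + S + psi @ theta) / (1.0 + kappa)
    m = expo.max()
    log_int = m + np.log(np.mean(np.exp(expo - m)))
    return kappa * delta + theta @ psi_hat + (1.0 + kappa) * log_int

# Illustrative minimization (kappa > 0 enforced via a bound):
# F = psi.shape[1]
# x0 = np.concatenate(([1.0], np.zeros(F)))
# res = minimize(dual_value, x0, args=(S, psi, log_q, delta, psi_hat),
#                bounds=[(1e-6, None)] + [(None, None)] * F)
# kappa_opt, theta_opt = res.x[0], res.x[1:]
\end{verbatim}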
In the updating step of REPS algorithm, we update the policy $\pi_i^{(k)}$ such that the joint distribution $\tilde{p}_i^\pi(\bar{x}_i^{(k+1)} | \bar{x}_i^{(k)}) = \int p(\bar{x}_i^{(k+1)}|\bar{x}_i^{(k)}, \bar{u}_i^{(k)}) \cdot \pi_i^{(k)}(\bar{u}_i^{(k)} | \bar{x}_i^{(k)}) \ d\bar{u}^{(k)}_i $ generated by policy $\pi^{(k)}_i$ approximates the path distribution $\tilde{p}_i(\bar{\ell}_i | \bar{x}_i^{(0)}, t_0)$ generated in the learning step~\eqref{LearningStep} and ultimately converges to the optimal path distribution $\tilde{p}^{*}_i(\bar{\ell}_i | \bar{x}_i^{(0)}, t_0)$ in~\eqref{OptPathDist}. More explanations on distribution $\tilde{p}_i^\pi$ and old distribution $\tilde{q}_i$ can be found in \eqref{OldDist} of \hyperref[appD]{Appendix~D}. In order to provide a concrete example, we consider a set of parameterized time-dependent Gaussian policies that are linear in states, \textit{i.e.} $\pi_i^{(k)}(\bar{u}_i^{(k)} | \bar{x}_i^{(k)}, \hat{a}_i^{(k)}, \hat{b}_i^{(k)}, \hat\Sigma^{(k)}_{i} ) \sim \mathcal{N}(\bar{u}_i^{(k)}|\hat{a}_i^{(k)}\bar{x}_i^{(k)} + \hat{b}_i^{(k)}, \hat{\Sigma}_i^{(k)})$ at time step $t_k < t_f$, where $\chi_i^{(k)} = (\hat{a}_i^{(k)}, \hat{b}_i^{(k)}, \hat\Sigma_i^{(k)})$ are the policy parameters to be updated. For simplicity, one can also construct a stationary Gaussian policy $\hat{\pi}_i(\bar{u}_i | \bar{x}_i, \hat{a}_i, \hat{b}_i, \hat\Sigma_{i} ) \sim \mathcal{N}(\bar{u}_i|\hat{a}_i\bar{x}_i + \hat{b}_i, \hat{\Sigma}_i)$, which was employed in~\cite{Kupcsik_CAI_2013} and shares the same philosophy as the parameter average techniques discussed in~\cite{Broek_JAIR_2008, Theodorou_JMLR_2010}. The parameters $\chi_i^{*(k)}$ in policy $\pi_i^{(k)}$ can be updated by minimizing the relative entropy between $\tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i)$ and $\tilde{p}^\pi_i(\bar{x}_i^{(0)}, \bar{\ell}_i)$, \textit{i.e.}
\begin{equation}\label{eq37}
\chi_i^{*(k)} =\arg {\textstyle \max_{\chi_i^{(k)}} } \ \int \tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i) \cdot \log {\pi}_i^{(k)}(\bar{u}_i^{*(k)} | \bar{x}_i^{(k)}, \chi_i^{(k)}) \ d\bar{x}_i^{(0)} d\bar{\ell}_i,
\end{equation}
where $\bar{u}_i^{*(k)}$ is the optimal control action. A more detailed interpretation of the policy update, as well as the approximation of~\eqref{eq37} from sample data $\mathcal{Y}_i$, can be found in \hyperref[appD]{Appendix~D}. When implementing the REPS algorithm introduced in this subsection, we initialize each iteration by generating a batch of sample trajectories $\mathcal{Y}_i$ from the current (or initial) policy, which was updated in the prior iteration to restore the old approximate path distribution $\tilde{q}_i(\bar{x}_i^{(0)}, \bar{\ell}_i)$. With these sample trajectories, we update the path distribution through (\ref{LearningStep})-(\ref{DualFun}) and then the policy through~\eqref{eq37} until the algorithm converges. This REPS-based continuous-time MAS control algorithm is summarized as \hyperref[alg3]{Algorithm~3} in~\hyperref[appE]{Appendix~E}.
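For the Gaussian policy class above, the weighted maximum-likelihood update~\eqref{eq37} reduces to a weighted linear regression for $(\hat{a}_i^{(k)}, \hat{b}_i^{(k)})$ and a weighted residual covariance for $\hat{\Sigma}_i^{(k)}$. The following minimal sketch (sample arrays and REPS weights are hypothetical placeholders) illustrates one such update at a single time step:
\begin{verbatim}
import numpy as np

# Hypothetical samples at time step t_k: joint states X, applied controls U,
# and normalized REPS weights w produced by the learning step.
rng = np.random.default_rng(1)
N, dx, du = 400, 4, 2
X = rng.normal(size=(N, dx))
U = rng.normal(size=(N, du))
w = rng.random(N); w /= w.sum()

# Weighted least squares for u = a x + b (ones column absorbs b).
Xa = np.hstack([X, np.ones((N, 1))])
W = np.diag(w)
theta = np.linalg.solve(Xa.T @ W @ Xa, Xa.T @ W @ U)
a_hat, b_hat = theta[:dx].T, theta[dx]

# Weighted covariance of the residuals gives Sigma_hat.
R = U - Xa @ theta
Sigma_hat = (R.T * w) @ R
\end{verbatim}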
\subsubsection*{D. Local Control Action}
The preceding two distributed LSOC algorithms will return either a joint optimal control action $\bar{u}^*_i(\bar{x}_i, t)$ or a joint optimal control policy $\pi_i^{*}(\bar{u}^{*}_i(t)|\bar{x}_i(t), \chi_i^{*})$ for subsystem $\bar{\mathcal{N}}_i$ at joint state $\bar{x}_i(t)$. Similar to the treatment of distributed LSMDP in discrete-time MAS, only the central agent $i$ selects or samples its local control action $u^*_i(\bar{x}_i, t)$ from $\bar{u}^*_i(\bar{x}_i, t)$ or $\pi_i^{*}(\bar{u}^{*}_i(t)|\bar{x}_i(t), \chi_i^{*})$, while other agents $j \in \mathcal{D} \backslash \{i\}$ are guided by the control law, $\bar{u}^*_j(\bar{x}_j, t)$ or $\pi_j^{*}(\bar{u}^{*}_j(t)|\bar{x}_j(t), \chi_j^{*})$, computed in their own subsystems $\bar{\mathcal{N}}_j$.
For distributed LSOC based on the MC sampling estimator~\eqref{MC_Estimato2}, the local optimal control action $u^*_i(\bar{x}_i, t)$ can be directly selected from the joint control action $\bar{u}^*_i(\bar{x}_i, t)$. For distributed LSOC based on the REPS method, the recursive algorithm generates control policy $\pi^{(k)}_i(\bar{u}^{(k)}_i | \bar{x}_i^{(k)}, \chi_i^{(k)}) \sim \mathcal{N}(\bar{u}_i^{(k)}|\hat{a}_i^{(k)}\bar{x}_i^{(k)} + \hat{b}_i^{(k)}, \hat{\Sigma}_i^{(k)})$ for each time step $t = t_0 < t_1 < \cdots < t_K = t_f$ per iteration. When generating the trajectory set $\mathcal{Y}_i$ from the current control policy $\pi_i^{(k)}(\bar{u}^{(k)}_i|\bar{x}_i^{(k)}, \chi_i^{(k)})$, we sample the local control action of agent $i$ from the marginal distribution $\pi_i^{(k)}(u^{(k)}_i|\bar{x}_i^{(k)}, \chi_i^{(k)}) = \sum_{j \in \mathcal{N}_i} \pi^{(k)}_i({u}^{(k)}_i, {u}^{(k)}_{j\in\mathcal{N}_i} | \bar{x}_i^{(k)}, \chi_i^{(k)})$. After the convergence of local REPS algorithm in subsystem $\bar{\mathcal{N}}_i$, the local optimal control action $u_i^*(\bar{x}_i, t)$ of the central agent $i$ at state $x_i(t)$ can be sampled from the following marginal distribution
\begin{equation}\label{Marginalize_Policy}
\pi_i^{*(0)}(u^{*(0)}_i|\bar{x}_i^{(0)}, \chi_i^{*(0)}) = \sum_{j \in \mathcal{N}_i} \pi^{*(0)}_i({u}^{*(0)}_i, {u}^{*(0)}_{j\in\mathcal{N}_i} | \bar{x}_i^{(0)}, \chi_i^{*(0)}),
\end{equation}
which minimizes the joint cost-to-go function~\eqref{J_CTG_Cont_Time} in subsystem $\mathcal{\bar{N}}_i$ and only relies on the local observation of agent $i$.
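Since the converged policy is Gaussian, the marginalization in~\eqref{Marginalize_Policy} amounts to retaining the sub-vector of the mean and the sub-block of the covariance that correspond to agent $i$'s own control dimensions. A minimal sketch with hypothetical dimensions:
\begin{verbatim}
import numpy as np

# Hypothetical joint Gaussian policy at t_0 over the stacked controls
# (u_i, u_{j in N_i}): mean m and SPD covariance Sigma.
rng = np.random.default_rng(2)
d_i, d_nb = 2, 4
m = rng.normal(size=d_i + d_nb)
A = rng.normal(size=(d_i + d_nb, d_i + d_nb))
Sigma = A @ A.T + np.eye(d_i + d_nb)

# Marginal of a Gaussian: slice the mean and covariance blocks of u_i.
m_i, Sigma_i = m[:d_i], Sigma[:d_i, :d_i]

# Sample the local control action of the central agent i.
u_i = rng.multivariate_normal(m_i, Sigma_i)
\end{verbatim}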
\subsection{Generalization with Compositionality}\label{sec3.3}
While several accessory techniques have been introduced in~\hyperref[Sec3_1]{Section~3.1} and \hyperref[Sec3_2]{Section~3.2} to facilitate the computation of the optimal control law, we have to repeat all the aforementioned procedures when a new optimal control task with a different terminal cost or preferred exit state is assigned. A possible approach to solve this problem and generalize LSOC is to resort to contextual learning, such as the contextual policy search algorithm~\cite{Kupcsik_CAI_2013}, which adapts to different terminal conditions by introducing hyper-parameters and an additional layer of learning algorithm. However, this supplementary learning algorithm will undoubtedly increase the complexity and computational burden of the entire control scheme. Thanks to the linearity of LSOC problems, a task-optimal controller for a new (unlearned) task can also be constructed from existing (learned) controllers by utilizing the compositionality principle~\cite{Broek_JAIR_2008, Todorov_NIPS_2009, daSilva_TOG_2009, Pan_NIPS_2015}. Suppose that we have $F$ previously learned (component) LSOC problems and a new (composite) LSOC problem whose terminal cost differs from those of the previous $F$ problems. Apart from different terminal costs, these $F+1$ multi-agent LSOC problems share the same communication network $\mathcal{G}$, state-related cost $q(\bar{x}_i)$, exit time $t_f$, and dynamics, which means that the state space, control space, and sets of interior and boundary states of these $F+1$ problems are also identical. Let $\phi_i^{\{f\}}(\bar{x}_i)$ with $f \in \{1, 2, \cdots, F\}$ be the terminal costs of the $F$ component problems in subsystem $\bar{\mathcal{N}}_i$, and let $\phi_i(\bar{x}_i)$ denote the terminal cost of the composite (new) problem in subsystem $\bar{\mathcal{N}}_i$. We can efficiently construct a joint optimal control action $\bar{u}^*_i( \bar{x}'_i | \bar{x}_i)$ for the new task from the existing component controllers $\bar{u}^{*\{f\}}_i( \bar{x}'_i | \bar{x}_i)$ by the compositionality principle.
For multi-agent LSMDP in discrete-time MAS, when there exists a set of weights $\bar{\omega}_i^{\{f\}}$ such that
\begin{equation*}
\phi_i(\bar{x}_i) = -\log \bigg[ \sum_{f=1}^{F} \bar{\omega}_i^{\{f\}} \cdot \exp\Big( -\phi_i^{ \{f\} } (\bar{x}_i) \Big) \bigg],
\end{equation*}
by the definition of the discrete-time desirability function~\eqref{ExpTrans}, it follows that
\begin{equation}\label{Dis_Des_Composite}
Z_i(\bar{x}_i) = \sum_{f = 1}^{F} \bar{\omega}_i^{ \{f\} } \cdot Z_i^{\{f\}}(\bar{x}_i)
\end{equation}
for all $\bar{x}_i \in \bar{\mathcal{B}}_i$. Due to the linear relation in~\eqref{Z_update}, identity~\eqref{Dis_Des_Composite} should also hold for all interior states $\bar{x}_i \in \bar{\mathcal{I}}_i$. Substituting~\eqref{Dis_Des_Composite} into the optimal control action~\eqref{OptimalControl}, the task-optimal control action for the new task with terminal cost $\phi_i(\bar{x}_i)$ can be immediately generated from the existing controllers
\begin{equation}\label{Dis_Comp}
\bar{u}^*_i( \bar{x}'_i | \bar{x}_i) = \sum_{f = 1}^{F} \bar{W}_i^{\{f\}}(\bar{x}_i) \cdot \bar{u}^{*\{f\}}_i( \bar{x}'_i | \bar{x}_i),
\end{equation}
where $\bar{W}_i^{\{f\}}(\bar{x}_i) = \bar{\omega}_i^{\{f\}} \mathcal{W}^{\{f\}}_i(\bar{x}_i) / (\sum_{e=1}^{F} \bar{\omega}_i^{\{e\}} \mathcal{W}_i^{\{e\}}(\bar{x}_i))$ and $\mathcal{W}_i^{\{f\}}(\bar{x}_i) = \sum_{\bar{x}'_i}\bar{p}_i(\bar{x}'_i|\bar{x}_i)Z^{\{f\}}_i(\bar{x}'_i)$. For LSOC in MAS, the compositionality principle can be used not only for the generalization of controllers within the same subsystem $\bar{\mathcal{N}}_i$, but also for the generalization of control laws across the network. For any two subsystems that satisfy the aforementioned compatible conditions, the task-optimal controller of one subsystem can also be directly constructed from the existing computational result of the other subsystem by resorting to~\eqref{Dis_Comp}.
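A minimal numerical sketch of the discrete-time composition~\eqref{Dis_Comp} (desirability vectors, passive dynamics, and weights are hypothetical placeholders) is given below; note that the composite controller remains a valid probability distribution because it is a convex combination of component controllers:
\begin{verbatim}
import numpy as np

# Hypothetical component desirability vectors Z_f over successor states and
# the passive transition row p(.|x_i) at the current joint state.
rng = np.random.default_rng(3)
F, nS = 3, 25
Z = rng.random((F, nS))                 # Z_i^{f}(x_i'), f = 1..F
p = rng.random(nS); p /= p.sum()        # passive dynamics p_i(.|x_i)
omega = np.array([0.5, 0.3, 0.2])       # composition weights omega_i^{f}

# Component controllers u_f(.|x_i) = p(.) Z_f(.) / W_f and normalizers W_f.
Wf = Z @ p
u_f = (p * Z) / Wf[:, None]

# Composite controller via the weights W_bar^{f}(x_i) in eq. (Dis_Comp).
Wbar = omega * Wf / np.sum(omega * Wf)
u_star = Wbar @ u_f                     # sums to 1 up to round-off
\end{verbatim}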
The generalization of the continuous-time LSOC problem can be readily inferred by symmetry. For $F+1$ continuous-time LSOC problems that satisfy the compatible conditions, when there exist scalars $\lambda_i$, $\lambda_i^{\{f\}}$ and weights $\bar{\omega}_i^{\{f\}}$ such that
\begin{equation}\label{eq42}
\phi_i(\bar{x}_i) = -\lambda_i \log \bigg[ \sum_{f = 1}^{F} \bar{\omega}_i^{\{f\}} \cdot \exp \Big( -\frac{1}{\lambda_i^{\{ f \}}} \cdot \phi_i^{\{f\}} (\bar{x}_i) \Big) \bigg],
\end{equation}
by the definition of continuous-time desirability function~\eqref{ContTrans}, we readily have $Z_i(\bar{x}_i, t_f) = \sum_{f = 1}^{F} \bar{\omega}_i^{ \{f\} } \cdot Z_i^{\{f\}}(\bar{x}_i, t_f)$ for all $\bar{x}_i \in \bar{\mathcal{B}}_i$. Since $Z^{\{f\}}_i(\bar{x}_i, \tau)$ is the solution to linearized stochastic HJB equation \eqref{Z_Function}, the linear combination of desirability function
\begin{equation}\label{Con_Des_Composite}
Z_i(\bar{x}_i, \tau) = \sum_{f = 1}^{F} \bar{\omega}_i^{ \{f\} } \cdot Z_i^{\{f\}}(\bar{x}_i, \tau)
\end{equation}
holds everywhere from $t$ to $t_f$ on condition that \eqref{Con_Des_Composite} holds for all terminal states $\bar{\mathcal{B}}_i$, which is guaranteed by~\eqref{eq42}. Substituting~\eqref{Con_Des_Composite} into the continuous-time optimal controller~\eqref{OptimalAction2}, the task-optimal composite controller $\bar{u}_i^*(\bar{x}_i)$ can be constructed from the component controllers $\bar{u}_i^{\{f\}*}(\bar{x}_i)$ by
\begin{equation*}
\bar{u}_i^*(\bar{x}_i, t) = \sum_{f = 1}^{F} \bar{W}_i^{\{f\}}(\bar{x}_i, t) \cdot \bar{u}^{\{f\}*}_i( \bar{x}_i, t),
\end{equation*}
where $\bar{W}_i^{\{f\}}(\bar{x}_i, t) = \bar{\omega}_i^{\{f\}} Z_i^{\{f\}}(\bar{x}_i, t) / (\sum_{e=1}^{F} \bar{\omega}_i^{\{e\}} Z_i^{\{e\}}(\bar{x}_i, t))$. However, strictly speaking, this generalization technique does not apply to control policies based on trajectory optimization or policy search methods, such as PI$^2$ \cite{Theodorou_JMLR_2010} and REPS in~\hyperref[Sec3_2]{Section~3.2}, since these policy approximation methods usually only predict the optimal path distribution or optimal control policy, leaving the value or desirability function unknown.
\section{Illustrative Examples}\label{sec4}
Distributed LSOC can be deployed on a variety of MASs in reality, such as the distributed HVAC system in smart buildings, cooperative unmanned aerial vehicle (UAV) teams, and synchronous wind farms. Motivated by the experiments in~\cite{Broek_JAIR_2008, Williams_JGCD_2017, Daniel_2017}, we demonstrate the distributed LSOC algorithms with a cooperative UAV team in a cluttered environment. Compared with these preceding experiments, the distributed LSOC algorithms in this paper allow for an explicit and simpler communication network underlying the UAV team, and the joint cost function between neighboring agents can be constructed and optimized with less computation. Consider a cooperative UAV team consisting of three agents as illustrated in~\hyperref[fig2]{Figure~2}. UAV~1 and 2, connected by a solid line, are strongly coupled via their joint cost functions $q_1(\bar{x}_1)$ and $q_2(\bar{x}_2)$, and their local controllers $u_1(\bar{x}_1)$ and $u_2(\bar{x}_2)$ are designed to drive UAV~1 and 2 towards their exit state while minimizing the distance between the two UAVs and avoiding obstacles. These setups are useful when we need multiple UAVs to fly closely towards an identical destination, \textit{e.g.} carrying and delivering a heavy package together or maintaining communication channels/networks. By contrast, although UAV~3 is also connected with UAV~1 and 2 via dotted lines, the immediate cost function of UAV~3 is designed to be independent of (fully factorized from) the states of UAV~1 and 2 in each subsystem, such that UAV 3 is only loosely coupled with UAVs~1 and~2 through their terminal cost functions, which restores the scenario considered in~\cite{Broek_JAIR_2008, Williams_JGCD_2017}. The local controller of UAV~3 is then synthesized to guide the UAV to its exit state with minimal cost. In the following subsections, we will verify both the discrete-time and continuous-time distributed LSOC algorithms in this cooperative UAV team scenario.
\vspace{1em}
\begin{figure}[H]
\centering
\includegraphics[width=0.32\textwidth]{figure/CommGraph}
\caption{Communication network of UAV team. UAV 1 and 2 are strongly coupled through their running cost functions. UAV 3 is loosely coupled with UAV 1 and 2 through their terminal cost functions.}\label{fig2}
\end{figure}
\noindent
\subsection{UAVs with Discrete-Time Dynamics}
In order to verify the distributed LSMDP algorithm, we consider a cooperative UAV team described by the probability model introduced in \hyperref[Sec2_2]{Section~2.2}. The flight environment is described by a $5\times5$ grid with a total of $25$ cells to locate the positions of UAVs, and the shaded cells in~\hyperref[fig3]{Figure~3(a)} represent the obstacles. Each UAV is described by a state vector $x_i = [r_i, c_i]^\top$, where $r_i$ and $c_i \in \{1,2,3,4,5\}$ are respectively the row and column indices locating the $i^{\rm th}$ UAV. The cell with circled index \circled{$i$} denotes the initial state of the $i^{\rm th}$ UAV, and the one with boxed index \squared{ \hspace{-2pt} $i$ \hspace{-2pt} } indicates the exit state of the $i^{\rm th}$ UAV. The passive transition probabilities of interior, edge, and corner cells are shown in \hyperref[fig3]{Figure~3(b)}, \hyperref[fig3]{Figure~3(c)}, and \hyperref[fig3]{Figure~3(d)}, respectively, and the passive probability of a UAV transiting to an adjacent cell can be interpreted as the result of random winds. To fulfill the requirements of the aforementioned scenario, the controlled transition distributions should i) drive UAV 1 and 2 to their terminal state $(5, 5)$ while shortening the distance between the two UAVs, and ii) guide UAV 3 to terminal state $(1, 5)$ with minimum cost, which is related to obstacle avoidance, control cost, length of path, and flying time.
\vspace{1em}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{figure/DiscreteMap1}
\caption{Flight environment and UAVs' passive transition dynamics. (a) Flight environment, initial states, and exit states of UAVs. (b) The passive transition probability of UAVs in the interior cells. (c) The passive transition probability of UAVs in the edge cells. (d) The passive transition probability of UAVs in the corner cells.}\label{fig3}
\end{figure}
\vspace{0.5em}
In order to realize these objectives, we consider the state-related cost functions as follows
\begin{align}\label{DisCostFun}
q_1(\bar{x}_1) & = w_{12} \cdot (|r_1 - r_2| + |c_1 - c_2|) + o_1(x_1) \cdot o_2(x_2), \nonumber \allowdisplaybreaks\\
q_2(\bar{x}_2) & = w_{21} \cdot (|r_2 - r_1| + |c_2 - c_1|) + o_1(x_1) \cdot o_2(x_2),\allowdisplaybreaks \\
q_3(\bar{x}_3) & = o_3(x_3),\nonumber \allowdisplaybreaks
\end{align}
where in the following examples, the weights on the relative distance $\|x_1 - x_2\|_1$, defined by the Manhattan distance, are $w_{12} = w_{21} = 3.5$; the state and obstacle cost is $o_i(x_i) = 30$ when the $i^{\rm th}$ UAV is in an obstacle cell, and $o_i(x_i) = 2.2$ when UAV $i$ is in a regular cell; and the terminal cost is $q_i(\bar{x}_i) = 0$ for $\bar{x}_i \in \bar{\mathcal{B}}_i$. The state-related costs $q_1(\bar{x}_1)$ and $q_2(\bar{x}_2)$, which involve the states of both UAV 1 and 2, measure the joint performance of the two UAVs, while the cost $q_3(\bar{x}_3)$ only contains the state of UAV 3. Without the costs on relative distance, \textit{i.e.} $w_{12} = w_{21} = 0$, this distributed LSMDP problem degenerates into three independent shortest path problems with obstacle avoidance as considered in \cite{Broek_JAIR_2008, Williams_JGCD_2017, Daniel_2017}, and the optimal trajectories are straightforward\footnote{The shortest path for UAV 1 is $(1, 5) \rightarrow (1, 4) \rightarrow (2, 4) \rightarrow (3, 4) \rightarrow (3, 5) \rightarrow (4, 5) \rightarrow (5, 5)$. There are three different shortest paths for UAV 2, and the shortest path for UAV 3 is shown in~\hyperref[fig4]{Figure~4 (c)}.}. The desirability function $Z_i(\bar{x}_i)$ and local optimal control distribution $u^*_i(\cdot | \bar{x}_i)$ can then be computed by following~\hyperref[alg1]{Algorithm~1} in~\hyperref[appE]{Appendix~E}. \hyperref[fig4]{Figure~4} shows the maximum likelihood controlled trajectories of the UAV team subject to the passive dynamics in~\hyperref[fig3]{Figure~3} and the cost functions~\eqref{DisCostFun}. Some curious readers may wonder why UAV~1 in \hyperref[fig4]{Figure~4(a)} decides to stay in cell $(1, 4)$ for two consecutive time steps rather than moving forward to $(1, 3)$ and then flying alongside UAV~2. While this alternative corresponds to a lower state-related cost, it does not necessarily minimize the control cost and the overall immediate cost in~\eqref{eq4}. In order to verify this speculation, we increase the passive transition probabilities $p_i(\cdot | \bar{x}_i)$ beyond certain thresholds in~\hyperref[fig5]{Figure~5(a)-(c)}, which can be interpreted as stronger winds in reality. With these altered passive dynamics and the cost functions in~\eqref{DisCostFun}, the maximum likelihood controlled trajectory of UAV~1 is shown in~\hyperref[fig5]{Figure 5(d)}, which verifies our preceding reasoning. Trajectories of UAV 2 and 3 subject to the altered passive dynamics are identical to the results in~\hyperref[fig4]{Figure~4}. To provide an intuitive view of the efficiency improvements of our distributed LSMDP algorithm, \hyperref[fig5.5]{Figure~6} presents the average data size and computational complexity on each UAV ($|\mathcal{S}_i| = 25$), gauged by the row number $m$ of matrices and vectors in~\eqref{Z_update} and~\eqref{dist_alge}, subject to the centralized programming~\eqref{Z_update} and parallel programming~\eqref{dist_alge} in different communication networks, such as line, ring, complete binary tree, and fully connected topologies, which restores the scenario with exponential complexity considered in \cite{Broek_JAIR_2008, Daniel_2017}.
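For readers who wish to reproduce this workflow, the following minimal sketch (an abstract state space with hypothetical costs and passive dynamics, not the exact grid above) solves the linear desirability equation by fixed-point iteration and then recovers the optimal control distribution~\eqref{OptimalControl}:
\begin{verbatim}
import numpy as np

# Hypothetical LSMDP data: interior states I, boundary states B,
# passive transitions P, state cost q, and terminal cost phi.
rng = np.random.default_rng(4)
nI, nB = 23, 2                          # e.g. 25 cells with 2 exit states
P = rng.random((nI, nI + nB))
P /= P.sum(axis=1, keepdims=True)       # passive dynamics p_i(x'|x)
q = 2.2 * np.ones(nI)                   # state-related cost on interior
phi = np.zeros(nB)                      # terminal cost on boundary

Z = np.ones(nI + nB)
Z[nI:] = np.exp(-phi)                   # boundary condition
for _ in range(1000):                   # fixed-point iteration on interior
    Z_new = np.exp(-q) * (P @ Z)
    done = np.max(np.abs(Z_new - Z[:nI])) < 1e-12
    Z[:nI] = Z_new
    if done:
        break

# Optimal control distribution at a state s: u*(.|s) = p(.|s) Z(.) / W(s).
s = 0
u_star = P[s] * Z / (P[s] @ Z)
\end{verbatim}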
\vspace{3em}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\textwidth]{figure/DiscreteResult1}
\caption{UAVs' trajectories subject to the optimal control policy. (a) Controlled trajectory of UAV~1: $(1, 5) \rightarrow (1, 4) \rightarrow (1, 4) \rightarrow (1, 4) \rightarrow (2, 4) \rightarrow (3, 4) \rightarrow (3, 5) \rightarrow (4, 5) \rightarrow (5, 5)$. (b) Controlled trajectory of UAV 2: $(1, 1) \rightarrow (1, 2) \rightarrow (1, 3) \rightarrow (1, 4) \rightarrow (2, 4) \rightarrow (3, 4) \rightarrow (3, 5) \rightarrow (4, 5) \rightarrow (5, 5)$. (c) Controlled trajectory of UAV~3: $(1, 5) \rightarrow (1, 4) \rightarrow (2, 4) \rightarrow (3, 4) \rightarrow (3, 3) \rightarrow (4, 3) \rightarrow (5, 3) \rightarrow (5, 2) \rightarrow (5, 1)$.}\label{fig4}
\end{figure}
\clearpage
\begin{figure}[htpb]
\centering
\includegraphics[width=0.77\textwidth]{figure/DiscreteMap2}
\caption{Altered passive transition dynamics and controlled trajectory of UAV 1 subject to the new dynamics. (a-c) are respectively the passive transition probabilities for interior, edge, and corner cells. (d) Controlled trajectory of UAV 1 subject to the altered passive dynamics: $(1, 5) \rightarrow (1, 4) \rightarrow (1, 3) \rightarrow (1, 4) \rightarrow (2, 4) \rightarrow (3, 4) \rightarrow (3, 5) \rightarrow (4, 5) \rightarrow (5, 5)$.}\label{fig5}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.45\textwidth]{figure/Complexity}
\caption{Relationship between the number of agents $N$ and the data dimension and computational complexity $O(m)$. The vertical axis is the value of $m$. Solid and dashed lines respectively represent the results attained by the centralized algorithm~\eqref{Z_update} and the parallel algorithm~\eqref{dist_alge}. Blue asterisks, red crosses, green squares, and magenta triangles respectively denote the results associated with fully connected, line, ring, and complete binary tree communication networks.}\label{fig5.5}
\end{figure}
\vspace{1em}
\subsection{UAVs with Continuous-Time Dynamics}\label{sec4_1}
In order to verify the continuous-time distributed LSOC algorithms, consider the continuous-time UAV dynamics described by the following equation~\cite{Broek_JAIR_2008, Yao_AST_2019}:
\begin{equation}\label{Cont_Model}
\left(\begin{matrix}
dx_i\\
dy_i\\
dv_i\\
d\varphi_i
\end{matrix}\right) =
\left(\begin{matrix}
v_i \cos \varphi_i\\
v_i \sin \varphi_i\\
0\\
0
\end{matrix}\right) dt + \left(\begin{matrix}
0 & 0\\
0 & 0\\
1 & 0\\
0 & 1
\end{matrix}\right) \left[ \left( \begin{matrix}
u_i\\
\omega_i
\end{matrix} \right) dt + \left(\begin{matrix}
\sigma_i & 0\\
0 & \nu_i
\end{matrix}\right)dw_i
\right],
\end{equation}
where $(x_i, y_i)$, $v_i$, and $\varphi_i$ respectively denote the position, forward velocity, and direction angle of the $i^{\rm th}$ UAV; forward acceleration $u_i$ and angular velocity $\omega_i$ are the control inputs, and the disturbance $w_i$ is a standard Brownian motion. The control transition matrix $\bar{B}_i(x_i)$ is a constant matrix in~\eqref{Cont_Model}, and we set the noise level parameters to $\sigma_i = 0.1$ and $\nu_i = 0.05$ in simulation. The communication network underlying the UAV team as well as the control objectives are the same as in the discrete-time scenario. Instead of the Manhattan distance, distances in the continuous-time problem are measured by the $L_2$ norm. Hence, we consider the state-related costs defined as follows
\begin{equation}\label{UAV_RunningCost}
\begin{split}
q_1(\bar{x}_1) & = w_{11} \cdot (\| (x_1, y_1) - (x_1^{t_f}, y_1^{t_f}) \|_2 - d^{\max}_1) + w_{12} \cdot ( \| (x_1, y_1) - (x_2, y_2) \|_2 - d^{\max}_{12}),\\
q_2(\bar{x}_2) & = w_{22} \cdot (\| (x_2, y_2) - (x_2^{t_f}, y_2^{t_f}) \|_2 - d^{\max}_2) + w_{21} \cdot ( \| (x_2, y_2) - (x_1, y_1) \|_2 - d^{\max}_{21}),\\
q_3(\bar{x}_3) & = w_{33} \cdot (\| (x_3, y_3) - (x_3^{t_f}, y_3^{t_f}) \|_2 - d^{\max}_3),
\end{split}
\end{equation}
where $w_{ii}$ is the weight on distance between the $i^{\rm th}$ UAV and its exit position; $w_{ij}$ is the weight on distance between the $i^{\rm th}$ and $j^{\rm th}$ UAVs; $d^{\max}_i$ usually denotes the initial distance between the $i^{\rm th}$ UAV and its destination, and $d_{ij}^{\max}$ denotes the initial distance between the $i^{\rm th}$ and $j^{\rm th}$ UAVs. The parameters $d^{\max}_i$ and $d_{ij}^{\max}$ are chosen to regularize the numerical accuracy and stability of algorithms.
To show an intuitive improvement brought by the joint state-related cost, we first verify the continuous-time LSOC algorithm in a simple flight environment without obstacles. Consider three UAVs forming the network in~\hyperref[fig2]{Figure~2}, with initial states $x_1^{0} = (5, 5, 0.5, 0)^\top$, $x_2^{0} = (5, 45, 0.5, 0)^\top$, $x_3^0 = (5, 25, 0.5, 0)^\top$ and identical exit state $x_i^{t_f} = (45, 25, 0, 0)^\top$ for $i = 1, 2, 3$. The exit time is $t_f = 25$, and the length of each control cycle is $0.2$. When sampling trajectory roll-outs $\mathcal{Y}_i$, the time interval from $t$ to $t_f$ is partitioned into $K = 7$ intervals of equal length $\varepsilon$, \textit{i.e.} $\varepsilon K = t_f - t$, until $\varepsilon$ becomes less than 0.2. Meanwhile, to make the exploration process more aggressive, we increase the noise level parameters to $\sigma_i = 0.75$ and $\nu_i = 0.65$ when sampling trajectory roll-outs. The size of the data set $\mathcal{Y}_i$ for estimator~\eqref{MC_Estimator} in each control cycle is $400$ sample trajectories, which can be generated concurrently on a GPU~\cite{Williams_JGCD_2017}. The control weight matrices $\bar{R}_i$ are selected as identity matrices. Guided by the sampling-based distributed LSOC algorithm, \hyperref[alg2]{Algorithm~2} in \hyperref[appE]{Appendix~E}, UAV trajectories and the relative distance between UAV 1 and 2 under both joint and independent state-related costs are presented in~\hyperref[fig6]{Figure~7}. Letting the update rate constraint be $\delta_i = 25$ and the size of the trajectory set be $|\mathcal{Y}_i| = 400$ and $150$ for the initial and subsequent policy iterations in every control period, simulation results obtained from the REPS-based distributed LSOC algorithm, \hyperref[alg3]{Algorithm~3} in \hyperref[appE]{Appendix~E}, are given in~\hyperref[fig7]{Figure~8}. \hyperref[fig6]{Figure~7} and \hyperref[fig7]{Figure~8} imply that joint state-related costs can significantly influence the trajectories and shorten the relative distance between UAV 1 and 2, which fulfills our preceding control requirements.
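To convey the flavor of the sampling-based estimator, the following simplified single-agent sketch (hypothetical passive dynamics and costs; a generic path-integral estimator rather than the exact estimator~\eqref{MC_Estimator}) draws uncontrolled roll-outs, weights them by their exponentiated path values, and averages the first-step noise to form a control estimate:
\begin{verbatim}
import numpy as np

# Minimal path-integral control sketch for one control cycle of a scalar
# noise channel (all dynamics and costs are hypothetical).
rng = np.random.default_rng(5)
K, N, eps, lam = 7, 400, 0.2, 1.0       # steps, roll-outs, step, lambda

def q(x):                               # state-related cost
    return np.sum(x**2)

S = np.zeros(N)                         # path values
dW0 = np.zeros(N)                       # first-step noise per roll-out
for n in range(N):
    x = np.array([5.0, 5.0])
    for k in range(K):
        dw = rng.normal(scale=np.sqrt(eps))
        if k == 0:
            dW0[n] = dw
        x = x + np.array([0.0, dw])     # uncontrolled (passive) dynamics
        S[n] += eps * q(x) / lam
    S[n] += q(x) / lam                  # terminal cost, here phi = q

# Softmax path weights; the weighted first-step noise gives the control.
w = np.exp(-(S - S.min())); w /= w.sum()
u0 = (w @ dW0) / eps
\end{verbatim}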
\begin{figure}[H]
\centering
\vspace{-0.5em}\includegraphics[width=0.90\textwidth]{figure/ContResult1}\vspace{-0em}
\caption{UAV trajectories and the distance between UAV 1 and 2 from 50 trials subject to the sampling-based distributed LSOC algorithm. (a) Trajectories of UAVs. Red, blue and green lines are trajectories for UAV 1, 2 and 3, respectively. Dashed lines are from a trial with factorized (or independent) state costs ($w_{11} = w_{22} = 0.75$, $w_{33} = 1$, $w_{12} = w_{21} = 0$), and solid (transparent) lines are from trials with joint state costs ($w_{11} = w_{22} = 0.75$, $w_{33} = 1$, $w_{12} = w_{21} = 1.5$). (b) Distance between UAV 1 and 2. Red dashed line and blue solid line are respectively the mean distances from trials with factorized state cost and joint state cost. Height of strip represents one standard deviation.}\label{fig6}
\end{figure}
\begin{figure}[H]
\centering
\vspace{-0.5em}\includegraphics[width=0.90\textwidth]{figure/ContResult3}\vspace{-0em}
\caption{UAV trajectories and distance between UAV 1 and 2 from 50 trials subject to the REPS-based distributed LSOC algorithm. (a)~Trajectories of UAVs. Red, blue and green lines are trajectories for UAV 1, 2 and 3, respectively. Dashed lines are from a trial with factorized state costs ($w_{11} = w_{22} = w_{33} = 0.1$, $w_{12} = w_{21} = 0$), and solid (transparent) lines are from trials with joint state costs ($w_{11} = w_{22} = w_{33} = 0.1$, $w_{12} = w_{21} = 0.2$). (b) Red dashed line and blue solid line are respectively the mean distances from trials with factorized state cost and joint state cost. Height of strip represents one standard deviation.}\label{fig7}
\end{figure}
We then consider a cluttered flight environment as in the discrete-time example. Suppose three UAVs form the network in~\hyperref[fig2]{Figure~2}, with initial states $x_1^{0} = (45, 5, 0.35, \pi)^\top$, $x_2^{0} = (5, 5, 0.65, 0)^\top$, $x_3^0 = (45, 5, 0.5, \pi)^\top$ and exit states $x_1^{t_f} = x_2^{t_f} = (45, 45, 0, \pi/2)^\top$, $x_3^{t_f} = (5, 45, 0, \pi)^\top$. The exit time is $t_f = 30$, and the length of each control cycle is $0.2$. When sampling trajectory roll-outs $\mathcal{Y}_i$, the time interval from $t$ to $t_f$ is partitioned into $K = 18$ intervals of equal length. The size of $\mathcal{Y}_i$ is $400$ trajectory roll-outs when adopting the random sampling estimator, and the other parameters are the same as in the preceding continuous-time example. Subject to the sampling-based distributed LSOC algorithm, UAV trajectories and the relative distance between UAVs 1 and 2 under both joint and independent state-related costs are presented in~\hyperref[fig8]{Figure~9}. Letting the update rate constraint be $\delta_i = 50$ and the size of the trajectory set be $|\mathcal{Y}_i| = 400$ and $150$ for the initial and subsequent policy iterations in every control period, experimental results obtained from the REPS-based distributed LSOC algorithm are given in~\hyperref[fig9]{Figure~10}. \hyperref[fig8]{Figure~9} and \hyperref[fig9]{Figure~10} show that our continuous-time distributed LSOC algorithms can guide UAVs to their terminal states, avoid obstacles, and shorten the relative distance between UAVs~1 and 2. It is also worth noticing that since there exists more than one shortest path for UAV 2 under factorized state cost (see the footnote in \hyperref[sec4_1]{Section 4.1}), the standard deviations of the distance in \hyperref[fig8]{Figure~9} and \hyperref[fig9]{Figure~10} are significantly larger than in other cases. Lastly, we compare the sample-efficiency of the sampling-based and REPS-based distributed LSOC algorithms in the preceding two continuous-time examples. \hyperref[fig10]{Figure~11} shows the value of the immediate cost function $c_2(\bar{x}_2, \bar{u}_2)$ from subsystem $\bar{\mathcal{N}}_2$ versus the number of trajectory roll-outs, where the maximum numbers of trajectory roll-outs on the horizontal axes are determined by the REPS-based trials with the smallest number of sample roll-outs. \hyperref[fig10]{Figure~11} shows that the REPS-based distributed LSOC algorithm is more sample-efficient than the sampling-based algorithm.
\vspace{4em}
\begin{figure}[htpb]
\centering
\includegraphics[width=0.90\textwidth]{figure/ContResult2}
\caption{UAV trajectories and relative distance between UAV 1 and 2 from 100 trials based on the random sampling estimator. (a)~Trajectories of UAVs. Red, blue and green lines are trajectories for UAV 1, 2 and 3, respectively. Dashed lines are from a trial with factorized (or independent) state costs ($w_{11} = w_{22} = w_{33} = 1$, $w_{12} = w_{21} = 0$), and solid (transparent) lines are from trials with joint state costs ($w_{11} = w_{22} = w_{33} = 1$, $w_{12} =1.5, w_{21} = 0.5$). (b) Distance between UAV 1 and 2. Red dashed line and blue solid line are respectively the mean distances from trials with independent state cost and joint state cost. Height of strip represents one standard deviation.}\label{fig8}
\end{figure}
\clearpage
\begin{figure}[H]
\centering
\includegraphics[width=0.90\textwidth]{figure/ContResult4}\vspace{-0.5em}
\caption{UAV trajectories and relative distance between UAV 1 and 2 from 100 trials based on REPS. (a)~Trajectories of UAVs. Red, blue and green lines are trajectories for UAV 1, 2 and 3, respectively. Dashed lines are from a trial with factorized state costs ($w_{11} = w_{22} = w_{33} = 0.18$, $w_{12} = w_{21} = 0$), and solid (transparent) lines are from trials with joint state costs ($w_{11} = w_{22} = w_{33} = 0.18$, $w_{12} =0.27, w_{21} = 0.1$). (b) Distance between UAV 1 and 2. Red dashed line and blue solid line are respectively the mean distances from trials with independent state cost and joint state cost. Height of strip is one standard deviation.}\label{fig9}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.90\textwidth]{figure/SampleEfficiency}\vspace{-0.5em}
\caption{Sample-efficiency of continuous-time algorithms from 100 trials. (a) Immediate cost $c_2(\bar{x}_2, \bar{u}_2)$ in the simple scenario without obstacles. Red dashed line is the mean of immediate cost subject to the random sampling approach. Blue solid line is the mean of immediate cost subject to the REPS algorithm. The height of strip is one standard deviation. (b) Immediate cost $c_2(\bar{x}_2, \bar{u}_2)$ in the complex scenario with obstacles. Interpretation is the same as (a).}\label{fig10}
\end{figure}
To verify the effectiveness of the distributed LSOC algorithms in a larger MAS with more agents, we consider a line-shape network consisting of nine UAVs as shown in \hyperref[fig12]{Figure~12}. These nine UAVs $i = \{1, 2, \cdots, 9\}$ are initially distributed at $x_i^0 = (10, 100 - 10i, 0.5, 0)$ as shown in \hyperref[fig13]{Figure~13}, and they can be divided into three groups based on their exit states. UAVs 1 to 6 share the same exit state $A$ at $x_{1:6}^{t_f} = (90, 65, 0, 0)$; the exit state $B$ of UAVs 7 and 8 is at $x_{7:8}^{t_f} = (90, 25, 0, 0)$, and UAV 9 is expected to exit at state $C$, $x_9^{t_f} = (90, 10, 0, 0)$, where the exit time is $t_f = 40 \textrm{ sec}$. As exhibited in \hyperref[fig12]{Figure~12}, UAVs from different groups are either loosely coupled through their terminal cost functions or mutually independent, where the latter scenario does not require any communication between agents. A state-related cost function $q_i(\bar{x}_i)$ in subsystem $\bar{\mathcal{N}}_i$ is designed to penalize the distances between neighboring agents as well as the distance to the exit state of agent $i$:
\begin{align*}
q_i(\bar{x}_i) = w_{ii} \cdot (\|(x_i, y_i) - (x_i^{t_f}, y_i^{t_f})\|_2 - d_i^{\max}) &+ w_{i, i-1} \cdot (\|(x_i, y_i) - (x_{i-1}, y_{i-1})\| - d^{\max}_{i, i-1}) \\
&+ w_{i, i+1} \cdot (\|(x_i, y_i) - (x_{i+1}, y_{i+1})\| - d^{\max}_{i, i+1}),
\end{align*}
where $w_{i,j}$ is the weight related to the distance between agents $i$ and $j$; $w_{i, j} = 0$ when $j = 0$ or $10$; $d_{i,j}^{\max}$ is the regularization term for numerical stability, which is set to the initial distance between agents $i$ and $j$ in this demonstration; and, unless explicitly stated otherwise, the remaining notations and parameters are the same as in~\eqref{UAV_RunningCost} and the first example of this subsection. Trajectories of the UAV team subject to the two distributed LSOC algorithms, \hyperref[alg2]{Algorithm~2} and \hyperref[alg3]{Algorithm~3} in \hyperref[appE]{Appendix~E}, are presented in \hyperref[fig13]{Figure~13}. For some network structures, such as line, loop, star, and complete binary tree, in which the scale of every factorial subsystem is tractable, increasing the total number of agents in the network will not dramatically boost the computational complexity on local agents, thanks to the distributed LSOC framework proposed in this paper. Verification along with more simulation examples on the generalization of distributed LSOC controllers discussed in \hyperref[sec3.3]{Section~3.3} is supplemented in \cite{Song_arxiv_2020}.
\vspace{1em}
\begin{figure}[H]
\centering
\includegraphics[width=0.68\textwidth]{figure/CommGraph2}
\caption{Communication network of a UAV team with nine agents. UAV 1 to 6, as well as UAV 7 and 8 are strongly coupled (represented by solid lines) through their immediate cost functions. UAV 6 and 7, as well as UAV 8 and 9 are loosely coupled (represented by dashed lines) through their terminal cost functions.}\label{fig12}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.90\textwidth]{figure/Large}
\caption{UAV trajectories subject to two distributed LSOC algorithms. (a) Trajectories of the UAV team controlled by the sampling-based distributed LSOC algorithm with $w_{1,2} = w_{2,3} = w_{3,4} = w_{6,5} = w_{7,8} = w_{8,7} = w_{9,9} =1$, $w_{ii} = w_{4, 3} = w_{4, 5} = w_{5, 4} = w_{5, 6} = 0.5$, and $w_{2, 1} = w_{3, 2} = w_{6, 7} = w_{7, 6} = w_{9,8} = 0$. (b) Trajectories of the UAV team controlled by the REPS-based distributed LSOC algorithm with $w_{1,2} = w_{2,3} = w_{3,4} = w_{6,5} = w_{7,8} = w_{8,7} = w_{9,9} =0.2$, $w_{ii} = w_{4, 3} = w_{4, 5} = w_{5, 4} = w_{5, 6} = 0.1$, and $w_{2, 1} = w_{3, 2} = w_{6, 7} = w_{7, 6} = w_{9,8} = 0$.}\label{fig13}
\end{figure}
Many interesting problems remain unsolved in the area of distributed (linearly solvable) stochastic optimal control and deserve further investigation. Most existing papers, including this one, assume that the passive and controlled dynamics of different agents are mutually independent. However, once practical constraints, such as collisions between different UAVs, are taken into account, the passive and controlled dynamics of different agents are usually no longer mutually independent. Meanwhile, while this paper considered the scenario in which all states of local agents are fully observable, it will be interesting to study MASs of partially observable agents with hidden states. Lastly, distributed LSOC algorithms subject to random networks, communication delays, infinite horizons, or discounted costs are also worth attention.
\section{Conclusion}\label{sec5}
Discrete-time and continuous-time distributed LSOC algorithms for networked MASs have been investigated in this paper. A distributed control framework based on factorial subsystems has been proposed, which allows the joint state or cost function between neighboring agents to be optimized with local observations and tractable computational complexity. Under this distributed framework, the discrete-time multi-agent LSMDP problem was addressed by solving the local system of linear equations in each subsystem, and a parallel programming scheme was proposed to decentralize and expedite the computation. The optimal control action/policy for the continuous-time multi-agent LSOC problem was formulated as a path integral, which was approximated by a distributed sampling method and a distributed REPS method, respectively. Numerical examples of coordinated UAV teams were presented to verify the effectiveness and advantages of these algorithms, and some open problems were given at the end of this paper.
\section*{Acknowledgments}
This work was supported in part by NSF-NRI, AFOSR, and the ZJU-UIUC Institute Research Program. The authors would like to thank Dr. Hunmin Lee and Gabriel Haberfeld for their constructive comments. In this arXiv version, the authors would also like to thank the readers and staff of arXiv.org.
\section*{Conflict of Interest Statement}
The authors have agreed to publish this article, and we declare that there is no conflict of interest regarding its publication.
\section*{Appendix A: Proof for Theorem 1}\label{appA}
\noindent \textit{Proof for \hyperref[thm1]{Theorem 1}}: Substituting the joint running cost function~\eqref{eq4} into the joint Bellman equation~\eqref{CompBellman}, and by the definitions of KL-divergence and exponential transformation~\eqref{ExpTrans}, we have
\begin{equation}\label{eqA1}
\begin{split}
V_i(\bar{x}_i) & = \min_{\bar{u}_i} \left\{ q_i(\bar{x}_i) + \textrm{KL}(\bar{u}_i(\cdot | \bar{x}_i) \ \| \ \bar{p}_i(\cdot | \bar{x}_i)) + \mathbb{E}_{\bar{x}'_i \sim \bar{u}_i(\cdot | \bar{x}_i)}[V_i(\bar{x}'_i)] \right\}\\
&= \min_{\bar{u}_i} \left\{ q_i(\bar{x}_i) + \mathbb{E}_{\bar{x}'_i \sim \bar{u}_i(\cdot | \bar{x}_i)} \left[\log \dfrac{\bar{u}_i(\bar{x}'_i | \bar{x}_i)}{\bar{p}_i(\bar{x}'_i | \bar{x}_i)}\right] + \mathbb{E}_{\bar{x}' \sim \bar{u}_i (\cdot | \bar{x}_i)}\left[\log \frac{1}{Z_i(\bar{x}'_i)}\right] \right\}\\
& = \min_{\bar{u}_i} \left\{ q_i(\bar{x}_i) + \mathbb{E}_{\bar{x}'_i \sim \bar{u}_i(\cdot | \bar{x}_i)} \left[\log \dfrac{\bar{u}_i(\bar{x}'_i|\bar{x}_i)}{\bar{p}_i(\bar{x}'_i|\bar{x}_i) Z_i(\bar{x}'_i)}\right] \right\}.
\end{split}
\end{equation}
The optimal policy will be straightforward if we can rewrite the expectation on the RHS of~\eqref{eqA1} as a KL-divergence and exploit the minimum condition of KL-divergence. While the control mapping $\bar{u}_i(\bar{x}_i'|\bar{x}_i)$ in~\eqref{eqA1} is a probability distribution, the denominator $\bar{p}_i(\bar{x}'_i|\bar{x}_i)Z_i(\bar{x}'_i)$ is not necessarily a probability distribution. Hence, we define the following normalizing term
\begin{equation}\label{eqA2}
\mathcal{W}_i(\bar{x}_i) = \sum_{\bar{x}'_i}\bar{p}_i(\bar{x}'_i|\bar{x}_i)Z_i(\bar{x}'_i).
\end{equation}
Since $\bar{p}_i(\bar{x}'_i|\bar{x}_i)Z_i(\bar{x}'_i)/\mathcal{W}_i(\bar{x}_i)$ is a well-defined probability distribution, we can rewrite the joint Bellman equation~\eqref{eqA1} as follows
\begin{equation}\label{eqA3}
\begin{split}
V_i(\bar{x}_i) & = \min_{\bar{u}_i} \left\{ q_i(\bar{x}_i) + \mathbb{E}_{\bar{x}'_i \sim \bar{u}_i(\cdot | \bar{x}_i)} \left[ \log \dfrac{\bar{u}_i(\bar{x}'_i | \bar{x}_i)}{\bar{p}_i(\bar{x}'_i | \bar{x}_i) Z_i(\bar{x}'_i) / \mathcal{W}_i(\bar{x}_i)} - \log \mathcal{W}_i(\bar{x}_i) \right] \right\}\\
& = \min_{\bar{u}_i} \left\{ q_i(\bar{x}_i) - \log \mathcal{W}_i(\bar{x}_i) + \mathrm{KL}\left( \bar{u}_i( \cdot | \bar{x}_i) \ \Bigg\| \ \frac{\bar{p}_i(\cdot | \bar{x}_i)Z_i(\cdot)}{\mathcal{W}_i(\bar{x}_i)} \right) \right\},
\end{split}
\end{equation}
where only the last term depends on the joint control action $\bar{u}_i( \cdot | \bar{x}_i)$. According to the minimum condition of KL-divergence, the last term in~\eqref{eqA3} attains its absolute minimum at $0$ if and only if
\begin{equation*}
\bar{u}_i^*(\cdot | \bar{x}_i) = \frac{\bar{p}_i(\cdot | \bar{x}_i)Z_i(\cdot)}{\mathcal{W}_i(\bar{x}_i)},
\end{equation*}
which gives the optimal control action~\eqref{OptimalControl} in~\hyperref[thm1]{Theorem~1}. By substituting~\eqref{OptimalControl} into~\eqref{eqA3}, we can minimize the RHS of joint Bellman equation~\eqref{CompBellman} and remove the minimum operator:
\begin{equation}\label{eqA5}
V_i(\bar{x}_i) = q_i(\bar{x}_i) - \log \mathcal{W}_i(\bar{x}_i).
\end{equation}
Exponentiating both sides of~\eqref{eqA5} and substituting~\eqref{ExpTrans} and~\eqref{eqA2} into the result, the Bellman equation~\eqref{eqA5} can be rewritten as a linear equation with respect to the desirability function
\begin{equation}
\begin{split}
Z_i(\bar{x}_i) = \exp[-V_i(\bar{x}_i)] = \exp[-q_i(\bar{x}_i)] \cdot \mathcal{W}_i(\bar{x}_i) = \exp[-q_i(\bar{x}_i)] \cdot \sum_{\bar{x}'_i}\bar{p}_i(\bar{x}'_i|\bar{x}_i)Z_i(\bar{x}'_i),
\end{split}
\end{equation}
which implies~\eqref{eq_prop1} in~\hyperref[thm1]{Theorem~1}. This completes the proof. \qed
\section*{Appendix B: Proof for Theorem 2}\label{appB}
Before presenting the proof of \hyperref[thm2]{Theorem~2}, we introduce the Feynman–Kac formula, which builds an important relationship between parabolic PDEs and stochastic processes, as \hyperref[lem1]{Lemma~4}.
\begin{lemma}[Feynman–Kac formula]\label{lem1}
Consider the Kolmogorov backward equation (KBE) described as follows
\begin{equation*}
\partial_t Z(x, t) = \frac{q(x, t)}{\lambda} \cdot Z(x, t) - f(x, t)^\top \cdot \nabla_x Z(x, t) - \frac{1}{2}\textrm{tr}\left( B(x)\sigma \sigma^\top B(x)^\top \cdot \nabla_{xx}Z(x, t) \right),
\end{equation*}
where the terminal condition is given by $Z(x, t_f) = \exp[ -\phi(x({t_f})) / \lambda]$. Then the solution to this KBE can be written as a conditional expectation
\begin{equation*}
Z(x, t) = \mathbb{E}_{x, t}\left[ \exp\left( -\frac{1}{\lambda} \phi(y(t_f)) - \frac{1}{\lambda} \int_{t}^{t_f} q(y, \tau) \ d\tau \right) \Big| \ y(t) = x \right],
\end{equation*}
under the probability measure that $y$ is an It\^{o} diffusion process driven by the equation $d y(\tau) = f(y,\tau) d\tau + B(y) \sigma \cdot dw(\tau)$ with initial condition $y(t) = x$. \qed
\end{lemma}
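As a numerical sanity check of \hyperref[lem1]{Lemma~4}, the following minimal Monte Carlo sketch (a scalar diffusion with zero drift and hypothetical costs) estimates the conditional expectation above by simulating the uncontrolled It\^{o} process with an Euler--Maruyama scheme:
\begin{verbatim}
import numpy as np

# Monte Carlo estimate of Z(x, t) for dy = sigma dw, f = 0 (hypothetical).
rng = np.random.default_rng(6)
lam, sigma, t, tf = 1.0, 0.5, 0.0, 1.0
K = 100
eps = (tf - t) / K
phi = lambda y: y**2                    # terminal cost
q = lambda y: 0.1 * y**2                # running state cost

def Z_hat(x, n_paths=20000):
    y = np.full(n_paths, float(x))
    integral = np.zeros(n_paths)
    for _ in range(K):                  # Euler-Maruyama simulation
        integral += q(y) * eps
        y += sigma * np.sqrt(eps) * rng.normal(size=n_paths)
    return np.mean(np.exp(-(phi(y) + integral) / lam))
\end{verbatim}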
\noindent We now show the proof for \hyperref[thm2]{Theorem 2}.
\vspace{0.5em}
\noindent \textit{Proof for \hyperref[thm2]{Theorem 2}}: First, we show that the joint optimality equation~\eqref{ContBellman} can be formulated into the joint stochastic HJB equation~\eqref{sto_HJB} that gives an analytic expression of optimal control action~\eqref{OptimalAction}. Substituting immediate cost function~\eqref{cont_cost} into optimality equation~\eqref{ContBellman} and letting $s$ be a time step between $t$ and $t_f$, optimality equation~\eqref{ContBellman} can be rewritten as
\begin{equation}\label{EqB1}
\begin{split}
V_i(\bar{x}_i, t) & = \min_{\bar{u}_i} \mathbb{E}^{\bar{u}_i}_{\bar{x}_i, t} \left[ \phi_i(\bar{x}_{i},t_f) + \int_{t}^{t_f} q_i(\bar{x}_i, \tau) + \frac{1}{2} \bar{u}_i(\bar{x}_i, \tau)^\top \bar{R}_i\bar{u}_i(\bar{x}_i, \tau) \ d\tau \right]\\
& = \min_{\bar{u}_i} \mathbb{E}_{\bar{x}_i,t}^{\bar{u}_i} \left[ V_i(\bar{x}_i, s) + \int_{t}^{s} q_i(\bar{x}_i, \tau) + \frac{1}{2} \bar{u}_i(\bar{x}_i, \tau)^\top \bar{R}_i\bar{u}_i(\bar{x}_i, \tau) \ d\tau \right].
\end{split}
\end{equation}
With some rearrangements and dividing both sides of~\eqref{EqB1} by $s - t > 0$, we have
\begin{equation}\label{EqB2}
0 = \min_{\bar{u}_i} \mathbb{E}_{\bar{x}_i,t}^{\bar{u}_i} \left[ \frac{V_i(\bar{x}_i, s) - V_i(\bar{x}_i,t)}{s - t} + \frac{1}{s - t}\int_{t}^{s} q_i(\bar{x}_i, \tau) + \frac{1}{2} \bar{u}_i(\bar{x}_i, \tau)^\top \bar{R}_i\bar{u}_i(\bar{x}_i, \tau) \ d\tau \right].
\end{equation}
By letting $s \rightarrow t$, the optimality equation~\eqref{EqB2} becomes
\begin{equation}\label{EqB3}
0 = \min_{\bar{u}_i} \mathbb{E}_{\bar{x}_i,t}^{\bar{u}_i} \left[ \frac{dV_i(\bar{x}_i, t)}{dt} + q_i(\bar{x}_i, t) + \frac{1}{2} \bar{u}_i(\bar{x}_i, t)^\top\bar{R}_i\bar{u}_i(\bar{x}_i, t) \right].
\end{equation}
Applying It\^{o}'s formula~\cite{LeGall_2016}, the differential $dV_i(\bar{x}_i, t)$ in~\eqref{EqB3} can be expanded as
\begin{equation}\label{eq54}
dV_i(\bar{x}_i, t) = \sum_{j \in \bar{\mathcal{N}}_i} \sum_{m=1}^{M} \frac{\partial V_i(\bar{x}_i, t)}{\partial x_{j(m)}} dx_{j(m)} + \frac{\partial V_i(\bar{x}_i, t)}{\partial t} dt + \frac{1}{2} \sum_{j, k \in \bar{\mathcal{N}}_i} \sum_{m, n = 1}^{M} \frac{\partial^2 V_i(\bar{x}_i, t)}{\partial x_{j(m)}\partial x_{k(n)}}dx_{j(m)} dx_{k(n)}.
\end{equation}
For conciseness, we will omit the indices (or subscripts) of the state components, $(m)$ and $(n)$, in the following derivations. Dividing both sides of~\eqref{eq54} by $dt$, taking the expectation over all trajectories initialized at $(\bar{x}_i^t, t)$ and subject to control action $\bar{u}_i$, and substituting the joint dynamics~\eqref{eq13} into the result, we have
\begin{equation}\label{EqB4}
\begin{split}
\mathbb{E}_{\bar{x}_i,t}^{\bar{u}_i}\left[ \frac{dV_i(\bar{x}_i, t)}{dt} \right] = \sum_{j \in \bar{\mathcal{N}}_i} [f_j(x_j,t) & + B_j(x_j) u_j(\bar{x}_i, t) ]^\top \cdot \nabla_{x_j} V_i(\bar{x}_i,t) + \frac{\partial V_i(\bar{x}_i, t)}{\partial t} \\
&+ \frac{1}{2} \sum_{j \in \bar{\mathcal{N}}_i} \textrm{tr} \left( B_j(x_j)\sigma_j \sigma_j^\top B_j(x_j)^\top \cdot \nabla_{x_jx_j}V_i(\bar{x}_i, t) \right),
\end{split}
\end{equation}
where the identity $\mathbb{E}_{\bar{x}_i,t}^{\bar{u}_i}[dx_{j(m)}dx_{k(n)}] = (\sigma_j \sigma^\top_j)_{mm} \delta_{jk} \delta_{mn} dt$ derived from the property of standard Brownian motion $\mathbb{E}_{\bar{x}_i,t}^{\bar{u}_i}[dw_{j(m)}dw_{k(n)}] = \delta_{jk} \delta_{mn} dt$ is invoked, and operators $\nabla_{x_j}$ and $\nabla_{x_jx_j}$ follow the same definitions in~\eqref{singleHJB}. Substituting~\eqref{EqB4} into~\eqref{EqB3}, the joint stochastic HJB equation~\eqref{sto_HJB} in \hyperref[thm2]{Theorem~2} is obtained
\begin{equation*}
\begin{split}
-\partial_t V_i(\bar{x}_i, t) = & \min_{\bar{u}_i} \mathbb{E}_{\bar{x}_i,t}^{\bar{u}_i} \bigg[ \sum_{j \in \bar{\mathcal{N}}_i} [f_j(x_j,t) + B_j(x_j) u_j(\bar{x}_i, t)]^\top \cdot \nabla_{x_j} V_i(\bar{x}_i,t) + q_i(\bar{x}_i, t) \\
& + \frac{1}{2} \bar{u}_i(\bar{x}_i, t)^\top\bar{R}_i\bar{u}_i(\bar{x}_i, t) + \frac{1}{2} \sum_{j \in \bar{\mathcal{N}}_i} \textrm{tr} \left( B_j(x_j)\sigma_j \sigma_j^\top B_j(x_j)^\top \cdot \nabla_{x_jx_j}V_i(\bar{x}_i, t) \right) \bigg],
\end{split}
\end{equation*}
where the boundary condition is given by $V_i(\bar{x}_i, t_f) = \phi_i(\bar{x}_i)$. The joint optimal control action $\bar{u}_i^*(\bar{x}_i, t)$ can be obtained by setting the derivative of~\eqref{sto_HJB} with respect to $\bar{u}_i(\bar{x}_i, t)$ equal to zero. When the control weights $R_j$ of each agent $j \in \bar{\mathcal{N}}_i$ are coupled, \textit{i.e.} the joint control weight matrix $\bar{R}_i$ cannot be formulated as a block diagonal matrix, the joint optimal control action for subsystem $\mathcal{\bar{N}}_i$ is given as~\eqref{OptimalAction} in~\hyperref[thm2]{Theorem~2}, $\bar{u}^*_i(\bar{x}_i, t) = -\bar{R}_i^{-1}$$\bar{B}_i(\bar{x}_i)^\top \cdot \nabla_{\bar{x}_i}V_i(x_i, t)$, where $\nabla_{\bar{x}_i}$ denotes the gradient with respect to the joint state $\bar{x}_i$. However, it is more common in practice that the joint control weight matrix is given by $\bar{R}_i = \textrm{diag}\{R_i, R_{j \in \mathcal{N}_i} \}$ as in~\eqref{cont_cost}, and the joint control cost satisfies $\frac{1}{2}\bar{u}_i^\top \bar{R}_i \bar{u}_i = \sum_{j\in\mathcal{\bar{N}}_i}\frac{1}{2}u_j^\top R_ju_j$. Setting the derivatives of~\eqref{sto_HJB} with respect to ${u}_j(\bar{x}_i, t)$ equal to zero in the latter case gives the local optimal control action of agent $j \in \mathcal{\bar{N}}_i$
\begin{equation}\label{LocOptCtrl}
u^*_{j}(\bar{x}_i, t) = - R_j^{-1} B_j(x_j)^\top \cdot \nabla_{x_j}V_i(\bar{x}_i, t).
\end{equation}
For conciseness of derivations and considering the formulation of~\eqref{sto_HJB}, we will mainly focus on the latter scenario, $\bar{R}_i = \textrm{diag}\{R_i, R_{j \in \mathcal{N}_i} \}$, in the remaining part of this proof.
In order to solve the stochastic HJB equation~\eqref{sto_HJB} and evaluate the optimal control action~\eqref{OptimalAction}, we consider linearizing~\eqref{sto_HJB} with the Cole-Hopf transformation~\eqref{ContTrans}, $V_i(\bar{x}_i, t) = -\lambda_i \log Z_i(\bar{x}_i, t)$. Subject to this transformation, the derivative and gradients in~\eqref{sto_HJB} satisfy
\begin{equation}\label{der1}
\partial_t V_i(\bar{x}_i,t) = - \lambda_i \cdot \frac{ \partial_{t} Z_i(\bar{x}_i,t)}{Z_i(\bar{x}_i,t)},
\end{equation}
\begin{equation}\label{der2}
\nabla_{x_j}V_i(\bar{x}_i,t) = - \lambda_i \cdot \frac{\nabla_{x_j} Z_i(\bar{x}_i,t)}{Z_i(\bar{x}_i,t)},
\end{equation}
\begin{equation}\label{der3}
\nabla_{x_jx_j} V_i(\bar{x}_i,t) = - \lambda_i \cdot \left[ \frac{\nabla_{x_jx_j} Z_i(\bar{x}_i,t)}{Z_i(\bar{x}_i,t)} - \frac{\nabla_{x_j}Z_i(\bar{x}_i,t) \cdot \nabla_{x_j}Z_i(\bar{x}_i,t)^\top}{Z_i(\bar{x}_i,t)^2} \right].
\end{equation}
Equalities~\eqref{der2} and~\eqref{der3} will still hold when we replace the agent's state $x_j$ in $\nabla_{x_j}$ and $\nabla_{x_j x_j}$ by the joint state $\bar{x}_i$ and reformulate the joint HJB \eqref{sto_HJB} in a more compact form. Substituting local optimal control action~\eqref{LocOptCtrl}, and gradients~\eqref{der2} and \eqref{der3} into the stochastic HJB equation~\eqref{sto_HJB}, the corresponding terms of each agent $j \in \mathcal{\bar{N}}_i$ in~\eqref{sto_HJB} satisfy
\begin{align}\label{EqB5}
[B_j(x_j) u_j(\bar{x}_i, t)]^\top \nabla_{x_j} & V_i(\bar{x}_i, t) + \frac{1}{2}{u}_{j}(\bar{x}_i,t)^\top {R}_{j} u_j(\bar{x}_i, t) \nonumber \\
= & - \frac{1}{2} \nabla_{x_j}V_i(\bar{x}_i, t)^\top \cdot B_j(x_j)R_j^{-1} B_j(x_j)^\top \cdot \nabla_{x_j} V_i(\bar{x}_i, t) \\
= & \ \frac{-\lambda_i^2}{2 \cdot Z_i(\bar{x}_i, t)^2} \cdot \nabla_{x_j} Z_i(\bar{x}_i,t)^\top \cdot B_j(x_j)R_j^{-1} B_j(x_j)^\top \cdot \nabla_{x_j} Z_i(\bar{x}_i,t), \nonumber
\end{align}
\begin{align}\label{EqB6}
\frac{1}{2} \textrm{tr} \big( B_j(x_j)\sigma_j & \sigma_j^\top B_j(x_j)^\top \cdot \nabla_{x_jx_j} V_i(\bar{x}_i, t) \big) \nonumber\\
= & \ \frac{ -\lambda_i}{2 \cdot Z_i(\bar{x}_i,t)} \cdot \textrm{tr}\left( B_j(x_j)\sigma_j \sigma_j^\top B_j(x_j)^\top \cdot \nabla_{x_jx_j} Z_i(\bar{x}_i,t) \right) \\
& + \frac{\lambda_i}{2 \cdot Z_i(\bar{x}_i,t)^2} \cdot \textrm{tr} \left( B_j(x_j)\sigma_j \sigma_j^\top B_j(x_j)^\top \cdot \nabla_{x_j}Z_i(\bar{x}_i,t) \cdot \nabla_{x_j}Z_i(\bar{x}_i,t)^\top \right). \nonumber
\end{align}
By the properties of trace operator, the quadratic terms in~\eqref{EqB5} and~\eqref{EqB6} will be canceled if
\begin{equation}\label{EqB7}
\sigma_{j} \sigma^\top_{j} = \lambda_i R_j^{-1},
\end{equation}
\textit{i.e.} $R_j = (\sigma_{j} \sigma^\top_{j} / \lambda_i)^{-1}$, which is equivalent to the condition $\bar{\sigma}_i\bar{\sigma}_i^\top = \lambda_i \bar{R}^{-1}_i$ or $\bar{R}_i = (\bar{\sigma}_i\bar{\sigma}_i^\top / \lambda_i)^{-1}$ subject to joint dynamics~\eqref{eq13}. Substituting~\eqref{OptimalAction}, \eqref{der1}, \eqref{EqB5} and~\eqref{EqB6} into stochastic HJB equation~\eqref{sto_HJB}, we then remove the minimization operator and obtain the linearized PDE as~\eqref{Z_Function} in \hyperref[thm2]{Theorem~2}
\begin{equation*}
\partial_{t} Z_i(\bar{x}_i, t) = \left[\frac{q_i(\bar{x}_i, t)}{\lambda_i} - \sum_{j\in\bar{\mathcal{N}}_i} f_j(x_j, t) \nabla_{x_j} - \frac{1}{2}\sum_{j \in \bar{\mathcal{N}}_i} \textrm{tr}\left( B_j(x_j)\sigma_j \sigma_j^\top B_j(x_j)^\top \nabla_{x_jx_j} \right) \right]Z_i(\bar{x}_i,t),
\end{equation*}
where the boundary condition is given by $Z_i(\bar{x}_i, t_f) = \exp[- \phi_i(\bar{x}_i) / \lambda_i]$. Once the value of the desirability function $Z_i(\bar{x}_i, t)$ in~\eqref{Z_Function} is solved, we can readily recover the value function $V_i(\bar{x}_i, t)$ and the joint optimal control action from \eqref{ContTrans} and~\eqref{OptimalAction}, respectively. Invoking the Feynman–Kac formula introduced in \hyperref[lem1]{Lemma~4}, a solution to~\eqref{Z_Function} can be formulated as~\eqref{Z_Solution} in \hyperref[thm2]{Theorem~2}
\begin{equation*}
Z_i(\bar{x}_i, t) = \mathbb{E}_{\bar{x}_i,t}\left[ \exp\left( -\frac{1}{\lambda_i} \phi_i(\bar{y}^{t_f}_i) -\frac{1}{\lambda_i} \int_{t}^{t_f}q_i(\bar{y}_i, \tau) \ d\tau \right) \right],
\end{equation*}
where $\bar{y}(t)$ satisfies the uncontrolled dynamics $d \bar{y}_i(\tau) = \bar{f}_i(\bar{y}_i,\tau) d\tau + \bar{B}_i(\bar{y}_i) \bar{\sigma}_i \cdot d\bar{w}_i(\tau)$ with initial condition $\bar{y}_i(t) = \bar{x}_i(t)$. This completes the proof. \qed
\section*{Appendix C: Proof of Proposition 3}\label{appC}
\noindent \textit{Proof of \hyperref[prop3]{Proposition~3}}: First, we formulate the desirability function~\eqref{Z_Solution} as a path integral shown in~\eqref{Prop3E1}. Partitioning the time interval from $t$ to $t_f$ into $K$ intervals of equal length $\varepsilon > 0$, $t = t_0 < t_1 < \cdots < t_K = t_f$, we can rewrite~\eqref{Z_Solution} as the following path integral
\begin{equation}\label{EqC1}
\begin{split}
Z_i(\bar{x}_i,t) = \ & \mathbb{E}_{\bar{x}_i,t}\left[ \exp\left( -\frac{1}{\lambda_i} \phi_i(\bar{y}^{t_f}_i) -\frac{1}{\lambda_i} \int_{t}^{t_f} q_i(\bar{y}_i, \tau) \ d\tau \right) \right] \\
= \ & \int d\bar{x}^{(1)}_i \cdots \int \exp\left( -\frac{1}{\lambda_i} \phi_i(\bar{x}^{(K)}_i) \right) \cdot \prod_{k=0}^{K-1} Z_i(\bar{x}_i^{(k+1)}, t_{k+1}; \bar{x}_i^{(k)}, t_k) \ d\bar{x}_i^{(K)},
\end{split}
\end{equation}
where the integral of variable $\bar{x}_i^{(k)}$ is over the set of all joint uncontrolled trajectories $\bar{x}_i(\tau)$ on time interval $[t_{k-1}, t_k)$ and with initial condition $\bar{x}_i^{(0)} = \bar{x}_i(t_0) = \bar{x}_i(t)$, which can be measured by agent $i$ at initial time $t$, and the function $Z_i(\bar{x}_i^{(k+1)}, t_{k+1}; \bar{x}_i^{(k)}, t_k)$ is implicitly defined by
\begin{equation}\label{EqC1.5}
\begin{split}
\int f(\bar{x}_{i}^{(k+1)}) \cdot & Z_i(\bar{x}_i^{(k+1)}, t_{k+1}; \bar{x}_i^{(k)}, t_k) \ d\bar{x}_i^{(k+1)} \\
& = \mathbb{E}_{\bar{x}_i^{(k)}, t_k}\left[ f(\bar{x}_i^{(k+1)}) \cdot \exp\left( -\frac{1}{\lambda_i}\int_{t_k}^{t_{k+1}} q_i(\bar{y}_i, \tau) \ d\tau \right) \Big| \ \bar{y}_i(t_k) = \bar{x}_i^{(k)} \right]
\end{split}
\end{equation}
for arbitrary functions $f(\bar{x}_i^{(k+1)})$. Based on definition~\eqref{EqC1.5} and in the limit of infinitesimal $\varepsilon$, the function $Z_i(\bar{x}_i^{(k+1)}, t_{k+1} ; \bar{x}_i^{(k)}, t_k)$ can be approximated by
\begin{equation}\label{EqC2}
Z_i(\bar{x}_i^{(k+1)}, t_{k+1} ; \bar{x}_i^{(k)}, t_k) = p_i(\bar{x}_i^{(k+1)}, t_{k+1} | \bar{x}_i^{(k)}, t_k) \cdot \exp\left(-\frac{\varepsilon}{\lambda_i} \cdot q_i(\bar{x}_i^{(k)}, t_k) \right),
\end{equation}
where $p_i(\bar{x}_i^{(k+1)}, t_{k+1} | \bar{x}_i^{(k)}, t_k)$ is the transition probability of uncontrolled dynamics from state-time pair $(\bar{x}_i^{(k)}, t_k)$ to $(\bar{x}_i^{(k+1)}, t_{k+1})$ and can be factorized as follows
\begin{equation}\label{EqC2_5}
\begin{split}
p_i(\bar{x}_i^{(k+1)}, t_{k+1} | \bar{x}_i^{(k)}, t_k) & = p_i(\bar{x}_{i(n)}^{(k+1)}, t_{k+1} | \bar{x}_i^{(k)}, t_k) \cdot p_i(\bar{x}_{i(d)}^{(k+1)}, t_{k+1} | \bar{x}_{i}^{(k)}, t_k)\\
& = p_i(\bar{x}_{i(n)}^{(k+1)}, t_{k+1} | \bar{x}_{i(d)}^{(k)}, \bar{x}_{i(n)}^{(k)}, t_k) \cdot p_i(\bar{x}_{i(d)}^{(k+1)}, t_{k+1} | \bar{x}_{i(d)}^{(k)}, \bar{x}_{i(n)}^{(k)}, t_k)\\
& \propto p_i(\bar{x}_{i(d)}^{(k+1)}, t_{k+1} | \bar{x}_{i}^{(k)}, t_k),\\
\end{split}
\end{equation}
where $p_i(\bar{x}_{i(n)}^{(k+1)}, t_{k+1} | \bar{x}_{i(d)}^{(k)}, \bar{x}_{i(n)}^{(k)}, t_k)$ is a Dirac delta function, since $\bar{x}_{i(n)}^{(k+1)}$ can be deterministically calculated from $\bar{x}_{i(n)}^{(k)}$ and $\bar{x}_{i(d)}^{(k)}$. Provided the directly actuated uncontrolled dynamics from~\eqref{Partitioned_dynamics}
\begin{equation*}
\bar{x}_{i(d)}^{(k+1)} - \bar{x}_{i(d)}^{(k)} = \bar{f}_{i(d)}(\bar{x}^{(k)}_i, t_k) \varepsilon + \bar{B}_{i(d)}(\bar{x}^{(k)}_i) \cdot \bar{\sigma}_i \bar{w}_i
\end{equation*}
with Brownian motion $\bar{w}_i \sim \mathcal{N}(0, \varepsilon {I}_{M})$, the directly actuated states $\bar{x}_{i(d)}^{(k)}$ and $\bar{x}_{i(d)}^{(k+1)}$ follow the Gaussian distribution $\bar{x}_{i(d)}^{(k+1)} \sim \mathcal{N}(\bar{x}_{i(d)}^{(k)} + \bar{f}_{i(d)}(\bar{x}^{(k)}_i, t_k)\varepsilon, \Sigma^{(k)}_i)$ with covariance $\Sigma^{(k)}_i = \varepsilon \bar{B}_{i(d)}(\bar{x}^{(k)}_i) \bar{\sigma}_i \bar{\sigma}^\top_i \cdot \bar{B}_{i(d)}(\bar{x}^{(k)}_i)^\top$. When the condition $\bar\sigma_{i} \bar\sigma^\top_{i} = \lambda_i \bar{R}_i^{-1}$ in~\hyperref[thm2]{Theorem~2} is fulfilled, the covariance is $\Sigma_i^{(k)} = \varepsilon \lambda_i \bar{B}_{i(d)}(\bar{x}^{(k)}_i) \cdot \bar{R}_i^{-1} \bar{B}_{i(d)}(\bar{x}^{(k)}_i)^\top = \varepsilon H_i^{(k)}$ with $H_i^{(k)} = \lambda_i \bar{B}_{i(d)}(\bar{x}^{(k)}_i) \bar{R}_i^{-1} \cdot \allowbreak \bar{B}_{i(d)}(\bar{x}^{(k)}_i)^\top = \bar{B}_{i(d)}(\bar{x}^{(k)}_i) \bar{\sigma}_i \bar{\sigma}_i^\top \bar{B}_{i(d)}(\bar{x}^{(k)}_i)^\top$. Hence, the transition probability in~\eqref{EqC2_5} satisfies
\begin{align}\label{EqC3}
p_i(\bar{x}_{i(d)}^{(k+1)}, & t_{k+1} | \bar{x}_i^{(k)}, t_k) = \frac{1}{[{\det(2\pi \Sigma_i^{(k)}) }]^{1/2}} \exp \left( -\frac{1}{2} \left\| \bar{x}_{i(d)}^{(k+1)} - \bar{x}_{i(d)}^{(k)} - \bar{f}_{i(d)}(\bar{x}^{(k)}_i, t_k)\varepsilon \right\|^2_{\left(\Sigma_i^{(k)}\right)^{-1}} \right) \nonumber \\
= & \frac{1}{[{\det(2\pi \Sigma_i^{(k)}) }]^{1/2}} \cdot \exp\left( - \frac{\varepsilon}{2} \left\|\frac{ \bar{x}_{i(d)}^{(k+1)} - \bar{x}_{i(d)}^{(k)}}{\varepsilon} - \bar{f}_{i(d)}(\bar{x}^{(k)}_i, t_k) \right\|_{\left(H_i^{(k)}\right)^{-1}}^2\right).
\end{align}
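For concreteness, the Gaussian transition density in~\eqref{EqC3} can be evaluated numerically as in the following minimal Python sketch; the dimensions, drift, control matrix and control weight below are placeholder assumptions rather than quantities of any specific system.
\begin{verbatim}
import numpy as np

def transition_logpdf(x_d_next, x_d, f_d, B_d, R_inv, lam, eps):
    """Log-density of the uncontrolled transition, assuming
    sigma sigma^T = lam * R^{-1}, i.e. Sigma = eps * H with
    H = lam * B_d R^{-1} B_d^T."""
    H = lam * B_d @ R_inv @ B_d.T
    Sigma = eps * H
    diff = x_d_next - x_d - f_d * eps   # deviation from drifted mean
    _, logdet = np.linalg.slogdet(2 * np.pi * Sigma)
    return -0.5 * logdet - 0.5 * diff @ np.linalg.solve(Sigma, diff)

# Hypothetical one-step example: D = 2 actuated states, M = 2 controls
rng = np.random.default_rng(0)
eps, lam = 1e-2, 0.5
B_d = np.eye(2)                             # placeholder B_{i(d)}
R_inv = np.linalg.inv(np.diag([2.0, 4.0]))  # placeholder R_i^{-1}
x_d, f_d = np.zeros(2), np.array([1.0, -0.5])
x_d_next = x_d + f_d * eps + rng.multivariate_normal(
    np.zeros(2), eps * lam * B_d @ R_inv @ B_d.T)
print(transition_logpdf(x_d_next, x_d, f_d, B_d, R_inv, lam, eps))
\end{verbatim}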
Substituting~\eqref{EqC2}, \eqref{EqC2_5} and~\eqref{EqC3} into~\eqref{EqC1} and in the limit of infinitesimal $\varepsilon$, the desirability function $Z_i(\bar{x}_i, t)$ can then be rewritten as a path integral
\begin{equation}\label{EqC4}
Z_i(\bar{x}_i, t) = \lim_{\varepsilon \downarrow 0}Z_i^{(\varepsilon)}(\bar{x}^{(0)}_i, t_0).
\end{equation}
Defining a path variable $\bar \ell_i = (\bar{x}^{(1)}_i, \cdots, \bar{x}^{(K)}_i)$, the discretized desirability function in~\eqref{EqC4} can be expressed as
\begin{align}\label{EqC4.5}
Z^{(\varepsilon)}_i(\bar{x}^{(0)}_i, t_0) & = \int \exp\left( - S_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) - \frac{1}{2}\sum_{k=0}^{K-1} \log \det(2\pi \Sigma_i^{(k)}) \right) \ d\bar{\ell}_i \\
& = \int \exp\left( -S_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) - \frac{1}{2}\sum_{k=0}^{K-1}\log \det (H_i^{(k)}) - \frac{K D |\mathcal{\bar N}_i|}{2} \log (2\pi \varepsilon) \right) \ d\bar{\ell}_i \nonumber\\
& = \int \exp\left( -\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) - \frac{K D |\mathcal{\bar N}_i|}{2} \log (2\pi \varepsilon) \right) \ d\bar{\ell}_i, \nonumber
\end{align}
where $S_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0)$ is the path value for a trajectory $(\bar{x}_i^{(0)}, \cdots, \bar{x}_i^{(K)})$ starting at space-time pair $(\bar{x}_i, t)$ or $(\bar{x}_i^{(0)}, t_0)$ and takes the form of
\begin{equation*}
S^{\varepsilon, \lambda_i}_i(\bar{x}^{(0)}_i, \bar{\ell}_i, t_0) = \frac{\phi_i(\bar{x}^{(K)}_i)}{\lambda_i} + \varepsilon \sum_{k=0}^{K-1} \Bigg[ \frac{q_i(\bar{x}^{(k)}_i, t_k)}{\lambda_i} + \frac{1}{2} \left\| \frac{ \bar{x}_{i(d)}^{(k+1)} - \bar{x}_{i(d)}^{(k)}}{\varepsilon} - \bar{f}_{i(d)}(\bar{x}^{(k)}_{i}, t_k) \right\|^2_{\left(H_i^{(k)}\right)^{-1}} \Bigg];
\end{equation*}
$\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0)$ is the generalized path value and satisfies $\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) = S_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) \allowbreak + \frac{1}{2}\sum_{k=0}^{K-1} \allowbreak \log \det(H_i^{(k)})$, and the constant $KD|\bar{\mathcal{N}}_i| / 2 \cdot \log (2\pi \varepsilon)$ in~\eqref{EqC4.5} is related to numerical stability, which demands a careful choice of $\varepsilon$ and a fine partition over $[t, t_f)$. Identical to the expectation in~\eqref{Z_Solution} and the integral in~\eqref{EqC1}, the integral in~\eqref{EqC4.5} is subject to the set of all uncontrolled trajectories $\bar{x}_i(\tau)$ initialized at~$(\bar{x}_i, t)$ or $(\bar{x}_i^{(0)}, t_0)$. Summarizing the preceding derivations, \eqref{Prop3E1} and~\eqref{Prop3E2} in~\hyperref[prop3]{Proposition~3} can be restored.
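As an illustration of the discretization above, the following sketch evaluates the generalized path value $\tilde{S}_i^{\varepsilon, \lambda_i}$ for a single sampled trajectory; the trajectory data, costs and matrices $H_i^{(k)}$ below are hypothetical placeholders.
\begin{verbatim}
import numpy as np

def generalized_path_value(xs_d, fs_d, Hs, qs, phi_K, lam, eps):
    """Discretized generalized path value tilde{S}.
    xs_d: (K+1, D) directly actuated states; fs_d: (K, D) drifts;
    Hs: (K, D, D) matrices H^{(k)}; qs: (K,) immediate costs;
    phi_K: exit cost at the last state."""
    S = phi_K / lam
    for k in range(len(qs)):
        alpha = (xs_d[k + 1] - xs_d[k]) / eps - fs_d[k]
        S += eps * (qs[k] / lam
                    + 0.5 * alpha @ np.linalg.solve(Hs[k], alpha))
        S += 0.5 * np.linalg.slogdet(Hs[k])[1]   # log det H term
    return S

K, D = 5, 2
rng = np.random.default_rng(1)
xs_d = rng.normal(scale=0.1, size=(K + 1, D))    # toy trajectory
print(generalized_path_value(xs_d, np.zeros((K, D)),
                             np.broadcast_to(np.eye(D), (K, D, D)),
                             np.ones(K), phi_K=0.0, lam=0.5, eps=1e-2))
\end{verbatim}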
Substituting gradient~\eqref{der2} and discretized desirability function~\eqref{EqC4} into the joint optimal control action~\eqref{OptimalAction}, we have
\begin{align}\label{EqC6}
\bar{u}^*_{i}(\bar{x}_i, t) & = \lambda_i \bar{R}_i^{-1} \bar{B}_i(\bar{x}_i)^\top \frac{\nabla_{\bar{x}_i} Z_i(\bar{x}_i,t)}{Z_i(\bar{x}_i,t)} = \bar\sigma_i \bar\sigma_i^\top \bar{B}_i(\bar{x}_i)^\top \frac{\nabla_{\bar{x}_i} Z_i(\bar{x}_i,t)}{Z_i(\bar{x}_i,t)} \\
&= \lambda_i \bar{R}_i^{-1} \bar{B}_i(\bar{x}_i)^\top \cdot \lim_{\varepsilon \downarrow 0} \frac{\nabla_{\bar{x}^{(0)}_i}\int \exp[-\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0)- K D |\mathcal{\bar N}_i| / 2 \cdot \log (2\pi \varepsilon) ] \ d\bar{\ell}_i }{\int \exp[-\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0)- K D |\mathcal{\bar N}_i| / 2 \cdot \log (2\pi \varepsilon)] \ d\bar{\ell}_i} \allowdisplaybreaks \nonumber\\
& \neweq{(a)} \lambda_i \bar{R}_i^{-1} \bar{B}_i(\bar{x}_i)^\top \cdot \lim_{\varepsilon \downarrow 0} \frac{ \exp[- K D |\mathcal{\bar N}_i| / 2 \cdot \log(2\pi \varepsilon)] \cdot \nabla_{\bar{x}^{(0)}_i} \int \exp[ -\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) ] \ d\bar{\ell}_i}{ \exp[- K D |\mathcal{\bar N}_i| / 2 \cdot \log (2\pi \varepsilon)] \cdot \int \exp[ -\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) ] \ d\bar{\ell}_i} \allowdisplaybreaks \nonumber\\
& \neweq{(b)} \lambda_i \bar{R}_i^{-1} \bar{B}_i(\bar{x}_i)^\top \cdot \lim_{\varepsilon \downarrow 0} \frac{\int \exp[ -\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) ] \cdot \nabla_{\bar{x}^{(0)}_i}[ -\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) ] \ d\bar{\ell_i}}{\int \exp[ -\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) ] \ d\bar{\ell}_i} \nonumber \\
& \neweq{(c)} \lambda_i \bar{R}_i^{-1} \bar{B}_i(\bar{x}_i)^\top \cdot \lim_{\varepsilon \downarrow 0} \int \tilde{p}^*_i( \bar{\ell}_i | \bar{x}_i^{(0)}, t_0) \cdot \nabla_{\bar{x}^{(0)}_i}[ -\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) ] \ d\bar{\ell}_i \nonumber \\
& \neweq{(d)} \lambda_i \bar{R}_i^{-1} \bar{B}_{i(d)}(\bar{x}_i)^\top \cdot \lim_{\varepsilon \downarrow 0} \int \tilde{p}^*_i(\bar{\ell}_i | \bar{x}_i^{(0)}, t_0) \cdot \tilde{u}_i(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) \ d\bar{\ell}_i. \nonumber
\end{align}
(a) follows from the fact that $\exp[- K D |\mathcal{\bar N}_i| / 2 \cdot \log (2\pi \varepsilon)]$ is independent of the path variables $(\bar{x}_i^{(0)}, \bar{\ell}_i)$; (b) employs the differentiation rule for the exponential function and requires that the integrand $\exp[-\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \allowbreak \bar{\ell}_i, t_0)]$ be continuously differentiable in $\varepsilon$ and along the trajectory $(\bar{x}_i^{(0)}, \bar{\ell}_i)$; (c) follows from the optimal path distribution $\tilde{p}^*_i(\bar{\ell}_i | \bar{x}_i^{(0)}, t_0)$ that satisfies
\begin{equation*}
\tilde{p}^*_i(\bar{\ell}_i | \bar{x}_i^{(0)}, t_0) = \frac{\exp[ -\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) ]}{\int \exp[ -\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) ] \ d\bar{\ell}_i};
\end{equation*}
(d) follows from the partitions $\bar{B}_i(\bar{x}_i) = [0, \bar{B}_{i(d)}(\bar{x}_i)^\top]^\top$ and $- \nabla_{\bar{x}^{(0)}_i}\ \tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) = [ -\nabla_{\bar{x}^{(0)}_{i(n)}} \allowbreak \tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \allowbreak \bar{\ell}_i, t_0)^\top, - \nabla_{\bar{x}^{(0)}_{i(d)}} \tilde{S}_i^{\varepsilon, \lambda_i} (\bar{x}_i^{(0)}, \bar{\ell}_i, t_0)^\top ]^\top$, and the initial control variable $\tilde{u}_i(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0)$ is determined by
\begin{align}\label{EqC7}
&\nabla_{\bar{x}^{(0)}_{i(d)}} \tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0)
= \nabla_{\bar{x}^{(0)}_{i(d)}} \bigg[ \frac{\phi_i(\bar{x}^{(K)}_i)}{\lambda_i} + \frac{\varepsilon}{\lambda_i}\sum_{k=0}^{K-1}q_i(\bar{x}^{(k)}_i, t_k) + \frac{1}{2}\sum_{k=0}^{K-1} \log \det(H_i^{(k)}) \\
& \hspace{191pt} + \frac{\varepsilon}{2} \sum_{k=0}^{K-1} \left\| \frac{ \bar{x}_{i(d)}^{(k+1)} - \bar{x}_{i(d)}^{(k)}}{\varepsilon}- \bar{f}_{i(d)}(\bar{x}^{(k)}_i, t_k) \right\|^2_{\left(H_i^{(k)}\right)^{-1}}\bigg]. \nonumber
\end{align}
In the following, we calculate the gradients in~\eqref{EqC7}. Since the terminal cost $\phi_i(\bar{x}^{(K)}_i)$ is usually a constant, the first gradient in~\eqref{EqC7} is zero, \textit{i.e.} $\nabla_{\bar{x}^{(0)}_{i(d)}} [\phi_i(\bar{x}_i^{(K)}) / \lambda_i] = 0$. When the immediate cost $q_i(\bar{x}^{(0)}_i, t_0)$ is a function of state $\bar{x}^{(0)}_{i(d)}$, the second gradient in~\eqref{EqC7} can be computed as follows
\begin{equation}\label{EqC7_5}
\nabla_{\bar{x}^{(0)}_{i(d)}} \frac{\varepsilon}{\lambda_i}\sum_{k=0}^{K-1}q_i(\bar{x}^{(k)}_i, t_k) = \frac{\varepsilon}{\lambda_i} \nabla_{\bar{x}^{(0)}_{i(d)}} q_i(\bar{x}^{(0)}_i, t_0);
\end{equation}
when $q_i(\bar{x}^{(0)}_i, t_0)$ is not related to the value of $\bar{x}^{(0)}_i$, \textit{i.e.} a constant or an indicator function, the second gradient is then zero. The third gradient in~\eqref{EqC7} follows
\begin{equation}\label{EqC8}
\nabla_{\bar{x}^{(0)}_{i(d)}} \frac{1}{2} \sum_{k=0}^{K-1} \log \det(H_i^{(k)}) = \frac{1}{2} \nabla_{\bar{x}^{(0)}_{i(d)}} \log \det(H_i^{(0)}).
\end{equation}
Letting $\alpha_i^{(k)} = (\bar{x}_{i(d)}^{(k+1)} - \bar{x}_{i(d)}^{(k)}) / \varepsilon - \bar{f}_{i(d)}(\bar{x}_i^{(k)}, t_k)$ and $\beta_i^{(k)} = (H_i^{(k)})^{-1}\alpha_i^{(k)}$, the gradient of the fourth term in~\eqref{EqC7} satisfies
\begin{align}\label{EqC9}
&\nabla_{\bar{x}_{i(d)}^{(0)}} \frac{\varepsilon}{2} \sum_{k=0}^{K-1} \left\| \alpha_i^{(k)} \right\|^2_{\left(H_i^{(k)}\right)^{-1}} = \frac{\varepsilon}{2} \cdot \nabla_{\bar{x}^{(0)}_{i(d)}} (\alpha_i^{(0)})^\top \beta_i^{(0)} \nonumber \\
& = \frac{\varepsilon}{2} \left[ \left( \nabla_{\bar{x}^{(0)}_{i(d)}} \alpha_i^{(0)} \right) \beta_i^{(0)} + \left( \nabla_{\bar{x}_{i(d)}^{(0)}} \beta_i^{(0)} \right) \alpha_i^{(0)} \right] \\
& = -\frac{1}{2} \beta_i^{(0)} - \frac{\varepsilon}{2} \left[ \left( \nabla_{\bar{x}^{(0)}_{i(d)}}\bar{f}_{i(d)}(\bar{x}_i^{(0)}, t_0) \right) \beta_i^{(0)} - \alpha_i^{(0)} \nabla_{\bar{x}^{(0)}_{i(d)}} \beta_i^{(0)} \right] \nonumber \\
& =- \left(H_i^{(0)}\right)^{-1} \alpha_i^{(0)} - \varepsilon \left[ \nabla_{\bar{x}^{(0)}_{i(d)}} \bar{f}_{i(d)}(\bar{x}_i^{(0)}) \right] \left( H_i^{(0)}\right)^{-1} \alpha_i^{(0)} + \frac{\varepsilon}{2} \left( \alpha_i^{(0)} \right)^\top \left[ \nabla_{\bar{x}^{(0)}_{i(d)}} \left( H_i^{(0)} \right)^{-1} \right] \alpha_i^{(0)}. \nonumber
\end{align}
Detailed interpretations on the calculation of \eqref{EqC9} can be found in~\cite{Theodorou_JMLR_2010, Theodorou_2011}. Meanwhile, after substituting~\eqref{EqC9} into~\eqref{EqC6}, one can verify that the integrals in~\eqref{EqC6} satisfy
\begin{gather}
\label{App2_Eq12}
\int \tilde{p}^*_i(\bar{\ell}_i | \bar{x}_i^{(0)}, t_0) \cdot \varepsilon \left[ \nabla_{\bar{x}^{(0)}_{i(d)}} \bar{f}_{i(d)}(\bar{x}_i^{(0)}) \right] \left(H_i^{(0)}\right)^{-1} \alpha_i^{(0)} \ d\bar{\ell}_i = 0,\\
\label{App2_Eq13}
\int \tilde{p}^*_i(\bar{\ell}_i | \bar{x}_i^{(0)}, t_0) \cdot \frac{\varepsilon}{2} \left( \alpha_i^{(0)} \right)^\top \left[ \nabla_{\bar{x}^{(0)}_{i(d)}} \left( H_i^{(0)} \right)^{-1} \right] \alpha_i^{(0)} \ d\bar{\ell}_i = - \frac{1}{2} \nabla_{\bar{x}^{(0)}_{i(d)}} \log \det( H_i^{(0)} ).
\end{gather}
Substituting~(\ref{EqC7}-\ref{App2_Eq13}) into~\eqref{EqC6}, we obtain the initial control variable $\tilde{u}_i(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0)$ as follows
\begin{equation*}
\tilde{u}_i(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) = -\frac{\varepsilon}{\lambda_i}\nabla_{\bar{x}^{(0)}_{i(d)}}q_i(\bar{x}^{(0)}_i, t_0) + \left(H_i^{(0)}\right)^{-1} \left(\frac{\bar{x}_{i(d)}^{(1)} - \bar{x}_{i(d)}^{(0)}}{\varepsilon} - \bar{f}_{i(d)}(\bar{x}_i^{(0)}, t_0) \right).
\end{equation*}
This completes the proof. \qed
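To connect the proof with computation, a minimal sketch of the sample-based estimator suggested by~\eqref{EqC6} follows: the per-trajectory initial controls are averaged under the optimal path distribution, i.e. with weights proportional to $\exp[-\tilde{S}_i^{\varepsilon, \lambda_i}]$. The premultiplication by $\lambda_i \bar{R}_i^{-1} \bar{B}_{i(d)}(\bar{x}_i)^\top$ is omitted here, and all sample data are hypothetical.
\begin{verbatim}
import numpy as np

def pi_control_estimate(S_tilde, u_tilde):
    """Softmin-weighted average of per-trajectory initial controls.
    S_tilde: (Y,) generalized path values;
    u_tilde: (Y, M) initial control variables."""
    w = np.exp(-(S_tilde - S_tilde.min()))  # shift for stability
    w /= w.sum()                            # optimal path weights
    return w @ u_tilde

rng = np.random.default_rng(2)              # Y = 1000 toy samples
S_tilde = rng.normal(loc=5.0, scale=1.0, size=1000)
u_tilde = rng.normal(size=(1000, 2))
print(pi_control_estimate(S_tilde, u_tilde))
\end{verbatim}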
\section*{Appendix D: Relative Entropy Policy Search}\label{appD}
The local REPS algorithm in subsystem $\bar{\mathcal{N}}_i$ alternates between two steps, learning the optimal path distribution and updating the parameterized policy, until the algorithm converges. First, we consider the learning step realized by the optimization problem~\eqref{LearningStep}. Since we want to minimize the relative entropy between the current approximate path distribution $\tilde p_i(\bar{x}_i^{(0)}, \bar{\ell}_i)$ and the optimal path distribution $\tilde{p}_i^*(\bar{x}_i^{(0)}, \bar{\ell}_i)$, the objective function of the learning step follows
\begin{equation*}
\begin{split}
&\arg \min_{\tilde p_i} \textrm{KL}( \tilde p_i(\bar{x}_i^{(0)}, \bar{\ell}_i) \ \| \ \tilde{p}_i^*(\bar{x}_i^{(0)}, \bar{\ell}_i) )\\
\neweq{(a)} & \arg \max_{\tilde p_i} \int \tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i) \left[ \log \tilde{p}_i^*(\bar{\ell}_i | \bar{x}_i^{(0)}) + \log \mu(\bar{x}_i^{(0)}) - \log \tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i) \right] \ d\bar{x}_i^{(0)} d\bar{\ell}_i\\
\neweq{(b)} & \arg \max_{\tilde{p}_i} \int \tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i) \left[ -\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) - \log \tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i) \right] \ d\bar{x}_i^{(0)} d\bar{\ell}_i .
\end{split}
\end{equation*}
(a) transforms the minimization problem to a maximization problem and adopts the identity $\tilde{p}_i^*(\bar{x}^{(0)}_i, \bar{\ell}_i) = \tilde{p}_i^*(\bar{\ell}_i | \bar{x}_i^{(0)}) \cdot \mu_i(\bar{x}^{(0)}_i)$, where time arguments are generally omitted in this appendix for brevity; and (b) employs identity~\eqref{OptPathDist} and omits the terms that are independent of the path variable $\bar{\ell}_i$, since these terms have no influence on the optimization problem. In order to bound the information loss relative to the old distribution and avoid overly greedy policy updates~\cite{Peters_CAI_2010, Gomez_KDD_2014}, we restrict the update rate with the constraint
\begin{equation*}
\int \tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i) \log\frac{\tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i)}{\tilde{q}_i(\bar{x}_i^{(0)}, \bar{\ell}_i)} \ d\bar{x}_i^{(0)} d\bar{\ell}_i \leq \delta,
\end{equation*}
where $\delta > 0$ can be used as a trade-off between exploration and exploitation, and the LHS is the relative entropy between the current approximate path distribution $\tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i)$ and the old approximate path distribution $\tilde{q}_i(\bar{x}_i^{(0)}, \bar{\ell}_i)$. Meanwhile, the marginal distribution $\tilde{p}_i(\bar{x}_i^{(0)}) = \int \tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i) \ d\bar{\ell}_i$ needs to match the initial distribution $\mu_i(\bar{x}^{(0)}_i)$, which is known to the designer. However, this condition could generate an infinite number of constraints in optimization problem~\eqref{LearningStep} and is too restrictive in practice~\cite{Kupcsik_CAI_2013, Gomez_KDD_2014, Sutton_2018}. Hence, we relax this condition by only matching the state feature averages of the initial state
\begin{equation*}
\int \tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i) \cdot \psi_i(\bar{x}_i^{(0)}) \ d\bar{x}_i^{(0)} d\bar{\ell}_i = \int \mu_i(\bar{x}_i^{(0)}) \cdot \psi_i(\bar{x}_i^{(0)}) \ d\bar{x}_i^{(0)} = \hat{\psi}^{(0)}_i,
\end{equation*}
where $\psi_i(\bar{x}_i^{(0)})$ is a feature vector of the initial state, and $\hat{\psi}^{(0)}_i$ is the expectation of the state feature vector subject to initial distribution $\mu_i(\bar{x}^{(0)}_i)$. In general, $\psi_i(\bar{x}_i^{(0)})$ can be a vector made up of linear and quadratic terms of the initial state, $\bar{x}_{i(m)}^{(0)}$ and $\bar{x}_{i(m)}^{(0)}\bar{x}_{i(n)}^{(0)}$, such that the mean and the covariance of marginal distribution $\tilde{p}_i(\bar{x}_i^{(0)})$ match those of initial distribution $\mu_i(\bar{x}^{(0)}_i)$. Lastly, we consider the following normalization constraint
\begin{equation}\label{NormConstraint}
\int \tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i) \ d\bar{x}_i^{(0)} d\bar{\ell}_i = 1,
\end{equation}
which ensures that $\tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i)$ defines a probability distribution.
The optimization problem~\eqref{LearningStep} can be solved analytically by the method of Lagrange multipliers. Defining the Lagrange multipliers $\kappa > 0$, $\eta \in \mathbb{R}$ and vector $\theta$, the Lagrangian is
\begin{align}\label{Lagrangian}
\mathcal{L} = \eta + \kappa \delta + \theta^\top \hat{\psi}_i^{(0)} + \int \tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i)\Bigg[ & -\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i) - \log \tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i) -\eta \\
& - \theta^\top \psi_{i}(\bar{x}_i^{(0)}) - \kappa \log\frac{\tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i)}{\tilde{q}_i(\bar{x}_i^{(0)}, \bar{\ell}_i)} \Bigg] \ d\bar{x}_i^{(0)} d\bar{\ell}_i. \nonumber
\end{align}
We can maximize the Lagrangian $\mathcal{L}$ and derive the maximizer $\tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i)$ in~\eqref{LearningStep} by letting $\partial \mathcal{L} / \partial \tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i) = 0$. This condition will hold for arbitrary initial distributions $\mu_i(\bar{x}_i^{(0)})$ if and only if the derivative of the integrand in~\eqref{Lagrangian} is identically equal to zero, \textit{i.e.}
\begin{equation*}
-\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i, t_0) - (1+\kappa)\left[1 + \log \tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i)\right] -\eta - \theta^\top \psi_{i}(\bar{x}_i^{(0)}) + \kappa\log \tilde{q}_i(\bar{x}_i^{(0)}, \bar{\ell}_i) = 0,
\end{equation*}
from which we can find the maximizer $\tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i)$ as shown in~\eqref{distribution}. In order to evaluate $\tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i)$ in~\eqref{distribution}, we then determine the values of dual variables $\kappa, \eta$ and $\theta$ by solving the dual problem. Substituting~\eqref{distribution} into the normalization constraint~\eqref{NormConstraint}, we have the identity
\begin{equation}\label{NormCond}
\exp\left(\frac{1+\kappa + \eta}{1 + \kappa} \right) = \int \tilde{q}_i(\bar{x}_i^{(0)}, \bar{\ell}_i)^{\frac{\kappa}{1 + \kappa}} \cdot \exp\left( - \frac{\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i) + \theta^\top \psi_i(\bar{x}_i^{(0)})}{1 + \kappa} \right) \ d\bar{x}_i^{(0)} d\bar{\ell}_i,
\end{equation}
which can be used to determine $\eta$ provided the values of $\kappa$ and $\theta$. To figure out the values of $\kappa$ and $\theta$, we solve the dual problem~\eqref{DualProb}, where the objective function $g(\kappa, \theta)$ in~\eqref{DualFun} is obtained by substituting~\eqref{distribution} and~\eqref{NormCond} into~\eqref{Lagrangian}
\begin{equation*}
\begin{split}
g(\kappa,& \theta) \neweq{(a)} \eta + \kappa \delta + \theta^\top \hat{\psi}_i^{(0)} + 1 + \kappa\\
& = \kappa \delta + \theta^\top \hat{\psi}_i^{(0)} + (1+\kappa) \frac{1 +\kappa + \eta}{1 + \kappa}\\
& \neweq{(b)} \kappa \delta + \theta^\top \hat{\psi}_i^{(0)} + (1+\kappa) \log \int \tilde{q}_i(\bar{x}_i^{(0)}, \bar{\ell}_i)^{\frac{\kappa}{1 + \kappa}} \cdot \exp\left( - \frac{\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}_i) + \theta^\top \psi_i(\bar{x}_i^{(0)})}{1 + \kappa} \right) \ d\bar{x}_i^{(0)} d\bar{\ell}_i.
\end{split}
\end{equation*}
(a) substitutes~\eqref{distribution} into~\eqref{Lagrangian}, and (b) substitutes~\eqref{NormCond} into the result. By applying the Monte Carlo method and using the data set $\mathcal{Y}_i = \{ (\bar{x}_i^{(0)}, \bar{\ell}^{[y]}_i) \}_{y = 1, \cdots, Y}$, the objective function~\eqref{DualFun} can be approximated from sample trajectories by
\begin{equation*}
g(\kappa, \theta) = \kappa \delta + \theta^\top \hat\psi_i^{(0)} + (1 + \kappa) \log \Bigg[ \frac{1}{Y} \sum_{y=1}^{Y} \tilde{q}_i(\bar{x}_i^{(0)}, \bar{\ell}^{[y]}_i)^{\frac{\kappa}{1 + \kappa}}
\exp\left( - \frac{\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}^{[y]}_i) + \psi_{i}^\top(\bar{x}_i^{(0)}) \cdot \theta}{1+\kappa} \right) \Bigg],
\end{equation*}
where $\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}^{[y]}_i)$ and $\psi_{i}(\bar{x}_i^{(0)})$ are respectively the generalized path value and state feature vector of sample trajectory $(\bar{x}_i^{(0)}, \bar{\ell}^{[y]}_i)$, and the old path probability $\tilde{q}_i(\bar{x}_i^{(0)}, \bar{\ell}^{[y]}_i)$ can be evaluated with the current or initial policy by
\begin{equation}\label{OldDist}
\tilde{q}_i(\bar{x}_i^{(0)}, \bar{\ell}^{[y]}_i) = \mu_i(\bar{x}_i^{(0)}) \cdot \prod_{k = 0}^{K-1} \int p_i(\bar{x}_i^{(k+1)}, t_{k+1} | \bar{x}_i^{(k)}, \bar{u}_i^{(k)}, t_k) \cdot \pi^{(k)}_i(\bar{u}_i^{(k)} | \bar{x}_i^{(k)}) \ d\bar{u}_i^{(k)},
\end{equation}
where the state variables $\bar{x}_i^{(0)}$, $\bar{x}_i^{(k)}$ and $\bar{x}_i^{(k+1)}$ in~\eqref{OldDist} are from the sample trajectory $(\bar{x}_i^{(0)}, \bar{\ell}_i^{[y]})$; the control policy $\pi^{(k)}_i(\bar{u}_i^{(k)} | \bar{x}_i^{(k)})$ is given either as an initialization or as an optimization result from updating step~\eqref{UpdatingStep}, and the controlled transition probability $p_i(\bar{x}_i^{(k+1)}, t_{k+1} | \bar{x}_i^{(k)}, \bar{u}_i^{(k)}, t_k)$ with $\bar{x}_i^{(k+1)} \sim \mathcal{N}(\bar{x}_i^{(k)} + \bar{f}_i(\bar{x}^{(k)}_i, t_k)\varepsilon + \bar{B}_i(\bar{x}_i^{(k)}) \bar{u}_i(\bar{x}_i^{(k)}, t_k) \varepsilon, \Sigma^{(k)}_i)$ can be obtained by following steps similar to those used when deriving the uncontrolled transition probability in~\eqref{EqC3}. When policy $\pi_i^{(k)}$ is Gaussian,~\eqref{OldDist} can be analytically evaluated.
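For reference, the sample-based dual objective above can be minimized with an off-the-shelf optimizer, as in the following sketch; the trajectory data, features and KL bound $\delta$ are hypothetical, and the computation is carried out in the log domain for numerical stability.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def dual_objective(z, S_tilde, log_q, Psi, psi_hat, delta):
    """Sample-based REPS dual g(kappa, theta). z = [kappa, theta];
    S_tilde: (Y,) path values; log_q: (Y,) old log-probabilities;
    Psi: (Y, F) state features; psi_hat: (F,) feature targets."""
    kappa, theta = z[0], z[1:]
    a = (kappa * log_q - S_tilde - Psi @ theta) / (1.0 + kappa)
    return (kappa * delta + theta @ psi_hat
            + (1.0 + kappa) * (logsumexp(a) - np.log(len(S_tilde))))

rng = np.random.default_rng(3)               # toy data: Y=500, F=3
Y, F = 500, 3
S_tilde, log_q = rng.normal(5, 1, Y), rng.normal(-10, 1, Y)
Psi = rng.normal(size=(Y, F))
res = minimize(dual_objective, x0=np.r_[1.0, np.zeros(F)],
               args=(S_tilde, log_q, Psi, Psi.mean(axis=0), 0.1),
               bounds=[(1e-6, None)] + [(None, None)] * F)
kappa, theta = res.x[0], res.x[1:]
\end{verbatim}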
In the policy updating step, we can find the optimal parameters $\chi_i^{*(k)}$ for Gaussian policy by minimizing the relative entropy between the joint distribution $\tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i)$ from learning step~\eqref{LearningStep} and joint distribution $\tilde{p}_i^{\pi}(\bar{x}_i^{(0)}, \bar{\ell}_i)$ generated by parametric policy $\pi_i^{(k)}(\bar{u}_i^{(k)} | \bar{x}_i^{(k)}, \chi_i^{(k)} ) \allowbreak \sim \mathcal{N}(\bar{u}_i^{(k)}|\hat{a}_i^{(k)}\bar{x}_i^{(k)} + \hat{b}_i^{(k)}, \hat{\Sigma}_i^{(k)})$. To determine the policy parameters $\chi_i^{(k)}= (\hat{a}_i^{(k)}, \hat{b}_i^{(k)}, \hat\Sigma_i^{(k)})$ at time $t_k$, we need to solve the following optimization problem, which is also a weighted maximum likelihood problem
\begin{align}\label{UpdatingStep}
\chi_i^{*(k)} & = \arg {\textstyle \min_{\chi_i^{(k)}} } \ \textrm{KL} (\tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i) \ \| \ \tilde{p}_i^{\pi}(\bar{x}_i^{(0)}, \bar{\ell}_i)) \nonumber \\
& \neweq{(a)} \arg {\textstyle \max_{\chi_i^{(k)}} } \ \int \tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i) \cdot \log\frac{\tilde{p}_i^{\pi}(\bar{x}_{i}^{(k+1)} | \bar{x}_i^{(k)})}{\tilde{p}_i(\bar{x}_i^{(k+1)} | \bar{x}_i^{(k)})} \ d\bar{x}_i^{(0)} d\bar{\ell}_i \allowdisplaybreaks \nonumber \\
& \newapprox{(b)} \arg {\textstyle \max_{\chi_i^{(k)}} } \ \int \tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}_i) \cdot \log {\pi}_i^{(k)}(\bar{u}_i^{*(k)} | \bar{x}_i^{(k)}, \chi_i^{(k)}) \ d\bar{x}_i^{(0)} d\bar{\ell}_i \allowdisplaybreaks \\
& \neweq{(c)} \arg {\textstyle \max_{\chi_i^{(k)}} } \ \sum_{y=1}^{Y} \frac{\tilde{p}_i(\bar{x}_i^{(0)}, \bar{\ell}^{[y]}_i)}{\tilde{q}_i(\bar{x}_i^{(0)}, \bar{\ell}^{[y]}_i)} \cdot \log {\pi}_i^{(k)}(\bar{u}_i^{*(k)} | \bar{x}_i^{(k)}, \chi_i^{(k)}) \allowdisplaybreaks \nonumber \\
& \neweq{(d)} \arg {\textstyle \max_{\chi_i^{(k)}} } \sum_{y=1}^{Y} d_i^{[y]} \cdot \log {\pi}_i^{(k)}(\bar{u}_i^{*(k)} | \bar{x}_i^{(k)}, \chi_i^{(k)}) \allowdisplaybreaks \nonumber.
\end{align}
(a) converts the minimization problem to a maximization problem and replaces the joint distributions by the products of step-wise transition distributions; (b) employs the assumption that the controlled transition distribution equals the product of the passive transition distribution and the control policy distribution~\cite{Gomez_KDD_2014}, \textit{i.e.} $\tilde{p}_i^{\pi}(\bar{x}_{i}^{(k+1)} | \bar{x}_i^{(k)}) = \tilde{p}_i(\bar{x}_i^{(k+1)} | \bar{x}_i^{(k)}) \cdot {\pi}_i^{(k)}(\bar{u}_i^{*(k)} | \bar{x}_i^{(k)}, \chi_i^{(k)})$, and the control action $\bar{u}_i^{*(k)} = [\bar{B}_{i(d)}(\bar{x}_i^{(k)})^\top \bar{B}_{i(d)}(\bar{x}_i^{(k)})]^{-1} \allowbreak \bar{B}_{i(d)}(\bar{x}_{i}^{(k)})^\top [\bar{x}_{i(d)}^{(k+1)} - \bar{x}_{i(d)}^{(k)} - \varepsilon \bar{f}_{i(d)}(\bar{x}_i^{(k)}, t_k) ] / \varepsilon $ from the control-affine system dynamics maximizes the likelihood function; (c) approximates the integral by using sample trajectories from $\tilde{q}_i(\bar{x}_i^{(0)}, \bar{\ell}^{[y]}_i)$; (d) substitutes~\eqref{distribution}, and the weight of the likelihood function is
\begin{equation*}
d_i^{[y]} = \tilde{q}_i(\bar{x}_i^{(0)}, \bar{\ell}^{[y]}_i)^{\frac{-1}{1 + \kappa}} \cdot \exp\left( - \frac{\tilde{S}_i^{\varepsilon, \lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}^{[y]}_i) + \psi_{i}^\top(\bar{x}_i^{(0)}) \cdot \theta}{1+\kappa} \right).
\end{equation*}
Constant terms are omitted in steps (a) to (d).
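Since step (d) is a weighted maximum-likelihood problem, the Gaussian policy parameters $(\hat{a}_i^{(k)}, \hat{b}_i^{(k)}, \hat{\Sigma}_i^{(k)})$ admit a closed-form weighted least-squares solution; the following sketch illustrates this update on hypothetical sample data.
\begin{verbatim}
import numpy as np

def weighted_ml_gaussian_policy(X, U, d):
    """Fit pi(u|x) = N(a x + b, Sigma) by weighted max. likelihood.
    X: (Y, n) states; U: (Y, m) controls; d: (Y,) weights."""
    w = d / d.sum()
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias column
    W = np.diag(w)
    # Weighted least squares for [a; b]:
    G = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ U)
    a, b = G[:-1].T, G[-1]
    resid = U - Xb @ G
    Sigma = (w[:, None] * resid).T @ resid      # weighted covariance
    return a, b, Sigma

rng = np.random.default_rng(4)                  # toy sample data
X = rng.normal(size=(200, 2))
U = X @ np.array([[1.0], [-0.5]]) + 0.3 + 0.1 * rng.normal(size=(200, 1))
a_hat, b_hat, Sigma_hat = weighted_ml_gaussian_policy(X, U, rng.random(200))
print(a_hat, b_hat)
\end{verbatim}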
\section*{Appendix E: Multi-Agent LSOC Algorithms}\label{appE}
\hyperref[alg1]{Algorithm~1} gives the procedures of distributed LSMDP algorithm introduced in \hyperref[Sec3_1]{Section~3.1}.
\begin{breakablealgorithm} \small
\caption{Distributed LSMDP based on factorial subsystems}
\label{alg1}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input}}
\REQUIRE \parbox[t]{\dimexpr\linewidth-\algorithmicindent}{agent set $\mathcal{D}$, communication network $\mathcal{G}$, initial time $t_0$, exit time $t_f$, initial states $x_i^{t_0}$, exit states $x_i^{t_f}$, joint state-related costs $q_i(\bar{x}_i)$, exit costs $\phi(x_t^{t_f})$, weights on exit costs $w_j^i$, and error bound $\epsilon$.\strut}
\renewcommand{\algorithmicrequire}{\textbf{Initialize}}
\REQUIRE factorial subsystems $\mathcal{\bar{N}}_{i \in\mathcal{D}}$, and joint exit costs $\phi_i(\bar{x}_i)$.
\renewcommand{\algorithmicrequire}{\textbf{Planning}:}
\REQUIRE \
\FOR{$i \in \mathcal{D} = \{1, \cdots, N\}$}
\STATEx /*Calculate joint desirability $Z_i(\cdot)$ for subsystem $\mathcal{\bar{N}}_i$*/
\STATE \parbox[t]{\dimexpr\linewidth-\algorithmicindent}{Compute coefficients $\Theta$, $\Omega$ and $Z_{\mathcal{B}}$ in~\eqref{Z_update} for subsystem $\bar{\mathcal{N}}_i$. For distributed planning~\eqref{dist_alge}, partition the coefficients $[I - \Theta, \Omega Z_{\mathcal{B}}]_j$ and calculate the projection matrices $P_j$ for $j\in \mathcal{\bar{N}}_i$. \strut}
\WHILE{ $\| Z_{\mathcal{I}}^{(n+1)} - Z_{\mathcal{I}}^{(n)} \| > \epsilon$}
\STATE \parbox[t]{\dimexpr\linewidth-\algorithmicindent}{Update desirability $Z_\mathcal{I}$ with~\eqref{Z_update}. For distributed planning, exchange local solutions $Z^{(n)}_{\mathcal{I}, j}$ with neighboring agents $k \in \mathcal{N}_j \cap \bar{\mathcal{N}}_i$ and update desirability $Z_\mathcal{I}^{(n+1)}$ with~\eqref{dist_alge}.\strut}
\ENDWHILE
\STATEx /*Calculate control distribution for agent $i$*/
\STATE Compute the joint optimal control distribution $\bar{u}_i^*(\cdot | \bar{x}_i)$ of subsystem $\mathcal{\bar{N}}_i$ by~\eqref{OptimalControl}.
\STATE Derive the local optimal control distribution $u_i^*(\cdot | \bar{x}_i)$ for agent $i$ by marginalizing $\bar{u}_i^*(\cdot | \bar{x}_i)$.
\ENDFOR
\renewcommand{\algorithmicrequire}{\textbf{Execution}:}
\REQUIRE
\WHILE {$t < t_f$ or $x \notin \mathcal{B}$}
\FOR{$i \in \mathcal{D} = \{1, \cdots, N\}$}
\STATE Measure joint state $\bar{x}_i(t)$ by collecting state information from neighboring agents $j \in \mathcal{N}_i$.
\STATE Sample control action or posterior state $x'_i$ from $u_i^*(\cdot | \bar{x}_i)$.
\ENDFOR
\ENDWHILE
\end{algorithmic}
\end{breakablealgorithm}
\noindent \hyperref[alg2]{Algorithm~2} illustrates the procedures of sampling-based distributed LSOC algorithm introduced in \hyperref[Sec3_2]{Section 3.2}.
\begin{breakablealgorithm} \small
\caption{Distributed LSOC based on sampling estimator}
\label{alg2}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input}}
\REQUIRE \parbox[t]{\dimexpr\linewidth-\algorithmicindent}{agent set $\mathcal{D}$, communication network $\mathcal{G}$, initial time $t_0$, exit time $t_f$, initial states $x_i^{t_0}$, exit states $x_i^{t_f}$, joint state-related costs $q_i(\bar{x}_i)$, control weight matrices $\bar{R}_i$, exit costs $\phi(x_t^{t_f})$, and weights on exit costs~$w_j^i$\strut}
\renewcommand{\algorithmicrequire}{\textbf{Initialize}}
\REQUIRE factorial subsystems $\mathcal{\bar{N}}_{i \in \mathcal{D}}$, and joint exit costs $\phi_i(\bar{x}_i)$.
\renewcommand{\algorithmicrequire}{\textbf{Planning \& Execution}:}
\REQUIRE \
\WHILE {$t < t_f$ or $x \notin \mathcal{B}$}
\FOR{$i \in \mathcal{D} = \{1, \cdots, N\}$}
\STATE Measure joint state $\bar{x}_i(t)$ by collecting state information from neighboring agents $j \in \mathcal{N}_i$.
\STATE \parbox[t]{\dimexpr\linewidth-\algorithmicindent}{Generate uncontrolled trajectory set $\mathcal{Y}_i$ by sampling or collecting data from neighboring agents. \strut}
\STATE \parbox[t]{\dimexpr\linewidth-\algorithmicindent}{Evaluate generalized path value $\tilde{S}_i^{\varepsilon,\lambda_i}(\bar{x}_i^{(0)}, \bar{\ell}^{[y]}_i, t_0)$ and initial control $\tilde{u}_i(\bar{x}_i^{(0)}, \bar{\ell}^{[y]}_i, t_0)$ of each sample trajectory $(\bar{x}_i^{(0)}, \bar{\ell}_i^{[y]})$ in $\mathcal{Y}_i$ by~\eqref{Prop3E2} and~\eqref{inictrl}. \strut}
\STATE \parbox[t]{\dimexpr\linewidth-\algorithmicindent}{Approximate the optimal path distribution $\tilde{p}^*_i(\bar{\ell}_i^{[y]} | \bar{x}_i^{(0)}, t_0)$ and joint optimal control action $\bar{u}^*_i(\bar{x}_i, t)$ by~\eqref{MC_Estimator}, \eqref{MC_Estimato2} or other sampling techniques. \strut}
\STATE Select and execute local control action $u_i^*(\bar{x}_i, t)$ from joint optimal control action $\bar{u}^*_i(\bar{x}_i, t)$.
\ENDFOR
\ENDWHILE
\end{algorithmic}
\end{breakablealgorithm}
\noindent \hyperref[alg3]{Algorithm~3} illustrates the procedures of REPS-based distributed LSOC algorithm introduced in \hyperref[Sec3_2]{Section 3.2}.
\begin{breakablealgorithm} \small
\caption{Distributed LSOC based on REPS}
\label{alg3}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input}}
\REQUIRE \parbox[t]{\dimexpr\linewidth-\algorithmicindent}{agent set $\mathcal{D}$, communication network $\mathcal{G}$, initial time $t_0$, exit time $t_f$, initial states $x_i^{t_0}$, exit states $x_i^{t_f}$, joint state-related costs $q_i(\bar{x}_i)$, control weight matrices $\bar{R}_i$, exit costs $\phi(x_t^{t_f})$, weights on exit costs~$w_j^i$, and initial policy $\pi^{(k)}_i(\bar{u}^{(k)}_i | \bar{x}_i^{(k)}, \chi^{(k)}_i)$. \strut}
\renewcommand{\algorithmicrequire}{\textbf{Initialize}}
\REQUIRE factorial subsystems $\mathcal{\bar{N}}_{i \in \mathcal{D}}$, and joint exit costs $\phi_i(\bar{x}_i)$.
\renewcommand{\algorithmicrequire}{\textbf{Planning \& Execution}:}
\REQUIRE \
\WHILE {$t < t_f$ or $x \notin \mathcal{B}$}
\FOR{$i \in \mathcal{D} = \{1, \cdots, N\}$}
\STATE Measure joint state $\bar{x}_i(t)$ by collecting state information from neighboring agents $j \in \mathcal{N}_i$.
\REPEAT
\STATE \parbox[t]{\dimexpr\linewidth-\algorithmicindent}{Generate trajectory set $\mathcal{Y}_i$ by sampling with (initial) policy $\pi^{(k)}_i(u^{(k)}_i | \bar{x}^{(k)}_i, \chi^{(k)}_i)$ or collecting data from neighboring agents $j \in \mathcal{N}_i$. \strut}
\STATE Solve dual variables $\kappa, \theta$ and $\eta$ from dual problem~\eqref{DualProb} and condition~\eqref{NormCond}.
\STATE Compute path distribution $\tilde{q}(\bar{x}_i^{(0)}, \bar{\ell}_i^{[y]})$ of each trajectory in $\mathcal{Y}_i$ by \eqref{OldDist}.
\STATE Update parameter $\chi^{(k)}_i$ by solving weighted maximum likelihood problem \eqref{UpdatingStep}.
\UNTIL{convergence of parametric policy $\pi^{(k)}_i(\bar{u}^{(k)}_i | \bar{x}_i^{(k)}, \chi^{(k)}_i)$.}
\STATE Marginalize joint optimal control policy $\pi^{(0)}_i(\bar{u}^{(0)}_i | \bar{x}_i^{(0)}, \chi^{(0)}_i)$ by~\eqref{Marginalize_Policy}.
\STATE \parbox[t]{\dimexpr\linewidth-\algorithmicindent}{Sample and execute local control action $u^*_i(\bar{x}_i, t)$ from local optimal control policy $\pi_i^{*(0)}(u^{*(0)}_i|\bar{x}_i^{(0)}, \chi_i^{*(0)})$.\strut}
\ENDFOR
\ENDWHILE
\end{algorithmic}
\end{breakablealgorithm}
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:introduction}
The appearance and growth of online markets have had a considerable impact on the habits of consumers, providing them access to a greater variety of products and information on these goods. While this freedom of purchase has made online commerce into a multi-billion dollar industry, it also made it more difficult for consumers to select the products that best fit their needs. Among the main solutions proposed for this information overload problem are recommender systems, which provide automated and personalized suggestions of products to consumers.
The recommendation problem can be defined as estimating the response of a user for unseen items, based on historical information stored in the system, and suggesting to this user \emph{novel} and \emph{original} items for which the predicted response is \emph{high}. User-item responses can be numerical values known as ratings (e.g., 1-5 stars), ordinal values (e.g., strongly agree, agree, neutral, disagree, strongly disagree) representing the possible levels of user appreciation, or binary values (e.g., like/dislike or interested/not interested). Moreover, user responses can be obtained explicitly, for instance, through ratings/reviews entered by users in the system, or implicitly, from purchase history or access patterns \cite{konstan97,terveen97}. For the purpose of simplicity, from this point on, we will call rating any type of user-item response.
Item recommendation \index{item recommendation} approaches can be divided into two broad categories: personalized and non-personalized. Among the personalized approaches are \emph{content-based} and \emph{collaborative filtering} methods, as well as \emph{hybrid} techniques combining these two types of methods.
The general principle of con\-tent-based (or cognitive) methods \cite{balabanovic97,billsus00,lang95,pazzani97} is to identify the common characteristics of items that have received a favorable rating from a user, and then recommend to this user unseen items that share these characteristics. Recommender systems \index{recommender systems} based purely on content generally suffer from the problems of \emph{limited content analysis} and \emph{over-specialization} \cite{shardanand95}. Limited content analysis occurs when the system has a limited amount of information on its users or the content of its items. For instance, privacy issues might prevent a user from providing personal information, or the precise content may be difficult or costly to obtain for some types of items, such as music or images. Another problem is that the content of an item is often insufficient to determine its quality. Over-specialization, on the other hand, is a side effect of the way in which content-based systems recommend unseen items, where the predicted rating of a user for an item is high if this item is similar to the ones liked by this user. For example, in a movie recommendation application, the system may recommend to a user a movie of the same genre or having the same actors as movies already seen by this user. Because of this, the system may fail to recommend items that are different but still interesting to the user. More information on content-based recommendation approaches can be found in Chapter~\ref{24-content-rs} of this book.
Instead of depending on content information, collaborative (or social) filtering approaches use the rating information of other users and items in the system. The key idea is that the rating of a target user for an unseen item is likely to be similar to that of another user, if both users have rated other items in a similar way. Likewise, the target user is likely to rate two items in a similar fashion, if other users have given similar ratings to these two items. Collaborative filtering approaches overcome some of the limitations of content-based ones. For instance, items for which the content is not available or difficult to obtain can still be recommended to users through the feedback of other users. Furthermore, collaborative recommendations are based on the quality of items as evaluated by peers, instead of relying on content that may be a bad indicator of quality. Finally, unlike content-based systems, collaborative filtering ones can recommend items with very different content, as long as other users have already shown interest for these different items.
Collaborative filtering \index{collaborative filtering} approaches can be grouped in two general classes of \emph{neighborhood} and \emph{model}-based methods. In neighborhood-based (memory-based \cite{breese98} or heuristic-based \cite{adomavicius05}) collaborative filtering \cite{delgado99,deshpande04,hill95,konstan97,linden03,nakamura98,resnick94,sarwar01,shardanand95}, the user-item ratings stored in the system are directly used to predict ratings for unseen items. This can be done in two ways known as \emph{user-based} or \emph{item-based} recommendation. User-based systems, such as GroupLens \cite{konstan97},
evaluate the interest of a target user for an item using the ratings for this item by other users, called \emph{neighbors}, that have similar rating patterns. The neighbors of the target user are typically the users whose ratings are most correlated to the target user's ratings. Item-based approaches \cite{deshpande04,linden03,sarwar01}, on the other hand, predict the rating of a user for an item based on the ratings of the user for similar items. In such approaches, two items are similar if several users of the system have rated these items in a similar fashion.
In contrast to neighborhood-based systems, which use the stored ratings directly in the prediction, model-based approaches use these ratings to learn a predictive model. Salient characteristics of users and items are captured by a set of model parameters, which are learned from training data and later used to predict new ratings. Model-based approaches for the task of recommending items are numerous and include Bayesian Clustering \cite{breese98}, Latent Semantic Analysis \cite{hofmann03}, Latent Dirichlet Allocation \cite{blei03}, Maximum Entropy \cite{zitnick04}, Boltzmann Machines \cite{salakhutdinov07}, Support Vector Machines \cite{grcar06}, and Singular Value Decomposition \cite{bell07a,koren08,paterek07,takacs08,takacs09}. A survey of state-of-the-art model-based methods can be found in Chapter \ref{15-collab-filt-rs} of this book.
Finally, to overcome certain limitations of content-based and collaborative filtering methods, hybrid recommendation approaches combine characteristics of both types of methods. Content-based and collaborative filtering methods can be combined in various ways, for instance, by merging their individual predictions into a single, more robust prediction \cite{pazzani99,billsus00}, or by adding content information into a collaborative filtering model \cite{Adams2010,Agarwal2011,Yoo2009,Singh2008,nikolakopoulos2015hierarchical,r14,nikolakopoulos2015top}. Several studies have shown hybrid recommendation approaches to provide more accurate recommendations than pure content-based or collaborative methods, especially when few ratings are available \cite{adomavicius05}.
\subsection{Advantages of neighborhood approaches}
While recent investigations show state-of-the-art model-based approaches superior to neighborhood ones in the task of predicting ratings \cite{koren08,takacs07}, there is also an emerging understanding that good prediction accuracy alone does not guarantee users an effective and satisfying experience \cite{herlocker04}. Another factor that has been identified as playing an important role in the appreciation of users for the recommender system is \emph{serendipity} \cite{herlocker04,sarwar01}. Serendipity extends the concept of novelty by helping a user find an interesting item he or she might not have otherwise discovered. For example, recommending to a user a movie directed by his favorite director constitutes a novel recommendation if the user was not aware of that movie, but is likely not serendipitous since the user would have discovered that movie on his own. A more detailed discussion on novelty and diversity is provided in Chapter~\ref{5-novelty} of this book.
Model-based approaches excel at characterizing the preferences of a user with latent factors. For example, in a movie recommender system, such methods may determine that a given user is a fan of movies that are both funny and romantic, without having to actually define the notions ``funny'' and ``romantic''. This system would be able to recommend to the user a romantic comedy that may not have been known to this user. However, it may be difficult for this system to recommend a movie that does not quite fit this high-level genre, for instance, a funny parody of horror movies. Neighborhood approaches, on the other hand, capture local associations in the data. Consequently, it is possible for a movie recommender system based on this type of approach to recommend the user a movie very different from his usual taste or a movie that is not well known (e.g., repertoire film), if one of his closest neighbors has given it a strong rating. This recommendation may not be a guaranteed success, as would be a romantic comedy, but it may help the user discover a whole new genre or a new favorite actor/director.
The main advantages of neighborhood-based methods are:
\begin{itemize}
\item \textbf{Simplicity:} Neighborhood-based methods are intuitive and relatively simple to implement. In their simplest form, only one parameter (the number of neighbors used in the prediction) requires tuning.
\vspace{2mm}
\item \textbf{Justifiability:} Such methods also provide a concise and intuitive justification for the computed predictions. For example, in item-based recommendation, the list of neighbor items, as well as the ratings given by the user to these items, can be presented to the user as a justification for the recommendation. This can help the user better understand the recommendation and its relevance, and could serve as basis for an interactive system where users can select the neighbors for which a greater importance should be given in the recommendation \cite{bell07a}. The benefits and challenges of explaining recommendations to users are addressed in Chapter~\ref{8-explanations} of this book.
\vspace{2mm}
\item \textbf{Efficiency:} One of the strong points of neighborhood-based systems is their efficiency. Unlike most model-based systems, they require no costly training phases, which need to be carried out at frequent intervals in large commercial applications.
These systems may require pre-computing nearest neighbors in an offline step, which is typically much cheaper than model training,
providing near instantaneous recommendations. Moreover, storing these nearest neighbors requires very little memory, making such approaches scalable to applications having millions of users and items.
\vspace{2mm}
\item \textbf{Stability:} Another useful property of recommender systems based on this approach is that they are little affected by the constant addition of users, items and ratings, which are typically observed in large commercial applications. For instance, once item similarities have been computed, an item-based system can readily make recommendations to new users, without having to re-train the system. Moreover, once a few ratings have been entered for a new item, only the similarities between this item and the ones already in the system need to be computed.
\end{itemize}
While neighborhood-based methods have gained popularity due to these advantages\footnote{For further insights into some of the key properties of neighborhood-based methods under a probabilistic lens, see \cite{canamares2017probabilistic}. Therein, the interested reader can find a probabilistic reformulation of basic neighborhood-based methods that elucidates certain aspects of their effectiveness; delineates innate connections with item popularity; while also allowing for comparisons between basic neighborhood-based variants.}, they are also known to suffer from the problem of limited coverage, which can cause some items never to be recommended. Also, traditional methods of this category are known to be more sensitive to the sparseness of ratings and the cold-start problem, where the system has only a few ratings, or no rating at all, for new users and items. Section \ref{sec:advanced-techniques} presents more advanced neighborhood-based techniques that can overcome these problems.
\subsection{Objectives and outline}
This chapter has two main objectives. It first serves as a general guide on neighborhood-based recommender systems, and presents practical information on how to implement such recommendation approaches. In particular, the main components of neighborhood-based methods will be described, as well as the benefits of the most common choices for each of these components. Secondly, it presents more specialized techniques on the subject that address particular aspects of recommending items, such as data sparsity. Although such techniques are not required to implement a simple neighborhood-based system, having a broader view of the various difficulties and solutions for neighborhood methods may help making appropriate decisions during the implementation process.
The rest of this document is structured as follows. In Section \ref{sec:definition-notation}, we first give a formal definition of the item recommendation task and present the notation used throughout the chapter. In Section \ref{sec:nb-recommendation}, the principal neighborhood approaches, predicting user ratings for unseen items based on regression or classification, are then introduced, and the main advantages and flaws of these approaches are described. This section also presents two complementary ways of implementing such approaches, either based on user or item similarities, and analyzes the impact of these two implementations on the accuracy, efficiency, stability, justifiability and serendipity of the recommender system. Section \ref{sec:components-nb-methods}, on the other hand, focuses on the three main components of neighborhood-based recommendation methods: rating normalization, similarity weight computation, and neighborhood selection. For each of these components, the most common approaches are described, and their respective benefits compared. In Section \ref{sec:advanced-techniques}, the problems of limited coverage and data sparsity are introduced, and several solutions proposed to overcome these problems are described. In particular, several techniques based on dimensionality reduction and graphs are presented. Finally, the last section of this document summarizes the principal characteristics and methods of neighborhood-based recommendation, and gives a few more pointers on implementing such methods.
\section{Problem definition and notation}\label{sec:definition-notation}
In order to give a formal definition of the item recommendation task, we introduce the following notation. The set of users in the recommender system will be denoted by $\mathcal{U}$, and the set of items by $\mathcal{I}$. Moreover, we denote by $\mathcal{R}$ the set of ratings recorded in the system, and write $\mathcal{S}$ for the set of possible values for a rating (e.g., $\mathcal{S} = [1,5]$ or $\mathcal{S} = \{\textrm{like},\textrm{dislike}\}$). Also, we suppose that no more than one rating can be made by any user $u \in \mathcal{U}$ for a particular item $i \in \mathcal{I}$ and write $r_{ui}$ for this rating. To identify the subset of users that have rated an item $i$, we use the notation $\mathcal{U}_i$. Likewise, $\mathcal{I}_u$ represents the subset of items that have been rated by a user $u$. Finally, the set of items that have been rated by two users $u$ and $v$, i.e. $\mathcal{I}_u \cap \mathcal{I}_v$, is an important concept in our presentation, and we use $\mathcal{I}_{uv}$ to denote this set. In a similar fashion, $\mathcal{U}_{ij}$ is used to denote the set of users that have rated both items $i$ and $j$.
Two of the most important problems associated with recommender systems are the \emph{rating prediction}
and \emph{top-$N$} recommendation problems.
The first problem is to predict the rating that a user $u$ will give to an item $i$ that he or she has not yet rated.
When ratings are available, this task is most often defined as a regression or (multi-class) classification problem where the goal is to learn a function
$f : \mathcal{U} \times \mathcal{I} \to \mathcal{S}$ that predicts the rating $f(u,i)$ of a user $u$ for an unseen item $i$.
Accuracy is commonly used to evaluate the performance of the recommendation method. Typically, the ratings $\mathcal{R}$ are divided into a \emph{training} set $\mathcal{R}_\text{train}$ used to learn $f$, and a \emph{test} set $\mathcal{R}_\text{test}$ used to evaluate the prediction accuracy. Two popular measures of accuracy are the \emph{Mean Absolute Error} (MAE):
\begin{equation}
\text{MAE}(f) \, = \, \frac{1}{|\mathcal{R}_\text{test}|}
\sum_{r_{ui} \in \mathcal{R}_\text{test}} \!\!|f(u,i) - r_{ui}|,
\end{equation}
and the \emph{Root Mean Squared Error} (RMSE):
\begin{equation}
\text{RMSE}(f) \, = \, \sqrt{\frac{1}{|\mathcal{R}_\text{test}|}
\sum_{r_{ui} \in \mathcal{R}_\text{test}} \!\!\left(f(u,i) - r_{ui}\right)^2}.
\end{equation}
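For instance, given parallel arrays of predicted and held-out ratings, both error measures can be computed directly, as in the following minimal Python sketch (the numbers are toy values):
\begin{verbatim}
import numpy as np

def mae(pred, true):
    """Mean Absolute Error over the test ratings."""
    return np.mean(np.abs(pred - true))

def rmse(pred, true):
    """Root Mean Squared Error over the test ratings."""
    return np.sqrt(np.mean((pred - true) ** 2))

pred = np.array([3.5, 4.0, 2.0])  # f(u, i) on R_test
true = np.array([4.0, 4.0, 1.0])  # r_ui in R_test
print(mae(pred, true), rmse(pred, true))  # 0.5, ~0.645
\end{verbatim}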
When ratings are not available, for instance, if only the list of items purchased by each user is known, measuring the rating prediction accuracy is not possible. In such cases, the problem of finding the best item is usually transformed into the task of recommending to an active user $u_a$ a list $L(u_a)$ containing $N$ items likely to interest him or her \cite{deshpande04,sarwar01}. The quality of such a method can be evaluated by splitting the items of $\mathcal{I}$ into a set $\mathcal{I}_\text{train}$, used to learn $L$, and a test set $\mathcal{I}_\text{test}$. Let $T(u) \subset \mathcal{I}_u \cap \mathcal{I}_\text{test}$ be the subset of test items that a user $u$ found relevant. If the user responses are binary, these can be the items that $u$ has rated positively. Otherwise, if only a list of purchased or accessed items is given for each user $u$, then these items can be used as $T(u)$. The performance of the method is then computed using the measures of \emph{precision} and \emph{recall}:
\begin{eqnarray}
\text{Precision}(L) & \, = \, & \frac{1}{|\mathcal{U}|} \sum_{u \in \mathcal{U}} |L(u) \cap T(u)| \, / \, |L(u)| \\
\text{Recall}(L) & \, = \, & \frac{1}{|\mathcal{U}|} \sum_{u \in \mathcal{U}} |L(u) \cap T(u)| \, / \, |T(u)|.
\end{eqnarray}
A drawback of this task is that all items of a recommendation list $L(u)$ are considered equally interesting to user $u$. An alternative setting, described in \cite{deshpande04}, consists in learning a function $L$ that maps each user $u$ to a list $L(u)$ where items are \emph{ordered} by their ``interestingness'' to $u$. If the test set is built by randomly selecting, for each user $u$, a single item $i_u$ of $\mathcal{I}_u$, the performance of $L$ can be evaluated with the \emph{Average Reciprocal Hit-Rank} (ARHR):
\begin{equation}
\text{ARHR}(L) \, = \, \frac{1}{|\mathcal{U}|} \sum_{u \in \mathcal{U}} \frac{1}{\text{rank}(i_u, L(u))},
\end{equation}
where $\text{rank}(i_u, L(u))$ is the rank of item $i_u$ in $L(u)$, equal to $\infty$ if $i_u \not\in L(u)$. A more extensive description of evaluation measures for recommender systems can be found in Chapter~\ref{29-evaluation} of this book.
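These list-based measures can likewise be computed directly from the recommended lists and the sets of relevant items, as in the following sketch on hypothetical toy data:
\begin{verbatim}
def precision_recall(L, T):
    """L, T: dicts mapping each user to the list of recommended
    items and the set of relevant test items, respectively."""
    users = list(L)
    prec = sum(len(set(L[u]) & set(T[u])) / len(L[u]) for u in users)
    rec = sum(len(set(L[u]) & set(T[u])) / len(T[u]) for u in users)
    return prec / len(users), rec / len(users)

def arhr(L, hidden):
    """hidden: dict mapping each user to the single withheld item i_u."""
    total = 0.0
    for u, items in L.items():
        if hidden[u] in items:
            total += 1.0 / (items.index(hidden[u]) + 1)  # 1-based rank
    return total / len(L)

L = {"u1": ["a", "b", "c"], "u2": ["d", "a"]}
T = {"u1": {"b"}, "u2": {"a", "e"}}
print(precision_recall(L, T))           # (0.4166..., 0.75)
print(arhr(L, {"u1": "b", "u2": "e"}))  # 0.25
\end{verbatim}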
\section{Neighborhood-based recommendation}\label{sec:nb-recommendation}\index{neighborhood-based recommendation}
Recommender systems based on neighborhoods automate the common principle that similar users prefer similar items, and similar items are preferred by similar users.
To illustrate this, consider the following example based on the ratings of Figure \ref{fig:toy-example}.
\begin{example}
User Eric has to decide whether or not to rent the movie ``Titanic'' that he has not yet seen. He knows that Lucy has very similar tastes when it comes to movies, as both of them hated ``The Matrix'' and loved ``Forrest Gump,'' so he asks her opinion on this movie. On the other hand, Eric finds out he and Diane have different tastes, Diane likes action movies while he does not, and he discards her opinion or considers the opposite in his decision.
\end{example}
\begin{figure}[h!tb]
\begin{center}
\begin{small}
\begin{tabular}{x{1cm}||x{1cm}|x{1cm}|x{1cm}|x{1cm}|x{1cm}}
\hline
\multirow{2}{*}{} & The & \multirow{2}{*}{Titanic} & Die & Forrest & \multirow{2}{*}{Wall-E}\tabularnewline
& Matrix & & Hard & Gump & \tabularnewline
\hline\hline
John & 5 & 1 & & 2 & 2 \tabularnewline
Lucy & 1 & 5 & 2 & 5 & 5 \tabularnewline
Eric & 2 & ? & 3 & 5 & 4 \tabularnewline
Diane & 4 & 3 & 5 & 3 & \tabularnewline
\hline
\end{tabular}
\end{small}
\caption{A ``toy example'' showing the ratings of four users for five movies.}
\label{fig:toy-example}
\end{center}
\end{figure}
\subsection{User-based rating prediction}
User-based neighborhood recommendation methods predict the rating $r_{ui}$ of a user $u$ for an unseen item $i$ using the ratings given to $i$ by users most similar to $u$, called nearest-neighbors. Suppose we have for each user $v \neq u$ a value $w_{uv}$ representing the preference similarity between $u$ and $v$ (how this similarity can be computed will be discussed in Section \ref{sec:sim-weight-computation}). The $k$-nearest-neighbors ($k$-NN) of $u$, denoted by $\mathcal{N}(u)$, are the $k$ users $v$ with the highest similarity $w_{uv}$ to $u$. However, only the users who have rated item $i$ can be used in the prediction of $r_{ui}$, and we instead consider the $k$ users most similar to $u$ that \emph{have rated} $i$. We write this set of neighbors as $\mathcal{N}_i(u)$. The rating $r_{ui}$ can be estimated as the average rating given to $i$ by these neighbors:
\begin{equation}\label{eqn:simple-ub-prediction}
\hat{r}_{ui} \, = \, \frac{1}{|\mathcal{N}_i(u)|} \sum_{v \in \mathcal{N}_i(u)}\!\!\! r_{vi}.
\end{equation}
A problem with (\ref{eqn:simple-ub-prediction}) is that it does not take into account the fact that the neighbors can have different levels of similarity. Consider once more the example of Figure~\ref{fig:toy-example}. If the two nearest-neighbors of Eric are Lucy and Diane, it would be foolish to consider equally their ratings of the movie ``Titanic,'' since Lucy's tastes are much closer to Eric's than Diane's. A common solution to this problem is to weigh the contribution of each neighbor by its similarity to $u$. However, if these weights do not sum to $1$, the predicted ratings can be well outside the range of allowed values. Consequently, it is customary to normalize these weights, such that the predicted rating becomes
\begin{equation}\label{eqn:unnormalized-ub-prediction}
\hat{r}_{ui} \, = \, \frac{\sum\limits_{v \in \mathcal{N}_i(u)}\!\!\! w_{uv} \, r_{vi}}
{\sum\limits_{v \in \mathcal{N}_i(u)}\!\!\! |w_{uv}|}.
\end{equation}
In the denominator of (\ref{eqn:unnormalized-ub-prediction}), $|w_{uv}|$ is used instead of $w_{uv}$ because negative weights can produce ratings outside the allowed range. Also, $w_{uv}$ can be replaced by $w^\alpha_{uv}$, where $\alpha > 0$ is an amplification factor \cite{breese98}. When $\alpha > 1$, as it is most often employed, an even greater importance is given to the neighbors that are the closest to $u$.
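In code, prediction rule~(\ref{eqn:unnormalized-ub-prediction}) with the optional amplification factor $\alpha$ can be written as follows; this is a minimal sketch over precomputed similarity weights, using the toy values of the running example:
\begin{verbatim}
import numpy as np

def predict_user_based(u, i, R, W, k=2, alpha=1.0):
    """Predict r_ui from the k users most similar to u who rated i.
    R: dict of dicts of known ratings; W[u][v]: similarity weights."""
    raters = [v for v in R if v != u and i in R[v]]
    neighbors = sorted(raters, key=lambda v: W[u][v], reverse=True)[:k]
    w = np.array([np.sign(W[u][v]) * abs(W[u][v]) ** alpha
                  for v in neighbors])
    r = np.array([R[v][i] for v in neighbors])
    return (w @ r) / np.abs(w).sum()

R = {"Lucy": {"Titanic": 5}, "Diane": {"Titanic": 3}, "Eric": {}}
W = {"Eric": {"Lucy": 0.75, "Diane": 0.15}}
print(predict_user_based("Eric", "Titanic", R, W))  # ~4.67
\end{verbatim}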
\begin{example}
Suppose we want to use (\ref{eqn:unnormalized-ub-prediction}) to predict Eric's rating of the movie ``Titanic'' using the ratings of Lucy and Diane for this movie. Moreover, suppose the similarity weights between these neighbors and Eric are respectively $0.75$ and $0.15$. The predicted rating would be
\begin{displaymath}
\hat{r} \, = \, \frac{0.75\!\times\!5 \, + \,0.15\!\times\!3}{0.75 \, + \, 0.15} \ \simeq \ 4.67,
\end{displaymath}
which is closer to Lucy's rating than to Diane's.
\end{example}
Equation (\ref{eqn:unnormalized-ub-prediction}) also has an important flaw: it does not consider the fact that users may use different rating values to quantify the same level of appreciation for an item. For example, one user may give the highest rating value to only a few outstanding items, while a less demanding one may give this value to most of the items he likes. This problem is usually addressed by converting the neighbors' ratings $r_{vi}$ to normalized ones $h(r_{vi})$ \cite{breese98,resnick94}, giving the following prediction:
\begin{equation}\label{eqn:normalized-ub-prediction}
\hat{r}_{ui} \, = \, h^{-1}\left(\frac{\sum\limits_{v \in \mathcal{N}_i(u)}\!\!\! w_{uv} \, h(r_{vi})}
{\sum\limits_{v \in \mathcal{N}_i(u)}\!\!\! |w_{uv}|}\right).
\end{equation}
Note that the predicted rating must be converted back to the original scale, hence the $h^{-1}$ in the equation. The most common approaches to normalize ratings will be presented in Section \ref{sec:rating-normalization}.
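Anticipating Section~\ref{sec:rating-normalization}, one common choice for $h$ is mean-centering, $h(r_{vi}) = r_{vi} - \overline{r}_v$, with $h^{-1}(x) = x + \overline{r}_u$; the following sketch applies it to prediction rule~(\ref{eqn:normalized-ub-prediction}) on toy data:
\begin{verbatim}
import numpy as np

def predict_mean_centered(u, i, R, W, k=2):
    """Prediction rule with mean-centering normalization:
    h(r_vi) = r_vi - mean_v and h^{-1}(x) = x + mean_u."""
    mean = {v: np.mean(list(R[v].values())) for v in R if R[v]}
    raters = [v for v in R if v != u and i in R[v]]
    neighbors = sorted(raters, key=lambda v: W[u][v], reverse=True)[:k]
    num = sum(W[u][v] * (R[v][i] - mean[v]) for v in neighbors)
    den = sum(abs(W[u][v]) for v in neighbors)
    return mean[u] + num / den

R = {"Lucy": {"Titanic": 5, "Wall-E": 5},
     "Diane": {"Titanic": 3, "Wall-E": 3},
     "Eric": {"Wall-E": 4}}
W = {"Eric": {"Lucy": 0.75, "Diane": 0.15}}
print(predict_mean_centered("Eric", "Titanic", R, W))  # 4.0
\end{verbatim}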
\subsection{User-based classification}
The prediction approach just described, where the predicted ratings are computed as a weighted average of the neighbors' ratings, essentially solves a \emph{regression} problem. Neighborhood-based \emph{classification}, on the other hand, finds the most likely rating given by a user $u$ to an item $i$, by having the nearest-neighbors of $u$ vote on this value. The vote $v_{ir}$ given by the $k$-NN of $u$ for the rating $r \in \mathcal{S}$ can be obtained as the sum of the similarity weights of neighbors that have given this rating to $i$:
\begin{equation}\label{eqn:unnormalized-ub-classification}
v_{ir} \, = \, \sum_{v \in \mathcal{N}_i(u)}\!\!\! \delta(r_{vi} = r) \, w_{uv},
\end{equation}
where $\delta(r_{vi} = r)$ is $1$ if $r_{vi} = r$, and $0$ otherwise. Once this has been computed for every possible rating value, the predicted rating is simply the value $r$ for which $v_{ir}$ is the greatest.
\begin{example}
Suppose once again that the two nearest-neighbors of Eric are Lucy and Diane with respective similarity weights $0.75$ and $0.15$. In this case, ratings $5$ and $3$ each have one vote. However, since Lucy's vote has a greater weight than Diane's, the predicted rating will be $\hat{r} = 5$.
\end{example}
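The voting rule of (\ref{eqn:unnormalized-ub-classification}) is just as simple to sketch; again, the function name and the values are only illustrative.
\begin{verbatim}
from collections import defaultdict

# Each rating value receives a vote equal to the sum of the
# similarity weights of the neighbors that gave this rating.
def predict_classification(neighbor_ratings, weights):
    votes = defaultdict(float)
    for r, w in zip(neighbor_ratings, weights):
        votes[r] += w
    return max(votes, key=votes.get)

print(predict_classification([5, 3], [0.75, 0.15]))  # -> 5
\end{verbatim}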
A classification method that considers normalized ratings can also be defined. Let $\mathcal{S}'$ be the set of possible normalized values (that may require discretization), the predicted rating is obtained as:
\begin{equation}\label{eqn:normalized-ub-classification}
\hat{r}_{ui} \, = \, h^{-1}\left( \argmax_{r \in \mathcal{S}'} \, \sum_{v \in \mathcal{N}_i(u)}\!\!\! \delta(h(r_{vi}) = r) \, w_{uv} \right).
\end{equation}
\subsection{Regression VS classification}
The choice between implementing a neighborhood-based regression or classification method largely depends on the system's rating scale. Thus, if the rating scale is continuous, e.g. ratings in the \emph{Jester} joke recommender system \cite{goldberg01} can take any value between $-10$ and $10$, then a regression method is more appropriate. On the contrary, if the rating scale has only a few discrete values, e.g. ``good'' or ``bad,'' or if the values cannot be ordered in an obvious fashion, then a classification method might be preferable. Furthermore, since normalization tends to map ratings to a continuous scale, it may be harder to handle in a classification approach.
Another way to compare these two approaches is by considering the situation where all neighbors have the same similarity weight. As the number of neighbors used in the prediction increases, the rating $\hat{r}_{ui}$ predicted by the regression approach will tend toward the mean rating of item $i$. Suppose item $i$ has only ratings at either end of the rating range, i.e. it is either loved or hated, then the regression approach will make the safe decision that the item's worth is average. This is also justified from a statistical point of view since the expected rating (estimated in this case) is the one that minimizes the RMSE. On the other hand, the classification approach will predict the rating as the most frequent one given to $i$. This is more risky as the item will be labeled as either ``good'' or ``bad''. However, as mentioned before, risk taking may be desirable if it leads to serendipitous recommendations.
\subsection{Item-based recommendation}\index{item-based recommendation}
While user-based methods rely on the opinion of like-minded users to predict a rating, item-based approaches \cite{deshpande04,linden03,sarwar01} look at ratings given to similar items. Let us illustrate this approach with our toy example.
\begin{example}
Instead of consulting with his peers, Eric instead determines whether the movie ``Titanic'' is right for him by considering the movies that he has already seen. He notices that people that have rated this movie have given similar ratings to the movies ``Forrest Gump'' and ``Wall-E''. Since Eric liked these two movies he concludes that he will also like the movie ``Titanic''.
\end{example}
This idea can be formalized as follows. Denote by $\mathcal{N}_u(i)$ the items rated by user $u$ most similar to item $i$. The predicted rating of $u$ for $i$ is obtained as a weighted average of the ratings given by $u$ to the items of $\mathcal{N}_u(i)$:
\begin{equation}\label{eqn:unnormalized-ib-prediction}
\hat{r}_{ui} \, = \, \frac{\sum\limits_{j \in \mathcal{N}_u(i)}\!\!\! w_{ij} \, r_{uj}}
{\sum\limits_{j \in \mathcal{N}_u(i)}\!\!\! |w_{ij}|}.
\end{equation}
\begin{example}
Suppose our prediction is again made using two nearest-neighbors, and that the items most similar to ``Titanic'' are ``Forrest Gump'' and ``Wall-E,'' with respective similarity weights $0.85$ and $0.75$. Since ratings of $5$ and $4$ were given by Eric to these two movies, the predicted rating is computed as
\begin{displaymath}
\hat{r} \, = \, \frac{0.85\!\times\!5 \, + \,0.75\!\times\!4}{0.85 \, + \, 0.75} \ \simeq \ 4.53.
\end{displaymath}
\end{example}
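Computationally, the item-based prediction of (\ref{eqn:unnormalized-ib-prediction}) is the same weighted average as in the user-based case, only taken over item neighbors; reusing the \texttt{predict\_regression} sketch from above:
\begin{verbatim}
# "Forrest Gump" (w=0.85, r=5) and "Wall-E" (w=0.75, r=4)
# predicting Eric's rating of "Titanic":
print(round(predict_regression([5, 4], [0.85, 0.75]), 2))  # -> 4.53
\end{verbatim}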
Again, the differences in the users' individual rating scales can be considered by normalizing ratings with a normalization function $h$:
\begin{equation}\label{eqn:normalized-ib-prediction}
\hat{r}_{ui} \, = \, h^{-1}\left(\frac{\sum\limits_{j \in \mathcal{N}_u(i)}\!\!\! w_{ij} \, h(r_{uj})}
{\sum\limits_{j \in \mathcal{N}_u(i)}\!\!\! |w_{ij}|}\right).
\end{equation}
Moreover, we can also define an item-based classification approach. In this case, the items $j$ rated by user $u$ vote for the rating to be given to an unseen item $i$, and these votes are weighted by the similarity between $i$ and $j$. The normalized version of this approach can be expressed as follows:
\begin{equation}\label{eqn:normalized-ib-classification}
\hat{r}_{ui} \, = \, h^{-1}\left( \argmax_{r \in \mathcal{S}'} \, \sum_{j \in \mathcal{N}_u(i)}\!\!\! \delta(h(r_{uj}) = r) \, w_{ij} \right).
\end{equation}
\subsection{User-based VS item-based recommendation}\label{sec:ub-vs-ib-recommendation}
When choosing between the implementation of a user-based and an item-based neighborhood recommender system, five criteria should be considered:
\begin{itemize}
\item \textbf{Accuracy:} The accuracy of neighborhood recommendation methods depends mostly on the ratio between the number of users and items in the system. As will be presented in Section \ref{sec:sim-weight-computation}, the similarity between two users in user-based methods, which determines the neighbors of a user, is normally obtained by comparing the ratings made by these users on the same items. Consider a system that has $10,000$ ratings made by $1,000$ users on $100$ items, and suppose, for the purpose of this analysis, that the ratings are distributed uniformly over the items\footnote{The distribution of ratings in real-life data is normally skewed, i.e. most ratings are given to a small proportion of items.}. Following Table \ref{table:user-vs-item-based-accuracy}, the average number of users available as potential neighbors is roughly $650$. However, the average number of common ratings used to compute the similarities is only $1$. On the other hand, an item-based method usually computes the similarity between two items by comparing ratings made by the same user on these items. Assuming once more a uniform distribution of ratings, we find an average number of potential neighbors of $99$ and an average number of ratings used to compute the similarities of $10$ (these formulas are evaluated numerically in the short sketch following this list).
\hspace{\svparindent} In general, a small number of high-confidence neighbors is by far preferable to a large number of neighbors for which the similarity weights are not trustable. In cases where the number of users is much greater than the number of items, such as large commercial systems like \emph{Amazon.com}, item-based methods can therefore produce more accurate recommendations \cite{fouss07,sarwar01}. Likewise, systems that have less users than items, e.g., a research paper recommender with thousands of users but hundreds of thousands of articles to recommend, may benefit more from user-based neighborhood methods \cite{herlocker04}.
\begin{table}[h!tb]
\begin{center}
\caption{The average number of neighbors and average number of ratings used in the computation of similarities for user-based and item-based neighborhood methods. A uniform distribution of ratings is assumed with average number of ratings per user $p = |\mathcal{R}|/|\mathcal{U}|$, and average number of ratings per item $q = |\mathcal{R}|/|\mathcal{I}|$.}
\label{table:user-vs-item-based-accuracy}
\begin{tabular}{x{2.0cm}||x{3.5cm}|x{2.5cm}}
\hline
& Avg. neighbors & Avg. ratings \tabularnewline
\hline\hline
User-based & $(|\mathcal{U}|-1)\left(1 - \left(\frac{|\mathcal{I}|-p}{|\mathcal{I}|}\right)^p\right)$ & $\frac{p^2}{|\mathcal{I}|}$ \tabularnewline
\hline
Item-based & $(|\mathcal{I}|-1)\left(1 - \left(\frac{|\mathcal{U}|-q}{|\mathcal{U}|}\right)^q\right)$ & $\frac{q^2}{|\mathcal{U}|}$ \tabularnewline
\hline
\end{tabular}
\end{center}
\end{table}
\vspace{2mm}
\item \textbf{Efficiency:} As shown in Table \ref{table:user-vs-item-based-complexity}, the memory and computational efficiency of recommender systems also depends on the ratio between the number of users and items. Thus, when the number of users exceeds the number of items, as is most often the case, item-based recommendation approaches require much less memory and time to compute the similarity weights (training phase) than user-based ones, making them more scalable. However, the time complexity of the online recommendation phase, which depends only on the number of available items and the maximum number of neighbors, is the same for user-based and item-based methods.
\hspace{\svparindent} In practice, computing the similarity weights is much less expensive than the worst-case complexity reported in Table \ref{table:user-vs-item-based-complexity}, due to the fact that users rate only a few of the available items. Accordingly, only the non-zero similarity weights need to be stored, which is often much less than the number of user pairs. This number can be further reduced by storing, for each user, only the top $N$ weights, where $N$ is a parameter chosen to give satisfactory coverage of the user-item pairs \cite{sarwar01}. In the same manner, the non-zero weights can be computed efficiently without having to test each pair of users or items, which makes neighborhood methods scalable to very large systems.
\begin{table}[h!tb]
\begin{center}
\caption{The space and time complexity of user-based and item-based neighborhood methods, as a function of the maximum number of ratings per user $p = \max_{u} |\mathcal{I}_u|$, the maximum number of ratings per item $q = \max_{i} |\mathcal{U}_i|$, and the maximum number of neighbors used in the rating predictions $k$.}
\label{table:user-vs-item-based-complexity}
\begin{tabular}{x{2.0cm}||x{1.5cm}|x{1.5cm}|x{1.5cm}}
\hline
& \multirow{2}{*}{Space} & \multicolumn{2}{c}{Time} \tabularnewline
& & Training & Online \tabularnewline
\hline\hline
User-based & $O(|\mathcal{U}|^2)$ & $O(|\mathcal{U}|^2 p)$ & $O(|\mathcal{I}| k)$ \tabularnewline
Item-based & $O(|\mathcal{I}|^2)$ & $O(|\mathcal{I}|^2 q)$ & $O(|\mathcal{I}| k)$ \tabularnewline
\hline
\end{tabular}
\end{center}
\end{table}
\vspace{2mm}
\item \textbf{Stability:} The choice between a user-based and an item-based approach also depends on the frequency and amount of change in the users and items of the system. If the list of available items is fairly static in comparison to the users of the system, an item-based method may be preferable since the item similarity weights could then be computed at infrequent time intervals while still being able to recommend items to new users. On the contrary, in applications where the list of available items is constantly changing, e.g., an online article recommender, user-based methods could prove to be more stable.
\vspace{2mm}
\item \textbf{Justifiability:} An advantage of item-based methods is that they can easily be used to justify a recommendation. Hence, the list of neighbor items used in the prediction, as well as their similarity weights, can be presented to the user as an explanation of the recommendation. By modifying the list of neighbors and/or their weights, it then becomes possible for the user to participate interactively in the recommendation process. User-based methods, however, are less amenable to this process because the active user does not know the other users serving as neighbors in the recommendation.
\vspace{2mm}
\item \textbf{Serendipity:} In item-based methods, the rating predicted for an item is based on the ratings given to similar items. Consequently, recommender systems using this approach will tend to recommend to a user items that are related to those usually appreciated by this user. For instance, in a movie recommendation application, movies having the same genre, actors or director as those highly rated by the user are likely to be recommended. While this may lead to safe recommendations, it does less to help the user discover different types of items that he might like as much.
\hspace{\svparindent} Because they work with user similarity, on the other hand, user-based approaches are more likely to make serendipitous recommendations. This is particularly true if the recommendation is made with a small number of nearest-neighbors. For example, a user $A$ that has watched only comedies may be very similar to a user $B$ only by the ratings made on such movies. However, if $B$ is fond of a movie in a different genre, this movie may be recommended to $A$ through his similarity with $B$.
\end{itemize}
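As a complement to the accuracy discussion above, the following short Python sketch evaluates the formulas of Table \ref{table:user-vs-item-based-accuracy} for the hypothetical system with $10,000$ ratings, $1,000$ users and $100$ items.
\begin{verbatim}
# Average neighbors and co-ratings under a uniform rating distribution.
U, I, R = 1000, 100, 10000
p, q = R / U, R / I   # ratings per user, ratings per item

print((U - 1) * (1 - ((I - p) / I) ** p))  # user-based neighbors, ~650
print(p ** 2 / I)                          # user-based co-ratings, 1
print((I - 1) * (1 - ((U - q) / U) ** q))  # item-based neighbors, ~99
print(q ** 2 / U)                          # item-based co-ratings, 10
\end{verbatim}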
\section{Components of neighborhood methods}\label{sec:components-nb-methods}
In the previous section, we have seen that deciding between a regression and a classification rating prediction method, as well as choosing between a user-based or item-based recommendation approach, can have a significant impact on the accuracy, efficiency and overall quality of the recommender system. In addition to these crucial attributes, three very important considerations in the implementation of a neighborhood-based recommender system are 1) the normalization of ratings, 2) the computation of the similarity weights, and 3) the selection of neighbors. This section reviews some of the most common approaches for these three components, describes the main advantages and disadvantages of using each one of them, and gives indications on how to implement them.
\subsection{Rating normalization}\label{sec:rating-normalization}
When it comes to assigning a rating to an item, each user has his or her own personal scale. Even if an explicit definition of each of the possible ratings is supplied (e.g., 1=``strongly disagree,'' 2=``disagree,'' 3=``neutral,'' etc.), some users might be reluctant to give high/low scores to items they liked/disliked. Two of the most popular rating normalization schemes that have been proposed to convert individual ratings to a more universal scale are \emph{mean-centering} and \emph{$Z$-score}.
\subsubsection{Mean-centering}
The idea of mean-centering \cite{breese98,resnick94} is to determine whether a rating is positive or negative by comparing it to the mean rating. In user-based recommendation, a raw rating $r_{ui}$ is transformed into a mean-centered one $h(r_{ui})$ by subtracting from $r_{ui}$ the average $\ol{r}_u$ of the ratings given by user $u$ to the items in $\mathcal{I}_u$:
\begin{displaymath}
h(r_{ui}) \, = \, r_{ui} - \ol{r}_u.
\end{displaymath}
Using this approach the user-based prediction of a rating $r_{ui}$ is obtained as
\begin{equation}\label{eqn:user-based-norm-pred}
\hat{r}_{ui} \, = \, \ol{r}_u \, + \,
\frac{\sum\limits_{v \in \mathcal{N}_i(u)}\!\!\!w_{uv} \, (r_{vi} - \ol{r}_v)}
{\sum\limits_{v \in \mathcal{N}_i(u)}\!\!\!|w_{uv}|}.
\end{equation}
In the same way, the \emph{item}-mean-centered normalization of $r_{ui}$ is given by
\begin{displaymath}
h(r_{ui}) \, = \, r_{ui} - \ol{r}_i,
\end{displaymath}
where $\ol{r}_i$ corresponds to the mean rating given to item $i$ by the users in $\mathcal{U}_i$. This normalization technique is most often used in item-based recommendation, where a rating $r_{ui}$ is predicted as:
\begin{equation}\label{eqn:item-based-norm-pred}
\hat{r}_{ui} \, = \, \ol{r}_i \, + \,
\frac{\sum\limits_{j \in \mathcal{N}_u(i)}\!\!\!w_{ij} \, (r_{uj} - \ol{r}_j)}
{\sum\limits_{j \in \mathcal{N}_u(i)}\!\!\!|w_{ij}|}.
\end{equation}
An interesting property of mean-centering is that one can see right away if the appreciation of a user for an item is positive or negative by looking at the sign of the normalized rating. Moreover, the magnitude of this rating gives the level at which the user likes or dislikes the item.
\begin{example}
As shown in Figure \ref{fig:mean-centered-ratings}, although Diane gave an average rating of 3 to the movies ``Titanic'' and ``Forrest Gump,'' the user-mean-centered ratings show that her appreciation of these movies is in fact negative. This is because her ratings are high on average, and so, an average rating corresponds to a low degree of appreciation. Differences are also visible when comparing the two types of mean-centering. For instance, the item-mean-centered rating of the movie ``Titanic'' is neutral, instead of negative, due to the fact that much lower ratings were given to that movie. Likewise, Diane's appreciation for ``The Matrix'' and John's distaste for ``Forrest Gump'' are more pronounced in the item-mean-centered ratings.
\end{example}
\begin{figure}[h!tb]
\begin{small}
\begin{center}
\emph{User} mean-centering:\\
\vspace{2mm}
\begin{tabular}{x{1cm}||x{1cm}|x{1cm}|x{1cm}|x{1cm}|x{1cm}}
\hline
\multirow{2}{*}{} & The & \multirow{2}{*}{Titanic} & Die & Forrest & \multirow{2}{*}{Wall-E}\tabularnewline
& Matrix & & Hard & Gump & \tabularnewline
\hline\hline
John & 2.50 & -1.50 & & -0.50 & -0.50 \tabularnewline
Lucy & -2.60 & 1.40 & -1.60 & 1.40 & 1.40 \tabularnewline
Eric & -1.50 & & -0.50 & 1.50 & 0.50 \tabularnewline
Diane & 0.25 & -0.75 & 1.25 & -0.75 & \tabularnewline
\hline
\end{tabular}
\vspace{5mm}
\emph{Item} mean-centering:\\
\vspace{2mm}
\begin{tabular}{x{1cm}||x{1cm}|x{1cm}|x{1cm}|x{1cm}|x{1cm}}
\hline
\multirow{2}{*}{} & The & \multirow{2}{*}{Titanic} & Die & Forrest & \multirow{2}{*}{Wall-E}\tabularnewline
& Matrix & & Hard & Gump & \tabularnewline
\hline\hline
John & 2.00 & -2.00 & & -1.75 & -1.67 \tabularnewline
Lucy & -2.00 & 2.00 & -1.33 & 1.25 & 1.33 \tabularnewline
Eric & -1.00 & & -0.33 & 1.25 & 0.33 \tabularnewline
Diane & 1.00 & 0.00 & 1.67 & -0.75 & \tabularnewline
\hline
\end{tabular}
\end{center}
\caption{The \emph{user} and \emph{item} mean-centered ratings of Figure \ref{fig:toy-example}.}
\label{fig:mean-centered-ratings}
\end{small}
\end{figure}
\subsubsection{Z-score normalization}
Consider two users $A$ and $B$ that both have an average rating of $3$. Moreover, suppose that the ratings of $A$ alternate between $1$ and $5$, while those of $B$ are always $3$. A rating of $5$ given to an item by $B$ is more exceptional than the same rating given by $A$, and, thus, reflects a greater appreciation for this item. While mean-centering removes the offsets caused by the different perceptions of an average rating, $Z$-score normalization \cite{herlocker99} also considers the spread in the individual rating scales. Once again, this is usually done differently in user-based than in item-based recommendation. In user-based methods, the normalization of a rating $r_{ui}$ divides the \emph{user}-mean-centered rating by the standard deviation $\sigma_u$ of the ratings given by user $u$:
\begin{displaymath}
h(r_{ui}) \, = \, \frac{r_{ui} - \ol{r}_u}{\sigma_u}.
\end{displaymath}
A user-based prediction of rating $r_{ui}$ using this normalization approach would therefore be obtained as
\begin{equation}\label{eqn:user-based-zscore-pred}
\hat{r}_{ui} \, = \, \ol{r}_u \, + \,
\sigma_u \, \frac{\sum\limits_{v \in \mathcal{N}_i(u)}\!\!\!w_{uv} \, (r_{vi} - \ol{r}_v)/\sigma_v}
{\sum\limits_{v \in \mathcal{N}_i(u)}\!\!\!|w_{uv}|}.
\end{equation}
Likewise, the $Z$-score normalization of $r_{ui}$ in item-based methods divides the \emph{item}-mean-centered rating by the standard deviation of ratings given to item $i$:
\begin{displaymath}
h(r_{ui}) \, = \, \frac{r_{ui} - \ol{r}_i}{\sigma_i}.
\end{displaymath}
The item-based prediction of rating $r_{ui}$ would then be
\begin{equation}\label{eqn:item-based-zscore-pred}
\hat{r}_{ui} \, = \, \ol{r}_i \, + \,
\sigma_i \,\frac{\sum\limits_{j \in \mathcal{N}_u(i)}\!\!\!w_{ij} \, (r_{uj} - \ol{r}_j)/\sigma_j}
{\sum\limits_{j \in \mathcal{N}_u(i)}\!\!\!|w_{ij}|}.
\end{equation}
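The two schemes can be summarized in a few lines of Python; in the sketch below (function names ours), $h$ is implemented for both normalizations together with the $Z$-score prediction of (\ref{eqn:user-based-zscore-pred}), using population standard deviations and leaving the degenerate case $\sigma = 0$, discussed next, unguarded for brevity.
\begin{verbatim}
import statistics as st

def h_mean_center(profile, r):   # profile: a user's known ratings
    return r - st.mean(profile)

def h_zscore(profile, r):
    return (r - st.mean(profile)) / st.pstdev(profile)

def zscore_predict(profile_u, neighbors, weights):
    # neighbors: list of (neighbor_profile, rating_of_target_item)
    num = sum(w * h_zscore(p, r)
              for w, (p, r) in zip(weights, neighbors))
    den = sum(abs(w) for w in weights)
    return st.mean(profile_u) + st.pstdev(profile_u) * num / den
\end{verbatim}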
\subsubsection{Choosing a normalization scheme}
In some cases, rating normalization can have undesirable effects. For instance, imagine the case of a user that gave only the highest ratings to the items he has purchased. Mean-centering would consider this user as ``easy to please'' and any rating below this highest rating (whether it is a positive or negative rating) would be considered as negative. However, it is possible that this user is in fact ``hard to please'' and carefully selects only items that he will like for sure. Furthermore, normalizing on a few ratings can produce unexpected results. For example, if a user has entered a single rating or a few identical ratings, his rating standard deviation will be $0$, leading to undefined prediction values. Nevertheless, if the rating data is not overly sparse, normalizing ratings has been found to consistently improve the predictions \cite{herlocker99,howe08}.
As mentioned, $Z$-score has the additional benefit over mean-centering of considering the variance in the ratings of individual users or items. This is particularly useful if the rating scale has a wide range of discrete values or if it is continuous. On the other hand, because the ratings are divided and multiplied by possibly very different standard deviation values, $Z$-score can be more sensitive than mean-centering and, more often, predict ratings that are outside the rating scale. Lastly, while an initial investigation found mean-centering and $Z$-score to give comparable results \cite{herlocker99}, subsequent analysis showed $Z$-score to have more significant benefits \cite{howe08}.
Finally, if rating normalization is not possible or does not improve the results, another possible approach to remove the problems caused by the rating scale variance is \emph{preference-based filtering}. The particularity of this approach is that it focuses on predicting the relative preferences of users instead of absolute rating values. Since an item preferred to another remains so regardless of the rating scale, predicting relative preferences removes the need to normalize the ratings. More information on this approach can be found in \cite{cohen98,freund98,jin03b,jin03a}.
\subsection{Similarity weight computation}\label{sec:sim-weight-computation}
The similarity weights play a double role in neighborhood-based recommendation methods: 1) they make it possible to select trusted neighbors whose ratings are used in the prediction, and 2) they provide the means to give more or less importance to these neighbors in the prediction. The computation of the similarity weights is one of the most critical aspects of building a neighborhood-based recommender system, as it can have a significant impact on both its accuracy and its performance.
\subsubsection{Correlation-based similarity}
A measure of the similarity between two objects $a$ and $b$, often used in information retrieval, consists in representing these objects in the form of vectors $\vec x_a$ and $\vec x_b$ and computing the \emph{Cosine Vector} (CV) (or \emph{Vector Space}) similarity \cite{balabanovic97,billsus00,lang95} between these vectors:
\begin{displaymath}
\cos(\vec x_a, \vec x_b) \, = \, \frac{\tr{\vec x_a} \vec x_b}{||\vec x_a||\cdot||\vec x_b||}.
\end{displaymath}
In the context of item recommendation, this measure can be employed to compute user similarities by considering a user $u$ as a vector $\vec x_u \in \mathfrak{R}^{|\mathcal{I}|}$, where $\vec x_{ui} = r_{ui}$ if user $u$ has rated item $i$, and $0$ otherwise. The similarity between two users $u$ and $v$ would then be computed as
\begin{equation}
CV(u,v) \, = \, \cos(\vec x_u,\vec x_v) \, = \,
\frac{\sum\limits_{i \in \mathcal{I}_{uv}} r_{ui} \, r_{vi}}
{\sqrt{\sum\limits_{i \in \mathcal{I}_u} r_{ui}^2 \sum\limits_{j \in \mathcal{I}_v} r_{vj}^2}},
\end{equation}
where $\mathcal{I}_{uv}$ once more denotes the items rated by both $u$ and $v$. A problem with this measure is that it does not consider the differences in the mean and variance of the ratings made by users $u$ and $v$.
A popular measure that compares ratings where the effects of mean and variance have been removed is the \emph{Pearson Correlation} (PC) similarity:
\begin{equation}\label{eqn:user-based-pc}
\mathrm{PC}(u,v) \, = \, \frac{\sum\limits_{i \in \mathcal{I}_{uv}} (r_{ui} - \ol{r}_u) (r_{vi} - \ol{r}_v)}
{\sqrt{\sum\limits_{i \in \mathcal{I}_{uv}} (r_{ui} - \ol{r}_u)^2
\sum\limits_{i \in \mathcal{I}_{uv}} (r_{vi} - \ol{r}_v)^2}}.
\end{equation}
Note that this is different from computing the CV similarity on the $Z$-score normalized ratings, since the standard deviation of the ratings is evaluated only on the common items $\mathcal{I}_{uv}$, not on the entire set of items rated by $u$ and $v$, i.e. $\mathcal{I}_u$ and $\mathcal{I}_v$. The same idea can be used to obtain similarities between two items $i$ and $j$ \cite{deshpande04,sarwar01}, this time by comparing the ratings made by users that have rated both these items:
\begin{equation}\label{eqn:item-based-pc}
\mathrm{PC}(i,j) \, = \, \frac{\sum\limits_{u \in \mathcal{U}_{ij}} (r_{ui} - \ol{r}_i) (r_{uj} - \ol{r}_j)}
{\sqrt{\sum\limits_{u \in \mathcal{U}_{ij}} (r_{ui} - \ol{r}_i)^2
\sum\limits_{u \in \mathcal{U}_{ij}} (r_{uj} - \ol{r}_j)^2}}.
\end{equation}
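The following sketch computes (\ref{eqn:user-based-pc}) from two rating dictionaries; the user means are taken over each user's full profile, while the sums run over the co-rated items only. The rating values below are those recoverable from the mean-centered tables of Figure \ref{fig:mean-centered-ratings}, and the result reproduces the Lucy--Eric entry of Figure \ref{fig:pc-sim-example}.
\begin{verbatim}
import math

def pearson(ru, rv):                   # dicts mapping item -> rating
    common = ru.keys() & rv.keys()
    mu_u = sum(ru.values()) / len(ru)  # mean over all of u's ratings
    mu_v = sum(rv.values()) / len(rv)
    num = sum((ru[i] - mu_u) * (rv[i] - mu_v) for i in common)
    den = math.sqrt(sum((ru[i] - mu_u) ** 2 for i in common)
                    * sum((rv[i] - mu_v) ** 2 for i in common))
    return num / den

lucy = {"Matrix": 1, "Titanic": 5, "DieHard": 2, "Gump": 5, "WallE": 5}
eric = {"Matrix": 2, "DieHard": 3, "Gump": 5, "WallE": 4}
print(round(pearson(lucy, eric), 3))   # -> 0.922
\end{verbatim}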
While the sign of a similarity weight indicates whether the correlation is direct or inverse, its magnitude (ranging from $0$ to $1$) represents the strength of the correlation.
\begin{example}
The similarities between the pairs of users and items of our toy example, as computed using PC similarity, are shown in Figure \ref{fig:pc-sim-example}. We can see that Lucy's taste in movies is very close to Eric's (similarity of $0.922$) but very different from John's (similarity of $-0.938$). This means that Eric's ratings can be trusted to predict Lucy's, and that Lucy should discard John's opinion on movies or consider the opposite. We also find that the people that like ``The Matrix'' also like ``Die Hard'' but hate ``Wall-E''. Note that these relations were discovered without having any knowledge of the genre, director or actors of these movies.
\end{example}
\begin{figure}[h!tb]
\begin{small}
\begin{center}
\emph{User-based} Pearson correlation
\vspace{2mm}
\begin{tabular}{x{1cm}||x{1cm}|x{1cm}|x{1cm}|x{1cm}}
\hline
& John & Lucy & Eric & Diane \tabularnewline
\hline\hline
John & 1.000 & -0.938 & -0.839 & 0.659 \tabularnewline
Lucy & -0.938 & 1.000 & 0.922 & -0.787 \tabularnewline
Eric & -0.839 & 0.922 & 1.000 & -0.659 \tabularnewline
Diane & 0.659 & -0.787 & -0.659 & 1.000 \tabularnewline
\hline
\end{tabular}
\vspace{5mm}
\emph{Item-based} Pearson correlation
\vspace{2mm}
\begin{tabular}{x{1.75cm}||x{1cm}|x{1cm}|x{1cm}|x{1cm}|x{1cm}}
\hline
\multirow{2}{*}{} & The & \multirow{2}{*}{Titanic} & Die & Forrest & \multirow{2}{*}{Wall-E}\tabularnewline
& Matrix & & Hard & Gump & \tabularnewline
\hline\hline
Matrix & 1.000 & -0.943 & 0.882 & -0.974 & -0.977 \tabularnewline
Titanic & -0.943 & 1.000 & -0.625 & 0.931 & 0.994 \tabularnewline
Die Hard & 0.882 & -0.625 & 1.000 & -0.804 & -1.000 \tabularnewline
Forrest Gump & -0.974 & 0.931 & -0.804 & 1.000 & 0.930 \tabularnewline
Wall-E & -0.977 & 0.994 & -1.000 & 0.930 & 1.000 \tabularnewline
\hline
\end{tabular}
\caption{The \emph{user} and \emph{item} PC similarity for the ratings of Figure \ref{fig:toy-example}.}
\label{fig:pc-sim-example}
\end{center}
\end{small}
\end{figure}
The differences in the rating scales of individual users are often more pronounced than the differences in ratings given to individual items. Therefore, while computing the item similarities, it may be more appropriate to compare ratings that are centered on their \emph{user} mean, instead of their \emph{item} mean. The \emph{Adjusted Cosine} (AC) similarity \cite{sarwar01} is a modification of the PC item similarity that compares user-mean-centered ratings:
\begin{displaymath}
AC(i,j) \, = \, \frac{\sum\limits_{u \in \mathcal{U}_{ij}} (r_{ui} - \ol{r}_u)(r_{uj} - \ol{r}_u)}
{\sqrt{\sum\limits_{u \in \mathcal{U}_{ij}} (r_{ui} - \ol{r}_u)^2 \sum\limits_{u \in \mathcal{U}_{ij}}(r_{uj} - \ol{r}_u)^2}}.
\end{displaymath}
In some cases, AC similarity has been found to outperform PC similarity on the prediction of ratings using an item-based method \cite{sarwar01}.
\subsubsection{Other similarity measures}
Several other measures have been proposed to compute similarities between users or items. One of them is the \emph{Mean Squared Difference} (MSD) \cite{shardanand95}, which evaluates the similarity between two users $u$ and $v$ as the inverse of the average squared difference between the ratings given by $u$ and $v$ on the same items:
\begin{equation}\label{eqn:user-based-msd}
\mathrm{MSD}(u,v) \, = \, \frac{|\mathcal{I}_{uv}|}{\sum\limits_{i \in \mathcal{I}_{uv}} (r_{ui} - r_{vi})^2}.
\end{equation}
While it could be modified to compute the differences on normalized ratings, the MSD similarity is limited compared to PC similarity because it does not allow capturing negative correlations between user preferences or the appreciation of different items. Having such negative correlations may improve the rating prediction accuracy \cite{herlocker02}.
Another well-known similarity measure is the \emph{Spearman Rank Correlation} (SRC) \cite{kendall90rank}. While PC uses the rating values directly, SRC instead considers the ranking of these ratings. Denote by $k_{ui}$ the rating rank of item $i$ in user $u$'s list of rated items (tied ratings get the average rank of their spot). The SRC similarity between two users $u$ and $v$ is evaluated as:
\begin{equation}\label{eqn:user-based-src}
\mathrm{SRC}(u,v) \, = \, \frac{\sum\limits_{i \in \mathcal{I}_{uv}} (k_{ui} - \ol{k}_u) (k_{vi} - \ol{k}_v)}
{\sqrt{\sum\limits_{i \in \mathcal{I}_{uv}} (k_{ui} - \ol{k}_u)^2
\sum\limits_{i \in \mathcal{I}_{uv}} (k_{vi} - \ol{k}_v)^2}},
\end{equation}
where $\ol{k}_u$ is the average rank of the items rated by $u$.
The principal advantage of SRC is that it avoids the problem of rating normalization, described in the last section, by using rankings. On the other hand, this measure may not be the best one when the rating range has only a few possible values, since that would create a large number of tied ratings. Moreover, this measure is typically more expensive than PC as ratings need to be sorted in order to compute their rank.
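In practice, SRC need not be implemented by hand: \texttt{scipy.stats.spearmanr} assigns tied values their average rank, which matches the definition above. Here it is applied, as a sketch, to the ratings of Lucy and Eric on their four co-rated items.
\begin{verbatim}
from scipy.stats import spearmanr

ru = [1, 2, 5, 5]            # Lucy on the co-rated items
rv = [2, 3, 5, 4]            # Eric, in the same item order
src, _ = spearmanr(ru, rv)   # the tied 5s get the average rank 3.5
print(round(src, 3))         # -> 0.949
\end{verbatim}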
Table \ref{fig:sim-mae-comparison} shows the user-based prediction accuracy (MAE) obtained with MSD, SRC and PC similarity measures, on the \emph{MovieLens}\footnote{\url{http://www.grouplens.org/}} dataset \cite{herlocker02}. Results are given for different values of $k$, which represents the maximum number of neighbors used in the predictions. For this data, we notice that MSD leads to the least accurate predictions, possibly due to the fact that it does not take into account negative correlations. Also, these results show PC to be slightly more accurate than SRC. Finally, although PC has been generally recognized as the best similarity measure, see e.g. \cite{herlocker02}, subsequent investigation has shown that the performance of such measures depends greatly on the data \cite{howe08}.
\begin{table}[h!tb]
\begin{center}
\caption{The rating prediction accuracy (MAE) obtained on the \emph{MovieLens} dataset using the Mean Squared Difference (MSD), Spearman Rank Correlation (SRC) and Pearson Correlation (PC) similarity measures. Results are shown for predictions using an increasing number of neighbors $k$.}
\label{fig:sim-mae-comparison}
\begin{small}
\begin{tabular}{x{1cm}||x{1cm}|x{1cm}|x{1cm}}
\hline
$k$ & MSD & SRC & PC \tabularnewline
\hline\hline
5 & 0.7898 & 0.7855 & 0.7829 \tabularnewline
10 & 0.7718 & 0.7636 & 0.7618 \tabularnewline
20 & 0.7634 & 0.7558 & 0.7545 \tabularnewline
60 & 0.7602 & 0.7529 & 0.7518 \tabularnewline
80 & 0.7605 & 0.7531 & 0.7523 \tabularnewline
100 & 0.7610 & 0.7533 & 0.7528 \tabularnewline
\hline
\end{tabular}
\end{small}
\end{center}
\end{table}
\subsubsection{Considering the significance of weights}\label{sec:accounting-for-significance}
Because the rating data is frequently sparse in comparison to the number of users and items of a system, it is often the case that similarity weights are computed using only a few ratings given to common items or made by the same users. For example, if the system has $10,000$ ratings made by $1,000$ users on $100$ items (assuming a uniform distribution of ratings), Table \ref{table:user-vs-item-based-accuracy} shows us that the similarity between two users is computed, on average, by comparing the ratings given by these users to a \emph{single} item. If these few ratings are equal, then the users will be considered as ``fully similar'' and will likely play an important role in each other's recommendations. However, if the users' preferences are in fact different, this may lead to poor recommendations.
Several strategies have been proposed to take into account the \emph{significance} of a similarity weight. The principle of these strategies is essentially the same: reduce the magnitude of a similarity weight when this weight is computed using only a few ratings. For instance, in \emph{Significance Weighting} \cite{herlocker99,ma07}, a user similarity weight $w_{uv}$ is penalized by a factor proportional to the number of commonly rated items, if this number is less than a given parameter $\gamma > 0$:
\begin{equation}
w'_{uv} \, = \, \frac{\min\{|\mathcal{I}_{uv}|, \, \gamma\}}{\gamma} \times w_{uv}.
\end{equation}
Likewise, an item similarity $w_{ij}$, obtained from a few ratings, can be adjusted as
\begin{equation}
w'_{ij} \, = \, \frac{\min\{|\mathcal{U}_{ij}|, \, \gamma\}}{\gamma} \times w_{ij}.
\end{equation}
In \cite{herlocker99,herlocker02}, it was found that using $\gamma \geq 25$ could significantly improve the accuracy of the predicted ratings, and that a value of $50$ for $\gamma$ gave the best results. However, the optimal value for this parameter is data dependent and should be determined using a cross-validation approach.
A characteristic of significance weighting is its use of a threshold $\gamma$ determining when a weight should be adjusted. A more continuous approach, described in \cite{bell07a}, is based on the concept of \emph{shrinkage} where a weak or biased estimator can be improved if it is ``shrunk'' toward a null-value. This approach can be justified using a Bayesian perspective, where the best estimator of a parameter is the posterior mean, corresponding to a linear combination of the prior mean of the parameter (null-value) and an empirical estimator based fully on the data. In this case, the parameters to estimate are the similarity weights and the null value is zero. Thus, a user similarity $w_{uv}$ estimated on a few ratings is shrunk as
\begin{equation}
w'_{uv} \, = \, \frac{|\mathcal{I}_{uv}|}{|\mathcal{I}_{uv}| + \beta} \times w_{uv},
\end{equation}
where $\beta > 0$ is a parameter whose value should also be selected using cross-validation. In this approach, $w_{uv}$ is shrunk proportionally to $\beta/|\mathcal{I}_{uv}|$, such that almost no adjustment is made when $|\mathcal{I}_{uv}| \gg \beta$. Item similarities can be shrunk in the same way:
\begin{equation}
w'_{ij} \, = \, \frac{|\mathcal{U}_{ij}|}{|\mathcal{U}_{ij}| + \beta} \times w_{ij}.
\end{equation}
As reported in \cite{bell07a}, a typical value for $\beta$ is 100.
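Both adjustments are one-line transformations; the sketch below (function names ours) uses the parameter values reported above ($\gamma = 50$, $\beta = 100$) as defaults.
\begin{verbatim}
# Significance weighting: linear penalty below gamma co-ratings.
def significance_weight(w, n_common, gamma=50):
    return min(n_common, gamma) / gamma * w

# Shrinkage toward the null value 0; the adjustment becomes
# negligible when n_common is much larger than beta.
def shrink(w, n_common, beta=100):
    return n_common / (n_common + beta) * w
\end{verbatim}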
\subsubsection{Considering the variance of ratings}
Ratings made by two users on universally liked/disliked items may not be as informative as those made for items with a greater rating variance. For instance, most people like classic movies such as ``The Godfather'' so basing the weight computation on such movies would produce artificially high values. Likewise, a user that always rates items in the same way may provide less predictive information than one whose preferences vary from one item to another.
A recommendation approach that addresses this problem is the \emph{Inverse User Frequency} \cite{breese98}. Based on the information retrieval notion of \emph{Inverse Document Frequency} (IDF), a weight $\lambda_i$ is given to each item $i$, in proportion to the log-ratio of users that have rated $i$:
\begin{displaymath}
\lambda_i \, = \, \log\frac{|\mathcal{U}|}{|\mathcal{U}_i|}.
\end{displaymath}
In the \emph{Frequency-Weighted Pearson Correlation} (FWPC), the correlation between the ratings given by two users $u$ and $v$ to an item $i$ is weighted by $\lambda_i$:
\begin{equation}\label{eqn:fw-user-based-pc}
\mathrm{FWPC}(u,v) \, = \, \frac{\sum\limits_{i \in \mathcal{I}_{uv}} \lambda_i (r_{ui} - \ol{r}_u) (r_{vi} - \ol{r}_v)}
{\sqrt{\sum\limits_{i \in \mathcal{I}_{uv}} \lambda_i (r_{ui} - \ol{r}_u)^2
\sum\limits_{i \in \mathcal{I}_{uv}} \lambda_i (r_{vi} - \ol{r}_v)^2}}.
\end{equation}
This approach, which was found to improve the prediction accuracy of a user-based recommendation method \cite{breese98}, could also be adapted to the computation of item similarities. More advanced strategies have also been proposed to consider rating variance. One of these strategies, described in \cite{jin04}, computes the factors $\lambda_i$ by maximizing the average similarity between users.
\subsubsection{Considering the target item}
If the goal is to predict ratings with a user-based method, more reliable correlation values can be obtained if the target item is considered in their computation. In \cite{baltrunas2009item}, the user-based PC similarity is extended by weighting the summation terms corresponding to an item $i$ by the similarity between $i$ and the target item $j$:
\begin{equation}\label{eqn:WPCCnorm}
\mathrm{WPC}_j(u,v) \, = \, \frac{\sum\limits_{i \in \mathcal{I}_{uv}} w_{ij} \, (r_{ui} - \ol{r}_u) (r_{vi} - \ol{r}_v)}
{\sqrt{\sum\limits_{i \in \mathcal{I}_{uv}} w_{ij} \, (r_{ui} - \ol{r}_u)^2
\sum\limits_{i \in \mathcal{I}_{uv}} w_{ij} \, (r_{vi} - \ol{r}_v)^2}}.
\end{equation}
The item weights $w_{ij}$ can be computed using PC similarity or obtained by considering the items' content (e.g., the common genres for movies). Other variations of this similarity metric and their impact on the prediction accuracy are described in \cite{baltrunas2009item}. Note, however, that this model may require to recompute the similarity weights for each predicted rating, making it less suitable for online recommender systems.
\subsection{Neighborhood selection}
The number of nearest-neighbors to select and the criteria used for this selection can also have a serious impact on the quality of the recommender system. The selection of the neighbors used in the recommendation of items is normally done in two steps: 1) a global filtering step where only the most likely candidates are kept, and 2) a per prediction step which chooses the best candidates for this prediction.
\subsubsection{Pre-filtering of neighbors}
In large recommender systems that can have millions of users and items, it is usually not possible to store the (non-zero) similarities between each pair of users or items, due to memory limitations. Moreover, doing so would be extremely wasteful as only the most significant of these values are used in the predictions. The pre-filtering of neighbors is an essential step that makes neighborhood-based approaches practicable by reducing the amount of similarity weights to store, and limiting the number of candidate neighbors to consider in the predictions. There are several ways in which this can be accomplished:
\begin{itemize}
\item \textbf{Top-$N$ filtering:} For each user or item, only a list of the $N$ nearest-neighbors and their respective similarity weight is kept. To avoid problems with efficiency or accuracy, $N$ should be chosen carefully. Thus, if $N$ is too large, an excessive amount of memory will be required to store the neighborhood lists and predicting ratings will be slow. On the other hand, selecting too small a value for $N$ may reduce the coverage of the recommendation method, which causes some items to be never recommended.
\vspace{2mm}
\item \textbf{Threshold filtering:} Instead of keeping a fixed number of nearest-neighbors, this approach keeps all the neighbors whose similarity weight's magnitude is greater than a given threshold $w_\text{min}$. While this is more flexible than the previous filtering technique, as only the most significant neighbors are kept, the right value of $w_\text{min}$ may be difficult to determine.
\vspace{2mm}
\item \textbf{Negative filtering:} In general, negative rating correlations are less reliable than positive ones. Intuitively, this is because strong positive correlation between two users is a good indicator of their belonging to a common group (e.g., teenagers, science-fiction fans, etc.). However, although negative correlation may indicate membership to different groups, it does not tell how different these groups are, or whether these groups are compatible for some other categories of items. While certain experimental investigations \cite{herlocker99,herlocker04} have found negative correlations to provide no significant improvement in the prediction accuracy, in certain settings they seem to have a positive effect (see e.g., \cite{EASE}). Whether such correlations can be discarded depends on the data and should be examined on a case-by-case basis.
\end{itemize}
Note that these three filtering approaches are not exclusive and can be combined to fit the needs of the recommender system. For instance, one could discard all negative similarities \emph{as well as} those that are not in the top-$N$ lists.
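As an illustration, here is a minimal NumPy sketch (function name ours) that combines top-$N$ and threshold filtering on the rows of a dense similarity matrix; negative filtering would simply zero the negative entries beforehand.
\begin{verbatim}
import numpy as np

def prefilter(W, N=50, w_min=0.0):
    W = W.copy()
    for u in range(W.shape[0]):
        mag = np.abs(W[u])
        if N < mag.size:
            cutoff = np.partition(mag, -N)[-N]  # N-th largest magnitude
            W[u, mag < cutoff] = 0.0            # ties at cutoff are kept
        W[u, np.abs(W[u]) <= w_min] = 0.0
    return W
\end{verbatim}
In a large system, the surviving weights would of course be stored in a sparse structure rather than in the dense matrix used here.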
\subsubsection{Neighbors in the predictions}
Once a list of candidate neighbors has been computed for each user or item, the prediction of new ratings is normally made with the $k$-nearest-neighbors, that is, the $k$ neighbors whose similarity weight has the greatest magnitude. The choice of $k$ can also have a significant impact on the accuracy and performance of the system.
As shown in Table \ref{fig:sim-mae-comparison}, the prediction accuracy observed for increasing values of $k$ typically follows a \emph{concave} function. Thus, when the number of neighbors is restricted by using a small $k$ (e.g., $k < 20$), the prediction accuracy is normally low. As $k$ increases, more neighbors contribute to the prediction and the variance introduced by individual neighbors is averaged out. As a result, the prediction accuracy improves. Finally, the accuracy usually drops when too many neighbors are used in the prediction (e.g., $k > 50$), due to the fact that the few strong local relations are ``diluted'' by the many weak ones. Although a number of neighbors between $20$ to $50$ is most often described in the literature, see e.g. \cite{herlocker02,herlocker04}, the optimal value of $k$ should be determined by cross-validation.
On a final note, more serendipitous recommendations may be obtained, at the cost of a decrease in accuracy, by basing these recommendations on a few very similar users. For example, the system could find the user most similar to the active one and recommend the new item that has received the highest rating from this user.
\section{Advanced techniques}\label{sec:advanced-techniques}
The neighborhood approaches based on rating correlation, such as the ones presented in the previous sections, have three important limitations:
\begin{itemize}
\item \textbf{Limited expressiveness:} Traditional neighborhood-based methods determine the neighborhood of users or items using some predefined similarity measure like cosine or PC. Recommendation algorithms that rely on such similarity measures have been shown to enjoy remarkable recommendation accuracy in certain settings. However, their performance can vary considerably depending on whether the chosen similarity measures conform with the latent characteristics of the dataset onto which they are applied.
\vspace{2mm}
\item \textbf{Limited coverage:} Because rating correlation measures the similarity between two users by comparing their ratings for the same items, users can be neighbors \emph{only if} they have rated common items. This assumption is very limiting, as users having rated a few or no common items may still have similar preferences. Moreover, since only items rated by neighbors can be recommended, the coverage of such methods can also be limited. This limitation also applies when two items have only a few or no co-ratings.
\vspace{2mm}
\item \textbf{Sensitivity to sparse data:} Another consequence of rating correlation, addressed briefly in Section \ref{sec:ub-vs-ib-recommendation}, is the fact that the accuracy of neighborhood-based recommendation methods suffers from the lack of available ratings. Sparsity is a problem common to most recommender systems due to the fact that users typically rate only a small proportion of the available items \cite{billsus98,good99,sarwar98,sarwar00b}. This is aggravated by the fact that users or items newly added to the system may have no ratings at all, a problem known as \emph{cold-start} \cite{schein02}. When the rating data is sparse, two users or items are unlikely to have common ratings, and consequently, neighborhood-based approaches will predict ratings using a very limited number of neighbors. Moreover, similarity weights may be computed using only a small number of ratings, resulting in biased recommendations (see Section \ref{sec:accounting-for-significance} for this problem).
\end{itemize}
A common solution to the latter two problems is to fill the missing ratings with default values \cite{breese98,deshpande04}, such as the middle value of the rating range, or the average user or item rating. A more reliable approach is to use content information to fill out the missing ratings \cite{degemmis07,good99,konstan97,melville02}. For instance, the missing ratings can be provided by autonomous agents called \emph{filterbots} \cite{good99,konstan97}, that act as ordinary users of the system and rate items based on some specific characteristics of their content. The missing ratings can instead be predicted by a content-based approach \cite{melville02}. Furthermore, content similarity can also be used ``instead of'' or ``in addition to'' rating correlation similarity to find the nearest-neighbors employed in the predictions \cite{balabanovic97,li04,pazzani99,soboroff99}. Finally, data sparsity can also be tackled by acquiring new ratings with active learning techniques. In such techniques, the system interactively queries the user to gain a better understanding of his or her preferences. A more detailed presentation of interactive and session-based techniques is given in Chapter~\ref{4-session-rs} of this book. These solutions, however, also have their own drawbacks. For instance, assigning default values to missing ratings may induce bias in the recommendations. Also, item content may not be available to compute ratings or similarities.
This section presents two approaches that aim to tackle the aforementioned challenges: \emph{learning-based} and \emph{graph-based} methods.
\subsection{Learning-based methods}
In the methods of this family, the similarity or affinity between users and items is obtained by defining a parametric model that describes the relation between users, items or both, and then fitting the model parameters through an optimization procedure.
Using a learning-based method has significant advantages. First, such methods can capture high-level patterns and trends in the data, are generally more robust to outliers, and are known to generalize better than approaches solely based on local relations. In recommender systems, this translates into greater accuracy and stability in the recommendations \cite{koren08}. Also, because the relations between users and items are encoded in a limited set of parameters, such methods normally require less memory than other types of approaches. Finally, since the parameters are usually learned offline, the online recommendation process is generally faster.
Learning-based methods that use neighborhood or similarity information can be divided in two categories: factorization methods and adaptive neighborhood learning methods. These categories are presented in the following sections.
\subsubsection{Factorization methods}
Factorization methods \cite{bell07a,billsus98, puresvd, goldberg01,koren08, eigenrec, sarwar00b,takacs08,takacs09} address the problems of limited coverage and sparsity by projecting users and items into a reduced latent space that captures their most salient features. Because users and items are compared in this dense subspace of high-level features, instead of the ``rating space,'' more meaningful relations can be discovered. In particular, a relation between two users can be found, even though these users have rated different items. As a result, such methods are generally less sensitive to sparse data \cite{bell07a,billsus98,sarwar00b}.
There are essentially two ways in which factorization can be used to improve recommender systems: 1) factorization of a sparse \emph{similarity} matrix, and 2) factorization of a user-item \emph{rating} matrix.
\paragraph{\textbf{Factorizing the similarity matrix}}
The similarity matrices obtained from neighborhood measures such as rating correlation are usually very sparse, since the average number of ratings per user is much less than the total number of items. A simple solution to densify a sparse similarity matrix is to compute a low-rank approximation of this matrix with a factorization method.
Let $W$ be a symmetric matrix of rank $n$ representing either user or item similarities. To simplify the presentation, we will suppose the latter case. We wish to approximate $W$ with a matrix $\hat{W} = Q\tr{Q}$ of lower rank $k < n$, by minimizing the following objective:
\begin{eqnarray*}
E(Q) & \, = \, & ||W - Q \tr{Q}||^2_F \\
& \, = \, & \sum_{i,j} \left(w_{ij} - \vec q_i\tr{\vec q_j}\right)^2,
\end{eqnarray*}
where $||M||_F = \sqrt{\sum_{i,j} m^2_{ij}}$ is the matrix Frobenius norm. Matrix $\hat{W}$ can be seen as a ``compressed'' and less sparse version of $W$. Finding the factor matrix $Q$ is equivalent to computing the eigenvalue decomposition of $W$:
\begin{displaymath}
W \, = \, V D \tr{V},
\end{displaymath}
where $D$ is a diagonal matrix containing the $|\mathcal{I}|$ eigenvalues of $W$, and $V$ is a $|\mathcal{I}|\!\times\!|\mathcal{I}|$ orthogonal matrix containing the corresponding eigenvectors. Let $V_k$ be a matrix formed by the $k$ principal (normalized) eigenvectors of $W$, which correspond to the axes of the $k$-dimensional latent subspace. The coordinates $\vec q_i \in \mathfrak{R}^k$ of an item $i$ in this subspace are given by the $i$-th row of matrix $Q = V_k D_k^{1/2}$. Furthermore, the item similarities computed in this latent subspace are given by matrix
\begin{eqnarray}
\hat{W} & \, = \, & Q \tr{Q}\nonumber \\
& \, = \, & V_k D_k \tr{V}_k.
\end{eqnarray}
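A sketch of this densification with NumPy follows; since a correlation-based similarity matrix need not be positive semi-definite, negative eigenvalues among the $k$ selected ones are clamped to zero here.
\begin{verbatim}
import numpy as np

def low_rank_similarity(W, k):
    d, V = np.linalg.eigh(W)        # eigenvalues in ascending order
    idx = np.argsort(d)[::-1][:k]   # indices of the k largest
    Q = V[:, idx] * np.sqrt(np.maximum(d[idx], 0.0))  # V_k D_k^(1/2)
    return Q @ Q.T                  # dense approximation of W
\end{verbatim}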
This approach was used to recommend jokes in the Eigentaste system \cite{goldberg01}. In Eigentaste, a matrix $W$ containing the PC similarities between pairs of items is decomposed to obtain the latent subspace defined by the $k$ principal eigenvectors of $W$. A user $u$, represented by the $u$-th row $\vec r_u$ of the rating matrix $R$, is projected in the plane defined by $V_k$:
\begin{displaymath}
\vec r'_u \, = \, \vec r_u V_k.
\end{displaymath}
In an offline step, the users of the system are clustered in this subspace using a recursive subdivision technique. Then, the rating of user $u$ for an item $i$ is evaluated as the mean rating for $i$ made by users in the same cluster as $u$. This strategy is related to the well-known spectral clustering method \cite{shi2000normalized}.
\paragraph{\textbf{Factorizing the rating matrix}}
The problems of cold-start and limited coverage can also be alleviated by factorizing the user-item rating matrix. Once more, we want to approximate the $|\mathcal{U}|\!\times\!|\mathcal{I}|$ rating matrix $R$ of rank $n$ by a matrix $\hat{R} = P\tr{Q}$ of rank $k < n$, where $P$ is a $|\mathcal{U}|\!\times\!k$ matrix of \emph{user} factors and $Q$ a $|\mathcal{I}|\!\times\!k$ matrix of \emph{item} factors. This task can be formulated as finding matrices $P$ and $Q$ which minimize the following function:
\begin{eqnarray*}
E(P,Q) & \, = \, & ||R - P \tr{Q}||^2_F \\
& \, = \, & \sum_{u,i} \left(r_{ui} - \vec p_u\tr{\vec q_i}\right)^2.
\end{eqnarray*}
The optimal solution can be obtained by the Singular Value Decomposition (SVD) of $R$: $P = U_k D_k^{1/2}$ and $Q = V_k D_k^{1/2}$, where $D_k$ is a diagonal matrix containing the $k$ largest singular values of $R$, and $U_k, V_k$ respectively contain the left and right singular vectors corresponding to these values.
However, there is a significant problem with applying SVD directly to the rating matrix $R$: most values $r_{ui}$ of $R$ are undefined, since there may not be a rating given to $i$ by $u$. Although it is possible to assign a default value to $r_{ui}$, as mentioned above, this would introduce a bias in the data. More importantly, this would make the large matrix $R$ dense and, consequently, render impractical the SVD decomposition of $R$. A common solution to this problem is to learn the model parameters using only the known ratings \cite{bell07a,koren08,takacs07,takacs09}. For instance, suppose the rating of user $u$ for item $i$ is estimated as
\begin{equation}\label{eqn:svd-rating}
\hat{r}_{ui} \, = \, b_u \, + \, b_i \, + \, \vec p_u \tr{\vec q}_i,
\end{equation}
where $b_u$ and $b_i$ are parameters representing the user and item rating biases. The model parameters can be learned by minimizing the following objective function:
\begin{equation}\label{eqn:svd-factor}
E(P,Q,\vec b) \, = \, \sum_{r_{ui} \in \mathcal{R}} (r_{ui} - \hat{r}_{ui})^2
\, + \, \lambda\left(||\vec p_u||^2 + ||\vec q_i||^2 + b^2_u + b^2_i\right).
\end{equation}
The second term of the function is a regularization term added to avoid overfitting. Parameter $\lambda$ controls the level of regularization. A more comprehensive description of this recommendation approach can be found in Chapter~\ref{15-collab-filt-rs} of this book.
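A compact sketch of this learning procedure is given below: stochastic gradient descent over the known ratings only, with the regularized squared error of Equation (\ref{eqn:svd-factor}); the function name and all hyper-parameter values are illustrative.
\begin{verbatim}
import numpy as np

def sgd_factorize(ratings, n_users, n_items, k=20,
                  lam=0.02, lr=0.005, epochs=20):
    rng = np.random.default_rng(0)
    P = 0.1 * rng.standard_normal((n_users, k))
    Q = 0.1 * rng.standard_normal((n_items, k))
    bu, bi = np.zeros(n_users), np.zeros(n_items)
    for _ in range(epochs):
        for u, i, r in ratings:   # only the observed (u, i, r) triples
            e = r - (bu[u] + bi[i] + P[u] @ Q[i])
            bu[u] += lr * (e - lam * bu[u])
            bi[i] += lr * (e - lam * bi[i])
            P[u], Q[i] = (P[u] + lr * (e * Q[i] - lam * P[u]),
                          Q[i] + lr * (e * P[u] - lam * Q[i]))
    return P, Q, bu, bi
\end{verbatim}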
The SVD model of Equation \ref{eqn:svd-rating} can be transformed into a similarity-based method by supposing that the profile of a user $u$ is determined implicitly by the items he or she has rated. Thus, the factor vector of $u$ can be defined as a weighted combination of the factor vectors $\vec s_j$ corresponding to the items $j$ rated by this user:
\begin{equation}\label{eqn:user-factor-est}
\vec p_u \, = \, |\mathcal{I}_u|^{-\alpha} \sum\limits_{j \in \mathcal{I}_u} c_{uj} \, \vec s_j.
\end{equation}
In this formulation, $\alpha$ is a normalization constant typically set to $\alpha=1/2$, and $c_{uj}$ is a weight representing the contribution of item $j$ to the profile of $u$. For instance, in the SVD++ model \cite{koren08} this weight is defined as the bias-corrected rating of $u$ for item $j$: $c_{uj} = r_{uj} - b_u - b_j$. Other approaches, such as the FISM
\cite{Kabbur2013} and NSVD \cite{paterek07} models, instead use constant weights: $c_{uj} = 1$.
Using the formulation of Equation \ref{eqn:user-factor-est}, a rating $r_{ui}$ is predicted as
\begin{equation}
\hat{r}_{ui} \, = \, b_u \, + \, b_i \, + \, |\mathcal{I}_u|^{-\alpha} \sum\limits_{j \in \mathcal{I}_u} c_{uj} \, \vec s_j \tr{\vec q}_i.
\end{equation}
Like the standard SVD model, the parameters of this model can be learned by minimizing the objective function of Equation (\ref{eqn:svd-factor}), for instance, using gradient descent optimization.
Note that, instead of having both user and item factors, we now have two different sets of item factors, i.e., $\vec q_i$ and $\vec s_j$. These vectors can be interpreted as the factors of an asymmetric item-item similarity matrix $W$, where
\begin{equation}
w_{ij} \, = \, \vec s_i \tr{\vec q}_j.
\end{equation}
As mentioned in \cite{koren08}, this similarity-based factorization approach has several advantages over the traditional SVD model. First, since there are typically more users than items in a recommender system, replacing the user factors by a combination of item factors reduces the number of parameters in the model, which makes the learning process faster and more robust. Also, by using item similarities instead of user factors, the system can handle new users without having to re-train the model. Finally, as in item-similarity neighborhood methods, this model makes it possible to justify a rating to a user by showing this user the items that were most involved in the prediction.
In FISM \cite{Kabbur2013}, the prediction of a rating $r_{ui}$ is made without considering the factors of $i$:
\begin{equation}
\hat{r}_{ui} \, = \, b_u \, + \, b_i \, + \,
\big(|\mathcal{I}_u| - 1\big)^{-\alpha} \sum\limits_{j \in \mathcal{I}_u \! \setminus \! \{i\}} \vec s_j \tr{\vec q}_i.
\end{equation}
This modification, which corresponds to ignoring the diagonal entries in the item similarity matrix, avoids the problem of having an item recommending itself and has been shown to give better performance when the number of factors is high.
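A minimal scoring routine for this model is sketched below (Python/NumPy; the row-wise layout of the two item-factor matrices \texttt{S} and \texttt{Q} and all names are assumptions; the normalizer equals $(|\mathcal{I}_u|-1)^{-\alpha}$ whenever $i$ belongs to the user's profile):
\begin{verbatim}
import numpy as np

def fism_score(u_items, i, bu, bi, S, Q, alpha=0.5):
    """FISM estimate of r_ui: the factors of item i itself are
    excluded from the aggregated user profile."""
    neighbors = [j for j in u_items if j != i]
    if not neighbors:
        return bu + bi             # nothing to aggregate
    profile = len(neighbors) ** (-alpha) * S[neighbors].sum(axis=0)
    return bu + bi + profile @ Q[i]
\end{verbatim}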
\subsubsection{Neighborhood-learning methods}
Standard neighborhood-based recommendation algorithms determine the neighborhood of users or items directly from the data, using some pre-defined similarity measure like PC.
However, subsequent developments in the field of item recommendation have shown the advantage of learning the neighborhood automatically from the data, instead of using a pre-defined similarity measure \cite{Koenigstein2013,koren08,Natarajan2013,Rendle2009}.
\paragraph{\textbf{Sparse linear neighborhood model}}
A representative neighborhood-learning recommendation method is the {$\mathtt{SLIM}$}\xspace algorithm, developed by Ning \etal~\cite{Ning2011}.
In {$\mathtt{SLIM}$}\xspace, a new rating is predicted as a sparse aggregation of existing ratings in a user's profile,
\begin{equation}\label{eqn:slimpred}
\hat{r}_{ui} \, = \, \vec r_u \tr{\vec w}_i,
\end{equation}
where $\vec r_u$ is the $u$-th row of the rating matrix $R$ and $\vec w_i$ is a sparse row vector containing $|\mathcal{I}|$ aggregation
coefficients. Essentially, the non-zero entries in $\vec w_i$ correspond to the neighbor items of an item $i$.
The neighborhood parameters are learned by minimizing the squared prediction error. Standard regularization and sparsity are enforced by penalizing the $\ell_2$-norm and $\ell_1$-norm of the parameters. The combination of these two types of regularizers in a regression problem is known as elastic net regularization \cite{Zou2005}. This learning process can be expressed as the following optimization problem:
\begin{eqnarray} \label{eqn:slim-opt}
\displaystyle{
\begin{aligned}
& \underset{W}{\text{minimize}}
& & \frac{1}{2}\| R - R W \|^2_F
+ \frac{\beta}{2} \| W \|^2_F
+ \lambda \| W \|_1 \\
& \text{subject to}
& & W \ge 0 \\
&
& & \mbox{diag}(W) = 0.
\end{aligned}
}
\end{eqnarray}
The constraint $\mbox{diag}(W) = 0$ is added to the model to avoid trivial solutions (e.g., $W$ corresponding to the identity matrix) and ensure that $r_{ui}$ is not used to compute $\hat{r}_{ui}$ during the recommendation process. Parameters $\beta$ and $\lambda$ control the amount of each type of regularization. Moreover, the non-negativity constraint on $W$ imposes the relations between neighbor items to be positive. Dropping the non-negativity as well as the sparsity constraints has been recently explored in~\cite{EASE}, and was shown to work well on several datasets with small number of items with respect to users. Note, however, that without the sparsity constraint the resulting model will be fully dense; a fact that imposes practical limitations on the applicability of such approaches in large item-space regimes.
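Because neither the loss nor the constraints couple different columns of $W$, the problem decouples and can be solved one item at a time with any elastic-net solver. The sketch below uses scikit-learn's \texttt{ElasticNet} (whose \texttt{alpha}/\texttt{l1\_ratio} parametrization matches $\beta$ and $\lambda$ only up to a standard rescaling; a dense $R$ is assumed for clarity):
\begin{verbatim}
import numpy as np
from sklearn.linear_model import ElasticNet

def slim_train(R, alpha=0.1, l1_ratio=0.5):
    """Column-wise elastic-net estimation of the SLIM matrix W."""
    n_items = R.shape[1]
    W = np.zeros((n_items, n_items))
    model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, positive=True,
                       fit_intercept=False, max_iter=500)
    for i in range(n_items):
        X = R.copy()
        X[:, i] = 0.0          # enforce diag(W) = 0: i cannot predict itself
        model.fit(X, R[:, i])
        W[:, i] = model.coef_
    return W
\end{verbatim}
Copying $R$ for every column is wasteful and is done here only to keep the sketch short; practical implementations operate on sparse matrices and temporarily zero out a single column.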
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{slim_example.PNG}
\caption{A simple illustration of {$\mathtt{SLIM}$}\xspace. The method works by first building an item-to-item model, based on $R$. Intuitively, this item model expresses each item (i.e., each column of the original rating matrix $R$) as a \textit{sparse linear combination} of the rest of the items (i.e., the other columns of $R$). Then, given $W$, new recommendations for a target user $u$ can be readily produced by multiplying the row corresponding to user $u$ (i.e. the $u$-th row of $R$), with the learned item model, $W$.}
\label{fig:slim}
\end{figure}
\paragraph{\textbf{Sparse neighborhood with side information}}
Side information, such as user profile attributes (e.g., age, gender, location) or item descriptions/tags, is becoming increasingly available in e-commerce applications. Properly exploited, this rich source of information can significantly improve the performance of conventional recommender systems \cite{Adams2010,Agarwal2011,Yoo2009,Singh2008}.
Item side information can be integrated in the {$\mathtt{SLIM}$}\xspace model by supposing that the co-rating profile of two items is correlated to the properties encoded in their side information \cite{r14}. To enforce such correlations in the model, an additional requirement is added, where both the user-item rating matrix $R$ and the item side information matrix $F$ should be reproduced by the same sparse linear aggregation. That is, in addition to satisfying $R \sim R W$, the coefficient matrix $W$ should also satisfy $F \sim F W$. This is achieved by solving the following optimization problem:
\begin{eqnarray}
\label{opt:cslim}
\displaystyle{
\begin{aligned}
& \underset{W}{\text{minimize}}
& & \frac{1}{2}\| R - RW \|^2_F \, + \, \frac{\alpha}{2}\| F - FW \|^2_F
\, + \, \frac{\beta}{2} \| W \|^2_F \, + \, \lambda \| W \|_1 \; \\
& \text{subject to}
& & W \ge 0, \;\\
&
& & \mbox{diag}(W) = 0. \\
\end{aligned}
}
\end{eqnarray}
The parameter $\alpha$ is used to control the relative importance of the user-item rating information $R$ and the item side information $F$ when they are used to learn $W$.
In some cases, requiring that the aggregation coefficients be the same for both $R$ and $F$ can be too strict. An alternate model relaxes this constraint by requiring these two sets of aggregation coefficients to be similar. Specifically, it uses an aggregation coefficient matrix $Q$ such that $F \sim F Q$ and $W \sim Q$. Matrices $W$ and $Q$ are learned as
the minimizers of the following optimization problem:
\begin{eqnarray}
\label{opt:rcslim}
\displaystyle{
\begin{aligned}
& \underset{W, Q}{\text{minimize}}
& & \frac{1}{2}\| R - RW \|^2_F \, + \, \frac{\alpha}{2}\| F - FQ \|^2_F
\, + \, \frac{\beta_1}{2} \| W - Q\|^2_F \; \\
&
& & \quad \, + \, \frac{\beta_2}{2} \big(\| W \|^2_F + \| Q \|^2_F\big)
\, + \, \lambda \big( \| W \|_1 + \| Q \|_1\big) \;\\
& \text{subject to}
& & W, Q \ge 0, \\
&
& & \mbox{diag}(W) = 0, \ \mbox{diag}(Q) = 0. \; \\
\end{aligned}
}
\end{eqnarray}
Parameter $\beta_1$ controls how much $W$ and $Q$ are allowed to be different from each other.
In \cite{r14}, item reviews in the form of short texts were used as side information in the models described above. These models were shown to outperform the {$\mathtt{SLIM}$}\xspace method without side information, as well as other approaches that use side information, in the top-$N$ recommendation task.
\paragraph{\textbf{Global and local sparse neighborhood models}}
A global item model may not always be sufficient to capture the preferences of every user, especially when there exist subsets of users with diverse or even opposing preferences. In cases like these, training \textit{local item models} (i.e., item models that are estimated based on user subsets) is expected to be beneficial compared to adopting a single item model for all users in the system. An example of such a case can be seen in Figure~\ref{fig:GLSLIM}.
$\mathtt{GLSLIM}$\xspace~\cite{GLSLIM} aims to address the above issue. In a nutshell, $\mathtt{GLSLIM}$\xspace computes top-$N$ recommendations that utilize user-subset specific models (local models) and a global model. These models are jointly optimized along with computing the user-specific parameters that weigh their contribution in the production of the final recommendations. The underlying model used for the estimation of local and global item similarities is {$\mathtt{SLIM}$}\xspace.
Specifically, $\mathtt{GLSLIM}$\xspace estimates a global item-item coefficient matrix $S$ and also $k$ local item-item coefficient matrices $S^{p_{u}}$, where $k$ is the number of user subsets and $p_{u} \in\{1, \ldots, k\}$ is the index of the user subset, for which a local matrix $S^{p_{u}}$ is estimated. The recommendation score of user $u$, who belongs to subset $p_{u}$, for item $i$ is estimated as:
\begin{equation}
\label{eq:GLSLIM_prediction}
\tilde{r}_{ui} \, = \, \sum_{l \in R_{u}} g_{u} s_{li} \, + \, \left(1-g_u\right) s_{li}^{p_u}.
\end{equation}
Term $s_{li}$ depicts the global item-item similarity between the $l$-th item rated by $u$ and the target item $i$. Term $s_{li}^{p_{u}}$ captures the item-item similarity between the $l$-th item rated by $u$ and target item $i$, based on the local model that corresponds to user-subset, $p_{u}$, to which user $u$ belongs. Finally, the term $g_{u}\in[0,1]$ is the personalized weight of user $u$, which controls the involvement of the global and the local components, in the final recommendation score.
\begin{figure}[t]
\centering
\includegraphics[width = \textwidth]{glslim_example.pdf}
\caption[GLSLIM Example]{$\mathtt{GLSLIM}$\xspace Motivating Example. The figure shows the training matrices $R$ of two different datasets. Both contain two user subsets. Let's assume that we are trying to compute recommendation scores for item $i$, and that the recommendations are computed using an item-item similarity-based method. Observe that in Case A there exist a set of items that have been rated solely by users that belong to Subset 1, while also a set of items which have been rated by users in both Subsets. Notice that the similarities of items $c$ and $i$ will be different when estimated based on the feedback of (a) Subset 1 alone; (b) Subset 2 alone; and, (c) the complete set of users. Specifically, their
similarity will be zero for the users of Subset 2 (as item $i$ is not rated by the users in that subset), but it will be e.g., $l_{ic} > 0$ for the users of Subset 1, as well as e.g., $g_{ic} > 0$ when estimated globally, with $g_{ic}$ being potentially different than the locally estimated $l_{ic}$.
Combining global and local item-item similarity models, in settings like this could help capture potentially diverse user preferences which would otherwise be missed if only a single global model, was computed instead. On the other hand, for datasets like the one pictured in Case B the similarity between e.g., items $i$ and $j$ will be the same, regardless of whether it is estimated globally, or locally for Subset 1, since both items have been rated only by users of Subset 1. }
\label{fig:GLSLIM}
\end{figure}
The estimation of the global and the local item models, the user assignments to subsets, and the personalized weights is achieved by alternating minimization. Initially, the users are separated into subsets, using a clustering algorithm. Weights $g_{u}$ are initialized to 0.5 for all users, in order to enforce equal contribution of the global and the local components. Then the coefficient matrices $S$ and $S^{p_u},$ with $p_{u} \in\{1, \ldots, k\}$, as well as personalized weights $g_{u}$ are estimated, by repeating the following two step procedure:
\vspace{2mm}
\noindent\textbf{Step 1: Estimating local and global models:} The training matrix $R$ is split into $k$ training matrices $R^{p_{u}}$ of size $|\set{U}| \times |\set{I}|,$ with $p_{u} \in\{1, \ldots, k\}$. Every row $u$ of $R^{p_{u}}$ coincides with the $u$-th row of $R$, if user $u$ belongs in the $p_{u}$-th subset; or is left empty, otherwise. In order to estimate the $i$-th column, $\vec s_{i}$, of matrix $S$, and the $i$-th columns, $\vec s_i^{p_{u}}$, of matrices $S^{p_u}, p_u \in\{1, \ldots, k\}$, $\mathtt{GLSLIM}$\xspace solves the following optimization problem:
\begin{equation}
\MINtwo{\vec s_{i},\,\left\{\vec s_{i}^{1}, \ldots, \vec s_{i}^{k}\right\}}{\frac{1}{2}\left\|\vec r_{i}-{\rm g} \odot R \vec s_{i}-{\rm g}^{\prime} \odot \sum_{p_{u}=1}^{k} R^{p_{u}} \vec s_{i}^{p_{u}}\right\|_{2}^{2}
\, + \, \frac{1}{2} \beta_{g}\left\|\vec s_{i}\right\|_{2}^{2}
\, + \, \lambda_{g}\left\|\vec s_{i}\right\|_{1}+ \\
& + \, \sum_{p_{u}=1}^{k} \frac{1}{2} \beta_{l}\left\|\vec s_{i}^{p_{u}}\right\|_{2}^{2} \, + \, \lambda_{l}\left\|\vec s_{i}^{p_{u}}\right\|_{1} \\ }
{\vec s_{i} \geq 0, \ \ \vec s_{i}^{p_{u}} \geq 0, \ \forall p_{u} \in\{1, \ldots, k\}}{[\vec s_{i}]_i = 0, \ \ [\vec s^{p_u}_i]_i=0, \ \forall p_{u}}
\end{equation}
\noindent where $\vec r_{i}$ is the $i$-th column of $R$; and, $\beta_{g}$, $\beta_{l}$ are the $l_{2}$ regularization hyperparameters corresponding to $S$, $S^{p_{u}}, \forall p_{u} \in$ $\{1, \ldots, k\}$, respectively. Finally, $\lambda_{g}$, $\lambda_{l}$ are the $l_{1}$ regularization hyperparameters controlling the sparsity of $S$, $S^{p_{u}}$ $\forall p_{u} \in\{1, \ldots, k\}$, respectively. The constraint $[\vec s_{i}]_i=0$ makes sure that when computing $\hat{r}_{ui}$, the element $r_{ui}$ is not used. Similarly, the constraints $[\vec s^{p_u}_i]_i=0$ $\forall p_{u} \in\{1, \ldots, k\}$, enforce this property for the local sparse coefficient matrices as well.
\vspace{2mm}
\noindent\textbf{Step 2: Updating user subsets:}
With the global and local models fixed, $\mathtt{GLSLIM}$\xspace proceeds to update the user subsets. While doing that, it also determines the personalized weight $g_{u}$. Specifically, the computation of the personalized weight $g_{u}$, relies on minimizing the squared error of Equation~\eqref{eq:GLSLIM_prediction} for user $u$ who belongs to subset $p_{u}$, over all items $i$. Setting the derivative of the squared error to $0$, yields:
\begin{equation}
g_{u} \, = \, \frac{\sum_{i=1}^{m}\left(\sum_{l \in R_u } s_{l i}-\sum_{l \in R_u } s_{l i}^{p_{u}}\right)\left(r_{u i}-\sum_{l \in R_u } s_{l i}^{p_{u}}\right)}{\sum_{i=1}^{m}\left(\sum_{l \in R_u } s_{l i}-\sum_{l \in R_u } s_{l i}^{p_{u}}\right)^{2}}.
\end{equation}
$\mathtt{GLSLIM}$\xspace tries to assign each user $u$ to every possible subset, while computing the weight $g_{u}$ that the user would have, if assigned to that subset. Then, for every subset $p_{u}$ and user $u$, the training error is computed and the user is assigned to the subset for which this error is minimized (or remains in the original subset, if no difference in training error occurs).
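The closed-form weight above can be transcribed directly; a minimal sketch follows (rows of the similarity matrices are assumed to index the rated items $l$ and columns the target items $i$, and the result may be clipped to $[0,1]$ in practice):
\begin{verbatim}
import numpy as np

def personalized_weight(r_u, rated, S, S_local):
    """Least-squares weight g_u of the global vs. local model for one
    user; `rated` holds the indices of the items in the user's profile."""
    glob = S[rated].sum(axis=0)       # sum_{l in R_u} s_{li} for every item i
    loc = S_local[rated].sum(axis=0)  # same sums under the local model
    diff = glob - loc
    denom = (diff ** 2).sum()
    if denom == 0.0:
        return 0.5                    # models agree; keep equal contribution
    return float(diff @ (r_u - loc) / denom)
\end{verbatim}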
Steps 1 and 2 are repeated until the number of users who switch subsets, in Step 2, becomes smaller than $1\%$ of $|\set{U}|$. It is empirically observed that initializing subset assignments with the $\mathtt{CLUTO}$~\cite{karypis2002cluto} clustering algorithm results in a significant reduction of the number of iterations until convergence.
Furthermore, a comprehensive set of experiments conducted in~\cite{GLSLIM} explore in detail the qualitative performance of $\mathtt{GLSLIM}$\xspace, and suggest that it improves upon the standard {$\mathtt{SLIM}$}\xspace, in several datasets.
\subsection{Graph-based methods}
In graph-based approaches, the data is represented in the form of a graph where nodes are users, items or both, and edges encode the interactions or similarities between the users and items. For example, in Figure \ref{fig:bipartite}, the data is modeled as a bipartite graph where the two sets of nodes represent users and items, and an edge connects user $u$ to item $i$ if there is a rating given to $i$ by $u$ in the system. A weight can also be given to this edge, such as the value of its corresponding rating. In another model, the nodes can represent either users or items, and an edge connects two nodes if the ratings corresponding to these nodes are sufficiently correlated. The weight of this edge can be the corresponding correlation value.
\begin{figure}[hbt]
\begin{center}
\scalebox{0.35}{\includegraphics{bipartite.pdf}}
\end{center}
\caption{A bipartite graph representation of the ratings of Figure \ref{fig:toy-example} (\emph{only ratings with value in $\{2,3,4\}$ are shown}).}
\label{fig:bipartite}
\end{figure}
In these models, standard approaches based on correlation predict the rating of a user $u$ for an item $i$ using only the nodes directly connected to $u$ or $i$. Graph-based approaches, on the other hand, allow nodes that are not directly connected to influence each other by propagating information along the edges of the graph. The greater the weight of an edge, the more information is allowed to pass through it. Also, the influence of a node on another should be less if the two nodes are further away in the graph. These two properties, known as \emph{propagation} and \emph{attenuation} \cite{gori07,huang04}, are often observed in graph-based similarity measures.
The transitive associations captured by graph-based methods can be used to recommend items in two different ways. In the first approach, the proximity of a user $u$ to an item $i$ in the graph is used directly to evaluate the relevance of $i$ to $u$ \cite{fouss07,gori07,huang04}. Following this idea, the items recommended to $u$ by the system are those that are the ``closest'' to $u$ in the graph. On the other hand, the second approach considers the proximity of two users or item nodes in the graph as a measure of similarity, and uses this similarity as the weights $w_{uv}$ or $w_{ij}$ of a neighborhood-based recommendation method \cite{fouss07,luo08}.
\subsubsection{Path-based similarity}
In path-based similarity, the distance between two nodes of the graph is evaluated as a function of the number of paths connecting the two nodes, as well as the length of these paths.
Let $R$ be once again the $|U|\!\times\!|I|$ rating matrix, where $r_{ui}$ is the rating given by user $u$ to an item $i$. The adjacency matrix $A$ of the user-item bipartite graph can be defined from $R$ as
\begin{displaymath}
A \, = \, \left( \begin{array}{cc}
0 & \, \tr{R} \\
R & 0 \\
\end{array} \right).
\end{displaymath}
The association between a user $u$ and an item $i$ can be defined as the sum of the weights of all distinct paths connecting $u$ to $i$ (allowing nodes to appear more than once in the path), whose length is no more than a given maximum length $K$. Note that, since the graph is bipartite, $K$ should be an odd number. In order to attenuate the contribution of longer paths, the weight given to a path of length $k$ is defined as $\alpha^k$, where $\alpha \in [0,1]$. Using the fact that the number of length $k$ paths between pairs of nodes is given by $A^k$, the user-item association matrix $S_K$ is
\begin{eqnarray}
S_K & \, = \, & \sum\limits_{k = 1}^K \alpha^k A^k \nonumber\\
& \, = \, & (I - \alpha A)^{-1}(\alpha A - \alpha^{K+1} A^{K+1}).
\end{eqnarray}
This method of computing distances between nodes in a graph is known as the \emph{Katz} measure \cite{katz53}. Note that this measure is closely related to the \emph{Von Neumann Diffusion} kernel \cite{fouss06,kondor02,kunegis08}
\begin{eqnarray}
K_\mathrm{VND} & \, = \, & \sum\limits_{k = 0}^\infty \alpha^k A^k \nonumber\\
& \, = \, & (I - \alpha A)^{-1}
\end{eqnarray}
and the \emph{Exponential Diffusion} kernel
\begin{eqnarray}
K_\mathrm{ED} & \, = \, & \sum\limits_{k = 0}^\infty \frac{1}{k!} \alpha^k A^k \nonumber\\
& \, = \, & \exp(\alpha A),
\end{eqnarray}
where $A^0 = I$.
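For moderate values of $K$, the truncated Katz scores can be accumulated iteratively, avoiding the explicit matrix inverse; a minimal sketch (Python/NumPy, dense $A$ for clarity):
\begin{verbatim}
import numpy as np

def katz_scores(A, alpha=0.005, K=5):
    """Truncated Katz association matrix S_K = sum_{k=1..K} alpha^k A^k."""
    S = np.zeros_like(A, dtype=float)
    term = np.eye(A.shape[0])
    for _ in range(K):
        term = alpha * (term @ A)  # term now equals alpha^k A^k
        S += term
    return S
\end{verbatim}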
In recommender systems that have a large number of users and items, computing these association values may require extensive computational resources. In \cite{huang04}, spreading activation techniques are used to overcome these limitations. Essentially, such techniques work by first activating a selected subset of nodes as starting nodes, and then iteratively activating the nodes that can be reached directly from the nodes that are already active, until a convergence criterion is met.
Path-based methods, as well as the other graph-based approaches described in this section, focus on finding relevant associations between users and items, not predicting exact ratings. Therefore, such methods are better suited for item retrieval tasks, where explicit ratings are often unavailable and the goal is to obtain a short list of relevant items (i.e., the top-$N$ recommendation problem).
\subsubsection{Random walk similarity}
Transitive associations in graph-based methods can also be defined within a probabilistic framework. In this framework, the similarity or affinity between users or items is evaluated as a probability of reaching these nodes in a random walk. Formally, this can be described with a first-order Markov process defined by a set of $n$ states and an $n\!\times\!n$ transition probability matrix $P$ such that the probability of jumping from state $i$ to $j$ at any time-step $t$ is
\begin{displaymath}
p_{ij} \, = \, \mathrm{Pr}\big(s(t\!+\!1) = j \, | \, s(t) = i\big).
\end{displaymath}
Denote by $\vec \piup(t)$ the vector containing the state probability distribution at step $t$, such that $\pi_i(t) = \mathrm{Pr}\left(s(t) = i\right)$; the evolution of the Markov chain is characterized by
\begin{displaymath}
\vec \piup(t\!+\!1) \, = \, \tr{P} \vec \piup(t).
\end{displaymath}
Moreover, provided that $P$ is row-stochastic, i.e. $\sum_j p_{ij} = 1$ for all $i$, and that the chain is irreducible and aperiodic, the process converges to a stable distribution vector $\vec \piup(\infty)$ corresponding to the positive eigenvector of $\tr{P}$ with an eigenvalue of $1$. This process is often described in the form of a weighted graph having a node for each state, and where the probability of jumping from a node to an adjacent node is given by the weight of the edge connecting these nodes.
\paragraph{\bf Itemrank}
A recommendation approach, based on the PageRank algorithm for ranking Web pages \cite{brin98}, is ItemRank \cite{gori07}. This approach ranks the preferences of a user $u$ for unseen items $i$ as the probability of $u$ visiting $i$ in a random walk of a graph in which nodes correspond to the items of the system, and edges connect items that have been rated by common users. The edge weights are given by the $|\mathcal{I}|\!\times\!|\mathcal{I}|$ transition probability matrix $P$ for which $p_{ij} = |\mathcal{U}_{ij}|/ |\mathcal{U}_i|$ is the estimated conditional probability of a user rating an item $j$ if they have rated an item $i$.
As in PageRank, the random walk can, at any step $t$, either jump using $P$ to an adjacent node with fixed probability $\alpha$, or ``teleport'' to any node with probability $(1-\alpha)$. Let $\vec r_u$ be the $u$-th row of the rating matrix $R$, the probability distribution of user $u$ to teleport to other nodes is given by vector $\vec d_u = \vec r_u / ||\vec r_u||$. Following these definitions, the state probability distribution vector of user $u$ at step $t\!+\!1$ can be expressed recursively as
\begin{equation}\label{eqn:itemrank}
\vec \piup_u(t\!+\!1) \, = \, \alpha\tr{P}\vec \piup_u(t) \, + \, (1\!-\!\alpha) \vec d_u.
\end{equation}
For practical reasons, $\vec \piup_u(\infty)$ is usually obtained with a procedure that first initializes the distribution as uniform, i.e. $\vec \piup_u(0) = \frac{1}{n} \ones$, and then iteratively updates $\vec \piup_u$, using (\ref{eqn:itemrank}), until convergence. Once $\vec \piup_u(\infty)$ has been computed, the system recommends to $u$ the item $i$ for which $\vec \piup_{ui}$ is the highest.
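This procedure is easy to transcribe; the sketch below (Python/NumPy, dense $P$ for clarity) normalizes the rating row to obtain the teleportation vector and iterates Equation~(\ref{eqn:itemrank}) to convergence:
\begin{verbatim}
import numpy as np

def itemrank(P, r_u, alpha=0.85, tol=1e-8, max_iter=200):
    """Power iteration for pi <- alpha P^T pi + (1 - alpha) d_u."""
    d_u = r_u / r_u.sum()              # teleportation distribution of user u
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        pi_next = alpha * (P.T @ pi) + (1.0 - alpha) * d_u
        if np.abs(pi_next - pi).sum() < tol:
            break
        pi = pi_next
    return pi_next
\end{verbatim}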
\paragraph{\bf Average first-passage/commute time}
Other distance measures based on random walks have been proposed for the recommendation problem. Among these are the \emph{average first-passage time} and the \emph{average commute time} \cite{fouss07,fouss06}.
The average first-passage time $m(j|i)$ \cite{norris99} is the average number of steps needed by a random walker to reach a node $j$ for the first time, when starting from a node $i \neq j$. Let $P$ be the $n\!\times\!n$ transition probability matrix; $m(j|i)$ can be expressed recursively as
\begin{displaymath}
m(j\,|\,i) \, = \, \left\{ \begin{array}{lll}
0 &, \ \ \textrm{if } i = j \\
1 + \sum\limits_{k=1}^n p_{ik} \, m(j\,|\,k) &, \ \ \textrm{otherwise}
\end{array}\right.
\end{displaymath}
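For a fixed target node $j$, this recursion is simply a linear system over the remaining states; the sketch below solves it directly (it assumes $j$ is reachable from every other node, so the system is non-singular):
\begin{verbatim}
import numpy as np

def first_passage_times(P, j):
    """Average first-passage times m(j|i) for all i, from the recursion
    m(j|i) = 1 + sum_k p_ik m(j|k), with m(j|j) = 0."""
    n = P.shape[0]
    keep = [i for i in range(n) if i != j]
    A = np.eye(n - 1) - P[np.ix_(keep, keep)]  # I - P restricted to i,k != j
    m = np.zeros(n)
    m[keep] = np.linalg.solve(A, np.ones(n - 1))
    return m
\end{verbatim}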
A problem with the average first-passage time is that it is not symmetric. A related measure that does not have this problem is the average commute time $n(i,j) = m(j\,|\,i)+m(i\,|\,j)$ \cite{gobel74}, corresponding to the average number of steps required by a random walker starting at node $i \neq j$ to reach node $j$ for the first time and go back to $i$. This measure has several interesting properties. Namely, it is a true distance measure in some Euclidean space \cite{gobel74}, and is closely related to the well-known property of resistance in electrical networks and to the pseudo-inverse of the graph Laplacian matrix \cite{fouss07}.
In \cite{fouss07}, the average commute time is used to compute the distance between the nodes of a bipartite graph representing the interactions of users and items in a recommender system. For each user $u$ there is a directed edge from $u$ to every item $i \in \mathcal{I}_u$, and the weight of this edge is simply $1/|\mathcal{I}_u|$. Likewise, there is a directed edge from each item $i$ to every user $u \in \mathcal{U}_i$, with weight $1/|\mathcal{U}_i|$. Average commute times can be used in two different ways: 1) recommending to $u$ the item $i$ for which $n(u,i)$ is the smallest, or 2) finding the users nearest to $u$, according to the commute time distance, and then suggest to $u$ the item most liked by these users.
\subsubsection{{Combining Random Walks and Neighborhood-learning Methods}}
\paragraph{\textbf{Motivation and Challenges}}
Neighborhood-learning methods have been shown to achieve high top-$n$ recommendation accuracy while being scalable and easy to interpret. The fact, however, that they typically consider only direct item-to-item relations imposes limitations to their quality and makes them brittle to the presence of sparsity, leading to poor itemspace coverage and substantial decay in performance.
A promising direction towards ameliorating such problems involves treating item models as graphs onto which random-walk-based techniques can then be applied. However directly applying random walks on item models can lead to a number of problems that arise from their inherent mathematical properties and the way these properties relate to the underlying top-$n$ recommendation task.
In particular, imagine a random walker jumping from node to node on an item-to-item graph with transition probabilities proportional to the proximity scores depicted by an item model $W$. If the starting distribution of this walker reflects the items consumed by a particular user $u$ in the past, the probabilities that the walker lands on different nodes after $K$ steps provide an intuitive measure of proximity that can be used to rank the nodes and recommend items to user $u$ accordingly.
Concretely, if we denote the transition probability matrix of the walk $S = \operatorname{diag}(W \mathbf{1})^{-1} W$ where $\mathbf{1}$ is used to denote the vector of ones, personalized recommendations for user $u$ can be produced e.g., by leveraging the $K$-step landing distribution of a walk rooted on the items consumed by $u$;
\begin{equation}
\piup_u^\top \, = \, \phiup_u^\top S^K, \qquad \phiup_u^\top \, = \, \tfrac{\vec r_u^\top}{\lVert \vec r_u^\top\rVert_1}
\label{model:srw}
\end{equation}
or by computing the limiting distribution of a random walk with restarts on $S$, using $\phiup_u^\top$ as the restarting distribution. The latter approach is the well-known personalized PageRank model~\cite{brin98} with teleportation vector $\phiup_u^\top$ and damping factor $p$, and its stationary distribution can be expressed~\cite{langville2011google} as
\begin{equation}
\piup_u^\top \, = \, \phiup_u^\top \sum_{k=0}^{\infty}(1-p) p^k S^k.
\label{model:simplepr}
\end{equation}
Clearly, both schemes harvest the information captured in the $K$-step landing probabilities $\{\phiup_u^\top S^k\}_{k=0,1,\dots}$.
But, how do these landing probabilities behave as the number of steps $K$ increases? For how long will they still be significantly influenced by user's preferences $\phiup^\top_u$?
Markov chain theory ensures that when $S$ is irreducible and aperiodic the landing probabilities will converge to a \textit{unique} stationary distribution irrespectively of the initialization of the walk. This means that for large enough $K$, the $K$-step landing probabilities will no longer be ``personalized,'' in the sense that they will become independent of the user-specific starting vector $\phiup_u^\top$. Furthermore, long before reaching equilibrium, the quality of these vectors in terms of recommendation will start to plummet as more and more probability mass gets concentrated to the central nodes of the graph. Note, that the same issue arises for simple random walks that act directly on the user-item bipartite network, and has led to methods that typically consider only very short-length random walks, and need to explicitly re-rank the $K$-step landing probabilities, in order to compensate for the inherent bias of the walk towards popular items~\cite{RP3b}. However, longer random-walks might be necessary to capture non-trivial multi-hop relations between the items, as well as to ensure better coverage of the itemspace.
\paragraph{\textbf{The RecWalk Recommendation Framework}}
\texttt{RecWalk}~\cite{10.1145/3289600.3291016,10.1145/3406241} addresses the aforementioned challenges, and resolves this long- vs short-length walk dilemma through the construction of a \textit{nearly uncoupled random walk}~\cite{NCD1,NCD2} that gives full control over the stochastic dynamics of the walk towards equilibrium; provably, and irrespectively of the dataset or the specific item model onto which it is applied. Intuitively, this allows for prolonged and effective exploration of the underlying network while keeping the influence of the user-specific initialization strong.\footnote{The mathematical details behind the particular construction choices of $\mathtt{RecWalk}$\xspace that enforce such desired mixing properties can be found in~\cite{10.1145/3289600.3291016,10.1145/3406241}. }
From a random-walk point of view, the $\mathtt{RecWalk}$\xspace model can be described as follows: Consider a random walker jumping from node to node on the user-item bipartite network. Suppose the walker currently occupies a node $c \in \set{U}\cup\set{I}$. In order to determine the next step transition the walker tosses a biased coin that yields heads with probability $\alpha$ and tails with probability $(1-\alpha)$:
\begin{enumerate}
\item If the coin-toss yields \textit{heads}, then:
\begin{enumerate}
\item if $c \in \set{U}$, the walker jumps to one of the items rated by the current user (i.e., the user corresponding to the current node $c$) uniformly at random;
\item if $c \in \set{I}$, the walker jumps to one of the users that have rated the current item uniformly at random;
\end{enumerate}
\item If the coin-toss yields \textit{tails}, then:
\begin{enumerate}
\item if $c\in\set{U}$, the walker stays put;
\item if $c \in \set{I}$, the walker jumps to a related item abiding by an \textit{item-to-item transition probability matrix} $M_\set{I}$, that is defined in terms of an underlying item model.
\end{enumerate}
\end{enumerate}
The stochastic process that describes this random walk is defined to be a homogeneous discrete time Markov chain with state space $\set{U}\cup\set{I}$; i.e., the transition probabilities from any given node $c$ to the other nodes, are fixed and independent of the nodes visited by the random walker before reaching $c$. An illustration of the $\mathtt{RecWalk}$\xspace model is given in Figure~\ref{fig:RecWalk}.
The transition probability matrix $P$ that governs the behavior of the random walker can be usefully expressed as a weighted sum of two stochastic matrices $H$ and $M$ as
\begin{equation}
P \, = \, \alpha H \, + \, (1-\alpha) M \label{transitionProbabilityMatrixP}
\end{equation}
where $ 0< \alpha < 1 $, is a parameter that controls the involvement of these two components in the final model.
Matrix $H$ can be thought of as the transition probability matrix of a simple random walk on the user-item bipartite network. Assuming that the rating matrix $R$ has no zero columns and rows, matrix $H$ can be expressed as
\begin{equation}
H \, = \, \Diag(A\ones)^{-1}A, \qquad \textrm{where } \ A \, = \, \left( \begin{array}{cc}
& \, R \\
\tr{R} & \\
\end{array} \right).
\end{equation}
Matrix $M$, is defined as
\begin{equation}
M \, = \, \pmat{ I & \\
& M_\set{I} }
\end{equation}
where $I \in \mathfrak{R}^{U \times U}$ is the identity matrix and $M_\set{I} \in \mathfrak{R}^{I \times I}$ is a transition probability matrix designed to capture relations between the items. In particular, given an item model with non-negative weights $W$ (e.g., the aggregation matrix produced by a {$\mathtt{SLIM}$}\xspace model), matrix $M_\set{I}$ is defined using the following stochasticity adjustment strategy:
\begin{equation}
M_\set{I} \, = \, \frac{1}{\lVert W \rVert_\infty} W \, + \, \Diag\left(\ones-\frac{1}{\lVert W \rVert_\infty}W\ones\right).
\label{def:M_I}
\end{equation}
The first term divides all the elements by the maximum row-sum of $W$ and the second enforces stochasticity by adding residuals to the diagonal, appropriately. The motivation behind this definition is to retain the information captured by the relative differences of the item-to-item relations in $W$. This prevents items that are loosely related to the rest of the itemspace from disproportionately influencing the inter-item transitions and introducing noise to the model.\footnote{From a purely mathematical point-of-view the above strategy promotes desired spectral properties to $M_\set{I}$ that are shown to be intertwined with recommendation performance. For additional details see~\cite{10.1145/3406241}.}
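A direct transcription of this adjustment is given below (a sketch; \texttt{W} is assumed to be a non-negative dense array, e.g., the aggregation matrix of a {$\mathtt{SLIM}$}\xspace model):
\begin{verbatim}
import numpy as np

def recwalk_item_transitions(W):
    """Scale W by its maximum row sum and place the residual
    probability mass on the diagonal (the definition of M_I)."""
    w_inf = W.sum(axis=1).max()        # ||W||_inf for non-negative W
    M_I = W / w_inf
    M_I[np.diag_indices_from(M_I)] += 1.0 - M_I.sum(axis=1)
    return M_I                         # every row now sums to one
\end{verbatim}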
\begin{figure}[t]
\centering
\includegraphics[angle=0,origin=c]{RecWalksynopsis.pdf}
\caption{RecWalk Illustration. Maroon colored nodes correspond to users; Gold colored nodes correspond to items.
}
\label{fig:RecWalk}
\end{figure}
In $\mathtt{RecWalk}$\xspace the recommendations are produced by exploiting the information captured in the successive landing probability distributions of a walk initialized in a user-specific way.
Two simple recommendation strategies that were considered in~\cite{10.1145/3289600.3291016} are:
\begin{description}
\item[$\mathtt{RecWalk}^\mathtt{K-step}$\xspace:] The recommendation score of user $u$ for item $i$ is defined to be the probability the random walker lands on node $i$ after $K$ steps, given that the starting node was $u$. In other words, the recommendation score for item $i$ is given by the corresponding element of
\begin{equation}
\label{eq:recwalk_k_step}
\piup_u^\top \, = \, \vec e_u^\top P^K
\end{equation}
where $\vec e_u \in \mathfrak{R}^{U+I}$ is a vector that contains the element 1 on the position that corresponds to user $u$ and zeros elsewhere. The computation of the recommendations is performed by $K$ sparse-matrix-vector products with matrix $P$ (see the sketch after this list), and it entails
\begin{math}
\upTheta(K\operatorname{nnz}(P))
\end{math}
operations, where $\operatorname{nnz}(P)$ is the number of nonzero elements in $P$.
\item[$\mathtt{RecWalk}^\mathtt{PR}$\xspace:] The recommendation score of user $u$ for item $i$ is defined to be the element that corresponds to item $i$ in the limiting distribution of a random walk with restarts on $P$, with restarting probability $\eta$ and restarting distribution $\vec e_u$:
\begin{equation}
\label{eq:recwalk_pr}
\piup_u^\top \, = \, \lim\limits_{K\to \infty}\, \vec e_u^\top\big(\eta P \, + \, (1-\eta)\ones \vec e_u^\top\big)^K.
\end{equation}
The limiting distribution in \eqref{eq:recwalk_pr} can be computed efficiently using e.g., the power method, or any specialized PageRank solver. Note that this variant of $\mathtt{RecWalk}$\xspace also comes with theoretical guarantees for item-space coverage for every user in the system, regardless of the base item model $W$ used in the definition of matrix $M_\set{I}$~\cite{10.1145/3406241}.
\end{description}
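To make the $K$-step variant concrete, the following sketch computes the landing probabilities of Equation~\eqref{eq:recwalk_k_step} with sparse matrix--vector products (SciPy; names are illustrative):
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

def recwalk_k_step(P, u, K=10):
    """Landing probabilities pi_u = (P^T)^K e_u, i.e. e_u^T P^K."""
    pi = np.zeros(P.shape[0])
    pi[u] = 1.0                        # e_u: the walk starts at user node u
    Pt = P.T.tocsr() if sp.issparse(P) else P.T
    for _ in range(K):
        pi = Pt @ pi                   # one step: Theta(nnz(P)) work
    return pi
\end{verbatim}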
In \cite{10.1145/3406241} it was shown that both approaches manage to boost the quality of several base item models on top of which they were built. Using {$\mathtt{fsSLIM}$}\xspace~\cite{ning2011slim} with small number of neighbors as a base item model, in particular, was shown to achieve state-of-the-art recommendation performance, in several datasets. At the same time $\mathtt{RecWalk}$\xspace was found to dramatically increase itemspace coverage of the produced recommendations, in every considered setting. This was true both for $\mathtt{RecWalk}^\mathtt{K-step}$\xspace, as well as for $\mathtt{RecWalk}^\mathtt{PR}$\xspace.
\subsubsection{User-Adaptive Diffusion Models}
\noindent{\textbf{Motivation}}:
Personalization of the recommendation vectors in the graph-based schemes we have seen thus far, comes from the use of a user-specific initialization, or a user-specific restarting distribution.
However, the underlying mechanism for propagating user preferences, across the itemspace (i.e., the adopted diffusion function, or the choice of the $K$-step distribution) is \textit{fixed} for every user in the system. From a user modeling point of view this translates to the implicit assumption that every user explores the itemspace in \textit{exactly the same} way---overlooking the reality that different users can have different behavioral patterns. The fundamental premise of {$\mathtt{PerDif}$}\xspace~\cite{PERDIF} is that the latent item exploration behavior of the users can be captured better by \textit{user-specific} preference propagation mechanisms; thus, leading to improved recommendations.
{$\mathtt{PerDif}$}\xspace proposes a simple model of personalized item exploration subject to an underlying item model.
At each step the users might either decide to go forth and discover items related to the ones they are currently considering, or return to their base and possibly go down alternative paths. Different users, might explore the itemspace in different ways; and their behavior might change throughout the exploration session. The following stochastic process, formalizes the above idea:
\vspace{2mm}
\noindent{\textbf{The PerDif Item Discovery Process:}} Consider a random walker carrying a bag of $K$ biased coins. The coins are labeled with consecutive integers from 1 to $K$. Initially, the random walker occupies the nodes of the graph according to distribution $\phiup$. She then flips the 1st coin: if it turns heads (with probability $\mu_1$), she jumps to a different node in the graph abiding by the probability matrix $P$; if it turns tails (with probability $1-\mu_1$), she jumps to a node according to the probability distribution $\phiup$. She then flips the 2nd coin and she either follows $P$ with probability $\mu_2$ or `restarts' to $\phiup$ with probability ($1-\mu_2$). The walk continues until she has used all her $K$ coins.
At the $k$-th step the transitions of the random walker are completely determined by the probability the $k$-th coin turning heads ($\mu_k$), the transition matrix $P$, and the restarting distribution $\phiup$. Thus, the stochastic process that governs the position of the random walker over time is a time-inhomogeneous Markov chain with state space the nodes of the graph, and transition matrix at time $k$ given by
\begin{equation}
G(\mu_k) \, = \, \mu_k P \, + \, (1-\mu_k)\ones\phiup^\top.
\end{equation}
The node occupation distribution of the random walker after the last transition can therefore be expressed as
\begin{equation}
\piup^\top \, = \, \phiup^\top G(\mu_1)\,G(\mu_2)\,\cdots\, G(\mu_K).
\label{eq:itemExporationProcess}
\end{equation}
Given an item transition probability matrix $P$, and a user-specific restarting distribution $\phiup_u$,
the goal is to find a set of probabilities $\muup_u= \pmat{\mu_1,\dots,\mu_K}$ so that the outcome of the aforementioned item exploration process yields a meaningful distribution over the items that can be used for recommendation. {$\mathtt{PerDif}$}\xspace tackles this task as follows:
\vspace{2mm}
\noindent{\textbf{Learning the personalized probabilities:}} For each user $u$ we randomly sample one item she has interacted with (henceforth referred to as the `target' item) alongside $\tau_{\mathit{neg}}$ unseen items, and we fit $\muup_u$ so that the node occupancy distribution after a $K$-step item exploration process rooted on $\phiup_u$ (cf~\eqref{eq:itemExporationProcess}) yields high probability to the target item while keeping the probabilities of the negative items low. Concretely, upon defining a vector $\vec h_u\in\mathfrak{R}^{\tau_{\mathit{neg}}+1}$ which contains the value 1 for the target item and zeros for the negative items, we learn $\muup_u$ by solving
\begin{equation}
\MINone{\muup_u\in \mathfrak{R}^K}{ \big\lVert \phiup_u^\top G(\mu_1)\cdots G(\mu_K)E_u - \vec h_u^\top \big\rVert_2^2}{\mu_i \in (0,1), \quad \forall i \in [1,\dots,K]}
\label{problem1}
\end{equation}
where $\mu_i = [\muup_u]_i, \forall i$, and $E_u$ is a $(I\times (\tau_{\mathit{neg}}+1))$ matrix designed to select and rearrange the elements of the vector $\phiup_u^\top G(\mu_1)\cdots G(\mu_K)$ according to the sequence of items comprising $\vec h_u$. Upon obtaining $\muup_u$, personalized recommendations for user $u$ can be computed as
\begin{equation}\label{eq:recommendationsMuForm}
\piup_u^\top = \phiup_u^\top G(\mu_1)\cdots G(\mu_K).
\end{equation}
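Since each $G(\mu_k)$ is a rank-one correction of $P$, the product in Equation~\eqref{eq:recommendationsMuForm} can be evaluated without ever forming $G(\mu_k)$ explicitly; a minimal sketch (dense $P$ assumed for clarity):
\begin{verbatim}
import numpy as np

def perdif_recommend(P, phi_u, mu):
    """pi_u^T = phi_u^T G(mu_1) ... G(mu_K), using
    pi^T G(m) = m pi^T P + (1 - m) (sum of pi) phi_u^T."""
    pi = phi_u.copy()
    for m in mu:
        pi = m * (pi @ P) + (1.0 - m) * pi.sum() * phi_u
    return pi
\end{verbatim}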
Leveraging the special properties of the stochastic matrix $G$, the above non-linear optimization problem can be solved efficiently. In particular, it can be shown~\cite{PERDIF} that the optimization problem~\eqref{problem1} is equivalent to
\begin{displaymath}
\MIN{\omegaup_u \in \Delta_{++}^{K+1}}{\, \big\lVert \omegaup_u^\top S_u E_u - \vec h_u^\top \big\rVert_2^2}
\end{displaymath}
where $\Delta_{++}^{K+1} = \{x: x^\top\ones = 1, x>0 \}$ and
\begin{displaymath}
S_u \, = \,
\pmat{
\phiup_u^\top \\
\phiup_u^\top P \\
\phiup_u^\top P^2 \\
\vdots \\[0.1cm]
\phiup_u^\top P^K
}, \qquad
\omegaup_u \, \equiv \, \omegaup_u(\muup_u) \, = \, \pmat{
1-\mu_K\\
\mu_K\,(1-\mu_{K-1})\\
\mu_K\,\mu_{K-1}\,(1-\mu_{K-2})\\
\vdots\\
\mu_K\,\cdots\,\mu_2\,(1-\mu_1)\\
\mu_K\,\cdots\,\mu_2\,\mu_1
}.
\end{displaymath}
The above result simplifies learning $\muup_u$ significantly. It also lends {$\mathtt{PerDif}$}\xspace its name. In particular, the task of finding personalized probabilities for the item exploration process, reduces to that of finding \textit{personalized diffusion coefficients} $\omegaup_u$ over the space of the first $K$ landing probabilities of a walk rooted on $\phiup_u$ (see definition of $S_u$). Afterwards $\muup_u$ can be obtained in linear time from $\omegaup_u$ upon solving a simple forward recurrence~\cite{PERDIF}. Taking into account the fact that in recommendation settings $K$ will typically be small and $\phiup_u, P$ sparse, building `on-the-fly' $S_u E_u$ row-by-row, and solving the $(K+1)$-dimensional convex quadratic problem
\begin{equation}\label{eq:free_learn}
\textsc{PerDif\textsuperscript{free}}\xspace\,: \ \ \MIN{\omegaup_u \in \Delta_{++}^{K+1}}{ \big\lVert \omegaup_u^\top S_u E_u - \vec h_u^\top \big\rVert_2^2}
\end{equation}
can be performed very efficiently (typically in a matter of milliseconds even in large scale settings).
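Since the problem is a small convex quadratic over the simplex, a general-purpose constrained solver suffices; the sketch below uses SciPy's SLSQP method, relaxing the open-simplex constraint $\omegaup_u > 0$ to closed bounds:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def perdif_free(SuEu, h_u):
    """min_w ||w^T (S_u E_u) - h_u^T||^2  s.t.  w >= 0, sum(w) = 1."""
    K1 = SuEu.shape[0]                 # K + 1 stacked landing distributions
    w0 = np.full(K1, 1.0 / K1)         # start from the uniform weights
    res = minimize(
        lambda w: np.sum((w @ SuEu - h_u) ** 2), w0,
        jac=lambda w: 2.0 * SuEu @ (w @ SuEu - h_u),
        bounds=[(0.0, 1.0)] * K1,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
        method="SLSQP")
    return res.x
\end{verbatim}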
Moreover, working on the space of landing probabilities can also facilitate parametrising the diffusion coefficients within a family of known diffusions. This motivates the parameterized variant of {$\mathtt{PerDif}$}\xspace
\begin{equation}\label{eq:dict_learn}
\textsc{PerDif\textsuperscript{par}}\xspace:\ \ \MIN{\gammaup_u \in \Delta_{+}^{L}}{ \lVert \gammaup_u^\top D S_u E_u - \vec h_u^\top \rVert_2^2}
\end{equation}
with $\Delta_{+}^{L}=\{y:y^\top\ones = 1, y\geq0\}$ and $D \in \mathfrak{R}^{L\times (K+1)}$ defined such that its rows contain preselected diffusion coefficients (e.g., PageRank~\cite{brin98} coefficients for several damping factors, heat kernel~\cite{chung2007heat} coefficients for several \textit{temperature} values etc.), normalized to sum to one. Upon obtaining $\gammaup_u$, vector $\omegaup_u$ can be computed as $\omegaup_u^\top = \gammaup_u^\top D$.
\begin{figure}[t]
\centering
\includegraphics[scale=1.19]{PERDIFsynopsis.pdf}
\caption{Personalized Diffusions on the User-Item Bipartite Network. }
\label{fig:PerDif}
\end{figure}
While \textsc{PerDif\textsuperscript{free}}\xspace learns $\omegaup_u$ by weighing the contributions of the landing probabilities directly, \textsc{PerDif\textsuperscript{par}}\xspace constrains $\omegaup_u$ to comprise a user-specific mixture of predetermined such weights (i.e., the rows of $D$), thus allowing one to endow $\omegaup_u$ with desired properties, relevant to the specific recommendation task at hand.
Furthermore, the use of matrix $D$ can improve the robustness of the personalized diffusions in settings where the recommendation quality of the individual landing distributions comprising $S_u$ is uneven across the $K$ steps considered.
Besides its merits in terms of recommendation accuracy, personalizing the diffusions within the {$\mathtt{PerDif}$}\xspace framework can also provide useful information arising from the analysis of the learned diffusion coefficients, $\omegaup_u$. In particular, the dual interpretation of the model parameters ($\muup_u$ in the item exploration space; and, $\omegaup_u$ in the diffusion space) allows utilizing the learned model parameters to identify users for which the model will most likely lead to poor predictions, at training-time---thereby affording preemptive interventions to handle such cases appropriately. This affords a level of transparency that can prove particularly useful in practical settings (for the technical details on how this can be achieved see~\cite{PERDIF}).
\section{Conclusion}
One of the earliest approaches proposed for the task of item recommendation, neighbor\-hood-based recommendation still ranks among the most popular methods for this problem. Although quite simple to describe and implement, this recommendation approach has several important advantages, including its ability to explain a recommendation with the list of the neighbors used, its computational and space efficiency which allows it to scale to large recommender systems, and its marked stability in an online setting where new users and items are constantly added. Another of its strengths is its potential to make serendipitous recommendations that can lead users to the discovery of unexpected, yet very interesting items.
In the implementation of a neighborhood-based approach, one has to make several important decisions. Perhaps the one having the greatest impact on the accuracy and efficiency of the recommender system is choosing between a user-based and an item-based neighborhood method. In typical commercial recommender systems, where the number of users far exceeds the number of available items, item-based approaches are typically preferred since they provide more accurate recommendations, while being more computationally efficient and requiring less frequent updates. On the other hand, user-based methods usually provide more original recommendations, which may lead users to a more satisfying experience. Moreover, the different components of a neighborhood-based method, which include the normalization of ratings, the computation of the similarity weights and the selection of the nearest-neighbors, can also have a significant influence on the quality of the recommender system. For each of these components, several different alternatives are available. Although the merit of each of these has been described in this document and in the literature, it is important to remember that the ``best'' approach may differ from one recommendation setting to the next. Thus, it is important to evaluate them on data collected from the actual system, and in light of the particular needs of the application.
Modern machine-learning-based techniques can be used to further increase the performance of neighborhood-based approaches, by automatically extracting the most representative neighborhoods based on the available data. Such models achieve state-of-the-art recommendation accuracy, however their adoption imposes additional computational burden that needs to be considered in light of the particular characteristics of the recommendation problem at hand.
Finally, when the performance of a neighborhood-based approach suffers from the problems of limited coverage and sparsity, one may explore techniques based on dimensionality reduction or graphs. Dimensionality reduction provides a compact representation of users and items that captures their most significant features. An advantage of such an approach is that it allows one to obtain meaningful relations between pairs of users or items, even though these users have rated different items, or these items were rated by different users. On the other hand, graph-based techniques exploit the transitive relations in the data. These techniques also avoid the problems of sparsity and limited coverage by evaluating the relationship between users or items that are not ``directly connected''. However, unlike dimensionality reduction, graph-based methods also preserve some of the ``local'' relations in the data, which are useful in making serendipitous recommendations.
The result of Eq.~(\ref{D}) assumes that the derivative $\partial D/\partial \alpha_{\text{\tiny R}}$ at $\alpha_{\text{\tiny R}}=0$ does exist. The latter is not the case, e.\,g., for the model of Dirac fermions, where $D\propto 1/\alpha_{\text{\tiny R}}$~\cite{iDMI-theory-TI1,iDMI-theory-TI2,iDMI-theory-TI3}. Thus, the necessary condition for the validity of Eq.~(\ref{D}) is $\xi(p)\not\equiv 0$. In order to establish the sufficient conditions, one should investigate the convergence of the integrals that define $\partial D/\partial \alpha_{\text{\tiny R}}$. Provided that $\xi(p)$ and $\zeta(p)$ have no singularities at finite values of $p$, this reduces to a study of the convergence of the corresponding integrals at $p=\infty$. Uniform convergence is guaranteed, for instance, if the distribution functions $f(\varepsilon^{\pm}(\boldsymbol p))$ decay sufficiently fast at infinity. This will be the case if, at large $p$, the function $\xi(p)$ is positive, unbounded, and grows faster than $\vert p\,\zeta(p)\vert$.
The result of Eq.~(\ref{A}) provides the value of the exchange stiffness in the absence of SOC, hence it depends on $\xi(\cdot)$ only. If $\xi(p)$ has no singularities at finite values of $p$, and it is positive and unbounded at large $p$, Eq.~(\ref{A}) is valid.
\subsection{Derivation of Eqs.~(\ref{symmetric_energy_general}) and (\ref{A}) of the main text of the Letter}
In order to compute the symmetric exchange contribution to the micromagnetic free energy density, one has to extract all terms proportional to $\nabla_\beta n_\gamma\nabla_{\beta'} n_{\gamma'}$ and $\nabla_\beta\nabla_{\beta'} n_\gamma$ in the electronic grand potential, Eq.~(\ref{Omega_general}). To do that, we extend the Dyson series of Eq.~(\ref{Dyson}) as
\begin{multline}
\label{Dyson_hardcore}
\mathcal G(\boldsymbol r_0,\boldsymbol r_0)=G(\boldsymbol r_0-\boldsymbol r_0)+
J_{\text{sd}} S\int d\boldsymbol r'\,
G(\boldsymbol r_0-\boldsymbol r')
\left[
\sum\limits_{\beta\gamma}(\boldsymbol r'-\boldsymbol r_0)_\beta\nabla_\beta n_\gamma(\boldsymbol r_0)\,\sigma_\gamma
\right]
G(\boldsymbol r'-\boldsymbol r_0)
\\
+
(J_{\text{sd}} S)^2\int d\boldsymbol r'd\boldsymbol r''\,
G(\boldsymbol r_0-\boldsymbol r')
\left[
\sum\limits_{\beta\gamma}(\boldsymbol r'-\boldsymbol r_0)_\beta\nabla_\beta n_\gamma(\boldsymbol r_0)\,\sigma_\gamma
\right]
G(\boldsymbol r'-\boldsymbol r'')
\left[
\sum\limits_{\beta'\gamma'}(\boldsymbol r''-\boldsymbol r_0)_{\beta'}\nabla_{\beta'} n_{\gamma'}(\boldsymbol r_0)\,\sigma_{\gamma'}
\right]
G(\boldsymbol r''-\boldsymbol r_0)
\\
+
\frac{J_{\text{sd}} S}{2}\int d\boldsymbol r'\,
G(\boldsymbol r_0-\boldsymbol r')
\left[
\sum\limits_{\beta\beta'\gamma}(\boldsymbol r'-\boldsymbol r_0)_\beta(\boldsymbol r'-\boldsymbol r_0)_{\beta'}\nabla_\beta\nabla_{\beta'} n_\gamma(\boldsymbol r_0)
\right]
G(\boldsymbol r'-\boldsymbol r_0),
\end{multline}
where the first line has already been analysed in the main text, the second line is a second-order correction to the Green's function due to the first spatial derivatives of $\boldsymbol n$, while the third line is a first-order correction due to the second spatial derivatives of $\boldsymbol n$. We substitute the latter two into Eq.~(\ref{Omega_general}), switch to the momentum representation, and symmetrize the outcome, arriving at
\begin{equation}
\label{Exc_general_0}
\Omega_A[\boldsymbol n]=
\sum_{\beta\beta'\gamma\gamma'}{\Omega^{\text{exc-I}}_{\beta\beta'\gamma\gamma'}\nabla_\beta\, n_\gamma\nabla_{\beta'}\, n_{\gamma'}}
+
\sum_{\beta\beta'\gamma}{\Omega^{\text{exc-II}}_{\beta\beta'\gamma}\,\nabla_\beta\nabla_{\beta'}\, n_\gamma},
\end{equation}
where the tensors are defined as
\begin{equation}
\label{Exc_general_1}
\Omega^{\text{exc-I}}_{\beta\beta'\gamma\gamma'}=
T\frac{(J_{\text{sd}} S)^2}{2\pi}
\im{
\int{d \varepsilon\,
g(\varepsilon)
\int{
\frac{d^2 p}{(2\pi)^2}
\tr{
\Bigl(
G^{R}\,v_\beta\,G^{R}\sigma_\gamma\,G^{R}\sigma_{\gamma'}\,G^{R}\,v_{\beta'}\,G^{R}+
G^{R}\,v_{\beta'}\,G^{R}\sigma_{\gamma'}\,G^{R}\sigma_\gamma\,G^{R}\,v_\beta\,G^{R}
\Bigr)
}
}
}
}
\end{equation}
and
\begin{equation}
\label{Exc_general_2}
\Omega^{\text{exc-II}}_{\beta\beta'\gamma}=
-T\frac{J_{\text{sd}} S}{4\pi}
\im{
\int{d \varepsilon\,
g(\varepsilon)
\int{
\frac{d^2 p}{(2\pi)^2}
\tr{\left(
\frac{\partial^2 G^{R}}{\partial p_\beta\partial p_{\beta'}}\sigma_\gamma\,G^{R}+
G^{R}\sigma_\gamma\,\frac{\partial^2 G^{R}}{\partial p_\beta\partial p_{\beta'}}
\right)}
}
}
}.
\end{equation}
The notation of the argument of $\boldsymbol n(\boldsymbol r_0)$ is dropped in Eq.~(\ref{Exc_general_0}) and further below.
The Green's functions entering Eqs.~(\ref{Exc_general_1}) and (\ref{Exc_general_2}) are taken in the momentum representation of Eq.~(\ref{green's_functions}) of the main text, but with $\alpha_{\text{\tiny{R}}}=0$. Performing the matrix trace calculation and integrating over the angle, we obtain
\begin{gather}
\label{calculated_Omega_A_1}
\Omega^{\text{exc-I}}_{\beta\beta'\gamma\gamma'}=
A_1\,\delta_{\beta\beta'}\delta_{\gamma\gamma'}+
W\,\delta_{\beta\beta'}n_{\gamma}n_{\gamma'},
\\
\label{calculated_Omega_A_2}
\Omega^{\text{exc-II}}_{\beta\beta'\gamma}=
A_2\,\delta_{\beta\beta'}n_{\gamma},
\end{gather}
where $\delta_{q_1 q_2}$ is the Kronecker delta, while
\begin{gather}
\label{almost_A_1}
A_1=\frac{\Delta_{\text{sd}}^2}{2\pi^2}T
\int\limits_0^{\infty}p\,dp\int\limits_{-\infty}^{\infty}d\varepsilon\,
g(\varepsilon)\,
\im{\left(
\frac{\left[\xi'(p)\right]^2\left[3\Delta_{\text{sd}}^2+(\varepsilon-\xi(p))^2\right] \left[\varepsilon-\xi(p)\right]}{[\varepsilon+i 0-\varepsilon^{+}_0(\boldsymbol p)]^4[\varepsilon+i 0-\varepsilon^{-}_0(\boldsymbol p)]^4}
\right)},
\\
\label{almost_A_2}
A_2=-\frac{\Delta_{\text{sd}}^2}{\pi^2}T
\int\limits_0^{\infty}p\,dp\int\limits_{-\infty}^{\infty}d\varepsilon\,
g(\varepsilon)\,
\im{\left(
\frac{\left[\xi'(p)+p\,\xi''(p)\right]\left[\Delta_{\text{sd}}^2+3(\varepsilon-\xi(p))^2\right]}{4 p[\varepsilon+i 0-\varepsilon^{+}_0(\boldsymbol p)]^3[\varepsilon+i 0-\varepsilon^{-}_0(\boldsymbol p)]^3}
+2\frac{\left[\xi'(p)\right]^2\left[\Delta_{\text{sd}}^2+(\varepsilon-\xi(p))^2\right] [\varepsilon-\xi(p)]}{[\varepsilon+i 0-\varepsilon^{+}_0(\boldsymbol p)]^4[\varepsilon+i 0-\varepsilon^{-}_0(\boldsymbol p)]^4}
\right)},
\end{gather}
and the actual value of $W$ is not relevant for the final result. Combining Eqs.~(\ref{Exc_general_0}),~(\ref{calculated_Omega_A_1}), and (\ref{calculated_Omega_A_2}) we find
\begin{equation}
\label{A1+A2}
\Omega_A[\boldsymbol n]=
A_1\left[(\nabla_x\boldsymbol n)^2+(\nabla_y\boldsymbol n)^2\right]+
A_2\left[\boldsymbol n\,\nabla_x^2\boldsymbol n+\boldsymbol n\,\nabla_y^2\boldsymbol n\right]
+W(\boldsymbol n\, \nabla_x \boldsymbol n)^2+W(\boldsymbol n\, \nabla_y \boldsymbol n)^2.
\end{equation}
Before we proceed, it is important to notice two consequences of the constraint $\boldsymbol n^2 \equiv 1$, namely,
\begin{equation}
\label{wisdom}
\frac{1}{2}\nabla_\beta \boldsymbol n^2=\boldsymbol n\, \nabla_\beta \boldsymbol n=0
\qquad\text{and} \qquad
\frac{1}{2}\nabla_\beta^2 \boldsymbol n^2=\nabla_\beta(\boldsymbol n\, \nabla_\beta \boldsymbol n)=(\nabla_\beta\boldsymbol n)^2+\boldsymbol n\, \nabla_\beta^2 \boldsymbol n=0.
\end{equation}
With the help of Eq.~(\ref{wisdom}) we are able to bring Eq.~(\ref{A1+A2}) to the form
\begin{equation}
\Omega_A[\boldsymbol n]=
(A_1-A_2)\left[(\nabla_x\boldsymbol n)^2+(\nabla_y\boldsymbol n)^2\right],
\end{equation}
proving Eq.~(\ref{symmetric_energy_general}) of the main text with $A=A_1-A_2$.
To complete the calculation of the exchange stiffness $A$, one should perform a partial fraction decomposition of the integrands in Eqs.~(\ref{almost_A_1}),~(\ref{almost_A_2}) and make use of the formula
\begin{equation}
\im{\left([\varepsilon-\varepsilon^{\pm}_0(\boldsymbol p)+i0]^{-n-1}\right)} =
\frac{(-1)^{n+1}}{n!}\pi\,\delta^{(n)}(\varepsilon-\varepsilon^{\pm}_0(\boldsymbol p))
\end{equation}
to integrate over $\varepsilon$ with the result
\begin{multline}
\label{almost_A}
A=\frac{\Delta_{\text{sd}}}{32\pi}T
\int_{0}^{\infty}{d p\,\frac{p\,[\xi'(p)]^2}{\Delta_{\text{sd}}^2}(g_-'-g_+')}
+
\frac{\Delta_{\text{sd}}}{32\pi}T
\int_{0}^{\infty}{d p\,\frac{p\,[\xi'(p)]^2}{\Delta_{\text{sd}}}(g_-''+g_+'')}
\\
+\frac{\Delta_{\text{sd}}}{16\pi}T
\int_{0}^{\infty}{d p\,[\xi'(p)+p\,\xi''(p)](g_-''-g_+'')}
+\frac{\Delta_{\text{sd}}}{16\pi}T
\int_{0}^{\infty}{d p\,p\,[\xi'(p)]^2(g_-'''-g_+''')},
\end{multline}
where $\xi'(p)=\partial \xi/\partial p$ and the derivatives of $g_{\pm}=g(\varepsilon^{\pm}_0(\boldsymbol p))=g(\xi(p)\pm\Delta_{\text{sd}})$ are taken with respect to the argument. The latter can equivalently be regarded as derivatives with respect to $\xi$,
\begin{equation}
g_{\pm}^{(n)}=\frac{\partial^{n} g_{\pm}}{\partial \xi^{n}}.
\end{equation}
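As a consistency check of the $\delta$-function formula used above to integrate over $\varepsilon$: setting $n=0$ recovers the familiar Sokhotski--Plemelj identity,
\begin{equation}
\frac{1}{\varepsilon-\varepsilon^{\pm}_0(\boldsymbol p)+i0}=
\mathcal{P}\,\frac{1}{\varepsilon-\varepsilon^{\pm}_0(\boldsymbol p)}
-i\pi\,\delta(\varepsilon-\varepsilon^{\pm}_0(\boldsymbol p)),
\end{equation}
where $\mathcal{P}$ denotes the principal value, while the cases $n>0$ follow by differentiating this identity $n$ times with respect to $\varepsilon$.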
The third term cancels out the fourth term in Eq.~(\ref{almost_A}) after integration by parts with the help of
\begin{equation}
\xi'(p)+p\,\xi''(p)=\partial [p\,\xi'(p)]/\partial p.
\end{equation}
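Explicitly, assuming that the boundary term $p\,\xi'(p)(g_-''-g_+'')$ vanishes at $p=0$ and $p\to\infty$, and noting that $\partial g_{\pm}^{(n)}/\partial p=\xi'(p)\,g_{\pm}^{(n+1)}$, integration by parts yields
\begin{equation}
\int_{0}^{\infty}{d p\,\frac{\partial [p\,\xi'(p)]}{\partial p}\,(g_-''-g_+'')}=
-\int_{0}^{\infty}{d p\,p\,[\xi'(p)]^2\,(g_-'''-g_+''')},
\end{equation}
which is exactly minus the integral entering the fourth term.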
In the remaining terms, one replaces the derivatives of $g_{\pm}=g(\xi(p)\pm\Delta_{\text{sd}})$ with respect to $\xi$ by the derivatives with respect to $\Delta_{\text{sd}}$, reduces the resulting expression to a form of a full derivative with respect to $\Delta_{\text{sd}}$, and uses the relation $\partial g(\varepsilon)/\partial \varepsilon=-f(\varepsilon)/T$ to arrive at Eq.~(\ref{A}) of the main text.
\end{document}
\section{Introduction}
Silica aerogels (aerogels) are a colloidal form of glass, in which globules of
silica are connected in three-dimensional networks with siloxane
bonds.
They are solid,
very light, transparent and their refractive index can be controlled in
the production process. Many high energy and nuclear physics experiments
have used aerogels instead of pressurized gas for
their \v Cerenkov counters
\cite{used1,used2,used3,used4,used5,used6,used7,used8,used9,used10}.
Several experiments in the near future, operating under very
high doses of radiation, are likely to employ aerogel
\v Cerenkov counters for
their particle identification systems\cite{aerogel,slactdr,loi,tdr}.
The stability of aerogels under such circumstances has, however, not been
studied yet. In this report we show, for the first time, the effect
of radiation damage on aerogel samples produced
by some of us at KEK\cite{aerogel,loi,tdr}.
In the next section, we quantify the \v Cerenkov light production
in aerogels. Prospects of using aerogels in present
and future experiments are explored in the third section. The experimental
setup and results are described in the fourth and fifth sections,
respectively. The conclusion is stated in the
sixth section of this report.
\section{Silica Aerogels for KEK B-Factory}
The production method of aerogels used for the KEK B-factory experiment
can be found in Refs. \cite{aerogel,loi,tdr}.
The accessible range of refractive index
($n$) in this method is between 1.006 and 1.060, and it can be controlled
to a level of $\delta n$=0.0004.
The typical size of a tile was
12 cm $\times$ 12 cm $\times$ 2 cm, within a tolerance of 0.3\%.
Absorption and scattering lengths as functions of incident
wavelength were obtained
from a transmittance measurement with a
spectrophotometer\cite{monochrometer} using the following relations.
We defined a transmission length $\Lambda$ as
\begin{equation}
T/T_0 = exp \left(-t/\Lambda \right),
\end{equation}
where T/T$_{0}$ is the transmittance, and $t$ is the thickness of the aerogel.
Then we found the absorption and scattering lengths by fitting $\Lambda$
with the following equations:
\begin{equation}
\frac{1}{\Lambda} =
\frac{1}{\Lambda_{abs}} + \frac{1}{\Lambda_{scat}}
\label{lambda1}
\end{equation}
\begin{equation}
\Lambda_{abs}=a\lambda^2,~\Lambda_{scat}=b\lambda^4,
\label{lambda2}
\end{equation}
where $\Lambda_{abs}$ is the absorption length, $\Lambda_{scat}$ the scattering
length, and $a$, $b$ are free parameters.
The results are shown in
Figure \ref{transmittance}.
At 400 nm, where a typical photo-multiplier tube (PMT) with a bialkali
photocathode has maximum sensitivity, the absorption
and scattering lengths were measured to be
$22\pm2$ and $5.2\pm0.5$ cm, respectively.
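As an illustration of this fitting procedure, the following is a minimal sketch in Python with SciPy; the wavelength grid and the noiseless synthetic data are hypothetical placeholders (constructed to reproduce the quoted 400 nm values), standing in for an actual spectrophotometer scan.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

t = 2.0  # tile thickness in cm

def transmittance(lam, a, b):
    # T/T0 = exp(-t/Lambda), with 1/Lambda = 1/(a*lam^2) + 1/(b*lam^4)
    inv_lambda = 1.0/(a*lam**2) + 1.0/(b*lam**4)
    return np.exp(-t*inv_lambda)

# placeholder data: lam in nm, chosen to give 22 cm and 5.2 cm at 400 nm
lam = np.linspace(300.0, 700.0, 41)
T_ratio = transmittance(lam, 22/400**2, 5.2/400**4)

(a, b), _ = curve_fit(transmittance, lam, T_ratio, p0=[1e-4, 1e-10])
print("Lambda_abs(400 nm)  = %.1f cm" % (a*400**2))
print("Lambda_scat(400 nm) = %.1f cm" % (b*400**4))
\end{verbatim}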
The \v Cerenkov light yield of our aerogel samples was measured
using a 3.5 GeV/c $\pi^-$ beam at KEK PS ($\pi 2$ beam line).
The light yield from 12 cm-thick
aerogel ({\it i.e.}, 6 layers)
was measured by two 3-inch PMTs which were directly attached
to both sides of the aerogel surfaces.
The counter size was 12 cm $\times$ 12 cm $\times$ 12 cm.
Surfaces other than the photocathode
area were covered by GORETEX white reflector\cite{gortex}.
Measurements of $N_{pe}$ and $N_0$, as defined below,
are given in Table \ref{n0}. The number of photoelectrons
($N_{pe}$) is given by the Frank-Tamm equation\cite{franc_tamm}~:
\begin{equation}
{dN_{pe} \over dE} = \left( {\alpha \over \hbar c} \right)
\cdot L \cdot z^2 \cdot \sin ^2 \theta _C \cdot \epsilon _{QE} (E)
\cdot \epsilon ,
\label{frank-tamm}
\end{equation}
where $\alpha$ is the fine structure constant,
$L$ thickness of radiator, $z$ charge of an incident particle,
$\theta _{C}$
\v Cerenkov angle, $\epsilon _{QE}(E)$ quantum efficiency of the PMTs,
and $\epsilon$ detection efficiency including light absorption and light
collection.
The energy-integrated quantum efficiency of the PMTs used in this experiment was
measured to be
\begin{equation}
\int \epsilon _{QE} (E) dE = 0.46 eV.
\label{qe}
\end{equation}
Using Equations \ref{frank-tamm} and \ref{qe},
$\epsilon$ was estimated to be $\sim$ 0.6, which is higher than that for
any other existing aerogel counter.
The \v Cerenkov Quality Factor $N_0$, defined as
\begin{equation}
N_0 = \left( {\alpha \over \hbar c} \right)
\cdot \int \epsilon _{QE}
\cdot \epsilon \cdot dE,
\label{n0def}
\end{equation}
is given in Table \ref{n0} for samples of different refractive
indices.
For our aerogels,
$N_0$ was typically $\sim$100 cm$^{-1}$, independent of the refractive
index in the range of 1.01 $< n <$ 1.03.
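For an order-of-magnitude feel for these numbers, Equations \ref{frank-tamm} and \ref{n0def} give $N_{pe} \simeq N_0 L z^2 \sin^2\theta_C$ for a $\beta\simeq 1$ particle, neglecting the energy dependence of $\theta_C$. A short sketch of this rough estimate (the measured values in Table \ref{n0} remain the actual reference):
\begin{verbatim}
import numpy as np

def n_pe(n, L_cm, N0_per_cm=100.0, z=1, beta=1.0):
    # N_pe ~ N0 * L * z^2 * sin^2(theta_C), with cos(theta_C) = 1/(n*beta)
    sin2_theta = 1.0 - 1.0/(n*beta)**2
    return N0_per_cm * L_cm * z**2 * sin2_theta

for n in (1.012, 1.018, 1.028):
    print("n = %.3f: N_pe ~ %.0f" % (n, n_pe(n, 12.0)))
\end{verbatim}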
\section{Use of Aerogels in High-Radiation Environment}
\subsection{B-Factory}
$B$-factories are high-luminosity colliders where copious amounts of
$B$-mesons are produced and their decays are studied \cite{slactdr,loi,tdr}.
The primary goal of these
machines is to study the CP violation in the heavy quark sector
\cite{km,sanda}.
Because of the high luminosity of these machines, the corresponding
detectors are subjected to a substantial amount of radiation from
both physics and background events.
Particle identification is a vital part of the detectors in these
machines. Threshold aerogel \v Cerenkov counters are being proposed
for $\pi/K/p$ separation in these detectors.
\subsubsection{$e^+e^-$ Collider}
Two asymmetric $B$-factories, producing $B$-mesons by annihilating
electrons and positrons, are being constructed at KEK \cite{loi,tdr} and
SLAC
\cite{slactdr}. Threshold aerogel \v Cerenkov counters
are to be used as part of the particle identification systems.
Radiation dose in these detectors is estimated to be typically 1 kRad/year
at places closest to the beam-pipe \cite{tdr}.
\subsubsection{Hadron Machine}
An imaging \v Cerenkov counter using an aerogel radiator has been proposed
\cite{spot}, which could be a potential candidate for
a particle identification system at low angle in a generic
hadron $B$-factory, such as the HERA-$B$ experiment\cite{herab}.
Considering the scattering length of our aerogel
(Figure \ref{transmittance}), aerogel as thin as 1 cm
can be used as a radiator to make a ring image of
the \v Cerenkov radiation.
In case of the HERA-$B$ experiment, the radiation dose is
expected to be 10 Mrad/year at the innermost edge.
\subsection{Nuclear Science}
The possibility of separating high energy isotopes
with aerogel \v Cerenkov counters is discussed here.
Defining
\begin{equation}
n_0=n-1 \mbox{ and } \beta_0 = 1 - \beta,
\label{definition}
\end{equation}
we obtain
\begin{equation}
N_{pe} \simeq 2N_0 L z^2 (n_0 - \beta_0),
\end{equation}
using Equations \ref{frank-tamm}, \ref{n0def}, and \ref{definition}.
Assuming a Poisson distribution for $N_{pe}$, the measurement error
of $\beta$ is calculated to be
\begin{equation}
\delta \beta_0 \simeq {1 \over z} \sqrt{n_0-\beta_0 \over 2N_0 L}.
\end{equation}
The momentum resolution for a nucleon is expressed as
\begin{equation}
{\delta p \over p} \simeq {1 \over z \beta_0} \sqrt{n_0-\beta_0
\over 8 N_0 L}.
\end{equation}
Slightly above threshold,
{\it i.e.}, at $\beta_0 \sim n_0/2$, this becomes
\begin{equation}
{\delta p \over p} \simeq {1 \over z \sqrt{
4 n_0 N_0 L}}.
\end{equation}
Assuming $N_0=100$ cm$^{-1}$, $L$=10 cm, and $n$=1.05, we obtain
a momentum resolution of 0.07/$z$ which is better than
the typical resolution of magnetic
spectrometers for high-$z$ incident particles, {\it i.e.}, heavy nuclei.
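A quick numerical check of this estimate with the parameters just quoted:
\begin{verbatim}
import numpy as np

n0, N0, L = 0.05, 100.0, 10.0   # n = 1.05, N0 in cm^-1, L in cm
for z in (1, 2, 6):
    dp_over_p = 1.0/(z*np.sqrt(4.0*n0*N0*L))
    print("z = %d: dp/p ~ %.3f" % (z, dp_over_p))   # 0.071 for z = 1
\end{verbatim}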
Therefore a combination of aerogel counters and a magnetic spectrometer
can separate isotopes. The nuclear charge $z$ can be measured by
other devices, such as
scintillators and/or a combination
of \v Cerenkov radiators.
Rigidity $R$ can also be measured by the
magnetic spectrometer.
Therefore we can determine the mass number of the nucleus, $A$, using the
relationship
\begin{equation}
Ap = zR.
\end{equation}
For nuclei having an $A/z$ value close to 2, the measurement error of $A$
is calculated to be
\begin{equation}
\delta A \sim 0.14 + x A R,
\end{equation}
where $x$ is the rigidity resolution of the spectrometer
determined by the position resolution and multiple Coulomb
scattering.
Therefore the limit of separating heavy isotopes is set by the
spectrometer resolution.
For example, with $n$ = 1.05 aerogel and
in the 5 GeV/nucleon region, we can distinguish
nuclei up to
Boron with a 0.5\%-resolution spectrometer and up to Vanadium with
a 0.1\%-resolution one.
In summary, using aerogels, we can distinguish nuclear fragments in heavy ion
collisions at low angle.
A combination of aerogel counters of several refractive indices can even
replace a magnetic spectrometer as far as the issue of particle
identification goes.
For the RHIC collider, the radiation dose at a very low angle is considered
to be less than 100 krad/year \cite{phenix}.
\subsection{Space Experiments}
The HEAO-C2 experiment \cite{heao}, which used
aerogel \v Cerenkov counters, measured the abundance of heavy nuclei
in high energy cosmic rays.
The quality of our aerogel is more than four times better
than that used in HEAO-C2; if used, it could therefore
substantially improve the
quality of the data. In addition, a combination of aerogels and
a magnetic spectrometer in the experiment could separate
the isotopes. This would be a completely new method.
The expected radiation dose is 10 kRad/year at 600 km
altitude \cite{takahashi}.
Recently a space-station-based experiment, the Anti-Matter Spectrometer
(AMS), has been proposed\cite{ting}, which will look for heavy
anti-matter nuclei in space. The same technique for isotope separation
discussed above can be used to separate isotopes in this spectrometer.
The AMS will orbit the earth at about 300 km altitude, where
the radiation dose is expected to be several kilorads per year.
\section{Experimental Setup}
The optical transparency of the aerogel should be as high as possible,
so as not to lose \v Cerenkov photons inside it. The refractive index of
the aerogel, $n$, should be stable during an experiment.
In the present work, two properties of aerogel samples,
transmittance and refractive index, were measured
in order to monitor radiation damage on the samples at the irradiation
facility of National Tsing Hua University (Taiwan).
In the cases of $B$-factories and space experiments, most of the irradiation
is caused by electrons and/or $\gamma$-rays
of critical energies.
We used a $^{60}$Co $\gamma$-ray source with an activity of
1320 Ci.
The error in the estimation of the radiation dose
was dominated by the uncertainty in the placement of the sample (0.3 cm)
in front of the source
and the ambiguity of the
radius-dependent dose. The error at the highest dose value (9.8 Mrad of
{\it equivalent~dose} \cite{edose})
was the largest, at 17\%.
For each aerogel sample, five or six irradiations
were carried out.
\subsection{Transmittance}
Transmittance of the aerogel samples was measured by observing the
ratio of photons absorbed in the aerogel volume with and
without the irradiation. A schematic diagram of the setup is
given in Figure~\ref{setupa}. Three aerogel samples of dimensions
12 cm $\times$ 12 cm $\times$ 2 cm each were stacked together
in a zinc box. This box was kept inside a mother-box having
an LED light box on one side and a photo-multiplier tube
on the other. The whole system was enclosed in a big light-tight
box. Inner sides of this box were covered with black clothes to absorb
any stray light.
We made two such stacks for each refractive index. One
was irradiated (RAD-sample) and the other one was kept
shielded in the irradiation cell (REF-sample). The latter
is for reference. It is important that both RAD and REF be
kept under the same environmental conditions to cancel out any
effects caused by humidity, temperature, dust, or any other
factors other than gamma-ray radiation.
The LED (blue in color) was triggered by an external pulse
generator to produce bursts of photon. Typically the trigger
signal had a pulse height of 3.75 V, width of 420 ns, and
was repeated every 10 msec. About 400 photo-electrons were produced
for each burst in the absence of aerogel asmples between the LED
and the phototube. The number reduced to about 200 p.e.
when the stack of aerogel was introduced.
We integrated the charge produced by a 2-inch PMT
\cite{r329} for each photon burst
within a gate of 0.5 $\mu$s. The ratio of
this integrated charge with aerogel between the PMT and LED to
that without aerogel gives us a measure of the transparency
of that aerogel sample. We call this ratio
$r_{RAD}$ for the irradiated sample and $r_{REF}$ for the
reference sample. The ratio $r_{abs} = r_{RAD}/r_{REF}$ gives us
the transparency of irradiated sample with respect to the
reference one.
It may be noted that $r_{abs}$ is sensitive only to radiation damage,
whereas $r_{REF}$ tells us if there is deterioration
due to, e.g., atmospheric conditions.
After each stage of irradiation, the RAD and REF samples were
tested for transmittance, and the ratios $r_{REF}$, $r_{RAD}$ and
$r_{abs}$ were calculated.
Before the irradiation, $r_{REF}$ and $r_{RAD}$ were measured
several times to make sure that we get a set of systematically
consistent consecutive readings.
The dominant systematic error came from the uncertainty in the placement of
the aerogel samples. Other minor sources were
the temperature dependence and instabilities
of the PMT and LED.
From these readings we found that the combined error of measurement of
each transparency ratio, $r_{RAD}$ or $r_{REF}$, was 0.55\%.
Therefore we estimated the error of $r_{abs}$ to be 0.78\% ($=0.55\sqrt{2}\,\%$).
\subsection{Refractive Index}
Refractive index of the aerogel sample was monitored using
the {\em Prism Formula}
\begin{equation}
n = \frac{\sin(\phi /2 + \alpha /2)}{\sin(\phi /2)},
\label{prism}
\end{equation}
where $n$ is the refractive index, $\phi$ the angle of the prism,
$\alpha$ the angle of the minimum deflection.
The setup for measuring $n$ is shown in Figure~\ref{setupb}.
A red laser (Ar II)
was shone on the corner of the aerogel sample placed
on a rotating table. The aerogel behaved as a prism, and
deviated the laser beam on a screen
placed at a distance $l$. By rotating the table
manually and finding the minimum displacement $d$ of the spot on the screen,
the angle of minimum deflection
$\alpha$ was determined.
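A minimal sketch of this evaluation, assuming the screen is perpendicular to the undeviated beam (so that $\tan\alpha = d/l$) and using the typical values quoted in the Results section:
\begin{verbatim}
import numpy as np

def refractive_index(phi_deg, l_cm, d_cm):
    # prism formula: n = sin(phi/2 + alpha/2)/sin(phi/2)
    phi = np.radians(phi_deg)
    alpha = np.arctan2(d_cm, l_cm)   # angle of minimum deflection
    return np.sin(phi/2 + alpha/2)/np.sin(phi/2)

# typical values: phi = 90 deg, l = 160 cm, d = 6 cm
print("n = %.4f" % refractive_index(90.0, 160.0, 6.0))   # ~1.0186
\end{verbatim}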
A special sample was prepared which was irradiated in parallel with
the stacks for the transmittance test. Its refractive index was monitored
by the method described above after each stage of irradiation.
We estimated the error of the refractive index measurement by propagating
errors through the prism formula, which gives
\begin{equation}
\left( {\Delta n \over n} \right) ^2=
\left[ \left\{ \sin {\alpha \over 2} -
{\cos {\alpha \over 2} \over \tan {\phi \over 2}}\right\}
\cos ^2 \alpha \right]^2
\left\{ {d^2 \over 4l^4} (\Delta l)^2+{1 \over 4 l^2}(\Delta d)^2 \right\}
+{1\over 4}
{\sin ^2 {\alpha \over 2} \over \sin ^4 {\phi \over 2}} (\Delta \phi)^2.
\label{errorprism}
\end{equation}
For the $n$ = 1.012 and 1.028 samples,
the accuracy of measuring $l$ was $\Delta l$ = 0.2 cm and that
of $d$ was $\Delta d$ = 0.1 cm.
For $n$ = 1.018 samples, however, the $\Delta d$ was worse (=0.3 cm)
due to the inferior
surface quality, giving rise to a spread of the laser spot on the screen.
Error in measuring the refracting angle of the sample was $\Delta\phi$
= $0.1^\circ$.
The errors $\Delta n$ were then calculated to be 0.00035 for the $n$ = 1.012 and
1.028 samples, and 0.001 for the $n$ = 1.018 sample, using
Equation~\ref{errorprism}.
It may be noted that $\Delta d$ dominates the final error.
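A direct numerical evaluation of Equation~\ref{errorprism} with these inputs (a sketch; $\alpha$ is reconstructed from the typical $l$ and $d$ quoted in the Results section) reproduces the quoted errors:
\begin{verbatim}
import numpy as np

def delta_n(n, phi, alpha, l, d, dl, dd, dphi):
    # error propagation of the prism formula, Equation (errorprism)
    pref = ((np.sin(alpha/2) - np.cos(alpha/2)/np.tan(phi/2))*np.cos(alpha)**2)**2
    geom = d**2/(4*l**4)*dl**2 + 1.0/(4*l**2)*dd**2
    ang = 0.25*np.sin(alpha/2)**2/np.sin(phi/2)**4*dphi**2
    return n*np.sqrt(pref*geom + ang)

phi, alpha = np.radians(90.0), np.arctan2(6.0, 160.0)
print(delta_n(1.012, phi, alpha, 160, 6, 0.2, 0.1, np.radians(0.1)))  # ~3e-4
print(delta_n(1.018, phi, alpha, 160, 6, 0.2, 0.3, np.radians(0.1)))  # ~9e-4
\end{verbatim}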
\section{Results}
\subsection{Transmittance}
Samples of three refractive indices~: 1.012, 1.018 and 1.028 were
tested. The transparency ratio $r_{abs}$ was measured at several
points, ranging from 1 kRad to 9.8 MRad. The results are
plotted in Figure~\ref{radtra} for each refractive index.
No degradation in transparency is observed in any samples
within experimental errors.
Defining absolute deterioration as the maximum of
the measurement error and the deviation from the initial measurement,
we conclude that the radiation damage to transparency of the samples
is less than 1.3\% at 90\% confidence level (CL) at 9.8 Mrad dose.
\subsection{Refractive Index}
Samples of three refractive indices~: 1.012, 1.018 and 1.028 were
tested. Refractive index was measured at several points,
ranging from 1 kRad to 9.8 MRad. The results are shown in
Figure~\ref{radind}.
Angle $\phi$ for the aerogel samples was $90^\circ$. Typical values
of $l$ and $d$ were 160 cm and 6 cm, respectively.
Accuracy of determining refractive index ($\Delta n$) in this method
is better than 0.001.
Again, we do not see any change in refractive index in any of
the samples within experimental errors \cite{problem1}.
\subsection{Post-irradiation Beam Test}
The three irradiated samples were later tested with the pion beam at the
$\pi2$ beam line of the KEK-PS. Momentum of the beam was
varied between 0.8 GeV/c and 3.5 GeV/c, from which
refractive indices of the samples could be calculated. The
numbers perfectly agreed with the values obtained
at the production time (the open symbols in Figure \ref{radind}).
This gives us additional confirmation
that there has not been any radiation damage to the aerogel samples.
Using the same definition of deterioration as above, we conclude that
the radiation damage to refractive index of the samples is less than
0.001 at 90\% CL at 9.8 MRad dose.
\section{Conclusion}
Silica aerogels of low refractive index ($n = 1.012 \sim 1.028$)
are found to be
radiation-hard at least up to 9.8 Mrad of gamma-ray radiation. We observe
no change in transparency and refractive index within the error of
measurement after
the irradiation. Measurement accuracies were 0.8\% for
transparency and $<$0.0006 for refractive index, respectively.
90\%-CL upper limits on the radiation damage are
1.3\% for transparency and 0.001 for refractive index, respectively.
Silica aerogels can be used in high-radiation environments, such as
$B$-factories, nuclear and heavy-ion experiments, space-station
and satellite experiments, without any fear of radiation damage.
\section*{Acknowledgements}
We would like to thank the BELLE Collaboration of KEK $B$-Factory
for its help in this project.
The aerogels were developed under a collaborative
research program between Matsushita Electric Works Ltd. and KEK.
One of the authors (RE) thanks
Drs.~S.~Kuramata (Hirosaki Univ.) and S.~Yanagida (Ibaraki Univ.)
for useful discussions on certain relevant physics issues.
We are grateful to the staff-members Dr.~F.I.~Chou, M.T.~Duo, K.W.~Fang
and Y.Y.~Wei of the Radio-isotope Division of NSTDC
at National Tsing Hua University, Hsin-Chu (Taiwan), where the
irradiation was conducted, for their help and co-operation.
We are thankful to Prof.~J.C.~Peng of LANL for valuable comments
on this manuscript.
This experiment was supported in part by the grant
NSC~85-2112-M-002-034 of the Republic of China.
\newpage
\end{document}
\section{Introduction}
The contextuality of quantum theory is a fundamental sign of its nonclassicality that has been investigated for several decades.
While contextuality was originally established as a property specific to the formalism of quantum theory~\cite{BellContext, KS}, it has, in more recent times, been further generalised as a property of nonclassical probability distributions that can arise in operational theories~\cite{Spekkens}.
This operational notion of contextuality is applicable to a broad range of physical scenarios and has been shown to be linked to a variety of foundational and applied topics in quantum theory (see, e.g., Refs.~\cite{Spekkens2,Pusey,Leifer,Lostaglio,ArminRoope,Lostaglio2,Anwer,Saha2019,Kunjwal}).
The principle of noncontextuality holds that operationally equivalent physical procedures must correspond to identical descriptions in any underlying ontological model~\cite{Spekkens}.
This assumption imposes constraints on the correlations that can be obtained in prepare-and-measure scenarios involving operationally equivalent preparations and measurements.
In such scenarios, which we term ``contextuality scenarios'', the correlations obtainable by noncontextual models can be characterised in terms of linear programming~\cite{NCpolytope}.
In contrast, quantum models that nonetheless respect the operational equivalences may produce ``contextual correlations'' unobtainable by any such noncontextual model~\cite{Spekkens}.
This leads to a conceptually natural question: how can we determine if, for a given contextuality scenario, a given set of contextual correlations is compatible with quantum theory?
This question is crucial for understanding the extent of nonclassicality manifested in quantum theory, and hence also for the development of quantum information protocols powered by quantum contextuality.
While an explicit quantum model is sufficient to prove compatibility with quantum theory, proving the converse---that no such model exists---is more challenging.
Here, we provide an answer to the question by introducing a hierarchy of semidefinite relaxations of the set of quantum correlations arising in contextuality scenarios involving arbitrary operational equivalences between preparations and measurements.
This constitutes a sequence of increasingly precise necessary conditions that contextual correlations must satisfy in order to admit of a quantum model.
Thus, if a given contextual probability distribution fails one of the tests, it is incompatible with any quantum model satisfying the specified operational equivalences.
We exemplify their practical utility by determining the maximal quantum violations of several different noncontextuality inequalities (for noisy state discrimination~\cite{Schmid}, for three-dimensional parity-oblivious multiplexing~\cite{Ambainis}, for the communication task experimentally investigated in Ref.~\cite{Hameedi}, and for the polytope inequalities obtained in Ref.~\cite{NCpolytope}).
Then, we apply our method to solve a foundational problem in quantum contextuality: we present a correlation inequality satisfied by all quantum models based on pure states and show that it can be violated by quantum strategies exploiting mixed states.
Thus, we prove that mixed states are an indispensable resource for strong forms of quantum contextuality.
Equipped with the ability to bound the magnitude of quantum contextuality, we ask what additional resources are required to simulate preparation contextual correlations with classical or quantum models.
We identify this resource as the preparation of states deviating from the required operational equivalences, and quantify this deviation in terms of the information extractable about the operational equivalences via measurement.
This allows us to interpret preparation contextuality scenarios, and experiments aiming to simulate their results, as particular types of informationally restricted correlation experiments~\cite{Info1, Info2}.
For both classical and quantum models, we show that the simulation cost can be lower bounded using variants of our hierarchy of semidefinite relaxations. We apply these concepts to the simplest preparation contextuality scenario~\cite{POM}, where we explicitly derive both the classical and quantum simulation costs of contextuality.
\section{Contextuality}
Consider a prepare-and-measure experiment in which Alice receives an input $x\in [n_X] \coloneqq \{1,\ldots,n_X\}$, prepares a system using the preparation $P_x$ and sends it to Bob.
Bob receives an input $y\in[n_Y]$, performs a measurement $M_y$ and obtains an outcome $b\in[n_B]$; this event is called the \emph{measurement effect} and is denoted $[b|M_y]$.
When the experiment is repeated many times it gives rise to the conditional probability distribution $p(b|x,y)\coloneqq p(b|P_x,M_y)$.
An ontological model provides a realist explanation of the observed correlations $p(b|x,y)$~\cite{Spekkens}. In an ontological model, the preparation is associated to an ontic variable $\lambda$ subject to some distribution (i.e., an epistemic state) $p(\lambda|x)$ and the measurement is represented by a probabilistic response function depending on the ontic state, $p(b|y,\lambda)$.%
\footnote{\label{fn:CL}These distributions must, respectively, be linear in preparations $x$ (since the epistemic state of a mixture of preparations is the mixture of the epistemic states of the respective preparations) and measurement effects $[b|M_y]$ (since a measurement effect arising from a mixture of measurements or a post-processing of outcomes must be represented by the corresponding mixture and post-processing of response functions).}
The observed correlations are then written
\begin{equation}
p(b|x,y)=\sum_{\lambda} p(\lambda|x)p(b|y,\lambda).
\end{equation}
Notice that every probability distribution admits of an ontological model.
\subsection{Operational equivalences}
The notion of noncontextuality becomes relevant when certain operational procedures (either preparations or measurements) are \emph{operationally equivalent}~\cite{Spekkens}.
Two preparations $P$ and $P'$ are said to be operationally equivalent, denoted $P\simeq P'$, if no measurement\footnote{Here, the quantifier is over all possible measurements (that, e.g., Bob could perform), not only the fixed set $\{M_y\}_y$ he uses in the prepare-and-measure experiment at hand.} can distinguish them, i.e.,
\begin{equation}
\forall\, [b|M]: \quad p(b|P,M) = p(b|P',M).
\end{equation}
Similarly, two measurement effects $[b|M]$ and $[b'|M']$ are operationally equivalent, denoted $[b|M]\simeq [b'|M']$, if no preparation can distinguish them, i.e.,
\begin{equation}
\forall\, P: \quad p(b|P,M) = p(b'|P,M').
\end{equation}
In prepare-and-measure experiments, we are particularly interested in operationally equivalent procedures obtained by combining preparations $P_x$ or measurement effects $[b|M_y]$.
Specifically, one may have (hypothetical) preparations $P_\alpha=\sum_{x=1}^{n_X} \alpha_x P_x$ and $P_\beta=\sum_{x=1}^{n_X} \beta_x P_x$, where $\{\alpha_x\}_x$ and $\{\beta_x\}_x$ are convex weights (i.e., non-negative and summing to one) and, likewise, measurement effects $[b_\alpha|M_\alpha]=\sum_{b=1}^{n_B}\sum_{y=1}^{n_Y}\alpha_{b|y}[b|M_y]$ and $[b_\beta|M_\beta]=\sum_{b=1}^{n_B}\sum_{y=1}^{n_Y}\beta_{b|y}[b|M_y]$, where $\{\alpha_{b|y}\}_{b,y}$ and $\{\beta_{b|y}\}_{b,y}$ are sets of convex weights, with $P_\alpha \simeq P_\beta$ and $[b_\alpha|M_\alpha]\simeq [b_\beta|M_\beta]$.
Such operationally equivalent procedures can naturally be grouped into equivalence classes, and it will be convenient for us to specify equivalent procedures in a slightly different, yet equivalent, way as follows.
\begin{definition}\label{defn:OE}
\textbf{(a)} A \emph{preparation operational equivalence} is a set $\mathcal{E}_\mathcal{P}=\{(S_k,\{\xi_k(x)\}_{x\in S_k})\}_{k=1}^K$, where $\{S_k\}_k$ is a partition of $[n_X]$ into $K$ disjoint sets and, for each $k$, $\{\xi_k(x)\}_{x\in S_k}$ are convex weights (i.e., with $\xi_k(x)\ge 0$ and $\sum_{x\in S_k}\xi_k(x) = 1$).
We say that the preparations $\{P_x\}_{x=1}^{n_X}$ satisfy $\mathcal{E}_\mathcal{P}$ if for all $k,k'\in [K]$
\begin{equation}\label{eq:OE_P}
\sum_{x\in S_k} \xi_k(x) P_x \simeq \sum_{x \in S_{k'}} \xi_{k'}(x)P_x.
\end{equation}
\textbf{(b)} A \emph{measurement operational equivalence} is a set $\mathcal{E}_\mathcal{M}=\{(T_\ell,\{\zeta_\ell(b,y)\}_{(b,y)\in T_\ell})\}_{\ell=1}^L$, where $\{T_\ell\}_\ell$ is a partition of $[n_B]\times [n_Y]$ into $L$ disjoint sets and, for each $\ell$, $\{\zeta_\ell(b,y)\}_{(b,y)\in T_\ell}$ are convex weights.
We say the measurements $\{M_y\}_{y=1}^{n_Y}$ with effects $\{[b|M_y]\}_{b=1}^{n_B}$ satisfy $\mathcal{E}_\mathcal{M}$ if for all $\ell,\ell'\in [L]$
\begin{equation}
\sum_{(b,y)\in T_\ell} \zeta_\ell(b,y) [b|M_y] \simeq \sum_{(b,y)\in T_{\ell'}} \zeta_{\ell'}(b,y) [b|M_y].
\end{equation}
\end{definition}
Note that any operational equivalence of the form $\sum_{x}\alpha_x P_x \simeq \sum_{x}\beta_x P_x$ or $\sum_{b,y} \alpha_{b|y}[b|M_y]\simeq \sum_{b,y}\beta_{b|y}[b|M_y]$ can be specified in this way.%
\footnote{In particular, one can obtain such a bipartition (e.g., for preparations) by taking $S_1=\{x : \alpha_x - \beta_x \ge 0\}$, $S_2=\{x : \alpha_x - \beta_x < 0\}$, $\xi_1(x) = (\alpha_x - \beta_x)/(\sum_{x\in S_1} (\alpha_x - \beta_x))$, and $\xi_2(x) = (\beta_x - \alpha_x)/(\sum_{x\in S_2} (\beta_x - \alpha_x))$.}
The formulation of Definition~\ref{defn:OE} allows us to consider natural partitions into $K\ge 2$ or $L\ge 2$ sets, which will prove useful later.
For example, if one had three operationally equivalent preparations of the form $\frac{1}{2}(P_1+P_2) \simeq \frac{1}{2}(P_3+P_4) \simeq \frac{1}{2}(P_5+P_6)$, we can express this as a single operational equivalence rather than several pairwise equivalences.
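To illustrate the bookkeeping, the following is a minimal sketch (a hypothetical helper, not part of our released code~\cite{codeGit}) of how a pairwise preparation equivalence $\sum_x \alpha_x P_x \simeq \sum_x \beta_x P_x$ can be recast in the partition form of Definition~\ref{defn:OE}, following the construction of the preceding footnote:
\begin{verbatim}
import numpy as np

def bipartition(alpha, beta):
    # recast sum_x alpha_x P_x ~ sum_x beta_x P_x as ((S1, xi1), (S2, xi2));
    # assumes alpha != beta (otherwise the equivalence is trivial)
    diff = np.asarray(alpha, float) - np.asarray(beta, float)
    S1, S2 = np.where(diff >= 0)[0], np.where(diff < 0)[0]
    w = diff[S1].sum()   # equals -diff[S2].sum(), as both weights sum to 1
    return (S1, diff[S1]/w), (S2, -diff[S2]/w)

# example: 1/2(P_1+P_2) ~ 1/2(P_3+P_4), preparations indexed from 0
print(bipartition([0.5, 0.5, 0, 0], [0, 0, 0.5, 0.5]))
\end{verbatim}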
\subsection{Contextuality scenarios and noncontextuality}
With these basic notions, we can now more precisely define the kind of scenario in which we will study noncontextuality and its precise definition in such settings.
In particular, we consider prepare-and-measure scenarios of the form described above in which Alice's preparations and Bob's measurements must obey fixed sets of operational equivalences.
\begin{definition}
A \emph{contextuality scenario} is a tuple $(n_X,n_Y,n_B,\{\mathcal{E}^{(r)}_\mathcal{P}\}_{r=1}^R,\{\mathcal{E}^{(q)}_\mathcal{M}\}_{q=1}^Q)$, where $\mathcal{E}^{(r)}_\mathcal{P}$ and $\mathcal{E}^{(q)}_\mathcal{M}$ are preparation and measurement operational equivalences, respectively.
\end{definition}
Note that the normalisation of the probability distribution $p(b|x,y)$ implies that $\sum_b [b|M_y] = \sum_b [b|M_{y'}]$ for all $y,y'$, and hence every ontological model must satisfy the corresponding operational equivalence. We will generally omit this trivial operational equivalence from the specification of a contextuality scenario.
\medskip
The notion of (operational) noncontextuality formalises the idea that operationally identical procedures must have identical representations in the underlying ontological model~\cite{Spekkens}.
\begin{definition}\label{defrealist}
An ontological model is said to be:
\textbf{(a)} \emph{Preparation noncontextual} if it assigns the same epistemic state to operationally equivalent preparation procedures; i.e., if the preparations $P_x$ satisfy an operational equivalence $\mathcal{E}_\mathcal{P}=\{(S_k,\{\xi_k(x)\}_{x\in S_k})\}_{k=1}^K$ then, for all $k,k'\in [K]$
\begin{equation}
\forall \lambda: \, \sum_{x\in S_k}\xi_k(x) p(\lambda|x) = \sum_{x\in S_{k'}}\xi_{k'}(x) p(\lambda|x).
\end{equation}
\textbf{(b)} \emph{Measurement noncontextual} if it endows operationally equivalent measurement procedures with the same response function; i.e., if the measurement effects $[b|M_y]$ satisfy an operational equivalence $\mathcal{E}_\mathcal{M}=\{(T_\ell,\{\zeta_\ell(b,y)\}_{(b,y)\in T_\ell})\}_{\ell=1}^L$ then, for all $\ell,\ell'\in[L]$
\begin{equation}
\forall \lambda: \, \sum_{(b,y)\in T_\ell}\!\!\zeta_\ell(b,y) p(b|y,\lambda) =\!\! \sum_{(b,y)\in T_{\ell'}}\!\!\zeta_{\ell'}(b,y) p(b|y,\lambda).
\end{equation}
Finally, if an ontological model is both preparation and measurement noncontextual, we simply say that it is \emph{noncontextual}.
\end{definition}
The assumption of noncontextuality imposes nontrivial constraints on the probability distributions that can arise in an ontological model~\cite{Spekkens}.
\begin{definition}
Given a contextuality scenario, the correlations $p(b|x,y)$ are said to be (preparation/measurement) noncontextual if there exists a (preparation/measurement) noncontextual ontological model satisfying the operational equivalences of the scenario and reproducing the desired correlations. If no such model exists, we say that the correlations are \emph{(preparation/measurement) contextual}.
\end{definition}
It is known that the set of noncontextual correlations (and, likewise, the sets of preparation or measurement noncontextual correlations) forms, for a given contextuality scenario, a convex polytope delimited by \emph{noncontextuality inequalities}~\cite{NCpolytope}.
\subsection{Quantum models}
Here, we are particularly interested in what correlations can be obtained in contextuality scenarios within quantum mechanics.
In quantum theory, a preparation $P$ corresponds to a density matrix $\rho$ (i.e., satisfying $\rho \succeq 0$ and $\Tr(\rho)=1$), and two preparations $\rho$ and $\rho'$ are operationally equivalent if and only if $\rho=\rho'$.
Preparation operational equivalences thus correspond to different decompositions of the same density matrix.
Likewise, a measurement corresponds to a positive operator-valued measure (POVM) $\{E_b\}$ (defined by $E_b\succeq 0$ and $\sum_b E_b = \mathds{1}$), where the $E_b$ are the measurement effects.
Measurement effects $E_b$ and $E'_{b'}$ are thus operationally equivalent if and only if $E_b = E'_{b'}$.
We can thus specify precisely what a quantum model for a contextuality scenario corresponds to.
\begin{definition}
\label{defn:quantum_model}
A \emph{quantum model} for a contextuality scenario $(n_X,n_Y,n_B,\{\mathcal{E}^{(r)}_\mathcal{P}\}_{r=1}^R,\{\mathcal{E}^{(q)}_\mathcal{M}\}_{q=1}^Q)$ is given by two sets of Hermitian positive semidefinite operators $\{\rho_x\}_{x=1}^{n_X}$ and $\{\{E_{b|y}\}_{b=1}^{n_B}\}_{y=1}^{n_Y}$ which satisfy
\begin{align}
\forall x:&\, \Tr(\rho_x)=1\label{eq:quantum_model_states}\\
\forall y:&\, \sum_{b=1}^{n_B}E_{b|y}=\mathds{1}\label{eq:quantum_model_povm}
\end{align}
as well as the operational equivalences
\begin{align}
\forall r,k:&\, \sum_{x\in S^{(r)}_k} \xi^{(r)}_k(x)\rho_x = \sigma_r \label{eq:model_OE_states}\\
\forall q,\ell:&\, \sum_{(b,y)\in T^{(q)}_\ell} \zeta^{(q)}_\ell(b,y)E_{b|y} = \tau_q,\label{eq:model_OE_measurements}
\end{align}
for some operators $\sigma_r$ and $\tau_q$ independent of $k$ and $\ell$.
If a quantum model consists only of pure states (i.e., if $\rho_x^2=\rho_x$ for all $x$) or projective measurements (i.e., if $E_{b|y}^2 = E_{b|y}$ and $E_{b|y}E_{b'|y}=0$ for all $b,b',y$), then we will call the model \emph{pure} or \emph{projective}, respectively.
\end{definition}
It turns out that quantum theory is conceptually different from standard realist models, in the sense that there exist quantum models for contextuality scenarios---which thus respect the specified operational equivalences---that nevertheless give rise to contextual correlations~\cite{Spekkens}.
Quantum theory is thus said to be contextual.
Interestingly, quantum models cannot provide any advantage over noncontextual ontological models
in the absence of nontrivial preparation operational equivalences \cite{spekkens14}. In this sense, quantum theory is measurement noncontextual. Conversely, however, quantum contextuality can be witnessed in contextuality scenarios involving only operational equivalences between the preparations (along with the trivial measurement operational equivalence arising from Eq.~\eqref{eq:quantum_model_povm}, which is necessarily satisfied by any quantum model for any contextuality scenario). For this reason there has been particular interest in preparation noncontextual inequalities, although interesting contextuality scenarios involving both preparation and measurement operational equivalences have been proposed (see e.g.~\cite{NCpolytope, Mazurek2016, Kunjwal2015, ArminRoope}).
\section{A hierarchy of SDP relaxations}
In recent years, hierarchies of semidefinite programming (SDP) relaxations of the set of quantum correlations have become an invaluable tool in the study of quantum correlations~\cite{NPA1, NPA2}.
Such a hierarchy capable of bounding contextual correlations in contextuality scenarios, where operational equivalences must be taken into account, has thus far, however, proved elusive, and it is this problem we address here.
The fundamental question we are interested in is the following: given a contextuality scenario $(n_X,n_Y,n_B,\{\mathcal{E}^{(r)}_\mathcal{P}\},\{\mathcal{E}^{(q)}_\mathcal{M}\})$ and a probability distribution $p(b|x,y)$, does there exist a quantum model for the scenario reproducing the observed correlations, i.e., satisfying $p(b|x,y) = \Tr(\rho_x E_{b|y})$?
Note that, in contrast to many scenarios in quantum information, such as Bell nonlocality, it is not \emph{a priori} clear that, in the search for such a quantum model, one can restrict oneself to pure states and projective measurements despite the fact that no assumption on the Hilbert space dimension is made.
Indeed, while one can always purify a mixed state, or perform a Naimark dilation of the POVMs, such extensions may no longer satisfy the operational equivalences of the contextuality scenario.
Although SDP hierarchies have previously been formulated for prepare-and-measure scenarios \cite{NV, NV2, CharlesHierarchy, Info2}, the main challenge for contextuality scenarios is to represent the constraints arising from the operational equivalences.
Here, we adopt an approach motivated by a recent hierarchy~\cite{Info2} bounding informationally restricted correlations~\cite{Info1} and the fact that operational equivalences can be interpreted as restrictions on the information obtainable about equivalent operational procedures (see also Sec.~\ref{sec:zero-inf-games}).
\subsection{Necessary conditions for a quantum model}
\label{sec:hierarchyConds}
Similarly to other related SDP hierarchies, our approach to formulate increasingly strict necessary conditions for the existence of a quantum model is based on reformulating the problem in terms of the underlying \emph{moment matrix} of a quantum model.
To this end, let us define the set of operator variables
\begin{equation}\label{eq:operatorSet}
J=\{\mathds{1}\}\cup \{\rho_x\}_{x}\cup \{E_{b|y}\}_{b,y} \cup \{\sigma_r,\tau_\ell\}_{r,\ell},
\end{equation}
where $\sigma_r,\tau_\ell$ (with $r\in[R]$, $\ell\in[L]$) are variables corresponding to the operators defined in Eqs.~\eqref{eq:model_OE_states} and~\eqref{eq:model_OE_measurements} and will be used to enforce robustly the operational equivalences.
Consider a list $\mathcal{S}=(\mathcal{S}_1,\dots,\mathcal{S}_{|\mathcal{S}|})$ of monomials (of degree at least one) of variables in $J$. We say that $\mathcal{S}$ represents the $k$th degree of the hierarchy if it contains all monomials over $J$ of degree at most $k$.%
\footnote{In practice, however, it is often preferable to use an intermediate hierarchy level, i.e.~a monomial list that has largest degree $k$ but does not contain all terms of degree at most $k$. We will later exemplify this.}
The choice of $\mathcal{S}$ will lead to different semidefinite relaxations, but it should at least include all elements of $J$.
Given a monomial list $\mathcal{S}$, the existence of a quantum model implies the existence of a \emph{moment matrix} $\Gamma$ whose elements, labelled by the monomials in $u,v\in\mathcal{S}$, are
\begin{equation}\label{eq:moment_matrix}
\Gamma_{u,v} = \Tr(u^\dagger v)
\end{equation}
and satisfy a number of properties that form our necessary conditions.
Some of these constraints are common with those found in similar hierarchies (points (I)--(III) below), while others capture important aspects of quantum models for contextuality scenarios (points (IV)--(V)) and will be expressed through localising matrices~\cite{NPA3}.
We outline these constraints below.
\medskip
\noindent\textbf{(I) Hermitian positive semidefiniteness.}
By construction the moment matrix is Hermitian and it is easily seen to be positive semidefinite~\cite{NPA1}, i.e.,
\begin{equation}\label{eq:constr_PSD}
\Gamma = \Gamma^\dagger \succeq 0.
\end{equation}
\noindent\textbf{(II) Consistency with $p$.}
Since the quantum model must reproduce the correlations $p(b|x,y)$, $\Gamma$ must satisfy
\begin{equation}\label{eq:constr_P}
\forall x,y,b:\, \Gamma_{\rho_x,E_{b|y}} = p(b|x,y).
\end{equation}
\medskip
\noindent\textbf{(III) Validity of states and measurements.}
Since any quantum model must satisfy the constraints of Eqs.~\eqref{eq:quantum_model_states} and~\eqref{eq:quantum_model_povm}, $\Gamma$ must satisfy
\begin{equation}
\forall x:\, \Gamma_{\mathds{1},\rho_x} = 1,
\end{equation}
as well as linear identities of the form
\begin{equation}\label{eq:Gamma_lin_constraints}
\sum_{u,v}c_{u,v}\Gamma_{u,v} = 0 \quad \text{if}\quad \sum_{u,v} c_{u,v} \Tr(u^\dagger v) = 0,
\end{equation}
where the sum is over all monomials $u,v$ in $\mathcal{S}$.
These constraints are, in particular, those satisfied by any quantum model that follow from the validity of the states and measurements making up the model and the cyclicity of the trace.
For example, Eq.~\eqref{eq:Gamma_lin_constraints} includes constraints of the form $\sum_{b}\Gamma_{E_{b|y},E_{b'|y'}}=\Gamma_{\mathds{1},E_{b'|y'}}$, as well as constraints such as $\Gamma_{E_{b|y},\rho_x E_{b'|y'}} = \Gamma_{\rho_x, E_{b'|y'}E_{b|y}}$ which follows from the fact that $\Tr(E_{b|y}\rho_x E_{b'|y'}) = \Tr(\rho_x E_{b'|y'}E_{b|y})$.
It thus includes the constraints implied by the trivial operational equivalence following from Eq.~\eqref{eq:quantum_model_povm} that are satisfied by any quantum model, thereby justifying the fact that we generally do not explicitly include this operational equivalence relation when specifying contextuality scenarios.
Note that if we were to assume the quantum model is either pure or projective (so that, respectively, either $\rho_x^2 = \rho_x$, or $E_{b|y}^2 = E_{b|y}$ and $E_{b|y}E_{b'|y}=0$), then this implies further constraints of the form~\eqref{eq:Gamma_lin_constraints}.
In particular, one can always make this assumption if there are no nontrivial operational equivalences of the corresponding type, allowing the SDP hierarchy we formulate to be simplified, but can also be considered as an additional assumption of interest (see Sec.~\ref{sec:mixedstates}).
\medskip
\noindent\textbf{(IV) Operational equivalences.}
A quantum model must satisfy the operational equivalences of Eqs.~\eqref{eq:model_OE_states} and~\eqref{eq:model_OE_measurements}.
While this implies that the traces of each side of those equations must be equal---which in turn imposes the corresponding linear identities on the moment matrix---this alone does not fully capture the constraints implied by the operational equivalences, and notably is not enough to provide a good hierarchy. To properly enforce these constraints, we draw inspiration from the hierarchy of informationally restricted quantum correlations \cite{Info2} and make use of \emph{localising matrices}. These are additional matrices of moments whose elements (or a subset thereof) are linear combinations of elements of $\Gamma$, and which themselves must be positive semidefinite~\cite{NPA3}.
We thus define, for all $r\in[R]$, $k\in[K^{(r)}]$ and all $q\in[Q]$, $\ell\in[L^{(q)}]$, the localising matrices $\tilde{\Lambda}^{(r,k)}$ and $\hat{\Lambda}^{(q,\ell)}$ with elements
\begin{align}
\tilde{\Lambda}^{(r,k)}_{u,v} &= \Tr\left(u^\dagger\left(\sigma_r-\sum_{x\in S_k^{(r)}}\xi_k^{(r)}(x)\rho_x\right)v\right) \\
\hat{\Lambda}^{(q,\ell)}_{u,v} &= \Tr\left(u^\dagger\left(\tau_q-\!\!\sum_{(b,y)\in T_\ell^{(q)}}\!\!\zeta_\ell^{(q)}(b,y)E_{b|y}\right)v\right),
\end{align}
which are labelled now by monomials from a monomial list $\mathcal{L}$, in general different from $\mathcal{S}$ (and which, in principle, could differ for each localising matrix).
Ideally, $\mathcal{L}$ should be chosen so that the elements of the localising matrices are linear combinations of elements of the moment matrix $\Gamma$.
For a quantum model satisfying exactly the operational equivalences $\mathcal{E}^{(r)}_\mathcal{P}$ and $\mathcal{E}^{(q)}_\mathcal{M}$, with $\sigma_r$ and $\tau_q$ defined as in Eqs.~\eqref{eq:model_OE_states} and~\eqref{eq:model_OE_measurements}, one has $\tilde{\Lambda}^{(r,k)} = \hat{\Lambda}^{(q,\ell)} = 0$, $\Tr(\sigma_r)=1$ and $\Tr(\tau_q)=\sum_{(b,y)\in T_\ell^{(q)}}\zeta_\ell^{(q)}(b,y)\Tr(E_{b|y})$.
Such complicated matrix equality constraints (which one could in principle enforce without defining the localising matrices), however, tend to lead to poor results in practice due to the numerical instability of SDP solvers.
Instead, we impose the more robust constraints that $\tilde{\Lambda}^{(r,k)}, \hat{\Lambda}^{(q,\ell)}\succeq 0$ (along with the equality constraints on the traces of $\sigma_r,\tau_q$, which serve to ``normalise'' the localising matrices), which follow from the existence, for any quantum model, of Hermitian operators $\sigma_r,\tau_q$ satisfying $\sigma_r\succeq\sum_{x\in S_k^{(r)}}\xi_k^{(r)}(x)\rho_x$ and $\tau_q\succeq\sum_{(b,y)\in T_\ell^{(q)}}\zeta_\ell^{(q)}(b,y)E_{b|y}$.
We thus have, for all $r,k,q,\ell$
\begin{align}
& \tilde{\Lambda}^{(r,k)} \succeq 0, \quad \hat{\Lambda}^{(q,\ell)} \succeq 0\\
& \Gamma_{\mathds{1},\sigma_r} = 1, \quad \Gamma_{\mathds{1},\tau_q} =\!\!\sum_{(b,y)\in T_\ell^{(q)}}\!\!\zeta_\ell^{(q)}(b,y)\Gamma_{\mathds{1},E_{b|y}}.
\end{align}
Moreover, whenever the monomials $u$, $\sigma_r u$ and $\rho_x u$ are in $\mathcal{S}$ we have
\begin{equation}\label{eq:gamma_constr_OE_P}
\tilde\Lambda^{(r,k)}_{u,v} = \Gamma_{u,\sigma_r v} - \sum_{x\in S^{(r)}_k} \xi^{(r)}_k(x) \Gamma_{u,\rho_x v},
\end{equation}
and, when $u$, $\tau_q u$ and $E_{b|y} u$ are similarly in $\mathcal{S}$,
\begin{equation}\label{eq:gamma_constr_OE_M}
\hat\Lambda^{(q,\ell)}_{u,v} = \Gamma_{u,\tau_q v} - \!\!\sum_{(b,y)\in T^{(q)}_\ell} \zeta^{(q)}_\ell(b,y) \Gamma_{u,E_{b|y} v},
\end{equation}
thereby relating the localising matrices to the moment matrix $\Gamma$.
We note that the operators $\sigma_r$ and $\tau_q$, and the localising matrices expressing the deviation of their moments from those of the operational equivalences, hence play the role of slack variables to robustly enforce the operational equivalences.
As we will see in Sec.~\ref{sec:simultingContextuality}, the formulation we adopt here will also allow a natural generalisation allowing us to study the simulation cost of preparation contextuality, where the trace of $\sigma_r$ has a natural interpretation, further motivating our choice to present the constraints in the form given here.
\medskip
\noindent\textbf{(V) Positivity of states and measurements.}
In most SDP hierarchies used in quantum information, one can assume without loss of generality that the states and measurements in question are projective (see e.g.~\cite{NPA1, NPA2}); since all projective operators are positive semidefinite, it is not necessary in such cases to consider explicitly the constraints the positive semidefiniteness of the operators in a quantum model imposes on a moment matrix.
As already mentioned, however, for contextuality scenarios this is not \emph{a priori} the case, and to capture the constraints implied by the positive semidefiniteness of states and measurements (i.e., $\rho_x, E_{b|y} \succeq 0$) we again exploit localising matrices.
Let us thus introduce the localising matrices (for all $x,y,b$) $\tilde{\Upsilon}^x$ and $\hat{\Upsilon}^{(b,y)}$ with elements
\begin{align}
\tilde{\Upsilon}^x_{u,v} &= \Tr(u^\dagger \rho_x v)\\
\hat{\Upsilon}^{(b,y)}_{u,v} &= \Tr(u^\dagger E_{b|y} v),
\end{align}
which are labelled by monomials from a monomial list $\mathcal{O}$, in general different from $\mathcal{S}$ (and which, as for $\mathcal{L}$, in principle could differ for each $x,y,b$).
Ideally, $\mathcal{O}$ should be chosen so that the elements of the localising matrices are also elements of the moment matrix $\Gamma$.
It is easily seen that the positive semidefiniteness of $\rho_x$ and $E_{b|y}$ implies
\begin{align}
\forall x:&\quad \tilde{\Upsilon}^x \succeq 0\\
\forall y,b:&\quad \hat{\Upsilon}^{(b,y)} \succeq 0,
\end{align}
which in turn (for well chosen $\mathcal{O}$) constrains $\Gamma$.
Moreover, for all $u,v$ in $\mathcal{O}$, whenever the monomials $u,\rho_x v$ or, respectively, $u,E_{b|y}v$ are in $\mathcal{S}$ we have
\begin{align}
\tilde{\Upsilon}^x_{u,v} = \Gamma_{u,\rho_x v}, \quad \hat{\Upsilon}^{(b,y)}_{u,v} = \Gamma_{u,E_{b|y} v},
\end{align}
thereby relating the localising matrices to the main moment matrix.
\medskip
For given choices of the moment lists $\mathcal{S}$, $\mathcal{L}$ and $\mathcal{O}$, the constraints presented above thus provide necessary conditions for a given correlation to have a quantum realisation in the contextuality scenario.
Note moreover that, by standard arguments~\cite{NPA1}, one can actually assume the moment matrix (and localising matrices) are real since the above constraints only involve real coefficients.
These conditions are all semidefinite constraints, which leads us to the following proposition summarising our hierarchy of SDP relaxations.
\begin{proposition}
\label{prop:hierarchyNC}
Let $\mathcal{S}$, $\mathcal{L}$, $\mathcal{O}$ be fixed lists of monomials from $J$. A necessary condition for the existence of a quantum model in a given contextuality scenario reproducing the correlations $\{p(b|x,y)\}_{b,x,y}$ is the feasibility of the following SDP:
\begin{subequations}
\begin{align}\label{sdp}
\textup{find} \quad & \Gamma, \{\tilde{\Lambda}^{(r,k)}\}_{r,k}, \{\hat{\Lambda}^{(q,\ell)}\}_{q,\ell}, \{\tilde{\Upsilon}^{x}\}_{x}, \{\hat{\Upsilon}^{(b,y)}\}_{b,y} \notag \\
\textup{s.t.} \quad & \Gamma \succeq 0,\quad \tilde{\Lambda}^{(r,k)} \succeq 0, \quad \hat{\Lambda}^{(q,\ell)} \succeq 0 \notag \\
& \tilde{\Upsilon}^{x} \succeq 0, \quad \hat{\Upsilon}^{(b,y)} \succeq 0\\
& \Gamma_{\rho_x,E_{b|y}}=p(b|x,y)\\
& \Gamma_{\mathds{1},\rho_x} = 1 \\
& \sum_{u,v}c_{u,v}\Gamma_{u,v} = 0 \quad \textup{if}\quad \sum_{u,v} c_{u,v} \Tr(u^\dagger v) = 0 \label{eq:sdp_lin_constr}\\
& \Gamma_{\mathds{1},\sigma_r} = 1, \quad \Gamma_{\mathds{1},\tau_q} =\!\!\sum_{(b,y)\in T_\ell^{(q)}}\!\!\zeta_\ell^{(q)}(b,y)\Gamma_{\mathds{1},E_{b|y}}\label{eq:sdp_constr_OEP} \\
& \tilde\Lambda^{(r,k)}_{u,v} = \Gamma_{u,\sigma_r v} - \sum_{x\in S^{(r)}_k} \xi^{(r)}_k(x) \Gamma_{u,\rho_x v} \label{eq:sdp_constr_OEM}\\
& \hat\Lambda^{(q,\ell)}_{u,v} = \Gamma_{u,\tau_q v} - \sum_{(b,y)\in T^{(q)}_\ell} \zeta^{(q)}_\ell(b,y) \Gamma_{u,E_{b|y} v} \\
& \tilde{\Upsilon}^x_{u,v} = \Gamma_{u,\rho_x v}, \quad \hat{\Upsilon}^{(b,y)}_{u,v} = \Gamma_{u,E_{b|y} v},\label{eq:sdp_positivity_constr}
\end{align}
\end{subequations}
where the above operators are all symmetric real matrices.
\end{proposition}
By taking increasingly long monomial lists $\mathcal{S}$, $\mathcal{L}$ and $\mathcal{O}$, one thus obtains increasingly strong necessary conditions for a quantum realisation, which can be efficiently checked by standard numerical solvers for SDPs.
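To make the structure of these SDPs concrete, the following is a minimal Python/CVXPY sketch of a low hierarchy level for the simplest preparation contextuality scenario ($n_X=4$, $n_Y=n_B=2$, with the single operational equivalence $\frac{1}{2}(P_1+P_2)\simeq\frac{1}{2}(P_3+P_4)$); it is illustrative only and not our full implementation~\cite{codeGit}. Only degree-one monomials are used, the measurements are taken projective (admissible here, as there are no nontrivial measurement operational equivalences), the positivity localising matrices are trivial at this level and omitted, and the operational equivalence is imposed through the exact moment equalities discussed above rather than the more robust localising-matrix form; the result is a weaker, but still valid, necessary condition. The tested correlations are generated from an explicit qubit model, so the SDP is feasible.
\begin{verbatim}
import numpy as np
import cvxpy as cp

# correlations to be tested, generated from an explicit qubit model
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)
bloch = [(1, 1), (-1, -1), (1, -1), (-1, 1)]        # rho_1..rho_4
rho = [0.5*(I2 + (a*sz + b*sx)/np.sqrt(2)) for a, b in bloch]
E1 = [0.5*(I2 + sz), 0.5*(I2 + sx)]                 # effects for b = 1
p1 = [[np.trace(r @ e).real for e in E1] for r in rho]

# moment matrix over S = (1, rho_1..rho_4, E_{1|1}, E_{1|2}, sigma)
labels = ['1', 'r1', 'r2', 'r3', 'r4', 'E1', 'E2', 's']
ix = {m: i for i, m in enumerate(labels)}
G = cp.Variable((len(labels), len(labels)), symmetric=True)
g = lambda u, v: G[ix[u], ix[v]]

cons = [G >> 0]
cons += [g('1', 'r%d' % x) == 1 for x in (1, 2, 3, 4)]  # Tr(rho_x) = 1
cons += [g('1', 's') == 1]                              # Tr(sigma) = 1
cons += [g('E%d' % y, 'E%d' % y) == g('1', 'E%d' % y)   # projectivity:
         for y in (1, 2)]                               # Tr(E^2) = Tr(E)
# operational equivalence as exact moment equalities:
# sigma = (rho_1 + rho_2)/2 = (rho_3 + rho_4)/2
for u in labels:
    cons += [2*g(u, 's') == g(u, 'r1') + g(u, 'r2'),
             2*g(u, 's') == g(u, 'r3') + g(u, 'r4')]
# consistency with the data (b = 2 follows from E_{2|y} = 1 - E_{1|y})
for x in range(4):
    for y in range(2):
        cons += [g('r%d' % (x + 1), 'E%d' % (y + 1)) == p1[x][y]]

prob = cp.Problem(cp.Minimize(0), cons)
prob.solve()
print(prob.status)   # 'optimal': no contradiction found at this level
\end{verbatim}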
While the above hierarchy applies to arbitrary contextuality scenarios, in many scenarios or situations of interest, it can be somewhat simplified.
In particular, if one wishes to determine whether a given correlation is compatible with a pure and/or projective quantum model, the extra constraints imposed on the states and measurement effects (cf.\ Definition \ref{defn:quantum_model}) correspond to further linear constraints in Eq.~\eqref{eq:sdp_lin_constr}, meaning that the corresponding localising matrices $\tilde\Upsilon^x$ and/or $\hat\Upsilon^{(b,y)}$ (and subsequent constraints in Eq.~\eqref{eq:sdp_positivity_constr}) are not required.
Similarly, if there are either no preparation or no measurement operational equivalences present in the problem (i.e., if $R=0$ or $Q=0$) then the corresponding localising matrices $\tilde\Lambda^{(r,k)}$ or $\hat\Lambda^{(q,\ell)}$ (and subsequent constraints in Eqs.~\eqref{eq:sdp_constr_OEP} and~\eqref{eq:sdp_constr_OEM}) are also not required.
The latter case is particularly relevant in many (preparation) contextuality scenarios of interest, including the examples we consider in the following section.
To illustrate this, in Appendix~\ref{app:sdp_PNC} we show how the SDP simplifies for the case of preparation noncontextuality, where only nontrivial preparation operational equivalences are considered.
\medskip
Although the above hierarchy solves a feasibility problem, asking whether a distribution $p(b|x,y)$ is compatible with a quantum model for the contextuality scenario, in practice one is often interested in maximising a linear functional of the probability distribution over all possible quantum models---i.e., a \emph{noncontextuality inequality}---perhaps subject to some further constraints on the distribution.
It is easily seen that, following standard techniques, the hierarchy of necessary conditions we have presented also allows one to bound such optimisation problems by instead maximising the corresponding functional over all feasible solutions to the SDP of Proposition~\ref{prop:hierarchyNC}.
\medskip
As we will see in the following section, the hierarchy of Proposition~\ref{prop:hierarchyNC} allows us to readily obtain tight bounds on quantum contextual correlations in many scenarios of interest.
However, in some cases involving both nontrivial preparation and measurement operational equivalences and no assumptions of pure states or projective measurements, it performs relatively poorly in practice.
This appears to stem from the fact that, in such cases, the probabilities $p(b|x,y)$ do not appear on the diagonal of the moment matrix or any of the localising matrices.
In Appendix~\ref{app:sqrtHierarchy} we show how these difficulties can be overcome by presenting a modified version of our hierarchy, obtained by taking the operators $\{\sqrt{\rho_x}\}_x$ and/or $\{\sqrt{\smash[b]{E_{b|y}}}\}_{b,y}$ in the operator set $J$ (cf.\ Eq.~\eqref{eq:operatorSet}) instead of $\{\rho_x\}_x$ and $\{E_{b|y}\}_{b,y}$, an approach which we believe may be of independent technical interest.
\section{Applications of the SDP hierarchy}
We implemented a version of this hierarchy (and the variant described in Appendix~\ref{app:sqrtHierarchy}) in MATLAB, exploiting the SDP interface YALMIP~\cite{yalmip}, and our code is freely available~\cite{codeGit}.
Our implementation can handle arbitrary contextuality scenarios, restrictions to pure or projective quantum models or to classical (commuting) models, and solve either the feasibility SDP of Proposition~\ref{prop:hierarchyNC} or maximise a linear functional of the correlations $p(b|x,y)$ subject to linear constraints on the probabilities.
In solving large SDP problems that would otherwise be numerically intractable, it can make use of \mbox{RepLAB}~\cite{replab,replabGithub} (a recently developed tool for manipulating finite groups with an emphasis on SDP applications) to exploit symmetries in noncontextuality inequalities, a capability we exploit in obtaining some of the results presented below.
\subsection{Quantum violations of established preparation noncontextuality inequalities}\label{sec:qbounds}
To illustrate the usefulness of the hierarchy described in Proposition~\ref{prop:hierarchyNC}, we first exploit it to derive tight bounds on the maximal quantum violation of three preparation noncontextuality inequalities introduced in previous literature.
In Appendix~\ref{AppHierarchyExamples} we detail the analysis of two examples based on the inequalities derived in Ref.~\cite{Ambainis} and the inequalities experimentally explored in Ref.~\cite{Hameedi}.
Here, we focus on the noncontextuality inequalities for state discrimination presented in Ref.~\cite{Schmid}.
To reveal a contextual advantage in state discrimination, Ref.~\cite{Schmid} considers a scenario with $x\in[4]$, $y\in[3]$ and $b\in[2]$ and attempts to discriminate the preparations $P_1$ and $P_2$, while $P_3$ and $P_4$ are symmetric extensions that ensure the operational equivalence $\frac{1}{2}P_1+\frac{1}{2}P_3\simeq\frac{1}{2}P_2+\frac{1}{2}P_4$.
The first two measurements ($y=1,2$) correspond to distinguishing preparations $P_1$ and $P_3$, and $P_2$ and $P_4$, respectively (in the noiseless case, these should be perfectly discriminable); while the third ($y=3$) corresponds to the state discrimination task, i.e., discriminating $P_1$ and $P_2$.
There are three parameters of interest: the probability of a correct discrimination, $s$; the probability of confusing the two states, $c$; and the noise parameter, $\epsilon$.
Under the symmetry ansatz considered, the observed statistics are thus required to satisfy
\begin{align}\nonumber
&s=p(1|1,3)=p(2|2,3)=p(2|3,3)=p(1|4,3)\\\notag
& c=p(1|2,1)=p(1|1,2)=p(2|4,1)=p(2|3,2)\\
& 1-\epsilon=p(1|2,2)=p(1|1,1)=p(2|4,2)=p(2|3,1).
\end{align}
The authors show that, for $\epsilon\leq c\leq 1-\epsilon$, the following noncontextuality inequality holds:
\begin{equation}\label{ncdiscrimination}
s\leq 1 - \frac{c-\epsilon}{2}.
\end{equation}
What is the maximal quantum advantage in the task?
Ref.~\cite{Schmid} presented a specific family of quantum models that achieve
\begin{equation}\label{conj}
s=\frac{1}{2}\left( 1+\sqrt{1-\epsilon+2\sqrt{\epsilon(1-\epsilon)c(1-c)}+c(2\epsilon-1)}\right),
\end{equation}
which violates the bound \eqref{ncdiscrimination}, and conjectured it to be optimal for qubit systems.
The semidefinite programming hierarchy presented in the previous section allows us to place upper bounds on $s$ for given values of $(c,\epsilon)$ by maximising $s$ under the above constraints.
Using a moment matrix of size $42$ and localising matrices of size $7$,%
\footnote{The precise lists of moments used in this and all subsequent examples can be found along with our implementation of the SDP hierarchy, where the code generating these results is available~\cite{codeGit}.
In all these examples we take the monomial lists $\mathcal{L}$ and $\mathcal{O}$ for the localising matrices to be the same (simply because this was sufficient to obtain the presented results), although one could indeed take these to be different if desired.} we systematically performed this maximisation with a standard numerical SDP solver~\cite{mosek} for different values of $(c,\epsilon)$ by dividing the space of valid such parameters (i.e., satisfying $\epsilon \le c \le 1-\epsilon$) into a grid with spacing of 0.01.
In every case we obtained an upper bound agreeing with the value in Eq.~\eqref{conj} to within $10^{-5}$, consistent with the precision of the SDP solver.
We thus find that Eq.~\eqref{conj} indeed gives the maximal quantum contextual advantage in state discrimination.
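As a quick analytic cross-check (our observation), note that in the noiseless limit $\epsilon=0$ the conjectured quantum value~\eqref{conj} can be compared directly with the noncontextual bound~\eqref{ncdiscrimination}: writing $t\coloneqq\sqrt{1-c}\in(0,1)$,
\begin{equation}
s\big|_{\epsilon=0}=\frac{1}{2}\left(1+\sqrt{1-c}\right)=\frac{1+t}{2}\;>\;\frac{1+t^2}{2}=1-\frac{c}{2}\,,
\end{equation}
so the quantum value strictly exceeds the noncontextual bound for all $0<c<1$, consistent with the violation found numerically.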
For the interested reader, in Appendix~\ref{app:tuto} we use this example to show more explicitly what form the constraints of the SDP hierarchy take and how they relate the moment matrix and localising matrices.
\subsection{Mixed states as resources for quantum contextuality}
\label{sec:mixedstates}
In many forms of nonclassicality, such as Bell nonlocality, steering and quantum dimension witnessing, the strongest quantum correlations are necessarily obtained with pure states.
In the former two, this stems from the fact that any mixed state can be purified in a larger Hilbert space.
In the latter, it follows from the possibility to realise a mixed state as a convex combination of pure states of the same dimension.
Interestingly, however, it is \emph{a priori} unclear whether mixed states should play a more fundamental role in quantum contextuality: both purifications of mixed states and post-selections on pure-state components of mixed states may break the operational equivalences between preparations in contextuality scenarios.
Here we show that this intuition turns out to be correct: preparation contextuality is indeed exceptional, as mixed states are needed to obtain some contextual quantum correlations.
To prove this, we consider the noncontextuality scenario of Hameedi-Tavakoli-Marques-Bourennane (HTMB) \cite{Hameedi}.
In this scenario, Alice receives two trits, $x\coloneqq x_1x_2\in\{0,1,2\}^2$, while Bob receives a bit $y\in[2]$ and produces a ternary outcome $b\in\{0,1,2\}$.
There are two operational equivalences involved, corresponding to Alice sending zero information about the value of the sums $x_1+x_2$ and $x_1+2x_2$ (modulo $3$), respectively.
Each of these corresponds to a partition of Alice's nine preparations into three sets.
Under these constraints, Alice and Bob evaluate a Random Access Code~\cite{TavakoliRAC}.
The HTMB inequality bounds the success probability of the task in a noncontextual model \cite{Hameedi}:
\begin{equation}\label{eq:HTMB}
\mathcal{A}_\text{HTMB}\coloneqq \frac{1}{18}\sum_{x,y}p(b=x_y|x,y)\leq \frac{2}{3}.
\end{equation}
We revisit this scenario and employ our semidefinite relaxations to determine a bound on the largest value of $\mathcal{A}_\text{HTMB}$ attainable in a quantum model in which all nine preparations are pure.
As described following Proposition~\ref{prop:hierarchyNC}, this scenario can easily be considered with our hierarchy by simply including the linear constraints following from $\rho_x^2 = \rho_x$ (for all $x$) in Eq.~\eqref{eq:sdp_lin_constr} and noting that the localising matrices $\tilde\Upsilon^x$ are no longer required.
Using a moment matrix of size 2172 and localising matrices of size 187, we find that $\mathcal{A}_\text{HTMB} \lesssim 0.667$ up to solver precision.
To make such a large SDP problem numerically tractable, we used \mbox{RepLAB}~\cite{replab,replabGithub} to make the moment matrix invariant under the symmetries of the random access code, thereby significantly reducing the number of variables in the SDP problem.
This gives us strong evidence (i.e., up to numerical precision) that pure states cannot violate the HTMB inequality~\eqref{eq:HTMB}, and we conjecture this to indeed be the case exactly.%
\footnote{Note that the large size of the moment matrices meant that the solver precision we were able to obtain is somewhat reduced compared to the other examples discussed in this paper. Our numerical result agrees with the noncontextual bound of $2/3$ to within $2\times 10^{-4}$, which is within an acceptable range given the error metrics returned by the solver.}
Importantly, however, mixed states are known to enable a violation of the inequality: six-dimensional quantum systems can achieve $\mathcal{A}_\text{HTMB}\approx 0.698$ \cite{Hameedi}.%
\footnote{We were similarly able to use our hierarchy to place an upper bound on the quantum violation of this inequality at $\mathcal{A}_\text{HTMB} \lesssim 0.704$ using a moment matrix of size 3295 and localising matrices of size 268 with the solver SCS~\cite{scs_code}. We note that obtaining this bound required using terms from the 4th level of the hierarchy. We leave it open as to what the tight quantum bound is.}
This shows that sufficiently strong contextual quantum correlations can require the use of mixed states.
\subsection{Quantum violation of contextuality inequalities involving nontrivial measurement operational equivalences}
\label{sec:measurementNC}
The examples discussed above focused on preparation contextuality scenarios, in which there are no non-trivial measurement operational equivalences. Nonetheless, quantum contextuality can also be observed in scenarios involving measurement operational equivalences (in addition to preparation operational equivalences), and we demonstrate the ability of our hierarchy to provide tight bounds in such scenarios by applying it to the noncontextuality inequalities derived in Ref.~\cite{NCpolytope}.
In Ref.~\cite{NCpolytope}, the authors consider a scenario with $x\in[6]$, $y\in [3]$ and $b\in[2]$ where the preparations satisfy the operational equivalence $\frac{1}{2}(P_1 + P_2) \simeq \frac{1}{2}(P_3 + P_4) \simeq \frac{1}{2}(P_5 + P_6)$ and the measurements satisfy the operational equivalence $\frac{1}{3}\sum_{y}[1|M_y] \simeq \frac{1}{3}\sum_{y}[2|M_y]$.
The authors completely characterised the polytope of noncontextual correlations in this contextuality scenario, finding the following 6 inequivalent (under symmetries), nontrivial noncontextuality ``facet'' inequalities (where we use the notation $p_{xy}\coloneqq p(1|x,y)$):
\begin{subequations}
\begin{align}
I_1 &= p_{11} + p_{32} + p_{53} \le 2.5,\\
I_2 &= p_{11} + p_{22} + p_{53} \le 2.5,\\
I_3 &= p_{11} - 2p_{22} + 2p_{32} - p_{41} - 2p_{51} + 2p_{53} \le 3, \label{eq:I3}\\
I_4 &= 2p_{11} - p_{22} + 2p_{32} \le 3, \label{eq:I4}\\
I_5 &= p_{11} + p_{22} + p_{32} - p_{51} + 2p_{53} \le 4,\\
I_6 &= p_{11} + 2p_{22} - p_{51} + 2p_{53} \le 4.
\end{align}
\end{subequations}
While it was shown in Ref.~\cite{Mazurek2016} that a quantum model can violate the first of these inequalities and obtain the logical maximum of $I_1=3$, the degree to which the other inequalities can be violated has not, to our knowledge, previously been studied. We note that this question is also addressed in the parallel work of Ref.~\cite{Anubhav}.
In this scenario, where we have both nontrivial preparation and measurement operational equivalences, we failed to obtain nontrivial bounds on these inequalities using the basic hierarchy described by Proposition~\ref{prop:hierarchyNC}.
Instead, we employed the variant of the hierarchy described in Appendix~\ref{app:sqrtHierarchy} which uses the principal square roots of the states $\rho_x$ and/or measurements $E_{b|y}$ in the operator list, but otherwise follows the same approach.
This hierarchy, which is a strict extension of the one described by Proposition~\ref{prop:hierarchyNC}, allowed us to
place strong bounds on all the above inequalities.
Indeed, using moment matrices of size 1191 and localising matrices of size 85 (and monomials involving square roots of measurement operators, but not of states; see Appendix~\ref{app:sqrtHierarchy}), we obtained the following quantum bounds:\footnote{Due to the lack of symmetry in some of the inequalities and relatively large moment matrix size, we used the memory-efficient solver SCS~\cite{scs_code} to obtain some of the results of Eq.~\eqref{eq:MNC_bounds}. This solver has the drawback of converging more slowly than more standard solvers~\cite{mosek}, and we thus obtain a numerical precision of the order of $10^{-3}\sim 10^{-4}$.}
\begin{equation}
\label{eq:MNC_bounds}
\begin{aligned}
I_1 \lesssim 3.000,\qquad & I_2 \lesssim 2.866,\quad & I_3 \lesssim 3.500, \\
I_4 \lesssim 3.366,\qquad & I_5 \lesssim 4.689,\quad & I_6 \lesssim 4.646.
\end{aligned}
\end{equation}
Using a see-saw optimisation approach, for all six inequalities we were able to obtain quantum strategies saturating the bounds from the hierarchy, showing that they are in fact tight up to the precision of the SDP solver.
Interestingly, we were moreover able to show that the maximum quantum violation of the third inequality~\eqref{eq:I3} cannot be obtained with projective measurements.
Indeed, by using the hierarchy of Proposition~\ref{prop:hierarchyNC} and imposing the constraints following from the projectivity of POVM elements (and using the same monomial lists as for the above results) we were able to show that $I_3\lesssim 3.464$ for projective quantum models.
Using a see-saw optimisation, we were able to obtain projective quantum models saturating this bound to numerical precision, thereby confirming its tightness and showing that non-projective measurements, just like mixed states, are resources for quantum contextuality.
\section{Simulating preparation contextuality}
\label{sec:simultingContextuality}
Quantum correlations are famously capable of going beyond those achievable in classical theories in numerous scenarios, as highlighted by the violation of Bell inequalities and, indeed, noncontextuality inequalities.
One can likewise consider correlations that are even stronger than those observed in nature, which we call ``post-quantum'' correlations.
Interest in post-quantum theories stems from the fact that they nonetheless respect physical principles such as no-signalling, and understanding which physical principles distinguish quantum from post-quantum correlations can lead to new insights into quantum theory itself~\cite{Popescu1994,Barrett2007,Pawlowski2009}.
An interesting strategy to study the correlations obtained by different physical theories is to ask what kind of resource, and how much of it, one should supplement a theory with to achieve stronger correlations.
This question has been extensively studied in the context of simulating Bell correlations with classical theory and additional resources.
Two such resources that can be used in that case are classical communication~\cite{Buhrman10,Toner} and measurement dependence~\cite{Hall11}.
Similarly, various resources have also been investigated in Kochen-Specker contextuality experiments with the goal of simulating quantum correlations within a classical theory~\cite{Kleinmann2011,Abramsky2017,Amaral}.
To our knowledge, however, nothing is known about what resources would be necessary to simulate operationally contextual correlations, and in particular the especially relevant resource of preparation contextuality.
In this section, we begin by casting preparation contextuality scenarios as information-theoretic games, and show how these allow us to formalise a notion of simulation cost, for both classical and quantum models.
The resource used is the preparation of states which deviate from the required operational equivalences.
This is a natural figure of merit as the defining feature of a model for noncontextual correlations within a given theory is that the underlying ontological model obeys the specified operational equivalences;
it is thus this condition that must be violated in some way if stronger correlations are to be simulated.
We leverage our hierarchy of semidefinite relaxations to quantify both the simulation of quantum contextual correlations using classical theory, and the simulation of post-quantum correlations using quantum theory.
\subsection{Zero-information games}
\label{sec:zero-inf-games}
To show how the cost of simulating preparation contextuality can be quantified in information theoretic terms, we begin by giving an alternative interpretation for preparation contextuality scenarios (i.e., contextuality scenarios involving only nontrivial operational equivalences between sets of preparations).
In particular, we will describe how preparation contextuality scenarios can be interpreted as games in which Alice is required to hide some knowledge about her input $x$ (see, e.g., Ref.~\cite{Marvian2020}).
Consider thus a contextuality experiment involving $R$ preparation operational equivalences.
For a given such equivalence $r\in [R]$ involving a partition into $K^{(r)}$ sets $S_k^{(r)}$, let Alice randomly choose a set $S_k^{(r)}$ (with uniform prior $p(S_k^{(r)})=1/K^{(r)}$) and a state from that set with prior $p(x|S_k^{(r)})=\xi_k^{(r)}(x)$.
How well could a receiver hope to identify which of the sets $\{S_1^{(r)},\dots,S_{K^{(r)}}^{(r)}\}$ the state they receive is sampled from?
The optimal discrimination probability in an operational theory is
\begin{equation}\label{pg}
G^{(r)} \coloneqq \max_{\tilde{p}(\cdot|x)}\frac{1}{K^{(r)}}\sum_{k=1}^{K^{(r)}}\sum_{x\in S_k^{(r)}}\xi_k^{(r)}(x)\,\tilde{p}(k|x)
\end{equation}
where $\tilde{p}$ is the response distribution for the discrimination.
Using that $\sum_{k=1}^{K^{(r)}}\tilde{p}(k|x)=1$, it straightforwardly follows (see Appendix~\ref{AppInfolink}) that the discrimination probability is $G^{(r)}=\frac{1}{K^{(r)}}$ (i.e., random) if and only if the $r$th operational equivalence is satisfied.
The discrimination probability constitutes an operational interpretation of the min-entropic accessible information about the set membership of $x$ \cite{Konig}, and is convenient to work with. More precisely, the accessible information is given by
\begin{equation}\label{info}
\mathcal{I}_r=\log_2(K^{(r)})+\log_2(G^{(r)}).
\end{equation}
Thus, we can associate the operational equivalences to an information tuple $\bar{\mathcal{I}}=\left(\mathcal{I}_1,\ldots, \mathcal{I}_R\right)$.
A contextuality experiment is a zero-information game since $G^{(r)}=\frac{1}{K^{(r)}}$ for all $r$ is equivalent to vanishing information: $\bar{\mathcal{I}}=\bar{0}$.
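For intuition, consider a single operational equivalence partitioning the preparations into $K^{(r)}=2$ sets. Equation~\eqref{info} then interpolates between zero and one bit:
\begin{equation}
G^{(r)}=\tfrac{1}{2}\;\Rightarrow\;\mathcal{I}_r=\log_2 2+\log_2\tfrac{1}{2}=0\,,\qquad\quad G^{(r)}=1\;\Rightarrow\;\mathcal{I}_r=\log_2 2+\log_2 1=1\text{ bit}\,.
\end{equation}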
\subsection{Information cost of simulating preparation contextuality}
Since a vanishing information tuple $\bar{\mathcal{I}}$ is necessary for a faithful realisation of a contextuality scenario in a given physical model, it follows that contextual correlations that cannot be explained in said model require an overhead information, i.e., an information tuple $\bar{\mathcal{I}}\neq \bar{0}$.
In both classical (noncontextual) models and quantum theory, this means that the preparations are allowed to deviate from the operational equivalences specified by the contextuality scenario to an extent quantified by the overhead information.
By doing so, one necessarily goes beyond a standard model for the scenario, as defined in Definition~\ref{defrealist} for classical models and Definition~\ref{defn:quantum_model} for quantum theory.
For the simplest case of a single operational equivalence (i.e., $R=1$), we define the information cost, $\mathcal{Q}$, of simulating $p(b|x,y)$ in quantum theory as the smallest amount of overhead information required for quantum theory to reproduce the correlations:
\begin{align}\label{Qcost}\nonumber
& \mathcal{Q}[p]\coloneqq\min \mathcal{I} \\ \nonumber
& \text{s.t.} \quad \rho_x\succeq 0, \quad \Tr(\rho_x)=1, \quad E_{b|y}\succeq 0, \\
& \qquad \textstyle\sum_b E_{b|y}=\mathds{1},\, \text{and } \, p(b|x,y)=\Tr\left(\rho_xE_{b|y}\right).
\end{align}
However, when several operational equivalences are involved, the information is represented by a tuple $\bar{\mathcal{I}}$ and it is unclear how the information cost of simulation should be defined (note, in particular, that the operational equivalences may not be independent, so information about one may also provide information about another).
We thus focus here on the simpler case described above, and leave the more general case of $R>1$ for future research.
It is not straightforward to evaluate $\mathcal{Q}$.
However, by modifying our semidefinite relaxations of contextual quantum correlations we can efficiently obtain lower bounds on $\mathcal{Q}$ in general scenarios.
Indeed, note that from Eq.~\eqref{pg}, interpreted in a quantum model, it follows that if $\sigma_r$ satisfies $\sigma_r\succeq \sum_{x\in S_k^{(r)}}\xi_k^{(r)}(x)\rho_x$ for every $k\in [K^{(r)}]$ then one has $G^{(r)} \le \frac{1}{K^{(r)}}\Tr(\sigma_r)$.
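Explicitly, for any POVM $\{E_k\}_k$ used for the discrimination one has
\begin{equation}
\frac{1}{K^{(r)}}\sum_{k}\sum_{x\in S_k^{(r)}}\xi_k^{(r)}(x)\Tr\left(\rho_x E_k\right)\;\le\;\frac{1}{K^{(r)}}\sum_{k}\Tr\left(\sigma_r E_k\right)\;=\;\frac{\Tr(\sigma_r)}{K^{(r)}}\,,
\end{equation}
where the inequality uses $E_k\succeq 0$ together with $\sigma_r\succeq\sum_{x\in S_k^{(r)}}\xi_k^{(r)}(x)\,\rho_x$, and the equality uses $\sum_k E_k=\mathds{1}$; since this holds for every POVM, it holds in particular for the optimal one.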
Thus, rather than imposing the constraint arising from $\Tr(\sigma_r)=1$ in our hierarchy of semidefinite relaxations, we can instead minimise ($\frac{1}{K^{(r)}}$ times) the term corresponding to $\Tr(\sigma_r)$ in the moment matrix, which thus provides an upper bound on $G^{(r)}$.
Note that this provides an alternative interpretation to the constraint that $\Gamma_{\mathds{1},\sigma_r}=1$ in Eq.~\eqref{eq:sdp_constr_OEP}: it enforces the fact that Bob should have no information about which set $S_k^{(r)}$ Alice's state was chosen from.
This interpretation makes an interesting link to the recently developed approach to bounding informationally-constrained correlations~\cite{Info2}, which was indeed the initial motivation for the approach we take in this paper.
Considering still the case of $R=1$, we thereby bound the information cost of a quantum simulation by evaluating the semidefinite relaxation as follows.
\begin{proposition}
\label{prop:hierarchySimCost}
For any fixed lists $\mathcal{S}$, $\mathcal{L}$, $\mathcal{O}$ of monomials from $J$, the quantum simulation cost $\mathcal{Q}[p]$ is lower bounded as
\begin{equation}
\log_2(K^{(1)}) + \log_2(G^*) \le \mathcal{Q}[p] ,
\end{equation}
where $G^*$ is obtained as
\begin{align}
G^* = \textup{min} \quad & \frac{\Gamma_{\mathds{1},\sigma_1}}{K^{(1)}} \\
\textup{s.t.} \quad & \Gamma \succeq 0,\quad \tilde{\Lambda}^{(1,k)} \succeq 0, \quad \tilde{\Upsilon}^{x} \succeq 0\notag\\
& \Gamma_{\mathds{1},\rho_x} = 1 \notag\\
& \sum_{u,v}c_{u,v}\Gamma_{u,v} = 0 \quad \textup{if}\quad \sum_{u,v} c_{u,v} \Tr(u^\dagger v) = 0 \notag\\
& \tilde\Lambda^{(1,k)}_{u,v} = \Gamma_{u,\sigma_1 v} - \sum_{x\in S^{(1)}_k} \xi^{(1)}_k(x)\, \Gamma_{u,\rho_x v}\notag\\
& \tilde{\Upsilon}^x_{u,v} = \Gamma_{u,\rho_x v},\notag
\end{align}
where the above operators are all taken to be Hermitian.
\end{proposition}
The correctness of Proposition~\ref{prop:hierarchySimCost} follows immediately from Eq.~\eqref{info} and the fact that $G^*$ is an upper bound on $G^{(1)}$.
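To illustrate how this optimisation differs from the feasibility problem above, we sketch it in the same schematic YALMIP style; once more, all sizes, indices and priors are hypothetical placeholders rather than the actual bookkeeping of our implementation~\cite{codeGit}.
\begin{verbatim}
% Schematic sketch of the bound G* (illustrative only). Placeholder
% bookkeeping: column 1 ~ identity, columns 2..5 ~ rho_1..rho_4,
% column 7 ~ sigma_1; K is the number of sets in the partition.
n = 10;  K = 2;  iSig = 7;  iRho = 2:5;
Gamma = sdpvar(n, n, 'symmetric');
F = [Gamma >= 0, Gamma(1, iRho) == 1];   % PSD and Gamma_{1,rho_x} = 1
% The (1,1) entries of the localising matrices Lambda^{(1,k)} >= 0 give,
% for uniform priors xi = 1/2 over the sets S_1 = {1,2} and S_2 = {3,4}:
F = [F, Gamma(1, iSig) >= 0.5*(Gamma(1, 2) + Gamma(1, 3))];
F = [F, Gamma(1, iSig) >= 0.5*(Gamma(1, 4) + Gamma(1, 5))];
optimize(F, Gamma(1, iSig)/K, sdpsettings('solver', 'mosek'));
Gstar  = value(Gamma(1, iSig))/K;        % upper bound on G^(1)
Qlower = log2(K) + log2(Gstar);          % lower bound on the cost Q[p]
\end{verbatim}
In this data-free toy problem the minimum is simply $G^*=1/K$, giving the trivial bound $\mathcal{Q}\ge 0$; nontrivial bounds arise once the observed correlations $p(b|x,y)$ are imposed as additional linear constraints on $\Gamma$.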
\medskip
Furthermore, one can similarly consider the information cost of simulation in classical models.
In analogy with the quantum simulation cost, we define the classical simulation cost, $\mathcal{C}$, as the smallest overhead information required for a classical noncontextual model to reproduce given correlations:
\begin{align}\label{Ccost}\nonumber
& \mathcal{C}[p]\coloneqq\min \mathcal{I} \quad \text{s.t. } \hspace{1mm}p(b|x,y)=\sum_{\lambda}p(\lambda|x)p(b|y,\lambda),\\
&\forall x:\hspace{1mm} \sum_{\lambda} p(\lambda|x)=1, \quad \forall (\lambda,y):\hspace{1mm} \sum_{b} p(b|y,\lambda)=1.
\end{align}
Naturally, in contrast to quantum simulation, every contextual distribution $p(b|x,y)$ will be associated to a non-zero classical simulation cost.
In analogy with the quantum case, we can place lower bounds on the classical simulation cost using the SDP hierarchy discussed above under the additional assumption that all variables commute; this introduces many further constraints on the SDP and yields necessary conditions for a classical model to exist for a given value of $G$.
However, it turns out that a precise characterisation of the classical simulation cost, in terms of a linear program, is also possible by exploiting the fact that the set of classical, informationally restricted, correlations forms a convex polytope~\cite{Info1,Info2}.%
\footnote{This follows from the fact that it suffices to consider a finite alphabet size for the ontic variable $\lambda$ \cite{Info2}.}
Finally, we make the interesting observation that the discrimination probability $G$ can be given a resource theoretic interpretation in terms of a robustness measure.
As we discuss in Appendix~\ref{AppRobustness}, this can be used to give an alternative interpretation of the simulation cost $\mathcal{I}$.
\subsection{Simulation cost in the simplest scenario}
We illustrate the above discussion of the classical and quantum simulation costs of contextuality by applying it to arguably the simplest contextuality experiment, namely parity-oblivious multiplexing (POM) \cite{POM}.
In POM, Alice has four preparations ($x\in[4]$ written in terms of two bits $x\coloneqq x_1x_2\in[2]^2$) and Bob has two binary-outcome measurements ($y\in[2]$ and $b\in[2]$).
The sole operational equivalence is $\frac{1}{2}P_{11}+\frac{1}{2}P_{22}\simeq\frac{1}{2}P_{12}+\frac{1}{2}P_{21}$, which corresponds to Alice's preparations carrying no information about the parity of her input $x$.
The task is for Bob to guess the value of Alice's $y$th input bit, $x_y$.
The average success probability in a noncontextual model obeys
\begin{equation}
\mathcal{A}_\text{POM}\coloneqq \frac{1}{8}\sum_{x,y}p(b=x_y|x,y)\leq \frac{3}{4}.
\end{equation}
In contrast, quantum models obey the tight bound $\mathcal{A}_\text{POM}\leq \frac{1}{2}\left(1+\frac{1}{\sqrt{2}}\right)$ \cite{POM}.
However, a post-quantum probability theory can achieve the algebraically maximal success probability of $\mathcal{A}_\text{POM}=1$ \cite{Banik}.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{FigSimulationCostPrettier.pdf}
\caption{The information cost of simulating contextuality in parity-oblivious multiplexing using classical and quantum models.}\label{FigParity}
\end{figure}
We consider the information cost of simulating a given value of $\mathcal{A}_\text{POM}$ (i.e., the minimal information cost over all distributions compatible with that value, which can easily be evaluated by modifying the linear and semidefinite programs defined above) in both classical and quantum models.
The results are illustrated in Fig.~\ref{FigParity}.
The classical simulation cost is analytically given by
\begin{equation}
\mathcal{C}_\text{POM} = \log_2\left(4\mathcal{A}_\text{POM}-2\right).
\end{equation}
In Appendix~\ref{AppStrats} we present an explicit simulation strategy that saturates this result, while the results of the linear program and the classical version of the hierarchy coincide with this value up to numerical precision.
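As a simple consistency check, the endpoints of this expression behave as expected:
\begin{equation}
\mathcal{A}_\text{POM}=\tfrac{3}{4}\;\Rightarrow\;\mathcal{C}_\text{POM}=\log_2(1)=0\,,\qquad\quad \mathcal{A}_\text{POM}=1\;\Rightarrow\;\mathcal{C}_\text{POM}=\log_2(2)=1\text{ bit}\,,
\end{equation}
i.e., no overhead information is needed at the noncontextual bound, while simulating the algebraically maximal value costs a full bit.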
For quantum models, we have employed the described semidefinite relaxations using a moment matrix of size $547$ and localising matrices of size 89; the resulting lower bounds are likewise shown in Fig.~\ref{FigParity}.
Importantly, we find that this lower bound on the quantum simulation cost is tight since we can saturate it with an explicit quantum strategy (detailed in Appendix~\ref{AppStrats}).
The quantum simulation cost is analytically given by
\begin{equation}
\mathcal{Q}=\log_2\left(2\mathcal{A}_\text{POM}-\sqrt{2}\right)+\log_2\left(2+\sqrt{2}\right).
\end{equation}
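The endpoints again provide a consistency check:
\begin{equation}
\mathcal{A}_\text{POM}=\frac{1}{2}\left(1+\frac{1}{\sqrt{2}}\right)\;\Rightarrow\;\mathcal{Q}=\log_2\left[\left(1-\tfrac{1}{\sqrt{2}}\right)\left(2+\sqrt{2}\right)\right]=\log_2 1=0\,,
\end{equation}
while $\mathcal{A}_\text{POM}=1$ gives $\mathcal{Q}=\log_2\left[(2-\sqrt{2})(2+\sqrt{2})\right]=\log_2 2=1$ bit: the cost vanishes at the quantum bound, and a full bit of overhead information is required to simulate the post-quantum maximum.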
\section{Conclusions}
In this paper we introduced a semidefinite relaxation hierarchy for bounding the set of contextual quantum correlations and demonstrated its usefulness by applying it to solve several open problems in quantum contextuality.
This approach opens the door to the investigation of the limits of quantum contextuality in general prepare-and-measure experiments, as well as potential applications thereof.
Moreover, it provides the building blocks with which to explore several interesting, related questions, such as whether our approach can be extended to contextuality scenarios involving more than two parties, and whether it can be adapted to bound quantum correlations in Kochen-Specker type contextuality experiments.
By leveraging the interpretation of contextuality experiments as zero-information games, we introduced a measure of the cost of simulating preparation contextual correlations in restricted physical models, and showed how this simulation cost can be bounded in both classical and quantum models.
This raises three fundamental questions:
1) How can the definition of the simulation cost be extended to scenarios with multiple preparation operational equivalences which, \emph{a priori}, may not be independent?
2) How does the simulation cost of contextuality scale in prepare-and-measure scenarios with increasingly many settings? and
3) For a given number of inputs and outputs, what is the largest simulation cost possible in order for classical correlations to reproduce quantum correlations?
Additionally, it would be interesting to investigate how the simulation cost of operational contextuality relates to other notions of simulation, e.g., in Bell nonlocality, Kochen-Specker contextuality and communication complexity.
In particular, can our semidefinite relaxation techniques be adapted to also bound simulation costs in such correlation experiments?
Our work thus provides both a versatile tool for bounding quantum contextuality and a general framework for analysing the simulation of contextual correlations.
Finally, while finalising this article, we became aware of the related work of Ref.~\cite{Anubhav}. This work also addresses the problem of bounding the set of contextual quantum correlations. It uses a hierarchy of semidefinite programming relaxations that is considerably different to the one introduced here. For contextuality scenarios featuring measurement operational equivalences, as well as general mixed states and non-projective measurements, the hierarchy of Ref.~\cite{Anubhav} appears to provide faster convergence (they recover, for example, more readily the bounds of Eq.~\eqref{eq:MNC_bounds}). In contrast, the hierarchy we introduced here appears particularly well suited to preparation contextuality scenarios, admits a generalisation to quantifying the simulation cost of contextuality, and makes an interesting conceptual connection to informationally restricted quantum correlations~\cite{Info1, Info2}.
\bigskip
\begin{acknowledgements}
The authors thank Jean-Daniel Bancal for helpful discussions on the efficient implementation of SDP hierarchies and, in particular, on the use of RepLAB to exploit symmetries in such implementations, as well as the anonymous referees for comments that helped
to significantly improve this paper. This work was supported by the Swiss National Science Foundation (Starting grant DIAQ, NCCR-SwissMAP, Early Mobility Fellowship P2GEP2 194800 and Mobility Fellowship P2GEP2 188276).
\end{acknowledgements}
One of the most challenging problems in string theory is the quest for string vacua which give rise to the phenomenology observed in nature. In addition to realizing the experimentally observed gauge symmetries and matter content, realistic string models should also account for finer details, such as the mixing angles and disparate mass scales exhibited by the Yukawa couplings. D-brane compactifications provide a promising framework for addressing such questions\cite{Blumenhagen:2005mu, Marchesano:2007de,Blumenhagen:2006ci}. In these compactifications, gauge groups appear on stacks of D-branes which fill out four-dimensional spacetime, and chiral matter appears at the intersection of two stacks.
This localization of gauge theory exhibited in D-brane compactifications, which is not present in heterotic compactifications, allows for gauge dynamics to be addressed independently of gravitational considerations. That is, many quantities relevant to particle physics, such as the chiral spectrum or the superpotential, are independent of global aspects of the geometry. Thus, for a large class of phenomenological questions it is sufficient to consider local D-brane setups which mimic a gauge theory with a desired matter field content, instead of considering a global D-brane compactification. These local setups are called D-brane quivers and this ``bottom-up" approach to model building was initiated in \cite{Aldazabal:2000sa,Antoniadis:2001np,Antoniadis:2000ena}.
Recently, there have been extensive efforts \cite{Ibanez:2008my,Leontaris:2009ci,Anastasopoulos:2009mr,Cvetic:2009yh,Cvetic:2009ez,Kiritsis:2009sf,Cvetic:2009ng,Anastasopoulos:2010} to construct semi-realistic bottom-up MSSM quivers. In these quivers, desired Yukawa couplings are often perturbatively forbidden, since they violate global $U(1)$ selection rules which are remnants of the generalized Green-Schwarz mechanism. It has been shown \cite{Blumenhagen:2006xt,Haack:2006cy,Ibanez:2006da,Florea:2006si} that D-instantons can break these global $U(1)$ selection rules and induce perturbatively forbidden couplings (for a review, see \cite{Blumenhagen:2009qh}). In \cite{Cvetic:2009yh} (see also \cite{Cvetic:2009ng}), the authors performed a systematic bottom-up search of D-brane quivers that exhibit the exact MSSM spectrum including three right-handed neutrinos. They investigated which quivers allow for the generation of the MSSM superpotential, perturbatively or non-perturbatively, without inducing any undesired phenomenological drawbacks, such as R-parity violating couplings or a $\mu$-term which is too large.
One phenomenological requirement often imposed on bottom-up quivers is the existence of a mechanism which accounts for the smallness of the neutrino masses. There have been many studies of such mechanisms, both from the field theory and string points of view (for a review, see~\cite{Mohapatra:2005wg}). Many models involve small Majorana masses associated with the higher-dimensional Weinberg operator~\cite{Weinberg:1980bf} $C L H_u L H_u/M$, where $M/C \sim 10^{14}$ GeV. The Weinberg operator may be generated by integrating out heavy states in the effective four-dimensional field theory, such as a heavy right-handed Majorana neutrino (the type I seesaw mechanism). However, in D-brane compactifications a
large Majorana mass for the right-handed neutrino is likely to be due to non-perturbative D-instanton effects \cite{Blumenhagen:2006xt,Ibanez:2006da,Cvetic:2007ku,Ibanez:2007rs,Antusch:2007jd,Cvetic:2007qj}, while in
other string constructions it may be due to higher-dimensional operators emerging from the string compactification
(see, e.g.,~\cite{Giedt:2005vx,Lebedev:2007hv}). It is therefore worthwhile to consider the alternative possibility that the Weinberg
operator is generated directly by D-instantons or other stringy effects without introducing the intermediate step of a
stringy right-handed neutrino mass\footnote{We note the analogy to the work of Klebanov and Witten \cite{Klebanov:2003my}, and the subsequent work of \cite{Cvetic:2006iz}, which calculated the superpotential contributions of stringy dimension six proton decay operators in GUT models and analyzed them relative to their effective field theoretic counterparts. Here, we consider the possibility that the Weinberg operator is a stringy effect rather than a field theoretic operator generated effectively by the Majorana mass term $N_RN_R$ or dimension four $R$-parity violating couplings, such as $q_L L d_R$.}. (Other types of string effects may lead to other possibilities, such as
small Dirac neutrino masses \cite{Cleaver:1997nj,Cvetic:2008hi}).
In D-brane compactifications, the scales of important superpotential couplings are often dependent on some combination of the string scale $M_s$ and the suppression factors of instantons which might induce them. Thus, it is interesting to examine whether or not a string scale different from the four-dimensional Planck scale assists in accounting for the phenomenological scales observed in nature. For example, in \cite{Anastasopoulos:2009mr} it was argued that a lower string scale may avoid the presence of a large $\mu$-term and thus may relax some of the bottom-up constraints imposed in the systematic analyses of \cite{Cvetic:2009yh,Cvetic:2009ng}. In addition to the possibility of being helpful in model building, a lower string scale is intriguing because it might give rise to stringy signatures arising from exotic matter and Regge excitations
observable at the LHC (see, e.g.,\cite{Anastasopoulos:2006cz,Anastasopoulos:2008jt,Lust:2008qc,
Anchordoqui:2008di,Armillis:2008vp,Fucito:2008ai,Anchordoqui:2009mm,Lust:2009pz,Anchordoqui:2009ja,Burikham:2004su}
and references therein).
In this work we explore various implications of a lower string scale $M_s$ for bottom-up D-brane model building. Specifically, we analyze what a lower string scale implies for dimensionful superpotential terms, namely the $\mu$-term and the Weinberg operator, where the latter can be induced directly by a D-instanton or effectively via the type I seesaw mechanism. It has long been realized \cite{Mohapatra:2005wg,Conlon:2007zza} that a generic string-induced Weinberg operator with $M\sim M_s \sim 10^{18}\,GeV$ and $C \leq 1$ cannot account for the observed neutrino masses. However, with a lower string scale the induced Weinberg operator may be the primary source for the neutrino masses. The D-instanton case is even more problematic \cite{Ibanez:2007rs}, since $C$ is typically exponentially suppressed. Again, however, it is possible to account for the observed neutrino masses via a D-instanton induced Weinberg operator if one lowers $M_s$ \cite{Ibanez:2007rs}.
We analyze two scenarios in detail. First, we investigate the implications for the string scale $M_s$ and for bottom-up model building constraints in the case where the D-instanton induced Weinberg operator is the primary source for the neutrino masses. We discuss in detail the consequences for the case where the instanton inducing the Weinberg operator also induces some of the desired, but perturbatively missing, superpotential couplings. Second, we investigate the implications of a lower string scale for the case where the Weinberg operator is induced effectively via the type I seesaw mechanism with a D-instanton induced Majorana mass term. Again we analyze the constraints on the string scale and on the bottom-up model building if the instanton generating the Majorana mass term also induces one of the perturbatively forbidden superpotential couplings.
Furthermore, we perform a systematic analysis of D-brane quivers similar to the one performed in \cite{Cvetic:2009yh,Cvetic:2009ng}, where the spectrum is the exact MSSM spectrum without right-handed neutrinos and the neutrino masses are induced by a stringy Weinberg operator. It turns out that the absence of right-handed neutrinos requires at least four stacks of D-branes. Imposing constraints inspired by experimental observation, such as the absence of R-parity violating couplings, rules out a large class of potential D-brane quivers. Allowing for an additional D-brane stack gives many more solutions, which serve as a good starting point for future model building.
This paper is organized as follows. In chapter \ref{chap stringy Weinberg}, we discuss the implications for the string scale if a D-instanton induced Weinberg operator is the primary source for the neutrino masses. We analyze further constraints on the string scale if an instanton which induces a desired, but perturbatively forbidden, Yukawa coupling also induces one of the dimensionful superpotential couplings, the $\mu$-term or the Weinberg operator. Furthermore, we perform a systematic analysis of multi-stack D-brane quivers that exhibit the exact MSSM spectrum, where the neutrino mass is due to a D-instanton induced Weinberg operator. The details and results of this analysis are displayed in appendix \ref{appendix}. In chapter \ref{chap effective Weinberg}, we assume the presence of right-handed neutrinos, and the Weinberg operator is induced effectively via the type I seesaw mechanism. We discuss the implications for the string scale $M_s$ if the instanton that induces one of the dimensionful quantities, the Majorana mass term for the right-handed neutrinos or the $\mu$-term, also generates a perturbatively forbidden, but desired, Yukawa coupling.
\section{The Stringy Weinberg Operator
\label{chap stringy Weinberg}}
It is known that D-instantons can induce a Weinberg operator which will give contributions to the neutrino masses \cite{Ibanez:2007rs}. Much of the discussion thus far has focused on the key fact that for the usual value of the string scale, $M_s\simeq 10^{18}GeV$, such a Weinberg operator gives contributions of at most $10^{-5}eV$, which is four orders of magnitude too small\footnote{The neutrino masses are measured to be in the range $10^{-2}-1\,\,eV$, but for simplicity's sake in the following analysis, we take the mass to be $10^{-1}\,\,eV$.}. This can be seen explicitly by examining the form of a D-instanton induced Weinberg operator,
\begin{align}
\label{eqn string weinberg operator}
e^{-S^{WB}_{ins}} \,\frac{L^i\, H_u\, L^j\, H_u}{M_s}\,\,,
\end{align}
where the stated result assumes an instanton suppression factor $e^{-S^{WB}_{ins}}\simeq 1$ and $\langle H_u \rangle\simeq 100\,GeV$; this is the best-case scenario, since an instanton with $e^{-S^{WB}_{ins}}<1$ suppresses the masses further. Therefore, if $M_s\simeq 10^{18} \, GeV$, the D-instanton induced Weinberg operator only gives subleading corrections to the neutrino masses, and another mechanism must account for their observed order. In what follows, we refer to such a D-instanton induced Weinberg operator as the stringy Weinberg operator.
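Explicitly, inserting these values into \eqref{eqn string weinberg operator} yields
\begin{align}
m_{\nu}\simeq e^{-S^{WB}_{ins}}\,\frac{\langle H_u \rangle^{2}}{M_s}\simeq \frac{(10^{2}\,GeV)^{2}}{10^{18}\,GeV}=10^{-14}\,GeV=10^{-5}\,eV\,,
\end{align}
four orders of magnitude below the reference value of $10^{-1}\,eV$.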
Lowering the string scale has been proposed as a potential solution to the hierarchy problem \cite{ArkaniHamed:1998rs,Antoniadis:1998ig}. In such a scenario, the small string scale is due to large internal dimensions which could lower the string scale down to the TeV scale, leading to the interesting signatures at the LHC mentioned in the introduction.
A lower string scale is also an interesting possibility from the point of view of the Weinberg operator, since from (\ref{eqn string weinberg operator}) we see that with a string scale
\begin{equation}
M_s \lesssim 10^{14}\,\,\,GeV,
\end{equation}
it would give contributions on the order of $10^{-1}\,eV$, and thus could be the primary mechanism for generating neutrino masses.
In this chapter, we explore the possibility that such a Weinberg operator is the primary source of the neutrino masses.
We will see that a given quiver will require $M_s$ to be in a particular range if one hopes to obtain Yukawa couplings, a stringy Weinberg operator, and a stringy $\mu$-term of the desired order. This puts stringent constraints on the relation between the string scale and instanton suppression factors. Specifically, if the stringy Weinberg operator is to be of the observed order, then the instanton suppression factor of the Weinberg operator and the string scale are related to each other via
\begin{equation}
\frac{M_s}{e^{-S^{WB}_{ins}}} = 10^{14} \, GeV\,\,.
\label{eq: relation}
\end{equation}
Furthermore, if the $\mu$-term is generated non-perturbatively there is also a relation between the string scale $M_s$ and the $\mu$-term instanton suppression factor, given by
\begin{align}
M_s \,e^{-S^{\mu}_{ins}}\, = 100 \, GeV.
\label{eq: mu-relation}
\end{align}
Dividing \eqref{eq: mu-relation} by \eqref{eq: relation}, one obtains an additional relation between the suppression factors of the instantons which generate the Weinberg operator and the $\mu$-term, given by
\begin{align}
e^{-S^{\mu}_{ins}} \,e^{-S^{WB}_{ins}} \simeq 10^{-12}.
\label{eq relation weinberg mu}
\end{align}
These general observations must hold in order to obtain a $\mu$-term of the desired order and neutrino masses of the observed order via a stringy Weinberg operator. The interplay between the instantons in these relations and the instantons which generate the Yukawa couplings generically gives three different ranges for $M_s$, which we now discuss.
\subsection{$M_s \simeq \left( 10^3 - 10^{14}\right) \,GeV$}
As discussed above, a stringy Weinberg operator in a compactification at the usual string scale $M_s\simeq 10^{18}\,GeV$ generates neutrino masses which are too small by four orders of magnitude. From \eqref{eq: relation}, we see that this can be compensated for if
$M_s\simeq10^{14}\,GeV$,
in which case a stringy Weinberg operator could be the primary source of neutrino mass. We emphasize, however, that this numerical value is entirely based on the choice of an $O(1)$ suppression factor for the instanton which generates the Weinberg operator. Further suppression due to the instanton could forces one to further lower the string scale to account for the observed neutrino masses. The string scale as low as the TeV scale is still compatible with experimental observations. Thus the range for the string mass is given by
\begin{align}
M_s \simeq \left( 10^3 - 10^{14}\right)\,\,GeV\,.
\end{align}
In this range, a stringy $\mu$-term of the correct order can be generated by an instanton with suppression $e^{-S^{\mu}_{ins}}\simeq 10^{-12}-10^{-1}$.
We emphasize that, though this is the widest range of string scales compatible with a stringy Weinberg operator being the origin of the observed order of neutrino masses, a particular quiver might contain effects which constrain the range. We now discuss two such effects.
\subsection{$M_s \simeq \left(10^{9}-10^{14}\right)\,\, GeV$\label{sec middle scale}}
Oftentimes an instanton which is required to generate a perturbatively forbidden Yukawa coupling also generates the Weinberg operator. This gives the relation \begin{align}e^{-S_{ins}}\,\,\, \equiv \,\,\, e^{-S^{WB}_{ins}}\,\,\,=\,\,\,e^{-S^{Yuk}_{ins}},\end{align} since the two suppressions are associated with the same instanton. In this case, the fact that the instanton might account for either the electron mass or the bottom-quark mass gives the lower and upper bounds
\begin{align}
10^{-5} \leq e^{-S_{ins}}\leq 1.
\end{align}
Given that the suppression factor of the instanton is set by requiring that the Yukawa couplings are of the correct order, equation \eqref{eq: relation} requires that
\begin{align}
\label{scale yukawa weinberg}
M_s \simeq \left(10^{9}-10^{14} \right)\,GeV \, \, .
\end{align}
Furthermore, since the string scale is bounded, this implies that a D-instanton induced $\mu$-term has a suppression factor $e^{-S^{\mu}_{ins}}$ in the range $10^{-12} \leq e^{-S^{\mu}_{ins}} \leq 10^{-7}$.
For the sake of concreteness, we present a five-stack quiver which exhibits this effect. Its matter spectrum and transformation behavior are given in Table \ref{table weinberg yukawa quiver}, where the $SU(3)_C$ and $SU(2)_L$ of the MSSM arise from a stack $a$ of three D-branes and a stack $b$ of two D-branes, respectively. The hypercharge is given by the linear combination $U(1)_Y=\frac{1}{6}\,U(1)_a+\frac{1}{2}\,U(1)_c+ \frac{1}{2}\,U(1)_d +\frac{1}{2}\,U(1)_e$, where the $U(1)$'s of $c$, $d$, and $e$ are associated with three other D-branes, making this a five-stack quiver. All other linear combinations of these $U(1)$'s become massive via the Green-Schwarz mechanism and survive as global symmetries which have to be obeyed on the perturbative level.
\begin{table}[h] \centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Sector & Matter Fields & Transformation & Multiplicity & Hypercharge\\
\hline \hline
$ab$ & $q_L^1$ & $(a,\overline{b})$ & $1$& $\frac{1}{6}$ \\
\hline
$ab'$ & $q_L^{2,3}$ & $(a,b)$ & $2$& $\frac{1}{6}$ \\
\hline
$ac$ & $d_R$ & $(\overline{a},c)$ & $3 $ & $\frac{1}{3}$ \\
\hline
$ac'$ & $u_R^1$ & $(\overline{a},\overline{c})$ & $1 $ & $-\frac{2}{3}$ \\
\hline
$ad'$ & $u_R^2$ & $(\overline{a}, \overline d)$ & $1 $ & $-\frac{2}{3}$ \\
\hline
$ae'$ & $u_R^3$ & $(\overline{a}, \overline e)$ & $1 $ & $-\frac{2}{3}$ \\
\hline
$bc'$ & $H_d$ & $(\overline b,\overline c)$ & $1$ & $-\frac{1}{2} $ \\
\hline
$bd$ & $H_u$ & $(\overline b,d)$ & $1$ & $\frac{1}{2} $ \\
\hline
$bd'$ & $L^{1,2}$ & $(\overline b,\overline{d})$ & $2$& $-\frac{1}{2}$ \\
\hline
$be'$ & $L^3$ & $(b,\overline e)$ & $1$ & $-\frac{1}{2} $ \\
\hline
$ce'$ & $E_R^{1,2}$ & $(c,e)$ & $2 $ & $1$ \\
\hline
$dd'$ & $E_R^3$ & $\Ysymm_d$ & $1 $ & $1$ \\
\hline
\end{tabular}
\caption{\small Spectrum for a quiver with $U(1)_Y=\frac{1}{6}\,U(1)_a+\frac{1}{2}\,U(1)_c+ \frac{1}{2}\,U(1)_d +\frac{1}{2}\,U(1)_e$.}
\label{table weinberg yukawa quiver}
\end{table}
Both the Weinberg operator $L^{1,2} \,H_u\,L^3\,H_u$ and the Yukawa coupling $q_L^1\,H_u\,u_R^3$
are perturbatively forbidden, since they carry non-vanishing charges under some of the global $U(1)$'s, namely
\begin{align}
Q_a=0 \,\,\,\,\,\,\, Q_b=-2 \,\,\,\,\,\,\, Q_c=0 \,\,\,\,\,\,\, Q_d=1 \,\,\,\,\,\,\, Q_e=-1\,\,.
\end{align}
Couplings of this charge can be induced by an instanton $E2$ with intersection numbers\footnote{A positive intersection number $I_{E2a}$ corresponds to a fermionic zero mode $\lambda_a$, transforming as a fundamental under the global $U(1)_a$. We refer the reader to \cite{Cvetic:2009yh} for further details on the non-perturbative generation of desired superpotential terms in semi-realistic D-brane quivers.}
\begin{align}
I_{E2a}=0 \,\,\,\,\,\,\, I_{E2b}=-1 \,\,\,\,\,\,\, I_{E2c}=0 \,\,\,\,\,\,\, I_{E2d}=1 \,\,\,\,\,\,\, I_{E2e}=-1.
\end{align}
If the presence of this instanton $E2$ is required to account for the observed order of the charm-quark mass as well as for the neutrino mass contributions
\begin{align}
m^{13}_{\nu}=m^{23}_{\nu}=e^{-S_{E2}}\, \frac{\langle H_u \rangle^{2}}{M_s}\,\,,
\end{align}
arising from the induced operator $e^{-S_{E2}}\,L^{1,2}\, H_u\, L^{3}\, H_u/M_s$,
then the string mass must satisfy
\begin{align}
\label{scale quiver 1}
M_s \simeq 10^{12} \,\,\, GeV\,\,.
\end{align}
Here we assume that the suppression factor $e^{-S_{E2}}$ is of the order $10^{-2}$ to give the observed hierarchy between the top-quark and charm-quark mass.
This effect occurs in many quivers, but in this particular quiver there are two subtle issues which allow us to evade the additional constraint on the string scale in (\ref{scale quiver 1}). First, note that the Weinberg operator $L^{1,2}\,H_u\,L^3\,H_u$ has two fields of positive $Q_d$ charge and one of negative $Q_d$ charge, so that the overall charge of the coupling is $Q_d=2-1=1$. Such a case allows for the coupling to be generated by an instanton with vector-like zero modes \cite{Ibanez:2008my,Cvetic:2009yh,Cvetic:2009ez}, and here this instanton, $E2^{'}$, would exhibit the intersection pattern
\begin{align}
I_{E2^{'}a}=0 \,\,\,\,\,\,\, I_{E2^{'}b}=-1 \,\,\,\,\,\,\, I_{E2^{'}c}=0 \,\,\,\,\,\,\, I_{E2^{'}d}=1 \,\,\,\,\,\,\, I_{E2^{'}e}=-1\,\,\,\,\,\,\,I^{\mathcal{N}=2}_{E2^{'}d}=1\,\,.
\end{align}
Note that this instanton has the same global $U(1)$ charges as the instanton $E2$ but does \emph{not} generate $q_L^1\,H_u\,u_R^3$, and thus its suppression factor is not tied to the charm-quark Yukawa coupling. Therefore, if present, its suppression factor can be tuned to account for neutrino masses of the observed order \emph{without} being forced to constrain the string mass, as in (\ref{scale quiver 1}).
Additionally, we emphasize that $E2$ and $E2^{'}$ only generate certain couplings in the neutrino mass matrix. If the instantons which generate the other entries of this matrix can sufficiently account for the order of the neutrino masses, then one is not forced to lower the string mass, since the contributions of $E2$ and $E2^{'}$ would be small corrections to the observed neutrino masses. Therefore, for generic quivers, a detailed analysis of the neutrino mass matrix is required to determine whether or not the string scale must be constrained as in \eqref{scale quiver 1}.
\subsection{$M_s \simeq \left(10^{3} -10^{7}\right) \,\, GeV $\label{sec lower scale}}
Similarly, oftentimes an instanton which is required to generate a perturbatively forbidden Yukawa coupling also generates the $\mu$-term. This gives the relation \begin{align}e^{-S_{ins}}\,\,\, \equiv\,\,\, e^{-S^{\mu}_{ins}}\,\,\,=\,\,\,e^{-S^{Yuk}_{ins}},\end{align} since the two suppressions are associated with the same instanton. In this case, the fact that the instanton might account for either the electron mass or the bottom-quark mass gives the lower and upper bounds
\begin{align}
10^{-5} \leq e^{-S_{ins}}\leq 1.
\end{align}
Given that the suppression factor of the instanton is set by requiring that the Yukawa couplings are of the correct order, equation \eqref{eq: mu-relation}, together with the experimental lower bound $M_s\gtrsim 1\,TeV$, requires that
\begin{align}
\label{scale yukawa mu}
M_s \simeq \left(10^{3}-10^{7}\right) \,\,\, GeV.
\end{align}
Furthermore, via relation \eqref{eq relation weinberg mu}, the D-instanton inducing the Weinberg operator has a suppression factor $e^{-S^{WB}_{ins}}$ in the range $10^{-11} \leq e^{-S^{WB}_{ins}} \leq 10^{-7}$.
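Explicitly, since \eqref{eq: mu-relation} together with the TeV bound on the string scale restricts $e^{-S_{ins}}$ to the range $\left[10^{-5},10^{-1}\right]$, relation \eqref{eq relation weinberg mu} gives
\begin{align}
e^{-S^{WB}_{ins}}=\frac{10^{-12}}{e^{-S_{ins}}}\in\left[10^{-11},\,10^{-7}\right]\,.
\end{align}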
\begin{table}[h] \centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Sector & Matter Fields & Transformation & Multiplicity & Hypercharge\\
\hline \hline
$ab$ & $q_L^{1}$ & $(a,\overline{b})$ & $1$& $\frac{1}{6}$ \\
\hline
$ab'$ & $q_L^{2,3}$ & $(a,b)$ & $2$& $\frac{1}{6}$ \\
\hline
$ac$ & $d_R$ & $(\overline{a},c)$ & $3 $ & $\frac{1}{3}$ \\
\hline
$ac'$ & $u_R^{1,2}$ & $(\overline{a},\overline{c})$ & $2 $ & $-\frac{2}{3}$ \\
\hline
$ae'$ & $u_R^{3}$ & $(\overline{a}, \overline e)$ & $1$ & $-\frac{2}{3}$ \\
\hline
$bc$ & $H_u$ & $(\overline b,c)$ & $1$ & $\frac{1}{2} $ \\
\hline
$bc'$ & $H_d$ & $(\overline b,\overline c)$ & $1$ & $-\frac{1}{2} $ \\
\hline
$bd'$ & $L^{1,2}$ & $(\overline b,\overline{d})$ & $2$& $-\frac{1}{2}$ \\
\hline
$be$ & $L^{3}$ & $(b,\overline{e})$ & $1$& $-\frac{1}{2}$ \\
\hline
$cd'$ & $E_R^1$ & $(c,d)$ & $1$ & $1$ \\
\hline
$ce'$ & $E_R^{2,3}$ & $(c,e)$ & $2$ & $1$ \\
\hline
\end{tabular}
\caption{\small Spectrum for a quiver with $U(1)_Y=\frac{1}{6}\,U(1)_a+\frac{1}{2}\,U(1)_c+ \frac{1}{2}\,U(1)_d +\frac{1}{2}\,U(1)_e$.}
\label{mu yukawa quiver}
\end{table}
For the sake of concreteness, let us discuss a quiver which realizes such a scenario. In Table \ref{mu yukawa quiver} we display the spectrum and its origin for a five-stack quiver where, as in the previous example, the hypercharge is given by the linear combination
\begin{align}
U(1)_Y=\frac{1}{6}\,U(1)_a+\frac{1}{2}\,U(1)_c+ \frac{1}{2}\,U(1)_d +\frac{1}{2}\,U(1)_e\,\,.
\end{align}
Note that the $\mu$-term is perturbatively absent since it carries non-vanishing charge under the global $U(1)$'s
\begin{align}
Q_a=0 \,\,\,\,\,\,\, Q_b=-2 \,\,\,\,\,\,\, Q_c=0 \,\,\,\,\,\,\, Q_d=0 \,\,\,\,\,\,\, Q_e=0.
\end{align}
An instanton $E2$ with an intersection pattern
\begin{align}
I_{E2a}=0 \,\,\,\,\,\,\, I_{E2b}=-1 \,\,\,\,\,\,\, I_{E2c}=0 \,\,\,\,\,\,\, I_{E2d}=0 \,\,\,\,\,\,\, I_{E2e}=0
\end{align}
induces the missing $\mu$-term. However, the very same instanton also generates the perturbatively missing couplings $q_L^{1}\,H_u\,u_R^{1,2}$ and $q_L^{1}\,H_d\,d_R^{1,2,3}$ which are necessary to give masses to the lightest quark family. To account for the observed mass hierarchy between the heaviest and lightest family, the suppression factor $e^{-S_{E2}}$ is expected to be of the order $10^{-5}$. This implies that the string scale is of the order
\begin{align}
\label{scale quiver 2}
M_s \simeq 10^{7} \,\,\, GeV\,\,.
\end{align}
Via equation \eqref{eq relation weinberg mu}, this implies that the suppression factor of the instanton which induces the Weinberg operator is expected to be of the order $e^{-S^{WB}_{ins}} \simeq 10^{-7}$.
\subsection{A Systematic Analysis of MSSM quivers}
In this chapter we have seen that the string scale must satisfy $M_s \lesssim 10^{14}\,GeV$ if one hopes to account for the observed order of neutrino masses via a stringy Weinberg operator. Furthermore, we have shown that oftentimes the suppression factor of the instanton which induces the Weinberg operator or $\mu$-term is constrained by the requirement that the same instanton generates a Yukawa coupling of the correct order. This further constrains the value of the string mass.
In Appendix \ref{appendix}, we perform a systematic analysis of all four-stack and five-stack quivers that exhibit the exact MSSM spectrum \emph{without} right-handed neutrinos, which is similar to the analysis performed in \cite{Cvetic:2009yh}. We impose top-down constraints which arise from global consistency conditions, such as tadpole cancellation, and bottom-up constraints which are motivated by experimental observations. The latter include, among other things, the absence of R-parity violating couplings and the absence of dimension $5$ operators that lead to rapid proton decay. The small neutrino masses are due to a stringy Weinberg operator as discussed above. Thus, for all these quivers the string mass must be $M_s \lesssim 10^{14}\,GeV$ to account for the observed neutrino masses. All four- and five-stack quivers which pass the top-down and bottom-up constraints are listed in the tables in Appendix \ref{appendix}. We mark setups in which the string scale is further constrained due to scenarios discussed in sections \ref{sec middle scale} and \ref{sec lower scale}. The class of setups which pass all the top-down and bottom-up constraints serves as a good starting point for future D-brane model building.
\section{The Effective Weinberg Operator
\label{chap effective Weinberg}}
In the previous chapter we investigated the generation of neutrino masses via a Weinberg operator induced by a D-instanton. We saw that in order to get realistic neutrino masses the string scale has to be $M_s \lesssim 10^{14}\, GeV$, and we showed that the suppression factors of the instanton inducing the Weinberg operator and of the instanton inducing the $\mu$-term are related to each other via equation \eqref{eq relation weinberg mu}. Moreover, if one of the MSSM Yukawa couplings is induced by an instanton which also generates the $\mu$-term or the Weinberg operator, one obtains serious constraints on the string scale $M_s$.
In this chapter we analyze the situation where the Weinberg operator is induced effectively via the type I seesaw mechanism. We will see that generically one can obtain realistic neutrino masses even without lowering the string scale \cite{Blumenhagen:2006xt,Ibanez:2006da,Cvetic:2007ku,Ibanez:2007rs,Antusch:2007jd,Cvetic:2007qj}. This is due to the fact that the suppression factor of the instanton which induces the Majorana masses is generically independent of the scale of the Dirac mass, whether perturbatively present or non-perturbatively generated.
However, as encountered in \cite{Cvetic:2009yh}, the Majorana and Dirac mass scales are related in the case that the same instanton generates both terms, since the instanton suppression governs both scales. In that case the induced neutrino masses are too small to account for the observed values unless the string scale is lowered. Furthermore, we will analyze the case where one is required to lower the string scale in order to obtain a realistic $\mu$-term, analogously to section \ref{sec lower scale}.
We emphasize that the type I seesaw mechanism is the only mechanism we consider for the effective generation of the Weinberg operator, due to its relative simplicity. Other mechanisms do exist, of course, including the type II and type III seesaw mechanisms, as well as the possibility of generating the Weinberg operator via dimension four $R$-parity violating couplings, such as $q_LLd_R$ \cite{Barbier:2004ez,Chemtob:2004xr}. The latter might be seen as rather natural in the context of orientifold compactifications, where an instanton whose presence is required to generate a forbidden Yukawa coupling often generates a dimension four $R$-parity violating coupling, as well. However, the $R$-parity violating couplings generated in this way are often not suppressed enough to satisfy stringent experimental bounds, since their scale is tied to that of a Yukawa coupling via the instanton suppression factor. For these reasons, we focus on the type I seesaw mechanism, which we now discuss.
\subsection{The Generic Seesaw Mechanism
\label{sec generic seesaw}}
In the presence of a Dirac mass term which does not account for the small neutrino masses\footnote{However, see \cite{Cvetic:2008hi} for an intriguing mechanism to obtain small Dirac neutrino masses via D-instanton effects.}, a large Majorana mass term for the right-handed neutrinos can explain the observed small masses via the seesaw mechanism.
When generated by D-instantons, these mass terms take the form
\begin{align}
e^{-S^{Dirac}_{ins}}L^I \,H_u \,N_R^J\qquad \qquad e^{-S^{Majo}_{ins}}\,\,M_s\,N_R^I \,N_R^J\,\,.
\end{align}
These terms give a mass matrix of the form\footnote{For simplicity we display the neutrino mass matrix for one family only.}
\begin{align}
m_{\nu}= \left( \begin{array}{cc} 0 & e^{-S^{Dirac}_{ins}} \langle H_u \rangle\\
e^{-S^{Dirac}_{ins}}\langle H_u \rangle & e^{-S^{Majo}_{ins}} M_s \\
\end{array}\right)\,\,,
\label{eq generic seesaw mass matrix}
\end{align}
where $\langle H_u \rangle$ denotes the VEV of the Higgs field and $M_s$ is the
string mass. If the Majorana mass term for the right-handed neutrinos is much larger than the Dirac mass term, then the mass eigenvalues of $m_{\nu}$ are of the order
\begin{align}
m^1_{\nu}= \,\frac{e^{-2\,S^{Dirac}_{ins}}\,{\langle H_u \rangle}^2}{e^{-S^{Majo}_{ins} }\,M_s} \qquad \text{and}
\qquad m^2_{\nu}= e^{-S^{Majo}_{ins}}\,M_s\,\,.
\end{align}
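These expressions follow from diagonalising \eqref{eq generic seesaw mass matrix}. Writing $m_D=e^{-S^{Dirac}_{ins}}\langle H_u \rangle$ and $m_M=e^{-S^{Majo}_{ins}}\,M_s$, the characteristic equation $x^2-m_M\,x-m_D^2=0$ has the roots
\begin{align}
x_{\pm}=\frac{m_M}{2}\left(1\pm\sqrt{1+4\,m_D^2/m_M^2}\right)\,\,,
\end{align}
so that for $m_M\gg m_D$ one finds $|x_-|\simeq m_D^2/m_M$ and $x_+\simeq m_M$, which reproduces $m^1_{\nu}$ and $m^2_{\nu}$ above.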
Taking $\langle H_u \rangle\simeq 100 \, GeV$ and the observed neutrino mass $m^1_{\nu} \simeq 10^{-1} \, eV$, we can relate the string scale $M_s$ to the two suppression factors $e^{-\,S^{Dirac}_{ins}}$ and $e^{-S^{Majo}_{ins}}$ via the relation
\begin{align}
M_s \simeq \frac{e^{-2\,S^{Dirac}_{ins} } } {e^{-S^{Majo}_{ins} }} \, 10^{14}\, GeV\,\,.
\label{eq relation M_s suppression seesaw}
\end{align}
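The numerical factor here is a simple unit count: with $m^1_{\nu}\simeq 10^{-1}\, eV = 10^{-10}\, GeV$ one has
\begin{align}
M_s=\frac{e^{-2\,S^{Dirac}_{ins} } } {e^{-S^{Majo}_{ins} }}\,\frac{{\langle H_u \rangle}^2}{m^1_{\nu}}
\simeq \frac{e^{-2\,S^{Dirac}_{ins} } } {e^{-S^{Majo}_{ins} }}\,\frac{10^{4}\, GeV^2}{10^{-10}\, GeV}
=\frac{e^{-2\,S^{Dirac}_{ins} } } {e^{-S^{Majo}_{ins} }}\, 10^{14}\, GeV\,\,.
\end{align}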
Note that in contrast to the stringy Weinberg operator discussed in the previous section, here we do not have to lower the string scale to obtain realistic neutrino masses.
Thus, even for the generic string scale $M_s\simeq 10^{18}\, GeV$, we obtain realistic neutrino masses if the suppression factors satisfy
\begin{align}
\label{eq condition seesaw generic string scale}
10^{4}\simeq \frac{e^{-2\,S^{Dirac}_{ins} } } {e^{-S^{Majo}_{ins} }} \,\,.
\end{align}
Of course, being able to satisfy this crucially depends on the fact that the instanton suppression factors are not related to one another.
\subsection{The Seesaw with Lower String Scale
\label{sec seesaw lower string scale}}
We saw that for a Weinberg operator induced by the type I seesaw mechanism, one can generically obtain neutrino masses in the observed range for the string scale
$M_s\simeq 10^{18}\, GeV$, as long as the suppression factors of the Dirac and Majorana mass inducing instantons satisfy the condition \eqref{eq condition seesaw generic string scale}. This is due to the fact that the seesaw neutrino masses contain two parameters, $e^{-S^{Dirac}_{ins}}$ and $e^{-S^{Majo}_{ins}}$, while the neutrino masses generated via a D-instanton induced Weinberg operator depend on only one parameter, namely $e^{-S^{WB}_{ins}}$.
However, the situation changes if the same instanton generates both the Dirac mass and Majorana mass terms.
Then we have
\begin{align}
e^{-S_{ins}} \,\,\, \equiv \,\,\, e^{-S^{Dirac}_{ins}} \,\,\,=\,\,\,e^{-S^{Majo}_{ins}}
\end{align}
in equation \eqref{eq generic seesaw mass matrix}, and the seesaw masses take the form
\begin{align}
m^1_{\nu}= e^{-S_{ins}}\,\frac{{\langle H_u \rangle}^2}{M_s} \qquad \text{and}
\qquad m^2_{\nu}= e^{-S_{ins}}\,M_s\,\,.
\end{align}
Note that the light mass eigenvalue is in a form similar to the one encountered in the previous chapter, where the neutrino masses arose from a Weinberg operator induced by a D-instanton. With the generic values $M_s \simeq 10^{18} \,GeV$, $\langle H_u \rangle \simeq 100\, GeV$, and an $O(1)$
instanton suppression factor, this gives neutrino masses of order $10^{-5}\, eV$, which is too small by a few orders of magnitude. Thus, in this case, the type I seesaw mechanism cannot account for the observed order of the neutrino masses unless $M_s$ is significantly lower than $10^{18}\, GeV$. The relation between the string scale and the suppression factor which has to be satisfied in order to obtain neutrino masses of the order $10^{-1}\, eV$ is
\begin{align}
\frac{M_s}{e^{-S_{ins}}}\, = 10^{14} \, GeV\,\,.
\end{align}
This is precisely the same relation as \eqref{eq: relation}, except that now it has arisen as a special case of the type I seesaw mechanism, rather than from a stringy Weinberg operator. As before, we know that $e^{-S_{ins}}$ is at most $O(1)$, so that in this scenario the string scale has to satisfy $M_s \lesssim 10^{14}\,GeV$.
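For completeness, the $10^{-5}\,eV$ estimate quoted above is the same unit count evaluated at the generic scale: for an $O(1)$ suppression factor, $m^1_{\nu}\simeq (10^{2}\,GeV)^2/10^{18}\,GeV=10^{-14}\,GeV=10^{-5}\,eV$.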
We now present a four-stack quiver which realizes such a scenario. The hypercharge for this quiver is $U(1)_Y=-\frac{1}{3}\,U(1)_a-\frac{1}{2}\,U(1)_b$, and its matter content and transformation behavior are given in Table \ref{lower seesaw quiver}. Note that this quiver was encountered in the systematic analysis performed in \cite{Cvetic:2009yh} and corresponds to solution 3 in Table 7 of \cite{Cvetic:2009yh}.
\begin{table}[h] \centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Sector & Matter Fields & Transformation & Multiplicity & Hypercharge\\
\hline \hline
$ab$ & $q_L$ & $(a,\overline{b})$ & $3$& $\frac{1}{6}$ \\
\hline
$ad'$ & $d_R$ & $(\overline{a},\overline d)$ & $3 $ & $\frac{1}{3}$ \\
\hline
$aa'$ & $u_R$ & $\Yasymm_a$ & $3 $ & $-\frac{2}{3}$ \\
\hline
$bc'$ & $H_u \,\,\, H_d$ & $(\overline b, \overline c) \,\,\, (b,c)$ & $1$ & $\frac{1}{2} \,\,\,-\frac{1}{2} $ \\
\hline
$bd$ & $L$ & $(b,\overline{d})$ & $3$& $-\frac{1}{2}$ \\
\hline
$bb'$ & $E_R$ & ${\overline \Yasymm}_b$ & $3$ & $1$ \\
\hline
$cd'$ & $N_R$ & $(\overline c, \overline d)$ & $3$ & $0$ \\
\hline
\end{tabular}
\caption{\small{Spectrum for a quiver with $U(1)_Y=-\frac{1}{3}\,U(1)_a-\frac{1}{2}\,U(1)_b$.}
\label{lower seesaw quiver}}
\end{table}
In this quiver, the Dirac mass term $L^I\, H_u\, N_R^J$ and Majorana mass term $N_R^I\,N_R^J$ both have charge
\begin{align}
Q_a=0 \,\,\,\,\,\,\, Q_b=0 \,\,\,\,\,\,\, Q_c=-2 \,\,\,\,\,\,\, Q_d=-2
\end{align}
under the global $U(1)$'s. Couplings of this charge can be induced by an instanton $E2$ with intersection pattern
\begin{align}
I_{E2a}=0 \,\,\,\,\,\,\, I_{E2b}=0 \,\,\,\,\,\,\, I_{E2c}=-2 \,\,\,\,\,\,\, I_{E2d}=-2.
\end{align}
This is precisely the case discussed above, where the same instanton generates both the Dirac mass term and the Majorana mass term. Therefore there is an upper bound $M_s \lesssim 10^{14}\,GeV$ in order to obtain neutrino masses of the correct order. Since the $\mu$-term is perturbatively present and $E2$ does not generate any of the Yukawa couplings, there is no interplay between these effects which would further constrain $M_s$.
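The charges quoted above can be read off directly from Table \ref{lower seesaw quiver} by counting $+1$ for each fundamental and $-1$ for each antifundamental index: for the Dirac mass term, $L\,(b,\overline{d})$ contributes $Q_b=+1,\,Q_d=-1$, $H_u\,(\overline{b},\overline{c})$ contributes $Q_b=-1,\,Q_c=-1$, and $N_R\,(\overline{c},\overline{d})$ contributes $Q_c=-1,\,Q_d=-1$, which sums to $(Q_a,Q_b,Q_c,Q_d)=(0,0,-2,-2)$; the Majorana mass term $N_R\,N_R$ gives the same total.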
\subsection{Lower string scale due to $\mu$-term
\label{sec seesaw mu-term}}
As discussed in section \ref{sec lower scale}, one might have to lower the string scale in order to obtain a realistic $\mu$-term. This happens if one of the instantons which induces a perturbatively forbidden, but desired, Yukawa coupling also generates the $\mu$-term \cite{Anastasopoulos:2009mr}. In that case,
\begin{align}
e^{-S_{ins}}\equiv e^{-S^{\mu}_{ins}} \simeq e^{-S^{Yuk}_{ins}}
\end{align}
and in order to get realistic mass hierarchies we have
\begin{align}
10^{-5} \leq e^{-S_{ins}}\leq 1\,\,.
\end{align}
Obtaining a $\mu$-term of the correct order then requires
\begin{align}
M_s \simeq \left(10^{3} - 10^{7} \right)\, GeV\,\,.
\label{mu ms range}
\end{align}
From equation \eqref{eq relation M_s suppression seesaw}, we see that a string mass in this range requires the suppression factors $e^{-S^{Dirac}_{ins}}$ and $e^{-S^{Majo}_{ins}}$ to satisfy
\begin{align}
10^{-11} \leq \frac{e^{-2\,S^{Dirac}_{ins} } } {e^{-S^{Majo}_{ins} }} \leq 10^{-7}\,\,.
\label{eq relation seesaw mu-term }
\end{align}
Note that if the Dirac neutrino mass term is realized perturbatively, then $e^{-S^{Dirac}_{ins}} \simeq 1$ and the Majorana mass term is not large enough to account for the small neutrino masses via the seesaw mechanism. In this case the neutrino masses would be larger than observed. Furthermore, if the Dirac neutrino mass term is induced by an instanton which also generates another perturbatively forbidden Yukawa coupling, then $e^{-S^{Dirac}_{ins}} = e^{-S^{Yuk}_{ins}}$, and $e^{-S^{Majo}_{ins} }$ can only live in the small window
\begin{equation}
10^{-3}\lesssim \, e^{-S^{Majo}_{ins} } \, \lesssim 1.
\end{equation}
This is a consequence of the relation between the Dirac mass term and Yukawa coupling inducing instanton suppressions, the relation \eqref{eq relation seesaw mu-term }, and the fact that any instanton suppression is at most $O(1)$.
If, on the other hand, the instanton which generates the Dirac mass term does not generate a perturbatively forbidden Yukawa coupling or the Majorana mass term, then its suppression factor is unconstrained and one can obtain realistic neutrino masses, in accord with \eqref{eq relation seesaw mu-term }.
As an example, we present a four-stack quiver with the hypercharge embedding $U(1)_Y=\frac{1}{6}\,U(1)_a+\frac{1}{2}\,U(1)_c+\frac{1}{2}\,U(1)_d$, whose matter content and transformation behavior are given in Table \ref{mu seesaw quiver}.
\begin{table}[h] \centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Sector & Matter Fields & Transformation & Multiplicity & Hypercharge\\
\hline \hline
$ab$ & $q_L$ & $(a,\overline{b})$ & $3$& $\frac{1}{6}$ \\
\hline
$ac$ & $d_R$ & $(\overline{a}, c)$ & $3 $ & $\frac{1}{3}$ \\
\hline
$ac'$ & $u_R^1$ & $(\overline{a},\overline c)$ & $1 $ & $-\frac{2}{3}$ \\
\hline
$ad'$ & $u_R^{2,3}$ & $(\overline{a},\overline d)$ & $2 $ & $-\frac{2}{3}$ \\
\hline
$bc'$ & $H_d$ & $ (\overline b,\overline c)$ & $1$ & $-\frac{1}{2} $ \\
\hline
$bd$ & $L$ & $(b,\overline{d})$ & $3$& $-\frac{1}{2}$ \\
\hline
$bd'$ & $H_u$ & $ (b,d)$ & $1$ & $\frac{1}{2} $ \\
\hline
$bb'$ & $N_R$ & ${\overline \Yasymm}_b$ & $3$ & $0$ \\
\hline
$cc'$ & $E_R^1$ & $\Ysymm_c$ & $1$ & $1$ \\
\hline
$dd'$ & $E_R^{2,3}$ & $\Ysymm_d$ & $2$ & $1$ \\
\hline
\end{tabular}
\caption{\small{Spectrum for a quiver with $U(1)_Y=\frac{1}{6}\,U(1)_a+\frac{1}{2}\,U(1)_c+\frac{1}{2}\,U(1)_d$.}
\label{mu seesaw quiver}}
\end{table}
In this quiver, the $\mu$-term $H_u\, H_d$ and the Yukawa couplings $q^I_L\, H_u\, u^1_R$ have global $U(1)$ charge
\begin{align}
Q_a=0 \,\,\,\,\,\,\, Q_b=0 \,\,\,\,\,\,\, Q_c=-1 \,\,\,\,\,\,\, Q_d=1\,\,.
\end{align}
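These charges again follow from Table \ref{mu seesaw quiver}: for the $\mu$-term, $H_u\,(b,d)$ contributes $Q_b=Q_d=+1$ while $H_d\,(\overline{b},\overline{c})$ contributes $Q_b=Q_c=-1$, giving $(Q_a,Q_b,Q_c,Q_d)=(0,0,-1,1)$; for the Yukawa coupling, $q_L\,(a,\overline{b})$, $H_u\,(b,d)$ and $u_R^1\,(\overline{a},\overline{c})$ contribute $(+1,-1,0,0)$, $(0,+1,0,+1)$ and $(-1,0,-1,0)$ respectively, with the same total.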
Couplings of this charge can be induced by an instanton $E2$ with intersection pattern
\begin{align}
I_{E2a}=0 \,\,\,\,\,\,\, I_{E2b}=0 \,\,\,\,\,\,\, I_{E2c}=-1 \,\,\,\,\,\,\, I_{E2d}=1.
\end{align}
If the instanton $E2$ is to account for the observed up-quark mass, then the suppression factor is fixed to be $e^{-S_{E2}} \simeq 10^{-5}$ and the string scale must be of the order
\begin{align}
M_s \simeq 10^7 \,GeV.
\end{align}
In this quiver the Dirac mass term $L\,H_u\,N_R$ is perturbatively realized and, as discussed above, one cannot obtain realistic neutrino masses via the seesaw mechanism. It turns out that if one allows for a lower string scale and thus relaxes the conditions on the non-perturbative generation of the $\mu$-term\footnote{Here we implemented, in addition to the constraints laid out in \cite{Cvetic:2009yh}, also the constraints on dimension five operators discussed in \cite{Kiritsis:2009sf,Cvetic:2009ez,Cvetic:2009ng}.}, there are no additional four-stack quivers compared to the ones found in \cite{Cvetic:2009yh}. It is for this reason that we have presented a quiver which does not allow for realistic neutrino masses. However, we expect the situation to change if the MSSM realization is based on five D-brane stacks. In that case lowering the string scale to account for a realistic non-perturbative $\mu$-term should increase the number of realistic bottom-up MSSM quivers.
\section{Conclusions}
Perhaps the most astonishing experimental particle physics result in recent memory is the existence of very small neutrino masses, which differ from the top-quark mass by over ten orders of magnitude. The existence of such a discrepancy in nature is fascinating, and much effort has been dedicated to investigating possible explanations. One answer, of course, is that the masses ``are what they are.'' But this is not very satisfying, particularly since another explanation might give insight into fundamental theory or beyond the standard model effective theory. Many effective and string theoretic explanations have been developed to this end, including a number of ``seesaw'' mechanisms and neutrino mass terms directly induced by D-instanton effects in type II superstring theory.
The small scale of the neutrino masses is not the only interesting experimentally observed scale in particle physics, of course. The Yukawa couplings, with their disparate mass scales and mixing angles, are also fascinating; furthermore, it is important to have a $\mu$-term of the correct order in the MSSM.
As shown in \cite{Anastasopoulos:2009mr,Cvetic:2009ez}, D-instanton effects may account for these observed mass hierarchies. However, often a D-instanton induces more than one of the perturbatively forbidden couplings, which would relate their scales and therefore might pose phenomenological problems \cite{Ibanez:2008my,Cvetic:2009yh,Cvetic:2009ez}. The string scale $M_s$ affects the scale of dimensionful parameters, though, and therefore a lower string scale might alleviate these problems \cite{Anastasopoulos:2009mr}.
In this work we investigate the implications of a lower string scale for bottom-up D-brane model building. We show that a lower string scale allows for a D-instanton induced Weinberg operator to be the primary source for the neutrino masses. For a generic string scale of $10^{18}\, GeV$, the latter would generate neutrino masses much smaller than the observed ones \cite{Ibanez:2007rs}. Thus, a lower string scale provides an option to obtain many semi-realistic bottom-up D-brane quivers. However, a lower string scale sometimes also poses a serious problem. Specifically, if the Dirac mass term is realized perturbatively and the Majorana mass is smaller due to a lower string scale, then neutrino masses of the observed order cannot be obtained via the type I seesaw mechanism.
In chapter \ref{chap stringy Weinberg}, we discuss the scenario where the neutrino masses are due to a D-instanton induced Weinberg operator. We show that the string scale has to be lower than $10^{14}\, GeV$ to obtain realistic neutrino masses. We then investigate how the string scale might be further constrained to account for the observed mass hierarchies if one D-instanton induces multiple desired superpotential couplings. Finally, we perform a systematic analysis, similar to the one performed in \cite{Cvetic:2009yh,Cvetic:2009ez}, of multi D-brane stack quivers which exhibit the exact MSSM spectrum but without right-handed neutrinos, where the observed order of the neutrino masses is due to a stringy Weinberg operator.
In chapter \ref{chap effective Weinberg}, we investigate the generation of an effective Weinberg operator via the type I seesaw mechanism. While generically the type I seesaw mechanism can account for the observed neutrino masses even with the usual string scale, it sometimes happens that the same instanton which induces the Majorana mass term for the right-handed neutrinos also generates the Dirac neutrino mass term \cite{Cvetic:2009yh}. In that case one encounters a situation analogous to the stringy Weinberg operator and one is forced to lower the string scale to account for the observed neutrino masses. As in chapter \ref{chap stringy Weinberg}, we further analyze the implications for the string scale if a D-instanton induces multiple desired, but perturbatively missing, superpotential couplings. We encounter a large tension between getting realistic neutrino masses and a $\mu$-term of the desired order if the latter is induced by a D-instanton which also induces one of the perturbatively missing Yukawa couplings.
\vspace{1.5cm}
{\bf Acknowledgments}\\
We thank M. Ambroso, P. Anastasopoulos, M. Bianchi, F. Fucito, I. Garc\'ia-Etxebarria, G. K. Leontaris, A. Lionetto and J.F. Morales for useful discussions.
The work of M.C. and J.H. is supported by the DOE Grant DOE-EY-76-02-3071, the NSF RTG grant DMS-0636606 and the Fay R. and Eugene L. Langberg Chair.
The work of P.L. is supported by the IBM Einstein Fellowship and by the NSF grant PHY-0503584. R.R. thanks the University of Pennsylvania for hospitality during this work.
\newpage
Attractors generated by iterated function systems are among the first fractal sets a mathematician encounters. The familiar middle third Cantor set and the Koch curve can both be realised as attractors for appropriate choices of iterated function system. Attractors generated by iterated function systems have the property that they are equal to several scaled down copies of themselves. When these copies are disjoint, or satisfy some weaker separation assumption, then much can be said about the attractor's metric and topological properties. However, when these copies overlap significantly the situation is much more complicated. Measuring how an iterated function system overlaps, and determining properties of the corresponding attractor, are two important problems that are the focus of much current research (see for example \cite{Hochman,Hochman2,Shm2,Shm,ShmSol,Var,Varju2}). The purpose of this paper is to develop a new approach for measuring how an iterated function system overlaps. This approach is inspired by classical results from Diophantine approximation and metric number theory. One such result, due to Khintchine, demonstrates that for a class of limsup sets defined in terms of the rational numbers, their Lebesgue measure is determined by the convergence or divergence of naturally occurring volume sums (see \cite{Khit}). Importantly, this result provides a quantitative description of how the rational numbers are distributed within $\mathbb{R}$.
In this paper we study limsup sets that are defined using iterated function systems (for their definition see Section \ref{two families}). We are motivated by the following goals:
\begin{enumerate}
\item We would like to determine whether it is the case that for a parameterised family of iterated function systems, a typical member will satisfy an appropriate analogue of Khintchine's theorem.
\item We would like to answer the question: Does studying the metric properties of these limsup sets allow us to distinguish between the overlapping behaviour of iterated function systems in a way that was not previously available?
\item We would like to understand how the metric properties of these limsup sets relates to traditional methods for measuring how an iterated function system overlaps, such as the dimension and absolute continuity of self-similar measures.
\end{enumerate} In this paper we make progress with each of these goals. Theorems \ref{1d thm}, \ref{translation thm} and \ref{random thm} address the first goal. These results demonstrate that for many parametrised families of overlapping iterated function systems, it is the case that a typical member will satisfy an appropriate analogue of Khintchine's theorem. To help illustrate this point, and to motivate what follows, we include here a result which follows from Theorem \ref{1d thm}.
\begin{thm}
For Lebesgue almost every $\lambda\in(1/2,0.668),$ Lebesgue almost every $x\in [\frac{-1}{1-\lambda},\frac{1}{1-\lambda}]$ is contained in $$\left\{x\in \R:\big|x-\sum_{i=1}^{m}d_i\lambda^{i-1}\big|\leq \frac{1}{2^m\cdot m}\textrm{ for infinitely many }(d_i)_{i=1}^{m}\in\bigcup_{n=1}^{\infty}\{-1,1\}^n\right\}.$$
\end{thm} Theorem \ref{precise result} allows us to answer the question stated in our second goal in the affirmative. See the discussion in Section \ref{new methods} for a more precise explanation. Theorem \ref{Colette thm} addresses the third goal. It shows that if we are given some measure $\m$, and our iterated function system satisfies a strong version of Khintchine's theorem with respect to $\m$, then the pushforward of $\m$ must be absolutely continuous. Moreover, we demonstrate with several examples that this strong version of Khintchine's theorem is not equivalent to the absolute continuity of the pushforward measure.
In the rest of this introduction we provide some more background to this topic, and introduce the limsup sets that will be our main object of study.
\subsection{Attractors generated by iterated function systems}
We call a map $\phi:\mathbb{R}^d\to\mathbb{R}^d$ a contraction if there exists $r\in (0,1)$ such that $|\phi(x)-\phi(y)|\leq r|x-y|$ for all $x,y\in\mathbb{R}^d$. We call a finite set of contractions an iterated function system or IFS for short. A well-known result due to Hutchinson \cite{Hut} states that given an IFS $\Phi=\{\phi_i\}_{i=1}^l,$ there exists a unique, non-empty, compact set $X$ satisfying $$X=\bigcup_{i=1}^l\phi_i(X).$$ We call $X$ the attractor generated by $\Phi$. When an IFS satisfies $\phi_{i}(X)\cap \phi_{j}(X)=\emptyset$ for all $i\neq j$, or is such that there exists an open set $O\subset \mathbb{R}^d,$ for which $\phi_i(O)\subset O$ for all $i$ and $\phi_{i}(O)\cap \phi_{j}(O)=\emptyset$ for all $i\neq j,$ then many important properties of $X$ can be determined (see \cite{Fal1}). This latter property is referred to as the open set condition. Without these separation assumptions, determining properties of the attractor can be significantly more complicated.
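As a purely illustrative numerical sketch (our own, not part of any argument in this paper), an attractor can be approximated by iterating randomly chosen maps from a seed point; the IFS $\{x/3,\, x/3+2/3\}$ used below, whose attractor is the middle third Cantor set, is an arbitrary choice of example.
\begin{verbatim}
import random

# Approximate the attractor of the IFS {x/3, x/3 + 2/3} (the middle
# third Cantor set) by the "chaos game": repeatedly apply randomly
# chosen maps to a seed point and record the later iterates.
maps = [lambda x: x / 3, lambda x: x / 3 + 2 / 3]

def chaos_game(n_points=10000, burn_in=50):
    x, points = 0.0, []
    for k in range(n_points + burn_in):
        x = random.choice(maps)(x)
        if k >= burn_in:            # discard the transient iterates
            points.append(x)
    return points

pts = chaos_game()
print(min(pts), max(pts))           # the iterates accumulate inside [0, 1]
\end{verbatim}
Each iterate is a point of the form $(\phi_{a_k}\circ \cdots \circ \phi_{a_1})(x_0)$, whose distance to the attractor decays geometrically, since each application of a contraction shrinks the distance to $X$ by its contraction factor.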
The study of attractors generated by iterated function systems is classical within fractal geometry. One of the most important problems in this area is to determine the metric properties of attractors generated by overlapping iterated function systems. To understand the properties of an attractor $X,$ in both the overlapping case and non-overlapping case, it is useful to study measures supported on $X$. A particularly distinguished role is played by the measures described below, that are in a sense dynamically defined.
Let $\pi:\{1,\ldots,l\}^{\mathbb{N}}\to X$ be given by $$\pi((a_j)_{j=1}^{\infty}):=\lim_{n\to\infty}(\phi_{a_1}\circ \cdots \circ \phi_{a_n})(\mathbf{0}).$$ The map $\pi$ is surjective and is also continuous when $\{1,\ldots,l\}^{\mathbb{N}}$ is equipped with the product topology. The sequence space $\{1,\ldots,l\}^{\mathbb{N}}$ comes with a natural left-shift map $\sigma:\{1,\ldots,l\}^{\mathbb{N}}\to\{1,\ldots,l\}^{\mathbb{N}}$ defined via the equation $\sigma((a_j)_{j=1}^{\infty})=(a_{j+1})_{j=1}^{\infty}$. Given a finite word $\a=a_1\cdots a_n,$ we associate its cylinder set
$$[\a]:=\{(b_j)\in\{1,\ldots,l\}^\N:b_1\cdots b_n=a_1\cdots a_n\}.$$
We call a measure $\m$ on $\{1,\ldots,l\}^{\mathbb{N}}$ $\sigma$-invariant if $\m([\a])=\m(\sigma^{-1}([\a]))$ for all finite words $\a$. We call a probability measure $\m$ ergodic if $\sigma^{-1}(A)=A$ implies $\m(A)=0$ or $\m(A)=1$. Given a measure $\m$ on $\{1,\ldots,l\}^{\mathbb{N}},$ we obtain the corresponding pushforward measure $\mu$ supported on $X$ using the map $\pi,$ i.e. $\mu=\m\circ \pi^{-1}$.
We define the dimension of a measure $\mu$ on $\mathbb{R}^d$ to be $$\dim \mu=\inf \{\dim_{H}(A):\mu(A)>0\}.$$ Note that for any pushforward measure $\mu$ we have $\dim \mu\leq \dim_{H}(X).$ The problem of determining $\dim_{H}(X)$ is often solved by finding a $\sigma$-invariant ergodic probability measure whose pushforward has dimension equal to some known upper bound for $\dim_{H}(X)$. This approach is especially useful when the iterated function system is overlapping.
When studying attractors of iterated function systems, one of the guiding principles is that if there is no obvious mechanism preventing an attractor from satisfying a certain property, then one should expect this property to be satisfied. This principle is particularly prevalent in the many conjectures which state that under certain reasonable assumptions, the Hausdorff dimension of $X$, and the Hausdorff dimension of dynamically defined pushforward measures supported on $X$, should equal the value predicted by a certain formula. A particular example of this phenomenon is provided by self-similar sets and self-similar measures. We call a contraction $\phi$ a similarity if there exists $r\in(0,1)$ such that $|\phi(x)-\phi(y)|=r|x-y|$ for all $x,y\in \mathbb{R}^d$. If an IFS $\Phi$ consists of similarities then it is known that
\begin{equation}
\label{dimension upper}
\dim_{H}(X)\leq \min \{\dim_{S}(\Phi),d\},
\end{equation}where $\dim_{S}(\Phi)$ is the unique solution $s$ to $\sum_{i=1}^l r_i^s=1$, with $r_i$ denoting the similarity ratio of $\phi_i$. Given a probability vector $\textbf{p}=(p_1,\ldots,p_l),$ we let $\m_{\textbf{p}}$ denote the corresponding Bernoulli measure supported on $\{1,\ldots,l\}^{\mathbb{N}}.$ If an IFS consists of similarities, then we define the self-similar measure corresponding to $\textbf{p}$ to be $\mu_{\textbf{p}}:=\m_{\textbf{p}}\circ \pi^{-1}.$ The measure $\mu_{\textbf{p}}$ can also be defined as the unique measure satisfying the equation $$\mu_{\textbf{p}}=\sum_{i=1}^l p_i \cdot \mu_{\textbf{p}}\circ \phi_{i}^{-1}.$$ For any self-similar measure $\mu_{\textbf{p}},$ we have the upper bound:
\begin{equation}
\label{expected dimensiona}
\dim \mu_{\textbf{p}}\leq \min\left\{\frac{\sum_{i=1}^l p_i\log p_i}{\sum_{i=1}^l p_i\log r_i},d\right\}.
\end{equation} For an appropriate choice of $\textbf{p},$ it can be shown that equality in \eqref{expected dimensiona} implies equality in \eqref{dimension upper}. An important conjecture states that if an IFS consisting of similarities avoids certain degenerate behaviour, then we should have equality in \eqref{expected dimensiona} for all $\textbf{p},$ and therefore equality in \eqref{dimension upper} (see \cite{Hochman,Hochman2}). In $\mathbb{R}$ this conjecture can be stated succinctly as: If an IFS does not contain an exact overlap, then we should have equality in \eqref{expected dimensiona} for all $\textbf{p}$. Recall that an IFS is said to contain an exact overlap if there exists two distinct words $\a=a_1\cdots a_n$ and $\b=b_1\cdots b_m$ such that $$\phi_{a_1}\circ \cdots \circ \phi_{a_n}=\phi_{b_1}\circ \cdots \circ \phi_{b_m}.$$ In \cite{Hochman} and \cite{Hochman2} significant progress was made towards this conjecture. In particular, in \cite{Hochman} it was shown that for an IFS consisting of similarities acting on $\mathbb{R},$ if strict inequality holds in \eqref{expected dimensiona} for some $\textbf{p},$ then
$$\lim_{n\to\infty}\frac{-\log \Delta_n}{n}=\infty,$$ where $$\Delta_n:=\min_{\a\neq \b\in \{1,\ldots,l\}^n}|(\phi_{a_1}\circ \cdots \circ \phi_{a_n})(0)-(\phi_{b_1}\circ \cdots \circ \phi_{b_n})(0)|.$$ Using this statement, it can be shown that if the parameters defining the IFS are algebraic, and there are no exact overlaps, then equality holds in \eqref{expected dimensiona} for all $\textbf{p},$ and therefore also in \eqref{dimension upper}.
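To illustrate the quantity $\Delta_n$, the following sketch (our own, specialised to the family $\{\lambda x, \lambda x+1\}$ with digit set $\{0,1\}$) computes it by brute force for small $n$.
\begin{verbatim}
from itertools import product

# Compute Delta_n for the IFS {lambda*x, lambda*x + 1}, where
# phi_a(0) = sum_k d_k * lambda^(k-1) over digit words (d_k) in {0,1}^n.
def delta_n(lam, n):
    points = sorted(sum(d * lam**k for k, d in enumerate(word))
                    for word in product((0, 1), repeat=n))
    # the minimal pairwise distance is attained by consecutive points
    return min(b - a for a, b in zip(points, points[1:]))

print(delta_n(0.55, 10))              # a generic parameter: small but nonzero
print(delta_n((5**0.5 - 1) / 2, 10))  # reciprocal golden ratio: ~0
\end{verbatim}
For $\lambda=(\sqrt{5}-1)/2$ one has $\lambda+\lambda^2=1$, so two distinct words give the same point and $\Delta_n$ vanishes up to floating point error; this is an exact overlap in the sense discussed above.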
In addition to expecting equality to hold typically in \eqref{expected dimensiona}, it is expected that if $$\frac{\sum p_i\log p_i}{\sum p_i\log r_i}>d,$$ and the IFS avoids certain obstacles, then $\mu_{\textbf{p}}$ will be absolutely continuous with respect to $d$-dimensional Lebesgue measure. A standard technique for proving an attractor has positive $d$-dimensional Lebesgue measure is to show that there is an absolutely continuous pushforward measure. Note that by a recent result of Simon and V\'{a}g\'{o} \cite{SimVag}, it follows that the list of mechanisms leading to the failure of absolute continuity is strictly greater than the list of mechanisms leading to the failure of equality in \eqref{expected dimensiona}.
The usual methods for gauging how an iterated function system overlaps are to determine whether the Hausdorff dimension of the attractor satisfies a certain formula, to determine whether the dimension of pushforwards of dynamically-defined measures satisfy a certain formula, and to determine whether these measures are absolutely continuous with respect to the $d$-dimensional Lebesgue measure. If an IFS did not exhibit the expected behaviour, then this would be indicative of something degenerate within our IFS that was either preventing $X$ from being well spread out within $\mathbb{R}^d$, or was forcing mass from the pushforward measure into some small subregion of $\mathbb{R}^d$. This method for gauging how an iterated function system overlaps has its limitations. If each of the expected behaviours described above occurs for two distinct IFSs within a family, then we have no method for distinguishing their overlapping behaviour. The approach put forward in this paper shows how we can still make a distinction (see the discussion in Section \ref{new methods}). As previously stated this approach is inspired by results from Diophantine approximation and metric number theory. We now take the opportunity to briefly recall some background from this area.
\subsection{Diophantine approximation and metric number theory}
Given $\Psi:\mathbb{N}\to[0,\infty)$ we can define a limsup set defined in terms of neighbourhoods of rationals as follows. Let
$$J(\Psi):=\Big\{x\in\mathbb{R}:\Big|x-\frac{p}{q}\Big|\leq \Psi(q) \textrm{ for i.m. } (p,q)\in \mathbb{Z}\times\mathbb{N}\Big\}.$$ Here and throughout we use i.m. as a shorthand for infinitely many. If $x\in J(\Psi)$ we say that $x$ is $\Psi$-approximable. An immediate application of the Borel-Cantelli lemma implies that if $\sum_{q=1}^{\infty}q\cdot \Psi(q)<\infty,$ then $J(\Psi)$ has zero Lebesgue measure. The following theorem due to Khintchine shows that a partial converse to this statement holds. This theorem motivates much of the present work.
\begin{thm}[Khintchine \cite{Khit}]
\label{Khintchine}
If $\Psi:\mathbb{N}\to [0,\infty)$ is decreasing and $$\sum_{q=1}^{\infty}q\cdot \Psi(q)=\infty,$$ then Lebesgue almost every $x\in \mathbb{R}$ is $\Psi$-approximable.
\end{thm}Results analogous to Khintchine's theorem are ubiquitous in Diophantine approximation and metric number theory. We refer the reader to \cite{BDV} for more examples.
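For instance, $\Psi(q)=1/(q^{2}\log q)$ is decreasing and
$$\sum_{q=2}^{\infty}q\cdot \Psi(q)=\sum_{q=2}^{\infty}\frac{1}{q\log q}=\infty,$$
so Lebesgue almost every $x$ is $\Psi$-approximable, whereas $\Psi(q)=1/(q^{2}\log^{2} q)$ yields a convergent volume sum, and hence $J(\Psi)$ is a Lebesgue null set.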
By an example of Duffin and Schaeffer, it can be seen that it is not possible to remove the decreasing assumption from Theorem \ref{Khintchine}. Indeed in \cite{DufSch} they constructed a $\Psi$ such that $\sum_{q=1}^{\infty}q\cdot \Psi(q)=\infty,$ yet $J(\Psi)$ has zero Lebesgue measure. This gave rise to a conjecture known as the Duffin-Schaeffer conjecture which was recently proved by Koukoulopoulos and Maynard in \cite{KouMay}.
\begin{thm}[\cite{KouMay}]
If $\Psi:\mathbb{N}\to [0,\infty)$ satisfies $$\sum_{q=1}^{\infty}\varphi(q)\cdot\Psi(q)=\infty,$$ then Lebesgue almost every $x\in \mathbb{R}$ is $\Psi$-approximable.
\end{thm}
Here $\varphi$ is the Euler totient function.
By studying the Lebesgue measure of $J(\Psi)$ for those $\Psi$ satisfying $\sum_{q=1}^{\infty}q\cdot \Psi(q)=\infty,$ we obtain a quantitative description of how the rationals are distributed within the reals. The example of Duffin and Schaeffer demonstrates that there exists some interesting non-trivial interactions occurring between fractions of different denominator.
\subsection{Two families of limsup sets}
\label{two families}
Before defining the limsup sets we study in this paper, it is necessary to introduce some notation. In what follows we let $$\D:=\{1,\ldots,l\},\quad\, \D^{*}:=\bigcup_{j=1}^{\infty}\{1,\ldots,l\}^j,\quad \, \D^{\mathbb{N}}:=\{1,\ldots,l\}^{\mathbb{N}}.$$ Given an IFS $\Phi=\{\phi_i\}_{i\in \D}$ and $\a=a_1\cdots a_n\in \D^{*},$ let $$\phi_\a:=\phi_{a_1}\circ \cdots \circ \phi_{a_n}.$$ Let $|\a|$ denote the length of $\a\in \D^*$. If $\Phi$ has attractor $X,$ then for each $\a\in \D^*$ let $$X_{\a}:=\phi_{\a}(X).$$
\subsubsection{The set $W_{\Phi}(z,\Psi)$}
Given an IFS $\Phi$, $\Psi:\D^*\to [0,\infty),$ and an arbitrary $z\in X,$ we let $$W_{\Phi}(z,\Psi):=\Big\{x\in \mathbb{R}^d: |x-\phi_\a(z)|\leq \Psi(\a) \textrm{ for i.m. }\a \in \D^*\Big\}.$$ Throughout this paper we will always have the underlying assumption that $\Psi$ satisfies $$\lim_{n\to\infty}\max_{\a\in \D^n}\Psi(\a)= 0.$$ This condition guarantees $$W_{\Phi}(z,\Psi)\subseteq X.$$ The study of the metric properties of $W_{\Phi}(z,\Psi)$ will be one of the main focuses of this paper. Proceeding via analogy with Khintchine's theorem, it is natural to wonder what metric properties of $W_{\Phi}(z,\Psi)$ are encoded in the volume sum:
\begin{equation}
\label{con/div}
\sum_{n=1}^{\infty}\sum_{\a\in \D^n}\Psi(\a)^{\dim_{H}(X)}.
\end{equation}It is an almost immediate consequence of the definition of Hausdorff measure that if we have convergence in \eqref{con/div}, then $\mathcal{H}^{\dim_{H}(X)}(W_{\Phi}(z,\Psi))=0$ for all $z\in X$. Given the results mentioned in the previous section, it is reasonable to expect that divergence in \eqref{con/div} might imply some metric property of $W_{\Phi}(z,\Psi)$ which demonstrates that a typical element of $X$ is contained in $W_{\Phi}(z,\Psi)$. A classification of those $\Psi$ for which divergence in \eqref{con/div} implies a typical element of $X$ is contained in $W_{\Phi}(z,\Psi)$ would provide a quantitative description of how the images of $z$ are distributed within $X$. This in turn provides a description of how the underlying iterated function system overlaps. This idea provides us with a new tool for describing the overlapping behaviour of iterated function systems. We refer the reader to Section \ref{new methods} for further discussions which demonstrate the utility of this idea.
The question of whether divergence in \eqref{con/div} implies a typical element of $X$ is contained in $W_{\Phi}(z,\Psi)$ was studied previously by the author in \cite{Bak2,Bak,Bakerapprox2}. Related work appears in \cite{LSV,PerRev,PerRevA}. In \cite{Bak2} the following theorem was proved:
\begin{thm}\cite[Theorem 1.4]{Bak2}
\label{conformal theorem}
If $\Phi$ is a conformal iterated function system and satisfies the open set condition, then for any $z\in X,$ if $\theta:\mathbb{N}\to [0,\infty)$ is a decreasing function and satisfies $$\sum_{n=1}^{\infty} \sum_{\a\in\D^{n}} (Diam(X_\a)\theta(n))^{\dim_{H}(X)}=\infty,$$ then $\mathcal{H}^{\dim_{H}(X)}$-almost every $x\in X$ is contained in $W_{\Phi}(z,Diam(X_\a)\theta(|\a|)).$
\end{thm}
Note that for a conformal iterated function system it is known that the open set condition implies $0<\mathcal{H}^{\dim_{H}(X)}(X)<\infty$ (see \cite{PRSS}). For the definition of a conformal iterated function system see Section \ref{self-conformal section}. Note that an iterated function system consisting of similarities is automatically a conformal iterated function system. In \cite[Theorem 6.1]{Bak2} it was also shown that if $\Phi$ is a conformal iterated function system and contains an exact overlap, then there exist many natural choices of $\Psi$ such that we have divergence in \eqref{con/div}, yet $\dim_{H}(W_{\Phi}(z,\Psi))<\dim_{H}(X).$ As such, an exact overlap effectively prevents any Khintchine-like behaviour.
In \cite{Bak} and \cite{Bakerapprox2} the author studied the family of IFSs $\Phi_{\lambda}:=\{\lambda x,\lambda x +1\},$ where $\lambda\in(1/2,1)$. For each element of this family the corresponding attractor is $[0,\frac{1}{1-\lambda}].$ In \cite{Bak} the author proved that if the reciprocal of $\lambda$ belongs to a special class of algebraic integers known as Garsia numbers, then for a general class of $\Psi$, divergence in \eqref{con/div} implies that for all $z\in [0,\frac{1}{1-\lambda}],$ Lebesgue almost every $x\in [0,\frac{1}{1-\lambda}]$ is contained in $W_{\Phi_{\lambda}}(z,\Psi)$. For more on this result and Garsia numbers we refer the reader to Section \ref{Examples} where this result is recovered using a different argument. The main result of \cite{Bak2} provides strong evidence to suggest that for a general class of $\Psi$, for a typical $\lambda\in (1/2,1),$ we should expect that divergence in \eqref{con/div} implies that Lebesgue almost every $x\in [0,\frac{1}{1-\lambda}]$ is contained in $W_{\Phi_{\lambda}}(z,\Psi)$. A consequence of the main result of \cite{Bak2} is that for Lebesgue almost every $\lambda\in (1/2,0.668),$ for all $z\in [0,\frac{1}{1-\lambda}],$ Lebesgue almost every $x\in [0,\frac{1}{1-\lambda}]$ is contained in $W_{\Phi_{\lambda}}(z,\frac{\log |\a|}{2^{|\a|}})$. Note that the results in \cite{Bak} and \cite{Bakerapprox2} are phrased for $z=0$ but can easily be adapted to the case of arbitrary $z\in[0,\frac{1}{1-\lambda}]$.
\subsubsection{The set $U_{\Phi}(z,\m,h)$}
\label{auxillary sets}
Instead of studying the sets $W_{\Phi}(z,\Psi)$ directly, it is more profitable to study a related family of auxiliary sets. These sets are interesting in their own right and are defined in terms of a measure $\m$ supported on $\D^{\mathbb{N}}$. Our approach does not work for all $\m$, and we will require the following additional regularity assumption.
Given a probability measure $\m$ supported on $\D^{\mathbb{N}},$ we let
$$c_{\m}:=\mathop{\mathrm{ess\,inf}}_{(a_j)\in\D^{\mathbb{N}}}\,\inf_{k\in\mathbb{N}}\frac{\m([a_1\cdots a_{k+1}])}{\m([a_1\cdots a_k])},$$ where the essential infimum is taken with respect to $\m$. We say that $\m$ is slowly decaying if $c_{\m}>0$. If $\m$ is slowly decaying, then for $\m$-almost every $(a_j)\in \D^{\mathbb{N}},$ we have $$\frac{\m([a_1 \ldots a_{k+1}])}{\m([a_1\ldots a_k])}\geq c_{\m},$$ for all $k\in\mathbb{N}.$ Examples of slowly decaying measures include Bernoulli measures and Gibbs measures for H\"{o}lder continuous potentials (see \cite{Bow}). In fact, any measure with the quasi-Bernoulli property is slowly decaying.
Given a slowly decaying probability measure $\m$, for each $n\in\mathbb{N}$ we let $$L_{\m,n}:=\{\a\in\D^*: \m([a_1\cdots a_{|\a|}])\leq c_{\m}^n<\m([a_1\cdots a_{|\a|-1}])\}$$ and $$R_{\m,n}:=\#L_{\m,n}.$$ The elements of $L_{\m,n}$ are disjoint and the union of their cylinders has full $\m$ measure. Importantly, by the slowly decaying property, the cylinders corresponding to elements of $L_{\m,n}$ have comparable measure up to a multiplicative constant. Note that when $\m$ is the uniform $(1/l,\ldots,1/l)$ Bernoulli measure the set $L_{\m,n}$ is simply $\D^n$.
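As an illustrative sketch (our own, specialised to a Bernoulli measure with weight vector $p$, for which $c_{\m}=\min_i p_i$), the stopping sets $L_{\m,n}$ can be enumerated as follows.
\begin{verbatim}
# Enumerate L_{m,n} for a Bernoulli measure with weights p: the words
# whose cylinder measure first drops to c^n or below, where c = min(p).
def stopping_words(p, n):
    threshold = min(p) ** n
    words, stack = [], [((), 1.0)]       # pairs (word, cylinder measure)
    while stack:
        word, mass = stack.pop()
        for i, p_i in enumerate(p):
            child, child_mass = word + (i,), mass * p_i
            if child_mass <= threshold:  # m([a]) <= c^n < m([parent])
                words.append(child)
            else:
                stack.append((child, child_mass))
    return words

L = stopping_words([0.7, 0.3], 5)
print(len(L), {len(w) for w in L})       # word lengths vary with the weights
\end{verbatim}
For the uniform vector $p=(1/l,\ldots,1/l)$ every word in the output has length exactly $n$, recovering the observation that $L_{\m,n}=\D^{n}$ in this case.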
Given $z\in X$ and a slowly decaying probability measure $\m,$ we let $$Y_{\m,n}(z):=\{\phi_{\a}(z)\}_{\a\in L_{\m,n}}.$$ Obtaining information on how the elements of $Y_{\m,n}(z)$ are distributed within $X$ for different values of $n$ will occupy a large part of this paper.
Given a slowly decaying measure $\m,$ an IFS $\Phi$, $h:\mathbb{N}\to[0,\infty),$ and $z\in X,$ we can define a limsup set as follows. Let
$$U_{\Phi}(z,\m,h):=\left\{x\in \mathbb{R}^d:x\in \bigcup_{\a\in L_{\m,n}}B\left(\phi_{\a}(z),(\m([\a])h(n))^{1/d}\right) \textrm{ for i.m. } n\in \N\right\}.$$ Here and throughout $B(x,r)$ denotes the closed Euclidean ball centred at $x$ with radius $r$.
Throughout this paper we will always assume that $\m$ is non-atomic and $h$ is a bounded function. These properties ensure $$U_{\Phi}(z,\m,h)\subseteq X.$$ In this paper we study the metric properties of the sets $U_{\Phi}(z,\m,h)$ for parameterised families of IFSs when the underlying attractor typically has positive $d$-dimensional Lebesgue measure. In which case, for the set $U_{\Phi}(z,\m,h),$ the appropriate volume sum that we expect to determine the Lebesgue measure of $U_{\Phi}(z,\m,h)$ is $$\sum_{n=1}^{\infty}h(n).$$ It can be shown using the Borel-Cantelli lemma that if $\sum_{n=1}^{\infty}h(n)<\infty,$ then $U_{\Phi}(z,\m,h)$ has zero Lebesgue measure. For us the interesting question is: When does $\sum_{n=1}^{\infty}h(n)=\infty$ imply that $U_{\Phi}(z,\m,h)$ has positive or full Lebesgue measure?
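The convergence direction is straightforward: since the cylinders indexed by $L_{\m,n}$ are disjoint, $\sum_{\a\in L_{\m,n}}\m([\a])=1,$ so the union of balls appearing at stage $n$ in the definition of $U_{\Phi}(z,\m,h)$ has Lebesgue measure at most
$$\sum_{\a\in L_{\m,n}}v_d\,\m([\a])h(n)=v_d\,h(n),$$
where $v_d$ denotes the volume of the unit ball in $\mathbb{R}^d$. Convergence of $\sum_{n=1}^{\infty}h(n)$ therefore allows the Borel-Cantelli lemma to be applied.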
The sets $U_{\Phi}(z,\m,h)$ are easier to work with than the sets $W_{\Phi}(z,\Psi)$. In particular we can use properties of the measure $\m$ to aid with our analysis. As we will see, the sets $U_{\Phi}(z,\m,h)$ can be used to prove results for the sets $W_{\Phi}(z,\Psi),$ but only under the following additional assumption. Given a slowly decaying measure $\m$ and $h:\mathbb{N}\to[0,\infty),$ we say that $\Psi$ is equivalent to $(\m,h)$ if
$$\Psi(\a)\asymp \big(\m([\a])h(n)\big)^{1/d}$$
for each $\a\in L_{\m,n}$ for all $n\in \N$. Here and throughout, for two real valued functions $f$ and $g$ defined on some set $S$, we write $f\asymp g$ if there exists a positive constant $C$ such that $$C^{-1}\cdot g(x)\leq f(x)\leq Cg(x)$$ for all $x\in S$. As we will see, if $\Psi$ is equivalent to $(\m,h)$ and $U_{\Phi}(z,\m,h)$ has positive Lebesgue measure, then $W_{\Phi}(z,\Psi)$ will also have positive Lebesgue measure (see Lemma \ref{arbitrarily small}).
\section{Statement of results}
\label{statement of results}
Before stating our theorems we need to define the entropy of a measure $\m$ supported on $\D^{\mathbb{N}}$ and introduce a class of functions that are the natural setting for some of our results.
For any $\sigma$-invariant measure $\m$ supported on $\D^{\N},$ we define the entropy of $\m$ to be
$$\mathfrak{h}(\m):=\lim_{n\to\infty}-\frac{1}{n}\sum_{\a\in\D^n}\m([\a])\log \m([\a]).$$ The entropy of a $\sigma$-invariant measure always exists.
Given a set $B\subset \mathbb{N},$ we define the lower density of $B$ to be $$\underline{d}(B):=\liminf_{n\to\infty}\frac{\#\{1\leq j\leq n:j\in B\}}{n},$$ and the upper density of $B$ to be
$$\overline{d}(B):=\limsup_{n\to\infty}\frac{\#\{1\leq j\leq n:j\in B\}}{n}.$$ Given $\epsilon>0,$ let
$$H_{\epsilon}^*:=\left\{h:\mathbb{N}\to[0,\infty):\sum_{n\in B}h(n)=\infty\,, \forall B\subseteq \N \textrm{ s.t. } \underline{d}(B)>1-\epsilon\right\}$$ and
$$H_{\epsilon}:=\left\{h:\mathbb{N}\to[0,\infty):\sum_{n\in B}h(n)=\infty\,, \forall B\subseteq \N \textrm{ s.t. } \overline{d}(B)>1-\epsilon\right\}.$$ We also define
\begin{equation}
\label{H^* functions}
H^*:=\bigcup_{\epsilon\in(0,1)}H^*_{\epsilon}
\end{equation}and
\begin{equation}
\label{H functions} H:=\bigcup_{\epsilon\in(0,1)}H_{\epsilon}.
\end{equation}For any $\epsilon>0$ we have $H_{\epsilon}\subset H_{\epsilon}^*.$ Therefore $H\subset H^*.$ It can be shown that $H^*$ contains all decreasing functions satisfying $\sum_{n=1}^{\infty}h(n)=\infty$. Most of the time we will be concerned with the class of functions $H$. The class $H^*$ will only appear in Theorem \ref{precise result}.
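For example, $h(n)=1/n$ belongs to $H_{\epsilon}$ for every $\epsilon\in(0,1)$. Indeed, if $\overline{d}(B)=\delta>0,$ then one may choose a sequence $(N_k)$ with $N_{k+1}\geq 4N_{k}/\delta$ and $\#(B\cap[1,N_{k+1}])\geq \delta N_{k+1}/2,$ so that each block $B\cap(N_{k},N_{k+1}]$ contains at least $\delta N_{k+1}/4$ integers and hence contributes at least $\delta/4$ to $\sum_{n\in B}1/n$, which therefore diverges.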
We say that a function $\Psi:\D^*\to [0,\infty)$ is weakly decaying if $$\inf_{\a\in \D^*}\min_{i\in\D}\frac{\Psi(i\a)}{\Psi(\a)}>0.$$ Given a measure $\m$ supported on $\D^{\N},$ we let $$\Upsilon_{\m}:=\left\{\Psi:\D^*\to[0,\infty): \Psi \textrm{ is weakly decaying and equivalent to }(\m,h) \textrm{ for some }h\in H\right\}.$$ As we will see, the weakly decaying property will allow us to obtain full measure statements.
\subsection{Parameterised families with variable contraction ratios}
Let $D:=\{d_1,\ldots, d_{l}\}$ be a finite set of real numbers. To each $\lambda\in(0,1),$ we associate the iterated function system $$\Phi_{\lambda,D}:=\left\{\phi_i(x)=\lambda x + d_i\right\}.$$ It is straightforward to check that the corresponding attractor for $\Phi_{\lambda,D}$ is $$X_{\lambda,D}:=\left\{\sum_{j=0}^{\infty}d_j\lambda^j: d_j\in D\right\},$$ and the projection map $\pi_{\lambda,D}:\D^{\mathbb{N}}\to X_{\lambda,D}$ takes the form $$\pi_{\lambda,D}((a_j)_{j=1}^{\infty})=\sum_{j=1}^{\infty}d_{a_{j}}\lambda^{j-1}.$$ To study this family of iterated function systems, it is useful to study the set $\Gamma:=D-D$ and the corresponding class of power series $$\mathcal{B}_{\Gamma}:=\Big\{g(x)=\sum_{j=0}^{\infty}g_jx^j:g_j\in \Gamma\Big\}.$$ To each $\mathcal{B}_{\Gamma}$ we associate the set $$\Lambda(\mathcal{B}_{\Gamma}):=\left\{\lambda\in(0,1):\exists g\in \mathcal{B}_{\Gamma}, g\not\equiv 0, g(\lambda)=g'(\lambda)=0\right\}.$$ In other words, $\Lambda(\mathcal{B}_{\Gamma})$ is the set of $\lambda\in(0,1)$ that can be realised as a double zero for a non-trivial function in $\mathcal{B}_{\Gamma}.$ We let $$\alpha(\mathcal{B}_{\Gamma})=\inf \Lambda(\mathcal{B}_{\Gamma}),$$ if $\Lambda(\mathcal{B}_{\Gamma})\neq \emptyset,$ and let $\alpha(\mathcal{B}_{\Gamma})=1$ otherwise.
These families of iterated function systems were originally studied by Solomyak in \cite{Sol}. He was interested in the absolute continuity of self-similar measures, in particular in the pushforward of the uniform $(1/l,\ldots,1/l)$ Bernoulli measure. We denote this measure by $\mu_{\lambda,D}.$ The main result of \cite{Sol} is the following theorem.
\begin{thm}
\label{Solomyak transverality theorem}
For Lebesgue almost every $\lambda\in(1/l,\alpha(\mathcal{B}_{\Gamma})),$ the measure $\mu_{\lambda,D}$ is absolutely continuous and has a density in $L^{2}(\mathbb{R})$.
\end{thm}Using Theorem \ref{Solomyak transverality theorem}, Solomyak proved the well-known result that for Lebesgue almost every $\lambda\in(1/2,1),$ the unbiased Bernoulli convolution is absolutely continuous and has a density in $L^{2}(\mathbb{R})$. As a by-product of our analysis, in Section \ref{applications} we give a short intuitive proof that for Lebesgue almost every $\lambda\in(1/2,1),$ the unbiased Bernoulli convolution is absolutely continuous. Instead of using the Fourier transform or differentiating measures, as in \cite{Sol} and \cite{PerSol}, our proof makes use of the fact that self-similar measures are of pure type, i.e. they are either singular or absolutely continuous with respect to the Lebesgue measure. As a further by-product of our analysis, in Section \ref{applications} we recover another result of Solomyak from \cite{Sol}. We prove that for Lebesgue almost every $\lambda\in(1/3,2/5),$ the set $$C_{\lambda}:=\left\{\sum_{j=0}^{\infty}d_j\lambda^j: d_j\in \{0,1,3\}\right\}$$
has positive Lebesgue measure. Interestingly, our proof of this statement does not rely on showing that there is an absolutely continuous measure supported on this set. Instead we study a subset of this set, and show that for Lebesgue almost every $\lambda\in(1/3,2/5),$ this subset has positive Lebesgue measure.
For the families of iterated function systems introduced in this section, our main result is the following.
\begin{thm}
\label{1d thm}
Let $D$ be a finite set of real numbers. The following statements are true:
\begin{enumerate} \item Let $\m$ be a slowly decaying $\sigma$-invariant ergodic probability measure with $\mathfrak{h}(\m)>0$ and $(a_j)\in \D^{\N}.$ For Lebesgue almost every $\lambda\in(e^{-\h(\m)},\alpha(\mathcal{B}_{\Gamma})),$ for any $h\in H$ the set $U_{\Phi_{\lambda,D}}(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1},\m,h)$ has positive Lebesgue measure.
\item Let $\m$ be the uniform $(1/l,\cdots, 1/l)$ Bernoulli measure. For Lebesgue almost every $\lambda\in(1/l,\alpha(\mathcal{B}_{\Gamma})),$ for any $z\in X_{\lambda,D}$ and $h\in H$, the set $U_{\Phi_{\lambda,D}}(z,\m,h)$ has positive Lebesgue measure.
\item Let $\m$ be a slowly decaying $\sigma$-invariant ergodic probability measure with $\mathfrak{h}(\m)>0$ and $(a_j)\in \D^{\N}.$ For Lebesgue almost every $\lambda\in(e^{-\h(\m)},\alpha(\mathcal{B}_{\Gamma})),$ for any $\Psi\in \Upsilon_{\m}$ Lebesgue almost every $x\in X_{\lambda,D}$ is contained in $W_{\Phi_{\lambda,D}}(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1},\Psi).$
\item Let $\m$ be the uniform $(1/l,\cdots, 1/l)$ Bernoulli measure. For Lebesgue almost every $\lambda\in(1/l,\alpha(\mathcal{B}_{\Gamma})),$ for any $z\in X_{\lambda,D}$ and $\Psi\in \Upsilon_{\m},$ Lebesgue almost every $x\in X_{\lambda,D}$ is contained in $W_{\Phi_{\lambda,D}}(z,\Psi).$
\end{enumerate}
\end{thm}
To aid with our exposition we will prove in Section \ref{applications} the following corollary to Theorem \ref{1d thm}.
\begin{cor}
\label{example cor}
Let $D$ be a finite set of real numbers and $\m$ be a Bernoulli measure corresponding to the probability vector $(p_1,\ldots,p_l)$. Then for any $(a_j)\in \D^{\N}$, for Lebesgue almost every $\lambda\in(\prod_{i=1}^lp_i^{p_i},\alpha(\mathcal{B}_{\Gamma})),$ Lebesgue almost every $x\in X_{\lambda,D}$ is contained in the set $$\Big\{x\in X_{\lambda,D}: \Big|x-\phi_{\a}\Big(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1}\Big)\Big|\leq \frac{\prod_{j=1}^{|\a|}p_{a_j}}{|\a|} \textrm{ for i.m. }\a\in \D^{*}\Big\}.$$
\end{cor}
In Section \ref{applications} we will apply these results to obtain more explicit statements in the setting of Bernoulli convolutions and the $\{0,1,3\}$ problem.
Certain lower bounds for the transversality constant $\alpha(B_{\Gamma})$ are known. Let $D$ be a finite set of real numbers and assume $d_j\neq d_k$ for all $j\neq k,$ so $$b(D):=\max\left\{\left|\frac{d_j-d_l}{d_k-d_i}\right|:k\neq i\right\}<\infty.$$ The proposition stated below provides a summary of the lower bounds obtained separately in \cite{PerSol2}, \cite{PolSimon}, and \cite{ShmSol2}.
\begin{prop}
\label{transversality constants}
Let $D$ be a finite set of real numbers and $b(D)$ be as above. Then the following statements are true:
\begin{itemize}
\item If $b(D)=1$ then $\alpha(B_{\Gamma})>0.668.$
\item If $b(D)=2$ then $\alpha(B_{\Gamma})=0.5.$
\item $\alpha(B_{\Gamma})= (b(D)+1)^{-1}$ whenever $b(D)\geq 3+\sqrt{8}.$
\item $\alpha(B_{\Gamma})\geq (b(D)+1)^{-1}$ for all $D$.
\end{itemize}
\end{prop}
\subsection{Parameterised families with variable translations}
Suppose $\{A_i\}_{i=1}^l$ is a collection of $d\times d$ non-singular matrices each satisfying $\|A_i\|<1.$ Here $\|\cdot\|$ denotes the operator norm induced by the Euclidean norm. Given a vector $\t=(t_1,\ldots, t_l)\in \mathbb{R}^{ld}$ we can define an IFS to be the set of contractions $$\Phi_{\t}:=\{\phi_{i}(x)=A_i x+ t_i\}_{i=1}^l.$$ Unlike in the previous section where we obtained a family of iterated function systems by varying the contraction ratio, here we obtain a family by varying the translation parameter $\t$. For each $\t\in\mathbb{R}^{ld}$ we denote the attractor by $X_\t,$ and the corresponding projection map from $\D^\N$ to $X_\t$ by $\pi_{\t}$. The attractor $X_\t$ is commonly referred to as a self-affine set.
This family of iterated function systems was introduced by Falconer in \cite{Fal}, and subsequently studied by Solomyak in \cite{Sol2}, and later by Jordan, Pollicott, and Simon in \cite{JoPoSi}. For this family an important result is the following.
\begin{thm}[Falconer \cite{Fal}, Solomyak \cite{Sol2}]
\label{FalSol}
Assume the $A_i$ satisfy the additional hypothesis that $\|A_i\|<1/2$ for all $1\leq i\leq l$. Then for Lebesgue almost every $\t\in\mathbb{R}^{ld}$ the attractor $X_\t$ satisfies:
$$\dim_{H}(X_\t)=\dim_{B}(X_\t)=\min\{\dim_{A}(A_1,\ldots,A_l),d\}.$$
\end{thm}Here $\dim_{A}(A_1,\ldots,A_l)$ is a quantity known as the affinity dimension. For its definition see \cite{Fal}.
Theorem \ref{FalSol} was originally proved by Falconer in \cite{Fal} under the assumption $\|A_i\|<1/3$ for all $1\leq i\leq l$. This upper bound was improved to $1/2$ by Solomyak in \cite{Sol2}. The bound $1/2$ is known to be optimal (see \cite{Edg,SiSo}). An analogue of Theorem \ref{FalSol} for measures was obtained by Jordan, Pollicott, and Simon in \cite{JoPoSi}. A recent result of B\'{a}r\'{a}ny, Hochman, and Rapaport \cite{BaHoRa} significantly improves upon Theorem \ref{FalSol}. They proved that we have $\dim_{H}(X_\t)=\dim_{B}(X_\t)=\min\{\dim_{A}(A_1,\ldots,A_l),d\}$ under some very general assumptions on the $A_i$ and $\t$. In particular, their result gives rise to many explicit examples where equality is satisfied.
Given $\a=a_1\cdots a_n\in \D^*,$ we let $$A_{\a}:=A_{a_1}\circ \cdots \circ A_{a_{n}},$$ and $$1>\alpha_1(A_{\a})\geq \alpha_2(A_{\a})\geq \cdots \geq \alpha_d(A_{\a})>0$$ denote the singular values of $A_{\a}$. The singular values of a non-singular matrix $A$ are the positive square roots of the eigenvalues of $AA^{T}.$ Alternatively, they are the lengths of the semiaxes of the ellipsoid $A(B(0,1)).$ Given a $\sigma$-invariant ergodic probability measure $\m,$ there exist negative constants $\lambda_{1}(\m),\ldots, \lambda_{d}(\m),$ such that for $\m$-almost every $(a_j)\in \D^{\mathbb{N}}$ we have $$\lim_{n\to\infty}\frac{\log \alpha_{k}(A_{a_1\cdots a_n})}{n}=\lambda_{k}(\m),$$ for all $1\leq k\leq d$. We call the numbers $\lambda_{1}(\m),\ldots, \lambda_{d}(\m)$ the Lyapunov exponents of $\m$. The existence of Lyapunov exponents for $\sigma$-invariant ergodic measures $\m$ was established in \cite{JoPoSi}.
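As a numerical illustration (the matrices below are an arbitrary choice, not tied to any result in this paper), the Lyapunov exponents of the uniform Bernoulli measure can be estimated with the standard QR iteration for random matrix products.
\begin{verbatim}
import numpy as np

# Estimate (lambda_1, lambda_2) for the uniform Bernoulli measure by
# the QR method: accumulate log|diag(R)| along a long random product,
# renormalising at every step to avoid underflow.
rng = np.random.default_rng(0)
A = [np.array([[0.4, 0.1], [0.0, 0.3]]),
     np.array([[0.3, 0.0], [0.2, 0.4]])]

def lyapunov_estimate(n=50000):
    Q, logs = np.eye(2), np.zeros(2)
    for i in rng.integers(0, len(A), size=n):
        Q, R = np.linalg.qr(A[i] @ Q)
        logs += np.log(np.abs(np.diag(R)))
    return logs / n

print(lyapunov_estimate())   # both entries are negative for contractions
\end{verbatim}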
The theorem stated below is our main result for this family of iterated function systems.
\begin{thm}
\label{translation thm}
Suppose $\|A_i\|<1/2$ for all $1\leq i\leq l$. Then the following statements are true:
\begin{enumerate}
\item Let $\m$ be a slowly decaying $\sigma$-invariant ergodic probability measure with $\h(\m)>-(\lambda_1(\m)+\cdots +\lambda_d(\m))$ and $(a_j)\in \D^{\N}.$ For Lebesgue almost every $\t\in\mathbb{R}^{ld}$, for any $h\in H$ the set $U_{\Phi_\t}(\pi_\t(a_j),\m,h)$ has positive Lebesgue measure.
\item Let $\m$ be the uniform $(1/l,\ldots,1/l)$ Bernoulli measure and suppose there exists $A$ such that $A_i=A$ for all $1\leq i\leq l$.
If $\log l> -(\lambda_1(\m)+\cdots +\lambda_d(\m)),$
then for Lebesgue almost every $\t\in\mathbb{R}^{ld}$, for any $z\in X_{\t}$ and $h\in H$, the set $U_{\Phi_\t}(z,\m,h)$ has positive Lebesgue measure.
\item Let $\m$ be a slowly decaying $\sigma$-invariant ergodic probability measure and $(a_j)\in \D^{\N}$. Suppose that $\h(\m)>-(\lambda_1(\m)+\cdots +\lambda_d(\m))$ and one of the following three properties is satisfied:
\begin{itemize}
\item Each $A_i$ is a similarity.
\item $d=2$ and all the matrices $A_i$ are equal.
\item All the matrices $A_i$ are simultaneously diagonalisable.
\end{itemize} Then for Lebesgue almost every $\t\in\mathbb{R}^{ld}$, for any $\Psi\in \Upsilon_{\m},$ Lebesgue almost every $x\in X_\t$ is contained in $W_{\Phi_\t}(\pi_\t(a_j),\Psi).$
\item Let $\m$ be the uniform $(1/l,\ldots,1/l)$ Bernoulli measure and suppose there exists $A$ such that $A_i=A$ for all $1\leq i\leq l$. Suppose that $\log l>-(\lambda_1(\m)+\cdots +\lambda_d(\m))$ and one of the following three properties is satisfied:
\begin{itemize}
\item $A$ is a similarity.
\item $d=2$.
\item The matrix $A$ is diagonalisable.
\end{itemize} Then for Lebesgue almost every $\t\in\mathbb{R}^{ld}$, for any $z\in X_\t$ and $\Psi\in \Upsilon_{\m}$, Lebesgue almost every $x\in X_\t$ is contained in $W_{\Phi_\t}(z,\Psi).$
\end{enumerate}
\end{thm}
The following corollary follows immediately from Theorem \ref{translation thm}.
\begin{cor}
\label{translation cor}
Suppose there exists $\lambda\in(0,1/2)$ and $O\in O(d)$ such that $A_i=\lambda\cdot O$ for all $1\leq i \leq l$. Then if $\frac{\log l}{-\log \lambda}>d,$ we have that for Lebesgue almost every $\t\in \R^{ld}$, for any $z\in X_{\t},$ Lebesgue almost every $x\in X_{\t}$ is contained in the set $$\left\{x\in\mathbb{R}^d:|x-\phi_{\a}(z)|\leq \left( \frac{l^{-|\a|}}{|\a|}\right)^{1/d}\textrm{ for i.m. }\a\in \D^{*}\right\}.$$
\end{cor}
The assumption $\|A_i\|<1/2$ appearing in Theorem \ref{translation thm} is necessary as the example below shows.
\begin{example}
\label{counterexample}
Consider the iterated function system $\Phi_{\lambda,t_1,t_2}=\{\lambda x +t_1, \lambda x +t_2\},$ where $\lambda\in(1/2,1)$ and $t_1,t_2\in\mathbb{R}$. Whenever $t_1\neq t_2$ we can apply a change of coordinates and identify this iterated function system with $\{\lambda x, \lambda x +1\}.$ For any $\epsilon>0,$ there exists $\lambda^*\in(1/2,1/2+\epsilon)$ such that $\{\lambda^* x, \lambda^* x +1\}$ contains an exact overlap. Using this fact and our change of coordinates, it can be shown that $U_{\Phi_{\lambda^*,t_1,t_2}}(\pi(a_j),\m,h)$ has zero Lebesgue measure when $\m$ is the $(1/2,1/2)$ Bernoulli measure and $h$ is any bounded function.
\end{example} Even though Example \ref{counterexample} demonstrates that the condition $\|A_i\|<1/2$ cannot simply be dropped, the author expects Theorem \ref{translation thm} to hold more generally. In this paper we prove a random version of Theorem \ref{translation thm} which supports this claim. This random version is based upon the randomly perturbed self-affine sets studied in \cite{JoPoSi}. Our setup is taken directly from \cite{JoPoSi}.
Fix a set of matrices $\{A_i\}_{i=1}^l$ each satisfying $\|A_i\|<1,$ and a vector $\t=(t_1,\ldots,t_l)\in\mathbb{R}^{ld}$. We obtain a randomly perturbed version of the IFS $\Phi=\{\phi_i(x)=A_i x +t_i\}$ in the following way. Suppose that $\eta$ is an absolutely continuous distribution with density supported on a disc $\mathbf{D}$. The distribution $\eta$ gives rise to a random perturbation of $\phi_{\a}$ via the equation
$$\phi_{\a}^{y_\a}:=(\phi_{a_1}+y_{a_1})\circ (\phi_{a_2}+y_{a_1a_2})\circ \cdots \circ (\phi_{a_{|\a|}}+y_{\a}),$$ where the coordinates of $$(y_{a_1},y_{a_1a_2},\ldots,y_{\a})\in \mathbf{D}\times \cdots \times \mathbf{D}$$ are i.i.d. with distribution $\eta$. For notational convenience we enumerate the errors using the natural numbers. Let $\rho:\D^*\to \mathbb{N}$ be an arbitrary bijection. We obtain a sequence of errors $\y=(y_k)_{k=1}^{\infty}\in \mathbf{D}^{\N}$ according to the rule $$y_k:=y_{\a}\textrm{ if }\rho(\a)=k.$$ Given $\y\in \mathbf{D}^{\mathbb{N}},$ we obtain a perturbed version of our original attractor defined via the equation $$X_{\y}:=\bigcap_{n=1}^{\infty}\bigcup_{\a\in \D^n}\phi_{\a}^{y_\a}(B),$$ where $B$ is some sufficiently large ball. We let $\pi_\y:\D^{\N}\to X_\y$ be the projection map given by $$\pi_\y(a_j):=\lim_{n\to\infty}\phi_{a_1\cdots a_n}^{y_{a_1\cdots a_n}}(\textbf{0}).$$ On $\mathbf{D}^{\mathbb{N}}$ we define the measure $$\mathbf{P}:=\eta \times \cdots \times \eta \times \cdots.$$
We may now define our limsup sets for these randomly perturbed attractors. Given $\y\in \mathbf{D}^{\mathbb{N}},$ $(a_j)\in \D^{\N},$ and $\Psi:\D^{*}\to [0,\infty),$ we define
$$W_{\Phi,\y}((a_j),\Psi):=\left\{x\in \mathbb{R}^d: |x-\pi_\y(\a (a_j))|\leq \Psi(\a) \textrm{ for i.m. }\a\in \D^{*}\right\}.$$ Here $\a (a_j)$ denotes the element of $\D^{\N}$ obtained by concatenating the finite word $\a$ with the infinite sequence $(a_j).$ Given a slowly decaying measure $\m$, $\y\in \mathbf{D}^{\mathbb{N}},$ $(a_j)\in \D^{\N},$ and $h:\mathbb{N}\to [0,\infty),$ we let $$U_{\Phi,\y}((a_j),\m,h):=\left\{x\in \mathbb{R}^d: x\in \bigcup_{\a\in L_{\m,n}}B\left(\pi_\y(\a (a_j)),(\m([\a])h(n))^{1/d}\right)\textrm{ for i.m. }n\in \N\right\}.$$
The sets $W_{\Phi,\y}((a_j),\Psi)$ and $U_{\Phi,\y}((a_j),\m,h)$ serve as our analogues of $W_{\Phi}(z,\Psi)$ and $U_{\Phi}(z,\m,h)$ in this random setting. Note here that we have defined our limsup sets in terms of neighbourhoods of $\pi_\y(\a (a_j))$ rather than $\phi_{\a}^{y_{\a}}(\pi_{\y}(a_j)).$ In the deterministic setting considered above these quantities coincide. In the random setup it is not necessarily the case that $\pi_\y(\a (a_j))=\phi_{\a}^{y_{\a}}(\pi_{\y}(a_j)),$ because the errors used to build $\pi_\y(\a(a_j))$ are indexed by the prefixes of $\a(a_j),$ whereas those used to build $\pi_{\y}(a_j)$ are indexed by the prefixes of $(a_j)$. The theorem stated below is the random analogue of Theorem \ref{translation thm}. It suggests that one should be able to replace the assumption $\|A_i\|<1/2$ with some other reasonable conditions.
\begin{thm}
\label{random thm}
Fix a set of matrices $\{A_i\}_{i=1}^l$ each satisfying $\|A_i\|<1$ and $\t\in \R^{ld}$. Then the following statements are true:
\begin{enumerate}
\item Let $\m$ be a slowly decaying $\sigma$-invariant ergodic probability measure with $\h(\m)>-(\lambda_1(\m)+\cdots +\lambda_d(\m))$ and $(a_j)\in \D^{\N}.$ For $\mathbf{P}$-almost every $\y\in\mathbf{D}^{\mathbb{N}},$ for any $h\in H,$ the set $U_{\Phi,\y}((a_j),\m,h)$ has positive Lebesgue measure.
\item Let $\m$ be the uniform $(1/l,\ldots,1/l)$ Bernoulli measure and suppose there exists $A$ such that $A_i=A$ for all $1\leq i\leq l$.
If $\log l>-(\lambda_1(\m)+\cdots+\lambda_d(\m))$, then for $\mathbf{P}$-almost every $\y\in\mathbf{D}^{\mathbb{N}}$, for any $(a_j)\in \D^{\mathbb{N}}$ and $h\in H,$ the set $U_{\Phi,\y}((a_j),\m,h)$ has positive Lebesgue measure.
\item Let $\m$ be a slowly decaying $\sigma$-invariant ergodic probability measure with $\h(\m)>-(\lambda_1(\m)+\cdots +\lambda_d(\m))$ and $(a_j)\in \D^{\N}.$ For $\mathbf{P}$-almost every $\y\in\mathbf{D}^{\mathbb{N}}$, for any $\Psi$ that is equivalent to $(\m,h)$ for some $h\in H,$ the set $W_{\Phi,\y}((a_j),\Psi)$ has positive Lebesgue measure.
\item Let $\m$ be the uniform $(1/l,\ldots,1/l)$ Bernoulli measure and suppose there exists $A$ such that $A_i=A$ for all $1\leq i\leq l$. If $\log l>-(\lambda_1(\m)+\cdots+\lambda_d(\m))$, then for $\mathbf{P}$-almost every $\y\in\mathbf{D}^{\mathbb{N}}$, for any $(a_j)\in \D^{\mathbb{N}}$ and $\Psi$ that is equivalent to $(\m,h)$ for some $h\in H,$ the set $W_{\Phi,\y}((a_j),\Psi)$ has positive Lebesgue measure.
\end{enumerate}
\end{thm}
The reason we cannot obtain the full measure statements from Theorem \ref{translation thm} in our random setting is because of how $X_\y$ is defined. In particular, $X_\y$ cannot necessarily be expressed as a union of finitely many scaled copies of itself as in the deterministic setting. The proofs of statements $3$ and $4$ of Theorem \ref{translation thm} rely on the fact that the underlying attractor satisfies the equation $X=\cup_{i=1}^l\phi_i (X)$.
\subsection{A specific family of IFSs}
We now introduce a family of iterated function systems for which we can make very precise statements. To each $t\in[0,1]$ we associate the IFS:
$$\Phi_{t}=\left\{\phi_1(x)=\frac{x}{2},\, \phi_{2}(x)=\frac{x+1}{2},\, \phi_{3}(x)=\frac{x+t}{2},\, \phi_{4}(x)=\frac{x+1+t}{2}\right\}.$$ For each $\Phi_t$ the corresponding attractor is $[0,1+t]$. We denote the projection map from $\D^{\mathbb{N}}$ to $[0,1+t]$ by $\pi_t$. For this family of iterated function systems we will be able to replace the almost every statements appearing in Theorems \ref{1d thm} and \ref{translation thm} with something more precise. The reason we can make these stronger statements is because separation properties for $\Phi_t$ can be deduced from the continued fraction expansion of $t$. Recall that for any $t\in [0,1]\setminus \mathbb{Q},$ there exists a unique sequence $(\zeta_m)\in\mathbb{N}^{\mathbb{N}}$ such that
$$ t=\cfrac{1}{\zeta_1+\cfrac{1}{\zeta_2 +\cfrac{1}
{\zeta_3 + \cdots }}}.$$
We call the sequence $(\zeta_m)$ the continued fraction expansion of $t$. Given $t$ with continued fraction expansion $(\zeta_m)$, for each $m\in\mathbb{N}$ we let
$$ \frac{p_m}{q_m}:=\cfrac{1}{\zeta_1+\cfrac{1}{\zeta_2 +\cfrac{1}
{\zeta_3 + \cdots \cfrac{1}
{\zeta_m }}}}.$$ We call $p_m/q_m$ the $m$-th convergent of $t$. We say that $t$ is badly approximable if the sequence $(\zeta_m)$ appearing in the continued fraction expansion of $t$ is bounded.
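Two standard examples may help fix these notions: $t=\frac{\sqrt{5}-1}{2}$ has continued fraction expansion $(\zeta_m)=(1,1,1,\ldots)$ and is therefore badly approximable, whereas $t=e-2$ has expansion $(1,2,1,1,4,1,1,6,\ldots),$ whose entries are unbounded, so $e-2$ is not badly approximable.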
The main result of this section is the following.
\begin{thm}
\label{precise result}
Let $\m$ be the uniform $(1/4,1/4,1/4,1/4)$ Bernoulli measure. The following statements are true:
\begin{enumerate}
\item If $t\in\mathbb{Q}$ then $\Phi_t$ contains an exact overlap, and for any $z\in[0,1+t]$ the set $U_{\Phi_t}(z,\m,1)$ has Hausdorff dimension strictly less than $1$.
\item If $t\notin \mathbb{Q},$ then there exists $h:\mathbb{N}\to[0,\infty)$ depending upon the continued fraction expansion of $t,$ such that $\lim_{n\to\infty}h(n)=0,$ and for any $z\in[0,1+t],$ Lebesgue almost every $x\in[0,1+t]$ is contained in $U_{\Phi_t}(z,\m,h)$.
\item If $t$ is badly approximable, then for any $z\in[0,1+t]$ and $h:\mathbb{N}\to [0,\infty)$ satisfying $\sum_{n=1}^{\infty}h(n)=\infty,$ Lebesgue almost every $x\in[0,1+t]$ is contained in $U_{\Phi_t}(z,\m,h).$
\item If $t\notin\mathbb{Q}$ and is not badly approximable, then there exists $h:\mathbb{N}\to[0,\infty)$ satisfying $\sum_{n=1}^{\infty}h(n)=\infty,$ yet $U_{\Phi_t}(z,\m,h)$ has zero Lebesgue measure for any $z\in[0,1+t]$.
\item Suppose $t\notin \mathbb{Q}$ is such that for any $\epsilon>0,$ there exists $L\in\mathbb{N}$ for which the following inequality holds for $M$ sufficiently large: $$\sum_{\stackrel{1\leq m \leq M}{\frac{q_{m+1}}{q_m}\geq L}}\log_{2}(\zeta_{m+1}+1) \leq \epsilon M.$$ Then for any $z\in[0,1+t]$ and $h\in H^*,$ Lebesgue almost every $x\in[0,1+t]$ is contained in $U_{\Phi_t}(z,\m,h)$.
\item Suppose $\mu$ is an ergodic invariant measure for the Gauss map and satisfies $$\sum_{m=1}^{\infty}\mu\Big(\left[\frac{1}{m+1},\frac{1}{m}\right]\Big)\log_2 (m +1)<\infty.$$ Then for $\mu$-almost every $t,$ for any $z\in[0,1+t]$ and $h\in H^*,$ Lebesgue almost every $x\in[0,1+t]$ is contained in $U_{\Phi_t}(z,\m,h).$ In particular, for Lebesgue almost every $t\in[0,1],$ for any $z\in[0,1+t]$ and $h\in H^*$, Lebesgue almost every $x\in[0,1+t]$ is contained in $U_{\Phi_t}(z,\m,h)$.
\end{enumerate}
\end{thm}
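Two computations illustrate the extremes of Theorem \ref{precise result}. For statement $1$, take $t=1/2$; then $$\phi_{3}\circ \phi_{1}(x)=\frac{x}{4}+\frac{1}{4}=\phi_{1}\circ \phi_{2}(x),$$ so $\Phi_{1/2}$ contains an exact overlap. For the final assertion of statement $6$, recall that the Gauss measure $d\mu=\frac{1}{\log 2}\frac{dt}{1+t}$ is an ergodic invariant measure for the Gauss map that is equivalent to Lebesgue measure, and it satisfies $$\mu\Big(\Big[\frac{1}{m+1},\frac{1}{m}\Big]\Big)=\frac{1}{\log 2}\log\Big(1+\frac{1}{m(m+2)}\Big)=O(m^{-2}),$$ so the summability condition $\sum_{m=1}^{\infty}\mu([\frac{1}{m+1},\frac{1}{m}])\log_2(m+1)<\infty$ holds.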
We include the following corollary to emphasise the strong dichotomy that follows from statement $1$ and statement $2$ from Theorem \ref{precise result}.
\begin{cor}
Either $t$ is such that $\Phi_t$ contains an exact overlap and for any $z\in[0,1+t]$ $$\dim_{H}\left(\left\{x\in[0,1+t]:|x-\phi_{\a}(z)|\leq \frac{1}{4^{|\a|}}\textrm{ for i.m. }\a\in\D^*\right\}\right)<1,$$ or for any $z\in[0,1+t],$ Lebesgue almost every $x\in [0,1+t]$ is contained in $$\left\{x\in[0,1+t]:|x-\phi_{\a}(z)|\leq \frac{1}{4^{|\a|}}\textrm{ for i.m. }\a\in\D^*\right\}.$$
\end{cor}
Theorem \ref{precise result} is stated in terms of the auxiliary sets $U_{\Phi_t}(z,\m,h)$ rather than in terms of $W_{\Phi_t}(z,\Psi),$ where the underlying measure $\m$ is the uniform $(1/4,1/4,1/4,1/4)$ Bernoulli measure. Note however that if $\Psi:\D^*\to[0,\infty)$ only depends upon the length of the word $\a$, then $\Psi(\a)=h(|\a|)\m([\a])$ for some appropriate choice of $h$. Combining this observation with the fact that $L_{\m,n}=\D^n$ for every $n\in \N$ for this choice of $\m$, it follows that $W_{\Phi_t}(z,\Psi)=U_{\Phi_t}(z,\m,h)$ for this choice of $h$. Therefore Theorem \ref{precise result} can be reinterpreted in terms of the sets $W_{\Phi_t}(z,\Psi)$ whenever $\Psi$ only depends upon the length of the word.
\subsubsection{New methods for distinguishing between the overlapping behaviour of IFSs}
\label{new methods}
In this section we explain how Theorem \ref{precise result} allows us to distinguish between iterated function systems in a way that is not available to us by simply studying properties of self-similar measures. We start this discussion by stating the following result that will follow from the proof of Theorem \ref{precise result}.
\begin{thm}
\label{overlap or optimal}
Let $t\in[0,1].$ Either $\Phi_t$ contains an exact overlap, or for infinitely many $n\in\mathbb{N}$ we have $$|\phi_{\a}(z)-\phi_{\a'}(z)|\geq \frac{1}{8\cdot 4^n}$$ for every $z\in [0,1+t]$ and all distinct $\a,\a'\in \D^n$.
\end{thm}
Theorem \ref{overlap or optimal} effectively states that for this family of IFSs, we either have an exact overlap, or for infinitely many scales we exhibit the optimal level of separation. This level of separation can be seen to be optimal by the pigeonhole principle, which tells us that for any $z\in[0,1+t]$ and $n\in\mathbb{N},$ there must exist distinct $\a,\a'\in \D^n$ such that $$|\phi_{\a}(z)-\phi_{\a'}(z)|\leq \frac{1+t}{4^n-1}.$$ Because of the strong dichotomy demonstrated by Theorem \ref{overlap or optimal}, we believe that this family of IFSs will serve as a useful toy model for other problems.
For a probability vector $\textbf{p}=(p_1,p_2,p_3,p_4)$ we denote the corresponding self-similar measure for the IFS $\Phi_t$ by $\mu_{\textbf{p},t}$. It follows from Theorem \ref{overlap or optimal} and the work of Hochman \cite[Theorem 1.1.]{Hochman} that the following theorem holds.
\begin{thm}
\label{Hoccor}
Either $\Phi_t$ contains an exact overlap, or for any probability vector $\textbf{p}$ we have $$\dim \mu_{\textbf{p},t}= \min\left\{\frac{\sum_{i=1}^4 p_i\log p_i}{-\log 2},1\right\}.$$
\end{thm} The following theorem follows from the work of Shmerkin and Solomyak \cite[Theorem A]{ShmSol}.
\begin{thm}
\label{ShmSolcor}
For every $t\in[0,1]$ outside of a set of Hausdorff dimension $0$, we have that $\mu_{\textbf{p},t}$ is absolutely continuous whenever $$\frac{\sum_{i=1}^4 p_i\log p_i}{-\log 2}>1.$$
\end{thm}To apply Theorem A from \cite{ShmSol} we have to check that a non-degeneracy condition is satisfied. Checking this condition holds is straightforward in our setting so we omit the details.
It is known that the set of badly approximable numbers has Hausdorff dimension $1$ and Lebesgue measure zero. Therefore, applying Theorem \ref{Hoccor} and Theorem \ref{ShmSolcor}, it follows that there exists a badly approximable number $t,$ and some $t'$ that is not badly approximable, such that for any probability vector $\textbf{p}$ we have $$\dim \mu_{\textbf{p},t}=\dim \mu_{\textbf{p},t'}= \min\left\{\frac{\sum_{i=1}^4 p_i\log p_i}{-\log 2},1\right\},$$ and whenever $$\frac{\sum_{i=1}^4 p_i\log p_i}{-\log 2}>1$$ the measures $\mu_{\textbf{p},t}$ and $\mu_{\textbf{p},t'}$ are both absolutely continuous. As such, the overlapping behaviour of $\Phi_t$ and $\Phi_{t'}$ is indistinguishable from the perspective of self-similar measures. However, we see from statement $3$ and statement $4$ of Theorem \ref{precise result} that there exists $h:\mathbb{N}\to [0,\infty)$ such that $U_{\Phi_t}(z,\m,h)$ has full Lebesgue measure for all $z\in[0,1+t]$, and $U_{\Phi_{t'}}(z,\m,h)$ has zero Lebesgue measure for all $z\in[0,1+t']$. Therefore, by studying the metric properties of limsup sets we can distinguish between the overlapping behaviour of $\Phi_t$ and $\Phi_{t'}$. Studying the metric properties of limsup sets detects some of the finer details of how an iterated function system overlaps.
\subsection{The CS property and absolute continuity.}
We saw in the previous section that by studying IFSs using ideas from metric number theory, one can distinguish between IFSs in a way that is not available to us by simply studying pushforwards of Bernoulli measures. It is natural to wonder how Khintchine like behaviour relates to these measures. In this paper we show that there is a connection between a strong type of Khintchine like behaviour and the absolute continuity of these measures.
Given an IFS $\Phi$ and a slowly decaying measure $\m$, we say that $\Phi$ is consistently separated with respect to $\m$, or $\Phi$ has the CS property with respect to $\m$, if there exists $z\in X$ such that for any $h:\mathbb{N}\to [0,\infty)$ satisfying $$\sum_{n=1}^{\infty}h(n)=\infty,$$ the set $U_{\Phi}(z,\m,h)$ has positive Lebesgue measure. Using this terminology we see that statement $3$ and statement $4$ from Theorem \ref{precise result} imply that an IFS $\Phi_t$ has the CS property with respect to the $(1/4,1/4,1/4,1/4)$ Bernoulli measure if and only if $t$ is badly approximable. The use of the terminology consistently separated will become clearer in Section \ref{Colette section} (see Theorem \ref{new colette}). We prove the following result.
\begin{thm}
\label{Colette thm}
For a slowly decaying $\sigma$-invariant ergodic probability measure $\m,$ if $\Phi$ has the CS property with respect to $\m,$ then the pushforward of $\m$ is absolutely continuous.
\end{thm}
We emphasise here that an IFS having the CS property with respect to $\m$ and the pushforward of $\m$ being absolutely continuous are not equivalent statements. There are many examples of $\m$ and $\Phi$ such that the pushforward of $\m$ is absolutely continuous, yet $\Phi$ does not have the CS property with respect to $\m.$ In particular, for the family of IFSs $\{\Phi_t\}$ studied in the previous section, it can be shown that the pushforward of the uniform $(1/4,1/4,1/4,1/4)$ Bernoulli measure is absolutely continuous for any $t\in[0,1]$. However as remarked above, $\Phi_t$ has the CS property with respect to this measure if and only if $t$ is badly approximable. We include several explicit examples of consistently separated iterated function systems in Section \ref{Examples}.
\subsection{Overlapping self-conformal sets}
\label{self-conformal section}
Theorems \ref{1d thm}, \ref{translation thm}, and \ref{precise result} are stated in terms of parameterised families of overlapping IFSs where one would expect that for a typical member of this family the corresponding attractor would have positive Lebesgue measure. In Theorem \ref{conformal theorem} the attractor can have arbitrary Hausdorff dimension, but we assume that the underlying IFS satisfies some separation hypothesis. None of these results cover the case when the IFS is overlapping and the attractor is not expected to have positive Lebesgue measure. The purpose of this section is to fill this gap for IFSs consisting of conformal mappings. We recall some background on this class of IFS below.
Let $V\subset \mathbb{R}^d$ be an open set. A $C^1$ map $\phi:V\to\mathbb{R}^d$ is a conformal mapping if it preserves angles, or equivalently if the differential $\phi'$ satisfies $|\phi'(x)y|=|\phi'(x)||y|$ for all $x\in V$ and $y\in\mathbb{R}^d$. We call an IFS $\Phi=\{\phi_i\}_{i=1}^l$ a conformal iterated function system on a compact set $Y\subset\mathbb{R}^d$ if each $\phi_i$ can be extended to an injective conformal contraction on some open connected neighbourhood $V$ that contains $Y,$ and $$0< \inf_{x\in V}|\phi_i'(x)|\leq \sup_{x\in V}|\phi_i'(x)|<1.$$ Throughout this paper we will assume that the differentials are H\"{o}lder continuous, i.e., there exist $\alpha>0$ and $c>0$ such that $$\big||\phi_i'(x)|-|\phi_i'(y)|\big|\leq c|x-y|^{\alpha}$$ for all $x,y\in V$. If our IFS is a conformal iterated function system on some compact set, then we call the corresponding attractor $X$ a self-conformal set. Self-conformal sets are a natural generalisation of self-similar sets.
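For orientation: every contracting similarity is conformal; when $d=1$ the angle condition is vacuous, so any $C^{1+\alpha}$ IFS of contractions on an interval is a conformal iterated function system; when $d=2$ conformal maps are precisely the injective holomorphic and antiholomorphic maps; and when $d\geq 3$ Liouville's theorem implies that conformal maps are restrictions of M\"{o}bius transformations.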
To any conformal IFS we associate the family of potentials $f_s:\D^{\mathbb{N}}\to \mathbb{R}$, $s\in(0,\infty)$, given by $$f_s((a_j))=s\cdot\log |\phi_{a_1}'(\pi(\sigma(a_j)))|.$$ We define the topological pressure of $f_s$ to be $$P(f_s):=\sup\left\{\h(\m)+\int f_s d\m: \m \textrm{ is }\sigma\textrm{-invariant}\right\}.$$ For more on topological pressure and thermodynamic formalism we refer the reader to \cite{Bow} and \cite{Fal2}. It can be shown that for any conformal IFS, there exists a unique value of $s$ satisfying the equation $P(f_s)=0.$ We call this parameter the similarity dimension of $\Phi$ and denote it by $\dim_{S}(\Phi)$. When $\Phi$ is a conformal IFS and satisfies the open set condition, it is known that $\dim_{H}(X)=\dim_{B}(X)=\dim_{S}(\Phi)$. Importantly there exists a unique measure $\m_{\Phi}$ such that $$\h(\m_{\Phi})+\int f_{\dim_S(\Phi)}d\m_{\Phi}=0.$$
The pushforward of the measure $\m_{\Phi},$ which we denote by $\mu_{\Phi},$ is a particularly useful tool for determining metric properties of the attractor $X$. In particular, when $\Phi$ satisfies the open set condition it can be shown that $\mu_{\Phi}$ is equivalent to $\mathcal{H}^{\dim_{H}(X)}|_{X}$ (see \cite{PRSS}). Note that when $\Phi$ consists of similarities, i.e. $\Phi=\{\phi_i(x)=r_iO_i x +t_i\}_{i=1}^l$, then $\m_{\Phi}$ is simply the Bernoulli measure corresponding to the probability vector $(r_1^s,\ldots,r_l^s),$ where $s$ is the unique solution to the equation $\sum_{i=1}^lr_i^s=1.$
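As a worked example, for the family $\Phi_t$ introduced above each $\phi_i$ is a similarity with contraction ratio $r_i=1/2$, so $\dim_{S}(\Phi_t)$ solves $$4\cdot 2^{-s}=1,$$ giving $\dim_{S}(\Phi_t)=2$, and $\m_{\Phi_t}$ is the uniform $(1/4,1/4,1/4,1/4)$ Bernoulli measure appearing in Theorem \ref{precise result}. The fact that the similarity dimension exceeds the ambient dimension reflects the overlapping nature of that family.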
Our main result for conformal iterated function systems is the following theorem.
\begin{thm}
\label{overlapping conformal theorem}
If $\Phi$ is a conformal iterated function system, then for any $z\in X,$ if $\theta:\mathbb{N}\to [0,\infty)$ is a decreasing function and satisfies $$\sum_{n=1}^{\infty} \sum_{\a\in\D^{n}} (Diam(X_{\a})\theta(n))^{\dim_{S}(\Phi)}=\infty,$$ then $\mu_{\Phi}$-almost every $x\in X$ is an element of $W_{\Phi}(z,Diam(X_\a)\theta(|\a|)).$
\end{thm} As stated above, when $\Phi$ satisfies the open set condition the measure $\mu_{\Phi}$ is equivalent to $\mathcal{H}^{\dim_{H}(X)}|_{X}$; it follows that Theorem \ref{overlapping conformal theorem} implies Theorem \ref{conformal theorem}. For our purposes the real value of Theorem \ref{overlapping conformal theorem} is demonstrated in the following corollary.
\begin{cor}
\label{conformal cor}
Let $\Phi$ be a conformal iterated function system and suppose $\dim \mu_{\Phi}=\dim_{H}(X)$. Then for any $z\in X,$ if $\theta:\mathbb{N}\to [0,\infty)$ is a decreasing function and satisfies $$\sum_{n=1}^{\infty} \sum_{\a\in\D^{n}} (Diam(X_{\a})\theta(n))^{\dim_{S}(\Phi)}=\infty,$$ then $W_{\Phi}(z,Diam(X_\a)\theta(|\a|))$ has Hausdorff dimension equal to $\dim_{H}(X)$.
\end{cor}Corollary \ref{conformal cor} effectively reduces the problem of determining the Hausdorff dimension of $W_{\Phi}(z,Diam(X_\a)\theta(|\a|))$ to determining whether $\dim \mu_{\Phi}=\dim_{H}(X).$ Thankfully there are many results on the latter problem, and we can use these results together with Corollary \ref{conformal cor} to deduce further statements. We mention here only one such statement for the sake of brevity. The following statement follows by combining Theorem 1.1 from \cite{Hochman} and Corollary \ref{conformal cor}.
\begin{cor}
Assume $d=1$ and $\Phi$ consists solely of similarities. If $$\liminf_{n\to\infty}\frac{-\log \Delta_n}{n}<\infty,$$ where $$\Delta_n:=\min_{\a\neq \b\in \D^n}|\phi_{\a}(0)-\phi_{\b}(0)|,$$ then for any $z\in X,$ if $\theta:\mathbb{N}\to [0,\infty)$ is a decreasing function and satisfies $$\sum_{n=1}^{\infty} \sum_{\a\in\D^{n}} (Diam(X_{\a})\theta(n))^{\dim_{S}(\Phi)}=\infty,$$ then $W_{\Phi}(z,Diam(X_\a)\theta(|\a|))$ has Hausdorff dimension equal to $\dim_{H}(X)$.
\end{cor}
\subsection{Structure of the paper}
The rest of the paper is arranged as follows. In Section \ref{Preliminaries} we prove some general results that will allow us to prove our main theorems. In Section \ref{applications} we prove Theorems \ref{1d thm}, \ref{translation thm}, and \ref{random thm}. In Section \ref{Specific family} we prove Theorem \ref{precise result}. Section \ref{Colette section} is then concerned with the proof of Theorem \ref{Colette thm}, and in Section \ref{conformal section} we prove Theorem \ref{overlapping conformal theorem}. In Section \ref{misc} we apply the mass transference principle of Beresnevich and Velani to show how one can use our earlier results to deduce results on the Hausdorff measure and Hausdorff dimension of certain $W_{\Phi}(z,\Psi)$ when $$\sum_{\a\in \D^*}\Psi(\a)^{\dim_{H}(X)}<\infty.$$ In Section \ref{Examples} we include some explicit examples to accompany our main theorems. We conclude with some general discussion and pose some open questions in Section \ref{Final discussion}.
\section{Preliminary results}
\label{Preliminaries}
\subsection{A general framework}
In this section we prove some useful preliminaries that will allow us to prove the main results of this paper. Throughout this section $\Omega$ will denote a metric space equipped with some finite Borel measure $\eta$, and $\tilde{X}$ will denote some compact subset of $\mathbb{R}^d$. For each $n\in\mathbb{N}$ we will assume that there exists a finite set of continuous functions $\{f_{l,n}:\Omega\to \tilde{X}\}_{l=1}^{R_n}.$ For each $\omega\in\Omega$ we let $$Y_n(\omega):=\{f_{l,n}(\omega)\}_{l=1}^{R_n}.$$Before stating our general result we need to introduce some notation. Given $r>0$ we say that $Y\subset\mathbb{R}^d$ is an $r$-separated set if $|z-z'|>r,$ $\forall z,z'\in Y$ such that $z\neq z'.$ Given a finite set $Y\subset \mathbb{R}^d$ and $r>0,$ we let $$T(Y,r):=\sup\{\# Y':Y'\subseteq Y \textrm{ and }Y'\textrm{ is an }r\textrm{-separated set}\}.$$ We call $Y'\subseteq Y$ a maximal $r$-separated subset if $Y'$ is $r$-separated and $\# Y'=T(Y,r).$ Clearly a maximal $r$-separated subset always exists. Given a finite set $Y$ and $r>0,$ we will denote by $S(Y,r)$ an arbitrary choice of maximal $r$-separated subset.
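For a simple illustration of these notions in $\mathbb{R}$, take $Y=\{0,\frac{2}{5},1\}$ and $r=\frac{1}{2}$. Then $\{0,1\}$ and $\{\frac{2}{5},1\}$ are both $r$-separated, while $|0-\frac{2}{5}|<r$ rules out any $r$-separated subset of cardinality $3$; hence $T(Y,r)=2$, and either of the two sets above is a valid choice of $S(Y,r)$. In particular, maximal $r$-separated subsets need not be unique, which is why $S(Y,r)$ denotes an arbitrary choice.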
The proposition stated below is the main technical result of this section.
\begin{prop}
\label{general prop}
Suppose the following properties are satisfied:
\begin{itemize}
\item There exists $\gamma>1$ such that $$R_n\asymp \gamma^n.$$
\item There exists $G:(0,\infty)\to (0,\infty)$ such that $\lim_{s\to 0}G(s)=0,$ and for all $n\in\mathbb{N}$ we have $$\eta(\Omega)-\int_{\Omega} \frac{T(Y_n(\omega),\frac{s}{R_{n}^{1/d}})}{R_{n}} d\eta(\omega)\leq G(s).$$
\end{itemize} Then for $\eta$-almost every $\omega\in\Omega,$ for any $h\in H$ the set $$\left\{x\in\mathbb{R}^d:x\in\bigcup_{l=1}^{R_{n}}B\left(f_{l,n}(\omega),\left(\frac{h(n)}{R_{n}}\right)^{1/d}\right)\textrm{ for i.m. } n\in \mathbb{N}\right\}$$ has positive Lebesgue measure.
\end{prop}Recall that the set of functions $H$ was defined in \eqref{H functions}.
Given $c>0,s>0,$ and $n\in\mathbb{N},$ we let $$B(c,s,n):=\Big\{\omega\in\Omega:\frac{T(Y_n(\omega),\frac{s}{R_n^{1/d}})}{R_n}>c\Big\}.$$ The following lemma shows that under the hypothesis of Proposition \ref{general prop}, a typical $\omega\in\Omega$ is contained in $B(c,s,n)$ for a large set of $n$ for appropriate choices of $c$ and $s$. This lemma will play an important role in Section \ref{applications} when we recover some results of Solomyak on the absolute continuity of Bernoulli convolutions, and on the Lebesgue measure of the attractor in the $\{0,1,3\}$ problem.
\begin{lemma}
\label{density separation lemma}
Assume there exists $G:(0,\infty)\to (0,\infty)$ such that $\lim_{s\to 0}G(s)=0,$ and for all $n\in\mathbb{N}$ we have $$\eta(\Omega)-\int_{\Omega} \frac{T(Y_n(\omega),\frac{s}{R_{n}^{1/d}})}{R_{n}} d\eta(\omega)\leq G(s).$$ Then
$$\eta\left(\bigcap_{\epsilon>0}\bigcup_{c,s>0}\{\omega:\overline{d}(n:\omega\in B(c,s,n))\geq 1-\epsilon\}\right)=\eta(\Omega).$$
\end{lemma}
\begin{proof}
Observe that $$0\leq \frac{T(Y_n(\omega),\frac{s}{R_{n}^{1/d}})}{R_n}\leq 1$$ for all $\omega\in\Omega$ and $n\in\mathbb{N}$. As a result of this inequality and our underlying assumption, for any $c>0$, $s>0$, and $n\in\mathbb{N}$, we have
$$\eta(B(c,s,n))+c\cdot \eta(B(c,s,n)^c)\geq \int_{\Omega} \frac{T(Y_n(\omega),\frac{s}{R_{n}^{1/d}})}{R_{n}} d\eta(\omega)\geq \eta(\Omega)-G(s).$$ This in turn implies $$\eta(B(c,s,n))\geq (1-c)\eta(\Omega)-G(s).$$ It follows that given $\epsilon>0,$ we can pick $c>0$ and $s>0$ independent of $n$ such that
\begin{equation}
\label{almost full}
\eta(B(c,s,n))\geq \eta(\Omega)-\epsilon.
\end{equation} Applying Fatou's lemma we have
\begin{align*}
\int_{\Omega} \overline{d}(n:\omega\in B(c,s,n))d\eta &=\int_{\Omega}\limsup_{N\to\infty}\frac{\#\{1\leq n\leq N:\omega\in B(c,s,n)\}}{N}d\eta\\
&=\int_{\Omega}\limsup_{N\to\infty}\frac{\sum_{n=1}^{N}\chi_{B(c,s,n)}(\omega)}{N}d\eta\\
&\stackrel{Fatou}{\geq} \limsup_{N\to\infty}\frac{\sum_{n=1}^{N}\int_{\Omega}\chi_{B(c,s,n)}(\omega)d\eta}{N}\\
&=\limsup_{N\to\infty} \frac{\sum_{n=1}^{N}\eta(B(c,s,n))}{N}\\
&\stackrel{\eqref{almost full}}{\geq} \eta(\Omega)-\epsilon.
\end{align*}Summarising the above, we have shown that for this choice of $c$ and $s$ we have
\begin{equation}
\label{Banach bound}\int_{\Omega} \overline{d}(n:\omega\in B(c,s,n))d\eta \geq \eta(\Omega)-\epsilon.
\end{equation}
For the purpose of obtaining a contradiction, suppose
\begin{equation}
\label{contradict1}
\eta\Big(\omega:\overline{d}(n:\omega\in B(c,s,n))\leq 1-\sqrt{\epsilon}\Big)> \sqrt{\epsilon}.
\end{equation}
Then using the fact $$0\leq \overline{d}(n:\omega\in B(c,s,n))\leq 1$$ for all $\omega\in \Omega$, we have
\begin{align*}
\int_{\Omega} \overline{d}(n:\omega\in B(c,s,n))d\eta& \leq \eta(\omega:\overline{d}(n:\omega\in B(c,s,n))\leq 1-\sqrt{\epsilon})(1-\sqrt{\epsilon})\\
&+ \eta(\omega:\overline{d}(n:\omega\in B(c,s,n))> 1-\sqrt{\epsilon})\\
&=\eta(\omega:\overline{d}(n:\omega\in B(c,s,n))\leq 1-\sqrt{\epsilon})(1-\sqrt{\epsilon})\\
&+ \eta(\Omega)-\eta(\omega:\overline{d}(n:\omega\in B(c,s,n))\leq 1-\sqrt{\epsilon})\\
&=\eta(\Omega)-\sqrt{\epsilon}\eta(\omega:\overline{d}(n:\omega\in B(c,s,n))\leq 1-\sqrt{\epsilon})\\
&\stackrel{\eqref{contradict1}}{<}\eta(\Omega)-\epsilon.
\end{align*}
This contradicts \eqref{Banach bound}. Therefore \eqref{contradict1} is not possible and we have that for any $\epsilon>0,$ there exists $c,s>0$ such that
\begin{equation}
\label{shown to hold}
\eta\Big(\omega:\overline{d}(n:\omega\in B(c,s,n))> 1-\sqrt{\epsilon}\Big)\geq \eta(\Omega)-\sqrt{\epsilon}.
\end{equation}
Equation \eqref{shown to hold} in turn implies that for any $\epsilon>0$ we have
\begin{equation}
\label{full measure sep} \eta\Big(\bigcup_{c,s>0}\{\omega:\overline{d}(n:\omega\in B(c,s,n))\geq 1-\epsilon\}\Big)=\eta(\Omega).
\end{equation} One can see how \eqref{full measure sep} follows from \eqref{shown to hold} by first fixing $\epsilon>0$ and then applying \eqref{shown to hold} for a countable collection of $\epsilon_k$ strictly smaller than $\epsilon$. Now intersecting over all $\epsilon>0,$ we see that \eqref{full measure sep} implies the desired equality:
$$\eta\Big(\bigcap_{\epsilon>0}\bigcup_{c,s>0}\{\omega:\overline{d}(n:\omega\in B(c,s,n))\geq 1-\epsilon\}\Big)=\eta(\Omega).$$
\end{proof}
To prove Proposition \ref{general prop} and many other results in this paper, we will rely upon the following useful lemma known as the generalised Borel-Cantelli Lemma.
\begin{lemma}
\label{Erdos lemma}
Let $(X,A,\mu)$ be a finite measure space and $E_n\in A$ be a sequence of
sets such that $\sum_{n=1}^{\infty}\mu(E_n)=\infty.$ Then
$$\mu(\limsup_{n\to\infty} E_{n})\geq \limsup_{Q\to\infty}\frac{(\sum_{n=1}^{Q}\mu(E_{n}))^{2}}{\sum_{n,m=1}^{Q}\mu(E_{n}\cap E_m)}.$$
\end{lemma}Lemma \ref{Erdos lemma} is due to Kochen and Stone \cite{KocSto}. For a proof of this lemma see either \cite[Lemma 2.3]{Har} or \cite[Lemma 5]{Spr}.
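A useful special case: if there is a constant $C\geq 1$ such that $\mu(E_n\cap E_m)\leq C\mu(E_n)\mu(E_m)$ for all $n\neq m$ (quasi-independence), then $$\sum_{n,m=1}^{Q}\mu(E_n\cap E_m)\leq \sum_{n=1}^{Q}\mu(E_n)+C\Big(\sum_{n=1}^{Q}\mu(E_n)\Big)^{2},$$ and since $\sum_{n=1}^{\infty}\mu(E_n)=\infty$ Lemma \ref{Erdos lemma} yields $\mu(\limsup_{n\to\infty} E_n)\geq 1/C$. The estimates in Step 4 of the proof of Proposition \ref{fixed omega} below follow this pattern.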
Proposition \ref{general prop} will follow from the following proposition. This result will also be useful when it comes to proving some of our later results.
\begin{prop}
\label{fixed omega}
Let $\omega\in \Omega$ and $h:\mathbb{N}\to[0,\infty).$ Assume the following properties are satisfied:
\begin{itemize}
\item There exists $\gamma>1$ such that $$R_n\asymp \gamma^n.$$
\item There exists $c>0$ and $s>0$ such that $$\sum_{n:\omega\in B(c,s,n)}h(n)=\infty.$$
\end{itemize} Then $$\left\{x\in\mathbb{R}^d:x\in\bigcup_{l=1}^{R_{n}}B\left(f_{l,n}(\omega),\left(\frac{h(n)}{R_{n}}\right)^{1/d}\right)\textrm{ for i.m. } n\in \mathbb{N}\right\}$$ has positive Lebesgue measure.
\end{prop}
\begin{proof}[Proof of Proposition \ref{fixed omega}]
We split our proof into individual steps for convenience. \\
\noindent \textbf{Step 1. Replacing our approximating function.}\\
Let $\omega$ and $h$ be fixed, and $c$ and $s>0$ be as in the statement of the proposition. We claim that
\begin{equation}
\label{divergence1}\sum_{n:\omega\in B(c,s,n)} \sum_{u\in S(Y_n(\omega),\frac{s}{R_{n}^{1/d}})} \mathcal{L}\Big(B\Big(u,\Big(\frac{h(n)}{R_n}\Big)^{1/d}\Big)\Big)=\infty.
\end{equation} This follows from our assumption $$\sum_{n:\omega\in B(c,s,n)}h(n)=\infty,$$ and the following:
\begin{align*}
\sum_{n:\omega\in B(c,s,n)} \sum_{u\in S(Y_n(\omega),\frac{s}{R_{n}^{1/d}})} \mathcal{L}\Big(B\Big(u,\Big(\frac{h(n)}{R_n}\Big)^{1/d}\Big)\Big)&= \sum_{n:\omega\in B(c,s,n)} \sum_{u\in S(Y_n(\omega),\frac{s}{R_{n}^{1/d}})} \frac{h(n)\L(B(0,1))}{R_n}\\
&=\sum_{n:\omega\in B(c,s,n)} T\Big(Y_n(\omega),\frac{s}{R_{n}^{1/d}}\Big)\frac{h(n)\L(B(0,1))}{R_n}\\
&\geq c \L(B(0,1))\sum_{n:\omega\in B(c,s,n)}h(n)\\
&=\infty.
\end{align*} Let us now define $$g(n):=\min \Big\{\Big(\frac{h(n)}{R_n}\Big)^{1/d},\frac{s}{3R_{n}^{1/d}}\Big\}.$$ We claim that we still have
\begin{equation}
\label{divergence2}\sum_{n:\omega\in B(c,s,n)} \sum_{u\in S(Y_n(\omega),\frac{s}{R_{n}^{1/d}})} \mathcal{L}(B(u,g(n)))=\infty.
\end{equation} If $g(n)=\frac{s}{3R_{n}^{1/d}}$ for only finitely many $n$ with $\omega\in B(c,s,n),$ then \eqref{divergence1} implies \eqref{divergence2}. Suppose therefore that $g(n)=\frac{s}{3R_{n}^{1/d}}$ for infinitely many $n$ with $\omega\in B(c,s,n)$. For each such $n$ we have $$\sum_{u\in S(Y_n(\omega),\frac{s}{R_{n}^{1/d}})} \mathcal{L}(B(u,g(n)))=T\Big(Y_n(\omega),\frac{s}{R_{n}^{1/d}}\Big) \frac{s^d\L(B(0,1))}{3^dR_{n}}> \frac{cs^d\L(B(0,1))}{3^d}.$$ This lower bound is strictly positive and independent of $n$, so summing over these infinitely many $n$ guarantees divergence. Therefore \eqref{divergence2} holds.
Note that it follows from the definition of $g$ that we will have proved our result if we can show that
\begin{equation}
\label{WANT1}
\L\left(\left\{x\in\mathbb{R}^d:x\in\bigcup_{l=1}^{R_{n}}B\left(f_{l,n}(\omega),g(n)\right)\textrm{ for i.m. } n\in \mathbb{N}\right\}\right)>0.
\end{equation}\\
\noindent \textbf{Step 2: Constructing our $E_n$.}\\
Since $g(n)\leq \frac{s}{3R_{n}^{1/d}}$ for all $n\in\mathbb{N}$, it follows that for any distinct $u,v\in S(Y_n(\omega),\frac{s}{R_{n}^{1/d}}),$ we must have
\begin{equation}
\label{disjoint balls}B(u,g(n))\cap B(v,g(n))=\emptyset.
\end{equation} For each $n$ such that $\omega\in B(c,s,n),$ let $$E_{n}=\bigcup_{u\in S(Y_n(\omega),\frac{s}{R_{n}^{1/d}})} B(u,g(n)).$$ We will show that
\begin{equation}
\label{WANT2}
\L\Big(\Big\{x\in \mathbb{R}^d: x\in E_n \textrm{ for i.m. }n\in\mathbb{N} \textrm{ such that } \omega\in B(c,s,n)\Big\}\Big)>0.
\end{equation} Equation \eqref{WANT2} implies \eqref{WANT1}, so to complete our proof it suffices to show that \eqref{WANT2} holds. It follows from \eqref{divergence2} and \eqref{disjoint balls} that
\begin{equation}
\label{divergencez}
\sum_{n:\omega\in B(c,s,n)}\mathcal{L}(E_n)=\sum_{n:\omega\in B(c,s,n)} \sum_{u\in S(Y_{n}(\omega),\frac{s}{R_{n}^{1/d}})} \mathcal{L}(B(u,g(n)))=\infty.
\end{equation}
Equation \eqref{divergencez} shows that our collection of sets $\{E_n\}_{n:\omega\in B(c,s,n)}$ satisfies the hypothesis of Lemma \ref{Erdos lemma}.
We record here for later use the following fact: for each $n\in\mathbb{N}$ such that $\omega\in B(c,s,n),$ we have
\begin{equation}
\label{E_n measure}
\L(E_n)\asymp R_ng(n)^d.
\end{equation}Equation \eqref{E_n measure} follows from \eqref{disjoint balls} and the fact that for each $n\in \N$ such that $\omega\in B(c,s,n),$ we have $$c R_n\leq T\Big(Y_n(\omega),\frac{s}{R_{n}^{1/d}}\Big)\leq R_n .$$\\
\noindent \textbf{Step 3: Bounding $\mathcal{L}(E_n\cap E_m)$.}\\
Assume $n$ is such that $\omega\in B(c,s,n),$ $m$ is such that $\omega\in B(c,s,m),$ and $m\neq n.$ Fix $u\in S\big(Y_n(\omega),\frac{s}{R_{n}^{1/d}}\big).$ We want to bound the quantity: $$\#\Big\{v\in S\Big(Y_m(\omega),\frac{s}{R_{m}^{1/d}}\Big):B(u,g(n))\cap B(v,g(m)) \neq \emptyset \Big\}.$$ If $g(m)\geq g(n),$ then every $v\in S\big(Y_m(\omega),\frac{s}{R_{m}^{1/d}}\big)$ satisfying $B(u,g(n))\cap B(v,g(m)) \neq \emptyset$ must also satisfy $B(v,g(m))\subset B(u,3g(m)).$ It follows therefore from \eqref{disjoint balls} and a volume argument that
\begin{equation}
\label{estimate1}\#\Big\{v\in S\Big(Y_m(\omega),\frac{s}{R_{m}^{1/d}}\Big):B(u,g(n))\cap B(v,g(m)) \neq \emptyset \Big\}=\mathcal{O}(1).
\end{equation}If $g(m)<g(n),$ then every $v\in S\big(Y_m(\omega),\frac{s}{R_{m}^{1/d}}\big)$ satisfying $B(u,g(n))\cap B(v,g(m)) \neq \emptyset$ must also satisfy $B(v,g(m))\subseteq B(u,3g(n)).$ Since the elements of $S\big(Y_m(\omega),\frac{s}{R_{m}^{1/d}}\big)$ are by definition separated by a factor $\frac{s}{R_{m}^{1/d}},$ it follows from a volume argument that
\begin{equation}
\label{estimate2}\#\Big\{v\in S\Big(Y_m(\omega),\frac{s}{R_{m}^{1/d}}\Big):B(u,g(n))\cap B(v,g(m)) \neq \emptyset \Big\}= \mathcal{O}\left(\frac{g(n)^dR_{m}}{s^d}+1\right).
\end{equation} Combining \eqref{estimate1} and \eqref{estimate2}, we see that for any $n\in\N$ such that $\omega\in B(c,s,n),$ and $m\neq n$ such that $\omega\in B(c,s,m),$ we have
\begin{equation}
\label{count bound}
\#\Big\{v\in S\Big(Y_m(\omega),\frac{s}{R_{m}^{1/d}}\Big):B(u,g(n))\cap B(v,g(m)) \neq \emptyset \Big\}= \mathcal{O}\left(\frac{g(n)^dR_{m}}{s^d}+1\right).
\end{equation}
We now use \eqref{count bound} to bound $\L(E_n\cap E_m):$
\begin{align*}
\L(E_n\cap E_m)&\stackrel{\eqref{disjoint balls}}{=}\sum_{u\in S(Y_n(\omega),\frac{s}{R_{n}^{1/d}})} \L(B(u,g(n))\cap E_m)\\
&\leq \sum_{u\in S(Y_n(\omega),\frac{s}{R_{n}^{1/d}})} \L(B(0,g(m)))\#\Big\{v\in S\Big(Y_m(\omega),\frac{s}{R_{m}^{1/d}}\Big):B(u,g(n))\cap B(v,g(m)) \neq \emptyset \Big\}\\
&\stackrel{\eqref{count bound}}{=} \sum_{u\in S(Y_n(\omega),\frac{s}{R_{n}^{1/d}})} \mathcal{O}\Big(g(m)^{d}\Big(\frac{g(n)^dR_{m}}{s^d}+1\Big)\Big)\\
&=\mathcal{O}\Big(R_{n}g(m)^{d}\Big(\frac{g(n)^dR_{m}}{s^d}+1\Big)\Big).
\end{align*} Summarising the above, we have shown that for any $n\in\mathbb{N}$ such that $\omega\in B(c,s,n),$ and $m\neq n$ such that $\omega\in B(c,s,m),$ we have:
\begin{equation}
\label{intersection bound}
\L(E_n\cap E_m)=\mathcal{O}\Big(R_{n}g(m)^{d}\Big(\frac{g(n)^dR_{m}}{s^d}+1\Big)\Big).
\end{equation}
\noindent \textbf{Step 4. Applying Lemma \ref{Erdos lemma}.}\\
By Lemma \ref{Erdos lemma}, to prove that \eqref{WANT2} holds, and to finish our proof, it suffices to show that
\begin{equation}
\label{Erdos bound}
\sum_{\stackrel{n,m=1}{n:\omega\in B(c,s,n),\,m:\omega\in B(c,s,m)}}^{Q}\L(E_n\cap E_m)=\mathcal{O}\Big(\big(\sum_{\stackrel{n=1}{n:\omega\in B(c,s,n)}}^Q\L(E_n)\big)^2\Big).
\end{equation}This we do below. We start by separating terms:
\begin{equation}
\label{separating terms}
\sum_{\stackrel{n,m=1}{n:\omega\in B(c,s,n),\,m:\omega\in B(c,s,m)}}^{Q}\L(E_n\cap E_m)=\sum_{\stackrel{n=1}{n:\omega\in B(c,s,n)}}^{Q}\L(E_n)+2\sum_{\stackrel{m=2}{m:\omega\in B(c,s,m)}}^{Q}\sum_{\stackrel{n=1}{n:\omega\in B(c,s,n)}}^{m-1}\L(E_n\cap E_m).
\end{equation}Focusing on the first term on the right hand side of \eqref{separating terms}, we know that $$\sum_{\stackrel{n=1}{n:\omega\in B(c,s,n)}}^{\infty}\L(E_n)=\infty$$ by \eqref{divergencez}. Therefore, for all $Q$ sufficiently large we have $$\sum_{\stackrel{n=1}{n:\omega\in B(c,s,n)}}^{Q}\L(E_n)\geq 1.$$ This implies that
\begin{equation}
\label{square bound}
\sum_{\stackrel{n=1}{n:\omega\in B(c,s,n)}}^{Q}\L(E_n)=\mathcal{O}\Big( \big(\sum_{\stackrel{n=1}{n:\omega\in B(c,s,n)}}^{Q}\L(E_n)\big)^2\Big).
\end{equation}Focusing on the second term in \eqref{separating terms}, we have
\begin{align*}
\sum_{\stackrel{m=2}{m:\omega\in B(c,s,m)}}^{Q}\sum_{\stackrel{n=1}{n:\omega\in B(c,s,n)}}^{m-1}\L(E_n\cap E_m)&\stackrel{\eqref{intersection bound}}{=}\mathcal{O}\Big(\sum_{\stackrel{m=2}{m:\omega\in B(c,s,m)}}^{Q}\sum_{\stackrel{n=1}{n:\omega\in B(c,s,n)}}^{m-1}R_ng(m)^{d}\Big(\frac{g(n)^dR_m}{s^d}+1\Big)\Big)\\
&=\mathcal{O}\Big(\sum_{\stackrel{m=2}{m:\omega\in B(c,s,m)}}^{Q}\sum_{\stackrel{n=1}{n:\omega\in B(c,s,n)}}^{m-1}R_ng(n)^d R_mg(m)^d\Big)\\
&+\mathcal{O}\Big(\sum_{\stackrel{m=2}{m:\omega\in B(c,s,m)}}^{Q}\sum_{\stackrel{n=1}{n:\omega\in B(c,s,n)}}^{m-1}R_ng(m)^{d}\Big)
\end{align*}
Focusing on the first term in the above, we see that \begin{align*}
\sum_{\stackrel{m=2}{m:\omega\in B(c,s,m)}}^{Q}\sum_{\stackrel{n=1}{n:\omega\in B(c,s,n)}}^{m-1}R_ng(n)^d R_mg(m)^d&=\sum_{\stackrel{m=2}{m:\omega\in B(c,s,m)}}^{Q}R_mg(m)^d\sum_{\stackrel{n=1}{n:\omega\in B(c,s,n)}}^{m-1}R_ng(n)^d\\
&\leq \Big(\sum_{\stackrel{n=1}{n:\omega\in B(c,s,n)}}^{Q}R_ng(n)^d\Big)^2\\
&\stackrel{\eqref{E_n measure}}{=}\mathcal{O}\Big(\big(\sum_{\stackrel{n=1}{n:\omega\in B(c,s,n)}}^Q\L(E_n)\big)^2\Big).
\end{align*}
Focusing on the second term, we have
\begin{align*}
\sum_{\stackrel{m=2}{m:\omega\in B(c,s,m)}}^{Q}\sum_{\stackrel{n=1}{n:\omega\in B(c,s,n)}}^{m-1}R_ng(m)^{d}&=\mathcal{O}\Big( \sum_{\stackrel{m=2}{m:\omega\in B(c,s,m)}}^Q R_mg(m)^d\Big)\\
&\stackrel{\eqref{E_n measure}}{=}\mathcal{O}\Big(\sum_{\stackrel{m=1}{m:\omega\in B(c,s,m)}}^{Q} \L(E_m)\Big)\\
&\stackrel{\eqref{square bound}}{=}\mathcal{O}\Big( \big(\sum_{\stackrel{m=1}{m:\omega\in B(c,s,m)}}^{Q} \L(E_m)\big)^2\Big).
\end{align*}In the first equality above we used the assumption that $R_n\asymp \gamma^n,$ and therefore by properties of geometric series $$\sum_{\stackrel{n=1}{n:\omega\in B(c,s,n)}}^{m-1}R_n\leq \sum_{n=1}^{m-1}R_n=\mathcal{O}(R_m).$$ This is the only point in the proof where we use the assumption $R_n\asymp \gamma^n$.
Collecting the bounds obtained above, we see that
\begin{equation}
\label{square bound2}
\sum_{\stackrel{m=2}{m:\omega\in B(c,s,m)}}^{Q}\sum_{\stackrel{n=1}{n:\omega\in B(c,s,n)}}^{m-1}\L(E_n\cap E_m)=\mathcal{O}\Big(\big(\sum_{\stackrel{n=1}{n:\omega\in B(c,s,n)}}^{Q}\L(E_n)\big)^2\Big).
\end{equation}
Substituting the bounds \eqref{square bound} and \eqref{square bound2} into \eqref{separating terms}, we see that \eqref{Erdos bound} holds as required. This completes our proof.
\end{proof}
With Proposition \ref{fixed omega} we can now prove Proposition \ref{general prop}.
\begin{proof}[Proof of Proposition \ref{general prop}]
Let $$P:=\bigcap_{\epsilon>0}\bigcup_{c,s>0}\{\omega:\overline{d}(n:\omega\in B(c,s,n))\geq 1-\epsilon\}.$$ Fix $\omega\in P$. For any $h\in H$, by definition there exists $\epsilon>0$ such that $h\in H_{\epsilon}$. It follows from the definition of $P$ that there exist $c,s>0$ such that $$\overline{d}(n:\omega\in B(c,s,n))> 1-\epsilon.$$ In this case, by the definition of $H_{\epsilon},$ we must have $$\sum_{n:\omega\in B(c,s,n)}h(n)=\infty.$$ Applying Proposition \ref{fixed omega}, it follows that
$$\left\{x\in\mathbb{R}^d:x\in\bigcup_{l=1}^{R_{n}}B\left(f_{l,n}(\omega),\left(\frac{h(n)}{R_{n}}\right)^{1/d}\right)\textrm{ for i.m. } n\in \mathbb{N}\right\}$$
has positive Lebesgue measure. Our result now follows since $\omega\in P$ was arbitrary and we know by Lemma \ref{density separation lemma} that $\eta(P)=\eta(\Omega).$
\end{proof}
\subsubsection{Verifying the hypothesis of Proposition \ref{general prop}.}
To prove Theorems \ref{1d thm}, \ref{translation thm} and \ref{random thm}, we will apply Proposition \ref{general prop}. Naturally to do so we need to verify the hypothesis of Proposition \ref{general prop}. The exponential growth condition on the number of elements in our set will be automatically satisfied. Verifying the second integral condition is more involved. We will show that this integral condition holds via a transversality argument. Unfortunately the quantity $T(Y_{n}(\omega),\frac{s}{R_{n}^{1/d}})$ is not immediately amenable to transversality techniques. Instead we study the quantity:
$$R(\omega,s,n):=\left\{(l,l')\in\{1,\ldots,R_n\}^2 :|f_{l,n}(\omega)-f_{l',n}(\omega)|\leq\frac{s}{R_{n}^{1/d}}\,\textrm{ and } l\neq l'\right\}.$$ The following lemma allows us to deduce the integral bound appearing in Proposition \ref{general prop} from a similar bound for $R(\omega,s,n)$.
\begin{lemma}
\label{integral bound}
Assume there exists $G:(0,\infty)\to(0,\infty)$ satisfying $\lim_{s\to 0}G(s)=0,$ such that for all $n\in \mathbb{N}$ we have $$\int_{\Omega} \frac{\#R(\omega,s,n)}{R_n}d\eta\leq G(s).$$ Then for all $n\in \mathbb{N}$ we have
$$\eta(\Omega)-\int_{\Omega}\frac{T(Y_{n}(\omega),\frac{s}{R_{n}^{1/d}})}{R_n}d\eta \leq G(s).$$
\end{lemma}
\begin{proof}Let
$$W(\omega,s,n):=\left\{l\in\{1,\ldots,R_n\}: |f_{l,n}(\omega)-f_{l',n}(\omega)|>\frac{s}{R_{n}^{1/d}}\, \forall l'\neq l\right\}.$$ Since for any distinct $l,l'\in W(\omega,s,n),$ the distance between $f_{l,n}(\omega)$ and $f_{l',n}(\omega)$ is at least $\frac{s}{R_{n}^{1/d}}$, it follows that $W(\omega,s,n)$ is a $\frac{s}{R_{n}^{1/d}}$-separated set. Therefore
\begin{equation}
\label{countbounda}
\# W(\omega,s,n)\leq T\Big(Y_n(\omega),\frac{s}{R_{n}^{1/d}}\Big).
\end{equation} Importantly we also have
\begin{equation}
\label{countboundb}\#W(\omega,s,n)^c=\#\left\{l\in\{1,\ldots,R_n\}:|f_{l,n}(\omega)-f_{l',n}(\omega)|\leq \frac{s}{R_{n}^{1/d}}\,\textrm{ for some }l'\neq l\right\}\leq \#R(\omega,s,n).
\end{equation} This follows because the map $f:R(\omega,s,n)\to W(\omega,s,n)^c$ defined by $f(l,l')=l$ is a surjective map.
Now suppose we have $G:(0,\infty)\to(0,\infty)$ satisfying the hypothesis of the lemma. Then for any $s>0$ and $n\in \N$ we have
\begin{align*}
\eta(\Omega)=\int_{\Omega} \frac{\# W(\omega,s,n)+\# W(\omega,s,n)^c}{R_n}d\eta &\stackrel{\eqref{countbounda}\, \eqref{countboundb}}\leq \int_{\Omega} \frac{T(Y_{n}(\omega),\frac{s}{R_{n}^{1/d}})}{R_n}d\eta+\int_{\Omega}\frac{\#R(\omega,s,n)}{R_n}d\eta\\
&\leq \int_{\Omega} \frac{T(Y_{n}(\omega),\frac{s}{R_{n}^{1/d}})}{R_n}d\eta+G(s).
\end{align*}This implies
$$\eta(\Omega)-\int_{\Omega}\frac{T(Y_{n}(\omega),\frac{s}{R_{n}^{1/d}})}{R_n}d\eta \leq G(s).$$
\end{proof}
\subsubsection{The non-existence of a Khintchine like result}
The purpose of this section is to prove the following proposition. It will be used in the proof of Theorem \ref{precise result} and Theorem \ref{Colette thm}. It demonstrates that a lack of separation along a subsequence can lead to the non-existence of a Khintchine like result.
\begin{prop}
\label{fail prop}
Let $\omega\in\Omega$ and suppose that for some $s>0$ we have $$\liminf_{n\to\infty} \frac{T(Y_{n}(\omega),\frac{s}{R_{n}^{1/d}})}{R_n}=0.$$ Then there exists $h:\mathbb{N}\to[0,\infty)$ such that $$\sum_{n=1}^{\infty}h(n)=\infty,$$ yet $$\left\{x\in\mathbb{R}^d:x\in\bigcup_{l=1}^{R_{n}}B\left(f_{l,n}(\omega),\left(\frac{h(n)}{R_{n}}\right)^{1/d}\right)\textrm{ for i.m. } n\in \mathbb{N}\right\}$$ has zero Lebesgue measure.
\end{prop}
\begin{proof}
Let $\omega\in \Omega$ and $s>0$ be as above. By our assumption, there exists a strictly increasing sequence $(n_j)$ such that
\begin{equation}
\label{separated upper bound}
T\Big(Y_{n_j}(\omega),\frac{s}{R_{n_j}^{1/d}}\Big)\leq \frac{R_{n_j}}{j^2}
\end{equation} for all $j\in\mathbb{N}$. By the definition of a maximal $s\cdot R_{n_j}^{-1/d}$-separated set, we know that for each $l\in\{1,\ldots,R_{n_j}\},$ there exists $u\in S(Y_{n_j}(\omega),\frac{s}{R_{n_j}^{1/d}})$ such that $|u-f_{l,n_j}(\omega)|\leq s\cdot R_{n_j}^{-1/d}.$ It follows that
\begin{equation}
\label{inclusion z}
\bigcup_{l=1}^{R_{n_{j}}}B\left(f_{l,n_j}(\omega),\frac{s}{R_{n_j}^{1/d}}\right)\subseteq \bigcup_{u\in S(Y_{n_j}(\omega),\frac{s}{R_{n_j}^{1/d}})}B\left(u,\frac{3s}{R_{n_j}^{1/d}}\right).
\end{equation}We now define our function $h:\mathbb{N}\to [0,\infty):$
$$h(n)=
\left\{
\begin{array}{ll}
s^d & \mbox{if } n=n_j \textrm{ for some }j\in\mathbb{N}\\
0 & \textrm{otherwise}
\end{array}
\right.
$$This function obviously satisfies $$\sum_{n=1}^{\infty}h(n)=\infty.$$ By \eqref{inclusion z} and the definition of $h,$ we see that
\begin{align}
&\L\left(\left\{x\in\mathbb{R}^d:x\in\bigcup_{l=1}^{R_{n}}B\left(f_{l,n}(\omega),\left(\frac{h(n)}{R_{n}}\right)^{1/d}\right)\textrm{ for i.m. } n\in \mathbb{N}\right\}\right)\nonumber \\
\leq &\L\left(\left\{x:x\in \bigcup_{u\in S(Y_{n_j}(\omega),\frac{s}{R_{n_j}^{1/d}})}B\left(u,\frac{3s}{R_{n_j}^{1/d}}\right)\textrm{ for i.m. }j\right\}\right)\label{inclusion zz}.
\end{align} So to prove our result it suffices to show that the right hand side of \eqref{inclusion zz} is zero. This fact now follows from the Borel-Cantelli lemma and the following inequalities:
\begin{align*}\sum_{j=1}^{\infty}\sum_{u\in S(Y_{n_j}(\omega),\frac{s}{R_{n_j}^{1/d}})}\L\Big(B\Big(u,\frac{3s}{R_{n_j}^{1/d}}\Big)\Big)&=\sum_{j=1}^{\infty}T\Big(Y_{n_j}(\omega),\frac{s}{R_{n_j}^{1/d}}\Big)\frac{(3s)^d\L(B(0,1))}{R_{n_j}}\\
&\stackrel{\eqref{separated upper bound}}{\leq}\sum_{j=1}^{\infty}\frac{(3s)^d\L(B(0,1))}{j^2}\\
&<\infty.
\end{align*}
\end{proof}
\subsection{Full measure statements}
The main result of the previous section was Proposition \ref{general prop}. This result provides sufficient conditions for us to conclude that for a parameterised family of points, almost surely each member of a class of limsup sets defined in terms of neighbourhoods of these points will have positive Lebesgue measure. We will eventually apply Proposition \ref{general prop} to the sets $U_{\Phi}(z,\m,h)$ defined in the introduction. Instead of just proving positive measure statements, we would like to be able to prove full measure results. The purpose of this section is to show how one can achieve this goal. Proposition \ref{full measure} achieves this by imposing some extra assumptions on the function $\Psi$. Proposition \ref{separated full measure} achieves this by imposing some stronger separation hypothesis.
The following lemma follows from Lemma 1 of \cite{BerVel2}. It is a consequence of the Lebesgue density theorem.
\begin{lemma}[\cite{BerVel2}]
\label{arbitrarily small}The following statements are true:
\begin{enumerate}
\item Let $(x_j)$ be a sequence of points in $\mathbb{R}^d$ and $(r_j),(r_j')$ be two sequences of positive real numbers both converging to zero. If $r_j\asymp r_j'$ then $$\L(x:x\in B(x_j,r_j) \textrm{ for i.m. } j)=\L(x:x\in B(x_j,r_j') \textrm{ for i.m. } j).$$
\item Let $B(x_j,r_j)$ be a sequence of balls in $\mathbb{R}^d$ such that $r_j\to 0$. Then $$\L(x:x\in B(x_j,r_j) \textrm{ for i.m. } j)=\L\left(\bigcap _{0<c<1}\{x:x\in B(x_j,cr_j) \textrm{ for i.m. } j\}\right).$$
\end{enumerate}
\end{lemma}
Lemma \ref{arbitrarily small} implies the following useful fact. If $\Psi:\D^*\to[0,\infty)$ is equivalent to $(\m,h)$ and $U_{\Phi}(z,\m,h)$ has positive Lebesgue measure, then $W_{\Phi}(z,\Psi)$ has positive Lebesgue measure. We will use this fact several times throughout this paper.
Lemma \ref{arbitrarily small} will be used in the proof of the following proposition and in the proofs of our main theorems. Recall that we say that a function $\Psi:\D^*\to[0,\infty)$ is weakly decaying if
$$\inf_{\a\in \D^*}\min_{i\in \D}\frac{\Psi(i\a)}{\Psi(\a)}>0.$$
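For example, the function $\Psi(\a)=\big(\frac{l^{-|\a|}}{|\a|}\big)^{1/d}$ from Corollary \ref{translation cor} is weakly decaying: for any $i\in\D$ and $\a\in\D^*$, $$\frac{\Psi(i\a)}{\Psi(\a)}=\Big(\frac{|\a|}{l(|\a|+1)}\Big)^{1/d}\geq \Big(\frac{1}{2l}\Big)^{1/d}>0,$$ so the proposition below applies to the limsup sets appearing in that corollary.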
\begin{prop}
\label{full measure}
The following statements are true:
\begin{enumerate}
\item Assume $\Phi$ is a collection of similarities with attractor $X$. If $z\in X$ is such that $\L(W_{\Phi}(z,\Psi))>0$ for some $\Psi$ that is weakly decaying, then Lebesgue almost every $x\in X$ is contained in $W_{\Phi}(z,\Psi)$.
\item Assume $\Phi$ is an arbitrary IFS and there exists $\mu,$ the pushforward of a $\sigma$-invariant ergodic probability measure $\m,$ satisfying $\mu\sim \L|_{X}$. If $z\in X$ is such that $\L(W_{\Phi}(z,\Psi))>0$ for some $\Psi$ that is weakly decaying, then Lebesgue almost every $x\in X$ is contained in $W_{\Phi}(z,\Psi)$.
\end{enumerate}
\end{prop}
\begin{proof}
We prove each statement separately. \\
\noindent \textbf{Proof of statement 1.}\\
Let $\Phi$ be an IFS consisting of similarities and suppose $z$ and $\Psi$ satisfy the hypothesis of the proposition. Let $$A:=\bigcap_{0<c<1}W_{\Phi}(z,c\Psi).$$ It follows from Lemma \ref{arbitrarily small} that $$\L(A)=\L(W_{\Phi}(z,\Psi))>0.$$ We claim that
\begin{equation}
\label{full measurea}\L\Big(\bigcup_{\a\in \D^*}\phi_{\a}(A)\Big)=\L(X).
\end{equation} To see that \eqref{full measurea} holds, suppose otherwise and assume $$\L\Big(X\setminus \bigcup_{\a\in \D^*}\phi_{\a}(A)\Big)>0.$$ Moreover, let $x^*$ be a density point of $$X\setminus \bigcup_{\a\in \D^*}\phi_{\a}(A).$$ Such a point has to exist by the Lebesgue density theorem.
Let $(b_j)\in \D^\N$ be such that $\pi((b_j))=x^*,$ and let $0<r<Diam(X)$ be arbitrary. We let $n(r)\in\N$ be such that $$Diam(X) \prod_{j=1}^{n(r)}r_{b_j}<r\leq Diam(X) \prod_{j=1}^{n(r)-1}r_{b_j}.$$ The parameter $n(r)$ satisfies the following:
\begin{equation}
\label{inclusion1}
(\phi_{b_1}\circ \cdots \circ \phi_{b_{n(r)}})(X)\subseteq B(x^*,r),
\end{equation} and
\begin{equation}
\label{prodapprox} \frac{r \cdot \min_{i\in \D}r_i}{Diam(X)}\leq \prod_{j=1}^{n(r)}r_{b_j}.
\end{equation}
Using \eqref{inclusion1} and \eqref{prodapprox} we can now bound $$\L\Big(B(x^{*},r)\cap \Big(X\setminus \bigcup_{\a\in \D^*}\phi_{\a}(A)\Big)\Big).$$ Observe
\begin{align*}
\L\Big(B(x^{*},r)\cap \Big(X\setminus \bigcup_{\a\in \D^*}\phi_{\a}(A)\Big)\Big)&\stackrel{\eqref{inclusion1}}{\leq} \L(B(x^{*},r))- \L((\phi_{b_1}\circ \cdots \circ \phi_{b_{n(r)}})(A))\\
&=\L(B(0,1))r^d -\Big(\prod_{j=1}^{n(r)}r_{b_j}\Big)^d\L(A)\\
&\stackrel{\eqref{prodapprox}}{\leq} \L( B(0,1))r^d -\frac{r^d\L(A)\min_{i\in \D}r_i^d }{Diam(X)^d}\\
&\leq \L( B(0,1))r^d\Big(1-\frac{\L(A)\min_{i\in \D}r_i^d }{\L( B(0,1))Diam(X)^d}\Big).
\end{align*}
Therefore $$\limsup_{r\to 0}\frac{\L\big(B(x^{*},r)\cap \big(X\setminus \bigcup_{\a\in \D^*}\phi_{\a}(A)\big)\big)}{\L(B(x^*,r))}<1.$$ This implies that $x^*$ cannot be a density point of $$X\setminus \bigcup_{\a\in \D^*}\phi_{\a}(A),$$ and we may conclude that \eqref{full measurea} holds.
We will now show that
\begin{equation}
\label{limsup inclusion}\bigcup_{\a\in \D^*}\phi_{\a}(A)\subseteq W_{\Phi}(z,\Psi).
\end{equation}Let $$y\in \bigcup_{\a\in \D^*}\phi_{\a}(A)$$ be arbitrary. Let $b_1\cdots b_k\in \D^*$ and $v\in A$ be such that $$(\phi_{b_1}\circ \cdots \circ \phi_{b_k})(v)=y.$$ Let
\begin{equation}
\label{delightful}
\delta:=\inf_{\a\in \D^*}\min_{i\in \D}\frac{\Psi(i\a)}{\Psi(\a)}.
\end{equation} Since $\Psi$ is weakly decaying we have $\delta>0$.
Now suppose $\a\in \D^*$ is such that $$|\phi_{\a}(z)-v|\leq c\Psi(\a),$$ where
\begin{equation}
\label{c equation}
c:=\delta^k \big(\max_{i\in \D} r_i\big)^{-k}.
\end{equation} Then
\begin{align*}
|(\phi_{b_1}\circ \cdots \circ \phi_{b_k}\circ \phi_{\a})(z)-y|&=|(\phi_{b_1}\circ \cdots \circ \phi_{b_k}\circ \phi_{\a})(z)-(\phi_{b_1}\circ \cdots \circ \phi_{b_k})(v)|\\
&= |\phi_{\a}(z)-v|\prod_{j=1}^k r_{b_j}\\
&\leq c\Psi(\a)\prod_{j=1}^k r_{b_j}\\
&\stackrel{\eqref{delightful}}{\leq} c\delta^{-k}\Psi(b_1\cdots b_k \a)\prod_{j=1}^k r_{b_j}\\
&\stackrel{\eqref{c equation}}{\leq}\Psi(b_1\cdots b_k\a).
\end{align*}
It follows that for this choice of $c,$ whenever
\begin{equation}
\label{solutiona}
|\phi_{\a}(z)-v|\leq c\Psi(\a),
\end{equation} we also have
\begin{equation}
\label{solutionb}|(\phi_{b_1}\circ \cdots \circ \phi_{b_k}\circ \phi_{\a})(z)-y|\leq \Psi(b_1\cdots b_k\a).
\end{equation} It follows from the definition of $A$ that there are infinitely many $\a\in\D^*$ satisfying \eqref{solutiona}; therefore there are infinitely many $\a\in\D^*$ satisfying \eqref{solutionb}, and $y\in W_{\Phi}(z,\Psi)$. It follows now from \eqref{full measurea} and \eqref{limsup inclusion} that Lebesgue almost every $x\in X$ is contained in $W_{\Phi}(z,\Psi)$.\\
\noindent \textbf{Proof of statement 2.}\\
Let $\Phi$ be an IFS and $\mu$ be the pushforward of some $\sigma$-invariant ergodic probability measure $\m.$ We assume that $\mu\sim \L|_{X}$. Let $z$ and $\Psi$ satisfy the hypothesis of our proposition. Let $A$ be as in the proof of statement $1$. It follows from our assumptions and Lemma \ref{arbitrarily small} that $\L(A)>0$. Since $\mu\sim \L|_{X}$ we also have $\mu(A)>0$. We will prove that
\begin{equation}
\label{full measure10}
\mu\Big(\bigcup_{\a\in \D^*}\phi_{\a}(A)\Big)=1.
\end{equation} By our assumption $\m(\pi^{-1}(A))=\mu(A)>0.$ By the ergodicity of $\m$ we have
\begin{equation}
\label{ergodicity}
\m\Big(\bigcup_{n=0}^{\infty}\sigma^{-n}(\pi^{-1}(A))\Big)=1.
\end{equation} Now observe that
\begin{equation*}
\mu\Big(\bigcup_{\a\in \D^*}\phi_{\a}(A)\Big)=\m\Big(\pi^{-1}\Big(\bigcup_{\a\in \D^*}\phi_{\a}(A)\Big)\Big)\geq \m\Big(\bigcup_{n=0}^{\infty}\sigma^{-n}(\pi^{-1}(A))\Big).
\end{equation*} Therefore \eqref{ergodicity} implies \eqref{full measure10}. Since $\mu \sim \L|_{X}$ it follows that
\begin{equation}
\label{full measureABC} \L\Big(\bigcup_{\a\in \D^*}\phi_{\a}(A)\Big)=\L(X).
\end{equation}We now let $$y\in \bigcup_{\a\in \D^*}\phi_{\a}(A)$$ be arbitrary. Defining appropriate analogues of $b_1\cdots b_k$ and $c$ as in the proof of statement $1,$ it will follow that $y\in W_{\Phi}(z,\Psi)$. Therefore $$\bigcup_{\a\in \D^*}\phi_{\a}(A)\subseteq W_{\Phi}(z,\Psi).$$ Combining this fact with \eqref{full measureABC} completes the proof of statement $2$.
\end{proof}
Proposition \ref{full measure} is a useful technique for proving full measure statements, but there is an additional cost as we require the function $\Psi$ to be weakly decaying. The following proposition requires no extra condition on the function $\Psi$, but does require some stronger separation assumptions. Before stating this proposition we recall some of our earlier definitions. Given a slowly decaying probability measure $\m$ we define
$$L_{\m,n}=\{\a\in\D^*: \m([a_1\cdots a_{|\a|}])\leq c_{\m}^n<\m([a_1\cdots a_{|\a|-1}])\}$$ and $$R_{\m,n}=\#L_{\m,n}.$$ Moreover, given $z\in X$ we define $$Y_{\m,n}(z):=\{\phi_{\a}(z)\}_{\a\in L_{\m,n}}.$$
\begin{prop}
\label{separated full measure}
Suppose $\Phi=\{A_i x+t_i\}$ is a collection of affine contractions and $\mu$ is the pushforward of a Bernoulli measure $\m$. Assume that one of the following properties is satisfied:
\begin{itemize}
\item $\Phi$ consists solely of similarities.
\item $d=2$ and all the matrices $A_i$ are equal.
\item All the matrices $A_i$ are simultaneously diagonalisable.
\end{itemize} Let $z\in X$ and suppose that for some $s>0$ there exists a subsequence $(n_k)$ satisfying $$\lim_{k\to\infty}\frac{T(Y_{\m,n_k}(z),\frac{s}{R_{\m,n_k}^{1/d}})}{R_{\m,n_k}}=1.$$ Then $\mu\sim \L|_{X},$ and for any $h$ that satisfies $$\sum_{k=1}^{\infty}h(n_k)=\infty,$$ we have that Lebesgue almost every $x\in X$ is contained in $U_{\Phi}(z,\m,h).$
\end{prop}
The proof of Proposition \ref{separated full measure} is more involved than Proposition \ref{full measure} and will rely on the following technical result.
\begin{lemma}
\label{equivalent measures}
\begin{enumerate}
\item Let $\mu$ be a self-similar measure. Then either $\mu\sim \L|_{X}$ or $\mu$ is singular.
\item Suppose $\Phi=\{A_i x+t_i\}$ is a collection of affine contractions and one of the following properties is satisfied:
\begin{itemize}
\item $d=2$ and all the matrices $A_i$ are equal.
\item All the matrices $A_i$ are simultaneously diagonalisable.
\end{itemize} Then if $\mu$ is the pushforward of a Bernoulli measure we have either $\mu\sim \L|_{X}$ or $\mu$ is singular.
\item Let $\Phi$ be an arbitrary iterated function system and $\mu$ be the pushforward of a $\sigma$-invariant ergodic probability measure $\m$. Then either $\mu\ll\L$ or $\mu$ is singular (i.e. $\mu$ is of pure type).
\end{enumerate}
\end{lemma}
\begin{proof}
A proof of statement $1$ can be found in \cite{PeScSo}. It makes use of an argument originally appearing in \cite{MauSim}. Statement $2$ was proved in \cite[Section 4.4.]{Shm3} using ideas of Guzman \cite{Guz} and Fromberg \cite{NSW}.
We could not find a proof of statement $3$ in the literature, so we include one for completeness. Suppose that $\mu$ is not singular. Then by the Lebesgue decomposition theorem $\mu= \mu_0+\mu_1$ where $\mu_0\ll\L,$ $\mu_1\perp \L,$ and $\mu_0(X)>0.$ Suppose that $\mu\neq \mu_0$. Then there exists a Borel set $A$ such that $\mu_1(A)>0.$ Since $\mu_1\perp \L,$ we may assume without loss of generality that $\L(A)=0.$ Using the ergodicity of $\m,$ it follows from an argument analogous to that used in the proof of statement $2$ of Proposition \ref{full measure} that $\mu(\cup_{\a\in \D^*}\phi_{\a}(A))=1.$ Therefore we must have $\mu_{0}(\cup_{\a\in \D^*}\phi_{\a}(A))>0,$ and by absolute continuity $\L(\cup_{\a\in \D^*}\phi_{\a}(A))>0.$ However, since each $\phi_{\a}$ is a contraction, $\L(A)=0$ implies that $\L(\phi_{\a}(A))=0$ for all $\a\in \D^*,$ and hence $\L(\cup_{\a\in \D^*}\phi_{\a}(A))=0,$ a contradiction. Therefore we must have $\mu= \mu_0.$
\end{proof}Only statement $1$ and statement $2$ from Lemma \ref{equivalent measures} will be needed in the proof of Proposition \ref{separated full measure}. Statement $3$ is needed in the proof of the following result, which we formulate as generally as possible.
\begin{prop}
\label{absolute continuity}
Let $\mu$ be the pushforward of a slowly decaying $\sigma$-invariant ergodic probability measure $\m$. If for some $z\in X$ and $s>0$ we have $$\limsup_{n\to\infty} \frac{T\big(Y_{\m,n}(z),\frac{s}{R_{\m,n}^{1/d}}\big)}{R_{\m,n}}>0,$$ then $\mu\ll\L$.
\end{prop}
\begin{proof}
We start our proof by remarking that for any $z\in X$,
\begin{equation}
\label{weak star}
\mu=\lim_{n\to\infty} \sum_{\a\in L_{\m,n}}\m([\a])\cdot \delta_{\phi_{\a}(z)},
\end{equation} where the convergence is meant with respect to the weak star topology\footnote{Equation \eqref{weak star} can be verified by checking that for each $n\in \N$ the measure $\sum_{\a\in L_{\m,n}}\m([\a])\cdot \delta_{\phi_{\a}(z)}$ is the pushforward of an appropriately chosen measure $\m_{n}$ on $\D^{\mathbb{N}},$ and that this sequence of measures satisfies $\lim_{n\to\infty}\m_{n}=\m$.}. By our assumption, for some $z\in X$ and $s>0,$ there exists a sequence $(n_k)$ and $c>0$ such that
\begin{equation}
\label{mass equation}
\frac{T\big(Y_{\m,n_k}(z),\frac{s}{R_{\m,n_k}^{1/d}}\big)}{R_{\m,n_k}}>c
\end{equation} for all $k$. Define $$\mu_{n_k}':=\sum_{\stackrel{\a\in L_{\m,n_k}}{\phi_{\a}(z)\in S\big(Y_{\m,n_k}(z),\frac{s}{R_{\m,n_k}^{1/d}}\big)}}\m([\a])\cdot \delta_{\phi_{\a}(z)}$$ and $$\mu_{n_k}'':=\sum_{\stackrel{\a\in L_{\m,n_k}}{\phi_{\a}(z)\notin S\big(Y_{\m,n_k}(z),\frac{s}{R_{\m,n_k}^{1/d}}\big)}}\m([\a])\cdot \delta_{\phi_{\a}(z)}.$$ Then $$\sum_{\a\in L_{\m,n_k}}\m([\a])\cdot \delta_{\phi_{\a}(z)}=\mu_{n_k}'+\mu_{n_k}''.$$ By taking subsequences if necessary, we may also assume without loss of generality that there exist two finite measures $\nu'$ and $\nu''$ such that $\lim_{k\to\infty}\mu'_{n_k}= \nu'$ and $\lim_{k\to\infty}\mu''_{n_k}= \nu''.$ Therefore by \eqref{weak star} we have $\mu=\nu'+\nu''$. We will prove that $\nu'(X)>0$ and $\nu'$ is absolutely continuous with respect to the Lebesgue measure. Since $\mu$ is either singular or absolutely continuous by Lemma \ref{equivalent measures}, it will follow that $\mu\ll \L$.
It follows from the definition of $L_{\m,n}$ that for any $\a,\a'\in L_{\m,n}$ we have $$\m([\a])\asymp \m([\a']).$$ Since the cylinders $\{[\a]\}_{\a\in L_{\m,n}}$ are disjoint and their union has full $\m$-measure, this implies that for any $\a\in L_{\m,n}$ we have
\begin{equation}
\label{BBC}
\m([\a])\asymp R_{\m,n}^{-1}.
\end{equation} Using \eqref{mass equation} and \eqref{BBC}, we have that $$\mu'_{n_k}(X)=\sum_{\stackrel{\a\in L_{\m,n_k}}{\phi_{\a}(z)\in S\big(Y_{\m,n_k}(z),\frac{s}{R_{\m,n_k}^{1/d}}\big)}}\m([\a])\asymp \frac{T\big(Y_{\m,n_k}(z),\frac{s}{R_{\m,n_k}^{1/d}}\big)}{R_{\m,n_k}}\asymp \frac{c\cdot R_{\m,n_k}}{R_{\m,n_k}}\asymp 1.$$ Therefore $\nu'(X)\geq \lim_{k\to\infty}\mu'_{n_k}(X) >0$. Now we prove that $\nu'$ is absolutely continuous. Fixing an arbitrary open $d$-dimensional cube $(x_1,x_1+r)\times\cdots \times (x_d,x_d+r)\subset \mathbb{R}^d,$ we have
\begin{align}
& \mu'_{n_k}((x_1,x_1+r)\times \cdots\times (x_d,x_d+r))\nonumber\\
&=\sum_{\stackrel{\a\in L_{\m,n_k}}{\phi_{\a}(z)\in S\big(Y_{\m,n_k}(z),\frac{s}{R_{\m,n_k}^{1/d}}\big)\cap (x_1,x_1+r)\times\cdots\times (x_d,x_d+r)}} \m([\a])\nonumber\\
&=\mathcal{O}\left(\frac{\#\Big\{\phi_{\a}(z)\in S\big(Y_{\m,n_k}(z),\frac{s}{R_{\m,n_k}^{1/d}}\big)\cap (x_1,x_1+r)\times\cdots\times (x_d,x_d+r)\Big\}}{R_{\m,n_k}}\right)\label{jiggly}.
\end{align}In the last line we used \eqref{BBC}. Since the elements of $S\big(Y_{\m,n_k}(z),\frac{s}{R_{\m,n_k}^{1/d}}\big)$ are pairwise separated by at least $\frac{s}{R_{\m,n_k}^{1/d}},$ the balls of radius $\frac{s}{2R_{\m,n_k}^{1/d}}$ centred at these points are disjoint. For all $k$ large enough that $\frac{s}{R_{\m,n_k}^{1/d}}\leq r,$ the balls whose centres lie in our cube are contained in a cube of side length $3r,$ so comparing volumes we must have
$$\#\Big\{\phi_{\a}(z)\in S\big(Y_{\m,n_k}(z),\frac{s}{R_{\m,n_k}^{1/d}}\big)\cap (x_1,x_1+r)\times\cdots\times (x_d,x_d+r)\Big\}=\mathcal{O}\left(\frac{r^dR_{\m,n_k}}{s^d}\right).$$
Substituting this bound into \eqref{jiggly} we have $$\mu'_{n_k}((x_1,x_1+r)\times \cdots\times (x_d,x_d+r))=\mathcal{O}\left(\frac{r^dR_{\m,n_k}}{s^dR_{\m,n_k}}\right)=\mathcal{O}\left(\frac{r^d}{s^d}\right).$$
Letting $k\to\infty$, it follows that for any open $d$-dimensional cube we have $$\nu'((x_1,x_1+r)\times \cdots\times (x_d,x_d+r))=\mathcal{O}\left(\frac{r^d}{s^d}\right).$$ Since $s$ is fixed, $\nu'$ must be absolutely continuous. This completes our proof.
\end{proof}
As well as being used in our proof of Proposition \ref{separated full measure}, Proposition \ref{absolute continuity} can be seen as a new tool for proving that measures are absolutely continuous. It can be used in conjunction with Lemma \ref{density separation lemma} and Lemma \ref{integral bound} to recover known results on the absolute continuity of measures within a parameterised family. We include one such instance of this in Section \ref{applications}, where we recover the well known result due to Solomyak that for almost every $\lambda\in(1/2,1),$ the unbiased Bernoulli convolution is absolutely continuous \cite{Sol}.
With these preliminary results we are now in a position to prove Proposition \ref{separated full measure}.
\begin{proof}[Proof of Proposition \ref{separated full measure}]
Let $\Phi$ be an IFS satisfying one of our conditions and $\mu$ be the pushforward of a Bernoulli measure $\m$. Let $z\in X,$ $s>0,$ and $(n_k)$ satisfy the hypothesis of our proposition. By an application of Proposition \ref{absolute continuity}, we know that $\mu\ll \L.$ Moreover, by Lemma \ref{equivalent measures} we also know that $\mu\sim \L|_{X}$.
To prove our result, it is sufficient to show that
\begin{equation}
\label{Want to showa}
\L\left(\left\{x\in\mathbb{R}^d:x\in \bigcup_{\a\in L_{\m,n_k}}B\left(\phi_{\a}(z),\Big(\frac{h(n_k)}{R_{\m,n_k}}\Big)^{1/d}\right)\textrm{ for i.m. }k\in \N\right\}\right)=\L(X),
\end{equation} for any $h$ satisfying \begin{equation}
\label{h divergence}
\sum_{k=1}^{\infty}h(n_k)=\infty.
\end{equation}
It will then follow from Lemma \ref{arbitrarily small}, and the fact that $\m([\a])\asymp R_{\m,n}^{-1}$ for any $\a\in L_{\m,n},$ that \eqref{Want to showa} implies that Lebesgue almost every $x\in X$ is contained in $U_{\Phi}(z,\m,h)$ for any $h$ satisfying \eqref{h divergence}.
Our proof of \eqref{Want to showa} follows from an argument similar to that given in the proof of Proposition \ref{fixed omega}; where necessary we omit certain details to avoid repetition. Our strategy for proving that \eqref{Want to showa} holds is to prove that for Lebesgue almost every $y\in X,$ there exists $c_y>0$ such that for all $r$ sufficiently small we have
\begin{equation}
\label{Want to showb}\L\left(B(y,2r)\cap\left\{x\in\mathbb{R}^d:x\in \bigcup_{\a\in L_{\m,n_k}}B\left(\phi_{\a}(z),\Big(\frac{h(n_k)}{R_{\m,n_k}}\Big)^{1/d}\right)\textrm{ for i.m. }k\in \N\right\}\right)\geq c_y r^d.
\end{equation}Importantly $c_y$ will not depend upon $r$. It follows then by an application of the Lebesgue density theorem that \eqref{Want to showb} implies \eqref{Want to showa}. As in the proof of Proposition \ref{fixed omega}, we split our proof of \eqref{Want to showb} into smaller steps.\\
\noindent \textbf{Step $1$. Local information.}\\
We have already established that $\mu\sim \L|_{X}.$ Let $d$ denote the Radon-Nikodym derivative $d\mu/d\L$. For $\mu$-almost every $y$ we must have $d(y)>0$. It follows now by the Lebesgue differentiation theorem, and the fact that $\mu\sim \L|_{X},$ that for Lebesgue almost every $y\in X$ we have $$\lim_{r\to 0}\frac{\mu(B(y,r))}{\L(B(y,r))}= d(y)>0.$$ In what follows $y$ is a fixed element of $X$ satisfying this property. Let $r^*$ be such that for all $r\in(0,r^*),$ we have
\begin{equation}
\label{quack}
\frac{d(y)}{2}<\frac{\mu(B(y,r))}{\L(B(y,r))}< 2d(y).
\end{equation} Now using that $\mu$ is the weak star limit of the sequence of measures $$\mu_{n_k}:=\sum_{\a\in L_{\m,n_k}}\m([\a])\cdot\delta_{\phi_{\a}(z)},$$ together with \eqref{quack}, we can assert that for each $r\in(0,r^*),$ for $k$ sufficiently large we have \begin{equation}
\label{dumbdumb}
\mu_{n_k}(B(y,r))=\sum_{\stackrel{\a\in L_{\m,n_k}}{\phi_{\a}(z)\in B(y,r)}}\m([\a])\asymp d(y)r^d.
\end{equation} By construction we know that each $\a\in L_{\m,n_k}$ satisfies $$\m([\a])\asymp R_{\m,n_k}^{-1}.$$ Therefore it follows from \eqref{dumbdumb} that for all $k$ sufficiently large
\begin{equation}
\label{local growth near}\frac{\#\{\a\in L_{\m,n_k}:\phi_{\a}(z)\in B(y,r)\}}{R_{\m,n_k}}\asymp d(y)r^d.
\end{equation}
Let
$$A(y,r,k):=\left\{\a\in L_{\m,n_{k}}:\phi_{\a}(z)\in S\Big(Y_{\m,n_k}(z),\frac{s}{R_{\m,n_k}^{1/d}}\Big)\cap B(y,r)\right\}.$$
Since $$\lim_{k\to\infty}\frac{T(Y_{\m,n_k}(z),\frac{s}{R_{\m,n_k}^{1/d}})}{R_{\m,n_k}}=1,$$ it follows from \eqref{local growth near} that there exists $K(r)\in\mathbb{N},$ such that for any $k\geq K(r)$ we have
\begin{equation}
\label{local growth}
\# A(y,r,k)\asymp R_{\m,n_k}d(y)r^d .
\end{equation} Equation \eqref{local growth} shows that for each $k\geq K(r),$ there is a large separated set that is local to $B(y,r)$.
We will prove that there exists $c_y>0$ such that
\begin{equation}
\label{WTS3}
\L\left(B(y,2r)\cap\left\{x\in\mathbb{R}^d:x\in \bigcup_{\a\in A(y,r,k)}B\left(\phi_{\a}(z),\Big(\frac{h(n_k)}{R_{\m,n_k}}\Big)^{1/d}\right)\textrm{ for i.m. }k\in \N\right\}\right)\geq c_yr^d.
\end{equation} Equation \eqref{WTS3} implies \eqref{Want to showb}. So to complete our proof it suffices to show that \eqref{WTS3} holds.
For later use we note that \begin{equation}
\label{divergenceaa}
\sum_{k=K(r)}^{\infty}\sum_{\a\in A(y,r,k)}\L\left(B\big(\phi_{\a}(z),\Big(\frac{h(n_k)}{R_{\m,n_k}}\Big)^{1/d}\big)\right)=\infty.
\end{equation}Equation \eqref{divergenceaa} is true because
\begin{align*}
\sum_{k=K(r)}^{\infty}\sum_{\a\in A(y,r,k)}\L\Big(B\big(\phi_{\a}(z),\Big(\frac{h(n_k)}{R_{\m,n_k}}\Big)^{1/d}\big)\Big)&= \sum_{k=K(r)}^{\infty}\# A(y,r,k)\frac{\L(B(0,1))h(n_k)}{R_{\m,n_k}}\\
&\stackrel{\eqref{local growth}}{\asymp} d(y)r^d\L(B(0,1))\sum_{k=K(r)}^{\infty} h(n_k)\\
&\stackrel{\eqref{h divergence}}{=}\infty.
\end{align*}\\
\noindent \textbf{Step $2.$ Replacing our approximating function.}\\
Let $$g(n_k):=\min\Big\{\Big(\frac{h(n_k)}{R_{\m,n_k}}\Big)^{1/d},\frac{s}{3R_{\m,n_k}^{1/d}}\Big\}.$$ For each $k\geq K(r)$ we define the set $$E_{n_k}:=\bigcup_{\a\in A(y,r,k)}B(\phi_{\a}(z),g(n_k)).$$ By construction the balls in this union are disjoint. Therefore by \eqref{local growth}, for each $k\geq K(r)$
\begin{equation}
\label{local measure growth}\L(E_{n_k})\asymp g(n_k)^dR_{\m,n_k}d(y)r^d.
\end{equation} By a similar argument to that given in the proof of Proposition \ref{fixed omega}, it follows from \eqref{divergenceaa} that we have
\begin{equation}
\label{divergencebb}\sum_{k=K(r)}^{\infty}\L(E_{n_k})=\infty.
\end{equation}
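For completeness, we sketch the case analysis behind this deduction; it mirrors the corresponding step in the proof of Proposition \ref{fixed omega}. If $g(n_k)=\frac{s}{3R_{\m,n_k}^{1/d}}$ for infinitely many $k\geq K(r),$ then along this subsequence \eqref{local measure growth} gives
$$\L(E_{n_k})\asymp \frac{s^d}{3^d}d(y)r^d,$$
a quantity bounded away from zero, and \eqref{divergencebb} holds trivially. Otherwise $g(n_k)^d=\frac{h(n_k)}{R_{\m,n_k}}$ for all but finitely many $k,$ in which case \eqref{local measure growth} gives $\L(E_{n_k})\asymp d(y)r^dh(n_k)$ for these $k,$ and \eqref{divergencebb} follows from \eqref{divergenceaa}.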
Therefore, the assumptions of Lemma \ref{Erdos lemma} are satisfied. We will use this lemma to show that
\begin{equation}
\label{WTS4}
\L\Big( \limsup_{k\to\infty}E_{n_k}\Big)\geq c_yr^d.
\end{equation}Since $\lim_{k\to\infty}g(n_k)=0,$ we have $$\limsup_{k\to\infty}E_{n_k}\subseteq B(y,2r),$$ and so \eqref{WTS4} implies \eqref{WTS3}. Therefore verifying \eqref{WTS4} will complete our proof. \\
\noindent \textbf{Step $3$. Bounding $\L(E_{n_k}\cap E_{n_{k'}})$.}\\
By an analogous argument to that given in the proof of Proposition \ref{fixed omega}, we can show that for any $\a\in A(y,r,k),$ we have $$\#\left\{\a'\in A(y,r,k'):B(\phi_{\a}(z),g(n_k))\cap B(\phi_{\a'}(z),g(n_{k'}))\neq \emptyset\right\}=\mathcal{O}\Big(\frac{g(n_k)^d R_{\m,n_{k'}}}{s^d}+1\Big).$$ Using this estimate and \eqref{local growth}, it can be shown that for any distinct $k,k'\geq K(r)$ we have
\begin{equation}
\label{intersection bounda}\L(E_{n_k}\cap E_{n_{k'}})=\mathcal{O}\left(d(y) r^dR_{\m,n_k}g(n_{k'})^d\left(\frac{g(n_k)^d R_{\m,n_{k'}}}{s^d}+1\right)\right).
\end{equation}\\
\noindent \textbf{Step $4$. Applying Lemma \ref{Erdos lemma}.}\\
Using \eqref{intersection bounda}, we can then replicate the arguments used in the proof of Proposition \ref{fixed omega} to show that
\begin{equation}
\label{localbound1}\sum_{k,k'=K(r)}^{Q}\L(E_{n_k}\cap E_{n_{k'}})=\mathcal{O}\Big(d(y)r^d\Big(\sum_{k=K(r)}^QR_{\m,n_k}g(n_k)^d\Big)^2\Big).
\end{equation}We emphasise here that the underlying constants in \eqref{localbound1} do not depend upon $r$. By \eqref{local measure growth} we have
\begin{equation}
\label{localbound2}\Big(\sum_{k=K(r)}^{Q}\L(E_{n_k})\Big)^2\asymp r^{2d}d(y)^2 \Big(\sum_{k=K(r)}^{Q}R_{\m,n_k}g(n_k)^d\Big)^2.
\end{equation} Applying Lemma \ref{Erdos lemma} in conjunction with \eqref{localbound1} and \eqref{localbound2} yields $$\L\Big(\limsup_{k\to\infty}E_{n_k}\Big)\geq c_yr^d,$$ for some $c_y>0$ that does not depend upon $r$. Therefore \eqref{WTS4} holds and we have completed our proof.
\end{proof}
\section{Applications of Proposition \ref{general prop}}
\label{applications}
In this section we apply the results of Section \ref{Preliminaries} to prove Theorems \ref{1d thm}, \ref{translation thm} and \ref{random thm}. We begin by briefly explaining why the exponential growth condition appearing in Proposition \ref{general prop} will always be satisfied in our proofs.
Let $\m$ be a slowly decaying probability measure supported on $\D^{\N}$. We remark that each $\a\in L_{\m,n}$ satisfies $$\m([\a])\asymp c_{\m}^n.$$ Recall that $c_{\m}$ is defined in Section \ref{auxillary sets}. Importantly the cylinders corresponding to elements of $L_{\m,n}$ are disjoint, and we have $\m(\cup_{\a\in L_{\m,n}}[\a])=1$. It follows from these observations that $$R_{\m,n}\asymp c_{\m}^{-n}.$$ Similarly, if $\tilde{L}_{\m,n}\subseteq L_{\m,n}$ and $\m(\cup_{\a\in \tilde{L}_{\m,n}}[\a])>\delta$ for some $\delta>0$ for each $n$, then $$\#\tilde{L}_{\m,n}=:\tilde{R}_{\m,n}\asymp c_{\m}^{-n},$$ where the underlying constants depend upon $\delta$ but are independent of $n$. In our proofs of Theorems \ref{1d thm}, \ref{translation thm} and \ref{random thm}, it will be necessary to define an $\tilde{L}_{\m,n}$ contained in $L_{\m,n}$ whose union of cylinders has measure uniformly bounded away from zero. By the above discussion the exponential growth condition appearing in Proposition \ref{general prop} will automatically be satisfied by $\tilde{L}_{\m,n}$.
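Spelling out the first of these assertions: since the cylinders $\{[\a]\}_{\a\in L_{\m,n}}$ are disjoint, have union of full $\m$-measure, and each has measure comparable to $c_{\m}^n,$ we obtain
$$1=\sum_{\a\in L_{\m,n}}\m([\a])\asymp R_{\m,n}\,c_{\m}^{n},$$
and rearranging gives $R_{\m,n}\asymp c_{\m}^{-n}$. The bound for $\tilde{R}_{\m,n}$ follows in the same way, with the $\m$-measure of $\cup_{\a\in \tilde{L}_{\m,n}}[\a]$ now lying between $\delta$ and $1$.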
\subsection{Proof of Theorem \ref{1d thm}}
Before proceeding with our proof of Theorem \ref{1d thm} we recall some useful results from \cite{Sol}.
\begin{lemma}[Lemma 2.1 \cite{Sol}]
\label{delta lemma}
For any $\epsilon_1>0,$ there exists $\delta=\delta(\epsilon_1)>0$ such that if $g\in \mathcal{B}_{\Gamma}$, and $g(0)\neq 0,$ then $$x\in (0,\alpha(\mathcal{B}_{\Gamma})-\epsilon_1], |g(x)|<\delta \implies |g'(x)|>\delta.$$
\end{lemma}
Lemma \ref{delta lemma} has the following useful consequence.
\begin{lemma}
\label{zero intervals}
Let $\epsilon_1>0$ and $\delta(\epsilon_1)>0$ be as in Lemma \ref{delta lemma}. Then for any $\epsilon_2>0$ and $g\in \mathcal{B}_{\Gamma}$ such that $g(0)\neq 0$, we have $$\L\big(\{\lambda\in(0,\alpha(\mathcal{B}_{\Gamma})-\epsilon_1]: |g(\lambda)|\leq \epsilon_2\}\big)=\mathcal{O}(\epsilon_2),$$ where the underlying constant depends upon $\delta(\epsilon_1).$
\end{lemma}Lemma \ref{zero intervals} follows from the analysis given in Section 2.4 of \cite{Sol}. Equipped with Lemma \ref{zero intervals} and the results of Section \ref{Preliminaries}, we can now prove Theorem \ref{1d thm}.
\begin{proof}[Proof of Theorem \ref{1d thm}]We treat each statement in this theorem individually. We start with the proof of statement $1$.\\
\noindent \textbf{Proof of statement $1$.}\\
Let us start by fixing a slowly decaying $\sigma$-invariant ergodic probability measure $\m$ with $\h(\m)>0,$ and a sequence $(a_j)\in \D^{\mathbb{N}}.$ Let $\epsilon_1>0$ be arbitrary. We now choose $\epsilon_2>0$ sufficiently small so that we have
\begin{equation}
\label{epsilons ratio}
e^{\h(\m)-\epsilon_2}(e^{-\h(\m)}+\epsilon_1)>1.
\end{equation}
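Such a choice of $\epsilon_2$ exists: at $\epsilon_2=0$ the left-hand side of \eqref{epsilons ratio} equals
$$e^{\h(\m)}\big(e^{-\h(\m)}+\epsilon_1\big)=1+\epsilon_1e^{\h(\m)}>1,$$
so by continuity \eqref{epsilons ratio} holds for all sufficiently small $\epsilon_2>0$.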
By the Shannon-McMillan-Breiman theorem, we know that for $\m$-almost every $\a\in \D^{\mathbb{N}}$ we have
\begin{equation}
\label{SMB}
\lim_{n\to\infty}\frac{-\log \m([a_1\cdots a_n])}{n}= \h(\m).
\end{equation}
It follows from \eqref{SMB} and Egorov's theorem that there exists $C=C(\epsilon_2)>0$ such that $$\m\Big(\a\in \D^{\N}: \frac{e^{k(-\h(\m)-\epsilon_2)}}{C}\leq \m([a_1\cdots a_k])\leq C e^{k(-\h(\m)+\epsilon_2)},\, \forall k\in\mathbb{N}\Big)>1/2.$$ Let $$\tilde{L}_{\m,n}:=\Big\{\a\in L_{\m,n}: \frac{e^{k(-\h(\m)-\epsilon_2)}}{C}\leq \m([a_1\cdots a_{k}])\leq C e^{k(-\h(\m)+\epsilon_2)},\, \forall 1\leq k\leq |\a|\Big\}$$ and
$$\tilde{R}_{\m,n}:=\#\tilde{L}_{\m,n}.$$
Since $$\m\Big(\bigcup_{\a\in L_{\m,n}}[\a]\Big)=1,$$ it follows from the above that
\begin{equation}
\label{half bound}
\m\Big(\bigcup_{\a\in \tilde{L}_{\m,n}}[\a]\Big)>1/2.
\end{equation}By the discussion at the beginning of this section we know that $\tilde{R}_{\m,n}$ satisfies the exponential growth condition of Proposition \ref{general prop}. It also follows from this discussion that
\begin{equation}
\label{equivalen cardinality}\tilde{R}_{\m,n}\asymp R_{\m,n}.
\end{equation}
Recalling the notation used in Section \ref{Preliminaries}, let $$R(\lambda,s,n):=\left\{(\a,\a')\in \tilde{L}_{\m,n}\times \tilde{L}_{\m,n}:\left|\phi_{\a}\Big(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1}\Big)-\phi_{\a'}\Big(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1}\Big)\right|\leq \frac{s}{\tilde{R}_{\m,n}}\textrm{ and }\a\neq\a'\right\}.$$The main step in our proof of statement $1$ is to show that
\begin{equation}
\label{transversality separation}
\int_{e^{-\h(\m)}+\epsilon_1}^{\alpha(\mathcal{B}_{\Gamma})-\epsilon_1} \frac{\#R(\lambda,s,n)}{\tilde{R}_{\m,n}}d\lambda=\mathcal{O}(s).
\end{equation} We will then be able to employ the results of Section \ref{Preliminaries} to prove our theorem. Our proof of \eqref{transversality separation} is based upon an argument of Benjamini and Solomyak \cite{BenSol}, which in turn is based upon an argument of Peres and Solomyak \cite{PerSol}.\\
\noindent \textbf{Step 1. Proof of \eqref{transversality separation}.}\\
Observe the following:
\begin{align}
\label{2b substituted}
&\int_{e^{-\h(\m)}+\epsilon_1}^{\alpha(\mathcal{B}_{\Gamma})-\epsilon_1} \frac{\#R(\lambda,s,n)}{\tilde{R}_{\m,n}}d\lambda \nonumber\\
&=\int_{e^{-\h(\m)}+\epsilon_1}^{\alpha(\mathcal{B}_{\Gamma})-\epsilon_1}\frac{1}{\tilde{R}_{\m,n}}\sum_{\stackrel{\a,\a'\in \tilde{L}_{\m,n}}{\a\neq \a'}}\chi_{[-\frac{s}{\tilde{R}_{\m,n}},\frac{s}{\tilde{R}_{\m,n}}]}\left(\phi_{\a}\Big(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1}\Big)-\phi_{\a'}\Big(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1}\Big)\right)\, d\lambda \nonumber \\
&=\mathcal{O}\left(\tilde{R}_{\m,n}\int_{e^{-\h(\m)}+\epsilon_1}^{\alpha(\mathcal{B}_{\Gamma})-\epsilon_1}\sum_{\stackrel{\a,\a'\in \tilde{L}_{\m,n}}{\a\neq \a'}}\chi_{[-\frac{s}{\tilde{R}_{\m,n}},\frac{s}{\tilde{R}_{\m,n}}]}\left(\phi_{\a}\Big(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1}\Big)-\phi_{\a'}\Big(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1}\Big)\right)\m([\a])\m([\a'])\, d\lambda\right)\nonumber \\
&=\mathcal{O}\left(\tilde{R}_{\m,n}\sum_{\stackrel{\a,\a'\in \tilde{L}_{\m,n}}{\a\neq \a'}} \m([\a])\m([\a'])\int_{e^{-\h(\m)}+\epsilon_1}^{\alpha(\mathcal{B}_{\Gamma})-\epsilon_1} \chi_{[-\frac{s}{\tilde{R}_{\m,n}},\frac{s}{\tilde{R}_{\m,n}}]}\left(\phi_{\a}\Big(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1}\Big)-\phi_{\a'}\Big(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1}\Big)\right)d\lambda\right).
\end{align}
In the penultimate line we used that for any $\a\in \tilde{L}_{\m,n}$ we have $\m([\a])\asymp \tilde{R}_{\m,n}^{-1}$.
Note that for any distinct $\a,\a'\in \tilde{L}_{\m,n}$ we have $$\phi_{\a}\Big(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1}\Big)-\phi_{\a'}\Big(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1}\Big)\in \mathcal{B}_{\Gamma}.$$ Let $|\a\wedge \a'|:=\inf\{k:a_k\neq a_k'\}.$ Then $$\phi_{\a}\Big(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1}\Big)-\phi_{\a'}\Big(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1}\Big)=\lambda^{|\a\wedge \a'|-1}g(\lambda),$$ for some $g\in\mathcal{B}_{\Gamma}$ satisfying $g(0)\neq 0$. Therefore, for any distinct $\a,\a'\in \tilde{L}_{\m,n}$ we have
\begin{align*}
&\int_{e^{-\h(\m)}+\epsilon_1}^{\alpha(\mathcal{B}_{\Gamma})-\epsilon_1} \chi_{[-\frac{s}{\tilde{R}_{\m,n}},\frac{s}{\tilde{R}_{\m,n}}]}\left(\phi_{\a}\Big(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1}\Big)-\phi_{\a'}\Big(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1}\Big)\right)d\lambda\\
&=\L\left(\left\{\lambda\in (e^{-\h(\m)}+\epsilon_1,\alpha(\mathcal{B}_{\Gamma})-\epsilon_1):\phi_{\a}\Big(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1}\Big)-\phi_{\a'}\Big(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1}\Big)\in \Big[-\frac{s}{\tilde{R}_{\m,n}},\frac{s}{\tilde{R}_{\m,n}}\Big]\right\}\right)\\
&=\L\left(\left\{\lambda\in (e^{-\h(\m)}+\epsilon_1,\alpha(\mathcal{B}_{\Gamma})-\epsilon_1):\lambda^{|\a\wedge \a'|-1}g(\lambda)\in \Big[-\frac{s}{\tilde{R}_{\m,n}},\frac{s}{\tilde{R}_{\m,n}}\Big]\right\}\right)\\
&=\L\left(\left\{\lambda\in (e^{-\h(\m)}+\epsilon_1,\alpha(\mathcal{B}_{\Gamma})-\epsilon_1):g(\lambda)\in \Big[-\frac{s\lambda^{-|\a\wedge \a'|+1}}{\tilde{R}_{\m,n}},\frac{s\lambda^{-|\a\wedge \a'|+1}}{\tilde{R}_{\m,n}}\Big]\right\}\right)\\
&\leq \L\left(\left\{\lambda\in (e^{-\h(\m)}+\epsilon_1,\alpha(\mathcal{B}_{\Gamma})-\epsilon_1):g(\lambda)\in \Big[-\frac{s(e^{-\h(\m)}+\epsilon_1)^{-|\a\wedge \a'|+1}}{\tilde{R}_{\m,n}},\frac{s(e^{-\h(\m)}+\epsilon_1)^{-|\a\wedge \a'|+1}}{\tilde{R}_{\m,n}}\Big]\right\}\right)\\
&=\mathcal{O}\left(\frac{s(e^{-\h(\m)}+\epsilon_1)^{-|\a\wedge\a'|}}{\tilde{R}_{\m,n}}\right).
\end{align*}
In the last line we used Lemma \ref{zero intervals}. Summarising the above, we have shown that
\begin{equation}
\label{transversality bound}
\int_{e^{-\h(\m)}+\epsilon_1}^{\alpha(\mathcal{B}_{\Gamma})-\epsilon_1} \chi_{[-\frac{s}{\tilde{R}_{\m,n}},\frac{s}{\tilde{R}_{\m,n}}]}\left(\phi_{\a}(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1})-\phi_{\a'}(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1})\right)d\lambda=\mathcal{O}\left(\frac{s(e^{-\h(\m)}+\epsilon_1)^{-|\a\wedge\a'|}}{\tilde{R}_{\m,n}}\right).
\end{equation}
Substituting \eqref{transversality bound} into \eqref{2b substituted} we obtain:
\begin{align*}
\int_{e^{-\h(\m)}+\epsilon_1}^{\alpha(\mathcal{B}_{\Gamma})-\epsilon_1} \frac{\#R(\lambda,s,n)}{\tilde{R}_{\m,n}}d\lambda&=\mathcal{O}\left(\tilde{R}_{\m,n}\sum_{\stackrel{\a,\a'\in \tilde{L}_{\m,n}}{\a\neq \a'}} \m([\a])\m([\a']) \frac{s(e^{-\h(\m)}+\epsilon_1)^{-|\a\wedge \a'|}}{\tilde{R}_{\m,n}}\right)\\
&=\mathcal{O}\left(s\sum_{\stackrel{\a,\a'\in \tilde{L}_{\m,n}}{\a\neq \a'}} \m([\a])\m([\a']) (e^{-\h(\m)}+\epsilon_1)^{-|\a\wedge \a'|}\right)\\
&=\mathcal{O}\left(s\sum_{\a\in \tilde{L}_{\m,n}}\m([\a])\sum_{k=1}^{|\a|-1}\sum_{\stackrel{\a'\in\tilde{L}_{\m,n} }{|\a\wedge\a'|=k}}\m([\a']) (e^{-\h(\m)}+\epsilon_1)^{-k}\right)\\
&=\mathcal{O}\left(s\sum_{\a\in \tilde{L}_{\m,n}}\m([\a])\sum_{k=1}^{|\a|-1}\m([a_1\cdots a_{k-1}])(e^{-\h(\m)}+\epsilon_1)^{-k}\right)\\
&=\mathcal{O}\left(s\sum_{\a\in \tilde{L}_{\m,n}}\m([\a])\sum_{k=1}^{|\a|-1} e^{-k(\h(\m)-\epsilon_2)}(e^{-\h(\m)}+\epsilon_1)^{-k}\right)\\
&=\mathcal{O}\left(s\sum_{\a\in \tilde{L}_{\m,n}}\m([\a])\sum_{k=1}^{\infty}e^{-k(\h(\m)-\epsilon_2)}(e^{-\h(\m)}+\epsilon_1)^{-k}\right).
\end{align*}
By \eqref{epsilons ratio} the common ratio $e^{-(\h(\m)-\epsilon_2)}(e^{-\h(\m)}+\epsilon_1)^{-1}$ of this geometric series is strictly smaller than $1,$ so $$\sum_{k=1}^{\infty}e^{-k(\h(\m)-\epsilon_2)}(e^{-\h(\m)}+\epsilon_1)^{-k}<\infty.$$ Therefore
$$\int_{e^{-\h(\m)}+\epsilon_1}^{\alpha(\mathcal{B}_{\Gamma})-\epsilon_1} \frac{\#R(\lambda,s,n)}{\tilde{R}_{\m,n}}d\lambda=\mathcal{O}\left(s\sum_{\a\in \tilde{L}_{\m,n}}\m([\a])\right)=\mathcal{O}\left(s\right)$$ as required.\\
\noindent\textbf{Step 2. Applying \eqref{transversality separation}.}\\
Combining \eqref{transversality separation} and Lemma \ref{integral bound} we obtain
\begin{equation}
\L([e^{-\h(\m)}+\epsilon_1,\alpha(\mathcal{B}_{\Gamma})-\epsilon_1])-\int_{e^{-\h(\m)}+\epsilon_1}^{\alpha(\mathcal{B}_{\Gamma})-\epsilon_1}\frac{T\Big(\big\{\phi_{\a}\big(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1}\big)\big\}_{\a\in \tilde{L}_{\m,n}},\frac{s}{\tilde{R}_{\m,n}}\Big)}{\tilde{R}_{\m,n}}d\lambda=\mathcal{O}(s).
\end{equation} Therefore by Proposition \ref{general prop}, we know that for Lebesgue almost every $\lambda\in [e^{-\h(\m)}+\epsilon_1,\alpha(\mathcal{B}_{\Gamma})-\epsilon_1]$ the set$$\left\{x\in\mathbb{R}:x\in \bigcup_{\a\in \tilde{L}_{\m,n}}B\left(\phi_{\a}\left(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1}\right),\frac{h(n)}{\tilde{R}_{\m,n}}\right)\textrm{ for i.m. }n\in\N\right\}$$
has positive Lebesgue measure for any $h\in H$. Since $\epsilon_1$ was arbitrary, we can assert that for Lebesgue almost every $\lambda\in (e^{-\h(\m)},\alpha(\mathcal{B}_{\Gamma})),$ for any $h\in H$ the above set has positive Lebesgue measure. By \eqref{equivalen cardinality} we know that $\tilde{R}_{\m,n}\asymp R_{\m,n},$ which by the discussion given at the start of this section implies $\tilde{R}_{\m,n}^{-1}\asymp \m([\a])$ for each $\a\in L_{\m,n}$. Therefore by Lemma \ref{arbitrarily small}, we may conclude that for Lebesgue almost every $\lambda\in (e^{-\h(\m)},\alpha(\mathcal{B}_{\Gamma})),$ for any $h\in H$ the set $U_{\Phi_{\lambda,D}}(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1},\m,h)$ has positive Lebesgue measure. \\
\noindent \textbf{Proof of statement 2.}\\
We start our proof of this statement by remarking that since $\m$ is the uniform $(1/l,\ldots,1/l)$ Bernoulli measure, we have $L_{\m,n}=\D^n$ for each $n\in\mathbb{N}$. Since the words in $\D^n$ have the same length and each similarity contracts by a factor $\lambda$, it can be shown that $$\phi_{\a}(z)-\phi_{\a}(z')=\lambda^n(z-z'),$$ for all $\a\in \D^n$ for any $z,z'\in X_{\lambda,D}$. Importantly this difference does not depend upon $\a.$ Therefore the sets $\{\phi_{\a}(z)\}_{\a\in\D^n}$ and $\{\phi_{\a}(z')\}_{\a\in \D^n}$ are translates of each other, in which case
\begin{equation}
\label{happy}
T\Big(\{\phi_{\a}(z)\}_{\a\in \D^n},\frac{s}{l^n}\Big)=T\Big(\{\phi_{\a}(z')\}_{\a\in \D^n},\frac{s}{l^n}\Big)
\end{equation} for any $z,z'\in X_{\lambda,D}$.
By \eqref{transversality separation}, the assumptions of Proposition \ref{general prop} are satisfied and by Lemma \ref{density separation lemma}, for any $(a_j)\in \D^{\mathbb{N}},$ for Lebesgue almost every $\lambda\in[1/l+\epsilon_1,\alpha(\mathcal{B}_{\Gamma})-\epsilon_1],$ given an $\epsilon>0$ we can pick $c,s>0$ such that
\begin{equation}
\label{happier}\overline{d}\left(n:\frac{T\Big(\big\{\phi_{\a}\big(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1}\big)\big\}_{\a\in \D^n},\frac{s}{l^n}\Big)}{l^n}\geq c \right)>1-\epsilon.
\end{equation} If $\lambda$ is such that \eqref{happier} holds for a specific sequence $(a_j)\in \D^{\mathbb{N}},$ then \eqref{happy} implies that it must hold for all $(a_j)\in \D^{\mathbb{N}}$ simultaneously. Therefore, we may assert that for Lebesgue almost every $\lambda\in[1/l+\epsilon_1,\alpha(\mathcal{B}_{\Gamma})-\epsilon_1],$ given an $\epsilon>0$ we can pick $c,s>0,$ such that for any $z\in X_{\lambda,D}$ we have
\begin{equation}
\label{home stretch} \overline{d}\left(n:\frac{T\Big(\big\{\phi_{\a}(z)\big\}_{\a\in \D^n},\frac{s}{l^n}\Big)}{l^n}\geq c \right)>1-\epsilon.
\end{equation} Examining the proof of Proposition \ref{general prop}, we see that \eqref{home stretch} implies that for Lebesgue almost every $\lambda\in [1/l+\epsilon_1,\alpha(\mathcal{B}_{\Gamma})-\epsilon_1],$ for any $z\in X_{\lambda,D}$ and $h\in H$, the set
$$\left\{x\in\mathbb{R}:x\in\bigcup_{\a\in \D^{n}}B\left(\phi_{\a}(z),\frac{h(n)}{l^{n}}\right)\textrm{ for i.m. }n\in \N\right\}$$
has positive Lebesgue measure. In other words, for Lebesgue almost every $\lambda\in [1/l+\epsilon_1,\alpha(\mathcal{B}_{\Gamma})-\epsilon_1],$ for any $z\in X_{\lambda,D}$ and $h\in H$, the set $U_{\Phi_{\lambda,D}}(z,\m,h)$ has positive Lebesgue measure. Since $\epsilon_1$ was arbitrary we can conclude our result for Lebesgue almost every $\lambda\in(1/l,\alpha(\mathcal{B}_{\Gamma})).$\\
\noindent \textbf{Proof of statement 3.}\\
By statement $1$ we know that for any $(a_j)\in \D^{\N},$ for Lebesgue almost every $\lambda\in(e^{-\h(\m)},\alpha(\mathcal{B}_{\Gamma})),$ for any $h\in H$ the set $U_{\Phi_{\lambda,D}}(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1},\m,h)$ has positive Lebesgue measure. It follows therefore by Lemma \ref{arbitrarily small} that for any $(a_j)\in \D^{\N},$ for Lebesgue almost every $\lambda\in(e^{-\h(\m)},\alpha(\mathcal{B}_{\Gamma})),$ for any $\Psi$ that is equivalent to $(\m,h)$ for some $h\in H,$ the set $W_{\Phi_{\lambda,D}}(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1},\Psi)$ has positive Lebesgue measure. Applying Proposition \ref{full measure} we may conclude that for any $(a_j)\in \D^{\N},$ for Lebesgue almost every $\lambda\in(e^{-\h(\m)},\alpha(\mathcal{B}_{\Gamma})),$ for any $\Psi\in \Upsilon_{\m}$ Lebesgue almost every $x\in X_{\lambda,D}$ is contained in $W_{\Phi_{\lambda,D}}(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1},\Psi)$.\\
\noindent \textbf{Proof of statement 4.}\\
The proof of statement $4$ is analogous to the proof of statement $3$. The only difference is that instead of using statement $1$ at the beginning we use statement $2$.
\end{proof}
We now explain how Corollary \ref{example cor} follows from Theorem \ref{1d thm}.
\begin{proof}[Proof of Corollary \ref{example cor}]
Let us start by fixing $h:\mathbb{N}\to[0,\infty)$ to be $h(n)=1/n$. We remark that this function $h$ is an element of $H$. This can be proved using the well known fact $$\sum_{n=1}^{N}\frac{1}{n}\sim \log N.$$ Let us now fix a Bernoulli measure $\m$ as in the statement of Corollary \ref{example cor}. Observe that for any $\a\in \D^*$ we have
\begin{equation}
\label{cheap decay}(\min_{i\in \D} p_i)^{|\a|}\leq \m([\a])\leq (\max_{i\in \D} p_i)^{|\a|}.
\end{equation} Using \eqref{cheap decay} and the fact that each $\a\in L_{\m,n}$ satisfies $\m([\a])\asymp c_{\m}^{n},$ it can be shown that each $\a\in L_{\m,n}$ satisfies $$|\a|\asymp n.$$ This implies that for any $\a\in L_{\m,n}$ we have $$\frac{\prod_{j=1}^{|\a|}p_{\a_j}}{|\a|}\asymp \frac{\m([\a])}{n}.$$ In other words, the function $\Psi:\D^*\to[0,\infty)$ given by $$\Psi(\a)=\frac{\prod_{j=1}^{|\a|}p_{\a_j}}{|\a|}$$ is equivalent to $(\m,h)$ for our choice of $h$. One can verify that our function $\Psi$ is weakly decaying and hence $\Psi\in \Upsilon_{\m}.$ Therefore by Theorem \ref{1d thm}, for any $(a_j)\in \D^{\mathbb{N}},$ for almost every $\lambda\in(\prod_{i=1}^{l}p_i^{p_i},\alpha(\mathcal{B}_{\Gamma})),$ Lebesgue almost every $x\in X_{\lambda,D}$ is contained in the set $W_{\Phi_{\lambda,D}}(\sum_{j=1}^{\infty}d_{a_j}\lambda^{j-1},\Psi).$
\end{proof}
\subsubsection{Bernoulli Convolutions}
Given $\lambda\in(0,1)$ and $p\in(0,1)$, let $\mu_{\lambda,p}$ be the distribution of the random sum $$\sum_{j=0}^{\infty}\pm \lambda^{j},$$ where $+$ is chosen with probability $p$, and $-$ is chosen with probability $(1-p)$. When $p=1/2$ we simply denote $\mu_{\lambda,1/2}$ by $\mu_{\lambda}$. We call $\mu_{\lambda,p}$ a Bernoulli convolution. When we want to emphasise the case when $p=1/2$ we call $\mu_{\lambda}$ the unbiased Bernoulli convolution. Importantly, for each $p\in(0,1)$ the Bernoulli convolution $\mu_{\lambda,p}$ is a self-similar measure for the iterated function system $\Phi_{\lambda,\{-1,1\}}=\{\lambda x -1,\lambda x+1\}.$
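For concreteness we note that, conditioning on the first sign in the defining series, $\mu_{\lambda,p}$ satisfies the self-similarity relation
$$\mu_{\lambda,p}=p\cdot\mu_{\lambda,p}\circ\phi_{+}^{-1}+(1-p)\cdot\mu_{\lambda,p}\circ\phi_{-}^{-1},\qquad \phi_{\pm}(x)=\lambda x\pm 1,$$
which is precisely the defining equation of the self-similar measure associated to $\Phi_{\lambda,\{-1,1\}}$ and the $(p,1-p)$ Bernoulli measure.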
The study of Bernoulli convolutions dates back to the 1930s and to the important work of Jessen and Wintner \cite{JesWin}, and Erd\H{o}s \cite{Erdos1, Erdos2}. When $\lambda\in(0,1/2)$, the measure $\mu_{\lambda,p}$ is supported on a Cantor set and determining the dimension of $\mu_{\lambda,p}$ is relatively straightforward. When $\lambda\in(1/2,1)$ the support of $\mu_{\lambda,p}$ is the interval $[\frac{-1}{1-\lambda},\frac{1}{1-\lambda}].$ Analysing a Bernoulli convolution for $\lambda\in(1/2,1)$ is a more difficult task. The important problems in this area are:
\begin{itemize}
\item To classify those $\lambda\in(1/2,1)$ and $p\in(0,1)$ such that \begin{equation}
\label{expected dimension}\dim_{H}\mu_{\lambda,p}=\min\Big\{ \frac{p\log p+(1-p)\log(1-p)}{\log \lambda},1\Big\}.
\end{equation}
\item To classify those $\lambda\in(1/2,1)$ and $p\in(0,1)$ such that $\mu_{\lambda,p}\ll\L$.
\end{itemize} Initial progress was made on the second problem by Erd\H{o}s in \cite{Erdos1}. He proved that whenever $\lambda$ is the reciprocal of a Pisot number then $\mu_{\lambda}\perp \L$. This result was later improved upon in two papers by Alexander and Yorke \cite{AleYor}, and Garsia \cite{Gar2}, who independently proved that $\dim_{H}\mu_{\lambda}<1$ when $\lambda$ is the reciprocal of a Pisot number. Garsia in \cite{Gar} also provided an explicit class of algebraic integers for which $\mu_{\lambda}\ll\L$. The next breakthrough came in a result of Solomyak \cite{Sol} who proved that $\mu_{\lambda}\ll\L$ with a density in $L^2$ for almost every $\lambda\in(1/2,1)$. His proof relied on studying the Fourier transform of $\mu_{\lambda}$. A simpler proof of this result was subsequently obtained by Peres and Solomyak in \cite{PerSol}. This proof relied upon a characterisation of absolute continuity in terms of differentiation of measures (see \cite{Mat}). Improvements and generalisations of this result appeared subsequently in \cite{PS}, \cite{PerSol2}, and \cite{Rams}. Over the last few years dramatic progress has been made on the problems listed above. In particular, Hochman in \cite{Hochman} proved that for a set $E$ of packing dimension $0$, it is the case that if $\lambda\in(1/2,1)\setminus E$ then we have equality in \eqref{expected dimension} for any $p\in(0,1)$. Building upon this result, Shmerkin in \cite{Shm} proved that $\mu_{\lambda}\ll \L$ for every $\lambda\in(1/2,1)$ outside of a set of Hausdorff dimension zero. This result was later generalised to the case of general $p$ by Shmerkin and Solomyak in \cite{ShmSol}. Similarly building upon the result of Hochman, Varju recently proved in \cite{Varju2} that $\dim_{H}\mu_{\lambda}=1$ whenever $\lambda$ is a transcendental number. Varju has also recently provided new explicit examples of $\lambda$ and $p$ such that $\mu_{\lambda,p}\ll \L$ (see \cite{Var}).
Theorem \ref{1d thm} can be applied to the IFS $\{\lambda x -1,\lambda x+1\}.$ In \cite{Sol} Solomyak proved that $\alpha(\mathcal{B}(\{-1,0,1\}))> 0.639$; this was subsequently improved upon by Shmerkin and Solomyak in \cite{ShmSol2}, who proved that $\alpha(\mathcal{B}(\{-1,0,1\}))> 0.668.$ Using this information we can prove the following result.
\begin{thm}
\label{BC cor}
Let $\Psi:\D^*\to[0,\infty)$ be given by $\Psi(\a)=\frac{1}{2^{|\a|}\cdot |\a|}$. Then for Lebesgue almost every $\lambda\in(1/2,0.668),$ we have that for any $z\in [\frac{-1}{1-\lambda},\frac{1}{1-\lambda}],$ Lebesgue almost every $x\in[\frac{-1}{1-\lambda},\frac{1}{1-\lambda}]$ is contained in $W_{\Phi_{\lambda,\{-1,1\}}}(z,\Psi)$.
\end{thm}The proof of Theorem \ref{BC cor} is an adaptation of the proof of Corollary \ref{example cor} and is therefore omitted.
As a by-product of our analysis we can recover the result of Solomyak that for Lebesgue almost every $\lambda\in(1/2,1)$ the unbiased Bernoulli convolution is absolutely continuous. Our approach does not allow us to assert anything about the density. However our approach does have the benefit of being particularly simple and intuitive. Instead of relying on the Fourier transform, differentiation of measures, or the advanced entropy methods of Hochman \cite{Hochman}, the proof given below appeals to the fact that $\mu_{\lambda}$ is of pure type and makes use of a decomposition argument due to Solomyak. For the sake of brevity, the proof below only focuses on the important features of the argument.
\begin{thm}[Solomyak \cite{Sol}]
For Lebesgue almost every $\lambda\in(1/2,1)$ we have $\mu_{\lambda}\ll\L$.
\end{thm}
\begin{proof}
We split our proof into individual steps.\\
\noindent \textbf{Step 1. Proof that $\mu_{\lambda}\ll \L$ for Lebesgue almost every $\lambda\in(1/2,0.668)$.}\\
Fix $(a_j)\in \D^\N$. We know by our proof of Theorem \ref{1d thm} that for any $\epsilon_1>0$ we have
\begin{equation}
\label{tran bound} \L([1/2+\epsilon_1,0.668-\epsilon_1])-\int_{1/2+\epsilon_1}^{0.668-\epsilon_1}\frac{T(\{\phi_{\a}(\sum_{j=1}^{\infty}a_j\lambda^{j-1})\}_{\a\in\D^n},\frac{s}{2^n})}{2^n}d\lambda=\mathcal{O}(s).
\end{equation} Combining \eqref{tran bound} with Lemma \ref{density separation lemma}, we may conclude that
\begin{equation}
\label{nearly finished}\L\Big(\bigcap_{\epsilon>0}\bigcup_{c,s>0}\{\lambda\in [1/2+\epsilon_1,0.668-\epsilon_1]:\overline{d}(n:\lambda\in B(c,s,n))\geq 1-\epsilon\}\Big)=\L([1/2+\epsilon_1,0.668-\epsilon_1]).
\end{equation} Here $$B(c,s,n)=\Big\{\lambda\in [1/2+\epsilon_1,0.668-\epsilon_1]: \frac{T(\{\phi_{\a}(\sum_{j=1}^{\infty}a_j\lambda^{j-1})\}_{\a\in\D^n},\frac{s}{2^n})}{2^n}\geq c\Big\}.$$ In particular, \eqref{nearly finished} implies that for Lebesgue almost every $\lambda\in [1/2+\epsilon_1,0.668-\epsilon_1],$ there exist $c>0$ and $s>0$ such that $$\limsup_{n\to\infty}\frac{T(\{\phi_{\a}(\sum_{j=1}^{\infty}a_j\lambda^{j-1})\}_{\a\in\D^n},\frac{s}{2^n})}{2^n}\geq c.$$ Applying Proposition \ref{absolute continuity}, it follows that for Lebesgue almost every $\lambda\in [1/2+\epsilon_1,0.668-\epsilon_1],$ the measure $\mu_{\lambda}$ is absolutely continuous. Since $\epsilon_1$ is arbitrary we know that for Lebesgue almost every $\lambda\in (1/2,0.668)$ the measure $\mu_{\lambda}$ is absolutely continuous. \\
\noindent \textbf{Step 2. Proof that $\mu_{\lambda}\ll \L$ for Lebesgue almost every $\lambda\in(2^{-2/3},0.713)$.}\\
Let $\eta_{\lambda}$ denote the distribution of the random sum $$\sum_{\stackrel{j=0}{j\neq 2 \textrm{ mod }3}}^{\infty}\pm \lambda^{j},$$ where each sign is chosen with probability $1/2$. One can show that $\mu_{\lambda}=\eta_{\lambda}\ast \nu_{\lambda}$ for some measure $\nu_{\lambda}$ corresponding to the remaining terms (see \cite{PerSol, Sol}). Since the convolution of an absolutely continuous measure with an arbitrary measure is still absolutely continuous, to prove $\mu_{\lambda}\ll \L$ for Lebesgue almost every $\lambda\in(2^{-2/3},0.713)$, it suffices to show that $\eta_{\lambda}$ is absolutely continuous for Lebesgue almost every $\lambda\in(2^{-2/3},0.713)$. Importantly $\eta_{\lambda}$ can be realised as the self-similar measure for the iterated function system
$$\Big\{\rho_{1}(x)=\lambda^3x+1+\lambda,\,\, \rho_{2}(x)=\lambda^3x-1+\lambda,\,\, \rho_{3}(x)=\lambda^3x+1-\lambda,\,\,\rho_{4}(x)=\lambda^3x-1-\lambda\Big\}$$ and the uniform $(1/4,1/4,1/4,1/4)$ Bernoulli measure. Because the translation parameter depends upon $\lambda,$ this family of iterated function systems does not immediately fall into the class considered by Theorem \ref{1d thm}. However this distinction is only superficial, and one can adapt the argument used in the proof of \eqref{tran bound} to prove that for any $\epsilon_1>0$ and $(a_j)\in\{1,2,3,4\}^{\mathbb{N}},$ we have
\begin{equation}
\label{dumb}\L([2^{-2/3}+\epsilon_1,0.713-\epsilon_1])-\int_{2^{-2/3}+\epsilon_1}^{0.713-\epsilon_1}\frac{T(\{\rho_{\a}(\pi(a_j))\}_{\a\in\{1,2,3,4\}^n},\frac{s}{4^n})}{4^n}d\lambda=\mathcal{O}(s).
\end{equation} The parameter $0.713$ comes from \cite{Sol} and is a lower bound for the appropriate analogue of $\alpha(\mathcal{B}(\{-1,0,1\}))$ for the family of iterated function systems $\{\rho_1,\rho_2,\rho_3,\rho_4\}$. Without going into details, it can be shown that appropriate analogues of Lemma \ref{delta lemma} and Lemma \ref{zero intervals} persist for this family of iterated function systems. These statements can then be used to deduce that \eqref{dumb} holds. By the arguments used in step $1$, we can use \eqref{dumb} in conjunction with Lemma \ref{density separation lemma} and Proposition \ref{absolute continuity} to deduce that $\eta_{\lambda}$ is absolutely continuous for Lebesgue almost every $\lambda\in(2^{-2/3},0.713)$.\\
\noindent \textbf{Step 3. Proof that $\mu_{\lambda}\ll \L$ for Lebesgue almost every $\lambda\in(1/2,1)$.}\\
Since $(1/2,1/\sqrt{2})\subset (1/2,0.668)\cup (2^{-2/3},0.713),$ we know by the two previous steps that for Lebesgue almost every $\lambda\in(1/2,1/\sqrt{2}),$ the measure $\mu_{\lambda}$ is absolutely continuous. For any $\lambda\in(2^{-1/k},2^{-1/2k})$ for some $k\geq 2,$ we can express $\mu_{\lambda}$ as $\mu_{\lambda^k}\ast \nu_{\lambda}$ for some measure $\nu_{\lambda}$ (see \cite{PerSol, Sol}). Since for Lebesgue almost every $\lambda\in(1/2,1/\sqrt{2})$ the measure $\mu_{\lambda}$ is absolutely continuous, it follows that for Lebesgue almost every $\lambda\in (2^{-1/k},2^{-1/2k})$ the measure $\mu_{\lambda^k}$ is also absolutely continuous. Since $\mu_{\lambda}=\mu_{\lambda^k}\ast \nu_{\lambda}$ it follows that $\mu_{\lambda}$ is absolutely continuous for Lebesgue almost every $\lambda\in (2^{-1/k},2^{-1/2k})$. Importantly the intervals $(2^{-1/k},2^{-1/2k})$ exhaust $(1/\sqrt{2},1)$. It follows therefore that $\mu_{\lambda}$ is absolutely continuous for Lebesgue almost every $\lambda\in(1/\sqrt{2},1).$ Our previous steps cover the interval $(1/2,1/\sqrt{2}),$ so we may conclude that $\mu_{\lambda}$ is absolutely continuous for Lebesgue almost every $\lambda\in(1/2,1).$
\end{proof}
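We remark, for the reader's convenience, that the convolution decompositions used in steps $2$ and $3$ above both arise from splitting the defining series of $\mu_{\lambda}$ along residue classes of the exponent. For instance, the decomposition used in step $3$ comes from writing
$$\sum_{j=0}^{\infty}\pm\lambda^{j}=\sum_{m=0}^{\infty}\pm(\lambda^{k})^{m}+\sum_{\stackrel{j=0}{k\nmid j}}^{\infty}\pm\lambda^{j},$$
where the two sums on the right are independent; the first has distribution $\mu_{\lambda^{k}},$ and the distribution of the second is the measure denoted by $\nu_{\lambda}$.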
\subsubsection{The $\{0,1,3\}$ problem}
Let $\lambda\in(0,1)$ and $$C_{\lambda}:=\left\{\sum_{j=0}^{\infty}a_j\lambda^j:a_j\in\{0,1,3\}\right\}.$$ The set $C_{\lambda}$ is the attractor of the IFS $\{\lambda x,\lambda x +1,\lambda x +3\}$. When $\lambda\in(0,1/4)$ the IFS satisfies the strong separation condition and one can prove that $\dim_{H}C_{\lambda}=\frac{\log 3}{-\log \lambda}$ (see the computation following the list below). When $\lambda\geq 2/5$ the set $C_{\lambda}$ is the interval $[0,\frac{3}{1-\lambda}]$. The two main problems in the study of $C_{\lambda}$ are:
\begin{itemize}
\item Classify those $\lambda\in(1/4,1/3)$ such that $\dim_{H}C_{\lambda}=\frac{\log 3}{-\log \lambda}.$
\item Classify those $\lambda\in(1/3,2/5)$ such that $C_{\lambda}$ has positive Lebesgue measure.
\end{itemize}Initial progress on these problems was made by Pollicott and Simon in \cite{PolSimon}, Keane, Smorodinsky and Solomyak in \cite{KeSmSo}, and Solomyak in \cite{Sol}. In \cite{PolSimon} it was shown that for Lebesgue almost every $\lambda\in(1/4,1/3)$ we have $\dim_{H}C_{\lambda}=\frac{\log 3}{-\log \lambda}.$ In \cite{Sol} it was shown that for Lebesgue almost every $\lambda\in(1/3,2/5)$ the set $C_{\lambda}$ has positive Lebesgue measure. It follows from the recent work of Hochman \cite{Hochman}, and Shmerkin and Solomyak \cite{ShmSol}, that the set of exceptions for both of these statements has zero Hausdorff dimension.
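For the reader's convenience, we sketch the standard computation behind the dimension formula quoted above. For $\lambda\in(0,1/4)$ the images of $[0,\frac{3}{1-\lambda}]$ under the three maps are pairwise disjoint, so the IFS consists of three similarities of contraction ratio $\lambda$ satisfying the strong separation condition. By the classical Moran--Hutchinson theory, $\dim_{H}C_{\lambda}$ then equals the similarity dimension $s$, namely the solution of
$$3\lambda^{s}=1,\qquad \textrm{that is}\qquad s=\frac{\log 3}{-\log \lambda}.$$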
In \cite{Sol} it was shown that $\alpha(\mathcal{B}(\{0,\pm 1,\pm 2, \pm 3\}))>0.418.$ Using this information we can prove the following result.
\begin{thm}
\label{013thm}
Let $\Psi:\D^*\to[0,\infty)$ be given by $\Psi(\a)=\frac{1}{3^{|\a|}|\a|}$. Then for Lebesgue almost every $\lambda\in(1/3,0.418),$ we have that for any $z\in C_{\lambda},$ Lebesgue almost every $x\in C_{\lambda}$ is contained in $W_{\Phi_{\lambda,\{0,1,3\}}}(z,\Psi).$
\end{thm}Just like the proof of Theorem \ref{BC cor}, the proof of Theorem \ref{013thm} is an adaptation of the proof of Corollary \ref{example cor} and is therefore omitted.
As stated above, in \cite{Sol} it was shown that for Lebesgue almost every $\lambda\in(1/3,2/5)$ the set $C_{\lambda}$ has positive Lebesgue measure. This was achieved by proving $C_{\lambda}$ supported an absolutely continuous self-similar measure. To the best of the author's knowledge, all results establishing that $C_{\lambda}$ has positive Lebesgue measure for some $\lambda\in(1/3,2/5)$ do so by proving that $C_{\lambda}$ supports an absolutely continuous self-similar measure. It is interesting therefore to note that our methods yield a simple proof of the fact stated above without any explicit mention of a measure. In the proof below, we instead construct a subset of $C_{\lambda}$ that has positive Lebesgue measure for Lebesgue almost every $\lambda\in(1/3,2/5)$.
\begin{thm}[Solomyak \cite{Sol}]
For Lebesgue almost every $\lambda\in(1/3,2/5)$ the set $C_{\lambda}$ has positive Lebesgue measure.
\end{thm}
\begin{proof}
Taking $(a_j)$ to be the sequence consisting of all zeros in our proof of Theorem \ref{1d thm}, so that $\pi(a_j)=0$ for all $\lambda$, it can be shown that for any $\epsilon_1>0$ we have
\begin{equation}
\label{tran bound2}
\L([1/3+\epsilon_1,2/5-\epsilon_1])-\int_{1/3+\epsilon_1}^{2/5-\epsilon_1}\frac{T(\{\sum_{j=0}^{n-1}a_j\lambda^j\}_{\a\in\{0,1,3\}^n},\frac{s}{3^n})}{3^n}d\lambda=\mathcal{O}(s).
\end{equation}Therefore by Lemma \ref{density separation lemma}, we have
$$\L\Big(\bigcap_{\epsilon>0}\bigcup_{c,s>0}\{\lambda\in [1/3+\epsilon_1,2/5-\epsilon_1]:\overline{d}(n:\lambda\in B(c,s,n))\geq 1-\epsilon\}\Big)=\L([1/3+\epsilon_1,2/5-\epsilon_1]).$$ This implies that for Lebesgue almost every $\lambda\in [1/3+\epsilon_1,2/5-\epsilon_1],$ there exist $c>0$ and $s>0$ such that for infinitely many $n\in\mathbb{N}$ we have
\begin{equation}
\label{n equation}
\frac{T(\{\sum_{j=0}^{n-1}a_j\lambda^j\}_{\a\in\{0,1,3\}^n},\frac{s}{3^n})}{3^n}\geq c.
\end{equation}
Let $\lambda'\in [1/3+\epsilon_1,2/5-\epsilon_1]$ be a parameter satisfying \eqref{n equation} for infinitely many $n$. For any $n\in \N$ satisfying \eqref{n equation} we must also have
\begin{align}
\label{cheap measure}cs&\leq \L\left(\bigcup_{u\in S(\{\sum_{j=0}^{n-1}a_j\lambda'^j\}_{\a\in\{0,1,3\}^n},\frac{s}{3^n})}\Big(u-\frac{s}{2\cdot 3^n},u+\frac{s}{2\cdot 3^n}\Big)\right)\nonumber\\
&\leq \L\left(\bigcup_{\a\in \{0,1,3\}^n}\Big(\sum_{j=0}^{n-1}a_j\lambda'^j-\frac{s}{2\cdot 3^n},\sum_{j=0}^{n-1}a_j\lambda'^j+\frac{s}{2\cdot 3^n}\Big)\right).
\end{align}
It then follows from \eqref{n equation} and \eqref{cheap measure} that
\begin{align*}
&\L\left(x: |x-\phi_{\a}(0)|\leq \frac{s}{2\cdot 3^{|\a|}}\textrm{ for i.m. }\a\in \{0,1,3\}^*\right)\\
=&\L\left(\bigcap_{N=1}^{\infty}\bigcup_{n=N}^{\infty}\bigcup_{\a\in \{0,1,3\}^n}\Big(\sum_{j=0}^{n-1}a_j\lambda'^j-\frac{s}{2\cdot 3^n},\sum_{j=0}^{n-1}a_j\lambda'^j+\frac{s}{2\cdot 3^n}\Big)\right)\\
=&\lim_{N\to\infty} \L\left(\bigcup_{n=N}^{\infty}\bigcup_{\a\in \{0,1,3\}^n}\Big(\sum_{j=0}^{n-1}a_j\lambda'^j-\frac{s}{2\cdot 3^n},\sum_{j=0}^{n-1}a_j\lambda'^j+\frac{s}{2\cdot 3^n}\Big)\right)\\
\geq &cs.
\end{align*}
In the penultimate equality we used that Lebesgue measure is continuous from above. In the final inequality we used that there are infinitely many $n\in\mathbb{N}$ such that \eqref{n equation} holds, and therefore infinitely many $n\in\mathbb{N}$ such that \eqref{cheap measure} holds.
Since $$\left\{x: |x-\phi_{\a}(0)|\leq \frac{s}{2\cdot 3^{|\a|}}\textrm{ for i.m. }\a\in\{0,1,3\}^*\right\}\subset C_{\lambda'},$$ it follows that $C_{\lambda'}$ has positive Lebesgue measure. Since $\lambda'$ was arbitrary, it follows that for Lebesgue almost every $\lambda\in[1/3+\epsilon_1,2/5-\epsilon_1],$ the set $C_{\lambda}$ has positive Lebesgue measure. Since $\epsilon_1$ was arbitrary we can upgrade this statement and conclude that for Lebesgue almost every $\lambda\in(1/3,2/5),$ the set $C_{\lambda}$ has positive Lebesgue measure.
\end{proof}
\subsection{Proof of Theorem \ref{translation thm}}
In this section we prove Theorem \ref{translation thm}. Recall that in the setting of Theorem \ref{translation thm} we obtain a family of IFSs by first of all fixing a set of $d\times d$ non-singular matrices $\{A_i\}_{i=1}^l$ each satisfying $\|A_i\|<1$. For any $\t=(t_1,\ldots,t_l)\in\mathbb{R}^{ld}$ we then define $\Phi_{\t}$ to be the IFS consisting of the contractions $$\phi_{i}(x)=A_ix +t_i.$$ The parameter $\t$ is allowed to vary. We denote the corresponding attractor by $X_\t$ and the projection map from $\D^{\mathbb{N}}$ to $X_\t$ by $\pi_\t$.
To prove Theorem \ref{translation thm} we will need a technical result due to Jordan, Pollicott, and Simon from \cite{JoPoSi}. It is rephrased for our purposes.
\begin{lemma}\cite[Lemma 7]{JoPoSi}
\label{translation transversality}
Assume that $\|A_i\|<1/2$ for all $1\leq i\leq l$ and let $U$ be an arbitrary open ball in $\mathbb{R}^{ld}$. Then for any two distinct sequences $\a,\b\in\D^{\mathbb{N}}$ we have
$$\L\left(\{\t\in U:|\pi_{\t}(\a)-\pi_{\t}(\b)|\leq r\}\right)=\mathcal{O}\left(\frac{r^d}{\prod_{i=1}^d \alpha_{i}(A_{a_1\cdots a_{|\a\wedge \b|-1}})}\right).$$
\end{lemma}
With Lemma \ref{translation transversality} we are now in a position to prove Theorem \ref{translation thm}.
\begin{proof}[Proof of Theorem \ref{translation thm}]
Let us start by fixing a set of $d\times d$ non-singular matrices $\{A_i\}_{i=1}^l$ such that $\|A_i\|<1/2$ for all $1\leq i\leq l$. We prove each statement appearing in this theorem individually.\\
\noindent \textbf{Proof of statement $1$}\\
Instead of proving our result for Lebesgue almost every $\t\in\mathbb{R}^{ld},$ it is sufficient to prove our result for Lebesgue almost every $\t\in U,$ where $U$ is an arbitrary ball in $\mathbb{R}^{ld}$. In what follows we fix such a $U$.
By the Shannon-McMillan-Breiman theorem, and the definition of the Lyapunov exponent, we know that for $\m$-almost every $\a\in\D^{\mathbb{N}}$ we have $$\lim_{k\to\infty}\frac{-\log \m([a_1\cdots a_k])}{k}=\h(\m)$$ and for each $1\leq i\leq d$
$$\lim_{k\to\infty}\frac{\log \alpha_{i}(A_{a_1\cdots a_k})}{k}=\lambda_{i}(\m).$$ Applying Egorov's theorem, it follows that for any $\epsilon>0,$ there exists $C>0$ such that the set of $\a\in \D^{\mathbb{N}}$ satisfying
\begin{equation}
\label{exponential SHM}
\frac{e^{k(-\h(\m)-\epsilon)}}{C}\leq \m([a_1\cdots a_k])\leq Ce^{k(-\h(\m)+\epsilon)}
\end{equation}and
\begin{equation}
\label{exponential Lyapunov}
\frac{e^{k(\lambda_i(\m)-\epsilon)}}{C}\leq \alpha_{i}(A_{a_1\cdots a_k})\leq Ce^{k(\lambda_i(\m)+\epsilon)}
\end{equation}for each $1\leq i\leq d$ and all $k\in\mathbb{N},$ has $\m$-measure strictly larger than $1/2$. In what follows we will assume that $\epsilon$ has been picked sufficiently small so that we have
\begin{equation}
\label{epsilon small}
\h(\m)-\epsilon>-\lambda_1(\m)-\cdots -\lambda_d(\m)+d\epsilon.
\end{equation}
Such an $\epsilon$ exists because of our underlying assumption $\h(\m)>-\lambda_1(\m)-\cdots -\lambda_d(\m)$.
For each $n\in \N$ let $$\tilde{L}_{\m,n}=\{\a\in L_{\m,n}: \eqref{exponential SHM} \textrm{ and } \eqref{exponential Lyapunov} \textrm{ hold for }1\leq k\leq |\a|\}$$ and $$\tilde{R}_{\m,n}:=\# \tilde{L}_{\m,n}.$$ It follows from the above that $$\m\Big(\bigcup_{\a\in \tilde{L}_{\m,n}}[\a]\Big)>1/2.$$ By the discussion given at the beginning of this section, we know that $\tilde{R}_{\m,n}$ satisfies the exponential growth condition of Proposition \ref{general prop}. It also follows from our construction that
\begin{equation}
\label{hiphop}
\tilde{R}_{\m,n}\asymp R_{\m,n}.
\end{equation} Let us now fix $(a_j)\in \D^{\mathbb{N}}$ and let
$$ R(\t,s,n):=\left\{(\a,\a')\in \tilde{L}_{\m,n}\times \tilde{L}_{\m,n} :|\phi_{\a}(\pi_{\t}(a_j))-\phi_{\a'}(\pi_{\t}(a_j))|\leq \frac{s}{\tilde{R}_{\m,n}^{1/d}} \textrm{ and }\a\neq \a'\right\}.$$
Our goal now is to prove the bound:
\begin{equation}
\label{WTSAA}\int_{U}\frac{\#R(\t,s,n)}{\tilde{R}_{\m,n}}d\L=\mathcal{O}(s^d).
\end{equation} Repeating the arguments given in the proof of statement $1$ of Theorem \ref{1d thm}, it can be shown that
$$\int_{U}\frac{\#R(\t,s,n)}{\tilde{R}_{\m,n}}d\L=\mathcal{O}\left(\tilde{R}_{\m,n}\sum_{\stackrel{\a,\a'\in \tilde{L}_{\m,n}}{\a\neq \a'}}\m([\a])\m([\a'])\L(\t\in U:|\pi_\t(\a (a_j))-\pi_\t(\a' (a_j))|\leq \frac{s}{\tilde{R}_{\m,n}^{1/d}})\right).$$ Applying the bound given by Lemma \ref{translation transversality}, we obtain
\begin{align*}
\int_{U}\frac{\#R(\t,s,n)}{\tilde{R}_{\m,n}}d\L&= \mathcal{O}\left(\tilde{R}_{\m,n}\sum_{\stackrel{\a,\a'\in \tilde{L}_{\m,n}}{\a\neq \a'}}\m([\a])\m([\a'])\frac{s^d}{\tilde{R}_{\m,n} \prod_{i=1}^d \alpha_{i}(A_{a_1\cdots a_{|\a\wedge \a'|-1}})} \right)\\
&= \mathcal{O}\left(\sum_{\stackrel{\a,\a'\in \tilde{L}_{\m,n}}{\a\neq \a'}}\m([\a])\m([\a'])\frac{s^d}{\prod_{i=1}^d \alpha_{i}(A_{a_1\cdots a_{|\a\wedge \a'|-1}})} \right)\\
&= \mathcal{O}\left(s^d\sum_{\a\in \tilde{L}_{\m,n}}\m([\a])\sum_{k=1}^{|\a|-1}\sum_{\a':|\a\wedge \a'|=k}\frac{\m([\a'])}{\prod_{i=1}^d \alpha_{i}(A_{a_1\cdots a_{k-1}})} \right)\\
&= \mathcal{O}\left(s^d\sum_{\a\in \tilde{L}_{\m,n}}\m([\a])\sum_{k=1}^{|\a|-1}\frac{\m([a_1\cdots a_{k-1}])}{\prod_{i=1}^d \alpha_{i}(A_{a_1\cdots a_{k-1}})} \right).
\end{align*} We now substitute in the bounds provided by \eqref{exponential SHM} and \eqref{exponential Lyapunov} to obtain
\begin{align*}
\int_{U}\frac{\#R(\t,s,n)}{\tilde{R}_{\m,n}}d\L&= \mathcal{O}\left(s^d\sum_{\a\in \tilde{L}_{\m,n}}\m([\a])\sum_{k=1}^{|\a|-1}\frac{e^{k(-\h(\m)+\epsilon)}}{\prod_{i=1}^d e^{k(\lambda_i(\m)-\epsilon)}} \right) \\
&=\mathcal{O}\left(s^d\sum_{\a\in \tilde{L}_{\m,n}}\m([\a])\sum_{k=1}^{|\a|-1}\frac{e^{k(-\h(\m)+\epsilon)}}{ e^{k(\sum_{i=1}^d\lambda_i(\m)-d\epsilon)}} \right)\\
&=\mathcal{O}\left(s^d\sum_{\a\in \tilde{L}_{\m,n}}\m([\a])\sum_{k=1}^{\infty}\frac{e^{k(-\h(\m)+\epsilon)}}{ e^{k(\sum_{i=1}^d\lambda_i(\m)-d\epsilon)}}\right)\\
&=\mathcal{O}\left(s^d\sum_{\a\in \tilde{L}_{\m,n}}\m([\a])\right)\\
&=\mathcal{O}(s^d).
\end{align*} In our penultimate equality we used \eqref{epsilon small} to assert that $$\sum_{k=1}^{\infty}\frac{e^{k(-\h(\m)+\epsilon)}}{ e^{k(\sum_{i=1}^d\lambda_i(\m)-d\epsilon)}}<\infty.$$
We have shown that \eqref{WTSAA} holds. It follows now from \eqref{WTSAA} and Lemma \ref{integral bound} that
\begin{equation}
\label{integral bound2}
\L(U)-\int_{U}\frac{T\big(\{\phi_{\a}(\pi_\t(a_j))\}_{\a\in \tilde{L}_{\m,n}},\frac{s}{\tilde{R}_{\m,n}^{1/d}}\big)}{\tilde{R}_{\m,n}}d\L=\mathcal{O}(s^d).
\end{equation} Therefore, by Proposition \ref{general prop}, we have that for Lebesgue almost every $\t\in U,$ the set
$$\left\{x\in X:x\in\bigcup_{\a\in \tilde{L}_{\m,n}}B\left(\phi_{\a}(\pi_\t(a_j)),\Big(\frac{h(n)}{\tilde{R}_{\m,n}}\Big)^{1/d}\right)\textrm{ for i.m. } n\in \N \right\}$$
has positive Lebesgue measure for any $h\in H$. By \eqref{hiphop} we know that $\tilde{R}_{\m,n}\asymp R_{\m,n},$ which by the discussion given at the beginning of this section implies $\tilde{R}_{\m,n}^{-1}\asymp \m([\a])$ for each $\a\in L_{\m,n}$. Combining this fact with Lemma \ref{arbitrarily small} and the above, we can conclude that for Lebesgue almost every $\t\in U,$ for any $h\in H$ the set $U_{\Phi_\t}(\pi_{\t}(a_j),\m,h)$ has positive Lebesgue measure. \\
\noindent \textbf{Proof of statement 2}\\
Under the assumptions of statement $2,$ it can be shown that for any $\a\in \D^n$ the difference $\phi_{\a}(z)-\phi_{\a}(z')$ is independent of $\a$ for any $z,z'\in X$. Therefore $\{\phi_{\a}(\pi_\t(z))\}_{\a\in \D^n}$ is a translation of $\{\phi_{\a}(\pi_\t(z'))\}_{\a\in \D^n}$ for any $z,z'\in X$. The proof of statement $2$ now follows by the same reasoning as that given in the proof of statement $2$ from Theorem \ref{1d thm}.\\
\noindent \textbf{Proof of statement 3}\\
As in the proof of statement $1$, it suffices to show that statement $3$ holds for Lebesgue almost every $\t\in U$ where $U$ is an arbitrary ball. We know by statement $1$ that for any $(a_j)\in \D^{\N},$ for Lebesgue almost every $\t\in U$, the set $U_{\Phi_\t}(\pi_{\t}(a_j),\m,h)$ has positive Lebesgue measure for any $h\in H$. Applying Lemma \ref{arbitrarily small} it follows that for any $(a_j)\in \D^{\N},$ for Lebesgue almost every $\t\in U$, the set $W_{\Phi_\t}(\pi_{\t}(a_j),\Psi)$ has positive Lebesgue measure for any $\Psi$ that is equivalent to $(\m,h)$ for some $h\in H$. If each $A_i$ is a similarity then we apply the first part of Proposition \ref{full measure} to assert that for any $(a_j)\in \D^{\N},$ for Lebesgue almost every $\t\in U$, for any $\Psi\in \Upsilon_{\m}$ Lebesgue almost every $x\in X_{\t}$ is contained in $W_{\Phi_\t}(\pi_{\t}(a_j),\Psi)$.
To prove statement $3$ in the case when $d=2$ and all the matrices are equal, and in the case when all the matrices are simultaneously diagonalisable, we will apply the second part of Proposition \ref{full measure}. We need to show that under either of these conditions, for Lebesgue almost every $\t\in U$ the measure $\mu,$ the pushforward of our $\m,$ is equivalent to $\L|_{X_\t}$. Now let us assume our set of matrices satisfies either of these conditions. By \eqref{integral bound2} and Lemma \ref{density separation lemma} we know that $$\L\left(\bigcap_{\epsilon>0}\bigcup_{c,s>0}\{\t\in U: \overline{d}(n:\t\in B(c,s,n))\geq 1-\epsilon\}\right)=\L(U).$$ In particular, this implies that for Lebesgue almost every $\t\in U,$ there exist some $c,s>0$ such that $$\frac{T(\{\phi_{\a}(\pi_\t(a_j))\}_{\a\in \tilde{L}_{\m,n}},\frac{s}{\tilde{R}_{\m,n}^{1/d}})}{\tilde{R}_{\m,n}}>c$$ for infinitely many $n\in\mathbb{N}$. By Proposition \ref{absolute continuity} it follows that $\mu\ll \L$ for Lebesgue almost every $\t\in U$. By our hypothesis and Lemma \ref{equivalent measures} we can improve this statement to $\mu\sim \L|_{X_\t}$ for Lebesgue almost every $\t\in U$. Now applying Proposition \ref{full measure} we can conclude that for any $(a_j)\in \D^{\N},$ for Lebesgue almost every $\t\in U$, for any $\Psi\in \Upsilon_{\m},$ Lebesgue almost every $x\in X_{\t}$ is contained in $W_{\Phi_\t}(\pi_{\t}(a_j),\Psi).$ \\
\noindent \textbf{Proof of statement 4}\\
The proof of statement $4$ is an adaptation of statement $3,$ where the role of statement $1$ is played by statement $2$.
\end{proof}
The proof of Corollary \ref{translation cor} is analogous to the proof of Corollary \ref{example cor} and is therefore omitted.
\subsection{Proof of Theorem \ref{random thm}}
The proof of Theorem \ref{random thm} mirrors the proof of Theorem \ref{translation thm}. As such we will only state the appropriate analogue of Lemma \ref{translation transversality} and leave the details to the interested reader. The following lemma was proved in \cite{JoPoSi}.
\begin{lemma}\cite[Lemma 6]{JoPoSi}
Assume that $\|A_i\|<1$ for all $1\leq i\leq l$. For any two distinct sequences $\a,\b\in \D^{\mathbb{N}}$ we have $$\mathbf{P}(\y\in\mathbf{D}^{\mathbb{N}}:|\pi_\y(\a)-\pi_\y(\b)|\leq r)=\mathcal{O}\left(\frac{r^d}{\prod_{i=1}^d \alpha_{i}(A_{a_1\cdots a_{|\a\wedge \b|-1}})}\right).$$
\end{lemma}
\section{A specific family of IFSs}
\label{Specific family}
In this section we focus on the following family of IFSs:
$$\Phi_t:=\Big\{\phi_1(x)=\frac{x}{2},\,\phi_2(x)=\frac{x+1}{2},\,\phi_3(x)=\frac{x+t}{2},\,\phi_4(x)=\frac{x+1+t}{2}\Big\},$$ where $t\in [0,1]$. We also fix $\m$ throughout to be the uniform $(1/4,1/4,1/4,1/4)$ Bernoulli measure. To each $t\in[0,1]\setminus\mathbb{Q}$ we associate the sequence of partial quotients $(\zeta_m)\in \N^{\N}$ of its continued fraction expansion and the corresponding sequence of convergents $(p_m/q_m)$. In this section we will make use of the following well known properties of continued fractions.
\begin{itemize}
\item For any $m\in\mathbb{N}$ we have
\begin{equation}
\label{property1}\frac{1}{q_m(q_{m+1}+q_m)}<\left|t-\frac{p_m}{q_m}\right|<\frac{1}{q_mq_{m+1}}.
\end{equation}
\item If we set $p_{-1}=1, q_{-1}=0, p_0=0, q_0=1$, then for any $m\geq 1$ we have
\begin{align}
\label{property2}
p_m&=\zeta_m p_{m-1}+p_{m-2}\\
q_m&=\zeta_m q_{m-1}+q_{m-2}. \nonumber
\end{align}
\item If $t$ is such that $(\zeta_m)$ is bounded, i.e. $t$ is badly approximable, then there exists $c_t>0$ such that for any $(p,q)\in\mathbb{Z}\times \mathbb{N},$ we have
\begin{equation}
\label{property3}
\Big|t-\frac{p}{q}\Big|\geq \frac{c_t}{q^2}.
\end{equation}
\item If $q< q_{m+1}$ then \begin{equation}
\label{property4}
|qt-p|\geq |q_{m}t-p_m|
\end{equation}for any $p\in\mathbb{Z}$.
\end{itemize} For a proof of these properties we refer the reader to \cite{Bug} and \cite{Cas}.
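As a quick illustration of these properties (an example added for orientation, not needed in any proof), take $t=\frac{\sqrt{5}-1}{2}$, whose continued fraction expansion satisfies $\zeta_m=1$ for all $m$. By \eqref{property2} the denominators $q_m$ are then the Fibonacci numbers $1,1,2,3,5,8,\ldots,$ and \eqref{property1} gives $$\frac{1}{q_m(q_{m+1}+q_m)}<\left|\frac{\sqrt{5}-1}{2}-\frac{p_m}{q_m}\right|<\frac{1}{q_mq_{m+1}},$$ so $|t-p_m/q_m|\asymp q_m^{-2}$ and $t$ is badly approximable, in accordance with \eqref{property3}.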
Let us now remark that for any $\a\in \D^n,$ there exist two sequences $(b_j), (c_j)\in\{0,1\}^{n}$ satisfying
\begin{equation}
\label{separate terms}
\phi_{\a}(x)=\frac{x}{2^n}+\sum_{j=1}^n\frac{b_j}{2^j}+t\sum_{j=1}^n\frac{c_j}{2^j}.
\end{equation}Importantly for each $\a\in \D^n$ the sequences $(b_j)$ and $(c_j)$ satisfying \eqref{separate terms} are unique. What is more, for any $(b_j),(c_j)\in\{0,1\}^{n},$ there exists a unique $\a\in\D^n$ such that $(b_j)$ and $(c_j)$ satisfy \eqref{separate terms} for this choice of $\a$.
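To make \eqref{separate terms} concrete, consider the case $n=2$ and $\a=(3,2)$, assuming the usual convention $\phi_{\a}:=\phi_{a_1}\circ\cdots\circ\phi_{a_n}$: $$\phi_{32}(x)=\phi_{3}(\phi_{2}(x))=\frac{1}{2}\left(\frac{x+1}{2}+t\right)=\frac{x}{4}+\frac{1}{4}+\frac{t}{2},$$ so that $(b_1,b_2)=(0,1)$ and $(c_1,c_2)=(1,0)$ in \eqref{separate terms}.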
We separate our proof of Theorem \ref{precise result} into individual propositions. Statement $1$ of Theorem \ref{precise result} is contained in the following result.
\begin{prop}
\label{overlap prop}
$\Phi_t$ contains an exact overlap if and only if $t\in \mathbb{Q}$. Moreover if $t\in \mathbb{Q},$ then for any $z\in[0,1+t],$ the set $U_{\Phi_t}(z,\m,1)$ has Hausdorff dimension strictly less than $1$.
\end{prop}
\begin{proof}
If $\Phi_t$ contains an exact overlap then there exist distinct $\a,\a'\in \D^{*}$ such that $\phi_{\a}=\phi_{\a'}$. By considering $\a\a'$ and $\a'\a$ if necessary, we can assume that $|\a|=|\a'|.$ Using \eqref{separate terms} we see that the following equivalences hold:
\begin{align*}
&\textrm{There exist distinct }\a,\a'\in \D^n \textrm{ such that }\phi_{\a}=\phi_{\a'}.\\
\iff & \textrm{There exist }(b_j),(c_j),(b_j'),(c_j')\in \{0,1\}^n \textrm{ such that } \sum_{j=1}^n\frac{b_j}{2^j}+t\sum_{j=1}^n\frac{c_j}{2^j}=\sum_{j=1}^n\frac{b_j'}{2^j}+t\sum_{j=1}^n\frac{c_j'}{2^j}\\
& \textrm{ and either } (b_j)\neq (b_j') \textrm{ or } (c_j)\neq (c_j').\\
\iff &\textrm{There exist }(b_j),(c_j),(b_j'),(c_j')\in \{0,1\}^n \textrm{ such that }\sum_{j=1}^n\frac{b_j-b_j'}{2^j}=t\sum_{j=1}^n\frac{c_j'-c_j}{2^j}\\
& \textrm{ and either } (b_j)\neq (b_j') \textrm{ or } (c_j)\neq (c_j').\\
\iff &\textrm{There exist }(b_j),(c_j),(b_j'),(c_j')\in \{0,1\}^n \textrm{ such that }\sum_{j=1}^n2^{n-j}(b_j-b_j')=t\sum_{j=1}^n2^{n-j}(c_j'-c_j)\\
& \textrm{ and either } (b_j)\neq (b_j') \textrm{ or } (c_j)\neq (c_j').\\
\iff &\textrm{There exist }1\leq p,q\leq 2^{n}-1 \textrm{ such that } p=qt.
\end{align*}It follows from these equivalences that there is an exact overlap if and only if $t\in\mathbb{Q}$.
We now prove the Hausdorff dimension part of our proposition. By the first part we know that $t\in\mathbb{Q}$ if and only if $\Phi_t$ contains an exact overlap. It follows from the presence of an exact overlap that for each $t\in \mathbb{Q},$ there exists $j\in \mathbb{N}$ such that $\#\{\phi_{\a}(z):\a\in \D^{j}\}\leq 4^j-1$ for any $z\in[0,1+t]$. This in turn implies that for any $k\in \mathbb{N}$ we have $\#\{\phi_{\a}(z):\a\in \D^{jk}\}\leq (4^j-1)^k$ for any $z\in[0,1+t]$. It follows now from this latter inequality that for an appropriate choice of $c>0,$ for any $z\in[0,1+t]$ we have
\begin{equation}\label{slower growth}
\#\{\phi_{\a}(z):\a\in \D^n\}=\mathcal{O}((4-c)^n).
\end{equation} For any $z\in[0,1+t]$ and $N\in \N$, the set of intervals $$\{[\phi_{\a}(z)-4^{-n},\phi_{\a}(z)+4^{-n}]\}_{\stackrel{n\geq N}{\phi_{\a}(z):\a\in\D^n}}$$forms a $2\cdot 4^{-N}$ cover of $U_{\Phi_t}(z,\m,1)$. Now let $s\in(0,1)$ be sufficiently large that $$(4-c)<4^s.$$It follows now that we have the following bound on the $s$-dimensional Hausdorff measure of $U_{\Phi_t}(z,\m,1)$
\begin{align*}
\mathcal{H}^s(U_{\Phi_t}(z,\m,1))&\leq \lim_{N\to\infty}\sum_{n=N}^{\infty}\sum_{\phi_{\a}(z):\a\in\D^n}Diam([\phi_{\a}(z)-4^{-n},\phi_{\a}(z)+4^{-n}])^s\\
&\stackrel{\eqref{slower growth}}{=} \lim_{N\to\infty}\mathcal{O}\left(\sum_{n=N}^{\infty} (4-c)^n4^{-ns}\right)\\
&=0.
\end{align*} In the last line we used that $(4-c)<4^s$ to guarantee $\sum_{n=1}^{\infty} (4-c)^n4^{-ns}<\infty$. Therefore $\dim_{H}(U_{\Phi_t}(z,\m,1))\leq s$ for any $z\in[0,1+t]$.
\end{proof}
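For a concrete instance of the second part of Proposition \ref{overlap prop} (an illustration on our part), take $t=1$. Then $\phi_{3}=\phi_{2}$, so $\#\{\phi_{\a}(z):\a\in\D^n\}\leq 3^n$ for any $z\in[0,2]$, and the covering argument above yields $\mathcal{H}^s(U_{\Phi_1}(z,\m,1))=0$ for every $s>\log 3/\log 4$. Hence $\dim_{H}(U_{\Phi_1}(z,\m,1))\leq \frac{\log 3}{\log 4}\approx 0.79$.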
Adapting the proof of the first part of Proposition \ref{overlap prop}, we can show that the following simple lemma holds.
\begin{lemma}
\label{obvious lemma}
Let $t\in [0,1]$, $z\in [0,1+t],$ and $s>0$. For $n$ sufficiently large, there exist distinct $\a,\a'\in \D^n$ such that $$|\phi_{\a}(z)-\phi_{\a'}(z)|\leq \frac{s}{4^n}$$ if and only if there exist $1\leq p,q\leq 2^{n}-1$ such that $$|qt-p|\leq \frac{s}{2^n}.$$
\end{lemma}
Lemma \ref{obvious lemma} will be used in the proofs of all the full measure statements in Theorem \ref{precise result}. It immediately yields the following proposition which corresponds to statement $3$ from Theorem \ref{precise result}.
\begin{prop}
If $t$ is badly approximable, then for any $z\in[0,1+t]$ and $h:\mathbb{N}\to[0,\infty)$ satisfying $\sum_{n=1}^{\infty}h(n)=\infty,$ we have that Lebesgue almost every $x\in[0,1+t]$ is contained in $U_{\Phi_t}(z,\m,h)$.
\end{prop}
\begin{proof}
Since $t$ is badly approximable, we know by \eqref{property3} that there exists $c_t>0$ such that \begin{equation}
\label{badly approximable}
|qt-p|\geq \frac{c_t}{q}
\end{equation} for all $(p,q)\in\mathbb{Z}\times\mathbb{N}$. Equation \eqref{badly approximable} implies that for any $1\leq p,q\leq 2^{n}-1$ we have $$|qt-p|>\frac{c_t}{2^n}.$$ Applying Lemma \ref{obvious lemma}, we see that for any $z\in [0,1+t],$ for all $n$ sufficiently large, if $\a,\a'\in \D^n$ are distinct then $$|\phi_{\a}(z)-\phi_{\a'}(z)|>\frac{c_t}{4^n}.$$ Therefore, for any $z\in [0,1+t]$ we have $$S\left(\{\phi_{\a}(z)\}_{\a\in \D^n},\frac{c_t}{4^n}\right)=\{\phi_{\a}(z)\}_{\a\in \D^n}$$ for all $n$ sufficiently large. Our result now follows by an application of Proposition \ref{separated full measure}.
\end{proof}
For our other full measure statements a more delicate analysis is required. We need to identify integers $n$ for which the set of images $\{\phi_{\a}(z)\}_{\a\in \D^n}$ are well separated. This we do in the following two lemmas.
\begin{lemma}
\label{diophantine lemma}
Let $s>0$. For $n$ sufficiently large, if $n$ satisfies $$2sq_{m}\leq 2^{n}-1<q_{m}$$ for some $m$, then for any $z\in [0,1+t]$ we have $$|\phi_{\a}(z)-\phi_{\a'}(z)|>\frac{s}{4^n},$$ for distinct $\a,\a'\in \D^n$.
\end{lemma}
\begin{proof}
Fix $s>0$. If $2^{n}-1<q_{m}$, then by \eqref{property1} and \eqref{property4}, for all $1\leq p,q\leq 2^{n}-1$ we have $$|qt-p|\geq |q_{m-1}t-p_{m-1}|\geq \frac{1}{2q_{m}}.$$ If $2sq_{m}\leq 2^{n}-1$ as well, then the above implies that for all $1\leq p,q\leq 2^{n}-1$ we have $$|qt-p|\geq \frac{s}{2^{n}-1}> \frac{s}{2^n}.$$ Applying Lemma \ref{obvious lemma} completes our proof.
\end{proof}
Lemma \ref{diophantine lemma} demonstrates that if $2^{n}-1$ is strictly less than but close to the denominator of some convergent of $t$, then at the $n$-th level we have good separation properties. The following lemma demonstrates a similar phenomenon but instead relies upon the digits appearing in the continued fraction expansion. To properly state this lemma we need to define the following sequence. Given $t$ with corresponding sequence of convergents $(p_m/q_m)$, we define the sequence of integers $(m_n)$ via the inequalities:
$$q_{m_n}\leq 2^{n}-1<q_{m_n+1}.$$
\begin{lemma}
\label{continued fraction lemma}
Let $s>0$. For $n$ sufficiently large, if $n$ is such that $\zeta_{m_n+1}\leq (3s)^{-1},$ then for any $z\in[0,1+t]$ we have $$|\phi_{\a}(z)-\phi_{\a'}(z)|>\frac{s}{4^n}$$ for distinct $\a,\a'\in \D^n$.
\end{lemma}
\begin{proof}
By \eqref{property1}, \eqref{property2}, and \eqref{property4}, we know that for any $1\leq p,q\leq 2^{n}-1$ we have $$|qt-p|\geq |q_{m_n}t-p_{m_n}|\geq \frac{1}{(q_{m_n+1}+q_{m_n})}= \frac{1}{(\zeta_{m_n+1}+1)q_{m_n}+q_{m_n-1}}> \frac{1}{3\zeta_{m_n+1}q_{m_n}}.$$ Now using our assumption $\zeta_{m_n+1}\leq (3s)^{-1},$ we may conclude that for any $1\leq p,q\leq 2^{n}-1$ we have $$|qt-p|\geq \frac{s}{q_{m_n}}>\frac{s}{2^n}.$$
Applying Lemma \ref{obvious lemma}, we may conclude our proof.
\end{proof}
With Lemma \ref{diophantine lemma} and Lemma \ref{continued fraction lemma} in mind we introduce the following definition. We say that $n$ is a good $s$-level if either $$2sq_{m}\leq 2^{n}-1<q_{m}$$ for some $m$, or $$\zeta_{m_n +1}\leq (3s)^{-1}.$$ It follows from Lemma \ref{diophantine lemma} and Lemma \ref{continued fraction lemma} that if $n$ is a good $s$-level then $$S\left(\{\phi_{\a}(z)\}_{\a\in \D^n},\frac{s}{4^n}\right)=\{\phi_{\a}(z)\}_{\a\in \D^n}$$ for any $z\in [0,1+t]$.
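For example, if $t=\frac{\sqrt{5}-1}{2}$ then $\zeta_m=1$ for every $m$, so for any $s\leq 1/3$ the condition $\zeta_{m_n+1}\leq (3s)^{-1}$ holds trivially and every $n$ is a good $s$-level. This is consistent with the fact that badly approximable parameters enjoy the strongest separation properties.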
The following proposition implies statement $2$ from Theorem \ref{precise result}.
\begin{prop}
\label{Fred}
If $t\notin \mathbb{Q},$ then there exists $h:\mathbb{N}\to[0,\infty)$ depending upon the continued fraction expansion of $t,$ such that $\lim_{n\to\infty}h(n)= 0,$ and for any $z\in[0,1+t]$ Lebesgue almost every $x\in [0,1+t]$ is contained in $U_{\Phi_t}(z,\m,h).$
\end{prop}
\begin{proof}
Fix $t\notin \mathbb{Q}$ and let $s=1/8$. For any $m\in\mathbb{N},$ it follows from the definition that $n$ is a good $1/8$-level if $n$ satisfies
\begin{equation}
\label{blahblah}
\frac{q_{m}}{4}\leq 2^{n}-1<q_{m}.
\end{equation}For any $m$ sufficiently large there is clearly at least one value of $n$ satisfying \eqref{blahblah}. As such there are infinitely many good $1/8$ levels. Now let $h:\mathbb{N}\to [0,\infty)$ be a function satisfying $\lim_{n\to\infty}h(n)=0$ and $$\sum_{\stackrel{n}{n \textrm{ is a good }1/8{\textrm{-level}}}}h(n)=\infty.$$ Now as remarked above, if $n$ is a good $1/8$-level, then $$S\left(\{\phi_{\a}(z)\}_{\a\in \D^n},\frac{1}{8\cdot 4^n}\right)=\{\phi_{\a}(z)\}_{\a\in \D^n}$$ for any $z\in[0,1+t]$. We may now apply Proposition \ref{separated full measure} and conclude that for any $z\in[0,1+t],$ Lebesgue almost every $x\in [0,1+t]$ is contained in $U_{\Phi_t}(z,\m,h)$ for this choice of $h$.
\end{proof}
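One admissible choice of $h$ in the proof above (one construction among many; any $h$ with the stated properties works): enumerate the good $1/8$-levels as $n_1<n_2<\cdots$ and set $$h(n)=\begin{cases} 1/k & \textrm{if }n=n_k,\\ 2^{-n} & \textrm{otherwise.}\end{cases}$$ Then $\lim_{n\to\infty}h(n)=0,$ while $\sum_{k=1}^{\infty}h(n_k)=\sum_{k=1}^{\infty}\frac{1}{k}=\infty.$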
In the proof of Proposition \ref{Fred} we showed that if $t\notin \mathbb{Q},$ then for infinitely many $n\in\mathbb{N}$ we have $$S(\{\phi_{\a}(z)\}_{\a\in \D^n},\frac{1}{8\cdot 4^n})=\{\phi_{\a}(z)\}_{\a\in \D^n}.$$ Theorem \ref{overlap or optimal} now follows from this observation and Proposition \ref{overlap prop}.
The following proposition implies statement $5$ from Theorem \ref{precise result}.
\begin{prop}
\label{propa}
Suppose $t\notin \mathbb{Q}$ is such that for any $\epsilon>0,$ there exists $L\in\mathbb{N}$ for which the following inequality holds for $M$ sufficiently large: $$\sum_{\stackrel{1\leq m \leq M}{\frac{q_{m+1}}{q_m}\geq L}}\log_{2}(\zeta_{m+1}+1) \leq \epsilon M.$$ Then for any $z\in[0,1+t]$ and $h\in H^*,$ Lebesgue almost every $x\in[0,1+t]$ is contained in $U_{\Phi_t}(z,\m,h)$.
\end{prop}
\begin{proof}
Fix $t$ satisfying the hypothesis of our proposition and $h\in H^*$. By definition, there exists $\epsilon>0$ such that for any $B\subset \mathbb{N}$ satisfying $\underline{d}(B)\geq 1-\epsilon$ we have
\begin{equation}
\label{bump}\sum_{n\in B}h(n)=\infty.
\end{equation}Now let us fix $s$ to be sufficiently small so that
\begin{equation}
\label{s condition}
\sum_{\stackrel{1\leq m\leq 2N+2}{q_{m+1}/q_{m}\geq 1/3s}}\log_2 (\zeta_{m+1}+1)\leq \epsilon N
\end{equation}for $N$ sufficiently large.
We observe that if $n$ is not a good $s$-level then by \eqref{property2} we must have $$\frac{q_{m_n+1}}{q_{m_n}}>\frac{1}{3s}.$$
Using \eqref{property2} and an induction argument (note that $q_m=\zeta_m q_{m-1}+q_{m-2}\geq q_{m-1}+q_{m-2}\geq 2q_{m-2}$), one can also show that $$q_{m}\geq 2^{\frac{m-2}{2}}$$ for all $m\geq 1$. Combining these observations, it follows that if $1\leq n\leq N$ and $n$ is not a good $s$-level, then there exists $1\leq m\leq 2N+2$ such that $\frac{q_{m+1}}{q_{m}}\geq 1/3s$ and $q_m\leq 2^n-1<q_{m+1}$. As such we have the bound
\begin{equation}
\label{count bounda}\#\{1\leq n\leq N: n \textrm{ is not a good }s\textrm{-level}\}\leq \sum_{\stackrel{1\leq m\leq 2N+2}{q_{m+1}/q_{m}\geq 1/3s}}\#\{n:q_{m}\leq 2^{n}-1< q_{m+1}\}.
\end{equation}By \eqref{property2} we know that for any $m\in\mathbb{N}$ we have $$\#\{n:q_{m}\leq 2^{n}-1< q_{m+1}\}\leq \log_2 (\zeta_{m+1}+1).$$ Substituting this bound into \eqref{count bounda} and applying \eqref{s condition}, we obtain
$$
\#\{1\leq n\leq N: n \textrm{ is not a good }s\textrm{-level}\}\leq \sum_{\stackrel{1\leq m\leq 2N+2}{q_{m+1}/q_{m}\geq 1/3s}}\log_2 (\zeta_{m+1}+1)\leq \epsilon N$$ for $N$ sufficiently large. It follows therefore that $$\underline{d}(n: n\textrm{ is a good } s\textrm{-level})\geq 1-\epsilon.$$ In which case, by \eqref{bump} we have \begin{equation}
\label{robot}\sum_{\stackrel{n}{n\textrm{ is a good } s\textrm{-level}}}h(n)=\infty.
\end{equation} We know that for a good $s$-level we have $$S\left(\{\phi_{\a}(z)\}_{\a\in \D^n},\frac{s}{ 4^n}\right)=\{\phi_{\a}(z)\}_{\a\in \D^n},$$ for all $z\in [0,1+t]$. Therefore combining \eqref{robot} with Proposition \ref{separated full measure} finishes our proof.
\end{proof}
The following proposition implies statement $6$ from Theorem \ref{precise result}.
\begin{prop} Suppose $\mu$ is an ergodic invariant measure for the Gauss map, and satisfies $$\sum_{m=1}^{\infty}\mu\Big(\left[\frac{1}{m+1},\frac{1}{m}\right]\Big)\log_2 (m +1)<\infty.$$ Then for $\mu$-almost every $t,$ we have that for any $z\in[0,1+t]$ and $h\in H^*,$ Lebesgue almost every $x\in[0,1+t]$ is contained in $U_{\Phi_t}(z,\m,h).$ In particular, for Lebesgue almost every $t\in[0,1],$ we have that for any $z\in[0,1+t]$ and $h\in H^*$, Lebesgue almost every $x\in[0,1+t]$ is contained in $U_{\Phi_t}(z,\m,h)$.
\end{prop}
\begin{proof}
Let $\mu$ be a measure satisfying the hypothesis of our proposition. To prove the first part of our result we will show that $\mu$-almost every $t$ satisfies the hypothesis of Proposition \ref{propa}.
Recall that the Gauss map $T:[0,1]\setminus\mathbb{Q}\to[0,1]\setminus\mathbb{Q}$ is given by $$T(x)=\frac{1}{x}-\Bigl\lfloor\frac{1}{x}\Bigr\rfloor.$$ It is well known that the dynamics of the Gauss map and the continued fraction expansion of a number $t$ are intertwined. In particular, it is known that
\begin{equation}
\label{gauss}
\zeta_{m+1}=\zeta \textrm{ if and only if }T^{m}(t)\in\left(\frac{1}{\zeta+1},\frac{1}{\zeta}\right).
\end{equation} By \eqref{property2} we know that $q_{m+1}/q_{m}\geq L$ implies $\zeta_{m+1}\geq L-1$. Using \eqref{gauss} and this observation, we have that for any $t\notin\mathbb{Q}$
\begin{equation}
\label{Hat}\sum_{\stackrel{1\leq m \leq M}{\frac{q_{m+1}}{q_m}\geq L}}\log _2(\zeta_{m+1}+1) \leq \sum_{1\leq m\leq M}\chi_{(0,\frac{1}{L-1})}(T^m(t))\log_2(f(T^m(t))+1).
\end{equation} Here $f:(0,1]\to\mathbb{N}$ is given by $$f(t)=N\textrm{ if }t\in\left(\frac{1}{N+1},\frac{1}{N}\right].$$ By our assumptions on $\mu$, we know that for any $\epsilon>0$ we can pick $L$ sufficiently large such that $$\sum_{m=L-1}^{\infty}\mu\left(\left[\frac{1}{m+1},\frac{1}{m}\right]\right)\log_2(m+1)<\epsilon.$$
Assuming that we have picked such an $L$ sufficiently large, we know by the Birkhoff ergodic theorem that for $\mu$-almost every $t$ we have \begin{align*}
\lim_{M\to\infty}\frac{1}{M}\sum_{1\leq m\leq M}\chi_{(0,\frac{1}{L-1})}(T^m(t))\log_2(f(T^m(t))+1)&=\int \chi_{(0,\frac{1}{L-1})}(t)\log_2(f(t)+1)\,d\mu(t)\\
&=\sum_{m=L-1}^{\infty}\mu\Big(\Big[\frac{1}{m+1},\frac{1}{m}\Big]\Big)\log_2(m+1)\\
&<\epsilon.
\end{align*}
Combining the above with \eqref{Hat} shows that $\mu$-almost every $t$ satisfies the hypothesis of Proposition \ref{propa}. Applying Proposition \ref{propa} completes the first half of our proof.
To deduce the Lebesgue almost every part of our proposition we remark that the Gauss measure given by $$\mu_{G}(A)=\frac{1}{\log 2}\int_{A} \frac{1}{1+x}\, dx$$ is an ergodic invariant measure for the Gauss map and is equivalent to the Lebesgue measure restricted to $[0,1]$. One can easily check that $\mu_{G}$ satisfies $$\mu_{G}\left(\left[\frac{1}{m+1},\frac{1}{m}\right]\right)=\mathcal{O}\left(\frac{1}{m^2}\right),$$ which clearly implies $$\sum_{m=1}^{\infty}\mu_{G}\left(\left[\frac{1}{m+1},\frac{1}{m}\right]\right)\log_2 (m +1)<\infty.$$ Applying the first part of this proposition completes the proof.
\end{proof}
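For completeness, the computation behind the estimate $\mu_{G}([\frac{1}{m+1},\frac{1}{m}])=\mathcal{O}(m^{-2})$ used in the preceding proof is the following elementary one: $$\mu_{G}\left(\left[\frac{1}{m+1},\frac{1}{m}\right]\right)=\frac{1}{\log 2}\log\frac{1+\frac{1}{m}}{1+\frac{1}{m+1}}=\frac{1}{\log 2}\log\left(1+\frac{1}{m(m+2)}\right)=\mathcal{O}\left(\frac{1}{m^2}\right).$$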
The following proposition proves statement $4$ from Theorem \ref{precise result}.
\begin{prop}
Suppose $t\notin\mathbb{Q}$ is not badly approximable. Then there exists $h:\mathbb{N}\to[0,\infty)$ such that $\sum_{n=1}^{\infty}h(n)=\infty,$ yet $U_{\Phi_t}(z,\m,h)$ has zero Lebesgue measure for any $z\in[0,1+t]$.
\end{prop}
\begin{proof}
Let $t\notin\mathbb{Q}$ and suppose $t$ is not badly approximable. We will prove that for some $s>0$ we have
\begin{equation}
\label{WTSQ}
\liminf_{n\to\infty}\frac{T(\{\phi_{\a}(z)\}_{\a\in \D^{n}},\frac{s}{4^{n}})}{4^{n}}=0,
\end{equation} for all $z\in[0,1+t]$. Proposition \ref{fail prop} then guarantees for each $z\in[0,1+t]$ the existence of a $h$ satisfying $\sum_{n=1}^{\infty}h(n)=\infty,$ such that $U_{\Phi_t}(z,\m,h)$ has zero Lebesgue measure. What is not clear from the statement of Proposition \ref{fail prop} is whether there exists a $h$ which satisfies this property simultaneously for all $z\in [0,1+t]$. Examining the proof of Proposition \ref{fail prop}, we see that the function $h$ that is constructed only depends upon the speed at which $$\frac{T(\{\phi_{\a}(z)\}_{\a\in \D^{n}},\frac{s}{4^{n}})}{4^{n}}$$ converges to zero along a subsequence. Since
\begin{equation}
\label{18}
T\left(\{\phi_{\a}(z)\}_{\a\in \D^{n}},\frac{s}{4^{n}}\right)=T\left(\{\phi_{\a}(z')\}_{\a\in \D^{n}},\frac{s}{4^{n}}\right)
\end{equation} for any $z,z'\in[0,1+t]$ and $n\in\mathbb{N}$, it is clear that the speed of convergence to zero along any subsequence is independent of $z$. In particular, the sequence $(n_j)$ constructed in \eqref{separated upper bound} is independent of the choice of $z$. Therefore the $h$ constructed in Proposition \ref{fail prop} will work for all $z\in[0,1+t]$ simultaneously. As such to prove our proposition it is sufficient to show that \eqref{WTSQ} holds for all $z\in [0,1+t]$.
It also follows from \eqref{18} that to prove there exists $s>0$ such that \eqref{WTSQ} holds for all $z\in[0,1+t]$, it suffices to prove that there exists $s>0$ such that \eqref{WTSQ} holds for a specific $z\in[0,1+t].$ As such let us now fix $z=0$. It can be shown that for any $n\in \N$ we have $$\{\phi_{\a}(0)\}_{\a\in\D^n}=\left\{\frac{p+qt}{2^n}: 0\leq p\leq 2^{n}-1,\,0\leq q\leq 2^n -1\right\}.$$
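Indeed, this description of $\{\phi_{\a}(0)\}_{\a\in\D^n}$ follows directly from \eqref{separate terms}: taking $x=0$ there gives $$\phi_{\a}(0)=\sum_{j=1}^n\frac{b_j}{2^j}+t\sum_{j=1}^n\frac{c_j}{2^j}=\frac{p+qt}{2^n},\qquad p=\sum_{j=1}^n 2^{n-j}b_j,\quad q=\sum_{j=1}^n 2^{n-j}c_j,$$ and $p$ and $q$ independently range over all integers in $[0,2^{n}-1]$ as $(b_j)$ and $(c_j)$ range over $\{0,1\}^n$.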
Since $t$ is not badly approximable, there exists a sequence $(m_k)$ such that $\zeta_{m_k+1}\geq k^3$ for all $k\in \N$. In which case, by \eqref{property1} and \eqref{property2} we know that
\begin{equation}
\label{stepsize}
|q_{m_k}t-p_{m_k}|\leq \frac{1}{k^3q_{m_k}}
\end{equation} for each $k\in\mathbb{N}$. Without loss of generality we assume $q_{m_k}t-p_{m_k}>0$ for all $k$. This assumption will simplify some of our later arguments.
Define the sequence $(n_k)$ via the inequalities
\begin{equation}
\label{approximate N}
2^{n_k}\leq k^2q_{m_k}< 2^{n_k+1}.
\end{equation} Consider the set of $(p,q)\in\mathbb{N}^2$ satisfying
\begin{equation}
\label{glove1}kp_{m_k}\leq p\leq 2^{n_k}-1
\end{equation}
and
\begin{equation}
\label{glove2}0\leq q\leq 2^{n_k}-1-kq_{m_k}.
\end{equation}
Note that for any $(p,q)\in \N^2$ satisfying \eqref{glove1} and \eqref{glove2} we have $$0\leq p-ip_{m_k} \leq 2^{n_k}-1$$ and $$0\leq q+iq_{m_k} \leq 2^{n_k}-1$$ for all $0\leq i\leq k$.
Given $k\in \N$ we let $$z_1=\inf\Big\{\frac{p+tq}{2^{n_k}}: (p,q) \textrm{ satisfy }\eqref{glove1}\textrm{ and }\eqref{glove2}\Big\}.$$ Equations \eqref{stepsize} and \eqref{approximate N} imply that for all $0\leq i\leq k$ we have $$z_1+\frac{i(q_{m_k}t-p_{m_k})}{2^{n_k}}\in \left[z_1,z_1+\frac{k}{2^{n_k}k^3q_{m_k}}\right]\subseteq \left[z_1,z_1+\frac{1}{4^{n_k}}\right].$$
Assume we have defined $z_1,\ldots,z_l$, we then define $z_{l+1}$ to be $$z_{l+1}=\inf\left\{\frac{p+qt}{2^{n_k}}:\frac{p+qt}{2^{n_k}}> z_l+\frac{1}{4^{n_k}} \textrm{ and }(p,q) \textrm{ satisfy }\eqref{glove1}\textrm{ and }\eqref{glove2}\right\},$$ assuming the set we are taking the infimum over is non-empty. By an analogous argument to that given above, it can be shown that for all $0\leq i\leq k$ we have
$$z_{l+1}+\frac{i(q_{m_k}t-p_{m_k})}{2^{n_k}}\in \left[z_{l+1},z_{l+1}+\frac{1}{4^{n_k}}\right].$$ This process must eventually end yielding $z_1,\ldots,z_{L(k)}.$ By our construction, we know that if $(p,q)$ satisfy \eqref{glove1} and \eqref{glove2}, then there must exist $1\leq l\leq L(k)$ such that $$\frac{p+qt}{2^{n_k}}\in \left[z_l,z_l+\frac{1}{4^{n_k}}\right].$$ It also follows from our construction that each interval $[z_l,z_l+4^{-{n_k}}]$ contains at least $k+1$ distinct points of the form $\frac{p+qt}{2^{n_k}}$ where $0\leq p\leq 2^{n_k}-1$ and $0\leq q\leq 2^{n_k}-1.$ Since there are only $4^{n_k}$ such points we have
\begin{equation}
\label{cake1}
L(k)\leq \frac{4^{n_k}}{k+1}.
\end{equation} We also have the bound
\begin{equation}
\label{cake2}
\#\left\{(p,q):\textrm{either }\eqref{glove1} \textrm{ or }\eqref{glove2}\textrm{ is not satisfied}\right\}=\mathcal{O}\left(2^{n_k} kp_{m_k}+2^{n_k} kq_{m_k}\right).
\end{equation} Now let $S(\{\frac{p+tq}{2^{n_k}}\},\frac{1}{4^{n_k}})$ be a maximal $4^{-n_k}$ separated subset of $\{\frac{p+tq}{2^{n_k}}\},$ or equivalently of $\{\phi_{\a}(0)\}_{\a\in \D^{n_k}}$. Then we have
\begin{align}
\label{mario}
\frac{T(\{\frac{p+tq}{2^{n_k}}\},\frac{1}{4^{n_k}})}{4^{n_k}}&=\frac{\#\{(p,q): \eqref{glove1} \textrm{ and }\eqref{glove2} \textrm{ are satisfied and }\frac{p+tq}{2^{n_k}}\in S(\{\frac{p+tq}{2^{n_k}}\},\frac{1}{4^{n_k}})\}}{4^{n_k}} \\
&+\frac{\#\{(p,q):\textrm{either }\eqref{glove1} \textrm{ or }\eqref{glove2}\textrm{ is not satisfied and }\frac{p+tq}{2^{n_k}}\in S(\{\frac{p+tq}{2^{n_k}}\},\frac{1}{4^{n_k}})\}}{4^{n_k}}.\nonumber
\end{align}
If $(p,q)$ satisfy \eqref{glove1} and \eqref{glove2}, then as stated above $\frac{p+tq}{2^{n_k}}\in [z_l,z_l+\frac{1}{4^{n_k}}]$ for some $1\leq l\leq L(k)$. Clearly a $4^{-n_k}$ separated set can only contain one point from each interval $[z_l,z_l+\frac{1}{4^{n_k}}].$ Therefore
\begin{equation}
\label{cake3}\#\left\{(p,q): \eqref{glove1} \textrm{ and }\eqref{glove2} \textrm{ are satisfied and }\frac{p+tq}{2^{n_k}}\in S\Big(\Big\{\frac{p+tq}{2^{n_k}}\Big\},\frac{1}{4^{n_k}}\Big)\right\}\leq L(k).
\end{equation} Substituting the bounds \eqref{cake1}, \eqref{cake2}, and \eqref{cake3} into \eqref{mario}, we obtain
$$\frac{T(\{\frac{p+tq}{2^{n_k}}\},\frac{1}{4^{n_k}})}{4^{n_k}}=\mathcal{O}\left(\frac{1}{k+1}+\frac{kp_{m_{k}}+kq_{m_{k}}}{2^{n_k}}\right).$$ Employing \eqref{approximate N} and the fact $q_{m_k}\asymp p_{m_k},$ we obtain $$\frac{T(\{\frac{p+tq}{2^{n_k}}\},\frac{1}{4^{n_k}})}{4^{n_k}}=\mathcal{O}\left(\frac{1}{k}\right).$$ Therefore $$\lim_{k\to\infty}\frac{T(\{\phi_{\a}(z)\}_{\a\in \D^{n_k}},\frac{1}{4^{n_k}})}{4^{n_k}}=0$$ and our proof is complete.
\end{proof}
\section{Proof of Theorem \ref{Colette thm}}
\label{Colette section}
In this section we prove Theorem \ref{Colette thm}. We start with a reformulation of what it means for an IFS to be consistently separated with respect to a measure $\m$.
\begin{thm}
\label{new colette}
Let $\m$ be a slowly decaying measure. An IFS has the CS property with respect to $\m$ if and only if there exists $z\in X$ and $s>0$ such that $$\liminf_{n\to\infty}\frac{T(\{\phi_{\a}(z)\}_{\a\in L_{\m,n}},\frac{s}{R_{\m,n}^{1/d}})}{R_{\m,n}}>0.$$
\end{thm}
\begin{proof}
Suppose that for any $z\in X$ and $s>0$ we have $$\liminf_{n\to\infty}\frac{T(\{\phi_{\a}(z)\}_{\a\in L_{\m,n}},\frac{s}{R_{\m,n}^{1/d}})}{R_{\m,n}}=0.$$ Then by Proposition \ref{fail prop}, Lemma \ref{arbitrarily small}, and the fact $R_{\m,n}^{-1}\asymp \m([\a])$ for each $\a\in L_{\m,n}$, for any $z\in X$ there exists $h:\mathbb{N}\to[0,\infty)$ such that $\sum_{n=1}^{\infty}h(n)=\infty,$ yet $U_{\Phi}(z,\m,h)$ has zero Lebesgue measure. Therefore the IFS cannot satisfy the CS property with respect to $\m$. So we have shown the rightwards implication in our if and only if.
Now suppose that there exists $z\in X$ and $s>0$ such that $$\liminf_{n\to\infty}\frac{T(\{\phi_{\a}(z)\}_{\a\in L_{\m,n}},\frac{s}{R_{\m,n}^{1/d}})}{R_{\m,n}}>0.$$ Then there exists $c>0$ such that for all $n$ sufficiently large we have $$\frac{T(\{\phi_{\a}(z)\}_{\a\in L_{\m,n}},\frac{s}{R_{\m,n}^{1/d}})}{R_{\m,n}}>c.$$ Combining the fact that $R_{\m,n}^{-1}\asymp \m([\a])$ for $\a\in L_{\m,n}$ together with Lemma \ref{arbitrarily small}, we see that Proposition \ref{fixed omega} implies that the set $U_{\Phi}(z,\m,h)$ has positive Lebesgue measure for any $h$ satisfying $\sum_{n=1}^{\infty}h(n)=\infty.$ Therefore our IFS satisfies the CS property with respect to $\m$ and we have proved the leftwards implication of our if and only if.
\end{proof}
The reformulation of the CS property provided by Theorem \ref{new colette} better explains why we used the terminology consistently separated to describe this property.
With the reformulation provided by Theorem \ref{new colette}, we can give a short proof of Theorem \ref{Colette thm}.
\begin{proof}[Proof of Theorem \ref{Colette thm}]
Let $\m$ be a slowly decaying $\sigma$-invariant ergodic probability measure. Suppose that $\mu$, the pushforward of $\m,$ is not absolutely continuous. Then by Proposition \ref{absolute continuity}, for any $z\in X$ and $s>0$ we have $$\lim_{n\to\infty}\frac{T(\{\phi_{\a}(z)\}_{\a\in L_{\m,n}},\frac{s}{R_{\m,n}^{1/d}})}{R_{\m,n}}=0.$$ By Theorem \ref{new colette} it follows that the IFS $\Phi$ does not satisfy the CS property with respect to $\m$.
\end{proof}
\section{Proof of Theorem \ref{overlapping conformal theorem}}
\label{conformal section}
In this section we prove Theorem \ref{overlapping conformal theorem}. Recall that Theorem \ref{overlapping conformal theorem} relates to conformal iterated function systems. The parameter $\dim_{S}(\Phi)$ is the unique solution to $$P(s\cdot \log |\phi_{a_1}'(\pi(\sigma(a_j)))|)=0.$$ Moreover, $\m_{\Phi}$ is the unique measure supported on $\D^{\mathbb{N}}$ satisfying $$h_{\m_{\Phi}}+\int \dim_{S}(\Phi)\cdot \log |\phi_{a_1}'(\pi(\sigma(a_j)))|d\m_{\Phi}=0.$$ To prove Theorem \ref{overlapping conformal theorem} we need to state some additional properties of the measure $\m_{\Phi}$:
\begin{itemize}
\item Let $x\in X$ and $(a_j)$ be such that $\pi(a_j)=x$. Then for any $r\in(0,Diam(X)),$ there exists $N(r)\in\mathbb{N}$ such that \begin{equation}
\label{prop1}X_{a_1\cdots a_{N(r)}}\subseteq B(x,r)\textrm{ and } Diam(X_{a_1\cdots a_{N(r)}})\approx r.
\end{equation}
\item For any $\a\in \D^*$ we have
\begin{equation}
\label{prop2}\m_{\Phi}([\a])\asymp Diam(X_{\a})^{\dim_{S}(\Phi)}.
\end{equation}
\item For any $\a,\b\in\D^*$ we have
\begin{equation}
\label{prop3} \m_{\Phi}([\a\b])\asymp \m_{\Phi}([\a])\m_{\Phi}([\b]).
\end{equation}
\item For any $\a,\b\in\D^*$ we have
\begin{equation}
\label{prop5}
Diam(X_{\a\b})\asymp Diam(X_{\a})Diam(X_{\b}).
\end{equation}
\item There exists $\gamma\in(0,1)$ such that
\begin{equation}
\label{prop4}
\m_{\Phi}([\a]) =\mathcal{O}(\gamma^{|\a|}).
\end{equation}
\end{itemize}
For a proof of these properties we refer the reader to \cite{Fal2}, \cite{PRSS}, and \cite{Rue}.
Before giving our proof we make an observation. Given $\theta:\mathbb{N}\to[0,\infty)$ we have the following equivalences:
\begin{align*} \sum_{n=1}^{\infty}\sum_{\a\in\D^n}(Diam(X_\a)\theta(n))^{\dim_{S}(\Phi)}=\infty
\iff&\sum_{n=1}^{\infty}\theta(n)^{\dim_{S}(\Phi)}\sum_{\a\in\D^n}Diam(X_\a)^{\dim_{S}(\Phi)}=\infty\\
\stackrel{\eqref{prop2}}{\iff}&\sum_{n=1}^{\infty}\theta(n)^{\dim_{S}(\Phi)}\sum_{\a\in\D^n}\m_{\Phi}([\a])=\infty\\
\iff&\sum_{n=1}^{\infty}\theta(n)^{\dim_{S}(\Phi)}=\infty.
\end{align*}So the hypothesis of Theorem \ref{overlapping conformal theorem} can be restated in terms of the divergence of $\sum_{n=1}^{\infty}\theta(n)^{\dim_{S}(\Phi)}.$
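For example (with the divergence hypothesis restated in this way), the decreasing function $\theta(n)=n^{-1/\dim_{S}(\Phi)}$ satisfies $$\sum_{n=1}^{\infty}\theta(n)^{\dim_{S}(\Phi)}=\sum_{n=1}^{\infty}\frac{1}{n}=\infty,$$ and so falls within the scope of Theorem \ref{overlapping conformal theorem}, whereas $\theta(n)=n^{-(1+\epsilon)/\dim_{S}(\Phi)}$ gives a convergent sum for any $\epsilon>0$.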
\begin{proof}[Proof of Theorem \ref{overlapping conformal theorem}]
We split our proof into individual steps.\\
\noindent\textbf{Step $1$. Lifting to $\D^{\mathbb{N}}$.}\\
Let us fix $z\in X$ and $\theta$ satisfying the hypothesis of our theorem. Moreover, we let $(z_j)\in \D^{\N}$ be a sequence such that $\pi(z_j)=z$. For any $\a\in \D^*$ consider the ball $$B(\phi_{\a}(z),Diam(X_\a)\theta(|\a|)).$$ By \eqref{prop1} we know that there exists $N(\a,\theta)$ such that
\begin{equation}
\label{inclusionZZ}X_{\a z_1\cdots z_{N(\a,\theta)}}\subseteq B(\phi_{\a}(z),Diam(X_\a)\theta(|\a|))
\end{equation}and
\begin{equation}
\label{hamster}
Diam(X_{\a z_1 \cdots z_{N(\a,\theta)}})\asymp Diam(X_\a)\theta(|\a|).
\end{equation} In what follows we let $$\a_{\theta}:=\a z_1\cdots z_{N(\a,\theta)}.$$Equation \eqref{inclusionZZ} implies the following:
\begin{align*}
\mu_{\Phi}(W_{\Phi}(z,\theta))&=\m_{\Phi}((b_j):\pi(b_j)\in W_{\Phi}(z,\theta))\\
&\geq \m_{\Phi}((b_j):(b_j)\in [\a_{\theta}] \textrm{ for i.m. }\a\in \D^*).
\end{align*} To complete our proof it therefore suffices to show that
\begin{equation}
\label{need to show}\m_{\Phi}((b_j):(b_j)\in [\a_{\theta}] \textrm{ for i.m. }\a\in \D^*)=1.
\end{equation} Note that we have
\begin{equation}
\label{divergencezz}\sum_{n=1}^{\infty}\sum_{\a\in \D^n}\m_{\Phi}([\a_{\theta}])=\infty.
\end{equation}This is because of our underlying divergence assumption and the following:
$$\sum_{n=1}^{\infty}\sum_{\a\in \D^n}\m_{\Phi}([\a_{\theta}])\stackrel{\eqref{prop2}}{\asymp}\sum_{n=1}^{\infty}\sum_{\a\in \D^n}Diam(X_{\a_{\theta}})^{\dim_{S}(\Phi)}\stackrel{\eqref{hamster}}{\asymp}\sum_{n=1}^{\infty}\sum_{\a\in \D^n}(Diam(X_\a)\theta(|\a|))^{\dim_{S}(\Phi)}.$$
\noindent \textbf{Step 2. A density theorem for $\D^{\mathbb{N}}$.}\\
To prove \eqref{need to show} we will make use of a density argument. Since we are working in $\D^{\mathbb{N}}$ we do not have the Lebesgue density theorem. Instead we have the following statement: if $E\subset \D^{\mathbb{N}}$ satisfies $\m_{\Phi}(E)>0,$ then for $\m_{\Phi}$-almost every $(c_j)\in E$ we have
\begin{equation}
\label{sequence density}\lim_{M\to\infty}\frac{\m_{\Phi}([c_1\cdots c_M]\cap E)}{\m_{\Phi}([c_1\cdots c_M])}=1.
\end{equation} One can see that this statement holds using the results of Rigot \cite{Rig}. In particular, we can equip $\D^\mathbb{N}$ with a metric so that $\m_{\Phi}$ is a doubling measure. We can then apply Theorem $2.15$ and Theorem $3.1$ from \cite{Rig}. Using \eqref{sequence density}, we see that to prove \eqref{need to show}, it suffices to show that for any $(c_j)\in \D^{\mathbb{N}},$ there exists $d>0$ such that
\begin{equation}
\label{needtoshowb}
\frac{\m_{\Phi}([c_1\cdots c_M]\cap \{(b_j):(b_j)\in [\a_{\theta}] \textrm{ for i.m. }\a\in \D^*\})}{\m_{\Phi}([c_1\cdots c_M])}>d
\end{equation} for all $M$ sufficiently large. The rest of the proof now follows from a similar argument to that given by the author in \cite{Bak2}. The difference is that here we are working in the sequence space $\D^\N$ rather than in $\mathbb{R}^d$. We include the relevant details for the sake of completeness. \\
\noindent\textbf{Step 3. Defining $E_n$ and verifying the hypothesis of Lemma \ref{Erdos lemma}.}\\
Let us fix $(c_j)\in\D^{\mathbb{N}}$ and $M\in\mathbb{N}$. In what follows we let $\c=c_1\cdots c_M$. For $n\geq M$ let $$E_n:=\left\{[\a_{\theta}]:\a\in\D^n\textrm{ and }a_1\cdots a_M=\c\right\},$$ and let $$E:=\limsup_{n\to\infty} E_n.$$ Note that $$E\subseteq [\c]\cap \big\{(b_j):(b_j)\in [\a_{\theta}] \textrm{ for i.m. }\a\in \D^*\big\}.$$ Therefore to prove \eqref{needtoshowb}, it is sufficient to prove that there exists $d>0$ independent of $M$ such that
\begin{equation}
\label{needtoshowc}
\m_{\Phi}(E)>d\m_{\Phi}([\c]).
\end{equation} Note that $$\sum_{n=M}^{\infty}\m_{\Phi}(E_n)=\infty.$$This follows from
\begin{align*}
\sum_{n=M}^{\infty}\m_{\Phi}(E_n)&=\sum_{n=M}^{\infty}\sum_{\stackrel{\a\in \D^n}{a_1\cdots a_M=\c}}\m_{\Phi}([\a_{\theta}])\\
&=\sum_{n=M}^{\infty}\sum_{\b\in \D^{n-M}}\m_{\Phi}([\c\b z_1\cdots z_{N(\c\b,\theta)}])\\
&\stackrel{\eqref{prop2}}{\asymp} \sum_{n=M}^{\infty}\sum_{\b\in \D^{n-M}} Diam(X_{\c\b z_1\cdots z_{N(\c\b,\theta)}})^{\dim_{S}(\Phi)}\\
&\stackrel{\eqref{hamster}}{\asymp} \sum_{n=M}^{\infty}\sum_{\b\in \D^{n-M}}(Diam(X_{ \c\b})\theta(n))^{\dim_{S}(\Phi)}\\
&\stackrel{\eqref{prop5}}{\asymp} \sum_{n=M}^{\infty}\sum_{\b\in \D^{n-M}} Diam(X_{\c})^{\dim_{S}(\Phi)}(Diam(X_{\b})\theta(n))^{\dim_{S}(\Phi)}\\
&\stackrel{\eqref{prop2}}{\asymp} Diam(X_{\c})^{\dim_{S}(\Phi)}\sum_{n=M}^{\infty}\theta(n)^{\dim_{S}(\Phi)}\sum_{\b\in \D^{n-M}}\m_{\Phi}([\b])\\
&=Diam(X_{\c})^{\dim_{S}(\Phi)}\sum_{n=M}^{\infty}\theta(n)^{\dim_{S}(\Phi)}\\
&=\infty.
\end{align*}
In the last line we made use of our underlying hypothesis and the equivalence stated before our proof. Importantly we see that the collection of sets $\{E_n\}_{n\geq M}$ satisfies the hypothesis of Lemma \ref{Erdos lemma}. \\
\noindent\textbf{Step 4. Bounding $\sum_{n,m=M}^Q\m_{\Phi}(E_n\cap E_m).$}\\
To apply Lemma \ref{Erdos lemma} we need to show that the following bound holds:
\begin{equation}
\label{superman}
\sum_{n,m=M}^Q\m_{\Phi}(E_n\cap E_m)=\mathcal{O}\left(\m_{\Phi}([\c])\left(\sum_{n=M}^{Q}\theta(n)^{\dim_{S}(\Phi)} + \left(\sum_{n=M}^{Q}\theta(n)^{\dim_{S}(\Phi)}\right)^2 \right)\right).
\end{equation}
Let $\a\in \D^n$ be such that $a_1\cdots a_M=\c$ and $m\geq M$. As a first step in our proof of \eqref{superman} we will bound $$\m_{\Phi}([\a_\theta]\cap E_m).$$ There are two cases that naturally arise, when $m>|\a|+N(\a,\theta)$ and when $|\a|<m\leq |\a|+N(\a,\theta).$ Let us consider first the case $|\a|<m\leq |\a|+N(\a,\theta).$ If $|\a|< m\leq |\a|+N(\a,\theta)$ then there is at most one $\a'\in \D^m$ such that
$$[\a_{\theta}]\cap [\a'_{\theta}]\neq\emptyset.$$ Moreover this $\a'$ must equal $\a z_1\cdots z_{m-n}.$ This gives us the bound:
\begin{align*}
\m_{\Phi}([\a_\theta]\cap E_m)&=\m_{\Phi}([\a_\theta]\cap[\a'_{\theta}])\\
&\leq \m_{\Phi}([\a'_{\theta}])\\
&\stackrel{\eqref{prop2}}{\asymp} Diam(X_{\a'_{\theta}})^{\dim_{S}(\Phi)}\\
&\stackrel{\eqref{hamster}}{\asymp} (Diam(X_{\a'})\theta(m))^{\dim_{S}(\Phi)}\\
&\stackrel{\eqref{prop5}}{\asymp}(Diam(X_{\a})Diam(X_{z_1\cdots z_{m-n}})\theta(m))^{\dim_{S}(\Phi)}\\
&\stackrel{\eqref{prop2}}{\asymp}\m_{\Phi}([\a])\m_{\Phi}([z_1\cdots z_{m-n}])\theta(m)^{\dim_{S}(\Phi)}\\
&\leq\m_{\Phi}([\a])\m_{\Phi}([z_1\cdots z_{m-n}])\theta(n)^{\dim_{S}(\Phi)}\\
&\stackrel{\eqref{prop4}}{=}\mathcal{O}\left(\m_{\Phi}([\a])\theta(n)^{\dim_{S}(\Phi)}\gamma^{m-n}\right).
\end{align*}
In the penultimate line we used that $\theta$ is decreasing. Thus we have shown that if $|\a|<m\leq |\a|+N(\a,\theta)$ then
\begin{equation}
\label{level 1 bound}
\m_{\Phi}([\a_{\theta}]\cap E_m)=\mathcal{O}\left(\m_{\Phi}([\a])\theta(n)^{\dim_{S}(\Phi)}\gamma^{m-n}\right).
\end{equation}
We now consider the case where $m>|\a|+N(\a,\theta).$ In this case, if $\a'\in \D^m$ and $$[\a_{\theta}]\cap [\a'_{\theta}]\neq\emptyset,$$ we must have $$a_1'\cdots a_{|\a|+N(\a,\theta)}'=\a_{\theta}.$$ Using this observation we obtain:
\begin{align*}
\m_{\Phi}([\a_{\theta}]\cap E_m)&=\sum_{\stackrel{\a'\in \D^m}{a_1'\cdots a_{|\a|+N(\a,\theta)}'=\a_{\theta}}}\m_{\Phi}([\a'_{\theta}])\\
&=\sum_{\b'\in \D^{m-n-N(\a,\theta)}}\m_{\Phi}([\a_{\theta}\b'z_1\cdots z_{N(\a_{\theta}\b',\theta)}])\\
&\stackrel{\eqref{prop2}}{\asymp}\sum_{\b'\in \D^{m-n-N(\a,\theta)}}Diam(X_{\a_{\theta}\b'z_1\cdots z_{N(\a_{\theta}\b',\theta)}})^{\dim_{S}(\Phi)}\\
&\stackrel{\eqref{hamster}}{\asymp}\sum_{\b'\in \D^{m-n-N(\a,\theta)}}(Diam(X_{\a_{\theta}\b'})\theta(m))^{\dim_{S}(\Phi)}\\
&\stackrel{\eqref{prop5}}{\asymp}(Diam(X_{\a_{\theta}})\theta(m))^{\dim_{S}(\Phi)}\sum_{\b'\in \D^{m-n-N(\a,\theta)}}Diam(X_{\b'})^{\dim_{S}(\Phi)}\\
&\stackrel{\eqref{prop2}}{\asymp}(Diam(X_{\a_{\theta}})\theta(m))^{\dim_{S}(\Phi)}\sum_{\b'\in \D^{m-n-N(\a,\theta)}}\m_{\Phi}([\b'])\\
&\stackrel{\eqref{hamster}}{\asymp}(Diam(X_\a)\theta(n)\theta(m))^{\dim_{S}(\Phi)}\\
&\stackrel{\eqref{prop2}}{\asymp}\m_{\Phi}([\a])\theta(n)^{\dim_{S}(\Phi)}\theta(m)^{\dim_{S}(\Phi)}.
\end{align*}Thus we have shown that if $m>|\a|+N(\a,\theta)$ then
\begin{equation}
\label{level 2 bound}
\m_{\Phi}([\a_{\theta}]\cap E_m)\asymp\m_{\Phi}([\a])\theta(n)^{\dim_{S}(\Phi)}\theta(m)^{\dim_{S}(\Phi)}.
\end{equation}Combining \eqref{level 1 bound} and \eqref{level 2 bound} we obtain the bound
\begin{equation}
\label{level 3 bound}
\m_{\Phi}([\a_{\theta}]\cap E_m)=\mathcal{O}\left(\m_{\Phi}([\a])\theta(n)^{\dim_{S}(\Phi)}\gamma^{m-n}+\m_{\Phi}([\a])\theta(n)^{\dim_{S}(\Phi)}\theta(m)^{\dim_{S}(\Phi)}\right).
\end{equation}Importantly this bound holds for all $m>n$.
Applying \eqref{level 3 bound} we obtain:
\begin{align}
\label{triple split}
\sum_{n,m=M}^Q\m_{\Phi}(E_n\cap E_m)&=\sum_{n=M}^{Q}\m_{\Phi}(E_n)+2\sum_{n=M}^{Q-1}\sum_{m=n+1}^{Q}\m_{\Phi}(E_n\cap E_m)\nonumber\\
&=\sum_{n=M}^{Q}\m_{\Phi}(E_n)+2\sum_{n=M}^{Q-1}\sum_{\stackrel{\a\in \D^n}{a_1\cdots a_M=\c}}\sum_{m=n+1}^{Q}\m_{\Phi}([\a_{\theta}]\cap E_m)\nonumber\\
&\stackrel{\eqref{level 3 bound}}{=}\sum_{n=M}^{Q}\m_{\Phi}(E_n)+\mathcal{O}\left(\sum_{n=M}^{Q-1}\sum_{\stackrel{\a\in \D^n}{a_1\cdots a_M=\c}}\sum_{m=n+1}^{Q}\m_{\Phi}([\a])\theta(n)^{\dim_{S}(\Phi)}\gamma^{m-n}\right)\nonumber\\
&+\mathcal{O}\left(\sum_{n=M}^{Q-1}\sum_{\stackrel{\a\in \D^n}{a_1\cdots a_M=\c}}\sum_{m=n+1}^{Q}\m_{\Phi}([\a])\theta(n)^{\dim_{S}(\Phi)}\theta(m)^{\dim_{S}(\Phi)}\right).
\end{align}
We now analyse each term in \eqref{triple split} individually. Repeating the arguments given at the end of Step $3,$ we can show that
\begin{equation}
\label{piece1}
\sum_{n=M}^{Q}\m_{\Phi}(E_n)\asymp \m_{\Phi}([\c])\sum_{n=M}^Q\theta(n)^{\dim_{S}(\Phi)}.
\end{equation}Focusing on the second term in \eqref{triple split} we obtain:
\begin{align}
\label{piece2}
&\sum_{n=M}^{Q-1}\sum_{\stackrel{\a\in \D^n}{a_1\cdots a_M=\c}}\sum_{m=n+1}^{Q}\m_{\Phi}([\a])\theta(n)^{\dim_{S}(\Phi)}\gamma^{m-n}\nonumber\\
&\stackrel{\eqref{prop2}}{\asymp} \m_{\Phi}([\c])\sum_{n=M}^{Q-1}\sum_{\b\in \D^{n-M}}\sum_{m=n+1}^{Q}\m_{\Phi}([\b])\theta(n)^{\dim_{S}(\Phi)}\gamma^{m-n}\nonumber\\
&\asymp \m_{\Phi}([\c])\sum_{n=M}^{Q-1}\theta(n)^{\dim_{S}(\Phi)}\sum_{\b\in \D^{n-M}}\m_{\Phi}([\b])\sum_{m=n+1}^{Q}\gamma^{m-n}\nonumber\\
&=\mathcal{O}\left(\m_{\Phi}([\c])\sum_{n=M}^{Q-1}\theta(n)^{\dim_{S}(\Phi)}\right).
\end{align}In the last line we used that $\gamma\in(0,1)$ so $\sum_{m=n+1}^{Q}\gamma^{m-n}$ can be bounded above by a constant independent of $n$ and $Q$.
We now focus on the third term in \eqref{triple split}:
\begin{align}
\label{piece3}
&\sum_{n=M}^{Q-1}\sum_{\stackrel{\a\in \D^n}{a_1\cdots a_M=\c}}\sum_{m=n+1}^{Q}\m_{\Phi}([\a])\theta(n)^{\dim_{S}(\Phi)}\theta(m)^{\dim_{S}(\Phi)}\nonumber\\
& \stackrel{\eqref{prop2}}{\asymp}\m_{\Phi}([\c])\sum_{n=M}^{Q-1}\sum_{\b\in \D^{n-M}}\sum_{m=n+1}^{Q}\m_{\Phi}([\b])\theta(n)^{\dim_{S}(\Phi)}\theta(m)^{\dim_{S}(\Phi)}\nonumber\\
&=\m_{\Phi}([\c])\sum_{n=M}^{Q-1}\theta(n)^{\dim_{S}(\Phi)}\sum_{\b\in \D^{n-M}}\m_{\Phi}([\b])\sum_{m=n+1}^{Q}\theta(m)^{\dim_{S}(\Phi)}\nonumber\\
&\leq \m_{\Phi}([\c])\left(\sum_{n=M}^{Q}\theta(n)^{\dim_{S}(\Phi)}\right)^2.
\end{align}Substituting \eqref{piece1}, \eqref{piece2}, and \eqref{piece3} into \eqref{triple split} we obtain
$$\sum_{n,m=M}^Q\m_{\Phi}(E_n\cap E_m)=\mathcal{O}\left(\m_{\Phi}([\c])\left(\sum_{n=M}^{Q}\theta(n)^{\dim_{S}(\Phi)} + \left(\sum_{n=M}^{Q}\theta(n)^{\dim_{S}(\Phi)}\right)^2 \right)\right).$$ Therefore \eqref{superman} holds. \\
\noindent\textbf{Step 5. Applying Lemma \ref{Erdos lemma}.}\\
Since $\sum_{n=M}^{\infty}\theta(n)^{\dim_{S}(\Phi)}=\infty$ there exists $Q$ such that $\sum_{n=M}^{Q}\theta(n)^{\dim_{S}(\Phi)}>1.$ Therefore for $Q$ sufficiently large we have
\begin{equation}
\label{luigi}\sum_{n=M}^{Q}\theta(n)^{\dim_{S}(\Phi)}<\left(\sum_{n=M}^{Q}\theta(n)^{\dim_{S}(\Phi)}\right)^2.
\end{equation}It follows now by \eqref{superman}, \eqref{piece1} and \eqref{luigi} that there exists some $d>0$ independent of $M$ such that
$$\limsup_{Q\to\infty}\frac{(\sum_{n=M}^{Q}\m_{\Phi}(E_n))^2}{\sum_{n,m=M}^Q\m_{\Phi}(E_n\cap E_m)}\geq \limsup_{Q\to\infty}\frac{d\cdot\left(\m_{\Phi}([\c]) \sum_{n=M}^{Q}\theta(n)^{\dim_{S}(\Phi)}\right)^2}{\m_{\Phi}([\c])\left(\sum_{n=M}^{Q}\theta(n)^{\dim_{S}(\Phi)}\right)^2}= d \m_{\Phi}([\c]).$$
Applying Lemma \ref{Erdos lemma} it follows that $$\m_{\Phi}(\limsup_{n\to\infty} E_n)\geq d \m_{\Phi}([\c]).$$ This implies \eqref{needtoshowc} and completes our proof.
\end{proof}
\section{Applications of the mass transference principle}
\label{misc}
The main results of this paper give conditions ensuring a limsup set of the form $W_{\Phi}(z,\Psi)$ or $U_{\Phi}(z,\m,h)$ has positive or full Lebesgue measure. For these results it is necessary to assume that some appropriate volume sum diverges. If the relevant volume sum converged, then the limsup set in question would automatically have zero Lebesgue measure by the Borel-Cantelli lemma. It is still an interesting problem to determine the metric properties of a limsup set when the volume sum converges. Thankfully there is a powerful tool for determining the size of a limsup set when the volume sum converges. This tool is known as the mass transference principle and is due to Beresnevich and Velani \cite{BerVel}. We provide a brief account of this technique below.
We say that a set $X\subset\mathbb{R}^d$ is Ahlfors regular if $$\mathcal{H}^{\dim_{H}(X)}(X\cap B(x,r))\asymp r^{\dim_{H}(X)}$$ for all $x\in X$ and $0<r<Diam(X).$ Given $s>0$ and a ball $B(x,r),$ we define $$B^s:=B(x,r^{s/\dim_{H}(X)}).$$ The theorem stated below is a weaker version of a statement proved in \cite{BerVel}. It is sufficient for our purposes.
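To illustrate the notation $B^s$: if $X$ is the middle third Cantor set, which is Ahlfors regular with $\dim_{H}(X)=\log 2/\log 3$, then $$B(x,r)^s=B\left(x,r^{s\log 3/\log 2}\right),$$ so for $s<\dim_{H}(X)$ and $r<1$ the ball $B^s$ is a genuine enlargement of $B$.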
\begin{thm}
\label{Mass transference principle}
Let $X$ be Ahlfors regular and $(B_j)$ be a sequence of balls with radii converging to zero. Let $s>0$ and suppose that for any ball $B$ in $X$ we have $$\mathcal{H}^{\dim_{H}(X)}(B\cap \limsup_{j\to\infty}B_j^s)=\mathcal{H}^{\dim_{H}(X)}(B).$$ Then, for any ball $B$ in $X$ $$\mathcal{H}^s(B\cap \limsup_{j\to\infty} B_j)=\mathcal{H}^s(B).$$
\end{thm}
Theorem \ref{Mass transference principle} can be applied in conjunction with Theorems \ref{1d thm}, \ref{translation thm} and \ref{precise result}, to prove many Hausdorff dimension results for the limsup sets $W_{\Phi}(z,\Psi)$ and $U_{\Phi}(z,\m,h)$ when the appropriate volume sum converges. We simply have to restrict to a subset of the parameter space where we know that the corresponding attractor will always be Ahlfors regular. For the sake of brevity we content ourselves with the following statement for the family of iterated function systems studied in Section \ref{Specific family}. This statement is a consequence of Theorem \ref{precise result} and Theorem \ref{Mass transference principle}.
\begin{thm}
Suppose $t\notin \mathbb{Q}$. Then for any $z\in [0,1+t]$ and $s>0$ we have $$\dim_{H}(W_{\Phi_t}(z,4^{-|\a|(1+s)}))=\frac{1}{1+s}$$ and $$\mathcal{H}^{\frac{1}{1+s}}(W_{\Phi_t}(z,4^{-|\a|(1+s)}))=\infty.$$
\end{thm}
\section{Examples}
\label{Examples}
The purpose of this section is to provide some explicit examples to accompany the main results of this paper.
\subsection{IFSs satisfying the CS property}
Here we provide two classes of IFSs that satisfy the CS property with respect to some measure $\m$. These IFSs will have contraction ratios lying in a special class of algebraic integers known as Garsia numbers. A Garsia number is a positive real algebraic integer with norm $\pm 2$ whose Galois conjugates all have modulus strictly greater than $1$. Examples of Garsia numbers include $\sqrt[n]{2}$ for any $n\in\mathbb{N}$, and $1.76929\ldots,$ the appropriate root of $x^3-2x-2=0$. The lemma below is due to Garsia \cite{Gar}, for a short proof see \cite{Bak2}.
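As a quick verification for the cubic example (a routine computation): $x^3-2x-2$ is irreducible by Eisenstein's criterion at $2$, has one real root $\beta\approx 1.76929,$ and a pair of complex conjugate roots $\gamma,\bar{\gamma}$. Since the product of the roots equals $2$ we have $$|\gamma|^2=\frac{2}{\beta}\approx 1.13,\qquad |\gamma|\approx 1.06>1,$$ so $\beta$ is indeed a Garsia number.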
\begin{lemma}
\label{Garsia separation}
Let $\lambda$ be the reciprocal of a Garsia number. Then there exists $s>0$ such that for any two distinct $\a,\a'\in\{-1,1\}^n$ we have
$$\Big|\sum_{j=0}^{n-1}a_j\lambda^j-\sum_{j=0}^{n-1}a_j'\lambda^j\Big|> \frac{s}{2^n}.$$
\end{lemma}
\begin{example}
Let $\m$ be the $(1/2,1/2)$ Bernoulli measure and for each $\lambda\in(1/2,1),$ let the corresponding IFS be $$\Phi_{\lambda}:=\{\phi_{-1}(x)=\lambda x -1,\phi_1(x)=\lambda x+1\}.$$ For any $\a,\a'\in\{-1,1\}^{n}$ and $z\in [\frac{-1}{1-\lambda},\frac{1}{1-\lambda}],$ it can be shown that $$\phi_{\a}(z)-\phi_{\a'}(z)=\sum_{j=0}^{n-1}a_j\lambda^j-\sum_{j=0}^{n-1}a_j'\lambda^j.$$ Therefore by Lemma \ref{Garsia separation}, if $\lambda$ is the reciprocal of a Garsia number, for any $z\in [\frac{-1}{1-\lambda},\frac{1}{1-\lambda}]$ and distinct $\a,\a'\in\{-1,1\}^n,$ we have $$|\phi_{\a}(z)-\phi_{\a'}(z)|> \frac{s}{2^n}.$$ It follows that for any $z\in [\frac{-1}{1-\lambda},\frac{1}{1-\lambda}]$ we have $$S\left(\{\phi_{\a}(z)\}_{\a\in\{-1,1\}^n},\frac{s}{2^n}\right)= \{\phi_{\a}(z)\}_{\a\in\{-1,1\}^n}$$ for all $n\in\mathbb{N}$. Applying Proposition \ref{separated full measure} we see that for any $z\in[\frac{-1}{1-\lambda},\frac{1}{1-\lambda}]$ and $h:\mathbb{N}\to [0,\infty)$ satisfying $\sum_{n=1}^{\infty}h(n)=\infty,$ we have that Lebesgue almost every $x\in [\frac{-1}{1-\lambda},\frac{1}{1-\lambda}]$ is contained in $U_{\Phi_{\lambda}}(z,\m,h).$ Therefore if $\lambda$ is the reciprocal of a Garsia number, then the IFS $\Phi_{\lambda}$ has the CS property with respect to $\m$. This fact is a consequence of the main result of \cite{Bak}. The proof given there relied upon certain counting estimates due to Kempton \cite{Kem}. The argument given in the proof of Proposition \ref{separated full measure} doesn't rely on any such counting estimates. Instead we make use of the fact that the Bernoulli convolution is equivalent to the Lebesgue measure and is expressible as the weak star limit of weighted Dirac masses supported on elements of the set $\{\phi_{\a}(z)\}_{\a\in\{-1,1\}^n}$.
\end{example}
\begin{example}
Let $\m$ be the $(1/4,1/4,1/4,1/4)$ Bernoulli measure and let our IFS be \begin{align*}
\Phi_{\lambda_1,\lambda_2}:=\{&\phi_1(x,y)=(\lambda_1 x+1,\lambda_2 y+1),\phi_2(x,y)=(\lambda_1 x+1,\lambda_2 y-1),\\
&\phi_3(x,y)=(\lambda_1 x-1,\lambda_2 y+1),\phi_4(x,y)=(\lambda_1 x-1,\lambda_2 y-1)\},
\end{align*} where $\lambda_1,\lambda_2\in(1/2,1)$. For each $\Phi_{\lambda_1,\lambda_2}$ the corresponding attractor is $[\frac{-1}{1-\lambda_1},\frac{1}{1-\lambda_1}]\times [\frac{-1}{1-\lambda_2},\frac{1}{1-\lambda_2}]$. If both $\lambda_1$ and $\lambda_2$ are reciprocals of Garsia numbers, then it follows from Lemma \ref{Garsia separation} that for some $s>0,$ for any $z\in [\frac{-1}{1-\lambda_1},\frac{1}{1-\lambda_1}]\times [\frac{-1}{1-\lambda_2},\frac{1}{1-\lambda_2}],$ we have $$|\phi_{\a}(z)-\phi_{\a'}(z)|> \frac{s}{2^n}$$ for distinct $\a,\a'\in \{1,2,3,4\}^n.$ Therefore
$$S\left(\{\phi_{\a}(z)\}_{\a\in\{1,2,3,4\}^n},\frac{s}{2^n}\right)= \{\phi_{\a}(z)\}_{\a\in\{1,2,3,4\}^n}$$ for any $z\in [\frac{-1}{1-\lambda_1},\frac{1}{1-\lambda_1}]\times [\frac{-1}{1-\lambda_2},\frac{1}{1-\lambda_2}]$ and all $n\in\mathbb{N}.$
Note that $d=2$ and each of our contractions has the same matrix part. Applying Proposition \ref{separated full measure}, we see that for any $z\in [\frac{-1}{1-\lambda_1},\frac{1}{1-\lambda_1}]\times [\frac{-1}{1-\lambda_2},\frac{1}{1-\lambda_2}]$ and $h:\mathbb{N}\to [0,\infty)$ satisfying $\sum_{n=1}^{\infty}h(n)=\infty,$ we have that Lebesgue almost every $x\in [\frac{-1}{1-\lambda_1},\frac{1}{1-\lambda_1}]\times [\frac{-1}{1-\lambda_2},\frac{1}{1-\lambda_2}]$ is contained in $U_{\Phi_{\lambda_1,\lambda_2}}(z,\m,h).$ Therefore when $\lambda_1,\lambda_2$ are both reciprocals of Garsia numbers, the IFS $\Phi_{\lambda_1,\lambda_2}$ satisfies the CS property with respect to $\m$. It is perhaps also worth mentioning that by Proposition \ref{separated full measure}, if both $\lambda_1$ and $\lambda_2$ are reciprocals of Garsia numbers, then the pushforward of $\m$ is absolutely continuous.
\end{example}
\subsection{The non-existence of Khintchine like behaviour without exact overlaps}
In \cite{Bak2} the author asked whether the only mechanism preventing an IFS from observing some sort of Khintchine like behaviour was the presence of exact overlaps. The example below, which is based upon Example 1.2 from \cite{Hochman2}, shows that there are other mechanisms preventing Khintchine like behaviour.
\begin{example}
Pick $t^*\in(0,2/3)$ so that the IFS $$\Phi_{t^*}:=\left\{\phi_{1}(x)=\frac{x}{3},\,\phi_{2}(x)=\frac{x+1}{3},\,\phi_{3}(x)=\frac{x+2}{3},\,\phi_{4}(x)=\frac{x+t^*}{3}\right\}$$ does not contain an exact overlap. Now consider the following IFS acting on $\mathbb{R}^2$:
\begin{align*}
\Phi_{t^*}':=\{&\phi_1'(x,y)=(x/3,y/3),\phi_2'(x,y)=((x+1)/3,y/3),\\
&\phi_3'(x,y)=((x+2)/3,y/3),\phi_4'(x,y)=((x+t^*)/3,y/3),\\
&\phi_5'(x,y)=(x/3,(y+2)/3),\phi_6'(x,y)=((x+1)/3,(y+2)/3),\\
&\phi_7'(x,y)=((x+2)/3,(y+2)/3),\phi_8'(x,y)=((x+t^*)/3,(y+2)/3)\}.
\end{align*}
The attractor $X$ for $\Phi_{t^*}'$ is $[0,1]\times C$, where $C$ is the middle third Cantor set. Therefore $\dim_{H}(X)=1+\frac{\log 2}{\log 3}$. Since $\Phi_{t^*}$ does not contain an exact overlap, it follows that $\Phi_{t^*}'$ also does not contain an exact overlap.
Let $\gamma\approx 0.279$ be such that $$8\gamma^{1+\frac{\log 2}{\log 3}}=1.$$ So in particular we have
\begin{equation}
\label{nearly} \sum_{n=1}^{\infty}\sum_{\a\in \D^n}\gamma^{n(1+\frac{\log 2}{\log 3})}=\infty.
\end{equation} If it were the case that our IFS exhibited Khintchine like behaviour, then with \eqref{nearly} in mind, at the very least we would expect that there exists $z\in X$ such that the set $$W:=\Big\{(x,y)\in \mathbb{R}^2:|(x,y)-\phi_{\a}'(z)|\leq \gamma^{|\a|} \textrm{ for i.m. }\a\in \bigcup_{n=1}^{\infty} \{1,\ldots,8\}^n\Big\}$$ has Hausdorff dimension equal to $1+\frac{\log 2}{\log 3}$. We now show that in fact $\dim_{H}(W)<1+\frac{\log 2}{\log 3}.$
Let $$\Phi'':=\left\{\phi_1''(y)=\frac{y}{3},\phi_2''(y)=\frac{y+2}{3}\right\}.$$ Clearly $\Phi''$ has the middle third Cantor set as its attractor. We now make the simple observation that if $(x,y)\in \mathbb{R}^2$ satisfies $|(x,y)-\phi_{\a}'(z)|\leq \gamma^{n}$ for some $\a \in\{1,\ldots,8\}^n$, where $z=(z_1,z_2)$, then $|y-\phi_{\a}''(z_2)|\leq \gamma^{n}$ for some $\a\in\{1,2\}^n$. This means that if $|(x,y)-\phi_{\a}'(z)|\leq \gamma^{n}$ for some $\a\in\{1,\ldots,8\}^n,$ then $(x,y)$ must be contained in one of $2^n$ horizontal strips of height $2\gamma^{n}$ and width $1$. Each such strip can be covered by $C(1/\gamma)^{n}$ balls of diameter $\gamma^{n}$ for some $C>0$ independent of $n$. It follows that the set of
$(x,y)\in\mathbb{R}^2$ satisfying $|(x,y)-\phi_{\a}'(z)|\leq \gamma^{n}$ for some $\a\in\{1,\ldots,8\}^n,$ can be covered by $C(2/\gamma)^n$ balls of diameter $\gamma^{n}$. For each $n$ let $U_n$ be such a collection of balls. By construction, for any $N\in\N$ the set $\cup_{n\geq N}\{B\in U_n\}$ is a $\gamma^N$ cover of $W$.
Now let
\begin{equation}
\label{scondition}
s>\frac{\log \gamma - \log 2}{\log \gamma}\approx 1.542.
\end{equation} Then
\begin{align*}
\mathcal{H}^s\left(W\right)\leq \lim_{N\to\infty}\sum_{n=N}^{\infty}\sum_{B\in U_n}Diam(B)^s\leq \lim_{N\to\infty}\sum_{n=N}^{\infty}C(2/\gamma)^n\cdot \gamma^{sn}=0.
\end{align*}In the final equality we used \eqref{scondition} to guarantee $\sum_{n=1}^{\infty}C(2/\gamma)^n\cdot \gamma^{sn}<\infty.$
We have shown that $\mathcal{H}^s(W)=0$ for any $s>\frac{\log \gamma - \log 2}{\log \gamma}$. Therefore $\dim_{H}(W)\leq \frac{\log \gamma - \log 2}{\log \gamma}.$ Since $\frac{\log \gamma - \log 2}{\log \gamma}\approx 1.542$ and $1+\frac{\log 2}{\log 3}\approx 1.631,$ we have $\dim_{H}(W)<1+\frac{\log 2}{\log 3}$ as required.
\end{example}
Note that this example can easily be generalised to demonstrate a similar phenomenon when the underlying attractor has positive Lebesgue measure.
\section{Final discussion and open problems}
\label{Final discussion}
A number of problems and questions naturally arise from the results of this paper. The first and likely most difficult question is the following:
\begin{itemize}
\item Can one derive general, verifiable conditions for an IFS under which we can conclude it exhibits Khintchine like behaviour?
\end{itemize} This question seems to be very difficult and appears to be out of reach of our current methods. As such it seems that a more reasonable immediate goal would be to prove results for general parameterised families of iterated function systems. One can define a parameterised family of iterated function systems in the following general way. Suppose that $U$ is an open subset of $\mathbb{R}^k,$ and for each $u\in U$ we have an IFS given by $$\Phi_u:=\{\phi_{i,u}(x)=A_{i}(u)(x)+t_{i}(u)\}_{i=1}^l,$$ where for each $1\leq i\leq l$ we have $A_{i}:U\to GL(d,\mathbb{R})\cap\{A:\|A\|<1\}$ and $t_{i}:U\to \mathbb{R}^d.$ For each $u\in U$ we denote the attractor corresponding to this iterated function system by $X_u$. We would like to be able to describe what, if any, Khintchine like behaviour is observed for $\Phi_u$ for a typical $u\in U$. The methods of this paper do not extend to this general a setting and only work when some transversality condition is assumed. We expect that the conjecture stated below holds under some weak assumptions on the functions $A_i$ and $t_i.$
For a $\sigma$-invariant ergodic probability measure $\m,$ and a fixed $u\in U,$ we denote the corresponding Lyapunov exponents by $\lambda_1(\m,u),\ldots,\lambda_d(\m,u).$
\begin{conjecture}
\label{conjecture}
Let $\m$ be a slowly decaying $\sigma$-invariant ergodic probability measure and suppose that $\h(\m)>-(\lambda_1(\m,u)+\cdots+\lambda_{d}(\m,u))$ for Lebesgue almost every $u\in U$. Then the following statements hold:
\begin{itemize}
\item For Lebesgue almost every $u\in U,$ for any $z\in X_u$ and $h\in H^*,$ Lebesgue almost every $x\in X_u$ is contained in $U_{\Phi_u}(z,\m,h)$.
\item For Lebesgue almost every $u\in U,$ for any $z\in X_u,$ there exists $h:\mathbb{N}\to[0,\infty)$ such that $\sum_{n=1}^{\infty}h(n)=\infty,$ yet $U_{\Phi_u}(z,\m,h)$ has zero Lebesgue measure.
\end{itemize}
\end{conjecture}
Much of the analysis of this paper was concerned with the sequence \begin{equation}
\label{important sequence}
\left(\frac{T(\{\phi_{\a}(z)\}_{\a\in L_{\m,n}},\frac{s}{R_{\m,n}^{1/d}})}{R_{\m,n}}\right)_{n=1}^{\infty},
\end{equation} where $z\in X$ and $\m$ is some slowly decaying $\sigma$-invariant ergodic probability measure. In fact each of our main results was obtained by deriving some quantitative information about the values this sequence takes for typical values of $n$. The behaviour of this sequence provides another useful method for measuring how an IFS overlaps. For the parameterised families considered above, we conjecture that the statement below is true under some weak assumptions on the maps $A_i$ and $t_i.$
\begin{conjecture}
\label{conjecture2}
Let $\m$ be a slowly decaying $\sigma$-invariant ergodic probability measure and suppose that $\h(\m)>-(\lambda_1(\m,u)+\cdots+\lambda_{d}(\m,u))$ for Lebesgue almost every $u\in U$. Then for Lebesgue almost every $u\in U$, for any $z\in X_u,$ for $s$ sufficiently small we have
$$0=\liminf_{n\to\infty}\frac{T(\{\phi_{\a}(z)\}_{\a\in L_{\m,n}},\frac{s}{R_{\m,n}^{1/d}})}{R_{\m,n}}<\limsup_{n\to\infty}\frac{T(\{\phi_{\a}(z)\}_{\a\in L_{\m,n}},\frac{s}{R_{\m,n}^{1/d}})}{R_{\m,n}}=1.$$
\end{conjecture}
One of the interesting ideas to arise from this paper is the notion of an IFS satisfying the CS property with respect to a measure $\m$. Proceeding via analogy with Theorem \ref{precise result}, we expect that given a measure $\m$, it is the case that within a parameterised family of IFSs the CS property will not typically be satisfied with respect to $\m$. Indeed if Conjecture \ref{conjecture2} were true then this statement would follow from Proposition \ref{fail prop}. That being said, we still expect that for a parameterised family of IFSs, it will often be the case that there exists a large subset of the parameter space where the IFS does satisfy the CS property with respect to $\m$. We conjecture that the statement below is true under some weak assumptions on the maps $A_i$ and $t_i.$
\begin{conjecture}
\label{conjecture3}
Let $\m$ be a slowly decaying $\sigma$-invariant ergodic probability measure and suppose that $\h(\m)>-(\lambda_1(\m,u)+\cdots+\lambda_{d}(\m,u))$ for Lebesgue almost every $u\in U$. Then there exists $U'\subset U$ such that $\dim_{H}(U')=k,$ and for any $u\in U'$ the IFS $\Phi_u$ satisfies the CS property with respect to $\m$.
\end{conjecture}Theorem \ref{precise result} supports the validity of Conjectures \ref{conjecture}, \ref{conjecture2}, and \ref{conjecture3}.
Theorem \ref{Colette thm} states that satisfying the CS property with respect to $\m$ implies the pushforward $\mu$ is absolutely continuous. The CS property appears to only be satisfied in exceptional circumstances. As such it is natural to wonder whether there exists a more easily verifiable condition phrased in terms of limsup sets, which implies the absolute continuity of $\mu$. We pose the following question:
\begin{itemize}
\item Let $\mu$ be the pushforward of a measure $\m$. What is the smallest class of functions, such that if for some $z\in X$ the set $U_{\Phi}(z,\m,h)$ has positive Lebesgue measure for all $h$ belonging to this class, then $\mu$ will be absolutely continuous?
\end{itemize}
Much of the work presented in this paper is inspired by the classical theorem of Khintchine stated as Theorem \ref{Khintchine} in our introduction. Along with Khintchine's theorem, one of the first results encountered in a course on Diophantine approximation is the following result due to Dirichlet.
\begin{thm}[Dirichlet]
\label{Dirichlet}
For any $x\in \R$ and $Q\in\N$, there exist $1\leq q\leq Q$ and $p\in\mathbb{Z}$ such that
$$\left|x-\frac{p}{q}\right|< \frac{1}{qQ}.$$ Therefore, for any $x\in\mathbb{R}$ there exist infinitely many $(p,q)\in\mathbb{Z}\times\mathbb{N}$ satisfying $$\left|x-\frac{p}{q}\right|< \frac{1}{q^2}.$$
\end{thm}For us the interesting feature of Dirichlet's theorem lies in the fact that it is a statement for all $x\in \mathbb{R}$. In our setting it is obvious that for any IFS $\Phi$, for any $z\in X$ we have
\begin{equation}
\label{Dirichleta}X=\left\{x\in\mathbb{R}^d:|x-\phi_{\a}(z)|\leq Diam(X_{\a})\textrm{ for i.m. }\a\in \D^*\right\}.
\end{equation} The results of this paper demonstrate that for many overlapping IFSs, given $z\in X,$ Lebesgue almost every point in $X$ can be approximated by images of $z$ infinitely often at a scale decaying to zero at an exponentially faster rate than $Diam(X_{\a})$. See for example Theorem \ref{precise result}, where Lebesgue almost every point can be approximated at the scale $4^{-|\a|},$ yet $Diam(X_{\a})=2^{-|\a|}.$ With Theorem \ref{Dirichlet} in mind, it is natural to wonder whether there exist conditions under which \eqref{Dirichleta} can be improved upon.
\begin{itemize}
\item Can one construct an IFS for which there exists $s>1$ such that $$X=\left\{x\in\mathbb{R}^d:|x-\phi_{\a}(z)|\leq Diam(X_{\a})^s\textrm{ for i.m. }\a\in \D^*\right\}.$$ Alternatively one could ask whether there exists $s>1$ such that these sets differ by a finite or countable set.
\end{itemize}
We remark here that for the family of IFSs $\{\lambda x -1,\lambda x+1\},$ it can be shown that there exists $\lambda\in(1/2,0.668)$ and $z\in[\frac{-1}{1-\lambda},\frac{1}{1-\lambda}],$ such that Lebesgue almost every $x\in[\frac{-1}{1-\lambda},\frac{1}{1-\lambda}]$ can be approximated by images of $z$ at the scale $2^{-|\a|},$ yet there exists a set of positive Hausdorff dimension within $[\frac{-1}{1-\lambda},\frac{1}{1-\lambda}]$ that cannot be approximated by images of $z$ at a scale better than $\lambda^{|\a|}$. For more details on this example we refer the reader to the discussion at the end of \cite{Bak}.
We conclude now by emphasising one of the technical difficulties that is present within this paper that is not present within similar works on this topic. In many situations, if $\mu=\mu'\ast \mu'',$ and we have some method for measuring how evenly distributed a measure is within $\mathbb{R}^d$ (examples of methods of measurement include: absolute continuity, entropy, and $L^q$ dimension), then often $\mu$ will be at least as evenly distributed as $\mu'$ with respect to this method of measurement. One may in fact see a strict increase in how evenly distributed $\mu$ is with respect to this method of measurement (see for example \cite{Hochman,Shm}). A useful feature of the pushforward of Bernoulli measures is that they are often equipped with some sort of convolution structure. In many papers this convolution structure and the idea described above can be exploited to obtain results (see for example \cite{Hochman,SSS,Shm,ShmSol,Sol,Var}). Within this paper, the relevant method for measuring how evenly distributed a measure is, is to study the sequence given by \eqref{important sequence}. On a technical level, one of the main difficulties for us is that this method of measurement does not behave well under convolution. This is easy to see with an example. Let $\m$ be the $(1/2,1/2)$ Bernoulli measure and let our IFS be $\{\phi_1(x)=\frac{x}{2},\phi_2(x)=\frac{x+1}{2}\}.$ For this IFS the attractor is $[0,1].$ We denote the pushforward of $\m$ by $\mu'$. It is easy to see that for any $z\in [0,1]$ and $n\in\mathbb{N}$, we have
\begin{equation}
\label{optimall}\frac{T(\{\phi_{\a}(z)\}_{\a\in \{1,2\}^n},\frac{1}{2\cdot 2^n})}{2^n}=1.
\end{equation}
So $\mu'$ exhibits an optimal level of separation. Now let $t\in (0,1)\cap \mathbb{Q}$ and consider the IFS $\{\phi_1(x)=\frac{x}{2},\phi_2(x)=\frac{x+t}{2}\}.$ For this IFS the attractor is $[0,t].$ We denote the pushforward of $\m$ for this IFS by $\mu''$. It is easy to see that for $\mu''$ we also have the optimal level of separation described by \eqref{optimall}. Consider the measure $\mu=\mu'\ast \mu''.$ This measure is simply the pushforward of the $(1/4,1/4,1/4,1/4)$ Bernoulli measure with respect to the IFS $$\Big\{\phi_1(x)=\frac{x}{2},\phi_2(x)=\frac{x+1}{2},\phi_3(x)=\frac{x+t}{2},\phi_{4}(x)=\frac{x+1+t}{2}\Big\},$$ i.e. the IFS studied in Theorem \ref{precise result}. Examining the proof of Proposition \ref{overlap prop}, we see that for any $t\in(0,1)\cap\mathbb{Q},$ there exists $c>0$ such that for any $z\in[0,1+t]$ and $s>0$ we have \begin{equation}
\label{optimal fail}
T\left(\{\phi_{\a}(z)\}_{\a\in \{1,2,3,4\}^n},\frac{s}{4^n}\right)=\mathcal{O}((4-c)^n).
\end{equation} Equation \eqref{optimal fail} demonstrates that we no longer have the strong separation properties that we saw
earlier for our two measures $\mu'$ and $\mu''$. We have in fact seen that after convolving $\mu'$ and $\mu''$ there is a drop in how evenly distributed the resulting measure is within $\mathbb{R}$. One could view this failure to improve under convolution as a consequence of how sensitive our method of measurement is to exact overlaps.
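To see this sensitivity to exact overlaps concretely, the following numerical sketch (not part of the arguments of this paper) counts the distinct level-$n$ images of a point, which serves only as a crude stand-in for the separation count $T$; the choices $z=0$ and $t=1/3$ are purely illustrative.
\begin{verbatim}
# Count the distinct level-n images of z = 0 under the four maps
# x -> (x + d)/2 with d in {0, 1, t, 1 + t}.  For rational t exact
# overlaps occur (for t = 1/3 two distinct words of length 2 already
# give the same point), so the count grows like (4 - c)^n rather than
# 4^n.  Exact rational arithmetic avoids spurious float collisions.
from fractions import Fraction

def distinct_images(t, n):
    pts = {Fraction(0)}
    for _ in range(n):
        pts = {(p + d) / 2 for p in pts for d in (0, 1, t, 1 + t)}
    return len(pts)

for n in range(1, 9):
    print(n, 4 ** n, distinct_images(Fraction(1, 3), n))
\end{verbatim}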
\medskip\noindent {\bf Acknowledgments.} The author would like to thank the anonymous referee for their valuable comments. The author would like to thank Tom Kempton and Boris Solomyak for providing some useful feedback on an initial draft, and Ariel Rapaport for pointing out the reference \cite{Shm3}. This research was supported by the EPSRC grant EP/M001903/1.
\section{Introduction}
The Taurus cloud complex is one of the nearest star-forming regions
\citep[$d\sim140$~pc,][references therein]{gal18} and has a relatively
large stellar population \citep[$N\sim500$,][this work]{ken08},
making it well-suited for studies of star formation
that reach low stellar masses and have good statistics.
In addition, Taurus has an unusually low stellar density
compared to other nearby molecular clouds, so it can help constrain
how the star formation process depends on environment.
However, a complete census of Taurus is challenging given that
its members are distributed across a large area of sky
($\sim100$~deg$^2$).
Surveys for members of Taurus have steadily improved in sensitivity
and areal coverage over the last 30 years
\citep[][references therein]{kra17,luh17}.
We have recently sought to advance this work in \citet{esp17} and \citet{luh18}.
In the first survey, we searched for members down to planetary
masses ($<15$~$M_{\rm Jup}$) across a large fraction of Taurus using optical
and infrared (IR) imaging from the {\it Spitzer Space Telescope}
\citep{wer04}, the United Kingdom Infrared Telescope (UKIRT) Infrared Deep
Sky Survey \citep[UKIDSS,][]{law07}, Pan-STARRS1 \citep[PS1,][]{kai02,kai10},
and the {\it Wide-field Infrared Survey Explorer} \citep[{\it WISE},][]{wri10}.
Meanwhile, \citet{luh18} used high-precision astrometry and optical photometry
from the second data release (DR2) of the {\it Gaia} mission
\citep{per01,deb12,gaia16b,gaia18} to perform a thorough census
of stellar members with low-to-moderate extinctions across the entire cloud
complex.
We have continued the survey for low-mass brown dwarfs in Taurus
from \citet{esp18} by including new IR imaging from UKIRT and the
Canada-France-Hawaii Telescope (CFHT). We also have obtained spectra of
most of the candidate stellar members that were identified with {\it Gaia}
by \citet{luh18}.
In this paper, we update the catalog of known members of Taurus from
\citet{luh18} (Section~\ref{sec:mem}), identify candidate members
using photometry, proper motions, and parallaxes
(Sections~\ref{sec:ident1} and \ref{sec:ident2}),
and spectroscopically classify the candidates (Section \ref{sec:spec}).
We assess the new members for evidence of circumstellar disks and
estimate the disk fraction as a function of stellar mass among the
known members (Section~\ref{sec:disk}). We conclude by using our new
census of Taurus to constrain the region's initial mass function (IMF),
particularly at the lowest masses (Section~\ref{sec:imf}).
\section{Catalog of Known Members of Taurus}
\label{sec:mem}
For our census of Taurus, we begin with the 438 objects adopted as members
by \citet{luh18}, which were vetted for contaminants using the proper motions
and parallaxes from {\it Gaia} DR2.
In that catalog, the components of a given multiple system appeared as
a single entry if they were unresolved by {\it Gaia} and
the imaging data utilized by \citet{esp17}.
\citet{luh18} overlooked the fact that HK~Tau~B and V1195~Tau~B were
resolved from their primaries by {\it Gaia}. They are now given separate
entries in our census. We also adopt 2MASS J04282999+2358482 as a member,
which was discovered to be a young late-M object by \citet{giz99}.
It satisfies our photometric and astrometric criteria for membership
and is located near other known members.
We exclude from our membership list one of the stars from \citet{luh18},
2MASS 05023985+2459337, for reasons discussed in the Appendix.
When the 79 new members from our survey are included (Section~\ref{sec:spec}),
we arrive at a catalog of 519 known members of Taurus, which
are presented in Table~\ref{tab:mem}.
That tabulation contains adopted spectral types,
astrometry and photometry from {\it Gaia} DR2 and the corresponding kinematic
populations from \citet{luh18}, proper motions measured in Section~\ref{sec:pm},
near-IR photometry from various sources,
mid-IR photometry from {\it Spitzer} and {\it WISE} and the resulting
disk classifications \citep[][Section~\ref{sec:disk}]{luh10,esp14,esp17},
and extinction estimates. For each star that appears in {\it Gaia} DR2,
we also list the value of the re-normalized unit weight error
\citep[RUWE,][]{lin18}, which indicates the quality of the astrometric fit
(Section~\ref{sec:ident2}). \citet{luh18} compiled available radial velocity
measurements for known members of Taurus and calculated
$UVW$ velocities from the combination of those radial velocities
and the {\it Gaia} proper motions and parallaxes. Two of the new members
from our survey,
{\it Gaia} 146708734143437568 and 152104381299305856, also have radial velocity
measurements (16.6$\pm$0.8~km~s$^{-1}$, 17.5$\pm$1.6~km~s$^{-1}$), both of
which are from {\it Gaia} DR2.
A map of the spatial distribution of the members is shown in
Figure~\ref{fig:spatial1}. Kinematic and photometric data for the members
within nine fields that cover subsections of Taurus are plotted in
Figures~\ref{fig:spatial2}-\ref{fig:spatiallast}, which contain diagrams from
\citet{luh18} that have been updated to include the new members from this work.
The boundaries for those fields are indicated in Figure~\ref{fig:spatial1}.
\section{Identification of Candidate Members at Substellar Masses}
\label{sec:ident1}
\subsection{Photometry and Astrometry}
\subsubsection{Data Utilized by \citet{esp17}}
\label{sec:esp17}
In \citet{esp17}, we identified candidate substellar members of Taurus
based on their proper motions and positions in color-magnitude diagrams
(CMDs). We considered astrometry and photometry for objects within a field
encompassing all of the Taurus clouds
($\alpha=4^{\rm h}$--$5^{\rm h}10^{\rm m}$,
$\delta=15\arcdeg$--31$\arcdeg$)
in several optical and IR bands:
$JHK_s$ from the Point Source Catalog of the Two Micron All Sky Survey
\citep[2MASS,][]{cut03,skr06},
bands at 3.6, 4.5, 5.8, and 8.0~$\upmu$m\ ([3.6], [4.5], [5.8], [8.0])
from the Infrared Array Camera \citep[IRAC;][]{faz04} on
the {\it Spitzer Space Telescope} \citep{wer04},
$ZYJHK$ from data release 10 of UKIDSS,
$rizy_{P1}$ from the first data release of PS1 \citep{cha16,fle16},
$riz$ from data release 13 of the Sloan Digital Sky Survey
\citep[SDSS,][]{yor00,fin04,alb17},
$G$ from the first data release of {\it Gaia} \citep{gaia16a,gaia16b},
and bands at 3.4, 4.6, 12, and 22~$\upmu$m\ ($W1$, $W2$, $W3$, $W4$)
from the AllWISE Source Catalog.
The extinction for each object was estimated using $J-H$ and $J-K_s$
colors and it was used to deredden the photometry in the CMDs with
reddening relations from \citet{ind05}, \citet{sch16a}, and \citet{xue16}.
We measured relative proper motions between
2MASS and {\it Gaia} DR1, between 2MASS and PS1, and across several epochs
of IRAC imaging. Relative motions from UKIDSS were also employed.
2MASS, {\it WISE}, and PS1 provided data for the entirety of our survey
field while {\it Spitzer}, SDSS, and UKIDSS covered a subset of it
\citep{luh17,esp17}. The fields observed by IRAC are indicated in
Figure~\ref{fig:fields}.
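As a concrete illustration of the dereddening step described above, the sketch below estimates $A_J$ from the $J-H$ color excess. The conversion coefficient and the intrinsic color are illustrative placeholders rather than the exact relations adopted from \citet{ind05}, \citet{sch16a}, and \citet{xue16}.
\begin{verbatim}
# Hedged sketch of the extinction estimate: A_J is proportional to the
# color excess E(J-H) = (J - H) - (J - H)_intrinsic.  The coefficient
# (~2.7, corresponding to A_H/A_J ~ 0.63) is illustrative only.
def extinction_AJ(J, H, intrinsic_JH, coeff=2.7):
    excess = (J - H) - intrinsic_JH
    return max(coeff * excess, 0.0)  # clip unphysical negative values
\end{verbatim}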
\subsubsection{UHS}
New $J$-band photometry has become available in Taurus through the first data
release of the UKIRT Hemisphere Survey \citep[UHS,][]{dye18}.
Those data were taken with UKIRT's Wide Field Camera
(WFCAM) \citep{cas01}, which also was used for UKIDSS.
UHS provides $J$ photometry for a large portion of Taurus that was not observed
by UKIDSS in that band. We have adopted the $1\arcsec$ aperture photometry
from UHS, which has a similar completeness limit as the data from UKIDSS
($J\sim18.5$).
\subsubsection{UKIRT}
\label{sec:ukirtphot}
UKIDSS has imaged most of Taurus in $K$ and roughly half of it in $ZYJ$.
Only a small portion of Taurus was observed in $H$.
When combined with $J$ and an optical band,
$H$ is particularly useful for distinguishing late-type members
from reddened background stars.
To improve the coverage of Taurus in $H$ and the other bands,
we have obtained new images with WFCAM at UKIRT.
The observations were similar to those from UHS and UKIDSS, consisting of
$3\times15$~s exposures in $Y$ and $4\times10$~s exposures in $JHK$ at
each position. The data were collected between September and December of 2017.
In Figure~\ref{fig:fields}, we show the fields that now have $JHK$ photometry
from WFCAM through UKIDSS, UHS, and our observations.
The initial data reduction steps (e.g., flat fielding, registration,
coaddition) were performed by the WFCAM pipeline \citep{irw04,ham08}.
We derived the flux calibration for the resulting images using photometry
from previous surveys (e.g., PS1, 2MASS).
The typical values of FWHM for point sources in the images were
$1\farcs1$ for $Y$ and $0\farcs8$ for $JHK$.
We identified sources in the pipeline images and measured aperture photometry
for them using the routines {\tt starfind} and {\tt phot} in IRAF.
We estimated the completeness limits of the data based on the
magnitudes at which the logarithm of the number of stars as a function of
magnitude deviates from a linear slope and begins to decline,
which were 18.5, 17.5, and 17.2 for $J$, $H$, and $K$, respectively.
Similar limits are exhibited by the UKIDSS data in Taurus.
Our $Y$ data have a completeness limit near 18.5, which is $\sim0.5$~mag
brighter than the value for UKIDSS.
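The completeness estimate described above can be implemented along the following lines; this is a schematic sketch, and the bright-end fitting range, bin size, and tolerance are illustrative parameters rather than the values used in our analysis.
\begin{verbatim}
# Fit a line to log10 N(m) over a bright magnitude range, then report
# the first fainter bin whose counts fall below the extrapolation by
# more than a tolerance.
import numpy as np

def completeness_limit(mags, fit_range=(13.0, 16.5), binsize=0.25,
                       tol=0.1):
    mags = np.asarray(mags)
    edges = np.arange(np.floor(mags.min()), np.ceil(mags.max()), binsize)
    counts, edges = np.histogram(mags, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    fit = ((centers >= fit_range[0]) & (centers <= fit_range[1])
           & (counts > 0))
    slope, intercept = np.polyfit(centers[fit], np.log10(counts[fit]), 1)
    model = slope * centers + intercept
    logn = np.log10(np.where(counts > 0, counts, 0.1))
    below = (centers > fit_range[1]) & (logn < model - tol)
    return centers[below][0] if below.any() else None
\end{verbatim}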
\subsubsection{CFHT}
Near-IR images of portions of Taurus are publicly available from the archive of
CFHT. We have utilized the data taken in $J$, $H$, and a narrowband filter
at 1.45~$\upmu$m\ ($W$) with the Wide-field Infrared Camera (WIRCam)
through programs 15BC11, 16AC13, 16BC17 (E. Artigau), 15BT11 (W.-P. Chen),
16AF16, 16BF22 (M. Bonnefoy), and 16AT04 (P. Chiang).
For most of the observations, individual exposure times were 10, 10,
and 65~s for $J$, $H$, and $W$, respectively, and the number of exposures per
field was 4--8. Point sources in the images typically exhibited
FWHM$\sim0\farcs6$--$0\farcs8$.
We began our analysis with the images from the CFHT archive that had been
pipeline processed (e.g., flat fielding, dark subtraction, bad pixel masking).
We registered and combined the images for a given field and filter.
For the resulting mosaics, we derived the astrometric and flux calibrations
with data from 2MASS and UKIDSS. Since $W$ is a custom filter that is
absent from 2MASS and UKIDSS, we calibrated the relative photometry
among different $W$ images such that their loci of reddened stars in
$W-H$ versus $J-H$ were aligned with each other. Relative photometry
of this kind in $W$ is sufficient for our purposes of identifying late-type
objects based on colors.
We identified sources in the images and measured their aperture photometry
with {\tt starfind} and {\tt phot} in IRAF.
The completeness limits for these data are $J=18.2$, $W=18.0$, and $H=17.5$.
The fields covered by WIRCam are indicated in Figure~\ref{fig:fields}.
\subsection{Color-Magnitude Diagrams}
We have identified all matching sources among our new catalogs
and those considered by \citet{esp17}. When multiple measurements in
similar bands were available for a star, we selected the data to adopt
in the manner described by \citet{esp17}.
We omitted photometry with errors $>0.15$~mag in $Y$ and $>0.1$~mag
in the other bands.
In \citet{esp17}, we constructed diagrams of $K_s$ (or $K$) versus
$G-K_s$, $r-K_s$, $i-K_s$, $z_{\rm P1}-K_s$, $Z-K_s$, $y_{\rm P1}-K_s$,
$Y-K_s$, $H-K_s$, $K_s-[3.6]$, and $K_s-W1$. We also included a diagram
of $W1$ versus $W1-W2$. As explained in that study, we estimated the extinction
for individual stars from $J-H$ and $J-K_s$ and dereddened the photometry
in most of the CMDs. In each CMD, we marked a boundary that followed
the lower envelope of the sequence of known members. Objects
appearing above any of those boundaries and not appearing below any of them
were treated as candidate members. We have applied those CMDs to our updated
compilation of photometry. Four examples of the CMDs are presented in
Figure~\ref{fig:criteria}. In addition, we show a diagram
that makes use of the $W$-band data from WIRCam, $W-H$ versus $J-W$.
The $W$ filter falls within a water absorption band while $J$ and $H$
encompass continuum on either side of the band, so the combination of
$J-W$ and $W-H$ can be used to identify late-type objects via their strong
water absorption. In the diagram of those colors, we have plotted
a reddening vector near the lower edge of the population of known members
later than M6. Objects above that vector are treated as late-type candidates
as long as they are not rejected by any other diagrams. We note that
many of the known $<$M6 members of Taurus within the WIRCam images are
saturated, and hence are absent from the diagram of $W-H$ versus $J-W$.
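Schematically, the selection in that diagram reduces to a cut above a line in the $(J-W,\,W-H)$ plane, as in the sketch below; the anchor point and slope are placeholders rather than our measured reddening vector.
\begin{verbatim}
# A source is kept as a late-type candidate when it lies above the
# reddening vector drawn near the >M6 locus.  x0, y0, and slope are
# illustrative placeholders, not fitted values.
def above_reddening_vector(J, W, H, x0=0.6, y0=0.35, slope=0.6):
    return (W - H) > y0 + slope * ((J - W) - x0)
\end{verbatim}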
\subsection{Proper Motions}
\label{sec:pm}
As mentioned in Section~\ref{sec:esp17}, \citet{esp17} measured proper
motions in Taurus with astrometry from 2MASS, PS1, {\it Gaia} DR1, and IRAC.
In addition to those data, we have made use of new motions measured from
a combination of 2MASS, IRAC, UKIDSS, UHS, and our new WFCAM data, which have
epochs spanning 20 years. The latter four sets of data, which are deeper than
2MASS, span 13 years and reach the lowest masses in Taurus among the
available motions. To measure the proper motions, we began by
aligning each set of astrometry to the {\it Gaia} DR2 reference frame.
Motions were then computed with a linear fit to the available astrometry.
In Figure~\ref{fig:pm}, we show the resulting motions for individual
known members of Taurus and for other sources projected against Taurus,
which are represented by density contours. The measurements for the known
members have typical errors of $\sim2$--3~mas~yr$^{-1}$.
Motions with errors of $>10$~mas~yr$^{-1}$ are ignored.
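As an illustration of this procedure, each coordinate can be fit as a linear function of epoch once the astrometry is placed on the {\it Gaia} DR2 frame; the sketch below shows the assumed form of that fit rather than our actual pipeline.
\begin{verbatim}
# Weighted linear fit of one coordinate (in mas, aligned to the Gaia
# DR2 frame) versus epoch (in years); the slope is the proper motion.
import numpy as np

def fit_motion(epochs, pos, pos_err):
    w = 1.0 / np.asarray(pos_err)          # polyfit expects 1/sigma
    coef, cov = np.polyfit(epochs, pos, 1, w=w, cov='unscaled')
    return coef[0], np.sqrt(cov[0, 0])     # mu (mas/yr) and its error
\end{verbatim}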
As done with the other catalogs of proper motions
in \citet{esp17}, we have identified candidate members based on motions
that have 1$\sigma$ errors overlapping with a radius of 10~mas yr$^{-1}$
from the median motion of the known members (Figure~\ref{fig:pm}).
Six known members of Taurus do not satisfy this threshold, three of which have
{\it Gaia} DR2 motions that are consistent with membership \citep{luh18}.
The remaining three sources are 2MASS J04355209+2255039, J04354526+2737130,
and J04574903+3015195. The first two objects also have discrepant motions
in {\it Gaia} DR2, but are retained as members for reasons discussed by
\citet{luh18}. The third star is retained as a member since it is
near known members and is only slightly beyond our proper motion threshold.
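The selection rule above can be written compactly as follows (a sketch, not pipeline code), where {\tt mu\_med} denotes the median motion of the known members.
\begin{verbatim}
# Keep a candidate when its 1-sigma error circle overlaps a 10 mas/yr
# radius around the median motion of the known members.
import numpy as np

def motion_consistent(mu, sigma, mu_med, radius=10.0):
    offset = np.hypot(mu[0] - mu_med[0], mu[1] - mu_med[1])
    return offset - sigma <= radius
\end{verbatim}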
\section{Identification of Candidate Members at Stellar Masses}
\label{sec:ident2}
{\it Gaia} DR2 provides high-precision astrometry at an
unprecedented depth for a wide-field survey.
For stars at $G\lesssim20$,
the {\it Gaia} parallaxes and proper motions have typical errors of
$\lesssim0.7$~mas and $\lesssim1.2$~mas~yr$^{-1}$, respectively \citep{gaia18},
which correspond to errors of $\lesssim10$\% and $\lesssim5$\%, respectively,
for unreddened members of Taurus at masses of $\gtrsim0.05$~$M_\odot$.
As a result, {\it Gaia} DR2 enables the precise kinematic identification
of members of Taurus at stellar masses.
\citet{luh18} selected stars from {\it Gaia} DR2 that have
proper motions and parallaxes that are similar to those of
the known members of Taurus. In that analysis, the
parameters {\tt astrometric\_gof\_al} and {\tt astrometric\_excess\_noise}
from {\it Gaia} DR2 were used to identify stars with poor astrometric fits,
and hence potentially unreliable astrometry.
More recently, \citet{lin18} has presented a
new parameter, RUWE, that serves as a better indicator of the goodness of fit.
He found that the distribution of RUWE in {\it Gaia} DR2 exhibited a break
near 1.4 between the distribution centered at unity expected
for well-behaved fits and a long tail to higher values.
Thus, RUWE$\lesssim$1.4 could be adopted as a criterion for reliable
astrometry. To illustrate the application of this threshold to Taurus,
we plot in Figure~\ref{fig:ruwe} the distribution of log(RUWE) for known
members (including the new ones from this work) that have parallaxes and
proper motions from {\it Gaia} DR2. We also indicate the subset
of members that are noted in \citet{luh18} and the Appendix
as having discrepant parallaxes (Table~\ref{tab:mem}).
The latter distribution does begin just
above the threshold of 1.4 from \citet{lin18}, supporting its applicability
to Taurus. Above that threshold, the fraction of members with discrepant
astrometry increases with higher values of RUWE.
Most members with RUWE$>1.4$ do not have discrepant astrometry, indicating
that many stars with high RUWE have fits that are sufficiently good for useful
astrometry.
The selection criteria from \citet{luh18} produced a sample of 62 candidate
members of Taurus. Most of the candidates should have spectral types of
M2--M6 based on their colors. In the next section, we present spectroscopic
classifications for 61 of those stars, 54 of which are adopted as members.
The one remaining candidate that lacks a spectrum is {\it Gaia}
157816859599833472. It is located $6\arcsec$ from a much brighter star,
HD~30111. The two stars have similar proper motions, but the parallax of
HD~30111 ($3.0\pm0.2$~mas) is much smaller than those of Taurus members
(6--8~mas), so it was not selected as a candidate member.
However, the value of RUWE for HD~30111 is high enough (2.4)
to indicate a poor astrometric fit and potentially unreliable astrometry.
Therefore, based on its proximity
to the candidate {\it Gaia} 157816859599833472 and its similar motion,
we treat HD~30111 as a candidate member.
\citet{luh18} noted three stars that did not satisfy the selection
criteria for candidates but were located within a few arcseconds of
candidates, and hence could be companions to them.
They consist of {\it Gaia} 164475467659453056, {\it Gaia} 3314526695839460352,
and {\it Gaia} 3409647203400743552.
We present spectral classifications for those stars in the next section.
We have searched for additional companions
that were detected by {\it Gaia} but were not identified as candidate
members by \citet{luh18}.
We began by retrieving all sources from {\it Gaia} DR2 that are within
$5\arcsec$ from known Taurus members. We omitted companions or candidate
companions that were already known and stars that appear below the sequence
of Taurus members in CMDs of {\it Gaia} photometry. The remaining sample
consists of {\it Gaia} 152416436441091584, {\it Gaia} 3415706130945884416,
{\it Gaia} 148400225409163776, and {\it Gaia} 154586322638884992.
All of these stars have photometry in only one {\it Gaia} band. The first
two objects lack measurements of proper motion and parallax.
Those parameters have large uncertainties for 148400225409163776,
but are similar to the measurements for its primary.
The proper motions and parallaxes of 154586322638884992 and its primary
differ significantly, but the latter may have unreliable astrometry
based on the large value of its RUWE (7.2).
The six candidates discussed in this section that lack spectroscopy
are listed in Table~\ref{tab:cand}.
\section{Spectroscopy of Candidate Members}
\label{sec:spec}
\subsection{Observations}
\label{sec:obs}
We have obtained spectra of 140 candidate members of Taurus identified in
Sections~\ref{sec:ident1} and \ref{sec:ident2}\footnote{Three of these
candidates were found independently by \citet{zha18} and are treated
as previously known members in this work.}, three known companions
that lack spectral classifications, and the primary for one of the latter stars.
We also searched for publicly available spectra of our candidates in
the data archives of observatories and spectroscopic surveys, finding
spectra with sufficient signal-to-noise ratios (SNRs) for 38 objects.
Those archival observations consist of 31 optical spectra from the third data
release of the Large Sky Area Multi-Object Fiber Spectroscopic Telescope survey
\citep[LAMOST;][]{cui12,zha12} and seven IR spectra collected through programs
GN-2017A-Q-81, GN-2017B-Q-35 (L. Albert), and GN-2017B-Q-19 (E. Magnier) with
the Gemini Near-Infrared Spectrograph \citep[GNIRS;][]{eli06}.
We present spectra for a total of 168 objects, some of which were
observed with multiple instruments.
This spectroscopic sample includes 61 of the 62 candidates identified
by \citet{luh18}, three additional stars from that study that did not satisfy
the criteria for candidacy but were located very close to candidates
(see Section~\ref{sec:ident2}), three known companions in Taurus that
lack measured spectral types (JH223~B, XEST~20-071~B, V892~Tau~NE),
and XEST~20-071~A, which was observed at the same time as its companion.
The remaining 100 targets in our sample were selected from the candidates
identified in Section~\ref{sec:ident1}. The highest priority was
given to candidates within the area of full $JHK$ coverage from WFCAM
(Figure~\ref{fig:fields}).
We performed our spectroscopy with the Red Channel Spectrograph \citep{sch89}
and the MMT and Magellan Infrared Spectrograph \citep[MMIRS;][]{mcl12} at the
MMT, GNIRS and the Gemini Multi-Object Spectrograph \citep[GMOS;][]{hoo04} at
Gemini North, SpeX \citep{ray03} at the NASA Infrared Telescope Facility (IRTF),
and the Low-Resolution Spectrograph~2 \citep[LRS2;][]{cho14,cho16}
at the Hobby-Eberly Telescope (HET).
The instrument configurations are summarized in Table~\ref{tab:log}.
The date and instrument for each object are listed in Table \ref{tab:spec}.
The archival data from LAMOST and GNIRS have been included in
Tables~\ref{tab:log} and \ref{tab:spec}.
We reduced the data from SpeX with the Spextool package \citep{cus04} and
corrected them for telluric absorption using spectra of A0V stars
\citep{vac03}. The GNIRS and MMIRS data were reduced and corrected for
telluric absorption in a similar manner using routines within IRAF.
The optical spectra from the Red Channel and GMOS were also reduced with IRAF.
The LRS2 data were processed with the LRS2 Quick-Look
Pipeline (B. L. Indahl, in preparation), which is briefly described by
\citet{dav18}. Fully reduced spectra are provided by the LAMOST survey.
We present examples of the reduced optical and IR spectra
in Figures \ref{fig:op} and \ref{fig:ir}, respectively.
All of the reduced spectra are provided in electronic
files that accompany those figures with the exception of the LAMOST
data, which are available from http://www.lamost.org.
\subsection{Spectral Classification}
\label{sec:specclass}
We have used the spectra from the previous section to estimate spectral types
and to identify evidence of youth that would support membership in Taurus.
Given their colors and magnitudes, the candidates in our spectroscopic sample
should have M/L spectral types if they are members. For these types, we
have utilized diagnostics of youth that include Li {\tt I} absorption at
6707~\AA\ and gravity-sensitive features like the Na {\tt I} doublet near
8190~\AA\ and the shape of the $H$-band continuum \citep{mar96,luh97,luc01}.
Our measurements of the equivalent widths of Li {\tt I} are listed in
Table~\ref{tab:spec} and are plotted versus spectral type
in Figure \ref{fig:li}. For the range of spectral types of the objects with
useful Li constraints ($\geq$K7), most known members of Taurus have
equivalent widths of $\gtrsim$0.4~\AA\ \citep{bas91,mag92,mar94}.
For young objects at $<$M5 and field dwarfs, we classified the optical spectra
through comparison to dwarf standards \citep{kir91,kir97,hen94}.
The optical data for young sources at $\geq$M5 were classified with
the average spectra of dwarf and giant standards \citep{luh97,luh99}.
For the near-IR spectra, we applied young standards \citep{luh17} and dwarf
standards \citep{cus05,ray09} as appropriate.
The resulting classifications are listed in Table~\ref{tab:spec}.
For the young objects that were observed with IR spectroscopy, we have
used the slopes of those data relative to the best-fitting standards to derive
estimates of extinction. Spectra of young L dwarfs with
low-to-moderate SNRs can be matched by standards across
a wide range of types when extinction is a free parameter \citep{luh17},
as illustrated in Figure~\ref{fig:ir}, where three of the coolest new members
are compared to standard spectra bracketing their classifications.
Moderately young stars ($\sim$10--100~Myr) that are unrelated to the
Taurus clouds \citep[$\sim2$~Myr,][]{pal00}
are scattered across the large field that we have selected
for our survey \citep[][references therein]{luh18}.
As a result, a spectroscopic signature of youth may not be sufficient
evidence of membership in Taurus, particularly if it provides
only a rough constraint on age (e.g., $<100$~Myr).
In addition, some of the young contaminants have proper motions that are
close enough to those of Taurus members that the former can appear to have
motions consistent with membership when the errors are $\gtrsim3$~mas~yr$^{-1}$,
which applies to most non-{\it Gaia} data \citep{luh18}.
Given these considerations, we have taken the following approach to assigning
membership in our spectroscopic sample.
We treat an object as a member if its proper motion and parallax from
{\it Gaia} DR2 support membership (i.e., the candidates from
\citet{luh18}) and its spectrum shows evidence of youth, which is taken
to be $W_\lambda\gtrsim0.4$~\AA\ when a Li measurement is available.
If Li is detected at a weaker level ($\gtrsim$0.15~\AA) and the {\it Gaia}
data agree closely with those of known members, we also adopt the star
as a member (Appendix).
Candidate companions to known members are adopted as members
if they have spectroscopic signatures of youth. Discrepant {\it Gaia}
astrometry is ignored when the astrometric fit is poor (RUWE$\gtrsim1.4$),
which applies to some of the candidate companions.
If {\it Gaia} does not offer reliable measurements of parallax and proper
motion, a candidate is adopted as a member if its available proper motion
data are consistent with membership, its spectrum shows evidence of youth,
it appears within the sequence of known members in CMDs and in a diagram of
$M_K$ versus spectral type, and it is within $\sim1\arcdeg$ of known members.
Based on the above criteria, 86 of the 168 objects in our spectroscopic sample
are members of Taurus, as indicated in Table~\ref{tab:spec}.
Three members were previously known companions
that lacked spectral classifications, one member is the primary for
one of those companions, and three members were independently found
in a recent survey \citep{zha18}. The remaining 79 members are newly
confirmed in this work. As discussed in Section~\ref{sec:mem}, the
census of Taurus now contains 519 known members. Our survey has
doubled the number of known members at $\geq$M9 and has uncovered the
faintest known members in $M_K$, as illustrated in Figure~\ref{fig:mksp},
where we show extinction-corrected $M_K$ versus spectral type for previously
known members
and our new members. UGCS J041757.97+283233.9 is now the faintest known member.
It has a very red spectrum, which is consistent with spectral types ranging
from M9 ($A_V=7.6$) to L7 ($A_V=0$), as illustrated in Figure~\ref{fig:ir}.
Assuming a $K$-band bolometric correction for young L dwarfs \citep{fil15},
the median parallax of 7.8~mas for the nearest group of members, and $A_V=3.5$,
we estimate log~$L_{\rm bol}=-3.76$ for UGCS J041757.97+283233.9, which implies
a mass of 0.003--0.01~$M_\odot$ ($\sim$3--10~$M_{\rm Jup}$) for ages of
1--10~Myr according to evolutionary models \citep{bur97,cha00}.
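For reference, the luminosity estimate above follows the standard chain sketched below; the apparent $K$ magnitude, the bolometric correction (roughly 3.2--3.4 in $K$ for young L dwarfs), and the conversion from $A_V$ to $A_K$ ($A_K\sim0.11\,A_V$) are treated here as assumed inputs rather than our adopted values.
\begin{verbatim}
# log L/Lsun from apparent K, a parallax, the K-band extinction, and a
# bolometric correction BC_K; all inputs are illustrative assumptions.
import numpy as np

def log_lbol(K, plx_mas, A_K, BC_K):
    dist_pc = 1000.0 / plx_mas
    M_K = K - A_K - 5.0 * np.log10(dist_pc / 10.0)
    M_bol = M_K + BC_K
    return -(M_bol - 4.74) / 2.5           # M_bol(Sun) = 4.74
\end{verbatim}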
Most (54/61) of the candidate members identified with {\it Gaia} data
by \citet{luh18} have been adopted as Taurus members.
Among the other candidates that were not part of that sample,
39 are field stars that were observed prior to our WFCAM imaging and
the release of {\it Gaia} DR2 and that would be rejected by our current
criteria that incorporate those data.
\citet{luh18} found that members of Taurus exhibit four distinct populations
in terms of parallax and proper motion, which were given names of
red, blue, green, and cyan.
We have assigned the new members to those populations when the necessary
data are available from {\it Gaia} DR2, as indicated in Table~\ref{tab:mem}
and Figures~\ref{fig:spatial2}--\ref{fig:spatiallast}.
Comments on the spectral types, membership, and kinematics of individual
objects are provided in the Appendix.
\section{Circumstellar Disks}
\label{sec:disk}
\subsection{Disk Detection and Classification}
\label{sec:diskclass}
We have compiled the available mid-IR photometry of the new members of Taurus
from this work to check for evidence of circumstellar disks via the
presence of IR emission in excess above the expected photospheric emission.
We also have performed this analysis on members adopted by \citet{luh18}
that were not examined for disks by \citet{esp17} or earlier studies.
We make use of photometry in $W1$--$W4$ from the AllWISE Source Catalog,
[3.6]--[8.0] from IRAC on {\it Spitzer}, and
the 24 $\upmu$m\ band of the Multiband Imaging Photometer for
{\it Spitzer} \citep[MIPS;][]{rie04}, which is denoted as [24].
The {\it Spitzer} data were measured in the manner described by \citet{luh10}.
We present the resulting {\it WISE} and {\it Spitzer} data
in Table~\ref{tab:mem}. In addition, we have included photometry from those
facilities for previously known members \citep{luh10,esp14,esp17}.
To determine if excess mid-IR emission is present in a given object
and to classify the evolutionary stage of a detected disk, we have followed
the methods and terminology described in \citet{luh10} and \citet{esp14}
\citep[see also][]{esp18}. In summary, we calculated extinction-corrected
colors of the mid-IR photometry relative to $K_s$,
measured color excesses relative to photospheric colors,
and used the sizes of those excesses (when present) to
estimate the evolutionary stages of the disks.
The colors utilized for that analysis are plotted as a function of spectral
type in Figure~\ref{fig:excess} for both previously known members and the
newly classified members. The disks are assigned to the following categories:
optically thick {\it full} (primordial) disks with no large gaps or holes
that affect the mid-IR spectral energy distribution (SED);
optically thick {\it transitional} disks with large inner holes;
optically thin {\it evolved} disks with no large gaps;
optically thin {\it evolved transitional} disks with large inner holes; and
optically thin {\it debris} disks that are composed of second-generation
dust from planetesimal collisions \citep{ken05,rie05,her07,luh10,esp12}.
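In schematic form, the excess test compares an extinction-corrected color with the photospheric color expected for the spectral type, as in the sketch below; the threshold and the neglect of extinction in the long-wavelength band are simplifying assumptions, not our adopted values.
\begin{verbatim}
# Flag an excess when the dereddened Ks - [24] color exceeds the
# expected photospheric color by more than a threshold (placeholder).
def has_excess(Ks, m24, A_Ks, photospheric_color, threshold=0.25):
    observed = (Ks - A_Ks) - m24    # extinction at 24 um neglected
    return (observed - photospheric_color) > threshold
\end{verbatim}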
The Taurus members that have been found since \citet{esp17} are plotted
with red and blue symbols in Figure~\ref{fig:excess} according to the
presence or absence of excess emission, respectively.
Ten of those stars exhibit mid-IR excess emission.
Four of them have excesses only in [24] or $W4$, although one of them,
HD~30378 (B9.5), is only slightly below our adopted excess threshold in $W3$.
The excesses are small ($\sim0.5$~mag) for the other three stars with excesses
in only [24]/$W4$, which consist of 2MASS J04355694+2351472 (M5.75),
2MASS J04390571+2338112 (M6), and 2MASS J04584681+2954407 (M4).
Two M9.5 members, 2MASS J04213847+2754146 and UGCS J042438.53+264118.5,
have excesses in $[4.5]$, $W2$, and [8.0] and lack reliable detections at
longer wavelengths.
2MASS J04451654+3141202 (M5.5) has excesses in $W2$ and $W3$.
It is blended with another source in the ${\it WISE}$ images, but it
clearly dominates in $W3$.
2MASS J04282999+2358482 (M9.25) and UGCS J043907.76+264236.0 (M9.5--L4)
have marginal excesses at [8.0] and UGCS J041757.97+283233.9 (M9-L7)
has a marginal excess at [4.5].
The latter is the faintest known member in extinction-corrected $K$.
It is difficult to reliably identify the presence of excess emission for the
coolest members because of the uncertainties in the photospheric colors and
the spectral classifications of young L-type objects \citep{esp17,espetal17}.
The evolutionary stages assigned to the newly identified disks are presented in
Table~\ref{tab:mem}, where we also include the classifications of previously
known members \citep{luh10,esp14,esp17}.
\subsection{Disk Fraction}
\citet{luh10} measured the fraction of Taurus members that have disks
as a function of spectral type (and hence mass) for known members
observed by {\it Spitzer}.
Since that study, mid-IR photometry has become available for additional
members from both {\it Spitzer} and {\it WISE} and many new members have
been identified. Therefore, it would be worthwhile to perform a new
calculation of the disk fraction in Taurus using our new catalog of members.
The evolutionary stages of young stellar objects consist
of classes 0 and I (protostar+disk+infalling envelope), class II (star+disk),
and class III \citep[star without disk;][]{lad84,lad87,and93,gre94}.
Taurus members that have been previously designated as class 0 or class I
are marked as such in Table~\ref{tab:mem}. We consider all other members
with disks to be class II objects. Some disks have classifications of
``debris/evolved transitional" because we cannot distinguish between these
two classes with the available data. Stars with debris disks are normally
counted as class III objects, but since very few debris disks are expected in
a region as young as Taurus, we treat all of the ``debris/evolved transitional"
disks as class II.
As done in \citet{luh10}, we define the disk fraction as N(II)/N(II+III)
and we measure it as a function of spectral type using bins of
$<$K6, K6--M3.5, M3.75--M5.75, M6--M8, and M8--M9.75.
We exclude stars that lack measured spectral types, all of which
are protostars or close companions. We also omit objects with spectral types
of $\geq$L0 because of the difficulty in reliably identifying the presence of
excess emission from disks (Section~\ref{sec:diskclass}).
The resulting disk fraction is tabulated and plotted
in Table~\ref{tab:disks} and Figure~\ref{fig:disks}, respectively.
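The calculation itself is straightforward; a minimal sketch for one spectral-type bin is given below, using a simple binomial error estimate.
\begin{verbatim}
# Disk fraction N(II)/N(II+III) for one bin; classes is a list of
# 'II'/'III' strings for the members in that bin.
def disk_fraction(classes):
    n2 = sum(1 for c in classes if c == 'II')
    n3 = sum(1 for c in classes if c == 'III')
    f = n2 / (n2 + n3)
    err = (f * (1.0 - f) / (n2 + n3)) ** 0.5
    return f, err
\end{verbatim}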
The current census of class II members of Taurus should have a high level of
completeness for spectral types earlier than $\sim$M8 \citep{esp14}.
However, the census may be incomplete for class III at high extinctions,
which would lead to an overestimate of the disk fraction when all known
members are considered. To investigate this possibility, we have
computed disk fractions for samples of members that should be complete
for both classes II and III.
Now that most of the candidate members identified with {\it Gaia}
by \citet{luh18} have been observed spectroscopically,
the census should be complete for both class II and class III
members earlier than M6--M7 at low extinctions \citep[$A_J<1$,][]{luh18}.
Meanwhile, the census within the WFCAM field in Figure~\ref{fig:fields} should
be complete for $\lesssim$L0 at $A_J<1.5$ (Section \ref{sec:imfcomp}).
We find that the disk fractions for $A_J<1$ across the entirety of Taurus
and for $A_J<1.5$ within the WFCAM fields are indistinguishable from
the disk fraction in Figure~\ref{fig:disks} for all known members.
The disk fraction in Figure~\ref{fig:disks} is near $\sim0.7$ and 0.4 for
spectral types of $\leq$M3.5 and $>$M3.5, respectively.
A similar trend with spectral type was present in the data from \citet{luh10},
although the disk fraction was slightly higher than
our new measurement ($\sim0.75$ and 0.45).
The disk fraction in Taurus is similar to that in Chamaeleon~I,
which is $\sim$0.7 and 0.45 for $\leq$M3.5 and $>$M3.5 \citep{luh10}.
For all spectral types combined, Taurus has a disk fraction of $\sim$0.5,
which is roughly similar to the disk fractions of IC~348 and NGC~1333
($\sim$0.4 and 0.6). However, those two clusters
do not show a variation with spectral type \citep{luh16}.
\section{Initial Mass Function}
\label{sec:imf}
\subsection{Completeness}
\label{sec:imfcomp}
To derive constraints on the IMF in Taurus from our new census of members,
we begin by evaluating the completeness of that census.
In Section~\ref{sec:ident1}, we focused on the identification of candidate
substellar members within the WFCAM fields in Figure~\ref{fig:fields}.
To evaluate the completeness of our census within that field,
we employ a CMD constructed from $H$ and $K_s$, which offer the greatest
sensitivity to low-mass members of Taurus among the available bands.
In Figure~\ref{fig:remain}, we plot $K_s$ versus $H-K_s$ for the
known members in the WFCAM fields and all other sources in those fields
that 1) are not rejected by the photometric and proper motion criteria
from Section~\ref{sec:ident1} or the astrometric criteria from \citet{luh18}
and 2) are not known nonmembers based on spectroscopy or other data.
There are very few remaining sources with undetermined membership status
within a wide range of magnitudes and reddenings.
For instance, the current census within the WFCAM fields
should be complete for an extinction-corrected magnitude of $K_s<15.7$
($\lesssim$L0) for $A_J<1.5$.
In Section~\ref{sec:ident2}, we adopted the candidate stellar members
that were identified by \citet{luh18} using data for the entirety of Taurus
from {\it Gaia} DR2. That study demonstrated that the census of Taurus
should be complete for spectral types earlier than M6--M7 at $A_J<1$ after
including the {\it Gaia} candidates that are spectroscopically confirmed
to be members.
\subsection{Distributions of Spectral Type, $M_K$, and Mass}
Based on the analysis in the previous section, we
have defined two extinction-limited samples of known Taurus members that
should have well-defined completeness limits, making them suitable
for characterizing the IMF: members in the WFCAM fields
with $A_J<1.5$ and members at any location with $A_J<1$.
Stars that lack extinction estimates (protostars, close companions, and sources with edge-on disks) are excluded from these samples.
As done in our recent studies
of IC~348, NGC~1333, and Chamaeleon~I \citep{luh16,espetal17},
we use distributions of spectral types and extinction-corrected $M_K$
as observational proxies for the IMF.
The distributions of these parameters for our two extinction-limited
samples are presented in Figure~\ref{fig:imf}.
For objects that lack parallax measurements, we derived $M_K$ using the
median parallax of the nearest population of members.
In addition, we have estimated the IMF for each sample using
distributions of spectral types in which the bins are selected to
approximate logarithmic intervals of mass according to evolutionary models
\citep{bar98,bar15} and the temperature scale for young stars \citep{luh03},
as done for the disk fraction in Figure~\ref{fig:disks}.
The resulting IMFs are shown in Figure~\ref{fig:imf}.
The two samples of Taurus members in Figure~\ref{fig:imf} have similar
distributions, which is not surprising given
the large overlap between the fields in question (i.e., the WFCAM fields
encompass a large majority of known members).
\citet{luh18} found that a $A_J<1$ sample of members
with the {\it Gaia} candidates included exhibited a prominent maximum at M5
($\sim0.15$~$M_\odot$),
and thus resembled denser clusters like IC~348, NGC~1333, Chamaeleon~I,
and the Orion Nebula Cluster \citep{dar12,hil13,luh16,espetal17}.
Since we have confirmed most of the {\it Gaia}
candidates as members, our $A_J<1$ sample in Figure~\ref{fig:imf} has
a similar distribution of spectral types as in \citet{luh18}.
In the $A_J<1.5$ sample for the WFCAM fields, the distributions of
spectral type and $M_K$ decrease rapidly
below the peak and remain roughly flat
at substellar masses down to the completeness limit, which
corresponds to $\sim5$--13~$M_{\rm Jup}$ for
ages of 1--10~Myr according to evolutionary models \citep{bur97,cha00}.
Thus, our completeness limit in that sample does not appear to be near
a low-mass cutoff. The faintest known member, UGCS J041757.97+283233.9, has
an estimated mass of $\sim$3--10~$M_{\rm Jup}$ (Section \ref{sec:specclass}),
which represents an upper limit on the minimum mass in Taurus.
These results are consistent with recent surveys for brown dwarfs
in other star-forming regions \citep{luh16,esp17,zap17,lod18},
young associations \citep{liu13,kel15,sch16b,bes17},
and the solar neighborhood \citep{kir19}, which have found that the
IMF extends down to $\lesssim5$~$M_{\rm Jup}$.
Using the minimum variance unbiased estimator for a power-law distribution,
we calculate a slope of $\alpha=1.0\pm0.1$\footnote{$\alpha$ is defined
such that $dN/dM\propto M^{-\alpha}$. $\alpha=1$ corresponds to a slope
of zero when the mass function is plotted in logarithmic units, as done
in Figure~\ref{fig:imf}.} between
the hydrogen burning limit and the completeness limit (0.01--0.08~$M_\odot$)
in the IMF for the $A_J<1.5$ sample, which is steeper than the slope of
$\alpha\sim-0.3$ in the lognormal mass function from \citet{cha05}.
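For concreteness, that slope estimate can be reproduced with the standard estimator for a power-law sample, as sketched below; this version ignores the upper truncation at 0.08~$M_\odot$ and assumes masses have already been assigned from spectral types via evolutionary models, both simplifications relative to our calculation.
\begin{verbatim}
# Minimum variance unbiased estimator for alpha in dN/dM ~ M^-alpha,
# applied to masses (in Msun) above m_min; the quoted error is the
# usual (alpha - 1)/sqrt(n) approximation.
import numpy as np

def powerlaw_alpha(masses, m_min=0.01):
    m = np.asarray(masses, dtype=float)
    n = m.size
    alpha = 1.0 + (n - 1) / np.sum(np.log(m / m_min))
    return alpha, (alpha - 1.0) / np.sqrt(n)
\end{verbatim}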
\section{Conclusion}
In \citet{esp17}, we searched for substellar members of Taurus using
photometry and proper motions from 2MASS, UKIDSS, PS1, SDSS,
{\it Spitzer}, {\it WISE}, and {\it Gaia} DR1. We have identified additional
candidate members by incorporating new data from UKIRT and CFHT.
In \citet{luh18}, candidate members at stellar masses were identified
with high-precision proper motions and parallaxes from {\it Gaia} DR2.
We have measured spectral types and assessed membership for
candidates from these two samples using optical and IR spectra.
Through this analysis, we have identified 79 new members of Taurus, which
brings the total number of known members in our census to 519.
Our survey has doubled the number of known members at $\geq$M9 and
has uncovered the faintest known members in $M_K$, which should have
masses extending down to $\sim3$--10~$M_{\rm Jup}$ for ages of 1--10~Myr
\citep{bur97,cha00}.
According to data from {\it Gaia} DR2, our census of Taurus should be nearly
complete for spectral types earlier than M6--M7 at $A_J<1$ across
the entire cloud complex \citep{luh18}.
Meanwhile, we have demonstrated that the census should be
complete for extinction-corrected magnitudes of $K<15.7$ at $A_J<1.5$
within a large field that encompasses $\sim72$\% of the known members.
That magnitude limit corresponds to $\sim5$--13~$M_{\rm Jup}$ for ages of
1--10~Myr. For the known members within that field and extinction limit,
we have used distributions of spectral types and $M_K$
as observational proxies for the IMF. Those distributions remain roughly
constant at substellar masses down to the completeness limit, and thus
show no sign of a decline towards a low-mass cutoff.
We have used mid-IR photometry from {\it Spitzer} and {\it WISE} to search for
evidence of circumstellar disks among the new members from our survey,
as well as the few members adopted by \citet{luh18} that were not examined
for disks by \citet{esp17} or earlier studies.
By combining those results with disk classifications for
all other members in our census \citep{luh10,esp14,esp17}, we have derived
a disk fraction of $\sim$0.7 and 0.4 for spectral types of $\leq$M3.5 and
$>$M3.5, respectively, which is slightly lower than previous measurements
based on less complete catalogs of members \citep{luh10}.
\acknowledgements
K.L. acknowledges support from NASA grant 80NSSC18K0444.
The UKIRT data were obtained through program U/17B/UA05. UKIRT is
owned by the University of Hawaii (UH) and operated by the UH Institute for Astronomy. When the data reported here were acquired, UKIRT was supported by NASA and operated under an agreement among the University of Hawaii, the University of Arizona, and Lockheed Martin Advanced Technology Center; operations were enabled through the cooperation of the East Asian Observatory.
The Gemini data were obtained through
programs GN-2017B-Q-8, GN-2018B-Q-114, GN-2018B-FT-205, GN-2018B-FT-207.
Gemini is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnolog\'{i}a e Innovaci\'{o}n Productiva (Argentina), and Minist\'{e}rio da Ci\^{e}ncia, Tecnologia e Inova\c{c}\~{a}o (Brazil).
The IRTF is operated by the University of Hawaii under contract NNH14CK55B with NASA.
The MMT Observatory is a joint facility of the University of Arizona and the
Smithsonian Institution.
LRS2 was developed and funded by the University of Texas at Austin McDonald Observatory and Department of Astronomy and by Pennsylvania State University. We thank the Leibniz-Institut f\"ur Astrophysik Potsdam (AIP) and the Institut fur Astrophysik G\"ottingen (IAG) for their contributions to the construction of the integral field units.
The HET is a joint project of the University of Texas at Austin, the
Pennsylvania State University, Stanford University,
Ludwig-Maximillians-Universit\"at M\"unchen, and Georg-August-Universit\"at
G\"ottingen and is named in honor of its principal benefactors, William
P. Hobby and Robert E. Eberly.
The LAMOST data were obtained with the Guoshoujing Telescope,
which is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences.
WIRCAM is a joint project of CFHT, Taiwan,
Korea, Canada, France, and the Canada-France-Hawaii Telescope (CFHT) which
is operated by the National Research Council (NRC) of Canada, the Institute
National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii.
The {\it Spitzer Space Telescope} and the IPAC Infrared Science Archive (IRSA)
are operated by JPL and Caltech under contract with NASA.
2MASS is a joint project of the University of
Massachusetts and the Infrared Processing and Analysis Center (IPAC) at
Caltech, funded by NASA and the NSF.
Funding for SDSS has been provided by the Alfred P. Sloan Foundation,
the Participating Institutions, the NSF,
the U.S. Department of Energy, NASA, the Japanese Monbukagakusho, the
Max Planck Society, and the Higher Education Funding Council for England.
The SDSS Web Site is http://www.sdss.org/.
The SDSS is managed by the Astrophysical Research Consortium for the
Participating Institutions. The Participating Institutions are the American
Museum of Natural History, Astrophysical Institute Potsdam, University of
Basel, University of Cambridge, Case Western Reserve University, The University
of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the
Japan Participation Group, The Johns Hopkins University, the Joint Institute
for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and
Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences,
Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy,
the Max-Planck-Institute for Astrophysics, New Mexico State University,
Ohio State University, University of Pittsburgh, University of Portsmouth,
Princeton University, the United States Naval Observatory, and the University
of Washington.
PS1 and its public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the NSF Grant AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation.
The Center for Exoplanets and Habitable Worlds is supported by the
Pennsylvania State University, the Eberly College of Science, and the
Pennsylvania Space Grant Consortium.
{\it Facilities: } \facility{UKIRT (WFCAM)},
\facility{MMT (Red Channel, MMIRS)},
\facility{Gemini:North (GMOS, GNIRS)},
\facility{CFHT (WIRCam)},
\facility{IRTF (SpeX)},
\facility{Spitzer (IRAC)},
\facility{HET (LRS2)}
\section{Technical Lemmas}
\begin{lemma}\label{lem:nonexist_ratio_2_global}
For any integer $d\geq2$ define the sequence
\[ g(d)=\left(1+d^{-\frac{1}{2(k_d+1)}}\right)^{k_d}+\frac{2\sqrt{d}}{\ln
d\left(1+\frac{\ln d}{2d}\right)^d}
\qquad\text{where}\;\; k_d=\left\lceil\frac{\ln d}{2\ln\ln d}\right\rceil .
\]
Then $\lim_{d\rightarrow\infty}g(d)=1$.
\end{lemma}
\begin{proof} Define
\[g_1(d) = \parens{1 + d^{- \frac{1}{2 (k_d + 1)}} }^{k_d}
\quad \text{ and } \quad g_2 (d) = \frac{2 \sqrt{d}}{\parens{1 + \frac{\ln d}{2
d}}^d \ln d} \] so that $g(d) = g_1(d) + g_2(d)$. We will show the desired
convergence by establishing that $\lim_{d\to\infty} g_1(d)=1$ and $\lim_{d\to\infty}
g_2(d)=0$.
We will make use of the following inequalities (see, e.g., \citep[Eq.~4.5.13]{Olver2010}):
\begin{equation}
\exp \parens{\frac{xy}{x+y}} < \parens{1 + \frac{x}{y}}^y < \exp (x),\quad\text{for all }x,y>0.
\label{ineq:bound_e}
\end{equation}
First, we show $\lim_{d \rightarrow \infty} g_1(d) = 1$. As $d$ and $k_d$ are
positive, we have $g_1(d) > 1$ for every $d$. Furthermore, $g_1$ is increasing in
$k_d$, and $k_d < \frac{\ln d}{2 \ln \ln d} + 1$. Thus,
\begin{equation*} g_1(d) < \parens{1 + d ^{- \frac{1}{ \frac{\ln d}{\ln \ln d} +
4} } }^{ \frac{\ln d}{2 \ln \ln d} + 1}.
\end{equation*}
Using the second inequality of \eqref{ineq:bound_e} with $y = \frac{\ln d}{2 \ln \ln d} + 1$ and $x = y d^{-\frac{\ln \ln d}{\ln d + 4 \ln \ln d}}$, we can further bound
\begin{equation} g_1 (d) < \exp \parens{ \frac{\frac{\ln d}{2 \ln \ln d} + 1 }{
d^{\frac{\ln \ln d}{\ln d + 4 \ln \ln d}} } }.
\label{eq:ub_g1}
\end{equation}
We will show that the argument of the exponential function on the r.h.s.\
of~\eqref{eq:ub_g1} goes to $0$ for $d \rightarrow \infty$, thus proving the claim.
Replacing $ \ln d = \exp \parens{\ln \ln d}$ in the numerator and $d = \exp \parens{\ln d}$ in the
denominator, that argument can be written as
\begin{equation}
\frac{\exp \parens{\ln \ln d} \parens{\frac{1}{2 \ln \ln d} + \frac{1}{\ln d}} }{ \exp
\parens{\frac{\ln d \ln \ln d}{\ln d + 4 \ln \ln d}} }
=
\parens{\frac{1}{2 \ln \ln d} + \frac{1}{\ln d}} \exp \parens{ \frac{4 (\ln \ln
d)^2}{\ln d + 4 \ln \ln d} } .
\label{eq:ub_g1_arg}
\end{equation}
The argument of the exponential function on the r.h.s.\ of \eqref{eq:ub_g1_arg} goes
to $0$, as $\ln d$ is the dominating term in the denominator for $d \rightarrow
\infty$. Thus, the whole expression in \eqref{eq:ub_g1_arg} goes to $0$.
Next, we show that $\lim_{d \rightarrow \infty} g_2(d) = 0$. As $d \ge 2$, we have
that $g_2(d) > 0$ for every $d$. Using the first inequality of \eqref{ineq:bound_e} with $x =
\frac{\ln d}{2}$ and $y = d$, we have
\[
\parens{1 + \frac{\ln d}{2d}}^d > \exp \parens{\frac{d \ln d}{\ln d + 2d}}.
\] Thus, we obtain an upper bound on $g_2$ by writing $\sqrt{d} =
\exp \parens{\frac{\ln d}{2}}$:
\begin{equation} g_2(d) <
\frac{2 \exp \parens{\frac{\ln d}{2}}}{\ln d \exp \parens{\frac{d \ln d}{\ln d + 2d}} } =
\frac{2}{\ln d} \exp \parens{\frac{(\ln d)^2}{2 \ln d + 4d}}.
\label{eq:g2_ub}
\end{equation} As $4d$ is the dominating term in the denominator of the argument of
the exponential function, the argument goes to $0$ for $d \rightarrow \infty$, and
thus the r.h.s. of \eqref{eq:g2_ub} goes to $0$, showing the claim.
\end{proof}
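As an illustrative numerical aside (not part of the proof), one can evaluate $g(d)$ directly and watch it decrease towards $1$, albeit slowly. The following Python sketch uses \texttt{log1p} so that $\parens{1+\frac{\ln d}{2d}}^d$ is computed stably for large $d$.
\begin{verbatim}
import math

def g(d):
    k = math.ceil(math.log(d) / (2.0 * math.log(math.log(d))))
    g1 = (1.0 + d ** (-1.0 / (2.0 * (k + 1)))) ** k
    # (1 + ln d/(2d))^d computed as exp(d*log1p(...)) for numerical stability
    g2 = 2.0 * math.sqrt(d) / (math.log(d)
         * math.exp(d * math.log1p(math.log(d) / (2.0 * d))))
    return g1 + g2

for d in (10**2, 10**4, 10**8, 10**16):
    print(d, round(g(d), 4))   # slowly decreasing towards 1
\end{verbatim}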
\section{Discussion and Future Directions} \label{sec:discussion_future}
In this paper we showed that there exist weighted congestion games with polynomial latencies of degree
$d$ that have no $\alpha$-PNE for any
$\alpha<\alpha(d)=\varOmega\left(\frac{\sqrt{d}}{\ln d}\right)$. For general cost
functions, we proved that $n$-PNE always exist, where $n$ is the number of players,
whereas $\alpha$-PNE in general do not for
$\alpha<\Phi_{n-1}=\varTheta\left(\frac{n}{\ln n}\right)$.
nonexistence results into complexity-theoretic results, establishing that deciding
whether such $\alpha$-PNE exist is itself an NP-hard problem.
We now identify two possible directions for follow-up work. A first obvious question would be
to reduce the nonexistence gap between $\varOmega\left(\frac{\sqrt{d}}{\ln
d}\right)$ (derived in~\cref{th:nonexistence} of this paper) and $d$ (shown in~\cite{Caragiannis:2019aa}) for polynomials of degree $d$; similarly for the gap between
$\varTheta\left(\frac{n}{\ln n}\right)$ (\cref{th:nonexistence_general_costs}) and $n$ (\cref{th:existence_general_n}) for general cost functions and $n$
players. Notice that all current methods for proving upper bounds (i.e.,
existence) are essentially based on potential function arguments; thus it might be
necessary to come up with novel ideas and techniques to overcome the current gaps.
A second direction would be to study the complexity of \emph{finding} $\alpha$-PNE,
when they are guaranteed to exist. For example, for polynomials of degree $d$, we
know that $d$-improving dynamics eventually reach a
$d$-PNE~\cite{Caragiannis:2019aa}, and so finding such an approximate equilibrium
lies in the complexity class PLS of local search problems (see, e.g.,
\cite{Johnson:1988aa,Schaffer:1991aa}). However, from a complexity theory
perspective the only known lower bound is the PLS-completeness of finding an
\emph{exact} equilibrium for \emph{unweighted} congestion
games~\cite{Fabrikant2004a} (and this is true even for $d=1$, i.e., affine cost
functions; see~\cite{Ackermann2008}). On the other hand, we know that $d^{O(d)}$-PNE
can be computed in polynomial time (see, e.g.,
\cite{Caragiannis2015a,gns2018_arxiv,Feldotto2017}). It would then be very
interesting to establish a ``gradation'' in complexity (e.g., from NP-hardness to
PLS-hardness to P) as the parameter $\alpha$ increases from $1$ to $d^{O(d)}$.
\section{General Cost Functions} \label{sec:general_costs}
In this final section we leave the domain of polynomial latencies and study the
existence of approximate equilibria in general congestion games having arbitrary
(nondecreasing) cost functions. Our parameter of interest, with respect to which
both our positive and negative results are going to be stated, is the number
of players $n$. We start by showing that $n$-PNE always exist:
\begin{theorem}
\label{th:existence_general_n}
Every weighted congestion game with $n$ players and arbitrary (nondecreasing) cost functions has an $n$-approximate PNE.
\end{theorem}
\begin{proof}
Fix a weighted congestion game with $n\geq 2$ players, some strategy profile $\vecc{s}$, and a possible deviation $s'_i$ of player $i$. First notice that we can bound the change in the cost of any other player $j\neq i$ as
\begin{align}
C_{j}(s'_i,\vecc s_{-i}) - C_{j}(\vecc s)
&= \sum_{e\in s_j} c_e\parens{x_e(s'_i,\vecc s_{-i})} - \sum_{e\in s_j} c_e \parens{x_e(\vecc s)} \nonumber\\
&\!\begin{multlined}[b]
= \sum_{e\in s_j\cap(s'_i\setminus s_i)} \left[c_e \parens{x_e(s'_i,\vecc s_{-i})} - c_e \parens{x_e(\vecc s)}\right] \\
+\sum_{e\in s_j\cap(s_i\setminus s'_i)} \left[c_e \parens{x_e(s'_i,\vecc s_{-i})} - c_e \parens{x_e(\vecc s)}\right]
\end{multlined} \label{eq:helper_3}\\
& \leq \sum_{e\in s_j\cap(s'_i\setminus s_i)} \left[c_e \parens{x_e(s'_i,\vecc s_{-i})} - c_e \parens{x_e(\vecc s)}\right]\nonumber \\
&\leq \sum_{e\in s'_i} c_e \parens{x_e(s'_i,\vecc s_{-i})}\nonumber \\
&= C_i(s'_i,\vecc s_{-i}), \label{eq:helper_2}
\end{align}
the first inequality holding due to the fact that the second sum in~\eqref{eq:helper_3} contains only nonpositive terms (since the latency functions are nondecreasing), and the second one because the cost functions are nonnegative and $s_j\cap(s'_i\setminus s_i)\subseteq s'_i$.
Next, define the social cost $C(\vecc{s})=\sum_{i\in N}C_i(\vecc{s})$. Adding the above inequality over all players $j\neq i$ (of which there are $n-1$) and rearranging, we successively derive:
\begin{align}
\sum_{j\neq i}C_{j}(s'_i,\vecc s_{-i}) - \sum_{j\neq i}C_{j}(\vecc s) &\leq (n-1)C_i(s'_i,\vecc s_{-i}) \nonumber\\
\left(C(s'_i,\vecc s_{-i})-C_i(s'_i,\vecc s_{-i})\right)-\left(C(\vecc{s})-C_i(\vecc{s})\right) &\leq (n-1)C_i(s'_i,\vecc s_{-i}) \nonumber\\
\qquad C(s'_i,\vecc s_{-i})-C(\vecc{s}) &\leq nC_i(s'_i,\vecc s_{-i})-C_i(\vecc{s}).\label{eq:napxpotential}
\end{align}
We conclude that, if $s'_i$ is an $n$-improving deviation for player $i$ (i.e., $nC_i(s'_i,\vecc s_{-i})<C_i(\vecc{s})$), then the social cost must strictly decrease after this move. Thus, any (global or local) minimizer of the social cost must be an $n$-PNE (the existence of such a minimizer is guaranteed by the fact that the strategy spaces are finite).
\end{proof}
The above proof not only establishes the existence of $n$-approximate equilibria in
general congestion games, but also highlights a few additional interesting features.
First, due to the key inequality~\eqref{eq:napxpotential}, $n$-PNE are reachable via
sequences of $n$-improving moves, in addition to arising as minimizers of the
social cost function. These attributes give a nice ``constructive'' flavour
to~\cref{th:existence_general_n}.
Secondly, exactly because social cost optima are $n$-PNE, the \emph{Price of
Stability}\footnote{The Price of Stability (PoS) is a well-established and
extensively studied notion in algorithmic game theory, originally studied
in~\citep{ADKTWR04,Correa2004}. It captures the minimum approximation ratio of the
social cost between equilibria and the optimal solution (see, e.g.,
\citep{Christodoulou2015,cggs2018-journal}); in other words, it is the best-case
analogue of the Price of Anarchy (PoA) notion of~\citet{Koutsoupias2009a}.} of
$n$-PNE is optimal (i.e., equal to $1$) as well.
Another, more succinct, way to interpret these observations is within the context of
\emph{approximate potentials} (see, e.g.,
\citep{Chen2008,Christodoulou2011a,cggs2018-journal}); \eqref{eq:napxpotential}
establishes that the social cost itself is always an $n$-approximate potential of
any congestion game.
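As a purely illustrative aside (not part of the formal development), the following Python sketch verifies the statement on a toy instance with hypothetical weights and latencies: it brute-forces the social-cost minimizer of a small weighted congestion game and checks that no player has an $n$-improving deviation, exactly as guaranteed by \eqref{eq:napxpotential}.
\begin{verbatim}
from itertools import product

# Toy instance (hypothetical numbers): 3 players, 3 resources.
weights = {0: 1.0, 1: 0.6, 2: 0.3}
strategies = {0: [("a",), ("b",)],       # each strategy is a tuple of resources
              1: [("a", "c"), ("b",)],
              2: [("b",), ("c",)]}
cost = {"a": lambda x: x ** 2, "b": lambda x: 2 * x, "c": lambda x: x ** 3}

def player_cost(i, profile):
    load = {e: sum(weights[j] for j in profile if e in profile[j]) for e in cost}
    return sum(cost[e](load[e]) for e in profile[i])

def social_cost(profile):
    return sum(player_cost(i, profile) for i in profile)

profiles = [dict(zip(strategies, p)) for p in product(*strategies.values())]
opt = min(profiles, key=social_cost)     # social-cost minimizer

n = len(weights)
for i in opt:                            # check the n-PNE condition
    for s in strategies[i]:
        dev = dict(opt); dev[i] = s
        assert player_cost(i, opt) <= n * player_cost(i, dev) + 1e-12
print("social-cost minimizer is an n-PNE:", opt)
\end{verbatim}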
Next, we design a family of games that do not admit $\alpha$-PNE for any $\alpha<\Phi_{n-1}=\varTheta\left(\frac{n}{\ln n}\right)$, thus nearly matching the upper bound of~\cref{th:existence_general_n}.
\begin{theorem}
\label{th:nonexistence_general_costs}
For any integer $n\geq 2$, there exist weighted congestion games with $n$ players and general cost functions that do not have $\alpha$-approximate PNE for any $\alpha<\Phi_{n-1}$, where
$\Phi_m\sim\frac{m}{\ln m}$ is the unique positive solution of $(x+1)^m=x^{m+1}$.
\end{theorem}
\begin{proof}
For any integer $n\geq 2$, let
$\xi=\Phi_{n-1}$ be the positive solution of $(x+1)^{n-1}=x^{n}$. Then,
equivalently,
\begin{equation}
\label{eq:helper_5}
\left(1+\frac{1}{\xi}\right)^{n-1}=\xi.
\end{equation}
Furthermore, as we
mentioned in~\cref{sec:model}, $\xi>1$ and asymptotically $\Phi_{n-1}\sim\frac{n}{\ln n}$.
Consider the following congestion game $\mathcal{G}_n$.
There are $n=m+1$ players $0,1, \ldots, m$, where player $i$ has weight $w_i=\nicefrac{1}{2^i}$. In particular, this means that for any $i \in \set{1,\dots,m}$: $\sum_{k=i}^m w_k< w_{i-1}\leq w_0$.
Furthermore, there are $2(m+1)$ resources $a_0, a_1, \ldots, a_m, b_0, b_1, \ldots, b_m$, where resources $a_i$ and $b_i$ have the same cost function $c_i$ given by
\[
c_{a_0}(x)=c_{b_0}(x)=c_0(x)=
\begin{cases}
1, & \text{if } x\geq w_0, \\
0, & \text{otherwise};
\end{cases}
\]
and for all $i \in \set{1,\dots,m}$,\\
\[
c_{a_i}(x)=c_{b_i}(x)=c_i(x)=
\begin{cases}
\frac{1}{\xi}\left(1+\frac{1}{\xi}\right)^{i-1}, & \text{if } x\geq w_0 + w_i, \\
0, & \text{otherwise}.
\end{cases}
\]
The strategy set of player $0$ and of all players $i \in \set{1,\dots,m}$ are, respectively,
\[
S_0=\{\{a_0, \ldots, a_m\},\{b_0, \ldots, b_m\}\},
\qquad \text{and} \qquad
S_i=\{\{a_0, \ldots, a_{i-1}, b_i\},\{b_0, \ldots, b_{i-1}, a_i\}\}.
\]
We show that this game has no $\alpha$-PNE, for any $\alpha<\xi$, by proving that in any outcome there is at least one player that can
deviate and improve her cost by a factor of at least $\xi$. Due to symmetry it is
sufficient to consider the following two kinds of outcomes:
\emph{\underline{Case 1:} Player $0$ is alone on resource $a_0$.}
Then player $0$ must have chosen $\{a_0, \ldots, a_m\}$, and all other players
$i \in \set{1,\dots,m}$ must have chosen strategy $\{b_0, \ldots, b_{i-1}, a_i\}$. In this
outcome, player $0$ has a cost of
\begin{align*}
c_0(w_0)+ \sum_{i=1}^{m} c_i(w_0+w_i)
= 1 + \frac{1}{\xi} \sum_{i=1}^{m}\left(1+\frac{1}{\xi}\right)^{i-1}
=\left(1+\frac{1}{\xi}\right)^{m}
= \xi,
\end{align*}
where the last equality follows by the fact that $m=n-1$ and~\eqref{eq:helper_5}.
Deviating to $\{b_0, \ldots ,b_m\}$, player 0 would get a cost of
\[
c_0(w_0+\ldots+ w_m)+ \sum_{i=1}^{m} c_i\left(w_0+\sum_{j=i+1}^m w_j\right) = 1+0.
\]
Thus player $0$ can improve by a factor of at least
$\xi$.
\emph{\underline{Case 2:} Player 0 is sharing resource $a_0$ with at least one other player.}
Let $j$ be the smallest index of such a player, i.e., player $j$ plays $\{a_0,
\ldots, a_{j-1}, b_j\}$ and all players $i \in \set{1,\dots,j-1}$ have chosen strategy $\{b_0,
\ldots, b_{i-1}, a_i\}$. In such a profile the cost of player $j$ is at least
\begin{align*}
c_0(w_0+w_j)+ \sum_{i=1}^{j-1} c_i(w_0+w_i+w_j)
= 1 + \frac{1}{\xi} \sum_{i=1}^{j-1} \left(1+\frac{1}{\xi}\right)^{i-1}
=\left(1+\frac{1}{\xi}\right)^{j-1},
\end{align*}
while deviating to $j$'s other strategy would result in a cost of at most
\begin{align*}
c_0\left(\sum_{i=1}^{m} w_i\right) + \sum_{i=1}^{j-1} c_i\left(\sum_{k=i+1}^m w_k\right) + c_j\left(w_0+\sum_{k=j}^m w_k\right)
= 0 + 0 + \frac{1}{\xi}\left(1+\frac{1}{\xi}\right)^{j-1}.
\end{align*}
Thus player $j$ can improve by a factor of at least
$\xi$.
\end{proof}
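As a small numerical companion (illustrative only), the threshold $\Phi_m$ can be computed by bisection: the function $f(x)=m\ln(x+1)-(m+1)\ln x$ is strictly decreasing on $x>0$ and vanishes exactly at $\Phi_m$. The Python sketch below (sample values of $m$ chosen arbitrarily) recovers $\Phi_1\approx 1.618$, the golden ratio, and illustrates the growth rate $\Phi_m\sim\frac{m}{\ln m}$.
\begin{verbatim}
import math

def phi(m, tol=1e-12):
    # bisection on f(x) = m*log(x+1) - (m+1)*log(x), strictly decreasing
    f = lambda x: m * math.log(x + 1.0) - (m + 1.0) * math.log(x)
    lo, hi = 1.0, 2.0
    while f(hi) > 0:                  # enlarge bracket until f changes sign
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(round(phi(1), 4))               # ~1.618, the golden ratio
for m in (4, 16, 64, 256):
    print(m, round(phi(m), 3), round(m / math.log(m), 3))
\end{verbatim}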
In the same spirit as the rest of the paper, we would like to show an
NP-hardness result for deciding the existence of $\alpha$-PNE for general games as well.
We do exactly that in the following theorem, where now $\alpha$ grows as
$\tilde{\varTheta}(n)$. Again, we use the circuit gadget and combine it with the game
from the previous nonexistence~\cref{th:nonexistence_general_costs}. The main
difference to the previous reductions is that now $n$ is part of the input. On the
other hand we are not restricted to polynomial latencies, so we use step functions
having a single breakpoint.
\begin{theorem}
\label{th:hardness_existence_general_alt} Let $\varepsilon>0$, and let $\tilde{\alpha}:\mathbb{N}_{\geq2} \longrightarrow \mathbb{R}$ be any
sequence of reals such that $1\leq \tilde\alpha(n) <
\frac{\Phi_{n-1}}{1+\varepsilon}=\tilde{\varTheta}(n)$,
where $\Phi_m\sim\frac{m}{\ln m}$ is the unique positive solution of $(x+1)^m=x^{m+1}$. Then, it is NP-hard to decide whether a (weighted) congestion game with $n$ players has an $\tilde\alpha(n)$-approximate PNE.
If in addition $\tilde{\alpha}$ is a polynomial-time computable real sequence (as defined in \cref{sec:model}), the aforementioned problem is NP-complete.
\end{theorem}
\begin{proof}
Recall that we have $\Phi_{n-1}\sim\frac{n}{\ln n}$. Given $\varepsilon>0$, without loss of generality assume $\varepsilon<1$, so that $1+\nicefrac{\varepsilon}{3}<(1+\varepsilon)(1-\nicefrac{\varepsilon}{3})$. Let $n_0,\ell\in\mathbb{N}$ be large enough natural numbers such that
\begin{equation}
1+\frac{1}{\ell}<\frac{(1+\varepsilon)(1-\frac{\varepsilon}{3})}{1+\frac{\varepsilon}{3}}
\qquad\text{and}\qquad
\left(1-\frac{\varepsilon}{3}\right)\frac{n}{\ln n}\leq\Phi_{n-1}\leq\left(1+\frac{\varepsilon}{3}\right)\frac{n}{\ln n}
\quad\text{for all}\;\;n\geq n_0.
\label{eq:hardness_existence_general_aux1}
\end{equation}
We will again reduce from \textsc{Circuit Satisfiability}: given a circuit $C$, we must construct (in polynomial time) a game $\tilde{\mathcal{G}}$, say with $\tilde{n}$ players, that has an $\tilde{\alpha}(\tilde{n})$-PNE if and only if $C$ has a satisfying assignment. Without loss of generality assume that $C$ is in canonical form (as described in \cref{sec:circuit}); add also one extra gate that negates the output of $C$, making this the new output of a circuit $\bar{C}$, say with $m$ inputs and $K$ NAND gates.
Let $s= m+K+1$, $n= \ell s$, and take a large enough integer $d$ such that $3^{\nicefrac{d}{2}}>\Phi_{n-1}$. Note that $s$, $n$ and a suitable $d$ can all be found in time polynomial in the description of $C$. To conclude the preliminaries of this proof, assume also without loss of generality that $s\geq n_0$; if $s$ is bounded by a constant, determining whether $C$ has a satisfying assignment can be done in constant time.
Next, given $\bar{C}$ and $d$, construct the game $\mathcal{G}^d_\mu$ where $\mu$ is such that $3^{\nicefrac{d}{2}}-\varepsilon(\mu)>\Phi_{n-1}$, as in \cref{sec:circuit}. Notice that $\mathcal{G}^d_\mu$ can be computed in polynomial time from $C$, and that the $\Phi_{n-1}$-improving Nash dynamics of this game emulate the computation of the circuit. Consider also the game $\mathcal{G}_n$ with $n$ players from \cref{th:nonexistence_general_costs} that does not have $\alpha$-PNE for any $\alpha<\Phi_{n-1}$.
We would like to merge $\mathcal{G}^d_\mu$ and $\mathcal{G}_n$ into a single game $\tilde{\mathcal{G}}$, in such a way that $\tilde{\mathcal{G}}$ has an approximate PNE if and only if $C$ has a satisfying assignment. Following the same technique as in \cref{th:bb_hardness}, we would like to extend the strategies of the output player of $\mathcal{G}^d_\mu$ to include resources that are used by players in $\mathcal{G}_n$. For this technique to work, we must rescale the weights and cost functions in $\mathcal{G}_n$.
In particular, we divide all weights of the players in $\mathcal{G}_n$ by 2 (so that the sum of the weights of all the players is less than 1) and halve the breakpoints of the cost functions accordingly. We also add a new dummy resource with cost function
\[ c_{\text{dummy}}(x)=
\begin{cases}
\Phi_{n-1}^2, & \text{if}\;\; x\geq 1, \\
0, & \text{otherwise}.
\end{cases}
\]
We are now ready to describe the congestion game $\tilde{\mathcal{G}}$ that is obtained by merging the circuit game $\mathcal{G}^d_\mu$ with the (rescaling of) game $\mathcal{G}_n$. Note that this game has $n+s=(\ell+1) s$ players: $s$ from the circuit game (which all have weight 1) and $n$ from the nonexistence gadget. The set of resources corresponds to the union of the gate resources of $\mathcal{G}^d_\mu$, the resources in $\mathcal{G}_n$, and the dummy resource. Similarly to the proof of \cref{th:bb_hardness},
\begin{itemize}
\item we do not change the strategies of the players in $\mathcal{G}^d_\mu$, with the exception of the output player $G_1$;
\item the zero strategy of the output player $G_1$ remains the same as in $\mathcal{G}^d_\mu$, but her one strategy is augmented with the dummy resource; that is, $s^1_{G_1}=\{1_1,\text{dummy}\}$;
\item each player $i$ in $\mathcal{G}_n$ keeps her original strategies, and gets a new dummy strategy $s_{i,\text{dummy}}=\{\text{dummy}\}$.
\end{itemize}
With the above description,\footnote{This almost concludes the description of the game -- the only problem is that some of the cost functions of the game are defined in terms of $\Phi_{n-1}$, which is not a rational number. To make the proof formally correct, one can approximate $\Phi_{n-1}$ sufficiently closely from below by a rational $\bar{\Phi}_{n-1}<\Phi_{n-1}$.
We omit the details.}
the only thing left to prove NP-hardness is that $C$ has a satisfying assignment if and only if $\tilde{\mathcal{G}}$ has an $\tilde{\alpha}(n+s)$-PNE.
The proof follows the same approach as in \cref{th:bb_hardness}. Letting $\alpha<\Phi_{n-1}$, we suppose that $\tilde{\mathcal{G}}$ has an $\alpha$-PNE, say $\vecc{s}$, and proceed to prove that $C$ has a satisfying assignment.
As before, if $\vecc{s}$ is an $\alpha$-PNE, then every gate player that is not the output player must respect the NAND semantics, and this strategy is $\alpha$-dominating.
For the output player, the cost of her zero strategy remains the same, and the cost of her one strategy increases by exactly $\Phi_{n-1}^2<3^d<\frac{\mu}{\mu-1}3^d$.
Hence, if $\vecc{s}_X$ is a satisfying assignment, then the zero strategy of the output player (which negates the output of the original circuit $C$) remains $\alpha$-dominating; on the other hand, if $\vecc{s}_X$ is not a satisfying assignment, then the ratio between the costs of the zero strategy and the one strategy of the output player is at least
$$\frac{c_{0_1}(2)}{c_{1_1}(2)+\Phi_{n-1}^2}>\frac{\lambda\mu 2^d}{\mu 2^d+\frac{\mu}{\mu-1}3^d}=\lambda\left(\frac{1}{1+\frac{1}{\mu-1}\left(\frac{3}{2}\right)^d}\right)
> 3^{\nicefrac{d}{2}} \parens{\frac{1}{1 + \frac{1}{\mu - 1} 3^d } }> 3^{\nicefrac{d}{2}} - \varepsilon(\mu)>\alpha.$$
Hence, respecting the NAND semantics remains $\alpha$-dominating for the output player as well. As a consequence, the input players are also locked to their strategies (i.e.\ they have no incentive to change).
Now, if the output player happened to be playing her one strategy, this could not be an $\alpha$-PNE. For each of the players in $\mathcal{G}_n$, the dummy strategy would incur a cost of $\Phi_{n-1}^2$, whereas any other strategy would give a cost of at most $\Phi_{n-1}$. Thus the dummy strategy would be $\Phi_{n-1}$-dominated, and the players in $\mathcal{G}_n$ must be playing on their original sets of strategies, for which we know that $\alpha$-PNE do not occur.
The above argument proves that, in an $\alpha$-PNE, the output player must be playing her zero strategy. Since the output player, by construction, negates the output of $C$, this implies that $C$ must have a satisfying assignment. Moreover, as the output player then does not use the dummy resource and the players of $\mathcal{G}_n$ have total weight less than $1$, the congestion on the dummy resource cannot reach the breakpoint of $1$; hence it is $\alpha$-dominating for each of the players in $\mathcal{G}_n$ to play her dummy strategy (and incur a cost of 0). Thus, $\vecc{s}$ is an exact PNE as well.
For the converse direction, suppose $C$ has a satisfying assignment $\vecc{s}_X$. Then this can be extended to an $\alpha$-PNE of $\tilde{\mathcal{G}}$ in which the input players play according to $\vecc{s}_X$, the gate players play according to the NAND semantics, the output player of $\mathcal{G}^d_\mu$ plays the zero strategy, and each player in $\mathcal{G}_n$ plays the dummy strategy.
We have proven that, for any $\alpha<\Phi_{n-1}$, $C$ has a satisfying assignment iff $\tilde{\mathcal{G}}$ has an $\alpha$-PNE. To conclude the proof, we verify that $\tilde{\alpha}(n+s)<\Phi_{n-1}$:
\begin{align*}
\tilde{\alpha}(n+s)&<\frac{\Phi_{n+s-1}}{1+\varepsilon}\\
&\leq\frac{1+\frac{\varepsilon}{3}}{1+\varepsilon}\frac{n+s}{\ln(n+s)}\\
&\leq\frac{1+\frac{\varepsilon}{3}}{(1+\varepsilon)(1-\frac{\varepsilon}{3})}\frac{(n+s)\ln n}{n\ln(n+s)}\Phi_{n-1}\\
&<\frac{(1+ \frac{s}{n})(1+\frac{\varepsilon}{3})}{(1+\varepsilon)(1-\frac{\varepsilon}{3})}\Phi_{n-1}\\
&=\frac{(1+ \frac{1}{\ell})(1+\frac{\varepsilon}{3})}{(1+\varepsilon)(1-\frac{\varepsilon}{3})}\Phi_{n-1}<\Phi_{n-1}.
\end{align*}
The first inequality comes from the assumption on $\tilde{\alpha}$, the second and third come from the upper and lower bounds on $\Phi_{n}$ from \eqref{eq:hardness_existence_general_aux1} and the fact that $n+s\geq n\geq s\geq n_0$, the fourth comes from the trivial bound $\ln n<\ln (n+s)$, the equality comes from the definition of $n=\ell s$, and the final step comes from the choice of $\ell$ in \eqref{eq:hardness_existence_general_aux1}.
We conclude that the problem of deciding whether a (weighted) congestion game with $n$ players has an $\tilde{\alpha}(n)$-PNE is NP-hard. If in addition $\tilde{\alpha}$ is a polynomial-time computable real sequence, the problem is also in NP; given a game with $n$ players and a (candidate) strategy profile, verify that this is an $\tilde{\alpha}(n)$-PNE by iterating over all possible moves of all players and verifying that none of these are $\tilde{\alpha}(n)$-improving.
\end{proof}
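As an illustrative aside (not part of the proof), the constant $\ell$ in \eqref{eq:hardness_existence_general_aux1} can be computed mechanically; the following Python sketch (sample values of $\varepsilon$ chosen arbitrarily) returns the smallest admissible $\ell$.
\begin{verbatim}
def smallest_l(eps):
    # smallest integer l with 1 + 1/l < (1+eps)(1-eps/3)/(1+eps/3)
    assert 0 < eps < 1            # the bound exceeds 1 exactly when eps < 1
    bound = (1 + eps) * (1 - eps / 3) / (1 + eps / 3)
    l = 1
    while 1 + 1.0 / l >= bound:
        l += 1
    return l

for eps in (0.1, 0.5, 0.9):
    print(eps, smallest_l(eps))
\end{verbatim}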
\section{Hardness of Existence} \label{sec:hardness_existence}
In this section we show that it is NP-hard to decide whether a polynomial congestion
game has an $\alpha$-PNE. For this we use a black-box reduction: our hard instance is obtained by
combining any
(weighted) polynomial congestion game $\mathcal{G}$ without $\alpha$-PNE (i.e., the
game from \Cref{sec:nonexistence}) with the circuit gadget of the previous section.
To achieve this, it would be convenient to make some assumptions on the game $\mathcal{G}$, which however do not influence the existence or nonexistence of approximate equilibria.
\paragraph*{Structural Properties of $\mathcal{G}$}
Without loss of generality, we assume that a weighted polynomial congestion game of degree $d$ has the following structural properties.
\begin{itemize}
\item \emph{No player has an empty strategy.} If, for some player $i$, $\emptyset\in S_i$, then this strategy would be $\alpha$-dominating for $i$. Removing $i$ from the game description would not affect the (non)existence of (approximate) equilibria\footnote{By this we mean that if $\mathcal{G}$ has (resp.\ does not have) $\alpha$-PNE, then $\tilde{\mathcal{G}}$, obtained by removing player $i$ from the game, still has (resp.\ still does not have) $\alpha$-PNE.}.
\item \emph{No player has zero weight.} If a player $i$ had zero weight, her
strategy would not influence the costs of the strategies of the other players.
Again, removing $i$ from the game description would not affect the (non)existence
of equilibria.
\item \emph{Each resource $e$ has a monomial cost function with a strictly positive coefficient}, i.e.\ $c_e(x)=a_e x^{k_e}$ where $a_e>0$ and $k_e\in\{0,\ldots,d\}$. If a resource had a more general cost function $c_e(x)=a_{e,0}+a_{e,1}x+\ldots+ a_{e,d} x^{d}$, we could split it into at most $d+1$ resources with (positive) monomial costs, $c_{e,0}(x)=a_{e,0}$, $c_{e,1}(x)=a_{e,1} x$, \ldots , $c_{e,d}(x)=a_{e,d} x^d$.
These monomial-cost resources replace the original resource, appearing on every strategy that included $e$ (an illustrative sketch of this normalization is given right after this list).
\item \emph{No resource $e$ has a constant cost function.} If a resource $e$ had a constant cost function $c_e(x)=a_{e,0}$, we could replace it by new resources having monomial cost.
For each player $i$ of weight $w_i$, replace resource $e$ by a resource $e_i$ with monomial cost $c_{e_i}(x)=\frac{a_{e,0}}{w_i}x$, that is used exclusively by player $i$ on her strategies that originally had resource $e$.
Note that $c_{e_i}(w_i) = a_{e,0}$, so that this modification does not change the player's costs, neither has an effect on the (non)existence of approximate equilibria.
If a resource has cost function constantly equal to zero, we can simply remove it from the description of the game.
\end{itemize}
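The last two normalization steps can be spelled out programmatically; the following Python sketch (hypothetical data layout, purely illustrative) shows the splitting of a general polynomial cost into monomial-cost resources and the per-player replacement of a constant-cost resource.
\begin{verbatim}
def split_into_monomials(resource, coeffs):
    # coeffs[k] = a_{e,k}; returns the new monomial-cost resources
    # (name, degree k, coefficient a_k), dropping zero coefficients
    return [(f"{resource}_{k}", k, a) for k, a in enumerate(coeffs) if a > 0]

def replace_constant(resource, a0, weights):
    # one private linear resource per player i, with cost (a0/w_i)*x,
    # which equals a0 at player i's own load w_i
    return {i: (f"{resource}_{i}", a0 / w) for i, w in weights.items()}

print(split_into_monomials("e", [3.0, 0.0, 2.0]))   # c_e(x) = 3 + 2x^2
print(replace_constant("e_0", 3.0, {"p1": 0.5, "p2": 2.0}))
\end{verbatim}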
For a game having the above properties, we define the (strictly positive) quantities
\begin{equation}
\label{eq:bb_hardness_aux}
a_{\min} = \min_{e\in E} a_e,\quad W = \sum_{i\in N}w_i,\quad c_{\max} = \sum_{e\in E}c_e(W).
\end{equation}
Note that $c_{\max}$ is an upper bound on the cost of any player on any strategy profile.
\paragraph*{Rescaling of $\mathcal{G}$}
In our construction of the combined game we have to make sure that the weights of the players in $\mathcal{G}$ are smaller than the weights of the players in the circuit gadget. We introduce the following rescaling argument.
For any $\gamma\in(0,1]$ define the game $\tilde{\mathcal{G}}_\gamma$, where we rescale the player weights and resource cost coefficients in $\mathcal{G}$ as
\begin{equation}
\label{eq:bb_hardness_scaling}
\tilde{a}_e=\gamma^{d+1-k_e} a_e,\quad \tilde{w}_i=\gamma w_i,\quad \tilde{c}_e(x)=\tilde{a}_e x^{k_e}.
\end{equation}
This changes the quantities in \eqref{eq:bb_hardness_aux} for $\tilde{\mathcal{G}}_\gamma$ to (recall that $k_e\geq 1$)
\begin{align*}
\tilde{a}_{\min}&=\min_{e\in E} \tilde{a}_e=\min_{e\in E}\gamma^{d+1-k_e} a_e\geq\gamma^{d}\min_{e\in E}a_e=\gamma^d a_{\min},\\
\tilde{W}&=\sum_{i\in N}\tilde{w}_i=\sum_{i\in N}\gamma w_i=\gamma W,\\
\tilde{c}_{\max}&=\sum_{e\in E}\tilde{c}_e(\tilde{W})=\sum_{e\in E}\tilde{a}_e(\gamma W)^{k_e}= \sum_{e\in E}\gamma^{d+1}a_e W^{k_e}=\gamma^{d+1}\sum_{e\in E}c_e(W)=\gamma^{d+1}c_{\max}.
\end{align*}
In $\tilde{\mathcal{G}}_\gamma$ the player costs are all uniformly scaled as $\tilde{C}_i(\vecc{s})=\gamma^{d+1} C_i(\vecc{s})$, so that the Nash dynamics and the (non)existence of equilibria are preserved.
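Spelling this out (the computation is immediate from \eqref{eq:bb_hardness_scaling}): since all player weights are multiplied by $\gamma$, the load on every resource scales as $\tilde{x}_e(\vecc{s})=\gamma x_e(\vecc{s})$, and hence, for every resource $e$,
\[
\tilde{c}_e\parens{\tilde{x}_e(\vecc{s})}
=\gamma^{d+1-k_e}a_e\parens{\gamma x_e(\vecc{s})}^{k_e}
=\gamma^{d+1}a_e x_e(\vecc{s})^{k_e}
=\gamma^{d+1}c_e\parens{x_e(\vecc{s})}.
\]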
The next lemma formalizes the combination of both game gadgets and, furthermore, establishes the gap-introduction in the equilibrium factor. Using it, we will derive our key hardness tool of~\cref{th:hardness_existence}.
\begin{lemma}\label{th:bb_hardness}
Fix any integer $d\geq 2$ and real $\alpha\geq 1$. Suppose there exists a weighted polynomial congestion game $\mathcal{G}$ of degree $d$ that does \emph{not} have an $\alpha$-approximate PNE. Then, for any circuit $C$ there exists a game $\tilde{\mathcal{G}}_C$ with the following property: the sets of $\alpha$-approximate PNE and exact PNE of $\tilde{\mathcal{G}}_C$ coincide and are in one-to-one correspondence with the set of satisfying assignments of $C$. In particular, one of the following holds: either
\begin{enumerate}
\item $C$ has a satisfying assignment, in which case $\tilde{\mathcal{G}}_C$ has an exact PNE (and thus, also an $\alpha$-approximate PNE); or
\item $C$ has no satisfying assignments, in which case $\tilde{\mathcal{G}}_C$ has no $\alpha$-approximate PNE (and thus, also no exact PNE).
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\mathcal{G}$ be a congestion game as in the statement of the theorem having the above mentioned structural properties.
Recall that weighted polynomial congestion games of degree $d$ always have $d$-PNE \cite{Caragiannis:2019aa}; since $\mathcal{G}$ has no $\alpha$-PNE, this implies that $\alpha<d<3^{\nicefrac{d}{2}}$. Fix some $0<\varepsilon<3^{\nicefrac{d}{2}}-\alpha$ and take $\mu\geq 1+\frac{2\cdot 3^{d + \nicefrac{d}{2}}}{ \min \set{\varepsilon,1}}$; in this way $\mu\geq1+2\cdot3^{d+\nicefrac{d}{2}}$, as required in \cref{sec:circuit}, and $\alpha<3^{\nicefrac{d}{2}}-\varepsilon\leq3^{\nicefrac{d}{2}}-\varepsilon(\mu)$.
Given a circuit $C$ we construct the game $\tilde{\mathcal{G}}_C$ as follows.
We combine the game $\mathcal{G}^d_\mu$ whose Nash dynamics model the
NAND semantics of $C$, as described in \cref{sec:circuit}, with the game $\tilde{\mathcal{G}}_\gamma$
obtained from $\mathcal{G}$ via the aforementioned rescaling.
We choose $\gamma \in (0,1]$ sufficiently small such that the following three inequalities hold for the quantities in \eqref{eq:bb_hardness_aux} for $\mathcal{G}$:
\begin{equation}
\label{eq:bb_hardness_gamma}
\gamma W<1,
\quad \gamma\sum_{e\in E}a_e<\frac{\mu}{\mu-1}\left(\frac{3}{2}\right)^d,
\quad \gamma\alpha^2<\frac{a_{\min}}{c_{\max}}.
\end{equation}
Thus, the set of players in $\tilde{\mathcal{G}}_C$ corresponds to the (disjoint) union of the
static, input and gate players in $\mathcal{G}^d_\mu$ (which all have weight $1$) and the players in
$\tilde{\mathcal{G}}_\gamma$ (with weights $\tilde w_i$). We also consider a new dummy resource with constant cost
$c_{\mathrm{dummy}}(x)=\frac{\tilde{a}_{\min}}{\alpha}$. Thus, the set of resources
corresponds to the (disjoint) union of the gate resources $0_k,1_k$ in $\mathcal{G}^d_\mu$,
the resources in $\tilde{\mathcal{G}}_\gamma$, and the dummy resource. We augment the strategy space
of the players as follows:
\begin{itemize}
\item each input player or gate player of $\mathcal{G}^d_\mu$ that is \emph{not} the output player $G_1$ has the same strategies as in $\mathcal{G}^d_\mu$ (i.e.\ either the zero or the one strategy);
\item the zero strategy of the output player $G_1$ is the same as in $\mathcal{G}^d_\mu$, but her one strategy is augmented with \emph{every} resource in $\tilde{\mathcal{G}}_\gamma$; that is, $s^1_{G_1}=\{1_1\}\cup E(\tilde{\mathcal{G}}_\gamma)$;
\item each player $i$ in $\tilde{\mathcal{G}}_\gamma$ keeps her original strategies as in $\tilde{\mathcal{G}}_\gamma$, and gets a new dummy strategy $s_{i,\mathrm{dummy}}=\{\mathrm{dummy}\}$.
\end{itemize}
A graphical representation of the game $\tilde{\mathcal{G}}_C$ can be seen
in~\cref{fig:mergegame}.
\begin{figure}[t]
\centering
\includegraphics[trim={0 23.5cm 12cm 0}, clip]{figure_merge-circuit-nonapprox.pdf}
\caption{Merging a circuit game (on the left) and a game without approximate equilibria (on the right). Changes to the subgames are indicated by solid arrows.
The new one strategy of $G_1$ consists of $1_1$ and all resources in $\tilde{\mathcal{G}}_\gamma$, while the zero strategy stays unchanged. The players of $\tilde{\mathcal{G}}_\gamma$ get a new strategy (the dummy resource), and keep their old strategies playing in $\tilde{\mathcal{G}}_\gamma$.
}
\label{fig:mergegame}
\end{figure}
To finish the proof, we need to show that every $\alpha$-PNE of $\tilde{\mathcal{G}}_C$ is an exact PNE and corresponds to a satisfying assignment of $C$; and, conversely, that every satisfying assignment of $C$ gives rise to an exact PNE of $\tilde{\mathcal{G}}_C$ (and thus, an $\alpha$-PNE as well).
Suppose that $\vecc{s}$ is an $\alpha$-PNE of $\tilde{\mathcal{G}}_C$, and let $\vecc{s}_X$ denote the strategy profile restricted to the input players of $\mathcal{G}^d_\mu$. Then, as in the proof of \cref{lem:circuit_NANDsemantics}, every gate player that is not the output player must respect the NAND semantics, and this is an $\alpha$-dominating strategy. For the output player, either $\vecc{s}_X$ is a non-satisfying assignment, in which case the zero strategy of $G_1$ was $\alpha$-dominating,
and this remains $\alpha$-dominating in the game $\tilde{\mathcal{G}}_C$ (since only the cost of the one strategy increased for the output player); or $\vecc{s}_X$ is a satisfying assignment.
In the second case, we now argue that the one strategy of $G_1$ remains $\alpha$-dominating. The cost of the output player on the zero strategy is at least $c_{0_1}(2)=\lambda\mu2^d$, and the cost on the one strategy is at most
\[ c_{1_1}(2)+\sum_{e\in E}\tilde{c}_e(1+\gamma W)=\mu 2^d+\sum_{e\in E}\gamma^{d+1-k_e}a_e(1+\gamma W)^{k_e}<\mu 2^d+\gamma\sum_{e\in E}a_e 2^d<\mu 2^d+\frac{\mu}{\mu-1}3^d,
\]
where we used the first and second bounds from \eqref{eq:bb_hardness_gamma}. Thus, the ratio between the costs is at least
\[\frac{\lambda\mu2^d}{\mu 2^d+\frac{\mu}{\mu-1}3^d}=\lambda\left(\frac{1}{1+\frac{1}{\mu-1}\left(\frac{3}{2}\right)^d}\right)
> 3^{\nicefrac{d}{2}} \parens{\frac{1}{1 + \frac{1}{\mu - 1} 3^d } }
> 3^{\nicefrac{d}{2}} - \varepsilon(\mu)
>\alpha.
\]
Given that the gate players must follow the NAND semantics, the input players are also locked to their strategies (i.e.\ they have no incentive to change) due to the proof of \cref{lem:circuit_NANDsemantics}. The only players left to consider are the players from $\tilde{\mathcal{G}}_\gamma$.
First we show that, since $\vecc{s}$ is an $\alpha$-PNE, the output player must be playing her one strategy.
If this was not the case, then each dummy strategy of a player in $\tilde{\mathcal{G}}_\gamma$ is $\alpha$-dominated by any other strategy: the dummy strategy incurs a cost of $\frac{\tilde{a}_{\min}}{\alpha}\geq\gamma^d\frac{a_{\min}}{\alpha}$, whereas any other strategy would give a cost of at most $\tilde{c}_{\max}=\gamma^{d+1}c_{\max}$ (this is because the output player is not playing any of the resources in $\tilde{\mathcal{G}}_\gamma$). The ratio between the costs is thus at least
\[ \frac{\gamma^da_{\min}}{\gamma^{d+1}c_{\max}\alpha}=\frac{a_{\min}}{\gamma c_{\max}\alpha}>\alpha.
\]
Since the dummy strategies are $\alpha$-dominated, the players in $\tilde{\mathcal{G}}_\gamma$ must be playing on their original sets of strategies. The only way for $\vecc{s}$ to be an $\alpha$-PNE would be if $\mathcal{G}$ had an $\alpha$-PNE to begin with, which yields a contradiction.
Thus, the output player is playing the one strategy (and hence, is present in every resource in $\tilde{\mathcal{G}}_\gamma$). In such a case, we can conclude that each dummy strategy is now $\alpha$-dominating.
If a player $i$ in $\tilde{\mathcal{G}}_\gamma$ is not playing a dummy strategy, she is playing at least one resource in $\tilde{\mathcal{G}}_\gamma$, say resource $e$. Her cost is at least $\tilde{c}_e(1+\tilde{w}_i)=\tilde{a}_e(1+\tilde{w}_i)^{k_e}>\tilde{a}_e\geq\tilde{a}_{\min}$ (the strict inequality holds since, by the structural properties of our game, all of $\tilde{a}_e$, $\tilde{w}_i$ and $k_e$ are strictly positive quantities). On the other hand, the cost of playing the dummy strategy is $\frac{\tilde{a}_{\min}}{\alpha}$.
Thus, the ratio between the costs is greater than $\alpha$.
We have concluded that, if $\vecc{s}$ is an $\alpha$-PNE of $\tilde{\mathcal{G}}_C$, then $\vecc{s}_X$ corresponds to a satisfying assignment of $C$, all the gate players are playing according to the NAND semantics, the output player is playing the one strategy, and all players of $\tilde{\mathcal{G}}_\gamma$ are playing the dummy strategies. In this case, we also have observed that each player's current strategy is $\alpha$-dominating, so the strategy profile is an exact PNE.
To finish the proof, we need to argue that every satisfying assignment gives rise to a unique $\alpha$-PNE. Let $\vecc{s}_X$ be the strategy profile corresponding to this assignment for the input players in $\mathcal{G}^d_\mu$.
Then, as before, there is exactly one $\alpha$-PNE $\vecc{s}$ in $\tilde{\mathcal{G}}_C$ that agrees with $\vecc{s}_X$; namely, each gate player follows the NAND semantics, the output player plays the one strategy, and the players in $\tilde{\mathcal{G}}_\gamma$ play the dummy strategies.
\end{proof}
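As an illustrative aside, a concrete $\gamma$ satisfying the three strict inequalities of \eqref{eq:bb_hardness_gamma} can be picked mechanically: take half the minimum of the corresponding upper bounds (and of $1$, so that $\gamma\in(0,1]$). The following Python sketch uses hypothetical game constants for illustration only.
\begin{verbatim}
def pick_gamma(W, sum_a, a_min, c_max, d, mu, alpha):
    # half the minimum of the three upper bounds (and 1) makes all
    # strict inequalities of (eq:bb_hardness_gamma) hold
    bounds = (1.0,
              1.0 / W,
              (mu / (mu - 1.0)) * (1.5 ** d) / sum_a,
              a_min / (c_max * alpha ** 2))
    return 0.5 * min(bounds)

# hypothetical game constants, for illustration only
print(pick_gamma(W=3.0, sum_a=10.0, a_min=0.5, c_max=40.0, d=2, mu=20.0, alpha=1.7))
\end{verbatim}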
\begin{theorem}
\label{th:hardness_existence}
For any integer $d\geq 2$ and real $\alpha\geq 1$, suppose there exists a weighted polynomial congestion game which does not have an $\alpha$-approximate PNE. Then it is NP-hard to decide whether (weighted) polynomial congestion games of degree $d$ have an
$\alpha$-approximate PNE. If in addition $\alpha$ is polynomial-time computable,\footnote{Recall the definition of polynomial-time computable real number at the end of \cref{sec:model}.}
the aforementioned problem is NP-complete.
\end{theorem}
\begin{proof}
Let $d\geq 2$ and $\alpha\geq 1$. Let $\mathcal{G}$ be a weighted polynomial congestion game of degree $d$ that has no $\alpha$-PNE; this means that for every strategy profile $\vecc{s}$ there exists a player $i$ and a strategy $s'_i\neq s_i$ such that $C_i(s_i,\vecc{s}_{-i})>\alpha \cdot C_i \parens{s'_i,\vecc{s}_{-i}}$.
Note that the functions $C_i$ are polynomials of degree $d$ and hence continuous in the weights $w_i$ and the coefficients $a_e$ appearing in the cost functions. Since there are only finitely many strategy profiles and deviations, a sufficiently small perturbation of the $w_i,a_e$ preserves all of the above strict inequalities. Thus, without loss of generality, we can assume that all $w_i,a_e$ are rational numbers. By a similar reasoning, we can let $\bar{\alpha}>\alpha$ be a rational number sufficiently close to $\alpha$ such that $\mathcal{G}$ still does not have an $\bar{\alpha}$-PNE.
Next, we consider the game $\tilde{\mathcal{G}}_\gamma$ obtained from $\mathcal{G}$ by rescaling, as in the proof of \cref{th:bb_hardness}, but with $\bar{\alpha}$ playing the role of $\alpha$. Notice that the rescaling is done via the choice of a sufficiently small $\gamma$, according to \eqref{eq:bb_hardness_gamma}, and hence in particular we can take $\gamma$ to be a sufficiently small rational. In this way, all the player weights and coefficients in the cost of resources are rational numbers scaled by a rational number and hence rationals.
Finally, we are able to provide the desired NP reduction from \textsc{Circuit Satisfiability}. Given a Boolean circuit $C'$ built with 2-input NAND gates, transform it into a valid circuit $C$ in canonical form. From $C$ we can construct in polynomial time the game $\tilde{\mathcal{G}}_C$ as described in the proof of \cref{th:bb_hardness}. The `circuit part', i.e.\ the game $\mathcal{G}^d_\mu$, is obtained in polynomial time from $C$, as in the proof of \cref{th:hardness_PNE}; the description of the game $\tilde{\mathcal{G}}_\gamma$ involves only rational numbers,
and hence the game can be represented by a constant number of bits (i.e.\ independent of the circuit $C$). Similarly, the additional dummy strategy has a constant delay of $\nicefrac{\tilde{a}_{\min}}{\bar{\alpha}}$,
and can be represented with a single rational number. Merging both $\mathcal{G}^d_\mu$ and $\tilde{\mathcal{G}}_\gamma$ into a single game $\tilde{\mathcal{G}}_C$ can be done in linear time. Since $C$ has a satisfying assignment iff $\tilde{\mathcal{G}}_C$ has an $\alpha$-PNE (or $\bar{\alpha}$-PNE), this concludes that the problem described is NP-hard.
If $\alpha$ is polynomial-time computable, the problem is clearly in NP: given a weighted polynomial congestion game of degree $d$ and a strategy profile $\vecc{s}$, one can check if $\vecc{s}$ is an $\alpha$-PNE by computing the ratios between the cost of each player in $\vecc{s}$ and their cost for each possible deviation, and comparing these ratios with $\alpha$.
\end{proof}
Combining the hardness result of \cref{th:hardness_existence} together with the
nonexistence result of \cref{th:nonexistence} we get the following corollary, which
is the main result of this section.
\begin{corollary}
\label{th:hardness_existence2}
For any integer $d\geq 2$ and real $\alpha\in[1,\alpha(d))$, it is NP-hard to decide whether (weighted) polynomial congestion games of degree $d$ have an
$\alpha$-approximate PNE, where $\alpha(d)=\tilde{\varOmega}(\sqrt{d})$ is the same
as in~\cref{th:nonexistence}. If in addition $\alpha$ is polynomial-time computable, the aforementioned problem is NP-complete.
\end{corollary}
Notice that, in the proof of \cref{th:bb_hardness,th:hardness_existence}, we constructed a polynomial-time reduction from \textsc{Circuit Satisfiability} to the problem of determining whether a given congestion game has an $\alpha$-PNE. Not only does this reduction map YES-instances of one problem to YES-instances of the other, but it also induces a bijection between the sets of satisfying assignments of a circuit $C$ and $\alpha$-PNE of the corresponding game $\tilde{\mathcal{G}}_C$. That is, this reduction is \emph{parsimonious}. As a consequence, we can directly lift hardness of problems associated with counting satisfying assignments to \textsc{Circuit Satisfiability} into problems associated with counting $\alpha$-PNE of congestion games:
\begin{corollary}
\label{th:hardness_counting}
Let $k\geq 1$ and $d\geq 2$ be integers and $\alpha\in[1,\alpha(d))$ where $\alpha(d)=\tilde{\varOmega}(\sqrt{d})$ is the same as in~\cref{th:nonexistence}. Then
\begin{itemize}
\item it is {\#}P-hard to count the number of $\alpha$-approximate PNE of (weighted) polynomial congestion games of degree $d$;
\item it is NP-hard to decide whether a (weighted) polynomial congestion game of degree $d$ has at least $k$ distinct $\alpha$-approximate PNE.
\end{itemize}
\end{corollary}
\begin{proof}
The hardness of the first problem comes from the {\#}P-hardness of the counting version of \textsc{Circuit Satisfiability} (see, e.g., \citep[Ch.~18]{PapadimitriouComplexity}). For the hardness of the second problem, it is immediate to see that the following problem is NP-complete, for any fixed integer $k\geq 1$: given a circuit $C$, decide whether there are at least $k$ distinct satisfying assignments for $C$ (simply add ``dummy'' variables to the description of the circuit).
\end{proof}
\section{The Hardness Gadget} \label{sec:circuit}
In this section we construct an unweighted polynomial congestion game from a Boolean
circuit. In the $\alpha$-PNE of this game the players emulate the computation of the
circuit. This gadget will be used in reductions from \textsc{Circuit Satisfiability}
to show NP-hardness of several problems related to the existence of approximate
equilibria with some additional properties. For example, deciding whether a
congestion game has an $\alpha$-PNE where a certain set of players choose a specific
strategy profile (\cref{th:hardness_PNE}).
\paragraph*{Circuit Model}
We consider Boolean circuits consisting of NOT gates and 2-input NAND gates only. We assume that the two inputs to every NAND gate are different. Otherwise we replace the NAND gate by a NOT gate, without changing the semantics of the circuit.
We further assume that every input bit is connected to exactly one gate and this gate is a NOT gate. See \Cref{subfig:validcirc} for a \emph{valid} circuit.
In a valid circuit we replace every NOT gate by an equivalent NAND gate, where one of the inputs is fixed to 1.
See the replacement of gates $g_5, g_4$ and $g_2$ in the example in \Cref{subfig:elimNotgates}.
Thus, we look at circuits of 2-input NAND gates where both inputs to a NAND gate are different and every input bit of the circuit is connected to exactly one NAND gate where the other input is fixed to 1. A circuit of this form is said to be in \emph{canonical form}.
For a circuit $C$ and a vector $x \in \set{0,1}^n$ we denote by $C(x)$ the output of the circuit on input $x$.
We model a circuit $C$ in canonical form as a \emph{directed acyclic graph}. The nodes of this graph correspond to the input bits $x_1, \ldots, x_n$, the gates $g_1, \ldots, g_K$ and a node $1$ for all fixed inputs.
There is an arc from a gate $g$ to a gate $g'$ if the output of $g$ is input to gate $g'$ and there are arcs from the fixed input and all input bits to the connected gates.
We index the gates in reverse topological order, so that all successors of a gate $g_k$ have a smaller index and the output of gate $g_1$ is the output of the circuit.
Denote by $\delta^+(v)$ the set of the direct successors of node $v$.
Then we have $|\delta^+(x_i)| = 1$ for all input bits $x_i$ and $\delta^+(g_k) \subseteq \set[g_{k'}]{k' < k} $ for every gate $g_k$.
See \Cref{fig:circ} for an example of a valid circuit, its canonical form and the corresponding directed acyclic graph.
\begin{figure}[t]
\begin{subfigure}{0.33 \textwidth}
\centering
\includegraphics[trim={0.6cm 24.7cm 0 0}, clip, scale = 1.1]{figure_valid-circuit.pdf}
\caption{valid circuit $C$}
\label{subfig:validcirc}
\end{subfigure} \quad
\begin{subfigure}{0.3 \textwidth}
\centering
\includegraphics[trim={0.6cm 24.7cm 14cm 0}, clip, scale = 1.1]{figure_circuit-canonical.pdf}
\caption{canonical form of $C$}
\label{subfig:elimNotgates}
\end{subfigure} \qquad
\begin{subfigure}{0.3 \textwidth}
\centering
\includegraphics[scale=1.15]{figure_circuit-dag.pdf}
\caption{directed acyclic graph}
\label{subfig:circuitdag}
\end{subfigure}
\caption{Example of a valid circuit $C$ (having both NOT and NAND gates), its canonical form (having only NAND gates), and the directed acyclic graph corresponding to $C$.}
\label{fig:circ}
\end{figure}
\paragraph*{Translation to Congestion Game}
Fix some integer $d \ge 1$ and a parameter $\mu \ge 1 + 2\cdot3^{d + \nicefrac{d}{2}}$.
From a valid circuit in canonical form with input bits $x_1, \ldots, x_n$, gates $g_1, \ldots, g_K$ and the extra input fixed to $1$, we construct a polynomial congestion game $\mathcal{G}_\mu^d$ of degree $d$.
There are $n$ \emph{input players} $X_1, \ldots, X_n$, one for each input bit, a \emph{static player} $P$ for the input fixed to $1$, and $K$ \emph{gate players} $G_1, \ldots, G_K$, one for the output bit of each gate. $G_1$ is sometimes called the \emph{output player}, as $g_1$ corresponds to the output $C(x)$.
The idea is that every input and every gate player have a \emph{zero} and a \emph{one strategy}, corresponding to the respective bit being $0$ or $1$. In every $\alpha$-PNE we want the players to emulate the computation of the circuit, i.e.~the NAND semantics of the gates should be respected.
For every gate $g_k$, we introduce two \emph{resources} $\resourceZero{k}$ and $\resourceOne{k}$. The zero (one) strategy of a player consists of the $\resourceZero{k'}$ ($\resourceOne{k'}$) resources of the direct successors in the directed acyclic graph corresponding to the circuit and its own $\resourceZero{k}$ ($\resourceOne{k}$) resource (for gate players). The static player has only one strategy playing all $\resourceOne{k'}$ resources of the gates where one input is fixed to 1.
Formally, we have
\begin{equation*}
s_{X_i}^0 = \set[\resourceZero{k}]{g_k \in \delta^+(x_i)} \text{ and } s_{X_i}^1 = \set[\resourceOne{k}]{g_k \in \delta^+(x_i)}
\end{equation*}
for the zero and one strategy of an input player $X_i$. Recall that $\delta^+(x_i)$ is the set of direct successors of $x_i$, thus every strategy of an input player consists of exactly one resource.
For a gate player $G_k$ we have the two strategies
\begin{equation*}
s_{G_k}^0 = \set{\resourceZero{k}} \cup \set[\resourceZero{k'}]{g_{k'} \in \delta^+(g_k)} \text{ and } s_{G_k}^1 = \set{\resourceOne{k}} \cup \set[\resourceOne{k'}]{g_{k'} \in \delta^+(g_k)}
\end{equation*}
consisting of at most $k$ resources each.
The strategy of the static player is $s_P = \set[\resourceOne{k}]{g_k \in \delta^+(1)}$.
Notice that all $3$ players related to a gate $g_k$ (gate player $G_k$ and the two players corresponding to the input bits) are different and observe that every resource $\resourceZero{k}$ and $\resourceOne{k}$ can be played by exactly those $3$ players.
We define the cost functions of the resources using parameter $\mu$.
The cost functions for resources $\resourceOne{k}$ are given by $\costOne{k}$ and for resources $\resourceZero{k}$ by $\costZero{k}$, where
\begin{equation}
\costOne{k}(x) = \mu^k x^d \qquad \text{and} \qquad
\costZero{k}(x) = \lambda \mu^k x^d
\text{, with }
\lambda = 3^{\nicefrac{d}{2}}
.
\label{eq:lambda_def}
\end{equation}
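For readers who prefer pseudocode, the following Python sketch assembles the strategy sets and resource costs just described; the data layout is hypothetical and purely illustrative (\texttt{succ} maps each node of the directed acyclic graph to the indices of its direct successor gates, and nodes are named \texttt{x1},\dots,\texttt{xn}, \texttt{g1},\dots,\texttt{gK} and \texttt{one}).
\begin{verbatim}
def build_strategies(n_inputs, n_gates, succ):
    S = {}
    for i in range(1, n_inputs + 1):                  # input players
        S[f"X{i}"] = {0: {f"r0_{k}" for k in succ[f"x{i}"]},
                      1: {f"r1_{k}" for k in succ[f"x{i}"]}}
    for k in range(1, n_gates + 1):                   # gate players
        S[f"G{k}"] = {0: {f"r0_{k}"} | {f"r0_{j}" for j in succ[f"g{k}"]},
                      1: {f"r1_{k}"} | {f"r1_{j}" for j in succ[f"g{k}"]}}
    S["P"] = {0: {f"r1_{k}" for k in succ["one"]}}    # static player: one strategy
    return S

def resource_costs(n_gates, d, mu):
    lam = 3 ** (d / 2)                                # lambda = 3^(d/2)
    c = {}
    for k in range(1, n_gates + 1):
        c[f"r1_{k}"] = lambda x, k=k: mu ** k * x ** d
        c[f"r0_{k}"] = lambda x, k=k: lam * mu ** k * x ** d
    return c
\end{verbatim}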
Our construction here is inspired by the lockable circuit games of Skopalik and
Vöcking~\cite{Skopalik2008}. The key technical differences are that our gadgets use
polynomial cost functions (instead of general cost functions) and only $2$ resources
per gate (instead of $3$). Moreover, while in~\cite{Skopalik2008} these games are used as
part of a PLS-reduction from \textsc{Circuit/FLIP}, we are also interested in
constructing a gadget to be studied on its own, since this can give rise to
additional results of independent interest (see~\cref{th:hardness_PNE}).
\paragraph*{Properties of the Gadget}
For a valid circuit $C$ in canonical form consider the game $\mathcal{G}^d_\mu$ as defined above.
We interpret any strategy profile $\vecc{s}$ of the input players as a bit vector $x \in \set{0,1}^n$ by setting $x_i = 0$ if $s_{X_i} = s_{X_i}^0$ and $x_i = 1 $ otherwise.
The gate players are said to \emph{follow the NAND semantics} in a strategy profile, if for every gate $g_k$ the following holds:
\begin{itemize}
\item if both players corresponding to the input bits of $g_k$ play their one strategy, then the gate player $G_k$ plays her zero strategy;
\item if at least one of the players corresponding to the input bits of $g_k$ plays her zero strategy, then the gate player $G_k$ plays her one strategy.
\end{itemize}
We show that for the right choice of $\alpha$, the set of $\alpha$-PNE in $\mathcal{G}^d_\mu$ is the same as the set of all strategy profiles where the gate players follow the NAND semantics.
Define
\begin{equation}
\label{eq:epsilon_mu_def}
\varepsilon(\mu) = \frac{3^{d + \nicefrac{d}{2}}}{\mu - 1}.
\end{equation}
From our choice of $\mu$, we obtain $ 3^{\nicefrac{d}{2}} - \varepsilon(\mu)
\ge 3^{\nicefrac{d}{2}} - \frac{1}{2} > 1 $.
For any valid circuit $C$ in canonical form and a valid choice of $\mu$ the following lemma holds for $\mathcal{G}^d_\mu$.
\begin{lemma}
\label{lem:circuit_NANDsemantics}
Let $\vecc{s}_X$ be any strategy profile for the input players $X_1, \ldots, X_n$ and let $x \in \set{0,1}^n$ be the bit vector represented by $\vecc{s}_X$.
For any $\mu \ge 1+2\cdot3^{d+\nicefrac{d}{2}}$ and any $1 \le \alpha < 3^{\nicefrac{d}{2}} -
\varepsilon(\mu)$, there is a unique $\alpha$-approximate PNE\footnote{Which, as a
matter of fact, is actually also an \emph{exact} PNE.} in $\mathcal{G}^d_\mu$ where the input
players play according to $\vecc{s}_X$. In particular, in this $\alpha$-PNE the gate players follow the NAND semantics, and the output player $G_1$ plays
according to $C(x)$.
\end{lemma}
\begin{proof}
Let $\mu\geq1+2\cdot3^{d+\nicefrac{d}{2}}$ and $\alpha < 3^{\nicefrac{d}{2}} - \varepsilon(\mu)$.
First, we fix the input players to the strategies given by $\vecc{s}_X$ and show that in any $\alpha$-PNE every gate player follows the NAND semantics, as otherwise changing to the strategy corresponding to the NAND of its input bits is an $\alpha$-improving move.
Second, we show that in any $\alpha$-PNE where the gate players follow the NAND semantics, the input players have no incentive to change their strategy.
In total, we get that every strategy profile for the input players can be extended to an $\alpha$-PNE in which the gate players emulate the circuit; since the NAND semantics uniquely determine the strategies of all remaining players, this $\alpha$-PNE is unique.
Let $\vecc{s}_X$ be any strategy profile for the input players $X_1, \ldots, X_n$ and let $\vecc{s}$ be an $\alpha$-PNE of $\mathcal{G}^d_\mu$ where the input players play according to $\vecc{s}_X$.
Take $G_k$ to be any of the gate players and let $P_a$ and $P_b$ be the players corresponding to the input bits of gate $g_k$. Note that $P_a$ and $P_b$ can be other gate players or input players, and one of them can be the static player.
To show that $G_k$ follows the NAND semantics we consider two cases.
\emph{\underline{Case 1:} Both $P_a$ and $P_b$ play their one strategy in $\vecc{s}$.}
As both $P_a$ and $P_b$ play resource $\resourceOne{k}$ and all three players $P_a, P_b$ and $G_k$ are different, the cost of $G_k$'s one strategy is at least $\costOne{k}(3)$.
The cost of $G_k$'s zero strategy is at most $\costZero{k}(1) + \sum_{k'=1}^{k-1} \costZero{k'}(3)$.
Thus, we have
\begin{equation*}
\frac{C_{G_k}(s_{G_k}^1, \vecc{s}_{-G_k})}{C_{G_k}(s_{G_k}^0, \vecc{s}_{-G_k})}
\ge \frac{\costOne{k}(3)}{\costZero{k}(1) + \sum_{k'=1}^{k-1} \costZero{k'}(3)}
= \frac{\mu^k 3^d}{\lambda \mu^k + \sum_{k'=1}^{k-1} \lambda \mu^{k'} 3^d}
> \frac{3^d}{\lambda} \parens{ \frac{1}{1 + \frac{1}{\mu - 1} 3^d } },
\end{equation*}
where we used that $ \frac{1}{\mu^k} \sum_{k'=1}^{k-1} \mu^{k'} = \frac{1}{\mu^k} \parens{\frac{\mu ^ k - \mu}{\mu - 1}} < \frac{1}{\mu - 1} $.
By the definition of $\lambda$ (see~\eqref{eq:lambda_def}) and $\varepsilon(\mu)$ (see~\eqref{eq:epsilon_mu_def}), we obtain
\begin{equation}\label{eq:nandsemantics_aux_1}
\frac{3^d}{\lambda} \parens{ \frac{1}{1 + \frac{1}{\mu - 1} 3^d } }
= 3^{\nicefrac{d}{2}} \parens{\frac{1}{1 + \frac{1}{\mu - 1} 3^d } }
> 3^{\nicefrac{d}{2}} \parens{1 - \frac{1}{\mu - 1} 3^d }
= 3^{\nicefrac{d}{2}} - \varepsilon(\mu)
> \alpha
.
\end{equation}
Hence, changing from the one to the zero strategy would be an $\alpha$-improving move for $G_k$.
Thus, $G_k$ must follow the NAND semantics and play her zero strategy in $\vecc{s}$.
\emph{\underline{Case 2:} At least one of $P_a$ or $P_b$ is playing her zero strategy in $\vecc{s}$.}
By similar arguments to the previous case, we obtain that the cost of $G_k$'s zero strategy is at least $\costZero{k}(2)$ and the cost of the one strategy is at most
$ \costOne{k}(2) + \sum_{k'=1}^{k-1} \costOne{k'}(3) $.
Then, we get that
\begin{equation*}
\frac{C_{G_k}(s_{G_k}^0, \vecc{s}_{-G_k})}{C_{G_k}(s_{G_k}^1, \vecc{s}_{-G_k})}
\ge \frac{\costZero{k}(2)}{\costOne{k}(2) + \sum_{k'=1}^{k-1} \costOne{k'}(3)}
= \frac{\lambda \mu^k 2^d}{\mu^k 2^d + \sum_{k'=1}^{k-1} \mu^{k'} 3^d}
> \lambda \parens{ \frac{1}{1 + \frac{1}{\mu - 1} \parens{\frac{3}{2}}^d } }
.
\end{equation*}
By the definition of $\lambda$ and $\varepsilon(\mu)$, we obtain
\begin{equation}\label{eq:nandsemantics_aux_2}
\lambda \parens{ \frac{1}{1 + \frac{1}{\mu - 1} \parens{\frac{3}{2}}^d } }
> 3^{\nicefrac{d}{2}} \parens{\frac{1}{1 + \frac{1}{\mu - 1} 3^d } }
> 3^{\nicefrac{d}{2}} \parens{1 - \frac{1}{\mu - 1} 3^d }
= 3^{\nicefrac{d}{2}} - \varepsilon(\mu)
>\alpha
.
\end{equation}
Hence, changing from the zero to the one strategy would be an $\alpha$-improving move for $G_k$.
Thus, $G_k$ must follow the NAND semantics and play her one strategy in $\vecc{s}$.
We just showed that, in an $\alpha$-PNE, every gate player must follow the NAND semantics.
This implies that there is \emph{at most one} $\alpha$-PNE where the input players play according to $\vecc{s}_X$, since the NAND semantics uniquely define the strategy of the remaining players.
To conclude the proof, we must argue that this yields in fact an $\alpha$-PNE, meaning that the input players are also `locked' to their strategies in $\vecc{s}_X$ and have no incentive to deviate.
To that end, let $\vecc{s}$ be a strategy profile of $\mathcal{G}^d_\mu$ in which the gate
players follow the NAND semantics, and let $X_i$ be any of the input players.
Recall that every input bit $x_i$ is connected to exactly one gate, say
$g_k=g_{k(i)}$, while the other input is fixed to 1. To show that $X_i$ does not
have an incentive to change her strategy, we consider two cases.
\emph{\underline{Case 1:} $X_i$ plays her zero strategy in $\vecc{s}$.}
As $G_k$ follows the NAND semantics in $\vecc{s}$ and the other input of $g_k$ is fixed to $1$, we know that $G_k$ must be playing her one strategy.
This incurs a cost of $\costZero{k}(1) = \lambda \mu^k$ to $X_i$.
On the other hand, if $X_i$ changed to her one strategy this would incur a cost of $\costOne{k}(3) = \mu^k 3^d$.
\emph{\underline{Case 2:} $X_i$ plays her one strategy in $\vecc{s}$.}
As $G_k$ follows the NAND semantics in $\vecc{s}$ and the other input of $g_k$ is fixed to $1$, we know that $G_k$ must be playing her zero strategy.
Thus, incurring a cost of $\costOne{k}(2) = \mu^k 2^d$ for $X_i$.
On the other hand, if $X_i$ changed to her zero strategy this would incur a cost of $\costZero{k}(2) = \lambda \mu^k 2^d $.
In both cases it is $\alpha$-dominating for $X_i$ not to change her strategy, since $\alpha< 3^{\nicefrac{d}{2}}-\varepsilon(\mu)<3^{\nicefrac{d}{2}} = \nicefrac{3^d}{\lambda} = \lambda$.
\end{proof}
We are now ready to show our main result of this section; using the circuit game
described above, we show NP-hardness of deciding whether approximate equilibria with
additional properties exist.
\begin{theorem}
\label{th:hardness_PNE}
The following problems are NP-hard, even for \emph{unweighted} polynomial congestion games of degree $d\geq 1$, for all $\alpha\in [1,3^{\nicefrac{d}{2}})$ and all $z>0$:
\begin{itemize}
\item ``Does there exist an $\alpha$-approximate PNE in which a certain subset of players are playing a specific strategy profile?''
\item ``Does there exist an $\alpha$-approximate PNE in which a certain resource is used by at least one player?''
\item ``Does there exist an $\alpha$-approximate PNE in which a certain player has cost at most $z$?''
\end{itemize}
\end{theorem}
\begin{proof}
For the first problem we reduce from \textsc{Circuit Satisfiability}: given a Boolean circuit with $n$ input bits and one output bit, is there an assignment of the input bits where the output of the circuit is $1$?
This problem is NP-hard even for circuits consisting only of 2-input NAND gates \citep{PapadimitriouComplexity}.
Let $C'$ be a Boolean circuit of 2-input NAND gates.
We transform $C'$ into a valid circuit $C$ by connecting every input bit to a NOT gate and the output of this NOT gate to all gates connected to the input bit in $C'$.
Thus, $C'(x) = C(\bar{x})$, where $\bar{x}$ denotes the vector obtained from $x \in \set{0,1}^n$ by flipping every bit.
Hence, we have that $C'$ is a YES-instance to \textsc{Circuit Satisfiability} if and only if $C$ is a YES-instance.
Let $\alpha \in [1,3^{\nicefrac{d}{2}})$, then there is an $\varepsilon > 0$ with $\alpha < 3^{\nicefrac{d}{2}} - \varepsilon$. We set $\mu =1 + \frac{3^{d + \nicefrac{d}{2}}}{ \min \set{\varepsilon,1}}$.
For this choice of $\mu$, we obtain $\varepsilon(\mu) \le \varepsilon$ and thus $3^{\nicefrac{d}{2}} - \varepsilon(\mu) \ge 3^{\nicefrac{d}{2}} - \varepsilon > \alpha$.
From the canonical form of $C'$ we construct\footnote{To be precise, the description of the game $\mathcal{G}^d_\mu$ involves the quantities $\mu=1 + \frac{3^{d + \nicefrac{d}{2}}}{ \min \set{\varepsilon,1}}$ and $\lambda=3^{\nicefrac{d}{2}}$, which in general might be irrational. In order to incorporate this game into our reduction, it is enough to choose a rational $\mu$ such that $\mu>1 + \frac{3^{d + \nicefrac{d}{2}}}{ \min \set{\varepsilon,1}}$,
and a rational $\lambda$ such that $\alpha\left(1+\frac{1}{\mu-1}3^d\right)<\lambda<\frac{3^d}{\alpha\left(1+\frac{1}{\mu-1}3^d\right)}$. In this way, $\mathcal{G}^d_\mu$ is described entirely via rational numbers, while preserving the inequalities in \eqref{eq:nandsemantics_aux_1} and \eqref{eq:nandsemantics_aux_2}.} the game $\mathcal{G}^d_\mu$.
The subset of players we are looking at is the output player $G_1$ and the specific strategy for $G_1$ is her one strategy $s_{G_1}^1$.
We show that there is an $\alpha$-PNE where $G_1$ plays $s_{G_1}^1$ if and only if $C$ is a YES-instance to \textsc{Circuit Satisfiability}.
Suppose there is a bit vector $x \in \set{0,1}^n$ such that $C(x) = 1$.
Let $\vecc{s}_X$ be the strategy profile for the input players of $\mathcal{G}^d_\mu$ corresponding to $x$.
Since $3^{\nicefrac{d}{2}} - \varepsilon(\mu) > \alpha$, \Cref{lem:circuit_NANDsemantics} holds for $\mathcal{G}^d_\mu$ and $\alpha$.
Hence, the profile $\vecc{s}_X$ can be extended to an $\alpha$-PNE where $G_1$ plays according to $C(x)$. Thus, there is an $\alpha$-PNE where $G_1$ plays $s_{G_1}^1$.
On the other hand, suppose for all bit vectors $x \in \set{0,1}^n$ it holds $C(x) = 0$.
Again, by \Cref{lem:circuit_NANDsemantics} we know that for any choice of strategies for the input players, the only $\alpha$-PNE is a profile where the gate players follow the NAND semantics.
Thus in this case, $G_1$ is playing $s_{G_1}^0$ in any $\alpha$-PNE.
To show NP-hardness of the second problem we reduce from the first part of this theorem. From the proof above we know that the first problem is NP-hard even when the given subset of players consists of a single player.
Let $(\mathcal{G}, P, s_P)$ be an instance of this problem, i.e.~$\mathcal{G}$ is an unweighted polynomial congestion game of degree $d$, $P$ is one of its players and $s_P$ is a specific strategy for $P$.
We construct a new game $\mathcal{G}'$ from $\mathcal{G}$ by adding a new resource $r$ with cost $0$ to the strategy $s_P$. The rest of the game $\mathcal{G}$ stays unchanged. As $r$ does not incur any cost, the set of $\alpha$-PNE in $\mathcal{G}'$ and $\mathcal{G}$ are the same.
Thus, there exists an $\alpha$-PNE in $\mathcal{G}'$ where the resource $r$ is used by at least one player if and only if there is an $\alpha$-PNE in $\mathcal{G}$ where $P$ plays $s_P$.
The hardness of the third problem is shown by a reduction from \textsc{Circuit Satisfiability}, similar to the proof for the first problem. For $\alpha \in [1, 3^{\nicefrac{d}{2}})$ we choose $\mu$ as before, so that \Cref{lem:circuit_NANDsemantics} holds for $\alpha$ and the game $\mathcal{G}_\mu^d$ for a suitable circuit.
Let $C'$ be an instance of \textsc{Circuit Satisfiability}. By negating the output of $C'$, we obtain a circuit $\overline{C'}$. As before we transform $\overline{C'}$ to a valid circuit $\overline{C}$, so that $C'(x) = \lnot \overline{C}(\bar{x})$ holds. From the canonical form of $\overline{C}$ we construct the game $\mathcal{G}^d_\mu$. Note that the output player $G_1$ of this game is the output of a gate, where one of the inputs is fixed to $1$, as we negated the output of $C'$ by connecting the output of $C'$ to a NOT gate.
We show that there is an $\alpha$-PNE in $\mathcal{G}^d_\mu$, where $G_1$ has cost at most $\lambda \mu$, if and only if $C'$ is a YES-instance to \textsc{Circuit Satisfiability}.
Suppose there is a bit vector $x \in \set{0,1}^n$ with $C'(x) = 1$, then there is a vector $\bar{x}$ with $\overline{C}(\bar{x}) = 0$. Let $\vecc{s}_X$ be the strategy profile for the input players of $\mathcal{G}^d_\mu$ corresponding to $\bar{x}$. By \Cref{lem:circuit_NANDsemantics} this profile can be extended to an $\alpha$-PNE,
where $G_1$ is playing her zero strategy.
As the gate players follow the NAND semantics in this PNE, the cost of player $G_1$ is exactly $\costZero{1}(1) = \lambda \mu$.
If, on the other hand, for all bit vectors $x \in \set{0,1}^n$ we have $C'(x) = 0$, then for all $\bar{x} \in \set{0,1}^n$ we have $\overline{C}(\bar{x}) = 1$. Thus, using \Cref{lem:circuit_NANDsemantics} we know that in every $\alpha$-PNE $G_1$ plays her one strategy.
As $G_1$ follows the NAND semantics in any $\alpha$-PNE and the player corresponding to one of the inputs of $g_1$ is the static player, we obtain that the cost of $G_1$ is exactly $\costOne{1}(2) = \mu 2^d$.
Noticing that $\lambda = 3^{\nicefrac{d}{2}} < 2^d$, we have deduced the following: either $C'$ is a YES-instance, and $\mathcal{G}^d_\mu$ has an $\alpha$-PNE where $G_1$ has a cost of (at most) $\lambda \mu$; or $C'$ is a NO-instance, and for every $\alpha$-PNE of $\mathcal{G}^d_\mu$, $G_1$ has a cost of (at least) $2^d \mu$. This immediately implies that determining whether an $\alpha$-PNE exists in which a certain player has cost at most $z$ is NP-hard for $\lambda\mu<z<2^d\mu$.
To prove that the problem remains NP-hard for an arbitrary $z>0$, simply take a rational $c$ such that $c\lambda\mu<z<c2^d\mu$ and rescale all costs of the resources in $\mathcal{G}^d_\mu$ by $c$.
%
\end{proof}
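As a purely illustrative sanity check (no substitute for the argument above), the cost comparisons in \Cref{lem:circuit_NANDsemantics} can also be verified numerically. The following Python sketch checks both improving-move ratios for all gates up to a chosen depth, for sample values of $d$ and $\varepsilon$, with $\mu$ and $\lambda$ chosen as in the proof of \cref{th:hardness_PNE}.
\begin{verbatim}
import math

def check_gadget(d, eps, K=50):
    lam = 3 ** (d / 2)                       # lambda = 3^{d/2}
    mu = 1 + 3 ** (d + d / 2) / min(eps, 1)  # mu as chosen in the reduction
    alpha = 3 ** (d / 2) - eps               # any alpha below 3^{d/2}-eps(mu)
    for k in range(1, K + 1):
        # Case 1: lower bound for the one strategy of gate G_k versus
        # an upper bound for its zero strategy
        one = mu ** k * 3 ** d
        zero = lam * mu ** k + sum(lam * mu ** j * 3 ** d for j in range(1, k))
        if one / zero <= alpha:
            return False
        # Case 2: the symmetric comparison
        zero2 = lam * mu ** k * 2 ** d
        one2 = mu ** k * 2 ** d + sum(mu ** j * 3 ** d for j in range(1, k))
        if zero2 / one2 <= alpha:
            return False
    return True

print(check_gadget(d=3, eps=0.5))            # expected: True
\end{verbatim}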
\section{Introduction}\label{sec:intro}
\emph{Congestion games} constitute the standard framework to study settings where
selfish players compete over common resources. They are one of the most well-studied
classes of games within the field of \emph{algorithmic game
theory}~\citep{Roughgarden2016,2007a}, covering a wide range of applications,
including, e.g., traffic routing and load balancing.
In their most general form, each player has her own weight and the latency on each
resource is a nondecreasing function of the total weight of players that occupy it.
The cost of a player on a given outcome is just the total latency that she is
experiencing, summed over all the resources she is using.
The canonical approach to analysing such systems and predicting the behaviour of the
participants is the ubiquitous game-theoretic tool of equilibrium analysis.
More specifically, we are interested in the \emph{pure Nash equilibria (PNE)} of
those games; these are stable configurations from which no player would benefit from
unilaterally deviating. However, it is a well-known fact that such desirable
outcomes might not always exist, even in very simple weighted congestion games. A natural
response, especially from a computer science perspective, is to relax the solution
notion itself by considering \emph{approximate} pure Nash equilibria ($\alpha$-PNE);
these are states from which, even if a player could improve her cost by deviating,
this improvement could not be by more than a (multiplicative) factor of $\alpha\geq
1$. Allowing the parameter $\alpha$ to grow sufficiently large, existence of $\alpha$-PNE is restored.
But how large does $\alpha$ really \emph{need} to be? And,
perhaps more importantly from a computational perspective, how hard is it to check
whether a specific game has indeed an $\alpha$-PNE?
\subsection{Related Work}\label{sec:related}
The origins of the systematic study of (atomic) congestion games can be traced back
to the influential work of~\citet{Rosenthal1973a,Rosenthal1973b}. Although Rosenthal
showed the existence of congestion games without PNE, he also proved that
\emph{unweighted} congestion games always possess such equilibria. His proof is
based on a simple but ingenious \emph{potential function} argument, which up to this
day is essentially still the only general tool for establishing existence of pure equilibria.
In follow-up work~\citep{Goemans2005,Libman2001,Fotakis2005a}, the nonexistence of
PNE was demonstrated even for special simple classes of (weighted) games, including
network congestion games with quadratic cost functions and games where the player
weights are either $1$ or $2$. On the other hand, we know that equilibria do exist
for affine or exponential
latencies~\citep{Fotakis2005a,Panagopoulou2007,Harks2012a}, as well as for the class
of singleton\footnote{These are congestion games where the players can only occupy
single resources.} games~\citep{Fotakis2009a,Harks2012}. \citet{Dunkel2008} were
able to extend the nonexistence instance of~\citet{Fotakis2005a} to a gadget in order to show that deciding
whether a congestion game with step cost functions has a PNE is a (strongly) NP-hard problem, via a reduction from \textsc{3-Partition}.
Regarding approximate equilibria, \citet{Hansknecht2014} gave instances of very
simple, two-player polynomial congestion games that do not have $\alpha$-PNE, for $\alpha\approx
1.153$. This lower bound is achieved by numerically solving an optimization
program, using polynomial latencies of maximum degree $d=4$. On the positive side,
\citet{Caragiannis2011} proved that $d!$-PNE always exist; this upper bound on the
existence of $\alpha$-PNE was later improved to
$\alpha=d+1$~\citep{Hansknecht2014,cggs2018-journal} and
$\alpha=d$~\citep{Caragiannis:2019aa}.
\subsection{Our Results and Techniques}\label{sec:results}
After formalizing our model in~\cref{sec:model}, in~\cref{sec:nonexistence} we show
the nonexistence of $\varTheta(\frac{\sqrt{d}}{\ln d})$-approximate equilibria for
polynomial congestion games of degree $d$. This is the first super-constant lower
bound on the nonexistence of $\alpha$-PNE, significantly improving upon the previous
constant of $\alpha\approx 1.153$ and reducing the gap with the currently best upper
bound of $d$. More specifically (\cref{th:nonexistence}), for any integer $d$ we
construct congestion games with polynomial cost functions of maximum degree $d$ (and
nonnegative coefficients) that do not have $\alpha$-PNE, for any $\alpha<\alpha(d)$
where $\alpha(d)$ is a function that grows as
$\alpha(d)=\varOmega\left(\frac{\sqrt{d}}{\ln d}\right)$.
To derive this bound, we had to use a novel construction with a number of players
growing unboundedly as a function of $d$.
Next, in~\cref{sec:circuit} we turn our attention to computational hardness
constructions. Starting from a Boolean circuit, we create a gadget that transfers
hard instances of the classic \textsc{Circuit Satisfiability} problem to (even
unweighted) polynomial congestion games. Our construction is inspired by the work of Skopalik and Vöcking~\cite{Skopalik2008}, who used a similar family of lockable circuit games in their PLS-hardness result. Using this gadget we can immediately
establish computational hardness for various computational questions of interest
involving congestion games (\cref{th:hardness_PNE}). For example, we show that
deciding whether a $d$-degree polynomial congestion game has an $\alpha$-PNE in
which a specific set of players play a specific strategy profile is NP-hard, even up
to exponentially-approximate equilibria; more specifically, the hardness holds for
\emph{any} $\alpha < 3^{\nicefrac{d}{2}}$.
Our investigation of the hardness questions presented in~\cref{th:hardness_PNE} (and
later on in~\cref{th:hardness_counting} as well)
was inspired by some similar results presented before by Conitzer and Sandholm
\cite{Conitzer:2008aa} (and even earlier in~\cite{Gilboa1989}) for \emph{mixed} Nash
equilibria in general (normal-form) games.
To the best of our knowledge, our paper is the first to study these questions for
\emph{pure} equilibria in the context of congestion games.
It is of interest to also note here that our
hardness gadget is \emph{gap-introducing}, in the sense that the $\alpha$-PNE and exact PNE of the game coincide.
In~\cref{sec:hardness_existence} we demonstrate how one can combine the hardness
gadget of ~\cref{sec:circuit}, in a black-box way, with any nonexistence instance
for $\alpha$-PNE, in order to derive hardness for the decision version of the
existence of $\alpha$-PNE (\cref{th:bb_hardness}, \cref{th:hardness_existence}). As
a consequence, using the previous $\varOmega\left(\frac{\sqrt{d}}{\ln d}\right)$ lower bound
construction of~\cref{sec:nonexistence}, we can show that deciding whether a
(weighted) polynomial congestion game has an $\alpha$-PNE is NP-hard, for any
$\alpha<\alpha(d)$, where $\alpha(d)=\varOmega\left(\frac{\sqrt{d}}{\ln d}\right)$
(\cref{th:hardness_existence2}). Since our hardness is established via a rather
transparent, ``master'' reduction from \textsc{Circuit Satisfiability}, which in
particular is parsimonious, one can derive hardness for a family of related
computation problems; for example, we show that computing the number of
$\alpha$-approximate equilibria of a weighted polynomial congestion game is \#P-hard
(\cref{th:hardness_counting}).
In~\cref{sec:general_costs} we drop the assumption on polynomial cost
functions, and study the existence of approximate equilibria under arbitrary
(nondecreasing) latencies as a function of the number of players $n$.
We prove that $n$-player congestion games always have $n$-approximate PNE
(\cref{th:existence_general_n}). As a consequence, one cannot hope to derive
super-constant nonexistence lower bounds by using just simple instances with a fixed
number of players (similar to, e.g., Hansknecht et al.~\cite{Hansknecht2014}).
In particular, this shows that the super-constant number of players in
our construction in~\cref{th:nonexistence} is necessary.
Furthermore, we pair
this positive result with an almost matching lower bound
(\cref{th:nonexistence_general_costs}): we give examples of $n$-player congestion
games (where latencies are simple step functions with a single breakpoint) that
do not have $\alpha$-PNE for all $\alpha<\alpha(n)$, where $\alpha(n)$ grows
according to $\alpha(n)=\varOmega\left(\frac{n}{\ln n}\right)$.
Finally, inspired by our hardness construction for the
polynomial case, we also give a new reduction that establishes NP-hardness for deciding
whether an $\alpha$-PNE exists, for any $\alpha<\alpha(n)= \varOmega\left(\frac{n}{\ln n}\right)$.
Notice that now the number of players $n$ is part of the description of the game (i.e., part of the input) as opposed to the maximum degree $d$ for the polynomial case (which was assumed to be fixed).
On the other hand though, we have more flexibility on designing our gadget latencies, since they can be arbitrary functions.
Concluding, we would like to elaborate on a couple of points. First, the reader will
have already noticed that in all our hardness results the (in)approximability
parameter $\alpha$ ranges freely within an entire interval of the form
$[1,\tilde\alpha)$, where $\tilde \alpha$ is a function of the degree $d$ (for
polynomial congestion games) or of the number of players $n$; and that $\alpha$, $\tilde\alpha$ are
\emph{not} part of the problem's input. It is easy to see that
these features only make our results stronger, with respect to computational
hardness, but also more robust. Second, although in this introductory section all our hardness
results were presented in terms of NP-\emph{hardness}, they immediately translate to
NP-\emph{completeness} under standard assumptions on the parameter $\alpha$; e.g., if $\alpha$
is rational (for a more detailed discussion of this, see also the end
of~\cref{sec:model}).
\section{Model and Notation} \label{sec:model}
A (weighted, atomic) \emph{congestion game} is defined by: a finite (nonempty) set of
\emph{resources} $E$, each $e\in E$ having a nondecreasing
\emph{cost (or latency) function} $c_e:\mathbb{R}_{>0} \longrightarrow \mathbb{R}_{\geq 0}$; and a finite (nonempty) set of
\emph{players} $N$, $\card{N}=n$, each $i\in N$ having a \emph{weight} $w_i>0$
and a set of \emph{strategies} $S_i\subseteq 2^E$.
If all players have the same weight, $w_i=1$ for all $i\in N$, the game is called
\emph{unweighted}.
A \emph{polynomial congestion game} of degree $d$, for $d$ a nonnegative integer, is a
congestion game such that all its cost functions are polynomials of degree at most $d$
with nonnegative coefficients.
A \emph{strategy profile} (or \emph{outcome}) $\vecc s=(s_1,s_2,\dots,s_n)$ is a
collection of strategies, one for each player, i.e.\ $\vecc s \in \vecc{S}=
S_1\times S_2
\times\dots\times S_n$. Each strategy profile $\vecc s$ induces a \emph{cost} of
$C_i(\vecc s)=\sum_{e\in s_i} c_e(x_e(\vecc s))$ to every player $i\in N$, where
$x_e(\vecc s)=\sum_{i: e\in s_i} w_i$ is the induced \emph{load} on resource $e$. An
outcome $\vecc s$ will be called \emph{$\alpha$-approximate (pure Nash) equilibrium
($\alpha$-PNE)}, where $\alpha\geq 1$, if no player can unilaterally improve her
cost by more than a factor of $\alpha$. Formally:
\begin{equation}
\label{eq:approx_PNE_def} C_i(\vecc s) \leq \alpha \cdot C_i\parens{s_i',\vecc s_{-i}}
\qquad \text{for all } i\in N \text{ and all } s_i'\in S_i.
\end{equation} Here we have used the standard game-theoretic notation of $\vecc
s_{-i}$ to denote the vector of strategies resulting from $\vecc s$ if we remove its
$i$-th coordinate; in that way, one can write $\vecc s=(s_i,\vecc s_{-i})$. Notice
that for the special case of $\alpha=1$, \eqref{eq:approx_PNE_def} is equivalent to
the classical definition of pure Nash equilibria; for emphasis, we will sometimes
refer to such $1$-PNE as \emph{exact} equilibria.
If \eqref{eq:approx_PNE_def} does not hold, it means that player $i$ could improve
her cost by more than $\alpha$ by moving from $s_i$ to some other strategy $s'_i$.
We call such a move \emph{$\alpha$-improving}. Finally, strategy $s_i$ is said to be
\emph{$\alpha$-dominating} for player $i$ (with respect to a fixed profile
$\vecc{s}_{-i}$) if
\begin{equation}
\label{eq:dominating_def} C_i \parens{s_i',\vecc s_{-i}} > \alpha \cdot C_i(\vecc s)
\qquad \text{ for all } s_i'\neq s_i.
\end{equation}
In other words, if a strategy $s_i$ is $\alpha$-dominating, every move from some other
strategy $s'_i$ to $s_i$ is $\alpha$-improving. Notice that each player $i$ can have at most one
$\alpha$-dominating strategy (for $\vecc{s}_{-i}$ fixed). In our proofs,
we will employ a \emph{gap-introducing} technique by constructing games with the
property that, for any player $i$ and any strategy profile $\vecc{s}_{-i}$, there is
always a (unique) $\alpha$-dominating strategy for player $i$. As a consequence, the
sets of $\alpha$-PNE and exact PNE coincide.
Finally, for a positive integer $n$, we will use $\Phi_n$ to denote the unique
positive solution of equation $(x+1)^n=x^{n+1}$. Then, $\Phi_n$ is strictly
increasing with respect to $n$, with $\Phi_1=\phi\approx 1.618$ (golden ratio) and
asymptotically $\Phi_n\sim \frac{n}{\ln n}$
(see~\citep[Lemma~A.3]{cggs2018-journal}).
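For concreteness, $\Phi_n$ is easy to approximate numerically: the function $x\mapsto(n+1)\ln x-n\ln(x+1)$ is strictly increasing on $(0,\infty)$ and vanishes exactly at $\Phi_n$, so bisection suffices. The following sketch is illustrative only and plays no role in our arguments.
\begin{verbatim}
import math

def Phi(n, tol=1e-12):
    # strictly increasing in x; changes sign exactly at Phi_n
    f = lambda x: (n + 1) * math.log(x) - n * math.log(x + 1)
    lo, hi = 1.0, 2.0 * n + 2.0              # f(1) < 0 < f(2n+2)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

print(Phi(1))                                # ~1.618034, the golden ratio
for n in (10, 100, 1000):
    print(n, Phi(n), n / math.log(n))        # Phi_n ~ n / ln(n)
\end{verbatim}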
\paragraph*{Computational Complexity}
Most of the results in this paper involve complexity questions, regarding the
existence of (approximate) equilibria. Whenever we deal with such statements, we
will implicitly assume that the congestion game instances given as inputs to our
problems can be succinctly represented in the following way:
\begin{itemize}[noitemsep]
\item all players have \emph{rational} weights;
\item the resource cost functions are
``efficiently computable''; for polynomial latencies in particular, we will assume that
the coefficients are \emph{rationals}; and for step functions we assume that their values
and breakpoints are \emph{rationals};
\item the strategy sets are
given \emph{explicitly}.\footnote{Alternatively, we could have simply assumed succinct
representability of the strategies. A prominent such case is that of \emph{network}
congestion games, where each player's strategies are all feasible paths between two
specific nodes of an underlying graph. Notice however that, since in this paper we
are proving hardness results, insisting on explicit representation only
makes our results even stronger.}
\end{itemize}
There are also computational considerations to be made about the number $\alpha$
appearing in the definition of $\alpha$-PNE. In our results (e.g.,
\cref{th:hardness_PNE,th:hardness_existence}), we will prove NP-hardness of
determining whether games have $\alpha$-PNE for any arbitrary real $\alpha$ below
the nonexistence bound, \emph{regardless of whether $\alpha$ is rational or
irrational, computable or uncomputable}. However, to prove NP-completeness, i.e.\ to
prove that the decision problem belongs in NP (as in \cref{th:hardness_existence}),
we need to be able to verify, given a strategy profile and a deviation of some
player, whether this deviation is an $\alpha$-improving move. This can be achieved
by additionally assuming that the \emph{upper Dedekind cut} of $\alpha$,
$R_\alpha=\set[q\in\mathbb{Q}]{q>\alpha}$, is a language decidable in polynomial time. In
this paper we will refer to such an $\alpha$ as a \emph{polynomial-time computable}
real number. In particular, notice that rationals are polynomial-time computable;
thus the NP-completeness of the $\alpha$-PNE problem does hold for $\alpha$
rational. We refer the interested reader to \citet{Ko1983} for a detailed discussion
on polynomial-time computable numbers (which is beyond the scope of our paper), as
well as for a comparison with other axiomatizations using binary digits
representations or convergent sequences.
If, more generally, $\alpha:\mathbb{N}\longrightarrow\mathbb{R}$ is a sequence of reals (as in
\cref{th:hardness_existence_general_alt}), we say that $\alpha$ is a
\emph{polynomial-time computable} real sequence if
$R_\alpha=\set[(n,q)\in \mathbb{N} \times \mathbb{Q}]{q>\alpha(n)}$ is a language decidable in
polynomial time.
\section{The Nonexistence Gadget} \label{sec:nonexistence}
In this section we give examples of polynomial congestion games of degree $d$ that do \emph{not} have $\alpha$-approximate equilibria for any $\alpha<\alpha(d)$, where $\alpha(d)$ grows as $\varOmega\left(\frac{\sqrt{d}}{\ln d}\right)$.
Fixing a degree $d\geq 2$, we construct a family of games $\mathcal{G}^d_{(n,k,w,\beta)}$, specified by parameters $n \in \mathbb{N}, k \in \set{1, \ldots, d}, w \in [0,1]$, and $\beta \in [0,1]$.
In $\mathcal{G}^d_{(n,k,w,\beta)}$ there are $n+1$ players: a \emph{heavy player} of weight $1$ and $n$ \emph{light players} $1,\ldots,n$ of equal weights $w$.
There are $2(n+1)$ resources $a_0,a_1, \ldots, a_n, b_0,b_1,\ldots,b_n$ where $a_0$ and $b_0$ have the same cost function $c_0$ and all other resources $a_1,\ldots,a_n,b_1, \ldots, b_n$ have the same cost function $c_1$ given by
\[ c_0(x)=x^k \quad \text{and} \quad c_1(x)=\beta x^d.
\]
Each player has exactly two strategies, and the strategy sets are given by
\[ S_0=\set{ \set{a_0,\ldots,a_n},\set{b_0,\ldots,b_n} }
\quad\text{and}\quad
S_i=\set{\set{a_0,b_i},\set{b_0,a_i} }\quad\text{for }i=1,\ldots,n.
\]
The structure of the strategies is visualized in \Cref{fig:nonexistence}.
\begin{figure}[t]
\centering
\begin{tikzpicture}
\def 1 {1}
\def 0.35 {0.35}
\draw node (a0) {$a_0$}
node [right = 1 of a0] (a1) {$a_1$}
node [right = 1 of a1] (adots1) {$\cdots$}
node [right = 1 of adots1] (ai) {$a_i$}
node [right = 1 of ai] (adots2) {$\cdots$}
node [right = 1 of adots2] (phb1) {\phantom{$b_1$}}
node [right = 1 of phb1] (an) {$a_n$}
node [below right = 0.44*1 and 0.44*1 of a1] (mai) {}
node [below = 0.4*1 of an] (man) {}
;
\draw node [below = 1 of a0] (bn) {$b_n$}
node [right = 1 of bn] (pha1) {\phantom{$a_1$}}
node [right = 1 of pha1] (bdots1) {$\cdots$}
node [right = 1 of bdots1] (bi) {$b_i$}
node [right = 1 of bi] (bdots2) {$\cdots$}
node [right = 1 of bdots2] (b1) {$b_1$}
node [below = 1 of an] (b0) {$b_0$}
node [above left = 0.35*1 and 0.45*1 of b1] (mbi) {}
node [above = 0.35*1 of bn] (mbn) {}
;
\draw[thick] (ai) ellipse (6*1 cm and 0.4 cm);
\draw[thick, dashed] (bi) ellipse (6*1 cm and 0.4 cm);
\draw[rotate around = {-17 : (mai)}, thick, color=lightgray]
(mai) ellipse (3*1 cm and 0.35 cm);
\draw[rotate around = {-17 : (mbi)}, thick, color=lightgray, dashed]
(mbi) ellipse (3*1 cm and 0.35 cm);
%
\draw[rotate around ={90 : (man)}, thick, color=gray, dashed]
(man) ellipse (1 cm and 0.35 cm);
\draw[rotate around = {90 : (mbn)}, thick, color=gray]
(mbn) ellipse (1 cm and 0.35 cm);
\end{tikzpicture}
\caption{Strategies of the game $\mathcal{G}^d_{(n,k,w,\beta)}$. Resources contained in the two ellipses of the same colour correspond to the two strategies of a player.
The strategies of the heavy player and light players $n$ and $i$ are depicted in black, grey and light grey, respectively.}
\label{fig:nonexistence}
\end{figure}
In the following theorem we give a lower bound on $\alpha$, depending on parameters
$(n,k,w,\beta)$, such that games $\mathcal{G}^d_{(n,k,w,\beta)}$ do not admit an
$\alpha$-PNE. Maximizing this lower bound over all games in the family, we obtain a
general lower bound $\alpha(d)$ on the inapproximability for polynomial congestion
games of degree $d$ (see~\eqref{eq:nonexist_ratio_opt} and its plot
in~\cref{fig:nonexistence_small}). Finally, choosing specific values for the
parameters $(n,k,w,\beta)$, we prove that $\alpha(d)$ is asymptotically lower
bounded by $\varOmega(\frac{\sqrt{d}}{\ln d})$.
\begin{theorem}
\label{th:nonexistence}
For any integer $d\geq 2$, there exist (weighted) polynomial congestion games of
degree $d$ that do not have $\alpha$-approximate PNE for any $\alpha<\alpha(d)$,
where
\begin{samepage}
\begin{align}
\alpha(d)=&\sup_{n,k,w,\beta}\min \set{\frac{1+n\beta(1+w)^d}{(1+nw)^k+n\beta},\frac{(1+w)^k+\beta w^d}{(nw)^k+\beta(1+w)^d} } \label{eq:nonexist_ratio_opt}\\
&\text{s.t.}\quad n\in\mathbb{N},k\in\{1,\ldots,d\},w\in[0,1],\beta\in[0,1].\nonumber
\end{align}
\end{samepage}
In particular, we have the asymptotics $\alpha(d)=\Omega\parens{\frac{\sqrt{d}}{\ln d}}$ and the bound $\alpha(d)\geq\frac{\sqrt{d}}{2\ln d}$, valid for large enough $d$. A plot of the exact values of $\alpha(d)$ (given
by~\eqref{eq:nonexist_ratio_opt}) for small degrees can be found
in~\Cref{fig:nonexistence_small}.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\input{plot_nonexistence_small}
\end{tikzpicture}
\caption{Nonexistence of $\alpha(d)$-PNE for weighted polynomial
congestion games of degree $d$, as given by~\eqref{eq:nonexist_ratio_opt}
in~\cref{th:nonexistence}, for $d=2,3,\dots,100$.
In particular, for small values of $d$, $\alpha(2)\approx 1.054$, $\alpha(3) \approx 1.107$ and $\alpha(4)\approx 1.153$.
}
\label{fig:nonexistence_small}
\end{figure}
\end{theorem}
Interestingly, for the special case of $d=2,3,4$, the values of $\alpha(d)$ (see \cref{fig:nonexistence_small}) yield
\emph{exactly} the same lower bounds as \citet{Hansknecht2014}.
This is a direct consequence of the fact that $n=1$ turns out to be an optimal
choice in~\eqref{eq:nonexist_ratio_opt} for $d\leq 4$, corresponding to an instance
with only $n+1=2$ players (which is the regime of the construction
in~\cite{Hansknecht2014}); however, this is not the case for larger values of $d$, where more
players are now needed in order to derive the best possible value
in~\eqref{eq:nonexist_ratio_opt}.
Furthermore, as we discussed also in
\cref{sec:results}, no construction with only $2$ players can result in bounds
larger than $2$ (\cref{th:existence_general_n}).
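Although \eqref{eq:nonexist_ratio_opt} is stated as a supremum, it is straightforward to explore numerically. The following crude grid search (illustrative only; the range of $n$ and the grid resolution are ad hoc) recovers the values plotted in \cref{fig:nonexistence_small} up to discretization error.
\begin{verbatim}
import itertools

def alpha_of_d(d, n_max=5, grid=200):
    # coarse discretisation of the sup over n, k, w, beta
    best = 1.0
    ws = [i / grid for i in range(1, grid + 1)]       # w in (0, 1]
    bs = [i / grid for i in range(0, grid + 1)]       # beta in [0, 1]
    for n in range(1, n_max + 1):
        for k in range(1, d + 1):
            for w, b in itertools.product(ws, bs):
                r1 = (1 + n * b * (1 + w) ** d) / ((1 + n * w) ** k + n * b)
                r2 = ((1 + w) ** k + b * w ** d) \
                     / ((n * w) ** k + b * (1 + w) ** d)
                best = max(best, min(r1, r2))
    return best

print(alpha_of_d(2))                                  # ~1.054
\end{verbatim}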
\begin{proof}
Due to symmetries, it is enough to just consider the following two cases for the strategy profiles in game $\mathcal{G}^d_{(n,k,w,\beta)}$ described above:
\emph{\underline{Case 1:} The heavy player is alone on resource $a_0$.}
This means that every light player $i\in\{1,\ldots, n\}$ must have chosen strategy $\{b_0,a_i\}$. Thus the heavy player incurs a cost of $c_0(1)+n c_1(1+w)$; while, deviating to strategy $\{b_0,\ldots,b_n\}$, she would incur a cost of $c_0(1+nw)+nc_1(1)$. The improvement factor can then be lower bounded by
\begin{equation*}
\frac{c_0(1)+n c_1(1+w)}{c_0(1+nw)+nc_1(1)}=\frac{1+n\beta(1+w)^d}{(1+nw)^k+n\beta}.
\label{eq:nonexist_ratio_1}
\end{equation*}
\emph{\underline{Case 2:} The heavy player shares resource $a_0$ with at least one light player $i\in\{1,\ldots, n\}$.}
Thus player $i$ incurs a cost of at least $c_0(1+w)+c_1(w)$; while, deviating to strategy $\{b_0,a_i\}$, she would incur a cost of at most $c_0(nw)+c_1(1+w)$. The improvement factor can then be lower bounded by
\begin{equation*}\label{eq:nonexist_ratio_2}\frac{c_0(1+w)+c_1(w)}{c_0(nw)+c_1(1+w)}=\frac{(1+w)^k+\beta w^d}{(nw)^k+\beta(1+w)^d}.\end{equation*}
In order for the game to not have an $\alpha$-PNE, it is enough to guarantee that both ratios are greater than $\alpha$.
Maximizing these ratios over all games in the family yields the lower bound in the statement of the theorem,
\begin{align*}
\alpha(d)= &\sup_{n,k,w,\beta}\min
\set{ \frac{1+n\beta(1+w)^d}{(1+nw)^k+n\beta},\frac{(1+w)^k+\beta w^d}{(nw)^k+\beta(1+w)^d} }\\
&\text{s.t.} \quad n\in\mathbb{N},k\in\{1,\ldots,d\},w\in[0,1],\beta\in[0,1].
\end{align*}
For small values of $d$ the above quantity can be computed numerically
(see~\cref{fig:nonexistence_small}); in particular, for $d=2,3,4$ this yields the
same lower bounds as in~\citet{Hansknecht2014}, since $n=1$ is the optimal choice.
Next we prove the asymptotics $\alpha(d)=\Omega\left(\frac{\sqrt{d}}{\ln d}\right)$. To that end, we take the following choice of parameters:
\[
w=\frac{\ln d}{2d}\,,\,
k=\left\lceil\frac{\ln d}{2\ln\ln d}\right\rceil\,,\,
\beta=\frac{1}{d^{\frac{k}{2(k+1)}}(1+w)^d}\,,\,
n=\left\lfloor\frac{1}{d^{\frac{1}{2(k+1)}}w}\right\rfloor.
\]
One can check that this choice satisfies $k \in \set{1, \ldots, d}$ (for $d \ge 4$) and $w,\beta \in [0,1]$.
We can bound the expressions appearing in \eqref{eq:nonexist_ratio_opt} as follows.
\begin{align}
1+n\beta(1+w)^d
&\geq1+\left(\frac{1}{d^{\frac{1}{2(k+1)}}w}-1\right)\frac{1}{d^{\frac{k}{2(k+1)}}(1+w)^d}(1+w)^d\nonumber\\
&= 1+\left(\frac{2d}{d^{\frac{1}{2(k+1)}}\ln d}-1\right)\frac{1}{d^{\frac{k}{2(k+1)}}}\nonumber\\
&=\frac{2d}{d^{\frac{1}{2(k+1)}+\frac{k}{2(k+1)}}\ln d}+1-\frac{1}{d^{\frac{k}{2(k+1)}}}\nonumber\\
&\geq\frac{2d}{d^{\nicefrac{1}{2}}\ln d}\tag{since $d\geq1$}\\
&=\frac{2\sqrt{d}}{\ln d};\label{eq:nonexist_aux_bound_1}\\
(1+nw)^k+n\beta
&\leq\left(1+\frac{1}{d^{\frac{1}{2(k+1)}}w}w\right)^k + \frac{1}{d^{\frac{1}{2(k+1)}}wd^{\frac{k}{2(k+1)}}(1+w)^d} \nonumber\\
&=\left(1+d^{-\frac{1}{2(k+1)}}\right)^k+\frac{1}{d^{\nicefrac{1}{2}}\frac{\ln d}{2d}\left(1+\frac{\ln d}{2d}\right)^d}\nonumber\\
&=\left(1+d^{-\frac{1}{2(k+1)}}\right)^k+\frac{2\sqrt{d}}{\ln d\left(1+\frac{\ln d}{2d}\right)^d};\label{eq:nonexist_aux_bound_2}\\
(1+w)^k+\beta w^d&\geq 1; \label{eq:nonexist_aux_bound_3}\\
(nw)^k+\beta(1+w)^d
&\leq \left(\frac{1}{d^{\frac{1}{2(k+1)}}w}w\right)^k+\frac{1}{d^{\frac{k}{2(k+1)}}(1+w)^d}(1+w)^d\nonumber\\
&=2\cdot\frac{1}{d^{\frac{k}{2(k+1)}}}=\frac{2d^{\frac{1}{2(k+1)}}}{\sqrt{d}}\leq\frac{2d^\frac{\ln\ln d}{\ln d}}{\sqrt{d}}=\frac{2\ln d}{\sqrt{d}}. \label{eq:nonexist_aux_bound_4}
\end{align}
In the Appendix, we prove (\cref{lem:nonexist_ratio_2_global}) that the final quantity in \eqref{eq:nonexist_aux_bound_2} converges to 1 as $d\rightarrow\infty$; in particular, it is upper bounded by $4$ for $d$ large enough. Thus, we can lower bound the ratios of \eqref{eq:nonexist_ratio_opt} as
\begin{align}
\frac{1+n\beta(1+w)^d}{(1+nw)^k+n\beta}&\geq\frac{\frac{2\sqrt{d}}{\ln d}}{4}=\frac{\sqrt{d}}{2\ln d}=\Omega\left(\frac{\sqrt{d}}{\ln d}\right), \tag{from \eqref{eq:nonexist_aux_bound_1}, \eqref{eq:nonexist_aux_bound_2} and large $d$}\\
\frac{(1+w)^k+\beta w^d}{(nw)^k+\beta(1+w)^d}&\geq\frac{1}{\frac{2\ln d}{\sqrt{d}}}=\frac{\sqrt{d}}{2\ln d}=\Omega\left(\frac{\sqrt{d}}{\ln d}\right).\tag{from \eqref{eq:nonexist_aux_bound_3} and \eqref{eq:nonexist_aux_bound_4}}
\end{align}
This proves the asymptotics and the bound $\alpha(d)\geq\frac{\sqrt{d}}{2\ln d}$ for large $d$.
\end{proof}
A sequence $(x(n))_{n\geq1}\subseteq[0,1)$ is \emph{uniformly distributed modulo $1$} if the proportion of points in any subinterval $I \subseteq[0,1)$ converges to the size of the interval: $\#\{n\leq N:\,x(n)\in I \}\sim N|I| $, as $N\rightarrow\infty$. The theory of uniform distribution dates back to 1916, to a seminal paper of Weyl \cite{Weyl1916}, and constitutes a simple test of pseudo-randomness. A well-known result of Fej\'{e}r, see \cite[p. 13]{KupiersNiederreiter1974}, implies that for any $A>1$ and any $\alpha>0$ the sequence
\[
(\alpha(\log n)^{A}
\mathrm{\,mod\,}1)_{n>0}
\]
is uniformly distributed, while for $A=1$ the sequence is \emph{not} uniformly distributed. In this paper, we study stronger, local tests of pseudo-randomness for this sequence.
Given an increasing $\R$-valued sequence $(\omega(n))=(\omega(n))_{n>0}$, we denote the sequence modulo $1$ by
\begin{align*}
x(n) : = \omega(n) \Mod 1.
\end{align*}
Further, let $(u_N(n))_{n\le N}\subset [0,1)$ denote the first $N$ elements of $(x(n))$ arranged in increasing order, thus $u_N(1) \le u_N(2) \le \dots \le u_N(N)$. With that, we define the \emph{gap distribution} of $(x(n))$ as the limiting distribution (if it exists): for $s>0$
\begin{align*}
P(s) : = \lim_{N \to \infty}\frac{ \#\{n \le N : N\|u_N(n)-u_{N}(n+1)\| < s\}}{N},
\end{align*}
where $\|\cdot \|$ denotes the distance to the nearest integer, and $u_N(N+1):= u_N(1)$. Thus $P(s)$ represents the limiting proportion of (scaled) gaps between (spatially) neighboring elements in the sequence which are less than $s$. We say a sequence has \emph{Poissonian gap distribution} if $P(s) = 1-e^{-s} = \int_0^s e^{-\sigma}\,\mathrm{d}\sigma$, the value expected for independent uniform random points in the unit interval.
\begin{figure}[H]
\includegraphics[viewport=50bp 460bp 330bp 770bp,clip,scale=0.5]{Gaps_log}~~~~\includegraphics[viewport=50bp 460bp 330bp 770bp,clip,scale=0.5]{Gaps_log_2}~~~\includegraphics[viewport=50bp 460bp 330bp 770bp,clip,scale=0.5]{Gaps_log_3}
\caption{From left to right: the histograms represent the gap distribution at time $N$ of $(\log n)_{n\protect\geq1}$, $((\log n)^{2})_{n\protect > 0}$, and $((\log n)^{3})_{n\protect> 0}$ when $N=10^{5}$ and the curve is the graph of $x\protect\mapsto e^{-x}$. Note that $(\log n)$ is not even uniformly distributed, and thus the gap distribution cannot be Poissonian.}
\end{figure}
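The histograms above can be reproduced with a few lines of code. The following sketch (illustrative only; at finite $N$ the agreement is of course approximate) computes the scaled gaps of $(\alpha(\log n)^{A}\,\mathrm{mod}\,1)_{n\leq N}$ and compares the empirical proportion of gaps below $s$ against the Poissonian value $1-e^{-s}$.
\begin{verbatim}
import math

def scaled_gaps(N, alpha=1.0, A=2.0):
    xs = sorted((alpha * math.log(n) ** A) % 1.0 for n in range(1, N + 1))
    gaps = [xs[i + 1] - xs[i] for i in range(N - 1)]
    gaps.append(xs[0] + 1.0 - xs[-1])        # wrap-around gap on the circle
    return [N * g for g in gaps]             # rescale: the mean gap is 1/N

gaps = scaled_gaps(10 ** 5)
for s in (0.5, 1.0, 2.0):
    empirical = sum(g < s for g in gaps) / len(gaps)
    print(s, empirical, 1 - math.exp(-s))    # empirical P(s) vs 1 - e^{-s}
\end{verbatim}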
\newpage
\noindent Our main theorem is the following
\begin{theorem}\label{thm:main}
Let $\omega(n):= \alpha ( \log n)^{A}$ for $A>1$ and any $\alpha >0$, then $x(n)$ has Poissonian gap distribution.
\end{theorem}
\vspace{1mm}
In fact, this theorem follows (via the method of moments) from Theorem \ref{thm:correlations} (below) which states that for every $m \ge 2$ the $m$-point correlation function of this sequence is Poissonian. By which we mean the following: Let $m\geq 2$ be an integer, and let $f \in C_c^\infty(\R^{m-1})$ be a compactly supported function which can be thought of as a stand-in for the characteristic function of a Cartesian product of compact intervals in $\R^{m-1}$. Let $[N]:=\{1,\ldots,N\}$ and define the \emph{$m$-point correlation} of $(x(n))$, at time $N$, to be
\begin{equation}\label{def: m-point correlation function}
R^{(m)}(N,f) := \frac{1}{N}
\sum_{\vect{n} \in [N]^m}^\ast
f(N\|x(n_1)-x(n_2)\|,N \|x(n_2)-x(n_3)\|, \dots, N\|x(n_{m-1})-x(n_{m})\|),
\end{equation}
where $\ds \sum^\ast$ denotes a sum over distinct $m$-tuples. Thus the $m$-point correlation measures how correlated points are on the scale of the average gap between neighboring points (which is $N^{-1}$). We say $(x(n))$ has \emph{Poissonian $m$-point correlation} if
\begin{align}\label{def: expectation}
\lim_{N \to \infty} R^{(m)}(N,f) = \int_{\R^{m-1}} f(\vect{x}) \mathrm{d}\vect{x} =: \expect{f}
\,\, \mathrm{for\,any\,} f \in C_c^\infty(\R^{m-1}).
\end{align}
That is, if the $m$-point correlation converges to the value expected for a sequence of independent uniformly distributed points in the unit interval.
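For $m=2$ and $f$ an indicator function (which suffices for illustration, although the definition asks for smooth $f$), the statistic \eqref{def: m-point correlation function} can be computed in time $O(N\log N)$ after sorting. The following sketch compares it against the Poissonian limit $\expect{f}=2s$.
\begin{verbatim}
import bisect, math

def R2(N, s, A=2.0, alpha=1.0):
    # (1/N) #{(m,n): m != n <= N, N||x(m)-x(n)|| < s}; Poisson limit: 2s
    xs = sorted((alpha * math.log(n) ** A) % 1.0 for n in range(1, N + 1))
    w = s / N
    count, j = 0, 0
    for i in range(N):                       # pairs with small forward gap
        j = max(j, i + 1)
        while j < N and xs[j] - xs[i] < w:
            j += 1
        count += j - i - 1
    k = N - 1                                # pairs wrapping around 1 ~ 0
    while k >= 0 and 1.0 - xs[k] < w:
        count += bisect.bisect_left(xs, w - (1.0 - xs[k]))
        k -= 1
    return 2 * count / N                     # each unordered pair counted twice

for s in (0.5, 1.0, 2.0):
    print(s, R2(10 ** 5, s), 2 * s)
\end{verbatim}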
\begin{theorem}\label{thm:correlations}
Let $\omega(n):= \alpha (\log n)^{A}$ for $A>1$ and any $\alpha >0$, then $x(n)$ has Poissonian $m$-point correlations for all $m \ge 2$.
\end{theorem}
\vspace{1mm}
It should be noted that Theorem \ref{thm:correlations} is far stronger than Theorem \ref{thm:main}. In addition to the gap distribution, Theorem \ref{thm:correlations} allows us to recover a wide variety of statistics such as the $i^{th}$ nearest neighbor distribution for any $i \ge 1$.
\vspace{2mm}
\textbf{Previous Work:} The study of uniform distribution and fine-scale local statistics of sequences modulo $1$ has a long history, which we outlined in more detail in a previous paper \cite{LutskoSourmelidisTechnau2021}. If we consider the sequence $(\alpha n^\theta\,\mathrm{mod}\,1)_{n\geq1}$, there have been many attempts to understand the local statistics, in particular the pair correlation (when $m=2$). Here it is known that for any $\theta \neq 1$, if $\alpha$ belongs to a set of full measure, then the pair correlation function is Poissonian \cite{RudnickSarnak1998, AistleitnerEl-BazMunsch2021, RudnickTechnau2021}. However, there are very few explicit (i.e., non-metric) results. When $\theta = 2$, Heath-Brown \cite{Heath-Brown2010} gave an algorithmic construction of certain $\alpha$ for which the pair correlation is Poissonian; however, this construction did not produce an explicit value of $\alpha$. When $\theta = 1/2$ and $\alpha^2 \in \Q$ the problem lends itself to tools from homogeneous dynamics. This was exploited by Elkies and McMullen \cite{ElkiesMcMullen2004}, who showed that the gap distribution is \emph{not} Poissonian, and by El-Baz, Marklof, and Vinogradov \cite{El-BazMarklofVinogradov2015a}, who showed that the sequence $(\alpha n^{1/2}\, \mathrm{mod}\,1)_{n\in \N \setminus \square}$, where $\square$ denotes the set of squares, does have Poissonian pair correlation.
With these sparse exceptions, the only explicit results occur when the exponent $\theta$ is small. If $\theta\le 14/41$ the authors and Sourmelidis \cite{LutskoSourmelidisTechnau2021} showed that the pair correlation function is Poissonian for all values of $\alpha >0$. This was later extended by the authors \cite{LutskoTechnau2021} to show that these monomial sequences exhibit Poissonian $m$-point correlations (for $m\geq 3$) for any $\alpha >0$ if $\theta< 1/(m^2+m-1)$. To the best of our knowledge the former is the only explicit result proving Poissonian pair correlations for sequences modulo $1$, and the latter result is the only result proving convergence of the higher order correlations to any limit.
The authors' previous work motivates the natural question: what about sequences which grow slower than any power of $n$? It is natural to hypothesize that such sequences might exhibit Poissonian $m$-point correlations for all $m$. However, there is a constraint: Marklof and Str\"{o}mbergsson \cite{MarklofStrom2013} have shown that the gap distribution of $( (\log n)/(\log b) \,\mathrm{mod}\, 1)_{n\geq 1}$ exists for $b>1$, and is \emph{not} Poissonian (thus the correlations cannot all be Poissonian). However, they also showed that, in the limit as $b$ tends to $1$, this limiting distribution converges to the Poissonian distribution (see \cite[(74)]{MarklofStrom2013}). Thus, the natural question becomes: what can be said about sequences growing faster than $\log n$ but slower than any power of $n$?
With that context in mind, our result has several implications. First, it provides the only example at present of an explicit sequence whose $m$-point correlations can be shown to converge to the Poissonian limit (and thus whose gap distribution is Poissonian). Second, it answers the natural question implied by our previous work on monomial sequences. Finally, it answers the natural question implied by Marklof and Str\"{o}mbergsson's result on logarithmic sequences.
\subsection{Plan of Paper}
The proof of Theorem \ref{thm:correlations} follows the same broad lines as the proof in \cite{LutskoTechnau2021}, however, in that context we could be satisfied with power savings. In the present context we need to work much harder to achieve logarithmic savings. Moreover many of the technical details cannot be imported from that paper since our sequence is no longer polynomial.
In the remainder we take $\alpha = 1$; the same proof applies verbatim to general $\alpha$. We again argue in roughly three steps: first, fix $m \ge 2$ and assume the sequence has Poissonian $j$-point correlation for $2\le j< m$.
\begin{enumerate}[label = {\tt [Step \arabic*]}]
\item Remove the distinctness condition in the $m$-point correlation by relating the completed correlation to the $m^{th}$ moment of a random variable. This will add a new frequency variable, with the benefit of decorrelating the sequence elements. Then we perform a Fourier decomposition of this moment and using a combinatorial argument from \cite[\S 3]{LutskoTechnau2021}, we reduce the problem of convergence for the moment to convergence of one particular term to an explicit `target'.
\item Using various partitions of unity we further reduce the problem to an asymptotic evaluation of the $L^m([0,1])$-norm of a two dimensional exponential sum. Then we apply van der Corput's $B$-process in each of these variables. If we stop here and apply the triangle inequality the resulting error term is of size $O((\log N)^{(A+1)m})$.
\item Finally we expand the $L^m([0,1])$-norm giving an oscillatory integral. Then using a localized version of Van der Corput's lemma we can achieve an extra saving to bound the error term by $o(1)$. Whereas, in \cite{LutskoTechnau2021} we could use classical theorems from linear algebra to justify applying this lemma, in the present context we rely on recent work of Khare and Tao \cite{KhareTao2021} involving generalized Vandermonde matrices.
\end{enumerate}
\textbf{Notation:}
Throughout, we use the usual Bachmann--Landau notation: for functions $f,g:X \rightarrow \mathbb{R}$, defined on some set $X$, we write $f \ll g$ (or $f=O(g)$) to denote that there exists a constant $C>0$ such that $\vert f(x)\vert \leq C \vert g(x) \vert$ for all $x\in X$. Moreover let $f\asymp g$ denote $f \ll g$ and $g \ll f$, and let $f = o(g)$ denote that $\frac{f(x)}{g(x)} \to 0$.
Given a Schwartz function $f: \R^m \to \R$, let $\wh{f}$ denote the $m$-dimensional Fourier transform:
\begin{align*}
\wh{f}(\vect{k}) : = \int_{\R^m} f(\vect{x}) e(-\vect{x}\cdot \vect{k}) \mathrm{d}\vect{x}, \qquad \text{for } \vect{k} \in \R^m.
\end{align*}
Here, and throughout we let $e(x):= e^{2\pi i x}$.
All of the sums which appear range over integers, in the indicated interval. We will frequently be taking sums over multiple variables, thus if $\vect{u}$ is an $m$-dimensional vector, for brevity, we write
\begin{align*}
\sum_{\vect{k} \in [f(\vect{u}),g(\vect{u}))}F(\vect{k}) = \sum_{k_1 \in [f(u_1),g(u_1))} \dots \sum_{k_m \in [f(u_m),g(u_m))}F(\vect{k}).
\end{align*}
Moreover, all $L^p$ norms are taken with respect to Lebesgue measure; we often omit the domain when it is obvious. Let
\begin{align*}
\Z^\ast:= \Z \setminus \{0\}.
\end{align*}
For ease of notation, $\varepsilon>0$ may vary from line to line by a bounded constant.
\section{Preliminaries}
\label{ss:The A and B Processes}
The following stationary phase principle is derived from the work of Blomer, Khan and Young \cite[Proposition 8.2]{BlomerKhanYoung2013}. In application we will not make use of the full asymptotic expansion, but this will give us a particularly good error term which is essential to our argument.
\begin{proposition}[Stationary phase expansion]\label{prop: stationary phase}
Let $w\in C_{c}^{\infty}$ be supported in a compact interval $J$ of length $\Omega_w>0$ so that there exists a $\Lambda_w>0$ for which
\begin{align*}
w^{(j)}(x)\ll_{j}\Lambda_w \Omega_w^{-j}
\end{align*}
for all $j\in\mathbb{N}$. Suppose $\psi$ is a smooth function on $J$ so that there exists a unique critical point $x_{0}$ with $\psi'(x_{0})=0$. Suppose there exist values $\Lambda_{\psi}>0$ and $\Omega_\psi>0$ such that
\begin{align*}
\psi''(x)\gg \Lambda_{\psi}\Omega_{\psi}^{-2},\qquad \qquad\psi^{(j)}(x)\ll_j \Lambda_{\psi}\Omega_{\psi}^{-j}
\end{align*}
for all $j > 2$. Moreover, let $\delta\in(0,1/10)$, and $Z:=\Omega_w + \Omega_{\psi}+\Lambda_w+\Lambda_{\psi}+1$. If
\begin{equation}
\Lambda_{\psi}\geq Z^{3\delta}, \qquad \text{and}\qquad \Omega_w\geq\frac{\Omega_{\psi}Z^{\frac{\delta}{2}}}{\Lambda_{\psi}^{1/2}}\label{eq: constraints on X,Y,Z}
\end{equation}
hold, then
hold, then
\[
I:=\int_{-\infty}^{\infty}w(x)e(\psi(x))\,\mathrm{d}x
\]
has the asymptotic expansion
\[
I=\frac{e(\psi(x_{0}))}{\sqrt{\psi''(x_{0})}}\sum_{0\leq j \leq3C/\delta}p_{j}(x_{0})+O_{C,\delta}(Z^{-C})
\]
for any fixed $C\in\mathbb{Z}_{\geq1}$; here
\begin{align*}
p_n(x_0) := \frac{e(1/8)}{n!} \left(\frac{i}{2 \psi^{\prime\prime}(x_0)}\right)^nG^{(2n)}(x_0)
\end{align*}
where
\begin{align*}
G(x):= w(x)e(H(x)), \qquad H(x) := \psi(x)-\psi(x_0) -\frac{1}{2} \psi^{\prime\prime}(x_0)(x-x_0)^2.
\end{align*}
\end{proposition}
In a slightly simpler form we have:
\begin{lemma}[First order stationary phase] \label{lem: stationary phase}
Let $\psi$ and $w$ be smooth, real valued functions defined on a compact interval $[a, b]$. Suppose that $w(a) = w(b) = 0$. Suppose there exist constants $\Lambda_\psi,\Omega_w,\Omega_\psi \geq 3$ so that
\begin{align} \label{eq: growth conditions}
\psi^{(j)}(x) \ll \frac{\Lambda_\psi}{ \Omega_\psi^j}, \ \ w^{(j)} (x)
\ll \frac{1}{\Omega_w^j}\, \ \ \
\text{and} \ \ \ \psi^{(2)} (x)\gg \frac{\Lambda_\psi}{ \Omega_\psi^2}
\end{align}
for all $j=0, \ldots, 4$ and all $x\in [a,b]$. If $\psi^\prime(x_0)=0$ for a unique $x_0\in[a,b]$, and if $\psi^{(2)}(x)>0$, then
\begin{equation*}
\begin{split}
\int_{a}^{b} w(x) e(\psi(x)) \rd x = &\frac{e(\psi(x_0) + 1/8)}{\sqrt{\abs{\psi^{(2)}(x_0)}}}
w(x_0)
+ O\left( \frac{\Omega_\psi} {\Lambda_{\psi}^{3/2+O(\varepsilon)}}
\right),
\end{split}
\end{equation*}
provided $\Omega_\psi /\Omega_w \ll \log \Omega_{\psi}$. If instead $\psi^{(2)}(x)<0$ on $[a,b]$ then the same equation holds with $e(\psi(x_0) + 1/8)$ replaced by $e(\psi(x_0) - 1/8)$.
\end{lemma}
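To illustrate the quality of this approximation (with hypothetical parameters, unrelated to the phases appearing later in the paper), one can compare a direct numerical evaluation of the integral with the leading term:
\begin{verbatim}
import cmath, math

T = 200.0                                    # plays the role of Lambda_psi
x0 = 0.3
psi = lambda x: T * (x - x0) ** 2            # unique critical point at x0
w = lambda x: math.exp(-1.0 / (x * (1 - x))) if 0 < x < 1 else 0.0

M = 2 * 10 ** 5                              # midpoint rule with step 1/M
I = sum(w((j + 0.5) / M) * cmath.exp(2j * math.pi * psi((j + 0.5) / M))
        for j in range(M)) / M

main = cmath.exp(2j * math.pi * (psi(x0) + 1.0 / 8)) * w(x0) / math.sqrt(2 * T)
print(abs(I), abs(I - main))                 # error is much smaller than |I|
\end{verbatim}
Here $\psi''\equiv 2T$, and the printed difference is dominated by the next term of the expansion in Proposition \ref{prop: stationary phase}.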
\noindent Moreover, we also need the following version of van der Corput's lemma (\cite[Ch. VIII, Prop. 2]{Stein1993}).
\begin{lemma}[van der Corput's lemma] \label{lem: van der Corput's lemma}
Let $[a,b]$ be a compact interval.
Let $\psi,w:[a,b]\rightarrow\mathbb{R}$ be smooth functions.
Assume $\psi''$ does not change sign
on $[a,b]$ and that for some $j\geq 1$ and $\Lambda>0$ the bound
\[
\vert\psi^{(j)}(x)\vert\geq\Lambda
\]
holds for all $x\in [a,b]$.
Then
\[
\int_{a}^{b}\,e(\psi(x)) w(x)
\,\mathrm{d}x \ll \Big(\vert w(b) \vert + \int_{a}^b \vert
w'(x)\vert\, \mathrm{d}x \Big)
\Lambda^{-1/j}
\]
where the implied constant depends only on $j$.
\end{lemma}
\textbf{Generalized Vandermonde matrices:} One of the primary difficulties which we will encounter in Section \ref{s:Off-diagonal} is that the derivatives of the exponential functions (which arise as inverses of the logarithms defining our sequence) grow in complexity with each differentiation. To handle this we appeal to a recent result of Khare and Tao \cite{KhareTao2021}, which requires some notational set-up. Given an $M$-tuple $\vect{u} \in \R^M$, let
\begin{align*}
V(\vect{u}) := \prod_{1\le i < j \le M}(u_j-u_i)
\end{align*}
denote the Vandermonde determinant. Furthermore given two tuples $\vect{u}$ and $\vect{n}$ we define
\begin{align*}
\vect{u}^{\vect{n}} := u_1^{n_1} \cdots u_M^{n_M}, \qquad \mbox{ and } \qquad \vect{u}^{\circ\vect{n}} := \begin{pmatrix}
u_1^{n_1} & u_1^{n_2} & \dots & u_1^{n_M}\\
u_2^{n_1} & u_2^{n_2} & \dots & u_2^{n_M}\\
\vdots &\vdots & \ddots &\vdots \\
u_M^{n_1} & u_M^{n_2} & \dots & u_M^{n_M}
\end{pmatrix},
\end{align*}
the latter being a generalized Vandermonde matrix. Finally let $\vect{n}_{\text{min}} :=(0,1, \dots, M-1)$. Then Khare and Tao established the following
\begin{lemma}[{\cite[Lemma 5.3]{KhareTao2021}}] \label{lem:KT}
Let $K$ be a compact subset of the cone
\begin{align*}
\{(n_1, \dots, n_{M})\in \R^M \ : \ n_1 < \dots < n_{M} \}.
\end{align*}
Then there exist constants $C,c>0$ such that
\begin{align}\label{KT}
cV(\vect{u}) \vect{u}^{\vect{n}-\vect{n}_{\text{min}}} \le \det(\vect{u}^{\circ \vect{n}}) \le C V(\vect{u}) \vect{u}^{\vect{n}-\vect{n}_{\text{min}}}
\end{align}
for all $\vect{u} \in (0,\infty)^M$ with $u_1 \le \dots \le u_M$ and all $\vect{n} \in K$.
\end{lemma}
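The following sketch illustrates \eqref{KT} numerically (illustrative only): for a fixed exponent tuple $\vect{n}$, the ratio $\det(\vect{u}^{\circ\vect{n}})/\bigl(V(\vect{u})\,\vect{u}^{\vect{n}-\vect{n}_{\text{min}}}\bigr)$ stays within fixed positive bounds as $\vect{u}$ varies.
\begin{verbatim}
import math

def det(A):                                  # Laplace expansion; M is small
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([r[:j] + r[j + 1:] for r in A[1:]])
               for j in range(len(A)))

def ratio(u, n):
    # det(u^{o n}) / (V(u) * u^{n - n_min})
    M = len(u)
    D = det([[ui ** nj for nj in n] for ui in u])
    V = math.prod(u[j] - u[i] for i in range(M) for j in range(i + 1, M))
    scale = math.prod(u[i] ** (n[i] - i) for i in range(M))
    return D / (V * scale)

n = (0.3, 1.7, 4.2)                          # n_1 < n_2 < n_3; n_min = (0,1,2)
for u in ([1.0, 2.0, 3.0], [0.5, 5.0, 40.0], [2.0, 2.1, 2.2]):
    print(u, ratio(u, n))
\end{verbatim}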
\section{Combinatorial Completion}
\label{s:Completion}
The proof of Theorem \ref{thm:correlations} follows an inductive argument. Thus, fix $m \ge 2$ and assume $(x(n))$ has Poissonian $j$-point correlations for all $2 \le j < m$. Let $f$ be a $C_c^\infty(\R)$ function, and define
\begin{align*}
S_N(s,f)=S_N : = \sum_{n \in [N]} \sum_{k \in \Z} f(N(\omega(n) +k +s)).
\end{align*}
Note that if $f$ were the indicator function of an interval $I$, then $S_N$ would count the number of points of $(x(n))_{n \le N}$ which land in the shifted and rescaled interval $I/N - s$ (mod $1$). Now considering the $m^{th}$ moment of $S_N$, one can show that (see \cite[\S 3]{LutskoTechnau2021})
\begin{align}
\cM^{(m)}(N) &:= \int_0^1 S_N(s,f)^m \mathrm{d}s\notag\\
&= \int_0^1 \sum_{\vect{n} \in [N]^m}
\sum_{\vect{k} \in \Z^m}
\left( f(N(\omega(n_1)+k_1 + s)) \cdots f(N(\omega(n_m) +k_m + s))\right) \mathrm{d} s \label{m moment}\\
&= \frac{1}{N}\sum_{\vect{n} \in [N]^m}
\sum_{\vect{k}\in \Z^{m-1}}
F\left(N(\omega(n_1)-\omega(n_2) +k_1), \dots
N(\omega(n_{m-1})-\omega(n_m) +k_{m-1})\right) \notag,
\end{align}
where
\begin{align*}
F(z_1,z_2, \dots , z_{m-1}) := \int_{\R} f(s)f(z_1+z_2+\dots + z_{m-1}+s)f(z_2+\dots + z_{m-1}+s)\cdots f(z_{m-1}+s)\,\rd s.
\end{align*}
As such we can relate the $m^{th}$ moment of $S_N$ to the $m$-point correlation of $F$. Note that since $f$ has compact support, $F$ has compact support. To recover the $m$-point correlation in full generality, we replace the moment $\int S_N(s,f)^m \mathrm{d}s$ with the mixed moment $\int \prod_{i=1}^m S_N(s,f_i) \mathrm{d} s$ for some collection of $f_i:\R\to \R$. The proof below can be applied in this generality; however, for ease of notation we only explain the details in the former case.
In fact, we can use an argument from \cite[\S 8]{Marklof2003} to show that it is sufficient to prove convergence for functions $f$ such that $\wh{f} \in C_c^\infty(\R)$. While this implies that the support of $f$ is unbounded, the same argument, together with the decay of Fourier coefficients, applies and we reach the same conclusion about $F$. In the following proof, the support of $\wh{f}$ does not play a crucial role: increasing the support of $\wh{f}$ increases the range of the $\vect{k}$ variable by a constant multiple, but fortunately in the end we will achieve a very small power saving, so the constant multiple will not ruin the result. To avoid carrying a constant through, we assume the support of $\wh{f}$ is contained in $(-1,1)$.
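A basic consistency check on the definition of $F$: integrating out $\vect{z}$ and substituting $t_i = z_i + \dots + z_{m-1} + s$ shows that $\int_{\R^{m-1}} F(\vect{z})\,\mathrm{d}\vect{z} = \bigl(\int_\R f\bigr)^m$. The following crude Riemann-sum computation (illustrative only, for $m=3$ and a Gaussian stand-in for $f$) confirms this numerically.
\begin{verbatim}
import math

f = lambda s: math.exp(-s * s)               # stand-in test function
h, R = 0.1, 5.0
grid = [i * h for i in range(int(-R / h), int(R / h) + 1)]

def F(z1, z2):                               # F for m = 3, by Riemann sum
    return h * sum(f(s) * f(z1 + z2 + s) * f(z2 + s) for s in grid)

lhs = h * h * sum(F(z1, z2) for z1 in grid for z2 in grid)
intf = h * sum(f(s) for s in grid)
print(lhs, intf ** 3)                        # both ~ pi^(3/2) ~ 5.568
\end{verbatim}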
\subsection{Combinatorial Target}\label{sec: combinatorial prep}
We will need the following combinatorial definitions to explain how to prove convergence of the $m$-point correlation from \eqref{m moment}. Given a partition $\cP$ of $[m]$, we say that $j\in [m]$ is \emph{isolated} if $j$ belongs to a partition element of size $1$. A partition is called \emph{non-isolating} if no element is isolated (and otherwise we say it is \emph{isolating}). For example, for $\cP = \{\{1,3\}, \{4\}, \{2,5,6\}\}$ we have that $4$ is isolated, and thus $\cP$ is isolating.
Now consider the right-hand side of \eqref{m moment}; applying Poisson summation to each of the $k_i$ sums gives us
\begin{align}\label{M k def}
\cM^{(m)}(N) = \frac{1}{N^m} \int_0^1 \sum_{\vect{n} \in [N]^m}
\sum_{\vect{k} \in \Z^m} \wh{f}\Big(\frac{\vect{k}}{N}\Big)
e( \vect{k}\cdot \omega(\vect{n}) + \vect{k}\cdot \vect{1}s) \mathrm{d} s,
\end{align}
where $\omega(\vect{n}) := (\omega(n_1), \omega(n_2), \dots, \omega(n_m))$.
In \cite[\S 3]{LutskoTechnau2021} we showed that, if
\begin{align*}
\cE(N):= \frac{1}{N^m} \int_0^1 \sum_{\vect{n} \in [N]^m}\sum_{\substack{\vect{k} \in (\Z^{\ast})^m}} \wh{f}\left(\frac{\vect{k}}{N}\right)e( \vect{k}\cdot \omega(\vect{n}) + \vect{k}\cdot \vect{1}s) \mathrm{d} s,
\end{align*}
then for fixed $m$, and assuming the inductive hypothesis, Theorem \ref{thm:correlations} reduces to the following lemma.
\begin{lemma}\label{lem:MP = KP non-isolating}
Let $\mathscr{P}_m$ denote the set of non-isolating partitions of $[m]$. We have that
\begin{gather}
\lim_{N \to \infty}\cE(N) = \sum_{\cP\in \mathscr{P}_m} \expect{f^{\abs{P_1}}}\cdots \expect{f^{\abs{P_d}}},\label{m target}
\end{gather}
where we write $\cP = (P_1, P_2, \dots, P_d)$ and $\abs{P_i}$ denotes the size of $P_i$.
\end{lemma}
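To make the target \eqref{m target} concrete: for $m=2$ and $m=3$ the only non-isolating partition of $[m]$ is $\{[m]\}$ itself, so the right hand side equals $\expect{f^2}$, respectively $\expect{f^3}$. For $m=4$, besides $\{[4]\}$ there are the three pairings $\{\{1,2\},\{3,4\}\}$, $\{\{1,3\},\{2,4\}\}$ and $\{\{1,4\},\{2,3\}\}$, so the right hand side is
\begin{align*}
\expect{f^4} + 3\,\expect{f^2}^2,
\end{align*}
which is the moment structure one expects of Poissonian correlations.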
\subsection{Dyadic Decomposition}
It is convenient to decompose the sums over $n$ and $k$ into dyadic ranges in a smooth manner. Given $N$, we let $Q>1$ be the unique integer with $e^{Q}\leq N < e^{Q+1}$. Now, we describe a smooth partition of unity which approximates the indicator function of $[1,N]$. Strictly speaking, these partitions depend on $Q$, however we suppress it from the notation. Furthermore, since we want asymptotics of $\cE(N)$, we need to take a bit of care at the right end point of $[1,N]$, and so a tighter than dyadic decomposition is needed. Let us make this precise. For $0\le q < Q$ we let $\mathfrak{N}_q: \R\to [0,1]$ denote a smooth function for which
\begin{align*}
\operatorname{supp}(\mathfrak{N}_q) \subset [e^{q}/2, 2 e^q)
\end{align*}
and such that $\mathfrak{N}_{q}(x) + \mathfrak{N}_{q+1}(x) = 1 $ for $x$ between $e^{q+1}/2$ and $2 e^q$. Now for $q \ge Q$ we let $\mathfrak{N}_q$ form a smooth partition of unity for which
\begin{align*}
&\sum_{q=0}^{2Q-1} \mathfrak{N}_q (x) =\begin{cases}
1 & \mbox{if } 1< x < e^{Q}\\
0 & \mbox{if } x< 1/2 \mbox{ or } x > N + \frac{3N}{\log(N)}
\end{cases},\,\mathrm{and}\\
& \operatorname{supp}(\mathfrak{N}_q) \subset \left[\frac{e^{Q}}{2} + (q-Q)\frac{e^{Q}}{2Q}, \frac{e^{Q}}{2} + (3+q-Q)\frac{e^{Q}}{2Q}\right).
\end{align*}
Let $\Vert \cdot \Vert_{\infty}$ denote the sup norm of a function from $\R$ to $\R$. We impose the following condition on the derivatives:
\begin{align}\label{N deriv}
\Vert \mathfrak{N}_{q}^{(j)}\Vert_{\infty} \ll \begin{cases}
e^{-qj} & \mbox{ for } q < Q\\
(e^{Q}/Q)^{-j} & \mbox{ for } Q< q,
\end{cases}
\end{align}
for $j \ge 1$. For technical reasons, assume $\mathfrak{N}_q^{(1)}$ changes sign only once. Thus
\begin{equation}\label{eq: upper bound moment}
\cE(N) \le \int_0^1
\bigg( \frac{1}{N}
\sum_{q=0}^{2Q-1} \sum_{n \in \Z}
\mathfrak{N}_{q}(n)
\sum_{k\neq 0}
\wh{f}\left(\frac{ k}{N}\right)
e( k \omega(n) + ks)
\bigg)^m \mathrm{d} s.
\end{equation}
A similar lower bound can also be achieved by omitting some terms from the partition.
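For concreteness, one admissible construction of the functions $\mathfrak{N}_q$ in the range $0\le q<Q$ (one of many) is as follows: choose smooth non-decreasing functions $S_q:\R\to[0,1]$ with $S_q(x)=0$ for $x\le e^{q}/2$ and $S_q(x)=1$ for $x\ge 2e^{q-1}$, and set $\mathfrak{N}_q := S_q - S_{q+1}$. Then $\operatorname{supp}(\mathfrak{N}_q)\subset [e^{q}/2, 2e^{q})$, the sums telescope so that $\mathfrak{N}_{q}+\mathfrak{N}_{q+1}=1$ on the overlap, the bounds \eqref{N deriv} hold since each $S_q$ varies on a scale $\asymp e^{q}$, and $\mathfrak{N}_q^{(1)} = S_q' - S_{q+1}'$ changes sign only once because the supports of $S_q'$ and $S_{q+1}'$ are disjoint.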
We similarly decompose the $k$ sums, although thanks to the compact support of $\wh{f}$ we do not need to worry about $k\ge N$. Let $\mathfrak{K}_u:\R \to [0,1]$ be a smooth function such that, for $U :=\lceil \log N \rceil$
\begin{align*}
\sum_{u=-U}^{U} \mathfrak{K}_u(k) =\begin{cases}
1 & \mbox{ if } \vert k\vert \in [1, N)\\
0 & \mbox{ if } \vert k\vert < 1/2 \mbox{ or } \vert k\vert > 2N,
\end{cases}
\end{align*}
and the symmetry $\mathfrak{K}_{-u}(k) = \mathfrak{K}_{u}(-k)$ holds true for
all $u,k> 0$. Additionally
\begin{align*}
&\supp(\mathfrak{K}_u) =[e^{u}/2, 2e^{u})
\qquad\qquad \mbox{ if } u \ge 0\,\,,\mbox{ and } \\
&
\Vert \mathfrak{K}_{u}^{(j)} \Vert_{\infty} \ll e^{-\abs{u}j},\qquad\qquad
\mbox{for all } j\ge 1 .
\end{align*}
As for $\mathfrak{N}_q$, we also assume $\mathfrak{K}_u^{(1)}$ changes sign exactly once.
Therefore a central role is played by the smoothed exponential sums
\begin{equation}\label{def: E_qu}
\cE_{q,u}(s):=
\frac{1}{N}\sum_{k\in \Z} \mathfrak{K}_{u}(k)
\wh{f}\Big(\frac{k}{N}\Big)e( ks)
\sum_{n\in \Z} \mathfrak{N}_{q}(n)e( k\omega(n) ).
\end{equation}
Notice that \eqref{eq: upper bound moment} and the compact support of $\wh{f}$ imply
\begin{align*}
\cE(N) \ll \bigg\Vert \sum_{u=-U}^{U}\sum_{q=0}^{2Q-1} \cE_{q,u}
\bigg\Vert_{L^m([0,1])}^m.
\end{align*}
Now write
\begin{align*}
\cF(N) := \frac{1}{N^m}
\sum_{\vect{q}=0}^{2Q-1}
\sum_{\vect{u} = -U}^{U}
\sum_{\vect{k},\vect{n} \in \Z^m}
\mathfrak{K}_{\vect{u}}(\vect{k})
\mathfrak{N}_{\vect{q}}(\vect{n})
\int_0^1\wh{f}\Big(\frac{\vect{k}}{N}\Big)
e( \vect{k}\cdot \omega(\vect{n}) + \vect{k}\cdot \vect{1} s)\, \mathrm{d} s,
\end{align*}
where $\mathfrak{N}_{\vect{q}}(\vect{n}) := \mathfrak{N}_{q_1}(n_1)\mathfrak{N}_{q_2}(n_2)\cdots\mathfrak{N}_{q_m}(n_m)$ and $\mathfrak{K}_{\vect{u}}(\vect{k}) := \mathfrak{K}_{u_1}(k_1)\mathfrak{K}_{u_2}(k_2)\cdots\mathfrak{K}_{u_m}(k_m)$. Our goal will be to establish that $\cF(N)$ is equal to the right hand side of \eqref{m target} up to a $o(1)$ term. Then, since we can establish the same asymptotic for the lower bound, we may conclude the asymptotic for $\cE(N)$. Since the details are identical, we will only focus on $\cF(N)$.
Fixing $\vect{q}$ and $\vect{u}$, we let
\begin{align*}
\cF_{\vect{q},\vect{u}}(N)
& =
\frac{1}{N^m}\int_0^1
\sum_{\substack{\vect{n},\vect{k} \in \Z^m}}
\mathfrak{N}_{\vect{q}}(\vect{n})
\mathfrak{K}_{\vect{u}}(\vect{k})
\wh{f}\Big(\frac{\vect{k}}{N}\Big)
e(\vect{k}\cdot \omega(\vect{n}) + \vect{k}\cdot \vect{1}s) \mathrm{d} s.
\end{align*}
\begin{remark}
In the following sections, we will fix $\vect{q}$ and $\vect{u}$. Because of the way we have defined $\mathfrak{N}_q$, this implies two cases: $q<Q$ and $q\ge Q$. The only real difference between these two cases is the bounds in \eqref{N deriv}, which differ by a factor of $Q = \log(N)$. To keep the notation simple, we will assume $q <Q$ and work with the first bound. In practice the logarithmic correction does not affect any of the results or proofs.
\end{remark}
\section{Applying the $B$-process}
\label{s:Applying B}
\subsection{Degenerate Regimes}
Fix $\delta=\frac{1}{m+1}$. We say $(q,u)\in [2 Q] \times [-U,U]$ is \emph{degenerate} if either one of the following holds
\begin{align*}
\abs{u} < q^\frac{A-1}{2},
\,\, \mathrm{or}\,\,
q \leq \delta Q.
\end{align*}
Otherwise $(q,u)$ is called \emph{non-degenerate}. Let $\mathscr{G}(N)$ denote the set of all non-degenerate pairs $(q,u)$. In this section it is enough to suppose that $u>0$ (and therefore $k>0$). Next, we show that degenerate $(q,u)$ contribute a negligible amount to $\cF(N)$.
First, assume $q \le \delta Q$. Expanding the $m^{th}$-power and evaluating the $s$-integral (which forces $k_1+\dots+k_m=0$), then bounding the $\vect{n}$-sums trivially by $(e^q)^m\le N^{m\delta}$ and noting that there are $\ll N^{m-1}$ admissible $\vect{k}$, we obtain
\begin{align*}
\Vert \cE_{q,u} \Vert_{L^m}^m
\ll \frac{1}{N^m} \#\{k_1,\dots,k_m\asymp e^{u}: k_1 +\dots +k_m = 0 \}N^{m\delta}
\ll N^{m\delta-1}.
\end{align*}
If $u<q^{(A-1)/2}$ and $q>\delta Q$, then the Kusmin--Landau estimate (see \cite[Corollary 8.11]{IwaniecKowalski2004}) implies
\begin{align*}
\sum_{n\in \Z}
\mathfrak{N}_{q}(n)
e( k \omega(n)) \ll \frac{e^q}{k q^{A-1}},
\end{align*}
and hence
\begin{align*}
\Vert \cE_{q,u} \Vert_\infty
\ll \frac{1}{N}
\sum_{k\asymp e^{u}}
\frac{e^q}{k q^{A-1}}
\ll \frac{1}{N}
\frac{e^q}{ q^{A-1}}.
\end{align*}
Note
\[
\sum_{q\leq Q}\frac{e^{q}}{q^{A-1}}\ll\int_{1}^{Q}\frac{e^{q}}{q^{A-1}}\,\mathrm{d}q=\int_{1}^{Q/2}\frac{e^{q}}{q^{A-1}}\,\mathrm{d}q+\int_{Q/2}^{Q}\frac{e^{q}}{q^{A-1}}\,\mathrm{d}q\ll e^{Q/2}+\frac{1}{Q^{A-1}}\int_{Q/2}^{Q}e^{Q}\,\mathrm{d}q\ll\frac{e^{Q}}{Q^{A-1}}.
\]
Thus,
\[
\Big\Vert \sum_{\delta Q\leq q\leq Q} \sum_{u\leq q^{(A-1)/2}}
\mathcal{E}_{q,u} \Big\Vert_\infty \ll
\frac{1}{N}
\sum_{q\leq Q} \sum_{u\leq q^{(A-1)/2}}\sum_{k\asymp e^{u}}\frac{1}{k}
\frac{e^{q}}{q^{A-1}}
\ll
\frac{1}{N} \frac{e^{Q}}{Q^{A-1}}
\sum_{u\leq Q^{(A-1)/2}}1\leq\frac{1}{Q^{\frac{A-1}{2}}}.
\]
Taking the $L^m$-norm then yields:
\begin{align*}
\bigg \Vert \sum_{(q,u)\in ([2Q]\times[-U,U]) \setminus \mathscr{G}(N)}\cE_{q,u}
\bigg\Vert_{L^m}^m \ll_{\delta} \log(N)^{-\rho},
\end{align*}
for some $\rho>0$. Hence the triangle inequality implies
\begin{equation}\label{eq: cal F reduced}
\mathcal{F}(N) =
\bigg\Vert \sum_{(u,q) \in \mathscr{G}(N)}\cE_{q,u} \bigg\Vert_{L^m}^m + O(\log(N)^{-\rho}).
\end{equation}
Next, in order to dismiss a further degenerate regime, let $w,W$ denote strictly positive numbers satisfying
$w<W$. Consider
\[
g_{w,W}(x):=\min\left(\frac{1}{\Vert xw\Vert},\frac{1}{W}\right),
\]
where $\| \cdot\|$ denotes the distance to the nearest integer. We shall need (as in \cite[Proof of Lemma 4.1]{LutskoTechnau2021}):
\begin{lemma}
\label{lem: square_root_cancellation on average over s}If $W<1/10$,
then
\[
\sum_{e^{u}\leq\left|k\right|<e^{u+1}}g_{w,W}(k)\ll\left(e^{u}+\frac{1}{w}\right)\log\left(1/W\right)
\]
where the implied constant is absolute.
\end{lemma}
\begin{proof}
The proof is elementary, hence we only sketch the main idea. If $e^{u}w <1$ then we obtain the bound $\frac{1}{w} \log(1/W)$, and otherwise the bound $e^{u}\log(1/W)$. Focusing on the latter, first make a case distinction between those $k$ which contribute $\frac{1}{\|kw\|}$ and those which contribute $\frac{1}{W}$, and count how many contribute the latter. For the former, since the spacing between consecutive points is small, we can convert the sum into $e^uw$ many integrals of the form $\frac{1}{w}\int_{W/w}^{1/w} \frac{1}{x} \mathrm{d}x$.
\end{proof}
With the previous lemma at hand, we can show that an additional degenerate regime is negligible. Specifically, when we apply the $B$-process, the first step is to apply Poisson summation. Depending on the new summation variable there may, or may not, be a stationary point. The following lemma allows us to dismiss the contribution when there is no stationary point. Fix $k\asymp e^{u}$ and let $[a,b]:=\mathrm{supp}(\mathfrak{N}_{q})$. Consider
\[
\mathrm{Err}(k):=\sum_{\underset{m_{q}(r)>0}{r\in\mathbb{Z}}}\int_{\mathbb{R}}\,e(\Phi_r(x))\,\mathfrak{N}_{q}(x)\,\mathrm{d}x
\]
where
\[
\Phi_r(x):=k \omega(x)-rx,\qquad m_{q}(r):=\min_{x\in[a,b]}\vert\Phi_{r}'(x)\vert.
\]
Our next aim is to show that the smooth exponential sum
\[
\mathrm{Err}_{u}(s):=
\sum_{k\in\mathbb{Z}}e(ks)\mathrm{Err}(k)\mathfrak{K}_{u}(k)
\widehat{f}\Big( \frac{k}{N}\Big)
\]
is small on average over $s$:
\begin{lemma}\label{lem: error small in L_m average}
Fix any constant $C>0$, then the bound
\begin{align}
I_{\vect{u}}:=\int_0^1\prod_{i=1}^m \Err_{u_i}(s)\mathrm{d}s\ll Q^{-C}N^{m},
\end{align}
holds uniformly in $Q^{\frac{A-1}{2}}\leq \vect{u}\ll Q$.
\end{lemma}
\begin{proof}
Let $\mathcal{L}_{\vect{u}}$ denote the truncated sub-lattice of $\mathbb{Z}^{m}$ defined by gathering all $\vect{k}\in\mathbb{Z}^{m}$ so that $k_{1}+\ldots+k_{m}=0$ and $\vert k_{i}\vert\asymp e^{u_i}$ for all $i\leq m$. The quantity $\mathcal{L}_{\vect{u}}$ arises from
\begin{equation}
I_{\vect{u}}=
\sum_{\underset{i\leq m}{\left|k_{i}\right|\asymp e^{u_i}}}
\Biggl(\Biggl(\prod_{i\leq m}
\mathrm{Err}(k_{i})\mathfrak{K}_{u_i}(k_{i})\widehat{f}\Big( \frac{k_i}{N}\Big)
\Biggr)\int_{0}^{1}e((k_{1}+\ldots+k_{m})s)\,\mathrm{d}s\Biggr)
\ll \sum_{\vect{k}\in\mathcal{L}_{\vect{u}}}
\biggl(\prod_{i\leq m}\mathrm{Err}(k_{i})\biggr).\label{eq: bound on L_m norm of Err}
\end{equation}
Partial integration, and the dyadic decomposition allow one to show that the contribution of $\vert r\vert \geq Q^{O(1)}$ to $\Err(k_i)$ can be bounded by $O(Q^{-C})$ for any $C>0$. Hence, from van der Corput's lemma (Lemma \ref{lem: van der Corput's lemma}) with $j=2$ and the assumption $m_{q}(r)>0$, we infer
\[
\mathrm{Err}(k)\ll Q^{O(1)}\min\left(\frac{1}{\Vert k\omega'(a)-r\Vert},\frac{1}{(k\omega''(a))^{1/2}}\right)= Q^{O(1)} \min\left(\frac{1}{\Vert k\omega'(a)\Vert},\frac{1}{(k\omega''(a))^{1/2}}\right)
\]
where the implied constant is absolute. Notice that $\omega'(a)\asymp q^{A-1}e^{-q}=:w$, and
\[
(k\omega''(a))^{1/2}\asymp(e^{u-2q}q^{A-1})^{1/2}=:W.
\]
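These asymptotics can be read off from the explicit derivatives of $\omega(x)=(\log x)^{A}$: one computes
\[
\omega'(x) = \frac{A(\log x)^{A-1}}{x},
\qquad
\omega''(x) = \frac{A(\log x)^{A-2}\left((A-1)-\log x\right)}{x^{2}},
\]
so that for $x\asymp e^{q}$ we have $\omega'(x)\asymp q^{A-1}e^{-q}$ and $\vert\omega''(x)\vert \asymp q^{A-1}e^{-2q}$.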
Thus $\mathrm{Err}(k)\ll g_{w,W}(k) Q^{O(1)}.$ Using $\mathrm{Err}(k_{i})\ll g_{w,W}(k_{i}) Q^{O(1)}$ for $i<m$ and $\mathrm{Err}(k_{m})\ll Q^{O(1)}/W$ in (\ref{eq: bound on L_m norm of Err}) produces the estimate
\begin{equation}
I_{\vect{u}}\ll\frac{Q^{O(1)}}{W}
\sum_{\underset{i<m}{\vert k_{i}\vert\asymp e^{u_i}}}
\biggl(\prod_{i<m}g_{w,W}(k_{i})\biggr)
=\frac{Q^{O(1)}}{W}\prod_{i<m}\biggl(\sum_{\vert k \vert
\asymp e^{u_i}}g_{w,W}(k)\biggr).\label{eq: interm L_m normbnd}
\end{equation}
Suppose $W\geq N^{-\varepsilon}$, then $g_{w,W}(k)\leq N^{\varepsilon}$
and we obtain that
\[
I_{\vect{u}}\ll Q^{O(1)} N^{\varepsilon m}e^{u_1+\dots+u_{m-1}}\ll N^{m-1+\varepsilon m}\ll Q^{-C}N^{m}.
\]
Now suppose $W<N^{-\varepsilon}\leq1/10$. Then Lemma \ref{lem: square_root_cancellation on average over s}
is applicable and yields
\[
\sum_{\vert k\vert\asymp e^{u}}g_{w,W}(k)
\ll\left(e^{u}+1/w\right)\log\left(1/W\right)
\ll\left(e^{u}+e^{q}\right)\log\left(1/W\right)\ll NQ.
\]
Plugging this into \eqref{eq: interm L_m normbnd} and using
$1/W\ll e^{q-u/2}q^{(1-A)/2}\ll Ne^{-\frac{u}{2}}$
shows that
\[
I_{\vect{u}}\ll Q^{O(1)}
\frac{(NQ)^{m-1}}{W}\ll Q^{O(1)} (NQ)^{m}e^{-\frac{u}{2}}.
\]
Because $u\geq Q^{\frac{A-1}{2}}$, we certainly have $e^{-\frac{u}{2}}\ll Q^{-C}$ for any $C>0$ and thus the proof is complete.
\end{proof}
\subsection{First application of the $B$-Process}
\label{ss:B proc trip}
First, following the lead set out in \cite{LutskoSourmelidisTechnau2021} we apply the $B$-process in the $n$-variable. Assume without loss of generality that $k >0$ (if $k <0$ we take complex conjugates and use the assumption, made without loss of generality, that $f$ is even).
Given $r \in \Z$, let $x_{k,r}$ denote the stationary point of the function $k\omega(x) - rx$, thus:
\begin{align*}
x_{k,r} : = \wt{\omega}\left(\frac{r}{k}\right),
\end{align*}
where $\wt{\omega}(x):= (\omega^{\prime})^{-1}(x)$, the inverse of the derivative of $\omega$. This is well defined as long as $x > e^{A-1}$ (the inflection point of $\omega$) which is satisfied in the non-degenerate regime. Then, after applying the $B$-process, the phase will be transformed to
\begin{align*}
\phi(k,r): = k\omega\left(x_{k,r}\right) - r x_{k,r}.
\end{align*}
With that, the next lemma states that $\cE_{q,u}$ is well-approximated by
\begin{align}\label{EB def}
\cE_{q,u}^{(B)}(s):=
\frac{e(-1/8)}{N}\sum_{k\geq 0} \mathfrak{K}_{ u}(k)\wh{f}\Big(\frac{ k}{N}\Big) e(ks)
\sum_{r\geq 0} \frac{\mathfrak{N}_{q}(x_{k,r})}{\sqrt{k\omega^{\prime\prime}(x_{k,r})}}e(\phi(k,r)).
\end{align}
\begin{proposition}\label{prop:E B triple}
If $u\geq Q^{(A-1)/2}$, then
\begin{align}
\Vert \cE_{q,u} - \cE^{(B)}_{q,u}\Vert_{L^m}^m \ll Q^{-100m},
\end{align}
uniformly for all non-degenerate $(u,q)\in \mathscr{G}(N)$.
\end{proposition}
\begin{proof}
Let $[a,b]:= \supp(\mathfrak{N}_q)$, let $\Phi_r(x):= k\omega(x) - rx$, and let $m(r):= \min \{\abs{\Phi_r^\prime(x)} \, : \, x \in [a,b]\}$. As usual when applying the $B$-process we first apply Poisson summation and integration by parts:
\begin{align*}
\sum_{n \in \Z} \mathfrak{N}_q(n) e(k\omega(n)) = \sum_{r \in \Z} \int_{-\infty}^\infty \mathfrak{N}_q(x) e(\Phi_r(x)) \mathrm{d} x = M(k) + \Err(k),
\end{align*}
where $M(k)$ gathers the contributions of those $r\in\Z$ with $m(r)=0$ (i.e.\ with a stationary point) and $\Err(k)$ gathers the contributions of those $r$ with $m(r)>0$.
In the notation of Lemma \ref{lem: stationary phase}, let $w(x):= \mathfrak{N}_q(x)$, $\Lambda_{\psi} := \omega(e^q)e^{u} = q^{A}e^{u}$, and $\Omega_\psi = \Omega_{w} := e^q$. Since $(u,q)$ is non-degenerate we have that $\Lambda_\psi/\Omega_\psi \gg q$, and hence
\begin{align}\label{eq: stationary phase main term}
M(k) = e(-1/8)
\sum_{r\geq 0} \frac{\mathfrak{N}_{q}(x_{k,r})}{\sqrt{k\omega^{\prime\prime}(x_{k,r})}}e(\phi(k,r)) + O\left(\left(q \Lambda_\psi^{1/2+O(\varepsilon)}\right)^{-1}\right).
\end{align}
Summing \eqref{eq: stationary phase main term} against $ N^{-1} \mathfrak{K}_{u}(k) \wh{f}( k/N) e(ks)$ for $k\geq 0$ gives rise to $\mathcal{E}_{q,u}^{\mathrm{(B)}}$. The term coming from
\begin{align*}
\Err(k) N^{-1} \mathfrak{K}_{u}(k) \wh{f}( k/N) e(ks) =
\frac{1}{N}\mathrm{Err}_{u}(s)
\end{align*}
can be bounded sufficiently by Lemma \ref{lem: error small in L_m average} and the triangle inequality.
\end{proof}
Since $x_{k,r}$ is roughly of size $e^{q}$, if we stop here, and apply the triangle inequality to \eqref{EB def} we would get
\begin{align*}
\abs{\cE_{q,u}^{(B)}(s)}\ll
\frac{1}{N}\sum_{k\geq 0} \mathfrak{K}_{ u}(k) e^q \frac{1}{\sqrt{k}} \frac{k}{e^q} \ll
\frac{1}{N} e^{3u/2} \ll N^{1/2}.
\end{align*}
Hence, we still need to find a saving of order $N^{1/2}$. To achieve most of this, we now apply the $B$-process in the $k$ variable. This will require the following a priori bounds.
\subsection{Amplitude Bounds}
\label{ss:Amplitude}
Before proceeding with the second application of the $B$-process, we require bounds on the amplitude function
\begin{align*}
\Psi_{q,u}(k,r,s) = \Psi_{q,u} : = \frac{\mathfrak{N}_q(x_{k,r}) \mathfrak{K}_u(k)}{\sqrt{k \omega^{\prime\prime}(x_{k,r})}} \wh{f}\left(\frac{k}{N}\right),
\end{align*}
and its derivatives; for which we have the following lemma
\begin{lemma} \label{lem:amp bounds}
For any pair $q,u $ as above, and any $j \ge 1$, we have the following bounds
\begin{align}\label{amp bounds}
\|\partial_k^j \Psi_{q,u}(k,r,\cdot)\|_\infty
\ll e^{-uj} Q^{O(1)}\Vert \Psi_{q,u} \Vert_{\infty}
\end{align}
where the implicit constant in the exponent depends on $j$, but not $q,u$. Moreover
\begin{align*}
\|\Psi_{q,u}\|_\infty \ll e^{q-u/2} q^{-\frac{1}{2}(A-1)}.
\end{align*}
\end{lemma}
\begin{proof}
First note that since $\Psi_{q,u}$ is a product of functions of $k$, if we can establish \eqref{amp bounds} for each of these functions, then the overall bound will hold for $\Psi_{q,u}(k,r,s)$ by the product rule. Moreover the bound is obvious for $\mathfrak{K}_u(k)$, $\wh{f}(k/N)$, and $k^{-1/2}$.
Thus consider first $\partial_k \mathfrak{N}_q(x_{k,r}) = \mathfrak{N}_q^\prime(x_{k,r}) \partial_k(x_{k,r}) $. By assumption since $x_{k,r} \asymp e^{q}$, we have that $\mathfrak{N}_q^\prime(x_{k,r}) \ll e^{-q}$. Again, by repeated application of the product rule, it suffices to show that $\partial_k^j x_{k,r} \ll e^{q-uj}Q^{O(1)}$. To that end, begin with the following equation
\begin{align*}
1 = \partial_x(x) =\partial_x( \wt{\omega}(\omega^{\prime}(x)))= \wt{\omega}^\prime (\omega^\prime(x)) \omega^{\prime\prime}(x).
\end{align*}
Hence $\wt{\omega}^\prime (\omega^\prime(x)) = \frac{1}{\omega^{\prime\prime}(x)}$ which we can write as
\begin{align*}
\wt{\omega}^\prime (\omega^\prime(x)) = x^2 f_1(\log(x))
\end{align*}
where $f_1$ is a rational function. Now we take $j-1$ derivatives of each side. Inductively, one sees that there exist rational functions $f_j$ such that
\begin{align*}
\wt{\omega}^{(j)} (\omega^\prime(x)) = x^{j+1} f_j(\log(x)).
\end{align*}
Setting $x= x_{k,r}=\wt\omega(r/k)$ then gives
\begin{align}\label{omega tilde bound}
\wt{\omega}^{(j)} (r/k) = x_{k,r}^{j+1} f_j(\log(x_{k,r})).
\end{align}
With \eqref{omega tilde bound}, we can use repeated application of the product rule to bound
\begin{align*}
\partial_k^{j} x_{k,r} &= \partial_k^{j} \wt{\omega}(r/k)\\
& = -\partial_k^{j-1} \wt{\omega}^\prime (r/k)\left(\frac{r}{k^2}\right)\\
& \ll \wt{\omega}^{(j)} (r/k)\left(\frac{r}{k^2}\right)^{j} + \wt{\omega}^\prime (r/k)\left(\frac{r}{k^{1+j}}\right)\\
& \ll x_{k,r}^{j+1} f_j(\log(x_{k,r}))\left(\frac{r}{k^2}\right)^{j} + x_{k,r}^2 f_1(\log(x_{k,r}))\left(\frac{r}{k^{1+j}}\right).
\end{align*}
Now recall that $k \asymp e^{u}$, $x_{k,r} \asymp e^{q}$, and $r \asymp e^{u-q}q^{A-1}$, thus
\begin{align*}
\partial_k^{j} x_{k,r} & \ll \left(e^{q(j+1)} \left(\frac{e^{u-q}}{e^{2u}}\right)^{j} + e^{2q}\left(\frac{e^{u-q}}{e^{(1+j)u}}\right)\right)Q^{O(1)}\\
& \ll e^{q-ju}Q^{O(1)}.
\end{align*}
Hence $\partial_k^{(j)} \mathfrak{N}_q(x_{k,r}) \ll e^{-ju} Q^{O(1)}$.
The same argument suffices to prove that $\partial_k^{j}\frac{1}{\sqrt{\omega^{\prime\prime}(x_{k,r})}} \ll e^{q-ju} Q^{O(1)}$.
\end{proof}
\subsection{Second Application of the $B$-Process}
\label{s:Second Application}
Now, we apply the $B$-process in the $k$-variable. At the present stage, the phase function is $\phi(k,r)+ks$. Thus, for $h \in \Z$ let $\mu = \mu_{h,r,s}$ be the unique stationary point of $k \mapsto\phi(k,r) - (h-s)k$. Namely:
\begin{align*}
(\partial_\mu\phi)(\mu,r) = h-s.
\end{align*}
After the second application of the $B$-process, the phase will be transformed to
\begin{align*}
\Phi(h,r,s) = \phi(\mu,r) - (h-s)\mu.
\end{align*}
With that, let (again for $u > 0$)
\begin{align} \label{EBB def}
\cE^{(BB)}_{q,u}(s):= \frac{1}{N} \sum_{r\ge 0}\sum_{h \ge 0}
\wh{f} \left(\frac{\mu}{N}\right)\mathfrak{K}_u(\mu)\mathfrak{N}_{q}(x_{\mu,r}) \frac{1}{\sqrt{\abs{\mu \omega^{\prime\prime}(x_{\mu,r}) \cdot (\partial_{\mu\mu}\phi)(\mu,r) }}} e(\Phi(h,r,s)).
\end{align}
We can now apply the $B$-Process for a second time and conclude
\begin{proposition} \label{B proc twice}
We have
\begin{align}
\left\Vert \mathcal{E}_{q,u}^{\mathrm{(BB)}}-\mathcal{E}_{q,u}^{\mathrm{(B)}}\right\Vert _{L^{m}([0,1])}=O(N^{-\frac{1}{2m} +\varepsilon}),
\end{align}
uniformly for any non-degenerate $(q,u)\in \mathscr{G}(N)$.
\end{proposition}
Before we can prove the above proposition, we need some preparations. Note the following: we have
\[
k\omega'(n)=Ak \frac{(\log n)^{A-1}}{n}\leq 10 A e^{u-q}q^{A-1}.
\]
If $u-q+(A-1)\log q<-10A$ then $10 A e^{u-q}q^{A-1}\leq 10 Ae^{-10 A}\leq0.6$. Hence, there is no stationary point in the first application of the $B$-process. Thus the contribution from this regime is disposed of by the first $B$-process. Therefore, from now on we assume that
\begin{equation}
u\geq q-(A-1)\log q-10A,\quad\mathrm{in\,particular}\quad e^{u}\gg e^{q}q^{1-A}.\label{eq: lower bound on u in terms of q}
\end{equation}
\subsection*{Non-essential regimes}
In this section we estimate the contribution from regimes where $u$ is smaller, by a power of a logarithm, than the top scale $Q$. We shall see that this regime can be disposed of. More precisely, let
\[
\mathcal{T}(N):=\{(q,u)\in\mathscr{G}(N):\,u\leq\log N- 10 A\log\log N\}.
\]
We shall see that the contribution of $\mathcal{T}(N)$ is negligible by showing that the function
\begin{equation}
T_{N}(s):=\sum_{(q,u)\in\mathcal{T}(N)}\mathcal{E}_{q,u}^{(\mathrm{B})}(s)\label{def: tiny contribution}
\end{equation}
has a small $\Vert\cdot\Vert_{\infty}$-norm (in the $s\in [0,1]$ variable). To prove this, we need to ensure that in
\[
\mathcal{E}_{q,u}^{(\mathrm{B})}(s)=\frac{e(-1/8)}{N}\sum_{r\geq0}\sum_{k\geq0}\Psi_{q,u}(k,r,s)e(\phi(k,r)-ks)
\]
the amplitude function
\[
\Psi_{q,u}(k,r,s):=\frac{\mathfrak{N}_{q}(x_{k,r})
\mathfrak{K}_{u}(k)}{\sqrt{k\omega''(x_{k,r})}}\widehat{f}\left(\frac{k}{N}\right)
\]
has a suitably good decay in $k$.
\begin{lemma} \label{lem: L1 bound}
If (\ref{eq: lower bound on u in terms of q}) holds, then
\[
\left\Vert k\mapsto\partial_{k}
\Psi_{q,u}(k,r,s)\right\Vert_{L^{1}(\mathbb{R})}
\ll e^{u/2} q^{-\frac{1}{2}(A-1)},
\]
uniformly for $r$ and $s$ in the prescribed ranges.
\end{lemma}
\begin{proof}
First use the triangle inequality to bound
\begin{align*}
\left\Vert k\mapsto\partial_{k}
\Psi_{q,u}(k,r,s)\right\Vert_{L^{1}(\mathbb{R})}
\ll \left\Vert \partial_k\left\{ \frac{\mathfrak{N}_{q}(x_{k,r})
\mathfrak{K}_{u}(k)}{\sqrt{k\omega''(x_{k,r})}}\right\}\widehat{f}\left(\frac{k}{N}\right) \right\Vert_{L^{1}(\mathbb{R})} + \left\Vert \frac{\mathfrak{N}_{q}(x_{k,r})
\mathfrak{K}_{u}(k)}{\sqrt{k\omega''(x_{k,r})}}\partial_k\widehat{f}\left(\frac{k}{N}\right) \right\Vert_{L^{1}(\mathbb{R})}.
\end{align*}
Since $\wh{f}$ has a bounded derivative, the second term can be bounded by $1/N$ times $e^{u}$ times the sup norm of the remaining factor, which is admissible since $e^u \ll N$.
Since $\widehat{f}\left(\frac{k}{N}\right)$ is bounded, and $\frac{\mathfrak{N}_{q}(x_{k,r})
\mathfrak{K}_{u}(k)}{\sqrt{k\omega''(x_{k,r})}}$ changes sign finitely many times, we can apply the fundamental theorem of calculus and bound the whole by
\begin{align*}
\left\Vert k\mapsto\partial_{k}
\Psi_{q,u}(k,r,s)\right\Vert_{L^{1}(\mathbb{R})}
\ll \left\Vert k \mapsto \frac{1}{\sqrt{k\omega''(x_{k,r})}} \right\Vert_{L^{\infty}(\mathbb{R})}.
\end{align*}
\end{proof}
Now we are in a position to prove that the contribution from \eqref{def: tiny contribution} is negligible, thanks to a second derivative test. This is one of the places where, in contrast to the monomial case, we only win by a logarithmic factor. Moreover, this logarithmic saving goes to $0$ as $A$ approaches $1$.
\begin{lemma}
\label{lem: second derivative bound}The oscillatory integral
\begin{align} \label{I def}
I_{q,u}(h,r):=\int_{-\infty}^{\infty}\Psi_{q,u}(t,r,s)e(\phi(t,r)-t(h-s))\,\mathrm{d}t,
\end{align}
satisfies the bound
\begin{equation}
I_{q,u}(h,r)\ll e^{q} q^{1-A}\label{eq: bound on oscillatory integral second B}
\end{equation}
uniformly in $h,$ and $r$ in ranges prescribed by $\Psi$.
\end{lemma}
\begin{proof}
We aim to apply van der Corput's lemma (Lemma \ref{lem: van der Corput's lemma}) for a second derivative bound. For that, first note that $\partial_{t}\phi(t,r)=\omega(x_{t,r})+t\partial_{t}(\omega(x_{t,r}))-r\partial_{t}(x_{t,r})$. Now, since
\begin{equation}
\partial_{t}(\omega(x_{t,r}))=\omega'(x_{t,r})\partial_{t}(x_{t,r})=\frac{r}{t}\partial_{t}(x_{t,r}),\label{eq: derivative of omega xkr}
\end{equation}
it follows that
\begin{equation}
\partial_{t}\phi(t,r)=\omega(x_{t,r}).\label{eq: second B 1 deriv comp}
\end{equation}
Now we bound the second derivative of $\phi(t,r)-t(s+h)$. By \eqref{eq: derivative of omega xkr} and \eqref{eq: second B 1 deriv comp}, we have
\begin{align*}
\partial_{t}^{2}[\phi(t,r)] & =\partial_{t}[\omega(x_{t,r})]=\frac{r}{t}\partial_{t}[x_{t,r}].
\end{align*}
Thus
\begin{align*}
\partial_{t}^{2}[\phi(t,r)]=-\frac{1}{\omega''(x_{t,r})}\frac{r^{2}}{t^{3}}.
\end{align*}
Taking $x_{t,r}\asymp e^{q}$ into account gives
\begin{align}
\partial_{t}^{2}[\phi(t,r)]\asymp\frac{1}{e^{-2q}q^{A-1}}\frac{(e^{u-q}q^{A-1})^{2}}{e^{3u}}=e^{-u}q^{A-1}.\label{eq: size of second deriv in k}
\end{align}
The upshot, by van der Corput's lemma (Lemma \ref{lem: van der Corput's lemma}), is that
\[
I_{q,u}(h,r)\ll \|\Psi\|_\infty (e^{-u}q^{A-1})^{-1/2} \ll e^{q} q^{1-A}.
\]
\end{proof}
Now we are in a position to prove:
\begin{lemma}
We have that, as a function of $s\in[0,1]$, the sup-norm $\Vert T_{N}\Vert_{\infty}\ll(\log N)^{-8A}$.
\end{lemma}
\begin{proof}
Note that
\begin{equation}
\mathcal{E}_{q,u}^{(\mathrm{B})}(s)\ll\frac{1}{N}\sum_{r\asymp e^{u-q}q^{A-1}}\vert\Xi(r)\vert\qquad\mathrm{where}\qquad\Xi(r):=\sum_{k\geq0}\Psi_{q,u}(k,r,s)e(\phi(k,r)-ks).\label{eq: Poisson on E^B}
\end{equation}
By Poisson summation,
\[
\Xi(r)=\sum_{h\in\mathbb{Z}}I_{q,u}(h,r).
\]
We decompose the right hand side into the contribution $\Xi_{1}(r)$ coming from $\vert h\vert>(4Q)^{A}$, and a contribution $\Xi_{2}(r)$ from the regime $\vert h\vert\leq(4Q)^{A}$. Next, we argue that $\Xi_{1}(r)$ can be disposed of by partial integration. Because $x_{k,r}\leq2N$, we have
\[
\omega(x_{k,r})=(\log x_{k,r})^{A}\leq(3Q)^{A}.
\]
Note for $\vert h\vert > (4Q)^{A}$, by \eqref{eq: second B 1 deriv comp}, the inequality
\[
\partial_{k}[\phi(k,r)-k(s+h)]\gg h
\]
holds true, uniformly in $r$ and $s$. As a result, partial integration yields, for any constant $C>0$, the bound
\begin{align*}
I_{q,u}(h,r)\ll\left\Vert k\mapsto\partial_{k}\Psi_{q,u}(k,r,s)\right\Vert _{L^{1}(\mathbb{R})}h^{-C}.
\end{align*}
Therefore,
\begin{align*}
\Xi_{1}(r)\ll\left\Vert k\mapsto\partial_{k}\Psi_{q,u}(k,r,s)\right\Vert _{L^{1}(\mathbb{R})}\sum_{h\geq(4Q)^{A}}h^{-C}.
\end{align*}
Recall that we have $q\geq \frac{1}{m+1} Q$. Thus, taking $C$ to be large and using Lemma \ref{lem: L1 bound}, we deduce that
\begin{equation}
\Xi_{1}(r)\ll_{C_{1}}e^{\frac{u}{2}}Q^{-C_{1}}\label{eq: bound on Xi1}
\end{equation}
for any constant $C_{1}>0$. All in all, we have shown so far
\begin{align*}
\Xi(r)\ll\Xi_{2}(r)+e^{\frac{u}{2}}Q^{-C_{1}}.
\end{align*}
In $\Xi_{2}(r)$ there are $O(Q^{A})$ choices of $h$. By using Lemma \ref{lem: second derivative bound} we conclude
\begin{equation}
\Xi_{2}(r)\ll e^{q}q^{1-A}Q^{A}.\label{eq: bound on Xi2}
\end{equation}
By combining (\ref{eq: bound on Xi1}) and (\ref{eq: bound on Xi2}), we deduce from (\ref{eq: Poisson on E^B}) that
\[
\left\Vert \mathcal{E}_{q,u}^{(\mathrm{B})}(\cdot)\right\Vert _{\infty}\ll\frac{1}{N}\sum_{r\asymp e^{u-q}q^{A-1}}e^{q}q^{1-A} Q^{A}\ll \frac{1}{N}e^{u} Q^{A}.
\]
As a result,
\[
\left\Vert T_{N}(\cdot)\right\Vert_\infty \ll\frac{1}{N}\sum_{(u,q)\in\mathcal{T}(N)}e^{u}Q^{A}\ll\frac{1}{N}\sum_{u\leq\log N-10A\log\log N}e^{u}Q^{A+1}\ll\frac{1}{(\log N)^{10A}}(\log N)^{A+1}\ll\frac{1}{(\log N)^{8A}}.
\]
\end{proof}
\subsection*{Essential regimes}
At this stage, we are ready to apply our stationary phase expansion (Proposition \ref{prop: stationary phase}), and thus effectively apply the $B$-process a second time. Recall that after applying Poisson summation, the phase will be $\psi_{r,h}(t) = \psi(t): = \phi(t,r) - t(h-s)$. Let
\begin{align*}
W_{q,u}(t):=\frac{\mathfrak{N}_{q}(x_{t,r})\mathfrak{K}_{u}(t)}{\sqrt{t\omega''(x_{t,r})}}\widehat{f}\left(\frac{t}{N}\right)e\left(\psi(t)- \psi(\mu)-\frac{1}{2}(t-\mu)^{2} \psi^{\prime\prime}(\mu) \right).
\end{align*}
Further, define
\begin{align*}
p_{j}(\mu):=c_j \left(\frac{1}{\psi''(\mu)}\right)^{j}W_{q,u}^{(2j)}(\mu),
\end{align*}
where $p_0(\mu) = e(1/8) W_{q,u}(\mu)$. Note that, by \eqref{eq: size of second deriv in k}, one can bound
\begin{align}\label{p bounds}
p_{j}(\mu) \ll p_1(\mu) \ll N^{\varepsilon}\frac{1}{\psi^{\prime\prime}(\mu)}\frac{1}{\mu^{1/2}} \frac{\omega^{\prime\prime\prime\prime}(x_{\mu,r}) \left(\partial_t x_{t,r}|_{t=\mu}\right)^2}{\omega^{\prime\prime}(x_{\mu,r})^{3/2}} \ll e^{u/2-q} N^\varepsilon, \qquad j \ge 1.
\end{align}
Hence let
\begin{align*}
P_{q,u}(h,r,s):=\frac{e(\psi(\mu))}{\sqrt{\psi^{\prime\prime}(\mu)}}\left(p_{0}(\mu)+p_1(\mu)\right),
\end{align*}
and set
\begin{align*}
E_{q,u}^{\mathrm{(BB)}}(s):=\frac{e(-1/8)}{N}\sum_{r\geq0}\sum_{h\geq0}P_{q,u}(h,r,s).
\end{align*}
Before proving Proposition \ref{B proc twice} we need the following lemma.
\begin{lemma}\label{lem: average of double bar}
For any $c\in[0,1]$ and any $M>10$, we have the bound
\[
\int_{0}^{1}\,\min (\left\Vert c+s\right\Vert ^{-1},M )\,\mathrm{d}s \leq 2 \log M + 2.
\]
\end{lemma}
\begin{proof}
Since $s\mapsto c+s$ runs over a full period modulo $1$, the integral is independent of $c$, and
\[
\int_{0}^{1}\min\left(\left\Vert s\right\Vert ^{-1},M\right)\mathrm{d}s
=2\int_{0}^{1/2}\min\left(s^{-1},M\right)\mathrm{d}s
=2\Big(1+\int_{1/M}^{1/2}\frac{\mathrm{d}s}{s}\Big)
=2+2\log(M/2)\leq 2\log M + 2.
\]
\end{proof}
Now we can prove Proposition \ref{B proc twice}.
\begin{proof}[Proof of Proposition \ref{B proc twice}]
Fix $ s \in [0,1]$ and recall the definition of $I_{q,u}(h,r)$ from \eqref{I def}, then by Poisson summation
\begin{align*}
\cE_{q,u}^{\mathrm{(B)}}(s)=\frac{e(-1/8)}{N}\sum_{r\geq0}\sum_{h\in\mathbb{Z}}I_{q,u}(h,r).
\end{align*}
Let $[a,b]:=\supp(\mathfrak{K}_u)$, and
\begin{align*}
m_{r}(h):=\min_{k\in[a,b]}\vert \psi_{r,h}^\prime(k)\vert.
\end{align*}
We decompose the $h$-summation into three different ranges:
$$\sum_{h \in \Z} I_{q,u}(h,r) = \mathfrak{C}_1+\mathfrak{C}_2+\mathfrak{C}_3,$$
where the first contribution, $\mathfrak{C}_{1}(r,s)$, is where $m_{r}(h)=0$, the second contribution, $\mathfrak{C}_{2}(r,s)$, is where $0<m_{r}(h)\leq N^{\varepsilon}$, and the third contribution, $\mathfrak{C}_{3}(r,s)$, is where $m_{r}(h)> N^{\varepsilon}$. Integration by parts shows that
$$\mathfrak{C}_{3}(r,s)\ll N^{-100}.$$
Next, we handle $\mathfrak{C}_{1}(r,s)$. To this end, we shall apply Proposition \ref{prop: stationary phase}, in whose notation we have
\begin{align*}
\Omega_w & :=e^{u},\qquad
\Lambda_w :=e^{q-u/2+\varepsilon},\qquad \Lambda_{\psi}:=e^{u}q^{A-1},\qquad \Omega_{\psi}:=e^{u}.
\end{align*}
The decay of the amplitude function was shown in Lemma \ref{lem:amp bounds}, the decay of the phase function follows from a short calculation we omit. Next, since we have disposed of the inessential regimes, we see
\[
Z:=\Omega_{\psi}+\Lambda_w+\Lambda_{\psi}+\Omega_w+1\asymp e^{u}q^{A-1}\asymp N^{1+o(1)}.
\]
Further,
\[
\frac{\Omega_{\psi}}{\Lambda_{\psi}^{1/2}}Z^{\frac{\delta}{2}}=\frac{e^{\frac{u}{2}}}{q^{\frac{1}{2}A}}Z^{\frac{\delta}{2}}\asymp\frac{e^{u(\frac{1}{2}+\delta)}}{q^{\frac{1}{2}A+\frac{\delta}{2}(A-1)}}.
\]
Hence taking the parameter $\delta$ of Proposition \ref{prop: stationary phase} to be $\delta:=1/11$ is compatible with the assumption (\ref{eq: constraints on X,Y,Z}).
Thus
\[
\mathfrak{C}_{1}(r,s)=\sum_{h\geq0} P_{q,u}(h,r,s) + O(N^{-1/11}).
\]
Now we bound $\mathfrak{C}_{2}(r,s)$. First note that $\omega(x_{t,r})$ is monotonic in $t$. To see this, set the derivative equal to $0$:
\begin{align*}
A\frac{\log(x_{t,r})^{A-1}}{x_{t,r}} \partial_t x_{t,r} = A\frac{\log(x_{t,r})^{A-1}}{x_{t,r}} \wt{\omega}^{\prime}(r/t)(-r/t^2) = 0.
\end{align*}
However, this would force $\wt{\omega}^{\prime}(r/t) =0$, i.e. $1/\omega^{\prime\prime}(x_{t,r})=0$, which is impossible. Thus, by van der Corput's lemma (Lemma \ref{lem: van der Corput's lemma}) for the first derivative, and monotonicity, we have
\[
\mathfrak{C}_{2}(r,s)\ll N^{\frac{1}{2}+\varepsilon}\min\left(\left\Vert \omega(x_{a,r})+s\right\Vert ^{-1},N^{\frac{1}{2}+o(1)}\right)
\]
where we used \eqref{eq: size of second deriv in k} and the fact that $\partial_t \phi(t,r) = \omega(x_{t,r})$. Notice that
\[
\Big\Vert \frac{1}{N}\sum_{r\geq0}\mathfrak{C}_{2}(r,\cdot)
\Big\Vert _{L^{m}}^{m}
\ll
\Big\Vert \frac{1}{N}\sum_{r\geq0}\mathfrak{C}_{2}(r,\cdot)
\Big\Vert _{\infty}^{m-1}
\Big\Vert \frac{1}{N}
\sum_{r\geq0}\mathfrak{C}_{2}(r,\cdot)
\Big\Vert_{L^{1}}.
\]
By (\ref{eq: bound on oscillatory integral second B}) we see
\[
\Big\Vert \frac{1}{N}\sum_{r\geq0}\mathfrak{C}_{2}(r,\cdot)
\Big\Vert _{\infty}^{m-1}\ll N^{O(\varepsilon)}.
\]
Hence it remains to estimate
\[
N^{O(\varepsilon)}\sum_{r\asymp e^{u-q}q^{A-1}}\frac{1}{\sqrt{N}}\int_{0}^{1}\,\min\left(\left\Vert \omega(x_{a,r})+s\right\Vert ^{-1},N^{\frac{1}{2}+o(1)}\right)\,\mathrm{d}s.
\]
By exploiting Lemma \ref{lem: average of double bar} we see
\[
\Big\Vert \frac{1}{N}
\sum_{r\geq0}\mathfrak{C}_{2}(r,\cdot)
\Big\Vert _{L^{m}}^m
\ll
\sum_{r\asymp e^{u-q}q^{A-1}}
\frac{N^{o(1)}}{\sqrt{N}}\ll N^{o(1)-\frac{1}{2}}
\]
which implies the claim.
Finally, it remains to show that
\begin{align*}
\|\cE^{(BB)}_{q,u}(\cdot) - E^{(BB)}_{q,u}(\cdot)\|_{L^m}^m =O(N^{-1/2 +\varepsilon})
\end{align*}
from which we can apply the triangle inequality to conclude Proposition \ref{B proc twice}. For this, recall the bounds \eqref{p bounds}. Since $\cE_{q,u}^{(BB)}$ is simply the term arising from $p_0(\mu)$, we have that
\begin{align*}
\|\cE^{(BB)}_{q,u}(\cdot) - E^{(BB)}_{q,u}(\cdot)\|_{L^m}^m \ll \frac{1}{N}\sum_{r\in \Z} p_1 \ll \frac{e^{u-q/2}N^\varepsilon}{N}.
\end{align*}
From here the bound follows from the ranges of $q$ and $u$.
\end{proof}
Before proceeding, we note that \eqref{EBB def} can be simplified. In particular
\begin{lemma}\label{lem:Phi mu}
Given, $h$, $r$, and $s$ as above, we have
\begin{align*}
\mu_{h,r,s} = \frac{r}{\omega^\prime(\omega^{-1}(h-s))}, \qquad \Phi(h,r,s) = -r \omega^{-1}(h-s)
\end{align*}
and moreover
\begin{align}\label{kjbkhb}
\mu \omega^{\prime\prime}(x_{\mu,r}) \cdot (\partial_{\mu\mu}\phi)(\mu,r) =-\frac{r^2}{\mu^2}.
\end{align}
\end{lemma}
\begin{proof}
Recall $x_{k,r}=\tilde{\omega}\left(\frac{r}{k}\right)$. Now, to compute $\mu$, we have:
\begin{align*}
0 & = \partial_\mu \left(\mu \omega(x_{\mu,r})-rx_{\mu,r}\right) -( h-s)\\
& = \omega(x_{\mu,r}) + \mu \omega^\prime(x_{\mu,r}) \left(\partial_\mu x_{\mu,r}\right) - r \partial_\mu x_{\mu,r} - (h-s).
\end{align*}
Consider first
\begin{align*}
\partial_\mu x_{\mu,r} &= \partial_\mu\left(\wt{\omega}\left(\frac{r}{\mu}\right)\right)
= \wt{\omega}^\prime \left(\frac{r}{\mu}\right) \left(-\frac{r}{\mu^2}\right).
\end{align*}
Furthermore, since $\mu = \wt{\omega}(\omega^\prime(\mu))$, we may differentiate both sides and then change variables to see
\begin{align}\label{omegaprimprime}
\wt{\omega}^\prime(r/\mu) = \frac{1}{\omega^{\prime\prime}(\wt{\omega}(r/\mu))}.
\end{align}
Hence
\begin{align*}
\partial_\mu (x_{\mu, r}) = - \frac{r}{\mu^2\, \omega^{\prime\prime}(\wt{\omega}(r/\mu))}.
\end{align*}
Therefore, using $\omega^{\prime}(x_{\mu,r}) = r/\mu$,
\begin{align*}
0 & = \omega(x_{\mu,r}) - \mu\,\omega^{\prime}(x_{\mu,r})\, \frac{r}{\mu^{2}\omega^{\prime\prime}(\wt{\omega}(r/\mu))} + \frac{r^{2}}{\mu^{2}\omega^{\prime\prime}(\wt{\omega}(r/\mu))} - (h-s)\\
&= \omega(x_{\mu,r}) - (h-s).
\end{align*}
Hence $\omega(\wt{\omega}(r/\mu)) = h-s$. Solving for $\mu$ gives:
\begin{align*}
\mu = \frac{r}{\omega^\prime(\omega^{-1}(h-s))}.
\end{align*}
Moreover, we can simplify the phase as follows
\begin{align*}
\Phi(h,r,s) &= \phi(\mu,r) - (h-s)\mu\\
&= \mu \omega(x_{\mu,r}) - r x_{\mu,r} - (h-s)\mu\\
&= \mu \omega(\wt{\omega}(r/\mu)) - r \wt{\omega}(r/\mu) - (h-s)\mu\\
&= \frac{r(h-s)}{\omega^\prime(\omega^{-1}(h-s))} - r \omega^{-1}(h-s) - (h-s)\frac{r}{\omega^\prime(\omega^{-1}(h-s))}\\
&= - r \omega^{-1}(h-s).
\end{align*}
Turning now to \eqref{kjbkhb}, we note that since, by the definition of $\mu$ we have that $\partial_\mu \phi(\mu,r) = h-s$, and $h-s = \omega (\wt{\omega}(r/\mu))$ we may differentiate both sides of the former to deduce
\begin{align*}
\partial_{\mu\mu}\phi(\mu,r) &= \partial_\mu\left( \omega (\wt{\omega}(r/\mu))\right)\\
&= \omega^\prime(\wt{\omega}(r/\mu)) \wt{\omega}^\prime(r/\mu) (- r/\mu^2)\\
&= - (r^2/\mu^3) \wt{\omega}^\prime(r/\mu).
\end{align*}
Now using \eqref{omegaprimprime} we conclude that
\begin{align*}
\mu \omega^{\prime\prime}(x_{\mu,r}) \cdot (\partial_{\mu\mu}\phi)(\mu,r)
&= -\mu \omega^{\prime\prime}(\wt{\omega}(r/\mu)) (r^2/\mu^3) \wt{\omega}^\prime(r/\mu)\\
&= -(r^2/\mu^2) \omega^{\prime\prime}(\wt{\omega}(r/\mu)) \frac{1}{\omega^{\prime\prime}(\wt{\omega}(r/\mu))} = -\frac{r^2}{\mu^2}.
\end{align*}
\end{proof}
Applying Lemma \ref{lem:Phi mu} and inserting some definitions allows us to write
\begin{align} \label{EBB redef}
\cE^{(BB)}_{q,u}(s)=
\frac{1}{N}\sum_{r\ge 0}\sum_{h \ge 0}
\wh{f} \left(\frac{\mu}{N}\right)\mathfrak{K}_u(\mu)\mathfrak{N}_{q}(\wt{\omega}(r/\mu)) \frac{\mu}{r} e(-r\omega^{-1}(h-s)).
\end{align}
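Concretely, for $\omega(x)=(\log x)^{A}$ we have $\omega^{-1}(y)=\exp(y^{1/A})$ and $\omega^{\prime}(\omega^{-1}(h-s)) = A(h-s)^{\frac{A-1}{A}}e^{-(h-s)^{1/A}}$, so that
\begin{align*}
\mu_{h,r,s} = \frac{r\,e^{(h-s)^{1/A}}}{A\,(h-s)^{\frac{A-1}{A}}},
\qquad
\Phi(h,r,s) = -r\,e^{(h-s)^{1/A}},
\end{align*}
which is the explicit form of the phase that appears in Section \ref{s:Off-diagonal} below.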
Returning now to the full $L^m$ norm, let $\sigma_i := \sigma(u_i) := \frac{u_i}{\abs{u_i}}$. Proposition \ref{prop:E B triple}, Proposition \ref{B proc twice} and expanding the $m^{th}$-power yields
\begin{equation}\label{eq: cal F B-processed}
\mathcal{F}(N) = \sum_{\sigma_1,\ldots,\sigma_m\in \{\pm 1\} }
\sum_{\substack{(u_i,q_i) \in \mathscr{G}(N)\\ u_i>0}}
\int_{0}^{1}
\prod_{\substack{i\leq m\\ \sigma_i >0}}
\cE^{\mathrm{(BB)}}_{q_i, u_i}(s)
\prod_{\substack{i\leq m\\ \sigma_i <0}}
\overline{\cE^{\mathrm{(BB)}}_{q_i, u_i}(s)}
\rd s
+ O(N^{-\varepsilon/2}).
\end{equation}
To simplify this expression, for a fixed $\vect{u}$ and $\vect{q}$, and $\vect{\mu}=(\mu_1,\ldots,\mu_m)$ let $\mathfrak{K}_{\vect{u}}(\vect{\mu}):= \prod_{i\leq m}\mathfrak{K}_{u_i}(\mu_i) $. The functions $ \mathfrak{N}_{\vect{q}}\left(\vect{\mu},s\right)$ and $\wh{f}(\vect{\mu}/N)$ are defined similarly. Aside from the error term, the right hand side of \eqref{eq: cal F B-processed} splits into a sum over
\begin{align*}
\cF_{\vect{q},\vect{u}} := \frac{1}{N^m}
\sum_{\vect{r} \in \Z^m}\frac{1}{r_1r_2\cdots r_m}
\int_0^1\sum_{\vect{h}\in \Z^m }
\mathfrak{K}_{\vect{u}}(\vect{\mu})
\mathfrak{N}_{\vect{q}}\left(\vect{\mu},s\right)
A_{\vect{h},\vect{r}}(s)
e\left( \varphi_{\vect{h},\vect{r}}(s)\right) \mathrm{d}s
\end{align*}
where the phase function is given by
\begin{align*}
\varphi_{\vect{h},\vect{r}}(s) := -(r_1\omega^{-1}(h_1-s)+r_2\omega^{-1}(h_2-s) + \dots + r_m\omega^{-1}(h_m-s)),
\end{align*}
and where
\begin{align*}
A_{\vect{h},\vect{r}}(s) : = \wh{f}\left(\frac{ \vect{\mu}}{N}\right) \mu_1\mu_2 \dots \mu_m.
\end{align*}
Now we distinguish between two cases. First, the set of all $(\vect{r},\vect{h})$ where the phase $\varphi_{\vect{h},\vect{r}}(s)$ vanishes identically, which we call the \emph{diagonal}; and its complement, the \emph{off-diagonal}. Let
\begin{align*}
\mathscr{A} : = \{ (\vect{r}, \vect{h}) \in \N^m\times \N^m :
\varphi_{\vect{h},\vect{r}}(s) = 0, \forall s \in [0,1] \},
\end{align*}
and let
\begin{align*}
\eta(\vect{r},\vect{h}):=\begin{cases}
1 & \mbox{ if } (\vect{r},\vect{h}) \not\in \mathscr{A} \\
0 & \mbox{ if } (\vect{r},\vect{h}) \in \mathscr{A}.
\end{cases}
\end{align*}
The diagonal, as we show, contributes the main term, while
the off-diagonal contribution is negligible (see Section \ref{s:Off-diagonal}).
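For instance, when $m=2$ and $r_1,r_2\neq 0$, the phase $\varphi_{\vect{h},\vect{r}}(s) = -(r_1\omega^{-1}(h_1-s)+r_2\omega^{-1}(h_2-s))$ vanishes identically precisely when $h_1=h_2$ and $r_1+r_2=0$: for $h_1\neq h_2$ the functions $s\mapsto \omega^{-1}(h_i-s)$ on $[0,1]$ are linearly independent, so no non-trivial linear combination of them can vanish identically. This is exactly the $\cP$-adjusted structure made explicit in the next section.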
\section{Extracting the Diagonal}
\label{s:Diagonal}
First, we establish an asymptotic for the diagonal. The below sums range over $\vect{q} \in [2Q]^m$, $\vect{u}\in [-U,U]^m$, and $\vect{r},\vect{h}\in \Z^m$. Let
\begin{align*}
\cD_N
&= \frac{1}{N^m}\sum_{\vect{q},\vect{u},\vect{r},\vect{h} }(1-\eta(\vect{r},\vect{h}))\frac{1}{r_1r_2\cdots r_m} \int_0^1 \mathfrak{K}_{\vect{u}}(\vect{\mu}) \mathfrak{N}_{\vect{q}}\left(\vect{\mu},s\right) A_{\vect{h},\vect{r}}(s) \mathrm{d}s.
\end{align*}
With that, the following lemma establishes the main asymptotic needed to prove Lemma \ref{lem:MP = KP non-isolating} (and thus Theorem \ref{thm:correlations}).
\begin{lemma}\label{lem:diag}
We have
\begin{align}\label{diag}
\lim_{N\rightarrow \infty} \cD_N =
\sum_{\cP\in \mathscr{P}_m}
\expect{f^{\abs{P_1}}}\cdots \expect{f^{\abs{P_d}}},
\end{align}
where the sum is over all non-isolating partitions of $[m]$,
which we denote $\cP = (P_1, \dots, P_d)$.
\end{lemma}
\begin{proof}
Since the Fourier transform $\wh{f}$ is assumed to have compact support, we can evaluate the sum over $\vect{u}$ and eliminate the factors $\mathfrak{K}_{\vect{u}}$. Hence
\begin{align*}
\cD_N
&= \frac{1}{N^m}\sum_{\vect{q},\vect{r},\vect{h} } \one(\abs{\mu_i} >0)(1-\eta(\vect{r},\vect{h}))\frac{1}{r_1r_2\dots r_m} \int_0^1 \mathfrak{N}_{\vect{q}}\left(\vect{\mu},s \right) A_{\vect{h},\vect{r}}(s) \mathrm{d}s,
\end{align*}
where the indicator function takes care of the fact that the contribution of $k_i=0$ was removed (recall that the $\vect{k}$-sum in $\cE(N)$ runs over $(\Z^{\ast})^m$).
The condition that the phase is zero is equivalent to a condition on $\vect{h}$ and $\vect{r}$. Specifically, this happens in the following situation: let $\cP$ be a non-isolating partition of $[m]$, we say a vector $(\vect{r},\vect{h})$ is $\cP$\emph{-adjusted} if for every $P \in \cP$ we have: $h_i = h_j$ for all $i, j \in P$, and $\sum_{i\in P} r_i = 0$. The diagonal is restricted to $\cP$-adjusted vectors. Now
\begin{align*}
\chi_{\cP,1}(\vect{r})
:=
\begin{cases}
1 & \mbox{ if $\sum_{i\in P} r_i =0$ for each $P \in \cP$}\\
0 & \mbox{ otherwise,}
\end{cases},
\qquad
\chi_{\cP,2}(\vect{h})
:=
\begin{cases}
1 & \mbox{ if $h_i =h_j$ for all $i,j \in P\in \cP$}\\
0 & \mbox{ otherwise,}
\end{cases}
\end{align*}
where $\chi_{\cP,1}(\vect{r})\chi_{\cP,2}(\vect{h})$ encodes the condition that $(\vect{r},\vect{h})$ is $\cP$-adjusted. Thus, we may write
\begin{align*}
\cD_N
&= \frac{1}{N^m}\sum_{\cP\in \mathscr{P}_m} \sum_{\vect{q},\vect{r}, \vect{h} }\chi_{\cP,1}(\vect{r})\chi_{\cP,2}(\vect{h})\frac{1}{r_1r_2\cdots r_m}\left( \int_0^1 \mathfrak{N}_{\vect{q}}\left(\vect{\mu}, s\right)\wh{f}\left(\frac{ \vect{\mu}}{N}\right) \mu_1\mu_2\cdots \mu_m \mathrm{d}s\right) +o(1).
\end{align*}
Inserting the definition of $\mu_i$ then gives
\begin{align*}
\cD_N
&= \frac{1}{N^m}\sum_{\cP\in \mathscr{P}_m}\sum_{\vect{q},\vect{r}, \vect{h} } \chi_{\cP,1}(\vect{r})\chi_{\cP,2}(\vect{h}) \int_0^1 \mathfrak{N}_{\vect{q}}\left(\vect{\mu},s\right)\wh{f}\left(\frac{\vect{\mu}}{N}\right) \prod_{i=1}^m\left(\frac{1}{\omega^\prime(\omega^{-1}(h_i-s))}\right) \mathrm{d}s +o(1).
\end{align*}
Now note that the $\vect{r}$ variable only appears in $\wh{f}\left(\vect{\mu}/N\right)$, that is
\begin{align}\label{D intermediate}
\begin{aligned}
\cD_N
&= \frac{1}{N^m}\sum_{\cP\in \mathscr{P}_m}
\prod_{P \in \cP}
\sum_{\vect{q}, h }
\int_0^1 \mathfrak{N}_{\vect{q},P}\left(h\right)
\left(\frac{1}{\omega^\prime(\omega^{-1}(h))}\right)^{\abs{P}}
\sum_{\substack{\vect{r}\in \Z^{\abs{P}}\\r_i \neq 0}}
\chi(\vect{r}) \wh{f}\left(\frac{1}{N\omega^\prime(\omega^{-1}(h))}
\vect{r}\right) \mathrm{d}s(1+ o(1)),
\end{aligned}
\end{align}
where $\chi(\vect{r})$ is $1$ if $\sum_{i=1}^{\abs{P}} r_i = 0$ and $0$ otherwise,
and where $\mathfrak{N}_{\vect{q},P}(h) = \prod_{i\in P}\mathfrak{N}_{q_i} (\mu_i, s ) $. We can apply Euler's summation formula (\cite[Theorem 3.1]{Apostol1976}) to conclude that
\begin{align*}
\sum_{\substack{\vect{r}\in \Z^{\abs{P}}\\r_i \neq 0}}\chi(\vect{r}) \wh{f}\left(\frac{1}{N\omega^\prime(\omega^{-1}(h))}
\vect{r}\right)
=
\int_{\R^{\abs{P}}} \chi(\vect{x})\wh{f}\left(\frac{1}{N\omega^\prime(\omega^{-1}(h))} \vect{x}\right) \mathrm{d}\vect{x}\left(1+ o(1) \right).
\end{align*}
Changing variables then yields
\begin{align*}
& \int_{\R^{\abs{P}}} \chi(\vect{x})\wh{f}\left(\frac{1}{N\omega^\prime(\omega^{-1}(h))} \vect{x}\right)\mathrm{d}\vect{x}
=N^{\abs{P}-1}\omega^\prime(\omega^{-1}(h))^{\abs{P}-1} \int_{\R^{\abs{P}}} \chi(\vect{x}) \wh{f}\left(\vect{x}\right)\mathrm{d}\vect{x} \left(1+ O\left(N^{-\theta}\right)\right),
\end{align*}
note that $\chi(\vect{x})$ fixes $x_{\abs{P}} = - \sum_{i=1}^{\abs{P}-1} x_i$. Plugging this into our \eqref{D intermediate} gives
\begin{align*}
\cD_N
&= \frac{1}{N^d}\sum_{\cP\in \mathscr{P}_m}\prod_{P \in \cP}
\sum_{\vect{q}, h } \mathfrak{N}_{\vect{q},P}\left(h\right)\frac{1}{\omega^\prime(\omega^{-1}(h))}
\int_{\R^{\abs{P}-1}} \wh{f}(x_1, \dots, x_{\abs{P}-1}, -\vect{x}\cdot \vect{1}) \rd \vect{x} (1+ o(1)).
\end{align*}
Next, we may apply Euler's summation formula and a change of variables to conclude that
\begin{align*}
\cD_N
&= \sum_{\cP\in \mathscr{P}_m}\prod_{P \in \cP}
\left( \int_{\R^{\abs{P}-1}} \wh{f}(x_1, \dots, x_{\abs{P}-1}, -\vect{x}\cdot \vect{1}) \rd \vect{x} \right)(1+ o(1)).
\end{align*}
From there we apply Fourier analysis as in \cite[Proof of Lemma 5.1]{LutskoTechnau2021} to conclude \eqref{diag}.
\end{proof}
\section{Bounding the Off-Diagonal}
\label{s:Off-diagonal}
Recall the off-diagonal contribution is given by
\begin{align*}
\cO_N
&= \sum_{\vect{q},\vect{u}}\frac{1}{N^m}\sum_{\vect{r},\vect{h} }\eta(\vect{r},\vect{h})\int_0^1\frac{\mu_1\mu_2\cdots \mu_m}{r_1r_2\cdots r_m} \wh{f}\left(\frac{\vect{\mu}}{N}\right) \mathfrak{K}_{\vect{u}}(\vect{\mu}) \mathfrak{N}_{\vect{q}}\left(\vect{\mu},s\right)e(\Phi(\vect{h},\vect{r},s)) \mathrm{d}s,
\end{align*}
where $r_i \asymp e^{u_i-q_i}q_i^{A-1}$ and $h_i \asymp q_i^{A}$. Finally, the phase function is
\begin{align*}
\Phi(\vect{h},\vect{r},s) = -\sum_{i=1}^m r_i \exp((h_i-s)^{1/A}).
\end{align*}
If we were to bound the oscillatory integral trivially, we would achieve the bound $\cO_N \ll (\log N)^{(A+1)m}$. Therefore all that is needed is a small power saving, for which we can exploit the oscillatory integral
\begin{align*}
I(\vect{h},\vect{r}) := \int_0^1 \cA_{\vect{h},\vect{r}}(s) e(\Phi(\vect{h},\vect{r},s)) \mathrm{d}s
\end{align*}
where
\begin{align*}
\cA_{\vect{h},\vect{r}}(s) : = \frac{\mu_1\mu_2\cdots \mu_m}{r_1r_2\cdots r_m} \wh{f}\left(\frac{\vect{\mu}}{N}\right) \mathfrak{K}_{\vect{u}}(\vect{\mu}) \mathfrak{N}_{\vect{q}}\left(\vect{\mu},s\right).
\end{align*}
While bounding this integral is more involved in the present setting, we can nevertheless use the proof in \cite[Section 6]{LutskoTechnau2021} as a guide. In Proposition \ref{prop:int s} we achieve a power saving; for this reason we can ignore the sums over $\vect{q}$ and $\vect{u}$, which give only a logarithmic number of choices.
Since we are working on the off-diagonal we may write the phase as
\begin{align}\label{Phi}
\Phi(\vect{h},\vect{r},s) = \sum_{i=1}^l r_i \exp((h_i-s)^{1/A}) - \sum_{i=l+1}^L r_i \exp((h_i-s)^{1/A}),
\end{align}
where we may now assume that $r_i >0$, the $h_i$ are pairwise distinct, and $L\le m$. We restrict attention
to the case $L = m$ (this is also the most difficult case,
and the other cases can be done analogously).
\begin{proposition}\label{prop:int s}
Let $\Phi$ be as above, then for any $\vep>0$ we have
\begin{align*}
I(\vect{h},\vect{r}) \ll N^{\vep}\,\mathfrak{K}_{\vect{u}}(\vect{\mu}_0) \mathfrak{N}_{\vect{q}}(\vect{\mu}_0,0) \frac{e^{u_1 + \dots + u_m}}{r_1 \cdots r_m} N^{- 1/m +\vep}
\end{align*}
as $N \to \infty$, where $\mu_{0,i} =\frac{r_i}{\omega^\prime(\omega^{-1}(h_i))}$. The implied constants are independent of $\vect{h}$ and $\vect{r}$ provided $\eta(\vect{r},\vect{h}) \neq 0$.
\end{proposition}
\begin{proof}
As in \cite{LutskoTechnau2021} we shall prove Proposition \ref{prop:int s} by showing that one of the first $m$ derivatives of $\Phi$ is large. Then we can apply van der Corput's lemma to the integral and achieve the necessary bound. However, since the phase function is a sum of exponentials (as opposed to a sum of monomials, as it was in our previous work), achieving these bounds is significantly more involved than in \cite{LutskoTechnau2021}.
The $j^{th}$ derivative is given by (we will send $s \mapsto -s$ to avoid having to deal with minus signs at the moment)
\begin{align*}
D_j &= \sum_{i=1}^m r_i \exp((h_i+s)^{1/A}) \left\{ A^{-j} (h_i+s)^{j/A-j} + c_{j,1}(h_i+s)^{(j-1)/A-j} + \dots + c_{j,j-1}(h_i+s)^{(1/A-j)} \right\}\\
&=: \sum_{i=1}^m b_i P_j(h_i)
\end{align*}
where the $c_{j,k}$ depend only on $A$ and $j$, and where $b_i := r_i \exp((h_i+s)^{1/A})$.
In matrix form, let $\vect{D} :=(D_1, \dots, D_m)$ denote the vector of the first $m$ derivatives, and let $\vect{b}:= (b_1, \dots, b_m)$. Then
\begin{align*}
&\vect{D} = \vect{b} M,
\qquad \mbox{ where } (M)_{ij}:=P_j(h_i).
\end{align*}
To prove Proposition \ref{prop:int s} we will lower bound the determinant of $M$. Thus we will show that $M$ is invertible, and hence we will be able to lower bound the $\ell^2$-norm of $\vect{D}$. For this, consider the $j^{th}$ row of $M$
\begin{align*}
(M)_j = (P_j(h_1), \dots, P_j(h_m)).
\end{align*}
We can write $P_j(h_i) := \sum_{k=0}^{j-1} c_{j,k} (h_i+s)^{t_k/A-j}$, where $t_k:= j-k$ (so that $c_{j,0}=A^{-j}$). Since the determinant is multilinear in the rows, we can decompose the determinant of $M$ as
\begin{align}
\det(M) = \sum_{\vect{t}\in \cT} c_{\vect{t}} \det\left(((h_i+s)^{t_j/A-j})_{i,j\le m}\right)
\end{align}
where $c_{\vect{t}}$ are constants depending only on $\vect{t}$ and the sum over $\vect{t}$ ranges over the set
\begin{align*}
\cT : = \{\vect{t} \in \N^m :\ t_j \le j, \ \forall j \in [1,m] \}.
\end{align*}
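For instance, when $m=2$ we have $\cT=\{(1,1),(1,2)\}$ and the decomposition reads
\begin{align*}
\det(M) = c_{(1,2)}\det\begin{pmatrix}(h_1+s)^{1/A-1} & (h_1+s)^{2/A-2}\\ (h_2+s)^{1/A-1} & (h_2+s)^{2/A-2}\end{pmatrix}
+ c_{(1,1)}\det\begin{pmatrix}(h_1+s)^{1/A-1} & (h_1+s)^{1/A-2}\\ (h_2+s)^{1/A-1} & (h_2+s)^{1/A-2}\end{pmatrix};
\end{align*}
the first term, corresponding to $\vect{t}_M=(1,2)$, dominates since its column exponents are maximal.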
Let $X_{\vect{t}}:= ((h_i+s)^{t_j/A-j})_{i,j\le m}$. We claim that $\det(M) = c_{\vect{t}_M}\det(X_{\vect{t}_M})(1+O(\max_i (h_i^{-1/A})))$ as $N \to \infty$, where $\vect{t}_M := (1,2,\dots, m)$.
To establish this claim, we appeal to the work of Khare and Tao, see Lemma \ref{lem:KT}. Namely, let $\vect{H} := (h_1+s, \dots , h_m +s)$ with $h_1 > h_2> \dots > h_m$, and let $\vect{T}(\vect{t}):= (t_1/A-1, \dots, t_m/A-m)$. Then we can write
\begin{align*}
X_{\vect{t}} := \vect{H}^{\circ \vect{T}(\vect{t})}.
\end{align*}
Now invoking Lemma \ref{lem:KT} we have
\begin{align*}
\det(X_{\vect{t}}) \asymp V(\vect{H}) \vect{H}^{\vect{T}(\vect{t})-\vect{n}_{\text{min}}}.
\end{align*}
Note that we may need to interchange the rows and/or columns of $X_{\vect{t}}$ to guarantee that the conditions of Lemma \ref{lem:KT} are met. However this will only change the sign of the determinant and thus won't affect the magnitude.
Now, fix $\vect{t} \in \cT$ such that $\vect{t} \neq \vect{t}_M$ and compare
\begin{align*}
\abs{\det(X_{\vect{t}_M})} - \abs{\det(X_{\vect{t}})} \ge \abs{V(\vect{H})}\left( \abs{\vect{H}^{\vect{T}(\vect{t}_M)-\vect{n}_{\text{min}}}}-\abs{\vect{H}^{\vect{T}(\vect{t})-\vect{n}_{\text{min}}}}\right).
\end{align*}
Since $\vect{t}_M \neq \vect{t}$ we conclude that all coordinates $t_i \le (\vect{t}_M)_i$ and there exists a $k$ such that $t_k < (\vect{t}_M)_k$. Therefore
\begin{align*}
\abs{\det(X_{\vect{t}_M})} - \abs{\det(X_{\vect{t}})} = \abs{V(\vect{H})\vect{H}^{\vect{T}(\vect{t}_M)-\vect{n}_{\text{min}}}}(1+ O(\max_i(h_i^{-1/A}))) = \abs{\det(X_{\vect{t}_M})}(1+ O(\max_i(h_i^{-1/A}))).
\end{align*}
This proves our claim.
Hence
\begin{align}\label{detM bound}
\begin{aligned}
\abs{\det(M)} &= \abs{c_{\vect{t}_M} \det(X_{\vect{t}_M})}(1+O(\max_i(h_i^{-1/A})))\\
&= \abs{c_{\vect{t}_M}V(\vect{H}) \vect{H}^{\vect{T}(\vect{t}_M)-\vect{n}_{\text{min}}}}(1+O(\max_i(h_i^{-1/A})))\\
&= \abs{c_{\vect{t}_M}}\left(\prod_{j=1}^m (h_j+s)^{j/A-2j+1 }\right)\prod_{1\le i<j\le m} (h_i-h_j)(1+O(\max_i(h_i^{-1/A})))
\end{aligned}
\end{align}
which is clearly larger than $0$ (since $h_i-h_j >1$ and $s>-1$).
Hence $M$ is invertible, and we conclude that
\begin{align*}
\vect{D} M^{-1} &= \vect{b},\\
\|\vect{D}\|_{2}\| M^{-1}\|_{\text{spec}} &\ge \|\vect{b}\|_2,\\
\|\vect{D}\|_{2} &\ge \frac{\|\vect{b}\|_2}{\| M^{-1}\|_{\text{spec}}},
\end{align*}
where $\|\cdot\|_{\text{spec}}$ denotes the spectral norm with respect to the $\ell^2$ vector norm. Recall that $\|M^{-1}\|_{\text{spec}}$ is simply the largest singular value of $M^{-1}$. Hence $\abs{\det(M^{-1})}^{1/m} \le \|M^{-1}\|_{\text{spec}}$.
We can bound the spectral norm by the maximum norm
\begin{align*}
\|M^{-1}\|_{\text{spec}} \ll \max_{i,j} \abs{(M^{-1})_{i,j}}.
\end{align*}
However each entry of $M^{-1}$ is equal to $\frac{1}{\det(M)}$ times a cofactor of $M$, by Cramer's rule. This, together with the size of the $h_i$ is enough to show that
\begin{align*}
\|\vect{D}\|_{2} &\gg \|\vect{b}\|_2 \log(N)^{-1000m}.
\end{align*}
Now using the bounds on $\vect{b}$ (which come from the essential ranges of $h_i$ and $r_i$) we conclude
\begin{align*}
\|\vect{D}\|_2 \gg N^{1-\vep}.
\end{align*}
From here we can apply the localized van der Corput's lemma {\cite[Lemma 3.3]{TechnauYesha2020}} as we did in \cite{LutskoTechnau2021} to conclude Proposition \ref{prop:int s}.
\end{proof}
\section{Proof of Lemma \ref{lem:MP = KP non-isolating}}
Thanks to the preceding argument, we conclude that
\begin{align*}
\lim_{N \to \infty}\cE(N) &= \sum_{\cP\in \mathscr{P}_m} \expect{f^{\abs{P_1}}}\cdots \expect{f^{\abs{P_d}}} + \lim_{N \to \infty} \cO_N.
\end{align*}
Moreover, the off-diagonal term can be bounded using Proposition \ref{prop:int s} as follows:
\begin{align*}
\cO_N &= \frac{1}{N^m}\sum_{\vect{q},\vect{u}}\sum_{\vect{r}, \vect{h}} \eta(\vect{r},\vect{h}) I(\vect{h},\vect{r})\\
&\ll \frac{1}{N^m}\sum_{\vect{q},\vect{u}}\sum_{\vect{r}, \vect{h}}\mathfrak{K}_{\vect{u}}(\vect{\mu}_0) \mathfrak{N}_{\vect{q}}(\vect{\mu}_0,0) \frac{e^{u_1 + \dots + u_m}}{r_1 \cdots r_m} \max_{i \le m} e^{-u_i/m}N^{\varepsilon}.
\end{align*}
Note that we are summing over reciprocals of $r_i$ and recall that the $h_i$ have size $q_i^{A}$, thus, we may evaluate the sums over $\vect{h}$ and $\vect{r}$ and gain at most a logarithmic factor (which can be absorbed into the $\varepsilon$). Thus
\begin{align*}
\cO_N &\ll \frac{1}{N^m}\sum_{\vect{q},\vect{u}} e^{u_1 + \dots + u_m} \max_{i \le m} e^{-u_i/m}N^{\varepsilon}.
\end{align*}
Likewise there are logarithmically many $\vect{q}$ and $\vect{u}$. Thus maximizing the upper bound, we arrive at
\begin{align*}
\cO_N &\ll N^{-1/m +\varepsilon}.
\end{align*}
This concludes our proof of Lemma \ref{lem:MP = KP non-isolating}. From there, Theorem \ref{thm:main} and Theorem \ref{thm:correlations} follow from the argument in Section \ref{s:Completion}.
\small
\section*{Acknowledgements}
NT was supported by a Schr\"{o}dinger Fellowship of the Austrian Science Fund (FWF): project J 4464-N. We thank Apoorva Khare, Jens Marklof and Zeev Rudnick for comments on a previous version of the paper.
\bibliographystyle{alpha}
\section{Introduction}
The composition of current power grids is undergoing radical changes. As of now, power grids are still dominated by big conventional power
plants based on fossil fuel or nuclear power exhibiting a large power output. Essentially, their effective topology is locally star-like with transmission lines going from large plants to regional
consumers. As more and more renewable power sources contribute, this is about to change and topologies will become more decentralized and more
recurrent. The topologies of current grids largely vary, with large differences, e.g. between grids on islands such as Britain and those in
continental Europe, or between areas of different population densities. In addition, renewable sources will strongly modify these structures in a yet
unknown way. The synchronization dynamics of several power grids with specific topologies have been analyzed in detail \cite{Motter13}, such as the British power grid \cite{Witt12}
or the European power transmission network \cite{Loza12}. The general impact of grid topologies on collective dynamics, however, is not systematically understood, in particular with respect to
decentralization.
Here, we study the collective dynamics of oscillatory power grid models with a special focus on how a wide range of topologies (regular, small-world and random)
influences the stability of synchronous (phase-locked) solutions. We analyze the onset of phase-locking between power generators and consumers as well as the local and global stability of
the stable state. In particular, we address the question of how phase-locking is affected in different topologies if large power plants are replaced
by small decentralized power sources. For our simulations, we model the dynamics of the power grid as a network of coupled second-order oscillators, which are
derived from basic equations of synchronous machines \cite{Fila08}. This model bridges the gap between large-scale static network models \cite{Mott02,Scha06,Simo08,Heid08} on
the one hand and detailed component-level models of smaller networks \cite{Eurostag} on the other. It thus admits systematic access to emergent
dynamical phenomena in large power grids.
The article is organized as follows. We present a dynamical model for power grids in Sec.~\ref{sec-model}. The basic dynamic properties, including
stable synchronization, power outage and coexistence of these two states, are discussed in Sec.~\ref{sec-smallnets} for elementary networks. These
studies reveal the mechanism of self-organized synchronization in a power grid and help understanding the dynamics also for more complex
networks. In Sec.~\ref{sec-largenets} we present a detailed analysis of large power grids of different topologies. We investigate the onset of
phase-locking and analyze the stability of the phase-locked state against perturbations, with an emphasis on how the dynamics depends on the
decentralization of the power generators. Stability aspects of decentralizing power networks has been briefly reported before for the British
transmission grid \cite{Witt12}.
\section{Coupled oscillator model for power grids}
\label{sec-model}
We consider an oscillator model where each element is one of two types of elements, generator or consumer \cite{Fila08,Prab94}. Every element $i$ is described by the
same equation of motion with a parameter $P_i$ giving the generated $(P_i>0)$ or consumed $(P_i<0)$ power. The state of each element is determined
by its phase angle $\phi_i (t)$ and velocity $\dot\phi_i(t)$. During
the regular operation, generators as well as consumers within the grid run with the same frequency $\Omega =2\pi\times 50\text{Hz}$ or
$\Omega =2\pi\times 60\text{Hz}$. The phase of each element $i$ is then written as
\begin{equation}
\phi_i(t)=\Omega t + \theta_i(t) \label{eqn:phase},
\end{equation}
where $\theta_i$ denotes the phase difference to the set value $\Omega t$.\\
The equation of motion for all $\theta_i$ can now be obtained from energy conservation: the power $P^{\text{source}}_i$ generated or consumed by each
element must equal the power exchanged with the grid plus the power accumulated and dissipated by this element.
The dissipated power of each element is $P^{\text{diss}}_i=\kappa_i(\dot\phi_i)^2$, the accumulated power is
$P^\text{acc}_i=\frac{1}{2}I_i\frac{d}{dt}(\dot\phi_i)^2$, and the power transmitted between two elements is
$P^{\text{trans}}_{ij}=-P^{\text{max}}_{ij}\sin(\phi_j-\phi_i)$. The power balance of element $i$ therefore reads
\begin{equation}
P^{\text{source}}_i= P^{\text{diss}}_i+ P^{\text{acc}}_i+ \sum_j P^{\text{trans}}_{ij}\label{eqn:grund}.
\end{equation}
An energy flow between two elements is only possible if there is a phase difference between them. Inserting equation (\ref{eqn:phase}) and
assuming only slow phase changes compared to the frequency $\Omega$ $(|\dot\theta_i| \ll\Omega)$, the dynamics of the $i$th machine is given by:
\begin{equation}
I_i\Omega\ddot\theta_i=P^{\text{source}}_i
-\kappa_i\Omega^2-2\kappa_i\Omega\dot\theta_i+\sum_j P^{\text{max}}_{ij}\sin(\theta_j-\theta_i).
\end{equation}
Note that in this equation only the phase deviations $\theta_i$ from the reference phase $\Omega t$ appear, which shows that only the phase differences
between the elements of the grid matter. The elements $K_{ij}=\frac{P^{\text{max}}_{ij}}{I_i\Omega}$ constitute the connection matrix of the
entire grid; in particular, $K_{ij}$ encodes whether or not there is a transmission line between elements $i$ and $j$. With
$P_i=\frac{P^{\text{source}}_i-\kappa_i \Omega^2}{I_i\Omega}$ and $\alpha_i=\frac{2 \kappa_i}{I_i}$ this leads to the following equation of motion:
\begin{equation}
\frac{d^2\theta_i}{dt^2}=P_i-\alpha_i\frac{d\theta_i}{dt}+\sum_j K_{ij}\sin(\theta_j-\theta_i).
\end{equation}
The equation can now be rescaled with $s=\alpha t$ and new variables $\tilde P=P/ \alpha^2$ and $\tilde K=K/ \alpha^2$. This leads to:
\begin{equation}
\frac{d^2\theta_i}{ds^2}=\tilde P_i-\frac{d\theta_i}{ds}+\sum_j \tilde K_{ij}\sin(\theta_j-\theta_i).
\end{equation}
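For illustration, the rescaled equations of motion can be integrated numerically in a few lines. The following is a minimal sketch in Python (not the code used for the simulations in this article), assuming NumPy and SciPy are available; the power vector and coupling matrix are illustrative placeholders:
\begin{verbatim}
# Sketch: integrating d^2 theta_i/ds^2 = P_i - dtheta_i/ds
#                                        + sum_j K_ij sin(theta_j - theta_i)
import numpy as np
from scipy.integrate import solve_ivp

def swing_rhs(s, y, P, K):
    n = len(P)
    theta, omega = y[:n], y[n:]
    # coupling term: sum_j K_ij * sin(theta_j - theta_i)
    coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    return np.concatenate([omega, P - omega + coupling])

# two-element example: one generator (+1), one consumer (-1), K = 2
P = np.array([1.0, -1.0])
K = 2.0 * (np.ones((2, 2)) - np.eye(2))
y0 = np.zeros(2 * len(P))            # theta_i = 0, dtheta_i/ds = 0
sol = solve_ivp(swing_rhs, (0.0, 200.0), y0, args=(P, K), max_step=0.1)
print("phase difference:", sol.y[1, -1] - sol.y[0, -1])
\end{verbatim}
For these parameters the printed phase difference converges to $-\arcsin(1/2)\approx -0.52$, consistent with the steady-state condition discussed next.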
In the stable state both derivatives $\frac{d\theta_i}{dt}$ and $\frac{d^2\theta_i}{dt^2}$ are zero, such that
\begin{equation}
0=P_i + \sum_j K_{ij}\sin(\theta_j-\theta_i)
\end{equation}
holds for each element in the stable state. Summing these equations over all elements $i$ yields
\begin{equation}
\sum_i P_i=\sum_{i<j}K_{ij}\sin(\theta_j-\theta_i)+\sum_{i>j}K_{ij}\sin(\theta_j-\theta_i)=0, \label{eqn:sum}
\end{equation}
because $K_{ij}=K_{ji}$ and the sine function is odd. Thus, a necessary condition for a stable state is that the sum of the generated power
$(P_i>0)$ equals the sum of the consumed power $(P_i<0)$.\\
For our simulations we consider large centralized power plants generating $P^{\text{source}}_i = 100 \, {\rm MW}$ each. A synchronous generator of this
size would have a moment of inertia of the order of $I_i=10^4 \, {\rm kg \, m}^2$. The mechanically dissipated power $\kappa_i\Omega^2$ usually is a small
fraction of $P^{\text{source}}$ only. However, in a realistic power grid there are additional sources of dissipation, especially ohmic losses and
because of damper windings \cite{Macho}, which are not taken into account directly in the coupled oscillator model. Therefore we set
$\alpha_i=0.1 s^{-1}$ and $P_i = 10 s^{-2}$ \cite{Fila08} for large power plants. For
a typical consumer we assume $P_i = -1 s^{-2}$, corresponding to a small city. For a renewable power plant we assume $P_i=2.5 s^{-2}$. A major overhead power line can
have a transmission capacity of up to $P^{\text{max}}_{ij} = 700 \, {\rm MW}$. A power line connecting a small city usually has a smaller transmission
capacity, such that $K_{ij} \le 10^2 s^{-2}$ is realistic. We take $\Omega = 2\pi\times 50\text{Hz}$.
\section{Dynamics of elementary networks}
\label{sec-smallnets}
\subsection{Dynamics of one generator coupled with one consumer}
\label{sec-two-elements}
\begin{figure}[t]
\centering
\includegraphics[width=10cm, angle=0]{dyn_simple_2ab}
\includegraphics[width=10cm, angle=0]{dyn_simple_2cd_new}
\caption{\label{fig:globalphase}
Dynamics of an elementary network with one generator and one consumer for $\alpha=0.1 s^{-1}$.\\
(a) Globally stable phase locking for $P_0 = 1 s^{-2}$ and $K = 2 s^{-2}$\\
(b) Globally unstable phase locking (limit cycle) for $P_0 = 1 s^{-2}$ and $K = 0.5 s^{-2}$\\
(c) Coexistence of phase locking (normal operation) and limit cycle (power outage) for $P_0 = 1 s^{-2}$ and $K = 1.1 s^{-2}$\\
(d) Stability phase diagram in parameter space.
}
\end{figure}
We first analyze the simplest non-trivial grid, a two-element system consisting of one generator and one consumer. This system is analytically
solvable and reveals some general aspects that are also present in more complex systems. It can only reach equilibrium if equation (\ref{eqn:sum})
is satisfied, such that $-P_1 = P_2$ must hold. With $\Delta P=P_2-P_1$, the equations of motion reduce to a closed system for the phase difference
$\Delta\theta=\theta_2-\theta_1$ and the velocity difference $\Delta\chi:=\Delta\dot\theta$ between the two oscillators:
\begin{eqnarray}
\Delta\dot\chi&=&\Delta P-\alpha\Delta\chi-2K\sin\Delta\theta \label{eqn:eom}\nonumber \\
\Delta\dot\theta&=&\Delta\chi.
\end{eqnarray}
Figure \ref{fig:globalphase} shows the different scenarios for the two-element system. For $2 K \geq \Delta P$ two fixed points come into being
(Fig.~\ref{fig:globalphase}(a)), whose local stability is analyzed in detail below. The system is globally stable in the bottom area of
Fig.~\ref{fig:globalphase}(d). For $2K < \Delta P$ the load exceeds the capacity of the link; no stable operation is possible and all trajectories
converge to a limit cycle, as shown in Fig.~\ref{fig:globalphase}(b) and in the upper area of Fig.~\ref{fig:globalphase}(d). In the remaining region
of parameter space the fixed point and the limit cycle coexist, such that the dynamics depends crucially on the initial conditions, as shown in
Fig.~\ref{fig:globalphase}(c) (cf.~\cite{Risk96}). Most major power grids operate close to the edge of stability, i.e., in the region of
coexistence, at least during periods of high load. There the dynamics depends crucially on the initial conditions, and static power grid models
are insufficient.
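The coexistence of both attractors can be probed directly by integrating Eq.~(\ref{eqn:eom}) from different initial conditions. A minimal sketch (illustrative parameters as in Fig.~\ref{fig:globalphase}(c), assuming $\Delta P = 2P_0$):
\begin{verbatim}
# Sketch: bistability of the reduced two-element system
import numpy as np
from scipy.integrate import solve_ivp

alpha, dP, K = 0.1, 2.0, 1.1
fp = np.arcsin(dP / (2.0 * K))        # stable fixed point T1

def rhs(t, y):
    dtheta, dchi = y
    return [dchi, dP - alpha * dchi - 2.0 * K * np.sin(dtheta)]

for y0 in ([fp + 0.1, 0.0], [0.0, 10.0]):  # near T1 vs. fast initial state
    sol = solve_ivp(rhs, (0.0, 500.0), y0, max_step=0.05)
    print(y0, "-> late-time mean velocity:",
          round(float(sol.y[1, -1000:].mean()), 2))
\end{verbatim}
The first trajectory relaxes to the phase-locked state (mean velocity $\approx 0$, normal operation), while the second ends up on the limit cycle with a mean velocity of order $\Delta P/\alpha$ (power outage).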
Let us now analyze the fixed points of the equations of motion (\ref{eqn:eom}) in more detail. In terms of the phase difference $\Delta\theta$,
they are given by:
\begin{eqnarray}
T_1&:&\quad \begin{pmatrix}\Delta\chi^*\\\Delta\theta^*\end{pmatrix}=\begin{pmatrix}0\\ \arcsin\frac{\Delta P}{2K}\end{pmatrix}, \nonumber \\
T_2&:&\quad \begin{pmatrix}\Delta\chi^*\\\Delta\theta^*\end{pmatrix}=\begin{pmatrix}0\\ \pi -\arcsin\frac{\Delta P}{2K}\end{pmatrix}.
\end{eqnarray}
For $\Delta P > 2K$ no fixed point can exist, as discussed above; the critical coupling strength is therefore $K_c=\Delta P/2$. Otherwise fixed points exist and the system can reach a stationary state. For
$\Delta P=2K$ only one fixed point exists, $T_1=T_2$, at $\left(\Delta\chi^*,\Delta\theta^*\right)=(0,\pi/2)$. It is neutrally stable.\\
We have two fixed points for $2K>\Delta P$. The local stability of these fixed points is
determined by the eigenvalues of the Jacobian of the dynamical system (\ref{eqn:eom}), which are given by
\begin{equation}
\lambda_{\pm}^{(1)}=-\frac{\alpha}{2}\pm\sqrt{\left(\frac{\alpha}{2}\right)^2-\sqrt{4K^2-\Delta P^2}}
\end{equation}
at the first fixed point $T_1$ and
\begin{equation}
\lambda_{\pm}^{(2)}=-\frac{\alpha}{2}\pm\sqrt{\left(\frac{\alpha}{2}\right)^2+\sqrt{4K^2-\Delta P^2}}
\end{equation}
at the second fixed point $T_2$, respectively.
Depending on $K$, the eigenvalues at the first fixed point are either both real and negative or complex with negative real part. One
eigenvalue at the second fixed point is always real and positive, the other real and negative. Thus only the first fixed point is stable and enables a stable
operation of the power grid. It has real and negative eigenvalues for $K_c < K < K_2=\sqrt{\frac{\alpha^4}{64}+\frac{\Delta P^2}{4}}$, which is only
possible for large $\alpha$, i.e., if the system is overdamped. For $K \geq K_2$ it has complex eigenvalues with real part $\Re(\lambda_\pm) = -\frac{\alpha}{2}$,
for which the power grid exhibits damped oscillations around the fixed point. As power grids should work with only minimal losses, which corresponds to $K \geq K_2$, this is the
practically relevant setting.\\
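As a concrete illustration: for the damping used below, $\alpha = 0.1\,s^{-1}$, and $\Delta P = 2\,s^{-2}$ (i.e., $P_0 = 1\,s^{-2}$), one finds
\begin{equation*}
K_2=\sqrt{\frac{\alpha^4}{64}+\frac{\Delta P^2}{4}}
=\sqrt{1.6\times 10^{-6}+1}\;s^{-2}\approx 1.000\,s^{-2}\approx K_c ,
\end{equation*}
so the overdamped window $K_c<K<K_2$ is negligibly narrow, and essentially every stable operating point is of the damped-oscillatory type.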
\subsection{Dynamics of motif networks}
\label{sec-starnet}
\begin{figure}[t]
\centering
\includegraphics[width=8cm, angle=0]{starnet2}
\caption{\label{fig-starnet} Motif networks: simplified phase description}
\end{figure}
We discuss the dynamics of the two motif networks shown in Fig.~\ref{fig-starnet}. These two can be considered as building
blocks of the large-scale quasi-regular network that will be analyzed in the next section.
Fig.~\ref{fig-starnet} (a) shows a simple network, where a small renewable energy source provides the power for $N=3$ consumer units with $d=3$ connections. To analyze
the most homogeneous setting we assume that all consumers have the same phase $\theta_1$ and a power load of $-P_0$ and all transmission lines have the
same capacity $K$. The power generator has the phase $\theta_0$ and provides a power of $N P_0$. The reduced equations of
motion then read
\begin{eqnarray}
\ddot \theta_0 &=& N \, P_0 - \dot \theta_0
+ d K \sin(\theta_1 - \theta_0), \nonumber \\
\ddot \theta_1 &=& - P_0 - \dot \theta_1
+ K \sin(\theta_0 - \theta_1).
\end{eqnarray}
For this motif class $N=d$ always holds, such that the steady state is determined by $\sin(\theta_0 - \theta_1) = P_0/K$. The condition for the existence of a steady state is thus
$K > K_c = P_0$, i.e., each transmission line must be able to transmit the power load of one consumer unit.
Fig.~\ref{fig-starnet} (b) shows a different network, where $N=12$ consumer units are arranged on a square lattice with $d_1=4$ connections between
the central power source $(\theta_0)$ and the nearest consumers ($\theta_1$), and $d_2=2$ connections between the consumers with phase $\theta_1$ and those with
$\theta_2$. Due to the symmetry of the problem we have to consider only three different phases. The reduced equations
of motion then read
\begin{eqnarray}
\ddot \theta_0 &=& N \, P_0 - \dot \theta_0
+ d_1 K \sin(\theta_1 - \theta_0), \nonumber \\
\ddot \theta_1 &=& - P_0 - \dot \theta_1
+ d_2 K \sin(\theta_2 - \theta_1) + K \sin(\theta_0 - \theta_1), \nonumber \\
\ddot \theta_2 &=& - P_0 - \dot \theta_2
+ K \sin(\theta_1 - \theta_2).
\end{eqnarray}
For the steady state we thus find the relations
\begin{eqnarray}
\sin(\theta_0 - \theta_1) &=& (N P_0)/(d_1K) \nonumber \\
\sin(\theta_1 - \theta_2) &=& P_0/K.
\end{eqnarray}
The coupling strength $K$ must now exceed the critical coupling strength
\begin{equation}
K_c=\frac{NP_0}{d_1}\label{eqn:coupling}
\end{equation}
to enable a stable operation. For the example shown in Fig.~\ref{fig-starnet} (b) we now have a higher critical coupling strength, $K_c = 3 P_0$,
than for the previous motif. This is immediately clear on physical grounds, as each transmission line
leading away from the power plant now has to serve three consumer units instead of just one.
\section{Dynamics of large power grids}
\label{sec-largenets}
\subsection{Network topology}
\begin{figure}[t]
\centering
\includegraphics[width=7cm, angle=0]{networks2}
\caption{\label{fig-network} Small size cartoons of different network topologies: (a) Quasi-regular grid, (b) random network
and (c) small-world network.}
\end{figure}
We now turn to the collective behavior of large networks of coupled generators and consumers and analyze how the dynamics and stability of a power
grid depend on the network structure. We emphasize how the stability is affected when large power plants are replaced by many small
decentralized power sources.
In the following we consider power grids of $N_C = 100$ consumer units with the same power load $-P_0$ each. In all simulations we
assume $P_0 = 1 s^{-2}$ with $\alpha=0.1 s^{-1}$ as discussed in Sec.~\ref{sec-model}. The demand of the consumers is met by
$N_P \in \{0,\ldots,10\}$ large power plants, which provide a power $P_P = 10 \, P_0$ each. The remaining power is generated by $N_R$ small
decentralized power stations, which contribute $P_R = 2.5 \, P_0$ each. Consumers and generators are connected by transmission lines with a
capacity $K$, assumed to be the same for all connections.\\ We consider three types of network topologies, schematically
shown in Fig.~\ref{fig-network}. In a quasi-regular power grid, all consumers are placed on a square lattice; the generators are placed randomly on
the lattice and connected to the four adjacent consumer units (cf. Fig.~\ref{fig-network} (a)). In a random network, all elements
are linked completely randomly, with an average of six connections per node (cf. Fig.~\ref{fig-network} (b)). A small-world network is
obtained by a standard rewiring algorithm \cite{Watt98} as follows: starting from a ring network, in which every element is connected to its four nearest
neighbors, the connections are randomly rewired with probability $0.1$ (cf. Fig.~\ref{fig-network} (c)).
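For concreteness, graphs of these three classes can be generated along the following lines (a simplified sketch using the networkx package; the random attachment of generators to the quasi-regular lattice is omitted):
\begin{verbatim}
# Sketch: the three topology classes (illustrative only)
import networkx as nx

# quasi-regular: consumers on a 10 x 10 square lattice (generators are
# additionally placed at random and wired to four adjacent consumers)
quasi_regular = nx.grid_2d_graph(10, 10)

# random: all elements linked completely randomly, mean degree 6
n_nodes = 122                  # e.g. 100 consumers + 22 generators
random_net = nx.gnm_random_graph(n_nodes, n_nodes * 6 // 2)

# small world: ring with 4 nearest neighbours, rewiring probability 0.1
small_world = nx.watts_strogatz_graph(n_nodes, 4, 0.1)
\end{verbatim}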
\subsection{The synchronization transition}
We analyze the requirements for the onset of phase locking between generators and consumers, in particular the minimal coupling
strength $K_c$. An example for the synchronization transition is shown in Fig.~\ref{fig-sync}, where the dynamics of the phases $\theta_i(t)$ is
shown for two different values of the coupling strength $K$. Without coupling, $K=0$, all elements of the grid oscillate with their natural frequency. For small values of $K$, synchronization sets in between the renewable generators and the consumers whose frequency difference is
rather small (cf. Fig.~\ref{fig-sync} (a)). Only if the coupling is further increased (Fig.~\ref{fig-sync} (b)), all generators synchronize so
that a stable operation of the power grid is possible.
\begin{figure}[t]
\centering
\includegraphics[width=10cm, angle=0]{synctrans4}
\caption{\label{fig-sync} Synchronization dynamics of a quasi-regular power grid. (a) For a weak coupling the phases $\theta_j(t)$ of the small renewable
decentralized generators (green lines) synchronize with the consumers (blue lines), but not the phases of the large power plants (red lines).
Thus the order parameter $r(t)$ fluctuates around a zero mean. (b) Global phase-locking of all generators and consumers is achieved for a large
coupling strength, such that the order parameters $r(t)$ is almost one.}
\end{figure}
The phase coherence of the oscillators is quantified by the order parameter \cite{Stro00}
\begin{equation}
r(t) = \frac{1}{N} \sum \nolimits_j e^{i \theta_j(t)},
\end{equation}
which is also plotted in Fig.~\ref{fig-sync}. For a synchronous operation, the real part of the order parameter is almost one, while it fluctuates around
zero otherwise. In the long time limit, the system will either relax to a steady synchronous state or to a limit cycle where the generators and
consumers are decoupled and $r(t)$ oscillates around zero. In order to quantify synchronization in the long time limit we thus define the
averaged order parameter
\begin{equation}
r_{\infty} := \lim_{t_1 \rightarrow \infty} \lim_{t_2 \rightarrow \infty}
\frac{1}{t_2} \int^{t_1+t_2}_{t_1} r(t) \, dt.
\end{equation}
In numerical simulations the integration time $t_2$ must be finite, but large compared to the oscillation period if the system
converges to a limit cycle. Furthermore we consider the averaged squared phase velocity
\begin{equation}
v^2(t) = \frac{1}{N} \sum \nolimits_j \dot \theta_j(t)^2,
\end{equation}
and its limiting value
\begin{equation}
v_\infty^2 := \lim_{t_1 \rightarrow \infty} \lim_{t_2 \rightarrow \infty}\frac{1}{t_2}\int_{t_1}^{t_1+t_2}v^2(t)dt
\end{equation}
as a measure of whether the grid relaxes to a stationary state.
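Both diagnostics are straightforward to evaluate from simulated trajectories. A minimal sketch, assuming the phases are stored as an $N\times$(time steps) array:
\begin{verbatim}
# Sketch: order parameter, squared phase velocity, long-time averages
import numpy as np

def order_parameter(theta):
    # complex mean over the oscillators; the plots show its real part
    return np.exp(1j * theta).mean(axis=0)

def squared_velocity(theta, ds):
    return (np.gradient(theta, ds, axis=1) ** 2).mean(axis=0)

def long_time_average(x, t, t1, t2):
    mask = (t >= t1) & (t <= t1 + t2)
    return x[mask].mean()
\end{verbatim}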
These two quantities are plotted in Fig.~\ref{fig-order-fluct} as a function of the coupling
strength $K/P_0$ for 20 realizations of a quasi-regular network with 100 consumers and
40 \% renewable energy sources. The onset of synchronization is clearly visible: If the
coupling is smaller than a critical value $K_c$ no steady synchronized state exists
and $r_{\infty} = 0$ by definition. Increasing $K$ above $K_c$ leads to the
onset of phase locking such that $r_{\infty}$ jumps to a non-zero value.
The critical value of the coupling strength is found to lie in the range
$K_c/P_0 \approx 3.1 - 4.2$, depending on the random realization of
the network topology.
\begin{figure}[t]
\centering
\includegraphics[width=10cm, angle=0]{orderpara4}
\caption{\label{fig-order-fluct}
The synchronization transition as a function of the coupling strength $K$:
The order parameter $r_{\infty}$ (left-hand side) and the phase velocity
$v_{\infty}$ (right-hand side) in the long time limit.
The dynamics has been simulated for 20 different realizations of a quasi-regular
network consisting of 100 consumers, $N_P=6$ large power plants and $N_R=16$ small power
generators.
}
\end{figure}
The synchronization transition is quantitatively analyzed in Fig.~\ref{fig-order}, where we plot $r_{\infty}$ and
$v_{\infty}$ for the three network topologies, averaged over 100 random realizations for each fraction of decentralized energy sources and each
topology. The synchronization transition strongly depends on the structure of the network, and in particular on the amount of power provided by small
decentralized energy sources. Each line in Fig.~\ref{fig-order} corresponds to a different fraction of decentralized energy $1 - N_P/10$, where $N_P$ is
the number of large conventional power plants feeding the grid. Most interestingly, the introduction of small decentralized power sources
(i.e., the reduction of $N_P$) promotes the onset of synchronization. This phenomenon is most pronounced for the random and small-world
structures.
Let us first analyze the quasi-regular grid in the limiting cases $N_P = 10$ (only large power plants) and $N_P=0$ (only small decentralized power
stations) in detail. The existence of a synchronized steady state requires that the transmission lines leading away from a generator have enough
capacity to transfer the complete power, i.e. $10 \, P_0$ for a large power plant and $2.5 \, P_0$ for a small power station.
In a quasi-regular grid every generator is connected with exactly four transmission lines, which leads to the following estimate for the critical
coupling strength (cf. equation \ref{eqn:coupling}):
\begin{eqnarray}
K_c &=& 10P_0/4 \, \qquad \mbox{for} \, N_P = 10, \nonumber \\
K_c &=& 2.5P_0/4 \, \qquad \;\; \mbox{for} \, N_P = 0.
\label{eqn-kc-est}
\end{eqnarray}
These values hold only for a completely homogeneous distribution of the power load and thus represent a lower bound for $K_c$ in a
realistic network. Indeed, the numerical results shown in Fig.~\ref{fig-order} (a) yield critical coupling strengths of
$K_c \approx 3.2 \times P_0$ and $K_c \approx 1 \times P_0$, respectively.
\begin{figure}[t]
\centering
\includegraphics[width=10cm, angle=0]{orderpara3_new}
\caption{\label{fig-order}
The synchronization transition for different fractions of decentralized energy
sources $1-N_P/10$ feeding the grid and for different network topologies:
(a) Quasi-regular grid, (b) random network and (c) small-world network.
The order parameter $r_{\infty}$ and the phase velocity
$v_{\infty}$ (cf. Fig.~\ref{fig-order-fluct}) have been averaged over
100 realizations for each network structure and each fraction of decentralized sources.}
\end{figure}
For networks with a mixed structure of power generators ($N_P \in \{ 1,\ldots,9\}$) we observe that the synchronization transition is determined by the
large power plants, i.e. the critical coupling is always given by $K_c \approx 3.2 \times P_0$ as long as $N_P \neq 0$. However, the transition is
now extremely sharp -- the order parameter does not increase smoothly but rather jumps to a high value. This results from the fact that all small
power stations are already strongly synchronized with the consumers for smaller values of $K$ and only the few large power plants are missing. When
these finally lock in as the coupling strength exceeds $K_c$, the order parameter $r$ immediately jumps to a large value.
The sharp transition at $K_c$ is a characteristic of the quasi-regular grid. For a random and a small-world network different classes of power
generators exist, which are connected with different numbers of transmission lines. These different classes synchronize with the consumers
one after another as $K$ is increased, from the class with the most transmission lines to the one with the fewest. Therefore we
observe a smooth increase of the order parameter $r$.
\subsection{Local stability and synchronization time}
\begin{figure}[t]
\centering
\includegraphics[width=10cm, angle=0]{relax2b_new}
\caption{\label{fig-relax}
Relaxation to the synchronized steady state:
(a) Illustration of the relaxation process $(K/P_0=10$ and $N_p = 10)$. We have plotted the dynamics of the phases $\theta_j$ only for one
generator (red) and one consumer (blue) for the sake of clarity.
(b) Exponential decrease of the distance to the steady state (blue line) and a fit according to $d(t) \sim e^{-t/\tau_{\rm sync}}$ (black line).
(c) The synchronization time $\tau_{\rm sync}$ as a function of
the fraction of decentralized energy sources $1-N_P/10$
for a regular ($\circ$), a random ($\square$) and a small-world grid ($\diamond$). Cases where the system does not relax have been discarded.
}
\end{figure}
A sufficiently large coupling of the nodes leads to synchronization of all nodes of a power grid as shown in the preceding section. Starting from
an arbitrary state in the basin of attraction, the network relaxes to the stable synchronized state with a time scale $\tau_{\rm sync}$. For
instance, Fig.~\ref{fig-relax} (a) shows the damped oscillations of the phase $\theta_j(t)$ of a power plant and a consumer in a quasi-regular grid
with $K=10$ and $N_P = 10$. In order to quantify the relaxation, we calculate the distance to the steady state
\begin{equation}
d(t) =\left( \sum_{i=1}^N d_1^2(\theta_i(t),\theta_{i,\rm st})
+ d_2^2(\dot \theta_i(t),\dot \theta_{i,\rm st})\right)^{\frac{1}{2}},
\label{eqn-dist-tot}
\end{equation}
where the subscript 'st' denotes the steady state values. For the phase velocities, $d_2$ denotes the common Euclidean distance
$d_2^2\left(\dot\alpha,\dot\beta\right)=|\dot\alpha-\dot\beta |^2$, while the circular
distance of the phases is defined as
\begin{equation}
d_1(\alpha,\beta) = 1 - \cos(\alpha - \beta).
\end{equation}
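Evaluating this distance and extracting $\tau_{\rm sync}$ from its decay is elementary; a minimal sketch:
\begin{verbatim}
# Sketch: distance to the steady state and synchronization time
import numpy as np

def distance(theta, dtheta, theta_st, dtheta_st):
    d1_sq = (1.0 - np.cos(theta - theta_st)) ** 2
    d2_sq = (dtheta - dtheta_st) ** 2
    return np.sqrt(np.sum(d1_sq + d2_sq))

def sync_time(t, d):
    # fit d(t) ~ d0 * exp(-t / tau_sync) on the decaying segment
    slope, _ = np.polyfit(t, np.log(d), 1)
    return -1.0 / slope
\end{verbatim}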
The distance $d(t)$ decreases exponentially during the relaxation to the steady state, as shown in Fig.~\ref{fig-relax} (b). The
black line in the figure shows a fit with the function $d(t) = d_0 \exp(-t/\tau_{\rm sync})$. The synchronization time $\tau_{\rm sync}$
thus measures the local stability of the stable fixed point, being the inverse of the stability exponent $\lambda$
(cf. the discussion in Sec.~\ref{sec-two-elements}).
Fig.~\ref{fig-relax} (c) shows how the synchronization time depends on the structure of the network and the mixture of power generators. For
several paradigmatic systems of oscillators, it was recently shown that the time scale of the relaxation process depends crucially on the network
structure \cite{Grab10}. Here, however, we have a network of {\it damped} second-order oscillators. Therefore the
relaxation time is almost exclusively set by the inverse damping constant $\alpha^{-1}$. Indeed we find $\tau_{\rm sync} \gtrsim \alpha^{-1}$. For
an elementary grid with two nodes only, this was shown rigorously in Sec.~\ref{sec-two-elements}. As soon as the coupling strength exceeds a critical
value $K>K_2$, the real part of the stability exponent is given by $\alpha$, independent of the other system parameters. A different value is found
only for intermediate values of the coupling strength $K_c < K < K_2$. Generally, this remains true also for a complex network of many
consumers and generators as shown in Fig.~\ref{fig-relax} (c). For the given parameter values we observe neither a systematic dependence of the
synchronization time $\tau_{\rm sync}$ on the network topology nor on the number of large ($N_P$) and small ($N_R$) power generators. The mean
value of $\tau_{\rm sync}$ is always slightly larger than the relaxation constant $\alpha^{-1}$. Furthermore, the standard deviation
of $\tau_{\rm sync}$ for different realizations of the random networks is at most 3\% of the mean value. A significant influence of the network structure on the
synchronization time has been found only in the weak-damping limit, i.e., for very large values of $P_0/\alpha$ and $K/\alpha$.
\begin{figure}[t]
\centering
\includegraphics[width=8.4cm, angle=0]{robustness1b}
\caption{\label{fig:robustness1}
Weak and strong perturbation. The upper panels show the time-dependent power load of the consumers. A perturbation of strength $P_{\rm pert}$ is
applied in the time interval $t \in [5,6]$. The lower panels show the resulting dynamics of the phase $\theta_j$ and the frequency $\dot \theta_j$
of the consumers (blue lines) and the power plants (red lines). The dynamics relaxes back to a steady state after the perturbation for a weak
perturbation (a), but not for a strong perturbation (b). In both cases we assume a regular grid with $N_P = 10$.
}
\end{figure}
\subsection{Stability against perturbations}
\begin{figure}[t]
\centering
\includegraphics[width=10cm, angle=0]{robustness3_new}
\caption{\label{fig:robustness2}
Robustness of a power grid. The panels show the fraction of random grids which are unstable against a perturbation as a function of the
perturbation strength $P_{\rm pert}$ and the fraction of decentralized energy $1 - N_P/10$. (a) Quasi-regular grid, (b) random network and
(c) small-world network.
}
\end{figure}
Finally, we test the stability of the different network structures against perturbations on the consumer side. We perturb the system after
it has reached a stable state and measure whether the system relaxes to a steady state after the perturbation has been switched off
again. The perturbation is realized by an increased power demand of each consumer during a short time interval ($\Delta t = 10s$), as
illustrated in the upper panels of Fig.~\ref{fig:robustness1}. During the perturbation the balance condition of Eq.~(\ref{eqn:sum}) is violated, so the system cannot remain in its stable state. After the perturbation is switched off, the system either relaxes
back to a steady state or it does not, depending on the strength of the perturbation. Fig.~\ref{fig:robustness1} shows examples of the dynamics for a weak
(a) and a strong (b) perturbation, respectively.
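Schematically, the perturbation enters the simulation as a time-dependent power demand of the consumers; a sketch (the window $[5,6]$ of Fig.~\ref{fig:robustness1} is an illustrative choice):
\begin{verbatim}
# Sketch: temporarily increased power demand of the consumers
def perturbed_power(t, P_base, consumers, P_pert, t_on=5.0, t_off=6.0):
    P = P_base.copy()
    if t_on <= t <= t_off:
        P[consumers] -= P_pert   # consumers have P_i < 0: demand rises
    return P
\end{verbatim}
A grid is counted as unstable if the trajectory does not relax back to a steady state after the window has closed.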
These simulations are repeated 100 times for every value of the perturbation strength for each of the three network topologies. We then count the fraction of
networks which are unstable, i.e., do not relax back to a steady state. The results are summarized in Fig.~\ref{fig:robustness2} for different network
topologies. The figure shows the fraction of unstable grids as a function of the perturbation strength and the number of large power plants. For all
topologies, the best situation is found when the power is generated by both large power plants and small power generators. An explanation is that
the moment of inertia of a power source is larger if it delivers more power, which makes it more stable against perturbations. On the other hand,
a more distributed arrangement of power stations favors a stable synchronous operation as shown in Sec. \ref{sec-starnet}.
Furthermore, the variability among the power grids is stronger for low values of $N_P$, i.e., few large power plants. The results do not change much
for networks with many large power plants (i.e., high $N_P$), because the generation is then distributed over many locations in the grid. Thus the random
networks differ only weakly, and one observes a sharp transition between stable and unstable. This is different if only a few large power plants are
present in the network. For certain arrangements of the power stations the system can reach a steady state even after strong
perturbations, but it can also fail to do so after only small perturbations if the power stations are clustered. This emphasizes the
necessity of carefully planning the structure of a power grid to guarantee maximum stability.
\section{Conclusion and Outlook}
In the present article we have analyzed a dynamical network model for the dynamics of a power grid. Each element of the network is modeled as a second-order
oscillator similar to a synchronous generator or motor. Such a model bridges the gap between a microscopic description of electric machines and
static models of large supply networks. It incorporates the basic dynamical effects of coupled electric machines, but it is still simple enough to
simulate and understand the collective phenomena in complex network topologies.
The basic dynamical mechanisms were explored for elementary network structures. We showed that a self-organized phase-locking of all generators and
motors in the network is possible. However, this requires a strong enough coupling between elements. If the coupling is decreased, the synchronized
steady state of the system vanishes.
We devoted the second part to a numerical investigation of the dynamics of large networks of coupled generators and consumers, with an
emphasis on self-organized phase-locking and the stability of the synchronized state for different topologies. It was shown that the critical
coupling strength for the onset of synchronization depends strongly on the degree of decentralization. Many small generators can
synchronize with a lower coupling strength than few large power plants for all considered topologies. The relaxation time to the steady state, however, depends only weakly on
the network structure and is generally determined by the dissipation rate of the generators and motors. Furthermore we investigated the robustness
of the synchronized steady state against a short perturbation of the power consumption. We found that networks powered by a mixture of small
generators and large power plants are most robust. For the given parameter values, however, synchrony was lost in all topologies only for perturbations
of at least five times the normal power consumption.
For the future it would be desirable to gain more insight into the stability of power grids with respect to transmission line failures, which is not fully
understood yet \cite{Kurths13}. For instance, an enormous challenge for the construction of future power grids is that wind energy sources are
planned predominantly at seasides, such that energy is often generated far away from most consumers. This means that many new transmission lines will be added
to the grid, and thus many more potential transmission line failures can occur. Although the general topology of these future power grids seems not to be
decisive for their functionality, the impact of including or deleting single links is still not fully understood, and unexpected behaviors can occur \cite{Witthaut}. Furthermore, it is highly
desirable to gain more insight into collective phenomena such as cascading failures, in order to prevent major outages in the future.
\section{Introduction}
Spiral structures are the most prominent features of disk galaxies, but their
physical origin is still debated. Pitch angle ($\varphi$), defined as the
angle between the tangent of the spiral and the azimuthal direction, describes
the degree of tightness of a spiral arm \citep{Binney}.
As the tightness of spiral arms constitutes one of the essential criteria of
Hubble's scheme of morphological classification of galaxies \citep{Hubble1926,
Sandage1961}, the pitch angle of spiral arms tends to decrease (become more
tightly wound) from late to early Hubble types \citep{Kennicutt1981, Ma2002,
Yu2018a}. Density wave theory \citep{LinShu64, Bertin1989, Bertin1989b,
Bertin1996} offers the most successful framework to explain spiral structures.
The semi-empirical study of \cite{Roberts1975} showed that density wave theory
can fit observed spiral arms and suggested that mass concentration is the main
determinant of spiral arm pitch angle.
By constructing appropriate basic states of galaxy models, \cite{Bertin1989}
numerically calculated density wave modes that are able to represent all
Hubble types and confirmed that spiral arms become tighter with increasing
mass fraction in the central spherical component.
A number of studies have used other lines of evidence
to argue in favor of density wave theory, in terms of the classic aging of
the stellar population that produces a color gradient across spiral arms
\citep{Gonzalez1996, Martinez2009a, Martinez2011, Yu2018b} and the dependence
of pitch angle on wavelength predicted by \cite{Gittins2004}
\citep{Martinez2012,Martinez2013,Martinez2014,Yu2018b}.
In contrast, $N$-body simulations of isolated pure stellar disk galaxies
generate spiral arms as transient but recurrent structures \citep{Carlberg1985,
Bottema2003, Sellwood2011, Fujii2011, Grand2012a, Grand2012b, Baba2013,
Onghia2013}. In these simulations, the tendency for the number of arms to
increase with decreasing disk mass fraction and for the pitch angle to increase
with decreasing velocity shear are roughly consistent with the predictions of
swing amplification theory \citep{Julian1966, Goldreich1978, Toomre1981}. In
this picture, wherein spirals are transient, the pitch angle reflects
the effects of differential rotation alone.
Within this backdrop, it is instructive to elucidate in a quantitative manner
how spiral arm pitch angle relates to various galaxy properties. Many attempts
have been made, with mixed results. Pitch angles of spiral arms were found to
correlate strongly with maximum rotation velocity \citep{Kennicutt1981} and
mean rotation velocity over the region containing spiral arms
\citep{Kennicutt1982}, with larger rotational velocities leading to more
tightly wound arms. The more recent analysis of \cite{Kendall2015}, however,
does not support this, although their sample contains only 13 objects.
Similarly, \cite{Kennicutt1981} found that pitch angle decreases with brighter
absolute magnitude.
\cite{Kennicutt1981} further noted a correlation between pitch angle and
Morgan's (\citeyear{Morgan1958}, \citeyear{Morgan1959}) classification, which
is primarily based on subjective estimates of bulge-to-disk ratio, such that
galaxies with larger bulge fraction tending to have smaller pitch angle, but
with considerable scatter.
Paradoxically, this trend is not corroborated when concentration index is used
as a proxy of concentration of mass \citep{Seigar1998, Kendall2015}.
\cite{Seigar2005, Seigar2006} reported a tight connection between pitch angle
and morphology of the galactic rotation curve, quantified by the shear rate,
with open arms associated with rising rotation curves and tightly wound arms
connected to flat and falling rotation curves. On the other hand,
\cite{Kendall2015} disputed the tightness of the correlation. Furthermore,
\cite{Yu2018a} show that about 1/3 of the pitch angles in \cite{Seigar2006}
have been severely overestimated; the remeasured pitch angles correlate with
shear rate weakly at best. The pitch angle of spiral arms is also found to be
correlated strongly with the galaxy's central stellar velocity dispersion, and
hence with black hole mass \citep{Seigar2008}, by virtue of the well-known
relation between black hole mass and bulge stellar velocity dispersion
(see \citealt{KH13}, and references therein).
Lastly, to round off this list of confusing and often contradictory results,
\cite{Hart2017} analyzed a large sample of galaxies selected from the Sloan
Digital Sky Survey \citep[SDSS;][]{York2000} and found very weak correlations
between pitch angle and galaxy mass, but the surprising trend that pitch angle
increases with increasing bulge-to-total mass ratio.
\begin{figure*}
\figurenum{1}
\centering
\includegraphics[width=14cm]{Dist_WJC.eps}
\caption{Distribution of Hubble types for our sample of 79 galaxies (red-hatched histograms) compared with the sample of 238 objects in \cite{Kalinova2017} (blue-hatched histograms). Our sample spans the full range of Hubble types of disk galaxies, even including an elliptical, which actually has weak but detectable spiral arms.}
\end{figure*}
The morphology of spiral arms may depend on wavelength. Weak, two-arm spirals
had been seen in some flocculent galaxies \citep{Block1994, Thornley1996,
Thornley1997, Elmegreen1999, Block1999}. Still, pitch angles measured in the
near-infrared generally agree well with those measured in the optical
\citep{Seigar2006, Davis2012}. The recent study by \cite{Yu2018b} report a
mean difference in pitch angle of only $\sim 0.5\degr$ between spiral arms
observed in $3.6\micron$ and {\it R}-band images, which lays the foundation
for our use of SDSS {\it r}-band images to measure pitch angle in this paper.
The Calar Alto Legacy Integral Field Area (CALIFA) survey \citep{Sanchez2012}
targets a diameter-limited sample of galaxies covering all morphological types,
in the redshift range $0.005$\,$<$\,$z$\,$<$\,$0.03$. For these low redshifts,
SDSS provides images of adequate quality for measuring pitch angle \citep{Yu2018a}.
\cite{FBJ2017} extracted stellar kinematic maps of 300 CALIFA galaxies
using the {\tt PPXF} fitting procedure \citep{Cappellari2004}. From this
original sample, after discarding galaxy mergers and cases with uncertain
dynamical models, \cite{Kalinova2017} derived circular velocity curves for 238
objects using detailed stellar dynamical Jeans modeling \citep{Cappellari2008}.
Other galaxy properties, such as stellar masses, photometric decomposition, and
star formation rates are also available \citep{Walcher2014, Sanchez2016Pipe3D,
Sanchez2017, JMA2017, Catalan-Torrecilla2017, Gilhuly2018}. This database
enables us to perform a comprehensive study of the dependence of spiral arm
pitch angle on various galaxy properties.
\section{Data}
We select our galaxies from the sample of \cite{Kalinova2017}, who provide
rotation curves of 238 CALIFA galaxies, which can be used to derive velocity
shear rates. We use their corresponding SDSS {\it r}-band images to analyze
their spiral arms. We visually inspect the images to exclude ellipticals,
edge-on disks, and irregular systems, finally settling on 93 nearly face-on
galaxies having spiral structure. We then generate a mask to exclude
foreground stars and produce star-cleaned images following the procedures
described in \cite{Ho2011}. The background level was determined from 10
randomly selected empty sky regions and then subtracted from the star-cleaned
images. As shown in the next section, we successfully measured pitch angles
for 79 of the 93 galaxies. The distribution of Hubble types for the final
sample of 79 galaxies is shown in Figure 1 (red-hatched histograms), which is
compared with the sample of 238 galaxies of
\cite[blue-hatched histograms;][]{Kalinova2017}. The Hubble types come from the
CALIFA team \citep{Walcher2014}. The subsample of 79 galaxies well represents
the distribution of Hubble types of the parent sample from \cite{Kalinova2017};
it even includes an elliptical galaxy, which actually exhibits faint arms.
The stellar masses of the galaxies ($M_*^{\rm gal}$) are from
\cite{Sanchez2017}, who analyzed the stellar population using the Pipe2D
pipeline \citep{Sanchez2016BPipe3D}. \cite{JMA2017} performed two-dimensional
multi-component photometric decomposition and characterized the main stellar
substructures (bulge, bar, and disk), whose mass and star formation history
were studied by \cite{Catalan-Torrecilla2017}. From these studies we can
compile the bulge-to-total light ratio ($B/T$), bulge stellar mass
($M_*^{\rm bul}$), and disk stellar mass ($M_*^{\rm disk}$).
The uncertainty of $B/T$ is primarily systematic in origin, driven by model assumptions rather than statistical errors from the fitting. Following
\cite{Gao2017}, we assign a 10\% fractional uncertainty to $B/T$.
The concentration indices ($C_{28}$), derived from the isophotal analysis of
\cite{Gilhuly2018},
have fractional errors of $3\%$.
The absolute {\it B}-band magnitudes ($M_B$) come from
HyperLeda \citep{Paturel2003}. The central velocity dispersions ($\sigma_c$)
and their uncertainties are calculated as the mean value and standard
deviation of the velocity dispersion, provided by \cite{FBJ2017}, within
3$\arcsec$ of the galaxy center.
\cite{Kalinova2017} performed principal component analysis of the rotation curves of the CALIFA galaxies and provided the coefficient of the first eigenvector (PC$_1$), which quantitatively describes the shape and amplitude of the rotation curve of the central region. Section 4 shows that PC$_1$ is useful for our study. Table 1 lists the above-described parameters for our sample.
\begin{figure*}
\centering
\includegraphics[width=17.5cm]{table1_p1.eps}
\end{figure*}
\section{Pitch Angle and Shear Rate}
\subsection{Measuring pitch angle}
An accurate determination of the sky projection parameters---ellipticity ($e$)
and position angle (PA)---for the galaxies is essential for the study of
spiral arms. We adopt two methods to measure the $e$ and PA of the galaxies and
determine the optimal results. One is to use the {\tt IRAF} task {\tt ellipse}
to extract radial profiles of $e$ and PA from isophotal analysis. The adopted
values of $e$ and PA are obtained by averaging their profiles in the region
where the disk component dominates. The second method is to use a
two-dimensional Fourier transformation of the disk region, minimizing the real
part of the Fourier spectra, which corresponds to the bimodal component, to
derive $e$ and PA. These two methods assume that the disk is intrinsically
circular. To determine the optimal results, we deproject the galaxies to their
face-on orientation using the $e$ and PA values from these two methods, giving
preference to that which yields a rounder deprojected image or more
logarithmic-shaped spiral arms. Our adopted values of $e$ and PA are listed in
Table~1.
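In essence, the deprojection rotates the image so that the major axis is aligned with a coordinate axis and stretches the minor axis by $1/(1-e)$. A rough sketch (the sign convention for PA depends on the image orientation):
\begin{verbatim}
# Sketch: deprojecting an image to face-on, given e and PA (degrees)
from scipy.ndimage import rotate, zoom

def deproject(image, e, pa_deg):
    rotated = rotate(image, 90.0 - pa_deg, reshape=False)
    return zoom(rotated, (1.0 / (1.0 - e), 1.0))  # stretch minor axis
\end{verbatim}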
The most widely used techniques to measure the pitch angle of spiral arms
employ discrete Fourier transformation, either in one dimension (1DDFT)
\citep{Grosbol2004, Kendall2011, Yu2018a} or in two dimensions (2DDFT)
\citep{Kalnajs1975, Iye1982, Krakow1982, Puerari1992, Puerari1993, Block1999,
Davis2012, Yu2018a}. \cite{Yu2018a} discuss and use both techniques, in the
context of images from the Carnegie-Irvine Galaxy Survey \citep[CGS;][]{Ho2011}.
As the pitch angles obtained from both methods are actually consistent within
a small scatter of $2\degr$ \citep{Yu2018a}, we use 2DDFT to measure pitch
angle for the majority (72/79) of our sample; the 1DDFT method was used for
seven cases for which the 2DDFT method failed. In total we successfully
measured pitch angles for 79 of 93 objects. Table 1 lists our pitch angle
measurements, including the radial range and Fourier mode used for the
calculation. Pitch angles could not be
measured for the rest of the galaxies because the arms are too weak (6
galaxies), too flocculent (4 galaxies), or too wound up such that isophotes
cross a single arm more than once (4 galaxies). As shown in the distribution
of Hubble types in Figure 1, $20\%$ (6) of the S0 or S0/a galaxies have very
weak but detectable spiral arms whose pitch angles can be measured. An extreme
case is NGC~1349, which is classified as ``E'' in \cite{Walcher2014} but ``S0''
in HyperLeda. Compared with previous studies of spiral arms that use known
``spirals'' as one of their selection criteria \citep{Kennicutt1981,
Seigar1998, Seigar2005, Seigar2006, Kendall2011, Elmegreen2014}, our sample is
more complete by including disks in early-type galaxies with faint arms.
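Schematically, the 2DDFT method decomposes the deprojected image in log-polar coordinates $(u=\ln r, \theta)$ and locates the dominant radial wavenumber $p_{\rm max}$ of the azimuthal mode $m$; the pitch angle then follows from $\tan\varphi=-m/p_{\rm max}$. A simplified sketch of such an estimator (not the exact implementation used here):
\begin{verbatim}
# Sketch: 2D Fourier pitch-angle estimator in log-polar coordinates
import numpy as np

def pitch_angle_2ddft(image, r_in, r_out, m=2):
    ny, nx = image.shape
    y, x = np.indices(image.shape)
    x, y = x - nx / 2.0, y - ny / 2.0
    r, ang = np.hypot(x, y), np.arctan2(y, x)
    sel = (r > r_in) & (r < r_out)
    u, th, w = np.log(r[sel]), ang[sel], image[sel]
    p_grid = np.linspace(-50.0, 50.0, 401)
    A = np.array([np.sum(w * np.exp(-1j * (m * th + p * u)))
                  for p in p_grid])
    p_max = p_grid[np.argmax(np.abs(A))]
    return np.degrees(np.arctan(-m / p_max))
\end{verbatim}
The sign of the result encodes the winding sense of the arms; the tightness is its absolute value.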
\subsection{Measuring shear rate}
The galactic shear rate ($\Gamma$), which quantifies the morphology of the
rotation curve, is given by
\begin{eqnarray}
\Gamma = \frac{2A}{\Omega} = 2 - \frac{\kappa^2}{2\Omega^2}=1 - (R/V_{c})({\rm d}V_{c}/{\rm d}R),
\end{eqnarray}
\noindent
where $R$ is the radial distance from the center, $V_c$ is the
circular velocity, $A$ is the first Oort constant, $\Omega$ is the angular speed, and $\kappa$ is the epicyclic frequency.
Eq. (1) is the most widely adopted definition of shear rate \citep[e.g.,][]{Bertin1989b,
Grand2013, Dobbs2014, Michikoshi2014},
but is 2 times the shear rate defined by \cite{Seigar2005}. We
use the rotation curves of \cite{Kalinova2017} to derive $\Gamma$.
We first identify an outer region that is beyond the turnover of the rotation
curve ([$r_{i}$,$r_{o}$]), one that preferably coincides with the radial
region used to derive the pitch angle. For a few galaxies whose radial region
occupied by spiral arms exceeds the radial extent of the rotation curve, we
choose the outer region where the rotation curve has become stable. To
evaluate $\Gamma$, we fit the function
\begin{eqnarray}
V_c=b\times e^{(1-\Gamma)\text{ln}R},
\end{eqnarray}
\noindent
where $b$ is a coefficient, to the rotation curve in three radial ranges:
$[r_i, r_o-\Delta r]$, $[r_i+\Delta r, r_o]$, and $[r_i+\Delta r/2, r_o-
\Delta r/2]$, where $\Delta r=0.2(r_o-r_i)$. The value of $\Gamma$ and its
uncertainty (Table 1), introduced from the choice of radial range, are
estimated as the mean value and standard deviation of the three values of
$\Gamma$ obtained from the fitting over the above three radial ranges.
Figure 2 gives an illustration for IC~1151, for which the filled points
represent the rotation curve and the solid line marks the fitted curve, which yields
a shear rate of $\Gamma=0.74$. Note that when $\Gamma=1$, the outer part of
the rotation curve is flat; when $\Gamma<1$, the outer part of the rotation
curve is rising; when $\Gamma>1$, the outer part of the rotation curve is
falling.
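Since Eq.~(2) is linear in log--log space, $\ln V_c=\ln b+(1-\Gamma)\ln R$, each fit reduces to a linear regression. A minimal sketch of the procedure:
\begin{verbatim}
# Sketch: shear rate from the outer rotation curve
import numpy as np

def shear_rate(R, Vc):
    slope, _ = np.polyfit(np.log(R), np.log(Vc), 1)
    return 1.0 - slope

def shear_rate_with_error(R, Vc, r_i, r_o):
    dr = 0.2 * (r_o - r_i)
    windows = [(r_i, r_o - dr), (r_i + dr, r_o),
               (r_i + dr / 2.0, r_o - dr / 2.0)]
    vals = [shear_rate(R[(R >= lo) & (R <= hi)],
                       Vc[(R >= lo) & (R <= hi)]) for lo, hi in windows]
    return np.mean(vals), np.std(vals)
\end{verbatim}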
\begin{figure}
\figurenum{2}
\centering
\includegraphics[width=8cm]{exampleShear.eps}
\caption{Example of measuring shear rate for IC~1151. Three radius ranges: $[r_i, r_o-\Delta r]$,
$[r_i+\Delta r, r_o]$, and $[r_i+\Delta r/2, r_o-\Delta r/2]$, where $\Delta r=0.2(r_o-r_i)$, are used
to fit the function in Equation (2). The solid line represents the function with averaged $\Gamma$ and $b$.}
\end{figure}
\section{The Relationship Between Pitch Angle and Galaxy Properties}
In this section, we study the dependence of spiral arm pitch angle on various
galaxy properties (morphology, luminosity, stellar mass, and kinematics),
and then compare our findings with previous studies, which sometimes reveal
conflicting results.
\subsection{Dependence on Galaxy Morphology}
\begin{figure}
\figurenum{3}
\centering
\includegraphics[width=7.5cm]{comparetype.eps}
\caption{
Comparison between Hubble type of our sample, adopted from \cite{Walcher2014},
with classifications given in Hyperleda \citep{Paturel2003}. The correspondence
between Hubble type and $T$ value is as follows: E: $T$\,$=$\,$-5$; S0:
$T$\,=\,$-2$; S0/a: $T$\,=\,$0$; Sa: $T$\,=\,$1$; Sab: $T$\,=\,$2$; Sb:
$T$\,=\,$3$; Sbc: $T$\,=\,$4$; Sc: $T$\,=\,$5$; Scd: $T$\,=\,$6$; Sd:
$T$\,=\,$7$; Sdm: $T$\,=\,$8$; Sm: $T$\,=\,$9$.
The dotted line marks the $1:1$ relation. The Hubble types from these two sources
are roughly consistent with a total scatter in $T$ value of $\sigma_T = 1.3$.
}
\end{figure}
\begin{figure*}
\figurenum{4}
\centering
\includegraphics[width=16cm]{pitch_type.eps}
\caption{Variation of spiral arm pitch angle with Hubble type from (a) \cite{Walcher2014} and (b) Hyperleda \citep{Paturel2003}.
The large open points mark the mean value and associated errors. The
uncertainty of the mean Hubble type is determined by $\sigma_T/\sqrt{N}$, with
$N$ the number of objects in each Hubble type bin.
Most of the scatter in pitch angle for a given Hubble type
is real and not caused by subjective classification.}
\end{figure*}
Hubble types are subjective. Different classifications place different
weights on the classification criteria and may lead to different results. To
assess the uncertainty of the Hubble types of our sample, we compare the types
determined by \cite{Walcher2014} with those given in Hyperleda
\citep{Paturel2003}: the two are consistent within a scatter of $\sigma_T =
1.3$ in $T$ (Figure 3). The measured pitch angle of spiral arms is plotted
against Hubble type determined by \cite{Walcher2014} in Figure~4a, where the
open points mark the mean value and associated errors. The
uncertainty of the mean Hubble type is determined by $\sigma_T/\sqrt{N}$, with
$N$ the number of objects in each Hubble type bin.
Despite the fact that our sample contains fewer
early-type spirals than late-type spirals, our results confirm that on
average spiral arms tend to be more tightly wound in galaxies of earlier Hubble
type, but with large scatter in pitch angle ($\sim7\degr$). Most of the
early-type spirals (Sab and earlier) have pitch angles less than $15\degr$,
while later type spirals (Sb and later) can have both high ($\sim 30\degr$)
and low ($\sim 10\degr$) pitch angles. This behavior is not entirely
consistent with the Hubble classification system, which implicitly considers
tightness of spiral arms. Part of the scatter in the relation between pitch
angle and Hubble type may come from the subjective nature of morphological
classification.
However, repeating
the analysis using Hubble types from Hyperleda \citep{Paturel2003} yields
very similar results (Figure~4b), indicating that a large portion of scatter
in pitch angle at a given Hubble type is real. Studies by \cite{Kennicutt1981}
and \cite{Ma2002} also found a weak correlation between pitch angle and Hubble
type, with large scatter in pitch angle at a given type. But such a
trend was not seen by \cite{Kendall2015}, probably because of their small
sample of just 13 objects, nor by \cite{Seigar1998}, probably because nearly
all of their pitch angles were less than $15\degr$.
Density wave theory predicts an inverse correlation between pitch angle and
mass concentration \citep{LinShu64, Roberts1975, Bertin1989, Bertin1989b}. The
relative prominence of the bulge, as reflected, for instance, in the
bulge-to-total light ratio ($B/T$), should provide a reasonable proxy for the
stellar mass concentration.
The same holds for the concentration parameter $C_{28} \equiv R_{80}/R_{20}$,
with $R_{20}$ and $R_{80}$ the radii enclosing, respectively, 20\% and 80\% of
the total flux. Figure~5 shows the relation between pitch angle $\varphi$
and $B/T$ and $C_{28}$. We group the data points into five equal-sized bins of
$B/T$ and $C_{28}$, and then calculate the mean value and standard deviation
of each bin.
The number of bins is set to ensure sufficient sampling.
In general, $\varphi$ decreases with increasing $B/T$ and $C_{28}$,
with Pearson correlation coefficients of $\rho$\,=\,$-0.28$ and $\rho$\,=\,$-0.46$, respectively, although the
scatter is substantial.
These results are at odds with the conclusions
of \cite{Hart2017}, who reported a slight tendency for $\varphi$ to rise with
increasing ratio of bulge mass to total mass, precisely the {\it opposite}\ of
what we see. Contrary to previous studies \citep{Seigar1998, Kendall2015},
we find a significant correlation between $\varphi$ and $C_{28}$:
$\varphi$ decreases from $23\fdg7\pm4\fdg8$ at $C_{28}$\,=\,$2.0\pm0.2$ to
$13\fdg4\pm6\fdg1$ at $C_{28}$\,=\,$5.0\pm0.2$. The marked scatter in the $\varphi
- B/T$ diagram may stem, in part, from the many complications in
bulge-to-disk decomposition \citep{Gao2017}.
Our results imply that galaxies with more centrally concentrated mass
distributions tend to have more tightly wound spiral arms.
\subsection{Dependence on Luminosity and Mass}
\begin{figure*}
\figurenum{5}
\centering
\includegraphics[width=16cm]{pitch_concentration.eps}
\caption{
Variation of pitch angle of spiral arms with (a) bulge-to-total light ratio $B/T$ and (b) concentration
index $C_{28}$.
The data points are separated into five equal-sized bins of $B/T$ or $C_{28}$.
The mean value and standard
deviation of each bin is marked by solid black points and associated error
bars. Pitch angle correlates weakly with $B/T$ and somewhat stronger with
$C_{28}$. For each of the five bins of $C_{28}$, the data are further
separated into two subsets according to the mean value of $M_*^{\rm gal}$;
the mean and standard deviation of the subset above and below the mean are
marked by the blue and red crosses, respectively.
The Pearson's correlation coefficient, $\rho$, is shown on the bottom-left of each panel.}
\end{figure*}
\begin{figure*}
\figurenum{6}
\centering
\includegraphics[width=15cm]{pitch_Luminosity_mass_disk.eps}
\caption{Dependence of pitch angle on (a) absolute {\it B}-band magnitude
($M_B$), (b) galaxy stellar mass ($M_*^{\rm gal}$), (c) bulge stellar mass
($M_*^{\rm bul}$), and (d) disk stellar mass ($M_*^{\rm disk}$).
The solid black points and their associated error bars mark the mean value and
standard deviation of the data in each of the five bins of the parameter on the
X-axis value.
To obtain sufficient data points, the number of objects in the first bin was
manually adjusted to include all sources up to
$M_B = -19.3$, $M_*^{\rm gal} = 10^{10}$\,$M_{\odot}$, $M_*^{\rm bul} = 10^8$\,$M_{\odot}$, and $M_*^{\rm disk} = 10^{10}$\,$M_{\odot}$, and
the rest of the data were further grouped into four equal bins.
Results from \cite{Hart2017} are marked by orange lines,
which show a nearly flat trend. For each of the five bins of $M_*^{\rm gal}$,
the data are further separated into two subsets according to the mean value of
$C_{28}$; the mean and standard deviation of the subset above and below
the mean are marked by the blue and red crosses, respectively.
The Pearson's correlation coefficient, $\rho$, is shown on the bottom-left of each panel.}
\end{figure*}
Figure~6 examines the variation of pitch angle with absolute {\it B}-band
magnitude ($M_B$), total galaxy stellar mass ($M_*^{\rm gal}$), and
separately the stellar mass of the bulge ($M_*^{\rm bul}$) and disk
($M_*^{\rm disk}$). The results from \cite{Hart2017}, marked by the
orange line, are shown for comparison.
As shown in Figure~6, the distribution of the data is not homogeneous, and
there are fewer data points at the faint, low-mass end.
Thus, we manually adjust the number of objects in the first bin
to include all sources up to
$M_B = -19.3$, $M_*^{\rm gal} = 10^{10}$\,$M_{\odot}$, $M_*^{\rm bul} = 10^8$\,$M_{\odot}$, and $M_*^{\rm disk} = 10^{10}$\,$M_{\odot}$, and
the rest of the data were further grouped into four equal-sized bins.
The binned data support the notion
that more luminous, more massive galaxies tend to have more tightly wound
spiral arms (smaller values of $\varphi$).
The apparent bimodal distribution at $M_B\approx -20.7$ mag is probably an
artifact
of the small number of data points in the range from 8$\degr$ to 32$\degr$.
The correlation between $\varphi$ and $M_B$ is mainly driven by the close
coupling between $M_B$ and $M_*^{\rm gal}$.
The measured pitch angles
decrease from $\varphi$\,=\,$26\fdg0\pm5\fdg9$ at $\log (M_*^{\rm gal}/M_{\odot})$\,=\,$9.6\pm0.2$ to $\varphi$\,=\,$13\fdg8\pm5\fdg3$ at $\log (M_*^{\rm gal}/M_{\odot})$\,=\,$11.1\pm0.1$.
Similarly, pitch angles decrease toward larger $M_*^{\rm bul}$ and $M_*^{\rm disk}$,
with scatter comparable to that of the $\varphi-M_*^{\rm gal}$ relation. The
apparent flattening or even reversal toward the lowest mass bin is probably
caused by insufficient sampling. As illustrated by the orange lines,
\cite{Hart2017} find essentially no relationship between pitch angle and
stellar mass (total, bulge, or disk), and at a given mass their pitch angles
are systematically smaller than ours. The discrepancy is likely caused by the
usage of different techniques to measure pitch angle. The Fourier
transformation we employ uses flux as weighting when calculating Fourier
spectra, and so it tends to extract information from the dominant modes of
the spiral structure. \cite{Hart2017}, by contrast, employ the code
{\tt SpArcFiRe} \citep{sparcfire2014}, which uses texture analysis to identify
arc-like segments, including very faint arms, and then averages the pitch
angles of these segments, weighting by their length. However, the faint arms,
whose physical significance is unclear, may adversely affect their final
measurement of pitch angle.
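As an illustration of how the flux weighting picks out the dominant mode, the
following minimal sketch (with hypothetical function and variable names; it is
our own illustration, not code from either study) applies a flux-weighted 2D
Fourier transform in $(\ln r, \theta)$ space to a deprojected, face-on image
and recovers the pitch angle from the peak radial frequency of the $m=2$ mode:
\begin{verbatim}
import numpy as np

def pitch_angle_fourier(image, m=2, p_grid=np.arange(-30.0, 30.1, 0.25),
                        r_min=5.0, r_max=100.0):
    # Flux-weighted Fourier amplitude A(m, p) over (u, theta), u = ln r.
    ny, nx = image.shape
    y, x = np.indices(image.shape)
    x, y = x - nx / 2.0, y - ny / 2.0
    r, theta = np.hypot(x, y), np.arctan2(y, x)
    sel = (r > r_min) & (r < r_max)
    u, th, flux = np.log(r[sel]), theta[sel], image[sel]
    amp = np.array([np.abs(np.sum(flux * np.exp(-1j * (m * th + p * u))))
                    for p in p_grid])
    p_peak = p_grid[np.argmax(amp)]
    # A logarithmic spiral cos(m*theta + p*u) obeys tan(phi) = m/|p|,
    # so the peak radial frequency gives the pitch angle directly.
    return np.degrees(np.arctan2(m, abs(p_peak))), p_peak

# Toy test: a synthetic two-armed spiral with a 20 deg pitch angle.
ny = nx = 256
y, x = np.indices((ny, nx))
r = np.hypot(x - nx / 2, y - ny / 2) + 1e-3
theta = np.arctan2(y - ny / 2, x - nx / 2)
img = 1.0 + np.cos(2 * (theta - np.log(r) / np.tan(np.radians(20.0))))
print(pitch_angle_fourier(img))   # phi ~ 20 deg
\end{verbatim}
Because each pixel enters the sum weighted by its flux, faint arc-like
segments contribute little, which is the behavior described above.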
Our findings are qualitatively consistent with the theoretical expectations of
density wave theory.
\cite{Roberts1975} argued that the mass concentration determines the pitch
angle of spiral arms, a conclusion confirmed by the modal analysis of
\cite{Bertin1989}. In a similar vein, \cite{Hozumi2003} suggested that
tighter spiral arms are associated with higher surface density. In Figure~6b,
we study the effect of mass
concentration on arm tightness at {\it fixed}\ galaxy mass, by grouping the
data into bins of $\log (M_*^{\rm gal})$ after dividing the sample into two
subsets according to their mean value of $C_{28}$. For a given stellar mass
$\log (M_*^{\rm gal}/M_{\odot})\gtrsim10.4$, we note that galaxies with
higher $C_{28}$ (blue crosses) tend to have smaller $\varphi$, but the
tendency disappears for stellar masses $\log (M_*^{\rm gal}/M_{\odot})
\lesssim10.4$, likely due to small number statistics. At the same time, at
fixed $C_{28}$ more massive galaxies tend to have tighter arms; the difference
decreases with decreasing $C_{28}$ and vanishes at $C_{28}\approx 2$
(Figure~5b).
\subsection{Dependence on Galaxy Kinematics}
\subsubsection{Central velocity dispersion}
\begin{figure}
\figurenum{7}
\centering
\includegraphics[width=8cm]{pitch_sigma.eps}
\caption{Correlation between pitch angle and central velocity dispersion ($\sigma_c$).
The data points are grouped into five equal bins of $\sigma_c$.
The solid black points and their associated error bars mark the mean value and
standard deviation of the data in each bin.
The pitch angle decreases with increasing $\sigma_c$, with small scatter
for $\sigma_c\ga100$\,km\,s$^{-1}$, but for $\sigma_c\lesssim100$\,km\,s$^{-1}$
the mean value of pitch angle remains roughly constant. For the five bins
of $\sigma_c$, the data points are further separated into two parts according
to the mean value of $\log(M_*^{\rm gal})$; the mean and standard deviation of
the top and bottom parts are marked by the blue and red crosses, respectively.
The Pearson's correlation coefficient, $\rho$, is shown on the bottom-left corner.
}
\end{figure}
Figure~7 plots pitch angles versus the central velocity dispersion
($\sigma_c$). The correlation is strong, with
Pearson correlation coefficient $\rho$\,=\,$-0.64$.
For galaxies with $\sigma_c\ga100$\,km\,s$^{-1}$, pitch angle
decreases with $\sigma_c$ with small scatter, reaching a mean value of
$\varphi = 9\fdg2\pm2\fdg2$ at $\sigma_c=193\pm11$\,km\,s$^{-1}$, the
highest velocity dispersion covered by our sample. No galaxies with high
$\sigma_c$ show open spiral arms. By contrast, galaxies with
$\sigma_c\lesssim100$\,km\,s$^{-1}$ host arms with a wide spread in pitch
angles, from values as high as $\varphi \approx 30\degr$ to as low as
$\varphi \approx 15\degr$. The pitch angle seems to remain at a roughly
constant mean value of $\varphi \approx 23\degr$ for
$\sigma_c\lesssim100$\,km\,s$^{-1}$.
\subsubsection{Comparison with previous results}
\cite{Seigar2008} reported a strong inverse correlation between spiral arm
pitch angle and central stellar velocity dispersion for a sample of 27 galaxies.
Here, we independently reexamine their results. Instead of using the same
images used in \cite{Seigar2008}, we use, whenever possible, images of higher
quality: seven images from CGS \citep{Ho2011}, 16 images from SDSS, and
three images collected from the NASA/IPAC Extragalactic Database
(NED)\footnote{\tt http://nedwww.ipac.caltech.edu}. The data point for the
Milky Way is not included, as its pitch angle may be unreliable. We uniformly
analyze all the objects, using, for consistency, the 2D Fourier transformation
method. \cite{Seigar2008} did not provide the projection parameters ($e$ and
PA) for the galaxies, and so we measure them following the procedure described in
Section 3.1. We successfully measure pitch angles for 21 of the 26 galaxies.
Direct comparison of our new pitch angle measurements with those published by
\cite{Seigar2008} reveal that a significant fraction of them (6/21) were
severely and systematically overestimated (by more than $8\degr$). Figure~8
illustrates the four cases with the most serious discrepancy, whose pitch
angles were overestimated by more than $10\degr$. The panels in the left column
present the unsharp-masked images overplotted with the synthetic arm with the
pitch angle derived from this work (red solid line) and from the work of
\citeauthor{Seigar2008} (\citeyear{Seigar2008}; orange dash-dotted line). The panels in the right column
plot the 2D Fourier spectra, with an arrow indicating the peak chosen to
derive the pitch angle. It is obvious that the synthetic arms created using
our pitch angles trace the spiral arms very well, whereas those that adopt the
pitch angles from \cite{Seigar2008} do not. In the extreme case of NGC~3938
(top row), \cite{Seigar2008} quoted a pitch angle of $43\fdg4$, whereas we
find $16\fdg7$.
These four galaxies have clear spiral arms and a distinctly dominant Fourier
mode. Their pitch angles can be measured rather straightforwardly and
unambiguously. The large discrepancies with the published values cannot arise
from errors in the measurement technique.
With our updated pitch angle measurements in hand for the 21 galaxies
reanalyzed by us, we redraw in Figure~9 the relation between $\varphi$ and
$\sigma_c$ as originally published by \cite{Seigar2008}, using $\sigma_c$
given by those authors (their Table~1). The relationship between $\varphi$
and $\sigma_c$ is considerably less tight than claimed by \cite{Seigar2008},
and instead closely resembles our results based on a much larger sample
(Figure~7).
\begin{figure*}
\figurenum{8}
\centering
\includegraphics[width=12cm]{exampleSiyueSeigar2008.eps}
\caption{
Illustration of our new measurements of pitch angle for four galaxies in common with the study of \cite{Seigar2008}. (Left) Unsharp-masked image overplotted with synthetic arms with our pitch angle
(red solid curve) and that of \citeauthor{Seigar2008} (\citeyear{Seigar2008}; orange dashed curve). (Right) Fourier spectra
to derive the pitch angle, with an arrow indicating the peak chosen.
}
\end{figure*}
\begin{figure}
\figurenum{9}
\centering
\includegraphics[width=8cm]{siyue_seigar_sigma.eps}
\caption{Correlation between pitch angle and $\sigma_c$. Blue crosses mark the
results from \cite{Seigar2008}, while black solid points represent our new
measurements. Data points for the same galaxy are connected with a dashed line
for comparison. Our new measurements show that there are objects with pitch
angles $\sim20\degr$ at $\sigma_c\approx 50$\,km\,s$^{-1}$, making the
trend very similar to that presented in Figure~7.
}
\end{figure}
\subsubsection{Implications}
\begin{figure}
\figurenum{10}
\centering
\includegraphics[width=8cm]{pitch_mass_sig.eps}
\caption{Galaxy stellar mass $(M_*^{\rm gal})$ is plotted against central
velocity dispersion ($\sigma_c$). The data are further separated into five
fan-shaped bins, denoted by the numbers 1 (purple), 2 (red), 3 (orange), 4
(green), and 5 (blue). The mean value and standard deviation of pitch angle in
each bin are shown in the inset panel: (1) $26\fdg0\pm6\fdg0$, (2)
$23\fdg1\pm6\fdg4$, (3) $21\fdg9\pm6\fdg4$, (4) $16\fdg7\pm4\fdg7$, and (5)
$9\fdg3\pm2\fdg0$. Our results suggest that when $\sigma_c$ or
$M_*^{\rm gal}$ is high, the pitch angle is mainly determined by
$\sigma_c$, while when $\sigma_c$ or $M_*^{\rm gal}$ is low, the pitch
angle is mainly determined by $M_*^{\rm gal}$.
}
\end{figure}
Pitch angle correlates strongly with $\sigma_c$ for
$\sigma_c\ga100$\,km\,s$^{-1}$. As $\sigma_c$ is measured within the central
$3\arcsec$ ($\sim 1$ kpc for our median sample distance of 68 Mpc), it must
connect with certain global properties to influence galactic-scale spiral
structure. It is well known that the luminosity of elliptical galaxies scales
with stellar velocity dispersion following $L\propto\sigma^4$ \citep{Faber1976},
but the Faber-Jackson relation of bulges is not well-determined. It varies
systematically with Hubble type \citep{Whitmore1981, Kormendy1983} and between
classical bulges and pseudobulges \citep{Kormendy2004}. The correlation
between pitch angle and $\sigma_c$ is not entirely consistent with that
between pitch angle and bulge mass. The $\varphi - M_*^{\rm bul}$ relation
clearly has larger scatter than the $\varphi - \sigma_c$ relation, especially
at the high-mass end (Figure~6c). For $\sigma_c\lesssim100$\,km\,s$^{-1}$,
the mean value of pitch angle remains essentially constant, suggesting that in
this regime another parameter determines the pitch angle. In order to study
the effect of galaxy mass on pitch angle for a given central velocity
dispersion, for each of the five bins of $\sigma_c$, we again split the data
into two subsets according to the mean value of $\log(M_*^{\rm gal})$, and then
separately examine the behavior of each. As Figure~7 shows, when
$\sigma_c\ga100$\,km\,s$^{-1}$ stellar mass does not help to reduce the
scatter in pitch angle, but in the low-$\sigma_c$ regime more massive galaxies
tend to have smaller pitch angle, consistent with the empirical relation found
in Section 4.2. Thus, galaxy mass can account for part of the scatter in
pitch angle.
The $\varphi-M_*^{\rm gal}$ relation and the $\varphi-\sigma_c$
relation are actually projections of a stronger three-parameter relation
involving $\varphi$, $M_*^{\rm gal}$, and $\sigma_c$. Figure~10 plots
$\log(M_*^{\rm gal})$ against $\sigma_c$.
To separate the data equally into five fan-shaped bins, numerically denoted 1
to 5, we scale the data by dividing them by
($\sigma_{c, {\rm max}}$\,$-$\,$\sigma_{c, {\rm min}}$) and
($\log M^{\rm gal}_{*, {\rm max}}$\,$-$\,$\log M^{\rm gal}_{*, {\rm min}}$)
and then generate six radiant dotted lines
in the scaled parameter space, with equal separation in orientation angle of
$18\degr$, originating from a reference point with maximum $\sigma_c$ and
minimum $M_*^{\rm gal}$.
The counterparts of the dotted lines in the original ($\sigma_c$, $\log M^{\rm gal}_*$) space are shown in Figure~10.
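For concreteness, the following sketch (hypothetical function name and random
stand-in data; the labelling of the wedges is a convention) reproduces this
binning scheme:
\begin{verbatim}
import numpy as np

def fan_bins(sigma_c, logm, n_bins=5):
    # Scale both axes by their full ranges, then cut the 0-90 deg
    # quadrant seen from the corner (max sigma_c, min log M*) into
    # n_bins wedges of equal opening angle (18 deg for n_bins = 5).
    s = (sigma_c - sigma_c.min()) / (sigma_c.max() - sigma_c.min())
    m = (logm - logm.min()) / (logm.max() - logm.min())
    ang = np.degrees(np.arctan2(m, 1.0 - s))
    edges = np.linspace(0.0, 90.0, n_bins + 1)
    return np.clip(np.digitize(ang, edges), 1, n_bins)

# Hypothetical usage with random numbers standing in for the sample:
rng = np.random.default_rng(0)
sigma_c = rng.uniform(30.0, 210.0, 79)        # km/s
logm = rng.uniform(9.5, 11.2, 79)             # log(M*/Msun)
labels = fan_bins(sigma_c, logm)
for k in range(1, 6):
    print(k, np.sum(labels == k))
\end{verbatim}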
We calculate the mean and standard deviation of $\varphi$ in each bin.
The inset panel in Figure~10 shows the tendency
of pitch angle progressively increasing from the fifth bin
($\varphi$\,=\,$9\fdg3\pm2\fdg0$) to the first bin
($\varphi$\,=\,$26\fdg0\pm6\fdg0$). Our results suggest that when $\sigma_c$
or $M_*^{\rm gal}$ is high, $\varphi$ is mainly determined by $\sigma_c$,
whereas when $\sigma_c$ or $M_*^{\rm gal}$ is low, $\varphi$ is mainly
determined by $M_*^{\rm gal}$. Figure~10 can explain the behavior of the
$\varphi-M_*^{\rm gal}$ and $\varphi-\sigma_c$ relations. In the high-mass
regime, the scatter in $\varphi$ is large at a given $M_*^{\rm gal}$ (Figure
5b) because $\sigma_c$ is high ($\sim 130-210$ km\,s$^{-1}$) and $\varphi$ is
mainly dictated by $\sigma_c$. In the low-$\sigma_c$ regime, the mean value of
$\varphi$ remains nearly constant with $\sigma_c$ (Figure~7) because
$\varphi$ is mainly determined by $M_*^{\rm gal}$ instead of $\sigma_c$.
Therefore, our results suggest that two primary parameters---central velocity
dispersion and galaxy mass---synergistically determine spiral arm pitch angle.
\subsubsection{Morphology of the Rotation Curve of the Central Region}
\begin{figure}
\figurenum{11}
\centering
\includegraphics[width=8cm]{pitch_pc.eps}
\caption{The pitch angle shows a strong inverse correlation with PC$_1$.
The data points are grouped into five equal bins of PC$_1$.
The solid black points and their associated error bars mark the mean value and
standard deviation of the data in each bin. PC$_1$
reflects the morphology of the rotation curve in the central region. Galaxies
with high PC$_1$ ($>0$) have a high-amplitude, centrally peaked rotation curve
that attains a sharp maximum near the center, followed by a dip and then a
broad maximum of the disk component; those with low PC$_1$ ($<0$) have a
low-amplitude, slow-rising rotation curve that rises gently from the center in
a rigid-body fashion.
The Pearson's correlation coefficient, $\rho$, is shown on the bottom-left corner.
}
\end{figure}
The observed rotation curves can be grouped qualitatively into several
types according to their behavior in the central region
\citep[e.g.,][]{Keel1993, Sofue1999}. \cite{Kalinova2017} applied principal
component analysis to quantitatively classify the rotation curves of the
CALIFA galaxies. The coefficient of the first eigenvector
PC$_1$ describes the morphology of the rotation curve of the central region.
Galaxies with high PC$_1$ ($>0$) have a high-amplitude, centrally peaked
rotation curve that attains a sharp maximum near the center, followed by a
dip and then a broad maximum of the disk component; those with low PC$_1$
($<0$) have a low-amplitude, slow-rising rotation curve that rises gradually
from the center in a rigid-body fashion. As the coefficient PC$_1$ is a
measure of both the shape and amplitude of the rotation curve, PC$_1$
simultaneously reflects the mass of the central component and the mass of
the disk, especially for bulgeless galaxies. As expected from the
$\varphi\,-\,\sigma_c\,-\,M_*^{\rm gal}$ relation, $\varphi$ shows a
strong inverse correlation with PC$_1$ (Figure~11;
$\rho$\,=\,$-0.66$).
In addition to baryonic mass, the shape and the amplitude of the rotation
curve also reflect the mass distribution of dark matter in galaxies. Hence,
the strong correlation between pitch angle
and morphology of the central rotation curve also implies that dark matter
content might help to shape spiral arms.
\subsubsection{Shear rate}
\begin{figure}
\figurenum{12}
\centering
\includegraphics[width=8cm]{pitch_shear.eps}
\caption{Comparison between pitch angle and shear rate.
Our measurements are
plotted as small open black points, with large solid black points denoting the
mean and associated standard deviation for five equal bins in $\Gamma$. The results
from $N$-body simulations of \cite{Grand2013} and \cite{Michikoshi2014} are
plotted as green crosses and blue triangles, respectively. The black solid
curve traces the theoretical prediction of swing amplification theory, given
by \cite{Michikoshi2014}, assuming $Q>1.5$. The dashed curve denotes the
prediction for $Q\approx 1.2$ (Eq. (5)).
The Pearson's correlation coefficient, $\rho$, is shown on the bottom-left corner.
}
\end{figure}
The strong correlation between pitch angle and shear rate $\Gamma$ originally
suggested by \cite{Seigar2005} and \cite{Seigar2006} has not been substantiated by the
recent study of \cite{Yu2018a}, who show that $\sim 1/3$ of the pitch angle
measurements of \cite{Seigar2006} have been severely overestimated, as a
consequence of which the correlation between $\varphi$ and $\Gamma$ is much
weaker than previously claimed\footnote{It behooves us to note that we
uncovered similar problems with the pitch angle measurements in
\cite{Seigar2005}. For example, these authors report $\varphi = 30\fdg4\pm1\fdg9$
for ESO~576$-$G51, but the Fourier spectra shown in their Figure~2 clearly
demonstrate that its dominant $m=2$ mode reaches its peak at $p\approx 6$,
which, since $\tan\varphi = m/|p|$, corresponds to only $\varphi \approx 18\degr$. Inspection of the Fourier
spectra of other galaxies in their study (e.g., ESO~474$-$G33) reveals similar
inconsistencies.}. We reassess the relationship between $\varphi$ and $\Gamma$
in Figure~12.
Pitch angle does have a tendency to decline with increasing
$\Gamma$ ($\rho$\,=\,$-0.49$), although the scatter is large.
Two physical mechanisms may explain this behavior.
An association between $\varphi$ and $\Gamma$ was recently explored using
numerical simulations by \cite{Grand2013} and \cite{Michikoshi2014}. $N$-body
simulations of isolated stellar disks produce transient but recurrent
local spiral arms \citep{Carlberg1985, Bottema2003, Sellwood2011, Fujii2011,
Grand2012a, Grand2012b, Baba2013, Onghia2013}. \cite{Onghia2013} argue that
spiral arms originate from the dynamical response of a self-gravitating
shearing disk to local density perturbations. Differential motion tends to
stretch and break up the spiral arms locally. In regions where self-gravity
dominates, the disk is locally overdense and generates arm segments, which
reconnect and make up the spiral arms. The simulations of \cite{Grand2013}
and \cite{Michikoshi2014}, marked by green crosses and blue triangles in
Figure~12, share the same general tendency for $\varphi$ to decline with
increasing $\Gamma$. However, a systematic offset can be clearly seen.
For a given shear rate, the predicted pitch angles are larger than the observed
values, typically by $\Delta\varphi\ga8\degr$, especially at the low-$\Gamma$
end. This implies that, in terms of arm morphology, $N$-body simulations
cannot yet generate realistic spiral arms even if the resolution of the
simulations is very high.
Swing amplification \citep{Julian1966, Goldreich1978, Toomre1981} is a
mechanism of amplifying spiral arms when a leading spiral pattern rotates
to a trailing one due to the shear in a differentially rotating disk.
Swing amplification theory is reasonably consistent with $N$-body simulations in
terms of the predicted number of arms, which is approximately inversely
proportional to the mass fraction of the disk component \citep{Carlberg1985,
Bottema2003, Fujii2011, Onghia2013}, and in terms of the relationship between
the shear rate of the rotation curve and the pitch angle of the simulated
spirals \citep{Grand2013, Michikoshi2014}.
\cite{Michikoshi2014} derived a theoretical relation between $\varphi$ and
$\Gamma$ in the context of swing amplification theory. We give a brief
summary here. We consider a material arm that swings from leading to trailing
due to differential motion. The pitch angle evolves as
\citep[e.g.,][]{Binney}
\begin{eqnarray}
\tan\varphi=\frac{1}{2At},
\end{eqnarray}
\noindent
where $A$ is the first Oort constant and $t=0$ means that the arm extends
radially outward across the disk ($\varphi$\,=\,$90\degr$).
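As a numerical illustration of this winding relation, assume a hypothetical
flat rotation curve with $V=220$ km s$^{-1}$ at $R=8$ kpc, for which
$A = V/2R$ and hence $\Gamma = 2A/\Omega = 1$:
\begin{verbatim}
import numpy as np

V, R = 220.0, 8.0              # km/s, kpc (illustrative values)
A = 0.5 * V / R                # first Oort constant, km/s/kpc
KPC_KM, GYR_S = 3.086e16, 3.156e16
for t_gyr in (0.1, 0.3, 1.0):
    two_At = 2.0 * (A / KPC_KM) * t_gyr * GYR_S   # dimensionless
    phi = np.degrees(np.arctan(1.0 / two_At))
    print(f"t = {t_gyr:3.1f} Gyr -> phi = {phi:4.1f} deg")
\end{verbatim}
A material arm thus winds from open ($\sim20\degr$) to very tight
($\sim2\degr$) within a Gyr, which is why such arms must be transient and
recurrent.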
In the simulations of \cite{Michikoshi2014}, Toomre's $Q$ parameter increases
rapidly and exceeds 1.5 for $\Gamma\gtrsim 0.5$. In the linear approximation
of swing amplification theory, if $Q>1.5$, the maximum amplification is reached
at $t_{\rm max} \backsimeq3.5/\kappa$, where $\kappa$ is the epicyclic
frequency. They interpret the most amplified short wave as the spiral
structures observed in their simulation. Combined with
$\Gamma=\frac{2A}{\Omega}$, the relation between pitch angle and shear rate
becomes (their Eq. 15)
\begin{eqnarray}
\tan\varphi = \frac{2}{7} \frac{\sqrt{4-2\Gamma}}{\Gamma}.
\end{eqnarray}
\noindent
This is indicated by the solid line in Figure~12. The prediction from swing
amplification theory is quantitatively consistent with the results from
$N$-body simulations. Note that the predicted pitch angle for a given shear
rate is still higher than our measurements (by $\sim8\degr$). This is because
Eq. (4) assumes $Q>1.5$, which is indeed the case for the simulated spiral
galaxies of \cite{Michikoshi2014}. However, as suggested by \cite{Bertin1989},
because of self-regulation of gas content, Toomre's $Q$ parameter in the outer
regions of galactic disks should be close to unity. On the other hand, $Q$ may
be substantially larger in the central regions of galaxies because of the
presence of a bulge. Considering both the effects of the disk and the bulge,
if $Q$ is constrained to $\sim 1.2$, we estimate from Figure 7 of
\cite{Michikoshi2014} that $t_{\rm max} \approx 5.5/\kappa$. For
$Q \approx 1.2$, we obtain, from swing amplification theory,
\begin{eqnarray}
\tan\varphi = \frac{2}{11} \frac{\sqrt{4-2\Gamma}}{\Gamma}.
\end{eqnarray}
\noindent
This revised relation, presented as the dashed curve in Figure~12, is now
consistent with our observational results.
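The following sketch evaluates Eqs. (4) and (5) side by side and reproduces
the offset of order $8\degr$ between the two $Q$ regimes quoted above (near
$\Gamma\approx1$):
\begin{verbatim}
import numpy as np

def phi_swing(gamma, c):
    # tan(phi) = (2/c) * sqrt(4 - 2*Gamma) / Gamma,
    # with c = 7 for Q > 1.5 (Eq. 4) and c = 11 for Q ~ 1.2 (Eq. 5).
    return np.degrees(np.arctan(2.0 / c * np.sqrt(4.0 - 2.0 * gamma) / gamma))

for g in (0.6, 0.8, 1.0, 1.2):
    print(f"Gamma = {g:3.1f}: Q>1.5 -> {phi_swing(g, 7):4.1f} deg, "
          f"Q~1.2 -> {phi_swing(g, 11):4.1f} deg")
\end{verbatim}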
Our results have two important implications. First, if swing amplification
theory is the correct framework to explain spiral structures in galaxies, then
Toomre's $Q$ should be roughly 1.2. And second, to generate more
realistic spiral arms, simulations may need an effective cooling mechanism for
the disk, perhaps by including the effects of gas, to lower Toomre's $Q$.
In the framework in which spiral structures constitute transient material arms,
the shape of the arms should reflect the effects of differential rotation alone.
Although we present some evidence supporting this picture, the existence of
empirical relationships between pitch angle and $\sigma_c$ or PC$_1$ with
statistically stronger correlation coefficients
probably rules out the transient material arms scenario.
A more compelling, alternative explanation for the relationship between pitch
angle and shear rate comes from density wave theory. As the shape of the
rotation curve depends on the distribution of mass, the shear rate reflects
the mass concentration. Consequently, the inverse correlation between $\varphi$
and $\Gamma$ is qualitatively consistent with the expectations of density wave
models \citep{LinShu64, Roberts1975, Bertin1989, Bertin1989b}.
The predicted relation between $\varphi$ and $\Gamma$ is consistent with our
observed $\varphi-C_{28}$ relation (Figure~5b).
\section{Summary and Conclusions}
After more than half a century of research, the physical origin of spiral arms
in galaxies is still a topic of debate \citep[for a review, see][]{Dobbs2014}.
The pioneering work of \cite{Kennicutt1981} systematically established the
dependence of spiral arm pitch angle ($\varphi$) on galaxy properties, but
there have been only a handful of quantitative follow-up studies since \citep{Ma2002,
Seigar2005,Seigar2006,Seigar2008,Kendall2011,Kendall2015,Davis2015,Hart2017}.
The CALIFA survey of nearby galaxies provides a good opportunity to revisit
this problem, given the plethora of relevant ancillary measurements available
for the sample, including luminosity, stellar mass, photometric decomposition,
and kinematics \citep{Walcher2014,Sanchez2016Pipe3D,FBJ2017,Sanchez2017,JMA2017,
Catalan-Torrecilla2017, Gilhuly2018}. Because of the low redshift of the
sample, SDSS images are adequate to resolve the spiral structure of the
galaxies \citep{Yu2018a}. This paper uses SDSS $r$-band images to perform a
detailed analysis of the spiral structure of the 79 relatively face-on CALIFA
spiral galaxies with available published rotation curves. We aim to
systematically examine the correlation between spiral arm pitch angle and
other galaxy properties to investigate the physical origin of spiral arms.
As implicit in the Hubble classification system \citep{Hubble1926, Sandage1961},
we confirm that spiral arms become more open from early- to late-type galaxies.
The pitch angle of spiral arms decreases toward brighter absolute magnitudes,
larger stellar mass (total, bulge, or disk), and higher concentration
($C_{28}$), although all these correlations contain significant scatter. Pitch
angle is also correlated with $B/T$. For a given $M_*^{\rm gal}$, galaxies
with higher $C_{28}$ have tighter spiral arms. Similarly, for a given $C_{28}$,
more massive galaxies have tighter spirals.
These trends are consistent with the density wave theory for spirals, which
predicts that pitch angle decreases with higher mass concentration
\citep{Roberts1975} and larger surface density \citep{Hozumi2003}.
We also find a strong correlation between pitch angle and
central stellar velocity dispersion: for
$\sigma_c\ga100$\,km\,s$^{-1}$, $\varphi$ decreases with increasing $\sigma_c$
with small scatter, whereas $\varphi$ remains roughly constant for
$\sigma_c\lesssim100$\,km\,s$^{-1}$. This behavior has important
implications. We show that $\varphi$ is mainly determined by $\sigma_c$ for
massive galaxies, while the primary determinant of $\varphi$ becomes
$M_*^{\rm gal}$ for less massive galaxies. We then demonstrate that the
$\varphi - M_*^{\rm gal}$ and $\varphi - \sigma_c$ relations are projections
of a higher-dimensional relationship between $\varphi$, $M_*^{\rm gal}$, and
$\sigma_c$.
Spiral arm pitch angle is closely connected to the morphology of the central
rotation curve, quantified by PC$_1$, the coefficient of the first eigenvector
from principal component analysis. Galaxies with centrally peaked rotation
curves tend to have tight arms; those with slow-rising rotation curves tend to
have loose arms. As PC$_1$ reflects both the mass of the central component
and of the disk, especially for bulgeless galaxies, the connection between
pitch angle and the morphology of the central rotation curve is consistent
with the $\varphi - \sigma_c - M_*^{\rm gal}$ relation.
We do not confirm the strong connection between pitch angle and galactic
shear rate ($\Gamma$) found in previous studies. $N$-body simulations
\citep[e.g.,][]{Grand2013, Michikoshi2014}, while generally successful in
reproducing the qualitative dependence between $\varphi$ and $\Gamma$,
systematically overpredict $\varphi$ at fixed $\Gamma$. The observed inverse
correlation between $\varphi$ and $\Gamma$ can be interpreted in the context
of transient material arms obeying swing amplification theory, provided that
Toomre's $Q \approx 1.2$. In this scenario, the shape of the spiral arms
reflects the effects of differential rotation alone.
Differential rotation, however, is likely not the primary determinant of spiral
arm pitch angle, as the empirical correlation between $\varphi$ and $\Gamma$ is
not as strong as those between $\varphi$ and $\sigma_c$ or PC$_1$. The
totality of the evidence places greater weight on the density wave theory for
the origin of spiral arms in galaxies.
\begin{acknowledgements}
This work was supported by the National Key R\&D Program of China
(2016YFA0400702) and the National Science Foundation of China (11473002,
11721303). We are grateful to an anonymous referee for helpful feedback.
\end{acknowledgements}
\section{Introduction}
Understanding the chemical evolution of nearby galaxies has become
more and more interesting in the recent past, given the wealth of new,
detailed data available for their stellar and gaseous components. M33 (NGC 598)
is an ideal laboratory for testing chemical evolution models.
As a nearby blue star-forming galaxy, with a large angular size (optical size 53\arcmin$\times$83\arcmin,
Holmberg \cite{holmberg58}), and an intermediate inclination ($i$=53$^\circ$),
it is one of the galaxies observed with the greatest resolution and sensitivity.
Distance estimates range from 730~kpc (Brunthaler et al.~\cite{brunthaler05}) to 964~kpc (Bonanos et al.~\cite{bonanos06}). In this paper we adopt
an average value of 840 kpc (Freedman et al. \cite{freedman91}).
Many spectroscopic
studies are available, aiming at determining the physical and chemical properties
of its H~{\sc ii}\ regions (cf., e.g., Smith~\cite{smith75}, Kwitter \&
Aller~\cite{kwitter81}, V\'{\i}lchez et al.~\cite{vilchez88},
Willner \& Nelson-Patel~\cite{willner02}, Crockett et
al.~\cite{crockett06}, Magrini et al. \cite{magrini07a}, Rosolowsky \& Simon \cite{rs08}, Rubin et al. \cite{rubin08}).
These H~{\sc ii}\ regions trace the present
interstellar medium (ISM) and their metallicity is a measure of the star
formation rate (SFR) integrated over the whole galaxy lifetime.
As a result, the metallicity of H~{\sc ii}\ regions gives interesting constraints
to galactic chemical evolution models.
In the past, studies of the metallicity gradient of H~{\sc ii}\ regions in M33
have shown a very steep profile. First quantitative spectroscopic studies
were carried out by Smith~(\cite{smith75}), Kwitter \&
Aller~(\cite{kwitter81}) and V\'{\i}lchez et
al.~(\cite{vilchez88}). Their observations, limited to the
brightest and largest H~{\sc ii}\ regions, led to a radial oxygen gradient of about -0.1~dex~kpc$^{-1}$.
Considering the observations by the above researchers, Garnett et al.~(\cite{garnett97}) obtained an overall O/H
gradient of -0.11$\pm$0.02~dex~kpc$^{-1}$.
With increasing sample sizes, flatter
gradients have been determined, in
particular by {\em i}) Willner \& Nelson-Patel~(\cite{willner02}) who derive neon
abundances of 25 H~{\sc ii}\ regions from infrared lines, obtaining a Ne/H
gradient of -0.034$\pm$0.015~dex~kpc$^{-1}$; {\em ii}) Crockett et
al.~(\cite{crockett06}) who derive even shallower gradients
(-0.016$\pm$ 0.017~dex~kpc$^{-1}$ for Ne/H and
-0.012$\pm$0.011~dex~kpc$^{-1}$ for O/H) from optical spectra of 6 H~{\sc ii}\
regions; {\em iii}) Magrini et al. (\cite{magrini07a}) who obtain an O/H gradient of
14 H~{\sc ii}\ regions, located in the radial range from $\sim$2 to
$\sim$7.2 kpc with a slope of $-0.054\pm0.011$~dex~kpc$^{-1}$; {\em iv})
Rosolowsky \& Simon (\cite{rs08}) who observe the largest sample of H~{\sc ii}\
regions, 61, finding a slope of $-0.027\pm0.012$~dex~kpc$^{-1}$ from $\sim$0.2 to
$\sim$6 kpc;
{\em v}) Rubin et al. (\cite{rubin08}) who obtain Ne/H and S/H gradients from
Spitzer infrared spectra of -0.058$\pm$0.014~dex~kpc$^{-1}$ and -0.052$\pm$0.021~dex~kpc$^{-1}$, respectively.
Rosolowsky \& Simon (\cite{rs08}) attribute the large
discrepancies among different authors' determinations to an intrinsic
scatter of about 0.1 dex around the average gradient, a scatter that cannot be explained by
the abundance uncertainties. This kind of scatter is also seen in the
Milky Way gradient from H~{\sc ii}\ regions (e.g. Afflerbach et al. \cite{aff97}), and it may be caused
by metallicity fluctuations in the ISM and by the spiral arms. Thus a
limited number of observations, coupled with a significant metallicity
scatter at a given radius, may produce widely varying results.
In the case of a shallow gradient this effect is even stronger;
for example, for
an abundance gradient of -0.02 to -0.03 dex kpc$^{-1}$ in a galaxy with a radius of
10 kpc and a scatter of 0.1 dex, one would
need more than 30 H~{\sc ii}\ regions to obtain a good estimate of the slope of the gradient (Bresolin et al. \cite{bresolin09}).
Thus, only a large number of measurements can overcome the
uncertainties engendered by the intrinsic variance and relatively
shallow gradient of M33.
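A simple Monte Carlo experiment with representative numbers (a slope of
$-0.025$ dex kpc$^{-1}$, 0.1 dex intrinsic scatter, regions spread over a
10 kpc disk; all hypothetical choices) illustrates this sample-size
requirement:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

def slope_scatter(n, true_slope=-0.025, sigma=0.10, r_max=10.0,
                  n_trials=2000):
    # Standard deviation of the recovered slope over many mock samples.
    slopes = np.empty(n_trials)
    for k in range(n_trials):
        r = rng.uniform(0.0, r_max, n)
        oh = 8.5 + true_slope * r + rng.normal(0.0, sigma, n)
        slopes[k] = np.polyfit(r, oh, 1)[0]
    return slopes.std()

for n in (10, 30, 100):
    print(n, f"{slope_scatter(n):.4f} dex/kpc")
\end{verbatim}
With 10 regions the slope uncertainty ($\sim0.011$ dex kpc$^{-1}$) is
comparable to the gradient itself, while $\sim30$ regions bring it down to
$\sim0.006$ dex kpc$^{-1}$, in line with the estimate quoted above.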
Our sample is composed of 48 H~{\sc ii}\ regions.
We derived the physical
and chemical properties for 19 H~{\sc ii}\ regions which have not been observed previously, and
for 14 other H~{\sc ii}\ regions whose chemical abundances have already been published.
For the remaining H~{\sc ii}\ regions, the faintness of their spectra did not allow
any reliable abundance determination, since their electron temperature ($T_{e}$) could not be derived.
We complemented these observations by a sample of 102 planetary nebulae (PNe),
already presented by Magrini et al. (\cite{magrini09}, hereafter M09), observed during the same run.
The main advantage of observing a combined sample of H~{\sc ii}\ regions and PNe is being able
to use not only the same observational set-up, but also the same data reduction and analysis techniques,
and identical abundance determination methods.
Although our sample of H~{\sc ii}\
regions does not add much to the literature,
the presence of several objects in common with previous studies allows us to
check the consistency of different sets of chemical abundance results.
By including at the same time two stellar populations of different
ages but with similar spectroscopic characteristics,
our observations allowed us to study for the first time the global metallicity, its 2-dimensional (2D) distribution, and its radial gradient
at two different epochs in the galaxy's lifetime, avoiding biases introduced by different metallicity analyses.
The aim of the present study is to settle the question of the value of the metallicity gradient in
M33 and its time evolution. In this framework, the new observations of H~{\sc ii}\ regions and PNe in M33 complemented
with the previous data represent the largest metallicity database available for an external galaxy.
In addition to metallicity data, recent results, such as the detection of inside-out
growth in the disk of M33 (Williams et al. \cite{williams09}), the detailed analysis of the star formation
both in the whole disk (Verley et al. \cite{verley09}) and in several giant H~{\sc ii}\ regions (Relano \& Kennicutt \cite{relano09}),
stimulated us to revise the already existing chemical evolution model (M07b)
and the star formation process in M33.
Particular attention was paid to the observational constraints that our previous model failed to reproduce, such as the
radial profile of the molecular gas and the relationship between the SFR and the molecular gas.
The paper is organized as follows. In Sect. \ref{sect_mmt} we describe
our sample of H~{\sc ii}\ regions observed with the MMT.
These data, together with a large literature dataset, allowed us to
recompute the metallicity gradient of H~{\sc ii}\ regions. In
Sect. \ref{sect_distr} we present the 2D distribution of the metallicity
and the radial gradient of different types of H~{\sc ii}\ regions.
In Sect. 4 we discuss the off-centre metallicity peak and its
origin. In Sect. \ref{sect_model} the data are compared
with the prediction of chemical evolution model of M33.
Finally, our conclusions and a summary are given in
Sect. \ref{sect_conclu}.
\section{The H~{\sc ii}\ region dataset}
\label{sect_mmt}
Hot O-B stars ionize their surrounding medium, producing the characteristic emission-line spectra of
H~{\sc ii}\ regions.
The H~{\sc ii}\ regions of M33 studied in the literature span a wide range of luminosities.
Their intrinsic brightness led giant H~{\sc ii}\ regions to be preferred in the
earlier studies (e.g. Kwitter \& Aller \cite{kwitter81}, Vilchez et al. \cite{vilchez88}) when only
relatively small telescopes were available. Smaller and fainter H~{\sc ii}\ regions have instead been
the subject of later spectroscopic investigations
(e.g., Magrini et al. \cite{magrini07a}, Rosolowsky \& Simon \cite{rs08}).
The latest abundance determinations have been restricted to the H~{\sc ii}\ regions with available electron temperature
measurements.
Several emission-line diagnostics of nebular $T_{e}$\
are indeed present in the optical spectrum of an H~{\sc ii}\ region, namely: [O~{\sc iii}]\
4363 \AA, [N~{\sc ii}]\ 5755 \AA, [S~{\sc iii}]\ 6312 \AA, [O~{\sc ii}]\ 7320-7330 \AA. Determining
$T_{e}$\ is the only way to derive the ionic and total chemical abundances safely and
accurately.
An assumed $T_{e}$\ could produce an error of a factor of 2 or more in the final
chemical abundances (cf., e.g., Osterbrock \& Ferland
\cite{ost06}). This is why in the following analysis we include only those H~{\sc ii}\ regions whose $T_{e}$\ is directly measured.
\subsection{The MMT observations: data reduction and analysis}
In November 2007, we obtained spectra of 48 H~{\sc ii}\ regions (and 102 PNe) in M33 using the
MMT Hectospec fiber-fed spectrograph (Fabricant et al. \cite{fabricant05}). The
spectrograph was equipped with an atmospheric dispersion corrector and
it was used with a single setup: 270 mm$^{-1}$ grating at a dispersion
of 1.2 \AA ~pixel$^{-1}$. The resulting total spectral coverage
ranged from approximately 3600
\AA\ to 9100 \AA, thus including the basic emission-lines necessary
for determining their
physical and chemical properties. The instrument deploys 300 fibers
over a 1-degree diameter field of view, and the fiber diameter is $\sim$ 1.5\arcsec\
(6 pc adopting a distance of 840 kpc to M~33).
Some of the H~{\sc ii}\ regions in our sample already have published spectra in the literature, so we use them as a control sample,
while several are new. In Table \ref{tab_pos} we list the H~{\sc ii}\ regions from the
total observed sample for which we can derive the physical
and chemical properties.
The identification names are
from: BCLMP-- Boulesteix et al. (\cite{boulesteix74});
CPSDP-- Courtes et al. (\cite{courtes87}); GDK99--Gordon et al. (\cite{gordon99}); EPR2003--Engargiola et al. (\cite{engargiola03}); MJ98--Massey \& Johnson (\cite{massey98}).
The H~{\sc ii}\ regions not identified in
previous works are labelled with
LGC-HII-n as in Magrini et al. (\cite{magrini07a}), standing for H~{\sc ii}\ regions discovered by the Local Group
Census project (cf. Corradi \& Magrini \cite{corradi06}). The
coordinates J2000.0 of the position of the fibers projected on the sky are
shown in the third and fourth columns. They do not correspond exactly
to the centre of the emission line objects, but generally to the
maximum [O~{\sc iii}]\ emissivity.
\begin{table}
\caption{MMT observations of H~{\sc ii}\ regions with chemical abundance determination. }
\label{tab_pos}
\scriptsize{
\begin{tabular}{llcc}
\hline\hline
New sample &ID & RA & Dec \\
(1) &(2) & (3) & (4) \\
\hline
1 & BCLMP 275A & 1:32:29.5 & 30:36:07.90 \\
2 &GDK99 3 & 1:32:31.7 & 30:35:27.39 \\
3&LGCHII14 & 1:32:33.3 & 30:32:01.90 \\
4&CPSDP 26 & 1:32:33.7 & 30:27:06.60 \\
5&LGCHII15 & 1:32:40.8 & 30:24:24.99 \\
6&EPR2003 87 & 1:32:42.4 & 30:22:25.59 \\
7&LGCHII16 & 1:32:43.5 & 30:35:17.29 \\
8&LGCHII17 & 1:33:11.3 & 30:39:03.39 \\
9&BCLMP 694 & 1:33:52.1 & 30:47:15.59 \\
10&BCLMP 759 & 1:33:56.8 & 30:22:16.50 \\
11&MJ98 WR 112 & 1:33:57.3 & 30:35:11.09 \\
12&LGCHII18 & 1:33:58.9 & 30:55:31.30 \\
13&BCLMP 282 &1:32:39.1 & 30:40:42.10 \\
14&BCLMP 264 &1:32:40.2 & 30:22:34.70 \\
15&BCLMP 238 &1:32:44.5 & 30:34:54.30 \\
16&BCLMP 239 &1:32:51.8& 30:33:05.20 \\
17&BCLMP 261 &1:32:54.1 & 30:23:18.70\\
18&CPSDP 123 &1:33:20.4 & 30:32:49.20\\
19&CPSDP 43A &1:33:23.9 & 30:26:15.00 \\
\hline
Control sample & & &\\
\hline
20 &LGCHII2 & 1:32:43.0 & 30:19:31.19 \\
21 &LGCHII3 & 1:32:45.9 & 30:41:35.50 \\
22 &BCLMP289 & 1:32:58.5 & 30:44:28.60 \\
23 &BCLMP218 & 1:33:00.3 & 30:30:47.30 \\
24 &MA1 & 1:33:03.4 & 30:11:18.70 \\
25 &BCLMP290 & 1:33:11.4 & 30:45:15.09 \\
26 &IC132 & 1:33:15.8 & 30:56:45.00 \\
27 &BCLMP45 & 1:33:29.0 & 30:40:24.79 \\
28 &BCLMP670 & 1:34:03.3 & 30:53:09.29 \\
29 &MA2 & 1:34:15.5 & 30:37:11.00 \\
30 &BCLMP691 & 1:34:16.6 & 30:51:53.99 \\
31 &IC131 & 1:33:15.0 & 30:45:09.00\\
32 & IC133 & 1:33:15.9 & 30:53:01.00\\
33 &BCLMP745 & 1:34:37.6 & 30:34:55.00 \\
\hline
\hline
\end{tabular}}
\end{table}
The spectra were reduced using the Hectospec package. The relative
flux calibration was done by observing the standard star Hiltm600 (Massey
et al.~\cite{massey88}) during the nights of October 15 and November 27. The
emission-line fluxes were measured with the package SPLOT of
IRAF\footnote{IRAF is distributed by the National Optical Astronomy
Observatory, which is operated by the Association of Universities for
Research in Astronomy (AURA) under cooperative agreement with the
National Science Foundation}. Errors in the fluxes were calculated
taking the statistical error in the measurement of the
fluxes into account, as well as systematic errors of the flux calibrations,
background determination, and sky subtraction. The observed line
fluxes were corrected for the effect of the interstellar extinction
using the extinction law of Mathis~(\cite{mathis90}) with $R_V$=3.1. We
derived c({H$\beta$}), the logarithmic nebular extinction, by using the
weighted average of the observed-to-theoretical Balmer ratios of
H$\alpha$, H$\gamma$, and H$\delta$ to H$\beta$ (Osterbrock \& Ferland
\cite{ost06}). The detailed description of the data reduction and the plasma and chemical
analysis can be found in M09.
Spectra of two H~{\sc ii}\ regions, one close to the galactic centre and one in the outer part of the disk,
are shown in Fig.\ref{Fig_sp}.
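As an illustration on real numbers, the sketch below applies the
Balmer-decrement correction to the H$\alpha$/H$\beta$ ratio of BCLMP 275A from
Table \ref{tab_flux}; the adopted reddening-function value
$f({\rm H}\alpha)\simeq-0.32$ is an approximation, and the tabulated c({H$\beta$})\
is a weighted average over several Balmer lines, so the agreement is only
approximate:
\begin{verbatim}
import numpy as np

F_LAW = {4861: 0.00, 6563: -0.32}   # f(lambda) relative to Hbeta, R_V = 3.1

def c_hbeta(obs_ha_over_hb, intrinsic=2.86):
    # Case-B intrinsic Halpha/Hbeta = 2.86 (T_e ~ 1e4 K, n_e ~ 1e2 cm^-3).
    return np.log10(obs_ha_over_hb / intrinsic) / (-F_LAW[6563])

def deredden(flux, wavelength, c_hb):
    # Correct a flux (on the scale Hbeta = 100) via I = F * 10**(c * f).
    return flux * 10.0 ** (c_hb * F_LAW[wavelength])

c = c_hbeta(409.6 / 100.0)              # F(Halpha), F(Hbeta) from Table 2
print(f"c(Hbeta) ~ {c:.3f}")            # ~0.49 vs. the tabulated 0.493
print(f"I(Halpha) ~ {deredden(409.6, 6563, c):.0f}")  # ~286 vs. 290.2
\end{verbatim}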
\begin{figure}
\centering
\includegraphics[angle=0,width=10cm]{13564fg1.ps}
\caption{Two example spectra of H~{\sc ii}\ regions located at different galactocentric distance: BCLMP 694 at $\sim$ 2 kpc
and MA1 at $\sim$7.4 kpc. }
\label{Fig_sp}%
\end{figure}
Table \ref{tab_flux} gives the results of our emission-line measurements and extinction
corrections for 33 H~{\sc ii}\ regions whose
spectra were suitable for determining physical and chemical properties.
The columns of Table \ref{tab_flux}
indicate: (1) H~{\sc ii}\ region name; (2) nebular extinction coefficient
c({H$\beta$})\ with its error; (3) emitting ion; (4) rest-frame wavelength
in \AA; (5) measured line fluxes; (6) absolute errors on the measured line fluxes;
(7) extinction corrected line fluxes. Both F$_{\lambda}$ (5)
and I$_{\lambda}$ (7) are expressed on a scale where H$\beta$=100. Table
\ref{tab_flux} is published in its entirety in the
electronic edition of \aap. A portion is
shown here for guidance regarding its form and content.
The analysed H~{\sc ii}\ regions represent about 2/3 of our sample. The remaining 1/3 have noisy spectra and
are distributed at all galactocentric radii.
We used the extinction-corrected intensities to obtain the electron
densities and temperatures. The electron density was derived from the intensities of the sulphur-line doublet
[S~{\sc ii}]\ 6716,6731 \AA.
We used several emission-line ratios, when available, to derive low- and medium-excitation temperatures
(see also Osterbrock \& Ferland \cite{ost06}, $\S$5.2):
[N~{\sc ii}] $\lambda$5755/($\lambda$6548 +
$\lambda$6584) and [O~{\sc ii}] $\lambda$3727/($\lambda$7320 +
$\lambda$7330) for low-excitation $T_{e}$, while [O~{\sc iii}] $\lambda$4363/($\lambda$5007 + $\lambda$4959)
and [S~{\sc iii}] $\lambda$6312/($\lambda$9069 +
$\lambda$9532) for medium-excitation $T_{e}$.
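As a stand-alone cross-check of the medium-excitation diagnostic (rather than
a full multi-level atom calculation), the analytic [O~{\sc iii}]\
ratio--temperature relation of Osterbrock \& Ferland (\cite{ost06}) can be
solved numerically; applied to the de-reddened fluxes of GDK99 3 in Table
\ref{tab_flux}, and assuming $n_{e}=100$ cm$^{-3}$, this sketch recovers the
tabulated T$_{\rm e}$[O III]:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def te_oiii(ratio, ne=100.0):
    # Solve R = 7.90 * exp(3.29e4/T) / (1 + 4.5e-4 * ne / sqrt(T)) for T,
    # with R = (I(4959) + I(5007)) / I(4363).
    f = lambda t: (7.90 * np.exp(3.29e4 / t)
                   / (1.0 + 4.5e-4 * ne / np.sqrt(t)) - ratio)
    return brentq(f, 5.0e3, 2.5e4)

# GDK99 3: I(4959) = 191.5, I(5007) = 565.0, I(4363) = 3.8
print(f"T_e[O III] ~ {te_oiii((191.5 + 565.0) / 3.8):.0f} K")  # ~10200 K
\end{verbatim}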
We performed plasma diagnostics and ionic abundance calculation by using the 5-level atom model included in the {\it nebular}
analysis package in IRAF/STSDAS (Shaw \& Dufour~\cite{shaw94}).
The elemental
abundances are then determined by applying the ionization correction
factors (ICFs) following the prescriptions by Kingsburgh \&
Barlow~(\cite{kb94}) for the case where only optical lines are
available.
In the abundance analysis we adopted
T$_{\rm e}$[N II]\ and/or T$_{\rm e}$[O II]\ for computing the N$^+$, O$^+$, S$^+$ abundances, while
T$_{\rm e}$[O III]\ and/or T$_{\rm e}$[S III]\ for O$^{2+}$, S$^{2+}$, Ar$^{2+}$,
He${^+}$, and He$^{2+}$.
We calculated the abundances of He~{\sc i}\ and He~{\sc ii}\ using the
equations
of Benjamin et al.~(\cite{benjamin99}) in two density regimes, i.e. $n_{e}$
$>$1000 cm$^{-3}$ and $\leq$1000 cm$^{-3}$. Collisional
populations were taken into account following Clegg (\cite{clegg87}).
In Table~\ref{tab_abu} we present the electron densities and temperatures, and the
ionic and total chemical abundances of our H~{\sc ii}\ region sample, which only includes
H~{\sc ii}\ regions with at least one measured value of $T_{e}$. The columns of Table~\ref{tab_abu} present: (1) identification name;
(2) label of each plasma diagnostic and abundances available; (3)
relative values obtained from our analysis.
Table~3 is published in its entirety in the
electronic edition.
We derived the temperature and density uncertainties
using the error propagation of the absolute
errors on the line fluxes. The errors on the ionic and total abundances were computed taking
the uncertainties in the observed fluxes, in the electron
temperatures and densities, and in c({H$\beta$}) into account.
In Table \ref{tab_tot_abu} a summary of the total abundances He/H, O/H,
N/H, Ne/H, S/H, and Ar/H
and their errors are presented.
The He abundance is shown by number with its absolute error, while the metal abundances
are expressed in the form of log(X/H)+12 with errors expressed in dex.
The last row indicates the average abundances computed by number.
\subsection{The PN data-set}
The PN population of M33 was studied by Magrini et al. (\cite{magrini09}) using multi-fiber
spectroscopy with Hectospec at the MMT with the same spectroscopic setup and during the same nights
as the H~{\sc ii}\ region observations presented here.
Spectra of 102 PNe were analysed and plasma diagnostics and chemical abundances
obtained for 93 PNe where the necessary diagnostic lines were measured.
The data reduction and the plasma diagnostics followed exactly the same procedure
as described in the present paper, thus ensuring
that no biases are introduced for the different analysis of the spectra.
About 20$\%$ of the studied PNe have young progenitors, the so-called Type I PNe.
The rest of the PNe in the sample are the progeny of an old disk stellar population, with main
sequence masses M$<$3M${_\odot}$ and ages t$>$0.3~Gyr.
A tight relation between the O/H and Ne/H abundances was found,
ruling out the possibility that both elements were altered by the PN progenitors and supporting the validity of
oxygen as a good tracer of the ISM composition at the epoch of the progenitors' birth.
\begin{table}
\caption{Observed and de-reddened fluxes. }
\label{tab_flux}
\scriptsize{
\begin{tabular}{lllllll}
\hline\hline
ID & c({H$\beta$}) & Ion & $\lambda$ (\AA) & F$_{\lambda}$ & $\Delta$(F$_{\lambda}$) & I$_{\lambda}$ \\
(1)&(2) &(3) &(4) &(5) &(6) &(7)\\
\hline
BCLMP275A &0.493$\pm$0.009 &[OII] & 3727 & 186.4 & 2.2 & 256.7\\
&&[NeIII]/HI &3968& 3.2 & 0.7 & 4.1\\
&&HI & 4100 & 13.0 & 1.1 & 16.2 \\
&&HI & 4340 & 34.4 & 1.1 & 39.8 \\
&&[OIII] &4363 & 1.1 & 0.8& 1.2 \\
&&HI &4861 & 100.0& 1.6 & 100.1 \\
&&[OIII] &4959 & 57.3 & 1.2 & 55.8 \\
&&[OIII] &5007 & 169.9 & 2.0 & 163.3 \\
&&HeI &5876 & 13.5 & 0.9 & 10.7 \\
&&[SIII] &6312 & 1.7 & 0.8 & 1.3 \\
&&[NII] &6548 & 9.1 & 1.0 & 6.5 \\
&&HI &6563 & 409.6 & 2.3 & 290.2 \\
&&[NII] &6584 & 28.9 & 1.1 & 20.4 \\
&&HeI &6678 & 3.9 & 0.8 & 2.7 \\
&&[SII] &6717 & 32.7 & 1.1 & 22.6 \\
&&[SII] & 6731 & 23.1 & 1.2 & 15.9 \\
&&HeI & 7065 & 2.6 & 0.8 & 1.7 \\
&&[ArIII] &7135 & 9.8 & 0.8 & 6.4 \\
&&[SIII] &9069 & 21.4 & 1.0 & 10.2 \\
\hline
GDK99 3 &0.448$\pm$0.007 & [OII] & 3727 & 83.3 & 1.0 & 111.4 \\
&&HI &3835 & 6.8 & 0.6 & 8.8 \\
&&[NeIII] &3869 & 25.9 & 0.6 & 33.4 \\
&&HeI &3889 & 9.8 & 0.6 & 12.6 \\
&&[NeIII]/HI &3968 & 18.0 & 0.7 & 22.7 \\
&&HI &4100 & 17.5 & 0.7 & 21.2 \\
&&HI &4340 & 40.0 & 0.9 & 45.8 \\
&&[OIII] &4363 & 3.3 & 0.5 & 3.8 \\
&&HeI &4471 & 3.5 & 0.5 & 3.8 \\
&&HeII &4686 & 0.5 & 0.5 & 0.6 \\
&&HI &4861 & 100.0 & 1.3 & 100.1 \\
&&[OIII] &4959 & 196.2 & 1.5 & 191.5 \\
&&[OIII] &5007 & 585.9 & 2.8 & 565.0 \\
&&HeI &5876 & 14.0 & 0.7 & 11.3 \\
&&[NII] &6548 & 6.1 & 0.4 & 4.5 \\
&&HI &6563 & 396.4 & 1.9 & 289.9 \\
&&[NII] &6584 & 19.2 & 0.6 & 14.0 \\
&&HeI &6678 & 4.3 & 0.4 & 3.1 \\
&&[SII] &6717 & 31.0 & 0.7 & 22.2 \\
&&[SII] & 6731 & 21.6 & 0.7 & 15.4 \\
&&HeI & 7065 & 3.4 & 0.5 & 2.3 \\
&&[ArIII] &7135 & 14.8 & 0.5 & 9.9 \\
\hline
\hline
\end{tabular}}
\end{table}
\begin{center}
\begin{table}
\caption{Plasma diagnostics and abundances. }
\label{tab_abu}
\scriptsize{
\begin{tabular}{cll}
\hline\hline
ID & & \\
(1)&(2)&(3)\\
\hline
BCLMP275A & & \\
&T$_{\rm e}$[O III] &10600 \\
& T$_{\rm e}$[S III] &15800\\
&HeI/H & 0.076\\
& [OII]/H &7.740e-05\\
& [OIII]/H & 4.737e-05\\
& ICF(O) &1.000\\
& O/H &1.248e-04\\
&[NII]/H &3.287e-06\\
&ICF(N) &1.612\\
&N/H &5.299e-06\\
&[ArIII]/H &4.910e-07\\
&ICF(Ar) &1.87\\
&Ar/H &9.182e-07\\
&[SII]/H &7.486e-07\\
&[SIII]/H &1.100e-06\\
&ICF(S) &1.019\\
&S/H &1.884e-06\\
\hline
GDK99 3& & \\
&T$_{\rm e}$[O III] & 10200 \\
& HeI/H & 0.081 \\
&[OII]/H & 3.880e-05\\
&[OIII]/H &1.860e-04\\
& ICF(O) & 1.006 \\
& O/H &2.261e-04\\
&[NII]/H &2.536e-06\\
&ICF(N) &5.826\\
& N/H & 1.475e-05\\
&[NeIII]/H &3.150e-05\\
&ICF(Ne) &1.215\\
&Ne/H &3.829e-05\\
&[ArIII]/H &8.560e-07\\
&ICF(Ar) &1.87\\
&Ar/H &1.601e-06\\
&[SII]/H &8.157e-07\\
&ICF(S) &1.323\\
& S/H &8.255e-06\\
\hline
\hline
\end{tabular}
}
\end{table}
\end{center}
\begin{center}
\begin{table*}
\caption{The chemical abundances of our MMT sample. }
\label{tab_tot_abu}
\scriptsize{
\begin{tabular}{lllllll}
\hline\hline
\# & He/H & O/H & N/H & Ne/H & Ar/H & S/H \\
(1) &(2) &(3) &(4) &(5) &(6) &(7)\\
\hline
1 & 0.076$\pm$0.001 &8.10$\pm$0.07 & 6.72$\pm$0.15 & - & 5.96$\pm$0.34 & 6.27$\pm$0.20\\
2 & 0.081$\pm$0.001 & 8.35$\pm$0.05 & 7.17$\pm$0.11 & 7.58$\pm$0.015& 6.20$\pm$0.09 & 6.92$\pm$0.21 \\
3 & 0.058$\pm$0.003 &7.97$\pm$0.08 & 6.88$\pm$0.12 &- & 6.02$\pm$0.11 & 6.62$\pm$0.15\\
4 & 0.088$\pm$0.010 &8.36$\pm$0.05 & 7.24$\pm$0.19 &- & 5.82$\pm$0.18 & 6.46$\pm$0.19\\
5 & 0.085$\pm$0.005 &8.45$\pm$0.02 & 7.35$\pm$0.14 &- &- & 6.59$\pm$0.12\\
6 &0.114$\pm$0.003 &8.52$\pm$0.08 & 6.86$\pm$0.15 &7.60$\pm$0.20 & 6.27$\pm$0.12 & 6.50$\pm$0.20\\
7 & 0.085$\pm$0.005 &8.30$\pm$0.12 & 7.09$\pm$0.20 &7.00$\pm$0.18 & 6.07$\pm$0.16 & 6.61$\pm$0.15\\
8 & 0.113$\pm$0.005 &8.21$\pm$0.07 & 7.30$\pm$0.24 &7.69$\pm$0.42 & 6.20$\pm$0.20 & 6.49$\pm$0.15\\
9 &0.092$\pm$0.008 &8.33$\pm$0.10 & 7.21$\pm$0.10 & 7.53$\pm$0.20 &6.22$\pm$0.30 & 6.79$\pm$0.10\\
10 & 0.077$\pm$0.005 &8.11$\pm$0.08 & 6.88$\pm$0.29 & - & 6.16$\pm$0.22 & 6.66$\pm$0.15\\
11 & 0.093$\pm$0.003 &8.30$\pm$0.07 & 7.37$\pm$0.02 & - & 6.30$\pm$0.30 & 7.09$\pm$0.11\\
12 & 0.091$\pm$0.008 &8.15$\pm$0.07 &7.94$\pm$0.05 &7.20$\pm$0.10 &6.24$\pm$0.10 &6.51$\pm$0.20\\
13 &0.080$\pm$0.005 & 8.32$\pm$0.05 & 6.92$\pm$0.12 &- &6.19$\pm$0.12 &6.36$\pm$0.18 \\
14 &0.076$\pm$0.003 &7.87$\pm$0.08 & 6.93$\pm$0.15 & 7.11$\pm$0.20 & 5.89$\pm$0.20 & 6.41$\pm$0.18\\
15 &0.089$\pm$0.006 &7.92$\pm$0.08 & 6.75$\pm$0.15 & - & 5.92$\pm$0.18 & 6.57$\pm$0.15\\
16 &0.086$\pm$ 0.005 &8.27$\pm$0.07 & 7.20$\pm$0.14 & - & - & 6.52$\pm$0.20\\
17 &0.115$\pm$0.008 &8.01$\pm$0.08 & 6.62$\pm$0.16 & 7.16$\pm$0.18 & 5.93$\pm$0.14 & 6.40$\pm$0.18\\
18 &0.078$\pm$0.005 &8.35$\pm$0.05 & 7.35$\pm$0.12 & - & 6.35$\pm$0.15 & 6.94$\pm$0.20\\
19 &0.059$\pm$0.003 &8.40$\pm$0.05 & 8.05$\pm$0.10 & 8.03$\pm$ 0.18 & 5.16$\pm$0.18 & 7.02$\pm$0.17\\
20 &0.083$\pm$0.001 & 8.08$\pm$0.05 & 6.91$\pm$0.15 & 6.87$\pm$0.18 & 6.18$\pm$0.25 & 6.64$\pm$0.15\\
21 &0.086$\pm$0.001 &8.42$\pm$0.06 & 7.56$\pm$0.13 & 7.31$\pm$0.15 & 6.01$\pm$0.27 & 6.63$\pm$0.10\\
22 & 0.072$\pm$0.005 & 8.35$\pm$0.12 & 7.34$\pm$0.16 & 7.75$\pm$0.38 & 5.67$\pm$0.50 & 6.89$\pm$0.11\\
23 &0.088$\pm$0.001 & 8.17$\pm$0.12 & 6.97$\pm$0.18 & - & 6.15$\pm$0.14 & 6.73$\pm$0.23 \\
24 &0.080$\pm$0.008 & 8.28$\pm$0.15 & 7.10$\pm$0.20 & 7.61$\pm$0.28 & 6.14$\pm$0.32 & 6.71$\pm$0.40\\
25 &0.096$\pm$0.005 &8.38$\pm$0.13 & 7.37$\pm$0.15 & 7.36$\pm$0.20 & 5.84$\pm$0.10 & 6.57$\pm$0.15\\
26 &0.061$\pm$0.003 & 7.98$\pm$0.05 &6.98$\pm$0.15 & 7.28$\pm$0.12 & 5.86$\pm$0.15 & 6.36$\pm$0.13\\
27 &0.095$\pm$0.001 & 8.48$\pm$0.08 & 7.62$\pm$0.12 & 7.73$\pm$0.15 & 6.33$\pm$0.25 & 6.87$\pm$0.15\\
28 &0.088$\pm$0.002 & 8.30$\pm$0.07 &7.10$\pm$0.18 & 7.42$\pm$0.20 &6.23$\pm$0.30 & 6.63$\pm$0.20\\
29 & 0.091$\pm$0.005 & 8.31$\pm$0.10 & 7.19$\pm$0.15 & 7.46$\pm$0.25 & 6.42$\pm$0.24 & 6.92$\pm$0.21\\
30 &0.096$\pm$0.003 & 8.42$\pm$0.06 & 7.20$\pm$0.12 & 7.85$\pm$0.21 & 6.37$\pm$0.32 & 6.75$\pm$0.23\\
31 & 0.097$\pm$0.005 &8.47$\pm$0.08 & 7.26$\pm$0.15 & 7.76$\pm$0.20 & 6.29$\pm$0.25 & 7.06$\pm$0.15\\
32 & 0.079$\pm$0.005 &8.27$\pm$0.08 & 7.21$\pm$0.17 & 7.58$\pm$ 0.21 & 5.49$\pm$ 0.30 & 6.83$\pm$0.15\\
33 &0.067$\pm$0.008 &7.93$\pm$0.10 & 7.10$\pm$0.20 & - & 6.09$\pm$0.21 & 6.51$\pm$0.20\\
\hline
& & & & & & \\
&0.085$\pm$0.011 &8.27$_{-0.17}^{+0.12}$ & 7.31$_{-0.35}^{+0.30}$& 7.56$_{-0.30}^{+0.18}$ &6.13$_{-0.22}^{+0.14}$ & 6.71$_{-0.34}^{+0.19}$ \\ & & \\
\hline
\end{tabular}}
\end{table*}
\end{center}
\section{The metallicity distribution in M33}
\label{sect_distr}
The large amount of chemical abundance data from H~{\sc ii}\ regions in M33 allows
us to analyse the spatially-resolved distribution of metals in the ISM.
In this section, we present the
radial distribution and the map of O/H,
using the new data presented in this paper and all previous oxygen determinations for which
$T_{e}$\ has been measured.
\subsection{The metallicity gradient of H~{\sc ii}\ regions}
\label{sect_grad}
Our cumulative sample includes: {\em i)} H~{\sc ii}\
regions by Magrini et al. (\cite{magrini07a}), which includes abundances from their own sample
and previous abundance determinations (all with $T_{e}$\, and with abundances recomputed uniformly);
{\em ii)} the sample by Rosolowsky \&
Simon (\cite{rs08}); {\em iii)} the present sample (see Table \ref{tab_abu}).
In Fig.~\ref{Fig_oxy}, we show the oxygen abundance as a function of
galactocentric distance for the cumulative sample of H~{\sc ii}\ regions.
In this figure each point corresponds to a single region; i.e., we do not plot multiple measurements
for the same region but only the value with the lowest error.
Note the large dispersion in the radial region between 1 and 2 kpc from the centre caused by several high- and low-metallicity
regions, located in the southern arm (see Sect. 3.3), which might be related to the presence of a bar
(e.g., Corbelli \& Walterbos \cite{corbelli07}).
We applied the routine
{\it fitexy} in Numerical Recipes (Press et
al. 1992) to fit the relation between the oxygen abundances and the galactocentric distances,
taking their errors into account and minimizing $\chi^2$.
Typical errors on the de-projected galactocentric distances associated
with the uncertainty on the inclination were less than 0.1 kpc (Magrini et al. \cite{magrini07a}).
The fit to the complete sample gives a gradient of
\begin{equation}
12 + {\rm log(O/H)} = -0.037 (\pm 0.009) ~ {\rm R_{GC}} + 8.465 (\pm
0.038)
\label{eq1}
\end{equation}
where R$_{\rm GC}$ is the de-projected galactocentric distance in kpc,
computed by assuming an inclination of 53$^\circ$ and a position angle
of 22$^\circ$.
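For illustration, a minimal sketch of this de-projection is given below; the
adopted centre coordinates are indicative and the function name is our own:
\begin{verbatim}
import numpy as np

def r_galactocentric(ra, dec, ra0=23.4621, dec0=30.6599,
                     incl=53.0, pa=22.0, dist_kpc=840.0):
    # ra, dec in degrees; PA measured from north through east.
    d_e = (ra - ra0) * np.cos(np.radians(dec0))   # eastward offset, deg
    d_n = dec - dec0                              # northward offset, deg
    t = np.radians(pa)
    a = d_e * np.sin(t) + d_n * np.cos(t)         # along the major axis
    b = (d_e * np.cos(t) - d_n * np.sin(t)) / np.cos(np.radians(incl))
    return np.radians(np.hypot(a, b)) * dist_kpc

# Example: BCLMP 694 (Table 1), quoted in Fig. 1 as lying at ~2 kpc.
ra = (1 + 33 / 60.0 + 52.1 / 3600.0) * 15.0
dec = 30 + 47 / 60.0 + 15.6 / 3600.0
print(f"R_GC ~ {r_galactocentric(ra, dec):.1f} kpc")   # ~2 kpc
\end{verbatim}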
A weighted linear least-square fit to the MMT sample only gives a gradient of
\begin{equation}
12 + {\rm log(O/H)} = -0.044 (\pm 0.017) ~ {\rm R_{GC}} + 8.447 (\pm0.084),
\label{eq2}
\end{equation}
which is consistent within the errors with the gradient from the larger sample.
In the rest of the paper, we use the larger and more complete sample when discussing the metallicity
gradient and its possible time variation, but excluding the first kpc region
where only a few low metallicity regions were analysed. We discuss
the possible reasons for the lower value of the central metallicity later in this section and in Sect. \ref{sec_centre}.
The O/H gradient of the whole sample of H~{\sc ii}\ regions, excluding the central 1 kpc, is
\begin{equation}
12 + {\rm log(O/H)} = -0.044 (\pm 0.009) ~ {\rm R_{GC}} + 8.498 (\pm0.041).
\label{eq3}
\end{equation}
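A fit of this kind can be sketched as follows; since the distance errors are
small, only errors in the ordinate are used here (a simplification of
{\it fitexy}), and the mock data merely mimic the relation above:
\begin{verbatim}
import numpy as np

def weighted_line_fit(x, y, sigma_y):
    # Chi-square straight-line fit y = a*x + b with errors in y only.
    w = np.sqrt(1.0 / sigma_y**2)
    A = np.vstack([x, np.ones_like(x)]).T * w[:, None]
    coef, *_ = np.linalg.lstsq(A, y * w, rcond=None)
    err = np.sqrt(np.diag(np.linalg.inv(A.T @ A)))   # parameter errors
    return coef, err

rng = np.random.default_rng(1)
r = rng.uniform(1.0, 8.0, 97)                 # mock galactocentric radii
sig = rng.uniform(0.05, 0.15, 97)             # mock abundance errors, dex
oh = 8.498 - 0.044 * r + rng.normal(0.0, sig)
coef, err = weighted_line_fit(r, oh, sig)
print(f"slope = {coef[0]:+.3f} +/- {err[0]:.3f} dex/kpc; "
      f"intercept = {coef[1]:.3f} +/- {err[1]:.3f}")
\end{verbatim}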
We also checked the metallicity gradient
by tracing it in different
areas of the galaxy, namely in the northwest and in the southeast halves, separately,
and in the nearest and in the farthest sides.
The results are shown in Fig. \ref{Fig_oxy_nswe}. The northern and southern gradients, as
well as those relative to the nearest and farthest sides, are identical within the uncertainties,
with slopes around -0.03 to -0.04 dex kpc$^{-1}$.
The only difference found between the metallicity gradients obtained for sections of the galaxy is the presence
of a high metallicity peak in the southern arm.
In conclusion, the present H~{\sc ii}\ region sample (literature $+$ present work, 103 objects),
including only nebulae with measured $T_{e}$, reinforces the recent results on the slope of
the M33 O/H gradient, with a global slope of about -0.03 dex kpc$^{-1}$ out to around 8 kpc,
and of about -0.04 dex kpc$^{-1}$ excluding the central 1 kpc.
The very central regions remain somewhat undersampled (6 objects within 1 kpc from the
centre, and 9 within 1.5 kpc) and in disagreement with
other results.
A comparison with the metallicity gradient derived from young stars, which
are representative of the same epoch in the lifetime of the galaxy as H~{\sc ii}\ regions, and with
the infrared spectroscopy of H~{\sc ii}\ regions, is necessary.
Stellar abundances were obtained by Herrero et al.~(\cite{herrero94}) for
AB-supergiants, McCarthy et al.~(\cite{mccarthy95}) and Venn et
al.~(\cite{venn98}) for A-type supergiant stars, and Monteverde et
al. (\cite{monteverde97}, \cite{monteverde00}) and Urbaneja et
al.~(\cite{urbaneja05}) for B-type supergiant stars. The largest sample of Urbaneja et al.~(\cite{urbaneja05})
gave a O/H gradient of -0.06$\pm$0.02~dex~kpc$^{-1}$.
Recently, U et al. (\cite{U09}) have presented spectroscopic observations of a set of A
and B supergiants. They determined stellar metallicities
and derived the metallicity gradient in the disk of M33, finding solar metallicity
at the centre and 0.3 solar in the outskirts at a distance of 8 kpc.
Their average metallicity gradient is -0.07$\pm$0.01 dex kpc$^{-1}$.
At a given radius, H~{\sc ii}\ regions have abundances slightly
below the stellar results, and this is probably due to the depletion of oxygen onto dust grains in H~{\sc ii}\ regions
(e.g., Bresolin et al. \cite{bresolin09}).
The slopes of the two gradients agree if the comparison is done between about 1 and 8 kpc.
The cause of the difference between the supergiant and H~{\sc ii}\ region gradient is the metallicity value in the central regions.
In fact, the H~{\sc ii}\ regions located within 1 kpc from the centre have metallicity below solar, whereas
the supergiants are metal rich, ranging from solar values to above solar.
The origin of this discrepancy is not the
temperature gradients within the nebulae (Stasi{\'n}ska 2005), because these become important only at higher metal abundances.
Recent observations of 25 H~{\sc ii}\ regions by Rubin et al. (\cite{rubin08}) with {\em Spitzer}
have allowed a measurement of the Ne/H and S/H gradient across the disk of M33 showing no
decrease in chemical abundances in the central regions. Infrared Ne and S emission lines do not have a strong dependence
on $T_{e}$, and consequently their abundances can be determined even without a temperature measurement.
One way to explain the low metallicity in the central 1.0$\times$1.0 kpc$^{2}$
area is related to the criterion used to select H~{\sc ii}\ regions.
Usually, chemical abundances derived from optical spectroscopy rely on direct measurement of the electron temperature, given by the
[O~{\sc iii}]\ 4363\AA\, emission line. The strength of this line decreases with increasing oxygen abundance,
and it is barely detectable for O/H$>$8.6 in a nebula of average luminosity (e.g., Nagao et al. \cite{nagao06}).
The requirement of an [O~{\sc iii}]\ 4363 \AA\ detection might therefore introduce a bias towards lower metallicity,
with the exclusion of the highest metallicity nebulae.
This could explain the differences between the optical spectroscopy results
and both the stellar abundance determinations, and the H~{\sc ii}\ region infrared spectroscopy.
\begin{figure}
\centering
\includegraphics[angle=-90,width=8cm]{13564fg2.ps}
\caption{The O/H radial gradient for the cumulative H~{\sc ii}\ region sample: filled circles are the MMT
observations (new and control samples), empty circles are the literature abundances.
The continuous line is the weighted linear least-square fit of Eq.~\ref{eq3}, i.e., with a radial range
from 1 to 8 kpc from the M33 centre. The dashed vertical line marks this limit; the regions located on its left side are
excluded from the fit. }
\label{Fig_oxy}%
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=-90,width=8cm]{13564fg3.ps}
\includegraphics[angle=-90,width=8cm]{13564fg4.ps}
\caption{The O/H radial gradient obtained in different regions of M33. Top: nearest side (filled squares) and
farthest side (empty squares) gradient. Bottom: North (filled squares) and South (empty squares) gradient.
In each panel the weighted linear least-square fits of the two regions are shown with
two lines (continuous and dotted). }
\label{Fig_oxy_nswe}%
\end{figure}
\subsection{The abundance gradients of the other chemical elements}
\label{sec_other}
Our analysis allowed us to measure other chemical elements in addition to oxygen, namely He/H,
N/H, Ne/H, S/H, and Ar/H.
The most reliable measurement is that of oxygen, for the reasons illustrated in the Appendix,
and we use it to follow the chemical evolution of M33.
Nevertheless the other chemical elements are measured in enough H~{\sc ii}\ regions
to compute their radial gradients.
In Table \ref{tab_other} we show the slopes and central abundance values of the radial gradients of
N/H, Ne/H, S/H, and Ar/H of our sample of H~{\sc ii}\ regions.
We did not calculate the radial gradient of He/H because we measured only the ionized fraction
of He in H~{\sc ii}\ regions, which is only a small part of the total helium abundance.
All gradients have a negative slope, consistent, within the errors, with the slope found for O/H, while
N/H is slightly steeper, as already noticed, e.g., by Magrini et al. (\cite{magrini07a}). Its different behaviour with
respect to the $\alpha$-elements, such as oxygen, neon, sulphur, and argon, stems from
their different production sites. $\alpha$-elements are indeed produced
by SNe II, which are the final phase of the evolution of massive stars, while nitrogen is one of the final products
of the evolution of long-lived low- and intermediate-mass stars. This is discussed in detail in Sect.~\ref{sect_model}.
Finally, there is very good agreement between the S/H and Ne/H gradients and those derived
from the infrared spectra of H~{\sc ii}\ regions by Rubin et al.
(\cite{rubin08}), who found gradients of
-0.058$\pm$0.014~dex~kpc$^{-1}$ for
Ne/H and -0.052$\pm$0.021~dex~kpc$^{-1}$ for S/H.
\begin{table}
\caption{The radial gradients of N/H, Ne/H, S/H, and Ar/H}
\label{tab_other}
\begin{tabular}{lll}
\hline\hline
12 + log(X/H) &slope & central value \\
(1) &(2) &(3) \\
\hline
N/H & -0.08$\pm$0.03 & 7.53$\pm$0.15 \\
Ne/H & -0.05$\pm$0.04 & 7.71$\pm$0.21 \\
S/H & -0.06$\pm$0.02 & 6.41$\pm$0.11 \\
Ar/H & -0.07$\pm$0.03 & 6.98$\pm$0.13 \\
\hline
\hline
\end{tabular}
\end{table}
\subsection{The population-dependent metallicity gradient: giant vs. faint and compact H~{\sc ii}\ regions}
We now examine whether any selection effect can be responsible for
the difference between the steep gradient found in the early studies
and the shallower gradient of this work.
To this goal, we subdivided the sample of H~{\sc ii}\ regions according to their
projected size and surface brightness in the H$\alpha$\ emission-line.
Then, we computed the intrinsic luminosity and the radius in an
H$\alpha$\ emission-line calibrated map (courtesy of R. Walterbos) for each nebula of the whole sample.
Defining the surface
brightness (SB) as the ratio between the total flux and the area expressed in arcsec$^2$, we
subdivided the sample according to their size and SB.
Considering their size, we defined them as {\em small} if their radius $R<$15\arcsec\ (60 pc at the adopted distance of 840 kpc) and
as {\em large} if $R\geq$15\arcsec.
Considering their surface brightness, we define them as {\em bright} if their
surface brightness SB$>$ 5.5$\times$10$^{-19}$ erg cm$^{-2}$ s$^{-1}$ arcsec$^{-2}$,
and as {\em faint} if SB is lower than this limit.
All four combinations are possible, i.e., H~{\sc ii}\ regions can be {\em small} and either {\em bright}
or {\em faint}, or {\em large} and again either {\em bright}
or {\em faint}.
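In code form the classification reduces to two thresholds; the following schematic Python fragment (the function name is ours, the thresholds are those defined above) summarises it:
\begin{verbatim}
def classify(radius_arcsec, sb):
    # R >= 15 arcsec -> large; SB in erg cm^-2 s^-1 arcsec^-2
    size = 'large' if radius_arcsec >= 15.0 else 'small'
    brightness = 'bright' if sb > 5.5e-19 else 'faint'
    return size, brightness

# NGC 604 from the giant-region table: R = 30 arcsec, SB = 31.4e-19
print(classify(30.0, 31.4e-19))   # -> ('large', 'bright')
\end{verbatim}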
In Table \ref{tab_sb} we show the galactocentric distance, R$_{GC}$, the
H$\alpha$\ observed total flux, F$_{H\alpha}$, the radius, $R$, and
the SB, of the so-called {\em giant} regions, i.e. those with SB$>$ 5.5$\times$10$^{-19}$ erg cm$^{-2}$ s$^{-1}$ arcsec$^{-2}$ and $R\geq$15\arcsec.
To compare the total population with the {\em giant} H~{\sc ii}\ regions, we show in Fig. \ref{Fig_oxy_sb}
the oxygen abundances of the cumulative sample, averaged in bins of 1 kpc each,
together with the abundances of each single {\em giant} H~{\sc ii}\ region.
The O/H gradient of the H~{\sc ii}\ regions in Table \ref{tab_sb}, computed with a
weighted linear least-square fit, is
\begin{equation}
12 + {\rm log(O/H)} = -0.089 (\pm 0.023) ~ {\rm R_{GC}} + 8.72 (\pm0.09).
\label{eq4}
\end{equation}
The gradient of the remaining sample is the same as that given in Eq.~\ref{eq1}.
The {\em giant} regions show a significantly steeper gradient, consistent
with the gradients by Smith~(\cite{smith75}), Kwitter \& Aller~(\cite{kwitter81}),
V\'{\i}lchez et al.~(\cite{vilchez88}), and Garnett et al.~(\cite{garnett97}).
The question is whether this gradient is really significantly different from
the whole sample,
and, if this is the case, what causes such different behaviour.
Owing to the small number of {\em giant} H~{\sc ii}\ regions (9 in total), the uncertainty on the slope of their gradient
is high. Thus it could still be in partial agreement, within the errors, with the larger sample, and
their difference might stem from metallicity fluctuations in the ISM.
On the other hand, the characteristics of the {\em giant} regions might be truly different from the average sample.
For example large self-bound units are not destroyed by massive stars and thus retain their original
structure and get continuously enriched by SF.
However, while this might be the case for the metallicity peak near the centre, it would also
predict that giant regions should have higher metallicity at all galactocentric radii, which is clearly not the case.
Moreover, in a recent paper, Rela{\~n}o \& Kennicutt (\cite{relano09}) studied the star formation in
luminous H~{\sc ii}\ regions in M33, which correspond mostly to our {\em giant} H~{\sc ii}\ regions.
They found that the observed UV and H$\alpha$\ luminosities are consistent
with a young stellar population (3-4 Myr), born in an instantaneous burst.
Thus the steeper gradient might result from a combination of small-number statistics and of
a metal self-enrichment effect in the {\em giant} region sample.
\begin{table}
\caption{{\em Giant} H~{\sc ii}\ regions with derived chemical abundances.}
\label{tab_sb}
\scriptsize{
\begin{tabular}{lllll}
\hline
\hline
Id & R$_{GC}$ & F$_{H\alpha}$ & R & SB \\
& (kpc) & (10$^{-15}$ erg cm$^{-2}$ s$^{-1}$) & (arcsec) & (10$^{-19}$ erg cm$^{-2}$ s$^{-1}$ arcsec$^{-2}$) \\
\hline
NGC595 & 1.7 & 13.0 & 25 & 16.1 \\
C001Ab & 1.9 & 3.2 & 20 & 6.2 \\
MA2 & 2.5 & 4.5 & 25 & 5.6 \\
BCLMP691 & 3.3 & 2.4 & 15 & 8.2 \\
NGC604 & 3.5 & 36.6 & 30 & 31.4 \\
IC131 & 3.9 & 1.9 & 15 & 6.6 \\
BCLMP290 & 4.2 & 2.2 & 15 & 7.5 \\
NGC588 & 5.5 & 4.2 & 20 & 8.0 \\
IC132 & 6.4 & 3.5 & 20 & 6.8 \\
\hline
\hline
\end{tabular}
}
\end{table}
\begin{figure}
\centering
\includegraphics[angle=-90,width=8cm]{13564fg5.ps}
\caption{The O/H radial gradient: the giant H~{\sc ii}\ regions (empty triangles)
and the complete sample averaged in bins, each 1 kpc wide (filled squares). The continuous line is the
weighted linear least-square fit of the giant H~{\sc ii}\ region sample, while the dotted line refers to
the complete sample. }
\label{Fig_oxy_sb}%
\end{figure}
\subsection{The 2-dimensional distribution of metals}
\label{sect2d}
The usual way to study the metallicity distribution in disk galaxies
is to average it azimuthally, assuming that {\em i}) the centre of the
galaxy coincides with the peak of the metallicity distribution and {\em ii}) the metallicity
distribution is axially symmetric.
The large number of metallicity measurements in M33, both
from H~{\sc ii}\ regions and from PNe, allowed us to reconstruct not only
their radial gradient, but also their spatial distribution projected
onto the disk.
In Fig. \ref{fig_2dmap}, we show the two-dimensional metallicity
distributions for M33 from H~{\sc ii}\ regions and from PNe, superimposed on a contour map of the stellar
mass distribution derived from the JHK image, a combination of the image of Regan \& Vogel (\cite{regan94})
and the 2MASS image.
The O/H abundances were averaged in bins of 0.8$\times$0.8 kpc$^2$.
The white pixels indicate areas where metallicity measurements
are lacking: for H~{\sc ii}\ regions they generally correspond to the interarm regions, while
for PNe to spiral arms.
\begin{figure}
\centering
\includegraphics[angle=90,width=13cm]{13564fg6.ps}
\includegraphics[angle=90,width=13cm]{13564fg7.ps}
\caption{The oxygen abundance maps of M33 (60\arcmin $\times$ 60\arcmin): H~{\sc ii}\ regions (top) and PNe (bottom).
The O/H abundances are averaged in bins of 0.8$\times$0.8 kpc$^2$.
The colour-scale shows the oxygen abundance as indicated in the label. North at the top, east to the left.
The optical centre of M33 is located at (0,0).
The contour levels represent the stellar mass distribution derived from the JHK image of M33.}
\label{fig_2dmap}%
\end{figure}
The H~{\sc ii}\ regions with the highest metallicity are not located in the optical centre of the
galaxy (0,0 in the map), but rather lie at radii of 1-2 kpc in the southeast direction.
Also in the case of PNe,
most of the metal rich PNe are located in the southern part of M33, from 2 to 4 kpc from the centre.
However, the lack of known PNe
in the northern spiral arm at the same distance as the southern metal-rich PNe
(the extended H~{\sc ii}\ regions there prevent the identification of stellar emission-line sources)
does not allow a complete 2D picture of their metal distribution
around the central regions.
To estimate the location of the off-centre metallicity peak for the H~{\sc ii}\ region map, we
divided the central 10\arcmin$\times$10\arcmin\ region, centred at RA 1:33:50.9 dec 30:39:36
(M33 centre from the 2MASS survey, Skrutskie et al. \cite{2mass}),
into a 10$\times$10 grid.
We then computed the radial O/H gradient for each trial central position in the grid and selected the one
that minimizes the scatter around the gradient.
We found an off-centre position at RA 1:33:59 dec 30:33:35 (J2000.0), which corresponds to the location
of the high-metallicity H~{\sc ii}\ regions in the southern arm.
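In schematic form, the search can be implemented as in the following minimal Python sketch; the helper names are ours, an unweighted linear fit replaces the weighted one for brevity, and the grid mirrors the 10\arcmin$\times$10\arcmin, 10$\times$10 configuration described above:
\begin{verbatim}
import numpy as np

def gradient_scatter(ra, dec, oh, ra0, dec0,
                     pa=22.0, inc=53.0, dist_kpc=840.0):
    # De-projected radii for a trial centre (ra0, dec0)
    dx = np.radians(ra - ra0) * np.cos(np.radians(dec0))
    dy = np.radians(dec - dec0)
    p = np.radians(pa)
    x_maj = dx * np.sin(p) + dy * np.cos(p)
    x_min = (-dx * np.cos(p) + dy * np.sin(p)) / np.cos(np.radians(inc))
    r = np.hypot(x_maj, x_min) * dist_kpc
    slope, zero = np.polyfit(r, oh, 1)       # unweighted for brevity
    return np.std(oh - (slope * r + zero))   # rms scatter about the fit

def best_centre(ra, dec, oh, ra_c, dec_c, half=5.0 / 60.0, n=10):
    # Scan an n x n grid covering a 10' x 10' box around (ra_c, dec_c)
    trials = [(r0, d0)
              for r0 in np.linspace(ra_c - half, ra_c + half, n)
              for d0 in np.linspace(dec_c - half, dec_c + half, n)]
    return min(trials, key=lambda c: gradient_scatter(ra, dec, oh, *c))
\end{verbatim}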
The oxygen gradient measured from this central position of the whole H~{\sc ii}\ region population, including also the
central objects, is
\begin{equation}
12 + {\rm log(O/H)} = -0.021 (\pm 0.007) ~ {\rm R_{GC}} + 8.36 (\pm0.03),
\label{eq5}
\end{equation}
and it is shown in Fig. \ref{Fig_oxy_off}.
\begin{figure}
\centering
\includegraphics[angle=-90,width=8cm]{13564fg8.ps}
\caption{The O/H radial gradient computed with the centre located at RA 1:33:59 dec 30:33:35 (J2000.0) to minimize the
dispersion in the slope of the radial gradient. }
\label{Fig_oxy_off}%
\end{figure}
The gradient of Eq. \ref{eq5} is flatter and has a somewhat lower absolute dispersion than the one in Eq.~\ref{eq1}.
The reduction of the dispersion in the O/H gradient due to the
displacement of the galaxy centre is, however, not strong enough to confirm that
the metallicity maximum corresponds to the real centre of the metallicity distribution.
\section{Why is the metallicity peak off-centre? }
\label{sec_centre}
In the following, we examine several possibilities to explain
the presence of the off-centre metallicity maximum and the low
metallicity measured in the central region:
{\em i}) a local effect of ISM metallicity fluctuations; {\em
ii}) the lack of dominant gravitational centre
in the galaxy; {\em iii}) the selection criterion of H~{\sc ii}
\ regions for metallicity determination.
\subsection{Local metallicity fluctuations}
Rosolowsky \& Simon (\cite{rs08}) have already noticed the
non-axisymmetric distribution of H~{\sc ii}\ region abundances and
suggested that the material enriched by the most recent
generation of star formation in the arm might not have been azimuthally
mixed through the galaxy. A strong OB association located in the
southern arm might be responsible for the enhanced enrichment at
the location of the metallicity peak. Velocity shear is present in
M33 even at small radii because of the
slow rise of the rotation curve (Corbelli \cite{corbelli03}).
At the peak location, differential rotation will shear up the bubble of
metals produced in the surrounding ISM in about 10$^8$~yrs. The
timescale seems long enough to allow vigorous star formation
at a particular location to enrich the ISM of metals well above the
average value. However, the large dispersion in the metallicity
around the peak location seems to rule out an inefficient
azimuthal mixing or redistribution of the metals.
\subsection{The lack of a gravitational centre}
The non-axisymmetric metallicity distribution might
be related
to a general non-axisymmetric character of central regions of M33,
noticed in the past by several authors.
Colin \& Athanassoula (\cite{colin81})
found that the young population is displaced
towards the southern side of M33 by approximately
2-3 \arcmin, i.e. 480-720 pc.
Using evidence of other asymmetries in the
inner regions of M33, such as those present in
the distribution of H~{\sc i}\ atomic gas, of H~{\sc ii}\ regions, and in the
kinematics,
they proposed a bulge centre presently located in the
northern part of the galaxy, which
is rotating retrogradely around the barycenter of the galaxy.
The analysis of infrared images (Minniti et al. \cite{minniti93}), however,
seems to point to a small bulge with a much smaller displacement than
the one advocated by Colin \& Athanassoula (\cite{colin81}).
A detailed analysis of the kinematics of the innermost regions of M33
by Corbelli \& Walterbos
(\cite{corbelli07}) confirms asymmetries in the
stellar and gas velocities, which however seem more related to
the presence of a weak bar. The exact galaxy centre is uncertain on scales of a few arcsec.
Thus, even if M33 lacks a dominant gravitational centre
and the bright central cluster might migrate around the barycentre,
it seems unlikely that the centre of the galaxy is off by
several hundred pc from where the bright cluster lies.
The marginal gain in the dispersion in re-computing the
metallicity gradient from an off-centre position (see previous
section) confirms that this hypothesis seems unlikely.
\subsection{Selection criterion}
That the average metallicity at the centre seems lower than at 1.5~kpc
is hard to explain in
the framework of an inside-out disk formation scenario. We now
discuss the possibility that in the central regions the
metallicity might be higher
than reported in this paper because of a bias in
the H~{\sc ii}\ region selection. As explained in Sect. 3.1
the inclusion of H~{\sc ii}\ regions in our sample requires
the determination of the electron temperature through the detection
of the faint oxygen auroral line. As the metallicity increases,
the line becomes so faint as to be detectable only in bright
complexes. The centre of M33 lacks vigorous star-forming
sites, so the cooler, metal-rich H~{\sc ii}\ regions
have the oxygen auroral line below the detection threshold.
To test whether this might be the case, we searched
the literature for the existing H~{\sc ii}\ region
spectra inside 1.5 kpc radius, which were not included in our
database because of the undetectable [O~{\sc iii}]\ 4363 line.
We found 4 H~{\sc ii}\ regions in the database of
Magrini et al. (\cite{magrini07a})
for which optical spectroscopy is available but no detection
of temperature diagnostic lines. Their names, coordinates,
galactocentric distances, assumed electron temperatures, and
oxygen abundances from M07a fluxes are shown in Table \ref{tab_cr}.
In Fig.\ref{fig_temp} we plot the relationship between the electron temperature
and the galactocentric distance for the complete H~{\sc ii}\ region sample.
The weighted linear least-square fit gives a relationship between
the two quantities,
\begin{equation}
T_{e}=(410\pm80) \times R_{GC} + (8600\pm320)
\label{eqte}
\end{equation}
where $T_{e}$\ is expressed in K and R$_{GC}$ in kpc.
We thus derived $T_{e}$\ for the H~{\sc ii}\ regions of Table \ref{tab_cr}, adopting it
in their chemical abundance calculations.
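As a consistency check, for BCLMP~93a at R$_{\rm GC}$=0.23 kpc, Eq.~\ref{eqte} gives $T_{e} \simeq 410\times0.23+8600 \simeq 8700$~K, the value adopted in Table~\ref{tab_cr}; similarly, R$_{\rm GC}$=1.45 kpc yields $T_{e} \simeq 9200$~K for BCLMP~301h.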
\begin{figure}
\centering
\includegraphics[angle=-90,width=8cm]{13564fg9.ps}
\caption{The radial gradient of $T_{e}$\ for the H~{\sc ii}\ regions of the present sample and literature data. The continuous line is the weighted linear least-square fit of Eq.~\ref{eqte}. }
\label{fig_temp}%
\end{figure}
Using the intensities by M07a, complemented with upper-limit
measurements of the [O~{\sc ii}]\ 7320-7330 \AA\ lines
when not available in the original paper, we roughly estimated the
oxygen abundance of the four
H~{\sc ii}\ regions located within 1.5 kpc.
Their location in the radial gradient is shown in Fig.
\ref{Fig_oxy_cr}. The sources clearly lie above the average
metallicity determined in the centre of our database.
It is therefore likely that the adopted
criterion for source selection, based on the positive detection of
lines for determining the electron temperature, might be responsible
for the low metallicity derived in the centre.
In the 1.5 kpc central region, there are
8 H~{\sc ii}\ regions with
measured electron temperature having oxygen abundances
from $\sim$8 to $\sim$8.5 (see Fig. 1), namely B0043b (O/H 8.214),
B0029 (8.211), B0038b (8.391),
B0016 (7.98), B0079c (8.386), B0027b (8.37), B0033b (8.399), B0090
(8.506).
Their metal abundance and dispersion are consistent with the hypothesis
that we are missing several high-metallicity
regions within 1.5 kpc of the centre since they only seem to trace
the low metallicity side of the distribution
present at each given radius.
Similarly, this bias partially explains the different gradient of
the {\it giant} H~{\sc ii}\ regions: we only include metal-rich H~{\sc ii}\
regions (with metallicity above a critical value) in the sample
if they are very luminous because only these show
detectable temperature diagnostic lines.
\begin{table}
\caption{H~{\sc ii}\ regions in the central regions without direct $T_{e}$ measurement.}
\label{tab_cr}
\scriptsize{
\begin{tabular}{llllll}
\hline
\hline
ID & RA & Dec & R$_{GC}$ &
$T_{e}$ & 12 + log(O/H) \\
& \multicolumn{2}{c}{J2000.0} & (kpc) & (K) & dex \\
& (1) & (2) & (3) & (4)& (5) \\
BCLMP~93a & 1:33:52.6 & 30:39:08 & 0.23 & 8700 & 8.53$^{a}$\\
BCLMP~301h & 1:33:52.6 & 30:45:03 & 1.45 & 9200 & 8.60 \\
BCLMP~4a & 1:33:59.5 & 30:35:47 & 1.48 & 9200 & 8.33\\
M33-SNR~64 & 1:34:00.1 & 30:42:19 & 0.87 & 8950 & 8.40\\
\hline
\hline
\end{tabular}
a) V\'{\i}lchez et al. \cite{vilchez88} (CC93) adopted $T_{e}$=6000 K,
obtaining 12 + log(O/H)=9.02.
}
\end{table}
\begin{figure}
\centering
\includegraphics[angle=-90,width=8cm]{13564fg10.ps}
\caption{The O/H radial gradient: the symbols for the H~{\sc ii}\ regions with a positive detection
of $T_{e}$\ diagnostic lines are as in Fig. 1, while the filled squares are the regions of Table \ref{tab_cr} for which the
$T_{e}$\ is extrapolated from Eq. \ref{eqte}. The continuous line is the fit of Eq.~\ref{eq3}. }
\label{Fig_oxy_cr}%
\end{figure}
\section{The chemical evolution of M33}
\label{sect_model}
The metallicity gradient derived in Sect.\ref{sect_grad} from our sample of
H~{\sc ii}\ regions characterizes the ISM composition in M33 at the present time.
Together with the metallicity gradient from the PN population
(M09), these data allow setting new constraints on current models of
galactic chemical evolution.
The model of M07b, specifically designed for M33, is able to predict
the radial distribution of molecular gas, atomic gas, stars, SFR, and
the time evolution of the metallicity gradient. In the following, we
discuss the modifications needed to
reproduce the metallicity gradient of H~{\sc ii}\ regions and PNe derived in
this work and in M09.
\subsection{A revised model of chemical evolution}
The multiphase model adopted by M07b follows the formation and
destruction of diffuse gas, clouds, and stars, by means of the simple
parameterizations of physical processes (e.g. Ferrini et al.
\cite{ferrini92}). In particular, the SFR is the result of two
processes: cloud-cloud interactions (the dominant process) and star
formation induced by the interaction of massive stars with molecular
clouds. At variance with other models, the relationship between the SFR surface density, $\Sigma_{SFR}$,
and the molecular
gas surface density, $\Sigma_{H_2}$, (the so-called Schmidt law)
is thus a by-product of the model and is not assumed.
In general, the relation between the surface density of
total gas and SFR has a slope of $1.4\pm 0.1$, but the slope can vary
from galaxy to galaxy (Kennicutt \cite{ken98}).
In the particular case of M33, a tight correlation with a well-defined slope exists between
the SFR, measured from the FUV emission, and the surface density
of molecular gas (Verley et al. 2009),
\begin{equation}
\Sigma_{SFR} = A \Sigma^{1.1\pm0.1}_{H_2}.
\label{eq_sfl}
\end{equation}
According to M07b, the best model for M33 (the so-called {\em
accretion} model, with almost constant infall) suggests a long lasting
phase of disk formation resulting from a continuous accretion of
intergalactic medium during the galaxy lifetime. We refer to M07b for
the general description of the model and of the adopted parameters for
M33. Here we concisely describe the model, detailing only the
updated processes and equations.
The galaxy is divided into $N$ coaxial cylindrical annuli with inner
and outer galactocentric radii $R_i$ ($i=1,N$) and $R_{i+1}$,
respectively, mean radius $R_{i+1/2}\equiv (R_i+R_{i+1})/2$ and height
$h(R_{i+1/2})$. Each annulus is divided into two {\em zones}, the {\em
halo} and the {\em disk}, each made of diffuse gas $g$, clouds $c$,
stars $s$, and stellar remnants $r$. The {\em halo} component includes
the primordial baryonic halo but also the material accreted from the
intergalactic medium.
At time $t=0$, all the baryonic mass of the galaxy is in the form of
diffuse gas in the halo. At later times, the mass fraction in the
various components is modified by several conversion processes: diffuse
gas of the halo falls into the disk, diffuse gas is converted into
clouds, clouds collapse to form stars and are disrupted by massive
stars, and stars evolve into remnants and return a fraction of their mass
to the diffuse gas. In this framework, each annulus evolves
independently (i.e. without radial mass flows) keeping its total mass
(halo $+$ disk) fixed from $t=0$ to $t_{\rm gal}=13.6$~Gyr.
The disk of mass $M_D(t)$ is formed by continuous
infall from the halo of mass $M_H(t)$ at a rate
\begin{equation}
\frac{{\rm d}M_D}{{\rm d}t}=fM_H,
\label{eqm1}
\end{equation}
where $f$ is a coefficient proportional to the inverse of the infall
timescale.
Clouds condense out of diffuse gas at a rate $\mu$ and are disrupted by
cloud-cloud collisions at a rate $H^\prime$,
\begin{equation}
\frac{{\rm d}M_c}{{\rm d}t}=\mu M_g^{3/2}-H^\prime M_c^2,
\label{eqm2}
\end{equation}
where $M_g(t)$ is the mass fraction (with respect to the total baryonic
mass of the galaxy) of diffuse gas, and $M_c(t)$ was defined above.
Stars form by cloud-cloud interactions at a rate $H$ and by the
interactions of massive stars with clouds at a rate $a$,
\begin{equation}
\frac{{\rm d}M_s}{{\rm d}t}=H M_c^2+aM_sM_c-DM_s,
\label{eqm3}
\end{equation}
where $M_s(t)$ is the mass fraction in stars, $D$ the stellar death
rate and $M_c(t)$ as above. All rate
coefficients of the model are assumed to be independent of time but
are functions of the galactocentric radius $R_{GC}$ (cf. M07b for their values
and radial behaviors).
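As an illustration of how the system can be integrated, the following minimal Python sketch evolves Eqs.~\ref{eqm1}-\ref{eqm3} for a single annulus; the numerical values of the rate coefficients are placeholders (the actual radius-dependent values are given in M07b), and the diffuse gas is closed here by simple mass conservation, which glosses over the remnant and gas-restitution terms of the full model:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder rate coefficients for one annulus, in Gyr^-1 units
f, mu, Hprime, H, a, D = 0.15, 1.2, 2.5, 1.0, 0.05, 0.3

def rhs(t, y):
    MH, MD, Mc, Ms = y                # halo, disk, clouds, stars
    Mg = max(MD - Mc - Ms, 0.0)       # diffuse disk gas (mass conservation)
    dMH = -f * MH                     # infall depletes the halo ...
    dMD = f * MH                      # ... and builds up the disk
    dMc = mu * Mg**1.5 - Hprime * Mc**2        # condensation - disruption
    dMs = H * Mc**2 + a * Ms * Mc - D * Ms     # star formation - death
    return [dMH, dMD, dMc, dMs]

# All baryons start as diffuse halo gas; evolve to t_gal = 13.6 Gyr
sol = solve_ivp(rhs, (0.0, 13.6), [1.0, 0.0, 0.0, 0.0], rtol=1e-8)
print(sol.y[:, -1])                   # final mass fractions
\end{verbatim}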
\subsection{The choice of the stellar yields and the IMF}
The model results are sensitive to the assumed stellar yields. For low- and intermediate-mass stars
($M<8$~$M_\odot$), we used the yields by Gavil\'an et
al.~(\cite{gavilan05}).
The yields of Marigo (\cite{marigo01}) give comparable results, without
any appreciable difference in the computed gradients of chemical
elements produced by intermediate-mass stars, such as N, with respect to the gradients
computed with the yields by Gavil\'an et
al.~(\cite{gavilan05}).
For stars in the mass range $8~M_\odot < M < 35~M_\odot$ we adopt the yields by Chieffi
\& Limongi~(\cite{chieffi04}). The yields
of more massive stars are affected by the considerable uncertainties
associated with different assumptions about the modelling of processes
like convection, semi-convection, overshooting, and mass loss. Other
difficulties arise from the simulation of the supernova explosion and
the possible fallback after the explosion, which strongly influences the
production of iron-peak elements. It is not surprising, then, that the
results of different authors (e.g. Arnett~\cite{arnett95}; Woosley \&
Weaver~\cite{woosley95}; Thielemann et al.~\cite{thielemann96}; Aubert
et al.~\cite{aubert96}) disagree in some cases by orders of magnitude.
In our models, we estimate the yields of stars in the
mass range $35~M_\odot <M<100~M_\odot$ by linear extrapolation of the
yields in the mass range $8~M_\odot < M < 35~M_\odot$.
Another important ingredient in the chemical
evolution model is the initial mass function (IMF).
Several works support the idea that the IMF is universal in space and
constant in time (Wyse~\cite{wyse97}; Scalo~\cite{scalo98};
Kroupa~\cite{kroupa02}), apart from local fluctuations.
There are several parameterizations of
the IMF (see e.g. Romano et al.~\cite{romano05} for
a complete review), starting from, e.g.,
Salpeter~(\cite{salpeter55}),
Tinsley~(\cite{tinsley80}), Scalo~(\cite{scalo86}), Kroupa et
al.~(\cite{kroupa93}), Ferrini et al.~(\cite{ferrini90}), Scalo~(\cite{scalo98}),
Chabrier~(\cite{chabrier03}).
In the following, we test the possibility that the observed
flat gradients can be explained in terms of a non-standard IMF.
In fact, the magnitude and the slope
of chemical abundance gradients are related to the number of stars in
each mass range, and so to the IMF.
The goal is to reproduce, if possible, the flat gradient
supported by recent observations by only varying the IMF.
In Fig. \ref{fig_imf} we compare the oxygen and nitrogen gradients of
H~{\sc ii}\ regions with present-day abundance profiles from the M07b model.
The choice of the IMF does not affect the slope of O/H and
N/H gradients, but simply shifts the abundance profiles
to higher (or lower) values according to the amount of stars that produce oxygen (massive stars) or nitrogen (low- and intermediate-mass stars).
With the adopted stellar yields, the best fits of the metallicity distribution
of M33 are obtained with the IMF by Ferrini et al.~(\cite{ferrini90}) and Scalo~(\cite{scalo86}).
For the revised model we thus adopt the parameterization of Ferrini et al.~(\cite{ferrini90}).
The different slopes of the O/H and N/H gradients are reproduced fairly well by the chemical
evolution model as a natural consequence of the different mass ranges of stellar production, hence timescales,
of these two chemical elements (see Sect.\ref{sec_other} for a comparison with the measured gradients).
\begin{figure}
\centering
\includegraphics[angle=270,width=8cm]{13564fg11.ps}
\includegraphics[angle=270,width=8cm]{13564fg12.ps}
\caption{The oxygen (top panel) and nitrogen (bottom panel) radial gradients of M07b with several parameterization of the IMF:
Salpeter~(\cite{salpeter55}) (continuous green line),
Tinsley~(\cite{tinsley80}) (dotted red line), Scalo~(\cite{scalo86}) (dashed magenta line), Scalo~(\cite{scalo98}) (long-dashed yellow line),
Ferrini et al.~(\cite{ferrini90}) (long dash-dotted blue line), Chabrier~(\cite{chabrier03}) (dash-dotted black line). }
\label{fig_imf}
\end{figure}
\subsection{Comparison with the observations}
The observational constraints to the model are those described in M07b,
complemented with the O/H and N/H radial gradients of H~{\sc ii}\ regions and PNe
from this work and M09, respectively, and the radial profile of the SFR
determined by Verley et al.~(2009) from far ultraviolet (FUV)
observations corrected for extinction. For H~{\sc ii}\ regions we used the
gradient derived from the whole population (see Eq.~\ref{eq3}), without any
distinction in terms of size and brightness, but excluding the central 1 kpc.
Giant H~{\sc ii}\ regions might not be representative of the current ISM
abundance owing to their possible chemical self-enrichment.
In Figs. \ref{fig_sl} and \ref{fig_mol} we show the radial surface density of molecular gas (from
the single dish observations of Corbelli~\cite{corbelli03}, averaged
over 1 kpc bins) and the SFR as a function of the surface density of
molecular gas (Verley et al.~2009), respectively.
Both
figures show the predictions of the model of M07b at
$t=t_{\rm gal}$. Clearly, the model of M07b in its original formulation
is unable to reproduce these constraints. We have therefore
considered other parameterizations of the star formation process by
varying the radial dependence of the SFE, represented by the coefficient $H$.
Our experiments show that, to reproduce the
observed behaviour of the radial gas distribution and the observed Schmidt law,
it is necessary to increase the efficiency of star formation at large
radii. This can be accomplished in many ways.
In the previous version of the model (M07b), $H$ decreased with $R_{GC}$
to account for a geometrical correction, since
larger galactocentric distances correspond
to larger volumes (see Ferrini et al. \cite{ferrini94}). In the present work
we assume that $H$ is constant with radius, thus implying that
the star formation efficiency increases linearly with
galactocentric radius. In Sect. \ref{sec_ocon} we describe
the observational evidence in support of this assumption in M33.
\begin{figure}
\centering
\includegraphics[angle=270,width=8cm]{13564fg13.ps}
\caption{The SFR derived from
the UV emission corrected for extinction by Verley et al. (2009)
vs. the surface density Schmidt law: filled circles (blue) are the molecular
gas averaged in bins 1 kpc each (Corbelli \cite{corbelli03}); curves represent the model
at 0.5 Gyr from the disk formation ({\em dotted curve}), 2 ({\em dashed curve}),
3 ({\em long-dashed curve}), 5 ({\em dot-dashed curve}),
8 ({\em long dash-dotted curve}), 12 ({\em long-short dashed curve}),
and at 13.6~Gyr ({\em solid red curve}); the solid green curve is the model
by M07b at 13.6 Gyr. }
\label{fig_sl}
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=270,width=8cm]{13564fg14.ps}
\caption{The radial surface density of molecular gas: filled circles (blue) are the observed molecular gas averaged in bins 1 kpc wide. Model curves have the same symbols as in Fig. \ref{fig_sl}. }
\label{fig_mol}
\end{figure}
Assuming that $H$ is spatially constant, we obtain the results
shown in
Figs.~\ref{fig_sl} and \ref{fig_mol} at $t=0.5$, 2, 3, 5, 8, 12, 13.6
Gyr. The relationship between molecular gas and SFR predicted
by the modified M07b model is more or less constant with time and in
good agreement with the data. The predicted Schmidt law (at the present
time) has an average exponent $\sim 1.2$, in agreement with the
observations, whereas the original M07b model produces a Schmidt law
with a higher exponent, $\sim 2.2$. The better agreement with the
observations obtained with the revised M07b model suggests that M33 is
more efficient in forming stars than normal local Universe spiral
galaxies, in particular in its outer regions.
\subsection{Flat gradients and SF efficiency}
\label{sec_ocon}
Is there evidence of flat gradients in other galaxies?
How can they be explained?
A flat gradient in the outer regions of our Galaxy has been observed by several authors
(e.g., Yong et al. \cite{yong05}, Carraro et al.~\cite{carraro07}; Sestito et al.~\cite{sestito07})
using different metallicity tracers (e.g., Cepheids, open clusters).
However, other tracers, such as PNe, do not show this change of slope, indicating instead flat gradients
across the whole disk (cf. Stanghellini et al. \cite{stanghellini06}, Perinotto \& Morbidelli \cite{peri06}).
The outer Galactic plateau might be a phenomenon similar to the flat gradient of M33.
Since M33 is less massive than the MW, its halo collapse
phase, responsible for the steep gradient in the inner regions (R$_{GC} <$11-12 kpc)
of our Galaxy (cf. Magrini et al. \cite{m09}), is less marked.
Thus in M33 the difference between the {\em inner} and {\em outer} gradients is
less evident than in the MW.
The flat metallicity distribution of the MW at large radii
has been explained with several chemical evolution models, among them
those by Andrievsky et al.~(\cite{andri04}), Chiappini et
al.~(\cite{chiappini01}) model C, and Magrini et al. (\cite{m09}).
Andrievsky et al.~(\cite{andri04}) explain the flat metallicity distribution beyond 11-12 kpc
by assuming that the SFR is a
combination of two components: one proportional to the gas surface
density, and the other depending on the relative velocity of the
interstellar gas and spiral arms. With these assumptions they explain
why the break in the slope of the gradient, and the
consequent outer flattening, occur around the co-rotation radius.
Chiappini et al.~(\cite{chiappini01}) assume two main accretion
episodes in the lifetime of the Galaxy, the first one forming the
halo and bulge and the second one forming the thin disk.
Their model C is the one able to reproduce a flat gradient in the
external regions, assuming that there is no threshold in the gas density
during the halo/thick-disk phase, and thus allowing the formation of the
outer plateau from infalling gas enriched in the halo.
Finally, Magrini et al. (\cite{m09}) reproduce the Galactic gradient thanks to the
radial dependence of the infall rate (exponentially decreasing with
radius) and with the radial
dependence of the star and cloud formation processes.
To reproduce a completely flat gradient in the outer regions would,
however, require additional accretion of gas uniformly falling onto the disk, which would result in
a current SFR inconsistent with the observed one.
{\em Why do we use a radially increasing SFE to reproduce the flat metallicity gradient of M33?}
M33 in general is quite different from large spiral galaxies in terms of SF. A comparison of the SFR
to the H$_{2}$ mass shows that M33, like the intermediate redshift galaxies, has a significantly
higher SFE than large local Universe spirals.
There is also observational evidence that the SFE varies with radius in M33.
Kennicutt (\cite{ken98b}), Wong \& Blitz (\cite{wong02}), and Murgia et al. (\cite{murgia02}) find a molecular gas depletion
timescale (inversely proportional to the SFE) that varies radially, decreasing by a factor $\sim$2-3.
Gardan et al. (\cite{gardan07}), from the CO to SFR ratio, also find a dependence
of the depletion timescale on radius, decreasing by a factor $\sim$2 over 4 kpc.
These observations prompted us to model the chemical evolution of M33 by taking into account SF processes
that depend on radius.
The general assumption of our model is that the SF is driven by cloud collisions. Increasing the efficiency
of this process with radius does not imply that cloud-cloud collisions are more efficient in the outer regions,
but that additional processes may contribute to the SF in the peripheral regions.
\subsection{Implications for the metallicity gradient and its evolution}
The introduction of a more realistic SF process in M33 has significant
consequences for its metallicity gradient and evolution. As described
by M09, oxygen modification (destruction or creation) does not occur in
the PN population of M33. Thus oxygen, the best-determined element in
nebular optical spectroscopy, can be safely used as a tracer of the ISM
composition at the epoch of the formation of PN progenitors. Most PNe
in M33 are older than 0.3 Gyr, and probably much older, up to an age of
10 Gyr.
The metallicity gradient of PNe, including only those with progenitor stars older than
0.3 Gyr and excluding PNe located in the first kpc
from the centre, is
\begin{equation}
12 + {\rm log(O/H)} = -0.040 (\pm 0.014) ~ {\rm R_{GC}} + 8.43 (\pm0.06).
\end{equation}
As discussed above, the whole sample of H~{\sc ii}\ regions is representative
of the current ISM composition.
The H~{\sc ii}\ region and PN gradients in the same radial region, 1 kpc $\lesssim R_{GC} \lesssim$ 8 kpc,
are indistinguishable within the
uncertainties in their slopes, with a slight translation, $\sim$0.1 dex, towards higher metallicity in the H~{\sc ii}\ region sample.
This means that very little evolution of metallicity has occurred in the past few Gyr.
In Figure \ref{fig_grad} we show the observed metallicity gradient of M33
together with the results of the present model (top panel) and of the original
model of M07b (bottom panel). For a better comparison,
the oxygen abundances
of the samples of H~{\sc ii}\ regions and disk PNe have been averaged
over bins of 1 kpc each. This allows us to highlight the translation
of the PNe towards lower metallicity. The curves
correspond to the present and to 5 Gyr ago (8.6 Gyr from
the disk formation, assuming an age of 13.6 Gyr for M33), i.e., approximately the
average period of formation of the PN progenitors.
The present
model predicts a higher SF in the outer regions, reproducing the
observed flatness of the oxygen gradient at large radii better than the M07b
model. In addition, the evolution of the metallicity
gradient predicted by the new model is consistent with the
observations: a negligible change in the slope and a small
translation to higher metallicities. Finally, we see that the assumed
dependence of the SFE on galactocentric radius,
required to reproduce the metallicity evolution of M33,
operates, as in M07b, together with a slow building up of
the disk of M33 by accretion from the intergalactic
medium or halo gas.
For completeness we also show the N/H radial gradient of the sample of H~{\sc ii}\ regions.
For this element the temporal evolution cannot be inferred using PN abundances,
since PNe modify their nitrogen composition during their lifetime.
The nitrogen abundances have been averaged
over bins of 1 kpc each.
The observed N/H gradient is significantly steeper than the oxygen gradient,
as predicted by the model.
\begin{figure}
\centering
\includegraphics[angle=0,width=10cm]{13564fg15.ps}
\caption{The O/H radial gradient evolution.
Top panel: filled circles (blue) are the H~{\sc ii}\ region
oxygen abundances from the dataset described in Sect.\ref{sect_mmt},
averaged over radial bins, each 1 kpc wide; filled squares (magenta) are
averaged non-Type I PN oxygen abundances by M09.
The gradient
5 Gyr ago ({\em dashed curve}) and now ({\em continuous curve}) as predicted by the
present model.
Bottom panel: the same as in the top panel but for the {\em accretion} model by M07b.}
\label{fig_grad}%
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=0,width=10cm]{13564fg16.ps}
\caption{The N/H radial gradient.
Top panel: filled circles (blue) are the H~{\sc ii}\ region
nitrogen abundances from the dataset described in Sect.\ref{sect_mmt},
averaged over radial bins, each 1 kpc wide.
The gradient at the present time as predicted by the
present model is the continuous curve.
Bottom panel: the same as in the top panel but for the {\em accretion} model by M07b.}
\label{fig_grad2}%
\end{figure}
The time evolution of the metallicity gradient is closely linked to the
inside-out growth of the disk.
In a recent work, Williams et al. (\cite{williams09}) presented the resolved photometry of four fields located at different
galactocentric distances, from 0.9 to 6.1 kpc.
Their photometry provides a detailed census of stellar populations and their ages at different radii.
They find that the percentage of the stellar mass formed prior to {\em z}= 1 ($\sim$8 Gyr ago) changes
from $\sim$71\% in the innermost field at 0.9 kpc from the centre
to $\sim$16\% in the outermost SF region at 6.1 kpc.
We thus compare these results with the cumulative SF predicted by our model
at different ages. The agreement is good in the central field, where model and observations both have
about 70\% of the stellar mass already formed 8 Gyr ago. Also at 2.5 kpc,
corresponding to their second field, the observed
stellar mass formed prior to {\em z}= 1 is about 50\% and the model results give
47\%.
In the outermost fields, at 4.3 and 6.1 kpc, the agreement is not as good,
since the observations show that 20\% and 16\% of stars formed before 8 Gyr ago, while
the model predicts $\sim$44\% for both fields.
With the model by M07b, the situation is almost unvaried in the inner regions,
while in the outer regions there is a slightly lower percentage of stars formed before 8 Gyr ago, $\sim$33\%.
The recent evolution of the outermost SF regions of M33
can be explained by an accretion of material in the peripheral regions
at recent epochs.
In our model, the scalelength of the disk is constant, and the accretion in the outer
regions is a continuous process. Thus, we are
able to reproduce the integrated SF at any radius and its by-product, the metallicity,
but not to reproduce its temporal behaviour if it is dominated by stochastic events, such as
sporadic massive accretion of gas and tidal
interactions.
Finally, outside the SF area of the disk, the stellar populations of the outer disk/halo show an age increase
with radius (Barker et al. \cite{barker07a}, \cite{barker07b}). This agrees well with our model results: there is
a slightly larger number of old stars beyond 8 kpc than in the peripheral part of the SF disk.
\section{Summary and conclusions}
\label{sect_conclu}
We have studied the chemical evolution of M33 by means of new spectroscopic observations
of H~{\sc ii}\ regions, together with literature data of H~{\sc ii}\ regions and PNe.
We derived the radial oxygen gradient in the range from 1 to $\sim$8 kpc. Its slope is -0.044$\pm$ 0.009
dex kpc$^{-1}$.
We excluded the central 1 kpc region from the gradient because
of its low metallicity.
In fact, the 2D metallicity map is off-centre, with a peak in the southern
arm at 1-2 kpc from the centre. We measured the highest metallicities in
bright regions which are not located at the centre of M33.
We explained this effect with a bias in the measurement of nebular
abundances towards low metallicity in the central regions and/or with SF bursts whose products
have not had time to mix azimuthally.
At a galactocentric distance of about 1.5 kpc, the spread of metal abundances is much greater than
the dispersion along the average gradient, and it is possibly related to the presence of a bar.
We analysed and discussed the metallicity gradient of {\em giant} H~{\sc ii}\ regions, i.e. the largest regions with high surface brightness.
We find it steeper than the gradient of the full sample, in agreement with the early spectroscopic
studies of H~{\sc ii}\ regions in M33,
based mainly on the brightest objects.
We compared the metallicity gradients of H~{\sc ii}\ regions and of PNe, obtained with the same set of observations and
analysis techniques. We find a substantially unchanged slope of the gradient, and an overall increase in metallicity with time.
We can explain the slow evolution of the metallicity gradient from the present to the
birth of the PN progenitors with a chemical evolution {\em accretion} model, as in M07b, if the
SFE is higher at larger radii.
\section{Introduction}
There has been a great deal of attention devoted in recent
years to the properties of the hot X-ray-emitting plasma
(the intracluster medium, hereafter ICM) in the central
regions of massive galaxy groups and clusters. To a large
extent, the focus has been on the competition between
radiative cooling losses and various mechanisms that could be
heating the gas and therefore (at least partially) offsetting
the effects of cooling. Prior to the launch of the {\it
Chandra} and {\it XMM-Newton} X-ray observatories,
it was generally accepted that large quantities of the ICM
should be cooling down to very low energies, where it would
cease to emit X-rays and eventually condense out
into molecular clouds and form stars (Fabian 1994).
However, the amount of cold gas and star formation actually
observed in most systems is well below what was expected
based on analyses of {\it ROSAT}, {\it ASCA}, and {\it
Einstein} X-ray data (e.g., Voit \& Donahue 1995; Edge 2001).
New high spatial and spectral resolution data from {\it
Chandra} and {\it XMM-Newton} have shown that, while cooling
is clearly important in many groups and clusters (so-called
`cool core' systems), relatively little gas is actually
cooling out of the X-ray band to temperatures below
$\sim10^7$ K (Peterson et al.\ 2003).
It is now widely believed that some energetic form of
non-gravitational heating is compensating for the
losses due to cooling. Indeed, it seems likely that such
heating goes beyond a simple compensation for the radiated
energy. As well as having a profound effect on the properties
of the ICM, it seems that the heat input also has
important consequences for the formation and evolution of the
central brightest cluster galaxy (BCG) and therefore for the
bright end of the galaxy luminosity function (e.g.,
Benson et al.\ 2003; Bower et al.\ 2006, 2008). The thermal
state of the ICM therefore provides an important probe of
these processes and a concerted
theoretical effort has been undertaken to explore the
effects of various heating sources (e.g., supermassive black
holes, supernovae, thermal conduction, dynamical friction
heating of orbiting satellites) using analytic and
semi-analytic models, in addition to idealised and full
cosmological hydrodynamic simulations (e.g., Binney \& Tabor
1995; Narayan \& Medvedev 2001; Ruszkowski et al.\ 2004;
McCarthy et al.\ 2004; Kim et al.\ 2005; Voit \& Donahue
2005; Sijacki et al.\ 2008). In most of these approaches,
it is implicitly assumed that {\it gravitationally-induced}
heating (e.g., via hydrodynamic shocks, turbulent mixing,
energy-exchange between the gas and dark matter) that occurs
during mergers/accretion is understood and treated with a
sufficient level of accuracy. For example, most
current analytic and semi-analytic models of groups and
clusters attempt
to take the effects of gravitational heating into account by
assuming distributions for the gas that are taken from (or
inspired by) non-radiative cosmological hydrodynamic
simulations --- the assumption being that these simulations
accurately and self-consistently track gravitational processes.
However, it has been known for some time that, even in the
case of identical initial setups, the results of non-radiative
cosmological simulations depend on the numerical scheme
adopted for tracking the gas hydrodynamics. In particular,
mesh-based Eulerian (such as adaptive mesh refinement,
hereafter AMR) codes appear to systematically produce higher
entropy (lower density) gas cores within groups and clusters
than do particle-based Lagrangian (such as smoothed particle
hydrodynamics, hereafter SPH) codes (see, e.g., Frenk et al.\
1999; Ascasibar et al.\ 2003; Voit, Kay, \& Bryan 2005; Dolag
et al.\ 2005; O'Shea et al.\ 2005). This may be regarded as
somewhat surprising, given the ability of these codes
to accurately reproduce a variety of different analytically
solvable test problems (e.g., Sod shocks, Sedov blasts, the
gravitational collapse of a uniform sphere; see, e.g.,
Tasker et al.\ 2008), although clearly hierarchical
structure formation is a more complex and challenging test
of the codes. At present, the origin of the
cores and the discrepancy in their amplitudes between
Eulerian and Lagrangian codes in cosmologically-simulated
groups and clusters is unclear. There have been suggestions that it
could be the result of insufficient resolution in the mesh
simulations, artificial entropy generation in the SPH simulations,
Galilean non-invariance of the mesh simulations, and differences in
the amount of mixing in the SPH and mesh simulations (e.g., Dolag et
al.\ 2005; Wadsley et al.\ 2008). We explore all of these potential
causes in \S 4.
Clearly, though, this matter is worth further investigation,
as it potentially has important implications for the
competition between heating and cooling in groups and
clusters (the cooling time of the ICM has a steep dependence
on its core entropy) and the bright end of the galaxy
luminosity function. And it is important to consider that
the total heating is not merely the sum of the gravitational
and non-gravitational heating terms. The Rankine-Hugoniot
jump conditions tell us that the efficiency of shock heating
(either gravitational or non-gravitational in origin) {\it
depends on the density of the gas at the time of heating}
(see, e.g., the discussion in McCarthy et al.\ 2008a). This
implies that if gas has been heated before being shocked,
the entropy generated in the shock can actually be amplified
(Voit et al.\ 2003; Voit \& Ponman 2003; Borgani et al.\ 2005;
Younger \& Bryan 2007). The point is that gravitational and
non-gravitational heating will couple together in complex ways,
so it is important that we are confident that gravitational
heating is being handled with sufficient accuracy in
the simulations.
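To make this density dependence explicit, consider a strong shock (Mach number $\mathcal{M} \gg 1$) in a monatomic gas ($\gamma = 5/3$): the jump conditions give a post-shock density $\rho_2 = 4\rho_1$ and pressure $P_2 \approx (3/4)\,\rho_1 v_s^2$, so that the post-shock entropy scales as
\[
K_2 \equiv \frac{P_2}{\rho_2^{5/3}} \approx \frac{(3/4)\,\rho_1 v_s^{2}}{(4\rho_1)^{5/3}} \propto \frac{v_s^{2}}{\rho_1^{2/3}} ,
\]
i.e., the same shock generates more entropy when it sweeps through lower-density (previously heated) gas.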
A major difficulty in studying the origin of the cores in
cosmological simulations is that the group and cluster
environment can be extremely complex, with many hundreds of
substructures (depending on the numerical resolution)
orbiting about at any given time. Furthermore,
such simulations can be quite computationally-expensive if one
wishes to resolve in detail the innermost regions of groups
and clusters (note that, typically, the simulated cores have
sizes $\la 0.1 r_{200}$). An alternative approach, which we
adopt in the present study, is to use idealised simulations of
binary mergers to study the relevant gravitational heating
processes. The advantages of such an approach are obviously
that the environment is much cleaner, therefore offering a
better chance of isolating the key processes at
play, and that the systems are fully resolved from the onset
of the simulation. The relatively modest computational
expense of such idealised simulations also puts us in a
position to be able to vary the relevant physical and
numerical parameters in a systematic way and to study their
effects.
Idealised merger simulations have been used extensively to
study a variety of phenomena, such as the disruption of
cooling flows (G{\'o}mez et al.\ 2002; Ritchie \& Thomas
2002; Poole et al.\ 2008), the intrinsic scatter in cluster
X-ray and Sunyaev-Zel'dovich effect scaling relations
(Ricker \& Sarazin 2001; Poole et al.\ 2007), the generation
of cold fronts and related phenomena (e.g., Ascasibar \&
Markevitch 2006), and the ram pressure stripping of orbiting
galaxies (e.g., Mori \& Burkert 2000; McCarthy et al.\
2008b). However, to our knowledge, idealised merger
simulations have not been used to elucidate the important
issue raised above, nor have they even been used to
demonstrate whether or not this issue even exists in
non-cosmological simulations.
In the present study, we perform a detailed comparison of
idealised binary mergers run with the widely used public
simulation codes {\small FLASH} (an AMR code) and {\small GADGET-2}
(a SPH code). The paper is organised as follows. In \S 2 we
give a brief description of the simulation codes and
the relevant adopted numerical parameters. In addition, we
describe the initial conditions (e.g., structure of the
merging systems, mass ratio, orbit) of our idealised
simulations. In \S 3, we present a detailed comparison of
results from the {\small FLASH} and {\small GADGET-2} runs and confirm that
there is a significant difference in the amount of
central entropy generated with the two codes. In \S 4 we
explore several possible causes for the differences we see.
Finally, in \S 5, we summarise and discuss our findings.
\section{Simulations}
\subsection{The Codes}
Below, we provide brief descriptions of the {\small GADGET-2} and
{\small FLASH} hydrodynamic codes used in this study and the
parameters we have adopted. The interested reader is referred to
Springel, Yoshida, \& White (2001) and Springel (2005b) for in-depth
descriptions of {\small GADGET-2} and to Fryxell et al.\ (2000) for the {\small FLASH}
code. Both codes are representative examples of
their respective AMR and SPH hydrodynamic formulations, as has been
shown in the recent code comparison of Tasker et al.\ (2008).
\subsubsection{{\small FLASH} }
{\small FLASH} is a publicly available AMR code developed by the Alliances
Center for Astrophysical Thermonuclear Flashes\footnote{See the flash
website at: \\ http://flash.uchicago.edu/}. Originally intended for
the study of X-ray bursts and supernovae, it has since been adapted for
many astrophysical conditions and now includes modules for relativistic
hydrodynamics, thermal conduction, radiative cooling,
magnetohydrodynamics, thermonuclear burning, self-gravity and particle
dynamics via a particle-mesh approach. In this study we use
{\small FLASH} version 2.5.
{\small FLASH} solves the Riemann problem using the piecewise
parabolic method (PPM; Colella \& Woodward 1984).
The present work uses the default parameters, which have been
thoroughly tested against numerous analytical tests
(see Fryxell et al.\ 2000). The maximum number of Newton-Raphson
iterations permitted within the Riemann solver was increased in order
to allow it to deal with particularly sharp shocks and discontinuities
whilst the default tolerance was maintained. The hydrodynamic
algorithm also adopted the default settings, with periodic boundary
conditions applied to both the gas and the gravity solver.
We have modified the gravity solver in {\small FLASH} to use an FFT on top of
the multigrid solver (written by T. Theuns). This results in a vast
reduction in the time spent calculating the self-gravity of the
simulation relative to the publicly available version. We
have rigorously tested the new algorithm against the default multigrid
solver; further tests are presented in Tasker et al.\ (2008).
To identify regions of rapid flow change, {\small FLASH} 's
refinement and de-refinement criteria can incorporate the
adapted L\"{o}hner (1987) error estimator. This
calculates the modified second derivative of the desired variable,
normalised by the average of its gradient over one cell.
With this applied to the density, as is commonplace, we
imposed the additional user-defined criterion whereby the
density has to exceed a threshold of $200\rho_{c}$, below
which the refinement is set to the minimum $64^{3}$ mesh.
This restricts the refinement to the interior of the clusters
and, as we demonstrate below, was found to yield nearly
identical results to uniform grid runs with resolution equal
to the maximum resolution in the equivalent AMR run.
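To make the logic of this user-defined criterion concrete, a
minimal sketch follows; the function name, error threshold, and
calling convention are illustrative assumptions, not the actual
{\small FLASH} interface:
\begin{verbatim}
# Sketch of the refinement decision described above; the
# interface and error threshold are illustrative only.
def refine_block(max_density, loehner_error, rho_c,
                 error_threshold=0.8):
    if max_density < 200.0 * rho_c:
        return False  # stay at the minimum 64^3 base mesh
    # otherwise apply the Loehner error criterion
    return loehner_error > error_threshold
\end{verbatim}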
{\small FLASH} uses an Oct-Tree block-structured AMR grid, in which the block
to be refined is replaced by 8 blocks (in three dimensions), of which
the cell size is one half of that of the parent block. Each block
contains the same number of cells, $N_x=16$ cells in each dimension in our
runs. The maximum allowed level of refinement, $l$, is one of the
parameters of the run. At refinement level $l$, a fully refined AMR
grid will have $N_x\,2^{l-1}$ cells on a side.
All the {\small FLASH} merger runs are simulated in 20 Mpc on a side periodic
boxes in a non-expanding (Newtonian) space and are run for a duration
of $\simeq 10$ Gyr. By default, our {\small FLASH} simulations are run with a
maximum of $l=6$ levels of refinement ($512^3$ cells when fully
refined), corresponding to a minimum cell size of $\approx 39$ kpc,
which is small in comparison to the entropy cores produced in
non-radiative cosmological simulations of clusters (but note we
explicitly test the effects of resolution in \S 3). The simulations
include non-radiative hydrodynamics and gravity.
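The mapping between the maximum refinement level and the quoted
spatial resolutions follows directly from the grid geometry. A
minimal sketch, using the box and block sizes given above:
\begin{verbatim}
# Cell size of a fully refined Oct-Tree AMR grid with
# N_x = 16 cells per block in a 20 Mpc box (values from
# this section).
def min_cell_size_kpc(l, box_mpc=20.0, n_x=16):
    cells_per_side = n_x * 2 ** (l - 1)
    return box_mpc * 1.0e3 / cells_per_side

# l = 5, 6, 7, 8  ->  ~78, ~39, ~19.5, ~9.8 kpc
# (equivalent to uniform 256^3, 512^3, 1024^3, 2048^3 grids)
\end{verbatim}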
\subsubsection{GADGET-2}
{\small GADGET-2} is a publicly available TreeSPH code designed for the
simulation of cosmological structure formation. By default,
the code implements the entropy
conserving SPH formalism proposed by Springel \& Hernquist
(2002). The code is massively parallel, has been highly
optimised and is very memory efficient. This has led
to it being used for some of the largest cosmological
simulations to date, including the first N-body simulation
with more than $10^{10}$ dark matter particles (the
{\it Millennium Simulation}; Springel et al.\ 2005a).
The SPH formalism is inherently Lagrangian in nature and
fully adaptive, with `refinement' based on the density. The
particles represent discrete mass elements with the fluid
variables being obtained through a kernel interpolation
technique (Lucy 1977; Gingold \& Monaghan 1977; Monaghan
1992). The entropy injected through shocks is captured
through the use of an artificial viscosity term (see, e.g.,
Monaghan 1997). We will explore the sensitivity of our
merger simulation results to the artificial viscosity in
\S~\ref{viscosity}.
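As an illustration of this kernel interpolation, the following
sketch computes a brute-force SPH density estimate using the cubic
spline kernel of Springel (2005b); this is schematic only, not the
code's optimised neighbour search:
\begin{verbatim}
import numpy as np

def w_cubic_spline(r, h):
    # Cubic spline kernel with compact support r <= h
    # (the form used in GADGET-2; Springel 2005b).
    q = r / h
    w = np.zeros_like(q)
    m1 = q <= 0.5
    m2 = (q > 0.5) & (q <= 1.0)
    w[m1] = 1.0 - 6.0 * q[m1]**2 + 6.0 * q[m1]**3
    w[m2] = 2.0 * (1.0 - q[m2])**3
    return 8.0 / (np.pi * h**3) * w

def sph_density(r_i, positions, masses, h):
    # Density estimate at position r_i from all particles.
    d = np.linalg.norm(positions - r_i, axis=1)
    return np.sum(masses * w_cubic_spline(d, h))
\end{verbatim}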
By default, gravity is solved through the use of a combined
tree particle-mesh (TreePM) approach. The TreePM method
allows for substantial speed ups over the traditional tree
algorithm
by calculating long range forces with the particle-mesh
approach using Fast Fourier techniques. Higher gravitational
spatial resolution is then achieved by applying the tree over
small scales only, maintaining the dynamic range of the tree
technique. This allows {\small GADGET-2} to vastly exceed the
gravitational resolving power of mesh codes which rely on
the particle-mesh technique alone and are thus limited to
the minimum cell spacing.
We adopt the following numerical parameters by default for
our {\small GADGET-2} runs (but note that most of these are
systematically varied in \S 3). The artificial bulk
viscosity, $\alpha_{\rm visc}$, is set to 0.8. The number
of SPH smoothing neighbours, $N_{\rm sph}$, is set to 32.
Each of our $10^{15} M_\odot$ model clusters (see \S 2.2)
is composed of $5\times10^5$ gas and dark matter particles
within $r_{200}$, and the gas-to-total mass ratio is
$0.141$. Thus, the particle masses are $m_{\rm gas} = 2.83
\times 10^8 M_\odot$ and $m_{\rm dm} = 1.72 \times 10^{9}
M_\odot$. The gravitational softening length is set to 10
kpc, which corresponds to $\approx 5 \times 10^{-3} r_{200}$
initially. All the {\small GADGET-2} merger runs are simulated in 20
Mpc on a side periodic boxes in a non-expanding (Newtonian)
space and are run for a duration of $\simeq$ 10 Gyr. The
simulations include basic hydrodynamics only (i.e., are
non-radiative).
\subsection{Initial Conditions}
In our simulations, the galaxy clusters are initially
represented by spherically-symmetric systems composed of a
realistic mixture of dark matter and gas.
The dark matter is assumed to follow a NFW distribution
(Navarro et al.\ 1996; 1997):
\begin{equation}
\rho(r) = \frac{\rho_s}{(r/r_s)(1+r/r_s)^2}
\end{equation}
\noindent where $\rho_s = M_s/(4 \pi r_s^3)$ and
\begin{equation}
M_s=\frac{M_{200}}{\ln(1+r_{200}/r_s)-(r_{200}/r_s)/(1+r_{200}/r_s)} \ \ .
\end{equation}
Here, $r_{200}$ is the radius within which the mean
density is 200 times the critical density, $\rho_{\rm crit}$,
and $M_{200} \equiv M(r_{200}) = (4/3) \pi r_{200}^3 \times
200 \rho_{\rm crit}$.
The dark matter distribution is fully specified once an
appropriate scale radius ($r_s$) is selected. The scale
radius can be expressed in terms of the halo concentration
$c_{200} = r_{200}/r_s$. We adopt a concentration of
$c_{200} = 4$ for all our systems. This value is typical of
massive clusters formed in $\Lambda$CDM cosmological
simulations (e.g., Neto et al.\ 2007).
In order to maintain the desired NFW configuration,
appropriate velocities must be assigned to each dark matter
particle. For this, we follow the method outlined in McCarthy
et al.\ (2007). Briefly, the three velocity components are
selected randomly from a Gaussian distribution whose width is
given by the local velocity dispersion [i.e., $\sigma(r)$].
The velocity dispersion profile itself is determined by
solving the Jeans equation for the mass density distribution
given in eqn.\ (1). As in McCarthy et al.\ (2007), the dark
matter haloes are run separately in isolation in {\small GADGET-2}
for many dynamical times to ensure that they have fully relaxed.
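A minimal sketch of this calculation, assuming isotropic orbits
(so that the Jeans equation reduces to a single quadrature) and
working in units of kpc, km/s and $M_\odot$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

G = 4.30091e-6  # kpc (km/s)^2 / Msun

def rho_nfw(r, rho_s, r_s):
    x = r / r_s
    return rho_s / (x * (1.0 + x)**2)

def m_nfw(r, rho_s, r_s):
    x = r / r_s
    return 4.0 * np.pi * rho_s * r_s**3 \
        * (np.log(1.0 + x) - x / (1.0 + x))

def sigma(r, rho_s, r_s, r_max=1.0e5):
    # Isotropic Jeans equation:
    # rho(r) sigma(r)^2 = int_r^inf rho G M / r'^2 dr'
    f = lambda rp: (rho_nfw(rp, rho_s, r_s)
                    * G * m_nfw(rp, rho_s, r_s) / rp**2)
    val, _ = quad(f, r, r_max)
    return np.sqrt(val / rho_nfw(r, rho_s, r_s))
\end{verbatim}
Each velocity component of a particle at radius $r$ is then drawn
from a Gaussian of width $\sigma(r)$, as described above.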
For the gaseous component, we assume a powerlaw
configuration for the entropy\footnote{Note, the quantity $K$
is not the actual thermodynamic specific entropy ($s$) of
the gas, but is related to it via the simple relation $s
\propto \ln{K^{3/2}}$. However, for historical reasons we
will refer to $K$ as the entropy.}, $K \equiv P \rho_{\rm
gas}^{-5/3}$, by default. In particular,
\begin{equation}
\frac{K(r)}{K_{200}} = 1.47 \biggl(\frac{r}{r_{200}} \biggr)^{1.22}\,,
\end{equation}
\noindent where the `virial entropy', $K_{200}$, is given by
\begin{equation}
K_{200} \equiv \frac{G M_{200}}{2 \ r_{200}} \frac{1}{(200
\ \rho_{\rm crit})^{2/3}}\,.
\end{equation}
This distribution matches the entropy profiles of groups and
clusters formed in the non-radiative cosmological simulations
of Voit, Kay, \& Bryan (2005) (VKB05) for $r \ga 0.1 r_{200}$. It
is noteworthy that VKB05 find that
this
distribution approximately matches the entropy profiles of
both SPH (the {\small GADGET-2} code) and AMR (the {\small ENZO} code)
simulations. Within $0.1 r_{200}$, however, the AMR and SPH
simulations show evidence for entropy cores, but of
systematically different amplitudes. We initialise our
systems without an entropy core [i.e., eqn.
(3) is assumed to hold over all radii initially] to see,
first, if such cores are established during the merging
process and, if so, whether the amplitudes differ between the
SPH and AMR runs, as they do in cosmological simulations. We
leave it for future work to explore the differences that
result (if any) between SPH and AMR codes when large cores are
already present in the initial systems (e.g., the merger of
two `non-cool core' systems).
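For concreteness, the normalisation in eqns (3) and (4) can be
evaluated as follows for our $10^{15} M_\odot$ systems. The value
of $\rho_{\rm crit}$ below is an assumption, chosen so that the
sketch reproduces the $r_{200,i} \simeq 2062$ kpc quoted in \S 3:
\begin{verbatim}
import numpy as np

G = 4.30091e-6    # kpc (km/s)^2 / Msun
rho_crit = 136.0  # Msun / kpc^3 (assumed value)

M200 = 1.0e15     # Msun
r200 = (3.0 * M200
        / (4.0 * np.pi * 200.0 * rho_crit))**(1.0 / 3.0)

K200 = (G * M200 / (2.0 * r200)
        / (200.0 * rho_crit)**(2.0 / 3.0))

def K_init(r):
    # Initial powerlaw entropy profile, eqn (3).
    return 1.47 * K200 * (r / r200)**1.22
\end{verbatim}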
With the mass distribution of dark matter established (i.e.,
after having run the dark matter haloes in isolation) and an
entropy distribution for the gas given by eqn.\ (3), we
numerically compute the radial gas pressure profile (and
therefore also the gas density and temperature profiles),
taking into account the self-gravity of the gas, by
simultaneously solving the equations of hydrostatic
equilibrium and mass continuity:
\begin{eqnarray}
\frac{{\rm dlog}P}{{\rm dlog}M_{\rm gas}}= - \frac{G M_{\rm
gas} M_{\rm tot}}{4 \pi r^4 P} \\
\frac{{\rm dlog}r}{{\rm dlog}M_{\rm gas}}= \frac{M_{\rm gas}}{4
\pi r^3} \biggl(\frac{K}{P} \biggr)^{3/5}
\end{eqnarray}
Two boundary conditions are required to solve these equations.
The first condition is that $r(M_{\rm gas}=0) = 0$.
The second condition is that the total mass of hot gas
within $r_{200}$ yields a realistic baryon fraction of
$M_{\rm gas}/M_{\rm tot} = 0.141$. In order to meet the second
condition, we choose a value for $P(M_{\rm gas}=0)$ and
propagate the solution outwards to $r_{200}$. We then
iteratively vary the inner pressure until the desired baryon
fraction is achieved.
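A minimal sketch of this shooting procedure, assuming consistent
units throughout; \texttt{M\_tot\_of\_r} and \texttt{K\_of\_Mgas}
stand for the numerically tabulated total mass profile and the
entropy profile of eqn (3):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def r_of_gas_mass(P0, Mgas_tot, M_tot_of_r, K_of_Mgas,
                  G, r0=1.0):
    # Integrate eqns (5)-(6) outward in log M_gas from a
    # trial central pressure P0; returns the radius that
    # encloses Mgas_tot. The inner boundary (r0, 1e-8 of
    # the gas mass) approximates r(M_gas = 0) = 0.
    def rhs(logM, y):
        Mg, P, r = np.exp(logM), np.exp(y[0]), np.exp(y[1])
        dlogP = (-G * Mg * M_tot_of_r(r)
                 / (4.0 * np.pi * r**4 * P))
        dlogr = (Mg / (4.0 * np.pi * r**3)
                 * (K_of_Mgas(Mg) / P)**0.6)
        return [dlogP, dlogr]
    sol = solve_ivp(rhs,
                    [np.log(1.0e-8 * Mgas_tot),
                     np.log(Mgas_tot)],
                    [np.log(P0), np.log(r0)], rtol=1.0e-8)
    return np.exp(sol.y[1, -1])

# P0 is then varied, e.g. by bisection, until the radius
# enclosing Mgas_tot = 0.141 M_tot(r200) equals r200.
\end{verbatim}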
For the {\small GADGET-2} simulations, the gas particle positions
are assigned by radially morphing a glass distribution until
the desired gas mass profile is obtained (see McCarthy et
al.\ 2007). The entropy (or equivalently internal energy per
unit mass) of each particle is specified by interpolating
with eqn.\ (3). For the {\small FLASH} simulations, the gas
density and entropy of each grid cell is computed by using
eqn.\ (3) and interpolating within the radial gas density
profile resulting from the solution of eqns.\ (5) and (6).
Thus, for both the {\small GADGET-2} and {\small FLASH} simulations
we start with identical dark matter haloes (using particle
positions and velocities from the {\small GADGET-2} isolated runs
with a gravitational softening length of 10kpc) and
gas haloes, which have been established by interpolating
within radial profiles that have been computed numerically
under the assumption that the gas is initially in hydrostatic
equilibrium within the dark matter halo. Note that when
varying the resolution of the {\small FLASH} simulations we simply change
the maximum number of refinements, $l$; we do not vary the number
of dark matter particles. In this way, the initial dark matter
distribution is always the same as in the low resolution {\small GADGET-2} run
in all the simulations we have run.
In both the {\small GADGET-2} and {\small FLASH} simulations, the gaseous haloes
are surrounded by
a low density pressure-confining gaseous medium that prevents
the systems from expanding prior to the collision (i.e., so
that in the case of an isolated halo the object would be
static), but otherwise has a negligible dynamical effect on
the system.
Isolated gas+DM haloes were run in both {\small GADGET-2} and {\small FLASH} for
10 Gyr in order to test the stability of the initial gas and dark
matter haloes. Although deviations in the central entropy develop over
the course of the isolated simulations, indicating the systems are not
in perfect equilibrium initially, they are small in amplitude (the
central entropy increases by $<10\%$ over 10 Gyr), especially in
comparison to the factor of $\sim2-3$ jump in the central entropy that
occurs as a result of shock heating during the merger. Furthermore, we
note that the amplitude of the deviations in the isolated runs are
significantly decreased as the resolution of these runs is increased.
Our merger simulations, however, are numerically converged (see \S 3),
indicating that the deviations have a negligible effect on merger
simulation results and the conclusions we have drawn from them.
\begin{figure*}
\centering
\leavevmode
\epsfysize=8.4cm \epsfbox{figs/entropy_radius_res_test.eps}
\epsfysize=8.4cm \epsfbox{figs/entropy_gasmass_res_test.eps}
\caption{
Plots demonstrating the entropy cores formed in idealised
head on mergers of equal mass ($10^{15} M_\odot$) clusters
in the {\small FLASH} and {\small GADGET-2} simulations. The left hand panel
shows the final radial entropy distribution,
where the data points are the median entropy value in radial
bins and the error bars correspond to the 25th and 75th
percentiles. The dashed black line represents the initial
powerlaw configuration. The solid blue squares, solid
blue triangles, and open circles represent the low resolution
($10^{5}$ gas particles), default ($10^{6}$ gas particles), and
high resolution ($10^{7}$ gas particles) {\small GADGET-2} runs. The
minimum SPH smoothing lengths of these simulations throughout the
runs are approximately $25$, $11$, and $5$ kpc, respectively.
The solid red squares, solid red triangles,
solid red pentagons, and open red circles represent {\small FLASH}
AMR runs with $l =$ 5, 6 (default), 7, and 8,
respectively, which have minimum cell sizes of $\approx 78$,
$39$, $19.5$, and $9.8$ kpc (respectively). (These would be
equivalent to the resolutions of uniform grid runs with
$256^3$, $512^3$, $1024^3$ and $2048^3$ cells.) The open red
triangles represent a uniform $512^3$ {\small FLASH} run with a cell
size of $\approx 39$ kpc (for reference, $r_{200,i} \simeq
2062$ kpc). With the exception
of the lowest resolution AMR run, all of the {\small FLASH} runs
essentially lie on top of one another, as do the {\small GADGET-2}
runs, meaning both codes are numerically converged. However, importantly,
the two codes have converged to results that differ by a
factor $\sim 2$ in central entropy.
The right hand panel presents the results in a slightly
different way: it shows the entropy as a function of
enclosed gas mass $K(<M_{\rm gas})$. This is constructed by
simply sorting the particles/cells by entropy in ascending
order and then summing masses of the particles/cells. The
results have been
normalised to the final distribution of the default {\small GADGET-2}
run (dashed black line). The dashed blue and solid blue curves
represent the low and high resolution {\small GADGET-2} runs, respectively, whereas
the dotted red,
solid red, short-dashed red, and long-dashed red curves
represent the {\small FLASH} AMR runs $l =$ 5, 6, 7,
and 8. The thin solid red curve represents the uniform
$512^3$ {\small FLASH} run. Again we see that the default {\small GADGET-2}
and {\small FLASH} runs are effectively converged, but to a
significantly different profile.}
\label{entropy_radius}
\end{figure*}
\section{Idealised cluster mergers}
\label{thecomparison}
The existence of a discrepancy between the inner properties
of the gas in groups and clusters formed in AMR and SPH
cosmological simulations was first noticed in the
Santa Barbara code comparison of Frenk et al.\ (1999). It was
subsequently verified in several works, including Dolag et
al.\ (2005), O'Shea et al.\ (2005), Kravtsov, Nagai \&
Vikhlinin (2005), and VKB05. The latter study
in particular clearly demonstrated, using a relatively large
sample of
$\sim 60$ simulated groups and clusters, that those systems
formed in the AMR simulations had systematically larger
entropy cores than their SPH counterparts. Since
this effect was observed in cosmological simulations, it was
generally thought that the discrepancy was due to
insufficient resolution in the mesh codes at high
redshift (we note, however, that VKB05 argued against resolution
being the cause). This would result in under-resolved small scale
structure formation in the early universe. This explanation
is consistent with the fact that in the Santa
Barbara comparison the entropy core amplitude tended to be
larger for the lower resolution mesh code runs. Our first aim is
therefore to determine
whether the effect is indeed due to resolution limitations,
or if it is due to a more fundamental difference between the
two types of code. We test this using identical idealised
binary mergers of spherically-symmetric clusters in {\small GADGET-2}
and {\small FLASH} , where it is possible to explore the effects of
finite resolution with relatively modest computational
expense (compared to full cosmological simulations).
\subsection{A Significant Discrepancy}
\label{asignificantdescrepancy}
As a starting point, we investigate the generation of entropy
cores in a head on merger between two identical $10^{15}
M_\odot$ clusters, each colliding with an initial speed of
$0.5 V_{\rm circ}(r_{200}) \simeq 722$ km/s [i.e., the
initial
{\it relative} velocity is $V_{\rm circ}(r_{200})$, which is
typical of merging systems in cosmological simulations;
see, e.g., Benson 2005]. The system is initialised such that
the two clusters are just barely touching (i.e., their
centres are separated by $2 r_{200}$). The simulations are
run for a duration of 10 Gyr, by the end of which the merged
system has relaxed and there is very little entropy generation
ongoing.
\begin{table}
\caption{Characteristics of the head on simulations presented in
\S 3.1}
\centering
\begin{tabular}{ l l l}
\hline
FLASH sim. & No. cells & Max. spatial res.\\
& & (kpc)\\
\hline
$l=5$ & equiv.\ $256^3$ & 78 \\
$l=6$ (default) & equiv.\ $512^3$ & 39 \\
$l=7$ & equiv.\ $1024^3$ & 19.5 \\
$l=8$ & equiv.\ $2048^3$ & 9.8 \\
$512^3$ & $512^3$ & 39 \\
\hline
\\
\hline
GADGET-2 sim. & No. gas particles & Max. spatial res.\\
& & (kpc)\\
\hline
low res. & $10^5$ & $\approx 25$ \\
default & $10^6$ & $\approx 11$ \\
hi res. & $10^7$ & $\approx 5$ \\
\hline
\end{tabular}
\end{table}
Our idealised test gives very similar results to non-radiative
cosmological simulations --- there is a distinct
difference in the amplitude of the entropy cores in the AMR
and SPH simulations, with the entropy in the mesh code a
factor $\sim 2$ higher than in the SPH code. It is evident that
the difference between the codes is captured in a single merger
event. An immediate
question is whether this is the result of the different
effective resolutions of the codes. Resolution tests can be
seen in the left hand panel of Figure~\ref{entropy_radius},
where we plot the resulting radial entropy distributions.
For {\small GADGET-2} , we compare runs with $10^5$, $10^6$ (the default),
and $10^7$ particles. For {\small FLASH} we compare AMR runs with
minimum cell sizes of $\approx 78$, $39$ (the default), $19.5$, and
$9.8$ kpc and a uniform grid run with the default 39~kpc cell size.
The simulation characteristics for these head on mergers are
presented in Table 1.
To make a direct comparison with the cosmological results of
VKB05 (see their Fig.\ 5), we normalise the entropy by the
initial `virial' entropy ($K_{200}$; see eqn.\ 4) and the
radius by the initial virial radius, $r_{200}$.
The plot clearly shows that the simulations converge on two
distinctly different solutions within the inner ten percent
of $r_{200}$, whereas the entropy at large radii shows
relatively good agreement between the two codes. The
simulations performed for the resolution test span a factor
of 8 in spatial resolution in {\small FLASH} and approximately a factor of
5 in {\small GADGET-2} . The {\small FLASH} AMR runs effectively
converge after reaching a peak resolution equivalent to a
$512^{3}$ run (i.e., a peak spatial resolution of $\approx
39$ kpc or $\approx 0.019 r_{200}$). We have also tried a {\small FLASH} run with
a uniform (as opposed to adaptive) $512^3$ grid and the results
essentially trace the AMR run with an equivalent peak resolution. This
reassures us that our AMR refinement criteria are correctly capturing
all regions of significance. The lowest resolution SPH run, which only
has $5\times10^4$ gas particles within $r_{200}$ initially, has a slightly
higher final central entropy than the default and high resolution SPH
runs. This may not be surprising given the tests and modelling presented
in Steinmetz \& White (1997). These authors demonstrated that with such
small particle numbers, two-body heating will be important if the mass of
a dark matter particle is significantly above the mass of a gas particle.
The {\small GADGET-2} runs converge, however, when the number of gas and dark
matter particles are increased by an order of magnitude (i.e., as in our
default run), yielding a maximum spatial resolution of $\approx 11$ kpc
(here we use the minimum SPH smoothing length as a measure of the maximum
spatial resolution).
\begin{figure*}
\centering
\includegraphics[width=13.5cm]{figs/8panel_new.eps}
\caption{Logarithmic entropy slices (i.e., thickness of
zero) through the centre of
the default {\small FLASH} merger simulation (with $l=6$) at times 0, 1, 2, 3, 4,
5, 7 and 10 Gyr. The lowest
entropy material is shown in blue, increasing in entropy
through green, yellow to red. Each panel is 6 Mpc on a side.
Significant entropy is generated at $t\approx 2$~Gyr when
the cores collide and gas is squirted out, and again later on
when this gas reaccretes.
}
\label{8panelimage}
\end{figure*}
A comparison of the left hand panel of Fig.\ 1 to Fig.\ 5 of VKB05
reveals a remarkable
correspondence between the results of our idealised merger
simulations and those of their cosmological simulations
(which spanned system masses of $\sim 10^{13-15} M_\odot$).
They find that the ratio of the AMR and SPH core amplitudes is
$\sim 2$ in both the idealised and cosmological
simulations. This difference is also seen in the Santa Barbara
comparison of Frenk et al.\ (1999) when comparisons are made
between the SPH simulations and the highest resolution AMR
simulations carried out in that study (i.e., the `Bryan' AMR
results)\footnote{We note, however, that the lower resolution AMR
simulations in that study produced larger entropy cores, which
suggests that they may not have been numerically converged (as in
the case of the $l=5$ AMR run in Fig.\ 1).}. This consistency
presumably indicates that whatever mechanism is
responsible for the differing core amplitudes in
the cosmological simulations is also responsible for the
differing core amplitudes in our idealised simulations.
This is encouraging, as it implies the generation of the
entropy cores can be studied with idealised simulations.
As outlined in \S 1, the advantage of idealised
simulations over cosmological simulations is their relative
simplicity. This gives us hope that we can use idealised
simulations to track down the underlying cause of the
discrepancy between particle-based and mesh-based
hydrodynamic codes.
The right hand panel of Figure~\ref{entropy_radius} shows
the resulting entropy distributions plotted in a slightly
different fashion. Here we plot the entropy as a function of
`enclosed' gas mass $K(<M_{\rm gas})$. This is constructed
by simply ranking the particles/cells by entropy in ascending
order and then summing the masses of the particles/cells [the
inverse, $M_{\rm gas}(K)$, would therefore be the total mass
of gas with entropy lower than $K$]. Convective stability
ensures that, eventually when the system is fully relaxed,
the lowest-entropy gas will be located at the very centre of
the potential well, while the highest entropy gas will be
located at the system periphery. $K(<M_{\rm gas})$ is
therefore arguably a more fundamental quantity than $K(r)$
and we adopt this test throughout the rest of the paper. It
is also noteworthy that in order to compute $K(<M_{\rm
gas})$ one does not first need to select a reference point
(e.g., the centre of mass or the position of the particle
with the lowest potential energy) or to bin the
particles/cells in any way, both of which could
introduce ambiguities in the comparison between the SPH and
AMR simulations (albeit likely minor ones).
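A minimal sketch of this construction, where \texttt{K} and
\texttt{mass} are arrays of particle (or cell) entropies and
masses:
\begin{verbatim}
import numpy as np

def entropy_vs_enclosed_gas_mass(K, mass):
    # Rank particles/cells by entropy, ascending; the
    # cumulative mass is then the gas mass with entropy
    # lower than each K value.
    order = np.argsort(K)
    K_sorted = K[order]
    M_enclosed = np.cumsum(mass[order])
    return M_enclosed, K_sorted
\end{verbatim}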
In the right hand panel of Figure~\ref{entropy_radius}, we
plot the resulting $K(<M_{\rm gas})$ distributions normalised
to the final entropy distribution of the default {\small GADGET-2}
run. Here we see that the lowest-entropy gas in the
{\small FLASH} runs has a higher entropy, by a factor of $\approx
1.9-2.0$, than the lowest-entropy gas in the default {\small GADGET-2} run.
Naively, looking at the right hand panel of
Figure~\ref{entropy_radius} one might conclude that the
discrepancy is fairly minor, given that $\sim 95$\% of the
gas has been heated to a similar degree in the SPH and AMR
simulations. But it is important to keep in mind that it is
the properties of the lowest-entropy gas in particular that are
most relevant to the issue of heating vs.\ cooling in groups
and clusters (and indeed in haloes of all masses), since
this is the gas that has the shortest cooling time.
The agreement between our results and those from cosmological
simulations (e.g., Frenk et al.\ 1999; VKB05) is striking. The
convergence of the entropy distributions in our idealised simulations
negates the explanation that inadequate resolution of the high
redshift universe in cosmological AMR simulations is the root cause of
the discrepancy between the entropy cores in SPH and AMR
simulations (although we note that some of the lower
resolution AMR simulations in the study of Frenk et al.\ may not
have been fully converged and therefore the discrepancy may
have been somewhat exaggerated in that study for those
simulations).
We therefore conclude that the higher entropy generation in AMR codes
relative to SPH codes within the cores of groups and clusters arises
out of a more fundamental difference in the adopted algorithms. Below
we examine in more detail how the entropy is generated during the
merging process in the simulations and we then systematically explore
several possible causes for the differences in the simulations.
\subsection{An overview of heating in the simulations}
\label{overview}
We have demonstrated that the entropy generation that takes
place in our idealised mergers is robust to our choice of
resolution, yet a difference persists in the amount of
central entropy that is generated in the SPH and mesh
simulations. We now examine the entropy generation as a
function of time in the simulations, which may provide clues
to the origin of the difference between the codes.
Figure~\ref{8panelimage} shows $\log(K)$ in a slice
through the centre of the default {\small FLASH} simulation at times
0, 1, 2, 3, 4, 5, 7 and 10 Gyr. This may be compared to
Figure~\ref{entropygastime}, which shows the entropy
distribution of the simulations as a function of time (this
figure is described in detail below). Briefly, as the cores
approach each other, a relatively gentle shock front forms
between the touching edges of the clusters, with gas being
forced out
perpendicular to the collision axis. Strong heating does
not actually occur until approximately the time when the
cores collide, roughly 1.8 Gyr into the run. The shock
generated through the core collision propagates outwards,
heating material in the outer regions of the system. This
heating causes the gas to expand and actually overshoot
hydrostatic equilibrium. Eventually, the gas, which
remains gravitationally bound to the system, begins to fall
back onto the system, producing a series of weaker secondary
shocks. Gas at the outskirts of the system, which is the
least bound, takes the longest to re-accrete. This
dependence of the time for gas to be re-accreted upon the
distance from the centre results in a more gradual increase
in entropy than seen in the initial core collision.
In a {\it qualitative} sense, the heating process that takes
place in the {\small FLASH} simulations is therefore very similar to
that seen in the {\small GADGET-2} simulations (see \S 3 of McCarthy et
al.\ 2007 for an overview of the entropy evolution in
idealised {\small GADGET-2} mergers).
The top left panel in Figure~\ref{entropygastime} shows the
ratio of $K(<M_{\rm gas})$ in the default {\small FLASH} run relative
to $K(<M_{\rm gas})$ in the default {\small GADGET-2} run. The various
curves represent the ratio at different times during the
simulations (see figure key --- note that these correspond
to the same outputs displayed in Figure~\ref{8panelimage}).
It can clearly be seen that the bulk of the difference in
the {\it final} entropy distributions of the simulations is
established around the time of core collision. The ratio of
the central entropy in the {\small FLASH} simulation to the central
entropy in the {\small GADGET-2} simulation converges after $\approx
4$ Gyr. The top right panel shows the time evolution of the
lowest-entropy gas only in both the {\small GADGET-2} and {\small FLASH} runs.
Here we see there are similar trends with time, in the sense
that there are two main entropy generation episodes (core
collision and re-accretion), but that the entropy
generated in the first event is much larger in the {\small FLASH} run
than in the {\small GADGET-2} run. Far outside the core, however, the
results are very similar. For completeness, the bottom two
panels show $K(<M_{\rm gas})$ at different times for the
{\small GADGET-2} and {\small FLASH} runs separately.
The small initial drop in the central entropy at 1 Gyr in the
{\small FLASH} run (see bottom left panel) is most likely due to
interpolation errors at low resolution. This drop in entropy
should not physically occur without cooling processes (which are
not included in our simulations), but there is nothing to prevent a
dip from occurring in the simulations due to numerical inaccuracies
(the second law of thermodynamics is not explicitly hardwired into
the mesh code). At low resolutions, small violations in entropy
conservation can occur due to inaccurate interpolations made by the
code. We have verified that the small drop in entropy does not
occur in the higher resolution {\small FLASH} runs. We note that while
this effect is present in the default {\small FLASH} run, it is small and, as
demonstrated in Fig.\ 1, the default run is numerically converged.
It is interesting that the {\small FLASH} to {\small GADGET-2} central entropy
ratio converges relatively early on in the simulations. This
is in spite of the fact that a significant fraction of the
entropy that is generated in both simulations is actually
generated at later times, during the re-accretion phase.
Evidently, this phase occurs in a very similar fashion in
both simulations. In \S 4, we will return to the point that
the difference between the results of the AMR and SPH
simulations arises around the time of core collision.
\begin{figure*}
\centering
\leavevmode
\epsfysize=6.0cm \epsfbox{figs/entropy_gasmass_time_norm.eps}
\epsfysize=6.0cm \epsfbox{figs/entropy_gasmass_time.eps}
\epsfysize=6.0cm \epsfbox{figs/entropy_gasmass_time_flash.eps}
\epsfysize=6.0cm \epsfbox{figs/entropy_gasmass_time_gad.eps}
\caption{The time-dependence of entropy generation in the
default {\small GADGET-2} and {\small FLASH} runs. The top left panel shows
the ratio of $K(<M_{\rm gas})$ in the default {\small FLASH} run to
$K(<M_{\rm gas})$ in the default {\small GADGET-2} run. The various
curves represent the ratio at different times during the
simulations (see legend). The top right panel shows the time
evolution of the lowest-entropy gas only in the default runs.
Shown are $K(<M_{\rm gas}/M_{\rm gas,tot} = 0.03)$ (thick
curves) and $K(<M_{\rm gas}/M_{\rm gas,tot} = 0.05)$ (thin
curves) for the {\small FLASH} (long-dashed red curves) and {\small GADGET-2}
(solid blue curves) runs (i.e., having sorted the gas
particles/cells by entropy, we show the evolution of the
entropy that encloses 3\% and 5\% of the total gas mass).
The curves have been normalised to their initial values at
the start of the simulations. The short-dashed black curve
represents the ratio of {\small FLASH} to {\small GADGET-2} entropies enclosing
3\% of the total gas mass. The bottom two panels show the
$K(<M_{\rm gas})$ distributions for the default {\small FLASH} and
{\small GADGET-2} runs separately, at different times during the
simulation. Together, these plots illustrate that the
difference in the final entropy distributions of the {\small FLASH}
and {\small GADGET-2} runs is primarily established around the time of
core collision ($\sim 2-3$ Gyr). It is worth noting, however,
that significant entropy generation continues after this
time, but it occurs in nearly the same fashion in
the AMR and SPH runs. }
\label{entropygastime}
\end{figure*}
\subsection{Alternative setups}
It is important to verify that the conclusions we have drawn
from our default setup are not unique to that specific
initial configuration. Using a suite of merger simulations
of varying mass ratio and orbital parameters, McCarthy et
al.\ (2007) demonstrated that the entropy generation that
takes place does so in a qualitatively similar manner to
that described above in all their simulations. However,
these authors examined only SPH simulations. We have
therefore run several additional merger simulations in both
{\small GADGET-2} and {\small FLASH} to check the robustness of our conclusions.
All of these mergers are carried out using the same resolution
as adopted for the default {\small GADGET-2} and {\small FLASH} runs.
In Figure~\ref{orbit_test}, we plot the final {\small FLASH} to
{\small GADGET-2} $K(<M_{\rm gas})$ ratio for equal mass mergers with
varying orbital parameters (see figure caption). In all
cases, {\small FLASH} systematically produces larger entropy cores
than {\small GADGET-2} , and by a similar factor to that seen in the
default merger setup. Interestingly, the off-axis case
results in a somewhat larger central entropy discrepancy
between {\small GADGET-2} and {\small FLASH} , even though the bulk energetics
of this merger are the same as for the default case. A
fundamental difference between the off-axis case and the
default run is that the former takes a longer time for the
cores to collide and subsequently relax (but note
by the end of the off-axis simulation there is very little
ongoing entropy generation, as in the default case). This
may suggest that the timescale over which entropy is
generated plays some role in setting the magnitude of the
discrepancy between the AMR and SPH simulations. For
example, one possibility is that `pre-shocking' due to the
artificial viscosity (i.e., entropy generation during the early phases
of the collision when the interaction is subsonic or mildly transonic)
in the SPH simulations becomes more
relevant over longer timescales. Another possibility is that
mixing, which is expected to be more prevalent in Eulerian
mesh simulations than in SPH simulations, plays a larger
role if the two clusters spend more time in orbit about each
other before relaxing into a single merged system (of
course, one also expects enhanced mixing in the off-axis
case simply because of the geometry). We explore these and
other possible causes of the difference in \S 4.
In addition to varying the orbital parameters, we have also
experimented with colliding a cluster composed of dark matter
only with another cluster composed of a realistic
mixture of gas and dark matter (in this case, we
simulated the head on merger of two equal mass $10^{15}
M_\odot$ clusters with an initial relative velocity of
$\simeq 1444$ km/s). Obviously, this is not an
astrophysically reasonable setup. However, a number of
studies have suggested that there is a link between the
entropy core in clusters formed in non-radiative
cosmological simulations and the amount of
energy exchanged between the gas and the dark matter in
these systems (e.g., Lin et al.\ 2006; McCarthy et al.\
2007). It is therefore interesting to see whether this
experiment exposes any significant differences with respect
to the results of our default merger simulation.
\begin{figure}
\centering
\includegraphics[width=8.4cm]{figs/orbit_test.eps}
\caption{The ratio of {\small FLASH} to {\small GADGET-2} final entropy
distributions for equal mass mergers of varying initial
orbital parameters. The solid blue curve represents the
default setup (head on collision with an initial relative
velocity of $V_{\rm circ}(r_{200})$). The long-dashed green
and short-dashed cyan curves represent head on collisions
with initial relative velocities of $0.5
V_{\rm circ}(r_{200})$ and $0$ (i.e., at rest initially).
The dot dashed red curve represents an off-axis collision
with an initial relative radial velocity of $\simeq 0.95
V_{\rm circ}(r_{200})$ and an initial relative tangential
velocity of $\simeq 0.312 V_{\rm circ}(r_{200})$ (i.e., the
total energy is equivalent to that of the default setup).
Also shown (dotted magenta curve) is
the entropy ratio of a run where one of the clusters is
composed of dark matter only and the other of a realistic
mixture of gas and dark matter (see text). All these
simulations result in a comparable difference in entropy
profile between the mesh code and the SPH code.
} \label{orbit_test}
\end{figure}
The dotted magenta curve in Figure~\ref{orbit_test}
represents the final {\small FLASH} to {\small GADGET-2} $K(<M_{\rm gas})$
ratio for the case where a dark matter only cluster merges
with another cluster composed of both gas and dark matter.
The results of this test are remarkably similar to that of
our default merger case. This indicates that the mechanism
responsible for the difference in heating in the mesh and
SPH simulations in the default merger simulation is also
operating in this setup. Although this does not pin down the
difference between the mesh and SPH simulations, it does
suggest that the difference has little to do with differences
in the properties of the large hydrodynamic shock that
occurs at core collision, as there is no corresponding large
hydrodynamic shock in the case where one cluster is composed
entirely of dark matter. However, it is clear from
Figure 3 that the difference between the default mesh and SPH
simulations is established around the time of core
collision, implying that some source of heating other than
the large hydrodynamic shock is operating at this time (at
least in the {\small FLASH} simulation). We return to this point in
\S 4.
\section{What Causes The Difference?}
There are fundamental differences between Eulerian mesh-based
and Lagrangian particle-based codes in terms of how they
compute the hydrodynamic and gravitational forces. Ideally,
in the limit of sufficiently high resolution, the two
techniques would yield identical results for a given initial
setup. Indeed, both techniques have been shown to match
with high accuracy a variety of test problems with known
analytic solutions. However, as has been demonstrated above
(and in other recent studies; e.g., Agertz et al.\ 2007;
Trac et al.\ 2007; Wadsley et al.\ 2008) differences that
do not appear to depend on resolution present themselves in
certain complex, but astrophysically-relevant, circumstances.
In what follows, we explore several different possible
causes for why the central heating that takes place in mesh
simulations exceeds that in the SPH simulations. The
possible causes we explore include:
\begin{itemize}
\item{\S~\ref{gravitysolvers} A difference in gravity
solvers - Most currently popular mesh codes (including
{\small FLASH} and {\small ENZO} ) use a particle-mesh (PM) approach to
calculate the gravitational force. To accurately capture
short range forces it is therefore necessary to have a
finely-sampled mesh. By contrast, particle-based codes
(such as {\small GADGET-2} and {\small GASOLINE} ) often make use of tree
algorithms or combined tree particle-mesh (TreePM)
algorithms, where the tree is used to compute the short range
forces and a mesh is used to compute long range forces.
Since the gravitational potential can vary rapidly during
major mergers and large quantities of mass can temporarily be
compressed into small volumes, it is conceivable that
differences in the gravity solvers and/or the adopted
gravitational force resolution could give rise to different
amounts of entropy generation in the simulations.}
\item{\S~\ref{galileaninv} Galilean non-invariance of
mesh codes - Because the Riemann solver's input states depend
explicitly on the position of the grid relative to the fluid, all
Eulerian mesh codes are Galilean non-invariant to some degree. This can
lead to spurious entropy generation in the cores of systems
as they merely translate across the simulation volume (e.g.,
Tasker et al.\ 2008).}
\item{\S~\ref{viscosity} `Pre-shocking' in the SPH
runs -
Artificial viscosity is required in SPH codes to capture the
effects of shock heating. However, the artificial viscosity
can in principle lead to entropy production in regions where
no shocks should be present (e.g., Dolag et al.\ 2005). If
such pre-shocking is significant prior to core collision in
our SPH simulations, it could result in a reduced efficiency
of the primary shock.}
\item{\S~\ref{mixing} A difference in the amount of mixing in
SPH and mesh codes - Mixing will be suppressed in
standard SPH implementations where steep density gradients
are present, since Rayleigh-Taylor and Kelvin-Helmholtz
instabilities are artificially damped in such circumstances
(e.g., Agertz et al.\ 2007). In addition, the standard
implementation of artificial viscosity will damp out even
{\it subsonic} motions in SPH simulations, thereby inhibiting
mixing (Dolag et al.\ 2005). On the other hand, one expects
there to be some degree of over-mixing in mesh codes, since
fluids are implicitly assumed to be fully mixed on scales
smaller than the minimum cell size.}
\end{itemize}

We now investigate each of these possible causes in turn. We
do not claim that these are the only possible causes for the
differences we see in the simulations. They do, however,
represent the most commonly invoked possible solutions
(along with hydrodynamic resolution, which we explored in \S
3) to the entropy core discrepancy between SPH and mesh
codes.
\subsection{Is it due to a difference in the gravity solvers?}
\label{gravitysolvers}
\begin{figure}
\centering
\includegraphics[width=8.4cm]{figs/DMmass_radius_res_test.eps}
\caption{A plot comparing the resulting dark matter mass
distributions for the default merger setup at 10 Gyr. The dark matter
mass profiles have been normalised to the final dark matter
mass profile of the default resolution {\small GADGET-2} run. The
dotted magenta, solid red, short-dashed green, and
long-dashed blue curves represent the {\small FLASH} AMR runs
with $l =$ 5, 6, 7, and 8, respectively, which
correspond to peak grid cell sizes of $\approx 78$, $39$,
$19.5$, and $9.8$ kpc (respectively). The thin solid red
curve represents the uniform $512^3$ {\small FLASH} run. The
gravitational softening length adopted for the {\small GADGET-2} run is
10 kpc. For reference, $r_{200,i} \simeq 2062$ kpc. The
vertical dashed line indicates four softening lengths. The
{\small FLASH} dark matter distribution converges to the {\small GADGET-2}
result when the numerical resolutions become similar: the
observed differences in gas entropy are not due to differences
in the underlying dark matter dynamics.
}
\label{DMconverge}
\end{figure}
In the {\small FLASH} simulations, gravity is computed using a
standard particle-mesh approach. With this approach,
the gravitational force will be computed accurately only
on scales larger than the finest cell size.
By contrast, the {\small GADGET-2} simulations make use of a combined
TreePM approach, where the tree algorithm computes the short
range gravitational forces and the particle-mesh algorithm is
used only to compute long range forces. To test whether or
not differences in the gravity solvers (and/or gravitational
force resolution) are important, we compare the final
mass distributions of the dark matter in our simulations. The
distribution of the dark matter should be insensitive to the
properties of the diffuse baryonic component, since its
contribution to the overall mass budget is small by
comparison to the dark matter.\footnote{We have explicitly
verified this by running a merger between clusters with gas
mass fractions that are a factor of 10 lower than assumed in
our default run.} Thus, the final distribution of the dark
matter tells us primarily about the gravitational interaction alone
between the two clusters.
Figure~\ref{DMconverge} shows the ratio of the final {\small FLASH}
dark matter mass profiles to the final
{\small GADGET-2} dark matter mass profile. Recall that in all runs
the number of dark matter particles is the same. The differences
that are seen in this figure result from solving for the
gravitational potential on a finer mesh. For the
lowest resolution {\small FLASH} run, we see that the final
dark matter mass profile deviates
significantly from that of the default {\small GADGET-2} run for $r
\la 0.04 r_{200,i}$. However, this should not be surprising,
as the minimum cell size in the default {\small FLASH} run is $\sim
0.02 r_{200,i}$. By increasing the maximum refinement level,
$l$, we see that the discrepancy between the
final {\small FLASH} and
{\small GADGET-2} dark matter mass profiles is limited to smaller and
smaller radii. With $l = 8$, the minimum cell
size is equivalent to the gravitational softening length
adopted in the default {\small GADGET-2} run. In this case, the
final dark matter mass distribution agrees with that of the
default {\small GADGET-2} run to within a few percent at all radii
beyond a few softening lengths (or a few cell sizes), which is
all that should be reasonably expected. A comparison of the various
{\small FLASH} runs with one another (compare, e.g., the default {\small FLASH} run
with the $l=8$ run, for which there is a $\sim6$\% discrepancy out as far
as $0.1 r_{200}$) may suggest a somewhat slower rate of convergence to the
default {\small GADGET-2} result than one might naively have expected. Given that
we have tested the new FFT gravity solver against both the default
multigrid solver and a range of simple
analytic problems and confirmed its accuracy to a much higher level than
this, we speculate that the slow rate of convergence is due to the
relatively small number of dark matter particles used in the mesh
simulations. In the future, it would be useful to vary the number of dark
matter particles in the mesh simulations to verify this hypothesis.
In summary, we find that the resulting dark matter distributions
agree very well in the {\small GADGET-2} and {\small FLASH} simulations when the
effective resolutions are comparable. The intrinsic
differences between the solvers therefore appear to be minor.
More importantly for our purposes, even though the gravitational force
resolution for the default {\small FLASH} run is not as high as for
the default {\small GADGET-2} run, this has no important consequences
for the comparison of the final entropy distributions of the
gas. It is important to note that, even though
the final mass distribution in the {\small FLASH} simulations
shows small differences between $l = 6$ and 8,
Figure~\ref{entropy_radius} shows that the entropy
distribution is converged for $l \ge 6$
and is not at all affected by the improvement in the
gravitational potential.
\subsection{Is it due to Galilean non-invariance of grid
hydrodynamics?}
\label{galileaninv}
\begin{figure}
\centering
\includegraphics[width=8.4cm]{figs/flash_galilean.eps}
\caption{Testing the effects of Galilean non-invariance on
the
{\small FLASH} merger simulations. Plotted is the final entropy
distribution, normalised to the initial one, for the default
{\small FLASH} merger simulation and various different `takes' on
the default run. The solid black curve represents the
default
run, the short-dashed blue curves represents a merger where
one cluster is held static and the other given a bulk
velocity
twice that in the default run (i.e., the relative velocity
is
unchanged from the default run), and the dotted blue curve
represents this same merger but with the size of the time
steps reduced by an order of magnitude. The dashed cyan,
green, and red lines represent mergers that take place at an
oblique angle to the mesh: 33 degrees, 45 degrees with $l=7$,
and 45 degrees with $l=8$,
respectively. This comparison illustrates that the
effects of Galilean non-invariance on the resulting entropy
distribution are minor and do not account for the difference
in the entropy core amplitudes of the mesh and SPH
simulations.
}
\label{flashgalilean}
\end{figure}
Due to the nature of Riemann solvers (which are a fundamental
feature of AMR codes), it is possible for the evolution of a
system to be Galilean non-invariant. This arises from the
fixed position of the grid relative to the fluid. The Riemann
shock tube initial conditions are constructed by determining
the amount of material that can influence the cell boundary
from either side, within a given time step based on the sound
speed. The Riemann problem is then solved at
the boundary based on the fluid properties on either side of the boundary.
By applying a bulk velocity to the medium in a given direction, the
nature of the solution changes. Although an ideal solver
would be able to decouple the bulk velocity from the velocity
discontinuity at the shock, the discrete nature of the problem
means that the code may not be Galilean invariant. Since one expects
large bulk motions to be relevant for cosmological structure
formation, and clearly is quite relevant for our merger
simulations, it is important to quantify what effects (if any)
Galilean non-invariance has on our AMR simulations.
We have tested the Galilean non-invariance of our {\small FLASH}
simulations in two ways. In the first test, we simulate an
isolated cluster moving across the mesh with an (initial) bulk
velocity of $V_{\rm circ}(r_{200})$ ($\simeq 1444$ km/s) and
compare it to an isolated cluster with zero bulk velocity.
This is similar to the test carried out recently by Tasker
et al.\ (2008). In agreement with Tasker et al.\ (2008), we
find that there is some spurious generation of entropy in
the very central regions ($M_{\rm gas}/M_{\rm gas,tot} \la
0.03$) of the isolated cluster that was given an initial
bulk motion. However, after $\approx 2$ Gyr of evolution
(i.e., the time when the clusters collide in our default
merger simulation), the increase in the central entropy is
only $\sim 10$\%. This is small in comparison to the
$\sim300$\% jump that takes place at core collision in
our merger simulations. This suggests that spurious
entropy generation prior to the merger is minimal and
does not account for the difference we see between the SPH and
AMR simulations.
In the second test, we consider different implementations of
the default merger simulation. In one case, instead of giving
both systems equal but opposite bulk velocities (each with
magnitude $0.5 V_{\rm circ}$), we fix one and give the other
an initial velocity that is twice the default value, so that
the relative velocity is unchanged. (We also tried reducing
the size of the time steps for this simulation by an order
magnitude.) In addition, we have tried mergers that take
place at oblique angles relative to the grid. If the merger
is well-resolved and the dynamics are Galilean invariant,
all these simulations should yield the same result.
Figure~\ref{flashgalilean} shows the resulting entropy
distributions for these different runs. The results of this
test confirm what was found above; i.e., that there is some
dependence on the reference frame adopted, but that this
effect is minor in general (the central entropy is modified
by $\la 10$\%) and does not account for the discrepancy we
see between entropy core amplitudes in the default {\small GADGET-2}
and {\small FLASH} simulations.
\begin{figure}
\centering
\includegraphics[width=8.4cm]{figs/viscosity_switch.eps}
\caption{Testing the effects of pre-shocking due to
artificial
viscosity in the default {\small GADGET-2} merger simulation. This
plot shows the evolution of the central entropy (enclosing 3\% of
the gas mass) around the time of first core collision. The
solid blue triangles represent the default {\small GADGET-2} simulation. The
solid cyan points, solid green points, and solid red points represent
runs where the artificial viscosity is kept at a very low level
($\alpha_{\rm visc} = 0.05$) until $t \approx$ 1.6, 1.7,
1.8 Gyr, respectively, at which point the artificial viscosity
is set back to its default value. The solid magenta squares
represent a run with low artificial viscosity throughout,
and the open triangles represent the default {\small FLASH}
simulation. Reducing the value of the artificial viscosity parameter
before the cores collide delays the increase in entropy (cyan, green
and solid red), however as soon as the original value is restored, the
entropy $K$ increases to a level nearly independent of when $\alpha_{\rm visc}$
was restored. Therefore pre-shocking has little effect on the
post-shock value of $K$.
}
\label{viscosityswitch}
\end{figure}
\subsection{Is it due to `pre-shocking' in SPH?}
\label{viscosity}
Artificial viscosity is required in SPH codes in order to
handle hydrodynamic shocks. The artificial viscosity acts
as an excess pressure in the equation of motion, converting
gas kinetic energy into internal energy, and therefore raising
the entropy of the gas. In standard SPH implementations, the
magnitude of the artificial viscosity is fixed in both space
and time for particles that are approaching one another (it
being set to zero otherwise). This implies that even in cases
where the Mach number is less than unity, i.e., where formally
a shock should not exist, (spurious) entropy generation can
occur. This raises the possibility that significant
`pre-shocking' could occur in our SPH merger simulations.
This may have the effect of reducing the efficiency of the
large shock that occurs at core collision and could therefore
potentially explain the discrepancy between the mesh and SPH
simulations.
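For reference, a standard Monaghan-style artificial viscosity
term has the schematic form sketched below. This is illustrative
only; {\small GADGET-2} itself uses the signal-velocity formulation
of Monaghan (1997):
\begin{verbatim}
import numpy as np

def pi_visc(v_ij, r_ij, c_ij, rho_ij, h_ij,
            alpha_visc=0.8, beta=1.6, eps=0.01):
    # v_ij, r_ij: relative velocity and separation vectors;
    # c_ij, rho_ij, h_ij: pair-averaged sound speed,
    # density and smoothing length.
    vr = np.dot(v_ij, r_ij)
    if vr >= 0.0:
        return 0.0  # receding pair: viscosity is zero
    mu = h_ij * vr / (np.dot(r_ij, r_ij) + eps * h_ij**2)
    return (-alpha_visc * c_ij * mu + beta * mu**2) / rho_ij
\end{verbatim}
The excess pressure enters the equation of motion only for
approaching particle pairs, which is why entropy can be generated
even in formally subsonic converging flows.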
\begin{figure*}
\centering
\leavevmode
\epsfysize=8.4cm \epsfbox{figs/tracers_entropy.eps}
\epsfysize=8.4cm \epsfbox{figs/tracers_mass.eps}
\caption{Quantifying the amount of heating and mixing in the
default {\small GADGET-2} and {\small FLASH} merger simulations. {\it Left:}
The entropy of particles (tracer particles in the case of
{\small FLASH} ) at time $t$ vs. the initial entropy of those
particles. The solid green line is the line of equality
[$K(t) = K(t=0)$; i.e., no heating]. The
shaded blue and red regions represent the distributions
from the {\small GADGET-2} and {\small FLASH} simulations, respectively. They
enclose 50\% of the particles; i.e., the lower/upper bounds
represent the 25th/75th percentiles for $K(t)$ at
fixed $K(t=0)$. The dashed blue and red lines
represent the median $K(t)$ at fixed $K(t=0)$. The central
entropy in the {\small FLASH} runs increases significantly more than
in the {\small GADGET-2} run when the cores collide, at $t\sim 2$~Gyr;
the subsequent increase in entropy is similar between the two
codes. The scatter in {\small FLASH} entropy is also much larger than
in {\small GADGET-2} . {\it Right:} The enclosed gas mass of particles
(tracer particles in the case of {\small FLASH} ) at time $t$ vs. the
initial enclosed mass of those particles. The enclosed gas
mass of each particle is calculated by summing the masses of
all other particles (or cells) with entropies lower than the
particle under consideration. The solid green line is the
line of equality [$M_{\rm gas}(t) = M_{\rm gas}(t=0)$; i.e.,
no mass
mixing]. The shaded blue and red regions represent the
distributions from the {\small GADGET-2} and {\small FLASH} simulations,
respectively. They enclose 50\% of the particles; i.e., the
lower/upper bounds represent the 25th/75th percentiles for
$M_{\rm gas}(t)$ at fixed $M_{\rm gas}(t=0)$. The dashed
blue and red lines represent the median $M_{\rm gas}(t)$ at
fixed $M_{\rm gas}(t=0)$. Particles in {\small FLASH} mix much more
than in {\small GADGET-2} .}
\label{tracers}
\end{figure*}
Dolag et al.\ (2005) raised this possibility and tested it in
SPH cosmological simulations of massive galaxy clusters.
They implemented a new variable artificial viscosity scheme
by embedding an on-the-fly shock detection algorithm in
{\small GADGET-2} that indicates if particles are in a supersonic flow
or not. If so, the artificial viscosity is set to a typical
value; if not, the artificial viscosity is greatly reduced.
This new implementation should significantly reduce the
amount of pre-shocking that takes place during formation of
the clusters. The resulting clusters indeed had somewhat
higher central entropies relative to clusters
simulated with the standard artificial viscosity
implementation (although the new scheme does not appear to
fully alleviate the discrepancy between mesh and SPH
codes). However, it is unclear whether the central entropy was
raised because of the reduction in pre-shocking or because of
an increase in the amount of mixing.
Our idealised mergers offer an interesting opportunity to
re-examine this test. In particular, because of the
symmetrical geometry of the merger, little or no mixing is
expected until the cores collide, as prior to this time there
is no interpenetration of the gas particles belonging to the
two clusters (we have verified this). This means that we are
in a position to isolate the effects of pre-shocking
from mixing early on in the simulations. To do so, we have
devised a crude method meant to mimic the variable
artificial viscosity scheme of Dolag et al.\ (2005). In
particular, we run the default merger with a low artificial
viscosity (with $\alpha_{\rm visc} = 0.05$, i.e.,
approximately the minimum value adopted by Dolag et al.)
until the cores collide, at which point we switch the
viscosity back to its default value. We then examine the
amount of entropy generated in the large shock.
Figure~\ref{viscosityswitch} shows the evolution of the
central entropy around the time of core collision. Shown are
a few different runs where we switch the artificial
viscosity back to its default value at different times (since
the exact time of `core collision' is somewhat ill-defined).
Here we see that prior to the large shock very little entropy
has been generated, which is expected given the low
artificial viscosity adopted up to this point. A comparison
of these runs to the default {\small GADGET-2} simulation (see inset in
Figure~\ref{viscosityswitch}) shows that there is evidence for a small
amount of pre-shocking in the default run.
However, we find that for the cases where the artificial viscosity is
set to a low value, the resulting entropy jump (after the viscosity is
switched back to the default value) is nearly the same as in the
default merger simulation. In other words, pre-shocking appears to
have had a minimal effect on the strength of the heating that
occurs at core collision in the default SPH simulation. This argues
against pre-shocking as the cause of the difference we see between
the mesh and SPH codes.
Lastly, we have also tried varying $\alpha_{\rm visc}$
over the range $0.5$ to $1.0$ (i.e.,
values typically adopted in SPH studies; Springel 2005b) for the
default {\small GADGET-2} run. We find that the SPH results are robust to
variations in $\alpha_{\rm visc}$, and that such variations cannot
reconcile the differences between the SPH and AMR results.
\begin{figure*}
\centering
\leavevmode
\epsfysize=8.4cm \epsfbox{figs/gadget_tracers.eps}
\epsfysize=8.4cm \epsfbox{figs/flash_tracers.eps}
\caption{The final spatial distribution of particles
(tracer particles in the case of {\small FLASH}) with the lowest
initial entropies (we select the central 5\% of
particles/tracer particles in both clusters). The blue
points represent particles belonging to one of the clusters and the
red points represent particles belonging to the other. {\it
Left:} The low resolution {\small GADGET-2} simulation. {\it Right:} The
default {\small FLASH} simulation. There is a high degree of
mixing in the mesh simulation, whereas there remain two
distinct blobs corresponding to the infallen clusters in the SPH
simulation. The difference arises immediately following core collision
through the turbulent mixing that it drives.
}
\label{particles}
\end{figure*}
\subsection{Is it due to a difference in the amount of
mixing in SPH and mesh codes?}
\label{mixing}
Our experiments with off-axis collisions and collisions with
a cluster containing only dark matter suggest that mixing
plays an important role in generating the differences between
the codes.
Several recent studies (e.g., Dolag et al.\ 2005; Wadsley et
al.\ 2008) have argued that mixing is handled poorly in
standard implementations of SPH, both because (standard)
artificial viscosity acts to damp turbulent motions and
because the growth of KH and RT instabilities is inhibited in
regions where steep density gradients are present (Agertz et
al.\ 2007). Using cosmological SPH simulations that have been
modified in order to enhance mixing\footnote{Note that the
modifications implemented by Dolag et al.\ (2005) and
Wadsley et al.\ (2008) differ. As described in \S 4.3,
Dolag et al.\ (2005) implemented a variable artificial
viscosity, whereas Wadsley et al.\ (2008) introduced a
turbulent heat flux term to the Lagrangian energy
equation in an attempt to explicitly model turbulent
dissipation.}, Dolag et al.\ (2005) and Wadsley et al. (2008)
have shown that it is possible to generate higher central
entropies in their galaxy clusters (relative to clusters
simulated using standard implementations of SPH), yielding
closer agreement with the results of cosmological mesh
simulations. This is certainly suggestive that mixing may be
the primary cause of the discrepancy between mesh and SPH
codes. However, these authors did not run mesh
simulations of galaxy clusters and therefore did not perform
a direct comparison of the amount of mixing in SPH vs. mesh
simulations of clusters. Even if one were to directly
compare cosmological SPH and mesh cluster simulations, the
complexity of the cosmological environment and the
hierarchical growth of clusters would make it difficult to
clearly demonstrate that mixing is indeed the difference.
Our idealised mergers offer a potentially much cleaner way to
test the mixing hypothesis. To do so, we re-run the default
{\small FLASH} merger simulation but this time we include a large
number of `tracer particles', which are massless and follow
the hydrodynamic flow of the gas during the simulation.
The tracer particles are advanced using a second order
accurate predictor-corrector time advancement scheme with the
particle velocities being interpolated from the grid (further
details are given in the {\small FLASH}
manual (version 2.5) at: \\ http://flash.uchicago.edu/).
Each tracer particle has a unique ID that is preserved
throughout the simulation, allowing us to track
the gas in a Lagrangian fashion, precisely as is done in
Lagrangian SPH simulations. To simplify the comparison
further, we initially distribute the tracer particles within the
two clusters in our {\small FLASH} simulation in exactly the same way
as the particles in our initial {\small GADGET-2} setup.
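To make the tracer update concrete, the following Python sketch
illustrates a generic second-order predictor-corrector (Heun-type)
step of the kind described above. The function name
\texttt{grid\_velocity} and the assumption of a quasi-static velocity
field over the step are our own simplifications; the actual
{\small FLASH} implementation may differ in detail.
\begin{verbatim}
import numpy as np

def advance_tracers(pos, dt, grid_velocity):
    """One second-order predictor-corrector step for
    massless tracers. pos: (N, 3) positions;
    grid_velocity: callable interpolating the gas
    velocity from the grid to arbitrary positions."""
    v0 = grid_velocity(pos)            # velocity at current positions
    pos_pred = pos + dt * v0           # predictor (forward Euler)
    v1 = grid_velocity(pos_pred)       # velocity at predicted positions
    return pos + 0.5 * dt * (v0 + v1)  # corrector (second order)
\end{verbatim}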
In the left hand panel of Figure~\ref{tracers}, we plot the
final vs. the initial entropy of particles in the default {\small GADGET-2} and
{\small FLASH} merger simulations. This plot clearly
demonstrates that the lowest-entropy gas is preferentially
heated in both simulations; however, the degree of heating of
that gas in the mesh simulation is much higher than in the SPH
simulation. Consistent with our analysis in \S 3, we find
that the bulk of this difference is established around the
time of core collision. It is also interesting that the
scatter in the final entropy (for a given initial entropy)
is much larger in the mesh simulation. The larger scatter
implies that convective mixing is more prevalent in
the mesh simulation. At or immediately following core
collision ($t \approx 2-3$ Gyr), there is an indication
that, typically, gas initially at the very centre of the two
clusters (which initially had the lowest entropy) has been
heated more strongly than gas further out [compare, e.g.,
the median $K(t=5 {\rm Gyr})$ at $K(t=0)/K_{200,i} \approx
0.02$ to the median at $K(t=0)/K_{200,i} \approx 0.08$].
Such an entropy inversion does not occur in the SPH
simulations and likely signals that the extra mixing in the
mesh simulation has boosted entropy production.
\begin{figure*}
\centering
\leavevmode
\epsfysize=8.4cm \epsfbox{figs/entropy_frame_023_gadget.eps}
\epsfysize=8.4cm \epsfbox{figs/entropy_frame_023_flash.eps}
\caption{Logarithmic projected entropy maps of the default
{\small GADGET-2} simulation and {\small FLASH} simulation with $l=8$ at $t =
2.3$ Gyr, just after the collision of the
cores. Note that the peak spatial resolutions of the two simulations are
similar (approx.\ 10 kpc; we also note that the {\it median} SPH
smoothing length for the default {\small GADGET-2} run is $\approx 20$ kpc within
the mixing region, $r \la 200$ kpc.) To highlight the central regions,
we have reset the
value of any pixel with projected entropy greater than $0.5
K_{200,i}$ to $0.5 K_{200,i}$. In
these maps, the minimum entropy (black) is $\approx 0.07
K_{200,i}$ and the maximum entropy (white) is $0.5
K_{200,i}$. The maps are 2~Mpc on a side, and project over
a depth of 2~Mpc. Note that the images are not directly comparable
with Figure~2, where a slice is shown. {\it Left:}
the {\small GADGET-2} simulation. {\it Right:} the {\small FLASH}
simulation. The {\small FLASH} entropy distribution is characterised
by vortices on a range of scales, which mix gas with
different entropies. These vortices are mostly absent in the
{\small GADGET-2} simulation.}
\label{instabilities}
\end{figure*}
In the right hand panel of Figure~\ref{tracers} we plot the
final vs. the initial enclosed gas mass of particles in the
default {\small GADGET-2} and {\small FLASH} merger simulations. The enclosed
gas mass of each particle (or tracer particle) is calculated
by summing the masses of all other particles (or cells) with
entropies lower than the particle under
consideration\footnote{In convective equilibrium, the
enclosed gas mass calculated in this way also corresponds
to the total mass of gas of all other particles (or cells)
within the cluster-centric radius (or at lower, more negative
gravitational potential energies) of the particle under
consideration. We have verified this for the final output
when the merged system has relaxed: for the $i$th particle, instead
of summing the masses of all particles with entropy lower than $K_i$,
we sum the masses of all particles with potentials lower
than $\Phi_i$.} This plot confirms
our mixing expectations based on the entropy plot in the
left hand panel. In particular, only a small amount of mass
mixing is seen in the SPH simulation, whereas in the mesh
simulation the central $\sim 5\%$ of the gas mass has been
fully mixed.
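The enclosed-mass statistic used in this comparison is
straightforward to compute. The following Python sketch is our own
illustration (the array names are assumptions and entropy ties are
ignored): it ranks particles by entropy and accumulates the mass
below each one.
\begin{verbatim}
import numpy as np

def enclosed_gas_mass(mass, entropy):
    """For each particle, sum the masses of all other
    particles (or cells) with lower entropy."""
    order = np.argsort(entropy)        # rank particles by entropy
    below = np.concatenate(([0.0], np.cumsum(mass[order])[:-1]))
    enclosed = np.empty_like(mass)
    enclosed[order] = below            # mass below each particle
    return enclosed

# comparing enclosed_gas_mass at t=0 and at time t, particle
# by particle, quantifies the amount of mass mixing
\end{verbatim}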
The higher degree of mixing in the {\small FLASH} simulation is shown
pictorially in Figure~\ref{particles}. The left panel shows
the final spatial distribution of the initially
lowest-entropy particles in the {\small GADGET-2} simulation, while the
right panel is the analogous plot for tracer particles in
the {\small FLASH} simulation (see figure caption). The larger
degree of mixing in the mesh simulation relative to the SPH
simulation is clearly evident. In the {\small FLASH} simulation, particles
from the two clusters are intermingled in the final state,
while distinct red and blue regions are readily apparent in
the SPH calculation, a difference which arises immediately
following core collision.
The increased mixing boosts entropy production in the {\small FLASH}
simulations, but what is the origin of the increased
mixing? We now return to the point raised in \S 3, that the
bulk of the difference between the mesh and SPH simulations is
established around the time of core collision. This is in
spite of the fact that significant entropy generation
proceeds in both simulations until $t \sim 6$ Gyr. Evidently,
both codes treat the entropy generation in the re-accretion
phase in a very similar manner. What is different about the
initial core collision phase? As pointed out recently by
Agertz et al.\ (2007), SPH suppresses the growth of
instabilities in regions where steep density gradients
are present due to spurious pressure forces acting on the
particles. Could this effect be responsible for the
difference we see? To test this idea, we generate 2D
projected entropy maps of the SPH and mesh simulations to
search for signs of clear instability development. In the
case of the SPH simulation, we first
smooth the particle entropies (and densities) onto a 3D grid
using the SPH smoothing kernel and the smoothing lengths of
each particle computed by {\small GADGET-2}. We then compute a
gas mass-weighted projected entropy by projecting along the
$z$-axis. In the case
of the {\small FLASH} simulation, the cell entropies and densities
are interpolated onto a 3D grid and projected in the same
manner as for the {\small GADGET-2} simulation.
Figure~\ref{instabilities} shows a snapshot of the two simulations
at $t = 2.3$ Gyr, just after core collision. Large
vortices and eddies are easily visible in the projected entropy
map of the {\small FLASH} simulation but none are evident in the
{\small GADGET-2} simulation. In order to study the duration of
these eddies, we have generated 100 such snapshots
for each simulation, separated by fixed 0.1 Gyr intervals. Analysing the
projected entropy maps as a movie\footnote{For movies see {\em
``Research: Cores in Simulated Clusters''} at http://www.icc.dur.ac.uk/}, we
find that these large vortices and eddies persist in the {\small FLASH}
simulation
from $t \approx 1.8 - 3.2$ Gyr. This corresponds very well with
the timescale over which the difference between the SPH and mesh
codes is established (see, e.g., the dashed black curve in the top
right panel of Figure~\ref{entropygastime}).
We therefore conclude that extra mixing in the mesh
simulations, brought on by the growth of instabilities around
the time of core collision, is largely responsible for the
difference in the final entropy core amplitudes between the
mesh and SPH simulations. Physically, one expects the development
of such instabilities, since the KH timescale, $\tau_{\rm KH}$, is
relatively short around the time of core collision.
There is thus a degree of under-mixing in
the SPH simulations\footnote{But we note that very
high resolution 2D `blob' simulations carried out by Springel
(2005b) do clearly show evidence for vortices. It is presently
unclear if these are a consequence of the very different
physical setup explored in that study (note that the gas density
gradients are much smaller than in the present study) or the
extremely high resolution used in their 2D simulations, and
whether or not these
vortices lead to enhanced mixing and entropy production.}.
Whether or not the {\small FLASH} simulations
yield the correct result, however, is harder to ascertain.
As fluids are {\it forced} to numerically mix on scales
smaller than the minimum cell size, it is possible that there
is a non-negligible degree of over-mixing in the mesh
simulations. Our resolution tests (see \S 3) show evidence
for the default mesh simulation being converged, but it may
be that the resolution needs to be increased by much larger
factors than we have tried (or are presently accessible with
current hardware and software) in order to see a difference.
\section{Summary and Discussion}
In this paper, we set out to investigate the origin of the
discrepancy in the entropy structure of clusters formed in
Eulerian mesh-based simulations compared to those formed in
Lagrangian SPH simulations. While SPH simulations form
clusters with almost powerlaw entropy distributions down to
small radii, Eulerian simulations form much larger cores with
the entropy distribution being truncated at significantly
higher values. Previously it has been suspected that this
discrepancy arose from the limited resolution of the mesh
based methods, making it impossible for such codes to
accurately trace the formation of dense gas structures at
high redshift.
By running simulations of the merging of idealised clusters,
we have shown that this is not the origin of the discrepancy.
We used the {\small GADGET-2} code (Springel 2005b) to compute the SPH
solution and the {\small FLASH} code (Fryxell et al.\ 2000) to
compute the Eulerian mesh solution. In these idealised
simulations, the initial gas density structure is resolved
from the onset of the simulations, yet the final
entropy distributions are significantly different. The
magnitude of the difference generated in idealised mergers
is comparable to that seen in the final clusters formed in
full cosmological simulations. A resolution study shows
that the discrepancy in the idealised simulations cannot be
attributed to a difference in the effective resolutions of
the simulations. Thus, the origin of the discrepancy must
lie in the codes' different treatments of gravity and/or
hydrodynamics.
We considered various causes in some detail. We found that
the difference was {\em not} due to:
\begin{itemize}
\item{The use of different gravity solvers. The two codes
differ in that {\small GADGET-2} uses a TreePM method to determine
forces, while {\small FLASH} uses the PM method alone. The different
force resolutions of the codes could plausibly lead to
differences in the energy transfer between gas and dark
matter. Yet we find that the dark matter distributions
produced by the two codes are almost identical when the mesh
code is run at comparable resolution to the SPH code.}
\item{Galilean non-invariance of mesh codes. We investigate
whether the results are changed if we change the rest-frame
defined by the hydrodynamic mesh. Although we find that an
artificial core can be generated in this way in the mesh
code, its size is much smaller than the core formed once the
clusters collide, and is not enough to explain the difference
between {\small FLASH} and {\small GADGET-2} . We show that most of
this entropy difference is generated in the space of $\sim 1$
Gyr when the cluster cores first collide.}
\item{Pre-shocking in SPH. We consider the possibility that
the artificial viscosity of the SPH method might generate
entropy in the flow prior to the core collision, thus
reducing the efficiency with which entropy is
generated later. By greatly reducing the artificial
viscosity ahead of the core collision, we show that this
effect is negligible.}
\end{itemize}
Having shown that none of these numerical issues can explain
the difference of the final entropy distributions, we
investigated the role of fluid mixing in the two codes.
Several recent studies (e.g., Dolag et al.\ 2005; Wadsley et
al.\ 2008) have argued that if one increases the amount
of mixing in SPH simulations the result is larger cluster
entropy cores that resemble the AMR results. While this is
certainly suggestive, it does not clearly demonstrate that it
is the enhanced mixing in mesh simulations that is indeed the
main driver of the difference (a larger entropy core in the
mesh simulations need not necessarily have been established
by mixing). By injecting tracer particles into our {\small FLASH}
simulations, we have been able to make an explicit comparison
of the amount of mixing in the SPH and mesh simulations of
clusters. We find very substantial differences. In the SPH
computation, there is a very close relation between the
initial entropy of a particle and its final entropy. In
contrast, tracer particles in the {\small FLASH} simulation only
show a close connection for high initial entropies. The
lowest $\sim5$\% of gas (by initial entropy) is completely
mixed in the {\small FLASH} simulation. We conclude that mixing and
fluid instabilities are the cause of the discrepancy between
the simulation methods.
The origin of this mixing is closely connected to the
suppression of turbulence in SPH codes compared to the
Eulerian methods. This can easily be seen by comparing the
flow structure when the clusters collide: while the {\small FLASH}
image is dominated by large scale eddies, these are absent
from the SPH realisation (see Figure~\ref{instabilities}). It
is now established that SPH codes tend to suppress the
growth of Kelvin-Helmholtz instabilities in shear flows, and
this seems to be the origin of the differences in our
simulation results (e.g., Agertz et al.\ 2007). These
structures result in entropy generation through mixing, an
irreversible process whose role is underestimated by the SPH
method. Of course, it is not clear that the turbulent
structures are correctly captured in the mesh simulations
(Iapichino \& Niemeyer 2008; Wadsley et al.\ 2008). The mesh
forces fluids to be mixed on the scale of individual cells.
In nature, this is achieved through turbulent cascades that
mix material on progressively smaller and smaller scales: the
mesh code may well overestimate the speed and effectiveness
of this process. Ultimately, deep X-ray observations may be
able to tell us whether the mixing that occurs in the
mesh simulations is too efficient. An attempt at
studying large-scale turbulence in clusters was made recently
by Schuecker et al.\ (2004). Their analysis of {\it
XMM-Newton} observations of the Coma cluster indicated the
presence of a scale-invariant pressure fluctuation spectrum
on scales of 40-90 kpc and found that it could be
well described by a projected Kolmogorov/Oboukhov-type
turbulence spectrum. If the observed pressure fluctuations
are indeed driven by scale-invariant turbulence, this would
suggest that current mesh simulations have the resolution
required to accurately treat the turbulent mixing process.
Alternatively, several authors have suggested that the ICM may be highly
viscous (e.g., Fabian et al.\ 2003) with the result that fluid
instabilities will be strongly suppressed by
physical processes. This might favour the use of SPH methods which
include a physical viscosity (Sijacki \& Springel 2006).
It is a significant advance that we now understand the
origin of this long-standing discrepancy. Our work also has
several important implications. Firstly, as outlined in \S 1,
there has been much discussion in the recent literature on
the competition between heating and cooling in galaxy
groups and clusters. The current consensus is that heating
from AGN is approximately sufficient to offset cooling losses
in observed cool core clusters (e.g., McNamara \& Nulsen
2007). However, observed present-day AGN power output seems
energetically incapable of explaining the large number of
systems that do not possess cool cores\footnote{Recent
estimates suggest that $\sim50$\% of all massive X-ray
clusters in flux-limited samples do not
have cool cores (e.g., Chen et al.\ 2007). Since at fixed
mass cool core clusters tend to be more luminous than
non-cool core clusters, the fraction of non-cool core
clusters in flux-limited samples may actually be an
underestimate of the true fraction.} (McCarthy et al.\ 2008).
Recent high resolution X-ray observations demonstrate that
these systems have higher central entropies than typical cool
core clusters (e.g., Dunn \& Fabian 2008). One way of
getting around the energetics issue is to invoke an early
episode of preheating (e.g., Kaiser 1991; Evrard \& Henry 1991).
Energetically, it is more efficient to raise the entropy of
the (proto-)ICM prior to it having fallen into the cluster
potential well, as its density would have been much lower
than it is today (McCarthy et al.\ 2008). Preheating remains
an attractive explanation for these systems.
However, as we have seen from our idealised merger
simulations, the
amount of central entropy generated in our mesh simulations
is significant and is even comparable to the levels observed
in the central regions of non-cool core clusters. It is
therefore tempting to invoke mergers and the mixing they
induce as an explanation for these systems. However, before
a definitive statement to this effect can be made, much
larger regions of parameter space should be explored.
In particular, a much larger range of impact parameters and
mass ratios is required, in addition to switching on the
effects of radiative cooling (which we have neglected in the
present study). This would be the mesh code analog of the
SPH study carried out by Poole et al. (2006; see
also Poole et al.\ 2008). We leave this for future work.
Alternatively, large cosmological mesh simulations, which
self-consistently track the hierarchical growth of clusters,
would be useful for testing the merger hypothesis. Indeed,
Burns et al.\ (2008) have recently carried out a large mesh
cosmological simulation (with the {\small ENZO} code) and argue that
mergers at high redshift play an important role in the
establishment of present-day entropy cores. However, these
results appear to be at odds with the cosmological mesh
simulations (run with the {\small ART} code) of Nagai et al.\ (2007)
(see also Kravtsov et al.\ 2005). These authors find that
most of their clusters have large cooling flows at the
present-day, similar to what is seen in some SPH cosmological
simulations (e.g., Kay et al.\ 2004; Borgani et al.\ 2006).
On the other hand, the SPH simulations of Keres et al.\ (2008)
appear to yield clusters with large entropy cores. This may be ascribed to
the lack of effective feedback in their simulations, as radiative cooling
selectively removes the lowest entropy gas (see, e.g., Bryan 2000;
Voit et al.\ 2002), leaving only high entropy (long cooling time)
gas remaining in the simulated clusters. However, all the simulations
just mentioned suffer from the overcooling problem (Balogh et al.\ 2001),
so it is not clear to what extent the large entropy cores in clusters
{\it in either mesh or SPH} simulations are produced by shock heating,
overcooling, or both. All of these simulations
implement different prescriptions for radiative cooling (e.g.,
metal-dependent or not), star formation, and feedback, and this may
lie at the heart of the different findings. A new generation of
cosmological code comparisons will be essential in sorting out these
apparently discrepant findings. The focus should not only be on
understanding the differences in the hot gas properties, but also on the
distribution and amount of stellar matter, as the evolution of the cold
and hot baryons are obviously intimately linked. Reasonably tight limits
on the amount of baryonic mass in the form of stars now exist (see,
e.g., Balogh et al.\ 2008) and provide a useful target for the next
generation of simulations. At present, merger-induced mixing
as an explanation for intrinsic scatter in the hot gas properties
of groups and clusters remains an open question.
Secondly, we have learnt a great deal about the nature of gas
accretion and the development of hot gas haloes from SPH
simulations of the universe. Since we now see that these
simulations may underestimate the degree of mixing that
occurs, which of these results are robust and which need
revision? For example, Keres et al.\ (2005) (among others)
have argued that cold accretion by galaxies plays a dominant
role in fuelling the star formation in galaxies. Is it
plausible that turbulent eddies could disrupt and mix such
cold streams as they try to penetrate through the hot halo?
We can estimate the significance of the effect by comparing
the Kelvin-Helmholtz timescale, $\tau_{\rm KH}$, with the
free-fall time, $\tau_{\rm FF}$. The KH timescale is given
by (see, e.g., Nulsen 1982; Price 2008)
\begin{equation}
\tau_{\rm KH} \equiv \frac{2 \pi}{\omega}
\end{equation}
\noindent where
\begin{equation}
\omega = k\, \frac{(\rho \rho')^{1/2}\, v_{\rm rel}}{\rho+\rho'}
\end{equation}
\noindent and $\rho$ is the density of the hot halo, $\rho'$
is the density of the cold stream, $k$ is the wave number of
the instability, and $v_{\rm rel}$ the velocity of the stream
relative to the hot halo. If the stream and hot halo are in
approximate pressure equilibrium, this implies a large
density contrast (e.g., a $10^4$ K stream falling into a
$10^6$ K hot halo of a Milky Way-type system would imply a
density contrast of $100$). In the limit of $\rho' \gg
\rho$ and recognising that the mode responsible for the
destruction of the stream is comparable to the size of the
stream (i.e., $k \sim 2 \pi / r'$), eqns.\ (7) and (8) reduce
to:
\begin{equation}
\tau_{\rm KH} \approx \frac{r'}{v_{\rm rel}}\biggl(\frac{\rho'}{\rho}\biggr)^{1/2}
\end{equation}
Adopting $\rho'/\rho = 100$, $r' = 100$ kpc, and $v_{\rm rel}
= 200$ km/s (perhaps typical numbers for a cold stream
falling into a Milky Way-type system), we find $\tau_{\rm
KH} \sim 5$ Gyr. The free-fall time, $\tau_{\rm FF} = R_{\rm
vir}/V_{\rm circ}(R_{\rm vir})$ [where $R_{\rm vir}$ is the
virial
radius of main system and $V_{\rm circ}(R_{\rm vir})$ is the
circular velocity of the main system at its virial radius],
is $\sim 1$ Gyr for a Milky Way-type system with mass
$M_{\rm vir} \sim 10^{12} M_\odot$. On this basis, it
seems that the stream would be stable because of the large
density contrast in the flows. It is clear, however, that
the universality of these effects need to be treated with
caution, as the free-fall and KH timescales are not vastly
discrepant. High resolution mesh simulations (cosmological
or idealised) of Milky Way-like systems would provide a
valuable check of the SPH results.
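These order-of-magnitude estimates are easy to reproduce; the short
Python sketch below evaluates the reduced KH timescale and the
free-fall time for the fiducial numbers quoted above. The adopted
$R_{\rm vir} = 200$ kpc and $V_{\rm circ} = 200$ km/s are
illustrative values for a $\sim 10^{12} M_\odot$ halo, not taken
from any simulation.
\begin{verbatim}
import numpy as np

KM_PER_KPC = 3.086e16   # km in one kpc
S_PER_GYR  = 3.156e16   # seconds in one Gyr

def tau_kh(r_kpc, v_kms, contrast):
    """(r'/v_rel) * sqrt(rho'/rho), in Gyr."""
    t_s = (r_kpc * KM_PER_KPC / v_kms) * np.sqrt(contrast)
    return t_s / S_PER_GYR

def tau_ff(rvir_kpc, vcirc_kms):
    """R_vir / V_circ(R_vir), in Gyr."""
    return (rvir_kpc * KM_PER_KPC / vcirc_kms) / S_PER_GYR

print(tau_kh(100.0, 200.0, 100.0))  # ~4.9 Gyr, cf. ~5 Gyr above
print(tau_ff(200.0, 200.0))         # ~1 Gyr
\end{verbatim}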
Finally, the SPH method has great advantages in terms of
computational speed, effective resolution and Galilean
invariance. Is it therefore possible to keep these advantages
and add additional small scale transport processes to the
code in order to offset the suppression of mixing? Wadsley
et al.\ (2008) and Price (2008) have presented possible
approaches based on including a thermal diffusion term in
the SPH equations. Although the approaches differ in their
mathematical details, the overall effect is the same.
However, it is not yet clear how well
this approach will work in cosmological simulations that
include cooling (and feedback), since the thermal diffusion
must be carefully controlled to avoid unphysical suppression
of cooling in hydrostatic regions (e.g., Dolag et al.\ 2005).
One possibility might be to incorporate such terms as a
negative surface tension in regions of large entropy
contrast (Hu \& Adams 2006). An alternative approach is to
combine the best features of the SPH method, such as the way
that it continuously adapts to the local gas density and
its flow, with the advantage of a Riemann based method of
solving the fluid dynamic equations (e.g., Inutsuka 2002).
Clearly, there is a great need to find simple problems in
which to test these codes: simple shock tube experiments are
not sufficient because they do not include the disordered
fluid motions that are responsible for generating the
entropy core. Idealised mergers represent a step forward,
but the problem is still not sufficiently simple that it is
possible to use self-similar scaling techniques (e.g.,
Bertschinger 1985, 1989) to establish the correct solution.
One possibility is to consider the generation of turbulent
eddies in a fluctuating gravitational potential. We have
begun such experiments, but (although the fluid flow patterns
are clearly different) simply passing a gravitational
potential through a uniform plasma at constant velocity does
not expose the differences between SPH and Eulerian mesh
based methods that we see in the idealised merger case. We
will tackle the minimum complexity that is needed to generate
these differences in a future paper.
\section*{Acknowledgements}
The authors thank the referee for a careful reading of the manuscript
and suggestions that improved the paper. They also thank Volker
Springel, Mark Voit, and Michael Balogh for very helpful discussions. NLM
and RAC acknowledge support from STFC studentships. IGMcC acknowledges
support from a NSERC Postdoctoral Fellowship at the ICC in Durham and a
Kavli Institute Fellowship at the Kavli Institute for Cosmology,
Cambridge. These simulations were performed and analysed on
COSMA, the HPC facilities at the ICC in Durham, and we gratefully
acknowledge Lydia Heck for her kind computing support. The
{\small FLASH} software used in this work was in part developed by the
DOE-supported ASC / Alliance Center for Astrophysical Thermonuclear
Flashes at the University of Chicago.
\section{Introduction}
Recent developments in point cloud data research have witnessed the emergence of many supervised approaches \cite{Qi2017,qi2017pointnet++,wang2019dynamic,li2018pointcnn,wang2019graph}. Most efforts of current research are dedicated to two tasks: point cloud shape classification (a.k.a. shape recognition) and point cloud segmentation (a.k.a. semantic segmentation). For both tasks, the success of the state-of-the-art methods is attributed mostly to the deep learning architecture \cite{Qi2017} and the availability of large amounts of labelled 3D point cloud data \cite{mo2019partnet,armeni20163d}.
Although the community is still focused on pushing forward in the former direction, we believe the latter issue, i.e. data annotation, is an overlooked bottleneck. In particular, it is assumed that all points
for the point cloud segmentation task
are provided with ground-truth labels, which often amounts to 1k to 10k points for a 3D shape \cite{yi2016scalable,mo2019partnet}. The order of magnitude increases drastically to millions of points for a real indoor scene \cite{landrieu2018large}. As a result, very accurate labels for billions of points are needed in a dataset to train good segmentation models. Despite the development of modern annotation toolkits \cite{mo2019partnet,armeni20163d} to facilitate large-scale annotation, exhaustive labelling is still prohibitively expensive for ever-growing new datasets.
\begin{figure}
\centering
\includegraphics[width=1.02\linewidth]{./Figure/WeakSupConcept_v3.pdf}
\vspace{-0.5cm}
\caption{Illustration of the weak supervision concept in this work. Our approach achieves
segmentation with only a fraction of labelled points.}\label{fig:WeakSupConcept}
\vspace{-0.5cm}
\end{figure}
In this work, we raise the question of whether it is possible to learn a point cloud segmentation model with only partially labelled points and, if so, how many labels are enough for good segmentation. This problem is often referred to as weakly supervised learning in the literature \cite{Zhou2017review}, as illustrated in Fig.~\ref{fig:WeakSupConcept}. To the best of our knowledge, only a handful of works have tried to address related problems \cite{guinard2017weakly,mei2019semantic}. In \cite{guinard2017weakly}, a non-parametric conditional random field (CRF) classifier is proposed to capture the geometric structure for weakly supervised segmentation. However, it casts the task into a pure structured optimization problem, and thus fails to capture the context, e.g. spatial and color cues. A method for semi-supervised 3D LiDAR data segmentation is proposed in
\cite{mei2019semantic}. It converts 3D points into a depth map with CNNs applied for feature learning, and the semi-supervised constraint is generated from the temporal consistency of the LiDAR scans. Consequently, it is not applicable to general 3D point cloud segmentation.
To enable weakly supervised segmentation with strong contextual modelling ability on generic 3D point cloud data, we choose to build upon state-of-the-art deep neural networks for learning point cloud feature embeddings \cite{Qi2017,wang2019dynamic}. Given partially labelled point cloud data, we employ an incomplete supervision branch with a softmax cross-entropy loss that penalizes only the labelled points. We observe that such a simple strategy can succeed even with 10$\times$ fewer labels, i.e. when only $10\%$ of the points are labelled. This is because the learning gradient of the incomplete supervision can be considered a sampling approximation of the full supervision. In Sect.~\ref{sect:IncompleteSup}, we show analytically that the approximated gradient converges to the true gradient in distribution, with the gap subject to a normal distribution whose variance is inversely proportional to the number of sampled points. As a result, the approximated gradient is close to the true gradient given enough labelled points. The analysis also gives insight into choosing the best annotation strategy under a fixed budget. We conclude that it is always better to extensively annotate more samples with fewer labelled points in each sample than to intensively label fewer samples with more (or fully) labelled points.
As the above method imposes constraints only on the labelled points, we propose additional constraints on the unlabelled points in three orthogonal directions. First, we introduce an additional inexact supervision branch which defines a sample-level cross-entropy loss
in a similar way to multi-instance learning \cite{Zhou2015,ilse2018attention}. It aims to suppress the activation of any point with respect to the negative categories. Second, we introduce a Siamese self-supervision branch by augmenting the training sample with a random in-plane rotation and flipping, and then encouraging the original and augmented point-wise predictions to be consistent. Finally, we make the observation that semantic parts/objects are often continuous in the spatial and color spaces. To this end, we propose a spatial and color smoothness constraint to encourage spatially adjacent points with similar color to have the same prediction. This constraint can be further applied at the inference stage by solving a soft constrained optimization that resembles label propagation on a graph \cite{zhu2003semi}. Our proposed network is illustrated in Fig.~\ref{fig:Network}.
\begin{figure}
\centering
\includegraphics[width=1.05\linewidth]{./Figure/Network_v3.pdf}
\caption{Our network architecture for weakly supervised point cloud segmentation. Red lines indicate back propagation flow.
}\label{fig:Network}
\vspace{-0.5cm}
\end{figure}
\textbf{Our contributions} are fourfold. i) To the best of our knowledge, this is the first work to investigate weakly supervised point cloud segmentation within a deep learning context. ii) We give an explanation for the success of weak supervision and provide insight into annotation strategy under a fixed labelling budget. iii) We adopt three additional losses based on inexact supervision, self-supervision and spatial and color smoothness to further constrain unlabelled data. iv) Experiments are carried out on three public datasets which serve as benchmarks to encourage future research.
\section{Related Work}
Weakly supervised learning aims to use weaker annotations, often in the form of partially labelled datasets or samples. In this work, we follow the definition of weak supervision made by \cite{Zhou2017review}. More specifically, we are concerned with two types of weak supervision: incomplete and inexact supervision.
\vspace{-0.45cm}
\paragraph{Incomplete Supervision.} This is also referred to as semi-supervised learning in the literature \cite{zhu2003semi,belkin2006manifold,papandreou2015weakly,bearman2016s,Laine_ICLR17,Kipf_ICLR17,iscen2019label}. We interchangeably use semi-supervised, weakly supervised and weak supervision in this paper to refer to this type of supervision.
It is assumed that only partial instances are labelled, e.g. only a few images are labelled for the recognition task \cite{zhu2003semi,zhou2004learning,iscen2019label}, a few bounding boxes or pixels are labelled for the image segmentation task \cite{papandreou2015weakly,bearman2016s} or a few nodes are labelled for graph inference \cite{Kipf_ICLR17}. The success is often attributed to the exploitation of problem specific assumptions including graph manifold \cite{zhu2003semi,belkin2006manifold,Kipf_ICLR17}, spatial and color continuity \cite{papandreou2015weakly,bearman2016s}, etc.
Another line of work is based on ensemble learning by introducing additional constraints such as consistency between original and altered data, e.g. the addition of noise \cite{rasmus2015semi}, rotation \cite{Laine_ICLR17} or adversarial training \cite{miyato2018virtual}. This has further inspired ensemble approaches \cite{tarvainen2017mean,radosavovic2018data} akin to data distillation. Up till now, most of these works have emphasized large-scale image data, while very few works have addressed point cloud data. \cite{mei2019semantic} proposes a semi-supervised framework for point cloud segmentation. However, it does not directly learn from point cloud data, and the required amount of annotation is quite large. \cite{guinard2017weakly} proposes to exploit geometric homogeneity and formulates a CRF-like inference framework. Nonetheless, it is purely optimization-based, and thus fails to capture the spatial relation between semantic labels. In this work, we make use of state-of-the-art deep neural networks and incorporate additional spatial constraints to further regularize the model. Thus we take advantage of both the spatial context captured by deep models and geometric priors.
\vspace{-0.45cm}
\paragraph{Inexact Supervision.} This is also referred to as weak annotation in the image segmentation community \cite{Kolesnikov16,shi2016weakly}. These works aim to infer per-pixel predictions from image-level annotations \cite{Kolesnikov16,shi2016weakly} for image segmentation tasks. The class activation map (CAM) \cite{Zhou2015} is proposed to highlight the attention of a CNN trained with discriminative supervision. It is proven to be a good prior model for weakly supervised segmentation \cite{Kolesnikov16,Wang2018}. Inexact supervision is often complementary to incomplete supervision, and therefore, it is also used to improve semi-supervised image segmentation \cite{bearman2016s}. In this work, we introduce inexact supervision as a complement to incomplete supervision for the task of point cloud segmentation.
\vspace{-0.45cm}
\paragraph{Point Cloud Analysis.} Point cloud analysis is applied to 3D shapes and has received much attention in recent years. PointNet \cite{Qi2017} was initially proposed to learn 3D point cloud features through cascaded multi-layer perceptrons (mlps) for point cloud classification and segmentation. Subsequent works \cite{qi2017pointnet++,wang2019dynamic,li2018pointcnn,wang2018deep,landrieu2018large} exploit local geometry through local pooling or graph convolution. Among all tasks of point cloud analysis, semantic segmentation is of high importance due to its potential applications in robotics, and existing works rely on learning classifiers at the point level \cite{Qi2017}. However, this paradigm requires exhaustive point-level labelling and does not scale well. To resolve this issue, we propose a weakly supervised approach that requires only a fraction of points to be labelled. We also note that \cite{te2018rgcnn} proposes to add spatial smoothness regularization to the training objective, and \cite{choy20194d} proposes to refine predictions via a CRF. Nevertheless, both works require full supervision, while our work is based on a more challenging weak supervision setting.
\section{Methodology}
\subsection{Point Cloud Encoder Network}
We formally denote the input point cloud data as $\{\matr{X}_b\}_{b=1\cdots B}$ with $B$ individual shapes (e.g. shape segmentation) or room blocks (e.g. indoor point cloud segmentation). Each sample $\matr{X}_b\in \set{R}^{N\times F}$ consists of $N$ 3D points with the xyz coordinates and possibly additional features, e.g. RGB values. Each sample is further accompanied by per-point segmentation labels $\vect{y}_b\in \{1,\cdots K\}^{N}$, e.g. fuselage, wing and engine of a plane.
For clarity, we denote the one-hot encoded label as $\matr{\hat{Y}}\in\{0,1\}^{B\times N\times K}$.
A point cloud encoder network $f(\matr{X};\Theta)$ parameterized by $\Theta$ is employed to obtain the embedded point cloud features $\matr{Z}_b\in \set{R}^{N\times K}$. We note that the dimension of the embedding is the same as the number of segmentation categories.
Recent developments in point cloud deep learning \cite{Qi2017,qi2017pointnet++,li2018pointcnn} provide many candidate encoder networks, which are evaluated in the experiment section.
\subsection{Incomplete Supervision Branch}\label{sect:IncompleteSup}
We assume that only a few points in the point cloud samples $\{\matr{X}_b\}$ are labelled with the ground-truth. Specifically, we denote a binary mask as $\matr{M}\in\{0,1\}^{B\times N}$, which is 1 for a labelled point and 0 otherwise.
Furthermore, we define a softmax cross-entropy loss on the labelled point as
\begin{equation}
l_{seg} = -\frac{1}{C}\sum_b\sum_im_{bi}\sum_k \hat{y}_{bik}\log\frac{\exp(z_{bik})}{\sum_k \exp(z_{bik})},
\end{equation}
where $C=\sum_{b,i}m_{bi}=||\matr{M}||_1$ is the normalization variable.
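As a concrete illustration (ours, not released code), the masked loss
can be implemented in a few lines; the NumPy sketch below computes
$l_{seg}$ from the logits, one-hot labels and binary mask defined above.
\begin{verbatim}
import numpy as np

def weak_seg_loss(logits, y_onehot, mask):
    """Masked softmax cross-entropy l_seg.
    logits:   (B, N, K) network outputs Z
    y_onehot: (B, N, K) labels (arbitrary where unlabelled)
    mask:     (B, N)    1 for labelled points, 0 otherwise"""
    z = logits - logits.max(axis=-1, keepdims=True)  # stability
    logp = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    ce = -(y_onehot * logp).sum(axis=-1)             # (B, N)
    return (mask * ce).sum() / mask.sum()            # divide by C
\end{verbatim}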
\vspace{-0.4cm}
\paragraph{Discussion:} According to the experiments, we found that
our method yields competitive results with as few as $10\%$ of the points labelled, i.e. $||\matr{M}||_1/(B\cdot N)=0.1$. The rationale is detailed in the following.
We first assume that two networks with similar weights -- one trained with full supervision and the other with weak supervision -- should produce similar results.
Assuming that both networks start with an identical initialization, the higher similarity of the gradients at each step means a higher chance for the two networks to converge to similar results. Now, we write the gradients with full supervision $\nabla_\Theta l_f$ and weak supervision $\nabla_\Theta l_w$ as
\begin{equation}\label{eq:Gradient}
\vspace{-0.2cm}
\begin{split}
\nabla_\Theta l_{f} &= \frac{1}{B\cdot N}\sum_b\sum_i\sum_k \nabla_\Theta l_{bik}, \quad \text{and} \\
\nabla_\Theta l_{w} &= \frac{1}{C}\sum_b\sum_im_{bi}\sum_k \nabla_\Theta l_{bik}, \\
\text{where}\quad l_{bik}&=-\hat{y}_{bik}\text{log}\frac{\exp(z_{bik})}{\sum_k \exp(z_{bik})}.
\end{split}
\end{equation}
This relation is also illustrated in Fig.~\ref{fig:IncompleteSup}.
\begin{figure}[b]
\includegraphics[width=1\linewidth]{./Figure/IncompleteSup.pdf}
\caption{Illustration of incomplete supervision and labeling strategies with fixed budget.}\label{fig:IncompleteSup}
\vspace{-0.5cm}
\end{figure}
At each training step, the direction of the learning gradient is the mean of the gradients calculated with respect to each individual point. Suppose that the $\nabla_\Theta l_{bik}$ are i.i.d. with expectation $E[\nabla_\Theta l_{bik}]=\mu$ and variance $Var[\nabla_\Theta l_{bik}]=\sigma^2$, and denote the sample mean over $n$ points as $S_n$. We can easily verify that $E[\nabla_\Theta l_{bik}]=\nabla_\Theta l_f$ and $S_n=\nabla_\Theta l_w$ with $n=C=||\matr{M}||_1$. According to the Central Limit Theorem, we have the following convergence in distribution:
\begin{equation}
\vspace{-0.2cm}
\begin{split}
&\sqrt{n}(S_n-\mu)\xrightarrow[]{d}\set{N}(0,\sigma^2),\\
\Rightarrow&\sqrt{||\matr{M}||_1}(\nabla_\Theta l_w-\nabla_\Theta l_f)\xrightarrow[]{d}\set{N}(0,\sigma^2),\\
\Rightarrow&(\nabla_\Theta l_w-\nabla_\Theta l_f)\xrightarrow[]{d}\set{N}(0,\sigma^2/{||\matr{M}||_1}).
\end{split}
\end{equation}
This basically indicates that the difference between the gradient of full supervision and weak supervision is subjected to a normal distribution with variance $\sigma^2/||\matr{M}||_1$. Consequently, a sufficient number of labelled points, i.e. sufficiently large $||\matr{M}||_1$, is able to approximate $\nabla_\Theta l_f$ well with $\nabla_\Theta l_w$. Although the value of $\sigma$ is hard to estimate in advance, we empirically found that
our method yields results comparable to full supervision
with 10$\times$ fewer labelled points.
The analysis also provides additional insight into data annotation under a fixed budget. For example, with $50\%$ of the total points to be labelled as illustrated in Fig.~\ref{fig:IncompleteSup} (right):
should we label 50\% of the points in each sample (Scheme 1) or label all the points in only 50\% of the samples (Scheme 2)?
From the above analysis, it is apparent that Scheme 1 is better than Scheme 2 since it is closer to the i.i.d. assumption. This is further backed up by experiments in Sect.~\ref{sect:PtsVsSamp}.
\subsection{Inexact Supervision Branch}
In addition to the Incomplete Supervision Branch, a so-called inexact supervision
accompanies the annotation. Assuming each part has at least one labelled point, every training sample $\matr{X}_b$ is accompanied by an inexact label $\vect{\bar{y}}_b=\max_i\vect{\hat{y}}_{bi}$, obtained simply by max-pooling over all points. Consequently, the inexact supervision branch is constructed in a similar fashion to multi-instance learning \cite{pathak2014fully,ilse2018attention}. The feature embedding $\matr{Z}_b$ is first globally max-pooled, i.e. $\vect{\bar{z}}_{b}=\max_i \vect{z}_{bi}$.
We then introduce a loss for the inexact supervision branch. Since $\vect{\bar{z}}_b$ defines the logits on each category, the sigmoid cross entropy can be adopted as
\begin{equation}
\begin{split}
l_{mil} = &-\frac{1}{B\cdot K}\sum\limits_b\sum\limits_k \bar{y}_{bk}\log\frac{1}{1+\exp(-\bar{z}_{bk})} \\
&+ (1-\bar{y}_{bk})(\log(\frac{\exp(-\bar{z}_{bk})}{1+\exp(-\bar{z}_{bk})})).
\end{split}
\end{equation}
The rationale is that for those part categories that are absent from the sample, no points should be predicted with high logits. The incomplete supervision branch is supervised on only a tiny fraction of labelled points, while the inexact supervision branch is supervised at the sample level with all points involved, so they are complementary to each other.
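A minimal NumPy sketch of this branch (again our own illustration)
pools the point logits and applies a numerically stable element-wise
sigmoid cross-entropy:
\begin{verbatim}
import numpy as np

def weak_mil_loss(logits, y_bar):
    """Inexact supervision loss l_mil.
    logits: (B, N, K) point logits Z
    y_bar:  (B, K)    1 if category k is present in sample b"""
    z_bar = logits.max(axis=1)               # global max-pool
    log_sig  = -np.logaddexp(0.0, -z_bar)    # log sigmoid(z)
    log_nsig = -np.logaddexp(0.0,  z_bar)    # log(1 - sigmoid(z))
    ce = -(y_bar * log_sig + (1.0 - y_bar) * log_nsig)
    return ce.mean()                         # average over B*K
\end{verbatim}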
\subsection{Siamese Self-Supervision}
Despite the above two losses, the majority of the unlabelled points are still not subject to any constraint. We believe additional constraints on those points can potentially further improve the results. To this end, we first introduce a Siamese self-supervision structure. We make the assumption that the prediction for any point is invariant to rotation and mirror flipping. This assumption in particular holds true for 3D CAD shapes and indoor scenes with rotation in the XoY plane, e.g. the semantic label should not change with the view angle in a room. With this in mind, we design a Siamese network structure with two shared-parameter encoders $f_1(\matr{X})$ and $f_2(\matr{X})$. Then given a training sample $\matr{X}$, we apply a random transformation that consists of a random mirroring along the X and/or Y axes and an XoY plane rotation, i.e.
\vspace{-0.2cm}
\begin{equation}\label{eq:Tform}
\resizebox{0.90\linewidth}{!}{
$
\matr{R}=
\begin{bmatrix}
\cos\theta & -\sin\theta & 0 \\
\sin\theta & \cos\theta & 0 \\
0 & 0 & 1
\end{bmatrix}\cdot
\begin{bmatrix}
(2a-1)c & (2b-1)(1-c) & 0 \\
(2a-1)(1-c) & (2b-1)c & 0 \\
0 & 0 & 1
\end{bmatrix},
$
}
\end{equation}
where $\theta\sim \mathcal{U}(0,2\pi)$ (uniform distribution) and $a,b,c\sim\mathcal{B}(1,0.5)$ (Bernoulli distribution). Specifically, the first matrix controls the degree of rotation and the second matrix controls mirroring and X,Y swapping.
With the augmented sample denoted as $\tilde{\matr{X}}=\matr{X}\matr{R}^\top$, the rotation-invariance constraint is turned into minimizing the divergence between the probabilistic predictions $g(f_1(\matr{X}))$ and $g(f_2(\tilde{\matr{X}}))$, where $g(\cdot)$ is the softmax function. We use the L2 distance to measure the divergence:
\begin{equation}
l_{sia}=\frac{1}{B\cdot N \cdot K}\sum_b||g(f_1(\matr{X}_b)) - g(f_2(\tilde{\matr{X}}_b))||_F^2,
\end{equation}
and empirically find it to be better than the KL-divergence.
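For reference, a NumPy sketch of the random transformation of
Eq.~(\ref{eq:Tform}) and the consistency loss is given below; this is
our own illustration rather than the exact training code.
\begin{verbatim}
import numpy as np

def random_transform(rng):
    """Random in-plane rotation and mirror/swap, Eq. (5)."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    a, b, c = rng.integers(0, 2, size=3)   # Bernoulli(0.5) draws
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
    mir = np.array([[(2*a-1)*c,     (2*b-1)*(1-c), 0.0],
                    [(2*a-1)*(1-c), (2*b-1)*c,     0.0],
                    [0.0,           0.0,           1.0]])
    return rot @ mir

def siamese_loss(prob, prob_aug):
    """Mean squared divergence between the two softmax
    predictions, i.e. l_sia for (B, N, K) arrays."""
    return np.mean((prob - prob_aug) ** 2)

rng = np.random.default_rng(0)
X = rng.standard_normal((2048, 3))        # one sample, xyz only
X_aug = X @ random_transform(rng).T       # X~ = X R^T
\end{verbatim}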
\subsection{Spatial \& Color Smoothness Constraint}\label{sect:Smooth}
Semantic labels for 3D shapes or scenes are usually smooth in both the spatial and color spaces. Although such priors can be implicitly captured by state-of-the-art convolution networks \cite{wang2019graph}, explicit constraints are more beneficial in our weak supervision context, where the embeddings of the large number of unlabelled points are not well constrained by the segmentation loss. Consequently, we introduce additional constraints at both the training and inference stages.
\vspace{-0.4cm}
\paragraph{Spatial \& Color Manifold.} A manifold can be defined on the point cloud to account for the local geometry and color by a graph. We denote the 3D coordinate channels and RGB channels, if any, as $\matr{X^{xyz}}$ and $\matr{X^{rgb}}$, respectively. To construct a graph for the manifold, we first compute the pairwise distance $\matr{P}_c\in\set{R}^{N\times N}$ for channel $c$ (xyz or rgb) as $p_{ij}^c = ||\vect{x}_{i}^c-\vect{x}_{j}^c||_2,~\forall i,j\in \{1,\cdots N\}$.
A k-nn graph can then be constructed by searching for the k nearest neighbors $NN_k(\vect{x})$ of each point, and the corresponding weight matrix $\matr{W}^c\in\set{R}^{N\times N}$ is written as
\begin{equation}
\resizebox{0.90\linewidth}{!}{
$
w_{ij}^c=\left\{
\begin{array}{ll}
\text{exp}(-p_{ij}^c/\eta),\quad j\in NN_k(\vect{x}_i)\\
0, \quad \text{otherwise}
\end{array}, \forall i,j\in \{1,\cdots N\}.
\right.
$
}
\end{equation}
We take the sum of both weight matrices as $w_{ij}=w_{ij}^{xyz} + w_{ij}^{rgb}~\forall i,j$ to produce a more reliable manifold
when both xyz and rgb channels are available. This is reasonable since the xyz channel alone tends to blur part boundaries, while the rgb channel alone may link faraway points. In case the manifold constructed from spatial distance and color contradicts the labelled ground-truth, we add must-link and must-not-link constraints \cite{wang2014constrained} to $\matr{W}$ to strengthen compliance with the known annotations, i.e.
\begin{equation}\label{eq:MustLink}
w_{ij} = \left\{
\begin{array}{ll}
1,\quad m_{i},m_j=1, y_i = y_j\\
-1, \quad m_{i},m_j=1, y_i \neq y_j
\end{array}.
\right.
\end{equation}
We further write the Laplacian matrix \cite{belkin2006manifold} as $\matr{L}=\matr{D}-\matr{W}$
with the degree matrix denoted as $\matr{D}=diag(\vect{d})$ \cite{von2007tutorial} and $d_i=\sum_j w_{ij}, \forall i\in \{1\cdots N\}$.
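The graph construction is summarized by the NumPy sketch below (our
illustration). We symmetrize the k-nn weights so that the Laplacian is
well defined, a common choice that the equations above leave implicit;
the dense $O(N^2)$ distance computation is for clarity only.
\begin{verbatim}
import numpy as np

def knn_weights(x, k=10, eta=1e3):
    """k-nn weight matrix for one channel (xyz or rgb)."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    w = np.zeros_like(d)
    nn = np.argsort(d, axis=1)[:, 1:k + 1]   # k nearest, skip self
    rows = np.arange(x.shape[0])[:, None]
    w[rows, nn] = np.exp(-d[rows, nn] / eta)
    return np.maximum(w, w.T)                # symmetrize

def laplacian(xyz, rgb=None, k=10, eta=1e3):
    """Combined weights w^xyz + w^rgb and L = D - W."""
    W = knn_weights(xyz, k, eta)
    if rgb is not None:
        W = W + knn_weights(rgb, k, eta)
    return np.diag(W.sum(axis=1)) - W, W
\end{verbatim}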
\vspace{-0.4cm}
\paragraph{Training Stage.}
We introduce a manifold regularizer \cite{belkin2006manifold} to encourage the feature embedding of each point to comply with the manifold obtained previously. More specifically, the prediction $f(\vect{x}_i)$ should stay close to $f(\vect{x}_j)$ if $w_{ij}$ is high, and remain unconstrained otherwise. Thus the regularizer is given by
\begin{equation}
\resizebox{0.85\linewidth}{!}{
$
\begin{split}
l_{smo}&=\frac{1}{||\matr{W}||_0}\sum_i\sum_j w_{ij}||f(\vect{x}_i)-f(\vect{x}_j)||_2^2\\
&=\frac{2}{||\matr{W}||_0}(\sum_i d_{i}f(\vect{x}_i)^\top f(\vect{x}_i) - \sum_i \sum_j w_{ij} f(\vect{x}_i)^\top f(\vect{x}_j))\\
&=\frac{2}{||\matr{W}||_0}(tr(\matr{Z}^\top\matr{D}\matr{Z}) - tr(\matr{Z}^\top\matr{W}\matr{Z}))=\frac{2}{||\matr{W}||_0}tr(\matr{Z}^\top\matr{L}\matr{Z}),
\end{split}
$
}
\end{equation}
where $\matr{Z}$ is the prediction of all points.
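Given the Laplacian from the sketch above, the regularizer itself is a
one-liner (again our own illustration):
\begin{verbatim}
import numpy as np

def smoothness_loss(Z, L, W):
    """l_smo = (2 / ||W||_0) * tr(Z^T L Z)."""
    return 2.0 * np.trace(Z.T @ L @ Z) / np.count_nonzero(W)
\end{verbatim}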
\vspace{-0.4cm}
\paragraph{Inference Stage.} It is well known in image segmentation that the predictions of a CNN do not respect object boundaries well \cite{chen2017deeplab,Kolesnikov16}, and a CRF is often employed to refine the raw predictions. In weakly supervised point cloud segmentation, this issue is exacerbated by the limited labels. To mitigate this problem, we introduce a semi-supervised label propagation procedure \cite{zhu2003semi} to refine the predictions. Specifically, the refined predictions $\matr{\tilde{Z}}$ should comply with the spatial and color manifold defined by the Laplacian $\matr{L}$, and at the same time should not deviate too much from the network predictions $\matr{Z}$. We write the objective as
\begin{equation}
\begin{split}
&\min_{\{\vect{\tilde{z}}\}} \sum_i\sum_j w_{ij}||\vect{\tilde{z}}_i-\vect{\tilde{z}}_j||_2^2 + \gamma \sum_i ||\vect{\tilde{z}}_i-\vect{z}_i||_2^2, \\
\implies &\min_{\matr{\tilde{Z}}} tr(\matr{\tilde{Z}}^\top\matr{L}\matr{\tilde{Z}}) + \gamma ||\matr{\tilde{Z}} - \matr{Z}||_F^2.
\end{split} \raisetag{20pt}
\end{equation}
A closed-form solution exists for the above optimization \cite{zhu2003semi} and the final prediction for each point is simply obtained via
\begin{equation}\label{eq:LPClosedForm}
\begin{split}
&\tilde{y}_i=\argmax_k {\tilde{z}}_{ik},~\forall i\in\{1,\cdots N\}, \quad \text{where}\\
&\matr{\tilde{Z}}=\gamma(\gamma \matr{I} + \matr{L})^{-1}\matr{Z}.
\end{split}
\end{equation}
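At inference, the refinement of Eq.~(\ref{eq:LPClosedForm}) amounts to
one linear solve per sample; the sketch below (ours) uses
\texttt{np.linalg.solve} rather than an explicit inverse for numerical
stability.
\begin{verbatim}
import numpy as np

def label_propagation(Z, L, gamma=1.0):
    """Closed-form refinement Z~ = gamma (gamma I + L)^{-1} Z,
    followed by the per-point arg-max."""
    N = Z.shape[0]
    Z_ref = gamma * np.linalg.solve(gamma * np.eye(N) + L, Z)
    return Z_ref.argmax(axis=1), Z_ref
\end{verbatim}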
\subsection{Training}
The final training objective is the combination of all the above objectives, i.e. $l_{total}=l_{seg}+\lambda_1 l_{mil}+\lambda_2 l_{sia}+\lambda_3 l_{smo}$. We empirically set $\lambda_1,\lambda_2,\lambda_3=1$. The k-nn graph is constructed with $k=10$ and $\eta=10^3$, and $\gamma$ in Eq.~(\ref{eq:LPClosedForm}) is chosen to be 1. For efficient training, we first train the network with the segmentation loss $l_{seg}$ only for 100 epochs, and then train with the total loss $l_{total}$ for another 100 epochs. The default learning rate decay and batchnorm decay are preserved during training
of the different encoder networks. The initial learning rate is fixed at $10^{-3}$ for all experiments, and the batch size varies from 5 to 32 for the different datasets, bounded by the GPU memory size. Our algorithm is summarized in Algo.~\ref{alg:WeakSupSeg}.
\begin{algorithm}[hb]
\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up}
\SetKwFunction{Concatenate}{Concatenate}\SetKwFunction{Kmeans}{K-means}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{\small{Point Cloud $\{\matr{X}_b\in \set{R}^{N\times D}\}$, Labels $\{\matr{y}_b\in\set{Z}^{N}\}$}}
\Output{\small{Segmentation Predictions $\{\vect{\tilde{y}}_b\in\set{Z}^{N}\}$}}
\tcc{Training Stage:}
\For{$epoch \leftarrow 1$ \KwTo $100$}
{
Train One Epoch: $\Theta=\Theta - \alpha\nabla_\Theta l_{seg}|_{\{\matr{X}_b\},\{\vect{y}_b\}}$\;
}
\For{$epoch \leftarrow 1$ \KwTo $100$}
{
\tcp{Siamese Network}
Sample $\phi\sim \mathcal{U}(0,2\pi)$ and $a,b,c\sim\mathcal{B}(1,0.5)$\;
Calculate $\matr{R}$ according to Eq.~(\ref{eq:Tform})\;
Generate augmented sample $\matr{\tilde{X}}=\matr{X}\matr{R}^\top$\;
\tcp{Manifold Regularization}
Construct Laplacian $\matr{L}$ according to Sect.~\ref{sect:Smooth}\;
Train one epoch: $\Theta=\Theta - \alpha\nabla_\Theta l_{total}|_{\{\matr{X}_b\},\{\matr{\tilde{X}}_b\},\{\vect{y}_b\}}$\;
}
\tcc{Inference Stage:}
Forward pass $\matr{Z}_b=f(\matr{\tilde{X}}_b;\Theta)$\;
Obtain predictions $\{\vect{\tilde{y}}_b\}$ via Eq.~(\ref{eq:LPClosedForm})\;
\caption{\small{Weakly Supervised Point Cloud Segmentation}\label{alg:WeakSupSeg}}
\end{algorithm}
\section{Experiment}
\subsection{Dataset}
We conduct experiments with our weakly supervised segmentation model on three benchmark point cloud datasets. \textbf{ShapeNet} \cite{yi2016scalable} is a CAD model dataset with 16,881 shapes from 16 categories, annotated with 50 parts in total. It is widely used as a benchmark for classification and segmentation evaluation.
We propose a weakly supervised setting: for each training sample we randomly select a subset of points from each part to be labelled. We use the default evaluation protocol for comparison. \textbf{PartNet} \cite{mo2019partnet} is proposed for more fine-grained point cloud learning. It consists of 24 unique shape categories with a total of 26,671 shapes. For the semantic segmentation task, it involves three levels of fine-grained annotation and we choose to evaluate at level 1. The incomplete weakly supervised setting is created in a similar way to ShapeNet, and we follow the original evaluation protocol. \textbf{S3DIS} \cite{armeni20163d} is proposed for indoor scene understanding. It consists of 6 areas, each covering several rooms. Each room is scanned with RGBD sensors and is represented by a point cloud with xyz coordinates and RGB values. For the weakly supervised setting, we assume a subset of points is uniformly labelled within each room. The evaluation protocol with Area 5 held out is adopted.
\subsection{Weakly Supervised Segmentation}
Two weak supervision settings are studied. i) 1 point label (1pt): we assume there is only 1 point within each category labelled with ground-truth. Less than $0.8\%$ of total points are labelled for ShapeNet under the 1pt scheme; for S3DIS, the fraction of labelled points is less than $0.2\%$. ii) 10 percent label ($10\%$): we uniformly label $10\%$ of all points for each training sample.
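Generating such weak annotations amounts to masking the dense labels; a sketch assuming per-sample label arrays follows (function and variable names are our own illustration):
\begin{verbatim}
import numpy as np

def make_weak_labels(y, num_per_class=1, seed=0):
    # Keep `num_per_class` labelled points per part category (1pt
    # scheme); for the 10% scheme, keep a random 10% subset instead.
    rng = np.random.default_rng(seed)
    mask = np.zeros_like(y, dtype=bool)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        keep = rng.choice(idx, size=min(num_per_class, len(idx)),
                          replace=False)
        mask[keep] = True
    return mask
\end{verbatim}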
\vspace{-0.4cm}
\paragraph{Encoder Network.} We choose DGCNN \cite{wang2019dynamic} with default parameters as our encoder network due to its superior performance on benchmark shape segmentation and high training efficiency. However, as we point out in Sect.~\ref{sect:EncoderNet}, the proposed weakly supervised methods are compatible with alternative encoder networks.
\vspace{-0.4cm}
\paragraph{Comparisons.} We compare against 3 sub-categories of methods. i) Fully supervised approaches (Ful.Sup.), including the state-of-the-art networks for point cloud segmentation. These methods serve as the upper bound of weakly supervised approaches. ii) Weakly supervised approaches (Weak Sup.): we implemented several generic weakly supervised methods and adapted them to the point cloud segmentation task. In particular, the following methods are compared.
The $\Pi$ model \cite{Laine_ICLR17} enforces consistency between predictions on the original and augmented inputs, but without incomplete supervision on the augmented branch. The mean teacher (MT) \cite{tarvainen2017mean} model employs a temporal ensemble for semi-supervised learning. The baseline method is implemented with only the segmentation loss $l_{seg}$ and DGCNN as encoder. Our final approach (Ours) is trained with the multi-task total loss $l_{total}$ and uses label propagation in the inference stage. iii) Unsupervised approaches: these methods do not rely on any annotations but instead directly infer clusters from the spatial and color affinities. Specifically, we experiment with K-means and normalized cut spectral clustering \cite{shi2000normalized} (Ncut). Both methods are provided with the ground-truth number of parts.
\vspace{-0.4cm}
\paragraph{Evaluation.} For all datasets, we calculate the mean Intersect over Union (mIoU) for each test sample, and report the average mIoU over all samples (SampAvg) and all categories (CatAvg). For unsupervised methods, we find the best permutation between the prediction and ground-truth and then calculate the same mIoU metrics.
\vspace{-0.4cm}
\paragraph{ShapeNet.} We present the results in Tab.~\ref{tab:ShapeNet}, where we make the following observations. Firstly, the weak supervision model produces very competitive results with only 1 labelled point per part category: the gap between full supervision and 1 point weak supervision is less than $12\%$. Secondly, we observe consistent improvement in segmentation performance with more labelled points, from 1pt to 10$\%$. Interestingly, the weak supervision model is comparable to full supervision even with only 10$\%$ labelled points. Lastly, our proposed method, which combines multiple losses and label propagation, improves upon the baseline consistently and outperforms alternative generic semi-supervised learning approaches and unsupervised clustering methods.
\begin{table*}[htbp]
\centering
\caption{\footnotesize{mIoU ($\%$) evaluation on the ShapeNet dataset. The fully supervised (Ful. Sup.) methods are trained on $100\%$ labelled points. Two levels of weak supervision (1pt and $10\%$) are compared. Our method consists of DGCNN as the encoder net with the MIL branch, Siamese branch, smooth branch and inference label propagation.}}\vspace{-0.3cm}
\setlength\tabcolsep{2pt}
\resizebox{0.99\linewidth}{!}{
\begin{tabular}{cclcccccccccccccccccc}
\toprule
\multicolumn{2}{c}{Setting} & \multicolumn{1}{c}{Model} & CatAvg & SampAvg & Air. & Bag & Cap & Car & Chair & Ear. & Guitar & Knife & Lamp & Lap. & Motor. & Mug & Pistol & Rocket & Skate. & Table \\
\midrule
\multicolumn{2}{c}{\multirow{3}[2]{*}{\begin{sideways}\textbf{\footnotesize{Ful.Sup.}}\end{sideways}}} & PointNet\cite{Qi2017} & 80.4 & 83.7 & 83.4 & 78.7 & 82.5 & 74.9 & 89.6 & 73.0 & 91.5 & 85.9 & 80.8 & 95.3 & 65.2 & 93.0 & 81.2 & 57.9 & 72.8 & 80.6 \\
\multicolumn{2}{c}{} & PointNet++\cite{qi2017pointnet++} & 81.9 & 85.1 & 82.4 & 79.0 & \textbf{87.7} & \textbf{77.3} & 90.8 & 71.8 & 91.0 & 85.9 & \textbf{83.7} & 95.3 & \textbf{71.6} & \textbf{94.1} & 81.3 & 58.7 & \textbf{76.4} & \textbf{82.6} \\
\multicolumn{2}{c}{} & DGCNN\cite{wang2019dynamic} & \textbf{82.3} & \textbf{85.1} & \textbf{84.2} & \textbf{83.7} & 84.4 & 77.1 & \textbf{90.9} & \textbf{78.5} & \textbf{91.5} & \textbf{87.3} & 82.9 & \textbf{96.0} & 67.8 & 93.3 & \textbf{82.6} & \textbf{59.7} & 75.5 & 82.0 \\
\midrule
\multicolumn{2}{c}{\multirow{2}[1]{*}{\begin{sideways}\textbf{\footnotesize{Unsup.}}\end{sideways}}} & Kmeans & 39.4 & 39.6 & 36.3 & 34.0 & 49.7 & 18.0 & 48.0 & 37.5 & 47.3 & 75.6 & 42.0 & 69.7 & 16.6 & 30.3 & 43.3 & 33.1 & 17.4 & 31.7 \\
\multicolumn{2}{c}{} & Ncut\cite{shi2000normalized} & 43.5 & 43.2 & 41.0 & 38.0 & 53.4 & 20.0 & 52.1 & 41.1 & 52.1 & 83.5 & 46.1 & 77.5 & 18.0 & 33.5 & 48.0 & 36.5 & 19.6 & 35.0 \\
\midrule
\multirow{10}[3]{*}{\begin{sideways}\textbf{Weak Sup.}\end{sideways}} & \multirow{5}[1]{*}{\begin{sideways}1pt\end{sideways}} & $\Pi$ Model\cite{Laine_ICLR17} & 72.7 & 73.2 & 71.1 & \textbf{77.0} & 76.1 & 59.7 & 85.3 & \textbf{68.0} & 88.9 & 84.3 & 76.5 & \textbf{94.9} & 44.6 & 88.7 & 74.2 & 45.1 & 67.4 & 60.9 \\
& & MT\cite{tarvainen2017mean} & 68.6 & 72.2 & 71.6 & 60.0 & \textbf{79.3} & 57.1 & 86.6 & 48.4 & 87.9 & 80.0 & 73.7 & 94.0 & 43.3 & 79.8 & 74.0 & 45.9 & 56.9 & 59.8 \\
& & Baseline & 72.2 & 72.6 & 74.3 & 75.9 & 79.0 & \textbf{64.2} & 84.1 & 58.8 & 88.8 & 83.2 & 72.3 & 94.7 & 48.7 & 84.8 & 75.8 & \textbf{50.6} & 60.3 & 59.5 \\
& & Ours & \textbf{74.4} & \textbf{75.5} & \textbf{75.6} & 74.4 & {79.2} & 66.3 & \textbf{87.3} & 63.3 & \textbf{89.4} & \textbf{84.4} & \textbf{78.7} & 94.5 & \textbf{49.7} & \textbf{90.3} & \textbf{76.7} & 47.8 & \textbf{71.0} & \textbf{62.6} \\
\cmidrule{2-21} & \multirow{5}[2]{*}{\begin{sideways}10\%\end{sideways}} & $\Pi$ Model\cite{Laine_ICLR17} & 79.2 & 83.8 & 80.0 & 82.3 & 78.7 & 74.9 & 89.8 & \textbf{76.8} & 90.6 & 87.4 & \textbf{83.1} & 95.8 & 50.7 & 87.8 & 77.9 & 55.2 & 74.3 & 82.7 \\
& & MT\cite{tarvainen2017mean} & 76.8 & 81.7 & 78.0 & 76.3 & 78.1 & 64.4 & 87.6 & 67.2 & 88.7 & 85.5 & 79.0 & 94.3 & 63.3 & 90.8 & 78.2 & 50.7 & 67.5 & 78.5 \\
& & Baseline & 81.5 & 84.5 & 82.5 & 80.6 & \textbf{85.7} & 76.4 & 90.0 & 76.6 & 89.7 & 87.1 & 82.6 & 95.6 & 63.3 & 93.6 & 79.7 & \textbf{63.2} & 74.4 & 82.6 \\
& & Ours & \textbf{81.7} & \textbf{85.0} & \textbf{83.1} & \textbf{82.6} & 80.8 & \textbf{77.7} & \textbf{90.4} & 77.3 & \textbf{90.9} & \textbf{87.6} & 82.9 & \textbf{95.8} & \textbf{64.7} & \textbf{93.9} & \textbf{79.8} & 61.9 & \textbf{74.9} & \textbf{82.9} \\
\bottomrule
\end{tabular}%
}
\label{tab:ShapeNet}%
\end{table*}%
\vspace{-0.4cm}
\paragraph{S3DIS.} The results are presented in Tab.~\ref{tab:S3DIS_PartNet}. We make observations similar to those for ShapeNet. First, the 1pt weak supervision provides strong results: our proposed multi-task model is only $1\%$ below its fully supervised counterpart. Furthermore, the results of our method with only $10\%$ labelled points are even slightly superior to full supervision. Finally, our method consistently outperforms both unsupervised and alternative weakly supervised methods.
\vspace{-0.4cm}
\paragraph{PartNet.} For the PartNet dataset, we report the average mIoU in Tab.~\ref{tab:S3DIS_PartNet}. Details for each category are included in the supplementary. We observe the same patterns in these results: the 1pt setting yields particularly strong results and our own variant outperforms all unsupervised and alternative weak supervision methods.
\begin{table*}[htbp]
\begin{minipage}{0.79\textwidth}
\centering
\caption{\footnotesize{mIoU ($\%$) evaluations on S3DIS (Area 5) and PartNet datasets. We compared against fully supervised (Ful.Sup.), unsupervised (Unsup.) and alternative weakly supervised (Weak. Sup.) approaches.}}\vspace{-0.3cm}
\setlength\tabcolsep{2pt}
\resizebox{1\linewidth}{!}{
\begin{tabular}{cclcccccccccccccc|cc}
\toprule
& & & \multicolumn{14}{c|}{S3DIS} & \multicolumn{2}{c}{PartNet} \\
\cmidrule(lr){4-17} \cmidrule(lr){18-19} \multicolumn{2}{c}{Setting} & Model & \multicolumn{1}{l}{CatAvg} & \multicolumn{1}{l}{ceil.} & \multicolumn{1}{l}{floor} & \multicolumn{1}{l}{wall} & \multicolumn{1}{l}{beam} & \multicolumn{1}{l}{col.} & \multicolumn{1}{l}{win.} & \multicolumn{1}{l}{door} & \multicolumn{1}{l}{chair} & \multicolumn{1}{l}{table} & \multicolumn{1}{l}{book.} & \multicolumn{1}{l}{sofa} & \multicolumn{1}{l}{board} & \multicolumn{1}{l|}{clutter} & CatAvg & SampAvg \\
\midrule
\multicolumn{2}{c}{\multirow{3}[1]{*}{\begin{sideways}\textbf{\footnotesize{Ful.Sup.}}\end{sideways}}} & PointNet & 41.1 & 88.8 & 97.3 & 69.8 & 0.1 & 3.9 & 46.3 & 10.8 & 52.6 & 58.9 & \textbf{40.3} & 5.9 & 26.4 & 33.2 & 57.9 & 58.3 \\
\multicolumn{2}{c}{} & PointNet++ & \textbf{47.8} & 90.3 & 95.6 & 69.3 & 0.1 & \textbf{13.8} & 26.7 & \textbf{44.1} & 64.3 & \textbf{70.0} & 27.8 & \textbf{47.8} & 30.8 & 38.1 & 65.5 & 67.1
\\
\multicolumn{2}{c}{} & DGCNN & 47.0 & \textbf{92.4} & \textbf{97.6} & \textbf{74.5} & \textbf{0.5} & 13.3 & \textbf{48.0} & 23.7 & \textbf{65.4} & 67.0 & 10.7 & 44.0 & \textbf{34.2} & \textbf{40.0} & \textbf{65.6} & \textbf{67.2}
\\
\midrule
\multicolumn{2}{c}{\multirow{2}[1]{*}{\begin{sideways}\textbf{\footnotesize{Unsup.}}\end{sideways}}} & Kmeans & 38.4 & 59.8 & 63.3 & 34.9 & 21.5 & \textbf{24.6} & 34.2 & 29.3 & 35.7 & 33.1 & 45.0 & 45.6 & 41.7 & 30.4 & 34.6 & 35.2 \\
\multicolumn{2}{c}{} & Ncut & \textbf{40.0} & \textbf{63.5} & \textbf{63.8} & \textbf{37.2} & \textbf{23.4} & \textbf{24.6} & \textbf{35.5} & \textbf{29.9} & \textbf{38.9} & \textbf{34.3} & \textbf{47.1} & \textbf{46.3} & \textbf{44.1} & \textbf{31.5} & \textbf{38.6} & \textbf{40.1} \\
\midrule
\multirow{10}[3]{*}{\begin{sideways}\textbf{Weak Sup.}\end{sideways}} & \multirow{5}[1]{*}{\begin{sideways}1pt\end{sideways}} & $\Pi$ Model & 44.3 & 89.1 & 97.0 & 71.5 & 0.0 & \textbf{3.6} & 43.2 & 27.4 & 62.1 & 63.1 & 14.7 & \textbf{43.7} & \textbf{24.0} & 36.7 & 51.4 & 52.6 \\
& & MT & 44.4 & 88.9 & 96.8 & 70.1 & \textbf{0.1} & 3.0 & 44.3 & 28.8 & \textbf{63.6} & 63.7 & 15.5 & \textbf{43.7} & 23.0 & 35.8 & 52.9 & 53.6\\
& & Baseline & 44.0 & 89.8 & 96.7 & 71.5 & 0.0 & 3.0 & 43.2 & \textbf{32.8} & 60.8 & 58.7 & 15.0 & 41.2 & 22.5 & 36.8 & 50.2 & 51.4 \\
& & Ours & \textbf{44.5} & \textbf{90.1} & \textbf{97.1} & \textbf{71.9} & 0.0 & 1.9 & \textbf{47.2} & 29.3 & 62.9 & \textbf{64.0} & \textbf{15.9} & 42.2 & 18.9 & \textbf{37.5} & \textbf{54.6} & \textbf{55.7}\\
\cmidrule{2-19} & \multirow{5}[2]{*}{\begin{sideways}10\%\end{sideways}} & $\Pi$ Model & 46.3 & 91.8 & 97.1 & 73.8 & 0.0 & 5.1 & 42.0 & 19.6 & 66.7 & 67.2 & 19.1 & 47.9 & 30.6 & 41.3 & 64.1 & 64.7\\
& & MT & 47.9 & 92.2 & 96.8 & 74.1 & 0.0 & 10.4 & 46.2 & 17.7 & 67.0 & 70.7 & \textbf{24.4} & 50.2 & \textbf{30.7} & 42.2 & 63.8 & 64.5\\
& & Baseline & 45.7 & 92.3 & \textbf{97.4} & \textbf{75.4} & 0.0 & \textbf{11.7} & 47.2 & 22.9 & 65.3 & 66.7 & 11.7 & 43.6 & 17.8 & 41.5 & 63.1 & 63.9 \\
& & Ours & \textbf{48.0} & \textbf{90.9} & 97.3 & 74.8 & 0.0 & 8.4 & \textbf{49.3} & \textbf{27.3} & \textbf{69.0} & \textbf{71.7} & 16.5 & \textbf{53.2} & 23.3 & \textbf{42.8} & \textbf{64.5} & \textbf{64.9}\\
\bottomrule
\end{tabular}%
}
\label{tab:S3DIS_PartNet}%
\end{minipage}
\hfill
\begin{minipage}{0.2\textwidth}
\centering
\caption{\footnotesize{Comparisons of different labelling strategies on ShapeNet segmentation. All numbers are in $\%$.}}
\vspace{-0.1cm}
\setlength\tabcolsep{2pt}
\resizebox{1\textwidth}{!}{
\begin{tabular}{lcc}
\toprule
Label Strat. & CatAvg & SampAvg \\
\midrule
Samp=10\% & \multirow{2}[2]{*}{70.37} & \multirow{2}[2]{*}{77.71} \\
Pts=100\% & & \\
\midrule
Samp=20\% & \multirow{2}[2]{*}{72.19} & \multirow{2}[2]{*}{78.45} \\
Pts=50\% & & \\
\midrule
Samp=50\% & \multirow{2}[2]{*}{74.29} & \multirow{2}[2]{*}{79.65} \\
Pts=20\% & & \\
\midrule
Samp=80\% & \multirow{2}[2]{*}{76.15} & \multirow{2}[2]{*}{80.18} \\
Pts=12.5\% & & \\
\midrule
Samp=100\% & \multirow{2}[2]{*}{77.71} & \multirow{2}[2]{*}{80.94} \\
Pts=10\% & & \\
\bottomrule
\end{tabular}%
\label{tab:PtsVsSamp}%
}
\end{minipage}
\vspace{-0.3cm}
\end{table*}%
\subsection{Qualitative Examples}
We show qualitative examples of point cloud segmentation on all datasets and compare the segmentation quality.
Firstly, we present the segmentation results on selected rooms from the S3DIS dataset in Fig.~\ref{fig:S3DIS_Qualitative}. From left to right we sequentially visualize the RGB view, ground-truth, fully supervised segmentation, the weakly supervised baseline method and our final approach. For weakly supervised methods, $10\%$ of training points are assumed to be labelled. We observe accurate segmentation of large, continuous objects, e.g. wall, floor, table, chair and window. In particular, our proposed method is able to improve the baseline results substantially by smoothing out the noisy areas. Nonetheless, we observe some mistakes of our method at the boundaries between different objects.
The segmentation results on ShapeNet are shown in Fig.~\ref{fig:ShapeNetExample}. These examples again demonstrate the highly competitive performance by the weakly supervised approach. For both the plane and car categories, the results of the weak supervision are very close to the fully supervised ones.
\begin{figure*}
\centering
\includegraphics[width=1.02\linewidth]{./Figure/S3DIS_Qualitative/S3DIS_Qualitative_crc.pdf}
\vspace{-0.3cm}
\caption{\footnotesize{Qualitative examples for S3DIS dataset test area 5. $10\%$ labelled points are used to train the weak supervision models.}}\label{fig:S3DIS_Qualitative}
\vspace{-0.4cm}
\end{figure*}
\begin{figure}
\includegraphics[width=1\linewidth]{./Figure/ShapeNet_Qualitative/ShapeNet_Qualitative_v1.pdf}
\caption{\footnotesize{Qualitative examples for ShapeNet shape segmentation.}}\label{fig:ShapeNetExample}
\vspace{-0.4cm}
\end{figure}
\subsection{Label More Points Or More Samples}\label{sect:PtsVsSamp}
Given a fixed annotation budget, e.g. the total number of labelled points, there are different labelling strategies to balance the number of labelled samples against the number of labelled points within each sample. In this experiment, we control these two variables and validate on ShapeNet segmentation with the PointNet encoder for efficient evaluation. We first restrict the fixed budget to be $10\%$ of all training points. The labelling strategy is described by $x\%$ samples (Samp) each with $y\%$ labelled points (Pts), with $xy=1000$ to satisfy the restriction. We evaluate 5 combinations and present the results in Tab.~\ref{tab:PtsVsSamp}. The consistent improvement of mIoU as $x\%$ grows from $10\%$ to $100\%$ suggests that, given a fixed total annotation budget, it is better to extensively label more samples each with fewer labelled points than to intensively label a fraction of the dataset.
\section{Ablation Study}
\paragraph{Importance of Individual Components.}
We analyze the importance of the proposed additional losses and inference label propagation. Different combinations of the losses are evaluated on all datasets with the 1pt annotation scheme. The results are presented in Tab.~\ref{tab:AblLoss}. We observe that the Siamese self-supervision brings the largest advantage on S3DIS. This is because S3DIS is a real dataset, where the orientations and layouts of objects are diverse, and the augmentation and consistency constraints increase the robustness of the model.
In contrast, the poses of test shapes are always fixed for the other two datasets, and thus they benefit less from Siamese augmentation. We also compare against the use of only data augmentation (last row), and the results suggest it is better to have the consistency constraints on unlabelled points.
The results are further improved with the multi-instance loss for the inexact branch. Finally, the smoothness constraint at both the training (Smo.) and inference (TeLP) stages consistently brings additional advantage to the whole architecture.
\vspace{-0.4cm}
\paragraph{Compatibility with Encoder Network.}\label{sect:EncoderNet}
We further examine the compatibility of the proposed losses with different encoder networks. In particular, we investigate the performance with PointNet and DGCNN as the encoder network. The results are shown in Tab.~\ref{tab:AblLoss} and it is clear that both networks exhibit the same patterns.
\begin{table}[htbp]
\centering
\caption{\small{Ablation study on the impact of individual losses and inference label propagation and the compatibility with alternative encoder networks.}}\vspace{-0.2cm}
\setlength\tabcolsep{2pt}
\resizebox{0.9\linewidth}{!}{
\begin{tabular}{cccc|ccc|ccc}
\toprule
\multicolumn{4}{c|}{Components} & \multicolumn{3}{c|}{PointNet} & \multicolumn{3}{c}{DGCNN} \\
\midrule
MIL & Siam. & Smo. & TeLP & ShapeNet & PartNet & S3DIS & ShapeNet & PartNet & S3DIS \\
\midrule
& & & & 65.2 & 49.7 & 36.8 & 72.2 & 50.2 & 44.0 \\
& \checkmark & & & 66.0 & 50.3 & 41.9 & 73.1 & 51.5 & 44.3 \\
\checkmark & \checkmark & & & 69.0 & 52.1 & 42.2 & 73.4 & 52.9 & 44.4 \\
\checkmark & \checkmark & \checkmark & & 69.6 & 52.5 & 43.0 & 73.8 & 53.6 & 44.2 \\
\checkmark & \checkmark & \checkmark & \checkmark & \textbf{70.2} & \textbf{52.8} & \textbf{43.1} & \textbf{74.4} & \textbf{54.6} & \textbf{44.5} \\
\midrule
\multicolumn{4}{c|}{Data Augmentation} & 65.3 & 49.9 & 38.9 & 73.0 & 52.7 & 43.2 \\
\bottomrule
\end{tabular}%
}
\label{tab:AblLoss}%
\vspace{-0.3cm}
\end{table}%
\vspace{-0.4cm}
\paragraph{Amount of Labelled Data.}
As suggested by previous studies, the amount of labelled data has a significant impact on point cloud segmentation performance. In this section, we investigate this relation by varying the amount of labelled points. In particular, we control the percentage of labelled points from $1\%$ to $100\%$ (full supervision) with the baseline weak supervision method. The results are presented in Fig.~\ref{fig:AmtLabelData}. We observe that the performance on all datasets approaches full supervision beyond $10\%$ labelled points.
\begin{figure}
\centering
\subfloat[ShapeNet]{\includegraphics[width=0.34\linewidth]{./Figure/AmtLabelData/ShapeNet.pdf}}
\subfloat[PartNet]{\includegraphics[width=0.34\linewidth]{./Figure/AmtLabelData/PartNet.pdf}}
\subfloat[S3DIS]{\includegraphics[width=0.34\linewidth]{./Figure/AmtLabelData/S3DIS.pdf}}
\vspace{-0.3cm}
\caption{\small{The impact of amount of labelled points for all three datasets.}}\label{fig:AmtLabelData}
\vspace{-0.3cm}
\end{figure}
\vspace{-0.4cm}
\paragraph{Point Feature Embedding.}
We visualize the point cloud feature embedding to further understand why weak supervision leads to competitive performance. We project the features before the last layer into 2D space via T-SNE \cite{maaten2008visualizing} for both full supervision and $10\%$ weak supervision. The projected point embeddings are visualized in Fig.~\ref{fig:PtEmbedTSNE}. We observe similar feature embedding patterns, which again demonstrates that a few labelled points can yield very competitive performance.
\begin{figure}
\includegraphics[width=1.05\linewidth]{./Figure/TSNE/TSNE.pdf}
\vspace{-0.5cm}
\caption{\small{T-SNE visualization of point embeddings in 2D space.}}\label{fig:PtEmbedTSNE}
\vspace{-0.5cm}
\end{figure}
\section{Conclusion}
\vspace{-0.2cm}
In this paper, we made the discovery that only a few labelled points are needed for existing point cloud encoder networks to produce very competitive performance
on the point cloud segmentation task. We provided analysis from a statistical point of view and gave insights into the annotation strategy under a fixed labelling budget. Furthermore, we proposed three additional training losses, i.e. inexact supervision, Siamese self-supervision and spatial and color smoothness, to further regularize the model. Experiments are carried out on three public datasets to validate the efficacy of our proposed methods. In particular, the results are comparable with full supervision using 10$\times$ fewer labelled points.
\vspace{-0.4cm}
\paragraph{Acknowledgement.} This work was partially supported by the Singapore MOE Tier 1 grant R-252-000-A65-114.
{\small
\bibliographystyle{ieee_fullname}
\section{Compatibility with Alternative Encoder Networks}
We further evaluate an additional state-of-the-art encoder network with the proposed weakly supervised strategy. Specifically, PointNet++ is evaluated on the ShapeNet dataset. The fully supervised setting (FullSup), 1 point per category labelling (1pt WeakSup) and 10$\%$ labelling (10$\%$ WeakSup) with our final model are compared. The results in Tab.~\ref{tab:Encoder} clearly demonstrate that, with very few annotations, shape segmentation remains very robust across different encoder networks.
\section{Additional Details on Datasets}
We present more details on the weakly supervised segmentation experiments on PartNet in Tab.~\ref{tab:PartNet}.
\section{Additional Qualitative Examples}
More qualitative examples on S3DIS and ShapeNet are presented here. We show the following for S3DIS in Fig.~\ref{fig:S3DIS}: RGB view, ground-truth segmentation (GT View), fully supervised segmentation (FullSup. Seg.), the baseline weakly supervised method with $10\%$ labelled points ($10\%$ Baseline WeakSup. Seg.), our final multi-task weakly supervised method with $10\%$ labelled points ($10\%$ Our WeakSup. Seg.), and our final multi-task weakly supervised method with 1 labelled point per category (1pt Our WeakSup. Seg.). Fig.~\ref{fig:S3DIS} shows selected rooms in Area 5 of the S3DIS dataset. In these results, we observe consistent improvement of our method over the baseline method. Moreover, the $10\%$ weak supervision results are even comparable to the fully supervised ones, and the 1pt weak supervision results are also surprisingly good. We further visualize additional segmentation results on the ShapeNet dataset with both 1pt and 10$\%$ weak supervision. The gap between weak supervision and full supervision is even smaller on the shape segmentation task.
\begin{table*}[!ht]
\centering
\caption{Evaluation of alternative encoder network on ShapeNet dataset.}
\setlength\tabcolsep{2pt}
\resizebox{0.99\linewidth}{!}{
\begin{tabular}{clrrrrrrrrrrrrrrrrrr}
\toprule
\multirow{2}[4]{*}{Encoder Net} & \multicolumn{1}{c}{\multirow{2}[4]{*}{Setting}} & \multicolumn{18}{c}{ShapeNet} \\
\cmidrule{3-20} & & \multicolumn{1}{c}{CatAvg} & \multicolumn{1}{c}{SampAvg} & \multicolumn{1}{c}{Air.} & \multicolumn{1}{c}{Bag} & \multicolumn{1}{c}{Cap} & \multicolumn{1}{c}{Car} & \multicolumn{1}{c}{Chair} & \multicolumn{1}{c}{Ear.} & \multicolumn{1}{c}{Guitar} & \multicolumn{1}{c}{Knife} & \multicolumn{1}{c}{Lamp} & \multicolumn{1}{c}{Lap.} & \multicolumn{1}{c}{Motor.} & \multicolumn{1}{c}{Mug} & \multicolumn{1}{c}{Pistol} & \multicolumn{1}{c}{Rocket} & \multicolumn{1}{c}{Skate.} & \multicolumn{1}{c}{Table} \\
\midrule
\multirow{3}[2]{*}{PointNet++} & FullSup & \multicolumn{1}{c}{81.87} & \multicolumn{1}{c}{84.89} & \multicolumn{1}{c}{82.74} & \multicolumn{1}{c}{81.19} & \multicolumn{1}{c}{87.84} & \multicolumn{1}{c}{78.11} & \multicolumn{1}{c}{90.71} & \multicolumn{1}{c}{73.40} & \multicolumn{1}{c}{90.93} & \multicolumn{1}{c}{86.04} & \multicolumn{1}{c}{83.36} & \multicolumn{1}{c}{95.07} & \multicolumn{1}{c}{72.38} & \multicolumn{1}{c}{94.96} & \multicolumn{1}{c}{80.03} & \multicolumn{1}{c}{55.13} & \multicolumn{1}{c}{76.05} & \multicolumn{1}{c}{81.98} \\
& 1pt WeakSup & \multicolumn{1}{c}{80.82} & \multicolumn{1}{c}{84.19} & \multicolumn{1}{c}{80.86} & \multicolumn{1}{c}{76.90} & \multicolumn{1}{c}{86.94} & \multicolumn{1}{c}{75.57} & \multicolumn{1}{c}{90.35} & \multicolumn{1}{c}{74.00} & \multicolumn{1}{c}{90.34} & \multicolumn{1}{c}{86.05} & \multicolumn{1}{c}{83.66} & \multicolumn{1}{c}{95.12} & \multicolumn{1}{c}{66.97} & \multicolumn{1}{c}{93.22} & \multicolumn{1}{c}{79.20} & \multicolumn{1}{c}{57.93} & \multicolumn{1}{c}{74.30} & \multicolumn{1}{c}{81.68} \\
& 10\% WeakSup & \multicolumn{1}{c}{81.27} & \multicolumn{1}{c}{84.70} & \multicolumn{1}{c}{82.36} & \multicolumn{1}{c}{76.55} & \multicolumn{1}{c}{86.82} & \multicolumn{1}{c}{77.48} & \multicolumn{1}{c}{90.52} & \multicolumn{1}{c}{73.01} & \multicolumn{1}{c}{91.16} & \multicolumn{1}{c}{85.35} & \multicolumn{1}{c}{83.07} & \multicolumn{1}{c}{95.34} & \multicolumn{1}{c}{69.97} & \multicolumn{1}{c}{94.88} & \multicolumn{1}{c}{80.04} & \multicolumn{1}{c}{57.03} & \multicolumn{1}{c}{74.59} & \multicolumn{1}{c}{82.14} \\
\bottomrule
\end{tabular}%
}
\label{tab:Encoder}%
\end{table*}%
\begin{table*}[htbp]
\centering
\caption{Detailed results on PartNet dataset.}
\setlength\tabcolsep{2pt}
\resizebox{0.99\linewidth}{!}{
\begin{tabular}{cclccccccccccccccccccccccccc}
\toprule
\multicolumn{2}{c}{Setting} & \multicolumn{1}{c}{Model} & CatAvg & Bag & Bed & Bott. & Bowl & Chair & Clock & Dish. & Disp. & Door & Ear. & Fauc. & Hat & Key & Knife & Lamp & Lap. & Micro. & Mug & Frid. & Scis. & Stora. & Table & Trash. & Vase \\
\midrule
\multicolumn{2}{c}{\multirow{3}[2]{*}{\begin{sideways}\textbf{Ful.Sup.}\end{sideways}}} & PointNet & 57.9 & 42.5 & 32.0 & 33.8 & 58.0 & 64.6 & 33.2 & 76.0 & 86.8 & 64.4 & 53.2 & 58.6 & 55.9 & 65.6 & 62.2 & 29.7 & 96.5 & 49.4 & 80.0 & 49.6 & 86.4 & 51.9 & 50.5 & 55.2 & 54.7 \\
\multicolumn{2}{c}{} & PointNet++ & 65.5 & 59.7 & 51.8 & \textbf{53.2} & \textbf{67.3} & 68.0 & \textbf{48.0} & \textbf{80.6} & 89.7 & 59.3 & \textbf{68.5} & \textbf{64.7} & 62.4 & 62.2 & 64.9 & \textbf{39.0} & \textbf{96.6} & 55.7 & \textbf{83.9} & 51.8 & 87.4 & 58.0 & \textbf{69.5} & 64.3 & 64.4 \\
\multicolumn{2}{c}{} & DGCNN & \textbf{65.6} & \textbf{53.3} & \textbf{58.6} & 48.9 & 66.9 & \textbf{69.1} & 35.8 & 75.2 & \textbf{91.2} & \textbf{68.5} & 59.3 & 62.6 & \textbf{63.7} & \textbf{69.5} & \textbf{71.8} & 38.5 & 95.7 & \textbf{57.6} & 83.3 & \textbf{53.7} & \textbf{89.7} & \textbf{62.6} & 65.3 & \textbf{67.8} & \textbf{66.8} \\
\midrule
\multirow{4}[4]{*}{\begin{sideways}\textbf{Weak Sup.}\end{sideways}} & \multirow{2}[2]{*}{\begin{sideways}1pt\end{sideways}} & Baseline & 50.2 & 24.4 & 30.1 & 20.5 & 38.0 & 65.9 & 35.3 & 64.9 & \textbf{84.3} & \textbf{52.6} & 36.7 & 47.1 & 47.9 & 52.2 & 55.2 & 34.1 & 92.4 & 49.3 & 59.5 & 49.6 & 80.1 & 44.6 & 49.8 & 40.4 & 49.5 \\
& & Ours & \textbf{54.6} & \textbf{28.4} & \textbf{30.8} & \textbf{26.0} & \textbf{54.3} & \textbf{66.4} & \textbf{37.7} & \textbf{66.3} & 81.0 & 51.7 & \textbf{44.4} & \textbf{51.2} & \textbf{55.2} & \textbf{56.2} & \textbf{63.1} & \textbf{37.6} & \textbf{93.5} & \textbf{49.7} & \textbf{73.5} & \textbf{50.6} & \textbf{83.6} & \textbf{46.8} & \textbf{61.1} & \textbf{44.1} & \textbf{56.8} \\
\cmidrule{2-28} & \multirow{2}[2]{*}{\begin{sideways}10\%\end{sideways}} & Baseline & 63.2 & 54.4 & 56.8 & 44.1 & \textbf{57.6} & 67.2 & 41.3 & \textbf{70.0} & \textbf{91.3} & \textbf{61.8} & \textbf{65.8} & 57.2 & \textbf{64.2} & 64.2 & 66.7 & \textbf{37.9} & 94.9 & 49.1 & 80.2 & \textbf{49.6} & 84.1 & 59.3 & 69.7 & 66.7 & 63.0 \\
& & Ours & \textbf{64.5} & \textbf{47.3} & \textbf{55.5} & \textbf{64.7} & 56.2 & \textbf{69.1} & \textbf{44.3} & 68.3 & 91.1 & 61.3 & 62.8 & \textbf{65.2} & 63.0 & \textbf{64.6} & \textbf{67.9} & 37.8 & \textbf{95.5} & \textbf{50.1} & \textbf{82.7} & \textbf{49.6} & \textbf{85.8} & \textbf{59.5} & \textbf{71.2} & \textbf{67.7} & \textbf{65.9} \\
\bottomrule
\end{tabular}%
}
\label{tab:PartNet}%
\end{table*}%
\begin{figure*}[!htb]
\caption{Additional examples comparing full supervision and weak supervision for S3DIS semantic segmentation.}\label{fig:S3DIS}
\subfloat[Area5\_conferenceRoom\_2]{\includegraphics[width=1\linewidth]{./Figure/S3DIS_Qualitative/Supp_CR/Area_5_conferenceRoom_2/frame_001.png}}
\subfloat[Area5\_conferenceRoom\_3]{\includegraphics[width=1\linewidth]{./Figure/S3DIS_Qualitative/Supp_CR/Area_5_conferenceRoom_3/frame_001.png}}
\end{figure*}
\begin{figure*}\ContinuedFloat
\subfloat[Area5\_hallway\_1]{\includegraphics[width=1\linewidth]{./Figure/S3DIS_Qualitative/Supp_CR/Area_5_hallway_1/frame_001.png}}
\subfloat[Area5\_lobby\_1]{\includegraphics[width=1\linewidth]{./Figure/S3DIS_Qualitative/Supp_CR/Area_5_lobby_1/frame_001.png}}
\subfloat[Area5\_office\_1]{\includegraphics[width=1\linewidth]{./Figure/S3DIS_Qualitative/Supp_CR/Area_5_office_1/frame_001.png}}
\end{figure*}
\begin{figure*}\ContinuedFloat
\subfloat[Area5\_office\_10]{\includegraphics[width=1\linewidth]{./Figure/S3DIS_Qualitative/Supp_CR/Area_5_office_10/frame_001.png}}
\subfloat[Area5\_pantry\_1]{\includegraphics[width=1\linewidth]{./Figure/S3DIS_Qualitative/Supp_CR/Area_5_pantry_1/frame_001.png}}
\subfloat[Area5\_storage\_2]{\includegraphics[width=1\linewidth]{./Figure/S3DIS_Qualitative/Supp_CR/Area_5_storage_2/frame_001.png}}
\end{figure*}
\begin{figure*}
\subfloat{\includegraphics[width=1\linewidth]{./Figure/ShapeNet_Qualitative/Supp/Shapes/instance-10_cat-Airplane.pdf}}\\
\subfloat{\includegraphics[width=1\linewidth]{./Figure/ShapeNet_Qualitative/Supp/Shapes/instance-346_cat-Bag.pdf}}\\
\subfloat{\includegraphics[width=1\linewidth]{./Figure/ShapeNet_Qualitative/Supp/Shapes/instance-360_cat-Cap.pdf}}\\
\subfloat{\includegraphics[width=1\linewidth]{./Figure/ShapeNet_Qualitative/Supp/Shapes/instance-425_cat-Car.pdf}}\\
\subfloat{\includegraphics[width=1\linewidth]{./Figure/ShapeNet_Qualitative/Supp/Shapes/instance-563_cat-Chair.pdf}}\\
\subfloat{\includegraphics[width=1\linewidth]{./Figure/ShapeNet_Qualitative/Supp/Shapes/instance-1237_cat-Earphone.pdf}}\\
\subfloat{\includegraphics[width=1\linewidth]{./Figure/ShapeNet_Qualitative/Supp/Shapes/instance-1253_cat-Guitar.pdf}}\\
\end{figure*}
\begin{figure*}\ContinuedFloat
\subfloat{\includegraphics[width=1\linewidth]{./Figure/ShapeNet_Qualitative/Supp/Shapes/instance-1414_cat-Knife.pdf}}\\
\subfloat{\includegraphics[width=1\linewidth]{./Figure/ShapeNet_Qualitative/Supp/Shapes/instance-1505_cat-Lamp.pdf}}\\
\subfloat{\includegraphics[width=1\linewidth]{./Figure/ShapeNet_Qualitative/Supp/Shapes/instance-1803_cat-Laptop.pdf}}\\
\subfloat{\includegraphics[width=1\linewidth]{./Figure/ShapeNet_Qualitative/Supp/Shapes/instance-1878_cat-Motorbike.pdf}}\\
\subfloat{\includegraphics[width=1\linewidth]{./Figure/ShapeNet_Qualitative/Supp/Shapes/instance-1968_cat-Pistol.pdf}}\\
\end{figure*}
\begin{figure*}[!t]\ContinuedFloat
\subfloat{\includegraphics[width=1\linewidth]{./Figure/ShapeNet_Qualitative/Supp/Shapes/instance-1990_cat-Rocket.pdf}}\\
\subfloat{\includegraphics[width=1\linewidth]{./Figure/ShapeNet_Qualitative/Supp/Shapes/instance-2016_cat-Skateboard.pdf}}\\
\subfloat{\includegraphics[width=1\linewidth]{./Figure/ShapeNet_Qualitative/Supp/Shapes/instance-2075_cat-Table.pdf}}\\
\end{figure*}
\end{document}
\section{Introduction}
Common generalization guarantees used to motivate supervised learning approaches under the empirical risk minimization (ERM) framework rely on the assumption that data is collected independently from a fixed underlying distribution. Such an assumption, however, is not without shortcomings; for instance: (\textbf{i})-i.i.d. requirements are \emph{unverifiable} \citep{langford2005tutorial} in the sense that, given a data sample and no access to the distribution it was observed from, one cannot tell whether such a sample was collected independently, and (\textbf{ii})-the i.i.d. assumption is \emph{impractical} since in several scenarios the conditions under which data is collected will likely change relative to when training samples were observed and, as such, generalization cannot be expected. Yet another practical limitation, given by the lack of robustness to distribution shifts in common predictors, is the fact that one cannot benefit from data sources that differ from those against which such predictors will be tested. In some situations, for example, data can be collected from inexpensive simulations, but generalization to real data depends on how similar the synthetic data distribution is to the data distribution of interest.
Several approaches have consequently been introduced with the goal of relaxing the requirement of i.i.d. data to some extent. For instance, \emph{domain adaptation approaches} \citep{ben2007analysis} assume the existence of two distributions: the source distribution -- which contains the bulk of the training data -- and the target distribution -- which corresponds to the test-time data distribution. The domain adaptation setting enlarged the scope of the standard empirical risk minimization framework by enabling the use of predictors even when a distribution other than the one used for training is considered. However, a particular \emph{target} is expected to be defined at training time, often with unlabeled examples, and nothing can be guaranteed for distributions other than that particular \emph{target}, which renders such a setting still impractical since unseen variations in the data are possible during testing. More general settings were introduced considering a larger set of supported target distributions while not requiring access to any target data during training \citep{albuquerque2019generalizing}. However, such approaches, including the domain adaptation techniques discussed so far, despite relaxing the i.i.d. requirement, still require other assumptions to be met, such as \emph{covariate shift} (c.f. Sec. \ref{sec:related_work} for a definition).
As will be further discussed in Section~\ref{sec:related_work}, a common feature across a number of approaches enabling some kind of out-of-distribution generalization is that they rely on some notion of invariance, be it at the feature level \citep{ganin2016domain,albuquerque2019generalizing}, in the sense that the domains' data distributions cannot be discriminated after being mapped to some intermediate features by a feature extractor, or at the predictor level \citep{arjovsky2019invariant,krueger2020out}, in which case one expects distribution shifts to have little effect on predictions. In this contribution, the research question we pose to ourselves is as follows: \emph{can one leverage contextual information to induce generalization to novel data sources?} We thus take an alternative approach relative to previous work and propose a framework where the opposite direction is considered in order to tackle the limitations discussed above, i.e. instead of filtering out the domain influence over predictions, we explore approaches where such information is leveraged as a source of context on which predictions can be conditioned. We refer to predictors resulting from such an approach as \emph{domain conditional predictors}. We argue such a method has the following advantages compared to settings seeking invariance:
\begin{enumerate}
\item Training strategies yielding domain conditional predictors do not rely on minimax formulations, often employed in domain invariant approaches where a domain discriminator is trained against a feature extractor. Such formulations are often observed to be source of training instabilities which do not appear in the setting considered herein.
\item The proposed setting does not rely on the covariate shift assumption since it considers multiple inputs, i.e. for a fixed input data instance, any prediction can be obtained through variations of the conditioning variable.
\item The proposed setting has a larger scope when compared to domain-invariant approaches in the sense that it can be used to perform inferences regarding the domains it observed during training.
\end{enumerate}
The remainder of this paper is organized as follows: related literature is discussed in Section~\ref{sec:related_work} along with background information and notation, while the proposed approach is presented in Section~\ref{sec:method}. The planned experimental setup is discussed in Section~\ref{sec:evaluation} and proof-of-concept results are reported in Section~\ref{sec:experiments}. The complete evaluation is reported in Section~\ref{sec:complete_evaluation}, while conclusions as well as future directions are drawn in Section~\ref{sec:conclusion}.
\section{Background and Related Work}
\label{sec:related_work}
\subsection{Domain adaptation guarantees and domain invariant approaches}
Assume $(x, y)$ represents instances from $\mathcal{X} \times \mathcal{Y}$, where $\mathcal{X} \subseteq \mathbb{R}^D$ is the data space while $\mathcal{Y}$ is the space of labels, and $\mathcal{Y}$ will be a discrete set in the cases we consider. Furthermore, consider a deterministic labeling function $f:\mathcal{X} \mapsto \mathcal{Y}$ such that $y=f(x)$. We will refer to domains as the pairs given by a marginal distribution over $\mathcal{X}$, denoted by $\mathcal{D}$, and a labeling function $f$. We further consider a family of candidate predictors $\mathcal{H}$ where $h \in \mathcal{H} : \mathcal{X} \mapsto \mathcal{Y}$. For a particular predictor $h$, we use the standard definition of risk $R$, which is its expected loss:
\begin{equation}
R_{\mathcal{D}}[h] = \mathbb{E}_{x \sim \mathcal{D}} \ell [h(x), f(x)],
\end{equation}
where the loss $\ell:\mathcal{Y} \times \mathcal{Y} \rightarrow \mathbb{R}_{+}$ indicates how different $h(x)$ and $f(x)$ are (e.g. the 0-1 loss for $\mathcal{Y}=\{0,1\}$).
\cite{ben2007analysis} showed that the following bound holds for the risk on a target domain $\mathcal{D}_T$ depending on the risk measured on the source domain $\mathcal{D}_S$:
\begin{equation}\label{eq:bound_da}
R_{\mathcal{D}_T}[h] \leq R_{\mathcal{D}_S}[h] + d_{\mathcal{H}}[\mathcal{D}_S, \mathcal{D}_T] + \lambda,
\end{equation}
and the following details are worth highlighting regarding this result: (\textbf{i})-the term $\lambda$, as discussed by \cite{zhao2019learning}, accounts for differences between the labeling functions in the source and target domains, i.e. in the more general case the label $y'$ of a particular data point $x'$ depends on the underlying domain it was observed from. The \emph{covariate shift} assumption thus considers the more restrictive case where labeling functions match across domains, zeroing out $\lambda$ and tightening the bound shown above. (\textbf{ii})-the term $d_{\mathcal{H}}[\mathcal{D}_S,\mathcal{D}_T]$ corresponds to the discrepancy in terms of the $\mathcal{H}$-divergence (c.f. definition in \citep{ben2007analysis}) measured across the two considered domains.
The covariate shift assumption thus induces a setting where generalization can be expected if the considered domains lie close to each other in the $\mathcal{H}$-divergence sense. Such setting motivated the domain invariant approaches appearing across a number of recent domain adaptation methods \citep{ganin2016domain,albuquerque2019generalizing,bhattacharya2019generative,bashivan2020adversarial} where a feature extractor is forced to ignore domain-specific cues from the data and induce a low discrepancy across domains, enabling generalization. A similar direction was recently proposed to define invariant predictors instead of invariant representations in \citep{arjovsky2019invariant,krueger2020out,ahuja2020invariant}. In such setting, while data representations might still be domain-dependent, one seeks predictors that disregard domain factors in the sense that their predictions are not affected by changes in the underlying domains.
\subsection{Conditional modeling}
The problem of conditional modeling, i.e. that of defining models of a conditional distribution where some kind of contextual information is considered, appears across several areas. In this work, we choose to use a conditioning approach commonly referred to as FiLM, which belongs to a family of feature-wise conditioning methods that is widely used in the literature \citep{dumoulin2018feature-wise}. FiLM layers were introduced by \cite{perez2017film} and employed to tackle multi-modal visual reasoning tasks. Another setting where FiLM layers have been shown effective is few-shot learning. This is the case, for instance, of TADAM \citep{oreshkin2018tadam}, CNAPs \citep{requeima2019fast}, and CAVIA \citep{zintgraf2019fast} where FiLM layers are used to enable adapting a global cross-task model to particular tasks. Moreover, the few-shot classification setting under domain shift is tackled in \citep{Tseng2020Cross-Domain}, where feature-wise transformations are used as a means to diversify data and artificially create new domains at training time.
In further detail, FiLM layers consist of a per-feature affine transformation where its parameters are themselves a function of the data, as defined in the following:
\begin{equation}
\label{eq:film_operator}
\text{FiLM}(x,z) = \gamma(z)F(x)+\beta(z),
\end{equation}
where $F(x)$ represents features extracted from an input denoted $x$, while $\gamma(z)$ and $\beta(z)$ are arbitrary functions of some available conditioning information from the data and represented by $z$ (e.g. $x$ corresponded to images and $z$ represented text in the original paper). $\gamma$ and $\beta$ were thus parameterized by neural networks and trained jointly with the main model, function of $x$.
An approach similar to FiLM was employed by \cite{karras2019style} for the case of generative modeling. A conditioning layer consisting of adaptive instance normalization layers \citep{huang2017arbitrary} was used to perform conditional generation of images providing control over the style of the generated data. Moreover, in \citep{prol2018cross} FiLM layers were applied in order to condition a predictor on entire classification tasks. The goal in that case was to enable adaptable predictors for few-shot classification of novel classes.
Other applications of such a framework include, for instance, conditional language modeling. In \citep{keskar2019ctrl} for example, deterministic codes are given at train time indicating the style of a particular corpus. At test time, one can control the sampling process by giving each such code as an additional input and generate outputs corresponding to a particular style. In the case of applications to speech recognition, a common approach for acoustic modelling is to augment acoustic features with speaker-dependent representations so that the model can account for factors that are specific to the underlying speaker such as accent and speaking speed \citep{peddinti2015time}. Representations such as i-vectors \citep{dehak2010front} are then combined with spectral features at every frame, and the combined representations are projected onto a mixed subspace, learned prior to training the acoustic model.
\section{Domain conditional predictors}
\label{sec:method}
The setting we consider consists of designing models that parameterize a conditional categorical distribution which simultaneously relies on data as well as on domain information. We assume training data comes from multiple source domains, and each example has a label that indicates which domain it belongs to. Additional notation is introduced to account for that: we denote the set of domain labels by $\mathcal{Y}_{\mathcal{D}}$. We then consider two models given by:
\begin{enumerate}
\item $M_{domain} : \mathcal{X} \mapsto \Delta^{|\mathcal{Y}_{\mathcal{D}}|-1}$ maps a data instance $x$ onto the $|\mathcal{Y}_{\mathcal{D}}|-1$ probability simplex that defines the following categorical conditional distribution: $P(\mathcal{Y}_{\mathcal{D}}|x)=M_{domain}(x)$.
\item $M_{task} : \mathcal{X} \times \mathbb{R}^d \mapsto \Delta^{|\mathcal{Y}|-1}$, where an extra input represented by $z \in \mathbb{R}^d$ is a conditioning variable expected to carry domain information. $M_{task}$ maps a data instance $x$ and its corresponding $z$ onto the $|\mathcal{Y}|-1$ probability simplex, thus defining the following categorical conditional distribution: $P(\mathcal{Y}|x,z)=M_{task}(x,z)$.
\end{enumerate}
We implement both $M_{domain}$ and $M_{task}$ using neural networks, and training is carried out with simultaneous maximum likelihood estimation over $P(\mathcal{Y}_{\mathcal{D}}|x)$ and $P(\mathcal{Y}|x,z)$, so that the training objective $\mathcal{L}=(1-\lambda)\mathcal{L}_{task}+\lambda\mathcal{L}_{domain}$ is a convex combination of the multi-class cross-entropy losses defined over the sets of task and domain labels, respectively, where $\lambda \in [0,1]$ is a hyperparameter that controls the importance of each loss term during training. Moreover, $z$ is given by the output of some inner layer of $M_{domain}$, since $z$ is expected to contain domain-dependent information. The training procedure is depicted in Algorithm \ref{alg:training}.
In order for $M_{task}$ to be able to use the domain conditioning information made available through $z$, we make use of FiLM layers represented by:
\begin{equation}
FiLM^k(x^{k-1}, z)=(W^k_1z+b^k_1)\odot x^{k-1}+(W^k_2z+b^k_2),
\end{equation}
where $\odot$ denotes the element-wise product, $k$ indicates a particular layer within $M_{task}$, and $x^{k-1}$ corresponds to the output of the previous layer. $W^k_1$, $b^k_1$, $W^k_2$, and $b^k_2$ correspond to the conditioning parameters trained along with the complete model.
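A minimal PyTorch sketch of such a layer is shown below; it is an illustration of the mechanism rather than the implementation used in our experiments, and it assumes $z$ is a vector and $x^{k-1}$ a batch of convolutional feature maps.
\begin{verbatim}
import torch.nn as nn

class FiLM(nn.Module):
    # Feature-wise affine transformation conditioned on z: the
    # conditioning vector produces per-channel scales and offsets
    # that modulate the task network's features.
    def __init__(self, z_dim, num_features):
        super().__init__()
        self.scale = nn.Linear(z_dim, num_features)  # W_1 z + b_1
        self.shift = nn.Linear(z_dim, num_features)  # W_2 z + b_2

    def forward(self, x, z):
        # x: (batch, channels, H, W); z: (batch, z_dim)
        gamma = self.scale(z)[:, :, None, None]
        beta = self.shift(z)[:, :, None, None]
        return gamma * x + beta
\end{verbatim}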
\vspace{1cm}
\begin{algorithm}[]
\caption{Training procedure.}
\label{alg:training}
\begin{algorithmic}
\STATE $M_{task}, M_{domain} = InitializeModels()$
\REPEAT
\STATE $x, y, y_{\mathcal{D}} = SampleMinibatch()$
\STATE $y'_{\mathcal{D}}, z = M_{domain}(x)$
\STATE $y' = M_{task}(x,z)$
\STATE $\mathcal{L}=(1-\lambda)\mathcal{L}_{task}(y',y)+\lambda\mathcal{L}_{domain}(y'_{\mathcal{D}},y_{\mathcal{D}})$
\STATE $M_{task}, M_{domain} = UpdateRule(M_{task}, M_{domain}, \mathcal{L})$
\UNTIL{Maximum number of iterations reached}
\STATE \textbf{return} $M_{task}, M_{domain}$
\end{algorithmic}
\end{algorithm}
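For concreteness, one iteration of the procedure above can be written as follows (an illustrative PyTorch sketch, not our released code; it assumes $M_{domain}$ returns both the domain logits and the conditioning variable $z$):
\begin{verbatim}
import torch.nn.functional as F

def training_step(m_task, m_domain, optimizer, x, y, y_dom, lam=0.5):
    # One update with the objective
    # L = (1 - lam) * L_task + lam * L_domain.
    dom_logits, z = m_domain(x)   # domain prediction and context z
    task_logits = m_task(x, z)    # task prediction conditioned on z
    loss = (1 - lam) * F.cross_entropy(task_logits, y) \
        + lam * F.cross_entropy(dom_logits, y_dom)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}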
\vspace{1cm}
At test time, two distinct classifiers can be defined: the task predictor given by
\begin{equation}
\argmax_{i \in [|\mathcal{Y}|]} M_{task}(x,z)_i,
\end{equation}
or the domain predictor defined by:
\begin{equation}
\argmax_{j \in [|\mathcal{Y}_{\mathcal{D}}|]} M_{domain}(x)_j,
\end{equation}
thus enabling extra prediction mechanisms compared to methods that remove domain information.
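Both predictors are obtained from the same forward pass, as in the sketch below (illustrative; the same interface for $M_{domain}$ as above is assumed):
\begin{verbatim}
import torch

@torch.no_grad()
def predict(m_task, m_domain, x):
    # The same forward pass yields a task prediction (conditioned
    # on z) and a domain prediction.
    dom_logits, z = m_domain(x)
    task_logits = m_task(x, z)
    return task_logits.argmax(dim=1), dom_logits.argmax(dim=1)
\end{verbatim}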
\section{Planned evaluation and results}
\label{sec:evaluation}
In this section, we list the datasets we consider for the evaluation along with baseline methods, ablations, and the considered variations of conditioning approaches.
\subsection{Datasets}
Evaluations are to be performed on a subset of the following well-known domain generalization benchmarks:
\begin{itemize}
\item PACS \citep{li2017deeper}: It consists of 224x224 RGB images distributed into 7 classes and originating from four different domains: Photo (P), Art painting (A), Cartoon (C), and Sketch (S).
\item VLCS \citep{fang2013unbiased}: VLCS is composed by 5 overlapping classes of objects obtained from the following datasets: VOC2007 \citep{everingham2010pascal}, LabelMe \citep{russell2008labelme}, Caltech-101 \citep{griffin2007caltech}, and SUN \citep{choi2010exploiting}.
\item OfficeHome \citep{venkateswara2017Deep}: This dataset consists of images from the following domains: artistic images, clip art, product images and natural images. Each domain contains images of 65 classes.
\item DomainNet \citep{peng2019moment}: DomainNet contains examples of 224x224 RGB images corresponding to 345 classes of objects across 6 distinct domains.
\end{itemize}
\paragraph{Evaluation metric} Across all mentioned datasets, we follow the \emph{leave-one-domain-out} evaluation protocol such that data from $|\mathcal{Y}_{\mathcal{D}}|-1$ out of the $|\mathcal{Y}_{\mathcal{D}}|$ available domains are used for training, while evaluation is carried out on the data from the left out domain. This procedure is repeated for all available domains, and once each domain is left out, the \emph{average top-1 accuracy} is the evaluation metric under consideration. Moreover, in order to provide comparisons with significance, performance is to be reported in terms of confidence intervals obtained from independent models trained with different random seeds.
\subsection{Baselines}
The main aspect under investigation in this work is whether one can leverage domain information rather than removing or disregarding it, as in typical settings. Our main baselines then correspond to two settings where some kind of domain invariance is enforced: \emph{domain-adversarial approaches} and \emph{invariant predictors}. We specifically consider DANN \citep{ganin2016domain} and G2DM \citep{albuquerque2019generalizing} corresponding to the former, while IRM \citep{arjovsky2019invariant} and Rex \citep{krueger2020out} are considered for the latter. Additionally, two further baselines are evaluated: an \emph{unconditional model}, in the form of a standard classifier that disregards domain labels, as well as a model replacing $M_{domain}$ with a standard embedding layer\footnote{For this case, evaluation is performed in-domain, i.e. fresh data from the same domains observed at training time are used for testing.}.
\subsection{Ablations}
In order to investigate different sources of potential improvement, we will drop the domain classification term of the loss ($\mathcal{L}_{domain}$), obtaining a predictor with the same capacity as the proposed model while having no explicit mechanism for conditional modeling. A drop in performance should serve as evidence that the conditioning approach yields improvement. Moreover, similarly to ablations performed in the original FiLM paper \citep{perez2017film}, we plan on evaluating cases where scaling and offset parameters (i.e. $\gamma$ and $\beta$ as indicated in Eq. \ref{eq:film_operator}) are all set to 1 or 0, indicating which parameter set is more important for the conditioning strategy to be effective.
\subsection{Further evaluation details}
As discussed in previous work \citep{albuquerque2019generalizing, krueger2020out, gulrajani2020search}, the data source chosen for validation, used to implement model selection and stopping criteria, significantly affects the performance of domain generalization approaches. We remark that, in this work, no access to the left-out target domain is allowed for model selection or hyperparameter tuning; we thus only use in-domain validation data. However, we further consider so-called ``privileged'' variants of both our models and the baselines, in the sense that they are given access to target data. In doing so, we can get a sense of the gap in performance observed across the two settings.
\section{Proof-of-concept evaluation}
\label{sec:experiments}
We used MNIST to perform validation experiments and considered different domains simulated through transformations applied to the training data. The considered transformations are as follows: (\textbf{i})-horizontal flip of digits, (\textbf{ii})-switching colors between digits and background, (\textbf{iii})-blurring, and (\textbf{iv})-rotation. Examples of each transformation are shown in Figures \ref{fig:hflip}-\ref{fig:rotation}. Test data correspond to the standard test examples without any transformation. In-domain performance is further assessed by applying the same transformations to the test data.
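Such simulated domains can be obtained with standard image transformations, e.g. as sketched below (the exact distortion parameters are our assumptions rather than the precise values used in the experiments):
\begin{verbatim}
from torchvision import transforms

SIMULATED_DOMAINS = {
    "hflip":  transforms.RandomHorizontalFlip(p=1.0),
    "invert": transforms.RandomInvert(p=1.0),  # swap digit/background
    "blur":   transforms.GaussianBlur(kernel_size=3),
    "rotate": transforms.RandomRotation(degrees=(30.0, 30.0)),
}
\end{verbatim}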
Two baselines are considered in this set of experiments: an unconditional model and a domain adversarial approach similar to DANN. For the adversarial baseline, training is carried out so that alternate updates are performed to jointly train a task classifier and a domain classifier. The task classifier trains to minimize its classification loss and further maximizes the entropy of the domain classifier, aiming to enforce domain invariance in the representations extracted after its convolutional layer. The domain classifier trains to correctly classify domains. Two ablations are also considered. The first one consists of a conditional model with learned domain-level context variables used for conditioning, in which case the conditioning model is replaced by an embedding layer. Such a model can only be evaluated on in-domain test data. Additionally, we consider the ablation described above in which the domain classification term of the loss is dropped. For both the baselines and the conditional approaches, classifiers are implemented as two-layer ReLU-activated convolutional networks. Moreover, in the case of conditional models, $M_{domain}$ is given by a single convolutional layer followed by a linear output layer, and FiLM layers are included after each convolutional layer in $M_{task}$ \footnote{Implementation of this set of experiments is made available at: \url{https://github.com/google-research/google-research/tree/master/domain_conditional_predictors}}.
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=3cm]{Figures/hflip.png}
\caption{Horiz. flip.}
\label{fig:hflip}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=3cm]{Figures/colorflip.png}
\caption{Flip colors.}
\label{fig:colorflip}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=3cm]{Figures/blur.png}
\caption{Blurring.}
\label{fig:blur}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=3cm]{Figures/rotation.png}
\caption{Rotation.}
\label{fig:rotation}
\end{subfigure}
\caption{Examples of transformations used to simulate different data sources out of MNIST.}
\end{figure}
Results, as reported in Table \ref{tab:classification_results}, indicate that the conditioning approach boosts performance relative to standard classifiers that disregard domain information as well as to domain invariant approaches. In fact, the conditional predictors presented the highest out-of-domain accuracy amongst all evaluated methods. Surprisingly, in the ablation case where we drop $\mathcal{L}_{domain}$, domains can still be inferred from $z$ with high accuracy (c.f. Table \ref{tab:domain_classification}), which indicates the domain conditioning strategy enabled by the proposed architecture is exploited, even if not enforced by an explicit training objective, when multiple domains are present in the training sample.
\begin{table}[h]
\centering
\caption{Classification performance in terms of top-1 accuracy (\%). In-domain performance is measured using distorted MNIST test images while out-of-domain results correspond to evaluation on the standard test set of MNIST. The ablation with a learned embedding layer can only be used for in-domain predictions. For the in-domain evaluation, we loop over the test data 10 times to reduce the evaluation variance since each test example will be transformed differently each time.}
\resizebox{\textwidth}{!}{
\begin{tabular}{ccc}
\hline
\textbf{Model} & \textbf{\begin{tabular}[c]{@{}c@{}}In-domain\\ test accuracy (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Out-of-domain\\ test accuracy (\%)\end{tabular}} \\ \hline
Unconditional baseline & 94.97 & 92.51 \\
Adversarial baseline & 89.19 & 88.49 \\
Ablation: switching $M_{domain}$ for an embedding layer & 92.56 & -- \\
Ablation: dropping $\mathcal{L}_{domain}$ & 96.20 & 92.94 \\
Conditional predictor (\emph{Ours}) & 96.00 & 93.66 \\ \hline
\end{tabular}}
\label{tab:classification_results}
\end{table}
\begin{table}[h]
\centering
\caption{Top-1 accuracy (\%) of domain classification when the same transformations applied to the training data are applied to test examples. In the ablation case, a linear classifier is trained on top of $z$ with the rest of the model frozen.}
\begin{tabular}{cc}
\hline
\textbf{Model} & \textbf{Test accuracy (\%)} \\ \hline
Adversarial baseline & 84.22 \\
Ablation: dropping $\mathcal{L}_{domain}$ & 96.01 \\
Conditional predictor (\emph{Ours}) & 99.90 \\ \hline
\end{tabular}
\label{tab:domain_classification}
\end{table}
The reported proof-of-concept evaluation provided indications that domain-conditional predictors might indeed offer competitive performance under domain shift relative to both invariant approaches and standard classifiers. We thus proceed to evaluate to what extent such conclusions hold in larger-scale, widely adopted evaluation settings. Moreover, the validation experiments showed evidence that $z$ is indeed domain-dependent and, interestingly, that this is the case even when $\mathcal{L}_{domain}$ is dropped, with performance in that case fairly close to the proposed conditional model. As such, we further use the following evaluation to disentangle the contributions to the observed performance improvements given by the architectural changes resulting from the inclusion of FiLM layers from the actual influence of domain-conditioning on the final performance.
\section{Domain Generalization Evaluation}
\label{sec:complete_evaluation}
\subsection{Experiments description}
\label{sec:eval_description}
Seeking to confirm the findings in the proof-of-concept evaluation reported in section \ref{sec:experiments}, we now turn our attention to common benchmarks used for the domain generalization case under the leave-one-domain-out setting described previously. We specifically aim to answer the following questions:
\begin{enumerate}
\item Are domain-conditional models able to generalize to data sources other than the ones observed during training? How do they compare against more common domain-invariant approaches?
\item Can in-domain performance be used to reliably determine the best out-of-domain performers?
\item Are conditioning variables indeed domain-dependent? What happens when domain supervision is dropped?
\item What is the best combination of FiLM layers and regularization in order to enable effective conditioning?
\end{enumerate}
We first discuss differences between the planned and executed evaluations, and define a regularization procedure for FiLM layers that we found to be necessary in order to avoid overfitting. Question 1 listed above is addressed in section \ref{sec:eval_benchmarking}, where standard classifiers, domain-invariant models (both our implementation and published results), and the proposed domain-conditional predictors are compared. In section \ref{sec:stopping_criteria}, given that in realistic practical settings one does not have access to unseen domains and can therefore only perform cross-validation using in-domain data, we evaluate the usefulness of different criteria relying on in-domain validation data by comparing the best performers under such criteria against the best out-of-domain performers, i.e. those obtained by direct evaluation on the test data of unseen domains. Ablations are reported in section \ref{sec:eval_ablations}, where we evaluate \emph{self-modulated models}, i.e. models with the same architecture as conditional predictors but trained without domain supervision, and further check the performance given by simplified FiLM layers where either the scale or the offset parameters are dropped. Finally, we check for domain-dependency in the conditioning variable $z$ in section \ref{sec:z_properties}, where model-based dependency tests, as defined in appendix B, are carried out to verify whether domain-specific factors are encoded in the learned representations.
\paragraph{A note on the final choice of evaluation data and baselines}
As described in section \ref{sec:evaluation}, a subset of four candidate benchmarks was to be employed for comparing different approaches. We briefly describe the features of those datasets that guided the decision regarding which ones to actually use. We found VLCS \citep{fang2013unbiased}, for instance, to be relatively small in terms of training sample size given the size of the models under analysis. Moreover, one of the domains within VLCS, Caltech-101, is a potential subset of ImageNet, which is commonly used to pre-train models for domain generalization. Yet another relevant factor is that all 4 domains of VLCS correspond to natural images, representing not much of a shift across training and testing domains. For OfficeHome \citep{venkateswara2017Deep}, we found previously reported results to be high enough to leave little room for improvement (c.f. \citep{peng2019moment} for the domain adaptation case, for instance), and, similarly to VLCS, it is not very diverse in terms of domains, nor in terms of classes, since they all correspond to office objects. We thus focus our resource budget on: PACS \citep{li2017deeper} for the smaller scale range (but large enough to avoid instabilities) in terms of both sample size and number and diversity of classes/domains, and DomainNet \citep{peng2019moment} for a much larger scale case. In terms of baselines, we found performances reported for IRM \citep{arjovsky2019invariant} and Rex \citep{krueger2020out} to not be significantly different and thus picked only one of them; we remark that both approaches attempt to train classifiers that are invariant across domains. Moreover, we included extra baselines for a complete comparison. For PACS, the domain-invariant approach reported by \cite{zhao2020domain} is considered, while challenging privileged baselines that have access to unlabeled data from the target domain were also included for DomainNet; those correspond to \citep{saito2018maximum,xu2018deep,peng2019moment}.
\subsection{Regularized FiLM}
\label{sec:film_regularization}
Given the extra capacity provided by the inclusion of FiLM layers, we found it necessary to include regularization strategies aimed at preventing conditional models from growing too complex, which could result in overfitting. We thus penalize FiLM layers when they differ from the \emph{identity map}\footnote{The identity map outputs its inputs.}, through the mean squared error (MSE) measured between inputs and outputs of FiLM layers throughout the model. The regularization penalty, indicated by $\Omega_{FiLM}$, is given by the average MSE:
\begin{equation}
\label{eq:film_reg}
\Omega_{FiLM} = \frac{1}{|\mathcal{F}|} \sum_{k \in \mathcal{F}} \text{MSE}(x^{k-1}, FiLM^k(x^{k-1}, z))
\end{equation}
where $\mathcal{F}$ stands for the set of indexes of FiLM layers while $k$ indicates general indexes of layers within $M_{task}$. The inputs to the conditioning layers correspond to $z$, the external conditioning variable, and $x^{k-1}$, representing the output of the layer preceding $FiLM^k$. The complete training loss then becomes:
\begin{equation}
\label{eq:full_training_objective}
\mathcal{L}=(1-\lambda)\mathcal{L}_{task}+\lambda\mathcal{L}_{domain}+\gamma\Omega_{FiLM},
\end{equation}
where $\gamma$ is a hyperparameter weighing the importance of the penalty in the overall training objective, which allows controlling how far from the identity operator FiLM layers can deviate. We remark that the case $\gamma=0$ is included within the range of all of our hyperparameter searches, so that the standard unconstrained FiLM can always be selected by the search procedure. We found, however, that adding some regularization consistently yielded better generalization, with $\gamma$ usually assigned relatively small values such as $10^{-9}$ or $10^{-10}$.
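As an illustration, a sketch of the full objective in eqs. \ref{eq:film_reg} and \ref{eq:full_training_objective} could read as follows; cross-entropy is assumed for both loss terms, and \texttt{film\_pairs} is an assumed list of input/output tensors cached at each FiLM layer during the forward pass.
\begin{verbatim}
# Hedged sketch of the regularized training objective defined above.
# film_pairs is assumed to hold (input, output) tensors cached at each
# FiLM layer during the forward pass.
import torch.nn.functional as F

def training_loss(task_logits, domain_logits, y_task, y_domain,
                  film_pairs, lam=0.5, gamma=1e-9):
    l_task = F.cross_entropy(task_logits, y_task)
    l_domain = F.cross_entropy(domain_logits, y_domain)
    # Omega_FiLM: average MSE penalizing deviations from the identity map.
    omega = sum(F.mse_loss(out, inp)
                for inp, out in film_pairs) / len(film_pairs)
    return (1 - lam) * l_task + lam * l_domain + gamma * omega
\end{verbatim}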
\subsection{Domain-conditional predictors are much simpler than and as performant as domain-invariant approaches}
\label{sec:eval_benchmarking}
In the following, unless stated otherwise, results are reported in terms of 95\% confidence intervals of the top-1 accuracy obtained with a sample of size 5 (i.e. 5 independent training runs were executed to generate each entry in the tables). Across tables, columns indicate left-out domains that were completely disregarded when training the models whose performance is reported under said column. Our models as well as most baselines correspond to the ResNet-50 architecture, and we follow the common practice of pre-training the models on ImageNet. We also freeze batch normalization layers during fine-tuning to comply with previously reported evaluation protocols for the considered tasks. Four FiLM layers are included in $M_{task}$ in total, one after each ResNet block. Further implementation details are included in the appendix.
\subsubsection{Matching the performance of state-of-the-art invariant approaches on PACS}
Evaluation for PACS is reported in Table \ref{tab:pacs_results}, and two distinct cases are considered consisting of \emph{in-domain performance}, which indicates the evaluation is carried out using fresh data (unseen during training) from the same domains used for training, while the \emph{out-of-domain} case consists of the evaluation performed on the test partition of left-out domains. Moreover, in order to enable a fair comparison with published results of competing approaches, out-of-domain performance is reported in terms of the best accuracy obtained by a given model throughout training, which we indicate by the term \emph{Oracle} (c.f. section \ref{sec:stopping_criteria} for a detailed discussion on different methods for selecting models for evaluation). Performance is reported for the conditional models proposed here along with a number of baselines. We use the term \emph{unconditional models} to refer to models trained under standard ERM and that have the same architecture as $M_{task}$ after removing conditioning layers, i.e. a standard ResNet-50. In order to control for the effect in performance given by the differences in model sizes, we further considered higher capacity versions of the unconditional baseline, where we doubled the width of the model by using two models trained side-by-side and concatenating their outputs prior to the final layer, and also included a deeper model implemented as a ResNet-101. For the adversarial case as well as IRM, we used the exact same model as ours corresponding to the pair $M_{task}$ and $M_{domain}$ as well as FiLM layers, and no domain-related supervision is performed in such cases. State-of-the-art results achieved by the domain-invariant approach reported by \cite{zhao2020domain} are finally included.
We highlight that, in the out-of-domain case and across most domains, the conditional model outperforms the unconditional baselines regardless of their size, as well as the domain-invariant approaches that we implemented, and does so without affecting its in-domain performance, which did occur for the domain-invariant baselines. This suggests that inducing invariance has a cost in terms of generalization performance on training domains which was not observed for conditional approaches. Moreover, in-domain performance seems to be a good predictor of out-of-domain differences across models in most cases. As will be further discussed later on, that does not always appear to be the case when using in-domain performance to do model selection or to decide when to stop training. We further remark that the conditional approach closely matches the performance of the state-of-the-art domain-invariant scheme while offering practical advantages such as a much simpler training scheme and fewer hyperparameters to be tuned. We also mention that, as observed in previous work \citep{gulrajani2020search}, unconditional models trained under standard ERM perform surprisingly well in the domain generalization case, and the reason for that remains an open question\footnote{Considering that the domain generalization setting violates assumptions required in ERM in order to guarantee generalization.}. However, in section \ref{sec:z_properties}, we show that representations learned through ERM are domain-dependent for the types of data we consider. Regarding some of the invariant baselines, we highlight that, similarly to what has been reported in the past \citep{albuquerque2019generalizing,krueger2020out,rosenfeld2020risks}, we found IRM to perform similarly to ERM (referred to as unconditional in the tables). Moreover, considering the adversarial baseline, its performance gap with respect to other approaches is likely due to the added complexity of the adversarial training scheme and the additional hyperparameters it introduces on top of those brought in by conditional models. We thus claim the overall simplicity of the domain-conditional setting as a relevant practical advantage over adversarial formulations of invariant approaches.
\begin{table}[]
\centering
\caption{Leave-one-domain-out evaluation on PACS. Results correspond to 95\% confidence intervals on the top-1 prediction accuracy (\%). Results in bold indicate the best performers for each left-out domain. Conditional models perform on par with a state-of-the-art invariant approach while being much simpler to train.
\label{tab:pacs_results}}
\begin{tabular}{ccccc}
\hline
\multicolumn{1}{l}{} & \multicolumn{4}{c}{\textbf{Left-out-domain}} \\ \cline{2-5}
& \textit{\textbf{Art Painting}} & \textit{\textbf{Cartoon}} & \textit{\textbf{Photo}} & \textit{\textbf{Sketch}} \\ \hline
\multicolumn{5}{l}{\textit{In-domain test accuracy}} \\ \hline
Unconditional & 95.48$\pm$0.31 & 94.52$\pm$0.18 & 94.26$\pm$0.28 & 95.91$\pm$0.24 \\
Uncond. (\emph{Wide}) & \textbf{96.20$\pm$0.29} & 94.62$\pm$0.33 & 94.42$\pm$0.39 & 96.17$\pm$0.19 \\
Uncond. (\emph{Deep}) & 95.83$\pm$0.24 & 94.41$\pm$0.25 & 94.50$\pm$0.19 & 95.19$\pm$0.32 \\
Adversarial & 91.96$\pm$0.55 & 91.97$\pm$0.76 & 90.60$\pm$0.75 & 93.21$\pm$0.45 \\
IRM & 94.99$\pm$0.50 & 94.00$\pm$0.18 & 93.26$\pm$0.30 & 95.29$\pm$0.32 \\
Conditional (\emph{Ours}) & 95.86$\pm$0.31 & \textbf{95.06$\pm$0.15} & \textbf{94.90$\pm$0.49} & \textbf{96.33$\pm$0.29} \\ \hline
\multicolumn{5}{l}{\textit{Oracle out-of-domain test accuracy}} \\ \hline
Unconditional & 85.40$\pm$0.57 & 77.28$\pm$0.78 & 95.23$\pm$0.53 & 74.58$\pm$1.09 \\
Uncond. (\emph{Wide}) & 85.98$\pm$0.45 & 78.37$\pm$0.66 & 95.53$\pm$0.44 & 74.31$\pm$1.11 \\
Uncond. (\emph{Deep}) & 84.81$\pm$0.54 & 79.47$\pm$0.51 & 95.09$\pm$0.21 & 76.67$\pm$2.36 \\
Adversarial & 74.74$\pm$1.44 & 74.64$\pm$0.81 & 92.67$\pm$1.03 & \textbf{78.65$\pm$1.08} \\
IRM & 81.73$\pm$0.56 & 76.53$\pm$0.57 & 95.07$\pm$0.29 & 77.76$\pm$2.18 \\
Conditional (\emph{Ours}) & 86.28$\pm$0.84 & \textbf{80.29$\pm$1.48} & 96.75$\pm$0.54 & 77.34$\pm$2.16 \\ \hline
\cite{zhao2020domain} & \textbf{87.51$\pm$1.03} & 79.31$\pm$1.40 & \textbf{98.25$\pm$0.12} & 76.30$\pm$0.65 \\ \hline
\end{tabular}
\end{table}
\subsubsection{Closing the gap between models trained with access to unlabeled target data and domain generalization approaches on DomainNet}
For the case of DomainNet, results, as reported in Table \ref{tab:domainnet_results}, include what we refer to as \emph{privileged baselines}, indicating that such models have an advantage in that they have access to the test distribution through an unlabeled data sample made available to them during training\footnote{The setting under consideration in the case of privileged baselines is commonly referred to as multi-source unsupervised domain adaptation.}. Moreover, the three considered privileged baselines are implemented using deeper models corresponding to a ResNet-101 architecture, as discussed by \cite{peng2019moment}, rendering the comparison unfair in their favor. We then compare such approaches with conditional models as well as the unconditional cases trained under the domain generalization setting, i.e. with no access to the test domain during training. Once more, conditional models are observed to outperform standard classifiers even when the overall number of parameters is roughly matched, considering the wide/deep cases. More importantly, conditional models are observed to perform on par with baselines that have access to the test domain, showing that the proposed approach reduces the gap between models that specialize to a particular target domain and those that are simply trained on diverse data sources, without focusing on any test distribution in particular. This is of practical significance since it can enable the deployment of models that readily generalize to new sources, without requiring any prior data collection.
\begin{table}[]
\centering
\caption{Leave-one-domain-out evaluation on DomainNet. Results correspond to 95\% confidence intervals on the top-1 prediction accuracy (\%). Results in bold indicate the best performers for each left-out domain. In all cases, conditional predictors outperform at least one of the privileged baselines.
\label{tab:domainnet_results}}
\resizebox{\textwidth}{!}{
\begin{tabular}{ccccccc}
\hline
& \multicolumn{6}{c}{\textbf{Left-out domain}} \\ \cline{2-7}
& \textbf{Clipart} & \textbf{Infograph} & \textbf{Painting} & \textbf{Quickdraw} & \textbf{Real} & \textbf{Sketch} \\ \hline
\multicolumn{7}{l}{\textit{In-domain test accuracy}} \\ \hline
Unconditional & 60.85$\pm$0.78 & 64.42$\pm$1.26 & 62.29$\pm$1.47 & 62.07$\pm$1.19 & 57.19$\pm$1.66 & 62.30$\pm$1.61 \\
Uncond. (\emph{Wide}) & 61.67$\pm$1.85 & 63.99$\pm$1.48 & 63.18$\pm$1.13 & 63.70$\pm$0.92 & 58.37$\pm$0.64 & 63.98$\pm$0.99 \\
Uncond. (\emph{Deep}) & 61.22$\pm$0.94 & 65.01$\pm$1.86 & 63.08$\pm$1.40 & 62.88$\pm$0.96 & 57.88$\pm$0.69 & 63.48$\pm$1.08 \\
Conditional (\emph{Ours}) & \textbf{65.10$\pm$0.47} & \textbf{69.01$\pm$0.75} & \textbf{66.99$\pm$0.49} & \textbf{67.74$\pm$0.44} & \textbf{60.90$\pm$0.59} & \textbf{67.43$\pm$0.48} \\ \hline
\multicolumn{7}{l}{\textit{Oracle out-of-domain test accuracy}} \\ \hline
Unconditional & 52.47$\pm$1.13 & 19.64$\pm$0.42 & 44.12$\pm$0.71 & 11.67$\pm$0.50 & 51.94$\pm$1.68 & 44.40$\pm$1.42 \\
Uncond. (\emph{Wide}) & 54.12$\pm$2.25 & 18.96$\pm$0.62 & 44.88$\pm$0.83 & 12.16$\pm$0.16 & 52.03$\pm$1.12 & 45.51$\pm$1.68 \\
Uncond. (\emph{Deep}) & 54.70$\pm$1.26 & 21.05$\pm$0.44 & 45.54$\pm$0.66 & 12.71$\pm$0.33 & 51.62$\pm$1.43 & 46.83$\pm$1.17 \\
Conditional (\emph{Ours}) & 58.37$\pm$0.67 & 23.25$\pm$0.45 & 50.06$\pm$0.43 & \textbf{13.32$\pm$0.34} & 57.25$\pm$0.47 & \textbf{50.52$\pm$1.05} \\ \hline
\multicolumn{7}{l}{\textit{Privileged baselines with access to unlabeled target data}} \\ \hline
\cite{saito2018maximum} & 54.3$\pm$0.64 & 22.1$\pm$0.70 & 45.7$\pm$0.63 & 7.6$\pm$0.49 & 58.4$\pm$0.65 & 43.5$\pm$0.57 \\
\cite{xu2018deep} & 48.6$\pm$0.73 & 23.5$\pm$0.59 & 48.8$\pm$0.63 & 7.2$\pm$0.46 & 53.5$\pm$0.56 & 47.3$\pm$0.47 \\
\cite{peng2019moment} & \textbf{58.6$\pm$0.53} & \textbf{26.0$\pm$0.89} & \textbf{52.3$\pm$0.55} & 6.3$\pm$0.58 & \textbf{62.7$\pm$0.51} & 49.5$\pm$0.76 \\ \hline
\end{tabular}
}
\end{table}
\subsection{In-domain performance is a better predictor of out-of-domain accuracy for larger scale data}
\label{sec:stopping_criteria}
Following the discussion in previous work \citep{albuquerque2019generalizing,krueger2020out}, we study the impact on out-of-domain performance resulting from the choice of different criteria used to decide when to stop training. This is an important analysis which we argue should be reported by any work proposing approaches targeting domain generalization, since performing cross-validation in this setting is non-trivial, given that no specific test domain is defined at training time. We thus consider the performance of the proposed models when two different criteria relying solely on in-domain validation data are used, and compare them against the case where the test data is used to select the best performing model (i.e. the \emph{oracle} case). In further detail, we follow \cite{albuquerque2019generalizing} and analyze the out-of-domain performance obtained by models that reached the highest in-domain prediction accuracy and the lowest in-domain classification loss, both measured with fresh in-domain data held out from training.
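Schematically, the two practical criteria reduce to the following checkpoint-selection sketch; \texttt{checkpoints} and \texttt{evaluate\_in\_domain} are placeholder names for our purposes here, not part of any specific library.
\begin{verbatim}
# Sketch of the two in-domain stopping criteria; `checkpoints` and
# `evaluate_in_domain` are assumed helpers.
best_acc, best_loss = (-1.0, None), (float("inf"), None)
for ckpt in checkpoints:
    val_acc, val_loss = evaluate_in_domain(ckpt)
    if val_acc > best_acc[0]:
        best_acc = (val_acc, ckpt)    # criterion 1: highest val. accuracy
    if val_loss < best_loss[0]:
        best_loss = (val_loss, ckpt)  # criterion 2: lowest val. loss
# The oracle instead selects the checkpoint with the highest accuracy
# measured directly on the left-out domain's test data.
\end{verbatim}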
\begin{table}[]
\centering
\caption{Comparison of different stopping criteria on PACS. Results in terms of out-of-domain accuracy (\%) are shown for each left-out-domain and each method for selecting models to be evaluated. The best in-domain models are significantly worse than the best out-of-domain performers.
\label{tab:pacs_stopping}}
\begin{tabular}{ccccc}
\hline
\multicolumn{1}{l}{} & \multicolumn{4}{c}{\textbf{Left-out domain}} \\ \cline{2-5}
& \textbf{Art painting} & \textbf{Cartoon} & \textbf{Photo} & \textbf{Sketch} \\ \hline
\multicolumn{5}{l}{\textit{Best in-domain performers evaluated out-of-domain}} \\ \hline
Validation accuracy & 80.65$\pm$1.83 & 75.80$\pm$2.81 & 95.34$\pm$0.62 & 71.76$\pm$2.60 \\
Validation loss & 83.84$\pm$0.84 & 74.90$\pm$2.79 & 95.80$\pm$0.67 & 72.59$\pm$2.34 \\ \hline
\multicolumn{5}{l}{\textit{Best out-of-domain performer}} \\ \hline
Oracle & \textbf{86.28$\pm$0.84} & \textbf{80.29$\pm$1.48} & \textbf{96.75$\pm$0.54} & \textbf{77.34$\pm$2.16} \\ \hline
\end{tabular}
\end{table}
Results are reported in Tables \ref{tab:pacs_stopping} and \ref{tab:domainnet_stopping} for PACS and DomainNet, respectively. For PACS, we observe a significant gap across all domains between the best performance that can be achieved and that observed by the best in-domain performer. The gap shrinks drastically when we move to the larger scale case corresponding to DomainNet. We attribute the smaller gap for DomainNet to the increased domain diversity, since more distinct training sources are available in that case. Considering both datasets, we did not observe a significant difference between the two practical stopping criteria that use only in-domain validation data.
\begin{table}[]
\centering
\caption{Comparison of different stopping criteria on DomainNet. Results in terms of out-of-domain accuracy (\%) are shown for each left-out-domain and each method for selecting models to be evaluated. A smaller gap is observed for DomainNet between model selection approaches that use in-domain validation data and those that have access to the test set of the left-out-domain.
\label{tab:domainnet_stopping}}
\resizebox{\textwidth}{!}{
\begin{tabular}{ccccccc}
\hline
\multicolumn{1}{l}{} & \multicolumn{6}{c}{\textbf{Left-out domain}} \\ \cline{2-7}
& \textbf{Clipart} & \textbf{Infograph} & \textbf{Painting} & \textbf{Quickdraw} & \textbf{Real} & \textbf{Sketch} \\ \hline
\multicolumn{7}{l}{\textit{Best in-domain performers evaluated out-of-domain}} \\ \hline
Validation accuracy & 57.56$\pm$0.77 & 22.30$\pm$0.44 & 49.51$\pm$0.61 & 12.35$\pm$0.57 & 56.32$\pm$0.28 & 49.89$\pm$1.12 \\
Validation loss & 58.04$\pm$0.99 & 22.56$\pm$0.57 & 49.33$\pm$0.61 & 12.58$\pm$0.62 & 56.45$\pm$1.30 & 49.95$\pm$1.40 \\ \hline
\multicolumn{7}{l}{\textit{Best out-of-domain performer}} \\ \hline
Oracle & \textbf{58.37$\pm$0.67} & \textbf{23.25$\pm$0.45} & \textbf{50.06$\pm$0.43} & \textbf{13.32$\pm$0.34} & \textbf{57.25$\pm$0.47} & \textbf{50.52$\pm$1.05} \\ \hline
\end{tabular}
}
\end{table}
\subsection{Ablations}
\label{sec:eval_ablations}
\subsubsection{The self-modulated case: dropping domain supervision does not affect performance}
We now perform ablations in order to understand the sources of the improvements provided by the discussed conditioning mechanism. Specifically, we drop the domain supervision term of the training objective defined in eq. \ref{eq:full_training_objective} to check whether the improvements observed in Tables \ref{tab:pacs_results} and \ref{tab:domainnet_results} with respect to unconditional and invariant baselines are due to the increase in model size rather than to the conditioning approach. We then retrain our models while setting $\lambda=0$, in which case we refer to the resulting model as \emph{self-modulated}, given that there is then no supervision signal indicating what kind of information should be encoded in $z=M_{domain}(x)$ other than the classification criterion $\mathcal{L}_{task}$. Results are reported in Tables \ref{tab:pacs_selfmod} and \ref{tab:domainnet_selfmod} for PACS and DomainNet, respectively. In both datasets and for all left-out domains, we do not observe significant performance differences between the conditional and self-modulated models. We thus claim that one of the two following statements explains the matching performance across the two schemes:
\begin{enumerate}
\item Even if domain supervision is not employed, $z$ is still domain-dependent.
\item Conditioning on domain information does not improve out-of-distribution performance, and improvements with respect to unconditional models are simply due to architectural changes.
\end{enumerate}
We provide evidence supporting the first statement enumerated above in section \ref{sec:z_properties}, where we show that linear classifiers are able to discriminate domains with relatively high accuracy using only the $z$ obtained after training self-modulated models. Such a result indicates that domain-dependent factors will be encoded in $z$ even if this property \emph{is not} explicitly enforced via supervision. This finding is of practical relevance given that no domain labels are required to enable domain-conditioning in this type of architecture, yielding models that perform better than standard classifiers and on par with complex domain-invariant approaches that do require domain labels. We remark, however, that the conditional setting offers the advantage of enabling domain-related inferences, which are not supported in the self-modulated case. Given that both cases present a similar performance, we argue that the choice regarding which setting to use in a practical application should be guided by factors such as the availability of domain labels as well as the need for performing domain predictions at testing time.
\begin{table}[]
\centering
\caption{Ablation: self-modulated models compared with domain-conditional predictors on PACS. Results correspond to the best out-of-domain prediction accuracy (\%) obtained by each approach. Best results for each domain are in bold. No significant performance difference is observed across approaches.
\label{tab:pacs_selfmod}}
\begin{tabular}{ccccc}
\hline
& \multicolumn{4}{c}{\textbf{Left-out domain}} \\ \cline{2-5}
& \textbf{Art painting} & \textbf{Cartoon} & \textbf{Photo} & \textbf{Sketch} \\ \hline
Conditional & \textbf{86.28$\pm$0.84} & \textbf{80.29$\pm$1.48} & \textbf{96.75$\pm$0.54} & 77.34$\pm$2.16 \\
Self-modulated & 85.63$\pm$0.58 & 80.21$\pm$0.60 & 96.42$\pm$0.19 & \textbf{78.80$\pm$1.31} \\ \hline
\end{tabular}
\end{table}
\begin{table}[]
\centering
\caption{Self-modulated models compared with domain-conditional predictors on DomainNet. Results correspond to the best out-of-domain prediction accuracy (\%) obtained by each approach. Best results for each domain are in bold. No significant performance difference is observed across approaches.
\label{tab:domainnet_selfmod}}
\resizebox{\textwidth}{!}{
\begin{tabular}{ccccccc}
\hline
& \multicolumn{6}{c}{\textbf{Left-out domain}} \\ \cline{2-7}
& \textbf{Clipart} & \textbf{Infograph} & \textbf{Painting} & \textbf{Quickdraw} & \textbf{Real} & \textbf{Sketch} \\ \hline
Conditional & \textbf{58.37$\pm$0.67} & \textbf{23.25$\pm$0.45} & 50.06$\pm$0.43 & \textbf{13.32$\pm$0.34} & 57.25$\pm$0.47 & \textbf{50.52$\pm$1.05} \\
Self-modulated & 58.35$\pm$0.94 & 22.86$\pm$0.26 & \textbf{50.31$\pm$0.45} & 12.87$\pm$0.21 & \textbf{57.69$\pm$0.70} & 50.01$\pm$0.61 \\ \hline
\end{tabular}
}
\end{table}
We further qualitatively assess the differences in behavior between the two approaches using Grad-CAM heat-maps \citep{selvaraju2017grad}, as shown in Figures \ref{fig:gradcam_conditional} and \ref{fig:gradcam_selfmodulated} for the conditional and self-modulated models, respectively. Those indicate parts of the input deemed relevant to the resulting predictions, and in both cases, the models employed in the analysis were trained with \emph{sketch} arbitrarily chosen as the left-out domain. From left to right, each group of three images corresponds to the original image and the Grad-CAM plots obtained from $M_{domain}$ and $M_{task}$, respectively. One can then observe that, for the conditional model, $M_{domain}$ accounts for regions of the input other than the object of interest, such as the background, while $M_{task}$ attends to more specific parts of the input that lie in the foreground. Interestingly, this behavior seems to be reversed in the self-modulated case: $M_{domain}$ concentrates on specific regions of the object defining the underlying class, while $M_{task}$ accounts for the entire input. In both cases, however, the two models learn to work in a complementary fashion, such that $M_{task}$ and $M_{domain}$ often focus on different aspects of the input data. Further differences between the two schemes will be discussed in section \ref{sec:z_properties}.
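For reference, a minimal sketch of the Grad-CAM computation underlying these heat-maps follows \citep[after][]{selvaraju2017grad}; the split of the network into a convolutional stack (\texttt{features}) and a classification head (\texttt{head}) is our assumption for illustration.
\begin{verbatim}
# Minimal Grad-CAM sketch: class-score gradients w.r.t. the last conv
# activations are pooled into channel weights, then used to combine the
# activation maps into a heat-map. `features`/`head` is an assumed split.
import torch

def grad_cam(features, head, x, class_idx):
    maps = features(x)            # (1, C, H, W) activations of interest
    maps.retain_grad()
    score = head(maps)[0, class_idx]
    score.backward()              # gradients of the class score
    weights = maps.grad.mean(dim=(2, 3), keepdim=True)  # GAP of gradients
    cam = torch.relu((weights * maps).sum(dim=1))       # weighted activations
    return cam / (cam.max() + 1e-8)                     # normalized heat-map
\end{verbatim}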
\begin{figure}[h]
\centering
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=.65\linewidth]{Figures/gradcam_conditional.png}
\caption{Grad-CAM heat-maps resulting from the \emph{domain-conditional model}. Images correspond to the original input, and the results of the analysis performed using $M_{domain}$ and $M_{task}$, respectively. While $M_{task}$ focuses on the object of interest, $M_{domain}$ accounts for secondary features including the background.}
\label{fig:gradcam_conditional}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=.65\linewidth]{Figures/gradcam_selfmodulated.png}
\caption{Grad-CAM heat-maps resulting from \emph{self-modulated model}. Images correspond to the original input, and the results of the analysis performed using $M_{domain}$ and $M_{task}$, respectively. In this case, $M_{task}$ spreads its focus over most of the input image while $M_{domain}$ targets specific regions.}
\label{fig:gradcam_selfmodulated}
\end{subfigure}
\caption{Heat-maps indicating the parts of input images that drive the predictions of the different models we consider.}
\end{figure}
\subsubsection{Dropping FiLM parameters: Full FiLM layers along with regularization yield the best conditioning scheme}
We now perform a second set of ablations to determine whether simpler variations of FiLM layers are able to perform similarly to the full model. In Table \ref{tab:pacs_filmablation}, results for PACS are reported, and the full conditional model is compared against variations in which either the scale or the offset parameters are dropped from all FiLM layers throughout the model. Results indicate that using full FiLM yields better performance across most of the domains. We further highlight that $\gamma$, the parameter controlling the contribution of the regularization penalty $\Omega_{FiLM}$, was consistently assigned non-zero values by the hyperparameter search. We then conclude that using full FiLM layers together with the regularization strategy described in eq. \ref{eq:film_reg} is the best performing combination for the types of models/problems we consider. We finally remark that the combination of over-parameterization and regularization is a common approach in recent work on neural networks, which is aligned with the setting we observed to yield the best performing predictors.
\begin{table}[]
\centering
\caption{Dropping parameters of FiLM layers. Results are reported for PACS in terms of the best out-of-domain prediction accuracy (\%) obtained by each approach. Best results for each domain are in bold.
\label{tab:pacs_filmablation}}
\begin{tabular}{ccccc}
\hline
& \multicolumn{4}{c}{\textbf{Left-out domain}} \\ \cline{2-5}
& \textbf{Art painting} & \textbf{Cartoon} & \textbf{Photo} & \textbf{Sketch} \\ \hline
Conditional & \textbf{86.28$\pm$0.84} & \textbf{80.29$\pm$1.48} & \textbf{96.75$\pm$0.54} & 77.34$\pm$2.16 \\ \hline
- scale & 84.79$\pm$0.61 & 78.68$\pm$1.34 & 96.50$\pm$0.10 & 75.96$\pm$2.11 \\
- offset & 85.09$\pm$0.28 & 78.68$\pm$0.80 & 95.86$\pm$0.37 & \textbf{77.46$\pm$0.99} \\ \hline
\end{tabular}
\end{table}
\subsection{Understanding properties of $z$: domain-dependency is observed with or without domain supervision}
\label{sec:z_properties}
In the following set of experiments, we study whether domain-dependency is indeed achieved in $z=M_{domain}(x)$, and further investigate which properties appear in $z$ when the self-modulated case is considered. Representations output by unconditional models are also evaluated in order to help understand the properties of representations resulting from ERM with standard architectures. To perform such an evaluation, we carry out a dependency analysis between representations extracted from the models and domain or class labels\footnote{Refer to the supplementary materials for a definition of the model-based statistical dependency analysis performed in this set of experiments.}. We specifically train two linear classifiers on the domain representations $z$ for both the conditional and self-modulated models, and further consider the output of the convolutional stack of the unconditional ResNet-50. While one of the classifiers is trained to predict domains, the other predicts the actual class labels, and these steps are repeated, for each left-out domain, on the test data of the corresponding training domains. Extra results corresponding to low-rank projections of $z$, as well as a similar analysis performed using FiLM parameters rather than $z$, are included in the appendix.
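Concretely, each probe reduces to a cross-validated logistic regression on the frozen representations, along the lines of the sketch below; the arrays \texttt{z} and \texttt{labels} are assumed to be precomputed, and the normal-approximation confidence interval is illustrative.
\begin{verbatim}
# Sketch of the linear-probe dependency test: 5-fold cross-validated
# logistic regression on frozen representations z. `z` and `labels` are
# assumed precomputed numpy arrays of shape (n, d) and (n,).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

scores = cross_val_score(LogisticRegression(max_iter=1000), z, labels, cv=5)
half_ci = 1.96 * scores.std(ddof=1) / np.sqrt(len(scores))  # approx. 95% CI
print(f"probe accuracy: {100 * scores.mean():.2f} +/- {100 * half_ci:.2f} %")
\end{verbatim}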
Results reported in Tables \ref{tab:pacs_domainclassification} and \ref{tab:pacs_taskclassification} correspond to 95\% confidence intervals of the in-domain test accuracy obtained through 5-fold cross-validation with logistic regression predicting domains or classes, respectively. Considering the conditional model, representations are clearly domain-dependent, as originally expected, and class prediction is not as accurate as in the self-modulated case. Interestingly, the embedding spaces given by the self-modulated models are both domain- and class-dependent; remarkably, one can predict classes from them with an accuracy comparable to the actual classifier (compare to the in-domain case in Table \ref{tab:pacs_results}), while domains can also be predicted with high accuracy relative to a random predictor in all of the considered cases.
\begin{table}[h]
\centering
\caption{Prediction accuracy (\%) of logistic regression for performing domain classification on top of $z$. Results correspond to 95\% confidence intervals obtained through 5-fold cross validation.
\label{tab:pacs_domainclassification}}
\begin{tabular}{ccccc}
\hline
& \multicolumn{4}{c}{\textbf{Left-out domain}} \\ \cline{2-5}
& \textbf{Art painting} & \textbf{Cartoon} & \textbf{Photo} & \textbf{Sketch} \\ \hline
Conditional & 98.88$\pm$0.18 & 98.19$\pm$0.61 & 99.76$\pm$0.11 & 96.10$\pm$0.51 \\
Self-modulated & 72.45$\pm$1.32 & 80.95$\pm$0.82 & 90.86$\pm$0.69 & 62.02$\pm$0.91 \\ \hline
Unconditional & 98.14$\pm$0.31 & 95.11$\pm$0.47 & 98.34$\pm$0.17 & 90.43$\pm$1.16 \\ \hline
\end{tabular}
\end{table}
\begin{table}[h]
\centering
\caption{Prediction accuracy (\%) of logistic regression for predicting class labels using $z$. Results correspond to 95\% confidence intervals obtained through 5-fold cross validation.
\label{tab:pacs_taskclassification}}
\begin{tabular}{ccccc}
\hline
& \multicolumn{4}{c}{\textbf{Left-out domain}} \\ \cline{2-5}
& \textbf{Art painting} & \textbf{Cartoon} & \textbf{Photo} & \textbf{Sketch} \\ \hline
Conditional & 79.65$\pm$1.30 & 76.71$\pm$0.42 & 66.55$\pm$1.30 & 73.71$\pm$1.07 \\
Self-modulated & 93.05$\pm$0.32 & 92.14$\pm$0.96 & 90.87$\pm$0.61 & 91.72$\pm$0.74 \\ \hline
Unconditional & 94.67$\pm$0.67 & 94.47$\pm$0.67 & 94.90$\pm$0.82 & 95.78$\pm$0.51 \\ \hline
\end{tabular}
\end{table}
Such observations support the claim that domain-conditional modeling is indeed being enforced by the proposed architecture, and, most notably, this is the case even if domain supervision is dropped. For the unconditional case, it was recently pointed out that simple ERM makes for strong baselines in the domain generalization setting \citep{gulrajani2020search}. We found that representations learned by this type of model, in addition to being class-dependent, as expected given that those features are obtained prior to the output layer, are also domain-dependent in that domains can be predicted from the features with high accuracy by linear classifiers. This suggests that domain-dependent high-level features are accounted for in this case, an observation that serves both to explain why ERM somewhat works out-of-domain and to motivate domain-conditional approaches, further considering that including explicit conditioning mechanisms in the architecture improved performance relative to unconditional models.
\section{Conclusion}
\label{sec:conclusion}
In the following, we address the questions posed in section \ref{sec:eval_description} and additionally discuss further conclusions drawn from the reported evaluation.
\paragraph{Which approach is finally recommended? Domain-invariant or domain-conditional models?}
We found both approaches to work similarly well in terms of final out-of-domain performance. We however argue that the domain-conditional approach yields several practical advantages: (\textbf{i})-overall simplicity, in that no adversarial training is required and a standard maximum likelihood estimation scheme is employed; (\textbf{ii})-fewer hyperparameters need to be tuned in the domain-conditional case; (\textbf{iii})-there is no reliance on domain labels, since the self-modulated case generates domain-dependent representations with no domain supervision.
\paragraph{Which stopping criterion should be used for domain generalization?}
In practice, one might not have any access to data sources over which models are expected to work well, and thus deciding when to stop training and which version of models to use at testing time becomes tricky. In our evaluation, we did not observe significant differences in using either in-domain loss or accuracy as a stopping criterion. However, we found that the scale of the data in terms of both domain diversity and sample size helps in reducing the gap between best in-domain and out-of-domain selected models. A practical recommendation is then to ensure datasets are created including diverse sets of data sources.
\paragraph{Are domain labels necessary for achieving out-of-distribution generalization?}
We found that dropping domain supervision does not affect performance as much as originally expected. This is because the learned conditioning representations $z$ are still domain-dependent (in addition to being strongly class-dependent) even if such a property is not enforced via supervision, so the conditioning mechanism is not significantly affected. We thus claim that the main requirement for achieving out-of-distribution generalization under the domain-conditional framework, besides having explicit conditioning mechanisms built into the architecture, is to have diverse enough training data in the sense that distinct data sources are available. In summary, if no domain-related inference is required for the application of interest, domain supervision can be dropped without significantly affecting performance.
\paragraph{Why does ERM achieve a relatively high accuracy in unseen data sources?}
We found evidence showing that domain-dependent representations are learned by ERM, in the sense that domains can be easily inferred from the representations it yields. We then argue that this behaviour suggests that a modeling approach similar to the one we propose herein naturally occurs when performing ERM, given a model family with enough capacity. While this is an interesting feature of ERM, there is only so much one can do without explicitly enforcing the conditioning behaviour. For instance, we observed that increasing model size did not result in better performance, whereas introducing explicit conditioning mechanisms in the architecture does enable improvements in out-of-domain performance.
\paragraph{What are some open questions and directions for domain generalization research under the domain-conditional setting?}
Computing domain representations $z$ in the proposed setting is relatively expensive in that a separate model is used for that purpose. Studying approaches alleviating that cost by either reducing the size of $M_{domain}$ or dropping it altogether constitutes a natural extension of the method discussed herein. In addition to that, extending the conditional setting to cases where some information is available regarding possible test data sources is a relevant direction given its practical applicability. This could be achieved by either leveraging an unlabeled data sample from a particular target distribution, or using available metadata to infer, for instance, which training domain is closest to a target of interest, and focusing on that particular domain at training time.
|
2,869,038,154,125 | arxiv | \section{Introduction}
\label{s1}
The quiet Sun covers most of the solar surface, in particular at activity minimum, but
also plays an important role even during the active phase of the solar cycle. The magnetic
field in the quiet Sun is composed of the network \citep{1967SoPh....1..171S},
internetwork \citep[IN,][]{1971IAUS...43...51L, 1975BAAS....7..346L}, and the ephemeral
regions \citep{1973SoPh...32..389H}. For an overview of the small-scale magnetic
features, see \citet{1993SSRv...63....1S, 2009SSRv..144..275D, 2014masu.book.....P,
2014A&ARv..22...78W}.
The IN features are observed within the supergranular cells and carry hecto-Gauss fields
\citep[][]{1996A&A...310L..33S, 2003A&A...408.1115K, 2008A&A...477..953M} although
kilo-Gauss fields have also been observed in the IN \citep{2010ApJ...723L.164L,
Lagg16}. They evolve as unipolar and bipolar features with typical lifetimes of less than
10 minutes \citep{2010SoPh..267...63Z, 2013apj...774..127l,lsa}, i.e., they continuously
bring new flux to the solar surface, either flux that has been either freshly generated,
or recycled. They carry fluxes $\le 10^{18}$ Mx, with the lower limit on the smallest
flux decreasing with the increasing spatial resolution and polarimetric sensitivity of
the observing instruments, although the identification technique also plays an important
role.
Ephemeral regions are bipolar magnetic features appearing within the supergranular cells
carrying fluxes $\approx 10^{19}$ Mx \citep{2001ApJ...548..497C, 2003ApJ...584.1107H} and
are much longer-lived compared to the IN features, with lifetimes of 3 -- 4.4 hours
\citep{2000RSPTA.358..657T, 2001ApJ...555..448H}. The ephemeral regions also bring new
magnetic flux to the solar surface.
The network is more stable, with typical lifetimes of its structure of a few
hours
to a day, although the individual kG magnetic elements within the network live for a much
shorter time, as the entire flux within the network is exchanged within a period of 8--24
hr \citep{2003ApJ...584.1107H, 2014apj...797...49g}. The flux in the network is fed by
ephemeral regions \citep{0004-637x-487-1-424, 2001ApJ...555..448H} and IN features
\citep[][]{2014apj...797...49g}. The network features are found along the supergranular
boundaries and carry fields of kG strength with a typical flux of $10^{18}$ Mx
\citep{1995soph..160..277w}.
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{f1}
\caption{Two sample magnetograms recorded at $t$ = 00:47 UT and 00:58 UT on 2009 June 9
with the \textsc{Sunrise}/IMaX instrument during its first science flight.}
\label{magnetogram}
\end{figure*}
The magnetic flux is produced by a dynamo, the location of which is currently the subject
of debate, as is whether there is only a single dynamo acting in the Sun
\citep[e.g.,][]{2012A&A...547A..93S} or whether there is a small-scale dynamo acting in
addition to a global dynamo \citep{1993A&A...274..543P, 1999ApJ...515L..39C,
2001A&G....42c..18C, 2007A&A...465L..43V, 2008A&A...481L...5S, 2010A&A...513A...1D,
2013A&A...555A..33B, 2015ApJ...803...42H, 2016ApJ...816...28K}. In addition, it is unclear
if all the magnetic flux appearing on the Sun is actually new flux produced by a dynamo,
or possibly recycled flux transported under the surface to a new location, where it
appears again \citep[e.g.,][]{2001ASPC..236..363P}. This may be particularly important at
the smallest scales.
An important parameter constraining the production of magnetic flux is the amount of
magnetic flux appearing at the solar surface. In particular, the emergence of magnetic
flux at very small scales in the quiet Sun provides a probe for a possible small-scale
dynamo acting at or not very far below the solar surface. The deep minimum between solar
cycles 23 and 24 offered a particularly good chance to study such flux emergence, as the
long absence of almost any activity would suggest that most of the emerging flux is newly
produced one and is not flux transported from decaying active regions to the quiet Sun
(although the recycling of some flux from ephemeral regions cannot be ruled out).
The IN quiet Sun displays by far the largest magnetic flux emergence rate (FER). Already,
\citet{1987SoPh..110..101Z} pointed out that two orders of magnitude more flux appears in
ephemeral regions than in active regions, while the FER in the IN is another two orders
of
magnitude larger. This result is supported by more recent studies
\citep[e.g.,][]{2002ApJ...565.1323S, 2009SSRv..144..275D, 2009ApJ...698...75P,
2011SoPh..269...13T}. Given the huge emergence rate of the magnetic flux in the IN, it
is of prime importance to measure the amount of flux that is brought to the surface
by these features.
The current estimates of the FER in the IN vary over a wide range, which include:
$10^{24}\rm{\,Mx\, day^{-1}}$ \citep[][]{1987SoPh..110..101Z}, $3.7\times
10^{24}\rm{\,Mx\,day^{-1}}$
\citep[$120\rm{\,Mx\,cm^{-2}\,day^{-1}}$,][]{2016ApJ...820...35G} and
$3.8\times10^{26}\rm{\,Mx\,day^{-1}}$ \citep[][]{2013SoPh..283..273Z}.
By considering all the magnetic features (small-scale features and active regions),
\citet{2011SoPh..269...13T} measure a global FER of $3\times 10^{25}\rm{\,Mx\,day^{-1}}$
($450 \rm{\,Mx \,cm^{-2} \,day^{-1}}$), while \citet{th-thesis} measures $3.9 \times
10^{24} \rm{\,Mx \,day^{-1}}$ ($64\rm{\,\,Mx\,cm^{-2}\,day^{-1}}$), whereby almost all of
this flux emerged in the form of small IN magnetic features. The FER depends on the
observations and the method used to measure it. A detailed comparison of the FERs from
different works is presented in Section~\ref{s6}.
To estimate the FER, \citet{1987SoPh..110..101Z} and \citet{2011SoPh..269...13T}
considered features with fluxes $\ge 10^{16}$ Mx, while \citet{2013SoPh..283..273Z} and
\citet{2016ApJ...820...35G} included features with fluxes as low as $6\times 10^{15}$
Mx and $6.5\times10^{15}$ Mx (M. Go\v{s}i\'{c}, priv. comm.), respectively. However, with
the launch of the balloon-borne \textsc{Sunrise}{} observatory in 2009 \citep{2010ApJ...723L.127S,
2011SoPh..268....1B, 2011SoPh..268..103B, 2011SoPh..268...35G} carrying the Imaging
Magnetograph eXperiment \citep[IMaX,][]{2011SoPh..268...57M}, it has now become possible
to estimate the FER including the contribution of IN features with fluxes as low as
$9\times10^{14}$ Mx \citep[][hereafter referred to as LSA17]{lsa}. The IMaX instrument
has
provided unprecedented high-resolution magnetograms of the quiet Sun observed at 5250\,
\AA. The high resolution is the main reason for the lower limiting flux. A detailed
statistical analysis of the IN features observed in Stokes $V$ recorded by \textsc{Sunrise}/IMaX
is carried out in LSA17. In the present paper we estimate the FER in the IN region using
the same data.
In Section~\ref{s2} we briefly describe the employed \textsc{Sunrise}{} data. The IN features
bringing flux to the solar surface that are considered in the estimation of FER are
outlined in Section~\ref{s3}. The FER from \textsc{Sunrise}{} are presented, discussed and
compared with previously obtained results in Section~\ref{s4} while our conclusions
are presented in Section~\ref{s8}.
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{f2}
\caption{Panels a--d: Schematic representation of the different paths by
which magnetic flux is brought to the solar surface and of its subsequent evolution. The
quantities $f_i$ and $f_m$ are the instantaneous and maximum fluxes of a feature,
respectively, where instantaneous means just after it appears. {Panel e}: Typical
variation in the flux of a feature, born by {unipolar and bipolar} appearances
(top) and born by splitting/merging (bottom), over time. The flux gained by them after
birth is $\Delta f=f_m - f_i$. The features born by splitting carry fluxes $f_{i1,2}$ at
birth and reach $f_{m1,2}$ in the course of their lifetime, gaining flux $\Delta f_{1,2}$
after birth. $f_{i1}+f_{i2}$ is equal to the flux of the parent feature at the time of its
splitting. The feature born by merging carries flux equal to the sum of the fluxes of the
parent features $f_{i1}+f_{i2}$. The blue line indicates the merging of the two features.}
\label{flux}
\end{figure*}
\section{Data}
\label{s2}
The data used here were obtained during the first science flight of \textsc{Sunrise}{} described
by \citet{2010ApJ...723L.127S}. We consider 42 maps of the line-of-sight (LOS) magnetic
field, $B_{\rm LOS}$, obtained from sets of images in the four Stokes parameters
recorded with the IMaX instrument between 00:36 and 00:59 UT on 2009 June 9 at the solar
disk center, with a cadence of 33\,s, {a spatial resolution of
$0.^{\prime\prime}15-0.^{\prime\prime}18$ (plate scale is $0.^{\prime\prime}054$ per
pixel), and an effective field of view (FOV) of $43^{\prime\prime} \times
43^{\prime\prime}$ after phase diversity reconstruction}. {The data were
reconstructed with a point spread function determined by in-flight phase diversity
measurements to correct for the low-order aberrations of the telescope \citep[defocus,
coma, astigmatism, etc., see][]{2011SoPh..268...57M}. The instrumental noise of the
reconstructed data was $3\times10^{-3}$ in units of continuum intensity. For
identification of the features, spectral averaging was done which further reduced the
noise to $\sigma=1.5\times10^{-3}$. All features with signals above a $2\sigma$
threshold, which corresponds to 12\,G, were used \citep[][]{2011SoPh..268...57M}.}
To measure the flux, we use $B_{\rm LOS}$ determined with the center-of-gravity (COG)
method
\citep[][LSA17]{1979A&A....74....1R, 2010A&A...518A...2O}. {The inclination of the
IN fields has been under debate, with several studies dedicated to measuring their
angular distribution. By analysing the data from \textit{Hinode},
\citet{2007ApJ...670L..61O, 2008ApJ...672.1237L} concluded the IN fields to be
predominantly horizontal. However, using the same dataset, \citet{2011ApJ...735...74I}
found some of the IN fields to be vertical. \citet{2014A&A...569A.105J} arrived at similar
conclusions (vertical inclination) by analysing the magnetic bright points observed from
the first flight of \textsc{Sunrise}{}. Variations in the inclination of the IN fields with
heliocentric angle ($\mu$) have been reported by \citet{2012ApJ...751....2O,
2013A&A...550A..98B, 2013A&A...555A.132S}. Isotropic and quasi-isotropic distribution of
the IN field inclinations is favoured by \citet{2008A&A...479..229M}, using the Fe\,{\sc
i} $1.56\,\mu$m infrared lines, and by \citet{2009ApJ...701.1032A,2014A&A...572A..98A}
using \textit{Hinode} data. More recently, \citet{2016A&A...593A..93D} found the
distribution of IN field inclination to be quasi-isotropic by applying 2D inversions on
\textit{Hinode} data and comparing them with 3D magnetohydrodynamic simulations. For a
detailed review on this, see \citet{2015SSRv..tmp..113B}.}
{In the determination of the FER, we use $B_{\rm LOS}$ for consistency and for
easier comparison with earlier studies on the FER. Also, the determination of the exact
amount of flux carried in horizontal field features is non-trivial and requires estimates
of the vertical thickness of these features and the variation of their field strength with
height. In addition, if they are loop-like structures \citep[as is suggested by
local-dynamo simulations, e.g.,][]{2007A&A...465L..43V}, then there is the danger of
counting the flux multiple times if one or both of their footpoints happen to be resolved
by the \textsc{Sunrise} data. We avoid this by considering only the vertical component of the
magnetic field. It is likely that, by concentrating on Stokes $V$, we miss the flux carried by unresolved magnetic loops, but this problem affects all previous studies of the FER as well and should decrease as the spatial resolution of the observations is increased. Since the \textsc{Sunrise}{} I data analyzed here have the highest resolution, we expect them to capture more of the flux in the footpoints of the very small-scale loops that appear as horizontal fields in \textit{Hinode} and \textsc{Sunrise}{} data \citep{2010A&A...513A...1D}.}
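For illustration, a minimal numpy sketch of the COG retrieval of $B_{\rm LOS}$ from Stokes $I$ and $V$ profiles follows; it is based on the standard formulation of the method, with the wavelength grid, the sign convention, and the effective Land\'e factor $g_{\rm eff}=3$ of the Fe\,{\sc i} 5250.2\,\AA{} line taken as assumptions.
\begin{verbatim}
# Illustrative sketch of the center-of-gravity (COG) method: B_LOS follows
# from the separation of the centers of gravity of the two circular
# polarization profiles I+V and I-V. Wavelengths in Angstrom, B in Gauss.
import numpy as np

def b_los_cog(wav, stokes_i, stokes_v, i_cont, lambda0=5250.2, g_eff=3.0):
    dep_p = i_cont - (stokes_i + stokes_v)   # line depression in I+V
    dep_m = i_cont - (stokes_i - stokes_v)   # line depression in I-V
    lam_p = np.trapz(wav * dep_p, wav) / np.trapz(dep_p, wav)
    lam_m = np.trapz(wav * dep_m, wav) / np.trapz(dep_m, wav)
    # Zeeman splitting constant 4.67e-13 per (G Angstrom); the sign of the
    # result depends on the adopted Stokes V convention.
    return (lam_p - lam_m) / (2.0 * 4.67e-13 * g_eff * lambda0**2)
\end{verbatim}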
The small-scale features were identified and tracked using the feature tracking code
developed in LSA17. {For the sake of completeness we summarize the most
relevant results from LSA17 as follows. All features covering at least 5 pixels, with Stokes $V$ larger than $2\sigma$ in each pixel, were considered. To determine
the flux per feature, the $B_{\rm LOS}$ averaged over the feature, denoted as
\textlangle$B_{\rm LOS}$\textrangle{} was used.} \textlangle$B_{\rm LOS}$\textrangle{}
had values up to 200\,G, even when the maximum field strength in the core of the feature
reached kG values. {A total of 50,255 features of both polarities were identified.
The sizes of the features varied from 5 to 1,585 pixels, corresponding to an area range of
$\approx 8\times 10^{-3}-2.5$\,Mm$^2$. The tracked features had lifetimes ranging from
0.55 to 13.2 minutes.} The smallest detected flux of a feature was $9\times10^{14}$ Mx
and
the largest $2.5\times10^{18}$ Mx.
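As a schematic illustration of the identification step, a connected-component version is sketched below; the thresholds follow the text, while the pixel area (derived from the $0.^{\prime\prime}054$ plate scale, about $1.53\times10^{13}$\,cm$^2$ per pixel at disk center) and all names are our assumptions.
\begin{verbatim}
# Sketch of the feature identification: pixels above the 2-sigma (12 G)
# threshold are grouped into connected components, components smaller than
# 5 pixels are discarded, and the flux per feature is sum(|B_LOS|) times
# the pixel area.
import numpy as np
from scipy import ndimage

def identify_features(b_los, threshold=12.0, min_pixels=5,
                      pixel_area_cm2=1.53e13):
    fluxes = []
    for sign in (+1, -1):                  # treat each polarity separately
        labels, n = ndimage.label(sign * b_los > threshold)
        for k in range(1, n + 1):
            pix = labels == k
            if pix.sum() >= min_pixels:
                fluxes.append(np.abs(b_los[pix]).sum() * pixel_area_cm2)
    return fluxes                          # feature fluxes in Mx
\end{verbatim}
With these numbers, a 5-pixel feature at the 12\,G threshold carries $\approx 9\times10^{14}$ Mx, consistent with the smallest detected flux quoted above.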
At the time of the flight of \textsc{Sunrise}{} in 2009 the Sun was extremely quiet, with no signs
of activity on the solar disk. Two sample magnetograms at 00:47 UT and 00:58 UT are shown
in Figure~\ref{magnetogram}. Most of the features in these maps are part of the IN and in
this paper, we determine the rate at which they bring flux to the solar surface.
\section{Processes increasing magnetic flux at the solar surface}
\label{s3}
The different processes increasing the magnetic flux at the solar surface are
schematically represented in Figure~\ref{flux}a -- \ref{flux}d. In the figure, $f_i$
refers to the flux of the feature at its birth and $f_m$ is the maximum flux that a
feature attains over its lifetime. A typical evolution of the flux of a feature born by
{unipolar or bipolar} appearances, and by splitting/merging is shown in
Figure~\ref{flux}e (top and bottom, respectively). The gain in the flux of a feature
after its birth is $f_m-f_i$. Magnetic flux at the surface increases through the
following processes:
\begin{table*}
\centering
\caption{The instantaneous and maximum fluxes of the features,
{integrated over all features and time frames}, in different
processes measured in LSA17}
\begin{tabular}{|c|c|c|c|c|}
\hline
Process & Instantaneous flux & Maximum flux & Flux gain & Factor of increase\\
& ($f_i$ in Mx) & ($f_m$ in Mx) & ($\Delta f= f_m-f_i$ in Mx) & ($f_m/f_i$)\\
\hline
{Unipolar appearance} & $4.69\times10^{19}$ & $9.69\times10^{19}$ &
$4.99\times10^{19}$ &2.06\\
Splitting & $1.76\times10^{20}$& $2.12\times10^{20}$&$3.60\times10^{19}$&1.20\\
Merging & $1.64\times 10^{20}$& $1.85\times 10^{20}$&$2.20\times10^{19}$&1.31\\
{Bipolar appearance} & $3.85\times10^{18}$ & $9.53\times10^{18}$&
$5.67\times10^{18}$&2.47\\
\hline
\end{tabular}
\label{table1}
\end{table*}
\begin{enumerate}
\item{ {Unipolar appearance}}: The birth of an isolated feature with no spatial
overlap with any of the existing features in the current and/or previous time frame
(Figure~\ref{flux}a).
\item { {Bipolar appearance}}: Birth of bipolar features, with the two polarities
closely spaced, and either appearing simultaneously or separated by a couple of time
frames ( {referred to as} time symmetric and asymmetric emergence in LSA17;
see also Figure~\ref{flux}b).
\item { {Flux gained by features in the course of their
lifetime}}: The gain in the
flux of a feature in the course of its lifetime, i.e. the increase in flux between its
birth and the time it reaches its maximum flux, before dying in one way or another, either
by interacting with another feature, or by disappearing.
This gain can take place in features born in different ways, be it by growth, or
through the merging or splitting of pre-existing features (Figures~\ref{flux}c,
\ref{flux}d and \ref{flux}e).
\end{enumerate}
{Note that the bipolar appearance of magnetic flux is often referred to as
`emergence' in earlier papers including LSA17. However, the term `emergence' in FER
describes the appearance of new flux at the solar surface from all the three processes
described above. To avoid confusion, we refer to the emergence of bipolar features as
bipolar appearance.} {Of all the newly born features over the entire time series,
19,056 features were unique (for area ratio 10:1, see Section~\ref{s4}). Among them 48\%
(8728 features) were unipolar and 2\% (365 features) were part of bipolar appearances.
Features born by splitting constituted 38\% (6718 features), and 12\% (2226 features)
were born by merging. The remaining 1019 features correspond to those alive in the
first frame. A comparison of the rates of birth and death of the features by various
processes for different area ratio criteria is given in Table~2 of LSA17.}
In the FER estimations, the flux brought by the features born by
{unipolar and bipolar} appearances is the maximum flux that they attain
($f_m$) over their lifetime. In the case of features born by splitting or merging, the
flux gained after birth is taken as the flux brought by them to the surface. This gain is
the difference between their flux at birth $f_i$ and the maximum flux they attain $f_m$,
i.e. $f_m - f_i$.
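In code, this accounting rule reduces to a few lines; a sketch (the record layout is
hypothetical):
\begin{verbatim}
def emerged_flux(features):
    # Total flux (Mx) brought to the surface: f_m for unipolar/bipolar
    # appearances, f_m - f_i for features born by splitting or merging
    total = 0.0
    for f in features:  # each f: dict with 'birth', 'f_i', 'f_m'
        if f['birth'] in ('unipolar', 'bipolar'):
            total += f['f_m']
        elif f['birth'] in ('split', 'merge'):
            total += f['f_m'] - f['f_i']
    return total
\end{verbatim}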
\begin{figure}[h!]
\centering
\includegraphics[width=0.3\textwidth]{f3}
\caption{Schematic representation of multiple peaks in the flux of a feature occurring
in the course of its lifetime. In the FER estimations, we consider only the largest gain,
i.e., the flux increase during the first peak in this example.}
\label{fg}
\end{figure}
Our approach is conservative in the sense that if a feature reaches multiple peaks of flux
in the course of its lifetime, as in the example shown in Figure~\ref{fg}, then we
consider only
the largest one (the flux gained during the first peak in Figure~\ref{fg}), and neglect
increases in flux contributing to smaller peaks such as the second and third peaks in
Figure~\ref{fg}. {Multiple peaks in the flux of a feature (shown in
Figure~\ref{fg}) are rarely seen, as most features do not live long enough to display
them (see LSA17). }
{Changes in the flux of a feature in the course of its lifetime can cause
it to seemingly appear and disappear with time if its total flux is close to the
threshold set in the study (given by a signal level twice the noise in at least five
contiguous pixels). If it disappears and reappears again, then it will be counted twice.
This introduces uncertainties in the measurement of FER. Uncertainties are discussed in
Section~\ref{s7}.}
\section{Results and Discussions}
\label{s4}
\subsection{Flux emergence rate (FER)}
\label{s5}
In this work, we consider the results from the area ratio criterion 10:1 of LSA17. In
that paper, the authors devise area ratio criteria (10:1, 5:1, 3:1 and 2:1) to avoid
declaring a feature dead every time a tiny feature breaks off from it, or merges with it.
For example in a splitting event, the largest of the features formed by splitting
must have an area less than $n$ times the area of the second largest, under the $n:1$ area
ratio criterion. {We have verified that the choice of the area ratio criterion does
not drastically alter the estimated FER, with variations being less than 10\% for area
ratios varying between 10:1 and 2:1.}
The {instantaneous} and maximum fluxes of the features in different processes are
given in Tables~$1-5$ of LSA17. A summary is repeated in Table~\ref{table1} for
convenience {, where fluxes are given for features born by the four processes listed
in the first column. The instantaneous flux, in the second column, refers to the flux of
a feature at its birth ($f_i$ in Figure~\ref{flux}). In the third column is the maximum
flux of a feature during its lifetime ($f_m$ in Figure~\ref{flux}). The flux gain in the
fourth column is the difference between the third and second columns ($\Delta f=f_m-f_i$).
In the fifth column is the factor by which the flux increases from its birth to its peak
($f_m/f_i$). The fluxes given in this table are the sum total over all features in the
entire time series, for each process.}
To compute the FER, we add the fluxes from the
various processes described in the previous section. For features born by appearance
{(unipolar and bipolar), we take the maximum flux that they attain ($f_m$)} to be
the fresh flux emerging at the surface. For the features formed by merging or splitting
only the flux increase after the birth {($\Delta f$)} is considered. From the
first two processes alone, the total flux brought to the surface is $1.1\times10^{20}$ Mx
over an FOV of $43^{\prime\prime} \times 43^{\prime\prime}$ in 22.5 minutes. This gives
an
FER of $700 \rm{\, \, Mx\, cm^{-2}\,day^{-1}}$. Including the flux gained by split/merged
features increases the FER to $1100 \rm{\,\,Mx\,cm^{-2}\,day^{-1}}$. Figure~\ref{flux1}
shows the contribution from each process to the total FER. The isolated features appearing
on the solar surface contribute the largest share, nearly 60\%. Given that the emerging bipoles
contain only 2\% of the total observed flux (Table~5 in LSA17), they contribute only
about
5.7\% to the FER.
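These rates follow from straightforward unit conversion. A sketch, assuming
$1^{\prime\prime}\approx725$ km at disk centre (the exact scale adopted in the original
calculation may differ slightly):
\begin{verbatim}
ARCSEC_CM = 725e5                    # ~725 km per arcsec, in cm (assumption)
area = (43 * ARCSEC_CM) ** 2         # 43" x 43" FOV, ~9.7e18 cm^2
minutes = 22.5

def fer(flux_mx):                    # Mx cm^-2 day^-1
    return flux_mx / area / minutes * 24 * 60

print(fer(1.1e20))                   # appearances only: ~700
print(fer(1.1e20 + 5e19))            # plus split/merge gains: ~1100
\end{verbatim}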
However, the flux brought to the solar surface by features born by splitting or merging
after their birth is $5\times10^{19}$ Mx, which is quite significant and contributes
$\approx35\%$ to the FER. The contribution to solar surface flux by this process is
comparable to the flux brought to the surface by features born by {unipolar}
appearance ($9.7\times10^{19}$ Mx) and nearly an order of magnitude higher than that flux
from features born by {bipolar appearance} ($9.5\times10^{18}$ Mx).
Over their lifetimes, the features born by splitting and by merging grow to $\approx1.2$ times their
initial flux (i.e., $f_m = 1.2 \times f_i$). The flux gained by features born by
appearance, relative to their flux at birth, is higher ($\approx2$ times, i.e.,
$f_m = 2\times f_i$). This is because the initial magnetic flux of the features born by
appearance is quite low. The flux at birth of split or merged features is already quite
high because the parent features which undergo splitting or merging are at later stages in
their lives (see Figure~\ref{flux}e). This is also evident from the fact that the average
initial flux per feature of the features born by splitting or merging ($2.9\times10^{16}$
Mx and $7.4\times10^{16}$ Mx, respectively) is an order of magnitude higher than the
average initial flux per feature of the appeared {unipolar or bipolar} features
($5.4\times10^{15}$ Mx, see Table~2 of LSA17).
\begin{figure}
\centering
\includegraphics[scale=0.3]{f4}
\caption{The percentage of contribution to the flux emergence rate (FER) from different
processes bringing flux to the solar surface. In the case of {unipolar and bipolar}
appearances, the maximum flux of the feature is used to determine the FER. For the
features born by splitting/merging, the flux gained by them after birth is considered.
This gain is the difference $f_m - f_i$ in Figures~\ref{flux}c, \ref{flux}d and
\ref{flux}e.}
\label{flux1}
\end{figure}
The fact that the small-scale magnetic features are the dominant source of fresh flux in
the quiet photosphere is discussed in several publications \citep{2002ApJ...565.1323S,
2009SSRv..144..275D, 2009ApJ...698...75P, 2011SoPh..269...13T}. Our results extend these
earlier findings to lower flux per feature values. As shown in Figure~\ref{hist}, over the
range $10^{15}-10^{18}$ Mx, nearly $65\%$ of the detected features carry a flux $ \le
10^{16}$ Mx (left panel). They are also the dominant contributors to the FER (right
panel). In this figure, only the features that are born by {unipolar and bipolar}
appearances are considered. Below $2\times10^{15}$ Mx, we see a drop as we approach the
sensitivity limit of the instrument.
\subsubsection{Flux loss rate}
Flux is lost from the solar surface by disappearance, cancellation of opposite
polarity features, and decrease in the flux of the features in the course of their
evolution (i.e. the opposite process to the ``flux gain'' described earlier in
Section~\ref{s3}). As seen from Tables~3 and 4 of LSA17, the increase in flux at the
solar surface balances the loss of flux, as it obviously must if the total amount of flux
is to remain unchanged. To compute the flux loss rate, we take the maximum flux of the
features that die by cancellation and by disappearance to be the flux lost by them in the
course of their lifetime and during disappearance or cancellation. For the
features that die by splitting/merging we take the difference between the maximum flux of
the features and the flux at their death as a measure of the flux lost during their
lifetimes. By repeating the analyses for the 10:1 area ratio criterion, we find that the
flux is lost from the solar surface at a rate of 1150$\rm{\, \, Mx\, cm^{-2}\,day^{-1}}$
which agrees with the obtained FER to within 4.5\%. This agreement serves as a
consistency check of the FER value that we find.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{f5}
\caption{{Left panel:} histogram of the number of features born by
{unipolar and bipolar} appearances, carrying fluxes in the range $10^{15}-10^{18}$
Mx. {Right panel:} the flux emergence rate from the features born by
{unipolar and bipolar} appearances as a function of their flux.}
\label{hist}
\end{figure*}
\subsection{Uncertainties}
\label{s7}
{Although most of the uncertainties and ambiguities that arise during feature
tracking have been carefully taken care of, as discussed in LSA17, some additional
ones which can affect the estimated FER are addressed below.}
{In our computation of the FER, the features born before the time series began and
the features still alive at the end are not considered. According to LSA17, the first and
the last frames of the time series had 1019 and 1277 features, respectively. To estimate
their contribution, we assume that the features still living at the end have a similar
lifetime, size, flux distribution and formation mechanism as the total number of features
studied. We attribute the appropriate average flux at birth and the average flux gain
for
features born by splitting, merging, unipolar and bipolar appearance. After including
these additional fluxes, we get an FER of $\approx 1150 \rm{\, \,
Mx\,cm^{-2}\,day^{-1}}$,
corresponding to a $4-5\%$ increase. With this method, we are associating the features
with more flux gain than they might actually contribute (as many of them are likely to reach
their maximum flux only after the end of the time series). This will be balanced out by
not considering the features that are already alive at the beginning (also, it is
impossible to determine the birth mechanism of these features).}
{Furthermore, in the analysis of LSA17, the features touching the spatial
boundaries were not counted. An estimate of their contribution, in ways similar to the
above, leads to a further increase of the FER by $5-6\%$. Thus combining the features in
the first and last frames and the features touching spatial boundaries together increases
the FER by $\approx 10\%$. }
Meanwhile, as discussed in Section~\ref{s3}, in the case of flux gained after birth by
features born from splitting or merging, we consider only the gain to reach the maximum
flux in the feature and not the smaller gains required to reach secondary maximum of flux
in the feature, if any (see Figure~\ref{fg}). {These instances are quite rare.
To estimate their contribution, we consider all features living for at least four minutes
(eight time steps) so as to distinguish changes in flux from noise fluctuations. They
constitute a small fraction of $\approx 4\%$. If all these features are assumed to show
two maxima of equal strength, then they increase the FER by $\approx
1.5\%$. This is a generous estimate and both these assumptions are unlikely to be met.
However this is balanced out by not considering the features that have more than two
maxima. Thus the increase in FER is quite minor.}
Additionally, some of the features identified in a given time frame could disappear, i.e.
drop below the noise level, for the next couple of frames, only to reappear after that.
{This is unlikely to be caused by thermal or mechanical changes in the
\textsc{Sunrise}{} observatory, which flew in a highly stable environment at float altitude, with
active thermal control of critical elements in the IMaX instrument. As mentioned in
Section~\ref{s3}, the appearance and disappearance of features could also occur due to
the
applied threshold on the signal levels. In our analyses, the reappeared features are
treated as newly appeared. This leads to a higher estimation of the FER.
\citet{2016ApJ...820...35G} have estimated that accounting for reappeared features
decreases the FER by nearly $10\%$. If we assume the same amount of decrease in the FER
from the reappeared features in our dataset, then we finally obtain an FER of 1100$
\rm{\,
\, Mx\, cm^{-2}\,day^{-1}}$.}
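Chaining the corrections discussed in this subsection reproduces the final value (a
bookkeeping check only):
\begin{verbatim}
fer = 1100.0        # baseline, Mx cm^-2 day^-1
fer *= 1.10         # +~10%: first/last frames plus boundary features
fer *= 0.90         # -~10%: reappeared features counted twice
print(round(fer))   # ~1090, i.e. back to ~1100
\end{verbatim}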
\subsection{Comparison with previous studies}
\label{s6}
Below, we compare our results from \textsc{Sunrise}{} data with those from the \textit{Hinode}
observations analysed in three recent publications. Although all these papers use
observations from the same instrument, they reach very different estimates of FERs. The
important distinguishing factor between them is the method that is used to identify the
magnetic features and to calculate the FER. For comparison, we first summarize the main
result that we have obtained here. We find that in the quiet Sun (composed dominantly of
the IN) the FER is $1100 \rm{\,Mx\,cm^{-2}\,day^{-1}}$. This corresponds
to $6.6 \times 10^{25} \rm{\,Mx\,day^{-1}}$ under the assumption that the whole Sun is as
quiet as the {very tranquil} \textsc{Sunrise}/IMaX FOV.
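The whole-Sun number follows by scaling with the solar surface area; the small
difference from the quoted value reflects rounding:
\begin{verbatim}
import math

R_SUN_CM = 6.96e10
area_sun = 4.0 * math.pi * R_SUN_CM ** 2   # ~6.1e22 cm^2
print(1100.0 * area_sun)                   # ~6.7e25 Mx/day
\end{verbatim}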
According to \cite{2016ApJ...820...35G}, the flux appearance or emergence rate in
the IN region is $120 \rm{\,\,Mx\,cm^{-2}\,day^{-1}}$, which corresponds to $3.7
\times 10^{24} \rm{\,\,Mx\,day^{-1}}$ over the whole surface and the contribution from
the
IN is assumed to be $\approx50\%$. The authors track individual features and measure
their
fluxes, which is similar to the method used in LSA17. Their estimate is an order of
magnitude lower than the FER obtained in the present paper. This difference can be
explained by the higher spatial resolution of \textsc{Sunrise}{} compared to \textit{Hinode}. The
isolated magnetic feature with the smallest flux detected in \textsc{Sunrise}/IMaX data is
$9\times10^{14}$ Mx {(see LSA17)}, which is nearly an order of magnitude smaller
than the limit of $6.5\times10^{15}$ Mx (M. Go\v{s}i\'{c}, priv. comm.), underlying the
analysis of \citet{2016ApJ...820...35G}. Additionally, the IMaX data are recorded with
33\,s cadence, while the two data sets analysed by the above authors have cadences of 60
and 90\,s each. A higher cadence helps in better tracking of the evolution of features
and
their fluxes. {Also, a significant number of the very short-lived features that we
find may have been missed by \citet{2016ApJ...820...35G}.}
\citet{2011SoPh..269...13T}, also using \textit{Hinode} observations, estimate the FER
by fitting a power law to the distribution of frequency of emergence ($\rm{Mx^{-1}
\,cm^{-2} \,day^{-1}}$) over a wide range of fluxes ($10^{16}-10^{22}$ Mx, which covers
both small-scale features and active regions). It is shown that a single power
law index of $-2.7$ can fit the entire range. Depending on the different emergence
detection methods used and described by these authors, such as Bipole Comparison (BC),
Tracked Bipolar (TB) and Tracked Cluster (TC), the authors find a wide range of FERs from
32 to $470 \rm{\,Mx \,cm^{-2}\,day^{-1}}$ which correspond to 2.0 to $28.7 \times 10^{24}
\rm{\,Mx \,day^{-1}}$ over the whole solar surface \citep[Table 2
of][]{2011SoPh..269...13T,th-thesis}. To match their results from \textit{Hinode} with
other studies, the authors choose an FER of $450\rm{\,\,Mx\,cm^{-2}\,day^{-1}}$, from the
higher end of the range (C. Parnell, priv. comm.). This is nearly four times higher than
the value quoted in \cite{2016ApJ...820...35G}, who also used the \textit{Hinode}
observations and a smaller minimum flux per feature, so that they should in principle
have caught more emerging features. However \citet{th-thesis}, using a power law
distribution similar to \citet{2011SoPh..269...13T} and a slightly different index of
$-2.5$, estimates an FER of $64 \rm{\,Mx \,cm^{-2}\,day^{-1}}$. {A possible reason
for this difference, as briefly discussed in both these studies, could be the different
feature tracking and identification methods used. In \citet{2011SoPh..269...13T}, all the
features identified by BC, TB and TC methods are considered in determining the FER.
According to the authors, the BC method counts the same feature multiple times and
over-estimates the rate of flux emergence. However in \citet{th-thesis}, only the
features
tracked by TB method are used. The large differences in the FERs from the three detection
methods quoted in Table~2 of \citet{2011SoPh..269...13T}, support this line of
reasoning.
The FER in \citet{th-thesis} is roughly half that found by \citet{2016ApJ...820...35G}
and hence is at least in qualitative agreement.} The FER estimated by us is 2.5 times
higher than {the largest value obtained by} \citet{2011SoPh..269...13T} and 17
times higher than that of \citet{th-thesis}.
Another recent estimate of the FER is by \citet{2013SoPh..283..273Z}. Using
\textit{Hinode} observations, they estimate that the IN fields contribute up to
$3.8\times10^{26} \rm{\,Mx\,day^{-1}}$ to the solar surface. This is an order of
magnitude higher than the global FER of $3 \times 10^{25}\rm{\,Mx\,day^{-1}}$ published
by \citet{2011SoPh..269...13T} and is two orders of magnitude higher than the
$3.7\times10^{24}\rm{\,Mx\,day^{-1}}$ obtained by \citet{2016ApJ...820...35G}. In
\citet{2013SoPh..283..273Z}, it is assumed that every three minutes, the IN features
replenish the flux at the solar surface with an average flux density of
12.4\,G (${\rm Mx\,cm^{-2}}$). Here, three minutes is taken as the average lifetime of
the IN features \citep{2010SoPh..267...63Z}. {Their FER is nearly six times higher
than our estimate, although the lowest flux per feature to which \textit{Hinode}/SOT is
sensitive is significantly larger than for \textsc{Sunrise}/IMaX (due to the lower spatial
resolution of
the former). To understand this difference, we applied the method of
\citet{2013SoPh..283..273Z} to the \textsc{Sunrise}/IMaX observations. From the entire time
series, the total sum of the flux in all features with flux $> 9\times10^{14}$ Mx is
$1.1\times10^{21}$\,Mx over an area of $3.9\times10^{20}$\,cm$^{2}$. This gives us an
average flux density of 2.8\,G, which is 4.5 times smaller than 12.4\,G of
\citet{2013SoPh..283..273Z}. If the IN features are assumed to have an average lifetime
of three minutes, similar to \citet{2013SoPh..283..273Z}, then FER over the whole solar
surface is $8.2\times10^{25}$\,Mx\,day$^{-1}$. This is 1.2 times higher than our original
estimate from the feature tracking method. If instead, we take the average lifetime of
the features in our dataset from first appearance to final disappearance at the surface
of
$\approx1.8$ minutes, we get an FER of $1.38\times10^{26}$\,Mx\,day$^{-1}$, nearly 1.9
times higher than our original estimate and still 2.8 times smaller than
\citet{2013SoPh..283..273Z}. This is longer than the 1.1 minutes quoted in LSA17, which
includes death of a feature by splitting or merging (see LSA17), i.e. processes that do
not remove flux from the solar surface.}
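The replenishment-style estimate amounts to (mean flux density)/(lifetime), scaled to
the full surface; a sketch reproducing the two values above (to within rounding):
\begin{verbatim}
import math

AREA_SUN = 4.0 * math.pi * (6.96e10) ** 2          # cm^2

def fer_whole_sun(mean_flux_density_G, lifetime_min):
    # Mean flux density (G = Mx/cm^2) replenished once per lifetime
    return mean_flux_density_G * (24 * 60 / lifetime_min) * AREA_SUN

print(fer_whole_sun(2.8, 3.0))   # ~8.2e25 Mx/day
print(fer_whole_sun(2.8, 1.8))   # ~1.4e26 Mx/day
\end{verbatim}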
{To be sure that the problem does not lie in the COG technique employed here, we
also estimated the average flux density by considering the $B_{\rm LOS}$ from the
recently
available inversions of the \textsc{Sunrise}{} data \citep{fatima}. The $B_{\rm LOS}$ values
returned by the inversions differ from those given by the COG technique by about 5\% on
average (individual pixels show much larger differences, of course), so that this cannot
explain the difference to the value adopted by \citet{2013SoPh..283..273Z}. If all the
pixels, including noise, are considered then the average flux density is 10.7\,G. This is
an absolute upper limit of the average flux density as a large part of it is due to noise
and it is still lower than the IN signal of 12.4\,G, estimated by
\citet{2013SoPh..283..273Z}.}
{Thus the high value of FER from \citet{2013SoPh..283..273Z} is at least partly
due to their possibly too high value of average flux density. The observations analysed
by these authors clearly show network and enhanced network features. If some of these
are misidentified, then this would result in a higher average flux density. If this is
indeed the case, then the estimate of the lifetime of 3 minutes may also be too short
\citep[the technique of][neglects any possible correlation between magnetic flux and
lifetime of a feature]{2013SoPh..283..273Z}. Although the amount of flux in IN
fields is not expected to change significantly with time or place
\citep[see][]{2013A&A...555A..33B}, this is not true for the amount of
flux in the network, which changes significantly. For example another time series taken
by \textsc{Sunrise}{} during its first flight, having slightly more network in the FOV, is found
to
have an average $B_{\rm LOS}$ of around 16\,G (including noise), which is higher than the
12.4\,G used by \citet{2013SoPh..283..273Z}. However, this is just a qualitative
assessment and the very large FER found by \citet{2013SoPh..283..273Z} needs to be probed
quantitatively in a future study.}
\section{Conclusions}
\label{s8}
In this paper, we have estimated the FER in the quiet Sun from the IN features
using the observations from \textsc{Sunrise}/IMaX recorded during its first science flight in
2009. We have included the contribution from features with fluxes in the range
$9\times10^{14} - 2.5\times10^{18}$ Mx, whose evolution was followed directly. By
accounting for the three important processes that bring flux to the solar surface:
{unipolar and bipolar} appearances, and flux gained after birth by features born by
splitting or merging over their lifetime, we estimate an FER of $1100
\rm{\,Mx\,cm^{-2}\,day^{-1}}$. The third process is found to contribute significantly to
the FER. The smaller features with fluxes $\le10^{16}$ Mx bring most of the flux to the
surface. Since our studies include fluxes nearly an order of magnitude smaller than the
smallest flux measured from the \textit{Hinode} data, our FER is also an order of
magnitude higher when compared to studies using a similar technique
\citep[i.e.,][]{2016ApJ...820...35G}. {We also compare with other estimates of the
FER in the literature. \citet{2011SoPh..269...13T} obtained a range of values. Those near
the lower end of the range \citep[also quoted by ][]{th-thesis}, which are possibly the
more reliable ones, are roughly consistent with the results of
\citet{2016ApJ...820...35G}. The high FER of $3.8\times10^{26}\,{\rm Mx\, day^{-1}}$
found by \citet{2013SoPh..283..273Z} is, however, difficult to reconcile with any other
study. It is likely so high partly due to the excessively large $B_{\rm LOS}$ of
IN fields of 12.4\,G used by these authors, which is more than a factor of 4
larger than the average $B_{\rm LOS}$ of 2.8\,G that we find. Even the absolute
upper limit of the spatially averaged $B_{\rm LOS}$ in our data (including noise) is
below the value used by \citet{2013SoPh..283..273Z}. We therefore
expect that they have overestimated the FER.}
{There is clearly a need for further investigation, not only to quantify the
reasons for the different results obtained by different techniques, but also to address
several remaining open questions. What is the cause of the increase and decrease of the flux of a
feature during its lifetime? Is this due to interaction with ``hidden'' flux? Is this
hidden flux not visible because it is weak and thus below the noise threshold, or because
it is structured at very small scales, i.e. it is below the spatial resolution? How
strongly does the ``hidden'' or missed flux change with changing spatial resolution? The
most promising approach to answering these and related questions is to study the flux
evolution in a magnetohydrodynamic simulation that includes a working small-scale
turbulent dynamo.}
\begin{acknowledgements}
{We thank F. Kahil for helping with the comparison of COG and inversion results.}
H.N.S. and {L.S.A.} acknowledge the financial support from the Alexander von
Humboldt foundation. The German contribution to \textsc{Sunrise}{} and its reflight was funded by
the Max Planck Foundation, the Strategic Innovations Fund of the President of the Max
Planck Society (MPG), DLR, and private donations by supporting members of the Max Planck
Society, which is gratefully acknowledged. The Spanish contribution was funded by the
Ministerio de Econom\'ia y Competitividad under Projects ESP2013-47349-C6 and
ESP2014-56169-C6, partially using European FEDER funds. The HAO contribution was partly
funded through NASA grant number NNX13AE95G. This work was partly supported by the BK21
plus program through the National Research Foundation (NRF) funded by the Ministry of
Education of Korea.
\end{acknowledgements}
|
2,869,038,154,126 | arxiv | \section{Introduction}
There are a variety of astrophysical systems which experience mass
loss on a time-scale much shorter than the dynamical time of the
system, leading to a significant shift in the dynamics.
One example
of this phenomenon, highlighted in the recent literature, is the
merger of a binary black hole (BH): the burst of gravitational waves
during the last stage of the merger typically carries away a few
percent of the binary's rest-mass. This basic prediction of general
relativity (GR) has been confirmed by LIGO observations of
stellar-mass BH mergers, which show that a significant fraction of the
BHs' total mass is lost \citep{LIGO_BBH,GW170104}.
Several studies have examined the impact that this mass-loss would
imprint on a circumbinary disk, both in the context of super-massive
\citep{Schnittman08,O'Neill09,Megevand09,
Corrales10,Rossi10,Rosotti12} and stellar-mass
\citep{deMink17} BHs.
The key result of these studies is that a
sharply contoured density profile quickly emerges, with concentric
rings of large under- and over-densities, including shocks. The
origin of this morphology is simple: the disk gas, which is initially
on circular orbits, instantaneously changes to eccentric orbits. Over
time, the orbits at different radii shift out of phase, and in the
particle limit, intersect and create caustics (see
\citealt{Lippai08} and \citealt{Shields08} for a similar effect from
BH recoil, demonstrated by test-particle orbits). The concentric
density spikes and shocks found in hydrodynamical simulations
correspond to these caustics \citep[e.g.][and the other references
above]{Corrales10}.
In a different context, and on much larger spatial scales, dwarf
galaxies are believed to experience a similar rapid mass loss, when
early periods of rapid star formation (and associated supernova
feedback) blow out a large fraction of the gas from the nucleus.
Crucially, this mass ejection also occurs on a time-scale shorter than
the dynamical time. \citet{Governato10} have shown that such rapid
supernova feedback can transfer energy to the surrounding dark matter
(DM). This model can be extended to repeated mass outflow and infall
events \citep{Pontzen12} to gradually move DM away from the center of
the galaxy and turn a cuspy profile into a core. These simple models
have been implemented in hydrodynamical simulations
\citep[e.g.][]{Governato12, Pontzen14,ElZant16}, which confirm this
basic result.
In the latter context, the focus has been on the overall
expansion of the DM core. However, in principle, if the outflow is dominated by a single large event rather than repeated energy bursts, this collisionless DM
core could develop shells of overdensities and caustics, analogous to
those in the circumbinary disks.
While self-gravity will reduce the effect, these systems are of particular
interest for indirect DM detection. As suggested, e.g., in
\citet{Lake90}, they are excellent candidates for seeing $\gamma$-rays
from DM annihilation, due to their abundance in the nearby universe,
their high mass-to-light ratio, and their lack of other $\gamma$-ray
sources. While a detection remains elusive, dwarf galaxies have been
used to put strong limits on the mass and interaction cross section of
DM particles \citep[e.g.][]{Abazajain12,Geringer15,Gaskins16}.
While the overall effect of rapid mass loss is a decrease in density
of the DM core, the presence of strong density spikes could
significantly boost the DM annihilation rate, even if these spikes
contain only a small fraction of the mass (note that the annihilation
rate is proportional to the square of the density). This would imply
a larger $\gamma$-ray flux, and strengthen the existing limits on DM
properties.
Motivated by the above, in this paper, we compute the density profiles
of spherical, collisionless systems, following an instantaneous
mass-loss at the center. Our models can include self-gravity, and directly
resolve the caustic structures. We emphasize that our results are
generic, and are applicable to any quasi-spherical collisionless
system on any scale. Our result is negative and completely
general: we find that the overall density decrease dominates over the
presence of caustics, even in the most idealized systems. As a result,
we conclude that mass-loss always decreases the net interaction
rate.\footnote{A similar result has been found in the context of
circumbinary BH disks, where several authors have computed the
Bremsstrahlung luminosity and found it to be lower than in the
pre-merger disk \citep{O'Neill09,Megevand09,Corrales10}; see further
discussion in \S~\ref{sec:conclude} below.}
This paper is organized as follows. We first present an analytic
derivation of the density profile due to mass loss in the idealized
case of a Keplerian potential (\S~\ref{Kepler}); we use these profiles
to show that there is an overall drop in the interaction rate. We
then show that the inclusion of more realistic physical effects
generally leads to less sharply contoured profiles, thus making the
Keplerian case the upper limit for the resulting interaction rates
(\S~\ref{Other}). Finally, we present general arguments about the
interaction rate in systems undergoing simple transformations of the
density profile, and argue that these likewise imply a generic drop in
the rate, as for a Keplerian potential (\S~\ref{Boost}). We end by
summarizing the implications of this work and offering our conclusions
(\S~\ref{sec:conclude}).
\section{Circular orbits in a Keplerian potential}
\label{Kepler}
We start with one of the simplest cases possible: an initially
circular orbit of a test particle around a point mass, which at some
point instantly loses a fraction of its mass. We choose this case not
just for its intuitive behaviour, but because it is relatively simple
to extend it to more realistic situations, and it provides an upper
limit on the interaction rate in response to rapid mass loss (as we
argue in \S~\ref{Other} below).
\subsection{Basic Keplerian orbit}
\label{BasicKepler}
Particles initially on circular orbits, if they remain bound after the
mass loss, will move on ellipses, so we start by recapitulating some
basic results for orbits in a Keplerian potential. We will utilize the
following results (derivations can be found in textbooks such as
\citealt{Binney08}).
In a Keplerian potential around a point with mass $M$ at some radius $r$,
\begin{equation}
\label{kepler_pot}
\Phi(r)=-\frac{GM}{r},
\end{equation}
the radius of a particle's orbit follows
\begin{equation}
\label{kepler_basic}
r(\phi)=\frac{a (1-e^2)}{1+e \cos(\phi - \phi_0)}
\end{equation}
where $\phi$ is the phase, $\phi_0$ the phase of pericenter, $a$ the
semi-major axis and $e$ the eccentricity. The particle's specific
angular momentum and energy are
\begin{equation}
\label{kepler_l}
l=r v_t=r^2 \dot{\phi} = \sqrt{G M a(1-e^2)}
\end{equation}
and
\begin{equation}
\label{kepler_E}
E=\frac{1}{2}(v_r^2 + v_t^2) + \Phi(r),
\end{equation}
both of which are conserved over an orbit. A less easily
visualised but useful parameterisation is in terms of the
eccentric anomaly $\eta$,
\begin{equation}
\label{eta}
\sqrt{1-e}\tan{\left( \frac{\phi-\phi_0}{2} \right)}=\sqrt{1+e}\tan{\left( \frac{\eta}{2} \right)},
\end{equation}
allows us to express the radius more simply as
\begin{equation}
\label{kepler_r}
r(\eta)=a(1-e \cos{\eta}).
\end{equation}
The expression for the time as a function of $\eta$ can be obtained by
integrating $\dot{\phi}$ from equation~\ref{kepler_l} and using $r$ in
the form of equation~\ref{kepler_r}. This yields,
\begin{equation}
\label{kepler_t}
t(\eta)-t_0=\sqrt{\frac{a^3}{GM}}(\eta - e \sin{\eta}),
\end{equation}
where $t_0$ is the time of the first pericenter passage (i.e. if the
particle is initially at some $\eta_0$,
$t_0=\sqrt{\frac{a^3}{GM}}[\eta_0 - e \sin{\eta_0}]$). An orbit has
been completed when $\eta=2\pi$, and hence the orbital period is
easily confirmed from equation~\ref{kepler_t} to be
\begin{equation}
T=2\pi\sqrt{\frac{a^3}{GM}}.
\end{equation}
\subsection{Response to mass loss}
\label{KeplerResponse}
\begin{figure}
\includegraphics[width=\columnwidth]{ellipse.pdf}
\caption{A particle is initially on a circular orbit with radius $r_0$
(grey dashed) around a central mass (large dark circle). At some
time there is an instant drop in the central mass; the position of
the particle at that time is shown as a light grey circle. The
subsequent elliptical orbit is shown as the black solid ellipse with
apocenter $r_{\rm max}$. The location of the particle at the current
time, at distance $r$ ($r_0<r<r_{\rm max}$), is marked by a black
circle. Two other particles on the same initial circular orbit but
at different initial azimuthal angles are shown (purple and blue
circles). Their current positions are shown in full colour with a
segment of their trajectory, while their initial positions are shown
in lighter colours. The dotted dark grey arc shows the common
current distance of all three particles along a circle with radius
$r>r_0$.}
\label{ellipse}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{backgroundCaustics.pdf}
\caption{A snapshot from a simple 2D interactive simulation of
instantaneous mass loss in a Keplerian potential (which can be found
here:
\url{http://user.astro.columbia.edu/~zpenoyre/causticsWeb.html}.)
Each dot is a particle, initially on a circular orbit and coloured
by initial radius. The background colour shows the number density of
particles in that circular shell, with lighter corresponding to
higher density. The grey circle shows the point mass, whose mass has
been dropped by 20\%, causing every particle to continue on a new
elliptical orbit. Where many orbits overlap, circular overdensities
develop and move outwards.}
\label{web}
\end{figure}
To obtain the evolution of a spherical system, we start with a single
particle on a circular orbit, initially at some radius
$r_0$. Figure~\ref{ellipse} shows an illustration and our notation.
In the circular case, $e=0$ and hence $a=r_0$. (The corresponding
solution for non-circular orbits is given in
Appendix~\ref{EllipseKepler}.) When the central mass instantly drops
from $M$ to $m<M$, the particle's orbit is instantly changed. The
particle is now less tightly bound, and has been given a boost in
energy (i.e. a less negative gravitational potential) and will
continue on an elliptical orbit. The angular momentum is unchanged, as
we have given it no tangential impulse, and the position and velocity
must be conserved over the instant of mass loss. An interactive
demonstration can be found at
\url{http://user.astro.columbia.edu/~zpenoyre/causticsWeb.html} (a
still image of which is shown in Figure~\ref{web}).
The new orbit must also be Keplerian, of the form in equation
\ref{kepler_r}. Let the new eccentricity, semi-major axis and phase be
$\epsilon$, $\alpha$ and $\psi$ respectively, and let the moment of
mass loss be $t=0$. Since the velocity at $t=0$ is purely tangential,
the particle must be at its periapsis, and hence $\psi_0$, $\eta_0$
and $t_0$ must all be equal to 0. Conserving angular momentum
throughout, and energy for $t>0$, we can find the properties of the new
orbit.
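Explicitly: with $v_t^2=GM/r_0$ for the initial circular orbit, the specific
energy just after the mass loss is
\begin{equation*}
E=\frac{1}{2}v_t^2-\frac{Gm}{r_0}=\frac{G(M-2m)}{2r_0},
\end{equation*}
and setting this equal to the Keplerian value $-Gm/(2\alpha)$ of the new
orbit gives $\alpha=r_0/\left(2-\tfrac{M}{m}\right)$.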
First let us define the dimensionless constant
\begin{equation}
\label{mu}
\mu=\dfrac{1}{2-\frac{M}{m}}.
\end{equation}
We have $r_{\rm min}=r_0$ and we find that the apoapsis is at
\begin{equation}
\label{r_max}
r_{\rm max}=\dfrac{r_0}{2\frac{m}{M} -1} = \frac{M}{m}\mu \ r_0,
\end{equation}
where $\alpha=\mu \ r_0$ and $\epsilon=1-\frac{1}{\mu}$. Two
consequences of these results are worth noting:
\begin{itemize}
\item The physical scale of an orbit depends linearly on the initial
radius, and the eccentricity is constant for all orbits. This means
the orbit of any two particles with different initial radii are
similar, differing only in their period.
\item The above solution breaks down for $m \leq \frac{M}{2}$; this
corresponds to the particle becoming unbound and elliptical orbits
no longer existing.
\end{itemize}
Thus for a single particle initially at $r_0$,
\begin{equation}
\label{new_r}
r(r_0,\eta)=\mu r_0 (1-\epsilon \cos{\eta})
\end{equation}
and
\begin{equation}
\label{new_t}
t(r_0,\eta)=r_0^\frac{3}{2}\sqrt{\frac{\mu^3}{Gm}}(\eta - \epsilon \sin{\eta}).
\end{equation}
While this equation does not directly yield $r$ as a function of $r_0$
and $t$, we can solve it to find $\eta=\eta(r_0,t)$ and hence find $r$
from equation~\ref{new_r}.
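In practice, this inversion is a standard one-dimensional root find. A minimal
Python sketch (in units where $Gm=1$; not the code used to produce the figures):
\begin{verbatim}
import numpy as np

def eta_of_t(r0, t, M_over_m):
    # Solve t = r0^(3/2) sqrt(mu^3/(G m)) (eta - eps sin eta) for eta
    # by Newton iteration; the left-hand side is monotonic in eta.
    mu = 1.0 / (2.0 - M_over_m)
    eps = 1.0 - 1.0 / mu
    tau = np.asarray(t / (r0 ** 1.5 * mu ** 1.5), dtype=float)
    eta = tau.copy()                  # starting guess
    for _ in range(50):
        f = eta - eps * np.sin(eta) - tau
        eta = eta - f / (1.0 - eps * np.cos(eta))
    return eta

def radius(r0, t, M_over_m):
    # Current radius of a shell initially on a circular orbit at r0
    mu = 1.0 / (2.0 - M_over_m)
    eps = 1.0 - 1.0 / mu
    return mu * r0 * (1.0 - eps * np.cos(eta_of_t(r0, t, M_over_m)))

# e.g. a 10% drop in the central mass: radius(1.0, 10.0, 10.0 / 9.0)
\end{verbatim}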
The radius of a particle at some time $t$ depends only on the initial
radius $r_0$. Hence a family of particles that start at a given
$r_0$, regardless of orbital inclination, will always be at the same
radius at any moment in time. This is illustrated in Figure
\ref{ellipse} by the three particles on the dotted circular arc,
which, while they are on different orbits, all coincide at the same radius. As a
result, the radial motion of a spherical system, composed of
individual particles, can instead be described as that of a series of
concentric spherical shells (or cylindrical shells in 2D disks and
other axisymmetric systems).
Henceforth we will refer not to individual particles but to spherical
shells, with initial and current radii $r_0$ and $r$, which obey the
above equations.
\subsection{Recovering the density profile}
\label{KeplerDensity}
\begin{figure}
\includegraphics[width=\columnwidth]{r_r0.pdf}
\caption{The radius $r$ at which a particle (or spherical shell)
resides at time $t$, as a function of its initial radius $r_0$ at
the moment $t_0<t$ of mass loss (red curve). The pericenter of the
elliptical orbit of each particle is $r_0$, and its apocenter is
taken from equation~\ref{r_max} (black and blue lines,
respectively). The outermost six turning points of the function are
also marked by vertical black lines. Here we use an initial point
mass of $10^{9} M_\odot$, which drops by $10\%$. The particle
positions are shown $t=10$ Myr after the mass loss, although, as discussed later, the shape of the profile is self-similar and can be expressed by this curve at all times by rescaling using equation \ref{scale}.}
\label{r_r0}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{r0_rho_r.pdf}
\caption{Upper panel: The same curve as Figure \ref{r_r0}, but with
the axes reversed. Lower panel: The corresponding density
profile. The two plots have a shared horizontal axis. The profile is
analytic, though the computation is limited to finite numerical resolution
(detailed in Appendix \ref{KeplerSolve}), causing the truly singular
caustics to appear finite. The vertical lines show the $r$ values of
the same turning points as shown in Figure \ref{r_r0}. Again, the shape of both profiles does not change with time, and all time evolution can be captured by rescaling the radial co-ordinate using equation \ref{scale}.}
\label{r0_rho_r}
\end{figure}
With particles suddenly on a range of eccentric orbits, shells can
pass through one another and overlap, leading to overdense regions
where shells bunch up together and underdense regions where shells are
widely spread. Our goal is to compute the time-evolution of the
density profile (and use it to compute the particle interaction
rate). The density at radius $r$ at time $t$ can be related to the initial
density profile and shell positions, using the 1D Jacobian determinant
\begin{equation}
\label{rho}
\rho(r,t)=\sum_{r_i(r_0,t)=r} \left\vert \frac{dV_0}{dV} \right\vert_{r_i} \rho_0(r_{0,i}).
\end{equation}
Here the sum is over all individual shells $i$ that are currently at a
radius $r$, but may have had different initial radii $r_{0,i}$. A
similar calculation was used in \citet{Schnittman08} under the
approximation of epicyclic orbits, whereas here we make no such
approximations. Each shell contributes a density equal to its initial
density $\rho_0(r_0)$, multiplied by its change in infinitesimal
volume, $dV$. For each individual shell with $dV=4 \pi r^2 dr$, we
have
\begin{equation}
\left\vert \frac{dV_0}{dV} \right\vert = \frac{r_0^2}{r^2} \left\vert \frac{dr_0}{dr} \right\vert.
\end{equation}
(Note that all analysis presented here can be easily modified to an
axisymmetric system, by replacing densities with surface densities and
the volume element with $dA=2 \pi r dr$.) Finding the density profile
then amounts to identifying the set of shells that are at a particular
radius $r$ at a time $t$.
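A minimal numerical realization of equation~\ref{rho} (a sketch, using the
\texttt{radius()} routine above; the resolution issues are discussed in
Appendix~\ref{KeplerSolve}) is to lay down many thin shells, advance each to time $t$,
and bin their conserved mass by current radius:
\begin{verbatim}
import numpy as np

def density_profile(t, M_over_m, r_out=500.0, n_shells=400000, n_bins=2000):
    # Binned rho(r) for an initially uniform (rho_0 = 1) sphere of
    # circular orbits, in G*m = 1 units; caustics smear to the bin width.
    r0 = np.linspace(1e-3, r_out, n_shells)
    dr0 = r0[1] - r0[0]
    mass = 4.0 * np.pi * r0 ** 2 * dr0      # conserved mass per shell
    r = radius(r0, t, M_over_m)             # current shell radii
    edges = np.linspace(0.0, r_out, n_bins + 1)
    binned, _ = np.histogram(r, bins=edges, weights=mass)
    mid = 0.5 * (edges[:-1] + edges[1:])
    vol = 4.0 * np.pi * mid ** 2 * np.diff(edges)
    return mid, binned / vol
\end{verbatim}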
To simplify operations involving equations \ref{new_r} and
\ref{new_t}, we can rearrange these to make use of the fact that we
only want to recover density profiles at fixed values of $t$. From
equation~\ref{new_t}, we find $r_0=r_0(\eta,t)$ as
\begin{equation}
\label{new_r0}
r_0 = \frac{1}{\mu} \left[ \frac{G m t^2}{(\eta - \epsilon \sin{\eta})^2} \right]^\frac{1}{3}
\end{equation}
and substituting this into equation~\ref{new_r} we obtain
\begin{equation}
\label{new_reta}
r=\left[ G m t^2 \frac{(1-\epsilon \cos{\eta})^3}{(\eta - \epsilon \sin{\eta})^2} \right]^\frac{1}{3}.
\end{equation}
Thus we have a parametric equation for $r=r(r_0,t)$.\footnote{Note
that this solution is a generalization of the parametric solution
for spherical collapse in cosmology -- the latter corresponds to the
limiting case of $\epsilon=1$, i.e. pure radial orbits, in
eq.~\ref{new_reta}.} This solution is shown in Figure~\ref{r_r0},
for a system of initially circular orbits around a point mass of $10^9
M_\odot$, $t=10$ Myr after the moment of a drop in the central mass
by $10\%$. The same parameters are used throughout the rest of this
section.
To interpret the result shown in Figure~\ref{r_r0} in an intuitive
way, consider the period of each shell, $T^2 \propto a^3 \propto
r_0^3$. At larger initial radii, the period becomes longer and
longer. In the extreme case, there are some particles for whom $t \ll
T$ which have barely moved from their periapsis. Further in, we see
particles for whom $t=\frac{T}{2}$, just reaching aposapsis for the
fist time. In Figure \ref{r_r0}, particles which started at roughly
200 parsecs from the central mass are just completing their first
orbit, i.e. $t=T$. This is the outermost minimum in this figure; each
successive minimum, toward smaller radii, corresponds to those
particles which have completed another full orbit at time $t$.
Figure~\ref{r_r0} also shows that there are various locations where
multiple shells with different initial radii coincide at the same
$r$. To make this clearer, top panel of Figure \ref{r0_rho_r} shows
the same plot with the axes reversed, such that it becomes clearer to
see the radii at which multiple shells overlap. The bottom panel of
Figure \ref{r0_rho_r} shows the corresponding density profile, which
we explore next. (A more detailed discussion of how this profile is
computed can be found in Appendix~\ref{KeplerSolve}.)
The density profile in Figure \ref{r0_rho_r} has two distinct features,
which can be understood from equation~\ref{rho}. First, large
step-like over- and underdensities, the cause of which we have already
identified as the overlap of shells from various initial radii,
i.e. they stem from the summation in equation~\ref{rho}. Second, the
sharp density spikes (caustics) arise when the derivative
$\frac{dr}{dr_0}$ in the $\frac{dV_0}{dV}$ term goes to zero. From
equations \ref{new_r0} and \ref{new_reta}, this derivative can be
written as
\begin{equation}
\label{dr_dr0}
\frac{dr}{dr_0}=\mu \left\{ 1-\epsilon \left[ \cos{\eta} +
\frac{3}{2}\sin{\eta} \left( \frac{\eta - \epsilon \sin{\eta}}{1 -
\epsilon \cos{\eta}} \right) \right] \right\}.
\end{equation}
In Figure~\ref{r_r0}, the vertical lines mark the points where
$\frac{dr}{dr_0}\rightarrow 0$. The same points are marked by
vertical lines in the top panel of Figure \ref{r0_rho_r}; their
locations clearly coincide with the caustics in the bottom panel,
where $\rho(r)\rightarrow\infty$. At these turning points we have a
shell whose two edges cross, i.e. one edge of the shell has passed
through a turning point and meets the other edge, which has yet to turn
around, travelling in the opposite direction. Hence all the mass contained
within the volume element is now enclosed in a volume that goes to
0, and the corresponding density is infinite. This can be seen clearly
in Figure \ref{r0_rho_r}, where the turning points in $r_0$ vs $r$
(which now, with the reversed axes, are vertical with a gradient going
to infinity) are again highlighted with vertical bars that correspond
perfectly to the caustics in the density profile.
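The caustic radii can be located directly as roots of equation~\ref{dr_dr0} in
$\eta$; a sketch (the overall factor $\mu$ does not affect the roots, and $Gm=1$
units are assumed):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def caustic_radii(t, M_over_m, eta_max=60.0, n_scan=20000):
    mu = 1.0 / (2.0 - M_over_m)
    eps = 1.0 - 1.0 / mu

    def drdr0(eta):   # eq. (dr_dr0), divided by the constant mu
        s = (eta - eps * np.sin(eta)) / (1.0 - eps * np.cos(eta))
        return 1.0 - eps * (np.cos(eta) + 1.5 * np.sin(eta) * s)

    def r_of_eta(eta):   # eq. (new_reta) with G*m = 1
        return (t * t * (1.0 - eps * np.cos(eta)) ** 3
                / (eta - eps * np.sin(eta)) ** 2) ** (1.0 / 3.0)

    grid = np.linspace(1e-6, eta_max, n_scan)
    v = drdr0(grid)
    roots = [brentq(drdr0, a, b) for a, b, fa, fb
             in zip(grid[:-1], grid[1:], v[:-1], v[1:]) if fa * fb < 0.0]
    return sorted((r_of_eta(x) for x in roots), reverse=True)
\end{verbatim}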
Note that, at fixed $\eta$, $r$ and $r_0$ both scale with time as $\propto
t^{2/3}$. As a result, the solutions are self-similar, and
depend only on $\frac{m}{M}$. The density profile, in particular, has
a fixed shape -- any features, such as a caustic at position $r_c$,
correspond to fixed values of $\eta$ and hence obey
\begin{equation}
\label{scale}
r_c = g(\eta) \ (Gmt^2)^\frac{1}{3},
\end{equation}
where $g(\eta)$ is a time-independent function of $\eta$ of order
$\eta^{-2/3}$. Features move outward at the ``pattern speed''
$\dot{r}_c \propto t^{-1/3}$. Each caustic, and indeed the whole
profile, moves outward initially very fast and then slows at later
times.
\subsection{The nature of caustics}
\label{caustics}
\begin{figure*}
\includegraphics[width=\textwidth]{rho_grad_r.pdf}
\caption{The density profile (top, red) from equation \ref{rho} and
its gradient (bottom, blue) from equation~\ref{drho_dr}. The left
panels show a broad view of the profile, and the right panels zoom
in on the outermost caustics (shown as shaded regions on the left).
Positive gradients are shown in dark blue, and negative in light
blue. The same parameters are used as in Figure~\ref{r_r0}.}
\label{rho_grad_r}
\end{figure*}
\begin{figure}
\includegraphics[width=\columnwidth]{n_r.pdf}
\caption{The logarithmic slope $n$ of the density profile near a
density peak at radius $r_c$ (defined in eq.~\ref{n}). As we
approach the singularity, the slope tends to the limiting value
$n=1/2$. The same parameters are used as in Figure~\ref{r_r0}.}
\label{n_r}
\end{figure}
In the caustics shown in Figure \ref{r0_rho_r}, the density is
formally singular, which suggests a potentially large (if not
infinite) interaction rate ($\propto \rho^2$) in such a system of
particles. It can be shown, by expanding the expression for the
density close to the caustics, that as $\vert
r-r_c\vert/r_c\rightarrow0$, the density approaches the singularity as
\begin{equation}
\label{rho_close}
\rho(r) \propto (r-r_c) ^{-\frac{1}{2}}
\end{equation}
where $r_c$ is the location of the caustic. This derivation is shown
in Appendix \ref{Perturbation}. To understand the profile from a
finite distance from the caustic, we can directly compute the gradient
of the density from equation \ref{rho},
\begin{equation}
\label{drho_dr}
\frac{d\rho(r)}{dr}=\sum_{r_i(r_0,t)=r} \frac{d\rho_i}{dr},
\end{equation}
where
\begin{equation}
\frac{d\rho_i}{dr} = \frac{1}{\mu^3} \left[\frac{dr_0}{dr}\frac{d\rho_0}{dr_0} + \mu^3 \rho_0(r_0) \frac{d\eta}{dr} \frac{d}{d\eta} \left( \frac{r_0^2}{r^2} \left\vert \frac{dr}{dr_0} \right\vert \right) \right]_{\eta_i}
\end{equation}
is the gradient in density for a single shell, with corresponding
$\eta_i$, at radius $r$. With tedious differentiation which we will
not reproduce here, this expression can be evaluated as a function of
$\eta_i$. For completeness, we have included a pre-mass-loss gradient
$\frac{d\rho_0}{dr_0}$ here, although, for simplicity, we use a flat
profile ($\frac{d\rho_0}{dr_0}=0$) in our calculations. This is
justified by the fact that, near the caustic, the right hand term
diverges as a higher inverse power of $\left\vert \frac{dr}{dr_0}
\right\vert$ and hence dominates over the $\frac{d\rho_0}{dr_0}$ term.
Figure~\ref{rho_grad_r} shows the profile and its gradient, and
includes a zoomed-in view near the outermost caustics. Notice that as
we zoom in and the numerical resolution increases, so does the height
of the caustics -- only numerical resolution keeps them from being
truly singular. We also note that the outermost turning point in
$\frac{dr}{dr_0}$ in Figure~\ref{r_r0} corresponds to the
second-largest-radius density peak in Figure~\ref{rho_grad_r}. (The
order of the caustics differs between
Figures~\ref{r_r0}~and~\ref{rho_grad_r}: they appear in pairs with the
larger $r$ corresponding to the smaller $r_0$.)
Let us assume a power-law density profile approaching a caustic from above,
\begin{equation}
\label{rho_c}
\rho_c \propto (r-r_c)^{-n},
\end{equation}
where $r_c$ is the radius of the caustic and $n$ is some power greater
than 0. Note that the sign of the term in brackets should be reversed
for peaks that approach the singular point from below (which is true
of every other peak). In either case,
we can differentiate equation \ref{rho_c} to give
\begin{equation}
\label{n}
n\equiv -\frac{d\ln{\rho}}{d\ln{\vert r-r_c \vert}}= \frac{\vert r_c-r\vert}{\rho}\frac{d\rho}{dr}.
\end{equation}
and comparing this to the calculated density gradient we can find the
best-fit value of $n$ as the profile approaches the peak. This expression is
true for caustics which approach the singularity either from above or below.
Figure \ref{n_r} shows the value of this exponent near the
peak. Notice that it tends to $n=\frac{1}{2}$ as it reaches the
caustic (as expected from equation \ref{rho_close}). Immediately
away from the peak the profile becomes shallower.
\subsection{The particle interaction rate}
\label{caustic_rate}
\begin{figure}
\includegraphics[width=\columnwidth]{boost_resolution.pdf}
\caption{The contribution to the interaction rate due to the caustics
$R_c$ (equation \ref{caustic_boost}), compared to the rate over the
whole system before mass loss $R_0$ (integrated to 500 pc, roughly
the radius at which the perturbed density profile coincides with the
unperturbed). The upper limit of integration, $\Delta$, is fixed at
5pc (and the qualitative result is independent of this value) and
the lower limit is varied to demonstrate numerical convergence. We
perform the integral on the analytic density profiles of the first
six caustics (counting in descending $r_0$). Red curves show the
result for caustics which approach the singularity from above, and
blue for those which approach from below. Darker curves show the
contribution from caustics at larger radii. The integral is
performed using Gaussian quadrature and the same parameters are used
as in Figure~\ref{r_r0}.}
\label{boost_resolution}
\end{figure}
We now turn to the issue of whether this density profile, with sharp
(and formally singular) post-mass-loss density spikes, leads to a large
boost in the particle interaction rate.
Assuming, for simplicity, a constant velocity
dispersion and interaction cross section, the interaction rate is
proportional to the integral
\begin{equation}
\label{rate_plain}
R \propto \int \rho^2 r^2 dr
\end{equation}
(these assumptions are discussed further in \S~\ref{Boost} below).
We can calculate the contribution, $R_c$, to the total interaction
rate from a thin radial shell stretching from some small distance from
the caustic, $\varepsilon$, to some macroscopic distance, $\Delta$,
\begin{equation}
\label{caustic_integral}
R_c = \int^{r_c+\Delta}_{r_c+\varepsilon} \rho^2 r^2 dr.
\end{equation}
If the interaction rate over the caustic is finite then the value of
the integral should converge as $\varepsilon \rightarrow 0$.
Using the power-law form of the density near a caustic from equation
\ref{rho_c} with $x=r-r_c$ (and swapping all appropriate signs for
caustics which approach the singularity from below),
\begin{equation}
\label{caustic_boost}
R_c=\int^{\Delta}_{\varepsilon} k^2 x^{-2n} (r_c+x)^2 dx.
\end{equation}
For small $x$, the integrand approaches $k^2 r_c^2 x^{-2n}$, so that
the integral diverges for $n \ge \frac{1}{2}$ and $\varepsilon
\rightarrow 0$, implying an infinite net interaction rate. For $n <
1/2$, the integral is finite, though still potentially large, and
evaluates to
\begin{eqnarray}
\label{caustic_boost-solution}
R_c&=& k^2 x^{-2n}\left(\frac{x^3}{3-2n}+\frac{r_cx^2}{1-n}+\frac{r_c^2x}{1-2n}\right)\Bigg\vert_{x=\varepsilon}^{x=\Delta} \\
&=& \frac{k^2r_c^{3-2n}}{1-2n}\left(\frac{x}{r_c}\right)^{1-2n} \left[1+O\left(\frac{x}{r_c}\right)\right]\Bigg\vert_{x=\varepsilon}^{x=\Delta}.
\end{eqnarray}
For $n$ close to $\frac{1}{2}$ the factor $\frac{1}{1-2n}$ can become
very large while the power of $x$ goes to 0 (and thus even for small
$x$ the integral can be large).
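The finiteness of this integral for fixed $n<1/2$ is easy to verify numerically;
a sketch comparing quadrature against the closed form of
equation~\ref{caustic_boost-solution} (the values of $n$, $r_c$ and $k$ are
illustrative):
\begin{verbatim}
from scipy.integrate import quad

def R_caustic(eps, Delta, n, r_c=1.0, k=1.0):
    # Direct quadrature of k^2 x^(-2n) (r_c + x)^2 from eps to Delta
    val, _ = quad(lambda x: k * k * x ** (-2 * n) * (r_c + x) ** 2,
                  eps, Delta, limit=200)
    return val

def R_caustic_exact(eps, Delta, n, r_c=1.0, k=1.0):
    # Closed-form antiderivative, valid for n < 1/2
    F = lambda x: k * k * (x ** (3 - 2 * n) / (3 - 2 * n)
                           + r_c * x ** (2 - 2 * n) / (1 - n)
                           + r_c ** 2 * x ** (1 - 2 * n) / (1 - 2 * n))
    return F(Delta) - F(eps)

for eps in (1e-2, 1e-4, 1e-6, 1e-8):
    print(eps, R_caustic(eps, 5.0, 0.45), R_caustic_exact(eps, 5.0, 0.45))
\end{verbatim}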
As the value of $n$ varies with radius, this integral must be performed
numerically, and this result is shown in Figure
\ref{boost_resolution}. Here we see that the integral indeed
converges, and that the peaks at largest radii contribute the most to
the total interaction rate.
The total contribution to the interaction rate from the caustics,
found by summing over the first 100 peaks, is $R_{\rm caustic}/R_0 =
0.23$, where the contribution from the inner caustics quickly becomes
vanishingly small. Thus the interaction rate of particles in caustics,
while very large for the small region they reside in, does not lead to a
net increase in the total interaction rate.
We can also integrate the rest of the profile separately, which is now
numerically feasible without having to resolve the caustics. The sum
of these two is a good approximation to the total ratio of $R$ to
$R_0$. Integrating the analytic profile, while capping the value of
$\rho$ at 100 (and thus not including the contribution from the
caustics), we find the contribution from the rest of the profile to be
$R_{\rm profile}/R_0 \approx 0.25$.
Summing the interaction rates for the caustics and the rest of the
profile we find $R/R_0 \approx 0.5$. Hence the total interaction rate
following rapid mass loss is significantly less than the interaction
rate before mass loss. In fact, as we argue in more detail in
Section~\ref{Other}, we have calculated this rate in an extremely
idealised case. Introducing more realistic physical effects will cause
these peaks to flatten, further decreasing the interaction rate.
It should be noted however that if the integration range does not
enclose the whole profile (specifically if it includes a pair of
caustics but not the associated density deficit at higher radii) then
the ratio $R/R_0$ can be larger than one. The only justification for
limiting the integration range as such would be if this was the outer
extent of the system, e.g. if a uniform density disk had a sharp
cutoff at a finite radius. Similarly if the integration range is taken
to be very large the ratio will tend to 1, regardless of the
mass-loss, as it will be dominated by mass at large radii which has
barely deviated from its original orbit due to its long period.
The integration limits used here are chosen to capture the full region
(minus the minor contribution at small radii, which we are limited
from resolving numerically) in which the density deviates from its
initial state.
Thus we have shown that despite the presence of a formal singularity,
the caustics themselves provide only a minor contribution to
the total interaction rate. (A similar argument holds for disks and
other axisymmetric systems.) In fact the total interaction rate is
decreased by rapid mass loss, in direct contradiction to the
expectation that these sharp density cusps may be an excellent
laboratory for observing high interaction rates.
As the shape of the profile is time-independent this result will hold
at all times, as long as the integration range is changed to encompass
the whole profile.
As an aside, we note that for three-body processes, whose rate is
$\propto\rho^3$, the rate near the caustics will diverge, and further
study into cases where these are physically relevant may be fruitful.
\subsection{Summary}
In this section, we have presented an analytic derivation of the
motions of shells of particles, initially on circular orbits,
following an instantaneous mass loss. We then found the corresponding
density profile numerically, and found the following properties:
\begin{itemize}
\item The profile can be broadly split into two components:
\begin{enumerate}
\item Step-like over- and underdensities corresponding to
regions where multiple shells overlap at one radius
\item Singular caustics at radii where the edges of a single
shell cross and hence its volume goes to 0 and its density
to $\infty$
\end{enumerate}
\item As $r \rightarrow 0$, there is an infinite sequence of
caustics, coming in pairs and spaced closer together at the edges
of the regions where multiple shells overlap
\item At large radii, particles have long periods, and well beyond
the radius where the orbital time is longer than the time elapsed
since the mass loss, the density profile tends to the unperturbed
profile
\item For a given fractional mass loss, the density profile is
self-similar, with a shape that expands to larger radii
as $r \propto t^{2/3}$
\item The interaction rate in the caustics is large given their
small spatial extent. Though the density profile is singular the
caustics are shallower than the curve $\rho \propto
r^{-\frac{1}{2}}$ and hence the integral of $\rho^2$ over some
small region is finite. However the total interaction rate in the
caustics is still small compared to the interaction rate of the
profile preceding mass loss.
\item Integrating over the whole profile, the interaction rate is
less than the unperturbed case, a result that is independent of
time.
\end{itemize}
\section{Response of less idealised systems}
\label{Other}
\begin{figure}
\includegraphics[width=\columnwidth]{rho_dm.pdf}
\caption{{\it Upper panel:} The density profile 6 Myrs after
instantaneous mass loss, shown for a range of values of $m<M$ for a
constant $M=10^9 M_\odot$. From dark to light, $m/M$ goes from 0.98
to 0.66 in intervals of 0.02. The profile has been computed using
the numerical CausticFrog package and sharp peaks of the caustics
have been smoothed over 1 pc (more details in Section~\ref{Other}
and Appendix~\ref{Code}). {\it Lower panel:} The total interaction
rate, $R$, compared to the rate before mass loss, $R_0$ (calculated
by numerically integrating the above smoothed profiles from 70 to
500 pc using equation \ref{rate_plain}). Three different smoothing
lengths are used: 5 pc (light red), 1 pc (dark red) and $10^{-10}$ pc
(black), to show that the results are very weakly dependent on
smoothing length.}
\label{rho_dm}
\end{figure}
In \S~\ref{Kepler}, we chose the simplest physical system, consisting
of massless particles initially on circular orbits in a Keplerian
potential, so as to find a semi-analytic density profile. Here we
discuss results from a range of more physical realisations. We show
that each amendment leads to a flatter, less sharply contoured,
profile than the circular orbit case.
Whenever density profiles are shown, they have been found with our new
public 1D Lagrangian simulation code \textsc{CausticFrog}. By evolving
the edges of a series of spherical shells, which are able to cross and
overlap, we can easily resolve both shell crossing and squeezing (and
hence resolve caustics), exploiting the spherical symmetries of the
problem to save computation costs. This code can be found at
\url{https://github.com/zpenoyre/CausticFrog} and is presented in
detail in Appendix \ref{Code}.
Numerical discreteness noise, caused by the thousands of interacting
shells, makes these profiles difficult to inspect visually and to
integrate over numerically, so we smooth the profile. This is done
by replacing each spherical shell, extending from radius $r_1$ to
$r_2$ with a Gaussian centred at $\frac{1}{2}(r_1 + r_2)$ and
with a width
\begin{equation}
\sigma = \sqrt{\frac{1}{4}(r_2 - r_1)^2 + r_s^2},
\end{equation}
where $r_s$ is a smoothing length. The profile is normalised to
conserve mass. Note that the realistic density profiles we consider
below consist of convolving a set of discrete caustics with a smooth
distribution. We thus expect the caustic structures to be physically
smoothed, justifying this approach. Furthermore, in Fig.~\ref{rho_dm}
we show that the choice of smoothing length does not have a large
impact on the interaction rate, and hence does not qualitatively affect
the results.
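A minimal sketch of this smoothing step (in Python; the array names are
ours and the mass per shell is assumed given, so this is an illustration
rather than the CausticFrog implementation):
\begin{verbatim}
import numpy as np

def smooth_profile(r1, r2, mass, r_grid, r_s):
    # Replace each shell [r1, r2] by a Gaussian centred at (r1+r2)/2
    # with width sigma = sqrt((r2-r1)^2/4 + r_s^2), depositing its
    # mass onto r_grid so that total mass is conserved.
    centre = 0.5*(r1 + r2)
    sigma = np.sqrt(0.25*(r2 - r1)**2 + r_s**2)
    rho = np.zeros_like(r_grid)
    for c, s, m in zip(centre, sigma, mass):
        w = np.exp(-0.5*((r_grid - c)/s)**2)
        w /= np.trapz(4*np.pi*r_grid**2*w, r_grid)  # normalise to unit mass
        rho += m*w
    return rho
\end{verbatim}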
\subsection{Degree of mass loss}
\label{total_rate}
We have shown in Section \ref{Kepler} that the density profile
resulting from a specific drop in a point mass potential ($10\%$ for
all above analysis) leads to a drop in the total interaction rate of
particles in the system. Now we extend this to any fractional mass
drop.
Figure \ref{rho_dm} shows the response of the Keplerian system with
initially circular orbits to varying degrees of mass loss. The top
panel shows the density profiles at the fixed time 6 Myr following a
mass-loss of various degrees. Clearly, more significant mass loss
leads to lower overall densities, as the particles are significantly
less tightly bound to the smaller remaining point mass. Thus they have
more eccentric orbits and move further outwards, giving a lower
density. Smaller mass losses lead to flatter and less perturbed
profiles.
The bottom panel in Figure \ref{rho_dm} shows the ratio of the
particle interaction rates 6 Myrs after ($R$) {\it vs} immediately
before ($R_0$) the moment of mass loss. The interaction rate is
reduced, regardless of the degree of mass loss. The larger the mass
loss, and thus the more eccentric and larger the orbits of the
particles, the lower the density and the lower the interaction
rate. Note that the particles become unbound for mass losses of
$50\%$, though the interaction rate will be non-zero for some period
while the unbound mass moves outwards.
Hence the drop in interaction rate is true for any fractional mass loss.
\subsection{Self-gravity}
\label{self_gravity}
\begin{figure}
\includegraphics[width=\columnwidth]{rho_self.pdf}
\caption{The density profile around the outermost peak, at the same
moment in time, for a range of initial densities (and hence
contribution via self-gravity). All profiles are smoothed over a 2
pc scale. From dark to light the densities increase from 1 (visually
indistinguishable from the analytic solution) to 33
$\mathrm{M}_\odot \mathrm{pc}^{-3}$ (in steps of 4 $\mathrm{M}_\odot
\mathrm{pc}^{-3}$). The same parameters are used as in Fig.~\ref{r_r0}.}
\label{rho_self}
\end{figure}
Depending on the system in question, self-gravity may be safe to
ignore (e.g. the inner regions of accretion disks around black holes),
or it may be the dominant source of gravitational potential (e.g. the
dark matter profile away from the centre of a gas-poor dwarf galaxy).
Equation \ref{scale} shows that the speed at which the contours of the
density profile move outwards depends on the enclosed mass. In a
self-gravitating system, as a feature moves outwards, the mass enclosed
generally increases; hence, the speed increases and the profile spreads
out. Equation \ref{scale} is no longer exact with the inclusion of
self-gravity (and hence unclosed orbits), but in the case of a central
point mass it is an increasingly good approximation as the initial
density goes to 0.
We next explore the impact of self-gravity with \textsc{CausticFrog}
by following a system of particles on initially circular orbits, as
before, but including the self-gravity of each shell. We examine the
profile for the mass losses and time periods as in the analytic case
(see Fig.~\ref{r_r0} for details).
The results of this exercise are shown in Figure~\ref{rho_self}, for
initial profiles with increasing density (and hence contribution of
self-gravity). The figure shows that denser systems have features
that move outward faster, reaching larger radii at a given time, with
more dispersed peaks. When the mass of gravitating fluid enclosed,
$m_{enc}$, is of the order of the mass of the central object (as is true
for the lightest curve) the caustic is almost entirely dispersed. Even
when $m_{enc} \sim 0.1 m$ (darker curves) the difference between the
point mass and the self-gravitating profiles starts to become
apparent.
We conclude that self-gravity will generally disperse caustics and
lead to smoother and flatter density profiles.
\subsection{Non-circular orbits}
\label{non-circular}
\begin{figure}
\includegraphics[width=\columnwidth]{rho_ellipse.pdf}
\caption{The density profile near the outermost peak, at the same
moment in time, for different initial eccentricities. From dark to
light the eccentricities range from $e=$ 0 to 0.4 (in steps of
0.05). The same parameters are used as in Fig.~\ref{r_r0}, and the
curves are smoothed over 2 pc (for the highest eccentricities there
are few enough remaining bound shells that even this does not smooth
out all numerical noise).}
\label{rho_ellipse}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{rho_many.pdf}
\caption{The post-mass loss density profile for a range of initial
eccentricities. The initial eccentricity distribution is given by
Eq.~\ref{p_ecc} with different values of $n$ controlling its width,
from nearly circular (large $n$) to broader distributions including
high eccentricity (small $n$). From dark to light, the curves
correspond to $n=$ 64, 32, 16, 8, 4, 2, and 1 respectively. The same
parameters are used as in Fig.~\ref{r_r0}, and the curve is smoothed
over 2 pc.}
\label{rho_many}
\end{figure}
For most systems, we would not expect the gravitating particles to be
on circular orbits. In some cases, such as in a gas disk, viscous
dissipation may circularise orbits, but in a dissipationless system
such as a dark matter halo or a stellar bulge, we expect a wide
distribution of orbital eccentricities.
In this section, we present numerical solutions for non-zero initial
eccentricities using \textsc{CausticFrog}, but many of the results
below are equally apparent from the analytic derivation presented in
Appendix \ref{EllipseKepler}. In particular, particles with different
phases at the moment of mass loss will reach their turning points at
different times. We expect that this should cause sharp features of
the density profile near caustics to spread out.
Let us start with the simple case of orbits of a fixed initial
eccentricity. We would expect the distribution of initial phases to
correspond to $p(\phi) \propto \dot{\phi}^{-1}$ (the probability of a
particle being at some phase is inversely proportional to the rate of
change of phase), and initialise the positions of particles
along their elliptical orbits accordingly. We simulate just over a
million such orbits.
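One standard way to realise $p(\phi) \propto \dot{\phi}^{-1}$ (which is
equivalent to a uniform distribution in time, and hence in mean anomaly) is
sketched below in Python; we do not claim this is the exact CausticFrog
implementation.
\begin{verbatim}
import numpy as np

def sample_phases(e, size):
    # p(phi) ~ 1/phidot is uniform in mean anomaly M; solve Kepler's
    # equation M = E - e sin(E) by Newton iteration, then convert the
    # eccentric anomaly E to the true anomaly phi.
    M = 2*np.pi*np.random.rand(size)
    E = M.copy()
    for _ in range(50):
        E -= (E - e*np.sin(E) - M) / (1 - e*np.cos(E))
    return 2*np.arctan2(np.sqrt(1 + e)*np.sin(E/2),
                        np.sqrt(1 - e)*np.cos(E/2))
\end{verbatim}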
Figure \ref{rho_ellipse} shows the density profile for orbits with
different initial eccentricities (and a full range of initial phases),
between $0\leq e\leq 0.4$. As we move to higher and higher
eccentricity, the density peaks split into multiple peaks, and the
density profile flattens overall. At high eccentricities, a
significant amount of mass is unbound after mass loss and the density
drops precipitously.

We next compute the density profiles in a more
realistic situation, for initial orbits with a wide range of initial
eccentricities and phases. We use a simple toy model of the
distribution of initial eccentricities,
\begin{equation}
\label{p_ecc}
p(e) \propto (1-e)^n,
\end{equation}
where $n$ can be chosen to give mostly circular orbits (large $n$) or
a much wider range of eccentricities (small $n$). We emphasize that
this distribution is ad-hoc, but it conveniently allows us to explore
the impact of the width of the initial eccentricity-distribution.
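Equation \ref{p_ecc} can be sampled by inverse transform; a short sketch:
\begin{verbatim}
import numpy as np

def sample_eccentricity(n, size):
    # p(e) ~ (1-e)^n on [0,1] has CDF 1-(1-e)^(n+1), so invert:
    return 1.0 - np.random.rand(size)**(1.0/(n + 1))
\end{verbatim}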
Figure \ref{rho_many} shows density profiles for a range of values of
$n$. When the distribution is sharply peaked around $e=0$, the
resulting profile still has sharp spikes, but for broader
distributions, those peaks are much smaller. When a large fraction of
high-eccentricity orbits are included, a significant amount of mass
can again be lost, as particles become unbound.
We conclude that a system with a wider range of initial eccentricities
will have smoother features, and a flatter overall density profile,
following a mass loss event. For sufficiently large mass loss, there
is a net reduction in mass as particles initially near their periapsis
can easily become unbound (see Eq.~\ref{ecc_unbound}).
\subsection{Other assumptions and approximations}
There are several additional complications that could change the
response of a system to rapid mass loss. Here we briefly discuss a
few of these complications qualitatively.
\subsubsection{Time dependence}
The basic premise of this system is that mass loss is almost
instantaneous, i.e. occurs on a timescale that is much shorter than
the particles' orbital time. While instantly removing the mass makes
our calculation much easier, allowing for the mass to decrease over a
finite (if short) period will smooth out peaks and further flatten the
density profile.
A simple way to picture this is to imagine the mass dropping not in a
single event, but in two closely spaced events. In an initially circular
case, the first event sets particles onto elliptical orbits. When the
second event occurs, particles are on a range of orbits with different
phases. We can think of
this as a second `initial' state, now with particles with different
orbital properties at the same radius. As shown in
\S~\ref{non-circular}, a system with a variety of initial orbits
generally has less strongly peaked features than a family of similar
initial orbits. Hence, the profile will be flatter than if the mass
had dropped in a single event. This argument could be extended to
reducing a single mass loss to any number of distinct steps, and hence
to a continuous mass loss rate.
The results shown above for a self-gravitating fluid (\S~\ref{self_gravity}) can also be understood as a time-dependent phase mixing, as now the orbit of one particle (or spherical shell) directly affects another and over time they exchange energy. Thus the caustic, a region of high or even infinite phase density, diffuses and flattens over time.
\subsubsection{Alternative potentials}
We expect most physical potentials to have some degree of asphericity
\citep{Pontzen15} which will break the spherical symmetry of our
solutions. Relativistic effects may also be important;
relativistic precession, for example, will also disrupt any simple dependence
between an orbit and its period. Furthermore, as shown in
\S~\ref{self_gravity}, the inclusion of self-gravity will also break
down sharp features, and thus any self-consistent profile (such
as the profile for dark matter halos suggested by \citealt{Navarro97})
cannot maintain strong features.
\subsubsection{Dissipation}
We have so far assumed a collisionless system -- but, depending on the
context, there are several ways for the post-mass loss density waves
to dissipate energy. We expect that such dissipation will spread the
initially highly coherent waves, and the profile will flatten as a
result.
For baryonic matter, viscous dissipation due to turbulence and
magnetic fields, or due to radiative processes, can all
be effective at sapping energy from dense, fast-moving
regions. Furthermore, shocks can develop as the overdensities move
outwards, heating and transferring energy to the medium they move
through. Finally, pressure forces generally smooth the perturbations
caused by the mass loss \citep[see e.g. the discussion in][and
references therein]{Corrales10}.
\subsubsection{Unbound mass}
We have already seen that in systems with highly eccentric initial
orbits, only small changes in the central mass are needed for some
particles to become unbound. Any mass loss will of course lead to a
lower density, and this will further flatten the profile.
\subsection{Summary}
We have discussed some effects that should be incorporated
quantitatively into a more realistic picture of a dynamical system
before and after a period of rapid mass loss. A general trend is
clear: compared to the simplest idealised case presented in
\S~\ref{Kepler}, a more physical model develops a smoother and flatter
density profile. This will generally reduce the particle interaction
rate compared to the idealised case.
\section{Interaction Rates - A General Discussion}
\label{Boost}
In the one case for which we have calculated the interaction rate
(initially circular orbits in a Keplerian potential, \S~\ref{Kepler})
we have shown that there will be a smaller
interaction rate than before the mass loss event, as we have observed
in \S~\ref{total_rate}. Here we present a more general
heuristic argument: namely, if mass on average moves outwards (as
is the case following mass loss), the interaction rate will generically
decrease.
First let us more carefully justify our calculation of the particle
interaction rate. The rate per unit volume, for a single fluid with a
Maxwell-Boltzmann velocity distribution, is $\propto n^2 \sigma
\sqrt{\langle v^2 \rangle}$ where $n$ is the number density, $\sigma$
the interaction cross section and $\langle v^2 \rangle$ the velocity
dispersion.
We will make the simplifying assumptions that (i) the cross section is
constant throughout and (ii) the velocity dispersion is unchanged
before and after mass loss. We have so far not specified the
orientation of the initial orbits. Two limiting cases are isotropic
initial velocities for randomly inclined orbits, or zero dispersion if
all orbits are co-planar and in the same direction. In the isotropic
case, the assumption of constant velocity dispersion before and after
mass loss is reasonable, but for anisotropic initial velocity
structures, this assumption may break down.
We note that as mass loss induces particles to move outward on
average, their velocities are generally lower than the initial
velocities. We have now introduced a radial velocity dispersion as
shells moving radially cross, so the assumption that velocity
dispersion is unchanged (or reduced) is equivalent to the assumption
that the newly introduced radial dispersion is smaller than the
original tangential velocity dispersion.
To characterise the change in the total interaction rate of a system, we
define the ``boost factor'' as the ratio of the interaction rate at a given
post-merger time to that before the moment of mass loss,
\begin{equation}
\label{boost}
B(t)=\frac{\int \rho_1^2(t) r^2 dr}{\int \rho_0^2 r^2 dr}
\end{equation}
where $\rho_0$ and $\rho_1(t)$ are the density profiles before and
after mass loss. For the total interaction rate of the system the
integral should be evaluated out to the radius of the system and can
be converted to a luminosity (for a given DM particle cross section)
to compare to observations. We will assume here that we are
interested only in the integrated interaction rate, because the system
is unresolved. This is because we are dealing with small objects at
extragalactic distances (AGN disks) and/or because the actual signal
(e.g. gamma-rays from DM annihilations in dwarf galaxy cores) is
spatially unresolved.
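On a discrete radial grid, equation \ref{boost} is a one-line quadrature; a
minimal sketch (the grid and profile arrays are assumed given):
\begin{verbatim}
import numpy as np

def boost_factor(r, rho0, rho1):
    # B = int rho1^2 r^2 dr / int rho0^2 r^2 dr over the same grid
    return np.trapz(rho1**2 * r**2, r) / np.trapz(rho0**2 * r**2, r)
\end{verbatim}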
We will also assume that mass is conserved as the density profile is
modified, i.e.
\begin{equation}
\label{mass}
M=\int 4 \pi \rho r^2 dr = \mathrm{const.}
\end{equation}
This integral will also extend to the outer edge of the system. As we
have seen in \S~\ref{Other}, mass can become unbound and lost, but
this will only reduce the densities and lead to smaller boost factors.
\subsection{General transformations}
\subsubsection{Change in volume}
\begin{figure}
\includegraphics[width=\columnwidth]{combinedTransformation.pdf}
\caption{A flat density profile is stretched (top panel), and shifted
(bottom panel), both whilst conserving mass. This leads to a drop in
the interaction rate.}
\label{combinedTransformation}
\end{figure}
Stretching the initial profile (see top panel of
Figure~\ref{combinedTransformation}) such that the outer radius is
some factor $(1 + \epsilon)$ times its initial value yields the new
density $\rho_1=(1+\epsilon)^{-3}\rho_0$, and thus the boost\footnote{In both this and the following calculation we have integrated over the whole profile. Integrating without changing the limits will lead to a yet lower value of $B$.}
$B=(1+\epsilon)^{-3}$, i.e. $B<1$. The bottom panel of Figure~\ref{combinedTransformation} illustrates
another volume-expanding operation: shifting an initially flat density
profile to higher radii. The width of the profile, $R$, is unchanged,
and we use the dimensionless parameters $\alpha=\frac{R}{r_0}$ and
$\beta=\frac{\delta}{r_0}$ to describe the transformation. Conserving
mass leads to the density, and therefore the boost, dropping by a factor
\begin{equation}
\label{stretchRho}
\frac{\rho_1}{\rho_0}=B=\frac{(1+\alpha)^3 -1}{(1+\alpha+\beta)^3 - (1+\beta)^3}.
\end{equation}
For $\beta$ greater than 0 ($\alpha$ is always $> 0$), this again leads to $B<1$.
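For completeness, the step common to both transformations: for a uniform
profile of density $\rho$ occupying volume $V$ one has $\int \rho^2 r^2 dr
= \rho^2 V/4\pi$, so mass conservation ($\rho_1 V_1=\rho_0 V_0$) gives
\begin{equation*}
B=\frac{\rho_1^2 V_1}{\rho_0^2 V_0}=\frac{\rho_1}{\rho_0}\ ,
\end{equation*}
which yields $B=(1+\epsilon)^{-3}$ for the stretched profile and reproduces
equation \ref{stretchRho} for the shifted one.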
\subsubsection{Change in mass distribution}
\label{slope}
\begin{figure}
\includegraphics[width=\columnwidth]{power_boost.pdf}
\caption{The boost factor, as given by equation \ref{powerBoost}, if
the density function goes from a power law with power $\nu$ to one
with power $\omega$ (c.f. equation \ref{initPower}). The curves for which $B=1$ are shown in white. The
profile extends to infinity for positive $\nu$ and $\omega$ but must
be curtailed at $\nu=-\frac{3}{2}$ ($B \rightarrow 0$) and
$\omega=-\frac{3}{2}$ ($B \rightarrow \infty$).}
\label{power_boost}
\end{figure}
We also examine the effect of a more general transformation to the
density profile: changing from one power law to another. Even if the
profile has many features, when smoothed or averaged over time it
will be well described by a simple power law.
Let us assume an initial and final profiles of the form
\begin{equation}
\label{initPower}
\rho_0 = k r^\nu \ \mathrm{and} \ \rho_1 = \kappa r^\omega,
\end{equation}
where both extend to the same outer radius, $R$. Conserving mass gives a boost
\begin{equation}
\label{powerBoost}
B = \frac{3+2\nu}{3+2\omega} \left( \frac{3+\omega}{3+\nu} \right)^2.
\end{equation}
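For the reader's convenience, the intermediate steps behind equation
\ref{powerBoost}: conserving mass over $[0,R]$ fixes $\kappa$ in terms of
$k$, and the boost follows directly,
\begin{equation*}
\frac{kR^{3+\nu}}{3+\nu}=\frac{\kappa R^{3+\omega}}{3+\omega}
\ \Longrightarrow\
\frac{\kappa}{k}=\frac{3+\omega}{3+\nu}\,R^{\nu-\omega},
\qquad
B=\frac{\kappa^2R^{3+2\omega}/(3+2\omega)}{k^2R^{3+2\nu}/(3+2\nu)}
=\frac{3+2\nu}{3+2\omega}\left(\frac{3+\omega}{3+\nu}\right)^2 .
\end{equation*}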
Figure \ref{power_boost} shows how the boost varies with the initial
and final power law. If both powers are of the same sign, the boost is
less than unity if $\vert \omega \vert < \vert \nu \vert$, i.e. if the
resulting power law is shallower than the original, as we might expect
for mass becoming less bound and moving outwards.
When the power law changes sign the behaviour is more complex, with
the boost going to infinity as the power approaches $-\frac{3}{2}$. If
either power is negative it tends to dominate, unless the other is
very large and positive.
An important feature to note is that the largest boosts are seen along
the line $\nu=0$, suggesting that the case presented in
\S~\ref{Kepler}, with an initially flat density profile, produces the
largest boost (though more complex families of solutions with larger boosts may still exist).
Thus if mass moves outward and a density profile flattens, the boost
decreases.
\subsection{Combining a smooth profile with caustics}
As we have seen in Figure \ref{r_r0}, even when mass in
general moves outwards, there can be small regions where the opposite
happens: the mass is squeezed into a smaller volume, or the density
profile steepens.
This of course is the cause of the caustics, as some finite mass is
squeezed into an infinitesimal volume. We have shown numerically, for
initially circular orbits around a point mass, in Section \ref{Kepler}
(and extended it to more general situations in Section \ref{Other})
that rapid mass loss does not lead to a boost in the interaction rate
in a system.
In other words, the global phenomenon, of mass becoming less bound and
moving outwards, dominates over the local phenomenon, of sharp density
peaks developing in small regions.
Moving away from a flat initial density profile, as shown in Section
\ref{slope}, will further decrease the boost. Furthermore, switching
to a self gravitating system, discussed in Section \ref{self_gravity},
flattens out the caustics and reduces the interaction rate. Thus we
can generalise this result to other astrophysical potentials and density
profiles.
\section{Conclusions}
\label{sec:conclude}
First, let us briefly summarise the argument presented in this paper:
\begin{itemize}
\item A system which develops a large overdensity seems a strong
candidate for observing large particle interaction rates ($\propto
\rho^2$).
\item Rapid mass loss in a system leads to instantaneously changed
orbits and the development of over- and underdensities as orbits
cross and overlap.
\item After mass loss in a Keplerian potential, the density profile of
particles on initially circular orbits is a combination of
infinite-density caustics and step-like over- and underdensities.
\item The caustics in the circular Keplerian case contribute only a
small amount to the interaction rate, significantly less than the
total interaction rate before mass loss.
\item Away from the caustics, the step-like profile leads to a drop in
the interaction rate as mass moves outwards (as particles are less
bound after mass loss) and the density drops.
\item Overall, rapid mass loss in the circular Keplerian case leads to
a \textit{smaller} interaction rate than the unperturbed case.
\item The inclusion of less idealised physical effects smooths and
flattens the density profile relative to the circular-orbit
Keplerian case. Mass still moves outward and thus the total
interaction rate is reduced.
\item Hence rapid mass loss will, in \textit{any} physical case, lead
to a drop in the interaction rate, rather than an increase.
\end{itemize}
Below, we elaborate upon how we arrived at this somewhat surprising
conclusion.
In \S~\ref{Kepler}, we present an analytic derivation of the response
of a system of particles, initially on circular orbits, in a Keplerian
potential to an instantaneous drop in the central point mass. From
this we numerically derive a density profile (Figures \ref{r0_rho_r}
and \ref{rho_grad_r}) that has a self-similar shape and expands
outward with time as $r \propto t^\frac{2}{3}$. The profile is
comprised of step-like over- and underdensities where multiple shells
on different orbits overlap, and singular caustics at the boundaries
of the multi-shell regions, where a finite mass is squeezed into an
infinitesimal volume.
These sharply peaked profiles with singular caustics naively appear
promising for a large increase in the total interaction rate. However
we show that the rate in this case is \textit{still} less than in the
unperturbed case. The caustics can be shown to contribute only a
small amount to the interaction rate, and regardless of the degree of mass
loss or time (as the shape of the profile is time independent) the
interaction rate decreases.
In \S~\ref{Other}, we show that various effects to make the system
more realistic (such as self-gravity, non-circular initial orbits, and
non-Keplerian potentials) smooth out the sharp density spikes and lead
to flatter overall density profiles. Thus, the circular Keplerian
case provides the profile that is most sharply peaked.
Thus we have shown that even the best possible candidate environment
for observing large interaction rates following rapid mass
\textit{still} has a smaller net interaction rate than the same system
before mass loss.
Similar to our results here, the optically thin Bremsstrahlung
luminosity, computed in post-merger binary black hole accretion disk
simulations of \citet{O'Neill09}, \citet{Megevand09} and
\citet{Corrales10}, has been found to {\it decrease} after the
mass-loss caused by the BH merger.\footnote{As explained in
\citet{Corrales10}, this Bremsstrahlung luminosity is not
self-consistent, as it yields an unphysically short cooling time.
Nevertheless, this luminosity involves an integral of $\rho^2$ over
volume, and its post-merger decrease can be traced to the reasons we
identified in this paper: the overall decrease in the emission due
to the expansion of the disk dominates over the increased emission
from the dense shocked rings.}
Our results -- the absence of a large boost in the
particle interaction rate -- also justify the simple density profiles
used to calculate $\gamma$-ray flux from dark matter annihilation in
dwarf galaxies \citep[e.g.][and references therein]{Geringer15}.
We emphasise that the arguments presented here are generalisable and
thus applicable to any other system or geometry where we observe
mass-loss over a period much shorter than the dynamical time of
orbiting particles.
There are extreme cases where the interaction
rate may increase, such as if three-body interactions are the main
source of the signal, or where the step-like behaviour of the density
function is precipitously steep. The large densities in the caustics may also lead to other observable phenomena, such as the heating of gas in an AGN disk, but we leave these considerations for future work.
But the overall conclusion of this
work is that rapid mass loss in dynamical systems is \textit{not} the
promising laboratory for observing high interaction rates that one may
have hoped for.
\section*{Acknowledgements}
The authors thank Andrew Pontzen, Jacqueline van Gorkom, George Lake,
Nick Stone and the anonymous referee for their insightful comments and
queries, and Emily Sandford for invaluable help with the text. This work
was supported in part by NASA grant NNX15AB19G; ZH also gratefully
acknowledges sabbatical support by a Simons Fellowship in Theoretical
Physics.
\bibliographystyle{mnras}
\providecommand{\noopsort}[1]{}
\section{Definitions and Basic Results}
\label{sec:1} This section provides some standard graph theoretic definitions and develops an efficient notation for what will follow. Since this paper deals only with directed graphs rather than undirected graphs, the term ``graph'' will have that meaning. Undirected graphs fit in easily as a subcategory of the directed case.
\begin{definition}
\begin{enumerate}
\item For any $N\ge 1$, the collection of {\it directed graphs}\index{graph!directed graph} on $N$ nodes is denoted ${\cal G}(N)$. The set of {\it nodes}\index{node} ${\cal N}$ is numbered by integers, i.e. ${\cal N}=\{1,\dots, N\}:=[N]$. Then $g\in{\cal G}(N)$, a graph on $N$ nodes, is a pair $({\cal N},{\cal E})$ where the set of edges is a subset ${\cal E}\subset{\cal N}\times{\cal N}$ and each element $\ell\in{\cal E}$ is an ordered pair $\ell=(v,w)$ called an {\it edge}\index{edge} or {\it link}\index{link}. Links are labelled by integers $\ell\in\{1, \dots, E\}:=[E]$ where $E=|{\cal E}|$. Normally, ``self-edges'' with $v=w$ are excluded from ${\cal E}$, that is, ${\cal E}\subset{\cal N}\times{\cal N}\setminus{\rm diag}$.
\item A given graph $g\in{\cal G}(N)$ can be represented by its $N\times N$ {\it adjacency matrix} \index{adjacency matrix} $M(g)$ with components
\[ M_{vw}(g)=\left\{\begin{array}{ll }
1 & \mbox{if $(v,w)\in g$ } \\
0 & \mbox{if $(v,w)\in{\cal N}\times{\cal N}\setminus g$ }
\end{array}\right.\ .
\]
\item The {\it in-degree}\index{in-degree} $\mbox{deg}^-(v)$ and {\it out-degree}\index{out-degree} $\mbox{deg}^+(v)$ of a node $v$ are
\[\mbox{deg}^-(v)=\sum_w M_{wv}(g),\quad \mbox{deg}^+(v)=\sum_w M_{vw}(g)\ .\]
\item A node $v\in{\cal N}$ has {\it node type}\index{node type} $(j,k)$ if its in-degree is $\mbox{deg}^-(v)=j$ and its out-degree is $\mbox{deg}^+(v)=k$; the node set partitions into node types, ${\cal N}=\cup_{jk}{\cal N}_{jk}$. One writes $k_v=k,j_v=j$ for any $v\in{\cal N}_{jk}$ and allows degrees to be any non-negative integer.
\item An edge $\ell=(v,w)\in{\cal E}=\cup_{kj}{\cal E}_{kj}$ is said to have {\it edge type}\index{edge type} $(k,j)$ with in-degree $j$ and out-degree $k$ if it is an out-edge of a node $v$ with out-degree $k_v=k$ and an in-edge of a node $w$ with in-degree $j_w=j$. One writes $\mbox{deg}^+(\ell)=k_\ell=k$ and $\mbox{deg}^-(\ell)=j_\ell=j$ whenever $ \ell\in{\cal E}_{kj}$.
\item For completeness, an \index{graph!undirected graph} undirected graph is defined to be any directed graph $g$ for which $M(g)$ is symmetric.
\end{enumerate}
\end{definition}
The standard visualization of a graph $g$ on $N$ nodes is to plot nodes as ``dots'' with labels $v\in{\cal N}$, and any edge $(v,w)$ as an arrow pointing ``downstream'' from node $v$ to node $w$. In the financial system application, such an arrow signifies that bank $v$ is a debtor of bank $w$ and the in-degree $\mbox{deg}^-(w)$ is the number of banks in debt to $w$, in other words the existence of the edge $(v,w)$ means ``$v$ owes $w$''. Figure \ref{2nodes} illustrates the labelling of types of nodes and edges.
\begin{figure}
\includegraphics[scale=0.4]{2nodes.png}
\caption{A type $(3,2)$ debtor bank that owes to a type $(3,4)$ creditor bank through a type $(2,3)$ link.}
\label{2nodes}
\end{figure}
There are obviously constraints on the collections of node type $(j_v,k_v)_{v\in{\cal N}}$ and edge type $(k_\ell,j_\ell)_{\ell\in{\cal E}}$ if they derive from a graph. By computing the total number of edges $E=|{\cal E}|$, the number of edges with $k_\ell=k$ and the number of edges with $j_\ell=j$, one finds three conditions:
\begin{eqnarray}
E:=|{\cal E}|&=&\sum_v k_v=\sum_v j_v\nonumber\\
e^+_k:=|{\cal E}\cap\{k_\ell=k\}| &=& \sum_\ell {\bf 1}(k_\ell=k)=\sum_v k{\bf 1}(k_v=k)\nonumber\\
\label{graphical}e^-_j:=|{\cal E}\cap\{j_\ell=j\}| &=& \sum_\ell {\bf 1}(j_\ell=j)=\sum_v j{\bf 1}(j_v=j)\ .
\end{eqnarray}
It is useful to define some further graph theoretic objects and notation in terms of the adjacency matrix $M(g)$:\begin{enumerate}
\item The {\it in-neighbourhood}\index{in-neighbourhood} of a node $v$ is the set ${\cal N}_v^-:=\{w\in{\cal N}| M_{wv}(g)=1\}$ and the {\it out-neighbourhood}\index{out-neighbourhood} of $v$ is the set ${\cal N}_v^+:=\{w\in{\cal N}| M_{vw}(g)=1\}$.
\item One writes ${\cal E}^+_v$ (or ${\cal E}^-_v$) for the set of out-edges (respectively, in-edges) of a given node $v$ and $v^+_\ell$ (or $v^-_\ell$) for the node for which $\ell$ is an out-edge (respectively, in-edge).
\item Similarly, second-order neighbourhoods ${\cal N}^{--}_v, {\cal N}^{-+}_v, {\cal N}^{+-}_v, {\cal N}^{++}_v$ have the obvious definitions. Second and higher order neighbours can be determined directly from the powers of $M$ and $M^\top$. For example, $w\in {\cal N}^{-+}_v$ whenever $(M^\top M)_{wv}\ge 1$.
\item One often writes $j,j',j'',j_1$, etc. to refer to in-degrees and $k,k',k'',k_1$, etc. refer to out-degrees.
\end{enumerate}
Financial network models typically have a sparse adjacency matrix $M(g)$ when $N$ is large, meaning that the number of edges is a small fraction of the $N(N-1)$ potential edges. This reflects the fact that bank counterparty relationships are expensive to build and maintain, and thus ${\cal N}^+_v$ and ${\cal N}^-_v$ typically contain relatively few nodes even in a very large network.
\subsection{Random Graphs}
Random graphs are simply probability distributions on the sets ${\cal G}(N)$:
\begin{definition}
\begin{enumerate}
\item A {random graph}\index{random graph} of size $N$ is a probability distribution $\mathbb{P}$ on the finite set ${\cal G}(N)$. When the size $N$ is itself random, the probability distribution $\mathbb{P}$ is on the countably infinite set ${\cal G}:=\cup_N {\cal G}(N)$. Normally, it is assumed that $\mathbb{P}$ is invariant under permutations of the $N$ node labels.
\item For random graphs, define the {\it node-type distribution}\index{node-type distribution} to have probabilities $P_{jk}=\mathbb{P}[v\in{\cal N}_{jk}]$ and the {\it edge-type distribution}\index{edge-type distribution} to have probabilities $Q_{kj}=\mathbb{P}[\ell\in{\cal E}_{kj}]$.
\end{enumerate}\end{definition}
$P$ and $Q$ can be viewed as bivariate distributions on the natural numbers, with marginals $P^+_k=\sum_j P_{jk}, P^-_j=\sum_k P_{jk}$ and $Q^+_k=\sum_j Q_{kj}, Q^-_j=\sum_k Q_{kj}$. Edge and node type distributions cannot, however, be chosen independently: they must be consistent with the fact that they derive from actual graphs, which is ensured by imposing that equations \eqref{graphical} hold in expectation, that is, that $P$ and $Q$ are ``consistent'':
\begin{eqnarray}
\nonumber z&:=&\sum_k kP^+_k=\sum_jjP^-_j\\
\label{Consistent1} Q^+_k &=& kP^+_k/z, \quad Q^-_j = jP^-_j/z\quad \forall k,j\ .
\end{eqnarray}
\label{PQassumptions}
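As an illustration, the consistency map \eqref{Consistent1} from $P$ to the
marginals of $Q$ is a few lines of code (a sketch; the indexing convention
$P[j,k]$ is ours):
\begin{verbatim}
import numpy as np

def consistent_Q_marginals(P):
    # Given node-type probabilities P[j, k], return z and the
    # edge-type marginals Q^+_k = k P^+_k / z, Q^-_j = j P^-_j / z.
    j = np.arange(P.shape[0]); k = np.arange(P.shape[1])
    P_minus = P.sum(axis=1)            # P^-_j
    P_plus  = P.sum(axis=0)            # P^+_k
    z = (k*P_plus).sum()
    assert np.isclose(z, (j*P_minus).sum())   # consistency of z
    return z, k*P_plus/z, j*P_minus/z
\end{verbatim}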
A number of random graph construction algorithms have been proposed in the literature, motivated by the desire to create families of graphs that match the types and measures of network topology that have been observed in nature and society. The present paper focusses on so-called configuration graphs. The textbook ``Random Graphs and Complex Networks'' by van der Hofstad \cite{vdHofstad14} provides a complete and up-to-date review of the entire subject.
In the analysis to follow, asymptotic results are typically expressed in terms of convergence of random variables in probability, defined as:
\begin{definition}
A sequence $\{X_n\}_{n\ge 1}$ of random variables is said to {\it converge in probability}\index{convergence in probability} to a random variable $X$, written $\lim_{n\to\infty} X_n\stackrel{\mathrm{P}}{=} X$ or $X_n\stackrel{\mathrm{P}}{\longrightarrow} X$, if for any $\epsilon>0$
\[ \mathbb{P}[|X_n-X|>\epsilon]\to 0\ .
\] \end{definition}
Recall further standard notation for asymptotics of sequences of real numbers\\ $\{x_n\}_{n\ge 1}, \{y_n\}_{n\ge 1}$ and random variables $\{X_n\}_{n\ge 1}$: \begin{enumerate}
\item Landau's ``little oh'': $x_n=o(1)$ means $x_n\to 0$; $x_n=o(y_n)$ means $x_n/y_n=o(1)$;
\item Landau's ``big oh'': $x_n=O(y_n)$ means there is $N>0$ such that $x_n/y_n$ is bounded for $n\ge N$;
\item $x_n\sim y_n$ means $x_n/y_n\to 1$;
\item $X_n\stackrel{\mathrm{P}}{=} o(y_n)$ means $X_n/y_n\stackrel{\mathrm{P}}{\longrightarrow} 0$.
\end{enumerate}
\subsection{Configuration Random Graphs}\label{Configuration}
\label{sec:1.2}
In their classic paper \cite{ErdoReny59}, Erd\H{o}s and R\'enyi introduced the undirected model $G(N,M)$ that consists of $N$ nodes and a random subset of exactly $M$ edges chosen uniformly from the collection of $\binom{N}{M}$ possible such edge subsets. This model can be regarded as the $M$th step of a random graph process that starts with $N$ nodes and no edges, and adds edges one at a time selected uniformly randomly from the set of available undirected edges. Gilbert's random graph model $G(N,p)$, which takes $N$ nodes and selects each possible edge independently with probability $p=z/(N-1)$, has mean degree $z$ and similar large $N$ asymptotics provided $M=zN/2$. In fact, it was proved by \cite{Bollobas01} and \cite{MollReed95} that the undirected Erd\H{o}s--R\'enyi graph $G(N,zN/2)$ and $G(N,p_N)$ with probability $p_N=z/(N-1)$ both converge in probability to the same model as $N\to\infty$ for all $z\in\mathbb{R}_+$. Because of their popularity, the two models $G(N,p)\sim G(N,zN/2)$ have come to be known as ``the'' random graph\index{random graph}. Since the degree distribution of $G(N,p)$ is ${\rm Bin}(N-1, p)\sim_{N\to\infty} {\rm Pois}(z)$, this is also called the {\it Poisson graph} model\index{random graph!Poisson graph}. Both these constructions have obvious directed graph analogues.
The well-known directed configuration multigraph model introduced by Bollob\'as \cite{Bollobas80} with general degree distribution $P=\{P_{jk}\}_{j,k=0,1,\dots}$ and size $N$ is constructed by the following random algorithm (a minimal code sketch follows the listing): \begin{enumerate}
\item Draw a sequence of $N$ node-type pairs $(j_1,k_1),\dots, (j_N,k_N)$ independently from $P$, and accept the draw if and only if it is feasible, i.e. $\sum_{n\in [N]}(j_n-k_n)=0$. Label the $n$th node with $k_n$ {\it out-stubs}\index{out-stub} (picture this as a half-edge with an out-arrow) and $j_n$ {\it in-stubs}\index{in-stub}.
\item While there remain available unpaired stubs, select (according to any rule, whether random or deterministic) any unpaired out-stub and pair it with an in-stub selected uniformly amongst unpaired in-stubs. Each resulting pair of stubs is a directed edge of the multigraph.
\end{enumerate}
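A minimal Python sketch of this construction (the repeated-rejection loop
in Step 1 is exactly the feasibility issue discussed below):
\begin{verbatim}
import numpy as np

def configuration_multigraph(pairs, probs, N, rng=np.random.default_rng()):
    # pairs: list of node types (j, k); probs: their probabilities P_jk.
    while True:                        # Step 1: draw until feasible
        jk = np.array([pairs[i] for i in rng.choice(len(pairs), N, p=probs)])
        if jk[:, 0].sum() == jk[:, 1].sum():
            break
    out_stubs = np.repeat(np.arange(N), jk[:, 1])  # k_n out-stubs of node n
    in_stubs  = np.repeat(np.arange(N), jk[:, 0])  # j_n in-stubs of node n
    rng.shuffle(in_stubs)              # Step 2: uniform matching
    return list(zip(out_stubs, in_stubs))          # directed edges (v, w)
\end{verbatim}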
The algorithm leads to objects with self-loops and multiple edges, which are usually called multigraphs rather than graphs. Only multigraphs that are free of self-loops and multiple edges, a condition called ``simple'', are considered to be graphs. For the most part, one does not care overmuch about the distinction, because the density of self-loops and multiple edges goes to zero as $N\to\infty$. In fact, Janson \cite{Janson09_simple} has proved in the undirected case that the probability for a multigraph to be simple is bounded away from zero for well-behaved sequences $(g_N)_{N>0}$ of size $N$ graphs with given $P$.
Exact simulation of the adjacency matrix in the configuration model with general $P$ is problematic because the feasibility condition in the first step is met only with asymptotic frequency $\sim \frac{\sigma}{\sqrt{2\pi N}} $, which is vanishingly small for large graphs. For this reason, practical Monte Carlo implementations use some kind of rewiring or {\it clipping}\index{clipping} to adjust each infeasible draw of node-type pairs.
Because of the uniformity of the matching in step 2 of the above construction, the edge-type distribution of the resultant random graph is
\begin{equation}\label{independentedge} Q_{kj}=\frac{jkP^+_kP^-_j}{z^2}=Q^+_kQ^-_j\end{equation}
which is called the {\it independent edge condition}\index{independent edge condition}. For many reasons, financial and otherwise, one is interested in the more general situation called {\it assortativity} when \eqref{independentedge} is not true. We will now show how such an extended class of assortative configuration graphs can be defined. The resultant class encompasses all reasonable type distributions $(P,Q)$ and has special properties that make it suitable for exact analytical results, including the possibility of a detailed percolation analysis.
\section{The ACG Construction}\label{finite_assortative_section}
\label{sec:2}
The assortative configuration (multi-)graph (ACG) of size $N$ parametrized by the node-edge degree distribution pair $(P,Q)$ satisfying the consistency conditions \eqref{Consistent1} is defined by the following random algorithm (a sampling sketch follows the listing): \begin{enumerate}
\item Draw a sequence of $N$ node-type pairs $X= ((j_1,k_1),\dots, (j_N,k_N))$ independently from $P$, and accept the draw if and only if it is feasible, i.e. $\sum_{n\in [N]}\ j_n =\sum_{n\in [N]}\ k_n$, and this defines the number of edges $E$ that will result. Label the $n$th node with $k_n$ {\it out-stubs}\index{out-stub} (picture each out-stub as a half-edge with an out-arrow, labelled by its degree $k_n$) and $j_n$ {\it in-stubs}\index{in-stub}, labelled by their degree $j_n$. Define the partial sums $u^-_j=\sum_n{\bf 1}(j_n=j), u^+_k=\sum_n{\bf 1}(k_n=k), u_{jk}=\sum_n{\bf 1}(j_n=j,k_n=k)$, the number $ e_k^+=k u_k^+$ of {\it k-stubs} (out-stubs of degree $k$) and the number of {\it j-stubs} (in-stubs of degree $j$), $ e_j^-=j u_j^-$.
\item Conditioned on $X$, the result of Step 1, choose an arbitrary ordering $\ell^-$ and $\ell^+$ of the $E$ in-stubs and $E$ out-stubs. The matching sequence, or ``wiring'', $W$ of edges is selected by choosing a pair of permutations $\sigma,\tilde\sigma\in S(E)$ of the set $[E]$. This determines the edge sequence $\ell=(\ell^-=\sigma(\ell), \ell^+=\tilde\sigma(\ell))$ labelled by $\ell\in[E]$, to which is attached a probability weighting factor
\begin{equation}\label{AWweighting}\prod_{\ell\in[E]} Q_{k_{\sigma(\ell)}j_{\tilde\sigma(\ell)}}\ .
\end{equation}
\end{enumerate}
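Since, as Proposition \ref{P1} below shows, the conditional probability of
a wiring depends only on the product of $Q$-factors over its edges, the law
implied by the weighting \eqref{AWweighting} can also be sampled by a
simple Metropolis chain that swaps the in-stubs of two randomly chosen
edges. The following sketch (not part of the construction itself, and
assuming $Q>0$ on the relevant support) illustrates this:
\begin{verbatim}
import numpy as np

def metropolis_rewire(edges, Q, k_of, j_of, sweeps, rng=np.random.default_rng()):
    # edges: list of (out-stub node, in-stub node); Q indexed as Q[k, j];
    # k_of[v], j_of[v]: out- and in-degree of node v.
    E = len(edges)
    for _ in range(sweeps * E):
        a, b = rng.integers(E, size=2)
        (o1, i1), (o2, i2) = edges[a], edges[b]
        k1, k2, j1, j2 = k_of[o1], k_of[o2], j_of[i1], j_of[i2]
        ratio = Q[k1, j2]*Q[k2, j1] / (Q[k1, j1]*Q[k2, j2])
        if rng.random() < ratio:       # Metropolis accept/reject
            edges[a], edges[b] = (o1, i2), (o2, i1)
    return edges
\end{verbatim}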
Given the wiring $W$ determined in Step 2, the number of type $(k,j)$ edges is
\begin{equation}\label{ekj}
e_{kj}=e_{kj}(W)=\sum_{\ell\in[E]}{\bf 1}(k_{\tilde\sigma(\ell)}=k,\ j_{\sigma(\ell)}=j)\ .\end{equation}
The collection $e=(e_{kj})$ of edge-type numbers are constrained by the $e^+_{k}, e^-_j$ that are determined by Step 1:
\begin{equation}\label{e_constraints}
e^+_{k}=\sum_{j}e_{kj},\quad e^-_j=\sum_{k}e_{kj}, \quad E=\sum_{kj}e_{kj}\ .\end{equation}
Intuitively, since Step 1 leads to a product probability measure subject to a single linear constraint that is true in expectation, one expects that it will lead to the independence of node degrees for large $N$, with distribution $P$.
Similar logic suggests that since the matching weights in Step 2 define a product probability measure conditional on a set of linear constraints that are true in expectation, it should lead to edge degree independence in the large $N$ limit, with the limiting probabilities given by $Q$. However, the verification of these facts is not so easy, and their justification is the main object of this paper. First, certain combinatorial properties of the wiring algorithm of Step 2, conditioned on the node-type sequence $X$ resulting from Step 1 for finite $N$, will be derived. One result says that the probability of any wiring sequence $W=(\ell\in[E])$ in Step 2 depends only on the set of quantities $(e_{kj})$ where for each $k,j$, $e_{kj}:=|\{\ell\in[E]|\ell\in{\cal E}_{kj}\}|$. Another is that the conditional expectation of $e_{kj}/E$ is the exact edge-type probability for all edges in $W$.
\begin{proposition}
\label{P1} Consider Step 2 of the ACG construction for finite $N$ with probabilities $P,Q$ conditioned on the $X=( j_i, k_i), i\in[N]$.
\begin{enumerate}
\item The conditional probability of any wiring sequence $W=(\ell\in[E])$ is:
\begin{eqnarray}\label{wiringprob}
\mathbb{P}[W|X]&=&C^{-1}\ \prod_{kj} (Q_{kj})^{e_{kj}(W)}\ ,\\
\label{Cformula}C&=&C(e^-,e^+)=E! \sum_{e}\prod_{kj} \frac{(Q_{kj})^{e_{kj}}}{e_{kj}!}\prod_j\left(e_j^-!\right)\prod_k\left(e_k^+!\right)\ ,
\end{eqnarray}
where the sum in \eqref{Cformula} is over collections $e=(e_{kj})$ satisfying the constraints \eqref{e_constraints}. \item The conditional probability $p$ of any edge of the wiring sequence $W=(\ell\in[E])$ having type $k, j$ is
\begin{equation}\label{E_prob}p=\mathbb{E}[e_{kj}|X]/E\ .\end{equation}
\end{enumerate}
\end{proposition}
\bigskip\noindent{\bf Proof of Proposition \ref{P1}: \ } The denominator of \eqref{wiringprob} is $C=\sum_{\sigma,\tilde\sigma\in S(E)}\prod_{l\in[E]}Q_{k_{\sigma(\ell)}j_{\tilde\sigma(\ell)}}$, from which \eqref{Cformula} can be verified by induction on $E$.
Assuming \eqref{Cformula} is true for $E-1$, one can verify the inductive step for $E$:
\begin{eqnarray*} C&=&\sum_{\tilde k,\tilde j} \sum_{\sigma,\tilde\sigma\in S(E)}{\bf 1}(k_{\sigma(E)}=\tilde k,j_{\tilde\sigma(E)}=\tilde j)\prod_{l\in[E]}\ Q_{k_{\sigma(\ell)}j_{\tilde\sigma(\ell)}}\\
&=&\sum_{\tilde k,\tilde j} e^+_{\tilde k}e^-_{\tilde j}\ Q_{\tilde k\tilde j}\ \sum_{\sigma',\tilde\sigma'\in S(E-1)}\prod_{l\in[E-1]}\ Q_{k_{\sigma'(\ell)}j_{\tilde\sigma'(\ell)}}\\
&=&\sum_{\tilde k,\tilde j} e^+_{\tilde k}e^-_{\tilde j}\ Q_{\tilde k\tilde j}\ (E-1)!\ \sum_{e'}\prod_{kj} \frac{(Q_{kj})^{e'_{kj}}}{e'_{kj}!}\prod_j\left(e_j^{'-}!\right)\ \prod_k\left(e_k^{'+}!\right)\ . \end{eqnarray*}
Here, $e'_{kj}=e_{kj}-{\bf 1}(k=\tilde k,j=\tilde j),\ e_j^{'-}=e_j^--{\bf 1}(j=\tilde j), \ e_k^{'+}=e_k^+-{\bf 1}(k=\tilde k)$. After noting cancellations that occur in the last formula, and re-indexing the collection $e'$ one finds
\begin{eqnarray*} C&=&\sum_{\tilde k,\tilde j} \sum_{e'}e_{\tilde k\tilde j} \ (E-1)!\prod_{kj} \frac{(Q_{kj})^{e_{kj}}}{e_{kj}!}\prod_j\left(e_j^-!\right)\ \prod_k\left(e_k^+!\right)\ \\
&=& \sum_{e}\left(\sum_{\tilde k,\tilde j} e_{\tilde k\tilde j}\right)\ (E-1)!\ \prod_{kj} \frac{(Q_{kj})^{e_{kj}}}{e_{kj}!}\prod_j\left(e_j^-!\right)\ \prod_k\left(e_k^+!\right)\
\\ &=&E!\sum_{e}\prod_{kj} \frac{(Q_{kj})^{e_{kj}}}{e_{kj}!}\prod_j\left(e_j^-!\right)\ \prod_k\left(e_k^+!\right)\ \end{eqnarray*}
which is the desired result.
Because of the edge-permutation symmetry, it is enough to prove \eqref{E_prob} for the last edge. For this, one can follow the same logic and steps as in Part 1 to find:
\begin{eqnarray*} p&=& \frac1{C(e^-,e^+)}
\sum_{\sigma,\tilde\sigma\in S(E)}{\bf 1}(k_{\sigma(E)}=k,j_{\tilde\sigma(E)}=j)\ \prod_{l\in[E]}\ Q_{k_{\sigma(\ell)}j_{\tilde\sigma(\ell)}}\\&=&\frac{E!}{C(e^-,e^+)} \sum_{e} \frac{e_{kj}}{E}\ \prod_{k'j'} \frac{(Q_{k'j'})^{e_{k'j'}}}{e_{k'j'}!}\prod_{j'}\left(e_{j'}^{-}!\right)\ \prod_{k'}\left(e_{k'}^{+}!\right)=
\mathbb{E}[e_{kj}|X]/E\ .\end{eqnarray*}
\qquad
\qedsymbol
\bigskip
An easy consequence of the above proof is that the number of wirings $W$ consistent with a collection $e=(e_{kj})$ is given by
\begin{equation}\label{Nwiring} |\{W:e(W)=e\}|=\frac{E!\left(\prod_je^-_j!\right)\left(\prod_k e^+_k!\right)}{\prod_{kj}e_{kj}!}\ .\end{equation}
Because of the permutation symmetries of the construction, a host of more complex combinatorial identities hold for this model. The most important is that Part 2 of the Proposition can be extended inductively to determine the joint edge distribution for the first $M$ edges conditioned on $X$. To see how this goes, define two sequences $e^-_j(m), e^+_k(m)$ for $0\le m\le M$ to be the number of $j$-stubs and $k$-stubs available after $m$ wiring steps.
\begin{proposition}
\label{P2} Consider Step 2 of the ACG construction for finite $N$ with probabilities $P,Q$ conditioned on $X$ from Step 1.
The conditional probability $p$ of the first $M$ edges of the wiring sequence $W=(\ell\in[E])$ having types $(k_i, j_i)_{i\in[M]}$ is
\begin{equation}\label{EM_prob}\mathbb{P}[(k_i, j_i)_{i\in[M]}|X]=\frac{(E-M)!}{E!}\prod_{i\in[M]}\mathbb{E}[e_{k_ij_i}|e^-(i-1), e^+(i-1)]\ .\end{equation}
\end{proposition}
\bigskip\noindent{\bf Proof of Proposition \ref{P2}: \ } Note that Part 2 of Proposition \ref{P1} gives the correct result when $M=1$. For any $m$, an extension of the argument that proves Part 2 of Proposition \ref{P1} also shows that
\begin{eqnarray}\mathbb{P}[(k_i, j_i)_{i\in[m]}|X]&=& \frac1{C(e^-(0),e^+(0))}
\sum_{\sigma,\tilde\sigma\in S(E)}\prod_{\ell=1}^m{\bf 1}(k_{\sigma(\ell)}=k_\ell,j_{\tilde\sigma(\ell)}=j_\ell)\ \prod_{l\in[E]}\ Q_{k_{\sigma(\ell)}j_{\tilde\sigma(\ell)}}\nonumber\\
&&\hspace{-1.5in} =\ \frac{1}{C(e^-(0),e^+(0))} \prod_{\ell=1}^m\left[ e^-_{j_\ell}(\ell-1)e^+_{k_\ell}(\ell-1)\ Q_{k_\ell j_\ell}\right] \ \sum_{\sigma',\tilde\sigma'\in S(E-m)}\prod_{\ell=m+1}^{E}\ Q_{k_{\sigma'(\ell)}j_{\tilde\sigma'(\ell)}}
\label{EM_prob2}\end{eqnarray}
Now assume inductively that the result \eqref{EM_prob} is true for $M-1$ and compute \eqref{EM_prob} for $M$:
\[\mathbb{P}[(k_i, j_i)_{i\in[M]}|X]= \frac{\mathbb{P}[(k_i, j_i)_{i\in[M]}|X]}{\mathbb{P}[(k_i, j_i)_{i\in[M-1]}|X]}\times \frac{(E-M+1)!}{E!}\prod_{i\in[M-1]}\mathbb{E}[e_{k_ij_i}|e^-(i-1), e^+(i-1)]\ .
\]
The ratio in the first factor can be treated using \eqref{EM_prob2}, and the resulting cancellations lead to the formula
\begin{eqnarray*}
\mathbb{P}[(k_i, j_i)_{i\in[M]}|X]&&\\&&\hspace{-1.5in}=\ \frac{\left[ e^-_{j_M}(M-1)e^+_{k_M}(M-1)\ Q_{k_M j_M}\right] \ \sum_{\sigma',\tilde\sigma'\in S(E-M)}\prod_{\ell=M+1}^{E}\ Q_{k_{\sigma'(\ell)}j_{\tilde\sigma'(\ell)}}}{ \ \sum_{\sigma',\tilde\sigma'\in S(E-M+1)}\prod_{\ell=M}^{E}\ Q_{k_{\sigma'(\ell)}j_{\tilde\sigma'(\ell)}}}\\
&&\hspace{-1.5in}\times\ \frac{(E-M+1)!}{E!}\prod_{i\in[M-1]}\mathbb{E}[e_{k_ij_i}|e^-(i-1), e^+(i-1)]
\end{eqnarray*}
The desired result follows because Part 2 of Proposition \ref{P1} can be applied to show
\begin{eqnarray*} \frac{\left[ e^-_{j_M}(M-1)e^+_{k_M}(M-1)\ Q_{k_M j_M}\right] \ \sum_{\sigma',\tilde\sigma'\in S(E-M)}\prod_{\ell=M+1}^{E}\ Q_{k_{\sigma'(\ell)}j_{\tilde\sigma'(\ell)}}}{ \ \sum_{\sigma',\tilde\sigma'\in S(E-M+1)}\prod_{\ell=M}^{E}\ Q_{k_{\sigma'(\ell)}j_{\tilde\sigma'(\ell)}}}\\
&&\hspace{-2.7in}=\frac1{E-M+1}\mathbb{E}[e_{k_Mj_M}|e^-(M-1), e^+(M-1)]\ .
\end{eqnarray*}
\qedsymbol
\bigskip
\section{Asymptotic Analysis}
It is quite easy to prove that the empirical node-type distributions $(u_{jk},u^-_j,u^+_k)$ resulting from Step 1 of the ACG algorithm satisfy a law of large numbers:
\begin{equation}\label{LLN_P}
N^{-1}u_{jk}\stackrel{\mathrm{P}}{=} P_{jk},\quad N^{-1}u^-_{j}\stackrel{\mathrm{P}}{=} P^-_{j},\quad N^{-1}u^+_{k}\stackrel{\mathrm{P}}{=} P^+_{k},
\end{equation}
as $N\to\infty$. In this section, we focus on the new and more difficult problem of determining the asymptotic law of the empirical edge-type distribution, conditioned on the node-type sequence $X$. To keep the discussion as clear as possible, we confine the analysis to the case where the distributions $P$ and $Q$ have support on the finite set $(j,k)\in\{0,1,\dots, K\}^2$.
One can see from Proposition \ref{P2} that the probability distribution of the first $M$ edge types will be given asymptotically by $\prod_{i\in[M]}Q_{k_ij_i}$ provided our intuition is correct that $\mathbb{E}[E^{-1}e_{kj}]\stackrel{\mathrm{P}}{=} Q_{kj}(1+o(1))$ asymptotically for large $N$. To validate this intuition, it turns out one can apply the Laplace asymptotic method to the cumulant generating function for the empirical edge-type random variables $e_{kj}$, conditioned on any feasible collection of $(e^+_k,e^-_j)$ with total number $E=\sum_ke^+_k=\sum_j e^-_j$:
\begin{eqnarray}\label{cumfn}
F(v;e^-,e^+)&:=&\log\mathbb{E}[e^{\sum_{kj}v_{kj} e_{kj}}|e^-,e^+], \quad \forall v=(v_{kj})\\
&=&\log \frac{\sum_{e}\prod_{kj} \frac{(Q_{kj}e^{v_{kj}})^{e_{kj}}}{e_{kj}!}\prod_j\left(e_j^-!\right)\prod_k\left(e_k^+!\right)}
{\sum_{e}\prod_{kj} \frac{(Q_{kj})^{e_{kj}}}{e_{kj}!}\prod_j\left(e_j^-!\right)\prod_k\left(e_k^+!\right)}\ .
\end{eqnarray}
The constraints on $(e_{kj})$ can be introduced by auxiliary integrations over variables $u^-_j,u^+_k$ of the form
\[ 2\pi{\bf 1}(\sum_j e_{kj}=e_k^+)=\int^{2\pi}_0 d u^+_k \ e^{i u^+_k(\sum_j e_{kj}-e_k^+)}\ .
\]
This substitution leads to closed formulas for the sums over $e_{kj}$, and to the following expression for $e^F$:
\begin{equation}
\label{steep}\frac{\int_I d^{2K}u \exp[H(v,-iu;e)]}{\int_I d^{2K}u\ \exp [H(0,-iu;e)]}
\end{equation}
where
\begin{equation}\label{Hfunction} H(v,\alpha;e)=\sum_{kj}e^{(\alpha^-_j+\alpha^+_k)}e^{v_{kj}} Q_{kj} -(\sum_j \alpha^-_je^-_j+\sum_k\alpha^+_k e^+_k)=\sum_{kj}e^{\alpha\cdot\delta_{jk}}e^{v_{kj}} Q_{kj} -\alpha\cdot e\ .
\end{equation}
The integration in \eqref{steep} is over the set $I:=[0,2\pi]^{2K}$.
Here a ``double vector'' notation has been introduced for $u=(u^-,u^+), e=(e^-,e^+), \alpha=(\alpha^-,\alpha^+)$ where $u^-, u^+\in\mathbb{C}^K$ etc. and where $K$ is the number of possible in and out degrees (which one may want to take to be infinite). Define double vectors ${\bf 1}^-=(1,1,\dots\ 1;0,\dots,\ 0), {\bf 1}^+=(0,\dots,\ 0;1,\dots,\ 1), {\bf 1}={\bf 1}^-+{\bf 1}^+,\tilde{\bf 1}={\bf 1}^--{\bf 1}^+$. For any pair $(j,k)\in [K]^2$, let $\delta^-_j$ be the double vector with a $1$ in the $j$th place and zeros elsewhere, let $\delta^+_k$ be the double vector with a $1$ in the $K+k$th place and zeros elsewhere and $\delta_{jk}=\delta^-_j+\delta^+_k$. Using the natural inner product for double vectors $\alpha\cdot e:=\sum_j\alpha^-_je^-_j+\sum_k \alpha^+_ke^+_k$, etc., the feasibility condition on stubs can be written $e\cdot\tilde{\bf 1}=0$.
The main aim of the paper is to prove a conditional law of large numbers for $E^{-1}e_{jk}$ as $E\to\infty$, conditioned on $e=(e^-,e^+)$ satisfying $e\cdot\tilde{\bf 1}=0$. By explicit differentiation of the cumulant generating function, and some further manipulation, one finds that
\begin{eqnarray}
\mathbb{E}[e_{kj}|e]&=&\frac{\partial F}{\partial v_{kj}}\Big|_{v=0}=Q_{kj}\frac{\int_I d^{2K}u\ \exp[H(0,-iu;e-\delta_{jk})]}{\int_I d^{2K}u \ \exp [H(0,-iu;e)]}\label{firstmom}\\
{\mbox{Var}}[e_{kj}|e]&=&\frac{\partial^2 F}{\partial v_{kj}^2}\Big|_{v=0}=Q_{kj}\frac{\int_I d^{2K}u\ \exp[H(0,-iu;e-\delta_{jk})]}{\int_I d^{2K}u\ \exp [H(0,-iu;e)]}
\label{secondmom}
\\
&&\hspace{-1.05in}+\left(Q_{kj}\right)^2\left[\frac{\int_I d^{2K}u\ \exp[H(0,-iu;e-2\delta_{jk})]}{\int_I d^{2K}u \ \exp [H(0,-iu;e)]}-\left(\frac{\int_I d^{2K}u\ \exp[H(0,-iu;e-\delta_{jk})]}{\int_I d^{2K}u\ \exp [H(0,-iu;e)]}\right)^2\right]\ .\nonumber
\end{eqnarray}
Since our present aim is to understand \eqref{firstmom} and \eqref{secondmom}, we henceforth set $v=0$ in the $H$-function. The $H$ function defined by \eqref{Hfunction} with $v=0$ has special combinatorial features:
\begin{lemma}\label{Lem1} For all $e\in\mathbb{Z}_+^{2K}$ satisfying $e\cdot\tilde{\bf 1}=0$, the function $H=H(\alpha; e)$ satisfies the following properties:
\begin{enumerate}
\item $H$ is convex for $\alpha\in\mathbb{R}^{2K}$ and entire analytic for $\alpha\in\mathbb{C}^{2K}$;
\item $H$ is periodic: $H(\alpha+2\pi i \eta;e)=H(\alpha; e)$ for all $\eta\in\mathbb{Z}^{2K}$.
\item For any $\lambda\in \mathbb{C}$, $H(\alpha+\lambda\tilde{\bf 1};e)=H(\alpha; e)$ ;
\item For any $\lambda>0$, $H(\alpha;\lambda e)=\lambda H(\alpha-\frac{\log\lambda}{2}{\bf 1};e)-\frac{\lambda\log\lambda}{2}\ {\bf 1}\cdot e$.
\item The $m$th partial derivative of $H$ with respect to $\alpha$ is given by
\begin{equation} \nabla^m H(\alpha;e) =\left\{\begin{array}{ll }
\sum_{jk} \delta_{jk} e^{\alpha\cdot\delta_{jk}}\ Q_{kj}-e,& m=1; \\
\sum_{jk} (\delta_{jk})^{\bigotimes m} e^{\alpha\cdot\delta_{jk}}\ Q_{kj},& m=2,3,\dots
\end{array}
\right.\end{equation} Here $(\delta_{jk})^{\bigotimes m}$ denotes the $m$th tensor power of the double vector $\delta_{jk}$.
\end{enumerate}
\end{lemma}
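Although the development in this section is purely analytic, the $H$-function is easy to experiment with numerically. The following Python sketch (ours and purely illustrative; the function names and toy data are not part of the model) evaluates $H(\alpha;e)$ at $v=0$ and checks the $m=1$ case of Part 5 of the Lemma by finite differences.
\begin{verbatim}
# Illustrative sketch: evaluate H(alpha; e) at v = 0 and check the m = 1
# case of Part 5 of Lemma 1 by finite differences.  Double vectors are
# stored as length-2K arrays, minus part first, then plus part.
import numpy as np

def H(alpha, e, Q):
    K = Q.shape[0]
    am, ap = alpha[:K], alpha[K:]                   # alpha^-, alpha^+
    return np.sum(np.exp(am[None, :] + ap[:, None]) * Q) - alpha @ e

def grad_H(alpha, e, Q):
    # Part 5: nabla H = sum_{jk} delta_{jk} e^{alpha.delta_{jk}} Q_{kj} - e
    K = Q.shape[0]
    am, ap = alpha[:K], alpha[K:]
    W = np.exp(am[None, :] + ap[:, None]) * Q       # W[k, j]
    return np.concatenate([W.sum(axis=0), W.sum(axis=1)]) - e

rng = np.random.default_rng(0)
K = 3
Q = rng.random((K, K)); Q /= Q.sum()
e = np.ones(2 * K)                                  # feasible: e . tilde1 = 0
a = rng.normal(size=2 * K)
fd = np.array([(H(a + 1e-6 * d, e, Q) - H(a - 1e-6 * d, e, Q)) / 2e-6
               for d in np.eye(2 * K)])
assert np.allclose(fd, grad_H(a, e, Q), atol=1e-5)
\end{verbatim}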
The Laplace asymptotic method (or saddlepoint method), reviewed for example in \cite{Hurd10}, involves shifting the $u$ integration in \eqref{steep} into the complex plane by an imaginary vector. The Cauchy Theorem, combined with the periodicity of the integrand in $u$, ensures that the value of the integral is unchanged under the shift. The desired shift is determined by the $e$-dependent critical points $\alpha^*$ of $H$, which by Part 5 of Lemma \ref{Lem1} are solutions of
\begin{equation}\label{criticalpt} \sum_{jk} \delta_{jk} e^{\alpha\cdot\delta_{jk}}\ Q_{kj} = e\ .
\end{equation}
In view of Parts 1 and 3 of the Lemma, for each feasible $e\in\mathbb{Z}_+^{2K}$ there is a unique critical point $\alpha^*(e)$ such that $\tilde{\bf 1}\cdot\alpha^*(e)=0$. The imaginary shift of the $u$-integration is implemented by writing $u=i\alpha^*(e)+\zeta$ where now $\zeta$ is integrated over $I$.
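Componentwise, \eqref{criticalpt} asks for positive scalings $u_j=e^{\alpha^-_j}$ and $v_k=e^{\alpha^+_k}$ such that the matrix with entries $u_jv_kQ_{kj}$ has row sums $e^+_k$ and column sums $e^-_j$; in other words, computing $\alpha^*(e)$ is a matrix-scaling problem, and a Sinkhorn-type alternating iteration applies. Here is a minimal sketch (our own naming; it assumes $Q$ strictly positive and $e$ feasible), with the degeneracy of Part 3 of the Lemma used at the end to impose the normalization $\tilde{\bf 1}\cdot\alpha^*=0$.
\begin{verbatim}
# Minimal sketch: solve sum_{jk} delta_{jk} e^{alpha.delta_{jk}} Q_{kj} = e
# by Sinkhorn iteration; assumes Q > 0 entrywise and e feasible.
import numpy as np

def critical_point(e_minus, e_plus, Q, n_iter=1000):
    u = np.ones_like(e_minus, dtype=float)   # u_j = exp(alpha^-_j)
    v = np.ones_like(e_plus, dtype=float)    # v_k = exp(alpha^+_k)
    for _ in range(n_iter):
        u = e_minus / (Q.T @ v)   # column sums: sum_k u_j v_k Q_{kj} = e^-_j
        v = e_plus / (Q @ u)      # row sums:    sum_j u_j v_k Q_{kj} = e^+_k
    alpha = np.concatenate([np.log(u), np.log(v)])
    t1 = np.concatenate([np.ones_like(u), -np.ones_like(v)])   # tilde 1
    # shift along tilde1 (Part 3 of Lemma 1) to normalize tilde1 . alpha = 0
    return alpha - (alpha @ t1) / (t1 @ t1) * t1
\end{verbatim}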
To unravel the $E$ dependence, one uses rescaled variables $x=E^{-1} e$, which lie on the plane $\tilde{\bf 1}\cdot x=0$ and satisfy ${\bf 1}\cdot x=2$; by Part 4 of the Lemma with $\lambda=E^{-1}$ one has that $\alpha^*(e)=\alpha^*(x)+\frac{\log E}{2}\ {\bf 1}$. Now one can use the third order Taylor expansion with remainder to write
\begin{eqnarray} H(\alpha^*(e)-i\zeta;e)&=& EH(\alpha^*(x)-i\zeta;x)-\frac{E\log E}{2}\ ({\bf 1}\cdot x)\label{taylor} \\
&&\hspace{-1.2in}=\ -\ E\log E+E\left[H(\alpha^*(x);x)-\frac12\zeta^{\bigotimes 2}\cdot\nabla^2H+i\frac16\zeta^{\bigotimes 3}\cdot\nabla^3H\right]+EO(|\zeta|^4)\nonumber
\end{eqnarray}
where $\nabla^2H, \nabla^3H$ are evaluated at $\alpha^*(x)$ and the square-bracketed quantities are all $E$ independent. From \eqref{Hfunction} one can observe directly that $|e^H|$ has a unique maximum on the domain of integration at $\zeta=0$:
\begin{equation}\label{maxonI}
\max_{\zeta\in I}|e^{H(\alpha^*(e)-i\zeta;e)}|=e^{H(\alpha^*(e);e)}\ .
\end{equation}
The uniqueness of the maximum is essential to validate the following Laplace asymptotic analysis, and leads to the main result of the paper:
\begin{theorem}
\label{theorem}
For any double vector $x^*\in(0,1)^{2K}\cap \tilde{\bf 1}^\perp$, let $e(E)=Ex(E)$ be a sequence in $\mathbb{Z}_+^{2K}\cap \tilde{\bf 1}^\perp$ such that
\begin{equation}\lim_{E\to\infty} x(E)=x^*\ .
\end{equation}
Then asymptotically as $E\to\infty$,
\begin{eqnarray} {\cal I}(E)&=&\int_I d^{2K}u\ \exp[H(-iu;e(E))]\\
& =& (2\pi)^{K+1/2}E^{1/2-K}e^{-E\log E+ EH(\alpha^*(x^*);x^*)}\left[{\det}_0\nabla^2H\right]^{-1/2}\left[1+O(E^{-1})\right]\ .\nonumber
\end{eqnarray}
Here ${\det}_0\nabla^2H$ represents the determinant of the matrix projection onto $\tilde{\bf 1}^\perp$, the subspace orthogonal to $\tilde{\bf 1}$, of $\nabla^2H$ evaluated at the critical point $\alpha^*(x^*)$.
\end{theorem}
When applied to \eqref{firstmom} and \eqref{secondmom} this Theorem is powerful enough to yield the desired results on the edge-type distribution in the ACG model for fixed $e=(e^-,e^+)=Ex$ for large $E$.
\begin{corollary}\label{corollary}
Consider the ACG model with $(P,Q)$ supported on $\{0,1,\dots, K\}^2$.\begin{enumerate}
\item Conditioned on $X$,
\[ E^{-1}e_{kj}\stackrel{\mathrm{P}}{=} Q_{kj}e^{1-H(\alpha^*(x);x)-\alpha^*(x)\cdot\delta_{jk}}[1+O(E^{-1/2})]\]
where $x=E^{-1}e$ and $e=(e^-(X),e^+(X))$.
\item Unconditionally,
\[ E^{-1}e_{kj}\stackrel{\mathrm{P}}{=} Q_{kj}[1+O(N^{-1/2})]\ .\]
\end{enumerate}\end{corollary}
Combining this Law of Large Numbers result with the easier result \eqref{LLN_P} for the empirical node-type distribution confirms that the large $N$ asymptotics of the empirical node- and edge-type distributions agree with the target $(P,Q)$ distributions.
\bigskip\noindent{\bf Proof of Corollary \ref{corollary}: \ } By applying Part 4 of Lemma \ref{Lem1} and the Theorem to \eqref{firstmom} one finds that
\begin{eqnarray*} \mathbb{E}[e_{kj}|e]&=&Q_{kj}\frac{\int_I d^{2K}u\ \exp[H(-iu;e-\delta_{jk})]}{\int_I d^{2K}u \ \exp [H(-iu;e)]}\\
&&\hspace{-.9in}=\ Q_{kj}\ \exp[-(E-1)\log (E-1)+ E\log E+ (E-1)H(\alpha^*(x');x')-EH(\alpha^*(x);x)]\\
&&\times\ \left[\frac{{\det}_0\nabla^2H(\alpha^*(x))}{{\det}_0\nabla^2H(\alpha^*(x'))}\right]^{1/2}\left[1+O(E^{-1})\right]\end{eqnarray*}
where $x=E^{-1}e$ and $x'=(E-1)^{-1}(e-\delta_{jk})$ are such that $\Delta x=x'-x=O(E^{-1})$. Now,
one can show that if $x, x'$ lie on the plane $\tilde{\bf 1}\cdot x=0$, and $\Delta x=x'-x$ is $O(E^{-1})$ then
\begin{equation}\label{DeltaH} H(\alpha^*(x');x')-H(\alpha^*(x);x)=\alpha^*(x)\cdot\Delta x +O(|\Delta x|^2)\ .
\end{equation}
It is also true that $\Delta\alpha^*=\alpha^*(x')-\alpha^*(x)=O(|\Delta x|)$ and satisfies
\begin{equation} \Delta\alpha^*\cdot x=O(|\Delta x|^2)\ .\end{equation}
Since ${\det}_0\nabla^2H(\alpha)$ is analytic in $\alpha$ with $O(1)$ derivatives, and $\Delta\alpha^*=O(|\Delta x|)$ \[ \left[\frac{{\det}_0\nabla^2H(\alpha^*(x))}{{\det}_0\nabla^2H(\alpha^*(x'))}\right]^{1/2}=\left[1+O(E^{-1})\right]\ .
\]
Also,
\[(E-1)H(\alpha^*(x');x')-(E-1)H(\alpha^*(x);x)=-\alpha^*(x)\cdot\delta_{jk} +O(|\Delta x|)\]
and $E\log E-(E-1)\log (E-1) = \log E +1 +O(E^{-1})$, from which one concludes
\begin{equation}\mathbb{E}[e_{kj}|e]=Q_{kj}\ E\ \exp[1-H(\alpha^*(x);x)-\alpha^*(x)\cdot\delta_{jk}]\left[1+O(E^{-1})\right]\ .\end{equation}
The conclusion of Part 1 of the Corollary now follows from the Chebyshev inequality if one shows that \eqref{secondmom} is $O(E)$. Since the first term of \eqref{secondmom} equals $\mathbb{E}[e_{kj}|e]$, which is $O(E)$, it is only necessary to show that the $O(E^2)$ parts of the second term cancel. Each ratio in the second term can be analyzed exactly as above, leading to
\begin{eqnarray*} (Q_{kj})^2
E\ \exp[1-H(\alpha^*(x);x)-\alpha^*(x)\cdot\delta_{jk}]&&\\&&\hspace{-3.25in}\times \ \left((E-1) \exp[1-H(\alpha^*(x');x')-\alpha^*(x')\cdot\delta_{jk}]-E\ \exp[1-H(\alpha^*(x);x)-\alpha^*(x)\cdot\delta_{jk}]\right)\\
&&\hspace{-2.5in}\times \ \left[1+O(E^{-1})\right]\\
=\ \left[Q_{kj}E \exp[1-H(\alpha^*(x);x)-\alpha^*(x)\cdot\delta_{jk}]\right]^2&&\\&&\hspace{-2.3in}\times \ \left(\exp[H(\alpha^*(x);x)-H(\alpha^*(x');x')-\Delta\alpha^*(x')\cdot\delta_{jk}]\right)\left[1+O(E^{-1})\right]\\
=\ \left[Q_{kj}E \exp[1-H(\alpha^*(x);x)-\alpha^*(x)\cdot\delta_{jk}]\right]^2&&\\&&\hspace{-2.3in}\times \ \left(\exp[-\alpha^*(x)\cdot\Delta x -\Delta\alpha^*(x')\cdot\delta_{jk}]-1\right)\left[1+O(E^{-1})\right]\ =\ O(E)
\end{eqnarray*}
where one uses \eqref{DeltaH} again in the second-to-last equality.
To prove Part 2, it is sufficient to note that $E^{-1}(e^-(X),e^+(X))=(Q^-,Q^+)[1+O(N^{-1/2})]$ and that $\alpha^*(Q^-,Q^+)=0, H(\alpha^*(Q^-,Q^+);Q^-,Q^+)=1$.
\qedsymbol
\bigskip
\noindent{\bf Proof of Theorem \ref{theorem}: \ } For each $E$, since the integrand of ${\cal I}(E)$ is entire analytic and periodic, its integral is unchanged under a purely imaginary shift of the contour. Also, since by Part 3 of Lemma \ref{Lem1} the integrand is constant in directions parallel to $\tilde{\bf 1}$, the integration can be reduced to the set $I\cap\tilde{\bf 1}^\perp$. Thus, using \eqref{taylor} for $e=e(E)$ and $x=x(E)$, ${\cal I}(E)$ can be written
\begin{eqnarray*}
{\cal I}(E)&=&2\pi\int_{I\cap\tilde{\bf 1}^\perp} d^{2K-1}\zeta\ \exp[H(\alpha^*(e)-i\zeta;e)]\\
&=& 2\pi\int_{I\cap\tilde{\bf 1}^\perp} d^{2K-1}\zeta\\
&&\hspace{-.8in}\times \ \exp\left[-\ E\log E+E\left(H(\alpha^*(x);x)-\frac12\zeta^{\bigotimes 2}\cdot\nabla^2H+i\frac16\zeta^{\bigotimes 3}\cdot\nabla^3H+O(|\zeta|^4)\right)\right]\ .
\end{eqnarray*}
In rescaled variables $\tilde\zeta=E^{1/2}\zeta$ this becomes
\[
{\cal I}(E)=2\pi E^{1/2-K}\exp\left[-\ E\log E+EH(\alpha^*(x);x)\right]\times \tilde {\cal I}(E)\]
where
\begin{eqnarray*}
\tilde {\cal I}(E)&:=&\int_{E^{1/2}I\cap\tilde{\bf 1}^\perp} d^{2K-1}\tilde\zeta\ \exp[ H(\alpha^*(x)-iE^{-1/2}\tilde\zeta;x)- H(\alpha^*(x);x)]\\
&&\hspace{-.5in}=\ \int_{E^{1/2}I\cap\tilde{\bf 1}^\perp} d^{2K-1}\tilde\zeta\ \exp[-\frac12\tilde\zeta^{\bigotimes 2}\cdot\nabla^2H]\left(1+i\frac{E^{-1/2}}{6} \tilde\zeta^{\bigotimes 3}\cdot\nabla^3H+O(E^{-1})\right)\ .
\end{eqnarray*}
In this last integral the $O(E^{-1/2})$ term is odd in $\tilde\zeta$ and makes no contribution. Now,
\begin{eqnarray*} |\exp[H(\alpha^*(x)-iE^{-1/2}\tilde\zeta;x)- H(\alpha^*(x);x)]|&&\\
&&\hspace{-2in}=\ \exp\left[\sum_{kj}e^{\alpha^*(x)\cdot\delta_{jk}}\ \left(\cos(E^{-1/2}\tilde\zeta\cdot\delta_{jk})-1\right)\ Q_{kj}\right]
\end{eqnarray*} clearly has a unique maximum at $\tilde\zeta=0$. Therefore, a standard version of the Laplace method such as that found in \cite{Erdelyi1956} is sufficient to imply that as $E\to\infty$,
\begin{equation} \tilde {\cal I}(E)= (2\pi)^{K-1/2}\left[{\det}_0\nabla^2H\right]^{-1/2}\left[1+O(E^{-1})\right]\end{equation}
where $\nabla^2H$ is evaluated at $\alpha^*(x^*)$.
\qedsymbol
\section{Locally Tree-like Property} \label{Sec:4}
To understand percolation theory on random graphs, or to derive a rigorous treatment of cascade mappings on random financial networks, it turns out to be important that the underlying random graph model have a property sometimes called ``locally tree-like''. In this section, the local tree-like property of the ACG model will be characterized as a particular large $N$ property of the probability distributions associated with {\it configurations}, that is, finite connected subgraphs $g$ of the skeleton labelled by their degree types.
First consider what it means in the $(P,Q)$ ACG model with size $N$ to draw a random configuration $g$ consisting of a pair of vertices $v_1,v_2$ joined by a link, that is, $v_2\in{\cal N}^-_{v_1}$. In view of the permutation symmetry of the ACG algorithm, the random link can without loss of generality be taken to be the first link $W(1)$ of the wiring sequence $W$. Following the ACG algorithm, Step 1 constructs a feasible node degree sequence $X=( j_i, k_i), i\in[N]$ on nodes labelled by $v_i=i$ and conditioned on $X$, Step 2 constructs a random $Q$-wiring sequence $W=\left(\ell=(v^+_{\ell},v^-_{\ell})\right)_{\ell\in[E]}$ with $E=\sum_i
k_i=\sum_i j_i$ edges. By an abuse of notation, we label their edge degrees by $k_{\ell}=k_{v^+_{\ell}}, j_{\ell}=j_{v^-_{\ell}}$ for $\ell\in[E]$.
The configuration event in question, namely that the first link in the wiring sequence $W$ attaches to nodes of the required degrees $(j_1,k_1), (j_2,k_2)$, has probability $p=\mathbb{P}[v_i\in{\cal N}_{j_i,k_i}, i=1,2| v_2\in{\cal N}^-_{v_1}]$. To compute this, note that the fraction $j_1u_{j_1k_1}/e^-_{j_1}$ of available $j_1$-stubs comes from a $j_1k_1$ node and the fraction $k_2u_{j_2k_2}/e^+_{k_2}$ of available $k_2$-stubs comes from a $j_2k_2$ node. Combining this fact with Part 2 of Proposition \ref{P1}, equation \eqref{E_prob} implies the configuration probability conditioned on $X$ is
exactly \begin{equation}
p=j_{1}u_{j_{1}k_{1}}k_2u_{j_2k_2}\ \frac{\mathbb{E}[e_{k_2j_{1}}|e^-,e^+]}{Ee^+_{k_2}e^-_{j_{1}}}\ .
\end{equation}
By the Corollary:
\begin{equation}\label{one_animal} p\ \stackrel{\mathrm{P}}{=}\ \frac{j_1k_2P_{j_1k_1}P_{j_2k_2} Q_{k_2j_1}}{z^2Q^+_{k_2}Q^-_{j_1}}[1+O(N^{-1/2})]\ .
\end{equation}
This argument justifies the following informal computation of the correct asymptotic expression for $p$ by successive conditioning:
\begin{eqnarray} p&=&\mathbb{P}[v_i\in{\cal N}_{j_ik_i}, i=1,2\left| v_2\in{\cal N}^-_{v_1}]\right.\\
&=&\mathbb{P}[v_1\in{\cal N}_{j_1k_1}\left| v_2\in{\cal N}^-_{v_1}\cap{\cal N}_{j_2k_2}]\right.\mathbb{P}[v_2\in{\cal N}_{j_2k_2}\left| v_2\in{\cal N}^-_{v_1}]\right.\\
&=&P_{k_1|j_1}Q_{j_1|k_2}P_{j_2|k_2}Q^+_{k_2}=\frac{P_{j_1k_1}P_{j_2k_2} Q_{k_2j_1}}{P^+_{k_2}P^-_{j_1}}
\end{eqnarray}
where we introduce conditional degree probabilities $P_{k|j}=P_{jk}/P^-_j$ etc. The final expression agrees with \eqref{one_animal} precisely because of the consistency conditions $Q^-_j=jP^-_j/z$ and $Q^+_k=kP^+_k/z$ relating the edge-degree and node-degree distributions.
Occasionally in the above matching algorithm, the first edge forms a self-loop, i.e. $v_1=v_2$. The probability of this event, jointly with fixing the degree of $v_1$, can be computed exactly for finite $N$ as follows:
\[ \tilde p:=\mathbb{P}[v_1=v_2,\ v_1\in{\cal N}_{jk}\ |\ v_2\in{\cal N}^-_{v_1}, X]=\left(\frac{jku_{jk}}{e^-_je^+_k}\right)\frac{\mathbb{E}[e_{kj}|X]}{E}\ .\]
As $N\to\infty$ this goes to zero, while $N\tilde p$ approaches a finite value:
\begin{equation}\label{self_loop} N\tilde p\stackrel{\mathrm{P}}{\longrightarrow} \frac{jkP_{jk}Q_{kj}}{z^2Q^+_kQ^-_j}
\end{equation}
which says that the relative fraction of edges that are self loops is the asymptotically small quantity $\sum_{jk}\frac{jkP_{jk}Q_{kj}}{Nz^2Q^+_kQ^-_j}$. In fact, following results of \cite{Janson09_simple} and others on the undirected configuration model, one expects that the total number of self loops in the multigraph converges in distribution to a Poisson random variable with finite parameter
\begin{equation} \lambda = \sum_{jk}\frac{jkP_{jk}Q_{kj}}{z^2Q^+_kQ^-_j}\ .
\end{equation}
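For concreteness, the parameter $\lambda$ is easy to evaluate numerically. The following sketch (ours) assumes $P$ stored as \texttt{P[j,k]} and $Q$ as \texttt{Q[k,j]}, with the mean degree computed as $z=\sum_{jk}kP_{jk}$; degenerate terms with vanishing denominators (which have vanishing numerators) are set to zero.
\begin{verbatim}
# Sketch: evaluate the Poisson parameter lambda for a given pair (P, Q).
import numpy as np

def self_loop_lambda(P, Q):
    n = P.shape[0]
    j = np.arange(n)[:, None]          # in-degree index
    k = np.arange(n)[None, :]          # out-degree index
    z = np.sum(k * P)                  # mean degree
    Qm = Q.sum(axis=0)                 # Q^-_j
    Qp = Q.sum(axis=1)                 # Q^+_k
    num = j * k * P * Q.T              # j k P_{jk} Q_{kj}
    den = z ** 2 * Qp[None, :] * Qm[:, None]
    return np.sum(np.divide(num, den, out=np.zeros_like(P), where=den > 0))
\end{verbatim}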
\subsection{General Configurations}
A general {\it configuration} is a connected subgraph $h$ of an ACG graph $({\cal N},{\cal E})$ with $L$ ordered edges and with each node labelled by its degree type. It results from a growth process that starts from a fixed node $w_0$ called the root and at step $\ell\le L$ adds one edge $\ell$ that connects a node $w_\ell$ to a specific existing node $w'_\ell$. The following is a precise definition:
\begin{definition} A configuration rooted to a node $w_0$ with degree $(j,k):=(j_0,k_0)$ is a connected subgraph $h$ consisting of a sequence of $L$ edges that connect nodes $(w_\ell)_{\ell\in[L]}$ of types
$(j_\ell,k_\ell)$, subject to the following condition:
For each $\ell\ge 1$, $w_\ell$ is connected by the edge labelled with $\ell$ to a node $w'_\ell\in\{w_j\}_{j\in\{0\}\cup[\ell-1]}$ by either an in-edge (that points into $w'_\ell$) $(w_\ell,w'_\ell)$ or an out-edge $(w'_\ell,w_\ell)$.
\end{definition}
A random realization of the configuration results when the construction of the size $N$ ACG graph $({\cal N},{\cal E})$ is conditioned on $X$ arising from Step 1 and the first $L$ edges of the wiring sequence of Step 2.
The problem is to compute the probability of the node degree sequence $(j_\ell,k_\ell)_{\ell\in[L]}$ conditioned on $X$, the graph $h$ and the root degree $(j,k)$, that is
\begin{equation}\label{configprob}
p=\mathbb{P}[w_\ell\in{\cal N}_{j_\ell,k_\ell}, \ell\in[L]|w_0\in{\cal N}_{jk},h,X] \ .
\end{equation}
Note that there is no condition that the node $w_\ell$ at step $\ell$ is distinct from the earlier nodes $w_{\ell'}, \ell'\in{\{0\}\cup [\ell-1]}$. With high probability each $w_\ell$ will be new, and the resultant subgraph $h$ will be a tree with $L$ distinct added nodes (not including the root) and $L$ edges. With small probability one or more of the $w_\ell$ will be preexisting, i.e. equal to $w_{\ell'}$ for some $\ell'\in{\{0\}\cup [\ell-1]}$: in this case the subgraph $h$ will have $M<L$ added nodes, will have cycles and not be a tree.
The following sequences of numbers are determined given $X$ and $h$:
\begin{itemize}
\item $e_{j,k}(\ell)$ is the number of available $j$-stubs connected to $(j,k)$ nodes after $\ell$ wiring steps;
\item $e_{k,j}(\ell)$ is the number of available $k$-stubs connected to $(j,k)$ nodes after $\ell$ wiring steps;
\item $e^-_{j}(\ell):=\sum_ke_{j,k}(\ell)$ and $e^+_{k}(\ell):=\sum_je_{k,j}(\ell)$ are the number of available $j$-stubs and $k$-stubs respectively after $\ell$ wiring steps.
\end{itemize}
Note that $e_{j,k}(0)=ju_{jk}$ and $e_{k,j}(0)=ku_{jk}$, and both decrease by at most $1$ at each step.
The analysis of configuration probabilities that follows is inductive on the step $\ell$.
\begin{theorem}\label{animal_theorem} Consider the ACG sequence with $(P,Q)$ supported on $\{0,1,\dots, K\}^2$. Let $h$ be any fixed finite configuration rooted to $w_0\in{\cal N}_{jk}$, with $M$ added nodes and $L\ge M$ edges, labelled by the node-type sequence $( j_m, k_m)_{m\in[M]}$. Then, as $N\to\infty$, the joint probability conditioned on $X$,
\[p=\mathbb{P}[w_m\in{\cal N}_{j_mk_m}, m\in[M]|w_0\in{\cal N}_{jk},h,X]\ ,\]
is given by
\begin{equation}
\prod_{m\in[M],\mbox{\ out-edge} }P_{k_m|j_{m}}Q_{j_m|k_{m'}} \prod_{m\in[M],\mbox{\ in-edge} }P_{j_m|k_m}Q_{k_m|j_{m'}}\left[1+O(N^{-1/2})\right]
\label{animalformula1}\end{equation}
if $h$ is a tree and
\begin{equation} O(N^{M-L})\label{animalformula2}\ .\end{equation}
if $h$ has cycles. For trees, the $\ell$th edge has $m=\ell$, and $m'\in\{0\}\cup[\ell-1]$ numbers the node to which $w_\ell$ attaches.\end{theorem}
\begin{remarks} \begin{enumerate}
\item
Formula \eqref{animalformula2} shows clearly what is meant by saying that configuration graphs are {\it locally tree-like}\index{locally tree-like} as $N\to\infty$. It means the number of occurrences of any fixed finite size graph $h$ with cycles embedded within a configuration graph of size $N$ remains bounded with high probability as $N\to\infty$.
\item Even more interesting is that \eqref{animalformula1} shows that large configuration graphs exhibit a strict type of conditional independence. Selection of any root node $w_0$ of the tree graph $h$ splits it into two (possibly empty) trees $h_1, h_2$ with node-types $( j_{m}, k_{m}), m\in [M_1]$ and $( j_{m}, k_{m}),m \in [M_1+M_2]\setminus[M_1]$ where $M=M_1+M_2$. When we condition on the node-type of $w_0$, \eqref{animalformula1} shows that the remaining node-types form independent families:
\begin{eqnarray} \mathbb{P}[w_m\in{\cal N}_{j_mk_m}, m\in[M], h\big | X, w_0\in{\cal N}_{jk} ]&&\nonumber \\ &&\hspace{-2in}=\ \mathbb{P}[w_m\in{\cal N}_{j_mk_m}, m\in[M_1], h_1\big |X, w_0\in{\cal N}_{jk} ]\nonumber \\
&&\hspace{-2in}\times \mathbb{P}[w_m\in{\cal N}_{j_mk_m}, m\in[M_1+M_2]\setminus[M_1], h_2\big | X, w_0\in{\cal N}_{jk} ] \ .\label{LTI_1}
\end{eqnarray}
We call this deep property of the general configuration graph the {\it locally tree-like independence property}\index{locally tree-like independence property} (LTI property). In \cite{Hurd2015}, the LTI property provides the key to unravelling cascade dynamics in large configuration graphs.
\end{enumerate} \end{remarks}
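To make the product formula \eqref{animalformula1} concrete, the following sketch (ours) evaluates the asymptotic tree probability for a configuration encoded as a list of steps $(m',\,\mathrm{direction},\,(j_m,k_m))$, with node $0$ the root; it assumes $P$ stored as \texttt{P[j,k]} and $Q$ as \texttt{Q[k,j]}.
\begin{verbatim}
# Sketch: evaluate the asymptotic tree probability of the Theorem above.
# "in" means the new edge points into the existing node w_{m'}, as in the
# Definition of a configuration.
import numpy as np

def tree_probability(root_type, steps, P, Q):
    Pm, Pp = P.sum(axis=1), P.sum(axis=0)   # P^-_j, P^+_k
    Qm, Qp = Q.sum(axis=0), Q.sum(axis=1)   # Q^-_j, Q^+_k
    types = [root_type]                     # types[m] = (j_m, k_m)
    p = 1.0
    for m_prime, direction, (j, k) in steps:
        jp, kp = types[m_prime]             # type of the attachment node
        if direction == "in":               # factor P_{j_m|k_m} Q_{k_m|j_{m'}}
            p *= (P[j, k] / Pp[k]) * (Q[k, jp] / Qm[jp])
        else:                               # factor P_{k_m|j_m} Q_{j_m|k_{m'}}
            p *= (P[j, k] / Pm[j]) * (Q[kp, j] / Qp[kp])
        types.append((j, k))
    return p
\end{verbatim}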
\bigskip\noindent{\bf Proof of Theorem \ref{animal_theorem}: \ } First, suppose Step 1 generates the node-type sequence $X$. Conditioned on $X$, now suppose the first step generates an in-edge $(w_1,w_0)$. Then, by refining Part 2 of Proposition \ref{P1}, the conditional probability that node $w_1$ has degree $j_1,k_1$ can be written
\begin{eqnarray*} \frac{\mathbb{P}[w_1\in{\cal N}_{j_1k_1},w_0\in{\cal N}_{jk}|h,X]}{\mathbb{P}[w_0\in{\cal N}_{jk}|h,X]}&&\\&&\hspace{-1.5in}=\ \frac{C^{-1}(e^-(0),e^+(0))e_{k_1,j_1}(0)e_{j,k}(0) Q_{k_1j}
C(e^-(1),e^+(1))}{C^{-1}(e^-(0),e^+(0))\sum_{k'}e^+_{k'}(0)e_{j,k}(0) Q_{k'j}C(e^-(1),e^+(1))}\\&&\hspace{-1.5in}=\ \left(\frac{e_{k_1,j_1}(0)e_{j,k}(0)}{e^+_{k_1}(0)e^-_{j}(0)}\right)\left(\frac{\mathbb{E}[e_{k_1j}|e^-(0), e^+(0)]}{E}\right)\left(\frac{e_{j,k}(0)}{E}\right)^{-1}\\
&&\hspace{-1.5in}=\ \left(\frac{k_1u_{j_1k_1}}{k_1u^+_{k_1}}\right)\left(\frac{\mathbb{E}[e_{k_1j}|e^-(0), e^+(0)]}{e^-_{j}(0)}\right)\ .
\end{eqnarray*}
Be aware that $C(e^-(1),e^+(1))$ in the denominator after the first equality depends on $k'$ and hence does not cancel a factor in the numerator.
Now, for $N\to\infty$, Part 2 of the Corollary applies to the second factor and \eqref{LLN_P} applies to the first factor, showing that for the case of an in-edge on the first step, with high probability, $X$ is such that:
\[ \mathbb{P}[w_1\in{\cal N}_{j_1k_1}|w_0\in{\cal N}_{jk},h,X] \ = \ P_{j_1|k_1}\ Q_{k_1|j}\left[1+O(N^{-1/2})\right]\ .
\]
The case of an out-edge is similar.
Now we continue conditionally on $X$ from Step 1 and assume inductively that \eqref{animalformula1} is true for $M-1$ and prove it for $M$. Suppose the final node $w_M$ is in-connected to the node $w_{M'}$ for some $M'< M$. The ratio $\mathbb{P}[w_m\in{\cal N}_{j_mk_m}, m\in[M]|w_0\in{\cal N}_{jk},h,X]/\mathbb{P}[w_m\in{\cal N}_{j_mk_m}, m\in[M-1]|w_0\in{\cal N}_{jk},h,X]$ can be treated just as in the previous step and shown to be
\[\left(\frac{e_{k_M,j_M}(M-1)}{e^+_{k_M}(M-1)}\right)\left(\frac{\mathbb{E}[e_{k_Mj_{M'}}|e^-(M-1), e^+(M-1)]}{ e^-_{j_{M'}}(M-1)}\right)
\]
which with high probability equals \[ P_{j_M|k_M}\ Q_{k_M|j_{M'}}\left[1+O(N^{-1/2})\right]\ .
\]
The case $w_M$ is out-connected to the node $w_{M'}$ is similar.
The first step $m$ at which a cycle is formed can be treated by imposing a condition that $w_m=w_{m''}$ for some fixed $m''<m$. One finds that the conditional probability of this is
\begin{eqnarray*} \mathbb{P}[w_m=w_{m''}, w_\ell \in{\cal N}_{j_\ell k_\ell}, \ell\in[m-1]|w_0\in{\cal N}_{jk},h,X]&&\\
&&\hspace{-2.5in}=\frac{k_{m''}}{{e^+_{k_{m''}}(m-1)}}\ \times\ \mathbb{P}[w_\ell \in{\cal N}_{j_\ell k_\ell}, \ell\in[m-1]|w_0\in{\cal N}_{jk},h,X]\ .
\end{eqnarray*}
The first factor is $O(N^{-1})$ as $N\to\infty$, which proves the desired statement \eqref{animalformula2} for cycles.
Finally, since \eqref{animalformula2} is true for cycles, with high probability all finite configurations are trees. Therefore their asymptotic probability laws are given by \eqref{animalformula1}, as required.
\qedsymbol
\bigskip
\section{Approximate ACG Simulation} \label{sec:5}
It was observed in Section \ref{Configuration} that Step 1 of the configuration graph construction draws a sequence $(j_i,k_i)_{i\in[N]}$ of node types that is iid with the correct distribution $P$, but is only feasible, $\sum_i(k_i-j_i)=0$, with small probability. Step 2 of the exact ACG algorithm in Section \ref{finite_assortative_section} is even less feasible in practice. Practical simulation algorithms address the first problem by ``clipping'' the drawn node bidegree sequence when the discrepancy $D=D_N:=\sum_i(k_i-j_i)$ is not too large, meaning the sequence is adjusted by a small amount to make it feasible, without making a large change in the joint distribution. Step 1 of the following simulation algorithm generalizes slightly the method introduced by \cite{ChenOlve13}, who verify that the effect of clipping vanishes with high probability as $N\to\infty$. The difficulty with Step 2 of the ACG construction is overcome by an approximate sequential wiring algorithm.
The {\it approximate assortative configuration simulation algorithm} for multigraphs of size $N$, parametrized by the node-edge degree distribution pair $(P,Q)$ that have support on the finite set $(j,k)\in\{0,1,\dots, K\}^2$, involves choosing a suitable threshold $T=T(N)$ and modifying the steps identified in Section \ref{finite_assortative_section}; a code sketch of the full procedure follows the enumeration:
\begin{enumerate}
\item Draw a sequence of $N$ node-type pairs $X= ((j_1,k_1),\dots, (j_N,k_N))$ independently from $P$, and accept the draw if and only if $|D|\le T(N)$. When the accepted sequence $(j_i,k_i)_{i\in[N]}$ has $D\ne0$, it is adjusted by adding $|D|$ stubs, either in- or out- as needed. First draw a random subset $\sigma\subset{\cal N}$ of size $|D|$ with uniform probability $\binom{N}{|D|}^{-1}$, and then define the feasible sequence $\tilde X=(\tilde j_i,\tilde k_i)_{i\in[N]}$ by adjusting the degree types for $i\in\sigma$ as follows:
\begin{eqnarray}
\label{clipone}
\tilde j_i= j_i + \xi^-_i;\quad \xi^-_i&=&{\bf 1}(i\in\sigma,D>0)\\
\label{cliptwo}
\tilde k_i= k_i + \xi^+ _i;\quad \xi^+_i&=&{\bf 1}(i\in\sigma,D<0)\ .
\end{eqnarray}
\item Conditioned on $\tilde X$, the result of Step 1, randomly wire together available in and out stubs {\it sequentially}, with suitable weights, to produce the sequence of edges $W$. At each $\ell=1,2,\dots, E$, match from available in-stubs and out-stubs weighted according to their degrees $j,k$ by
\begin{equation}\label{ACWweights}
C^{-1}(\ell)\frac {Q_{kj}}{Q^+_kQ^-_j}\ .
\end{equation} In terms of the bivariate random process $( e^-_j(\ell), e^+_k(\ell))$ with initial values\\ $( e^-_j(1), e^+_k(1))=(e^-_j, e^+_k)$ that at each $\ell$ counts the number of available degree $j$ in-stubs and degree $k$ out-stubs, the $\ell$ dependent normalization factor $C(\ell)$ is given by: \begin{equation}
\label{Cfactor}
C(\ell)=\sum_{jk} e^-_j(\ell) e^+_k(\ell) \frac {Q_{kj}}{Q^+_kQ^-_j}
\ .
\end{equation}
\end{enumerate}
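The following Python sketch (ours) implements both steps for multigraphs; it assumes $P$ stored as \texttt{P[j,k]} and $Q$ as \texttt{Q[k,j]} on $\{0,\dots,K\}^2$, and that $Q$ puts positive weight on every degree class in which stubs actually occur. For simplicity, it clips only at nodes whose adjusted degree stays within $\{0,\dots,K\}$, a variant of \eqref{clipone}--\eqref{cliptwo} whose effect is likewise negligible for large $N$.
\begin{verbatim}
# Sketch of the approximate ACG simulation algorithm for multigraphs.
import numpy as np

def acg_simulate(P, Q, N, delta=0.25, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n = P.shape[0]                          # degrees take values 0..n-1
    # Step 1: iid node types, accepted when |D| <= T(N) = N^(1/2+delta)
    while True:
        flat = rng.choice(n * n, size=N, p=P.ravel())
        j, k = np.divmod(flat, n)
        D = int(k.sum() - j.sum())
        if abs(D) <= N ** (0.5 + delta):
            break
    if D != 0:                  # clipping; assumes enough eligible nodes
        pool = np.flatnonzero((j if D > 0 else k) < n - 1)
        sigma = rng.choice(pool, size=abs(D), replace=False)
        if D > 0: j[sigma] += 1
        else:     k[sigma] += 1
    # Step 2: sequential wiring with weights Q_{kj} / (Q^+_k Q^-_j)
    Qm, Qp = Q.sum(axis=0), Q.sum(axis=1)
    ins  = [list(np.repeat(np.flatnonzero(j == d), d)) for d in range(n)]
    outs = [list(np.repeat(np.flatnonzero(k == d), d)) for d in range(n)]
    em = np.array([len(s) for s in ins], dtype=float)    # e^-_j(l)
    ep = np.array([len(s) for s in outs], dtype=float)   # e^+_k(l)
    den = Qp[:, None] * Qm[None, :]
    W = np.divide(Q, den, out=np.zeros_like(Q, dtype=float), where=den > 0)
    edges = []
    for _ in range(int(em.sum())):
        w = ep[:, None] * em[None, :] * W   # unnormalized matching weights
        kk, jj = np.divmod(rng.choice(n * n, p=(w / w.sum()).ravel()), n)
        v_out = outs[kk].pop(rng.integers(len(outs[kk])))
        v_in  = ins[jj].pop(rng.integers(len(ins[jj])))
        edges.append((v_out, v_in))
        ep[kk] -= 1; em[jj] -= 1
    return (j, k), edges
\end{verbatim}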
\begin{remark} An alternative simulation algorithm for the ACG model has been proposed and studied in \cite{DeprWuth15}.
\end{remark}
Chen and Olvera-Cravioto \cite{ChenOlve13} address the clipping in Step 1 and show that the discrepancy of the approximation is negligible as $N\to\infty$:
\begin{theorem}
\label{nodedegreetheorem} Fix $\delta\in(0,1/2)$, and for each $N$ let the threshold be $T(N)=N^{1/2+\delta}$. Then:\begin{enumerate}
\item The acceptance probability $ \mathbb{P}[|D_N|\le T(N)]\to 1$
as $N\to\infty$;
\item
For any fixed finite $M$, $\Lambda$, and bounded function $f:(\mathbb{Z}_+\times{\mathbb{Z}_+})^{M}\to[-\Lambda,\Lambda]$, where $(\hat j_i, \hat k_i)_{i\ge1}$ denotes an iid sequence drawn from $P$,
\begin{equation} \left| \mathbb{E}[f\left((\tilde j_i,\tilde k_i)_{i=1,\dots, M}\right)]-\mathbb{E}[f\left((\hat j_i, \hat k_i)_{i=1,\dots,M}\right)]\right|\to 0\ ;
\end{equation}
\item The following limits in probability hold:
\begin{equation} \frac1{N}\tilde u_{jk}\stackrel{\mathrm{P}}{\longrightarrow} P_{jk},\qquad \frac1{N}\tilde u^+_k \stackrel{\mathrm{P}}{\longrightarrow} P^+_{k},\qquad \frac1{N}\tilde u^-_{j}\stackrel{\mathrm{P}}{\longrightarrow} P^-_{j}\ .\label{empirical}
\end{equation}
\end{enumerate}
\end{theorem}
Similarly it is intuitively clear that the discrepancy of the approximation in Step 2 is negligible as $N\to\infty$. As long as $e^-_j(\ell), e^+_k(\ell)$ are good approximations of $(E-\ell)Q^-_j, (E-\ell)Q^+_k$, \eqref{ACWweights} shows that the probability that edge $\ell$ has type $(k,j)$ will be approximately $Q_{kj}$. Since the detailed analysis of this problem is not yet complete, we state the desired properties as a conjecture:
\begin{conjecture}
\label{Approx_assortative} In the approximate assortative configuration graph construction with probabilities $P,Q$, the following convergence properties hold as $N\to\infty$.
\begin{enumerate}
\item The fraction of type $(k,j)$ edges in the matching sequence $( k_{\ell}, j_{\ell})_{\ell\in[E]}$ concentrates with high probability around the nominal edge distribution $Q_{kj}$:
\begin{equation}
\frac{{e}_{kj}}{E} = Q_{kj}+o(1)\ .
\end{equation}
\item For any fixed finite number $L$, the first $L$ edges $\ell,\ell\in[L]$ have degree sequence $(k_{\ell},j_{\ell})_{\ell\in[L]}$ that converges in distribution to $(\hat k_{\ell},\hat j_{\ell})_{\ell\in[L]}$, an independent sequence of identically $Q$-distributed random variables.
\end{enumerate}\end{conjecture}
Although the conjecture is not yet completely proven, extensive simulations have verified the consistency of the approximate ACG algorithm with the theoretical large $N$ probabilities.
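As one such check, the following usage sketch (ours) builds a toy $(P,Q)$ pair satisfying the marginal consistency conditions $Q^-_j=jP^-_j/z$ and $Q^+_k=kP^+_k/z$ with the independent kernel $Q_{kj}=Q^+_kQ^-_j$, runs \texttt{acg\_simulate} from the sketch above, and compares the empirical edge-type frequencies $e_{kj}/E$ with $Q_{kj}$; the reported deviation should be of order $E^{-1/2}$, consistent with Part 1 of the Conjecture.
\begin{verbatim}
# Usage sketch: compare realized edge-type frequencies e_{kj}/E with Q_{kj}.
import numpy as np

P = np.array([[0.0, 0.2, 0.1],
              [0.2, 0.1, 0.1],
              [0.1, 0.1, 0.1]])             # toy node-type law P[j, k]
deg = np.arange(3)
z = np.sum(deg[None, :] * P)                # mean degree
Qm = deg * P.sum(axis=1) / z                # Q^-_j = j P^-_j / z
Qp = deg * P.sum(axis=0) / z                # Q^+_k = k P^+_k / z
Q = np.outer(Qp, Qm)                        # independent kernel Q[k, j]

(jdeg, kdeg), edges = acg_simulate(P, Q, N=20000)
emp = np.zeros_like(Q)
for v_out, v_in in edges:
    emp[kdeg[v_out], jdeg[v_in]] += 1
print(np.abs(emp / len(edges) - Q).max())   # small for large N
\end{verbatim}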
\bibliographystyle{plain}